PHP & Web Development Blogs

26234 views · 4 years ago
Introduction to Gitlab CI for PHP developers
As a developer, you've probably at least heard something about CI - continuous integration. And if you haven't, you'd better fix that ASAP, because it's something awesome to have on your skill list and it can be extremely helpful in your everyday work. This post will focus on CI for PHP devs, and specifically on the CI implementation from Gitlab. I will assume you know the basics of Git, PHP, PHPUnit, Docker and the Unix shell. Intended audience: intermediate PHP devs.
Adding something to your workflow must serve a purpose. In this case the goal is to automate routine tasks and achieve better quality control. Even a basic PHP project IMO needs the following:
* Linter checks (you cannot merge changes that are invalid at the syntax level)
* Code style checks
* Unit and integration tests
All of those can, of course, simply be run by hand from time to time. But I prefer an automated CI approach even in my personal projects because it leads to a higher level of discipline: you simply can't avoid following the set of rules that you've established. Also, it reduces the risk of releasing a bug or regression, thus improving quality.
Gitlab is generous enough to give you its CI for free, even for your private repos. At this point this is starting to look like advertising, so here is a quick comparison table for Gitlab, Github and Bitbucket. AFAIK, Github does not have a built-in solution; instead, it integrates easily with third parties, of which Travis CI seems to be the most popular - I will therefore mention Travis here.

Public repositories (OSS projects). All 3 providers have a free offer for the open-source community!


| Provider | Limits |
|---|---|
| Gitlab | 2,000 CI pipeline minutes per group per month, shared runners |
| Travis | Apparently unlimited |
| Bitbucket | 50 min/month, max 5 users, file storage <= 1 GB/month |

Private repositories


| Provider | Price | Limits |
|---|---|---|
| Gitlab | Free | 2,000 CI pipeline minutes per group per month, shared runners |
| Travis | $69/month | Unlimited builds, 1 job at a time |
| Bitbucket | Free | 50 min/month, max 5 users, file storage <= 1 GB/month |

Getting started

I made a small project based on the Laravel framework and called it "ci-showcase". I work in a Linux environment, and the commands I use in the examples are for a Linux shell. They should be pretty much the same on Mac and nearly the same on Windows.
composer create-project laravel/laravel ci-showcase

Next, I went to the gitlab website and created a new public project: https://gitlab.com/crocodile2u/ci-showcase. I cloned the repo and copied all files and folders from the newly created project to the new git repo. In the root folder, I placed a .gitignore file:
.idea
vendor
.env

Then the .env file:
APP_ENV=development

Then I generated the application encryption key: php artisan key:generate, and then I wanted to verify that the primary setup works as expected: ./vendor/bin/phpunit, which produced the output OK (2 tests, 2 assertions). Nice, time to commit this: git commit && git push

At this point, we don't yet have any CI, let's do something about it!

Adding .gitlab-ci.yml

Everyone going to implement CI with Gitlab is strongly encouraged to bookmark this page: https://docs.gitlab.com/ee/ci/README.html. I will simply provide a short introductory course here, plus a bit of boilerplate code to get you started more easily.
The first QA check that we're going to add is a PHP syntax check. PHP has a built-in linter, which you can invoke like this: php -l my-file.php. This is what we're going to use. Because the php -l command doesn't support multiple files as arguments, I've written a small wrapper shell script and saved it to ci/linter.sh:
#!/bin/sh

files=`sh ci/get-changed-php-files.sh | xargs`
last_status=0
status=0

# Loop through changed PHP files and run php -l on each
for f in $files ; do
    message=`php -l $f`
    last_status="$?"
    if [ "$last_status" -ne "0" ]; then
        # Anything fails -> the whole thing fails
        echo "PHP Linter is not happy about $f: $message"
        status="$last_status"
    fi
done

if [ "$status" -ne "0" ]; then
    echo "PHP syntax validation failed!"
fi

exit $status

Most of the time, you don't actually want to check each and every PHP file that you have. Instead, it's better to check only those files that have been changed. The Gitlab pipeline runs on every push to the repository, and there is a way to know which PHP files have been changed. Here's a simple script, meet ci/get-changed-php-files.sh:
#!/bin/sh
# What's happening here?
#
# 1. We get names and statuses of files that differ in the current branch from their state in origin/master.
#    These come in the form "<status> <filename>", one line per file.
# 2. The output from git diff is filtered by the unix grep utility; we only need files with names ending in .php
# 3. One more filter: filter *out* (grep -v) all lines starting with R or D.
#    D means "deleted", R means "renamed"
# 4. The filtered status-name list is passed on to the awk command, which is instructed to take only the 2nd part
#    of every line, thus just the filename
git diff --name-status origin/master | grep '\.php$' | grep -v "^[RD]" | awk '{ print $2 }'

These scripts can easily be tested in your local environment (at least if you have a Linux machine, that is ;-) ).
Now that we have our first check, we'll finally create our .gitlab-ci.yml. This is where your pipeline is declared using YAML notation:
# we're using this beautiful tool for our pipeline: https://github.com/jakzal/phpqa
image: jakzal/phpqa:alpine

# For this sample pipeline we'll only have 1 stage; in a real-world project you would probably also add at least "deploy"
stages:
  - QA

linter:
  stage: QA
  # this is the main part: what is actually executed
  script:
    - sh ci/get-changed-php-files.sh | xargs sh ci/linter.sh

The first line is image: jakzal/phpqa:alpine, and it tells Gitlab that we want to run our pipeline using the PHP-QA utility by jakzal. It is a docker image containing PHP and a huge variety of QA tools. We declare one stage - QA - and this stage for now has just a single job named linter. Every job can have its own docker image, but we don't need that for the purpose of this tutorial. With this, our project reaches Step 2. Once I had pushed these changes, I immediately went to the project's CI/CD page. Aaaand... the pipeline was already running! I clicked on the linter job and saw the following happy green output:
Running with gitlab-runner 11.9.0-rc2 (227934c0) on docker-auto-scale ed2dce3a
Using Docker executor with image jakzal/phpqa:alpine ...
Pulling docker image jakzal/phpqa:alpine ...
Using docker image sha256:12bab06185e59387a4bf9f6054e0de9e0d5394ef6400718332c272be8956218f for jakzal/phpqa:alpine ...
Running on runner-ed2dce3a-project-11318734-concurrent-0 via runner-ed2dce3a-srm-1552606379-07370f92...
Initialized empty Git repository in /builds/crocodile2u/ci-showcase/.git/
Fetching changes...
Created fresh repository.
From https://gitlab.com/crocodile2u/ci-showcase
 * [new branch] master -> origin/master
 * [new branch] step-1 -> origin/step-1
 * [new branch] step-2 -> origin/step-2
Checking out 1651a4e3 as step-2...
Skipping Git submodules setup
$ sh ci/get-changed-php-files.sh | xargs sh ci/linter.sh
Job succeeded

It means that our pipeline was successfully created and run!

PHP Code Sniffer

PHP Code Sniffer is a tool for keeping all of your PHP files in one uniform code style. It has a ton of customization options and settings, but here we will only perform a simple check for compatibility with the PSR-2 standard. A good practice is to create a configuration XML file in your project. I will put it in the root folder. Code Sniffer accepts a few file names, of which I prefer phpcs.xml. A minimal ruleset that checks the application code against PSR-2 and skips the resources folder looks like this:
<?xml version="1.0"?>
<ruleset name="ci-showcase">
    <rule ref="PSR2"/>
    <file>app</file>
    <file>tests</file>
    <exclude-pattern>/resources</exclude-pattern>
</ruleset>

I will also append another section to .gitlab-ci.yml:

code-style:
  stage: QA
  script:
    # Variable $files will contain the list of PHP files that have changes
    - files=`sh ci/get-changed-php-files.sh`
    # If this list is not empty, we execute the phpcs command on all of them
    - if [ ! -z "$files" ]; then echo $files | xargs phpcs; fi

Again, we check only those PHP files that differ from the master branch, and pass their names to the phpcs utility. That's it, Step 3 is finished! If you look at the pipeline now, you will notice that the linter and code-style jobs run in parallel.

Adding PHPUnit

Unit and integration tests are essential for a successful and maintainable modern software project. In the PHP world, PHPUnit is the de facto standard for these purposes. The PHPQA docker image already has PHPUnit, but that's not enough. Our project is based on Laravel, which means it depends on a bunch of third-party libraries, Laravel itself being one of them. Those are installed into the vendor folder with composer. You might have noticed that our .gitignore file has the vendor folder as one of its entries, which means that it is not managed by the version control system. Some prefer their dependencies to be part of their Git repository; I prefer to have only the composer.json declarations in Git. That makes the repo much smaller than the other way round, and also makes it easy to avoid bloating your production builds with libraries only needed for development.
Composer is also included into PHPQA docker image, and we can enrich our .gitlab-ci.yml:
test:
  stage: QA
  cache:
    key: dependencies-including-dev
    paths:
      - vendor/
  script:
    - composer install
    - ./vendor/bin/phpunit

PHPUnit requires some configuration, but in the very beginning we used composer create-project to create our project boilerplate. The laravel/laravel package has a lot of things included in it, and phpunit.xml is one of them. All I had to do was add one more environment entry to it:

<env name="APP_KEY" value="base64:..."/>

The APP_KEY environment variable is essential for Laravel to run, so I generated a key with php artisan key:generate.
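For reference, the two tests that PHPUnit runs at this point are simply the example tests shipped with the laravel/laravel skeleton. A unit test of that shape looks roughly like this (an illustrative sketch of the stock example test, not code I added myself):

<?php

namespace Tests\Unit;

use PHPUnit\Framework\TestCase;

class ExampleTest extends TestCase
{
    // A basic smoke test: exactly the kind of check the "test" CI job will run
    public function testBasicTest()
    {
        $this->assertTrue(true);
    }
}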
git commit && git push, and we have all three jobs in the QA stage!

Checking that our checks work

In this branch I intentionally added changes that should fail all three jobs in our pipeline; take a look at the git diff. Here is the output from the pipeline stages:

**Linter**:
$ ci/linter.sh
PHP Linter is not happy about app/User.php:
Parse error: syntax error, unexpected 'syntax' (T_STRING), expecting function (T_FUNCTION) or const (T_CONST) in app/User.php on line 11
Errors parsing app/User.php
PHP syntax validation failed!
ERROR: Job failed: exit code 255

**Code-style**:
$ if [ ! -z "$files" ]; then echo $files | xargs phpcs; fi
FILE: ...ilds/crocodile2u/ci-showcase/app/Http/Controllers/Controller.php
----------------------------------------------------------------------
FOUND 0 ERRORS AND 1 WARNING AFFECTING 1 LINE
----------------------------------------------------------------------
 13 | WARNING | Line exceeds 120 characters; contains 129 characters
----------------------------------------------------------------------
Time: 39ms; Memory: 6MB
ERROR: Job failed: exit code 123

**test**:
$ ./vendor/bin/phpunit
PHPUnit 7.5.6 by Sebastian Bergmann and contributors.
F. 2 / 2 (100%)
Time: 102 ms, Memory: 14.00 MB
There was 1 failure:
1) Tests\Unit\ExampleTest::testBasicTest
This test is now failing
Failed asserting that false is true.
/builds/crocodile2u/ci-showcase/tests/Unit/ExampleTest.php:17
FAILURES!
Tests: 2, Assertions: 2, Failures: 1.
ERROR: Job failed: exit code 1

Congratulations, our pipeline is running, and we now have much less chance of messing up the result of our work.

Conclusion

Now you know how to set up a basic QA pipeline for your PHP project. There's still a lot to learn. The pipeline is a powerful tool. For instance, it can make deployments to different environments for you. Or it can build docker images, store artifacts and more! Sounds cool? Then spend 5 minutes of your time and leave a comment; you can also tell me if there is a pipeline topic you would like to see covered in future posts.
6751 views · 2 years ago
10 SEO Best Practices for Web Developers

You've built an amazing website, but how do you make sure people can find your site via search engines? In this article we cover 10 best practices to make sure your article not only stands out, but ranks well with search engines.

1. Take time to research keywords


To determine the best keywords for your site, you'll need to do some keyword research. This usually consists of combing through your competitors' sites for the keywords that are driving them the most traffic. There are several ways to get started with keyword research. One recommended way is to create a spreadsheet listing your competitors' sites along with the keywords they target; you can then copy and paste those keywords into Google's Keyword Tool and the Google Webmaster Tools keyword analyzer. You can also analyze your competitors' page titles to find out what keywords they are using, or use tools such as Ahrefs, Moz, SpyFu, or SEMrush to see which keywords are working on your competitors' sites.

2. Focus on your Title tag


This is the headline for every article. It needs to be bold and attention grabbing so it can catch the eye of potential users. Keep it somewhere around 60-90 characters to make sure it is displayed properly in search engines as well as readable in the browser tab. As you write your title, focus on the unique keywords your readers are likely to search for. Also make sure that the keywords you select are relevant to your page. Another good practice is to make the title tag and your header (h1) the same.

3. Carefully craft your H1, H2, H3 tags


Careful usage of header tags helps search engines identify keywords within your page. To get the best results from your header tags, use H1, H2, and H3 in order with keywords in your H2 and H3 headers that support your H1 tag. Remember, your H1 tag should mimic your title tag, whereas the H2 and H3 can expand and add additional context. You can also utilize multiple H2 and H3 tags, however be sure that these headers are supporting the H1 tag and relevant to the content on your page. Using irrelevant header keywords can actually work to your disadvantage.

4. Avoid loading content with JavaScript


Despite its popularity, JavaScript is not yet well supported by search engines and can mask important content. Progressive Web Apps in particular can suffer, as key content is loaded after the page is spidered, or, in the case of the many search engines that do not yet index JavaScript, not loaded at all. This is also the case for many social media sites, meaning that content loaded dynamically is not evaluated or pulled in, resulting in the default skeleton of your site being what shows up in search engines and in link previews.

5. Carefully name images


In the past, search engines would evaluate your images based on their alt tag; however, as more and more developers loaded irrelevant keywords into this hidden image text, search engines began placing more emphasis on the actual name of the image itself. This means using generic image names such as 1.jpg can actually hurt your site's ranking when search engines might be looking for seokeywords.jpg. Now, just because you're carefully naming your images with relevant keywords describing the image doesn't mean you should ignore the alt tag. Be sure to continue to include alt tags for older search engines, in case the image doesn't load, and for accessibility (i.e. screen readers).

6. Work to improve your page load time


It’s not a secret that faster sites rank higher in search engines. Most search engines use the PageSpeed Index from Google to determine the speed of websites. One thing Google looks at is how fast images load. For this reason, we recommend taking a look at how long it takes for the first image to load on your site, or even taking advantage of lazy loading for non-critical images. You want images to be loading within 30 seconds at the absolute latest, before the user can actually click on the page. You also need to make sure that if you're using multiple images, they load as one group. Next, take a look at how long it takes to load a webpage. Are pages taking longer than three seconds to load on your site? You want pages that load fast for users, and your code and templates can easily be the cause of slow loads.

7. Optimize text throughout your page


Beyond your title tag, headers, and images, it's important to work keywords into your standard content, while avoiding keyword overloading. To help prevent overloading and increase search engine rankings across multiple keywords you can use alternative phrases. In the case of "PHP training" an alternative phrase might be "PHP tutorials" or "PHP course." This both helps support the primary keyword, while also allowing the page to rank for these keywords as well. Remember to use the tools referenced above to find the keywords that are right for your site, and then work them into natural sentences without forcing keywords or becoming overly repetitive. Also keep in mind, just as important as the content and keywords on the page are to search engines, how users engage with that content is also critical. If your page experiences high bounce rates or low engagement with the content, it is likely to be deprioritized by search engines, meaning a page highly optimized for search engines but not humans may enjoy a higher ranking, but only for a short time before it is heavily penalized.

8. Build your Domain and Page Authority


Domain and Page Authority are determined not just by the number of back-links (or sites linking to your domain or page), but also by the quality of the sites and pages linking to you. One practice that has made obtaining a better DA or PA harder has been purchasing or acquiring bulk back-links. Note this practice is actually against Google's TOS and may result in your entire site being banned from their search results! Because of this, it's important to focus on high quality sites and work to get back-links naturally, either through partnerships or syndicated content (such as blog posts). You can check your DA using one of the many tools referenced above.

9. Take advantage of social media


Speaking of back-links, social media can be a powerful tool for increasing page visibility while also improving your search engine rankings! Remember, most social sites do not support or read JavaScript, so ensure your content is available on the page. If you do have a progressive web app with JavaScript loading your content, look into using Headless Chrome to render a JavaScript free version of your site for specific bots (note - the content MUST be the same content a user would see or your site may be blocked). There are also numerous tools to allow you to build the content via JavaScript on the server backend before passing it to your readers. To help get even more exposure, consider adding social share links or tools like AddThis.

10. Good SEO takes time


The truth is that there really aren't any special secrets or ingredients to ranking well in search engines (well, none that Google has publicly shared). Instead it's about properly formatting your page, making sure it's readable to search engines, and providing content that your readers will engage with. As you provide more valuable content, and more people like and link to your content, your site's Domain Authority will gradually increase, giving your site and pages more power and resulting in a higher ranking.
5777 views · 2 years ago
A Beginners Guide To Artificial Intelligence For Web Developers

Artificial Intelligence has significantly transformed the way we work and interpret information. With technologies such as OCR, machine learning, deep learning, natural language processing, and computer vision; machines are now able to provide greater insights and perform tasks that typically required hours and hours of work from humans.

What is artificial intelligence?


A.I. or artificial intelligence is the technology that enables machines to perform tasks that usually require human intelligence. But instead of using human brains, A.I. uses different technologies such as computers, or even software algorithms, to perform tasks. Some of the most common A.I. technologies include speech recognition, voice recognition, machine translation, natural language processing, computer vision, and predictive analytics. The term artificial intelligence comes from the combination of artificial and intelligence. While artificial intelligence is a property of the physical world, intelligence is the property of the mind. How does it make sense in Web Development? As mentioned earlier, A.I. has significantly transformed the way we work and interpret information.

How is AI applied to web development?


In the majority of cases, AI is used to assist a developer with a number of functions: automatically formatting existing content, analyzing images for semantic meaning, and breaking down complex tasks into smaller pieces. One example application of AI in web development is image compression algorithms. Tools such as image recognition and machine learning have been key factors in the development of new image processing algorithms. Traditionally, manually processing an image was a lengthy and tedious process, but when computer vision was introduced into the process it drastically decreased the amount of time required to complete this task. Now, programs such as image recognition can identify objects in images and classify them based on both visual and metadata attributes.

Machine learning


When data is fed into a machine learning algorithm, the machine learns to understand it. For instance, if you provide a machine learning algorithm with examples of dogs versus blueberries, the machine will learn to identify what a picture of a blueberry looks like versus a picture of a dog. Natural language processing is a sub-field of machine learning; you can apply it to reading emails, chatting, or writing blog posts (such as this one!). A good example of natural language processing in action can be found in Microsoft's Cortana. Deep learning, covered below, is the most popular type of artificial intelligence today.

Deep learning


Deep learning algorithms are loosely modeled on how the human brain works, with built-in mechanisms to learn and memorise a vast amount of information. It's these connections that enable machines to recognise patterns and learn from them. An example of this is Google Translate, which supports more than 100 languages and shows how useful these programs can be. Deep learning is one of the hottest technologies in the field of machine learning, which explains why almost all of the major technology companies are pushing these advances forward.

Natural language processing


For example, your phone can understand you better when you speak to it. If you say “Hey, Siri,” your phone will listen and respond to your questions. In general, it means that the system has been trained and is able to better understand the context of what you’re trying to communicate. This type of natural language processing is used by the majority of companies today, including the likes of Google and Apple, to improve the user experience, provide better customer service, and aid in the effective execution of processes. Machine learning, in turn, is an extremely powerful technique used to further improve artificial intelligence, making machines smarter by discovering patterns and generalities in vast amounts of data.

Computer vision


Computer vision is a technology that recognizes objects in images and video. A popular example is Apple's Siri, which was one of the first pieces of consumer software to provide contextual awareness. AI is built on this technology, providing the capability to recognize various images and videos. The industry is still in its infancy, but what we have seen so far has been incredibly impressive. What's amazing is that just a few years ago we thought that vision was completely under our control, but now it has evolved to understand the nuances of objects.

Conclusion

“In the year 2050, the Amazon book you ordered for your Kindle will be delivered by a drone.”

This futuristic statement by Amazon CEO Jeff Bezos does leave you pondering. But it is one thing to dream about the future and another to think about the innovations taking place in the present and how you can exploit them to drive better business results. To make the most of the technologies coming into our everyday lives, we must acquire a knowledge of AI technology, its features, and its applications. Succeeding in today’s competitive and challenging business world requires a broad set of skills such as coding, business analysis, computer programming, and ecommerce marketing.

Learn more about AI with our video library
8280 views · 3 years ago


Recently I was faced with the task of posting data from a .csv file to an external REST API. In this article I’m just going to log what I did to get the job done.

Let’s start by creating a template for uploading the file. For this article’s sake, let’s make the changes in the dashboard.blade.php file.


<form method="post" enctype="multipart/form-data">
    @csrf
    <div class="custom-file">
        <input type="file" accept=".csv" name="excel" class="custom-file-input" id="customFile" />
        <label class="custom-file-label" for="customFile">Choose file</label>
    </div>
    <div>
        <button type="submit" class="btn btn-primary btn-sm" style="margin-top: 10px">Submit</button>
    </div>
</form>

Note: Don’t forget to add enctype="multipart/form-data"!



Once the user has submitted the file, we need a new route to process the file and send its content to the REST API. Let’s start by creating a controller.


php artisan make:controller UploadController


Now in the web.php file,


Route::post('/upload', [UploadController::class, 'upload'])->name('upload')->middleware('auth');
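One small detail: web.php has to be able to resolve the controller class, so (assuming the default App\Http\Controllers namespace of a recent Laravel skeleton) the top of routes/web.php would include something like this:

use App\Http\Controllers\UploadController;
use Illuminate\Support\Facades\Route;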


In UploadController.php, create a function named upload. We will be writing all the code inside this function. Also, we need an action for the form.


<form method="post" action="{{route('upload')}}" enctype="multipart/form-data">


Now inside the upload function, we need to get the submitted file and parse its contents.

Get the submitted file,


$file = $request->file('excel');


Parse the submitted file,


if (($handle = fopen($file, "r")) !== FALSE) {
    while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) {
        // ...
    }
}


We will be using a dummy REST API to create users — https://reqres.in/api/users. This is the request body required to create a user.


{ "name": "test", "job": "test"

}


Keeping this in mind, we will create a sample .csv template to be submitted. It needs two fields, namely Name and Job.



We need to send the values from this file as the request body to the API. So let’s add the code to loop through the content of this file.


if (($handle = fopen($file, "r")) !== FALSE) {
    while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) {
        Http::post('https://reqres.in/api/users', [
            'name' => $data[0],
            'job' => $data[1],
        ]);
    }
}


This will create a user for each row of the file. But we don’t need to send the data of the first row of the file, since that is the header row.

Full code:


public function upload(Request $request)
{
    $file = $request->file('excel');
    if ($file) {
        $row = 1;
        $array = [];
        if (($handle = fopen($file, "r")) !== FALSE) {
            while (($data = fgetcsv($handle, 1000, ",")) !== FALSE) {
                // Skip the header row
                if ($row > 1) {
                    Http::post('https://reqres.in/api/users', [
                        'name' => $data[0],
                        'job' => $data[1],
                    ]);
                    array_push($array, $data[0]);
                }
                $row++;
            }
            fclose($handle);
        }
        $request->session()->flash('status', 'Users ' . implode(", ", $array) . ' created successfully!');
    } else {
        $request->session()->flash('error', 'Please choose a file to submit.');
    }
    return view('dashboard');
}


This will post the data starting from the second row of the file, display a success message once the users are created, and an error message if the submit button is clicked without choosing a file.
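For completeness, the upload function above relies on Laravel's Request class and the Http client facade, so the top of UploadController.php would look roughly like this (assuming Laravel 7 or newer, where the Http facade is available):

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Http;

class UploadController extends Controller
{
    // the upload() method shown above goes here
}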

Full template:


<div class="container max-w-7xl mx-auto sm:px-6 lg:px-8" style="width: 50%"> @if (session('status')) <div class="alert alert-success"> {{ session('status') }} </div> @endif @if (session('error')) <div class="alert alert-error"> {{ session('error') }} </div> @endif <form action="{{route('upload')}}" method="post" enctype="multipart/form-data"> @csrf <div class="custom-file"> <input type="file" accept=".csv" name="excel" class="custom-file-input" id="customFile" /> <label class="custom-file-label" for="customFile">Choose file</label> </div> <div> <button type="submit" class="btn btn-primary btn-sm" style="margin-top: 10px">Submit</button> </div> </form>

</div>




That’s it, thanks for reading :)
5291 views · 3 years ago
Top 12 PHP Libraries to Leverage Your Web App Development



PHP, by all means, is an immensely powerful language!



We may fall short of words, but there is no end to its qualities. The endless functionality and possibilities of this server-side scripting language have earned it a strong and supportive global community of PHP programmers. At present, PHP powers more than half of the websites and applications on the internet.


Do you know what makes PHP so praiseworthy?



It is the simplicity, easy programming structure, and developer-friendly web functionality that are to be credited for turning PHP into one of the top programming languages. You can create highly interactive and dynamic websites and applications with the desired results by making use of PHP.



However, coding can often be a tough and tedious task. As a solution, you get PHP libraries that optimize the coding process for maximum productivity.



But what are these libraries?




That's exactly what you will find out as you move ahead in this article: a list of the top 12 PHP libraries capable of steering the development process in the intended direction.



So, without waiting any further, let's move ahead to learn about PHP libraries in-depth.



PChart




PChart is a PHP library that helps turn text data into something more appealing to the eye: visual charts.



You can use this library to represent data as bar charts, pie charts, and many other formats. The PHP script can use SQL query results to feed data into impressive charts or graphs.



Mink




Another well-known entry in the list of PHP libraries is Mink. It allows you to check whether a proper interaction is happening between your web apps and the browser. Eliminating the API differences between the two types of browser emulators, Mink offers an authentic testing environment for you. It also supports PHPUnit, Behat, and Symfony2.



Monolog




Monolog is a PHP logging library that helps you save logs to specified locations by sending them to files, sockets, inboxes, databases, or other web services. The use of the PSR-3 interface permits you to type-hint the logger against your own libraries and so maintain maximum interoperability.
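To give an idea of how little code is involved, here is a minimal Monolog 1/2-style sketch that writes warnings and above to a file (the channel name and log path are arbitrary examples):

<?php

require __DIR__ . '/vendor/autoload.php';

use Monolog\Handler\StreamHandler;
use Monolog\Logger;

// create a log channel and attach a handler that writes to a file
$log = new Logger('app');
$log->pushHandler(new StreamHandler(__DIR__ . '/app.log', Logger::WARNING));

// anything at WARNING level or above ends up in app.log
$log->warning('Disk space is running low');
$log->error('Could not connect to the database');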



Hoa




This modular, extensible, and structured set of PHP libraries, known as Hoa, establishes a link between research and industry.



It recommends essential paradigms, mechanisms, and algorithms for building the reliability of a site. Many PHP developers in different parts of the world use Hoa for ideal PHP development.



Guzzle




Guzzle is an HTTP client library for PHP that enables you to send HTTP requests to integrate with web services.



It offers a simple interface for building query strings, POST requests, HTTP cookies, and much more. You can also use Guzzle to send synchronous and asynchronous requests from the same interface.
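As a rough illustration, posting JSON to a dummy API (the same reqres.in endpoint used in the CSV-upload article above) might look like this with Guzzle (URL and payload are just example values):

<?php

require __DIR__ . '/vendor/autoload.php';

use GuzzleHttp\Client;

// a client with a base URI; individual requests use relative paths
$client = new Client(['base_uri' => 'https://reqres.in']);

// synchronous POST request with a JSON body
$response = $client->request('POST', '/api/users', [
    'json' => ['name' => 'test', 'job' => 'test'],
]);

echo $response->getStatusCode() . "\n";
echo $response->getBody();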



Ratchet




If you need to develop real-time, bi-directional apps between clients and servers over WebSockets, Ratchet is the PHP library that lets you do it effectively.



Creating event-driven apps with Ratchet is a rapid, simple, and easy job to do!
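To give a flavour of it, a bare-bones echo server following the pattern from Ratchet's hello-world tutorial might look like this (the EchoServer class name and port are arbitrary examples):

<?php

require __DIR__ . '/vendor/autoload.php';

use Ratchet\ConnectionInterface;
use Ratchet\Http\HttpServer;
use Ratchet\MessageComponentInterface;
use Ratchet\Server\IoServer;
use Ratchet\WebSocket\WsServer;

// Minimal message component: echoes every message back to its sender
class EchoServer implements MessageComponentInterface
{
    public function onOpen(ConnectionInterface $conn) {}

    public function onMessage(ConnectionInterface $from, $msg)
    {
        $from->send($msg);
    }

    public function onClose(ConnectionInterface $conn) {}

    public function onError(ConnectionInterface $conn, \Exception $e)
    {
        $conn->close();
    }
}

// Serve the WebSocket app on port 8080
$server = IoServer::factory(new HttpServer(new WsServer(new EchoServer())), 8080);
$server->run();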



Geocoder




Geocoder is a library for creating geo-aware applications.



With Geocoder, there is an abstraction layer that helps with geocoding manipulations.



It is further split into two parts, known as HttpAdapter and Provider.



ImageWorkshop




ImageWorkshop is an open-source PHP library that lets you manipulate images with layers. You can crop, resize, add watermarks, create thumbnails, and much more. You can also enhance the images on your sites.



PhpThumb




phpThumb is the library specialized in creating thumbnails with minimal coding. Accepting nearly every image source and format, it lets you do a lot, ranging from rotating or cropping to watermarking or defining the image quality.



Parody




This simple library, known as Parody, is used to mimic classes and objects. It also provides results for method calls, property access, object instantiation, and more. Parody uses sequential method chaining to define class structures.



Imagine




This object-oriented PHP library is meant for working with and manipulating images. Common operations such as resizing, cropping, and applying filters happen quickly and reliably with Imagine.



With Imagine, you get a color class that builds the RGB values of any given color, and you can draw shapes like arcs, ellipses, and lines with the features available.
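For instance, a resize-and-save thumbnail operation with the GD driver can be sketched in a few lines (the file names are placeholders):

<?php

require __DIR__ . '/vendor/autoload.php';

use Imagine\Gd\Imagine;
use Imagine\Image\Box;

$imagine = new Imagine();

// open an image, create a 200x200 thumbnail and save it
$imagine->open('photo.jpg')
    ->thumbnail(new Box(200, 200))
    ->save('photo-thumb.jpg');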



PhpFastCache




PhpFastCache is an open-source PHP library that makes caching feasible. Coming as a single-file, it can be integrated within a matter of minutes.



Caching backends supported by PhpFastCache include apc, memcache, memcached, wincache, pdo, and mpdo.


The Bottom Line




It's not about what extra difference these libraries make; it's about the significant individual contributions these libraries make toward the final desired PHP app or website.



Any PHP programmer will agree about the benefits of these libraries.



It's your time now to try and believe!
