PHP & Web Development Blogs


![](https://images.ctfassets.net/vzl5fkwyme3u/2onKLFlXK4GStdtncTzZiT/9b0b6c5c45bacad4b107264875514180/codewithme.png)

It took me quite some time to settle on my first blog post in this series, and I found myself thinking about the most requested functionality in my career: the good ol' custom CMS, typically geared towards clients who want a straightforward, secure solution that can be expanded upon in a modular fashion, and that is their own IP.

This will be our starting point. A blank slate to build something epic with clean code and even cleaner design. And in the spirit of building from scratch, I will refrain from using classes or a framework. The main reasoning behind this is to truly get everyone acquainted with and excited about PHP development.

Join me as I transform rudimentary code into something extraordinary that can be morphed into just about any Content, PHP, and MySQL driven project. So without further ado, let’s jump into it!

### The bare necessities

If you're just getting started with development, there's a nifty bite-sized server called [UniformServer](https://www.uniformserver.com/) that will be your best friend throughout your coding career. [PHPMyAdmin](https://www.phpmyadmin.net/) (an awesome visual DB management tool) comes built in, so if you're looking for a solution that works right out of the box, this is it.

Alternatively, you can opt for [XAMPP](https://www.apachefriends.org/index.html) or use an alternative server of your choice.

### Now here’s where the exciting stuff begins, mapping things out.

I don’t see this done/encouraged often enough. Feel free to grab a piece of paper to logically map out your steps or produce a rough draft of where you’d like this project to go.

In this tutorial, I would like to achieve the following:

![](https://cdn.filestackcontent.com/D0KhE3hEQbClPDYqChUm)

### DB, DB, Set up your DB.

This requires a bit of planning, but let's start off with the basic structure we need to see this through.

We are going to need a user table and a content table, and there are a few ways to tackle this.

If you’re using the PHPMyAdmin tool you can create your database, add user permissions (Click on Permissions after creating your database), and create a table with ease.

![](https://cdn.filestackcontent.com/PiEKcelYTFitGkAOuHhf)

If you're like me and prefer to look at good ol' SQL, then writing an SQL statement is the preferred approach.

```
CREATE TABLE `mydbname`.`content` (
    `ID` INT(11) NOT NULL AUTO_INCREMENT,
    `title` VARCHAR(100) NOT NULL,
    `content` LONGTEXT NOT NULL,
    `author` VARCHAR(50) NOT NULL,
    PRIMARY KEY (`ID`)
) ENGINE = MyISAM COMMENT = 'content table';
```

### Understanding the SQL statement

In a nutshell, we are creating a table with the important fields, namely:

`ID | Title | Content | Author`

The ID field is our unique identifier.
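We also said we'd need a user table; here is a minimal sketch of what that could look like (the column names are my own assumption, since this tutorial only creates the content table, and we'll want to store a password hash, never a plain password):

```
CREATE TABLE `mydbname`.`users` (
    `ID` INT(11) NOT NULL AUTO_INCREMENT,
    `username` VARCHAR(50) NOT NULL,
    `password` VARCHAR(255) NOT NULL, -- store a hash from password_hash(), never plain text
    PRIMARY KEY (`ID`)
) ENGINE = MyISAM COMMENT = 'user table';
```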

Now we can move on to the file structure.

### Everything has a place in the file structure game

You can use a structure that speaks to your coding style / memory.

I tend to use the following:

![](https://cdn.filestackcontent.com/ZhJPGlmGQGKYM1WNsipR)

Choose a name for your CMS, which should be placed at the webroot of your localhost/server.

Replicate the folder structure as per the above example.

### Next, we’re going to create a basic connection file.

You can create a `conn.php` file in your root/includes folder.

The connection file will provide crucial information to connect to the database.

Type the following into your `conn.php` file, remember to include your own database credentials.

```
<?php
// Connect to the database; swap in your own credentials
$letsconnect = new mysqli("localhost", "dbuser", "dbpass", "dbname");
?>
```

### Let’s go to the homepage (index.php)

Create a file called `index.php` at the root of your CMS folder.

I will be adding comments in my code to help you understand what each line does.

Comments are a useful tool for developers to add important notes private to their code.

We need to pull information from the database so it’s imperative that we include our connection file.

```
<?php
// Pull in our database connection
include('includes/conn.php');

if ($letsconnect->connect_errno) {
    // The connection failed, show the error
    echo "Error " . $letsconnect->connect_error;
} else {
    // Fetch every row from the content table
    $getmydata = $letsconnect->query("SELECT * FROM content");

    foreach ($getmydata as $mydata) {
        echo "Title: " . $mydata['title'] . "<br/>";
        echo "Content: " . $mydata['content'] . "<br/>";
        echo "Author: " . $mydata['author'] . "<br/>";
        echo "<br/>";
    }
}

$letsconnect->close();
?>
```

### Let’s get a (very) basic backend up and running

Create a file called `index.php` in your backend folder.

We need to create a basic form to capture our data.

Let’s code some HTML!

```
<html>
<head><title>Backend - Capture Content</title></head>
<body>
    <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post">
        <input type="text" name="title" placeholder="Content Title here" required/>
        <textarea name="content">Content Here</textarea>
        <input type="text" name="author" placeholder="Author" required/>
        <input type="submit" value="Save My Data" name="savedata"/>
    </form>
</body>
</html>
```

### Next, we need to process the form data.

Type the following just above the `<form>` tag.

```
<?php
if (isset($_POST['savedata'])) {
    include('../includes/conn.php');

    if ($letsconnect->connect_error) {
        die("Your Connection failed: " . $letsconnect->connect_error);
    } else {
        // Build the INSERT statement from the submitted form fields
        $sql = "INSERT INTO content (title, content, author) VALUES ('" . $_POST["title"] . "', '" . $_POST["content"] . "', '" . $_POST["author"] . "')";

        if (mysqli_query($letsconnect, $sql)) {
            echo "Your data was saved successfully!";
        } else {
            echo "Error: " . $sql . " " . mysqli_error($letsconnect);
        }

        $letsconnect->close();
    }
}
?>
```

> Note, this is a basic MySQL query to insert data. However, before using this in production it's important to add proper escaping and security to prevent SQL injections. This will be covered in the next article.
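As a quick preview of that fix, here is a minimal sketch of the same insert using a mysqli prepared statement with bound parameters, which keeps user input out of the SQL string itself:

```
<?php
// Sketch only: the same INSERT, with placeholders instead of concatenated input
$stmt = $letsconnect->prepare("INSERT INTO content (title, content, author) VALUES (?, ?, ?)");
$stmt->bind_param("sss", $_POST["title"], $_POST["content"], $_POST["author"]);
$stmt->execute();
$stmt->close();
?>
```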

### Congrats, you made it to the end of tutorial 1!

Test out your creation, modify your content, and play around.

Go to `sitename/index.php` to see your frontend after capturing data via `sitename/backend/index.php`.

### Next Up:

#codewithme Now With Security, Functionality, and Aesthetics in mind.

### Conclusion

Coding doesn't have to be daunting, and it's my aim to divide a complex system into bite-sized tutorials so you can truly use the knowledge you've acquired in your own projects.


![Introduction to Gitlab CI for PHP developers](https://images.ctfassets.net/vzl5fkwyme3u/5EUoVwcn2inEG3LsNJFAYp/14e5c704d91665c0de6ffd506a283ec3/AdobeStock_90389954.png?w=1000)

As a developer, you've probably at least heard something about [CI - Continuous integration](https://en.wikipedia.org/wiki/Continuous_integration). And if you haven't - you'd better fix that ASAP, because it's something awesome to have on your skill list and can be extremely helpful in your everyday work. This post will focus on CI for PHP devs, and specifically on the CI implementation from [Gitlab](https://docs.gitlab.com/ee/ci/README.html). I will assume you know the basics of [Git](https://git-scm.com/), [PHP](https://php.net/), [PHPUnit](https://phpunit.de/), [Docker](https://www.docker.com/) and the unix shell. Intended audience - intermediate PHP devs.

Adding something to your workflow must serve a purpose. In this case the goal is to automate routine tasks and achieve better quality control. Even a basic PHP project IMO needs the following:

* [linter](https://en.wikipedia.org/wiki/Lint_(software)) checks (cannot merge changes that are invalid on the syntax level)

* Code style checks

* Unit and integration tests

All of those can be run by hand, of course. But I prefer an automated CI approach even in my personal projects, because it leads to a higher level of discipline: you simply can't avoid following the set of rules that you've developed. It also reduces the risk of releasing a bug or regression, thus improving quality.

Gitlab is generous enough to give you their CI for free, even for your private repos. At this point it is starting to look like advertising, so here is a quick comparison table for [Gitlab](https://about.gitlab.com/pricing/), Github and [Bitbucket](https://bitbucket.org/product/pricing). AFAIK, Github does not have a built-in solution; instead, it is easily integrated with third parties, of which [Travis CI](https://github.com/marketplace/travis-ci/plan/MDIyOk1hcmtldHBsYWNlTGlzdGluZ1BsYW43MA==#pricing-and-setup) seems to be the most popular - I will therefore mention Travis here.

### Public repositories (OSS projects). All 3 providers have a free offer for the open-source community!

| Provider | Limits |
|---|---|
| Gitlab | 2,000 CI pipeline minutes per group per month, shared runners |
| Travis | Apparently unlimited |
| Bitbucket | 50 min/month, max 5 users, file storage <= 1 GB/month |

### Private repositories

| Provider | Price | Limits |
|---|---|---|
| Gitlab | Free | 2,000 CI pipeline minutes per group per month, shared runners |
| Travis | $69/month | Unlimited builds, 1 job at a time |
| Bitbucket | Free | 50 min/month, max 5 users, file storage <= 1 GB/month |

## Getting started

I made a small project based on the Laravel framework and called it "ci-showcase". I work in a Linux environment, and the commands I use in the examples are for the Linux shell. They should be pretty much the same on Mac, and nearly the same on Windows.

```sh

composer create-project laravel/laravel ci-showcase

```

Next, I went to the gitlab website and created a new public project: https://gitlab.com/crocodile2u/ci-showcase. I cloned the repo and copied all files and folders from the newly created project to the new git repo. In the root folder, I placed a `.gitignore` file:

```
.idea
vendor
.env
```

Then the `.env` file:

```
APP_ENV=development
```

Then I generated the application encryption key: `php artisan key:generate`, and then I wanted to verify that the primary setup works as expected: `./vendor/bin/phpunit`, which produced the output `OK (2 tests, 2 assertions)`. Nice, time to commit this: `git commit && git push`

[At this point](https://gitlab.com/crocodile2u/ci-showcase/tree/step-1), we don't yet have any CI, let's do something about it!

### Adding .gitlab-ci.yml

Everyone planning to implement CI with Gitlab is strongly encouraged to bookmark this page: https://docs.gitlab.com/ee/ci/README.html. I will simply provide a short introductory course here, plus a bit of boilerplate code to get you started more easily.

The first QA check that we're going to add is a PHP syntax check. PHP has a built-in linter, which you can invoke like this: `php -l my-file.php`. This is what we're going to use. Because the `php -l` command doesn't support multiple files as arguments, I've written a small wrapper shell script and saved it to `ci/linter.sh`:

```sh
#!/bin/sh

files=`sh ci/get-changed-php-files.sh | xargs`
last_status=0
status=0

# Loop through changed PHP files and run php -l on each
for f in $files ; do
    message=`php -l $f`
    last_status="$?"
    if [ "$last_status" -ne "0" ]; then
        # Anything fails -> the whole thing fails
        echo "PHP Linter is not happy about $f: $message"
        status="$last_status"
    fi
done

if [ "$status" -ne "0" ]; then
    echo "PHP syntax validation failed!"
fi

exit $status
```

Most of the time, you don't actually want to check each and every PHP file that you have. Instead, it's better to check only those files that have been changed. The Gitlab pipeline runs on every push to the repository, and there is a way to know which PHP files have been changed. Here's a simple script, meet `ci/get-changed-php-files.sh`:

```sh
#!/bin/sh

# What's happening here?
#
# 1. We get names and statuses of files that differ in the current branch from their state in origin/master.
#    These come as lines of the form "<status>	<filename>", e.g. "M	app/User.php"
# 2. The output from git diff is filtered by the unix grep utility; we only need files with names ending in .php
# 3. One more filter: filter *out* (grep -v) all lines starting with R or D.
#    D means "deleted", R means "renamed"
# 4. The filtered status-name list is passed on to the awk command, which is instructed to take only the 2nd part
#    of every line, thus just the filename

git diff --name-status origin/master | grep '\.php$' | grep -v "^[RD]" | awk '{ print $2 }'
```

These scripts can easily be tested in your local environment (at least if you have a Linux machine, that is ;-)).
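For instance, from the project root on a feature branch (assuming the remote master has been fetched), you could run:

```sh
git fetch origin                 # make sure origin/master is up to date
sh ci/get-changed-php-files.sh   # prints the PHP files changed on this branch
sh ci/linter.sh                  # lints them and reports any syntax errors
```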

Now, as we have our first check, we'll finally create our `.gitlab-ci.yml`. This is where your pipeline is declared using [YAML notation](https://yaml.org/):

```yml
# we're using this beautiful tool for our pipeline: https://github.com/jakzal/phpqa
image: jakzal/phpqa:alpine

# For this sample pipeline, we'll only have 1 stage; in the real world you would want to also add at least "deploy"
stages:
  - QA

linter:
  stage: QA
  # this is the main part: what is actually executed
  script:
    - sh ci/get-changed-php-files.sh | xargs sh ci/linter.sh
```

The first line is `image: jakzal/phpqa:alpine`, and it's telling Gitlab that we want to run our pipeline using the PHP-QA utility by [jakzal](https://github.com/jakzal). It is a docker image containing PHP and a huge variety of QA tools. We declare one stage - QA - and this stage for now has just a single job named `linter`. Every job can have its own docker image, but we don't need that for the purpose of this tutorial. Our project reaches [Step 2](https://gitlab.com/crocodile2u/ci-showcase/tree/step-2). Once I had pushed these changes, I immediately went to the [project's CI/CD page](https://gitlab.com/crocodile2u/ci-showcase/pipelines). Aaaand.... the pipeline was already running! I clicked on the `linter` job and saw the following happy green output:

```
Running with gitlab-runner 11.9.0-rc2 (227934c0) on docker-auto-scale ed2dce3a
Using Docker executor with image jakzal/phpqa:alpine ...
Pulling docker image jakzal/phpqa:alpine ...
Using docker image sha256:12bab06185e59387a4bf9f6054e0de9e0d5394ef6400718332c272be8956218f for jakzal/phpqa:alpine ...
Running on runner-ed2dce3a-project-11318734-concurrent-0 via runner-ed2dce3a-srm-1552606379-07370f92...
Initialized empty Git repository in /builds/crocodile2u/ci-showcase/.git/
Fetching changes...
Created fresh repository.
From https://gitlab.com/crocodile2u/ci-showcase
 * [new branch] master -> origin/master
 * [new branch] step-1 -> origin/step-1
 * [new branch] step-2 -> origin/step-2
Checking out 1651a4e3 as step-2...
Skipping Git submodules setup
$ sh ci/get-changed-php-files.sh | xargs sh ci/linter.sh
Job succeeded
```

It means that our pipeline was successfully created and run!

### PHP Code Sniffer

[PHP Code Sniffer](https://github.com/squizlabs/PHP_CodeSniffer) is a tool for keeping all of your PHP files in one uniform code style. It has a ton of customization options and settings, but here we will only perform a simple check for compatibility with the [PSR-2](https://www.php-fig.org/psr/psr-2/) standard. A good practice is to create a configuration XML file in your project. I will put it in the root folder. Code Sniffer accepts a few file names, of which I prefer `phpcs.xml`:

```xml
<?xml version="1.0"?>
<ruleset name="ci-showcase">
    <!-- check the code against the PSR-2 coding standard -->
    <rule ref="PSR2"/>
    <file>app</file>
    <file>tests</file>
    <exclude-pattern>/resources</exclude-pattern>
</ruleset>
```

I will also append another section to `.gitlab-ci.yml`:

```yml
code-style:
  stage: QA
  script:
    # Variable $files will contain the list of PHP files that have changes
    - files=`sh ci/get-changed-php-files.sh`
    # If this list is not empty, we execute the phpcs command on all of them
    - if [ ! -z "$files" ]; then echo $files | xargs phpcs; fi
```

Again, we check only those PHP files that differ from the master branch, and pass their names to the `phpcs` utility. That's it, [Step 3](https://gitlab.com/crocodile2u/ci-showcase/tree/step-3) is finished! If you look at the pipeline now, you will notice that the `linter` and `code-style` jobs run in parallel.

## Adding PHPUnit

Unit and integration tests are essential for a successful and maintainable modern software project. In the PHP world, [PHPUnit](https://phpunit.de/) is the de facto standard for these purposes. The PHPQA docker image already has PHPUnit, but that's not enough. Our project is based on [Laravel](https://laravel.com/), which means it depends on a bunch of third-party libraries, Laravel itself being one of them. Those are installed into the `vendor` folder with [composer](https://getcomposer.org/). You might have noticed that our `.gitignore` file has the `vendor` folder as one of its entries, which means it is not managed by the version control system. Some prefer their dependencies to be part of their Git repository; I prefer to have only the `composer.json` declarations in Git. That makes the repo much smaller than the other way round, and also makes it easy to avoid bloating your production builds with libraries only needed for development.

Composer is also included in the PHPQA docker image, so we can enrich our `.gitlab-ci.yml`:

```yml
test:
  stage: QA
  cache:
    key: dependencies-including-dev
    paths:
      - vendor/
  script:
    - composer install
    - ./vendor/bin/phpunit
```

PHPUnit requires some configuration, but in the very beginning we used `composer create-project` to create our project boilerplate. The **laravel/laravel** package has a lot of things included, and `phpunit.xml` is one of them. All I had to do was add another line to its `<php>` section (with the actual key value coming from the next step):

```xml
<env name="APP_KEY" value="base64:your-generated-key-here"/>
```

The APP_KEY environment variable is essential for Laravel to run, so I generated a key with `php artisan key:generate`.

`git commit` & `git push`, and we have all three jobs on the **QA** stage!

## Checking that our checks work

In [this branch](https://gitlab.com/crocodile2u/ci-showcase/tree/failing-checks) I intentionally added changes that should fail all three jobs in our pipeline; take a look at the [git diff](https://gitlab.com/crocodile2u/ci-showcase/compare/step-4...failing-checks). Here is the output from the pipeline stages:

**Linter**:

```
$ ci/linter.sh
PHP Linter is not happy about app/User.php:
Parse error: syntax error, unexpected 'syntax' (T_STRING), expecting function (T_FUNCTION) or const (T_CONST) in app/User.php on line 11
Errors parsing app/User.php
PHP syntax validation failed!
ERROR: Job failed: exit code 255
```

**Code-style**:

```
$ if [ ! -z "$files" ]; then echo $files | xargs phpcs; fi
FILE: ...ilds/crocodile2u/ci-showcase/app/Http/Controllers/Controller.php
----------------------------------------------------------------------
FOUND 0 ERRORS AND 1 WARNING AFFECTING 1 LINE
----------------------------------------------------------------------
 13 | WARNING | Line exceeds 120 characters; contains 129 characters
----------------------------------------------------------------------
Time: 39ms; Memory: 6MB
ERROR: Job failed: exit code 123
```

**test**:

```
$ ./vendor/bin/phpunit
PHPUnit 7.5.6 by Sebastian Bergmann and contributors.

F.                         2 / 2 (100%)

Time: 102 ms, Memory: 14.00 MB

There was 1 failure:

1) Tests\Unit\ExampleTest::testBasicTest
This test is now failing
Failed asserting that false is true.

/builds/crocodile2u/ci-showcase/tests/Unit/ExampleTest.php:17

FAILURES!
Tests: 2, Assertions: 2, Failures: 1.
ERROR: Job failed: exit code 1
```

Congratulations, our pipeline is running, and we now have a much smaller chance of messing up the result of our work.

## Conclusion

Now you know how to set up a basic QA pipeline for your PHP project. There's still a lot to learn. A pipeline is a powerful tool. For instance, it can make deployments to different environments for you. Or it can build docker images, store artifacts and more! Sounds cool? Then spend 5 minutes of your time and leave a comment; you can also tell me if there is a pipeline topic you would like covered in future posts.


![Create Alarm and Monitoring on Custom Memory and Disk Metrics for Amazon EC2](https://images.ctfassets.net/vzl5fkwyme3u/2HgdCq2lZucMyuiYyYiQie/b7376c29a2f94799613e8c1cb8ff4d3b/AdobeStock_91111530.jpeg?w=1000)

Today I am going to write about how to monitor custom memory and disk metrics and create alarms for them on Ubuntu.

To do this, we can use Amazon CloudWatch, which provides a flexible, scalable and reliable solution for monitoring our server.

Amazon CloudWatch allows us to collect custom metrics from our applications, which we can monitor to troubleshoot issues, spot trends, and tune operational performance. CloudWatch displays alarms, graphs, custom metrics data and statistics.

## Installing the Scripts

Before we start installing the monitoring scripts, we should install all the packages they depend on.

First, log in to your AWS server and, from the terminal, install the packages below:

```sh
sudo apt-get update
sudo apt-get install unzip
sudo apt-get install libwww-perl libdatetime-perl
```

### Now Install the Monitoring Scripts

Follow these steps to download, unzip, and configure the CloudWatch monitoring scripts on our server:

**1. In the terminal, change to the directory where we want to install the monitoring scripts.**

**2. Now run the command below to download the source:**

```sh
curl https://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.2.zip -O
```

**3. Now uncompress the downloaded archive using the following commands:**

```sh
unzip CloudWatchMonitoringScripts-1.2.2.zip && \
rm CloudWatchMonitoringScripts-1.2.2.zip && \
cd aws-scripts-mon
```

The directory contains Perl scripts whose execution reports memory and disk space utilization metrics from our Ubuntu server.

Currently, our folder will contain the following files:

**mon-get-instance-stats.pl** - This Perl script displays the current utilization statistics for the AWS instance on which it is executed.

**mon-put-instance-data.pl** - This Perl script collects system metrics on our Ubuntu server and sends them to Amazon CloudWatch.

**awscreds.template** - This file contains an example of the AWS credentials, with fields for the access key ID and secret access key.

**CloudWatchClient.pm** - This Perl module simplifies calling Amazon CloudWatch from the other scripts.

**LICENSE.txt** - This file contains the Apache 2.0 license details.

**NOTICE.txt** - This file contains the copyright notice.

**4. To perform the CloudWatch operations, we need to confirm that our scripts have permission for the corresponding actions:**

If an IAM role is associated with our EC2 Ubuntu instance, verify that it grants permission to perform the operations listed below:

```
cloudwatch:GetMetricStatistics
cloudwatch:PutMetricData
ec2:DescribeTags
cloudwatch:ListMetrics
```
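If you manage that role yourself, a minimal IAM policy granting these actions might look like the sketch below (the policy name and resource scope are up to you; the resource is left wide open here):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:PutMetricData",
        "cloudwatch:ListMetrics",
        "ec2:DescribeTags"
      ],
      "Resource": "*"
    }
  ]
}
```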

Now we need to copy the `awscreds.template` file to `awscreds.conf` using the command below, and then update the file with our AWS credentials:

```sh
cp awscreds.template awscreds.conf
```

Inside `awscreds.conf`, set the keys:

```
AWSAccessKeyId = my_access_key_id
AWSSecretKey = my_secret_access_key
```

With that, the configuration is complete.

## mon-put-instance-data.pl

This Perl script collects memory, swap, and disk space utilization data from the current system, then makes a remote call to Amazon CloudWatch, reporting the collected data as custom metrics.

We can perform a simple test run, without sending data to Amazon CloudWatch, using the command below:

```sh
./mon-put-instance-data.pl --mem-util --verify --verbose
```

Now we are going to set up a cron schedule to collect our metrics and send them to Amazon CloudWatch.

**1. Edit the crontab using the command below:**

```sh
crontab -e
```

**2. Now add the following entry, which reports memory and disk space utilization for particular paths to Amazon CloudWatch every five minutes:**

```sh
*/5 * * * * ~/STORAGE/cloudwatch/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-avail --mem-used --disk-space-util --disk-space-avail --disk-space-used --disk-path=/ --disk-path=/STORAGE --from-cron
```

If there is an error, the scripts will write an error message to our system log.

### Use of Options

**--mem-used**

This option collects used memory and reports it in MB to the MemoryUsed metric. The metric counts memory allocated by applications and the OS as used.

**--mem-util**

This option collects memory utilization as a percentage and reports it to the MemoryUtilization metric, counting memory used by applications and the OS.

**--disk-space-util**

This option collects the currently utilized disk space and reports it as a percentage to the DiskSpaceUtilization metric for the selected disks.

**--mem-avail**

This option collects the available memory and reports it in MB to the MemoryAvailable metric. The metric counts memory not allocated by applications and the OS as available.

**--disk-path=PATH**

This option points the script at the disk path for which to report disk space.

**--disk-space-avail**

This option collects the available disk space and reports it in GB to the DiskSpaceAvailable metric for the selected disks.

**--disk-space-used**

This option collects the used disk space and reports it in GB to the DiskSpaceUsed metric for the selected disks.

PATH can point to a mount point or to any file located on the filesystem that needs to be reported.

To report on multiple disks, specify a path for each, like below:

```sh
--disk-path=/ --disk-path=/home
```

## Setting an Alarm for Custom Metrics

Before we run our Perl scripts, note that only the default metrics are available for alarms; our custom metrics are not listed yet. You can see some of the default metrics in the image below:

![](https://4.bp.blogspot.com/-dYiNgtv5t4Q/XB-Kw76S49I/AAAAAAAAExM/DKKgzPUmVPUNpDqb0c57sVd3DjhY_3O6wCLcBGAs/s1600/aws.jpg)

Once the cron is set up, the custom metrics will appear under Linux System Metrics.

Now let's create the alarm for our custom metrics:

**1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/home**

**2. In the navigation panel, click on Alarms, then Create Alarm.**

**3. This opens a popup with the list of CloudWatch metrics by category.**

**4. Now click on Linux System Metrics. This lists the custom metrics, as you can see in the pictures below:**

![](https://3.bp.blogspot.com/-3OxSPmHahYc/XB-MhlaGSAI/AAAAAAAAExY/d9UIOFakcGAOADlbNOyd75r0Vkl5GtOfwCLcBGAs/s1600/aws2.jpg)

![](https://2.bp.blogspot.com/-Urfud3sv5LA/XB-My0O-qXI/AAAAAAAAExg/1isOeglXYyoS3U0u2uhwJN8ddhtnoHUwACLcBGAs/s1600/aws3.jpg)

![](https://2.bp.blogspot.com/-tXU4ieWKKdo/XB-NDrArsII/AAAAAAAAExo/7oz3qQZ7GrMETiJwWJdK75_hJ0qfADLvgCLcBGAs/s1600/aws4.jpg)

**5. Select the metric details and click on the NEXT button, which takes us to the Define Alarm step.**

![](https://3.bp.blogspot.com/-Vu0WoCCP_5o/XB-NiKGzmSI/AAAAAAAAExw/IPBX8ir-97Q_Q6L6Ajq2vNpukYGN8BB_QCLcBGAs/s1600/aws5.jpg)

**6. Now define the alarm with the required fields:**

Enter an alarm name to identify it, then give a description of the alarm.

Next, set the condition: the maximum limit of bytes or percentage at which the alarm should be notified. When the condition is satisfied, the alarm starts to trigger.

Provide any additional information about the alarm.

Define the actions to be taken when the alarm changes state.

Select or create a new topic, with the email addresses that should be notified about alarm state changes.

**7. Finally, choose Create Alarm.**

That's it. The alarm is now created for our selected custom metrics.

### Finished!

The alarm will now be listed under the selected state in our AWS panel. Select an alarm from the list to see its details and history.


![PHP IPC with Daemon Service using Message Queues, Shared Memory and Semaphores](https://images.ctfassets.net/vzl5fkwyme3u/4ULcw2rCysGcSGOAi2uKOk/450013591b84069c5536663430536714/AdobeStock_200383770.jpeg?w=1000)

# Introduction

In a previous article we learned about [Creating a PHP Daemon Service](https://beta.nomadphp.com/blog/50/creating-a-php-daemon-service). Now we are going to learn how to use methods to perform IPC - Inter-Process Communication - to communicate with daemon processes.

# Message Queues

In the world of UNIX there is an incredible variety of ways to send a message or a command to a daemon script, and vice versa. But first I want to talk only about message queues - "System V IPC Message Queues".

A long time ago I learned that a queue can exist either in the System V IPC implementation or in the POSIX implementation. I will comment only on the System V implementation, as I know it better.

Let's get started. At the operating system level, queues are stored in memory. Queue data structures are available to all system programs. Just as in the file system, it is possible to configure queue access rights and message size. Usually a queue message is small, less than 8 KB.

This introductory part is over. Let's move on to practice with some example scripts.

**queue-send.php**

```php
// Convert a path name and a project identifier to a System V IPC key
$key = ftok(__FILE__, 'A'); // 555 for example

// Create a message queue with the key (an integer value)
$queue = msg_get_queue($key);

// Send messages. Note that all required fields are already filled in,
// but sometimes you may want to serialize an object and send that as the message.
// Note that we specify different types. A type is a certain group within the queue.
msg_send($queue, 1, 'message, type 1');
msg_send($queue, 2, 'message, type 2');
msg_send($queue, 3, 'message, type 3');
msg_send($queue, 1, 'message, type 1');

echo "send 4 messages\n";
```

**queue-receive.php**

```php
$key = ftok('queue-send.php', 'A'); // 555 for example
$queue = msg_get_queue($key);

// Loop through all types of messages.
for ($i = 1; $i <= 3; $i++) {
    echo "type: {$i}\n";

    // Read all messages of this type; read messages are removed from the queue.
    // Note the constant MSG_IPC_NOWAIT: without it the call would block forever.
    while (msg_receive($queue, $i, $msgtype, 4096, $message, false, MSG_IPC_NOWAIT)) {
        echo "type: {$i}, msgtype: {$msgtype}, message: {$message}\n";
    }
}
```

Let's first run queue-send.php, and then queue-receive.php:

```sh
u% php queue-send.php
send 4 messages
u% php queue-receive.php
type: 1
type: 1, msgtype: 1, message: s:15:"message, type 1";
type: 1, msgtype: 1, message: s:15:"message, type 1";
type: 2
type: 2, msgtype: 2, message: s:15:"message, type 2";
type: 3
type: 3, msgtype: 3, message: s:15:"message, type 3";
```

You may notice that the messages have been grouped: the first group gathered the 2 messages of the first type, followed by the remaining messages.

If we had asked to receive messages of type 0, we would have gotten all messages, regardless of type:

```php
while (msg_receive($queue, 0, $msgtype, 4096, $message, false, MSG_IPC_NOWAIT)) {
    // ...
}
```

Here it is worth noting another feature of queues: if we do not use the MSG_IPC_NOWAIT constant and run the receiving script from a terminal, then periodically run queue-send.php, we can see how a daemon can effectively use this to wait for jobs.

**queue-receive-wait.php**

```php
$key = ftok('queue-send.php', 'A'); // 555 for example
$queue = msg_get_queue($key);

// Read messages of any type. Without MSG_IPC_NOWAIT this call blocks
// until a message arrives; read messages are removed from the queue.
while (msg_receive($queue, 0, $msgtype, 4096, $message)) {
    echo "msgtype: {$msgtype}, message: {$message}\n";
}
```

Actually, that is the most interesting information of all I have said. There are also functions to get queue statistics, to remove queues, and to check for their existence.
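For completeness, here is a quick sketch of those housekeeping functions from the sysvmsg extension (msg_queue_exists(), msg_stat_queue() and msg_remove_queue()):

```php
$key = ftok('queue-send.php', 'A');

if (msg_queue_exists($key)) {          // check for the existence of a queue
    $queue = msg_get_queue($key);
    print_r(msg_stat_queue($queue));   // statistics: message count, byte count, last pids...
    msg_remove_queue($queue);          // dispose of the queue
}
```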

Let's now try to write a daemon listening to a queue:

**queue-daemon.php**

```php
$key = ftok('queue-send.php', 'A');
$queue = msg_get_queue($key);

// Fork the process
$pid = pcntl_fork();

if ($pid == -1) {
    exit; // fork failed
} elseif ($pid) {
    exit; // parent process exits
} else {
    // Child process: disengage from the terminal
    posix_setsid();

    // Block waiting for messages of any type
    while (msg_receive($queue, 0, $msgtype, 4096, $message)) {
        echo "msgtype: {$msgtype}, message: {$message}\n";
    }
}
```

# Shared Memory

We have learned to work with queues, which let us send small system messages. But we may certainly be faced with the task of transmitting large amounts of data. My favorite flavor, System V, solves the problem of rapid transmission and preservation of large data in memory using a mechanism called **Shared Memory**.

In short, data in Shared Memory lives until the system is rebooted. Since the data is in memory, it works much faster than if it were stored in a database, in a file somewhere, or, God forgive me, on a network share.

Let's try to write a simple example of data storage.

**shared-memory-write-base.php**

```php
// This is the correct and recommended way to obtain a unique identifier.
// With this approach, the system uses the inode table of the file system,
// and for greater uniqueness converts this number based on the second parameter.
// The second parameter is always a single letter.
$id = ftok(__FILE__, 'A');

// Create or open the memory block.
// Here you can specify additional parameters, in particular the size of the block
// or access rights for other users to access this memory block.
// We could also simply pass any integer value instead of $id.
$shmId = shm_attach($id);

// The key of our shared variable within the block (any integer value)
$var = 1;

// Check whether we already have this variable.
if (shm_has_var($shmId, $var)) {
    // If so, read the data
    $data = (array) shm_get_var($shmId, $var);
} else {
    // If the data was not there
    $data = array();
}

// Append the contents of this file to the resulting array, keyed by timestamp.
$data[time()] = file_get_contents(__FILE__);

// And write the array back into memory, specifying where to save the variable.
shm_put_var($shmId, $var, $data);

// Easy?
```

Run this script several times to save values in memory. Now let's write a script that only reads from the memory.

**shared-memory-read-base.php**

```php
// Read data from memory.
$id = ftok(__DIR__ . '/shared-memory-write-base.php', 'A');
$shmId = shm_attach($id);
$var = 1;

// Check if we have the requested variable.
if (shm_has_var($shmId, $var)) {
    $data = (array) shm_get_var($shmId, $var);
} else {
    $data = array();
}

// Iterate over the received values and save them to files.
foreach ($data as $key => $value) {
    // A simple example: create a file from the data that we have saved.
    $path = "/tmp/$key.php";
    file_put_contents($path, $value);
    echo $path . PHP_EOL;
}
```

# Semaphores

So, in general terms, it should be clear by now how to work with shared memory. The only problems left to figure out are a couple of nuances, such as: "What do we do if two processes want to write to one block of memory at the same time?" or "How do we store binary files of any size?"

To prevent simultaneous access we will use semaphores. Semaphores allow us to flag that we want exclusive access to some resource, like a shared memory block. While that happens, other processes will wait their turn on the semaphore.

The following code explains this clearly:

**shared-memory-semaphors.php**

```php
// Let's try to save a binary file, a couple of megabytes in size.
// This script does the following:
// if there is data in memory, it reads it; otherwise it writes data into memory.
// When writing to the memory we take a lock - a semaphore.
// Everything else is as usual, see the previous comments.
$id = ftok(__FILE__, 'A');

// Obtain a semaphore resource - our locking feature. There is nothing wrong with
// using the same id that is used to obtain the shared memory resource.
$semId = sem_get($id);

// Acquire the lock. There's a caveat: if another process encounters this lock,
// it will wait until the lock is released.
sem_acquire($semId);

// Use any picture you like
$data = file_get_contents(__DIR__.'/06050396.JPG', FILE_BINARY);

// The data can be large, so we have to allocate enough space for it
$shmId = shm_attach($id, strlen($data) + 4096);
$var = 1;

if (shm_has_var($shmId, $var)) {
    // Obtain the data from memory
    $data = shm_get_var($shmId, $var);

    // Save our file somewhere
    $filename = '/tmp/' . time();
    file_put_contents($filename, $data, FILE_BINARY);

    // Remove the memory block, so it all starts over again.
    shm_remove($shmId);
} else {
    shm_put_var($shmId, $var, $data);
}

// Release the lock.
sem_release($semId);
```

Now you can use the md5sum command line utility to compare the two files, the original and the saved one. Or you can open the saved file in an image editor, or compare the images any way you prefer.
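For example (the timestamp in the saved file's name will differ on your machine, so this path is only illustrative):

```sh
md5sum 06050396.JPG /tmp/1545050000   # the two checksums should be identical
```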

With this we are done with shared memory and semaphores. As your homework, I want to ask you to write code in which a daemon uses semaphores to access shared memory.

# Conclusion

Exchanging data between daemons is very simple. This article described two options for data exchange: message queues and shared memory.

Post a comment here if you have questions or comments about how to exchange data with daemon services in PHP.


![Creating a PHP Daemon Service](https://images.ctfassets.net/vzl5fkwyme3u/18L41PfcrcYYkM0qAsCous/7caca26b8cfb5a643d8cb16b14ae5eae/AdobeStock_147870533.jpeg?w=1000)

# What is a Daemon?

The term daemon was coined by the programmers of Project MAC at MIT. It was inspired by Maxwell's demon, in charge of sorting molecules in the background. UNIX systems adopted this terminology for daemon programs.

It also refers to a character from Greek mythology that performs tasks the gods do not want to take on themselves. As stated in the "UNIX System Administration Handbook", in ancient Greece the concept of a "personal daemon" was, in part, comparable to the modern concept of a "guardian angel". The BSD family of operating systems uses the image of a daemon as its logo.

Daemons are usually started at machine boot time. In the technical sense, a daemon is a process that does not have a controlling terminal, and accordingly has no user interface. Most often, the ancestor of a daemon process is init, the root process on UNIX, although many daemons are started from special rc.d scripts launched from a terminal console.

W. Richard Stevens describes the following steps for writing daemons:

1. Reset the file mode creation mask to 0 with umask(), so that no permission bits inherited from the starting process are masked away.

2. Call fork() and exit the parent process. This is done so that, if the process was launched from a shell, the shell believes the command has finished; also, the child inherits the process group ID of the parent but gets its own process ID, which guarantees it is not a process group leader.

3. Create a new session by calling setsid(). The process becomes the leader of a new session and of a new process group, and loses its controlling terminal.

4. Make the root directory the current working directory, so that the daemon does not keep any mounted filesystem busy.

5. Close all file descriptors.

6. Redirect descriptors 0, 1 and 2 (STDIN, STDOUT and STDERR) to /dev/null or to files like /var/log/project_name.out, because some standard library functions use these descriptors.

7. Record the pid (process ID number) in a pid file: /var/run/projectname.pid.

8. Correctly process the SIGTERM and SIGHUP signals: exit with the destruction of all child processes and pid files, and/or re-read the configuration.

# How to Create Daemons in PHP

To create daemons in PHP you need to use the pcntl and posix extensions. To implement fast communication within daemon scripts it is recommended to use the libevent extension for asynchronous I/O.

Let's take a closer look at the code to start a daemon:

```php
umask(0); // § 1

$pid = pcntl_fork(); // § 2
if ($pid < 0) {
    print('fork failed');
    exit(1);
}
```

After a fork, the execution of the program continues as if in two branches of the code: one for the parent process and one for the child process. What distinguishes the two processes is the value returned by the fork() call: the parent receives the process ID of the newly created child, and the child receives 0.

```php
if ($pid > 0) { // the parent process
    echo "daemon process started\n";
    exit; // Exit
}

// ($pid == 0) child process
$sid = posix_setsid(); // § 3
if ($sid < 0) {
    exit(2);
}

chdir('/'); // § 4
file_put_contents($pidFilename, getmypid()); // § 7
run_process(); // start the main work cycle
```

The implementation of step 5, "close all file descriptors", can be approached in two ways. Closing absolutely all file descriptors is difficult to do in PHP, so you should simply avoid opening file descriptors before fork(). Alternatively, you can redirect standard output to an error log file using ini_set(), or use buffering with ob_start() into a variable and store that in a log file:

```php
ob_start(); // slightly modified, § 5
var_dump($some_object); // some output
$content = ob_get_clean(); // takes the output buffer contents and clears it
fwrite($fd_log, $content); // writes the buffered output to the log
```

Typically, ob_start() marks the start of the daemon's life cycle, and the ob_get_clean() and fwrite() calls mark the end. However, you can also directly override STDIN, STDOUT and STDERR:

```php
ini_set('error_log', $logDir.'/error.log'); // set the log file
// $logDir - e.g. /var/log/mydaemon

// Close the open system file descriptors STDIN, STDOUT, STDERR
fclose(STDIN);
fclose(STDOUT);
fclose(STDERR);

// redirect stdin to /dev/null
$STDIN = fopen('/dev/null', 'r');
// redirect stdout to a log file
$STDOUT = fopen($logDir.'/application.log', 'ab');
// redirect stderr to a log file
$STDERR = fopen($logDir.'/application.error.log', 'ab');
```

Now, our process is disconnected from the terminal and the standard output is redirected to a log file.

# Handling Signals

Signal processing is carried out with handlers, either via the pcntl extension (pcntl_signal_dispatch()) or by using libevent. In the first case, you must define a signal handler:

```php
// signal handler
function sig_handler($signo)
{
    global $fd_log;
    switch ($signo) {
        case SIGTERM:
            // SIGTERM signal processing
            fclose($fd_log);  // close the log file
            unlink($pidfile); // remove the pid file
            exit;
            break;
        case SIGHUP:
            // SIGHUP handling
            init_data(); // re-read the configuration file and initialize the data again
            break;
        default:
            // other signals, error reporting
    }
}

// register the signal handlers
pcntl_signal(SIGTERM, "sig_handler");
pcntl_signal(SIGHUP, "sig_handler");
```

Note that signals are only processed when the process is in an active mode. Signals received while the process is waiting for input or sleeping will not be processed immediately; call the wait function pcntl_signal_dispatch() to run the handlers for pending signals. We can ignore a signal using the SIG_IGN flag: pcntl_signal(SIGHUP, SIG_IGN); or, if necessary, restore the default signal handler using the SIG_DFL flag: pcntl_signal(SIGHUP, SIG_DFL);
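As a minimal sketch, a daemon's main loop could dispatch pending signals on every iteration (process_next_job() is a hypothetical stand-in for the real work):

```php
while (true) {
    pcntl_signal_dispatch(); // run the handlers for any signals received so far
    process_next_job();      // hypothetical unit of work
}
```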

# Asynchronous I/O with Libevent

If you use blocking input/output, the signal processing described above does not apply. Instead, it is recommended to use the libevent library, which provides non-blocking input/output, signal processing, and timers. Libevent provides a simple mechanism to run callback functions on file descriptor events: Write, Read, Timeout, Signal.

Initially, you have to declare one or more events with a handler (callback function) and attach them to the base event context:

```php
$base = event_base_new(); // create the base context for monitoring events

// create a context for the current event; one context for each event
$event = event_new();
$errno = 0;
$errstr = '';

// the observed object (handle)
$socket = stream_socket_server("tcp://$IP:$port", $errno, $errstr);
stream_set_blocking($socket, 0); // set to non-blocking mode

// attach a handler to the handle
event_set($event, $socket, EV_READ | EV_PERSIST, 'onAccept', $base);
```

The handler functions 'onRead', 'onWrite' and 'onError' must implement the processing logic. Data is read from the buffer, which operates in non-blocking mode:

```php
function onRead($buffer, $id)
{
    // read from the buffer, up to 256 characters at a time, until EOF
    while ($read = event_buffer_read($buffer, 256)) {
        var_dump($read);
    }
}
```

The main event loop runs with the function event_base_loop($base);. You can exit the loop by calling event_base_loopbreak(); or after a specified time (timeout) with event_base_loopexit();.

Error handling is done in the failure event handler:

```php
function onError($buffer, $error, $id)
{
    // declare global variables; class variables are an option too
    global $buffers, $ctx_connections;

    // deactivate the buffer
    event_buffer_disable($buffers[$id], EV_READ | EV_WRITE);

    // free the buffer context
    event_buffer_free($buffers[$id]);

    // close the corresponding file/socket descriptor
    fclose($ctx_connections[$id]);

    // free the memory occupied by the buffer
    unset($buffers[$id], $ctx_connections[$id]);
}
```

Note the following subtlety: working with timers is only possible through a file descriptor. The example in the official documentation does not work. Here is an example of processing that runs at regular intervals:

```php
$event2 = event_new();

// use an arbitrary file descriptor of a temporary file as the event target
$tmpfile = tmpfile();
event_set($event2, $tmpfile, 0, 'onTimer', $interval);
$res = event_base_set($event2, $base);
event_add($event2, 1000000 * $interval);
```

With this code we have a timer that fires only once. If we need a "permanent" timer, then in the onTimer function we have to create a new event each time and re-register it to run after the given interval:

```php
function onTimer($tmpfile, $flag, $interval)
{
    global $base, $event2;

    if ($event2) {
        event_delete($event2);
        event_free($event2);
    }

    call_user_func('process_data', $args);

    $event2 = event_new();
    event_set($event2, $tmpfile, 0, 'onTimer', $interval);
    $res = event_base_set($event2, $base);
    event_add($event2, 1000000 * $interval);
}
```

At the end of the daemon's life we must release all previously allocated resources:

```php
// delete the event from the base monitoring context; performed for each event
event_delete($event);
// free the event's context; performed for each event
event_free($event);
// free the base event monitoring context
event_base_free($base);

// (for comparison, the setup counterparts used earlier:)
// bind an event to the base context
event_base_set($event, $base);
// add/activate event monitoring
event_add($event);
```

It should also be noted that for signal processing the handler is registered with the EV_SIGNAL flag: event_set($event, SIGHUP, EV_SIGNAL, 'onSignal', $base);

If constant signal processing is needed, the EV_PERSIST flag must be set as well. Below is a handler for the onAccept event, which fires when a new connection is accepted on a file descriptor:

```php
// handler function for a new connection
function onAccept($socket, $flag, $base) {
    global $id, $buffers, $ctx_connections;
    $id++;

    $connection = stream_socket_accept($socket);
    stream_set_blocking($connection, 0);

    // create a new buffer and tie the read/error handlers to it
    $buffer = event_buffer_new($connection, 'onRead', NULL, 'onError', $id);

    // attach the buffer to the base context
    event_buffer_base_set($buffer, $base);

    // set a timeout in case there is no signal from the source
    event_buffer_timeout_set($buffer, 30, 30);
    event_buffer_watermark_set($buffer, EV_READ, 0, 0xffffff); // set the watermark
    event_buffer_priority_set($buffer, 10); // set priority
    event_buffer_enable($buffer, EV_READ | EV_PERSIST); // enable the buffer

    $ctx_connections[$id] = $connection;
    $buffers[$id] = $buffer;
}
```

# Monitoring a Daemon

It is good practice to develop the application so that it is possible to monitor the daemon process. Key indicators for monitoring are the number of items/requests processed in a time interval, the processing speed, the average time to process a single request, and downtime.

With the help of these metrics we can understand the workload of our daemon, and if it does not cope with the load it gets, we can run another process in parallel or launch multiple child processes.

To determine these values, we need to sample them at regular intervals, for example once per second. Downtime, for instance, is calculated as the difference between the measurement interval and the total processing time within that interval.

Typically, downtime is expressed as a percentage of the measurement interval. For example, if in one second 10 cycles were executed with a total processing time of 50 ms, the downtime will be 950 ms, or 95%.

Query throughput will be 10 rps (requests per second). The average processing time of one request, the ratio of the total time spent on processing requests to the number of requests processed, will be 5 ms.

We can track these characteristics, as well as additional ones such as memory usage, queue size, number of transactions, average database access time, and so on.
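A rough sketch of that bookkeeping inside a single-process worker loop (handle_one_request() and the reporting step are hypothetical placeholders):

```php
$intervalStart = microtime(true);
$busyTime = 0.0; // seconds spent on real work in the current interval
$requests = 0;

while (true) {
    $t0 = microtime(true);
    handle_one_request();                // hypothetical unit of work
    $busyTime += microtime(true) - $t0;
    $requests++;

    if (microtime(true) - $intervalStart >= 1.0) {
        $downtimePct = (1.0 - $busyTime) * 100;                  // e.g. 95% when busy for 50 ms
        $avgMs = $requests ? ($busyTime / $requests) * 1000 : 0; // e.g. 5 ms per request
        // report $requests (rps), $downtimePct and $avgMs to the monitoring socket here
        $intervalStart = microtime(true);
        $busyTime = 0.0;
        $requests = 0;
    }
}
```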

An external monitor can obtain this data through a TCP connection or a unix socket, usually in the format of Nagios or Zabbix, depending on the monitoring system. To do this, the daemon should listen on an additional system port.

As mentioned above, if one worker process cannot handle the load, we usually run multiple processes in parallel. Starting parallel processes should be done by a parent master process that uses fork() to launch a series of child processes.

Why not run processes using exec() or system()? Because, as a rule, you must have direct control over the master and child processes. In this case, we can handle it via signals. If you use exec or system, you launch an intermediate interpreter, and the processes it starts are not direct descendants of the parent process.

Also, there is a misconception that you can daemonize a process with the nohup command. Yes, it is possible to issue a command: `nohup php mydaemon.php -master >> /var/log/daemon.log 2>> /var/log/daemon.error.log &`

But, in this case, it would be difficult to perform log rotation, as nohup "captures" the file descriptors for STDOUT/STDERR and releases them only when the command ends, which may overload the process or even the entire server. Overloading the daemon process may affect the integrity of data processing and possibly cause partial loss of data.

# Starting a Daemon

Starting the daemon must happen either automatically at boot time or with the help of a boot script.

All startup scripts usually live in the directory /etc/rc.d, with a service's startup script placed in /etc/init.d/. The service is then started with `service myapp start` or `/etc/init.d/myapp start`, depending on the type of OS.

Here is a sample script text:

```sh
#!/bin/sh
#
appdir=/usr/share/myapp/app.php
parms="--master --proc=8 --daemon"
export appdir
export parms

if [ ! -x $appdir ]; then
    exit 1
fi

if [ -x /etc/rc.d/init.d/functions ]; then
    . /etc/rc.d/init.d/functions
fi

RETVAL=0

start () {
    echo "Starting app"
    daemon /usr/bin/php $appdir $parms
    RETVAL=$?
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/mydaemon
    echo
    return $RETVAL
}

stop () {
    echo -n "Stopping $prog: "
    killproc /usr/bin/mydaemon
    RETVAL=$?
    [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/mydaemon
    echo
    return $RETVAL
}

case $1 in
    start)
        start
        ;;
    stop)
        stop
        ;;
    restart)
        stop
        start
        ;;
    status)
        status /usr/bin/mydaemon
        ;;
    *)
        echo "Usage: $0 {start|stop|restart|status}"
        ;;
esac

RETVAL=$?
exit $RETVAL
```

# Distributing Your PHP Daemon

To distribute a daemon, it is better to pack it into a single phar archive module. The assembled module should include all the necessary PHP and .ini files.

Below is a sample build script:

```php
if (is_file('app.phar')) {
    unlink('app.phar');
}

$phar = new Phar('app.phar', 0, 'app.phar');
$phar->setSignatureAlgorithm(Phar::SHA1);

$files = array();
$files['bootstrap.php'] = './bootstrap.php';

$rd = new RecursiveIteratorIterator(new RecursiveDirectoryIterator('.'));
foreach ($rd as $file) {
    if ($file->getFilename() != '..' && $file->getFilename() != '.' && $file->getFilename() != basename(__FILE__)) {
        if ($file->getPath() != './log' && $file->getPath() != './script' && $file->getPath() != '.') {
            $files[substr($file->getPath() . DIRECTORY_SEPARATOR . $file->getFilename(), 2)] =
                $file->getPath() . DIRECTORY_SEPARATOR . $file->getFilename();
        }
    }
}

if (isset($opt['version'])) {
    $version = $opt['version'];
    // ... embed $version into the build here ...
}

$phar->buildFromIterator(new ArrayIterator($files));
$phar->compressFiles(Phar::GZ);
$phar->setStub($phar->createDefaultStub('bootstrap.php'));
$phar = null;
```

Additionally, it may be advisable to make a PEAR package as a standard unix console utility that, when run with no arguments, prints its usage instructions:

```sh
# php app.phar
myDaemon version 0.1 Debug

usage:
--daemon   - run as daemon
--debug    - run in debug mode
--settings - print settings
--nofork   - do not run child processes
--check    - check dependency modules
--master   - run as master
--proc=[8] - number of child processes to run
```

# Conclusion

Creating daemons in PHP is not hard, but making them run correctly requires following the steps described in this article.

Post a comment here if you have questions or comments on how to create daemon services in PHP.
