PHP & Web Development Blogs

Conferences are always looking for speakers - it can be hard to keep track of them all and the requirements they have. I wanted to put together this quick guide to make it easy for you to apply. Make sure to apply because as Wayne Gretzky said “You miss 100% of the shots you don’t take”!!!

phpDay 2019

First we have phpDay 2019 which will take place on May 10 & 11 at Hotel San Marco in Verona, Italy. Some facts about this call for papers:
* Submission deadline: February 4, 2019
* Submit via: https://cfp.phpday.it/
* For more info on the conference: https://2019.phpday.it/
* Twitter: (@phpday)
* Speaker package includes: Full conference pass (jsDay + phpDay), speaker dinner the first night, lunch, reception and activities included in regular conference.
* For speakers remote to the area: A refund of up to €200 for travel costs (or €500 from US or extra-EU), 2 complimentary hotel nights (+1 hotel night for speakers presenting multiple talks or US/extra-EU) and taxi fare from/to the airport.
* In submission: make sure your talk title and abstract define the exact topic you want to talk about and what you hope people will learn from the session.
* Talk ideas: APIs (REST, SOAP, etc.), Architectures, Continuous Delivery, Databases, Development, Devops, Frameworks, Internals, PHP 7.x / PHP 8, Security, Testing and UI/UX.

ScotlandPHP

Next we have ScotlandPHP which will take place on November 8 & 9 at Edinburgh International Conference Centre in Edinburgh, Scotland.
* Submission deadline: April 22, 2019
* Submit via: https://cfs.scotlandphp.co.uk/
* For more info on the conference: https://conference.scotlandphp.co.uk/
* Twitter: (@scotlandphp)
* Speaker package: Full conference pass, lunch, receptions and activities included in regular conference.
* For speakers remote to the area: Complimentary airfare/travel, 2 complimentary hotel nights, and we'll pick you up and drop you off to/from the airport so you don't have to worry about it.
* Speakers will be provided with a projector, a wireless lapel microphone and a screen for their presentation (size depends on the room). Speakers should bring any equipment they need to connect to projectors (VGA). It is also suggested that you reduce your dependency on the in-house internet connection as much as possible. We will however provide HDMI and Mini DisplayPort connections for all speakers on request. If you need something different or your selected talk needs audio equipment, just let us know. We'll work it out.
* Looking for talks and workshops (November 8th).
* Talk ideas: Virtualization and environments, Javascript, Alternate PHP run-times, PHP internals, Development principles, Security, Mobile-first design, Testing (unit, functional, etc.), Version control, User Experience/Usability, Building APIs (REST, SOAP, whatever), Continuous Integration, Framework-related topics, and Professional development.

Global diversity CFP day

In 2019 there will be numerous workshops hosted around the globe encouraging and advising newbie speakers to put together their very first talk proposal and share their own individual perspective on any subject of interest to people in tech.
* Twitter: (@gdcfpday)
* Save the date: March 2, 2019
* Register here: https://www.globaldiversitycfpday.com/?utm_source=scotphp

CoderCruise

Then there is CoderCruise which will take place on August 19-23. It's a cruise that takes off from Port Canaveral, Florida and goes to the Bahamas.
* Twitter: (@codercruise)
* Submission deadline: March 3, 2019
* Submit via: https://www.papercall.io/codercruise-2019
* For more info on the conference: https://www.codercruise.com/
* This is a polyglot conference, so they are looking for speakers on a wide variety of languages (PHP, JavaScript, Java, Python, etc.) and on various tech topics.

PHP Conference Asia 2019

There is also PHP Conference Asia 2019, which will take place on June 24-25 at Microsoft Singapore.
* Submission deadline: March 8, 2019
* Submit via: https://cfp.phpconf.asia/
* For more info on the conference: https://2019.phpconf.asia/
* Twitter: (@PHPConfAsia)
* Speaker package includes: Full conference pass, lunch, receptions and activities included in regular conference. We'll pick you up and drop you off to/from the airport so you don't have to worry about it. Speakers' dinner on the first evening of the conference (24th June 2019). Transport to and from the conference venue will be included.
* For speakers remote to the area: 2 complimentary hotel nights, and grants to partially cover the airfare can be considered, on a case-by-case basis, for speakers who might have financial difficulties.
* Speakers will be provided with a projector, a wireless hand-held microphone and a screen for their presentation. Speakers should prepare their slides in 4x3 aspect ratio. Speakers should bring any equipment they need to connect to projectors (HDMI). It is also suggested that you reduce your dependency on the in-house internet connection as much as possible.
* In submission: make sure your talk title and abstract define the exact topic you want to talk about and what you hope people will learn from the session.
* Talk ideas: Virtualization and environments, Javascript, Alternate PHP run-times, PHP internals, Development principles, Security, Mobile-first design, Testing (unit, functional, etc.), Version control, User Experience/Usability, Building APIs (REST, SOAP, whatever), Continuous Integration, Framework-related topics, and Professional development.

Cascadia PHP

Another conference to apply to is Cascadia PHP, which will take place on September 19-21 at University Place Hotel & Conference Center in Portland, Oregon.
* Submission deadline: April 15, 2019
* Submit via: https://cfp.cascadiaphp.com/
* For more info on the conference: https://www.cascadiaphp.com/venue
* Twitter: (@CascadiaPHP)
* Speaker package includes: Full conference pass, lunch, receptions and activities included in regular conference. For speakers remote to the area: Complimentary airfare/travel, 2 complimentary hotel nights, and we'll pick you up and drop you off to/from the airport so you don't have to worry about it.
* Speakers will be provided with a projector, a wireless lapel microphone and a screen for their presentation (size depends on the room). Speakers should bring any equipment they need to connect to projectors (VGA). It is also suggested that you reduce your dependency on the in-house internet connection as much as possible.
* In submission: make sure your talk title and abstract define the exact topic you want to talk about and what you hope people will learn from the session.
* Talk ideas: PHP internals, Version control, Framework-related topics, Building APIs (REST, SOAP, whatever), Mobile-first design, Professional development, Testing (unit, functional, etc.), Alternate PHP run-times, Development principles, Continuous Integration, Getting involved in the PHP community, User Experience/Usability, Technology at large, Security, Connecting to Different APIs, Development Tools, Virtualization and environments, Javascript, Modern hosting practices, Language Features, Databases, Refactoring legacy applications, Running/contributing to open source projects, AI and AR, and User Groups.

Nomad PHP

Last but not least - this is an ongoing call for papers. This is perfect if you want to present from the comfort of your office, home or really wherever you are. It’s via RingCentral meetings and will be live and recorded. This is for none other than Nomad PHP.
* Twitter: (@nomadphp)
* Deadline: Anytime :D
* Talk length: 45 - 60 minutes.
* Talks should be unique to Nomad PHP and not available in video format online.
* Talks should not be recorded or made available elsewhere online for at least 3 months following your talk.
* The talk will be featured on our page and promoted via social media.
* Speakers will receive a financial stipend.
* Upon being selected we will reach out with further details.
* Talk ideas: AI & Machine Learning, APIs, Containerization, Databases, DevOps, Documentation, Frameworks, Performance, Security, Serverless, Testing, Tools, Upgrading/Modernization, and more.
* Submit here: https://www.papercall.io/nomadphp

Now that you have some information, make sure to apply to all of these options! Can't wait to see all of the awesome talks you present :D!
Create Alarm and Monitoring on Custom Memory and Disk Metrics for Amazon EC2

Today I am going to write about how to monitor custom memory and disk metrics and create alarms for them on Ubuntu.

To do this, we can use Amazon CloudWatch, which provides a flexible, scalable and reliable solution for monitoring our server.

Amazon CloudWatch allows us to collect custom metrics from our applications, which we can monitor to troubleshoot issues, spot trends, and tune operational performance. CloudWatch can display alarms, graphs, custom metric data, and statistics.

Installing the Scripts


Before we start installing the scripts for monitoring, we should install the dependent packages the scripts need on Ubuntu.

First, log in to your AWS server and, from the terminal, install the packages below:

sudo apt-get update
sudo apt-get install unzip
sudo apt-get install libwww-perl libdatetime-perl


Now Install the Monitoring Scripts


Following are the steps to download, unzip, and configure the CloudWatch monitoring scripts on our server:
1. In the terminal, change to the directory where we want to install the monitoring scripts.
2. Now run the command below to download the source:

curl https://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.2.zip -O

3. Now uncompress the downloaded archive using the following commands:

unzip CloudWatchMonitoringScripts-1.2.2.zip && \

rm CloudWatchMonitoringScripts-1.2.2.zip && \

cd aws-scripts-mon


The directory contains Perl scripts; running these scripts only reports memory and disk space utilization metrics for our Ubuntu server.
Currently, our folder will contain the following files:
mon-get-instance-stats.pl - This Perl script displays the current utilization statistics for the AWS instance on which it is executed.
mon-put-instance-data.pl - This Perl script collects system metrics on our Ubuntu server and sends them to Amazon CloudWatch.
awscreds.template - This template file contains an example of the AWS credentials configuration (access key ID and secret access key).
CloudWatchClient.pm - This Perl module simplifies calling Amazon CloudWatch from the other scripts.
LICENSE.txt - This file contains the Apache 2.0 license details.
NOTICE.txt - This file contains the copyright notice.
4. To perform the CloudWatch operations, we need to confirm that our scripts have permissions for the corresponding actions:

If an IAM role is associated with our EC2 Ubuntu instance, we need to verify that it grants permissions to perform the operations listed below:

cloudwatch:GetMetricStatistics

cloudwatch:PutMetricData

ec2:DescribeTags

cloudwatch:ListMetrics


Now we need to copy the awscreds.template file to awscreds.conf using the command below, and then update the new file with our AWS credentials:

cp awscreds.template awscreds.conf

AWSAccessKeyId = my_access_key_id

AWSSecretKey = my_secret_access_key


Now the configuration is complete.

mon-put-instance-data.pl


This Perl script collects memory, swap, and disk space utilization data from the current system and then makes a remote call to Amazon CloudWatch, reporting the collected data as custom metrics.

We can perform a simple test run, without sending data to Amazon CloudWatch, with the following command:

./mon-put-instance-data.pl --mem-util --verify --verbose


Now we are going to set up a cron job that collects the metrics on a schedule and sends them to Amazon CloudWatch.
1. First we need to edit the crontab using the command below:

 crontab -e

2. Now add the following entry, which reports memory and disk space utilization for particular paths to Amazon CloudWatch every five minutes:

*/5 * * * * ~/STORAGE/cloudwatch/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-avail --mem-used --disk-space-util --disk-space-avail --disk-space-used --disk-path=/ --disk-path=/STORAGE --from-cron


If there is an error, the scripts will write an error message in our system log.

Use of Options

--mem-used
Collects the used memory and reports it in MB to the MemoryUsed metric. This metric counts memory allocated by applications and the OS as used.
--mem-util
Collects memory utilization as a percentage and reports it to the MemoryUtilization metric; it counts memory used by applications and the OS.
--disk-space-util
Collects the currently utilized disk space and reports it as a percentage to the DiskSpaceUtilization metric for the selected disks.
--mem-avail
Collects the available memory and reports it in MB to the MemoryAvailable metric. This metric counts memory allocated by applications and the OS as used.
--disk-path=PATH
Specifies which disk path to report disk space for.
--disk-space-avail
Collects the available disk space and reports it in GB to the DiskSpaceAvailable metric for the selected disks.
--disk-space-used
Collects the used disk space and reports it in GB to the DiskSpaceUsed metric for the selected disks.

PATH can specify a mount point or any file located on the filesystem that needs to be reported.

If we want to report on multiple disks, specify a --disk-path option for each of them, like below:

--disk-path=/ --disk-path=/home
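
As a side note, since this is a PHP blog: the same kind of custom metric can also be pushed from PHP with the AWS SDK for PHP. This is only a sketch, assuming the SDK is installed (composer require aws/aws-sdk-php), that instance-role or configured credentials are available, and that the metrics go to the System/Linux namespace like the Perl scripts; the region, instance ID and value are placeholders.

require 'vendor/autoload.php';

use Aws\CloudWatch\CloudWatchClient;

$cloudWatch = new CloudWatchClient([
    'version' => 'latest',
    'region'  => 'us-east-1',          // placeholder region
]);

$cloudWatch->putMetricData([
    'Namespace'  => 'System/Linux',    // assumed namespace
    'MetricData' => [[
        'MetricName' => 'MemoryUtilization',
        'Value'      => 63.5,          // a value your own code gathered
        'Unit'       => 'Percent',
        'Dimensions' => [[
            'Name'  => 'InstanceId',
            'Value' => 'i-0123456789abcdef0',   // placeholder instance ID
        ]],
    ]],
]);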


Setting an Alarm for Custom Metrics


Before we run our Perl scripts, any alarm we create can only use the default metrics, since the custom metrics are not listed yet. You can see some of the default metrics listed in the image below:



Once the cron job has been set up, the custom metrics will appear under Linux System Metrics.

Now we are going to create the alarm for our custom metrics:
1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/home
2. In the navigation panel, click on Alarms and then Create Alarm.
3. This opens a popup with the list of CloudWatch metrics by category.
4. Click on Linux System Metrics. The custom metrics will be listed, as you can see in the pictures below.






5. Now select the metric details and click the NEXT button to move to the Define Alarm step.



6. Now define the alarm with the required fields:

Enter an alarm name to identify it, then give a description of the alarm.

Next, set the condition with the maximum limit (byte count or percentage) at which the alarm should notify us. When the condition is satisfied, the alarm will be triggered.

Provide any additional information needed for the alarm.

Define the actions to be taken when the alarm changes state.

Select or create a new topic with the email addresses that should receive notifications about the alarm state.
7. Finally, choose Create Alarm.

That's it. The alarm has now been created for our selected custom metrics.
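
If you prefer to create the alarm from code instead of the console, a rough equivalent with the AWS SDK for PHP might look like the sketch below; the alarm name, region, namespace, instance ID and SNS topic ARN are placeholders, not values from this article.

require 'vendor/autoload.php';

use Aws\CloudWatch\CloudWatchClient;

$cloudWatch = new CloudWatchClient([
    'version' => 'latest',
    'region'  => 'us-east-1',                              // placeholder region
]);

$cloudWatch->putMetricAlarm([
    'AlarmName'          => 'high-memory-utilization',     // placeholder name
    'Namespace'          => 'System/Linux',                // assumed custom metric namespace
    'MetricName'         => 'MemoryUtilization',
    'Dimensions'         => [['Name' => 'InstanceId', 'Value' => 'i-0123456789abcdef0']],
    'Statistic'          => 'Average',
    'Period'             => 300,                           // 5 minutes, matching the cron interval
    'EvaluationPeriods'  => 1,
    'Threshold'          => 80,
    'ComparisonOperator' => 'GreaterThanOrEqualToThreshold',
    'AlarmActions'       => ['arn:aws:sns:us-east-1:123456789012:my-alarm-topic'],  // placeholder ARN
]);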

Finished!

The alarm will now be listed under the selected state in our AWS panel. We can select an alarm from the list to see its details and history.
Creating a Tiny Blog Management system in Laravel 5.7

Hey There,
I am assuming you are familiar with PHP. In this post I will be using the Laravel framework to create a small blog system. I am showing very simple steps to create blogs; if you want the complete code, then please message me.
The major prerequisites for Laravel are:
* PHP version >= 5.6
* Composer should be installed in system

Create a project named tiny_blog with the following command:

composer create-project laravel/laravel --prefer-dist tiny_blog


Enter the Laravel project directory:

cd tiny_blog


Create a migration file using the following artisan command:

php artisan make:migration create_blog_table

After this command you will find a new file created in the database/migrations folder of your project. Just edit the file having 'create_blog_table' appended in its name.

Now add the following code to create the table schema inside the up() function, so the method will look like this:

public function up()
{
    Schema::create('blogs', function (Blueprint $table) {
        $table->increments('id');
        $table->integer('user_id');
        $table->string('category');
        $table->string('title');
        $table->text('description');
        $table->timestamps();
    });
}


Replace the down() method with the following snippet, so it will look like this:

public function down()
{
    Schema::dropIfExists('blogs');
}


It's time to run the migration file we have created:

php artisan migrate



After running it, the blogs table will be created in the database. Now it's time to create a form and insert data into the table.

Laravel itself provides authentication scaffolding; use the following artisan command:

php artisan make:auth


Now start Laravel:

php artisan serve


It will start the Laravel development server at http://127.0.0.1:8000.


Now if you open that URL, the basic default UI will appear, and you can see the Login & Register links in the top right of the header.

You can register and log in now; this feature is provided by the authentication module.
Now we need to create a controller to manage blogs, with the following command:

php artisan make:controller BlogController


This will create a file named BlogController.php in the app/Http/Controllers folder.

Now we also need to create a model; use the following command:

php artisan make:model Blog


This will create a file named Blog.php in the app folder.

Now in the controller we need to create a method for creating blogs, and make that method available in the routes so it can be accessed via a URL. Just edit the routes/web.php file and add the following line:

Route::get('blog/create','BlogController@createBlog');

blog/create will be the URL route that lands on BlogController's createBlog method using the GET method.

Now, before hitting this route, go to the app/Http/Controllers folder, edit the BlogController.php file, and add the createBlog method to the class as follows:

public function createBlog()
{
    return view('blog.create');
}


This code will try to load the view from /resources/views/blog/create.blade.php.

In Laravel, Blade is the template engine. As we have not created the view file yet, we need to create a blog folder inside the /resources/views/ folder, then inside the blog folder create a file create.blade.php with the following form:

@extends('layouts.app')

@section('content')
<div class="container">
@if ($errors->any())
<div class="alert alert-danger">
<ul>
@foreach ($errors->all() as $error)
<li>{{ $error }}</li>
@endforeach
</ul>
</div><br />
@endif
<div class="row">
<form method="post" action="{{url('blog/create')}}">
<div class="form-group">
<input type="hidden" value="{{csrf_token()}}" name="_token" />
<label for="title">Title:</label>
<input type="text" class="form-control" name="title"/>
</div>
<div class="form-group">
<label for="title">Category/Tags:</label>
<input type="text" class="form-control" name="category"/>
</div>
<div class="form-group">
<label for="description">Description:</label>
<textarea cols="10" rows="10" class="form-control" name="description"></textarea>
</div>
<button type="submit" class="btn btn-primary">Submit</button>
</form>
</div>
</div>
@endsection



Now we need to add an additional route to handle the POST request on the blog/create route. Just edit the routes/web.php file and add the following line at the end:

Route::post('blog/create','BlogController@saveBlog'); 


post route to handle the form post on route blog/create


Now create a method named saveBlog to save the data the user entered in the form:

public function saveBlog(Request $request)
{
    $blog = new Blog();

    $this->validate($request, [
        'title'       => 'required',
        'category'    => 'required',
        'description' => 'required'
    ]);

    $blog->createBlog($request->all());
    return redirect('blog/index')->with('success', 'New blog has been created successfully :)');
}


Notice that this method uses the Blog object, but where does it come from? To make the above code work, we need to include the model we created earlier in our controller file. Use the following line to include it before the class declaration:

use App\Blog;


The following line assumes that there is a method named createBlog in the model (app/Blog.php), but it does not actually exist yet:

$blog->createBlog($data);



So go to the file app/Blog.php, edit it, and add the following method inside the class:

public function createBlog($data)
{
    $this->user_id     = auth()->user()->id;
    $this->title       = $data['title'];
    $this->description = $data['description'];
    $this->category    = $data['category'];
    $this->save();
    return 1;
}
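
As a side note (a sketch, not part of this tutorial's code): a common alternative to a manual createBlog() method is to allow mass assignment on the model and let Eloquent create the record in one call.

// in app/Blog.php
protected $fillable = ['user_id', 'title', 'category', 'description'];

// in the controller, after validation
Blog::create([
    'user_id'     => auth()->id(),
    'title'       => $request->input('title'),
    'category'    => $request->input('category'),
    'description' => $request->input('description'),
]);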


Now the blog creation task is done. It's time to show the created entries, so just create a blog/index route in routes/web.php:

Route::get('blog/index','BlogController@showAllBlogs');


get route blog/index to show all the created blogs by current user


Now just add a method in the controller:

public function showAllBlogs()
{
    $blogs = Blog::where('user_id', auth()->user()->id)->get();

    return view('blog.index', compact('blogs'));
}



This method requires an index view in the blog folder, so create a file named index.blade.php in the /resources/views/blog/ folder with the following code:

@extends('layouts.app')

@section('content')
<div class="container">
@if(\Session::has('success'))
<div class="alert alert-success">
{{\Session::get('success')}}
</div>
@endif
<a type="button" href="{{url('blog/create')}}" class="btn btn-primary">Add New Blog</a>
<br>
<table class="table table-striped">
<thead>
<tr>
<td>ID</td>
<td>Title</td>
<td>Category</td>
<td>Description</td>
<td colspan="2">Action</td>
</tr>
</thead>
<tbody>
@foreach($blogs as $blog)
<tr>
<td>{{$blog->id}}</td>
<td>{{$blog->title}}</td>
<td>{{$blog->category}}</td>
<td>{{$blog->description}}</td>
<td>Edit</td>
<td>Delete</td>
</tr>
@endforeach
</tbody>
</table>
</div>
@endsection



Now all the code is ready, but we need to add one more piece of code to prevent access to the blog controller without authentication (without login).

Just add the following constructor method to the BlogController class:

 public function __construct()
{
$this->middleware('auth');
}


This constructor method is called first whenever a user tries to access any method of the BlogController class; the auth middleware checks whether the user is logged in and only then allows access to that method, otherwise it redirects to the login page automatically.


After that, run your code and you will be able to create and list your blogs/articles. The Edit and Delete links are not working right now; if you want those working as well, please comment here or message me. If we get multiple requests then I will definitely write a part 2 article.


Thanks very much for reading this blog. If you have any doubts about it, let me know in the comments or by messaging me.

Following is the final code for BlogController.php

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\Blog;



class BlogController extends Controller
{
public function __construct()
{
$this->middleware('auth');
}

public function createBlog()
{
return view('blog/create');
}


public function saveBlog(Request $request)
{
$blog = new Blog();

$this->validate($request, [
'title'=>'required',
'category'=>'required',
'description'=> 'required'
]);

$blog->createBlog($request->all());
return redirect('blog/index')->with('success', 'New blog has been created successfully :)');
}

public function showAllBlogs()
{
$blogs = Blog::where('user_id', auth()->user()->id)->get();

return view('blog.index',compact('blogs'));
}

}

Creating a PHP Daemon Service

What is a Daemon?

The term daemon was coined by the programmers of Project MAC at MIT. It was inspired by Maxwell's demon, the imaginary agent in charge of sorting molecules in the background. UNIX systems adopted this terminology for daemon programs.

It also refers to a character from Greek mythology that performs the tasks the gods do not want to take on themselves. As stated in the UNIX system administrator's reference, in ancient Greece the concept of a "personal daemon" was, in part, comparable to the modern concept of a "guardian angel." The BSD family of operating systems uses the image of a daemon as its logo.

Daemons are usually started at machine boot time. In the technical sense, a daemon is a process that does not have a controlling terminal, and accordingly has no user interface. Most often, the ancestor process of the daemon is init, the root process on UNIX, although many daemons are run from special rc.d scripts started from a terminal console.

Richard Stevens describes the following steps for writing daemons:
    1. Reset the file mode creation mask to 0 with the umask() function, so that access-right bits inherited from the starting process do not mask the daemon's file permissions.
    2. Call fork() and exit the parent process. This is done so that, if the process was launched as part of a group, the shell believes the group has finished; at the same time the child inherits the process group ID of the parent and gets its own process ID, which ensures it will not become a process group leader.
    3. Create a new session by calling setsid(). The process becomes the leader of a new session and of a new process group, and loses its controlling terminal.
    4. Make the root directory the current working directory, so the daemon does not keep any filesystem from being unmounted.
    5. Close all file descriptors.
    6. Redirect descriptors 0, 1 and 2 (STDIN, STDOUT and STDERR) to /dev/null or to files such as /var/log/project_name.out, because some standard library functions use these descriptors.
    7. Record the pid (process ID number) in the pid file: /var/run/projectname.pid.
    8. Correctly process the SIGTERM and SIGHUP signals: terminate by cleaning up all child processes and pid files, and/or re-read the configuration.

How to Create Daemons in PHP

To create daemons in PHP you need to use the pcntl and posix extensions. To implement fast communication within daemon scripts, it is recommended to use the libevent extension for asynchronous I/O.
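
Since these extensions are not always enabled, a small guard at the top of the script (a minimal sketch) can verify they are present before the daemon starts:

// fail fast if the required extensions are missing
foreach (array('pcntl', 'posix') as $extension) {
    if (!extension_loaded($extension)) {
        fwrite(STDERR, "The {$extension} extension is required to run this daemon\n");
        exit(1);
    }
}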

Let's take a closer look at the code to start a daemon:
umask(0);
$pid = pcntl_fork();
if ($pid < 0) {
    print('fork failed');
    exit(1);
}


After a fork, the execution of the program works as if there are two branches of the code, one for the parent process and one for the child process. What distinguishes the two processes is the value returned by the fork() call: the parent receives the process ID of the newly created child, while the child receives 0.
if ($pid > 0) {
    echo "daemon process started\n";
    exit;
}

$sid = posix_setsid();
if ($sid < 0) {
    exit(2);
}

chdir('/');
file_put_contents($pidFilename, getmypid());
run_process();

The implementation of step 5, "close all file descriptors," can be done in two ways. Closing all file descriptors is difficult to implement in PHP, so the first approach is simply not to open any file descriptors before fork(). Second, you can redirect the standard output to an error log file using ini_set(), or use buffering with ob_start() to capture output in a variable and store it in a log file:
ob_start();
var_dump($some_object);
$content = ob_get_clean();
fwrite($fd_log, $content); 


Typically, ob_start() is the start of the daemon life cycle and ob_get_clean() and fwrite() calls are the end. However, you can directly override STDIN, STDOUT and STDERR:
ini_set('error_log', $logDir.'/error.log');
fclose(STDIN); 
fclose(STDOUT);
fclose(STDERR);
$STDIN = fopen('/dev/null', 'r');
$STDOUT = fopen($logDir.'/application.log', 'ab');
$STDERR = fopen($logDir.'/application.error.log', 'ab');


Now, our process is disconnected from the terminal and the standard output is redirected to a log file.

Handling Signals

Signal processing is carried out with handlers that you can register either via the pcntl library (pcntl_signal_dispatch()) or by using libevent. In the first case, you must define a signal handler:
function sig_handler($signo)
{
    global $fd_log, $pidfile;
    switch ($signo) {
        case SIGTERM:
            fclose($fd_log);
            unlink($pidfile);
            exit;
        case SIGHUP:
            init_data();
            break;
        default:
    }
}

pcntl_signal(SIGTERM, "sig_handler");
pcntl_signal(SIGHUP, "sig_handler");


Note that signals are only processed when the process is in an active mode; signals received while the process is waiting for input or sleeping will not be handled until pcntl_signal_dispatch() is called. We can ignore a signal using the SIG_IGN flag: pcntl_signal(SIGHUP, SIG_IGN); or, if necessary, restore the default signal handler using the SIG_DFL flag: pcntl_signal(SIGHUP, SIG_DFL);
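
For example, a minimal sketch of a worker loop that dispatches pending signals on every iteration (process_task() is a hypothetical unit of work, not from this article):

pcntl_signal(SIGTERM, "sig_handler");
pcntl_signal(SIGHUP, "sig_handler");

while (true) {
    process_task();            // hypothetical unit of work
    pcntl_signal_dispatch();   // deliver any signals that arrived while we were busy
    usleep(10000);             // avoid a busy loop when there is no work
}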

Asynchronous I/O with Libevent

If you use blocking input/output, signal processing is not applied. It is recommended to use the libevent library, which provides non-blocking input/output, signal processing, and timers. Libevent provides a simple mechanism to run callback functions for events on a file descriptor: write, read, timeout, and signal.

Initially, you have to declare one or more events with a handler (callback function) and attach them to the base event context:
$base = event_base_new();
$event = event_new();
$errno = 0;
$errstr = '';
$socket = stream_socket_server("tcp://$IP:$port", $errno, $errstr);
stream_set_blocking($socket, 0);
event_set($event, $socket, EV_READ | EV_PERSIST, 'onAccept', $base);
event_base_set($event, $base);
event_add($event);


Function handlers 'onRead', 'onWrite', 'onError' must implement the processing logic. Data is written into the buffer, which is obtained in the non-blocking mode:
function onRead($buffer, $id)
{
while($read = event_buffer_read($buffer, 256)) {
var_dump($read);
}
}


The main event loop runs with the function event_base_loop($base). From a handler you can exit the loop by calling event_base_loopbreak(), or stop it after a specified time (timeout) with event_base_loopexit().
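
For reference, a minimal sketch of starting and stopping the loop:

// start dispatching events (this call blocks until the loop is stopped)
event_base_loop($base);

// from inside a callback, stop the loop immediately:
event_base_loopbreak($base);

// ...or let pending events finish and stop after a timeout given in microseconds:
event_base_loopexit($base, 1000000);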

Error handling deals with failure Events:
function onError($buffer, $error, $id)
{
global $id, $buffers, $ctx_connections;
event_buffer_disable($buffers[$id], EV_READ | EV_WRITE);
event_buffer_free($buffers[$id]);
fclose($ctx_connections[$id]);
unset($buffers[$id], $ctx_connections[$id]);
}


Note the following subtlety: working with timers is only possible through a file descriptor. The example in the official documentation does not work. Here is an example of processing that runs at regular intervals:
$event2 = event_new();
$tmpfile = tmpfile();
event_set($event2, $tmpfile, 0, 'onTimer', $interval);
$res = event_base_set($event2, $base);
event_add($event2, 1000000 * $interval);


With this code we get a working timer that fires only once. If we need a "permanent" timer, then in the onTimer function we need to create a new event each time and re-register it to run after the next interval:
function onTimer($tmpfile, $flag, $interval)
{
    global $base, $event2;

    if ($event2) {
        event_delete($event2);
        event_free($event2);
    }

    call_user_func('process_data', $args);

    $event2 = event_new();
    event_set($event2, $tmpfile, 0, 'onTimer', $interval);
    $res = event_base_set($event2, $base);
    event_add($event2, 1000000 * $interval);
}


At the end of the daemon we must release all previously allocated resources:
event_delete($event);
event_free($event);
event_base_free($base);



It should also be noted that for signal processing the handler is registered with the EV_SIGNAL flag: event_set($event, SIGHUP, EV_SIGNAL, 'onSignal', $base);

If constant signal processing is needed, it is necessary to also set the EV_PERSIST flag. Here follows a handler for the onAccept event, which occurs when a new connection is accepted on a file descriptor:
function onAccept($socket, $flag, $base) {
    global $id, $buffers, $ctx_connections;
    $id++;
    $connection = stream_socket_accept($socket);
    stream_set_blocking($connection, 0);
    $buffer = event_buffer_new($connection, 'onRead', NULL, 'onError', $id);
    event_buffer_base_set($buffer, $base);
    event_buffer_timeout_set($buffer, 30, 30);
    event_buffer_watermark_set($buffer, EV_READ, 0, 0xffffff);
    event_buffer_priority_set($buffer, 10);
    event_buffer_enable($buffer, EV_READ | EV_PERSIST);
    $ctx_connections[$id] = $connection;
    $buffers[$id] = $buffer;
}


Monitoring a Daemon

It is good practice to develop the application so that it is possible to monitor the daemon process. Key indicators for monitoring are the number of items/requests processed in a time interval, the processing speed, the average time to process a single request, and the idle time.

With the help of these metrics we can understand the workload of our daemon; if it does not cope with the load it receives, we can run another process in parallel or launch multiple child processes.

To determine these values, we need to sample them at regular intervals, for example once per second. Idle time, for instance, is calculated as the difference between the measurement interval and the total processing time within it.

Typically idle time is expressed as a percentage of the measurement interval. For example, if in one second 10 cycles were executed with a total processing time of 50ms, the idle time will be 950ms, or 95%.

Query throughput will be 10 rps (requests per second). The average processing time of one request, the ratio of the total time spent processing requests to the number of requests processed, will be 5ms.
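
A minimal sketch of how a worker loop could collect these numbers (names are illustrative; process_next_request() is a hypothetical function, not from this article):

$windowStart  = microtime(true);
$busyTime     = 0.0;
$requestCount = 0;

while (true) {
    $start = microtime(true);
    process_next_request();                  // hypothetical unit of work
    $busyTime += microtime(true) - $start;
    $requestCount++;

    // report once per second
    $elapsed = microtime(true) - $windowStart;
    if ($elapsed >= 1.0) {
        $idlePercent = (1 - $busyTime / $elapsed) * 100;                        // e.g. 95%
        $rps         = $requestCount / $elapsed;                                // e.g. 10 rps
        $avgMs       = $requestCount ? ($busyTime / $requestCount) * 1000 : 0;  // e.g. 5 ms
        // expose these values to the monitoring socket or log here

        $windowStart  = microtime(true);
        $busyTime     = 0.0;
        $requestCount = 0;
    }
}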

These characteristics can be extended with additional ones, such as memory usage, queue size, number of transactions, the average time to access the database, and so on.

An external monitor can obtain this data through a TCP connection or a unix socket, usually in a format understood by Nagios or Zabbix, depending on the monitoring system. To do this, the daemon should listen on an additional system port.

As mentioned above, if one worker process cannot handle the load, we usually run multiple processes in parallel. Parallel processes should be started by a parent master process that uses fork() to launch a series of child processes.

Why not run processes using exec() or system()? Because, as a rule, you need direct control over the master and child processes, and in that case we can manage them via signals. If you use exec or system, a shell interpreter is launched first, and it starts processes that are not direct descendants of the parent process.
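
A minimal sketch of a master process forking and supervising its workers (worker_loop() is a hypothetical function standing in for the child's processing loop):

$workerCount = 8;
$children    = array();

for ($i = 0; $i < $workerCount; $i++) {
    $pid = pcntl_fork();
    if ($pid === 0) {
        worker_loop();            // hypothetical: the child's processing loop
        exit(0);
    } elseif ($pid > 0) {
        $children[$pid] = true;   // the master keeps track of its children
    }
}

// the master waits for children; here it could also restart them or forward signals
while (count($children) > 0) {
    $pid = pcntl_waitpid(-1, $status);
    if ($pid > 0) {
        unset($children[$pid]);
    }
}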

Also, there is a misconception that you can make a daemon process with the nohup command. Yes, it is possible to issue a command like: nohup php mydaemon.php -master >> /var/log/daemon.log 2>> /var/log/daemon.error.log &

But in this case it would be difficult to perform log rotation, as nohup "captures" the file descriptors for STDOUT/STDERR and releases them only at the end of the command, which may overload the process or the entire server. Overloading the daemon process may affect the integrity of data processing and possibly cause partial loss of some data.

Starting a Daemon

Starting the daemon must happen either automatically at boot time, or with the help of a "boot script."

All startup scripts are usually in the /etc/rc.d directory, and the startup script for a service is placed in /etc/init.d/. The service is then started with "service myapp start" or "/etc/init.d/myapp start", depending on the type of OS.

Here is a sample script text:
#! /bin/sh
#
appdir=/usr/share/myapp/app.php
parms="--master --proc=8 --daemon"
export appdir
export parms

if [ ! -x $appdir ]; then
  exit 1
fi

if [ -x /etc/rc.d/init.d/functions ]; then
  . /etc/rc.d/init.d/functions
fi

RETVAL=0

start () {
  echo "Starting app"
  daemon /usr/bin/php $appdir $parms
  RETVAL=$?
  [ $RETVAL -eq 0 ] && touch /var/lock/subsys/mydaemon
  echo
  return $RETVAL
}

stop () {
  echo -n "Stopping app: "
  killproc /usr/bin/mydaemon
  RETVAL=$?
  [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/mydaemon
  echo
  return $RETVAL
}

case "$1" in
  start)
    start
    ;;
  stop)
    stop
    ;;
  restart)
    stop
    start
    ;;
  status)
    status /usr/bin/mydaemon
    ;;
  *)
    echo "Usage: $0 {start|stop|restart|status}"
    ;;
esac

RETVAL=$?
exit $RETVAL

Distributing Your PHP Daemon

To distribute a daemon it is better to pack it in a single phar archive module. The assembled module should include all the necessary PHP and .ini files.

Below is a sample build script:
if (is_file('app.phar')) {
    unlink('app.phar');
}
$phar = new Phar('app.phar', 0, 'app.phar');
$phar->compressFiles(Phar::GZ);
$phar->setSignatureAlgorithm(Phar::SHA1);
$files = array();
$files['bootstrap.php'] = './bootstrap.php';
$rd = new RecursiveIteratorIterator(new RecursiveDirectoryIterator('.'));
foreach ($rd as $file) {
    if ($file->getFilename() != '..' && $file->getFilename() != '.' && $file->getFilename() != __FILE__) {
        if ($file->getPath() != './log' && $file->getPath() != './script' && $file->getPath() != '.')
            $files[substr($file->getPath().DIRECTORY_SEPARATOR.$file->getFilename(), 2)] =
                $file->getPath().DIRECTORY_SEPARATOR.$file->getFilename();
    }
}
if (isset($opt['version'])) {
    $version = $opt['version'];
}
$phar->buildFromIterator(new ArrayIterator($files));
$phar->setStub($phar->createDefaultStub('bootstrap.php'));
$phar = null;

Additionally, it may be advisable to package the daemon as a standard unix console utility that, when run with no arguments, prints its own usage instructions:

# php app.phar
myDaemon version 0.1 Debug
usage:
--daemon – run as daemon
--debug – run in debug mode
--settings – print settings
--nofork – not run child processes
--check – check dependency modules
--master – run as master
--proc=[8] – run child processes


Conclusion

Creating daemons in PHP is not hard, but making them run correctly requires following the steps described in this article.

Post a comment here if you have questions or comments on how to create daemon services in PHP.
Welcome to PHP 7.1

In case you are living under a rock, the latest version of PHP was released last week. PHP developers around the world began rebuilding their development containers with it so they can run their tests. Now it’s your turn. If you haven’t already installed it, you can download it here: http://php.net/downloads.php. Grab it, get it running in your development environment, and run those unit tests. If all goes well, you can begin planning your staged deployment to production.

If you need a quick start guide to get you going, our good friend Mr Colin O’Dell has just the thing for you “Installing PHP 7.1”. It’ll get you up and going quickly on PHP 7.1.

What’s the big deal about PHP 7.1? I am so glad you asked. Here are the major new features released in PHP 7.1 (a short code sketch follows the list).

* Nullable types
* Void return type
* Iterable pseudo-type
* Class constant visibility modifiers
* Square bracket syntax for list() and the ability to specify keys in list()
* Catching multiple exceptions types

Now if you want a quick intro to several of these new features, check out our “RFCs of the Future” playlist on YouTube. In it, I talk about 4 of the new features.

Oh and while you are watching things download & compile, why not take the time to give a shoutout to all the core contributors, and a special thank you to Davey Shafik and Joe Watkins, the PHP 7.1 release managers.

Cheers!
=C=
