PHP & Web Development Blogs

6371 views · 4 years ago


At Nomad PHP our goal is to empower developers in building a habit of continuous learning - and that means we have a habit of continuous improvement ourselves. Here are just some of the things we've done this year (with much more coming down the road)!

Website Redesign


We've refreshed the look and feel of Nomad PHP to better emphasize the goal of Nomad PHP - to help developers build a habit of continuous learning and grow their careers. This includes numerous usability enhancements as well as a focus on our new book library, blogs, and certification in addition to virtual meetups, workshops, conferences, and on-demand videos.

Free Meetups


As technology has advanced, more and more meetups and usergroups are able to stream their local usergroup meetings.

As our goal has always been to make technology accessible, we are proud to provide free streaming technology for local user groups, and share local user group meetings on our live virtual meetup schedule.

Student and Professional subscribers will continue to have access to our monthly conference level Pro Talks, hands on virtual workshops, and live conference streams in addition to streams by local user groups.

You can find a list of all upcoming talks (free and Pro) on our Live Meetings Page, or add your user group stream here.

Free Subscriber Tier


As our mission has evolved from being the meetup for developers without a meetup group to building an inclusive community of PHP developers where you can network, grow your skills, and share your knowledge with others - we are excited to announce our new Free Tier.

With a free Nomad PHP account you can:

* Stream free meetups

* Watch ad-supported videos in SD

* Read PHP blogs and write your own

* Network with other PHP developers

Create your free developer account to get started.

New Student Tier


To provide the best value, we've also restructured our plans to provide professional online meetings, workshops, and conference streaming to our Student Tier. This will allow students and new developers the chance to learn from the best speakers and top practitioners and obtain entry-level certifications at the best price possible.

However, with the addition of PHP Books and Magazines, and in order to provide the best value while keeping the Student plan affordable, new Student subscribers will not have access to the PHP Book and Magazine Library, or advanced certifications. These will now require a professional plan.

Student plans start at $12.95/mo

PHP Books and Magazines


We're excited to announce that we have expanded our PHP library. In addition to the ability to read the latest issues of php[architect] magazine, Professional subscribers now have access to read PHP and web development books online.

We're excited to announce the availability of Chris Hartjes' book The Grumpy Programmer's Guide to Testing PHP Applications, as well as several titles from Notes for Professionals, and Undisturbed REST: A Guide to Designing the Perfect API.

More titles including exclusive titles will be made available for online reading soon.

You can view our entire PHP Library here.

Blog Updates


We've received a lot of feedback on the blog writing process, and have upgraded several aspects of our blogging software. This includes the ability to save drafts prior to publishing, and the ability to upload, edit, and crop images and videos. We've also added some bug fixes for editing and writing code.

We're also excited to share that members with Student and Professional plans can now have their own VLOG (video blog), with the ability to screencast or record video from their webcam within the blog.

To see the most recent blog posts, or write your own, visit the Nomad PHP Blogs.

Certification Updates


We've updated our certifications for better usability and readability. We've also reworked some of the code samples and questions in our Level 1 PHP Certification exam.

You can find our available exams, test your skills, and obtain your Nomad PHP certification here.

Team Management


Our new team manager allows you to easily add or remove team members with your Nomad PHP team subscription. You'll also find real time metrics on how your team is using Nomad PHP, who on your team is investing in their growth and streaming meetups, watching videos, reading books, and earning certifications, and the overall content value consumed by your team.

The Team Manager is available to new teams, and will be made available to existing team managers over the next several weeks.

2020 Roadmap


There are still plenty of great things coming in 2020. Here are the items at the top of our list:

* Mobile app for offline viewing

* Desktop app for offline viewing

* Nomad PHP member only books

* PHP Level 2 Certification

* Interactive tutorials

* Better video support in blogs

* Ability to schedule blog posts

* Meeting software for local usergroups

* Improved plan management for subscribers

Of course, what's most important to us is what's most important to you. Leave what you want to see on Nomad PHP in the comments below and if we're able to, we'll get it added to our roadmap!
40372 views · 5 years ago
Securing PHP RESTful APIs using Firebase JWT Library

Hello guys,

In our last blog post we created RESTful APIs, but we did not cover security or authentication. The login API can be public, but the APIs called after login should be authenticated with a secure token; one option is JWT. So here are the steps to create and use a JWT token in the API we have already built.


Now it's time to implement JWT authentication in our API. These are the steps to add it to the APIs we already created.


Step 1: Install and include the Firebase JWT (JSON Web Token) library in the project with the following Composer command:


 composer require firebase/php-jwt 


Include Composer's autoloader:
require_once('vendor/autoload.php');


Import the JWT class with a use statement:
 use \Firebase\JWT\JWT; 



Step 2: Create a JWT on the server side using the Firebase JWT library's encode method in the login action, and return it to the client.


Define a private property named Secret_Key in the class, and add a generateToken method that builds and signs the payload:

 private $Secret_Key = "*$%43MVKJTKMN$#";

public function generateToken($UiD)
{
    $payload = array(
        'iss' => $_SERVER['HOST_NAME'],
        'exp' => time() + 600, // token is valid for 10 minutes
        'uId' => $UiD
    );
    try {
        $jwt = JWT::encode($payload, $this->Secret_Key, 'HS256');
        $res = array("status" => true, "Token" => $jwt);
    } catch (UnexpectedValueException $e) {
        $res = array("status" => false, "Error" => $e->getMessage());
    }
    return $res;
}


In the login action, once the user has logged in successfully, replace the login success code (which sets status, _data_ and message) with the following:

$return['status'] = 1;
$return['_data_'] = $UserData[0];
$return['message'] = 'User Logged in Successfully.';

$jwt = $obj->generateToken($UserData[0]['id']);
if ($jwt['status'] == true) {
    $return['JWT'] = $jwt['Token'];
} else {
    unset($return['_data_']);
    $return['status'] = 0;
    $return['message'] = 'Error:' . $jwt['Error'];
}





Step 3: Every request made after login should now include the JWT token in its POST data (we could also receive it via GET or an Authorization header, but here we receive it in POST).



Now, after a successful login, you will get the JWT token in the response. Just add that token to every POST request for the after-login API calls. We will test this using Postman; see the first screenshot below to confirm the JWT token is returned in the login API response.

JWT DEMO LOGIN API RESPONSE
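Since the token could also be sent in an Authorization header (as mentioned in Step 3), here is a minimal, hedged sketch of reading it that way on the server; the helper name and the "Bearer" format are assumptions, not part of the original tutorial:

// Hypothetical helper: extract a JWT sent as "Authorization: Bearer <token>".
function getBearerToken()
{
    $auth = isset($_SERVER['HTTP_AUTHORIZATION']) ? $_SERVER['HTTP_AUTHORIZATION'] : '';
    if ($auth === '' && function_exists('getallheaders')) {
        $headers = getallheaders();
        $auth = isset($headers['Authorization']) ? $headers['Authorization'] : '';
    }
    if (preg_match('/Bearer\s+(\S+)/', $auth, $matches)) {
        return $matches[1]; // the raw JWT string
    }
    return null;
}

// Usage instead of $_POST['JWT']:
// $resp = $obj->Authenticate(getBearerToken(), $_POST['Uid']);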


Step 4: After receiving the JWT in every after-login API call, we need to verify that the token is valid using the JWT decode method. For example, UserBlogs is an after-login API, so to verify the token we create an Authenticate method in the class like the following:


 public function Authenticate($JWT, $Current_User_id)
{
    try {
        $decoded = JWT::decode($JWT, $this->Secret_Key, array('HS256'));
        $payload = json_decode(json_encode($decoded), true);

        if ($payload['uId'] == $Current_User_id) {
            $res = array("status" => true);
        } else {
            $res = array("status" => false, "Error" => "Invalid Token or Token Expired, So Please login Again!");
        }
    } catch (UnexpectedValueException $e) {
        $res = array("status" => false, "Error" => $e->getMessage());
    }
    return $res;
}


Step 5: Check the response returned by the Authenticate method in the UserBlogs action of the API, and replace the UserBlogs action's inner content with the following code:


 if (isset($_POST['Uid'])) {

    $resp = $obj->Authenticate($_POST['JWT'], $_POST['Uid']);
    if ($resp['status'] == false) {
        $return['status'] = 0;
        $return['message'] = 'Error:' . $resp['Error'];
    } else {
        $blogs = $obj->get_all_blogs($_POST['Uid']);
        if (count($blogs) > 0) {
            $return['status'] = 1;
            $return['_data_'] = $blogs;
            $return['message'] = 'Success.';
        } else {
            $return['status'] = 0;
            $return['message'] = 'Error:Invalid UserId!';
        }
    }
} else {
    $return['status'] = 0;
    $return['message'] = 'Error:User Id not provided!';
}


Great, it's time to check out the UserBlogs API; see the screenshot below. Remember, we need to put the JWT token in a POST parameter, since we already received that value in the login API call.

JWT DEMO Authentication in userBlogs API Call

Now, to verify that the token expires after the given time (10 minutes after generation/login time), call the same API with the same token after 10 minutes. You will see that it no longer returns any data; instead it returns status false with the following message:


JWT DEMO Authentication in userBlogs API Call


To explore this further, try modifying the Uid value while keeping the same token: you will get another authentication error. Similarly, if you tamper with the JWT token itself, you will not get the desired result and will again get an authentication error.

Thanks for reading. If you want the complete code of this file, here it is:
<?php 
header("Content-Type: application/json; charset=UTF-8");
require_once('vendor/autoload.php');
use \Firebase\JWT\JWT;

class DBClass {

private $host = "localhost";
private $username = "root";
private $password = ""; private $database = "news";

public $connection;

private $Secret_Key="*$%43MVKJTKMN$#";
public function connect(){

$this->connection = null;

try{
$this->connection = new PDO("mysql:host=" . $this->host . ";dbname=" . $this->database, $this->username, $this->password);
$this->connection->exec("set names utf8");
}catch(PDOException $exception){
echo "Error: " . $exception->getMessage();
}

return $this->connection;
}

public function login($email,$password){

if($this->connection==null)
{
$this->connect();
}

$query = "SELECT id,name,email,createdAt,updatedAt from users where email= ? and password= ?";
$stmt = $this->connection->prepare($query);
$stmt->execute(array($email,md5($password)));
$ret= $stmt->fetchAll(PDO::FETCH_ASSOC);
return $ret;
}

public function get_all_blogs($Uid){

if($this->connection==null)
{
$this->connect();
}

$query = "SELECT b.*,u.id as Uid,u.email as Uemail,u.name as Uname from blogs b join users u on u.id=b.user_id where b.user_id= ?";
$stmt = $this->connection->prepare($query);
$stmt->execute(array($Uid));
$ret= $stmt->fetchAll(PDO::FETCH_ASSOC);
return $ret;
}

public function response($array)
{
echo json_encode($array);
exit;
}

public function generateToken($UiD)
{
$payload = array(
'iss' => $_SERVER['HOST_NAME'],
'exp' => time()+600, 'uId' => $UiD
);
try{
$jwt = JWT::encode($payload, $this->Secret_Key,'HS256'); $res=array("status"=>true,"Token"=>$jwt);
}catch (UnexpectedValueException $e) {
$res=array("status"=>false,"Error"=>$e->getMessage());
}
return $res;
}

public function Authenticate($JWT,$Current_User_id)
{
try {
$decoded = JWT::decode($JWT,$this->Secret_Key, array('HS256'));
$payload = json_decode(json_encode($decoded),true);

if($payload['uId'] == $Current_User_id) {
$res=array("status"=>true);
}else{
$res=array("status"=>false,"Error"=>"Invalid Token or Token Expired, So Please login Again!");
}
}catch (UnexpectedValueException $e) {
$res=array("status"=>false,"Error"=>$e->getMessage());
}
return $res;

}
}

$return=array();
$obj = new DBClass();
if(isset($_GET['action']) && $_GET['action']!='')
{
if($_GET['action']=="login")
{
if(isset($_POST['email']) && isset($_POST['password']))
{
$UserData=$obj->login($_POST['email'],$_POST['password']);
if(count($UserData)>0)
{
$return['status']=1;
$return['_data_']=$UserData[0];
$return['message']='User Logged in Successfully.';

$jwt=$obj->generateToken($UserData[0]['id']);
if($jwt['status']==true)
{
$return['JWT']=$jwt['Token'];
}
else{
unset($return['_data_']);
$return['status']=0;
$return['message']='Error:'.$jwt['Error'];
}

}
else
{
$return['status']=0;
$return['message']='Error:Invalid Email or Password!';
}
}
else
{
$return['status']=0;
$return['message']='Error:Email or Password not provided!';
}
}
elseif($_GET['action']=="UserBlogs")
{
if(isset($_POST['Uid']))
{

$resp=$obj->Authenticate($_POST['JWT'],$_POST['Uid']);
if($resp['status']==false)
{
$return['status']=0;
$return['message']='Error:'.$resp['Error'];
}
else{
$blogs=$obj->get_all_blogs($_POST['Uid']);
if(count($blogs)>0)
{
$return['status']=1;
$return['_data_']=$blogs;
$return['message']='Success.';
}
else
{
$return['status']=0;
$return['message']='Error:Invalid UserId!';
}
}
}
else
{
$return['status']=0;
$return['message']='Error:User Id not provided!';
}
}
}
else
{
$return['status']=0;
$return['message']='Error:Action not provided!';
}
$obj->response($return);
$obj->connection=null;
?>

15511 views · 6 years ago
Creating a PHP Daemon Service

What is a Daemon?

The term daemon was coined by the programmers of Project MAC at MIT. It was inspired by Maxwell's demon, which sorts molecules in the background. UNIX systems adopted this terminology for daemon programs.

It also refers to a character from Greek mythology that performs the tasks the gods do not want to take on themselves. As stated in the "UNIX System Administration Handbook", in ancient Greece the concept of a "personal daemon" was, in part, comparable to the modern concept of a "guardian angel." The BSD family of operating systems uses the image of a daemon as its logo.

Daemons are usually started at machine boot time. In the technical sense, a daemon is a process that has no controlling terminal, and accordingly no user interface. Most often, the ancestor of a daemon is init, the root process on UNIX, although many daemons are started from special rc.d scripts run from a terminal console.

W. Richard Stevens describes the following steps for writing daemons:
    1. Reset the file mode creation mask to 0 with umask(), so that the mask inherited from the parent process does not restrict the access rights of files the daemon creates.
    2. Call fork() and exit the parent process. If the process was launched from a shell, this makes the shell believe the command has finished; the child inherits the process group ID of the parent but gets its own process ID, which guarantees it will not be a process group leader.
    3. Create a new session by calling setsid(). The process becomes the leader of a new session and of a new process group, and loses its controlling terminal.
    4. Change the current working directory to the root directory, so the daemon does not keep any mounted filesystem busy.
    5. Close all file descriptors.
    6. Redirect descriptors 0, 1 and 2 (STDIN, STDOUT and STDERR) to /dev/null or to files such as /var/log/project_name.out, because some standard library functions use these descriptors.
    7. Record the pid (process ID number) in a pid file, e.g. /var/run/projectname.pid.
    8. Handle the SIGTERM and SIGHUP signals correctly: on SIGTERM, shut down by destroying all child processes and pid files; on SIGHUP, re-read the configuration.

How to Create Daemons in PHP

To create daemons in PHP you need the pcntl and posix extensions. To implement fast communication within daemon scripts, it is recommended to use the libevent extension for asynchronous I/O.

Let's take a closer look at the code to start a daemon:
umask(0);
$pid = pcntl_fork();
if ($pid < 0) {
    print('fork failed');
    exit(1);
}


After a fork, the program executes as if there are two branches of the code, one for the parent process and one for the child process. What distinguishes the two processes is the value returned by the fork() call: the parent receives the process ID of the newly created child, while the child receives 0.
if ($pid > 0) {
    echo "daemon process started\n";
    exit;
}

$sid = posix_setsid();
if ($sid < 0) {
    exit(2);
}

chdir('/');
file_put_contents($pidFilename, getmypid());
run_process();


The implementation of step 5, "close all file descriptors", can be approached in two ways. Closing every file descriptor is difficult to do reliably in PHP, so the first approach is simply to avoid opening file descriptors before fork(). The second is to redirect standard output to an error log file using ini_set(), or to buffer output into a variable with ob_start() and write it to a log file:
ob_start();
var_dump($some_object);
$content = ob_get_clean();
fwrite($fd_log, $content); 


Typically, ob_start() is the start of the daemon life cycle and ob_get_clean() and fwrite() calls are the end. However, you can directly override STDIN, STDOUT and STDERR:
ini_set('error_log', $logDir.'/error.log');
fclose(STDIN); 
fclose(STDOUT);
fclose(STDERR);
$STDIN = fopen('/dev/null', 'r');
$STDOUT = fopen($logDir.'/application.log', 'ab');
$STDERR = fopen($logDir.'/application.error.log', 'ab');


Now, our process is disconnected from the terminal and the standard output is redirected to a log file.

Handling Signals

Signal processing is carried out with handlers, either via the pcntl extension (pcntl_signal_dispatch()) or by using libevent. In the first case, you must define a signal handler:
function sig_handler($signo)
{
    global $fd_log, $pidfile;
    switch ($signo) {
        case SIGTERM:
            fclose($fd_log);
            unlink($pidfile);
            exit;
        case SIGHUP:
            init_data();
            break;
        default:
    }
}

pcntl_signal(SIGTERM, "sig_handler");
pcntl_signal(SIGHUP, "sig_handler");


Note that signals are only processed while the process is active; signals received while the process is waiting for input or sleeping will not be handled until you call pcntl_signal_dispatch(). We can ignore a signal using the flag SIG_IGN: pcntl_signal(SIGHUP, SIG_IGN); or, if necessary, restore the default handler using the flag SIG_DFL: pcntl_signal(SIGHUP, SIG_DFL);

Asynchronous I/O with Libevent

If you use blocking input/output, signal processing does not apply. It is recommended to use the libevent library, which provides non-blocking input/output as well as signal processing and timers. Libevent provides a simple mechanism to fire callback functions for events on a file descriptor: write, read, timeout, signal.

Initially, you have to declare one or more events with a handler (callback function) and attach them to the base event context:
$base = event_base_new();
$event = event_new();
$errno = 0;
$errstr = '';
$socket = stream_socket_server("tcp://$IP:$port", $errno, $errstr);
stream_set_blocking($socket, 0);
event_set($event, $socket, EV_READ | EV_PERSIST, 'onAccept', $base);


The handler functions 'onRead', 'onWrite' and 'onError' must implement the processing logic. Incoming data is placed into the buffer and can be read from it in non-blocking mode:
function onRead($buffer, $id)
{
while($read = event_buffer_read($buffer, 256)) {
var_dump($read);
}
}


The main event loop runs with the function event_base_loop($base). You can exit the loop from within a handler by calling event_base_loopbreak(), or after a specified time (timeout) with event_base_loopexit().

Errors are handled in the failure event callback:
function onError($buffer, $error, $id)
{
global $id, $buffers, $ctx_connections;
event_buffer_disable($buffers[$id], EV_READ | EV_WRITE);
event_buffer_free($buffers[$id]);
fclose($ctx_connections[$id]);
unset($buffers[$id], $ctx_connections[$id]);
}


One subtlety should be noted: working with timers is only possible through a file descriptor, and the example in the official documentation does not work. Here is an example of processing that runs at regular intervals:
$event2 = event_new();
$tmpfile = tmpfile();
event_set($event2, $tmpfile, 0, 'onTimer', $interval);
$res = event_base_set($event2, $base);
event_add($event2, 1000000 * $interval);


With this code we have a working timer that fires only once. If we need a "permanent" timer, the onTimer function must create a new event each time and re-register it to run after the next interval:
function onTimer($tmpfile, $flag, $interval)
{
    global $base, $event2;

    if ($event2) {
        event_delete($event2);
        event_free($event2);
    }

    call_user_func('process_data', $args);

    $event2 = event_new();
    event_set($event2, $tmpfile, 0, 'onTimer', $interval);
    $res = event_base_set($event2, $base);
    event_add($event2, 1000000 * $interval);
}


At the end of the daemon we must release all previously allocated resources:
event_delete($event);
event_free($event);
event_base_free($base);

Note that for signal processing, the handler's event must be registered with the EV_SIGNAL flag:
event_set($event, SIGHUP, EV_SIGNAL, 'onSignal', $base);
event_base_set($event, $base);
event_add($event);

If constant signal processing is needed, add the EV_PERSIST flag as well. Below is a handler for the onAccept event, which fires when a new connection is accepted on a file descriptor:
function onAccept($socket, $flag, $base) {
    global $id, $buffers, $ctx_connections;
    $id++;
    $connection = stream_socket_accept($socket);
    stream_set_blocking($connection, 0);
    $buffer = event_buffer_new($connection, 'onRead', NULL, 'onError', $id);
    event_buffer_base_set($buffer, $base);
    event_buffer_timeout_set($buffer, 30, 30);
    event_buffer_watermark_set($buffer, EV_READ, 0, 0xffffff);
    event_buffer_priority_set($buffer, 10);
    event_buffer_enable($buffer, EV_READ | EV_PERSIST);
    $ctx_connections[$id] = $connection;
    $buffers[$id] = $buffer;
}


Monitoring a Daemon

It is good practice to develop the application so that the daemon process can be monitored. Key indicators to track are the number of items/requests processed in a time interval, the processing rate, the average time to process a single request, and the idle (downtime) percentage.

These metrics help us understand the workload of our daemon; if it cannot cope with the load, you can run another process in parallel, or launch multiple child processes.

To determine these values, sample them at regular intervals, such as once per second. For example, downtime is calculated as the difference between the measurement interval and the total time the daemon spent working.

Typically downtime is expressed as a percentage of the measurement interval. For example, if 10 cycles with a total processing time of 50ms were executed in one second, the idle time is 950ms, or 95%.

Query throughput will be 10 rps (requests per second). The average processing time of one request, the ratio of the total time spent processing requests to the number of requests processed, will be 5ms.
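To make the arithmetic above concrete, here is a rough sketch (not from the original article; the work function is a placeholder) of how a daemon's main loop could collect these numbers once per second:

$windowStart = microtime(true);
$requests = 0;
$busyTime = 0.0;

while (true) {
    $start = microtime(true);
    // process_one_request();  // placeholder for the real unit of work
    $busyTime += microtime(true) - $start;
    $requests++;

    $elapsed = microtime(true) - $windowStart;
    if ($elapsed >= 1.0) {
        $idlePercent = 100 * (1 - $busyTime / $elapsed);              // e.g. 95% for 50ms of work
        $rps = $requests / $elapsed;                                  // e.g. 10 requests per second
        $avgMs = $requests > 0 ? ($busyTime / $requests) * 1000 : 0;  // e.g. 5ms per request
        // log or expose the values, then reset the measurement window
        $windowStart = microtime(true);
        $requests = 0;
        $busyTime = 0.0;
    }
}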

These characteristics can be supplemented with additional metrics such as memory usage, queue size, number of transactions, average time to access the database, and so on.

An external monitor can obtain this data through a TCP connection or a unix socket, usually in a format understood by Nagios or Zabbix, depending on the monitoring system. To do this, the daemon should listen on an additional port or socket.
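As a hedged illustration of such an endpoint (the socket path, function name and output format are assumptions, not from the article), the daemon could expose its metrics over a unix socket like this:

$server = stream_socket_server('unix:///var/run/mydaemon.sock', $errno, $errstr);
if ($server === false) {
    die("Cannot open status socket: $errstr");
}
stream_set_blocking($server, 0);

// Call this periodically from the daemon's main loop.
function reportStatus($server, array $metrics)
{
    $client = @stream_socket_accept($server, 0); // non-blocking accept
    if ($client) {
        fwrite($client, json_encode($metrics) . "\n");
        fclose($client);
    }
}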

As mentioned above, if one worker process cannot handle the load, we usually run multiple processes in parallel. Parallel processes should be started by a parent master process that uses fork() to launch a series of child processes.

Why not run processes using exec() or system()? Because, as a rule, you need direct control over the master and child processes, and in that case you can manage them via signals. If you use exec or system, the initial interpreter is launched first, and it then starts processes that are not direct descendants of the parent process.
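Here is a minimal sketch (the worker count and the work itself are placeholders, not from the article) of a master process forking and supervising child workers as described:

$numWorkers = 4;
$workers = array();

for ($i = 0; $i < $numWorkers; $i++) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die('fork failed');
    } elseif ($pid === 0) {
        // Child process: do the actual work, then exit.
        // run_process();
        exit(0);
    }
    $workers[$pid] = true; // master remembers its direct children
}

// Master waits for children; here it could also restart them or forward signals.
while (!empty($workers)) {
    $pid = pcntl_wait($status);
    if ($pid > 0) {
        unset($workers[$pid]);
    }
}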

There is also a misconception that you can turn a process into a daemon simply with the nohup command. Yes, it is possible to issue a command such as: nohup php mydaemon.php -master >> /var/log/daemon.log 2>> /var/log/daemon.error.log &

But in this case it would be difficult to perform log rotation, because nohup "captures" the file descriptors for STDOUT/STDERR and releases them only when the command ends, which may overload the process or the entire server. Overloading a daemon process may affect the integrity of data processing and possibly cause partial loss of data.

Starting a Daemon

Starting the daemon must happen either automatically at boot time, or with the help of a "boot script."

Startup scripts are usually located in /etc/rc.d, with the script for the service placed in /etc/init.d/. It is then run with service myapp start or /etc/init.d/myapp start, depending on the type of OS.

Here is a sample script text:
#! /bin/sh
#
appdir=/usr/share/myapp/app.php
parms="--master --proc=8 --daemon"
export appdir
export parms

if [ ! -x $appdir ]; then
exit 1
fi

if [ -x /etc/rc.d/init.d/functions ]; then
. /etc/rc.d/init.d/functions
fi

RETVAL=0

start () {
echo "Starting app"
daemon /usr/bin/php $appdir $parms
RETVAL=$?
[ $RETVAL -eq 0 ] && touch /var/lock/subsys/mydaemon
echo
return $RETVAL
}

stop () {
echo -n "Stopping app: "
killproc /usr/bin/mydaemon
RETVAL=$?
[ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/mydaemon
echo
return $RETVAL
}

case "$1" in
start)
start
;;
stop)
stop
;;
restart)
stop
start
;;
status)
status /usr/bin/mydaemon
;;
*)
echo "Usage: $0 {start|stop|restart|status}"
;;
esac

RETVAL=$?
exit $RETVAL


Distributing Your PHP Daemon

To distribute a daemon it is better to pack it in a single phar archive module. The assembled module should include all the necessary PHP and .ini files.

Below is a sample build script:
<?php
if (is_file('app.phar')) {
    unlink('app.phar');
}
$phar = new Phar('app.phar', 0, 'app.phar');
$phar->compressFiles(Phar::GZ);
$phar->setSignatureAlgorithm(Phar::SHA1);
$files = array();
$files['bootstrap.php'] = './bootstrap.php';
$rd = new RecursiveIteratorIterator(new RecursiveDirectoryIterator('.'));
foreach ($rd as $file) {
    if ($file->getFilename() != '..' && $file->getFilename() != '.' && $file->getFilename() != __FILE__) {
        if ($file->getPath() != './log' && $file->getPath() != './script' && $file->getPath() != '.')
            $files[substr($file->getPath().DIRECTORY_SEPARATOR.$file->getFilename(), 2)] =
                $file->getPath().DIRECTORY_SEPARATOR.$file->getFilename();
    }
}
if (isset($opt['version'])) {
    $version = $opt['version'];
}
$phar->buildFromIterator(new ArrayIterator($files));
$phar->setStub($phar->createDefaultStub('bootstrap.php'));
$phar = null;

Additionally, it may be advisable to make it a PEAR package, a standard unix console utility that, when run with no arguments, prints its own usage instructions:
#php app.phar
myDaemon version 0.1 Debug
usage:
--daemon – run as daemon
--debug – run in debug mode
--settings – print settings
--nofork – do not fork child processes
--check – check dependency modules
--master – run as master
--proc=[8] – run child processes


Conclusion

Creating daemons in PHP is not hard, but to make them run correctly it is important to follow the steps described in this article.

Post a comment here if you have questions or comments on how to create daemon services in PHP.
19356 views · 6 years ago
Creating a Virus with PHP

In his talk, “Writing Viruses for Fun, Not Profit,” Ben Dechrai (after making the viewer take a pledge to only use this knowledge for good and not evil) walks through how many viruses operate, and just how easy it is to build your own self-replicating virus in PHP.

The danger of many of these viruses according to Ben is that the most dangerous viruses often escape detection by not looking like a virus. Instead they encrypt their code to hide their true intent, while also constantly adapting and evolving.

Perhaps even more dangerously, they act like they’re benign and don’t actually do anything - oftentimes lying dormant until called upon by the malicious actor.

Creating the Virus

What’s scary is just how simple it was for Ben to create such a virus, one that mutated ever so slightly as it infected every other file on the server. Opening up unlimited possibilities from scraping customer data, to DDOS attacks, to simply hijacking your domain.



But those attacks are just the start as Ben demonstrated how easy it is to write new files, delete files, eval() and execute foreign code - which could even be extended to accessing the underlying server itself if shell_exec() is enabled.

To add to the problem, Ben shares how challenging it can be to identify malicious code on your server, as many of these attacks are far more sophisticated than the virus he created in a matter of minutes - hiding themselves and often appearing as if they are part of the original source code.

Deploying the Virus

To drive his point home, Ben demonstrates how even seemingly secure systems can be vulnerable - as all it takes is one tiny misstep within your application.

He highlights this by building what should be a secure photo gallery - one that checks the extension and mime-type of the image - and even stores it outside of the public directory. He goes even farther by adding additional sanity checks with a PHP script that then renders the image.

After walking through the code and its security features, he then downloads a simple image from the internet. Opening his editor, he quickly injects the virus (written in PHP) into the image and uploads it, passing all of the server checks.

Surely, since it passed these checks the system is secure, right? Ben loads the gallery to proudly show off the image - which is just that… an image, with nothing special or out of the ordinary.
Except that when he opens the image gallery files, each has been infected with the malicious code.

The culprit that allowed Ben to hijack an entire system, execute foreign code, create new files, and even hijack the entire site? When displaying the image, the file was included using PHP’s include() function, instead of pulling in the data using file_get_contents() and echoing it out.

Such a simple mistake would have given Ben, had he been a malicious hacker, complete access to all of the files on the system.
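To make the difference concrete, here is a minimal illustrative sketch (the file and variable names are made up, not taken from Ben's gallery code) of the vulnerable pattern versus the safer one:

// Vulnerable: include() executes any PHP code embedded in the uploaded "image".
include $storageDir . '/' . $imageName;

// Safer: treat the file purely as data and send it to the browser unchanged.
header('Content-Type: image/jpeg');
echo file_get_contents($storageDir . '/' . $imageName);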

Protecting Yourself

Security always exists in layers - and this could have been prevented by including a few more layers, such as using an open source library to rewrite the image, reviewing the image source before pulling it in, or again not giving it executable access by using the PHP include() function.

But what’s terrifying is how simple it is to hijack a site, how easy it is to get access to your system and private data, and how easy it is to overlook security vulnerabilities - especially with open source tooling and those that take plugins.

As Ben explains, sometimes the core code itself is really secure, but then you get two different plugins that, when used together, accidentally create a security vulnerability. That scenario is one of the most challenging, as you can audit each plugin individually and still not know you’re opening up your system to malicious actors.

This is why it's not just important to stay up to date on the latest security measures and best practices, but to be constantly thinking like a hacker and testing your code for vulnerabilities.

Learn More

You can watch the full video to learn more about how viruses operate, how to quickly build your own PHP virus (but you must promise to use it for good), and what to watch for in order to protect yourself, your customers, and your architecture.
12949 views · 6 years ago
Five Composer Tips Every PHP Developer Should Know

Composer is the way that PHP developers manage libraries and their dependencies. Previously, developers mainly stuck to existing frameworks. If you were a Symfony developer, you used Symfony and libraries built around it. You didn’t dare cross the line to Zend Framework. These days however, developers focus less on frameworks, and more on the libraries they need to build the project they are working on. This decoupling of projects from frameworks is largely possible because of Composer and the ecosystem that has built up around it.

Like PHP, Composer is easy to get started with, but complex enough to take time and practice to master. The Composer manual does a great job of getting you up and running quickly, but some of the commands are involved enough that many developers miss some of their power because they simply don’t understand them.

I’ve picked out five commands that every user of Composer should master. In each section I give you a little insight into the command, how it is used, when it is used and why this one is important.

1: Require

Sample:

$ composer require monolog/monolog


Require is the most common command that most developers will use when using Composer. In addition to the vendor/package, you can also specify a version number to load along with modifiers. For instance, if you want version 1.18.0 of monolog specifically and never want the update command to update this, you would use this command.

$ composer require monolog/monolog:1.18.0


This command will not grab the current version of monolog (currently 1.18.2) but will instead install the specific version 1.18.0.

If you always want the most recent version of monolog greater than 1.18.0 you can use the > modifier as shown in this command.

$ composer require monolog/monolog:>1.18.0


If you want the latest patch in your current version but don’t want any minor updates that may introduce new features, you can specify that using the tilde.

$ composer require monolog/monolog:~1.18.0


The command above will install the latest version of monolog v1.18. Updates will never update beyond the latest 1.18 version.

If you want to stay current on your major version but never want to go above it you can indicate that with the caret.

$ composer require monolog/monolog:^1.18.0


The command above will install the latest version of monolog 1. Updates continue to update beyond 1.18, but will never update to version 2.

There are other options and flags for require, you can find the complete documentation of the command here.

2: Install a package globally

The most common use of Composer is to install and manage a library within a given project. There are however, times when you want to install a given library globally so that all of your projects can use it without you having to specifically require it in each project. Composer is up to the challenge with a modifier to the require command we discussed above, global. The most common use of this is when you are using Composer to manage packages like PHPUnit.

$ composer global require "phpunit/phpunit:^5.3.*"


The command above would install PHPUnit globally. It would also allow it to be updated throughout the 5.x series because we specified ^5.3.* as the version constraint. You should be careful when installing packages globally. As long as you do not need different versions for different projects you are ok. However, should you start a project and want to use PHPUnit 6.0.0 (when it releases), and PHPUnit 6 breaks backwards compatibility with the PHPUnit 5.* versions, you would have trouble. Either you would have to stay with PHPUnit 5 for your new project, or you would have to test all your projects to make sure that your unit tests work after upgrading to PHPUnit 6.

Globally installed projects are something to be thought through carefully. When in doubt, install the project locally.

3: Update a single library with Composer

One of the great powers of Composer is that developers can now easily keep their dependencies up-to-date. Not only that, as we discussed in tip #1, each developer can define exactly what “up-to-date” means for them. With this simple command, Composer will check all of your dependencies in a project and download/install the latest applicable versions.

$ composer update


What about those times when you know that a new version of a specific package has released and you want it, but nothing else updated. Composer has you covered here too.

$ composer update monolog/monolog


This command will ignore everything else, and only update the monolog package and its dependencies.

It’s great that you can update everything, but there are times when you know that updating one or more of your packages is going to break things in a way that you aren’t ready to deal with. Composer allows you the freedom to cherry-pick the packages that you want to update, and leave the rest for a later time.

4: Don’t install dev dependencies

In a lot of projects I am working on, I want to make sure that the libraries I download and install are working before I start working with them. To this end, many packages will include things like Unit Tests and documentation. This way I can run the unit Tests on my own to validate the package first. This is all fine and good, except when I don’t want them. There are times when I know the package well enough, or have used it enough, to not have to bother with any of that.

Many packages create a distribution package that does not contain tests or docs. (The League of Extraordinary Packages does this by default on all their packages.) If you specify the --prefer-dist flag, Composer will look for a distribution file and use it instead of pulling directly from GitHub. Of course, if you want to make sure you get the full source and all the artifacts, you can use the --prefer-source flag.
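Since this tip is about skipping development-only extras, it is worth noting the related install flag as well. The command below is a common Composer invocation (not one of the article's own examples) that skips the packages listed in your project's require-dev section, which is typically what you want in production:

$ composer install --no-dev --prefer-dist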

5: Optimize your autoload

Regardless of whether you --prefer-dist or --prefer-source, when your package is incorporated into your project with require, it just adds it to the end of your autoloader. This isn’t always the best solution. Therefore Composer gives us the option to optimize the autoloader with the --optimize switch. Optimizing your autoloader converts your entire autoloader into classmaps. Instead of the autoloader having to use file_exists() to locate a file, Composer creates an array of file locations for each class. This can speed up your application by as much as 30%.

$ composer dump-autoload --optimize


The command above can be issued at any time to optimize your autoloader. It’s a good idea to execute this before moving your application into production.

$ composer require monolog/monolog:~1.18.0 -o


You can also use the optimize flag with the require command. Doing this every time you require a new package will keep your autoloader up-to-date. That having been said, it’s still a good idea to get in the habit of using the first command as a safety net when you roll to production, just to make sure.

BONUS: Commit your composer.lock

After you have installed your first package with Composer, you now have two files in the root of your project: composer.json and composer.lock. Of the two, composer.lock is the more important one. It contains detailed information about every package and version installed. When you issue a composer install in a directory with a composer.lock file, Composer will install the exact same packages and versions. Therefore, pulling a git repo on a production server will replicate the exact same packages in production that were installed in development. Of course, the corollary of this is that you never want to commit your vendor/ directory. Since you can recreate it exactly, there is no need to store all of that code in your repo.

It is recommended that you also commit your composer.json. When you check out your repo into production and do an install, Composer will use the composer.lock instead of the composer.json when present. This means that your production environment is set up exactly like your development environment.
