PHP & Web Development Blogs

Search Results For: beta
Showing 1 to 5 of 6 blog articles.
2043 views · 1 year ago

![New Beta: Transcriptions and Closed Captioning](https://images.ctfassets.net/vzl5fkwyme3u/3Cz2r74iChZbJyAFS2ydPo/f67d7369647247806bbacb8f48c9fd98/captions.png)

With the mission of making technology more accessible, I'm pleased to announce a beta of transcriptions and closed captioning on select videos.

This beta offers a more in-depth look into videos before watching them, the ability to dig into the video's text afterwards for review, and the ability to watch the video with closed captioning assistance.

Closed Captioning may be enabled on the video player by clicking the [cc] icon and selecting your preferred language (at this time, only English is supported). You may also find a transcription of the talk below the description for more insight into the talk before watching, or to find specific information or sections afterwards for review.

![Example of Captions and Transcriptions](https://images.ctfassets.net/vzl5fkwyme3u/6O6eSdVYuCslcsrRzfmGbl/24c598e80195d4839662f2833c1cfa3c/captions-example2.png)

As our first attempt at transcriptions and closed captions, it won't be perfect. If it proves beneficial, however, this is something we hope to bring to all future videos, and to selectively add closed captioning to past videos based on demand and availability.


#### Today the following videos are available to watch with closed captioning:

* [Data Lakes in AWS](https://nomadphp.com/video/239/data-lakes-in-aws)

* [The Dark Corners of the SPL](https://nomadphp.com/video/234/the-dark-corners-of-the-spl)

* [Git Legit](https://nomadphp.com/video/233/git-legit)

* [Advanced WordPress Plugin Creation](https://nomadphp.com/video/224/advanced-wordpress-plugin-creation)

* [The Faster Web Meets Lean and Mean PHP](https://nomadphp.com/video/216/the-faster-web-meets-lean-and-mean-php)


#### Closed Captioning for Live Events

Unfortunately, at this time we are not able to offer closed captioning for live events, including our monthly meetings, workshops, and streamed conferences. This is an area we are continuing to look into, and we are working on identifying partners to make it possible in the future.


#### Providing Feedback / Requesting Closed Captioning

If you have a specific request for a video you would like to see closed captioned, please let us know using the [Nomad PHP feedback form](https://nomadphp.com/feedback).


#### Next on the Roadmap

Also look for these new features coming over the next several months:

* Lightning talks

* Team management for team account owners

* Mobile Apps for Android/iOS with offline viewing

5530 views · 1 year ago

![Create Alarm and Monitoring on Custom Memory and Disk Metrics for Amazon EC2](https://images.ctfassets.net/vzl5fkwyme3u/2HgdCq2lZucMyuiYyYiQie/b7376c29a2f94799613e8c1cb8ff4d3b/AdobeStock_91111530.jpeg?w=1000)

Today I am going to write about how to monitor custom memory and disk metrics and create alarms on Ubuntu.

To do this, we can use Amazon CloudWatch, which provides a flexible, scalable, and reliable solution for monitoring our server.

Amazon CloudWatch allows us to collect custom metrics from our applications, which we can monitor to troubleshoot issues, spot trends, and tune operational performance. CloudWatch can display alarms, graphs, custom metric data, and statistics.

## Installing the Scripts

Before we start installing the monitoring scripts, we should install the packages they depend on in Ubuntu.

First, log in to your AWS server and install the packages below from the terminal:

```sh
sudo apt-get update
sudo apt-get install unzip
sudo apt-get install libwww-perl libdatetime-perl
```

### Now Install the Monitoring Scripts

The following steps download, unzip, and configure the CloudWatch Monitoring Scripts on our server:

**1. In the terminal, change to the directory where we want to install the monitoring scripts.**

**2. Now run the command below to download the source:**

```sh
curl https://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.2.zip -O
```

**3. Now uncompress the downloaded archive using the following commands:**

```sh
unzip CloudWatchMonitoringScripts-1.2.2.zip && \
rm CloudWatchMonitoringScripts-1.2.2.zip && \
cd aws-scripts-mon
```

The resulting directory contains the Perl scripts that, when executed, report memory and disk space utilization metrics for our Ubuntu server.

It contains the following files:

**mon-get-instance-stats.pl** - Displays current utilization statistics for the AWS instance on which it is run.

**mon-put-instance-data.pl** - Collects system metrics on our Ubuntu server and sends them to Amazon CloudWatch.

**awscreds.template** - A template for the AWS credentials file, with placeholders for the access key ID and secret access key.

**CloudWatchClient.pm** - A Perl module that simplifies calling Amazon CloudWatch from the other scripts.

**LICENSE.txt** – Contains the Apache 2.0 license text.

**NOTICE.txt** – Contains the copyright notice.

**4. Before performing the CloudWatch operations, we need to confirm that our scripts have the required permissions:**

If an IAM role is associated with our EC2 Ubuntu instance, verify that it grants permission to perform the operations listed below:

```
cloudwatch:GetMetricStatistics
cloudwatch:PutMetricData
ec2:DescribeTags
cloudwatch:ListMetrics
```
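Attached as an IAM policy, these permissions might look like the following (a minimal illustrative policy; in production you may want to scope the `Resource` more tightly):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cloudwatch:GetMetricStatistics",
        "cloudwatch:PutMetricData",
        "cloudwatch:ListMetrics",
        "ec2:DescribeTags"
      ],
      "Resource": "*"
    }
  ]
}
```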

Now copy the `awscreds.template` file to `awscreds.conf` using the command below, then update the new file with our AWS credential details.

```sh
cp awscreds.template awscreds.conf
```

`awscreds.conf` should then contain:

```
AWSAccessKeyId = my_access_key_id
AWSSecretKey = my_secret_access_key
```

That completes the configuration.

## mon-put-instance-data.pl

This Perl script collects memory, swap, and disk space utilization data from the current system, then makes a remote call to Amazon CloudWatch to report the collected data as custom metrics.

We can perform a simple test run, without sending data to Amazon CloudWatch, by running the command below:

```sh
./mon-put-instance-data.pl --mem-util --verify --verbose
```

Now we are going to set up a cron job to collect our metrics on a schedule and send them to Amazon CloudWatch.

**1. Edit the crontab using the command below:**

```sh
crontab -e
```

**2. Now update the file with the following entry, which reports memory and disk space utilization for particular paths to Amazon CloudWatch every five minutes:**

```sh
*/5 * * * * ~/STORAGE/cloudwatch/aws-scripts-mon/mon-put-instance-data.pl --mem-util --mem-avail --mem-used --disk-space-util --disk-space-avail --disk-space-used --disk-path=/ --disk-path=/STORAGE --from-cron
```

If there is an error, the scripts will write an error message in our system log.

### Use of Options

**--mem-used**

Collects the amount of used memory and reports it in MB to the MemoryUsed metric. This metric counts memory allocated by applications and the OS as used.

**--mem-util**

Collects memory utilization as a percentage and reports it to the MemoryUtilization metric, counting memory used by applications and the OS.

**--disk-space-util**

Collects the currently utilized disk space and reports it as a percentage to the DiskSpaceUtilization metric for the selected disks.

**--mem-avail**

Collects the amount of available memory and reports it in MB to the MemoryAvailable metric. Memory allocated by applications and the OS is counted as used, and therefore not available.

**--disk-path=PATH**

Selects the disk path for which to report disk space.

**--disk-space-avail**

Collects the available disk space and reports it in GB to the DiskSpaceAvailable metric for the selected disks.

**--disk-space-used**

Collects the used disk space and reports it in GB to the DiskSpaceUsed metric for the selected disks.

PATH can point to a mount point or to any file located on the filesystem that needs to be reported.

To report on multiple disks, specify each disk path, as below:

```sh
--disk-path=/ --disk-path=/home
```

## Setting an Alarm for Custom Metrics

Before we run our Perl scripts, note that only the default metrics will be listed, not our custom metrics. You can see some of the default metrics in the image below:

![](https://4.bp.blogspot.com/-dYiNgtv5t4Q/XB-Kw76S49I/AAAAAAAAExM/DKKgzPUmVPUNpDqb0c57sVd3DjhY_3O6wCLcBGAs/s1600/aws.jpg)

Once the cron job has been set up and has run, the custom metrics will appear under Linux System Metrics.

Now we are going to create an alarm for our custom metrics:

**1. Open the CloudWatch console at https://console.aws.amazon.com/cloudwatch/home**

**2. In the navigation panel, click on Alarms, then choose Create Alarm.**

**3. This opens a popup listing the CloudWatch metrics by category.**

**4. Click on Linux System Metrics. The custom metrics will be listed, as you can see in the pictures below:**

![](https://3.bp.blogspot.com/-3OxSPmHahYc/XB-MhlaGSAI/AAAAAAAAExY/d9UIOFakcGAOADlbNOyd75r0Vkl5GtOfwCLcBGAs/s1600/aws2.jpg)

![](https://2.bp.blogspot.com/-Urfud3sv5LA/XB-My0O-qXI/AAAAAAAAExg/1isOeglXYyoS3U0u2uhwJN8ddhtnoHUwACLcBGAs/s1600/aws3.jpg)

![](https://2.bp.blogspot.com/-tXU4ieWKKdo/XB-NDrArsII/AAAAAAAAExo/7oz3qQZ7GrMETiJwWJdK75_hJ0qfADLvgCLcBGAs/s1600/aws4.jpg)

**5. Select the metric details and click the NEXT button to move to the Define Alarm step.**

![](https://3.bp.blogspot.com/-Vu0WoCCP_5o/XB-NiKGzmSI/AAAAAAAAExw/IPBX8ir-97Q_Q6L6Ajq2vNpukYGN8BB_QCLcBGAs/s1600/aws5.jpg)

**6. Now define the alarm with the required fields:**

Enter an alarm name to identify it, then give a description of our alarm.

Next, set the condition: the maximum byte count or percentage at which the alarm should be triggered. When the condition is satisfied, the alarm fires.

Provide any additional information about the alarm.

Define the actions to be taken when the alarm changes state.

Select or create a notification topic with the email addresses that should be notified about alarm state changes.

**7. Finally, choose Create Alarm.**

That's it! The alarm is now created for our selected custom metrics.

### Finished!

The alarm will now be listed under the selected state in our AWS panel. Select an alarm from the list to see its details and history.

3774 views · 1 year ago

![Press Release](https://images.ctfassets.net/vzl5fkwyme3u/6iJumo9OXCq0m0EqOuueAm/61e3dea3c18f9d770a6906099856e015/press_release.jpeg?w=1000)

To say that we have been hard at work here at Nomad PHP, or that I'm excited about these three announcements would be a tremendous understatement. Over the past several months, behind the scenes, we've been working to bring even more features and benefits to Nomad PHP - these have already included unlimited streaming of all past meetings and access to PHP Architect.

Available today, however, you'll also have access to online, live workshops. Soon you'll also be able to stream select PHP conferences live and, finally, to prove the knowledge you have gained through our online certification.

## Online, Live Workshops

Like our online meetings, we are excited to announce that available today you can participate in online, live, and interactive workshops. Our [first workshop](https://beta.nomadphp.com/live/bC7lqFvjeouMoC4cqoaU6/Workshop--Achieving-Undisturbed-REST--Part-1-/) will feature Michael Stowe, author of Undisturbed REST: a guide to Designing the Perfect API as he demonstrates how to build the perfect API using modern technologies and techniques.

Additional workshops will be announced as we continue, with a minimum of one workshop per quarter. These workshops will be part of your Nomad PHP subscription, and will be recorded for later viewing.

## Nomad PHP Certification

With the many changes impacting the PHP ecosystem, we're proud to announce the ability to prove your knowledge with our online certification. Each certification is made up of numerous, randomly selected questions to be completed within a specific time frame. Depending on the exam it may or may not be proctored, but all exams monitor user activity to ensure compliance.

To pass the exam, a passing grade (specified on each exam) must be completed for each section within the allotted time frame. Failure to complete or pass any section will result in a failing grade for the entire exam.

Upon completion, you will receive a digital certification with verification to post on LinkedIn or your website, as well as having your Nomad PHP profile updated to show the passed certification.

Initial certification exams will include PHP Developer Level I, PHP Engineer Level II, and API Specialist Level I. The PHP Developer exam will cover core components of PHP, the Engineer will cover a broad spectrum of topics including modern technologies, and the API Specialist will cover REST design and architecture practices.

All three exams will be available by January 31, 2019, and will be included with a Nomad PHP subscription.

## Stream Select PHP Conferences Live

One of the primary goals of Nomad PHP is to bring the community together, and allow users all over the country to participate in conference level talks. What better way to do this than to bring community conferences online?

Like our traditional talks, these conferences and select conference sessions will be live-streamed as part of your Nomad PHP subscription, allowing you to participate in real-time with in-person conference attendees.

The first conference to be streamed will be [DayCamp4Developers: Beyond Performance](https://daycamp4developers.com/) on January 18, 2019. Additional conferences to be streamed will be announced shortly.

## Community and Corporate Sponsorships

With these new additions to Nomad PHP, now is the perfect time to take advantage of our new [Community and Corporate sponsorships](/static/advertise).

Your support of Nomad PHP not only makes all the above possible, but allows Nomad PHP to continue to serve and give back to the community. We're proud that, despite operating at a loss, we have already contributed over **$4,000** to the PHP community in the last 5 months.

To learn more about the sponsorship and community opportunities we have available, please visit our [Advertising section](/static/advertise).

### Other Ways to Support Nomad PHP

Of course, while financial support helps us keep afloat and do more for the community, there are even more, and just as important ways to support Nomad PHP. Please consider [linking to Nomad PHP](/static/webmasters), or [sharing the service](/invite) with your friends.

5945 views · 1 year ago

![PHP IPC with Daemon Service using Message Queues, Shared Memory and Semaphores](https://images.ctfassets.net/vzl5fkwyme3u/4ULcw2rCysGcSGOAi2uKOk/450013591b84069c5536663430536714/AdobeStock_200383770.jpeg?w=1000)

# Introduction

In a previous article we learned about [Creating a PHP Daemon Service](https://beta.nomadphp.com/blog/50/creating-a-php-daemon-service). Now we are going to learn how to use methods to perform IPC - Inter-Process Communication - to communicate with daemon processes.

# Message Queues

In the world of UNIX, there is an incredible variety of ways to send a message or a command to a daemon script and vice versa. But first I want to talk only about message queues - "System V IPC Message Queues".

A long time ago I learned that a queue can be either in the System V IPC implementation, or in the POSIX implementation. I want to comment only about the System V implementation, as I know it better.

Let's get started. At the "normal" operating system level, queues are stored in memory. Queue data structures are available to all system programs. Just as in the file system, it is possible to configure queue access rights and message size. Usually a queue message size is small, less than 8 KB.
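On Linux, you can inspect these system-wide limits from the shell with the `ipcs` utility (the exact values vary by system):

```shell
# Show System V message queue limits: max queues system-wide,
# max message size in bytes, and default queue capacity
ipcs -q -l
```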

This introductory part is over. Let's move on to practice with some example scripts.

**queue-send.php**

```php
// Convert a path name and a project identifier to a System V IPC key
$key = ftok(__FILE__, 'A'); // 555 for example

// Create a message queue with the key (any integer value can serve as a key).
$queue = msg_get_queue($key);

// Send a message. Note that all required fields are already filled,
// but sometimes you want to serialize an object and put a lock on a message.
// Note that we specify different types. A type is a certain group in the queue.
msg_send($queue, 1, 'message, type 1');
msg_send($queue, 2, 'message, type 2');
msg_send($queue, 3, 'message, type 3');
msg_send($queue, 1, 'message, type 1');

echo "send 4 messages\n";
```

**queue-receive.php**

```php
$key = ftok('queue-send.php', 'A'); // 555 for example
$queue = msg_get_queue($key);

// Loop through all types of messages.
for ($i = 1; $i <= 3; $i++) {
    echo "type: {$i}\n";

    // Read all messages of this type; read messages are removed from the queue.
    // Note the MSG_IPC_NOWAIT constant; without it the call would block forever.
    while (msg_receive($queue, $i, $msgtype, 4096, $message, false, MSG_IPC_NOWAIT)) {
        echo "type: {$i}, msgtype: {$msgtype}, message: {$message}\n";
    }
}
```

Let's first run queue-send.php, and then queue-receive.php:

```sh
u% php queue-send.php
send 4 messages
u% php queue-receive.php
type: 1
type: 1, msgtype: 1, message: s:15:"message, type 1";
type: 1, msgtype: 1, message: s:15:"message, type 1";
type: 2
type: 2, msgtype: 2, message: s:15:"message, type 2";
type: 3
type: 3, msgtype: 3, message: s:15:"message, type 3";
```

You may notice that the messages have been grouped: the first group gathered the 2 messages of the first type, then the remaining messages followed.

If we had instead indicated type 0 when receiving, we would get all messages, regardless of type:

```php
while (msg_receive($queue, 0, $msgtype, 4096, $message, false, MSG_IPC_NOWAIT)) {
    // ...
}
```

Here it is worth noting another feature of queues: if we do not use the MSG_IPC_NOWAIT constant, run queue-receive.php from a terminal, and then run queue-send.php periodically, we can see how a daemon can effectively use a blocking receive to wait for jobs.

**queue-receive-wait.php**

```php
$key = ftok('queue-send.php', 'A'); // 555 for example
$queue = msg_get_queue($key);

// Receive messages of any type (0). Without MSG_IPC_NOWAIT this call
// blocks until a message arrives; read messages are removed from the queue.
while (msg_receive($queue, 0, $msgtype, 4096, $message)) {
    echo "msgtype: {$msgtype}, message: {$message}\n";
}
```

Actually, that is the most interesting information of all I have said. There are also functions for getting queue statistics, removing queues, and checking whether a queue exists.
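For example, a brief sketch using those functions from the sysvmsg extension (the queue must already exist for the stats call to return data):

```php
$key = ftok('queue-send.php', 'A');

// Check whether a queue exists for this key
if (msg_queue_exists($key)) {
    $queue = msg_get_queue($key);

    // msg_stat_queue() returns an array with fields such as
    // msg_qnum (messages currently queued) and msg_qbytes (queue capacity)
    $stats = msg_stat_queue($queue);
    echo "messages in queue: {$stats['msg_qnum']}\n";

    // Remove the queue and all of its messages from the system
    msg_remove_queue($queue);
}
```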

Let's now try to write a daemon listening to a queue:

**queue-daemon.php**

```php
// Fork the process
$pid = pcntl_fork();

if ($pid == -1) {
    // Fork failed
    exit;
} elseif ($pid) {
    // Parent process: exit and leave the child running in the background
    exit;
} else {
    // Child process: detach from the terminal...
    posix_setsid();

    $key = ftok('queue-send.php', 'A');
    $queue = msg_get_queue($key);

    // ...and block waiting for messages
    while (msg_receive($queue, 0, $msgtype, 4096, $message)) {
        echo "msgtype: {$msgtype}, message: {$message}\n";
    }
}
```

# Shared Memory

We have learned to work with queues, with which you can send small system messages. But then we may certainly be faced with the task of transmitting large amounts of data. My favorite type of system, System V, has solved the problem of rapid transmission and preservation of large data in memory using a mechanism called **Shared Memory**.

In short, the data in Shared Memory lives until the system is rebooted. Since the data is in memory, it works much faster than if it were stored in a database, somewhere in a file, or, God forgive me, on a network share.

Let's try to write a simple example of data storage.

**shared-memory-write-base.php**

```php
// This is the correct and recommended way to obtain a unique identifier.
// With this approach the system uses the inode table of the file system,
// and for greater uniqueness converts this number based on the second
// parameter. The second parameter is always a single letter.
$id = ftok(__FILE__, 'A');

// Create or open the memory block.
// Here you can specify additional parameters, in particular the size of the
// block or access rights for other users to access this memory block.
$shmId = shm_attach($id);

// The key under which we store our shared variable (any integer value)
$var = 1;

// Check whether the requested variable already exists.
if (shm_has_var($shmId, $var)) {
    // If so, read the data
    $data = (array) shm_get_var($shmId, $var);
} else {
    // If the data was not there
    $data = array();
}

// Add the contents of this file to the resulting array, keyed by time.
$data[time()] = file_get_contents(__FILE__);

// Write the array back into memory under the chosen variable key.
shm_put_var($shmId, $var, $data);

// Easy?
```

Run this script several times to save values in memory. Now let's write a script that only reads from memory.

**shared-memory-read-base.php**

```php
// Read data from memory.
$id = ftok(__DIR__ . '/shared-memory-write-base.php', 'A');
$shmId = shm_attach($id);
$var = 1;

// Check whether the requested variable exists.
if (shm_has_var($shmId, $var)) {
    $data = (array) shm_get_var($shmId, $var);
} else {
    $data = array();
}

// Iterate over the received values and save them to files.
foreach ($data as $key => $value) {
    // A simple example: create a file from the data that we saved.
    $path = "/tmp/$key.php";
    file_put_contents($path, $value);
    echo $path . PHP_EOL;
}
```

# Semaphores

So, in general terms, it should be clear by now how to work with shared memory. The only problems left to figure out are a couple of nuances, such as: "What to do if two processes want to write to one block of memory at the same time?" or "How to store binary files of any size?".

To prevent simultaneous access we will use semaphores. Semaphores allow us to flag that we want exclusive access to some resource, such as a shared memory block, while other processes wait for their turn on the semaphore.

The following code explains this clearly:

**shared-memory-semaphors.php**

```php
// Let's try to save a binary file, a couple of megabytes in size.
// This script does the following:
// if the data is already in memory it reads it back out, otherwise
// it writes the data into memory. When writing we take a lock - a semaphore.
$id = ftok(__FILE__, 'A');

// Obtain a semaphore resource - our locking primitive. There is nothing wrong
// with using the same id that is used to obtain the shared memory resource.
$semId = sem_get($id);

// Take the lock. There is a caveat: if another process holds this lock,
// this call will wait until the lock is released.
sem_acquire($semId);

// Use a picture of your own here
$data = file_get_contents(__DIR__.'/06050396.JPG', FILE_BINARY);

// The data can be large, so allocate enough memory to hold it plus some overhead.
$shmId = shm_attach($id, strlen($data) + 4096);

$var = 1;
if (shm_has_var($shmId, $var)) {
    // Obtain the data from memory
    $data = shm_get_var($shmId, $var);

    // Save our file somewhere
    $filename = '/tmp/' . time();
    file_put_contents($filename, $data, FILE_BINARY);

    // Remove the memory block so it starts all over again.
    shm_remove($shmId);
} else {
    shm_put_var($shmId, $var, $data);
}

// Release the lock.
sem_release($semId);
```

Now you can use the md5sum command line utility to compare the two files, the original and the saved one. Or you can open the files in an image editor, or compare them however you prefer.
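For example (using two stand-in files here, since your image path and the timestamped output name will differ on each run):

```shell
# Stand-ins for the original image and the copy recovered from shared memory
printf 'example-bytes' > /tmp/original.bin
printf 'example-bytes' > /tmp/recovered.bin

# Identical content produces identical MD5 hashes
md5sum /tmp/original.bin /tmp/recovered.bin
```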

With this we are done with shared memory and semaphores. As homework, I want to ask you to write code in which a daemon uses semaphores to access shared memory.

# Conclusion

Exchanging data between daemons is very simple. This article described two options for data exchange: message queues and shared memory.

Post a comment here if you have questions or comments about how to exchange data with daemon services in PHP.

1582 views · 2 years ago

![Happy Thanksgiving](https://images.ctfassets.net/vzl5fkwyme3u/210WuoFQk8i8YI8iAUsyuA/a31e17d34e57fd9dc83c795d1b59ac73/AdobeStock_177353492.jpeg?w=1000)

A brief (by Mike's standards) note

> As we express our gratitude, we must never forget that the highest appreciation is not to utter words, but to live by them. - John F. Kennedy

I wanted to take a brief moment to express my gratitude this holiday season. First and foremost, a huge thank you to the beautiful Tanja Hoefler who has put in countless hours behind the scenes of Nomad PHP, ranging from finding the best articles and tweeting them out, to tracking down great speakers, to countless hours of video editing (including fixing all my mistakes from the live broadcast).

### Thank you to our Founders

I also need to thank Cal and Kathy Evans, an amazing husband and wife team who have done so much for the community over many, many years - including founding Nomad PHP and being an invaluable resource as Tanja and I took it over. They are truly an inspiration for Tanja and myself, and I hope some day we can do as much for the community as they have.

### Thank you to our Speakers

And I need to express my gratitude to our amazing speakers who spend countless hours preparing their presentations, and even stay up all night practicing or get up at 5am to be ready to share their knowledge with the community.

### Thank you to our Advisors

Another special shout out goes to Eric Poe, Eric Hogue, and Andrew Caya, who have all been tremendous advocates of Nomad PHP, as well as the foundation of our meetings. Their feedback, along with our amazing Advisory Board, has helped shape the direction of Nomad PHP and the many great things we hope to do in the future.

### Thank you to our Sponsors

Which brings me to our sponsors. As we try to grow and expand Nomad PHP, as well as bring in more resources and make it more valuable (and affordable), we have done so at a fairly significant loss. That's ok, as that was the plan for 2018, but companies like **[RingCentral](https://developers.ringcentral.com)**, **[Twilio](http://twilio.com)**, **[Auth0](http://auth0.com)**, and **[OSMI](http://osmihelp.org)** have played a critical role in letting us move forward and keeping that loss manageable. Without them, I'm not sure we would be able to be offering the service we do, or have the plans we do for 2019!

*On a side note, if you're not familiar with OSMI, they're a GREAT non-profit who has done so much good in the tech space raising awareness about mental health, and educating employers - I highly recommend supporting this great non-profit organization.*

### Thank you to our Family and Friends

Of course, I need to thank my family, my friends, and all those who have supported myself, Tanja, and Nomad PHP over the years.

### Thank YOU

Least, but certainly not last - in fact perhaps most important of all - I want to thank the tremendous Nomad PHP community - over 3,000 members strong - that make Nomad PHP what it is. Without you, Nomad PHP wouldn't exist - it wouldn't need to. And **without you, and the greater PHP community, I wouldn't be here today, doing what I love to do**.

For those that do not know my story, I grew up in medicine - becoming a first responder and pursuing a career as a lifeflight paramedic (helicopter ambulance) before realizing two things... ok three: I had a tremendous fear of heights, I hated needles, and I loved programming.

Leaving the nursing program left me unsure of what to do next, so I did what I loved - programming - where the number of mistakes I made I'm sure outnumbered the lines of good code. If it wasn't for the community being so patient, so encouraging, and helping me grow - I'm not sure what I would be doing today, but it certainly wouldn't involve PHP, Developer Relations, Nomad PHP, or the community.

So for that I want to say one final thank you - a thank you for giving me the gift to do what I love, and the opportunity to hopefully pay this forward, and give back to the community, helping others do the same.

Next year will be an amazing one for Nomad PHP, but I can't thank you all enough for how incredible and amazing these few short months in 2018 have been - because without you, there would be no 2019.

Have a very wonderful Thanksgiving,

Mike

***PS - want to support our 2019 initiatives and be recognized on Nomad PHP in the process? [Become a Supporter](https://beta.nomadphp.com/static/advertise#supportUs).***
