ZendCon 2011 : PHP in the Dark
 

PHP is well known for its uses in the web context, but sometimes you need to support your web application with scripts running in the background. This talk is about console scripting, I/O, daemons, signals, etc. These are the slides for the talk as given at the uncon of ZendCon 2011. It replaces previous PHP in the Dark talks!


Upload Details

Uploaded as Apple Keynote

Usage Rights

© All Rights Reserved

  • When writing web applications, most of the action happens in a web context. But sometimes you need to support your application with scripts that run in the background.
    Such tasks can include generating reports, performing maintenance, loading external content, aggregating or analyzing data, sending out mass mailings, and much more.
    Scripts performing these tasks aren't run via the browser. PHP CLI, short for Command Line Interface, is a special SAPI, or Server API, that allows you to run PHP scripts on the command line.
  • POSIX, or "Portable Operating System Interface for Unix", is a set of standards that defines the API, along with shell and utility interfaces, for software compatible with variants of Unix.
    The first version was created in 1985 and extended over the years, with the last revision in 2009.
  • The Server API or SAPI is responsible for coordinating the PHP lifecycle. You can look at it as the bridge between the web server (or command line) and PHP.
    The SAPI passes requests to the PHP core, which handles them and is also responsible for low-level operations like file streams, error handling, etc.
    Next to this, we find the Zend Engine, which parses and compiles the scripts we write and executes them in its virtual machine.
    At times, the Zend Engine hands over control to the extension layer, where PHP extensions inject new functionality into PHP.
  • The simplest way to execute a PHP script is to call the php binary and pass the script's filename as a parameter. But with a shebang, we don't even need to do that. The shebang is a line added to the top of your PHP script that consists of the hash sign and exclamation mark, followed by the php command.
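As a sketch (filename hypothetical), a script using a shebang might look like this:

```php
#!/usr/bin/env php
<?php
// hello.php -- make it executable once: chmod +x hello.php
// then run it directly: ./hello.php
echo "Hello from the CLI\n";
```

Using `/usr/bin/env php` rather than a hard-coded path keeps the script portable across systems where the php binary lives in different locations.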
  • When creating PHP scripts to run on the command line, you lose all functionality related to the web context.
    This is mostly reflected in the PHP globals. $_GET and $_POST aren't available anymore. $_SERVER is still there, but is missing all web-related values. It did gain terminal-related information, which can be quite useful at times.
  • While you can run your scripts manually, one of the typical uses of PHP on the command line is via cron. Using cron, you can schedule when a specific script has to be executed.
    It is possible to define constants which will be accessible in the $_SERVER global. We typically use this to pass on the application environment so the code knows which config to load from the ini files.
  • At the top of this slide, you can see that I defined 2 constants.
    I also configured 3 scripts to run at specific times. I'm not going into this now, but you can always ask me after the presentation if you want more information.
  • One of the key differences between web and command line scripts is the way they handle input/output. As said earlier, you don't have access to the request globals, and your command line is also not able to display HTML in a nice way.
  • The simplest way to do input is using parameters passed when executing the script. Two globals help you out with this: the first is $argc, which holds the number of arguments, and the other is $argv, an array containing every argument.
    The first element of the array is always the filename of the PHP script. As a result, $argc is always 1 or bigger.
    You can also find argc and argv in the $_SERVER global array.
  • So if we have a quick look at this script ...
  • This gives the following output. As you can see, nothing is linked; it's just an array with a bunch of values.
    While this might be good for very simple things, you usually want something more.
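The script itself isn't reproduced here, but a minimal sketch that dumps the argument globals could look like this (filename hypothetical):

```php
<?php
// args.php -- dump the raw argument globals
printf("argc: %d\n", $argc);   // number of arguments, including the script name
print_r($argv);                // $argv[0] is always the script's filename

// ./args.php -u admin --verbose
// argc is 4, and "-u", "admin" and "--verbose" appear as plain,
// unlinked strings in $argv
```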
  • PHP has an implementation of the GNU getopt functions. Getopt allows you to parse arguments. Before PHP 5.3 only short options were possible, but since then, long options are also available.
    Per option you can define whether it has no value, an optional value or a required value. Whenever a value is optional, it needs to be attached to the short option, otherwise getopt will not be able to link it to the option.
    There is no validation of the values entered, and even in cases where you do something wrong, you don't always receive an error.
  • In this example you can see I defined the short options in a string and the long options in an array. I then pass these to getopt and receive an array of options.
  • Let's run this code.
    In the first example, we provide a value for -u (which is required) and a value for -p (which is optional). As you can see, the value passwd is attached to the short option -p. The resulting array looks like we would expect.
    Let's mess things up a bit.
    In the second example, I leave out the required value for -u and detach the value passwd from the short option -p. We don't get an error, but a result that looks like this.
    As you can see, -u has a value: the first argument following the option. -p has no value, and the string "passwd" has been ignored.
    So it's already better than argv and argc, but there is still room for improvement.
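A minimal sketch of the getopt usage described above (option names chosen for illustration):

```php
<?php
// u requires a value (u:), p takes an optional one (p::), h takes none.
// Long options (PHP 5.3+) mirror the short ones but are not linked to them.
$shortOpts = "u:p::h";
$longOpts  = array("user:", "password::", "help");

$options = getopt($shortOpts, $longOpts);
print_r($options);

// ./getopt.php -u admin -ppasswd   =>  [u] => admin, [p] => passwd
// ./getopt.php -p passwd           =>  the detached "passwd" is ignored
```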
  • PEAR, the ConsoleTools from eZ Components and Zend Framework each have their own implementation of getopt. Personally, I like the Zend Framework implementation best. The Zend_Console_Getopt class is a little gem.
    It supports short and long options, which are linked, so it knows that -u and --user are the same thing. You can define a help message for each option, and you can set whether a value is required, optional or forbidden.
    The class provides the getUsageMessage method, which generates a usage message based on the help messages you provided.
    After parsing, the options are available as properties, under both their short and long option names. These are aliases for each other, so even if you used -u with a value, you can access it on the returned object using the user property.
    Zend_Console_Getopt throws exceptions in case of issues, so you can, for instance, show the usage message when that occurs. There are some extra features; if you need to know more, I suggest you check out the class reference in the Zend Framework documentation.
    Let's have a look at a little code example.
  • In contrast to the example for the basic getopt, you can see that the options are now linked together and that we provided a help message for each. This config array is passed when creating a new instance of Zend_Console_Getopt.
    After calling the parse method, we can access the options via properties on the object. You can see we access help, user, password, but also v, short for verbose. When we get an exception, or if the help option is provided, we display the usage message.
    When you run this code, it behaves as expected. But let's have a quick look at the usage message, which is a nice feature.
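The code slide isn't reproduced here, but based on the description, the Zend_Console_Getopt usage might be sketched like this (rule strings follow the ZF1 convention: =s for a required string value, -s for an optional one):

```php
<?php
// Requires Zend Framework 1 on the include path
require_once 'Zend/Console/Getopt.php';

$opts = new Zend_Console_Getopt(array(
    'user|u=s'     => 'Username (value required)',
    'password|p-s' => 'Password (value optional)',
    'verbose|v'    => 'Verbose output',
    'help|h'       => 'Show this help message',
));

try {
    $opts->parse();
    if ($opts->help) {
        echo $opts->getUsageMessage();   // generated from the help texts above
        exit(0);
    }
    // -u and --user are aliases: both readable as $opts->user
    echo "User: {$opts->user}\n";
} catch (Zend_Console_Getopt_Exception $e) {
    echo $e->getUsageMessage();          // show usage on invalid input
    exit(1);
}
```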
  • While command line arguments are one way to get input, they're not always what you want. One of the strengths of the console is that you can interact with your user. It is possible to fiddle with input streams, but it's not required.
    On Linux, the GNU readline library does just that. It allows you to ask for information and read in what the user provided. It has built-in support for autocompletion and command history.
    If you need interleaving of I/O and user input, there is also support for callback handlers in combination with advanced stream handling. But I haven't tried that out myself yet. :)
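A minimal sketch using the readline extension:

```php
<?php
// Prompt the user and read a line of input (requires ext/readline)
$name = readline("What is your name? ");
readline_add_history($name);   // up-arrow now recalls previous answers
echo "Hello, {$name}!\n";
```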
  • PEAR, Zend Framework and the ConsoleTools of eZ Components all provide functionality that allows you to make the output of your scripts a bit more attractive.
    They allow for text formatting, which includes coloring text and background, putting text in bold or italics, underlining text, and even blinking!
    There is also support for progress bars and conversion of arrays into tables, including callback functionality on the table columns.
    I've added some examples from the PEAR console classes.
  • If you want to really go overboard, there is ncurses. Ncurses is also an implementation of a GNU library and allows you to create windows; it supports input from the mouse and keyboard, coloring, and much more.
    One of the disadvantages of ncurses is that its documentation is really bad. If you want to use it, you might have to look at the documentation of ncurses implementations in other languages, or the Linux man pages.
    Joshua Thijssen gave me an interesting tip about a Linux command line tool called whiptail. The tool creates similar interfaces and takes a lot of the work away from you. You can execute it from PHP, capture the return value and use that back in your script.
  • Sometimes you need more. When I worked for Roulette69, we had a background process which analyzed, in real time, the games that were played, generated statistics and put those in memcache, where they were picked up and shown to all players that were online.
    The script responsible for this was a daemon. A daemon or service is basically a background process designed to run autonomously, with little or no user interaction.
    The name has its origins in Greek mythology: daemons were neither good nor evil; they were little spirits that did useful things for mankind.
    The first time I created a daemon, I simply wrote a PHP script with an endless loop in it. Then I called it with an ampersand after the command, and it was sent to the background. It was easy.
    But it was also bad. When you need a daemon, there are a couple of things you need to do to make sure it runs smoothly.
  • Those 6 steps are:
    - Fork off the parent process
    - Change the file mode mask
    - Open any logs for writing
    - Create a new session id & detach the current session
    - Change the current working directory
    - Close the standard file descriptors
    After you have taken care of this, you can add the payload: the code you actually want to execute.
    So, what does this all mean?
  • Step one is forking off the parent process. A daemon can be started by the system itself or by a user on the terminal. When it is started, it behaves like any other executable on the system. To make it run autonomously, we must detach it from where it was started. You do this by creating a child process where the actual code is executed. This is known as forking.
    When you fork, you create a full copy of the original process. The original is called the parent, the copy the child. The only ways they differ are in their process id (or pid) and their parent process id (or ppid).
    This also means that all variables initialized in the parent before the fork are also available as-is in the child's thread of execution. This can lead to some unexpected and unwanted behaviours. For this reason, you always have to code as defensively as possible when working with daemons and do tons of error checking.
  • When forking, we can have three outcomes:
    On success, the PID of the child process is returned in the parent's thread of execution, and 0 is returned in the child's thread of execution.
    On failure, -1 is returned in the parent's context, no child process is created, and a PHP error is raised.
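In code, the three outcomes look roughly like this:

```php
<?php
$pid = pcntl_fork();
if ($pid === -1) {
    // failure: we are still in the parent, no child was created
    die("Could not fork\n");
} elseif ($pid > 0) {
    // parent: $pid holds the child's process id
    exit(0);   // first daemonization step: the parent simply exits
} else {
    // child: $pid is 0, the daemon code continues here
}
```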
  • Our child process is a clone of the parent process up to the point of the fork. This means that, among other things, we also inherited the umask of the parent.
    The umask or user file creation mask limits the default permissions of newly created files and folders. The default permissions are 0777 (which stands for read/write/execute for all) on directories and 0666 (read/write for all) on files. The system will typically set the umask to 0022, which takes away write access for group and other.
    The child has no idea what the umask is set to, so it's always good to reset it using umask(0), even if we don't plan to use it, so the daemon can write files (including logs) that receive the proper permissions.
  • Since we don't receive any feedback from the command line, we need an alternative: logging. This allows you to follow what is going on.
    Logging can happen to a database, to files, or even via syslog.
  • Syslog sends your log messages to a system-wide logger, where they can be configured to be written to a file, sent to a network server or filtered away entirely.
    I included a quick example for reference, but I'm not going to go into it right now.
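The example slide isn't reproduced here, but a quick syslog sketch might look like this (the 'mydaemon' identifier is hypothetical):

```php
<?php
// Send messages to the system-wide logger instead of the terminal
openlog('mydaemon', LOG_PID | LOG_ODELAY, LOG_DAEMON);
syslog(LOG_INFO, 'Daemon started');
syslog(LOG_ERR, 'Something went wrong');
closelog();
```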
  • Each process on a Unix or Linux system is a member of a process group or session. The id of each group is the process id of its owner.
    After forking, the child inherits the process group of the parent, and the child's parent process id is equal to the parent's process id.
    Since the parent is going to exit, the child needs to create its own process group and become its own process leader; otherwise it will become an orphan in the system.
  • In PHP we detach our session using posix_setsid. It returns the new session id on success or -1 on error.
  • You can already guess it: our child also inherited the working directory of the parent.
    The working directory could be a network mount, a removable drive or somewhere the administrator may want to unmount at some point.
    To unmount any of these, the system will have to kill any processes still using them, which would be unfortunate for our daemon.
    For this reason we set our working directory to the root directory, which we are sure will always exist and can't be unmounted.
  • Since we detached the child from the terminal, it can't interact with the user directly. As a consequence, it has no use for the standard file descriptors STDIN, STDOUT and STDERR.
    As with everything else, the file descriptors are inherited from the parent, and the child has no idea what they are connected to. So we close them.
    If you don't do this and you still have your terminal open after launching the daemon, you might get unwanted output from it at times.
  • One of the cool things about the file descriptors is that after you have closed them, the system will reattach them to the first resources you open.
    There is still little use for STDIN, so we point it to read from /dev/null.
    On the other hand, we reconnect STDOUT and STDERR to log files. Whenever you echo something to the screen, it will be written to the log file.
  • Let's put it all together.
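The combined code slides aren't reproduced here, but the six steps might be put together roughly like this (paths hypothetical, error handling kept minimal):

```php
<?php
// 1. Fork off the parent process
$pid = pcntl_fork();
if ($pid === -1) {
    die("Could not fork\n");
} elseif ($pid > 0) {
    exit(0);                               // parent exits, child lives on
}

// 2. Change the file mode mask
umask(0);

// 3. Open any logs for writing
openlog('mydaemon', LOG_PID, LOG_DAEMON);

// 4. Create a new session id and detach from the controlling terminal
if (posix_setsid() === -1) {
    die("Could not detach session\n");
}

// 5. Change the current working directory to one that always exists
chdir('/');

// 6. Close the standard file descriptors ...
fclose(STDIN);
fclose(STDOUT);
fclose(STDERR);

// ... the first streams opened afterwards take their place
$stdin  = fopen('/dev/null', 'r');
$stdout = fopen('/var/log/mydaemon.log', 'ab');
$stderr = fopen('/var/log/mydaemon.err', 'ab');

// Payload: the code you actually want to execute
while (true) {
    // ... do useful work ...
    sleep(1);
}
```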
  • One of the things you need to keep in mind when writing daemons is that you're in it for the long run. Since PHP is typically used for short scripts, it doesn't usually garbage collect during execution, but when the script has finished. This is problematic for daemons and can lead to a build-up of memory usage.
    Before PHP 5.3 there wasn't much you could do. Since then we have circular reference garbage collection, which can make our lives a little easier. To get this to work, you need to decrease the reference count on chunks of memory by setting variables to null or by unsetting them.
    Once in a while you should run gc_collect_cycles in your while loop to take out the trash. Don't do this too often, though.
    Another thing to keep in mind is that PHP caches file statistics whenever it uses file functions. If you perform a lot of file operations on the same files in different runs of your loop, you will work on cached information instead of real-time information. If your daemon runs for a long time, this might be a problem, so you should run clearstatcache at regular intervals.
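A sketch of such housekeeping inside the work loop (doWork and the interval of 1000 iterations are placeholders):

```php
<?php
$iterations = 0;
while (true) {
    $result = doWork();          // hypothetical payload
    unset($result);              // drop references so memory can be reclaimed

    if (++$iterations % 1000 === 0) {
        gc_collect_cycles();     // PHP 5.3+: collect circular references
        clearstatcache();        // drop cached file statistics
    }
}
```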
  • The man/info page states that signal 0 is special and that the exit code from kill tells whether a signal could be sent to the specified process (or processes).
    So kill -0 will not terminate the process, and the return status can be used to determine whether a process is running.
    pcntl_exec — executes a specified program in the current process space.
  • Sometimes you will need to communicate with a daemon process. One way to do so is by sending "signals". There are a number of different signals you can send, some with a specific meaning, others interpreted by the application.
  • To stop a process you can use SIGTERM and SIGKILL. SIGTERM is the polite way to kill a script: you can catch it and end your daemon gracefully. You can't catch SIGKILL.
    SIGHUP is typically a signal you send if you want the daemon to reinitialize (cf. reloading logs).
    SIGINT is the interrupt signal, typically triggered by pressing Ctrl+C in the terminal.
    SIGUSR1 is typically a request to dump state to syslog.
    Send signals from your script using posix_kill( $pid, $signal ).
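A minimal sketch of catching signals with the pcntl extension:

```php
<?php
declare(ticks=1);   // let PHP check for pending signals (pre-5.3 style;
                    // alternatively call pcntl_signal_dispatch() in the loop)
$running = true;

pcntl_signal(SIGTERM, function ($signo) use (&$running) {
    $running = false;            // polite shutdown: finish up, then stop
});
pcntl_signal(SIGHUP, function ($signo) {
    // conventional spot to reload configuration or reopen log files
});

while ($running) {
    sleep(1);                    // sleep is interrupted when a signal arrives
}
// note: SIGKILL can never be caught; the process just dies
```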
  • When a process ends, all of the memory and resources associated with it are deallocated so they can be used by other processes.
    A zombie process or defunct process is a process that has completed execution but still has an entry in the process table. This entry is still needed to allow the parent process to read the child's exit status; the resources are not deallocated until then.
    The parent reads the child's exit status by executing a wait system call, at which point the zombie is removed. This is commonly done in a SIGCHLD signal handler on the parent (SIGCHLD is received when a child has died).
    If the parent explicitly ignores SIGCHLD by setting its handler to SIG_IGN, all child exit status information is discarded and no zombie processes are left.
  • -1 means wait for any child process; this is the same behaviour the wait function exhibits.
    pcntl_waitpid — waits on or returns the status of a forked child.
    WNOHANG — return immediately if no child has exited.
    pcntl_wifexited — checks if a status code represents a normal exit.
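Put together, reaping children in a SIGCHLD handler might look like this:

```php
<?php
declare(ticks=1);

// Reap finished children so they don't linger as zombies
pcntl_signal(SIGCHLD, function ($signo) {
    // -1: wait for any child; WNOHANG: return at once if none has exited
    while (($pid = pcntl_waitpid(-1, $status, WNOHANG)) > 0) {
        if (pcntl_wifexited($status)) {
            $exitCode = pcntl_wexitstatus($status);
            // child $pid finished normally with exit code $exitCode
        }
    }
});
```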
  • Sometimes you need to do so much work that you could use an extra pair of hands. One way to do this is to start a script a number of times, but then you lose some form of control.
  • What you really want is a dynamic number of concurrent workers, managed by an overseer. This overseer can add new workers when needed, distribute work among the workers, etc.
    Sometimes people talk about "multi-threading" in PHP; well, this is what they mean. It is NOT multithreading, it's parallel or concurrent processing.
    How do you do this? You start off by daemonizing your overseer following the steps we saw before. When that is done, you do another round of forking.
    You fork off each worker. It's important to note that you don't have to follow the steps we discussed earlier. One of the reasons we did was that we were unsure of how the process was started; for the workers, we know and we are in full control.
    In particular, don't change the session id, since you want all your workers to be in the same process group.
    Let's have a very quick look at some code...
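The code slides aren't reproduced here, but an overseer/worker sketch might look like this (worker count and payload are hypothetical):

```php
<?php
$workerCount = 4;
$children = array();

for ($i = 0; $i < $workerCount; $i++) {
    $pid = pcntl_fork();
    if ($pid === -1) {
        die("Could not fork worker\n");
    } elseif ($pid === 0) {
        // worker: stays in the overseer's session/process group on purpose
        doWork($i);              // hypothetical payload
        exit(0);
    }
    $children[] = $pid;          // overseer keeps track of its workers
}

// overseer waits for every worker to finish
foreach ($children as $pid) {
    pcntl_waitpid($pid, $status);
}
```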
  • Socket pairs provide a way to do bi-directional communication. They use the socket_* functions in PHP.
    The messaging functions may be used to send and receive messages to/from other processes. They provide a simple and effective means of exchanging data between processes, without the need to set up an alternative using Unix domain sockets.
    The msg_* functions are documented underneath the semaphore functions in the PHP docs.
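A small sketch of parent/child communication over a socket pair (requires ext/sockets and ext/pcntl):

```php
<?php
$pair = array();
if (!socket_create_pair(AF_UNIX, SOCK_STREAM, 0, $pair)) {
    die(socket_strerror(socket_last_error()) . "\n");
}

$pid = pcntl_fork();
if ($pid === 0) {                          // child: write on one end
    socket_close($pair[0]);
    socket_write($pair[1], "hello from the worker");
    socket_close($pair[1]);
    exit(0);
}

socket_close($pair[1]);                    // parent: read from the other end
$msg = socket_read($pair[0], 1024);
socket_close($pair[0]);
pcntl_waitpid($pid, $status);
echo "Received: {$msg}\n";
```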
  • Semaphores may be used to provide exclusive access to resources on the current machine, or to limit the number of processes that may simultaneously use a resource.
    Shmop is an easy-to-use set of functions that allows PHP to read, write, create and delete Unix shared memory segments.
    Memcached is a highly effective caching daemon, which was especially designed to decrease database load in dynamic web applications.
    The Alternative PHP Cache or APC is a free and open opcode cache for PHP.
  • Gearman is a system to farm out work to other machines, dispatching function calls to machines that are better suited to do the work, to do work in parallel, to load balance lots of function calls, or to call functions between languages.
    ØMQ is a high-performance asynchronous messaging library aimed at use in scalable distributed or concurrent applications. It provides a message queue.
    The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using a simple programming model. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, delivering a highly available service on top of a cluster of computers, each of which may be prone to failure.
  • http://pleac.sourceforge.net/pleac_php/processmanagementetc.html
