Considerations when working with async PHP runtimes like swoole

09 April 2020


Asynchronous and non-blocking runtimes are pretty common in many programming languages, as are long-lived web apps that stay in memory and can dispatch multiple HTTP requests without being fully bootstrapped every time. This has traditionally not been the case with PHP apps.

However, several projects that bring this long-lived approach to the PHP world are starting to get some adoption.

The problem is that PHP developers are not used to this model and tend to keep working as they always have. Also, libraries might have made assumptions that no longer hold, so they don't work as expected in this context.

Some of these projects are Swoole, ReactPHP or Amphp, to name a few.

In this article I will focus on the first one and explain how to approach some of the pain points I have found after using it for a year and a half on an existing PHP project.

Introducing Swoole

The main difference swoole has compared to other similar projects is that it’s written as a native PHP extension, and needs to be installed with pecl.

This is obviously a bit cumbersome, but the benefit is that, since it's written in C++, it handles memory allocation better, which is very important for what it tries to do.

What Swoole does is run a main process that bootstraps the PHP application once and then keeps it in memory, so it can continue dispatching requests without having to load resources every time. This makes it really fast.

Those HTTP requests are handled by a set of child processes called "web workers". Swoole forks the main process's memory into each one of the workers, so each worker has a "copy" of the application in memory.

Each worker is independent, but each one of them can only handle one request at a time, so you have to increase the number of workers if you expect a lot of concurrent traffic.

Other than web workers, swoole also has another set of child processes called “task workers”, which can be used to delegate long tasks at runtime to prevent web workers from getting blocked.

Here you can see a diagram of swoole’s architecture.

[Diagram: How Swoole works]

More information can be found on Swoole's website.
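
To ground those terms, here is a minimal sketch of how such a server is booted; the port and worker counts are arbitrary examples. One main process starts the app, a pool of web workers answers requests, and a pool of task workers picks up jobs delegated with $server->task():

```php
use Swoole\Http\Server;

// Minimal sketch of a Swoole HTTP server; port and worker counts are
// arbitrary examples.
$server = new Server('0.0.0.0', 8080);

$server->set([
    'worker_num'      => 4, // web workers: each one handles one request at a time
    'task_worker_num' => 2, // task workers: take long jobs delegated by web workers
]);

// Runs in one of the web workers; the bootstrapped app stays in memory.
$server->on('request', function ($request, $response) use ($server) {
    // Delegate a slow job so this web worker is free for the next request.
    $server->task(['job' => 'send-some-email']);

    $response->end('Hello from a web worker');
});

// Runs in one of the task workers.
$server->on('task', function (Server $server, int $taskId, int $workerId, $data) {
    // ... do the long-running work here ...
    return 'done';
});

// Called back on the web worker once a task worker finishes a job.
$server->on('finish', function (Server $server, int $taskId, $result) {
    // Nothing to do in this sketch.
});

$server->start();
```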

Problems to solve

Now that we know how Swoole works, let's see some of the challenges I have faced in the past, which are usually not that obvious when you are used to thinking in terms of "blocking" and "short-lived".

Database connections are kept open

One of the first things you find out is that, since the application stays in memory, any database connection opened by a worker (either a web or a task worker) will be kept open and reused in subsequent executions or HTTP requests served by the app.

This is good in theory, but it has a side effect that is not obvious during development: the open connection will eventually expire.

Since most of the popular database abstraction layers in PHP date back to the times when PHP was just a bootstrap-on-every-request technology, they never cared too much about expired database connections, as they would simply be opened again on the next request.

However, when using swoole and the like, you have to implement some mechanism which takes care of reconnecting or gracefully recovering when this happens.

I wrote an article explaining how I solved this on a middleware-based app served with swoole and using doctrine: How to properly handle a doctrine entity manager on an expressive application served with swoole
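
As a rough illustration of that idea, here is a minimal sketch assuming Doctrine DBAL 2, where Connection::ping() is available: before handling each request, check that the long-lived connection is still alive and reconnect if the server has closed it in the meantime.

```php
use Doctrine\DBAL\Connection;

// Minimal sketch, assuming Doctrine DBAL 2.x (where Connection::ping() exists):
// verify a long-lived connection before using it, and reconnect if the database
// server has already closed it.
function ensureConnectionIsAlive(Connection $connection): void
{
    if (! $connection->ping()) {
        $connection->close();
        $connection->connect();
    }
}
```

Calling something like this at the beginning of every request (for example, from a middleware) is usually enough to avoid the dreaded "MySQL server has gone away" errors.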

Obviously, swoole (and also others) provides some tools to work with databases (like this MySQL client), which I'm sure do not have this problem (although I haven't personally tried them), but they are more limited than other widely adopted libraries.

In-memory caches don’t behave as expected

It is a frequent practice to use local in-memory caches, like APCu, to improve an application's performance in production.

Usually APCu is faster than other options, like redis or memcached, so it’s the choice when you only have one instance of your app running (meaning, you don’t have a cluster behind a load balancer).

However, even if you are running your app on just one server with swoole, because of the web worker system explained earlier you actually have several copies of your app in memory, and each one of them will have a separate, non-shared cache when using APCu.

This can lead to inconsistencies that are hard to debug, so you need to keep it in mind.

Also, since the information being loaded is kept in memory anyway, you will not notice such a big improvement from using APCu or other in-memory caches.

Notice that this problem does not happen when using distributed/centralized caching solutions, such as redis or memcached, as in that case, the caching technology is shared by all workers.
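
To make the issue more tangible, here is a hypothetical request handler that counts hits with APCu. Given the per-worker behaviour described above, the counter lives in each web worker separately, so the number you get back depends on which worker happened to serve your request:

```php
// Hypothetical handler illustrating per-worker APCu state under Swoole.
// Every web worker ends up with its own copy of the cached value, so this
// counter is per worker, not per application.
$server->on('request', function ($request, $response) {
    $hits = (int) apcu_fetch('hits'); // false (i.e. 0) on the first hit of each worker
    apcu_store('hits', ++$hits);

    // Two consecutive requests may report unrelated numbers if they happen
    // to be served by different workers.
    $response->end("Hits seen by this worker: {$hits}\n");
});
```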

No need to optimize file including/requiring

Many projects have complex configuration systems which are spread into several files. While this makes them more maintainable, it also has a performance impact when all those files need to be loaded on every request.

Because of that, there are libraries like laminas-config-aggregator that can dynamically load config from many sources and, if some conditions are met, merge the result into a single, bigger file that can be reused in later requests.

In the case of swoole, since all the files are kept in memory once they have been loaded for the first time, there's no need for this kind of optimization.
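
For reference, this is roughly what that looks like with laminas-config-aggregator. The glob pattern and paths are just examples; the second constructor argument is the cache file, which becomes mostly redundant when the merged config already lives in memory:

```php
use Laminas\ConfigAggregator\ConfigAggregator;
use Laminas\ConfigAggregator\PhpFileProvider;

// Aggregate config from several files. The second argument is an optional
// cache file: handy for classic bootstrap-on-every-request deployments, but
// mostly redundant under Swoole, where the merged config stays in memory
// after the first load. Paths here are just examples.
$aggregator = new ConfigAggregator(
    [
        new PhpFileProvider('config/autoload/{{,*.}global,{,*.}local}.php'),
    ],
    'data/cache/config-cache.php' // could simply be omitted when serving with Swoole
);

$config = $aggregator->getMergedConfig();
```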

Class autoloading happens once

This point is closely related to the previous one.

Composer offers a lot of optimization options to make class autoloading faster in production. While you will usually still want to use some of them so that the first time a class is loaded it is reasonably fast, any subsequent hits will just use the class that is already in memory.

One of the composer flags you will probably not want to use with swoole is --apcu-autoloader, for the reasons mentioned two points above.

Services should be truly stateless

This is something you should always do anyway, but it's especially important when a service instance is kept in memory between HTTP requests.

Sometimes it’s easy to end up having some small piece of internal state on a service, where we save something as a side effect of a method call, to end up using it when another one of the public methods is invoked.

While this might seem harmless when the whole app is bootstrapped on every request, it can have very dangerous side effects when the app is served with swoole, because the instance will still be there when the same web worker dispatches the next request.

Just imagine you saved some user data on a service, and it ends up being served to another user on the next request. That won’t look good.

Because of this, make sure your services are truly stateless, and if for any reason that’s not possible, at least remember to have some mechanism in place that flushes the data just before finishing the request.
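
As a contrived sketch of what can go wrong (the class and property names are made up), imagine a service like this staying alive inside a web worker:

```php
// Contrived, hypothetical example of a service with hidden internal state.
final class GreetingService
{
    /** @var string|null State that survives between requests under Swoole */
    private ?string $lastUserName = null;

    public function rememberUser(string $userName): void
    {
        $this->lastUserName = $userName;
    }

    public function greet(): string
    {
        // If the current request never called rememberUser(), this can leak
        // the name provided by a *previous* request served by the same worker.
        return sprintf('Hello, %s!', $this->lastUserName ?? 'guest');
    }
}
```

Under a bootstrap-on-every-request setup this instance is thrown away after each response; under Swoole it lives on, so either pass that data around explicitly instead of storing it, or reset it before the response is sent.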

Hot reloading for dev envs is not that good

One thing we usually take for granted is that we can modify a PHP file and just by making a new request, the new code will be executed. However, when the code is kept in memory, it’s not that straightforward.

When you are developing a project which is served with swoole, you will have to restart the server every time to get your changes applied.

Swoole has a mechanism to do a lighter server reload, which is faster. You can find the docs on its website.

The problem with this is that you need to implement the mechanism to invoke that when a file is changed, or use some library that does it for you.
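
As an example of what such a mechanism could look like (a simple sketch, assuming $server is your Swoole\Http\Server instance and your code lives under src/), you can attach a custom process that polls file modification times and triggers the soft reload:

```php
use Swoole\Process;

// Simple sketch of a file watcher for development: a custom process that
// polls modification times under src/ and asks Swoole for a soft reload of
// the workers when something changes. Paths and the polling interval are
// arbitrary examples.
$server->addProcess(new Process(function () use ($server) {
    $lastReload = time();

    while (true) {
        $files = new RecursiveIteratorIterator(
            new RecursiveDirectoryIterator(__DIR__ . '/src', FilesystemIterator::SKIP_DOTS)
        );

        foreach ($files as $file) {
            if ($file->getMTime() > $lastReload) {
                $lastReload = time();
                $server->reload(); // soft reload: only the workers are respawned
                break;
            }
        }

        sleep(2);
    }
}));
```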

Also, I have noticed that even after calling that and seeing the server reload, it does not always have the expected effect, and you still end up having to do a hard restart. It's something I don't fully understand and still need to investigate a bit further.

Conclusion

As always happens when new technologies appear, it takes some time to get used to them, but I strongly suggest you give Swoole or any other non-blocking runtime a try, because it can make your apps super fast.

Just remember to have all of the above in mind ;-)

That said, here you can find some libraries that will make your life easier when trying to serve existing projects with swoole, without having to couple your code to it, so you can continue using your favourite framework:

Also, here you can find some more articles talking about this topic: