So, you want to place a large number of low-volume sites on a single server. To do that, there are three factors to take into consideration: memory, disk space, and CPU.
By far, the number one factor to take into consideration is memory usage. That's because we set up each site to use a generous amount of memory by default, which means you will likely need to reduce the amount of memory each site uses in order to place more of them on the same server.
When we provision a site, we automatically provide it with two active PHP workers and one spare worker. We allow the number of workers to scale up to 5.
Each worker is given a maximum of 128 MB of memory, which means that, with overhead, each worker uses approximately 150 MB of memory. Since three workers are always running (the two active workers plus the spare), a single site can consume up to 450 MB of memory. And that can go all the way up to 750 MB if all 5 workers are in use and each is consuming the maximum amount of memory.
In the real world, on sites with a low volume of traffic, not all PHP workers will be active, and those that are will likely not be using the maximum amount of memory. So on a 1 GB server you will likely find a site using as little as 150 MB of memory, unless it is flooded with traffic and running scripts that consume a lot of memory.
BUT, if that does happen, Linux will likely kill the process that is using the most memory. That is not going to be a PHP worker, since each one only uses up to 150 MB of memory. Instead, it is likely to kill your database server (MySQL) or the web server (NGINX).
And that is going to cause all kinds of issues for ALL sites on the server.
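If you want to confirm whether the out-of-memory killer has already struck, the kernel log will usually say so. This is a generic Linux check, not anything specific to our stack:

sudo dmesg -T | grep -i "out of memory"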
Instead, it is better to reduce the maximum amount of memory that each site can use.
To reduce the maximum memory that each PHP worker can use, you can lower the memory limit in php.ini. You'll need to do this for each version of PHP you use, as follows:
sudo nano /etc/php/7.4/fpm/php.ini
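Inside php.ini, the directive to lower is memory_limit. The value below is purely illustrative, so pick a limit that suits your own sites:

memory_limit = 64M ; example value only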
sudo systemctl restart php7.4-fpm
Note that these memory limits will not work on sites running WooCommerce or heavy-duty themes and page builders. But then again, those aren't the kinds of low-volume sites that are suitable for stuffing dozens onto a single server.
And, even with this change, each site can still end up using up to 650 MB of memory if all five PHP workers are maxed out. So the next step is to reduce the number of PHP workers assigned to each site.
Unfortunately, this step has to be done separately for each site.
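On a standard Debian/Ubuntu PHP-FPM layout, each site's worker settings live in its pool file under /etc/php/7.4/fpm/pool.d/. The exact file name depends on how the site was provisioned, so treat the path below as an example only. Open the file and set the following values:

sudo nano /etc/php/7.4/fpm/pool.d/<yourdomain.com>.conf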
pm.max_children = 2
pm.start_servers = 1
pm.min_spare_servers = 1
pm.max_spare_servers = 1
sudo systemctl restart php7.4-fpm
With these changes, each site will now use a maximum of around 260 MB of memory (two workers at roughly 130 MB each).
If all of your sites are running the same version of PHP, you can deactivate the unused versions.
You can do this under the SERVICES tab for the server.
Scroll down to the PHP PROCESSES section and click the DEACTIVATE button next to your unused PHP versions.
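If you'd like to double-check from the command line which PHP-FPM services are actually running before deactivating anything, this generic systemd command will list them (it is not specific to our panel):

systemctl list-units --type=service 'php*fpm*'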
The easiest way to reduce the amount of memory each site uses at the web-server level is to modify maxConnections and maxSSLConnections in the server-level configuration file. This file is located at:
/usr/local/lsws/conf/httpd_config.conf
Do not set these values too low, though. Start with 15 for each and adjust up or down from there.
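Assuming the stock plain-text OpenLiteSpeed config format, these two entries sit inside the tuning block of httpd_config.conf. With the suggested starting values they would look something like this:

tuning {
  maxConnections          15
  maxSSLConnections       15
  # leave the rest of the tuning block unchanged
}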
An additional item you can ADD to the file is the httpdworkers limit. To find the number of httpd workers currently in use, you can run the following command:
ps -ef | grep openlitespeed
This will output the main worker process and the child workers currently in use – you'll probably see at least one child worker per CPU.
You can limit the workers by adding something like this close to the top of the file:
httpdworkers 2
You can also modify maxConnections and maxSSLConnections for each site in the site-level vhost configuration file. This file is located at:
/usr/local/lsws/conf/vhosts/<yourdomain.com>/vhconf.conf
The values in here should not exceed the values in the server-level config file.
Note: If you install the OpenLiteSpeed Console you’ll be able to change these values without needing to use the command line. Just make sure you uninstall the console when you are finished with your tweaks.
For most cloud servers, the next thing you might run up against is disk space IF you enable our backups. This is because we keep a certain number of backups on the local disk.
If your sites use a lot of disk space then your backups will use a lot of disk space, so you'll need to limit the number of backups you keep locally for each site by setting RETENTION DAYS to a low number. Or you can set it to -1, which will not keep any backups locally (not recommended).
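To see where disk space is actually going before and after adjusting retention, a couple of ordinary Linux commands are enough. The /var/www path is just an assumption, so point du at wherever your sites and backups actually live:

df -h
sudo du -sh /var/www/*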
It's easy to overwhelm your CPU even if each site has a very low volume of traffic. Here's an example:
Assume each site has Wordfence installed plus a backup plugin. Wordfence will respond to every hit on the site – which includes bots and attacks. That uses CPU. Then your backup plugin fires at midnight to back up a site. But it also fires on all the other sites on your server because you set them all to back up at the same time. That's a big oops!
So, think carefully about the processes that are running on your server and WHEN they are running in order to use your CPU resources effectively. Standard practice that is fine for a few sites will not necessarily translate to a server hosting 20 or 30 sites!
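One practical way to apply this is to stagger scheduled jobs in system cron instead of letting every site fire at the same minute. The sketch below assumes WP-CLI is installed, that a file such as /etc/cron.d/wp-sites drives WP-Cron, and that each site lives under /var/www/<yourdomain.com>; all of those details may differ on your server:

# Run each site's due WP-Cron events at staggered times instead of all at midnight
5 0 * * *  www-data  wp cron event run --due-now --path=/var/www/site1.com
20 0 * * * www-data  wp cron event run --due-now --path=/var/www/site2.com
35 0 * * * www-data  wp cron event run --due-now --path=/var/www/site3.com

If you go this route, you'd normally also disable WordPress's built-in cron trigger (the DISABLE_WP_CRON constant in wp-config.php) so the work only happens on the schedule you set.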