Managing system resources is a vital task for any Linux administrator, especially when dealing with background tasks or runaway processes that threaten to consume all available CPU cycles. This article focuses on two primary tools: cpulimit, which restricts the CPU usage of a process by pausing it at intervals, and cgroups (Control Groups), a powerful kernel feature used to allocate resources among hierarchical groups of processes. These tools allow you to maintain server responsiveness even under heavy workloads.

Whether you are running a high-intensity backup, a complex script, or managing a multi-tenant environment on a SolusVM 2 node, preventing a single process from hogging the CPU is essential. Here are the most effective methods to cap CPU usage on your Linux server.

1. Using cpulimit to Restrict a Running Process

The cpulimit utility is an excellent choice for desktop or simple server tasks. It works by sending SIGSTOP and SIGCONT signals to a process to keep its usage within a specific percentage. Note that the percentage is per CPU core; if you have a quad-core system, the total possible usage is 400%.

cpulimit -p 1234 -l 50
  • -p 1234 → targets a specific process ID (PID).
  • -l 50 → limits the CPU usage to 50 percent of a single core.
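The stop-and-resume mechanism cpulimit relies on can be sketched with plain kill signals. This is an illustrative duty-cycle only; the worker command, loop count, and sleep intervals below are arbitrary values chosen for the demo, not cpulimit's actual internals.

```shell
# Alternately pause and resume a CPU-bound worker so it only runs
# part of the time -- the same principle cpulimit automates.
sh -c 'while :; do :; done' &   # stand-in CPU-bound worker
pid=$!
for i in 1 2 3; do
  kill -STOP "$pid"   # paused: consumes no CPU
  sleep 0.1
  kill -CONT "$pid"   # resumed: consumes CPU again
  sleep 0.1
done
kill -TERM "$pid"
wait "$pid" 2>/dev/null || true   # reap the worker
```

In effect, the longer the STOP interval relative to the CONT interval, the lower the average CPU usage of the target process.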

2. Limiting a Process by Name

If you do not know the PID but know the name of the executable (for example, a backup script), you can target the process by its binary name.

cpulimit -e backup.sh -l 25
  • -e backup.sh → targets the process by the name of the executable file.
  • -l 25 → restricts the specified executable to 25 percent CPU usage.
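The name-to-PID lookup that cpulimit performs internally can also be done by hand with pgrep, which is useful for verifying which process would be matched. The short-lived sleep process below is just a stand-in target for the demo.

```shell
# Start a stand-in background process, then find its PID by name.
sleep 30 &
expected=$!
found=$(pgrep -n -x sleep)   # -n: newest match, -x: exact name match
kill "$expected"
wait "$expected" 2>/dev/null || true
```

Note that if several processes share the same name, cpulimit and pgrep may not pick the one you expect, so targeting by PID is safer on busy systems.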

3. Launching a New Process with a Limit

You can also use cpulimit as a wrapper to start a program with a predefined limit already in place.

cpulimit -l 30 -- /usr/bin/php script.php
  • -l 30 → sets the maximum CPU usage at 30 percent.
  • -- → signals the end of cpulimit options and the start of the command to be executed.
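The `--` separator is ordinary option-parsing behavior rather than anything specific to cpulimit: everything after it is passed to the wrapped command verbatim, even if it looks like an option flag. A quick sketch using `env` as a convenient stand-in wrapper:

```shell
# "-l 30" after "--" goes to the wrapped command (echo), not to env.
out=$(env -- echo -l 30)
printf '%s\n' "$out"
```

Without `--`, a wrapper could mistake the wrapped command's flags for its own options, so it is good practice to include the separator whenever the command being launched takes arguments.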

4. Using Systemd Scopes (cgroups v2)

On modern distributions like AlmaLinux 9 or 10, systemd-run leverages cgroups v2 to manage resources more cleanly than signal-based tools. This method is highly recommended for production server environments.

systemd-run --scope -p CPUQuota=20% /usr/bin/python3 app.py
  • --scope → runs the command in a transient scope unit.
  • -p CPUQuota=20% → assigns a hard limit of 20 percent of a single CPU's time to the process group (values above 100% grant more than one full CPU).
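Under the hood, the quota is written to the scope's cgroup v2 cpu.max file as "<quota> <period>" in microseconds. A quick arithmetic sketch, assuming systemd's default 100 ms accounting period:

```shell
# CPUQuota=20% with a 100 ms period allows 20 ms of CPU time
# per 100 ms window, i.e. cpu.max would contain "20000 100000".
percent=20
period_us=100000
quota_us=$(( percent * period_us / 100 ))
printf '%s %s\n' "$quota_us" "$period_us"
```

Because the kernel enforces this at the scheduler level, the process group is throttled smoothly rather than being repeatedly stopped and resumed as with cpulimit.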

5. Adjusting Priority with the nice Command

While nice does not set a hard percentage limit, it changes the priority of a process. This tells the Linux kernel to give the process fewer CPU cycles when other, more important tasks need them. It is the "polite" way to run background tasks.

nice -n 19 /usr/bin/tar -czf backup.tar.gz /data
  • nice → invokes the utility to modify process priority.
  • -n 19 → sets the "niceness" level to 19, which is the lowest possible priority.
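Unlike the tools above, nice changes only scheduling priority, so the wrapped command still runs normally and to completion. A child started with nice -n 19 can confirm its own niceness, since running nice with no arguments prints the current value:

```shell
# Start a child at the lowest priority and have it report its niceness.
# Niceness is clamped to the 19 maximum, so the result is 19 when the
# shell itself is at the usual default of 0 or above.
val=$(nice -n 19 nice)   # inner "nice" prints the child's niceness
echo "$val"
```

On an otherwise idle system a niced process still gets the full CPU; the lower priority only matters when other tasks compete for cycles, which is exactly why it suits background jobs like backups.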

Final thoughts

Choosing the right tool depends on your specific environment. For quick, manual intervention, cpulimit is simple and effective. However, for a professional web hosting environment or automated deployments, leveraging systemd and cgroups provides a more stable and integrated solution. By implementing these limits, you ensure that critical services—like your web server or database—remain responsive regardless of what background tasks are running. Proper resource management is the hallmark of a well-tuned Cloudfanatic server.
