Is there a way to prevent users from locking up a linux machine with code something along the lines of:
#include <unistd.h>  /* fork() */

int main(int argc, char **argv)
{
    while (1)
        fork();
}
The computers in question are in a computer lab, so I can’t exactly disallow compiling… but is there some way of ensuring such processes only consume a certain portion of the system resources? The importance of this issue is compounded by the fact that any user can ssh into any of the systems, so really the only reason this hasn’t become a problem yet is most users are more or less unfamiliar with C or other low-level languages.
Still, I’d like to nip this one in the bud…
You can limit the total number of concurrent processes that each user is allowed to create. I think it's in /etc/security/limits.conf, and the nproc field is what you need to set. Update: Just looked it up here and it appears my memory isn't failing me after all 🙂
The simplest way is to enter:

    *       hard    nproc   50

which will limit all users to 50 processes. You may want to have a little more fine-grained control than that.
Alternatively, you can use ulimit to enforce the limit if limits.conf is not available on your system. You will have to ensure that all started processes are restricted by it, for example by putting it into /etc/profile and all other possible entry points:

    ulimit -u 50