Is my ulimit exceeded
Under high load it can happen from time to time that network connections fail, although everything works perfectly under low load. The reason can be the ulimit set on a process. A ulimit restricts a process from opening more files (and network connections) than a certain number.
To display your ulimit settings, use the command ulimit:
 # ulimit -a
 core file size          (blocks, -c) unlimited
 data seg size           (kbytes, -d) unlimited
 file size               (blocks, -f) unlimited
 pending signals                 (-i) 32768
 max locked memory       (kbytes, -l) 32
 max memory size         (kbytes, -m) unlimited
 open files                      (-n) 1024
 pipe size            (512 bytes, -p) 8
 POSIX message queues     (bytes, -q) 819200
 stack size              (kbytes, -s) 8192
 cpu time               (seconds, -t) unlimited
 max user processes              (-u) 32768
 virtual memory          (kbytes, -v) unlimited
 file locks                      (-x) unlimited
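As a side note, with bash you can also query a single limit directly: -n selects the open-files limit, and -S or -H pick the soft or hard value (this assumes bash; the option letters can differ in other shells):

 # ulimit -Sn    # soft limit on open files for this shell
 # ulimit -Hn    # hard limit on open files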
You can permanently set the limits in /etc/security/limits.conf. You will have to re-login afterwards. To set the number of file descriptors for all users, the syntax is:
 * hard nofile 10000
 * soft nofile 10000
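For a quick test you can also raise the soft limit only for the current shell and its child processes, without editing limits.conf or re-logging in; this only works up to the hard limit. A minimal sketch, assuming bash and a hard limit of at least 10000:

 # ulimit -n 10000    # raise the soft limit for this shell session
 # ulimit -n          # verify the new value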
In my experience the most prominent ulimit is -n, the number of open files allowed per process. I can also tell that the most interesting questions, which are never answered in the man pages, are:
- What is my ulimit for a given process?
- How much of the ulimit is already used up?
- Have there been problems with the limit being set too small?
What is my ulimit for a given process?
Let's take firefox as an example:
 # ps -A | grep firefox
 10975 ?        00:00:01 firefox
 # cd /proc/10975/
 # cat limits
 Limit                     Soft Limit           Hard Limit           Units
 Max cpu time              unlimited            unlimited            seconds
 Max file size             unlimited            unlimited            bytes
 Max data size             unlimited            unlimited            bytes
 Max stack size            8388608              unlimited            bytes
 Max core file size        0                    unlimited            bytes
 Max resident set          unlimited            unlimited            bytes
 Max processes             11848                11848                processes
 Max open files            1024                 4096                 files
 Max locked memory         65536                65536                bytes
 Max address space         unlimited            unlimited            bytes
 Max file locks            unlimited            unlimited            locks
 Max pending signals       11848                11848                signals
 Max msgqueue size         819200               819200               bytes
 Max nice priority         0                    0
 Max realtime priority     0                    0
 Max realtime timeout      unlimited            unlimited            us
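If you are only interested in the open-files row, you do not need to read the whole file; a one-liner like this works as well (assuming pidof returns exactly one PID for firefox):

 # grep "Max open files" /proc/$(pidof firefox)/limits
 Max open files            1024                 4096                 files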
How much of the ulimit is already used up?
Let's see for firefox:
 # ps -A | grep firefox
 10975 ?        00:00:03 firefox
 # cd /proc/10975/fd
 # ls -1 | wc -l
 55
OK, so firefox is consuming 55 of its 1024 allowed open file descriptors; there is plenty of headroom left.
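To see usage and limit side by side, a small sketch like this can help (again assuming a single firefox PID; $4 in the awk expression is the soft-limit column of /proc/<pid>/limits, and the numbers are just the ones from the example above):

 # pid=$(pidof firefox)
 # echo "$(ls /proc/$pid/fd | wc -l) of $(awk '/Max open files/ {print $4}' /proc/$pid/limits) file descriptors in use"
 55 of 1024 file descriptors in use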
Test case
I wrote a simple C program that does nothing but open files:
main.c
 #include <stdio.h>
 
 int main()
 {
     /* open the same file over and over without ever closing it;
        print 1 whenever fopen() fails, 0 while it still succeeds */
     for (int i = 0; i <= 2048; i++)
     {
         printf("%d", (fopen("testfile", "wb") == 0));
     }
     /* keep the process alive so /proc/<pid>/fd can be inspected */
     while (1) {};
 }
Compile this file with the command
g++ main.c
Then run the program and send it to the background:
./a.out &
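Since the hard limit in this example is 4096, you can also raise the soft limit for the shell first; with a soft limit of 4096 all 2049 fopen() calls should succeed and the program prints only zeros (a sketch, assuming bash):

 # ulimit -n 4096
 # ./a.out &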
Now find out the process ID:
 # ps -A | grep a.out
 29232 pts/3    00:04:15 a.out
Now go into the process' file descriptor directory:
cd /proc/29232/fd
And count the number of files:
 # ls -1 | wc -l
 1024
strace'ing it gives me:
open("testfile", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 1023 open("testfile", O_WRONLY|O_CREAT|O_TRUNC, 0666) = -1 EMFILE (Too many open files)