Background:How caching works
Do you know this? The longer you work with your Linux system, the higher its memory consumption gets, yet performance is not negatively affected. Looking more closely, you see that only the cache has grown. This article presents a program that allocates memory until it terminates with a std::bad_alloc. Before terminating, it has of course eaten up all your cache and turned it into user memory. When the program exits, the memory is freed and the cache is empty.
main.cpp
#include <iostream>
using namespace std;

// allocate (and leak) one int per call and print its address
void pollute()
{
  int* i = new int();
  cout << i << " ";
}

int main()
{
  // keep allocating until new throws std::bad_alloc
  while (true)
  {
    pollute();
  }
}
Compile, link and run it:
g++ main.cpp && ./a.out
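Allocating a single int per call takes a while to fill several gigabytes of RAM. If you are impatient, here is a variant (a sketch of my own, not part of the article's main.cpp) that grabs memory in 1 MiB chunks and touches every page so the allocations really come out of RAM. Depending on your overcommit settings, the kernel's OOM killer may kill the process instead of new throwing std::bad_alloc.
#include <cstddef>
#include <cstring>
#include <iostream>
#include <new>
#include <vector>

int main()
{
  const std::size_t chunk = 1024 * 1024;     // 1 MiB per allocation
  std::vector<char*> blocks;
  try
  {
    while (true)
    {
      char* p = new char[chunk];
      std::memset(p, 1, chunk);              // touch every page so it is really backed by RAM
      blocks.push_back(p);
    }
  }
  catch (const std::bad_alloc&)
  {
    std::cout << "allocated " << blocks.size() << " MiB before std::bad_alloc" << std::endl;
  }
  for (std::size_t i = 0; i < blocks.size(); ++i)
    delete[] blocks[i];                      // hand the memory back before exiting
  return 0;
}
Compile and run it the same way as main.cpp.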
How can you use this?
Imagine you are doing a file system read benchmark. Your system is fresh:
tweedleburg:~ # free
             total       used       free     shared    buffers     cached
Mem:       4053216     795664    3257552          0        352      54624
-/+ buffers/cache:      740688    3312528
Swap:            0          0          0
You have 54624 kB in the cache (free reports kilobytes, not bytes), plus 352 kB of buffers. Now read a file:
tweedleburg:~ # dd if=wine-1.0-rc2.tar of=/dev/null
197360+0 records in
197360+0 records out
101048320 bytes (101 MB) copied, 2.278 s, 44.4 MB/s
You get 44.4 MB/s for disk reads (101048320 bytes / 2.278 s ≈ 44.4 MB/s; dd counts 1 MB as 10^6 bytes), a realistic result. Now read the same file a second time:
tweedleburg:~ # dd if=wine-1.0-rc2.tar of=/dev/null
197360+0 records in
197360+0 records out
101048320 bytes (101 MB) copied, 0.190445 s, 531 MB/s
The second time you get 531 MB/s for disk reads, an unrealistically good result. The culprit is the cache, which kept the file's content from the first read:
tweedleburg:~ # free
             total       used       free     shared    buffers     cached
Mem:       4053216     886360    3166856          0        528     145748
-/+ buffers/cache:      740084    3313132
Swap:            0          0          0
You want to clear the caches.
Note: To drop your caches, you can also use /proc/sys/vm/drop_caches
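For example, as root (the sync first writes out dirty pages so they can be dropped; echoing 3 drops the page cache plus dentries and inodes, echoing 1 drops only the page cache):
tweedleburg:~ # sync; echo 3 > /proc/sys/vm/drop_caches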
You use my program to clear the caches:
tweedleburg:~ # ./a.out >/dev/null
terminate called after throwing an instance of 'St9bad_alloc'
  what():  std::bad_alloc
Aborted
My program allocates memory until it cannot get any more, evicting all caches in the process. Then it aborts and its memory is freed:
tweedleburg:~ # free
             total       used       free     shared    buffers     cached
Mem:       4053216     794916    3258300          0        344      57360
-/+ buffers/cache:      737212    3316004
Swap:            0          0          0
You repeat the file read:
tweedleburg:~ # dd if=wine-1.0-rc2.tar of=/dev/null
197360+0 records in
197360+0 records out
101048320 bytes (101 MB) copied, 2.21617 s, 45.6 MB/s
and get a realistic result again. To show you I am not making this up, we repeat the disk read without clearing the caches and once more get an unrealistically good result:
tweedleburg:~ # dd if=wine-1.0-rc2.tar of=/dev/null
197360+0 records in
197360+0 records out
101048320 bytes (101 MB) copied, 0.165862 s, 609 MB/s
You see there is a big difference between a cached and an uncached read; but there is also a difference from a direct read, which bypasses the page cache entirely:
tweedleburg:~ # dd iflag=direct if=wine-1.0-rc2.tar of=/dev/null
197360+0 records in
197360+0 records out
101048320 bytes (101 MB) copied, 23.6831 s, 4.3 MB/s
Reading with dd's direct flag yields about the same performance as reading with an empty cache; you just have to adjust the block size (with dd's default block size of 512 bytes, every tiny request goes to the disk individually, which is why the run above only reached 4.3 MB/s):
tweedleburg:~ # dd iflag=direct if=wine-1.0-rc2.tar bs=1024k of=/dev/null
96+1 records in
96+1 records out
101048320 bytes (101 MB) copied, 2.42799 s, 41.6 MB/s
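If you want cache-bypassing reads inside your own benchmark program instead of dd, here is a minimal sketch (my own assumption of how to do it, not part of the article) that reads a file with O_DIRECT on Linux. O_DIRECT requires an aligned buffer, hence the posix_memalign; the file name and block size are just examples.
// build: g++ direct_read.cpp (g++ defines _GNU_SOURCE, which makes O_DIRECT visible)
#include <fcntl.h>
#include <unistd.h>
#include <cstdlib>
#include <iostream>

int main()
{
  const std::size_t blocksize = 1024 * 1024;   // 1 MiB per read, like bs=1024k above
  void* buf = NULL;
  // O_DIRECT needs an aligned buffer; 4096 matches typical page and sector sizes
  if (posix_memalign(&buf, 4096, blocksize) != 0)
    return 1;

  int fd = open("wine-1.0-rc2.tar", O_RDONLY | O_DIRECT);
  if (fd < 0)
  {
    std::cerr << "open failed" << std::endl;
    return 1;
  }

  long long total = 0;
  ssize_t n;
  while ((n = read(fd, buf, blocksize)) > 0)   // each read goes straight to the disk
    total += n;

  std::cout << total << " bytes read" << std::endl;
  close(fd);
  free(buf);
  return 0;
}
This should read at roughly the same speed as dd iflag=direct bs=1024k, since both bypass the page cache and issue large requests.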