How Linux Uses Memory: A Detailed Explanation [En]

On December 30, 2010, in linux, tips, by netoearth

A very detailed article on how Linux uses memory:
Linux ate my ram!!

What’s going on?

Linux is borrowing unused memory for disk caching. This makes it look like you are low on memory, but you are not! Everything is fine!

Why is it doing this?

Disk caching makes the system much faster! There are no downsides, except for confusing newbies. It does not take memory away from applications in any way, ever!

What if I want to run more applications?

If your applications want more memory, they just take back a chunk that the disk cache borrowed. Disk cache can always be given back to applications immediately! You are not low on ram!

Do I need more swap?

No, disk caching only borrows the ram that applications don’t currently want. It will not use swap. If applications want more memory, they just take it back from the disk cache. They will not start swapping.

How do I stop Linux from doing this?

You can’t disable disk caching. The only reason anyone ever wants to disable disk caching is because they think it takes memory away from their applications, which it doesn’t! Disk cache makes applications load faster and run smoother, but it NEVER EVER takes memory away from them! Therefore, there’s absolutely no reason to disable it!

Why do top and free say all my ram is used if it isn’t?

This is just a misunderstanding of terms. Both you and Linux agree that memory taken by applications is “used”, while memory that isn’t used for anything is “free”. But what do you call memory that is both used for something and available for applications?

You would call that “free”, but Linux calls it “used”.

Memory that is                                       You’d call it   Linux calls it
taken by applications                                Used            Used
available for applications, and used for something   Free            Used
not used for anything                                Free            Free

This “something” is what top and free call “buffers” and “cached”. Since your terminology and Linux’s differ, you think you are low on ram when you’re not.

How do I see how much free ram I really have?

To see how much ram is free to use for your applications, run free -m and look at the row that says “-/+ buffers/cache” in the column that says “free”. That is your answer in megabytes:

$ free -m
             total       used       free     shared    buffers     cached
Mem:          1504       1491         13          0         91        764
-/+ buffers/cache:        635        869
Swap:         2047          6       2041
$

If you don’t know how to read the numbers, you’ll think the ram is 99% full when it’s really just 42%.
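The “-/+ buffers/cache” row is just arithmetic on the first row. As a sketch, you can compute the same “really free” figure yourself from /proc/meminfo (this assumes a kernel that reports MemFree, Buffers and Cached there; newer kernels also provide a ready-made MemAvailable estimate):

```shell
# "Really free" memory in MB, the way free's "-/+ buffers/cache" row
# computes it: MemFree + Buffers + Cached (all reported in kB).
awk '/^(MemFree|Buffers|Cached):/ {sum += $2}
     END {printf "%d MB free for applications\n", sum/1024}' /proc/meminfo
```

In the transcript above this gives 13 + 91 + 764 ≈ 869 MB, matching the “free” column of the “-/+ buffers/cache” row.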

How can I verify these things?

Experiments and fun with the Linux disk cache

Hopefully you are now convinced that Linux didn’t just eat your ram. Here are some interesting things you can do to learn how the disk cache works.

Effects of disk cache on application memory allocation

Since I’ve already promised that disk cache doesn’t prevent applications from getting the memory they want, let’s start with that. Here is a C app (munch.c) that gobbles up as much memory as it can, or to a specified limit:

#include <stdlib.h>
#include <stdio.h>
#include <string.h>

/* Allocate memory 1MB at a time, up to an optional limit in MB.
 * memset touches every page, so the kernel must actually back each
 * allocation with real memory instead of just reserving address space. */
int main(int argc, char** argv) {
    int max = -1;            /* -1 means no limit */
    int mb = 0;
    char* buffer;

    if(argc > 1)
        max = atoi(argv[1]);

    while((buffer = malloc(1024*1024)) != NULL && mb != max) {
        memset(buffer, 0, 1024*1024);
        mb++;
        printf("Allocated %d MB\n", mb);
    }

    return 0;
}

Running out of memory isn’t fun, but the OOM killer should kill just this process, and hopefully the rest of the system will be unperturbed. We’ll definitely want to disable swap for this, or the app will gobble up that as well.

$ sudo swapoff -a

$ free -m
             total       used       free     shared    buffers     cached
Mem:          1504       1490         14          0         24        809
-/+ buffers/cache:        656        848
Swap:            0          0          0

$ gcc munch.c -o munch

$ ./munch
Allocated 1 MB
Allocated 2 MB
(...)
Allocated 877 MB
Allocated 878 MB
Allocated 879 MB
Killed

$ free -m
             total       used       free     shared    buffers     cached
Mem:          1504        650        854          0          1         67
-/+ buffers/cache:        581        923
Swap:            0          0          0

$

Even though it said 14MB “free”, that didn’t stop the application from grabbing 879MB. Afterwards, the cache is pretty empty, but it will gradually fill up again as files are read and written. Give it a try.
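You can watch the cache refill as a quick sketch. Any readable file will do; /etc/services is just an example path:

```shell
# Reading a file pulls its pages into the disk cache, so the "Cached"
# value in /proc/meminfo (reported in kB) reflects the read afterwards.
before=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
cat /etc/services > /dev/null
after=$(awk '/^Cached:/ {print $2}' /proc/meminfo)
echo "Cached before: ${before} kB, after: ${after} kB"
```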

Effects of disk cache on swapping

I also said that disk cache won’t cause applications to use swap. Let’s try that as well, with the same ‘munch’ app as in the last experiment. This time we’ll run it with swap on, and limit it to a few hundred megabytes:

$ free -m
             total       used       free     shared    buffers     cached
Mem:          1504       1490         14          0         10        874
-/+ buffers/cache:        605        899
Swap:         2047          6       2041

$ ./munch 400
Allocated 1 MB
Allocated 2 MB
(...)
Allocated 399 MB
Allocated 400 MB

$ free -m
             total       used       free     shared    buffers     cached
Mem:          1504       1090        414          0          5        485
-/+ buffers/cache:        598        906
Swap:         2047          6       2041

munch ate 400MB of ram, which was taken from the disk cache without resorting to swap. Likewise, we can fill the disk cache again and it will not start eating swap either. If you run “watch free -m” in one terminal, and “find . -type f -exec cat {} + > /dev/null” in another, you can see that “cached” will rise while “free” falls. After a while it tapers off, but swap is never touched.[1]

Clearing the disk cache

For experimentation, it’s very convenient to be able to drop the disk cache. For this, we can use the special file /proc/sys/vm/drop_caches. By writing 3 to it, we can clear most of the disk cache:

$ free -m
             total       used       free     shared    buffers     cached
Mem:          1504       1471         33          0         36        801
-/+ buffers/cache:        633        871
Swap:         2047          6       2041

$ echo 3 | sudo tee /proc/sys/vm/drop_caches 
3

$ free -m
             total       used       free     shared    buffers     cached
Mem:          1504        763        741          0          0        134
-/+ buffers/cache:        629        875
Swap:         2047          6       2041

Notice how “buffers” and “cached” went down, free mem went up, and free+buffers/cache stayed the same.
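The value 3 is actually a combination. As far as I know (the kernel’s sysctl/vm documentation describes this), the file accepts three values, and since it only releases clean pages it helps to sync first:

```shell
# /proc/sys/vm/drop_caches accepts (writing requires root):
#   1 - drop the page cache
#   2 - drop reclaimable slab objects (dentries and inodes)
#   3 - drop both of the above
# Only clean pages are released, so flush dirty pages first, e.g.:
#   sync
#   echo 1 | sudo tee /proc/sys/vm/drop_caches   # page cache only
# The control file itself:
ls -l /proc/sys/vm/drop_caches
```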

Effects of disk cache on load times

Let’s make two test programs, one in Python and one in Java. Python and Java both come with pretty big runtimes, which have to be loaded in order to run the application. This is a perfect scenario for disk cache to work its magic.

$ cat hello.py
print "Hello World! Love, Python"

$ cat Hello.java
class Hello {
    public static void main(String[] args) throws Exception {
        System.out.println("Hello World! Regards, Java");
    }
}

$ javac Hello.java

$ python hello.py
Hello World! Love, Python

$ java Hello
Hello World! Regards, Java

$

Our hello world apps work. Now let’s drop the disk cache, and see how long it takes to run them.

$ echo 3 | sudo tee /proc/sys/vm/drop_caches
3

$ time python hello.py
Hello World! Love, Python

real	0m1.026s
user	0m0.020s
sys	    0m0.020s

$ time java Hello
Hello World! Regards, Java

real	0m2.174s
user	0m0.100s
sys	    0m0.056s

$

Wow. 1 second for Python, and 2 seconds for Java? That’s a lot just to say hello. However, now all the files required to run them will be in the disk cache, so they can be fetched straight from memory. Let’s try again:

$ time python hello.py
Hello World! Love, Python

real    0m0.022s
user    0m0.016s
sys     0m0.008s

$ time java Hello
Hello World! Regards, Java

real    0m0.139s
user    0m0.060s
sys     0m0.028s

$

Yay! Python now runs in just 22 milliseconds, while Java takes 139ms. That’s roughly a 95% improvement! This works the same way for every application!

Effects of disk cache on file reading

Let’s make a big file and see how disk cache affects how fast we can read it. I’m making a 200MB file, but if you have less free ram, you can adjust it.

$ echo 3 | sudo tee /proc/sys/vm/drop_caches
3

$ free -m
             total       used       free     shared    buffers     cached
Mem:          1504        546        958          0          0         85
-/+ buffers/cache:        461       1043
Swap:         2047          6       2041

$ dd if=/dev/zero of=bigfile bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 6.66191 s, 31.5 MB/s

$ ls -lh bigfile
-rw-r--r-- 1 vidar vidar 200M 2009-04-25 12:30 bigfile

$ free -m
             total       used       free     shared    buffers     cached
Mem:          1504        753        750          0          0        285
-/+ buffers/cache:        468       1036
Swap:         2047          6       2041

$ 

Since the file was just written, it will go in the disk cache. The 200MB file caused a 200MB bump in “cached”. Let’s read it, clear the cache, and read it again to see how fast it is:

$ time cat bigfile > /dev/null

real    0m0.139s
user    0m0.008s
sys     0m0.128s

$ echo 3 | sudo tee /proc/sys/vm/drop_caches
3

$ time cat bigfile > /dev/null

real    0m8.688s
user    0m0.020s
sys     0m0.336s

$

That’s more than fifty times faster!

Conclusions

The Linux disk cache is very unobtrusive. It uses spare memory to greatly increase disk access speeds, without taking any memory away from applications. A fully used store of ram on Linux is a sign of efficient hardware use, not a warning sign.

LinuxAteMyRam.com was presented by VidarHolen.net

1. This is somewhat oversimplified. While newly allocated memory will always be taken from the disk cache instead of swap, Linux can be configured to preemptively swap out other unused applications in the background to free up memory for cache. This is tunable through the ‘swappiness’ setting, accessible through /proc/sys/vm/swappiness.

A server might want to swap out unused apps to speed up disk access of running ones (making the system faster), while a desktop system might want to keep apps in memory to prevent lag when the user finally uses them (making the system more responsive). This is the subject of much debate.
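For reference, a quick way to inspect the knob mentioned above (the sysctl name vm.swappiness maps to the same file; the example value below is just an illustration):

```shell
# swappiness ranges 0-100 on classic kernels; higher means the kernel
# swaps out application memory more eagerly in favor of disk cache.
cat /proc/sys/vm/swappiness
# To change it until the next reboot (requires root):
#   sudo sysctl vm.swappiness=10
```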
