This is a relief to find! I just looked at htop and panicked over the high amount of “used” memory.

  • Ephera
    link
    fedilink
    3
    2 years ago

    This is also the reason “unused RAM is wasted RAM” makes little sense in an application context. OS designers realized that wisdom a long time ago, so they already made sure to utilize that unused RAM via disk caching.

    Now, if Chrome or Chrome VSCode or Chrome Discord or Chrome MS Teams requests tons of RAM, it most likely gets this used-but-available RAM, which your OS was using for disk caching.

    In the case of Chrome itself, this will make Chrome faster at the expense of your other applications’ performance.
    In the case of non-browser applications based on Chrome, your system’s performance is sacrificed, so that Microsoft can rake in its profit without actually investing money into proper application development. 🙂

  • @sasalzig@lemmy.ml
    link
    fedilink
    3
    2 years ago

    This doesn’t explain what a disk cache (afaik often referred to as a page cache) is, so here goes: when any modern OS reads a file from disk, the file (or the parts of it that are actually read) gets copied into RAM. These “pages” only get thrown out (“dropped”) when the RAM is needed for something else, usually starting with the page that hasn’t been accessed for the longest time.

    You can see the effect of this by opening a program (or a large file), closing it, and opening it again. The second time it will start faster, because nothing needs to be read from disk.

    Since the pages in RAM are identical to the sectors on disk (unless the file has been modified in RAM by writing to it), they can be dropped immediately and the RAM reused for something else when needed. The obvious downside is that the dropped file has to be read from disk again the next time it is needed.
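A rough way to see this effect in a few lines of Python (file name and size are arbitrary; exact timings depend on your hardware and on what else is in the cache, and the first read here may itself already be cached since we just wrote the file):

```python
# Illustration of the page cache effect: the second read of the same file
# is usually served from RAM, not disk.
import os
import tempfile
import time

# Create a throwaway file large enough that reading it takes measurable time.
path = os.path.join(tempfile.mkdtemp(), "cache_demo.bin")
with open(path, "wb") as f:
    f.write(os.urandom(16 * 1024 * 1024))  # 16 MiB of random bytes

def timed_read(p):
    start = time.perf_counter()
    with open(p, "rb") as f:
        data = f.read()
    return data, time.perf_counter() - start

first, t1 = timed_read(path)   # may hit the disk (cold cache)
second, t2 = timed_read(path)  # typically served from the page cache

assert first == second  # same bytes either way; only the source differs
print(f"first read: {t1:.4f}s, second read: {t2:.4f}s")
```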

    • AmiceseOP
      link
      fedilink
      0
      edit-2
      2 years ago

      How can I adjust my programs to utilize disk caches?

      • @sasalzig@lemmy.ml
        link
        fedilink
        1
        2 years ago

        As I said, every file read from disk, be it an executable, an image, or whatever, gets cached in RAM automatically and always.

        Having said that, if you read a file using read(2) (or any API that uses read() internally, which is most), you end up with two copies of the file in RAM: the version your OS put in the disk cache, and the copy you created in your process’s memory. You can avoid this second copy by using mmap(2). In that case, the copy of the file in the disk cache gets mapped into your process’s memory, so the RAM is shared between your copy and the disk cache’s copy.
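A minimal sketch of the mmap approach, using Python's `mmap` wrapper around mmap(2) (file name and contents are arbitrary):

```python
# The mapped bytes are backed by the same physical pages as the kernel's
# page cache, so no second copy is made for read-only access.
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "mmap_demo.txt")
with open(path, "wb") as f:
    f.write(b"hello from the page cache")

with open(path, "rb") as f:
    # length=0 maps the whole file; ACCESS_READ keeps the mapping
    # read-only, so the pages stay shared with the disk cache.
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    head = mm[:5]  # slices read straight out of the mapping
    print(head)
    mm.close()

assert head == b"hello"
```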

        You can also give hints to the disk cache subsystem in the kernel using posix_fadvise(2). Don’t, though, unless you know what you’re doing.
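For illustration only, here is what such a hint looks like via Python's `os.posix_fadvise` wrapper (Unix only; the file and advice values are arbitrary, and the kernel is free to ignore the advice entirely):

```python
# Giving the kernel cache hints with posix_fadvise(2).
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "fadvise_demo.bin")
with open(path, "wb") as f:
    f.write(b"x" * 4096)

fd = os.open(path, os.O_RDONLY)
try:
    # Tell the kernel we plan to read sequentially (it may read ahead
    # more aggressively)...
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)
    data = os.read(fd, 4096)
    # ...and that we're done with these pages, so the cache may drop
    # them early instead of keeping them around.
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
finally:
    os.close(fd)

assert data == b"x" * 4096
```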

  • @testman@lemmy.ml
    link
    fedilink
    1
    2 years ago

    Didn’t Fedora introduce something that prevents programs from actually eating all the RAM and causing the whole system to freeze?

    Are any other distros working on ~~stealing~~ acquiring this functionality?

    • @joojmachine@lemmy.ml
      link
      fedilink
      2
      2 years ago

      They just enabled a systemd feature, systemd-oomd. To be honest, it can cause issues in some edge cases, but it works pretty well on any distro (that uses systemd).
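If you have it, its behaviour can be tuned in `/etc/systemd/oomd.conf`. The values below are only illustrative, not recommendations; check oomd.conf(5) on your system for the authoritative option list and defaults:

```ini
# /etc/systemd/oomd.conf — illustrative values only
[OOM]
SwapUsedLimit=90%
DefaultMemoryPressureLimit=60%
DefaultMemoryPressureDurationSec=30s
```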