Tuesday, January 18, 2011

How much space should you leave free on a hard disk?

Is there a rule of thumb for how much space to leave free on a hard disk? I used to hear you should leave at least 5% free to avoid fragmentation.

[I know the answer depends on usage (e.g. video files vs. text), disk size, RAID level, and volume format - but as it's impractical to ask 100 variations of the same question, any information is welcome]

  • I would say typically 15%. However, with how large hard drives are nowadays, as long as you have enough room for your temp files and swap file, you are technically safe. As a safe practice, though, once free space drops to around 15% it's time to start thinking about a major cleanup and/or purchasing a larger or second hard drive.

  • From what I understand, it also depends on the file system on the drive. Some file systems are more resilient to things like disk fragmentation.

    username : +1 Oops, I'm amending my question to mention volume format.
    From Psycho Bob
  • You generally want to leave about 10% free to avoid fragmentation, but there is a catch. Linux, by default, will reserve 5% of the disk for the root user. When you run 'df' as a non-root user, the output doesn't include that 5% in the available space. Just something to keep in mind when doing your calculations.

    Incidentally, you can change the root reserve by using tune2fs. For example

    tune2fs -m 2 /dev/hda1
    

    will set the root reserve at 2%. Generally this is not recommended of course, unless you have a very specific purpose in mind.
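
    If it's useful, you can also read the current block counts straight from the superblock before (or after) changing the reserve; the device name here is just the same example device as above, so adjust it for your system:

    # Show the total and reserved block counts for the filesystem;
    # the reserve percentage is roughly reserved blocks / total blocks.
    tune2fs -l /dev/hda1 | grep -i 'block count'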

    nedm : Good tip -- I do this for our terabyte drives and knock it down to 1% -- that's still 10GB reserved for root, and I've never run into any problems with it.
    James : that's ext3-specific, not all Linux filesystems do that.
    From jedberg
  • I would recommend 10% or more free on Windows, because the built-in defragmenter won't run if there isn't roughly that much free space on the drive. Leaving free space, however, will not necessarily stop fragmentation from occurring. As you already mentioned, it depends on usage. Fragmentation is driven more by how much the data on the drive changes and by the size of the files being written and removed: the more the data changes, and the more random the file sizes, the greater the chance of fragmentation occurring.

    The only real way to minimise fragmentation is to defragment the drive on a regular basis, either manually or with a tool like Diskeeper, which runs in the background on Windows and cleans up when the machine is idle. There are also filesystems where fragmentation is handled by the OS in the background, so manually running a defragmentation is not necessary.

  • Yes, it depends on usage and the underlying storage system. Some systems, like high-end SAN-based disk arrays, laugh at file fragmentation, so the only real impact is the OS overhead of scattering things all over hither and yon. Other systems, like laptop drives, are another story altogether. And that doesn't get into newer file systems, such as ZFS, where the concept of a hard limit to space is nebulous at best.

    NTFS is its own beast, of course. These days I give C:\ a total size of 15GB for XP systems, and I haven't played with Vista/Win7 enough to know what to recommend there. You really don't want to get much below a GB free on C:. Using Shadow Copies means you should keep more 'empty' space around than you otherwise would, and I'd say 20% free space is the marker for when more needs to be added or a clean-up needs to happen.

    For plain old NTFS data volumes, I get worried when it gets under 15%. However, if the drive is a 1TB drive, 15% is still a LOT of space to work with and allocate new files into (the converse being that it takes a lot longer to defrag).

  • I try to keep the used space under 80%. Above this number the filesystem generally has to work harder to place data on the disk, leading to fragmentation.
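
    As a rough illustration of that kind of threshold (the 80% cut-off is just the figure above, adjust to taste), a one-liner like this will flag any mounted filesystem that has crossed it:

    # Warn for any filesystem reported by df that is more than 80% used
    df -P | awk 'NR > 1 { sub("%", "", $5); if ($5 + 0 > 80) print "WARNING: " $6 " is " $5 "% full" }'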

  • SSDs add a new layer to this: wear leveling and write amplification. For these reasons you want to keep more free space on an SSD than you would absolutely need on a traditional hard drive.

    Short stroking a traditional hard drive reduces latency for random reads/writes. "Short stroking" an SSD instead gives the drive controller more unused blocks for its garbage collection and wear leveling routines, so it won't speed the drive up, but it will increase longevity and prevent the slowdown that is seen when an SSD fills up.

    You still don't want to fill the drive, but with SSDs the immediate effect isn't there, and the reason behind it is different.
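
    One way to guarantee that headroom is simply not to partition the whole SSD in the first place. A rough sketch with parted, assuming a blank example device /dev/sdb and an arbitrary ~10% left unallocated:

    # Create a GPT label and one partition covering ~90% of the SSD,
    # leaving the rest unallocated for the controller's wear leveling.
    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart primary ext4 1MiB 90%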

    From pplrppl
  • I'd always try to keep around 50% free on system volumes of any kind, and possibly on smaller data volumes too, sizing them 2-for-1 if possible, and I'd set a warning threshold at around 75%.

    For data storage, however, it's mostly a matter of growth rate, which needs to be monitored and/or estimated when setting up the monitoring. If the data doesn't grow very fast on, for example, a 1TB volume, a few percent of headroom at the warning threshold would be fine and I'd be comfortable with 90-95% utilization. If the growth rate is higher, adjust the threshold down so you get notified in time. Fragmentation can often be dealt with by scheduled defrags if the data isn't growing and is just changing.
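
    A very small sketch of the growth-rate idea (the volume path and state file are hypothetical examples): record usage once a day, e.g. from cron, and compare it with the previous sample to see roughly how much headroom remains.

    #!/bin/sh
    # Hypothetical example: estimate how fast /data is filling up based on
    # the change in used KB since the last run of this script.
    STATE=/var/tmp/data-usage.last
    used=$(df -Pk /data | awk 'NR == 2 { print $3 }')
    avail=$(df -Pk /data | awk 'NR == 2 { print $4 }')
    if [ -f "$STATE" ]; then
        prev=$(cat "$STATE")
        growth=$((used - prev))
        if [ "$growth" -gt 0 ]; then
            echo "/data grew by ${growth}KB; roughly $((avail / growth)) more runs until full"
        fi
    fi
    echo "$used" > "$STATE"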

  • I try to leave 50% of it free for a couple of reasons:

    1. I'd heard that, despite the page file's relatively small size, leaving that much room can speed things up. I believe it was from the very helpful book "PC Hacks" published a few years ago.

    2. No matter how large the drive, having the goal of only filling it halfway makes you mindful of what's on there, and, for me, it means I'm better about archiving or moving stuff to a larger external drive.
