CodeVirtuoso - 10 days ago
Linux Question

Maximum number of files/folders on Linux?

I'm developing a LAMP online store which will allow admins to upload multiple images for each item.

My concern is that right off the bat there will be 20,000 items, meaning roughly 60,000 images.


  1. What is the maximum number of files and/or folders on Linux?

  2. What is the usual way of handling this situation (best practice)?

My idea was to make a folder for each item, based on its unique ID, but then I'd still have 20,000 folders in the main uploads folder, and the count will grow indefinitely since old items won't be removed.

Thanks for any help.


ext[234] filesystems have a fixed maximum number of inodes; every file or directory requires one inode. You can see the current count and limits with df -i. For example, on a 15GB ext3 filesystem, created with the default settings:

Filesystem           Inodes  IUsed   IFree IUse% Mounted on
/dev/xvda           1933312 134815 1798497    7% /

There's no separate limit on directories beyond this; keep in mind, though, that every file or directory consumes at least one filesystem block (typically 4KB), even a directory containing only a single item.

As you can see, though, 80,000 inodes is unlikely to be a problem. And with the dir_index option (which can be enabled with tune2fs), lookups in large directories aren't a big deal. However, note that many command-line tools (such as ls or rm) can have a hard time dealing with directories containing too many files. As such, it's recommended to split your files up so that you don't have more than a few hundred to a thousand items in any given directory. An easy way to do this is to hash whatever ID you're using, and use the first few hex digits as intermediate directories.
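If you want to check for or enable dir_index, something like the following should work; the device name here is just an example, so substitute your own filesystem, and note that these commands require root:

```shell
# Check which features are currently enabled on the filesystem
sudo tune2fs -l /dev/xvda | grep -i features

# Enable hashed directory indexes (ext3/ext4)
sudo tune2fs -O dir_index /dev/xvda

# Existing directories only get indexed after an offline rehash;
# run this with the filesystem unmounted
sudo e2fsck -fD /dev/xvda
```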

For example, say you have item ID 12345, and it hashes to 'DEADBEEF02842.......'. You might store your files under /storage/root/d/e/12345. You've now cut the number of files in each directory to roughly 1/256th of what a single flat directory would hold.
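As a rough sketch of that layout (the /storage/root path is just an example, and this uses SHA-1 where any stable hash would do):

```shell
id=12345

# Hash the item ID and keep the first two hex digits of the digest
hash=$(printf '%s' "$id" | sha1sum | cut -c1-2)
d1=$(printf '%s' "$hash" | cut -c1)
d2=$(printf '%s' "$hash" | cut -c2)

# Two single-character levels give 16 x 16 = 256 leaf directories
dir="/storage/root/$d1/$d2/$id"
echo "$dir"
# mkdir -p "$dir"   # run at upload time to create the directory
```

Because the path is derived purely from the ID, any part of your application can recompute it later without a database lookup.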