1. Modern versions of Unix support various filesystems; such limits (if any)
would more likely be in the filesystem code than in the mainline kernel code,
and could be different for different filesystem types or implementations.
2. Most (but not all) Unix filesystems have fairly simple directory structures
that are searched linearly (one entry at a time). Their performance
deteriorates badly as a directory grows large, and becomes worse still once
the directory needs indirect blocks (the threshold varies by filesystem, from
roughly 5KB of directory data up to 640KB or more).
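The linear search described above can be sketched as follows; this is an
illustrative model in Python, not the kernel's actual code, and the helper
name `linear_lookup` is hypothetical:

```python
import os

def linear_lookup(dirpath, name):
    # Model of a classic Unix directory lookup: scan the entries one at
    # a time until the name matches. The cost of a lookup (especially a
    # failed one, which must scan every entry) grows linearly with the
    # number of entries in the directory.
    for entry in os.scandir(dirpath):
        if entry.name == name:
            return entry
    return None
```

A failed lookup is the worst case: every entry must be examined before
the search can report that the name is absent.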
3. An arrangement in which you have a directory of single-character
subdirectories, and choose the subdirectory from the first character of the
filename (or the last, if that is more evenly distributed), will generally
give good performance on most Unix filesystems; see the terminfo directory
tree under /usr/lib or /usr/share/lib for an example.
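The bucketing scheme in item 3 can be sketched in a few lines of Python;
the helper name `bucketed_path` is hypothetical, and the choice of the
first character as the bucket key follows the terminfo convention
mentioned above:

```python
import os

def bucketed_path(root, filename):
    # Route each file into a single-character subdirectory keyed on the
    # first character of its name, keeping every directory small so that
    # linear searches stay cheap. Use filename[-1] instead if the last
    # character is more evenly distributed in your name set.
    bucket = filename[0]
    return os.path.join(root, bucket, filename)
```

For example, `bucketed_path("/usr/share/terminfo", "vt100")` yields
`/usr/share/terminfo/v/vt100`, matching the layout of the terminfo tree.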