ls *

Posted on 2000-03-14
Last Modified: 2010-04-21
"ls *" or any shell command using '*' gives me an error when executed in a directory where I have over 2000 files, because the resulting argument list is too long.
How can I raise this limit in my shell?
Question by:Eric98

Expert Comment

ID: 2616612
There are a lot of reasons to take this as a warning rather than a problem. You might fix it for a few commands, but others will still have trouble. The operating system itself does not handle directories that full gracefully.

This is why an ISP with over 1000 customers often shifts to a subdirectory home structure, from /home/gp1628 to /home/g/gp1628, and really large ISPs sometimes go deeper than that. It's much more efficient that way. You can make the system operate with that many files, but it's usually not worth the effort.

Any time I'm stuck with that many files in a directory I tend to do much the same thing: work with all a*, then all b*, and so on. Or break it down by name length, such as `ls ???`, then `ls ????`, then `ls ?????`.
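A minimal sketch of that chunking approach in Bourne shell syntax (the directory and file names here are hypothetical, made up for illustration):

```shell
# Hypothetical demo directory; in practice this would be the overfull one.
mkdir -p /tmp/chunk_demo
touch /tmp/chunk_demo/alpha /tmp/chunk_demo/apple /tmp/chunk_demo/bravo
cd /tmp/chunk_demo

# List the files one leading letter at a time, so each glob expands
# to a small argument list instead of the whole directory at once.
for prefix in a b; do
    ls ${prefix}* 2>/dev/null
done
```

Each iteration's glob matches only a fraction of the directory, so no single expansion hits the argument-list limit.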


Author Comment

ID: 2616821
I cannot break down my list of files: they are the input of a tool that needs to find all of them at once, in one and the same directory.
In other words, my problem remains open ...

Expert Comment

ID: 2616830
It sounds like you are using csh for your command line shell. csh has limitations that vary depending on the operating system.

For example, on Solaris 7, csh accepts no more than 1706 arguments, each argument must be no more than 1024 characters long, and the total argument list must be less than 1M. On HP-UX 10.20, the limit is 10240 characters for the argument list. You can determine the limits for your system by reading "man csh" and looking for the section near the end labeled "WARNINGS" or "NOTES".

You are running into the limits because the shell is expanding the wildcard "*" to match all files in the directory, and that exceeds one of the limits described above.
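On systems with a POSIX `getconf`, you can also query the kernel's argument-list limit directly rather than digging through the man page (note ARG_MAX counts the combined bytes of the argument list and the environment):

```shell
# Print the maximum combined size, in bytes, of the argument list
# and environment that exec() will accept on this system.
getconf ARG_MAX
```

POSIX guarantees this value is at least 4096 bytes; typical values range from around 10-20 KB on older Unixes to megabytes on modern systems.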

There is no way to adjust the limits with the csh provided on your system. Your choices are:

1. Use another shell that does not have the same limitations. Bourne shell (/bin/sh) and Korn shell (/bin/ksh) will work fine for 'ls *' in a directory with more than 2000 files.

2. Use the find command:

`find . -name '*' -prune -type f -print` will list all the files matching "*" in the current directory. The output will look similar to "ls -1 *".

3. Write your own version of csh.

You can probably find source code somewhere, and modify csh to raise or eliminate the limitations.

LVL 84

Expert Comment

ID: 2617273
Can you do `ls` instead of `ls *`?
or `ls | xargs ls`
LVL 14

Accepted Solution

chris_calabrese earned 50 total points
ID: 2617374
This is not a shell limitation but rather a limitation in the kernel on how much memory can be allocated for the argument list when executing a program. It's usually 10k-20k depending on the Unix flavor, etc.

As ozo suggested, the best way to solve this problem is to simply use 'ls' instead of 'ls *'.  Similarly, for other programs you can use things like
  ls | xargs myprogram
  find . -exec myprogram '{}' ';'
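A runnable sketch of both patterns, with `wc -c` standing in for the hypothetical myprogram (the demo directory is made up; the real one would hold thousands of files):

```shell
# Demo files standing in for a huge directory.
mkdir -p /tmp/batch_demo
printf 'hi' > /tmp/batch_demo/a.txt
printf 'hello' > /tmp/batch_demo/b.txt
cd /tmp/batch_demo

# xargs batches the names so no single invocation exceeds the kernel's
# argument-list limit; find -exec with ';' runs the program once per file.
ls | xargs wc -c
find . -type f -exec wc -c '{}' ';'
```

xargs is usually much faster for large directories because it starts the program once per batch rather than once per file.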

Not to mention that using '*' is a security risk in a directory writable by others, because a malicious user may have dropped a file with a name like '; rm -rf /'.
LVL 14

Expert Comment

ID: 2617385
Oh yeah, and this is also not usually why people go for the /home/g/gp1628 type naming convention. Rather, it's because many systems have a limit on the number of files per directory, or because performance on directory operations gets really bad in directories with lots of entries.

Author Comment

ID: 2618873
I am very surprised one cannot change the limit itself, but the xargs solution helps out.
Thanks to all

Author Comment

ID: 2618879
ozo deserves the points too, but I have no idea how to proceed ...

Expert Comment

ID: 2619778
That happens a lot.

you can create another question that says "for OZO" and give it points.
When he answers, accept it.

