Linux - Software: This forum is for Software issues.
Having a problem installing a new program? Want to know which application is best for the job? Post your question in this forum.
06-09-2025, 12:19 PM | #16
sundialsvcs, LQ Guru
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 11,311
When you are dealing with "hundreds of thousands of files," as sometimes we all do, you learn to have patience. If you are asking even a modern computer to "do a big task," it might take a little time. No biggie: "it reliably gets done." And that's the only point.
1 member found this post helpful.
06-10-2025, 12:54 AM | #17
pan64, LQ Addict
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 24,471
Quote:
Originally Posted by TB0ne
Wrong. Aside from the aforementioned operations (permissions, etc.), it doesn't iterate 50000 times to delete those files, and even if it does, what's the actual problem/issue here??? I can remove huge directories in under a second or two, near zero system load. Before you claim this, you may want to read up on how this program actually works. It removes the pointer/metadata of the file, which then frees up the associated blocks, allowing them to be overwritten. This is exactly why undelete/recovery utilities can sometimes bring back deleted files.
So then tell us the whole 'truth', and tell us where you read this. Claiming such things with the convenient excuse of "can't remember where" doesn't give this statement any credibility.
Not to mention the cache: it may all happen without even "touching" the real disk.
It is probably also worth noting that rm -rf <dir> is still the fastest way to delete those files.
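A quick way to convince yourself on your own machine (the path and file count here are made up for illustration; absolute timings will vary with filesystem and hardware):
Code:
$ mkdir /tmp/rmtest && cd /tmp/rmtest
$ seq 1 50000 | xargs touch       # create 50,000 empty files
$ cd / && time rm -rf /tmp/rmtest
On most local filesystems the rm -rf finishes in well under a second, because only metadata is updated, and much of that work can be absorbed by the cache.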
06-10-2025, 04:20 PM | #18
LQ Veteran
Registered: Feb 2013
Location: Tucson, AZ, USA
Distribution: Rocky 9.6
Posts: 5,901
Quote:
Originally Posted by sundialsvcs
When you are dealing with "hundreds of thousands of files," as sometimes we all do, you learn to have patience. If you are asking even a modern computer to "do a big task," it might take a little time. No biggie: "it reliably gets done." And that's the only point.
I once worked in a data warehousing shop where the warehouse was always two days old. Why? Because the production volume was such that it took two days to load it.
06-11-2025, 06:02 AM | #19
exerceo, Member (Original Poster)
Registered: Oct 2022
Posts: 124
Quote:
Originally Posted by michaelk
Are you referring to my post #8 or somewhere else on the internet? What do you think is the other half?
Quote:
Originally Posted by TB0ne
tell us where you read this.
Found it again: Why does rmdir (the system call) only work on empty directory? - Unix & Linux Stack Exchange
Quote:
Originally Posted by Paul Pedant
On the contrary, rmdir only works on an empty directory. It throws the error ENOTEMPTY if the directory contains any actual entries (i.e. other than . and ..). The "why" is that it does not know where to safely put any files that are present, and it prefers you not to lose a whole directory/file tree by accident.
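A minimal illustration from the shell (the path is made up; the error text shown is what GNU coreutils prints):
Code:
$ mkdir /tmp/demo && touch /tmp/demo/file
$ rmdir /tmp/demo
rmdir: failed to remove '/tmp/demo': Directory not empty
$ rm /tmp/demo/file && rmdir /tmp/demo   # succeeds once the directory is empty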
Quote:
Originally Posted by waltinator
That's a feature, not a bug!
One can easily remove empty directories with rmdir *, while leaving non-empty directories alone. This is a system management practice I use often.
There is rm -rf to delete whole directory trees (all files and subdirectories).
Quote:
Originally Posted by michaelk
What do you think is the other half?
Quote:
Originally Posted by TB0ne
So then tell us the whole 'truth'
Deleting an entire directory tree is far more resource-intensive, given that the space for every single file has to be marked free individually, so it is out of scope for rmdir. To me, preventing deletion accidents looks like a lucky side effect of this fact.
I like that rmdir does not delete entire directory trees, so it protects me from accidentally deleting things I didn't mean to delete.
Similarly, I use the unlink command when I don't intend to delete more than one file, since it cannot delete more than one file per invocation. Some would say rm makes unlink obsolete because it can do everything unlink can, but unlink just feels safer.
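For illustration (the file names are made up), unlink on a GNU system really does refuse a second operand:
Code:
$ touch a b
$ unlink a b
unlink: extra operand 'b'
$ unlink a      # exactly one operand is accepted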
Last edited by exerceo; 06-11-2025 at 06:04 AM.
Reason: elaborated
06-11-2025, 06:44 AM | #20
pan64, LQ Addict
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 24,471
You missed a very important point: rm, rmdir, and unlink are just tools, and you (as the user) need to pick the appropriate one.
Removing a single file, an empty directory, a bunch of files, or a whole tree are different jobs (and do you want to do it in a script, or by hand?). There is no single way, no single solution for everything.
In some cases one tool is more useful, in other cases another one is handier.
Quote:
Originally Posted by exerceo
Deleting an entire directory tree is far more resource-intensive, given that the space for every single file has to be marked free individually, so it is out of scope for rmdir. To me, preventing deletion accidents looks like a lucky side effect of this fact.
That is just wrong. If someone wants to delete a tree, they have to use something, no matter how resource-intensive it is. rmdir is simply unable to do that, so they need to use something else.
rmdir also can't switch off your TV; is that a lucky side effect too? Or is it just not the appropriate tool for the job?
unlink is just the low-level command (a thin wrapper around the system call) to unlink a file; rm is the high-level user interface that calls either unlink or [the low-level] rmdir, depending on the arguments passed. It is not safer, it is just different.
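A rough way to watch this from the shell (the paths are made up; the exact call sequence can vary with the coreutils version, but modern Linux rm uses unlinkat() for both files and directories):
Code:
$ mkdir -p /tmp/tree/sub && touch /tmp/tree/f
$ strace -e trace=unlink,unlinkat,rmdir rm -r /tmp/tree
# the trace shows unlinkat(..., 0) for plain files and
# unlinkat(..., AT_REMOVEDIR) for directories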
3 members found this post helpful.
06-11-2025, 09:49 AM | #21
Moderator
Registered: Aug 2002
Posts: 26,853
I also disagree. The original Unix command was not a system call, nor was preventing deletion a lucky side effect.
https://linuxgazette.net/issue49/fischer.html
2 members found this post helpful.
06-11-2025, 11:32 AM | #22
exerceo, Member (Original Poster)
Registered: Oct 2022
Posts: 124
Quote:
Originally Posted by pan64
It is probably worth noting also that it is (rm -rf <dir>) still the fastest way to delete those files.
And find "<dir>" -delete should be about as fast.
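Spelled out for reference (the directory name is a placeholder): -delete implies -depth, so the contents are removed before the directories that contain them.
Code:
$ find "<dir>" -delete   # depth-first: files first, then their (now empty) directories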
06-11-2025, 11:35 AM | #23
TB0ne, LQ Guru
Registered: Jul 2003
Location: Birmingham, Alabama
Distribution: SuSE, RedHat, Slack, CentOS
Posts: 27,780
Quote:
Originally Posted by exerceo
And find "<dir>" -delete. It should be equally as fast.
No, because you're now doing TWO operations: one is a find on whatever (in this case, a wildcard), and the second is a delete after it's found. Not sure why any of this actually matters at all.
06-11-2025, 11:45 AM | #24
pan64, LQ Addict
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 24,471
Quote:
Originally Posted by TB0ne
No, because you're now doing TWO operations; one is a find on whatever (in this case, a wild card), and the second is a delete after it's found. Not sure why any of this actually matters at all.
No, it is not important at all; this is just if you are interested. It was claimed how heavy that rm -rf operation is, but it is actually the most lightweight solution for deleting a tree.
Quote:
Originally Posted by exerceo
And find "<dir>" -delete. It should be equally as fast.
Actually, find is much slower than an ls command (you can measure it yourself), therefore find -delete would definitely be slower than rm -rf.
Today, 01:55 AM | #25
LQ Veteran
Registered: May 2008
Posts: 7,175
Quote:
Originally Posted by pan64
Actually find is much slower than an ls command (you can safely measure it), therefore find -delete would be definitely slower than rm -rf
Code:
$ time ls -R /local/src/linux-stable >/dev/null
real 0m0.109s
user 0m0.056s
sys 0m0.053s
$ time find /local/src/linux-stable >/dev/null
real 0m0.092s
user 0m0.032s
sys 0m0.060s
$ find /local/src/linux-stable |wc -l
95287
Not much in it, but find has a very slight edge here. This was on an ext4 fs. Both timings were second runs to eliminate the cache issue from the results.
If you're seeing a large difference, then it's probably the effect of vfs cache and the order in which you ran both tests.
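If you want to take the cache out of the equation entirely, you can drop it between runs (needs root; a rough sketch):
Code:
$ sync
$ sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'   # drop page cache, dentries and inodes
$ time find /local/src/linux-stable >/dev/null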
Today, 03:53 AM | #26
pan64, LQ Addict
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 24,471
1 million files on NFS:
Code:
$ time ls -R . >/dev/null
real 0m2.991s
user 0m0.547s
sys 0m0.976s
$ time ls -R . >/dev/null
real 0m1.046s
user 0m0.382s
sys 0m0.658s
$ time find . >/dev/null
real 0m3.125s
user 0m1.012s
sys 0m1.073s
$ time find . >/dev/null
real 0m1.762s
user 0m0.729s
sys 0m1.023s