LinuxQuestions.org
Old 06-09-2025, 12:19 PM   #16
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 11,311
Blog Entries: 4

Rep: Reputation: 4152

When you are dealing with "hundreds of thousands of files," as sometimes we all do, you learn to have patience. If you are asking even a modern computer to "do a big task," it might take a little time. No biggie: "it reliably gets done." And that's the only point.
 
1 member found this post helpful.
Old 06-10-2025, 12:54 AM   #17
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 24,471

Rep: Reputation: 8048
Quote:
Originally Posted by TB0ne View Post
Wrong. Aside from the aforementioned operations (permissions, etc.), it doesn't iterate 50000 times to delete those files, and even if it does, what's the actual problem/issue here??? I can remove huge directories in under a second or two, near zero system load. Before you claim this, you may want to read up on how this program actually works. It removes the pointer/metadata of the file, which then frees up the associated blocks, allowing them to be overwritten. This is exactly why undelete/recovery utilities can sometimes bring back deleted files.

So then tell us the whole 'truth', and tell us where you read this. Claiming such things with the convenient excuse of "can't remember where" doesn't give this statement any credibility.
Not to mention the cache: all of it may happen without even "touching" the real disk.

It is probably also worth noting that rm -rf <dir> is still the fastest way to delete those files.
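As a quick self-check, one can build a throwaway tree of small files and time its removal; the directory layout and file counts below are arbitrary illustration:

```shell
#!/bin/bash
# Build a disposable tree of ~2000 small files, then time rm -rf on it.
tmp=$(mktemp -d)
for i in $(seq 1 1000); do
    mkdir "$tmp/d$i"
    : > "$tmp/d$i/f1"
    : > "$tmp/d$i/f2"
done
time rm -rf "$tmp"     # one traversal; each file costs one unlink
```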
 
Old 06-10-2025, 04:20 PM   #18
scasey
LQ Veteran
 
Registered: Feb 2013
Location: Tucson, AZ, USA
Distribution: Rocky 9.6
Posts: 5,901

Rep: Reputation: 2294
Quote:
Originally Posted by sundialsvcs View Post
When you are dealing with "hundreds of thousands of files," as sometimes we all do, you learn to have patience. If you are asking even a modern computer to "do a big task," it might take a little time. No biggie: "it reliably gets done." And that's the only point.
I once worked in a data warehousing shop where the warehouse was always two days old. Why? Because the production volume was such that it took two days to load it.
 
Old 06-11-2025, 06:02 AM   #19
exerceo
Member
 
Registered: Oct 2022
Posts: 124

Original Poster
Rep: Reputation: 30

Quote:
Originally Posted by michaelk View Post
Are you referring to my post #8 or somewhere else on the internet? What do you think is the other half?
Quote:
Originally Posted by TB0ne View Post
tell us where you read this.
Found it again: Why does rmdir (the system call) only work on empty directory? - Unix & Linux Stack Exchange

Quote:
Originally Posted by Paul Pedant
On the contrary, rmdir only works on an empty directory. It throws an error ENOTEMPTY if the directory contains any actual entries (i.e. other than . and ..). The "why" is because it does not know where to put safely any files that are present, and it prefers you not to lose a whole directory/file tree by accident.
Quote:
Originally Posted by waltinator
That's a feature not a bug!

One can easily remove empty directories with rmdir *, while leaving non-empty directories alone. This is a system management practice I use often.

There is rm -rf to delete whole directory trees (all files and subdirectories).
Quote:
Originally Posted by michaelk View Post
What do you think is the other half?
Quote:
Originally Posted by TB0ne View Post
So then tell us the whole 'truth'
Deleting an entire directory tree is far more resource-intensive given that the space for every single file has to be marked free individually, so it is out of scope for rmdir. To me it seems preventing deletion accidents is a lucky side effect of this fact.

I like that rmdir does not delete entire directory trees, so I can prevent myself from accidentally deleting stuff when I don't mean to.

Similarly, I use the "unlink" command when I don't intend to delete more than one file, since it cannot delete multiple files in a single invocation. Some would say rm makes unlink obsolete because it can do everything unlink can, but unlink just feels safer.
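The selective behaviour described above can be sketched in a throwaway directory (the names are made up for illustration):

```shell
#!/bin/sh
# rmdir * removes only the empty directory; the non-empty one survives
# (rmdir fails on it with ENOTEMPTY), and only rm -rf takes the rest.
tmp=$(mktemp -d)
mkdir "$tmp/empty" "$tmp/full"
touch "$tmp/full/keep"

rmdir "$tmp"/* 2>/dev/null || true   # "empty" goes away, "full" stays
ls "$tmp"                            # prints: full

unlink "$tmp/full/keep"              # removes exactly one file
rm -rf "$tmp"                        # removes the remaining tree
```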

Last edited by exerceo; 06-11-2025 at 06:04 AM. Reason: elaborated
 
Old 06-11-2025, 06:44 AM   #20
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 24,471

Rep: Reputation: 8048
You missed a very important point: rm, rmdir, and unlink are just tools, and you (as the user) need to use the appropriate one.
If you want to remove a single file, an empty dir, a bunch of files, a whole tree... you need to do each differently (and do you want to do it in a script, or by hand?). There is no single way, no single solution to everything.
In some cases one tool is more useful; in other cases another is more handy.
Quote:
Deleting an entire directory tree is far more resource-intensive given that the space for every single file has to be marked free individually, so it is out of scope for rmdir. To me it seems preventing deletion accidents is a lucky side effect of this fact.
That is just wrong. If someone wants to delete a tree, they have to use something, no matter how resource-intensive it is; rmdir is simply unable to do that, so they need to use something else.
rmdir also can't switch off your TV; is that a lucky side effect too, or is it just not the appropriate tool for the job?
unlink is just the low-level command (a thin wrapper around the system call) to unlink a file; rm is the high-level user interface that calls either unlink or [the low-level] rmdir, depending on the arguments passed. It is not safer, it is just different.
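A minimal sketch of that layering, in a throwaway directory (the strace line is optional and assumes strace is installed):

```shell
#!/bin/sh
# unlink(1) and rmdir(1) are thin wrappers over the unlink(2) and
# rmdir(2) system calls; rm walks the tree and issues the equivalent
# calls itself.
tmp=$(mktemp -d)

touch "$tmp/file"
unlink "$tmp/file"        # one file, one unlink(2)

mkdir "$tmp/sub"
rmdir "$tmp/sub"          # one empty directory, one rmdir(2)

mkdir -p "$tmp/tree/a"
touch "$tmp/tree/a/f"
# To watch rm issue the same primitives (GNU rm uses unlinkat(2)):
#   strace -e trace=unlinkat rm -r "$tmp/tree"
rm -r "$tmp/tree"

rmdir "$tmp"              # the top-level directory is empty again
```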
 
3 members found this post helpful.
Old 06-11-2025, 09:49 AM   #21
michaelk
Moderator
 
Registered: Aug 2002
Posts: 26,853

Rep: Reputation: 6356
I also disagree. The original Unix command was not a system call nor was preventing deletion a lucky side effect.

https://linuxgazette.net/issue49/fischer.html
 
2 members found this post helpful.
Old 06-11-2025, 11:32 AM   #22
exerceo
Member
 
Registered: Oct 2022
Posts: 124

Original Poster
Rep: Reputation: 30
Quote:
Originally Posted by pan64 View Post
It is probably worth noting also that it is (rm -rf <dir>) still the fastest way to delete those files.
And find "<dir>" -delete. It should be just as fast.
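A sketch of that variant; GNU find's -delete implies -depth, so files and subdirectories are removed before their parents (the tree below is illustrative):

```shell
#!/bin/sh
# find -delete processes entries depth-first, so every directory is
# already empty by the time find tries to remove it.
tmp=$(mktemp -d)
mkdir -p "$tmp/a/b"
touch "$tmp/a/f" "$tmp/a/b/g"
find "$tmp" -delete        # children first, then parents, then $tmp
```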
 
Old 06-11-2025, 11:35 AM   #23
TB0ne
LQ Guru
 
Registered: Jul 2003
Location: Birmingham, Alabama
Distribution: SuSE, RedHat, Slack,CentOS
Posts: 27,780

Rep: Reputation: 8193
Quote:
Originally Posted by exerceo View Post
And find "<dir>" -delete. It should be just as fast.
No, because you're now doing TWO operations; one is a find on whatever (in this case, a wild card), and the second is a delete after it's found. Not sure why any of this actually matters at all.
 
Old 06-11-2025, 11:45 AM   #24
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 24,471

Rep: Reputation: 8048
Quote:
Originally Posted by TB0ne View Post
No, because you're now doing TWO operations; one is a find on whatever (in this case, a wild card), and the second is a delete after it's found. Not sure why any of this actually matters at all.
No, it is not important at all; just in case you are interested. It was claimed how heavy the rm -rf operation is, but actually it is the most lightweight way to delete a tree.

Quote:
Originally Posted by exerceo View Post
And find "<dir>" -delete. It should be equally as fast.
Actually, find is much slower than an ls command (you can easily measure it), therefore find -delete would definitely be slower than rm -rf.
 
Old Today, 01:55 AM   #25
GazL
LQ Veteran
 
Registered: May 2008
Posts: 7,175

Rep: Reputation: 5335
Quote:
Originally Posted by pan64 View Post
Actually, find is much slower than an ls command (you can easily measure it), therefore find -delete would definitely be slower than rm -rf.
Code:
$ time ls -R /local/src/linux-stable >/dev/null

real    0m0.109s
user    0m0.056s
sys     0m0.053s
$ time find /local/src/linux-stable >/dev/null

real    0m0.092s
user    0m0.032s
sys     0m0.060s

$ find /local/src/linux-stable |wc -l
95287
Not much in it, but find has a very slight edge here. This was on an ext4 fs. Both timings were second runs to eliminate the cache issue from the results.

If you're seeing a large difference, then it's probably the effect of vfs cache and the order in which you ran both tests.
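This warm-cache methodology can be reproduced on any tree; a sketch on a freshly generated one (sizes arbitrary), with a throwaway first pass so both commands are timed warm:

```shell
#!/bin/bash
# Time ls -R and find over the same tree with a warm VFS cache.
tmp=$(mktemp -d)
for i in $(seq 1 200); do
    mkdir "$tmp/d$i"
    : > "$tmp/d$i/f"
done
ls -R "$tmp" >/dev/null        # warm-up passes
find "$tmp" >/dev/null
time ls -R "$tmp" >/dev/null   # timed warm runs
time find "$tmp" >/dev/null
# A truly cold comparison needs root (Linux-specific):
#   sync; echo 3 > /proc/sys/vm/drop_caches
rm -rf "$tmp"
```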
 
Old Today, 03:53 AM   #26
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 24,471

Rep: Reputation: 8048
1 million files on NFS:
Code:
$ time ls -R . >/dev/null

real    0m2.991s
user    0m0.547s
sys     0m0.976s
$ time ls -R . >/dev/null

real    0m1.046s
user    0m0.382s
sys     0m0.658s
$ time find . >/dev/null

real    0m3.125s
user    0m1.012s
sys     0m1.073s
$ time find . >/dev/null

real    0m1.762s
user    0m0.729s
sys     0m1.023s
 
  

