Solved

confused on hardlinks

Posted on 2003-10-28
14
230 Views
Last Modified: 2010-07-27
Hello... probably simple question, but I'm confused.

Hard links... I have some folder called /home/peon/bin  which has a few binaries.

I have another folder called /home/newpeon/bin that will contain the exact same binaries

So I used the command   cp -al /home/peon/bin/* /home/newpeon/bin/  to make links, right?

Well... everything went fine, but look...

$ du --max-depth=1 /home/peon/bin

8       ./.ssh
22272    ./bin

$ du --max-depth=1 /home/newpeon/bin

8       ./.ssh
22272    ./bin

They are the same!  If they are hardlinks, shouldn't one be MUCH smaller???

Ok, so then to verify that they are indeed hardlinks I do the following

ls -i /home/peon/bin  (and the same for newpeon) and sure enough the inodes are all identical.

Same inodes but they both take up space??

What am I missing here?

Thanks.
Question by:s_mack

14 Comments
 
LVL 20

Expert Comment

by:Gns
Do "ls -l /home/newpeon/bin /home/peon/bin" and pay attention to the link count field (the second one, right after the -rwx------ permissions). If these are hard-linked files, they will be regular files with a link count greater than 1. As your ls -i shows, the files are in fact just references to the same inodes from different directories.

This is obvious to you&me ... but not to du.
du just blithely counts what's in the directory you point it at. After all, it would be even more wrong to report the size as 0:-).
The obvious followup "Can't I rely on du?" then has an equally obvious answer: "no further than you can throw it"...:-).

df wouldn't change though, apart from the allocation of the directory entries themselves.

-- Glenn
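A throwaway way to see the link count and inode behaviour for yourself (the directory and filenames below are made up for the demo, not from the question; stat -c is GNU coreutils):

```shell
# Create a file, hard-link it, then compare inodes and link counts.
mkdir -p /tmp/linkdemo
cd /tmp/linkdemo
echo "hello" > original
ln -f original hardlink        # hard link: a second directory entry, same inode
ls -li original hardlink       # identical inode numbers, link count of 2 on both
stat -c '%h links, inode %i: %n' original hardlink
```

Both names are equally "the file"; removing either one just drops the link count by one.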
 
LVL 20

Expert Comment

by:Gns
BTW, we're touching on the reason why you cannot make hard links across a filesystem boundary here... You might have the same inode number on two separate fs's, so it would be a real no-no to allow...

-- Glenn
 

Author Comment

by:s_mack
hmm... so how does one find an accurate count of how much space a user is using?
 

Author Comment

by:s_mack
and... what is the point of the -l flag with du???  If it counts the links anyway, why have an option specifically for counting links?
 
LVL 3

Expert Comment

by:guynumber5764
<i>hmm... so how does one find an accurate count of how much space a user is using?</i>

I would use the owner of the file rather than the path as my criteria.  But that still doesn't solve your problem.  We'll get back to why in a sec.

I think you are confusing hard links with symbolic links. The difference quickly becomes apparent when peon deletes his bin directory. A symbolic link would get all noodled, because the linked file no longer exists. A hard link would be fine, because it <b>is</b> the linked file under a different filename (the link count just gets decremented). The trick is that the original directory entry and the hard link created later are totally symmetrical.
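A quick sketch of that deletion behaviour, in a scratch directory (all names here are hypothetical):

```shell
mkdir -p /tmp/deldemo
cd /tmp/deldemo
echo data > file
ln -f file hard                  # hard link
ln -sf file soft                 # symbolic link
rm file                          # delete the original directory entry
cat hard                         # fine: hard *is* the file, its link count just dropped
cat soft 2>/dev/null || echo "soft is dangling"
```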

As for du -l, the option affects how du handles seeing the same file more than once (through multiple hard links). An example explains it best (x, y and z are hard links to the same file)...

[~/tmp]$ ls -l
total 1440
lrwxrwxrwx   1 peon      peons        1 Oct 29 00:04 s -> x
-rw-------   3 peon      peons   486331 Sep 18 18:08 x
-rw-------   3 newpeon   peons   486331 Sep 18 18:08 y
-rw-------   3 peon      peons   486331 Sep 18 18:08 z
[~/tmp]$ du -h .
480K    .
[~/tmp]$ du -hl .
1.5M    .

Getting back to the original question of apportioning the space: look at the directory listing above and see if you can decide who is supposed to get dinged for the space occupied by x, y and z, and how badly. Just to make it fun, the original directory entry has been deleted and belonged to someone else (it really was).
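The listing above can be reproduced in miniature; this is a made-up scratch directory, not the poster's system:

```shell
mkdir -p /tmp/dudemo
cd /tmp/dudemo
dd if=/dev/zero of=x bs=1024 count=500 2>/dev/null   # one ~500K file
ln -f x y
ln -f x z
du -sh .      # shared blocks counted once: about 500K
du -shl .     # -l charges every hard link in full: about 1.5M
```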

 
LVL 3

Expert Comment

by:guynumber5764
Oops sorry about the tags...thought I was on /. for a sec...
E.
 
LVL 20

Expert Comment

by:Gns
> BTW, we're touching on the reason why you cannot make hard links across a filesystem boundary here... You might have the same inode number on two separate fs's, so it would be a real no-no to allow...
Not to mention that there is no provision in any fs for referencing an inode on another fs... other than symbolically (covered nicely by guynumber5764... thanks BTW for the nice illustration of why du will never know quite what to do with hard links).

-- Glenn
 

Author Comment

by:s_mack
> BTW, we're touching on the reason why you cannot make hard links across a filesystem boundary here... You might have the same inode number on two separate fs's, so it would be a real no-no to allow...

/home is the fs in question, so same fs. Each user gets a chrooted jail with their own bin, lib, etc.; those are the ones that are hard-linked. There's a "template" peon that I cp'd all the files to... then cp -al'd all those to the other peons.

ok ok... this is where it becomes apparent that you guys have an understanding of what you are talking about, and I'm still stuck on "if Johnny has two apples and he eats one...."

Forget the whys for a second... how can I be convinced that I am in fact saving space by using cp -al instead of just cp? Really, which user gets dinged for it is insignificant, because the space taken up by the "shared" directories (bin and lib) is .0008% of their total allowed space. But with 64,000 peons... it adds up. So I just wanted to be sure that I was using less space.

And for some strange reason, df has been #%@#@ up for ages on this system. It has reported that the /home fs is 36% full since time began. At one point I'm sure it was, but then we deleted the entire /home and it was still 36% full... filled it right up to the point of disk error... still 36% full, with the exact same number of KB reportedly in use. So I've given up on df and was hoping du could do what I want.
 
LVL 20

Assisted Solution

by:Gns
Gns earned 25 total points
> Forget the why's for a second... how can I be convinced that I am in fact saving space by using cp -al instead of just cp?  Really, which user gets dinged for it is
> insignificant because the space taken up by the "shared" directories (bin and lib) is .0008% of their total allowed space. But with 64,000 peons... it ads up.  So I just
> wanted to be sure that I was using less space.
You answered that yourself! Your ls -i shows that only one inode has been allocated (well..), and the ls -l link count (_huge_ in your case:-) shows it too... With that kind of setup, the space savings would be ... huge.

> And for some strange reason, df has been #%@#@ up for ages on this system.  It has reported that the /home fs is 36% full since time began.  At one point I'm sure
> it was, but then we deleted the entire /home and it was still 36% full... filled it right up to the point of disk error... still 36% full.. with the exact number of kb
> reportedly being used.   So I've given up on df and was hoping du could do what I want.

df should work... Have you fsck'd it recently? Did you run out of space or inodes?

-- Glenn
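For the "out of space or out of inodes" question, df can show both views; -i is standard on GNU df (the path / below is just a stand-in for /home):

```shell
df -k /      # block usage for the filesystem holding /
df -i /      # inode usage: IUse% can hit 100% while plenty of blocks remain free
```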
 

Author Comment

by:s_mack
you can run out of  inodes?!?!?!
 
LVL 20

Expert Comment

by:Gns
Yes. On most filesystems this is something you set at creation time (nicely obfuscated as "inode density" parameters:-).
Check out the fs you are using (man mke2fs for ext2/ext3...).

-- Glenn
 
LVL 3

Expert Comment

by:guynumber5764

df should work fine, but remember it works with volumes (partitions), not directories.

If there is a discrepancy between it and du, make sure your volumes are mounted the way you think they are.

As an experiment, do a df -v, then create a large (real, not linked) file and df -v again. One of your volumes (not always the one you expect) will show a difference in available space. If the file shows up on the / volume instead of the /home volume, then you have to empty the /home directory and mount /home properly.
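That experiment, scripted; it uses /tmp so it is safe to paste anywhere (substitute /home to test that volume), and df -k rather than -v, since -v output varies between systems:

```shell
target=/tmp
df -k "$target"                                             # note the Available column
dd if=/dev/zero of="$target/df_testfile" bs=1024 count=10240 2>/dev/null
df -k "$target"                                             # Available on the holding volume drops by ~10 MB
rm -f "$target/df_testfile"
```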


 
LVL 3

Accepted Solution

by:guynumber5764
guynumber5764 earned 25 total points
To answer your 2nd question ("how can I be convinced?"): du -l /home should be different from du /home. The difference is the amount of space you are saving.
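A sketch of that check on a throwaway directory (swap in /home for the real measurement; the filenames are invented):

```shell
mkdir -p /tmp/savedemo
cd /tmp/savedemo
dd if=/dev/zero of=app bs=1024 count=512 2>/dev/null
ln -f app copy1
ln -f app copy2
real=$(du -sk . | cut -f1)      # shared blocks counted once
naive=$(du -skl . | cut -f1)    # every hard link charged in full
echo "saved by hard links: $((naive - real)) KB"
```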