Solved

confused on hardlinks

Posted on 2003-10-28
Medium Priority
286 Views
Last Modified: 2010-07-27
Hello... probably simple question, but I'm confused.

Hard links... I have some folder called /home/peon/bin  which has a few binaries.

I have another folder called /home/newpeon/bin that will contain the exact same binaries.

So I used the command   cp -al /home/peon/bin/* /home/newpeon/bin/  to make links, right?

Well... everything went fine, but look...

$ cd /home/peon && du --max-depth=1 .
8       ./.ssh
22272   ./bin

$ cd /home/newpeon && du --max-depth=1 .
8       ./.ssh
22272   ./bin

They are the same!  If they are hardlinks, shouldn't one be MUCH smaller???

Ok, so then to verify that they are indeed hardlinks I do the following

ls -i /home/peon/bin  (and the same for newpeon) and sure enough the inodes are all identical.

Same inodes but they both take up space??

What am I missing here?

Thanks.
Question by:s_mack
13 Comments
 
LVL 20

Expert Comment

by:Gns
ID: 9635474
Do "ls -l home/newpeon/bin /home/peon/bin". Pay attention to the link count field (the second one, after the -rwx------). If these are hard linked files, they will be regular files with a link count greater than 1. As you ls -i show, the files are in fact just references to the same inodes in different directories.

This is obvious to you & me ... but not to du.
du just blithely counts what's in each directory it walks. After all, it would be even more wrong to report the size as 0 :-).
The obvious follow-up, "Can't I rely on du?", then has an equally obvious answer: "not further than you can throw it"... :-)

df wouldn't change though, apart from the allocation of the directory itself.

-- Glenn
 
LVL 20

Expert Comment

by:Gns
ID: 9635519
BTW, we're touching on the reason why you cannot make hard links across a filesystem boundary here... You might have the same inode number on separate filesystems, so it would be a real no-no to allow...
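Try it across two filesystems and the kernel refuses outright (assuming /home and /tmp are separate fs's here; the file name is made up, and the exact wording varies with the ln version):

$ ln /home/peon/bin/foo /tmp/foo
ln: creating hard link `/tmp/foo' to `/home/peon/bin/foo': Invalid cross-device link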

-- Glenn
 

Author Comment

by:s_mack
ID: 9637788
hmm... so how does one find an accurate count of how much space a user is using?

 

Author Comment

by:s_mack
ID: 9637807
and... what is the point of the -l flag with du???  If it counts the links anyway, why have an option specifically for counting links?
 
LVL 3

Expert Comment

by:guynumber5764
ID: 9640164
> hmm... so how does one find an accurate count of how much space a user is using?

I would use the owner of the file, rather than the path, as my criterion. But that still doesn't solve your problem. We'll get back to why in a sec.

I think that you are confusing hard links with symbolic links. The difference quickly becomes apparent when peon deletes his bin directory. A symbolic link would break, because the file it points to no longer exists. A hard link would be fine, because it *is* the linked file under a different filename (the link count just gets decremented). The trick is that the original directory entry and the hard link created later are totally symmetrical.
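A quick demo of the difference (a made-up session):

$ echo data > orig
$ ln orig hard
$ ln -s orig soft
$ rm orig
$ cat hard
data
$ cat soft
cat: soft: No such file or directory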

As for du -l: the option affects what du does when it encounters more than one occurrence of the same file (i.e. multiple hard links). An example explains it best (x, y and z are hard links to the same file)...

[~/tmp]$ ls -l
total 1440
lrwxrwxrwx    1 peon     peons         1 Oct 29 00:04 s -> x
-rw-------    3 peon     peons    486331 Sep 18 18:08 x
-rw-------    3 newpeon  peons    486331 Sep 18 18:08 y
-rw-------    3 peon     peons    486331 Sep 18 18:08 z
[~/tmp]$ du -h .
480K    .
[~/tmp]$ du -lh .
1.5M    .

Getting back to the original question of per-user accounting: look at the above directory listing and see if you can decide who is supposed to get dinged for the space occupied by x, y and z, and how badly. Just to make it fun, the original directory entry was deleted and belonged to someone else (it really was).
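FWIW, a rough per-owner tally that counts each inode only once could look like this (GNU find/sort/awk assumed; note it still charges each shared file wholly to its owner, which is exactly the ambiguity above):

$ find /home -user peon -type f -printf '%i %s\n' | sort -un | awk '{t += $2} END {print t " bytes"}'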

 
LVL 3

Expert Comment

by:guynumber5764
ID: 9640167
Oops sorry about the tags...thought I was on /. for a sec...
E.
 
LVL 20

Expert Comment

by:Gns
ID: 9640390
> BTW, we're touching on the reason why you cannot make hard links across a filesystem boundary here... You might have the same inode number on separate filesystems, so it would be a real no-no to allow...
Not to mention that there is no provision in any fs to reference an inode on another fs... other than symbolically (covered nicely by guynumber5764... Thanks BTW for the nice illustration of why du will never know what to do with hard links).

-- Glenn
 

Author Comment

by:s_mack
ID: 9644097
> BTW, we're touching on the reason why you cannot make hard links across a filesystem boundary here... You might have the same inode number on separate filesystems, so it would be a real no-no to allow...

/home is the fs in question, so: same fs. Each user gets a chrooted jail with their own bin, lib, etc.; those are the ones that are hardlinked. There's a "template" peon that I cp'd all the files to... then cp -al'd all of those to the other peons.

ok ok... this is where it becomes apparent that you guys understand what you're talking about, while I'm still stuck on "if Johnny has two apples and he eats one..."

Forget the whys for a second... how can I be convinced that I am in fact saving space by using cp -al instead of just cp? Really, which user gets dinged for it is insignificant, because the space taken up by the "shared" directories (bin and lib) is 0.0008% of each user's total allowed space. But with 64,000 peons... it adds up. So I just wanted to be sure that I was using less space.

And for some strange reason, df has been #%@#@ up for ages on this system. It has reported the /home fs as 36% full since time began. At one point I'm sure it was, but then we deleted the entire /home and it was still 36% full... filled it right up to the point of disk errors... still 36% full, with the exact same number of KB reportedly in use. So I've given up on df and was hoping du could do what I want.
 
LVL 20

Assisted Solution

by:Gns
Gns earned 100 total points
ID: 9644250
> Forget the whys for a second... how can I be convinced that I am in fact saving space by using cp -al instead of just cp? Really, which user gets dinged for it is
> insignificant, because the space taken up by the "shared" directories (bin and lib) is 0.0008% of each user's total allowed space. But with 64,000 peons... it adds
> up. So I just wanted to be sure that I was using less space.
You answered that yourself! Your ls -i shows that only one inode has been allocated per binary (well..), and the ls -l link count (_huge_ in your case then:-) shows the same thing... With that kind of setup, the space savings would be... huge.
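If you want one more quick check (GNU stat assumed; the file name and numbers here are made up):

$ stat -c 'inode %i, %h links, %s bytes' /home/peon/bin/foo /home/newpeon/bin/foo
inode 12345, 2 links, 14234 bytes
inode 12345, 2 links, 14234 bytes

Same inode, one set of data blocks; the link count is just how many directory entries point at it.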

> And for some strange reason, df has been #%@#@ up for ages on this system. It has reported the /home fs as 36% full since time began. At one point I'm sure it
> was, but then we deleted the entire /home and it was still 36% full... filled it right up to the point of disk errors... still 36% full, with the exact same number of
> KB reportedly in use. So I've given up on df and was hoping du could do what I want.

df should work... Have you fsck'd it recently? Did you run out of space or inodes?

-- Glenn
 

Author Comment

by:s_mack
ID: 9644516
you can run out of inodes?!?!?!
 
LVL 20

Expert Comment

by:Gns
ID: 9644612
Yes. On most filesystems this is something you set at creation time (nicely obfuscated as "inode density" parameters:-).
Check out the fs you are using (man mke2fs for ext2/ext3...).
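You can watch your inode consumption with df -i (the numbers below are made up):

$ df -i /home
Filesystem      Inodes  IUsed   IFree IUse% Mounted on
/dev/hda3      1310720 524288  786432   40% /home

The density itself is fixed when the fs is created, e.g. mke2fs -i 4096 gives roughly one inode per 4 KB of space (mkfs time only; it wipes the fs!).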

-- Glenn
 
LVL 3

Expert Comment

by:guynumber5764
ID: 9645940

df should work fine, but remember it works with volumes (partitions), not directories.

If there is a discrepancy between it and du, make sure your volumes are mounted the way you think they are.

As an experiment, do a df -v, then create a large (real, not linked) file and df -v again. One of your volumes (not always the one you expect) will show a difference in available space. If the file shows up on the / volume instead of the /home volume, then you have to empty the /home directory and then mount /home.
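Concretely, something like this (the file name is made up; dd just writes 100 MB of zeros):

$ df -h /home
$ dd if=/dev/zero of=/home/bigtest bs=1M count=100
$ df -h /home
$ rm /home/bigtest

Whichever filesystem's available space drops by ~100M is the one really holding /home.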


 
LVL 3

Accepted Solution

by:guynumber5764
guynumber5764 earned 100 total points
ID: 9645959
To answer your 2nd question ("how can I be convinced?"): du -l /home should be different from du /home. The difference is the amount of space you are saving.
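That is, something like this (GNU du; the totals below are invented for illustration):

$ du -sh /home
2.1G    /home
$ du -slh /home
9.8G    /home

The second figure is what /home would cost if every hard link were a separate copy; the gap between the two is your savings.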

