perlperl asked:
libraries

I understand the difference between static (.a) and dynamic (.so) libraries used at link time. However, I do not understand why one is preferred over the other. In my application I see that sometimes .a is used and sometimes .so.

1) The whole point of a .so is that it is dynamic (like a DLL on Windows). If a binary is linked against a .so, there is no need to recompile the entire binary and restart it. One can recompile only the code belonging to the .so and just drop it on the cluster without restarting the binary.

2) However, if we link the binary against the static .a library, then we need to rebuild the .a, recreate the binary by linking against the new .a, then drop it on the cluster, and a restart may be required.

Clearly static does not look like a good choice, so why would someone link against a static .a library? I am just trying to understand a use case that will help me see why we do it that way.
ASKER CERTIFIED SOLUTION
Kent Olsen (United States of America)
(This solution is only available to Experts Exchange members.)
perlperl (ASKER):
So if static linking has such a performance benefit, why not link everything statically? Why even use .so or DLLs? :)

Also, is my understanding correct that when we link a binary against a .so and later deploy (update) that .so, we don't have to reload the binary?
SOLUTION
(This solution is only available to Experts Exchange members.)

SOLUTION
(This solution is only available to Experts Exchange members.)
One point where static linking has benefits over dynamic libraries is simply deployment. If you need to run an application on a machine without going through a more or less complex setup process, static is the way to go. For example, if you want one binary image to run on quite a variety of Linux distributions, you would have to ensure that exactly the matching version of libgcc is installed, or your application will not run. This issue can be remedied by linking with '-static-libgcc -Xlinker -static'. That way you can manage to run one single image on pretty much any Linux system (as an anecdote on the side, that even worked back to a Gentoo from 2005).
Some applications are created as a set of utilities. Think of ImageMagick for command-line processing of images. There are commands shipped as executables (listed here from Windows, but the same applies to UNIX-based systems):

compare.exe, composite.exe (removing .exe from now on), conjure, convert, dcraw, emfplus, ffmpeg, hp2xx, identify, imdisplay, mogrify, montage, and stream.

They implement the interface (the name of the command typed on the command line, the various forms of arguments) and call the core functionality that overlaps between them. There is also a statically linked library version. In the dynamically linked version, however, the commands are executables of only about 200 KB each. The core libraries are DLLs that implement an API and do not care about the outer interface.

Also, such a separation may help when writing cross-platform applications in cases like this one.