
BIND memory usage

I have just upgraded from BIND 9.7 to BIND 9.8.1 Patch 1 and I'm noticing that occupied physical memory is increasing to values larger than usual. Whilst in the former release the occupied physical memory stabilised at approximately 4GB, I am now noticing that the occupied memory is using all 16GB available to the server.

Was there any major change, or could this be a memory leak in the named daemon process? I am using the Solaris 10 operating system running on Oracle hardware with SPARC architecture.

Top Expert 2015

What are you serving? Mine uses about 100MB proxying the whole net and Active Directory for 1000 users or so.


Hi gheist

I'm serving around 60k users; this is the DNS server of an ISP. This was working fine with the previous BIND 9.7. Last week I even reinstalled the same BIND on another SPARC server to eliminate the possibility of hardware failure, but when I put it in the live environment it did the same.

Top Expert 2015

The max-cache-size parameter specifies how many records are cached.
practical description here:

I get a feeling that the default 32MB cache is not enough for you, and your server lags because it makes parallel queries for all clients all the time; that takes a lot of RAM for network buffers.
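For illustration, such a limit goes in the options block of named.conf; the 2G value below is only an assumed example, not a recommendation:

```
options {
    // Illustrative cap on the recursive cache; size it to your RAM.
    // Accepts K/M/G suffixes; 0 (or "unlimited") removes the cap.
    max-cache-size 2G;
};
```

After changing it, reload with `rndc reconfig` or restart named.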

PS MaraDNS is significantly more compact.


The default setting specifies that there is no limit for the cache size, thus it occupies all the memory. However, in previous BIND releases this occupied memory did not exceed 4GB, whilst with the new release we are using all of the 16GB available to the system.

Is this expected behavior?
Top Expert 2015

"occupying all memory" means using swap.....
maybe limit it in some way like half of RAM.

Is it DNS cache or authority or both?

MaraDNS is really smaller and faster.


Not using swap; physical memory.

We are using both an authoritative and recursive server.

We will limit it ourselves.
Top Expert 2015

The old BIND was 32-bit and thus limited to 4GB;
the new one looks like a 64-bit build and is using even more.

Having the cache limited may save you from cache pollution by people tunneling connections via DNS and others rolling names quickly.


Yes, it seems so.

Is there a command I can use to check if it is 32-bit or 64-bit?

Top Expert 2015

file `which named`
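If you want something reusable, that check can be wrapped in a small shell function; this is just a sketch, and the /usr/sbin/named path is the one from this thread. Both Solaris and Linux `file` print "32-bit" or "64-bit" for ELF binaries, though the surrounding wording differs:

```shell
#!/bin/sh
# Sketch: classify a binary as 32- or 64-bit from `file` output.
bitness() {
    case "$(file "$1")" in
        *64-bit*) echo "64-bit" ;;
        *32-bit*) echo "32-bit" ;;
        *)        echo "unknown" ;;
    esac
}

bitness /usr/sbin/named     # path taken from this thread; adjust as needed
```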


Hi Gheist,

I think that the 64-bit build is the issue.

Still, the above command didn't return anything.

Is there any other command with which I can check whether the BIND is 64-bit?

Top Expert 2015

$ file `which nslookup`
/usr/bin/nslookup: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.16, stripped

(this consults a file named /etc/magic to identify files)

$ ldd `which nslookup`
        blah blah
        libc.so.6 => /lib64/libc.so.6 (0x0)
        libgssapi_krb5.so.2 => /usr/lib64/libgssapi_krb5.so.2 (0x0)
        blah blah blah

(ld.so resolver)


Hi gheist

The command worked, but it showed that the BIND is 32-bit  :(

Sun Microsystems Inc.   SunOS 5.10      Generic January 2005
# file `which named`
/usr/sbin/named:        ELF 32-bit MSB executable SPARC Version 1, dynamically linked, stripped
Top Expert 2015

Is the system using RAM as a disk cache? Or do you see 16GB used by a single process?

Try this http://www.maradns.org/tutorial/bind2csv2.html
It really saves memory compared to BIND and AD which were in place before.

Splitting recursive and authoritative duties is a better architecture for BIND itself. Other DNS servers do not have the same problem.
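One single-box approximation of that split can be sketched with BIND views in named.conf; separate instances or hosts are the cleaner version, and every name and network range below is a placeholder:

```
// Sketch only: recurse for your own clients,
// answer authoritatively (no recursion) for everyone else.
acl "clients" { 10.0.0.0/8; };        // placeholder range for the ISP's users

view "resolver" {
    match-clients { "clients"; };
    recursion yes;
};

view "authoritative" {
    match-clients { any; };
    recursion no;
    zone "example.net" {              // placeholder zone
        type master;
        file "example.net.zone";
    };
};
```

Note that once any view is defined, every zone must live inside a view.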


The system is using the RAM and there is no particular process using all the memory.

Below find the output from the "top" command.

last pid: 29251;  load avg:  4.02,  4.05,  4.04;       up 33+20:43:46  16:14:44
39 processes: 37 sleeping, 2 on cpu
CPU states: 88.1% idle,  7.9% user,  4.1% kernel,  0.0% iowait,  0.0% swap
Memory: 16G phys mem, 1170M free mem, 2048M total swap, 2048M free swap

 10212 named     35  53    0 1447M 1411M cpu/35 2020.3 10.80% named
 10366 root       1  59    0   11M 8320K sleep   20.8H  0.15% snmpd
 29250 root       1  59    0 3712K 2728K cpu/39   0:00  0.01% top
  6949 noaccess  18  59    0  175M   96M sleep   75:02  0.00% java
  6605 root       3  59    0 6136K 2840K sleep    0:12  0.00% automountd
 29235 dcauchi    1  59    0 7752K 6424K sleep    0:00  0.00% sshd
    10 root      12  59    0   16M   11M sleep    0:57  0.00% svc.startd
 24189 root      26  59    0   10M 8056K sleep    2:42  0.00% nscd
  6770 root       1  59    0 9776K 3320K sleep    1:52  0.00% sendmail
    12 root      15  59    0   11M 8640K sleep    3:17  0.00% svc.configd
  6954 root       4  59    0   12M 5648K sleep    0:49  0.00% inetd
    86 root       7  59    0 4328K 1736K sleep    1:02  0.00% devfsadm
  6673 root       1  59    0 1760K  864K sleep    0:07  0.00% utmpd
   709 root       9  59    0 6336K 5040K sleep   11:26  0.00% picld
  6617 root      27  59    0   20M   10M sleep    1:23  0.00% fmd
Top Expert 2015
The feature is called "unified buffer cache" or similar: the system uses all otherwise-unused memory as a read cache for the disk.
Nothing to worry about; the system will discard less-used data as programs need the memory.

I see you are monitoring your system using SNMP; if it has some trend analysis (CPU and network packets/s), you are more or less covered for the future. Memory usage of BIND will not exceed 4GB; in those 1.5GB of RAM it stores DNS records in a more compact format than the config files.

So far I see no anomaly; you might also consider running process accounting to identify trends in BIND's resource usage.

Also, when reading network stats you need to know the packet rate, because gigabit ethernet is limited to roughly 90k packets/s in and the same out. That applies to short DNS requests (say 50 bytes), which may be accounted wrongly, thus giving no indication that your GigE network card is actually maxed out at a mere 5MB/s.
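The arithmetic behind that last point, as a quick sketch; the packet rate and payload size are the rough figures from this thread, not measurements:

```shell
#!/bin/sh
# Back-of-envelope check: a GigE card capped at ~90k packets/s of
# ~50-byte DNS queries moves only a few MB/s of payload, so byte
# counters alone won't show that the NIC is maxed out.
pps=90000       # rough small-packet ceiling for GigE (figure from the thread)
payload=50      # bytes per DNS query (figure from the thread)
echo "$(( pps * payload / 1000 )) KB/s of payload"   # 4500 KB/s, i.e. ~4.5 MB/s
```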