
Solved

ulimit for core file size -new

Posted on 2011-04-20
Medium Priority
963 Views
Last Modified: 2012-05-11
Below are my system ulimit settings. The core file size is set to zero by default, but a core dump file of 2GB was produced a week back. Do you know what could be the reason?


$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
pending signals                 (-i) 1024
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1191936
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

$ less /etc/security/limits.conf
#<domain>      <type>  <item>         <value>
#

#*               soft    core            0
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#@student        -       maxlogins       4

Question by:wasman
11 Comments
 
LVL 8

Expert Comment

by:point_pleasant
ID: 35433492
Check /root/.bashrc for anything that overrides the ulimit -c setting (something like "ulimit -c unlimited").
 

Author Comment

by:wasman
ID: 35433895
Nope, I just checked; it doesn't have any ulimit settings. In fact I checked all the other users too, and none of them have ulimit settings. The server that produces the core dump runs as a different user; I start the server using the sudo command. Of course there have been no issues with permissions so far.

For some reason my box is producing a 2GB core dump file (regardless of the ulimit settings). At the same time I am not able to successfully analyse the core dump file, because my dump analyser fails in the middle saying the file size was exceeded. It doesn't help even if I change my file size limit to unlimited.

Also, there is enough disk space on the box.
 
LVL 5

Assisted Solution

by:balasundaram_s
balasundaram_s earned 400 total points
ID: 35433938
With 'ulimit' there is a soft limit and a hard limit, so chances are the hard limit for the 'core' file size is 2GB.

You may verify with:

$ ulimit -Ha

 
LVL 5

Expert Comment

by:balasundaram_s
ID: 35433953
Or the hard limit might be 'unlimited', and the size of the core file produced just happened to be 2GB.
 

Author Comment

by:wasman
ID: 35433955
$ ulimit -Ha
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
pending signals                 (-i) 1024
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 10240
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
stack size              (kbytes, -s) unlimited
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1191936
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
 
LVL 38

Expert Comment

by:wesly_chen
ID: 35434663
From the manpage:

 A soft limit may be increased up to the value of the hard limit; a hard limit cannot be increased once it is set.
 If neither -H nor -S is specified, both the soft and hard limits are set.

So your previous ulimit setting for core is the soft limit, which can be increased. To set the hard limit instead:

$ ulimit -Hc 0
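The soft/hard distinction described above can be sketched with Python's stdlib `resource` module (a minimal illustration added for reference, not code from the thread):

```python
import resource

# getrlimit returns the (soft, hard) pair for a resource;
# this mirrors `ulimit -Sc` / `ulimit -Hc` in the shell.
soft, hard = resource.getrlimit(resource.RLIMIT_CORE)

# Any process may raise its own soft limit up to the hard limit,
# which is why a soft core limit of 0 does not guarantee "no core files"
# while the hard limit stays unlimited.
resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))

# Lowering the hard limit is one-way for unprivileged processes:
# after this, neither this process nor its children can re-enable cores.
resource.setrlimit(resource.RLIMIT_CORE, (0, 0))
```

Running `ulimit -Hc 0` in a login script has the same one-way effect for everything started from that shell.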
 

Author Comment

by:wasman
ID: 35435026
This doesn't answer my original question... I'm not sure you understood my issue and question.
 
LVL 38

Accepted Solution

by:
wesly_chen earned 1600 total points
ID: 35435132
OK, your ulimit setting for core is the "soft" limit, from the information you provided.
Since it is "soft", it can be overridden by users or applications.
If you didn't find any environment setting for ulimit, then the best bet is that the application itself has a core size setting and overrides the system setting, since the limit is "soft".
 

Author Comment

by:wasman
ID: 35435454
Are you talking about the soft limit (#*               soft    core            0) in /etc/security/limits.conf? I don't think my application has any core file size settings... but when I run

$ ulimit -Ha
core file size          (blocks, -c) unlimited

the hard limit is set to unlimited, right?
 
LVL 38

Assisted Solution

by:wesly_chen
wesly_chen earned 1600 total points
ID: 35435561
>  my application is having any core file size settings
It might not be in a script; the application binary itself can call a system function to set the core dump size.

> $ ulimit -Ha
> core file size          (blocks, -c) unlimited
Your hard limit for core is "unlimited", which means there is no limitation on the user's application.
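As a sketch of what such a server binary might do at startup (hypothetical code, not the poster's application), a single `setrlimit` call is enough to undo an inherited soft limit of 0 whenever the hard limit allows it:

```python
import resource

def enable_core_dumps():
    """Raise the soft core-file limit up to the hard limit, so a crash
    produces a core dump even if the parent shell had `ulimit -c 0`."""
    soft, hard = resource.getrlimit(resource.RLIMIT_CORE)
    if soft != hard:
        resource.setrlimit(resource.RLIMIT_CORE, (hard, hard))
    return resource.getrlimit(resource.RLIMIT_CORE)

print(enable_core_dumps())
```

If the application does something like this, the only way to suppress its core files from outside is the hard limit (`ulimit -Hc 0` before starting it, or a `hard core 0` entry in /etc/security/limits.conf).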
 

Author Closing Comment

by:wasman
ID: 35461207
Thank you for taking the time to help me. Have a nice day.
