Core file not being dumped - Redhat 6.1

Posted on 2000-02-09
Medium Priority
Last Modified: 2010-04-22
I have Red Hat 6.1 running on a Pentium II. The trouble is, when a program SIGSEGVs I don't get a core file. I tried setting ulimit -c 10000000, but it didn't help.

I appreciate any insight.

Question by:Marthi

Accepted Solution

edskee earned 400 total points
ID: 2504346
I'm not sure which shell you are using, but some shells (tcsh, the one I prefer) have a built-in for core dump sizes. Try adding this command to your login script (.cshrc if you are using tcsh, .bashrc if using bash, etc.):

limit coredumpsize unlimited

If a core dump limit is set in there, it will keep you from getting core files. I don't know how this interacts with ulimit (I don't have it on my system, so I can't say what it does), but the line above works for me.
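A minimal sketch of the two syntaxes, side by side (assuming a default setup where the hard limit allows raising the soft limit):

```shell
# tcsh/csh syntax (add to ~/.cshrc):
#   limit coredumpsize unlimited
# bash/sh equivalent (add to ~/.bashrc):
ulimit -c unlimited
ulimit -c        # print the core-file limit now in effect
```

Either form only affects the current shell and the processes it starts, which is why it belongs in the login script.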

Expert Comment

ID: 2504351
Ah, my bad: it's not a shell option, it's a shell built-in command (I've never used it that way before).

Type limit and see what it says for coredumpsize. If it's anything other than unlimited, use the command I gave you.

Author Comment

ID: 2504623
Thanks for your suggestion.
I use bash, and in bash the equivalent shell command is ulimit. When I type ulimit at my command line, I get back unlimited.

Here's what the help on ulimit says
ulimit: ulimit [-SHacdfmstpnuv [limit]]
    Ulimit provides control over the resources available to processes
    started by the shell, on systems that allow such control.  If an
    option is given, it is interpreted as follows:
        -S      use the `soft' resource limit
        -H      use the `hard' resource limit
        -a      all current limits are reported
        -c      the maximum size of core files created
        -d      the maximum size of a process's data segment
        -m      the maximum resident set size
        -s      the maximum stack size
        -t      the maximum amount of cpu time in seconds
        -f      the maximum size of files created by the shell
        -p      the pipe buffer size
        -n      the maximum number of open file descriptors
        -u      the maximum number of user processes
        -v      the size of virtual memory
    If LIMIT is given, it is the new value of the specified resource.
    Otherwise, the current value of the specified resource is printed.
    If no option is given, then -f is assumed.  Values are in 1k
    increments, except for -t, which is in seconds, -p, which is in
    increments of 512 bytes, and -u, which is an unscaled number of
    processes.

I tried ulimit -c 1000000000, but still no core file :-(
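When a numeric limit doesn't seem to take effect, it is worth comparing the soft and hard limits, since a non-root user cannot raise the soft limit above the hard ceiling (a minimal check, assuming bash):

```shell
# Soft limit: what the kernel actually enforces for new core files.
# Hard limit: the ceiling a non-root user can raise the soft limit to.
ulimit -S -c    # show the current soft core-file limit
ulimit -H -c    # show the current hard core-file limit
```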


Author Comment

ID: 2504645
Yay! I got a core file per your suggestion.
I used ulimit -c unlimited (instead of giving it a number), and ulimit -a showed all the settings.
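The fix above can be verified end to end by provoking a crash after raising the limit (a sketch; on Red Hat 6.1 the dump appears as a file named core in the crashing process's working directory, while newer kernels may rename or redirect it via /proc/sys/kernel/core_pattern):

```shell
# Raise the core file size limit, then provoke a crash to verify:
ulimit -c unlimited
sh -c 'kill -SEGV $$'          # child shell dies from SIGSEGV
echo "child exit status: $?"   # 139 = 128 + 11 (signal number of SIGSEGV)
# Look for the dump in the process's working directory:
ls -l core* 2>/dev/null
```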

Thanks a lot !
