Core file not being dumped - Redhat 6.1

I have Redhat 6.1 running on a Pentium II. The trouble is that when a program SIGSEGVs, I don't get a core file. I tried setting ulimit -c 10000000, but it didn't help.

I appreciate any insight.

Thanks,
Kailash

edskee commented:
I'm not sure which shell you are using, but some shells (tcsh, the one I prefer) have a limit setting for core dump sizes. Try adding the following command to your login script (.cshrc if you are using tcsh, .bashrc if using bash, etc.):

limit coredumpsize unlimited

If you have a core dump limit set in there, it will keep you from getting core files. I don't know how this interacts with ulimit (I don't even have it on my system, so I'm not sure what it does), but the line above works for me.
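
For example, in tcsh that could look something like this (a rough sketch; whether it belongs in ~/.cshrc or ~/.login depends on how your account is set up):

    # ~/.cshrc (tcsh) -- remove any cap on core dump size
    limit coredumpsize unlimited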

edskee commented:
Ah, my bad: it's not a shell option, it's a shell command (I had never used it that way before).

Type limit and see what it says for coredumpsize. If it's something other than unlimited, use the command I gave you.
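
For instance, checking and fixing it might look something like this (the "0 kbytes" value is just illustrative):

    % limit coredumpsize
    coredumpsize    0 kbytes
    % limit coredumpsize unlimited
    % limit coredumpsize
    coredumpsize    unlimited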

Marthi (Author) commented:
Thanks for your suggestion.
I use bash, and in bash the equivalent command is ulimit. When I type plain ulimit at my command line, it reports unlimited.

Here's what the help on ulimit says
ulimit: ulimit [-SHacdfmstpnuv [limit]]
    Ulimit provides control over the resources available to processes
    started by the shell, on systems that allow such control.  If an
    option is given, it is interpreted as follows:
   
        -S      use the `soft' resource limit
        -H      use the `hard' resource limit
        -a      all current limits are reported
        -c      the maximum size of core files created
        -d      the maximum size of a process's data segment
        -m      the maximum resident set size
        -s      the maximum stack size
        -t      the maximum amount of cpu time in seconds
        -f      the maximum size of files created by the shell
        -p      the pipe buffer size
        -n      the maximum number of open file descriptors
        -u      the maximum number of user processes
        -v      the size of virtual memory
   
    If LIMIT is given, it is the new value of the specified resource.
    Otherwise, the current value of the specified resource is printed.
    If no option is given, then -f is assumed.  Values are in 1k
    increments, except for -t, which is in seconds, -p, which is in
    increments of 512 bytes, and -u, which is an unscaled number of
    processes.
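
Note that, per the help above, plain ulimit with no option reports the -f (file size) limit, so the core file limit itself has to be queried explicitly, e.g.:

    ulimit -Sc    # soft limit on core file size, in 1k blocks
    ulimit -Hc    # hard limit on core file size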


I tried ulimit -c 1000000000, but still no core file :-(

Kailash

Marthi (Author) commented:
Yay! I got a core file, per your suggestion.
I used ulimit -c unlimited (instead of giving it a number). ulimit -a showed all the settings.
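
For anyone else hitting this, the whole sequence looks roughly like the following (a sketch; crash.c is just an illustrative test program and assumes gcc is available):

    # in the current bash session, lift the core file size limit
    ulimit -c unlimited
    ulimit -a        # "core file size" should now read unlimited

    # a deliberately crashing one-line test program, to confirm a core gets written
    echo 'int main(void) { *(volatile int *)0 = 1; return 0; }' > crash.c
    gcc -g crash.c -o crash
    ./crash          # should die with "Segmentation fault (core dumped)"
    ls -l core*      # the core file should appear in the current directory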

Thanks a lot!