  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 1701

SSH server unreachable when the server is overloaded - how to set up sshd with maximum priority

Hello,

Sometimes my webserver gets overloaded (SpamAssassin, buggy URL rewriting, ...). I know how to manage that; my only concern is how to reach the SSH server at that time. What I've noticed is that even when the server is overloaded (load average > 10), every internet service works almost fine (Apache, MySQL, qmail, etc.), but there is no way to log in via SSH, either with my private key or by typing the password (using PuTTY or any other SSH client). The only way I know is to manage it via Webmin: kill the overloading process, wait for the load average (LA) to drop below 1, and then I can log in...

Here is my sshd config, stripped of the # comments (/etc/ssh/sshd_config):

Protocol 2
SyslogFacility AUTHPRIV
X11Forwarding yes
Subsystem       sftp    /usr/libexec/openssh/sftp-server

I tried to change the priority via Webmin (-20, maximum priority) and also looked at some nice/renice capabilities, with not much success.

To summarize: how do I give the SSH server maximum priority over every other running process when the server reaches a given load average?

My server is running Red Hat Linux 7 (Enigma).

Any clues?

Thanks
Asked: FFT
2 Solutions
 
FFTAuthor Commented:
I don't think it goes that high. I've taken a look at your links, but I find these to be really "heavy" solutions compared to my problem. Bandwidth is not the issue here, but CPU load is. OK, you could tell me I should just upgrade my server ;-) But I'm sure there is a smarter way to do this than installing more and more software. Just for your info, I have plenty of bandwidth when I try to connect through SSH, even when the server is overloaded; this is really a priority problem between processes, not bandwidth (I think... can't be 100% sure anyway).

Thanks for your info anyway.
 
wesly_chenCommented:
Hi,

  Check your /etc/ssh/sshd_config for "ReverseMappingCheck" or "VerifyReverseMapping".
Set it to "no" and restart sshd (/etc/init.d/sshd restart).

   Sometimes the reverse DNS lookup makes the ssh login take a very long time (sometimes 10 minutes...).

   Enigma is Red Hat 7.2, right? Try to download and apply most of the patches from
http://download.fedoralegacy.org/redhat/7.2/updates/i386/

   Especially the kernel; sometimes a kernel upgrade will fix the high-load issue (a kernel bug that doesn't free up the CPU from dead processes).

Regards,

Wesly  
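
As a concrete fragment, the relevant sshd_config line might look like the sketch below. Hedged: the exact option name depends on the OpenSSH version (older releases use ReverseMappingCheck, 3.x-era releases use VerifyReverseMapping, and later releases replaced both with UseDNS), so check `man sshd_config` on your box first.

```
# /etc/ssh/sshd_config -- skip the reverse-DNS lookup at login
# (on older OpenSSH use "ReverseMappingCheck no";
#  on newer OpenSSH use "UseDNS no")
VerifyReverseMapping no
```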
 
pablouruguayCommented:
X11 forwarding causes long logins too!
 
lhboiCommented:
Hi FFT,
Please check whether your server runs out of memory. In that situation, processes run slowly because they are swapped out and in too much.
Boi
 
FFTAuthor Commented:
Hello,

thanks for your answer. For now, I've just changed the line in my /etc/init.d/sshd script, so it now reads

start-stop-daemon --start --quiet --pidfile /var/run/sshd.pid -N -20 --exec /usr/sbin/sshd -- $SSHD_OPTS

Instead of

start-stop-daemon --start --quiet --pidfile /var/run/sshd.pid --exec /usr/sbin/sshd -- $SSHD_OPTS

Here, I use the -N option of the start-stop-daemon command so it starts the sshd process at maximum priority (-20).

Unfortunately, when the system runs out of memory, I'm not sure that will change anything.

ReverseMappingCheck and X11 forwarding were already set to "no" in the /etc/ssh/sshd_config file, with no more success.

Thanks.
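
The "given load average" part of the question could be handled with a small watchdog run from cron. This is only a hedged sketch, not something from the thread: the threshold value, the pidfile path, and the script name are assumptions, and renice needs root.

```shell
#!/bin/sh
# Hypothetical watchdog (assumptions: pidfile path, threshold value).
# Run it from cron every minute; when the 1-minute load average
# exceeds THRESHOLD, renice sshd to the highest priority.
THRESHOLD=10
PIDFILE=/var/run/sshd.pid

# /proc/loadavg begins with the 1-minute average, e.g. "0.42 ..."
load=$(cut -d' ' -f1 /proc/loadavg)
load_int=${load%%.*}   # integer part, for a plain -ge comparison

if [ "${load_int:-0}" -ge "$THRESHOLD" ] && [ -r "$PIDFILE" ]; then
    renice -n -20 -p "$(cat "$PIDFILE")" \
        || echo "renice failed (are we root?)" >&2
fi
```

A crontab entry such as `* * * * * /usr/local/sbin/boost-sshd.sh` (path hypothetical) would run it every minute.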
 
lhboiCommented:
Hi FFT,
If your system runs out of memory, you should add more memory to it. Using maximum priority for the sshd process doesn't help in this case, because the sshd process keeps getting swapped out between your keystrokes on the ssh client.
If you cannot add memory to your system, try to limit the number of instances of other processes so that your system always has enough memory for the sshd process.
Regards,
Boi
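
As one hedged illustration of that advice: if Apache is the biggest memory consumer, capping the number of server children in httpd.conf keeps a runaway site from eating all the RAM. The value 50 below is purely illustrative; tune it to your RAM size and per-child footprint.

```
# httpd.conf (Apache 1.3 / 2.0 prefork era) -- cap concurrent
# children so a traffic spike cannot consume all available RAM
MaxClients 50
```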
 
FFTAuthor Commented:
Hello lhboi,

In fact I've got plenty of RAM (1.5 GB), but that's not the problem: if for one reason or another a buggy script decides to fill the RAM, size won't matter (it may take more time to overload the server, but it will still overload it). It's more of a software problem here: how to keep a protected memory/CPU area that still allows connecting through SSH...
 
wesly_chenCommented:
Did you upgrade the kernel on your Red Hat?
If it is a kernel bug, then it should be fixed once you upgrade the kernel.
 
lhboiCommented:
Hi FFT,
Because you have plenty of RAM, you normally do not need swap space. So please try turning off swap. There are two possibilities then:
1) The buggy programs do not have enough memory to run, and they fail.
2) If you are lucky, sshd can run without being swapped out and in.
This is my 2 cents.
Boi Le
 
FFTAuthor Commented:
To Wesly: I've not seen a reported memory bug for the kernel I currently use (2.4.20-8). I've done a quick search on Google and Red Hat with no success (merely security holes, but no memory problems ;-), so I guess this is not the point.

To lhboi: simple question, how do I turn off swap space? I have a swap partition (which is a bit small: 756 MB; it should be twice the RAM = 3 GB, I know...)

Thanks
 
lhboiCommented:
Swap is turned on during Linux boot by the command swapon, which is invoked from the file /etc/rc.d/rc.sysinit. To disable swap, you can use the command "swapoff -a", or you can comment out the swapon command in that file.
Regards,
Boi Le
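
A minimal sketch of the immediate step, with swapoff guarded so it is only attempted as root (the rc.sysinit edit itself has to be done by hand in an editor):

```shell
# Turn swap off right now (swapoff requires root, so guard it)
if [ "$(id -u)" -eq 0 ]; then
    swapoff -a
fi
# Show which swap areas are still active (unprivileged read)
cat /proc/swaps
```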
 
wesly_chenCommented:
> I've not seen a reported memory bug for the kernel I currently use: 2.4.20-8
Well, the chances are that people simply haven't reported the bug, since they don't know whether it is a kernel bug or not.
In my experience with Red Hat Linux, the Red Hat kernel team changes the kernel code to fit their own needs.
However, they sometimes introduce bugs in kswapd or the kernel memory code.

Besides, CPU/memory allocation is done by the kernel and the applications.
If you think there is no problem with the kernel, then you need to find out which application eats up the resources.
So upgrading the kernel won't hurt.

In my experience, Red Hat 7.x sees the load go over 20 from time to time.
I don't see that happen on RHEL 3.0 as often.
 
FFTAuthor Commented:
Since I've changed my server (to a much more reliable Debian Sarge 3.1 with kernel 2.6.12.2), I've had no time to find out whether the provided solutions were the only working ones. Anyway, I've split the points, since both comments were smart and would probably have brought some relief to the server. There is still a bit of mystery there...