How do I log the info of users remotely connected to the server?

Posted on 2002-03-26
Medium Priority
Last Modified: 2013-12-27
Recently someone made a terrible mistake (destroyed an important database application by re-initialising a disk).

the shell used was ksh with HISTFILE configured in .profile.

I can see in .sh_history.xxxx files what that person did.

But I just don't know from which machine (IP) the telnet connection was made.

Users don't have console access to the server. They can only access the server remotely (telnet, FTP, etc.).

I was wondering if there is a utility that can be configured to record information such as the IP address or host name of a remote machine making connections (e.g. telnet) to the server.

The server runs Solaris 8, and everybody uses ksh by default.

Any advice/help will be greatly appreciated.

Question by:frankf

Expert Comment

ID: 6898051
The last command will show you who logged in and from where.
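For illustration, last reads the wtmpx records (/var/adm/wtmpx on Solaris) and prints one line per login session, with the originating host or IP in the third column. The username and address below are made up; the awk one-liner just pulls out the user/origin pair:

```shell
# Simulated line of `last` output (format illustrative):
sample='frankf    pts/3        192.168.10.5     Tue Mar 26 09:14 - 09:55  (00:41)'

# Extract the user and the host/IP the connection came from:
echo "$sample" | awk '{print $1, $3}'
# prints: frankf 192.168.10.5
```

On the real system you would simply run: last | more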

Accepted Solution

besky earned 200 total points
ID: 6899482
It is possible to log all TCP connections to syslog.

Edit /etc/init.d/inetsvc; near the end of the script, inetd is started.
Add -t as an option and the connections will show up in
/var/adm/messages as daemon.notice messages.
It should look like this:
/usr/sbin/inetd -s -t &

and you have to restart the running inetd for the new flag to take effect (a HUP alone only makes inetd re-read inetd.conf, not its command-line options):

pkill inetd && /usr/sbin/inetd -s -t &
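For reference, the tail of /etc/init.d/inetsvc would change roughly like this (the host name, PIDs, and address in the sample log line are made up):

```
# /etc/init.d/inetsvc, last line - before:
#     /usr/sbin/inetd -s &
# after (-t traces incoming TCP connections to syslog):
#     /usr/sbin/inetd -s -t &
#
# A resulting daemon.notice entry in /var/adm/messages looks roughly like:
#     Mar 26 09:14:02 myhost inetd[153]: telnet[4711] from 192.168.10.5 32912
```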

btw, nice users you have there

Expert Comment

ID: 6899497
I forgot: if root permissions are required for these tasks, let no one log in directly as root.
If they connect as ordinary users and use the su command, you can check /var/adm/sulog for su usage.
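To illustrate, each su attempt is logged to /var/adm/sulog as SU, date, time, + or - for success/failure, the tty, and fromuser-touser. The log line below is made up:

```shell
# Simulated /var/adm/sulog entry (username hypothetical):
sample='SU 03/26 14:02 + pts/3 frankf-root'

# Show only successful escalations to root:
echo "$sample" | grep '+ .*-root'
```

On the real system: grep root /var/adm/sulog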

Or you could install sudo, a good freeware program for delegating rights to commands and admin tasks.
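As a sketch, a sudoers setup for this might look like the following; the group name and log path are hypothetical, and the file should always be edited with visudo:

```
# /etc/sudoers fragment (group name and log path hypothetical):

# Log every command run through sudo:
Defaults logfile=/var/adm/sudo.log

# Members of the "support" group may run any command as root:
%support ALL=(ALL) ALL
```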


Expert Comment

by:Ryan Rowley
ID: 6899696
If a remote telnet or ftp user can cause this type of situation, I would be concerned about security.
Is there a reason for remote users to have this level of access?

Expert Comment

ID: 6900552
Are your users on fixed IP addresses or DHCP-allocated addresses? If the latter, then capturing the address is pointless.

I agree with the other posters, however - your major problem is not really capturing the address of people connecting to your system, but the lack of security that allows a user to format a disk!

Expert Comment

ID: 6900800
It sounds like you have some "fiddlers" on your roof!

Here is what I would do:
- Install TCP Wrappers - this will give you control and a detailed connection history.
- Follow the advice of besky by running inetd with the -s and -t flags.
- Alter the entries in /etc/inetd.conf for ftp to add the -dl flags.
- Install Tripwire - this will allow you to track changes to any file on the system.
- If users are on DHCP, consider permanent leases for the users who have access to the system (this can be enforced by the TCP Wrappers mentioned above).
- Since the system is Solaris, get hold of SunScreen Lite (a host-based firewall).
- Consider the use of process accounting or BSM (be careful with BSM - it creates a LOT of information).
- If you suspect that the connections may not have been made by "authorised" users, consider installing ssh and giving your users an ssh client like PuTTY.
- By far the BEST suggestion I could give you is to build a box running an IDS like NFR or Snort, with specific rules for connections to your system - this will not cost a lot (in the case of Snort it is free) but will provide an independent source of corroborative evidence, should it be required.
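To sketch the TCP Wrappers piece: on Solaris 8 you wrap the telnet/ftp entries in /etc/inetd.conf with tcpd, and access is then controlled by two files. The subnet below is hypothetical:

```
# /etc/hosts.deny - deny everything not explicitly allowed:
ALL: ALL

# /etc/hosts.allow - permit telnet/ftp only from the known static subnet;
# tcpd logs every connection (with its source address) via syslog:
in.telnetd, in.ftpd: 192.168.10.0/255.255.255.0
```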

I hope this helps

Author Comment

ID: 6900971
Thanks for all your advice/comments.

My department is a technical support department.

The nature of everybody's work means that everyone needs root access to all servers.

Everybody uses a static IP - that's why I want to know where the connection was made from.

This time one of my servers got wrecked.

It takes a fair bit of time to set the whole thing up again, including the Oracle databases and Apache.

Although I can't "shoot" that guy, I at least need to set something up that lets me know who keeps making these silly mistakes in the future.

Expert Comment

ID: 6901000

Without wishing to be rude, why does everyone need root access to all the servers?

If this is so perhaps you could quarantine the databases onto a server (or servers) to which they don't have root access.

Alternatively, if they need root access to a server to test problem solutions could they not have access to a chrooted file system so they can only mess up their own pseudo-server rather than stuff up the underlying system.

Just some avenues of thought that might be worth exploring to prevent you having to rebuild servers in the future.


Expert Comment

ID: 6901153
I understand the issues with needing multiple people to have root access to a group of servers. While that's not the ideal case, it is the real world in a lot of cases. My preferred solution in that environment is to have only one person set a root password on each of the servers. Those root passwords get recorded and locked up in a place that only senior management/admins can access, in the event that the passwords are required, e.g. for a single-user boot. Ordinary admins are only allowed root access via sudo, which records each time they exercise root privileges.

Expert Comment

ID: 6901203
I concur with jlevie, but understand that "support" staff often require root privileges. It is far better to have good auditing and controls, and better still to have very few people with access to root - but sometimes the alternative is more practical (not that I would advocate it).

I have handled this before with sudo, C-based setuid wrappers for common functions, etc., but in the end, if there is root access on the local system being used, you may not be able to rely upon the log files (especially if the users are clever).

External controls are the most effective way of monitoring - it is for this reason that I normally set up sniffers in front of any honeypot I deploy, to capture everything.

The only thing to be aware of is that sniffers can generate a lot of logs (if the system is constantly being accessed) - it is often necessary to limit the tracking to one or two protocols.

This question really should be in the security forum.

Kind Regards,

Author Comment

ID: 6904354
I accepted besky's comments as the answer. I implemented the change and it worked as I hoped.

In the short term, I still have to let everybody have root access to all servers, as this is our department's "culture". If I changed/removed this, I'd obviously upset a lot of people.

In the long term, I have to do something, as you guys suggested, to prevent this kind of thing from happening again.

Thanks again for all your input!

