tonelm54

asked on

Ubuntu Server - Monitor log file and post changes

I've got a server running Nagios, and I want to post its logs onto another site. I've been trying to figure out a way to post the results by modifying the code, but that doesn't seem like a great idea once updates come along, and since I may want to monitor more log files later on, it wouldn't be practical to modify each program to do this.

So, my next idea was to monitor the log file and post the results. I can use the command:
sudo less +F  /var/log/nagios/nagios.log


But that echoes the output onto the main console. Is it possible to post the changes to the log file to a webpage using a curl script?

Thank you
Pierre François

The less command is made for viewing files on the console, not for copying them. For that purpose, you are better off using cp; if you are copying from or to a remote host, scp is the tool to use. I don't think curl is the best tool in your case, because it can only access files served over HTTP, HTTPS, FTP, and similar protocols.
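For example, a one-off copy to another host could look like this (the remote host and destination path are just placeholders):

scp /var/log/nagios/nagios.log admin@archive.example.com:/var/log/remote/nagios/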

By the way, I can't imagine it is not possible to configure Nagios to save its logs on a remote server.
tonelm54

ASKER

I'm sure it is possible to configure Nagios, but I also want to do this with several other programs, so I thought of using tail (or an equivalent) and passing its output to a curl script that uploads the changes to a server for additional processing.

So, what I'm looking for is something that monitors a file and, whenever it is written to, passes those changes to a script. In tail's case, instead of printing the new lines to the console, it would send them to a curl script.
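Something like this untested sketch is the idea (the receiver URL is a placeholder):

sudo tail -F /var/log/nagios/nagios.log | while IFS= read -r line; do
  # POST each new log line to a hypothetical receiver endpoint
  curl -s --data-urlencode "line=$line" https://example.com/log-receiver
done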
ASKER CERTIFIED SOLUTION

skullnobrains
1) I've got a server running Nagios, and I want to post its logs onto another site.

Use rsyslog with an external/different machine for logging.
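As a rough sketch for the sending machine (rsyslog RainerScript syntax; the central hostname is a placeholder):

# load the file-follower module and tail the Nagios log
module(load="imfile")
input(type="imfile"
      File="/var/log/nagios/nagios.log"
      Tag="nagios:"
      Severity="info"
      Facility="local7")
# forward everything on that facility to the central server over TCP
local7.* @@logs.example.com:514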

2) Is it possible to post the changes to the log file to a webpage using a curl script?

Yes. And this will never work correctly. Use rsyslog instead.

Logging is more complex than it might seem on the surface.
this will never work correctly

I have tons of experience that proves this assertion wrong, both with custom scripts and the above-mentioned tools.


That said, redirecting the logs to the target server might actually fit the bill in the author's case. Usually we use intermediate log files to make sure log lines are not lost when either machine or the network goes down. It also usually helps by grouping log lines into batches, which plays better with some tools, though not all.
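A rough illustration of that pattern (the buffer path, batch size, and endpoint are made up for the example):

#!/bin/sh
# Sketch: follow the log, buffer new lines into an intermediate file,
# and ship the buffer in batches of 50 lines.
LOG=/var/log/nagios/nagios.log
BUF=/var/tmp/nagios.buffer
tail -F "$LOG" | while IFS= read -r line; do
  printf '%s\n' "$line" >> "$BUF"
  if [ "$(wc -l < "$BUF")" -ge 50 ]; then
    # clear the buffer only if the upload succeeded, so a failed POST
    # simply leaves the lines queued for the next attempt
    curl -sf --data-binary @"$BUF" https://collector.example.com/logs && : > "$BUF"
  fi
done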
I should have been clearer.

I should have said, "this will never work easily + correctly + ensuring all log data flows independent of network conditions."

Anything will work, if sufficient dev time is invested by someone who understands logging.

Logging is complex, as it requires managing an end-of-file pointer: remembering how far into each file you have already read, so that only the new data gets shipped.
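A minimal sketch of that pointer management in shell (the state-file path and endpoint are made up; a real tool also has to handle partial last lines):

# remember how many bytes have already been shipped and send only the delta
STATE=/var/tmp/nagios.offset
LOG=/var/log/nagios/nagios.log
last=$(cat "$STATE" 2>/dev/null || echo 0)
size=$(wc -c < "$LOG")
[ "$size" -lt "$last" ] && last=0   # file shrank: assume rotation, start over
tail -c +"$((last + 1))" "$LOG" |
  curl -sf --data-binary @- https://collector.example.com/logs &&
  echo "$size" > "$STATE"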

Also "logs onto another site" means you'll have to develop your own queuing mechanism, so if the remote machine or network between is down, you queue log data, then resume logging when the machine or network resumes.

So in essence, you can do this... and you'll be redeveloping most or all of the rsyslogd code.

I'd rather just use rsyslogd for this, as much of the heavy lifting is already done for you.
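For instance, rsyslog's forwarding action can be given a disk-assisted queue, so the queuing and retry logic come for free (the target name is a placeholder; see the rsyslog queue documentation for the full set of options):

action(type="omfwd" target="logs.example.com" port="514" protocol="tcp"
       queue.type="LinkedList"            # in-memory queue that can spill to disk
       queue.filename="fwd_queue"         # on-disk spool files for the queue
       queue.saveOnShutdown="on"          # persist queued messages across restarts
       action.resumeRetryCount="-1")      # retry forever while the target is down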
Could you be more specific about which events you want to get to the other server?
What is available on the other server?
Is snmptrap an option?

First, it is important to understand what you want to transfer and to what end: is it for archival/warehousing of events?
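If snmptrap turns out to be an option, an event can be forwarded as a trap from a shell hook, roughly like this (numeric OIDs borrowed from the Net-SNMP examples; the community string and destination host are placeholders):

snmptrap -v 2c -c public collector.example.com '' \
    1.3.6.1.4.1.8072.2.3.0.1 \
    1.3.6.1.4.1.8072.2.3.2.1 i 60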
Also "logs onto another site" means you'll have to develop your own queuing mechanism, so if the remote machine or network between is down, you queue log data, then resume logging when the machine or network resumes.

The above tools handle that perfectly: they follow the log files and send new data when available. When the network is down, they just keep retrying until it works, so the queuing mechanism is actually the log file. That is exactly the part rsyslog does not handle, or handles only for a few seconds' worth. On the other hand, if the machine dies completely and cannot be brought back up again, some log lines might indeed be lost.
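In shell terms, that retry behaviour amounts to something like this sketch (the collector endpoint is hypothetical):

# the log file itself is the queue: retry the same line until it goes through
tail -F /var/log/nagios/nagios.log | while IFS= read -r line; do
  until printf '%s\n' "$line" | curl -sf --data-binary @- https://collector.example.com/logs; do
    sleep 10   # network or collector down: keep retrying, nothing is dropped
  done
done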

It is fairly easy to monitor how far behind said tools are by comparing the last line shipped, or possibly the stored offset, with the log file size.
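For example, reusing the hypothetical offset state file from the earlier sketch:

# report how many bytes the shipper is lagging behind the log file
LOG=/var/log/nagios/nagios.log
STATE=/var/tmp/nagios.offset
echo "shipper is $(( $(wc -c < "$LOG") - $(cat "$STATE" 2>/dev/null || echo 0) )) bytes behind"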

If needed, I still have PHP and shell scripts around that can handle the same task, but I see no point in using them nowadays unless more complex processing is needed, such as grabbing information from multiple messy lines and grouping it on the fly. I initially developed them to parse Postfix logs, which tend to scatter information across multiple lines, and at the time the above tools did not exist or were not mature enough yet. Today I would rather hack/configure the Postfix queue manager to log all the required information on a single line and use a state-of-the-art tool; and I actually did so in the meantime.

Currently, I work with an inherited Elasticsearch infrastructure and a slightly hacked version of node_logstash (to handle bulk inserts). We have lots of machines inserting lots of different information, so we have extra RabbitMQ brokers to prevent concurrency issues. Convergence happens within mere seconds most of the time, across multiple datacenters with hundreds of writers. Network outages produce a few warnings but never require human intervention.