hankknight
asked on
Linux: Count lines in file that begin with
My file is in this format:
Nov 10 04:03:00 Consectetur adipiscing elit. Mauris luctus, nulla eu pellentesque interdum, arcu quam elementum magna
Nov 10 04:03:07 Sollicitudin scelerisque magna lacus sit amet magna. Nunc iaculis arcu a egestas rutrum.
Nov 10 04:04:01 Nulla quis feugiat dolor, vitae ultrices quam. In cursus eu est id luctus. Cras at velit eleifend
Nov 10 04:04:01 Tincidunt felis sed, elementum quam. Donec sit amet nisi vulputate, lobortis arcu ut, mattis tellus.
It is located here: /home/xyz/my.log
How can I count the number of lines that begin with the date and time stamp that matches one hour or less from now?
ASKER
Thank you both.
I like the simplicity of Mazdajai's solution; however, there is a problem: instead of covering the past 60 minutes, it returns results based on the value of the current hour.
For example, at 11:59 your solution returns 5000 results, and at 12:00 it returns 0 results.
I want a solution that returns results based on the past hour, not the value of the current hour.
woolmilkporc, your idea gives me a syntax error.
Please, what is the exact error message?
I can guess a lot, but not everything.
ASKER
syntax error near unexpected token `
line 5: ` [[ $logdate -ge $start ]] && ((i+=1))
I don't have such a backtick ` in front of "[[ $logdate ... ."
Where does it come from?
ASKER
woolmilkporc, the problem must be on my end. All the other code you have provided for other questions works perfectly.
That said, I still prefer Mazdajai's approach here:
grep -c "`date | awk '{print $2,$3,$4}' |awk -F: '{print $1}'`" /home/xyz/my.log
How can that code be adjusted to include all records from the past 60 minutes? Maybe the date needs to be converted into a traditional timestamp on every line before processing it?
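For what it's worth, GNU date can do exactly that conversion: `-d` parses a syslog-style "Mon DD HH:MM:SS" stamp (the current year is implied) and `+%s` prints epoch seconds, which compare numerically. A minimal sketch:

```shell
# A minimal sketch, assuming GNU date: -d parses a "Mon DD HH:MM:SS" stamp
# (the current year is implied) and +%s prints epoch seconds, which can be
# compared numerically.
ts=$(date -d "Nov 10 04:03:00" '+%s')   # stamp taken from the sample log
cutoff=$(date -d '1 hour ago' '+%s')    # epoch seconds one hour back
echo "$ts"
if [ "$ts" -ge "$cutoff" ]; then
    echo "within the last hour"
else
    echo "older than an hour"
fi
```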
>> Maybe the date needs to be converted into a traditional timestamp <<
That's what I did. Looking forward to see Mazdajai's solution.
Here is my version again, this time in "code" format which might be better for copy and paste, and again, there is no backtick anywhere:
input="/home/xyz/my.log"
start=$(date -d "1 hour ago" "+%s"); i=0
while read -r line; do
    logdate=$(date -d "$(echo "$line" | awk '{print $1, $2, $3}')" "+%s")
    [[ $logdate -ge $start ]] && ((i+=1))
done < "$input"
echo "Lines in $input not older than 1 hour:" $i
ASKER
Mazdajai's grep can return (inaccurate) results almost instantly, even for very large files (10+ GB):
grep -c "`date | awk '{print $2,$3,$4}' |awk -F: '{print $1}'`" /home/xyz/my.log
The code you posted can take more than 5 minutes. Can the performance of your solution be enhanced?
IDEA 1: Log entries are sequential. Break the loop as soon as a non-matching line is found.
IDEA 2: Before starting the loop, check every 100th result and cut the file as soon as an older match is found.
IDEA 3: Use Mazdajai's code to get all results from this hour and from last hour, then use your approach to get the results for this hour only.
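IDEA 3 could be sketched roughly like this, assuming GNU date and a syslog-style prefix. The sample entries and temp file are generated purely for illustration; note that real syslog may space-pad single-digit days (`%e` rather than `%d`), which would need matching treatment.

```shell
# Hedged sketch of IDEA 3: grep narrows the file to entries stamped in the
# current or previous clock hour, then the slow epoch comparison runs only
# on that small subset. Sample data is generated for illustration.
LOG=$(mktemp)
printf '%s within\n'  "$(date -d '30 minutes ago' '+%b %d %H:%M:%S')" >> "$LOG"
printf '%s too old\n' "$(date -d '2 hours ago'    '+%b %d %H:%M:%S')" >> "$LOG"

this_hour=$(date '+%b %d %H:')                 # e.g. "Nov 10 04:"
prev_hour=$(date -d '1 hour ago' '+%b %d %H:')
cutoff=$(date -d '1 hour ago' '+%s')

# Only lines stamped in the current or previous hour can be < 60 min old.
candidates=$(grep -E "^($this_hour|$prev_hour)" "$LOG" || true)
count=0
while read -r mon day time rest; do
    [ -n "$mon" ] || continue
    ts=$(date -d "$mon $day $time" '+%s' 2>/dev/null) || continue
    [ "$ts" -ge "$cutoff" ] && count=$((count+1))
done <<< "$candidates"
echo "$count"
rm -f "$LOG"
```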
What exactly do you mean by "Log entries are sequential"? Do you mean that new records are added at the top of the file? I strongly doubt this.
If (as usual) new entries are added at the end of the file, you can try the following (your IDEA 1):
input="my.log"
start=$(date -d "1 hour ago" "+%s"); i=0
tac $input | while read -r line; do
    logdate=$(date -d "$(echo "$line" | awk '{print $1, $2, $3}')" "+%s")
    if [[ $logdate -ge $start ]]; then ((i+=1))
    else break
    fi
done
echo "Lines in $input younger than 1 hour:" $i
Try..
awk '
BEGIN {
    cmd = "date -d \"1 hour ago\" +%H:%M:%S"
    cmd | getline onehourago
    close(cmd)
}
{
    cmd = "date -d \"" $3 "\" +%H:%M:%S"
    cmd | getline d
    close(cmd)
    if (d > onehourago) print
}
' /home/xyz/my.log | wc -l
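Spawning `date` once per log line is what makes these loops slow. GNU date's `-f` option converts a whole stream of timestamps in a single process, which a hedged sketch (on generated sample data; the temp file stands in for the real log) can exploit:

```shell
# A hedged alternative, assuming GNU date: `date -f -` converts every line
# on stdin to epoch seconds in one process, instead of forking per line.
# The sample log is generated for illustration.
LOG=$(mktemp)
printf '%s recent\n' "$(date -d '5 minutes ago' '+%b %d %H:%M:%S')" >> "$LOG"
printf '%s old\n'    "$(date -d '4 hours ago'   '+%b %d %H:%M:%S')" >> "$LOG"

cutoff=$(date -d '1 hour ago' '+%s')
count=$(awk '{print $1, $2, $3}' "$LOG" \
        | date -f - '+%s' 2>/dev/null \
        | awk -v c="$cutoff" '$1 >= c { n++ } END { print n+0 }')
echo "$count"
rm -f "$LOG"
```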
ASKER
Both your ideas return 0 even when there are entries.
New entries are always added to the bottom. So I guess we would need to start at the bottom of the file and work our way up and then break when the time condition is not met.
That's exactly what I tried to do with "tac". Are you sure that your file's contents are OK?
Or is there an empty line at the end? Anyway, please try
input="/home/xyz/my.log"
start=$(date -d "1 hour ago" "+%s"); i=0
tac $input | while read -r line; do
    if ! echo "$line" | grep -Eq "^[a-zA-Z]"; then continue; fi
    logdate=$(date -d "$(echo "$line" | awk '{print $1, $2, $3}')" "+%s")
    if [[ $logdate -ge $start ]]; then ((i+=1))
        echo $i > /tmp/counter.$$
    else break
    fi
done
echo "Lines in $input younger than 1 hour:" $(</tmp/counter.$$)
rm /tmp/counter.$$
Please note that I changed the handling of the counter variable (writing it to a counter file) to work around an old bash quirk: a loop fed by a pipe runs in a subshell, so increments to $i are lost once the loop ends.
This avoids using an external counter file:
input="/home/xyz/my.log"
start=$(date -d "1 hour ago" "+%s"); i=0
while read -r line; do
    if ! echo "$line" | grep -Eq "^[a-zA-Z]"; then continue; fi
    logdate=$(date -d "$(echo "$line" | awk '{print $1, $2, $3}')" "+%s")
    if [[ $logdate -ge $start ]]; then ((i+=1))
    else break
    fi
done <<< "$(tac $input)"
echo "Lines in $input younger than 1 hour:" $i
Note "<<<" is not a typo!
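A minimal demonstration of why the "<<<" matters here, assuming bash: a loop fed through a pipe runs in a subshell and its counter is lost, while a here-string keeps the loop in the current shell.

```shell
# Counter incremented inside a piped loop: the loop body runs in a subshell,
# so the change never reaches the parent shell.
n=0
printf 'a\nb\nc\n' | while read -r x; do n=$((n+1)); done
echo "after pipe: $n"           # n is still 0 here

# Same loop fed by a here-string: it runs in the current shell.
m=0
while read -r x; do m=$((m+1)); done <<< "$(printf 'a\nb\nc\n')"
echo "after here-string: $m"    # m is 3
```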
ASKER
Thank you all. Here is the code I will use. It borrows parts from all your comments, yet it executes faster than any single solution provided.
#!/bin/bash
t1=$( date "+%s.%N" )
NOW=$( date +"%s" )
COUNT=0
while read -r MONTH DAY TIME DATA
do
    THEN=$( date -d "$MONTH $DAY $TIME" +"%s" 2>/dev/null )
    [ -z "$THEN" ] && continue      # skip lines whose stamp does not parse
    if [ $(( NOW - THEN )) -gt 3600 ]
    then break
    else
        COUNT=$((COUNT+1))
    fi
done <<< "$(tac "$1")"
t2=$( date "+%s.%N" )
DIFF=$(echo "scale=3; ($t2 - $t1)/1" | bc)
echo "Execution time: $DIFF seconds"
echo "Lines younger than 1 hour:" $COUNT
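For reference, a self-contained dry run of the same tac-and-break approach on generated sample data, assuming GNU date and bash; the four age offsets and the temp file are invented for illustration.

```shell
# Build a log with oldest entries first, as a real log grows: one entry
# 3 hours old, then three younger than an hour.
LOG=$(mktemp)
for spec in '3 hours ago' '50 minutes ago' '20 minutes ago' '5 minutes ago'; do
    printf '%s entry\n' "$(date -d "$spec" '+%b %d %H:%M:%S')" >> "$LOG"
done

NOW=$(date '+%s')
COUNT=0
# tac walks the file newest-first; break on the first too-old entry.
while read -r MONTH DAY TIME DATA; do
    THEN=$(date -d "$MONTH $DAY $TIME" '+%s' 2>/dev/null) || continue
    if [ $(( NOW - THEN )) -gt 3600 ]; then break
    else COUNT=$((COUNT+1))
    fi
done <<< "$(tac "$LOG")"
echo "$COUNT"   # the three entries younger than an hour
rm -f "$LOG"
```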