ASKER
hankknight

Linux: Count lines in a file that begin with a timestamp from the past hour

My file is in this format:
Nov 10 04:03:00 Consectetur adipiscing elit. Mauris luctus, nulla eu pellentesque interdum, arcu quam elementum magna
Nov 10 04:03:07 Sollicitudin scelerisque magna lacus sit amet magna. Nunc iaculis arcu a egestas rutrum. 
Nov 10 04:04:01 Nulla quis feugiat dolor, vitae ultrices quam. In cursus eu est id luctus. Cras at velit eleifend
Nov 10 04:04:01 Tincidunt felis sed, elementum quam. Donec sit amet nisi vulputate, lobortis arcu ut, mattis tellus. 


It is located here:
/home/xyz/my.log

How can I count the number of lines whose leading date and time stamp falls within the past hour?

woolmilkporc

Again, a nice task.

input="/home/xyz/my.log"
start=$(date -d "1 hour ago" "+%s"); i=0   # epoch seconds, one hour back
while read line; do
  # The first three fields are "Mon DD HH:MM:SS"; convert them to epoch seconds
  logdate=$(date -d "$(echo $line | awk '{print $1, $2, $3}')" "+%s")
  [[ $logdate -ge $start ]] && ((i+=1))
done < "$input"
echo "Lines in $input not older than 1 hour:" $i
SOLUTION
Mazdajai

ASKER
hankknight

Thank you both.

I like the simplicity of Mazdajai's solution; however, there is a problem. Instead of giving results for the past 60 minutes, it returns results based on the value of the current hour.

For example, at 11:59 it returns 5000 results, but at 12:00 it returns 0.

I want a solution that returns results based on the past hour, not the value of the current hour.
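
To illustrate with hypothetical values: the count is keyed to the current hour's literal prefix, so at 12:00 nothing stamped 11:xx can match:

# At 12:00 on Nov 10, the pattern being searched is the prefix "Nov 10 12",
# so grep counts only lines from the 12:00-12:59 window:
grep -c "Nov 10 12" /home/xyz/my.log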

woolmilkporc, your idea gives me a syntax error.
woolmilkporc

Please, what is the exact error message?

I can guess a lot, but not everything.
ASKER
hankknight

syntax error near unexpected token `
line 5: `   [[ $logdate -ge $start ]] && ((i+=1))

Open in new window

woolmilkporc

I don't have such a backtick (`) in front of "[[ $logdate ...".

Where does it come from?
ASKER
hankknight

woolmilkporc, the problem must be on my end.  All the other code you have provided for other questions works perfectly.

That said, I still prefer Mazdajai's approach here:
grep -c "`date | awk '{print $2,$3,$4}' |awk -F: '{print $1}'`" /home/xyz/my.log

Open in new window

How can that code be adjusted to include all records from the past 60 minutes?  Maybe the date needs to be converted into a traditional timestamp on every line before processing it?
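
Something like this gawk sketch might work for that conversion without forking date for every line (a rough sketch, assuming GNU awk with mktime() is available and that all entries are from the current year, since the log format carries no year field):

gawk -v now="$(date +%s)" '
{
    # Map the month name to a number: Jan=1 ... Dec=12
    m = (index("JanFebMarAprMayJunJulAugSepOctNovDec", $1) + 2) / 3
    split($3, t, ":")
    # mktime() wants "YYYY MM DD HH MM SS"; the year is assumed to be the current one
    ts = mktime(strftime("%Y") " " m " " $2 " " t[1] " " t[2] " " t[3])
    if (now - ts <= 3600) n++
}
END { print n + 0 }
' /home/xyz/my.log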
woolmilkporc

>> Maybe the date needs to be converted into a traditional timestamp <<

That's what I did. I'm looking forward to seeing Mazdajai's solution.

Here is my version again, this time in "code" format, which might be better for copy and paste; and again, there is no backtick anywhere:

input="/home/xyz/my.log"
start=$(date -d "1 hour ago" "+%s"); i=0   # epoch seconds, one hour back
while read line; do
  # The first three fields are "Mon DD HH:MM:SS"; convert them to epoch seconds
  logdate=$(date -d "$(echo $line | awk '{print $1, $2, $3}')" "+%s")
  [[ $logdate -ge $start ]] && ((i+=1))
done < "$input"
echo "Lines in $input not older than 1 hour:" $i


ASKER
hankknight

This returns (inaccurate) results almost instantly, even for very large files (10+ GB):
grep -c "`date | awk '{print $2,$3,$4}' |awk -F: '{print $1}'`" /home/xyz/my.log


The code you posted can take more than 5 minutes. Can the performance of your solution be improved?

IDEA 1: Log entries are sequential. Break the loop as soon as a non-matching line is found.
IDEA 2: Before starting the loop, check every 100th line and cut the file as soon as an older match is found.
IDEA 3: Use Mazdajai's code to get all results from this hour and from last hour, then use your approach to filter those down to the past 60 minutes only (see the rough sketch below).
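
A rough sketch of IDEA 3 (untested, assuming GNU date and syslog-style space-padded days that match date's %e format):

input="/home/xyz/my.log"
start=$(date -d "1 hour ago" "+%s"); i=0
thishour=$(date "+%b %e %H")                  # e.g. "Nov 10 12"
lasthour=$(date -d "1 hour ago" "+%b %e %H")  # e.g. "Nov 10 11"
# Pre-filter to the two candidate hours, then do the exact epoch check
# only on that (much smaller) subset:
while read line; do
  logdate=$(date -d "$(echo $line | awk '{print $1, $2, $3}')" "+%s")
  [[ $logdate -ge $start ]] && ((i+=1))
done < <(grep -e "^$thishour" -e "^$lasthour" "$input")
echo "Lines in $input not older than 1 hour:" $i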
woolmilkporc

What exactly do you mean by "log entries are sequential"? Do you mean that new records are added at the top of the file? I strongly doubt this.

If (as usual) new entries are added at the end of the file, you can try the following (implementing your IDEA 1):
input="my.log"
start=$(date -d "1 hour ago" "+%s"); i=0
tac $input | while read line; do
  logdate=$(date -d "$(echo $line | awk '{print $1, $2, $3}')" "+%s")
  if [[ $logdate -ge $start ]]; then ((i+=1))
  else break
  fi
done
echo "Lines in $input younger than 1 hour:" $i


Mazdajai

Try..

awk '
        BEGIN {
                cmd = "date -d \"1 hour ago\" +%H:%M:%S"
                cmd | getline onehourago
                close(cmd)
        }
        {
                # $3 is the HH:MM:SS field; compare times as strings
                # (note: this comparison breaks across midnight)
                cmd = "date -d \"" $3 "\" +%H:%M:%S"
                cmd | getline d
                close(cmd)
                if (d > onehourago) print
        }
' /home/xyz/my.log | wc -l


ASKER
hankknight

Both your ideas return 0 even when there are matching entries.

New entries are always added to the bottom. So I guess we need to start at the bottom of the file, work our way up, and break when the time condition is not met.
woolmilkporc

That's exactly what I tried to do with "tac". Are you sure that your file's contents are OK?
woolmilkporc

Or is there an empty line at the end? Anyway, please try:
input="/home/xyz/my.log"
start=$(date -d "1 hour ago" "+%s"); i=0
echo 0 > /tmp/counter.$$           # initialize, in case no line matches
tac $input | while read line; do
  # Skip lines that don't start with a letter (e.g. empty lines)
  if ! echo $line | grep -Eq "^[a-zA-Z]"; then continue; fi
  logdate=$(date -d "$(echo $line | awk '{print $1, $2, $3}')" "+%s")
  if [[ $logdate -ge $start ]]; then ((i+=1))
     echo $i > /tmp/counter.$$     # persist the count outside the subshell
   else break
  fi
done
echo "Lines in $input younger than 1 hour:" $(</tmp/counter.$$)
rm /tmp/counter.$$


Please note that I changed the handling of the counter variable (using a counter file instead) to work around a well-known bash quirk: the pipe from tac runs the while loop in a subshell, so increments to $i are lost once the loop ends.
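
A minimal demonstration of that quirk:

# Each command in a pipeline runs in a subshell, so the increments
# happen in a child process and the parent's $i stays untouched:
i=0; printf 'a\nb\n' | while read x; do ((i+=1)); done; echo $i   # prints 0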
woolmilkporc

This avoids using an external counter file:
input="/home/xyz/my.log"
start=$(date -d "1 hour ago" "+%s"); i=0
while read line; do
  # Skip lines that don't start with a letter (e.g. empty lines)
  if ! echo $line | grep -Eq "^[a-zA-Z]"; then continue; fi
  logdate=$(date -d "$(echo $line | awk '{print $1, $2, $3}')" "+%s")
  if [[ $logdate -ge $start ]]; then ((i+=1))
   else break
  fi
done <<< $(tac $input)
echo "Lines in $input younger than 1 hour:" $i


Note "<<<" is not a typo!
ASKER
hankknight

Thank you all. Here is the code I will use. It combines parts from all your comments, yet it executes faster than any single solution provided.
#!/bin/bash
t1=$( date "+%s.%N" )
NOW=$( date +"%s" )
COUNT=0
# Read the newest lines first; stop at the first entry older than one hour
while read MONTH DAY TIME DATA
do
    THEN=$( date -d "$MONTH $DAY $TIME" +"%s" 2>/dev/null )
    [ -z "$THEN" ] && continue      # skip unparsable lines
    if [ $(( NOW - THEN )) -gt 3600 ]
    then break
    else
        COUNT=$((COUNT+1))
    fi
done <<< "$(tac $1)"
t2=$( date "+%s.%N" )
DIFF=$(echo "scale=3; ($t2 - $t1)/1" | bc)
echo "Execution time: $DIFF seconds"
echo "Lines younger than 1 hour:" $COUNT

