
Converting dates in AWK

I have a file containing many columns of text, including a timestamp along the lines of Fri Jan 02 18:23 and I need to convert that date into MM/DD/YYYY HH:MM format.

I have been trying to use the standard `date' tool with awk getline to do the conversion, but I can't quite figure out how to pass the fields into the `date' command in the format it expects (quoted with "s or 's), as getline needs the command string enclosed in quotes too.

Something like "date -d '$1 $2 $3 $4' +'%D %H:%M'" | getline var

Now that I think about it, I guess what I'm really asking is how to embed awk variables into a string.
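So presumably I need to build the command by concatenating the fields into one string and then pipe that into getline, roughly like this (escaping the inner double quotes, since the whole awk program is wrapped in shell single quotes, and assuming GNU date's -d option):

cmd = "date -d \"" $1 " " $2 " " $3 " " $4 "\" \"+%m/%d/%Y %H:%M\""
cmd | getline var
close(cmd)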


If you're using gawk, you don't need the external date, which can be expensive to call repeatedly:

awk '
BEGIN{
   # build a lookup table: month name -> zero-padded month number
   m=split("Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec",d,"|")
   for(o=1;o<=m;o++){
      months[d[o]]=sprintf("%02d",o)
   }
   format = "%m/%d/%Y %H:%M"
}
{
   # $2 = month name, $3 = day, $4 = HH:MM; the year is taken from the current clock
   split($4,time,":")
   date = (strftime("%Y") " " months[$2] " " $3 " " time[1] " " time[2] " 0")
   print strftime(format, mktime(date))
}'

Thanks to ghostdog74 for the months array from this answer.
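For reference, the two gawk built-ins doing the work here are mktime(), which turns a "YYYY MM DD HH MM SS" string into seconds since the epoch, and strftime(), which formats such a timestamp; a minimal stand-alone check:

gawk 'BEGIN{
   # "YYYY MM DD HH MM SS" -> seconds since the epoch (local time)
   secs = mktime("2010 01 02 18 23 0")
   # seconds since the epoch -> formatted string; prints 01/02/2010 18:23
   print strftime("%m/%d/%Y %H:%M", secs)
}'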


You can try this, assuming just the date you specified is in the file:

awk '
{
    # build the shell command with the fields embedded, e.g.
    # date "+%m/%d/%Y %H:%M" -d "Fri Jan 02 18:23"
    cmd ="date \"+%m/%d/%Y %H:%M\" -d \""$1" "$2" "$3" "$4"\""
    cmd | getline var   # run the command and read its output
    print var
    close(cmd)          # close the pipe so it can be reused on the next line
}' file

output

$ ./shell.sh
01/02/2010 18:23

And if you are not using GNU tools, for example on Solaris, use nawk:

nawk 'BEGIN{
   # month name -> zero-padded month number
   m=split("Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec",d,"|")
   for(o=1;o<=m;o++){
      months[d[o]]=sprintf("%02d",o)
   }
   # the input has no year, so grab the current one once, up front
   cmd="date +%Y"
   cmd|getline yr
   close(cmd)
}
{
    day=$3
    mth=months[$2]
    print mth"/"day"/"yr" "$4
} ' file
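With the sample timestamp from the question this prints, for example (the year comes from date +%Y at run time, so the 2010 here is only illustrative):

01/02/2010 18:23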


I had a similar issue converting a date from RRDtool databases using rrdfetch, but I prefer one-liners, which I've been using since Apollo computer days.

Data looked like this:

localTemp             rs1Temp             rs2Temp      thermostatMode
1547123400: 5.2788174937e+00 4.7788174937e+00 -8.7777777778e+00 2.0000000000e+00
1547123460: 5.1687014581e+00 4.7777777778e+00 -8.7777777778e+00 2.0000000000e+00

One liner:

rrdtool fetch -s -14400 thermostatDaily.rrd MAX | sed s/://g | awk '{print "echo ""\`date -r" $1,"\`" " " $2 }' | sh

Result:

Thu Jan 10 07:25:00 EST 2019 5.3373432378e+00
Thu Jan 10 07:26:00 EST 2019 5.2788174937e+00

On the face of it this doesn't look very efficient to me, but this kind of methodology has always proven to be fairly low-overhead under most circumstances, even for very large files on very low-powered computers (like 25 MHz NeXT machines). Yes, MHz.

sed deletes the colon, awk prints the command of interest (here just an echo wrapping a date call around the awk variables), and sh or bash executes the resulting string.

For this kind of methodology, on large files or streams, I just head the first few lines and gradually build up the one-liner. Throw-away code.
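A roughly equivalent version that stays inside awk, using the cmd | getline idiom from the answers above (a sketch assuming the BSD/macOS date -r <seconds> flag used in the one-liner; GNU date would want -d @<seconds> instead):

rrdtool fetch -s -14400 thermostatDaily.rrd MAX | awk -F'[: ]+' '
$1 ~ /^[0-9]+$/ {              # data lines start with an epoch timestamp
    cmd = "date -r " $1        # BSD/macOS date: format the epoch seconds
    cmd | getline when
    close(cmd)
    print when, $2             # human-readable time plus the first data column
}'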
