I have a large text file (~100MB) that needs to be parsed to extract information, and I would like to find an efficient way of doing it. The file is structured in blocks:
Mon, 01 Jan 2010 01:01:01
Token1 = ValueXYZ
Token2 = ValueABC
Token3 = ValuePQR
...
TokenX = Value123
Mon, 01 Jan 2010 01:02:01
Token1 = ValueXYZ
Token2 = ValueABC
Token3 = ValuePQR
...
TokenY = Value456
Is there a library that could help in parsing this file? (In Java, Python, or any command-line tool)
Edit: I know the question is vague, but the key element is not how to read the file or parse it with regexes. I was looking more for library or tool suggestions in terms of performance. For example, ANTLR could have been a possibility, but it loads the whole file into memory, which is not good.
Thanks!
For efficient parsing of files, especially big ones, you can use awk. An example:
$ awk -vRS= '{print "====>" $0}' file
====>Mon, 01 Jan 2010 01:01:01
Token1 = ValueXYZ
Token2 = ValueABC
Token3 = ValuePQR
...
TokenX = Value123
====>Mon, 01 Jan 2010 01:02:01
Token1 = ValueXYZ
Token2 = ValueABC
Token3 = ValuePQR
...
TokenY = Value456
====>Mon, 01 Jan 2010 01:03:01
Token1 = ValueXYZ
Token2 = ValueABC
Token3 = ValuePQR
As you can see from the arrows, each record is now one whole block (setting the record separator RS to the empty string puts awk in paragraph mode, so records are separated by blank lines). You can then set the field separator (FS), e.g. to a newline:
$ awk -vRS= -vFS="\n" '{print "====>" $1}' file
====>Mon, 01 Jan 2010 01:01:01
====>Mon, 01 Jan 2010 01:02:01
====>Mon, 01 Jan 2010 01:03:01
So in the above example, the first field of every record is the date/time stamp. To get "Token1", for example, you could do this:
$ awk -vRS= -vFS="\n" '{for(i=1;i<=NF;i++) if ($i ~/Token1/){ print $i} }' file
Token1 = ValueXYZ
Token1 = ValueXYZ
Token1 = ValueXYZ
Usually, we do something like this. The re library pretty much handles it. The use of a generator function copes with the nested structure and keeps only one block in memory at a time.
import re

def gen_blocks( my_file ):
    # "Mon, 01 Jan 2010 01:01:01" -- a header line starts a new block
    header_pat = re.compile( r"(\w{3}, \d{2} \w{3} \d{4} \d{2}:\d{2}:\d{2})" )
    # "TokenX = ValueABC" -- a detail line belongs to the current block
    detail_pat = re.compile( r"\s*(\S+)\s*=\s*(\S+)" )
    header = None
    lines = []
    for line in my_file:
        hdr_match = header_pat.match( line )
        if hdr_match:
            if lines:
                yield header, lines
                lines = []
            header = hdr_match.group(1)
            continue
        dtl_match = detail_pat.match( line )
        if dtl_match:
            lines.append( dtl_match.groups() )
            continue
        # Neither kind of line, maybe blank or maybe an error
    if lines:
        yield header, lines

# some_file is an already-open file object; blocks are produced one at a time
for header, lines in gen_blocks( some_file ):
    print( header, lines )
IMO this data is so well structured that an external package to process it isn't needed. It probably wouldn't take more than a few minutes to write the parser for it. It would run pretty fast.
Rather than incurring the extra library dependency and climbing the learning curve of a new library, it would seem more efficient to just write vanilla code. My algorithm would look something like this (using quick and sloppy Java):
// HOLDER FOR ALL THE DATA OBJECTS THAT ARE EXTRACTED FROM THE FILE
ArrayList<MyDataObject> allDataObjects = new ArrayList<MyDataObject>();

// BUFFER FOR THE CURRENT DATA OBJECT BEING EXTRACTED
MyDataObject workingObject = null;

// BUILT-IN JAVA PARSER TO HELP US DETERMINE WHETHER OR NOT A LINE REPRESENTS A DATE
SimpleDateFormat dateFormat = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss");

// PARSE THROUGH THE FILE LINE-BY-LINE
BufferedReader inputFile = new BufferedReader(new FileReader(new File("myFile.txt")));
String currentLine = "";

while((currentLine = inputFile.readLine()) != null)
{
    try
    {
        // CHECK WHETHER OR NOT THE CURRENT LINE IS A DATE
        Date parsedDate = dateFormat.parse(currentLine.trim());
    }
    catch(ParseException pe)
    {
        // THE CURRENT LINE IS NOT A DATE. THAT MEANS WE'RE
        // STILL PULLING IN TOKENS FOR THE LAST DATA OBJECT.
        if(workingObject != null) workingObject.parseAndAddToken(currentLine);
        continue;
    }

    // THE ONLY WAY WE REACH THIS CODE IS IF THE CURRENT LINE
    // REPRESENTS A DATE, WHICH MEANS WE'RE STARTING ON A NEW
    // DATA OBJECT. ADD THE LAST DATA OBJECT TO THE LIST,
    // AND START UP A NEW WORKING DATA OBJECT.
    if(workingObject != null) allDataObjects.add(workingObject);
    workingObject = new MyDataObject();
    workingObject.parseAndSetDate(currentLine);
}

// DON'T FORGET THE LAST DATA OBJECT STILL IN THE BUFFER
if(workingObject != null) allDataObjects.add(workingObject);

inputFile.close();

// NOW YOU'RE READY TO DO WHATEVER WITH "allDataObjects"
Of course, you'd have to flesh out the missing functionality of the "MyDataObject" class. However, this basically does what you're asking for in about 20 or so lines of code (stripping out the comments) and with no external library dependencies.
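If it helps, here is a minimal sketch of what such a "MyDataObject" class could look like. The class itself is not part of the snippet above, so everything beyond the two method names (parseAndSetDate, parseAndAddToken) is an assumption:

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.LinkedHashMap;
import java.util.Map;

public class MyDataObject
{
    private Date date;
    private final Map<String, String> tokens = new LinkedHashMap<String, String>();

    // PARSE THE "EEE, dd MMM yyyy HH:mm:ss" HEADER LINE INTO THE DATE FIELD
    public void parseAndSetDate(String line)
    {
        try
        {
            this.date = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss").parse(line.trim());
        }
        catch(ParseException pe)
        {
            // THE CALLER ONLY PASSES LINES THAT ALREADY PARSED AS DATES,
            // SO THIS SHOULD NOT HAPPEN
        }
    }

    // SPLIT A "TokenX = ValueABC" LINE ON THE FIRST '=' AND STORE THE PAIR
    public void parseAndAddToken(String line)
    {
        int eq = line.indexOf('=');
        if(eq > 0) tokens.put(line.substring(0, eq).trim(), line.substring(eq + 1).trim());
    }

    public Date getDate() { return date; }
    public Map<String, String> getTokens() { return tokens; }
}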
Since that's a custom format, there's likely no library available. So write one yourself.
Here's a kickoff example, assuming that the file format is consistent with what you posted in the question. You may just want to use a List<Block> instead (a sketch of such a Block class is shown at the end of this answer):
Map<Date, Map<String, String>> blocks = new LinkedHashMap<Date, Map<String, String>>();
SimpleDateFormat sdf = new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss", Locale.ENGLISH);
BufferedReader reader = null;

try {
    reader = new BufferedReader(new InputStreamReader(new FileInputStream("/input.txt"), "UTF-8"));
    Date date = null;
    Map<String, String> block = null;

    for (String line; (line = reader.readLine()) != null;) {
        line = line.trim();
        if (date == null) {
            // A new block starts with a date line.
            date = sdf.parse(line);
            block = new LinkedHashMap<String, String>();
            blocks.put(date, block);
        } else if (!line.isEmpty()) {
            // "Token = Value" lines belong to the current block.
            String[] parts = line.split("\\s*=\\s*");
            block.put(parts[0], parts[1]);
        } else {
            // An empty line ends the current block.
            date = null;
        }
    }
} finally {
    if (reader != null) try { reader.close(); } catch (IOException ignore) {}
}
To verify the contents, use this:
for (Entry<Date, Map<String, String>> block : blocks.entrySet()) {
    System.out.println(block.getKey());
    for (Entry<String, String> token : block.getValue().entrySet()) {
        System.out.println("\t" + token.getKey() + " = " + token.getValue());
    }
    System.out.println();
}
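If you'd rather collect the results in a List<Block> as mentioned above, a minimal sketch of such a Block class could look like this (the class itself isn't shown in this answer, so its fields and method names are assumptions):

import java.util.Date;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical Block class for the List<Block> alternative.
public class Block {
    private final Date date;
    private final Map<String, String> tokens = new LinkedHashMap<String, String>();

    public Block(Date date) {
        this.date = date;
    }

    // One "Token = Value" pair per call, kept in insertion order.
    public void addToken(String name, String value) {
        tokens.put(name, value);
    }

    public Date getDate() { return date; }
    public Map<String, String> getTokens() { return tokens; }
}

In the loop you would then create a new Block whenever a date line is parsed, call addToken() for each "Token = Value" line, and add the finished Block to the list.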