I have data that looks like this:
#info
#info2
1:SRX004541
Submitter: UT-MGS, UT-MGS
Study: Glossina morsitans transcript sequencing project(SRP000741)
Sample: Glossina morsitans(SRS002835)
Instrument: Illumina Genome Analyzer
Total: 1 run, 8.3M spots, 299.9M bases
Run #1: SRR016086, 8330172 spots, 299886192 bases
2:SRX004540
Submitter: UT-MGS
Study: Anopheles stephensi transcript sequencing project(SRP000747)
Sample: Anopheles stephensi(SRS002864)
Instrument: Solexa 1G Genome Analyzer
Total: 1 run, 8.4M spots, 401M bases
Run #1: SRR017875, 8354743 spots, 401027664 bases
3:SRX002521
Submitter: UT-MGS
Study: Massive transcriptional start site mapping of human cells under hypoxic conditions.(SRP000403)
Sample: Human DLD-1 tissue culture cell line(SRS001843)
Instrument: Solexa 1G Genome Analyzer
Total: 6 runs, 27.1M spots, 977M bases
Run #1: SRR013356, 4801519 spots, 172854684 bases
Run #2: SRR013357, 3603355 spots, 129720780 bases
Run #3: SRR013358, 3459692 spots, 124548912 bases
Run #4: SRR013360, 5219342 spots, 187896312 bases
Run #5: SRR013361, 5140152 spots, 185045472 bases
Run #6: SRR013370, 4916054 spots, 176977944 bases
What I want to do is create a hash of arrays, with the ID from the first line of each chunk as the key and the SRR## part of the lines matching "^Run" as the array members:
$VAR = {
    'SRX004541' => ['SRR016086'],
    # etc
}
But my construct below doesn't work, and I don't understand why. Also, there must be a better way to do it.
use Data::Dumper;

my %bighash;
my $head = "";
my @temp = ();

while ( <> ) {
    chomp;
    next if (/^\#/);
    if ( /^\d{1,2}:(\w+)/ ) {
        print "$1\n";
        $head = $1;
    }
    elsif (/^Run \#\d+: (\w+),.*/) {
        print "\t$1\n";
        push @temp, $1;
    }
    elsif (/^$/) {
        push @{$bighash{$head}}, [@temp];
        @temp = ();
    }
}
print Dumper \%bighash;
An alternative way to do parsing like this is to read entire paragraphs. For more information on the input record separator ($/), see perlvar.
For example:
use strict;
use warnings;
use Data::Dumper qw(Dumper);

my %bighash;

{
    local $/ = "\n\n";    # Read entire paragraphs.
    while (my $paragraph = <>) {
        # Filter out comments and handle extra blank lines between sections.
        my @lines = grep { /\S/ and not /^\#/ } split /\n/, $paragraph;
        next unless @lines;

        # Extract the key and the SRR* items.
        my $key = $lines[0];
        $key =~ s/^\d+://;
        $bighash{$key} = [ map { /^Run \#\d+: +(SRR\d+)/ ? $1 : () } @lines ];
    }
}

print Dumper(\%bighash);
Replace
push @{$bighash{$head}}, [@temp];
with
push @{$bighash{$head}}, @temp;
You only have one array per $head value, right? The second statement adds all the values in @temp to the arrayref in $bighash{$head}. The first form, on the other hand, constructs a new array reference out of the items in @temp and pushes that reference onto $bighash{$head}, giving you an arrayref of arrayrefs.
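To see the difference, here is a minimal self-contained sketch (the sample values and hash names are placeholders, not taken from the question's script):

use strict;
use warnings;
use Data::Dumper;

my @temp = ('SRR016086', 'SRR017875');
my (%flat, %nested);

push @{ $flat{SRX004541} },   @temp;     # each element pushed: ['SRR016086', 'SRR017875']
push @{ $nested{SRX004541} }, [@temp];   # one arrayref pushed: [['SRR016086', 'SRR017875']]

print Dumper(\%flat, \%nested);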
Alternately you might want
$bighash{$head} = [@temp];
if you only expect to encounter each $head value once.
Based on your code, here is one way to do it:
my $head;
my %result;

while (<>) {
    chomp;
    next if (/^\#/);
    if ( /^\d{1,2}:(\w+)/ ) {
        $result{$1} = [];
        $head = $1;    # $head will be used to know which key the
                       # following values will be assigned to
    }
    elsif (/^Run \#\d+: (\w+),.*/) {
        push @{ $result{$head} }, $1;    # Add the run ID found to the array
                                         # that belongs to the last key found
    }
}
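If you add use Data::Dumper; at the top and print Dumper(\%result); after the loop, running this against the sample data above should print something along these lines (key order may differ):

$VAR1 = {
          'SRX004541' => ['SRR016086'],
          'SRX004540' => ['SRR017875'],
          'SRX002521' => [
                           'SRR013356', 'SRR013357', 'SRR013358',
                           'SRR013360', 'SRR013361', 'SRR013370'
                         ]
        };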
The code looks correct, but I'd strongly recommend adding:

use warnings;
use strict;

in everything but the most trivial one-liners. Also, change your last condition to

elsif ($head && /^$/) {

to catch problems.
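Applied to the loop from the question, that last branch would then look something like this:

elsif ( $head && /^$/ ) {    # only flush once a header has actually been seen
    push @{ $bighash{$head} }, [@temp];
    @temp = ();
}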
There are problems with your state machine. I think you could use this logic:
if (!$head) {
    # seek and get head
}
else {
    if (!$total) {
        # seek and get total
    }
    else {
        # seek run
        # if found: push run to temp and decrease total
        #   if total == 0: push temp to bighash;
        #                  reset head, total and temp
    }
}
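A rough Perl sketch of that state machine, assuming the run count can be read from the "Total:" line (the variable names are my own, not from the question):

use strict;
use warnings;
use Data::Dumper;

my (%bighash, $head, $total, @temp);

while (<>) {
    chomp;
    next if /^\#/;
    if (!$head) {
        $head = $1 if /^\d+:(\w+)/;               # seek and get head
    }
    elsif (!$total) {
        $total = $1 if /^Total: (\d+) runs?,/;    # seek and get total
    }
    elsif (/^Run \#\d+: (SRR\d+)/) {              # seek run
        push @temp, $1;                           # push run to temp
        if (--$total == 0) {                      # decrease total; last run of the chunk?
            $bighash{$head} = [@temp];            # push temp to bighash
            ($head, $total, @temp) = (undef, 0, ());  # reset head, total and temp
        }
    }
}

print Dumper(\%bighash);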