How can I quickly parse large (>10GB) files?

I have to process text files 10-20 GB in size, in the format: field1 field2 field3 field4 field5

I would like to parse the field2 data from each line into one of several files; which file a line gets pushed into is determined, line by line, by the value in field4. There are 25 different possible values in field4, and hence 25 different files the data can get parsed into.

I have tried using Perl (slow) and awk (faster but still slow) - does anyone have any suggestions or pointers toward alternative approaches?

FYI, here is the awk code I was trying to use; note I had to resort to going through the large file 25 times because I wasn't able to keep 25 files open at once in awk:

chromosomes=(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25)
for chr in ${chromosomes[@]}
do

awk < my_in_file_here -v pat="$chr" '{if ($4 == pat) for (i = $2; i <= $2+52; i++) print i}' >> my_out_file_"$chr".query

done
Best answer

Here is a solution in Python. I have tested it on a small fake file I made up. I think it will be acceptably fast even for a large file, because most of the work will be done by C code inside of Python. And I think this is a pleasant and easy-to-understand program; I prefer Python to Perl.

import sys

s_usage = """
Usage: csplit <filename>
Splits input file by columns, writes column 2 to file based on chromosome from column 4."""

if len(sys.argv) != 2 or sys.argv[1] in ("-h", "--help", "/?"):
    sys.stderr.write(s_usage + "\n")
    sys.exit(1)


# replace these with the actual patterns, of course
lst_pat = [
    'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j',
    'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't',
    'u', 'v', 'w', 'x', 'y'
]


d = {}
for s_pat in lst_pat:
    # build a dictionary mapping each pattern to an open output file
    d[s_pat] = open("my_out_file_" + s_pat, "wt")

if False:
    # if the patterns are unsuitable for filenames (contain '*', '?', etc.) use this:
    for i, s_pat in enumerate(lst_pat):
        # build a dictionary mapping each pattern to an output file
        d[s_pat] = open("my_out_file_" + str(i), "wt")

for line in open(sys.argv[1]):
    # split a line into words, and unpack into variables.
    # use '_' for a variable name to indicate data we don't care about.
    # s_data is the data we want, and s_pat is the pattern controlling the output
    _, s_data, _, s_pat, _ = line.split()
    # use s_pat to get to the file handle of the appropriate output file, and write data.
    d[s_pat].write(s_data + "\n")

# close all the output file handles.
for key in d:
    d[key].close()

EDIT: Here's a little more information about this program, since it seems you will be using it.

All of the error handling is implicit. If an error happens, Python will "raise an exception" which will terminate processing. For example, if one of the files fails to open, this program will stop executing and Python will print a backtrace showing which line of code caused the exception. I could have wrapped the critical parts with a "try/except" block to catch errors, but for a program this simple, I didn't see any point.
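If you did want explicit handling, a minimal sketch of wrapping one of the open() calls might look like this (purely illustrative; the filename is made up, and the program above deliberately does not do this):

import sys

try:
    f_out = open("my_out_file_a", "wt")  # hypothetical output file name
except IOError as e:
    # report the failure ourselves instead of letting Python print a backtrace
    sys.stderr.write("could not open output file: %s\n" % e)
    sys.exit(1)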

It's subtle, but there is a check to see if there are exactly five words on each line of the input file. When this code unpacks a line, it does so into five variables. (The variable name "_" is a legal variable name, but there is a convention in the Python community to use it for variables you don't actually care about.) Python will raise an exception if there are not exactly five words on the input line to unpack into the five variables. If your input file can sometimes have four words on a line, or six or more, you could modify the program to not raise an exception; change the main loop to this:

for line in open(sys.argv[1]):
    lst = line.split()
    d[lst[3]].write(lst[1] + "\n")

This splits the line into words, and then just assigns the whole list of words into a single variable, lst. So that line of code doesn't care how many words are on the line. Then the next line indexes into the list to get the values out. Since Python indexes a list using 0 to start, the second word is lst[1] and the fourth word is lst[3]. As long as there are at least four words in the list, that line of code won't raise an exception either.

And of course, if the fourth word on the line is not in the dictionary of file handles, Python will raise an exception for that too. That would stop processing. Here is some example code for how to use a "try/except" block to handle this:

for line in open(sys.argv[1]):
    lst = line.split()
    try:
        d[lst[3]].write(lst[1] + "\n")
    except KeyError:
        sys.stderr.write("Warning: illegal line seen: " + line)

Good luck with your project.

EDIT: @larelogio pointed out that this code doesn't match the AWK code. The AWK code has an extra for loop that I do not understand. Here is Python code to do the same thing:

for line in open(sys.argv[1]):
    lst = line.split()
    n = int(lst[1])
    for i in range(n, n+53):
        d[lst[3]].write(str(i) + "\n")

And here is another way to do it. This might be a little faster, but I have not tested it so I am not certain.

for line in open(sys.argv[1]):
    lst = line.split()
    n = int(lst[1])
    s = "\n".join(str(i) for i in range(n, n+53))
    d[lst[3]].write(s + "\n")

This builds a single string with all the numbers to write, then writes them in one chunk. This may save time compared to calling .write() 53 times.
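If you want to check that on your own machine, the standard timeit module makes a rough comparison easy. This is an untested sketch, and '/dev/null' assumes a Unix-like system:

import timeit

setup = "f = open('/dev/null', 'w'); n = 1000"

# 53 separate write() calls per repetition
many = "for i in range(n, n + 53): f.write(str(i) + '\\n')"

# one join and a single write() call per repetition
one = "f.write('\\n'.join(str(i) for i in range(n, n + 53)) + '\\n')"

print(timeit.timeit(many, setup=setup, number=10000))
print(timeit.timeit(one, setup=setup, number=10000))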

Other answers

With Perl, open the files during initialization and then match the output for each line to the appropriate file:

#! /usr/bin/perl

use warnings;
use strict;

my @values = (1..25);

my %fh;
foreach my $chr (@values) {
  my $path = "my_out_file_$chr.query";
  open my $fh, ">", $path
    or die "$0: open $path: $!";

  $fh{$chr} = $fh;
}

while (<>) {
  chomp;
  my($a,$b,$c,$d,$e) = split " ", $_, 5;

  print { $fh{$d} } "$_\n"
    for $b .. $b+52;
}

Do you know why it's slow? It's because you are processing that big file 25 times with the outer shell for loop!

awk '
$4 <= 25 {
    for (i = $2; i <= $2+52; i++){
        print i >> "my_out_file_"$4".query"
    }
}' bigfile

There are times when awk is not the answer.

There are also times when scripting languages are not the answer, when you are just plain better off biting the bullet and dragging down your copy of K&R and hacking some C code.

If your operating system implements pipes using concurrent processes and interprocess communication, as opposed to big temp files, what you might be able to do is write an awk script that reformats each line to put the selector field at the beginning, in a format easily readable with scanf(); write a C program that opens the 25 files and distributes lines among them; and then pipe the awk script's output into the C program.
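I can't speak for the C half, but just to make the shape of that pipeline concrete, here is the distributor stage sketched in Python rather than C. It assumes the awk stage has been set up to emit the selector field first, and the filenames are made up:

import sys

handles = {}
for line in sys.stdin:
    # each incoming line is assumed to look like "<selector> <data...>"
    selector, data = line.split(None, 1)
    if selector not in handles:
        handles[selector] = open("my_out_file_" + selector, "wt")
    handles[selector].write(data)

for fh in handles.values():
    fh.close()

You might invoke it as something like awk '{ print $4, $2 }' bigfile | python distribute.py, with distribute.py being the sketch above.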

Sounds like you're on your way, but I just wanted to mention memory-mapped I/O as being a huge help when working with gigantic files. There was a time when I had to parse a 0.5 GB binary file with Visual Basic 5 (yes)... importing the CreateFileMapping API allowed me to parse the file (and create a several-gig "human-readable" file) in minutes. And it only took half an hour or so to implement.

Here's a link describing the API on Microsoft platforms, though I'm sure MMIO should be available on just about any platform: MSDN
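For what it's worth, Python exposes the same idea in a cross-platform way through its mmap module. Here is an untested sketch of walking a big file line by line through a memory map (the filename and the dispatch step are placeholders):

import mmap

with open("bigfile", "rb") as f:
    # map the whole file read-only; the OS pages it in on demand
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    try:
        for line in iter(mm.readline, b""):
            fields = line.split()
            # ... dispatch on fields[3] here, as in the other answers ...
    finally:
        mm.close()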

Good luck!

There are some precalculations that may help.

For example, you can precalculate the outputs for each value of your field2. Assuming that, like field4, it takes 25 values:

my %tx = map { my $tx = ''; for my $tx1 ($_ .. $_+52) { $tx .= "$tx1\n" }; $_ => $tx } (1..25);

Later, when writing, you can do print {$fh{$pat}} $tx{$base};
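The same precalculation carries over directly to the Python solutions above; a small sketch, under the same assumption that field2 only takes the values 1 through 25:

# precompute, for each possible field2 value, the whole block of 53
# numbers as one ready-to-write string
tx = {}
for base in range(1, 26):
    tx[base] = "\n".join(str(i) for i in range(base, base + 53)) + "\n"

# then the write inside the main loop becomes a single lookup:
#     d[s_pat].write(tx[int(s_data)])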




