Simpom - 7 days ago
Python Question

Fastest way to "grep" big files

I have big log files (from 100 MB to 2 GB) that each contain a single particular line I need to parse in a Python program, and I have around 20,000 such files to process. I know that the line I am looking for is within the last 200 lines of the file, or equivalently within the last 15,000 bytes.

As this is a recurring task, I need it to be as fast as possible. What is the fastest way to do it?

I have thought about 4 strategies:


  • read the whole file in Python and search it with a regex (method_1)

  • read only the last 15,000 bytes of the file and search them with a regex (method_2)

  • make a system call to grep (method_3)

  • make a system call to grep after tailing the last 200 lines (method_4)



Here are the functions I wrote to test these strategies:

import os
import re
import subprocess

def method_1(filename):
    """Method 1: read the whole file and search with a regex"""
    regex = r'\(TEMPS CP :[ ]*.*S\)'
    with open(filename, 'r') as f:
        txt = f.read()
        match = re.search(regex, txt)
        if match:
            print match.group()

def method_2(filename):
    """Method 2: read only the end of the file and search with a regex"""
    regex = r'\(TEMPS CP :[ ]*.*S\)'
    with open(filename, 'r') as f:
        size = min(15000, os.stat(filename).st_size)
        f.seek(-size, os.SEEK_END)
        txt = f.read(size)
        match = re.search(regex, txt)
        if match:
            print match.group()

def method_3(filename):
    """Method 3: grep the entire file"""
    cmd = 'grep "(TEMPS CP :" {} | head -n 1'.format(filename)
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    print process.communicate()[0][:-1]

def method_4(filename):
    """Method 4: tail the file and grep"""
    cmd = 'tail -n 200 {} | grep "(TEMPS CP :"'.format(filename)
    process = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE)
    print process.communicate()[0][:-1]
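As a side note, the two shell-based methods build the command line by string formatting, which breaks on file names containing spaces or shell metacharacters. A sketch of the tail-and-grep approach without shell=True (the function name `method_4_noshell` is mine, and it returns the match rather than printing it):

```python
import subprocess

def method_4_noshell(filename):
    """Hypothetical variant of method_4: tail | grep without shell=True.

    Returns the matching bytes so the caller can decide what to do
    with the result.
    """
    tail = subprocess.Popen(['tail', '-n', '200', filename],
                            stdout=subprocess.PIPE)
    grep = subprocess.Popen(['grep', '(TEMPS CP :'],
                            stdin=tail.stdout, stdout=subprocess.PIPE)
    tail.stdout.close()  # let tail receive SIGPIPE if grep exits first
    return grep.communicate()[0].rstrip()
```

Passing argument lists also skips one shell process per file, which matters a little when 20,000 files are involved.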


I ran these methods on two files ("trace" is 207 MB and "trace_big" is 1.9 GB) and measured the following times (in seconds):

+----------+-----------+-----------+
|          |   trace   | trace_big |
+----------+-----------+-----------+
| method_1 | 2.89E-001 | 2.63E+000 |
| method_2 | 5.71E-004 | 5.01E-004 |
| method_3 | 2.30E-001 | 1.97E+000 |
| method_4 | 4.94E-003 | 5.06E-003 |
+----------+-----------+-----------+


So method_2 seems to be the fastest. But is there another solution I have not thought of?
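For reference, timings of this kind can be collected with a small harness built on the standard timeit module (the `benchmark` helper and the file names in the commented usage are mine, not part of the original measurement):

```python
import timeit

def benchmark(method, filename, repeat=10):
    """Best-of-`repeat` wall-clock time for one call of `method` on `filename`."""
    timer = timeit.Timer(lambda: method(filename))
    return min(timer.repeat(repeat=repeat, number=1))

# Hypothetical usage, assuming the method_* functions above are defined:
# for m in (method_1, method_2, method_3, method_4):
#     print('%s: %.2e s' % (m.__name__, benchmark(m, 'trace')))
```

Taking the minimum over several runs is the usual way to reduce noise from caching and scheduling; note that a warm file-system cache will make all methods look faster than a cold first run.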

Edit

In addition to the previous methods, Gosha F suggested a fifth method using mmap:

import contextlib
import mmap

def method_5(filename):
    """Method 5: memory-map the end of the file and search with a regex"""
    regex = re.compile(r'\(TEMPS CP :[ ]*.*S\)')
    offset = max(0, os.stat(filename).st_size - 15000)
    # mmap offsets must be multiples of ALLOCATIONGRANULARITY; round down
    # so the mapping still covers the last 15,000 bytes
    offset -= offset % mmap.ALLOCATIONGRANULARITY
    with open(filename, 'r') as f:
        mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_COPY, offset=offset)
        with contextlib.closing(mm) as txt:
            match = regex.search(txt)
            if match:
                print match.group()


I tested it and got the following results:

+----------+-----------+-----------+
|          |   trace   | trace_big |
+----------+-----------+-----------+
| method_5 | 2.50E-004 | 2.71E-004 |
+----------+-----------+-----------+

Answer

You may also consider using memory mapping (the mmap module), like this:

def method_5(filename):
    """Method 5: use memory mapping and regex"""
    regex = re.compile(r'\(TEMPS CP :[ ]*.*S\)')
    offset = max(0, os.stat(filename).st_size - 15000)
    # the offset passed to mmap must be a multiple of ALLOCATIONGRANULARITY
    offset -= offset % mmap.ALLOCATIONGRANULARITY
    with open(filename, 'r') as f:
        with contextlib.closing(mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_COPY, offset=offset)) as txt:
            match = regex.search(txt)
            if match:
                print match.group()

also some side notes:

  • if you go the shell-command route, ag (the Silver Searcher) may in some cases be orders of magnitude faster than grep (although with only 200 lines of greppable text the difference probably vanishes compared to the overhead of starting a shell)
  • compiling your regex once, at the beginning of the function, may make some difference
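On that last point, hoisting the compiled pattern to module level means it is compiled once per process rather than once per call, which is a small but free win when scanning 20,000 files. A minimal sketch (the names `TEMPS_RE` and `find_temps` are mine):

```python
import re

# Compiled once at import time, shared by every call.
TEMPS_RE = re.compile(r'\(TEMPS CP :[ ]*.*S\)')

def find_temps(txt):
    """Return the matched '(TEMPS CP : ... S)' substring, or None."""
    match = TEMPS_RE.search(txt)
    return match.group() if match else None
```

In practice `re` caches recently compiled patterns, so the gain is mostly the cache lookup per call, but the explicit module-level compile makes the intent clear.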