I am looking for buffering code to process huge record sets stored in a tuple / CSV file / SQLite DB / numpy.ndarray; the buffer should work much like the Linux "more" command.
The need comes from processing huge data sets (maybe 100,000,000 rows) where the records look like this:
0.12313 0.231312 0.23123 0.152432
0.22569 0.311312 0.54549 0.224654
0.33326 0.654685 0.67968 0.168749
...
0.42315 0.574575 0.68646 0.689596
I want to process them as a numpy.ndarray: for example, find particular values, process them, and store them back, or process two columns at once. However, the file is so big that if numpy reads it directly I get a MemoryError.
So I think an adapter, something like a memory cache page or the Linux "more file" command, could keep memory use down while processing.
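For reference, forward-only chunked reading with plain numpy already works (a rough sketch, assuming whitespace-separated float columns like the sample above and a made-up filename), but it gives no backward paging or random access:

import itertools
import numpy as np

def iter_chunks(path, chunk_rows=100000):
    # Stream the file in fixed-size blocks; only one chunk is in memory at a time.
    with open(path) as fd:
        while True:
            lines = list(itertools.islice(fd, chunk_rows))
            if not lines:
                break
            yield np.loadtxt(lines)  # loadtxt also accepts a list of lines

for chunk in iter_chunks("big.txt"):
    pass  # find special data, process it, store it back, etc.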
Because the raw data may come in different formats (CSV / SQLite DB / HDF5 / XML), I want the adapter to be fairly normalized; using "[]" for row access seems the most common way, since each record can be represented as a list.
So the adapter I want may look like this:
fd = "a opend big file" # or a tuple of objects, whatever, it is an iterable object can access all the raw rows
page = pager(fd)
page.page_buffer_size = 100 # buffer 100 line or 100 object in tuple
page.seek_to(0) # move to start
page.seek_to(120) # move to line #120
page.seek_to(-10) # seek back 10 lines (relative), to line #110
page.next_page()
page.prev_page()
page1 = page.copy()
page.remove(0)
page.sync()
Can someone give me some hints so that I don't reinvent the wheel?
By the way, ATpy (http://atpy.sourceforge.net/) is a module that can sync numpy arrays with raw data sources in different formats; however, it also reads all the data into memory in one go.
And PyTables is not suitable for me so far, because it does not support SQL and the HDF5 format may not be as popular as SQLite DB files (forgive me if this is wrong).
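For the SQLite case, I assume the helper could simply page with LIMIT/OFFSET; a rough sketch (the "records" table name and "data.db" filename are made up):

import sqlite3

def fetch_page(conn, offset, page_size=100):
    # OFFSET-based paging: simple, though it gets slower at very large offsets.
    cur = conn.execute(
        "SELECT * FROM records LIMIT ? OFFSET ?", (page_size, offset))
    return cur.fetchall()

conn = sqlite3.connect("data.db")
page = fetch_page(conn, 120)  # rows 120..219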
My plan is to write this tool in the following way (a minimal sketch follows the outline):
1. helper.py <-- define all the house-keeping works for different file format
|- load_file()
|- seek_backward()
|- seek_forward()
| ...
2. adapter.py <-- define all the interfaces; import the helper to interact
with the raw data, and expose a way to interact with a numpy.ndarray somehow.
|- load()
|- seek_to()
|- next_page()
|- prev_page()
|- sync()
|- self.page_buffer_size
|- self.abs_index_in_raw_for_this_page = []
|- self.index_for_this_page = []
|- self.buffered_rows = []
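To make the intent concrete, here is a minimal, untested sketch of the adapter over any indexable sequence of rows (a real version would use the format-specific helpers above instead of keeping everything in memory):

class Pager(object):
    """A minimal paging view over any sequence of rows (sketch only)."""
    def __init__(self, rows, page_buffer_size=100):
        self.rows = rows                  # the raw, indexable data source
        self.page_buffer_size = page_buffer_size
        self.here = 0                     # absolute index of the page start

    def seek_to(self, index):
        # Negative values are relative seeks, as in the wish list above.
        self.here = self.here + index if index < 0 else index

    def next_page(self):
        # Advance one page and return the newly visible rows.
        self.here += self.page_buffer_size
        return self.buffered_rows()

    def prev_page(self):
        self.here = max(0, self.here - self.page_buffer_size)
        return self.buffered_rows()

    def buffered_rows(self):
        return self.rows[self.here:self.here + self.page_buffer_size]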
Thanks,
Rgs,
KC
Ummmm.... You're not really talking about anything more than a list.
fd = open("some file", "r")
data = fd.readlines()
page_size = 100
data[0:0 + page_size]      # move to start
data[120:120 + page_size]  # move to line 120
here = 120
data[here - 10:here - 10 + page_size]  # move back 10 from here
here -= 10
data[here:here + page_size]
here += page_size
data[here:here + page_size]
I'm not sure that you actually need to invent anything.
The linecache module may be helpful: you can call linecache.getline(filename, lineno) to efficiently retrieve lines from the given file.
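For example, a page of lines could be fetched like this (a sketch; note that linecache numbers lines from 1 and returns '' past the end of the file):

import linecache

def get_page(filename, first_line, page_size=100):
    # linecache numbers lines from 1; getline() returns '' past EOF.
    page = []
    for lineno in range(first_line, first_line + page_size):
        line = linecache.getline(filename, lineno)
        if not line:
            break
        page.append(line)
    return page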
You'll still have to figure out how high and wide the screen is. A quick Google search suggests that there are about 14 different ways to do this, some of which are probably outdated. The curses module may be your best bet, and I think it will be necessary if you want to be able to scroll backwards smoothly.
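Getting the screen size from curses takes only a few lines (a sketch using the standard curses API):

import curses

def main(stdscr):
    height, width = stdscr.getmaxyx()  # terminal size in rows and columns
    stdscr.addstr(0, 0, "%d rows x %d cols" % (height, width))
    stdscr.getch()  # wait for a keypress before restoring the terminal

curses.wrapper(main)  # sets up and tears down the terminal safely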