best way to handle this in Python
Ian Kelly
ian.g.kelly at gmail.com
Fri Jul 20 14:14:30 EDT 2012
- Previous message (by thread): best way to handle this in Python
- Next message (by thread): best way to handle this in Python
- Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
On Fri, Jul 20, 2012 at 4:34 AM, Rita <rmorgan466 at gmail.com> wrote:
> That's an interesting data structure, Dennis. I will actually be running
> this type of query many times, preferably in an ad-hoc environment. That
> makes it tough for sqlite3 since there will be several hundred thousand
> tuples.

Several hundred thousand is not an enormous number. I think you're underestimating sqlite3.

I just tried a test with one million tuples, six colors per tuple (six million rows altogether). Each row contains a primary key, a timestamp, a color, and a count, with an index on the timestamp column. Building the database from scratch took about a minute; adding the index took about another minute. Incremental updates would of course be much faster.

Queries like "select * from data where timestamp between 500000 and 600000" return instantly (from a user perspective).

Cheers,
Ian
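A sketch of the kind of benchmark Ian describes, scaled down so it runs in a moment. The table and column names (data, ts, color, count) and the exact row contents are assumptions; the original post gives the query but not the schema.

```python
import sqlite3
import time

N_TUPLES = 10_000          # the post used 1,000,000
COLORS = ["red", "orange", "yellow", "green", "blue", "violet"]

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE data ("
    " id INTEGER PRIMARY KEY,"
    " ts INTEGER,"
    " color TEXT,"
    " count INTEGER)"
)

# Bulk-load: one row per (tuple, color) pair -> six rows per tuple,
# matching the "six colors per tuple" setup in the post.
rows = (
    (ts, color, ts % 100)        # count value is a placeholder
    for ts in range(N_TUPLES)
    for color in COLORS
)
start = time.perf_counter()
cur.executemany("INSERT INTO data (ts, color, count) VALUES (?, ?, ?)", rows)
conn.commit()
load_secs = time.perf_counter() - start

# Add the timestamp index after the bulk load, as in the post;
# indexing after loading is typically faster than maintaining it per-insert.
cur.execute("CREATE INDEX idx_ts ON data (ts)")
conn.commit()

# A range query of the same shape as the one quoted in the post.
result = cur.execute(
    "SELECT * FROM data WHERE ts BETWEEN ? AND ?", (5000, 6000)
).fetchall()
print(f"loaded {N_TUPLES * len(COLORS)} rows in {load_secs:.2f}s, "
      f"range query returned {len(result)} rows")
conn.close()
```

With the index in place, the range query is served by an index scan rather than a full table scan, which is why it feels instantaneous even at the million-tuple scale Ian tested.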
More information about the Python-list mailing list