Please help me design a database schema for this:

I'm designing a Python application which works with a database. I'm planning to use SQLite. There are 15000 objects, and each object has a few attributes. Every day I need to add some data for each object (maybe create a column with the date as its name).

However, I would like to be able to easily delete data that is too old, and it is very hard to delete columns in SQLite (it might also be slow, because I would need to copy the required columns into a new table and then drop the old one).

Is there a better way to organize this data other than creating a column for every date? Or should I use something other than SQLite?


It'll probably be easiest to separate your data into two tables like so:

CREATE TABLE object(
    id        INTEGER PRIMARY KEY,
    ...
);

CREATE TABLE extra_data(
    objectid  INTEGER,
    date      DATETIME,
    ...
    FOREIGN KEY(objectid) REFERENCES object(id)
);

This way, when you need to delete all of your entries from a given date, it's as easy as:

DELETE FROM extra_data WHERE date = curdate;

I would try to avoid altering tables all the time; constantly changing the schema usually indicates a bad design.
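
A minimal Python sketch of this two-table layout, using the standard-library sqlite3 module; the attribute columns (`name`, `value`) and the 30-day retention window are placeholders for illustration, not part of the original answer:

    import sqlite3
    from datetime import date, timedelta

    conn = sqlite3.connect("objects.db")
    conn.execute("PRAGMA foreign_keys = ON")  # FK enforcement is off by default in SQLite

    conn.executescript("""
    CREATE TABLE IF NOT EXISTS object(
        id    INTEGER PRIMARY KEY,
        name  TEXT                      -- placeholder attribute
    );
    CREATE TABLE IF NOT EXISTS extra_data(
        objectid  INTEGER,
        date      DATE,
        value     REAL,                 -- placeholder daily measurement
        FOREIGN KEY(objectid) REFERENCES object(id)
    );
    """)

    # Add today's row for every object: one INSERT per object per day
    # instead of a new column per day.
    today = date.today().isoformat()
    rows = [(obj_id, today, 42.0) for (obj_id,) in conn.execute("SELECT id FROM object")]
    conn.executemany("INSERT INTO extra_data(objectid, date, value) VALUES (?, ?, ?)", rows)

    # Drop everything older than 30 days with a single DELETE.
    cutoff = (date.today() - timedelta(days=30)).isoformat()
    conn.execute("DELETE FROM extra_data WHERE date < ?", (cutoff,))
    conn.commit()
    conn.close()

Storing dates as ISO-8601 strings keeps comparisons like `date < ?` correct, since they sort lexicographically.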


For that size of database, I would use something else. I used SQLite once for a media library with about 10k objects and it was slow: something like 5 minutes to query it all and display it, and searches were painful. Switching to Postgres made life so much easier. This is just on the performance side.

It also might be better to create a separate table that holds the date, the data you want to add, and a reference to the object it belongs to, and use that table for your deletions instead of altering the schema all the time. You can do this in SQLite by giving the reference column an INTEGER type and storing the object's primary key in it, rather than relying on FOREIGN KEY constraints the way you would with MySQL/Postgres. See the indexing sketch below.
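
However the reference is stored, an index on the date column keeps those bulk deletes from scanning every row. A small sketch, assuming the `extra_data` table and column names from the schema above:

    import sqlite3

    conn = sqlite3.connect("objects.db")

    # Index the date column so "DELETE ... WHERE date < ?" can seek to the old rows
    # instead of scanning all 15000 * N rows; the objectid index helps per-object reads.
    conn.execute("CREATE INDEX IF NOT EXISTS idx_extra_data_date ON extra_data(date)")
    conn.execute("CREATE INDEX IF NOT EXISTS idx_extra_data_obj  ON extra_data(objectid)")
    conn.commit()
    conn.close()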


If your database is pretty much a collection of almost-homogeneous data, you could just as well go for a simpler key-value store. If the main operation you perform on the data is scanning through everything, it would perform significantly better.

The Python standard library has bindings for the popular ones under anydbm (renamed dbm in Python 3). There is also a dict-imitating proxy over it in shelve. You could serialize your objects and their attributes using any serializer you want (simplejson, yaml, pickle). A quick sketch follows.
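
For example, a small sketch using the standard-library shelve module (which pickles values for you), keyed by object id; the key, date format, and attribute names here are hypothetical:

    import shelve
    from datetime import date

    # shelve gives a dict-like interface backed by dbm; values are pickled automatically.
    with shelve.open("objects_kv") as db:
        key = "object-123"                      # hypothetical object id
        record = db.get(key, {})                # maps date string -> attribute dict
        record[date.today().isoformat()] = {"temperature": 21.5, "status": "ok"}
        db[key] = record                        # write back; shelve won't see in-place edits

        # Expiring old data means rewriting each value without the stale dates.
        cutoff = "2023-01-01"
        for obj_key in list(db):
            db[obj_key] = {d: v for d, v in db[obj_key].items() if d >= cutoff}

The trade-off versus SQL is visible in the last loop: deleting old data requires touching every value, which is fine for scan-everything workloads but slower than a single indexed DELETE.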
