I am using Python 2.6.6 and SQLAlchemy 0.6.6 to handle a one-to-many relationship in my database, and I am unsure how to prevent SQLAlchemy from inserting new child records when equivalent data already exists.
Database code:
    from sqlalchemy import *
    from sqlalchemy.orm import backref, relationship, sessionmaker, create_session
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    # Parent table, no foreign key.
    class Author(Base):
        __tablename__ = 'authors'
        id = Column(Integer, primary_key=True)
        username = Column(String)
        author_metadata = relationship('AuthorMetadata', backref='author')

    # Child table, many records with same author_id.
    class AuthorMetadata(Base):
        __tablename__ = 'author_metadata'
        id = Column(Integer, primary_key=True)
        author_id = Column(Integer, ForeignKey('authors.id'))
        metakey = Column(String)
        metavalue = Column(Text)
Example script:
    if __name__ == '__main__':
        engine = create_engine('database_details', pool_recycle=90)
        session = create_session(bind=engine)
        author = session.query(Author).filter_by(username='Godfrey').first()
        if not author:
            author = Author()
            author.username = 'Godfrey'
        author.author_metadata = [
            AuthorMetadata(metakey='location', metavalue='New York'),
            AuthorMetadata(metakey='posts', metavalue='5')]
        session.add(author)
        session.flush()
The first time I run the example script, the following appears in the database (as expected):
dev=# select id from authors where username = 'Godfrey';
id
------
5025
(1 row)
dev=# select id, author_id, metakey, metavalue from author_metadata order by id desc limit 2;
id | author_id | metakey | metavalue
-------+-----------+----------+-----------
85090 | 5025 | posts | 5
85089 | 5025 | location | New York
(2 rows)
If I run the example script again, though, you can see that the existing metadata records' author ids have been set to null and new records have been inserted:
dev=# select id, author_id, metakey, metavalue from author_metadata order by id desc limit 4;
id | author_id | metakey | metavalue
-------+-----------+----------+-----------
85092 | 5025 | posts | 5
85091 | 5025 | location | New York
85090 | | posts | 5
85089 | | location | New York
(4 rows)
I don't find this surprising, but I'm wondering whether there is a clean way to tell SQLAlchemy to insert, modify, or delete author metadata rows only when the new list of metadata differs from the existing one.
You could explicitly check the contents of the collection and only append new AuthorMetadata objects if they don't already exist, rather than replacing the entire collection with brand-new objects. That would at least avoid orphaning the previously created records.
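A minimal sketch of that reconcile-in-place idea, condensed so it is self-contained (the models are copied from the question, with a `cascade='all, delete-orphan'` added so removed entries are deleted rather than left with a NULL `author_id`). The `sync_metadata` helper name is illustrative, not part of any API, and the code is written against a modern SQLAlchemy; the 0.6-era version differs mainly in the imports and session setup:

```python
from sqlalchemy import create_engine, Column, Integer, String, Text, ForeignKey
from sqlalchemy.orm import declarative_base, relationship, sessionmaker

Base = declarative_base()

class Author(Base):
    __tablename__ = 'authors'
    id = Column(Integer, primary_key=True)
    username = Column(String)
    # delete-orphan added so entries removed from the collection are
    # deleted instead of being kept with a NULL author_id
    author_metadata = relationship('AuthorMetadata', backref='author',
                                   cascade='all, delete-orphan')

class AuthorMetadata(Base):
    __tablename__ = 'author_metadata'
    id = Column(Integer, primary_key=True)
    author_id = Column(Integer, ForeignKey('authors.id'))
    metakey = Column(String)
    metavalue = Column(Text)

def sync_metadata(author, desired):
    """Reconcile author.author_metadata with `desired` (a dict of
    metakey -> metavalue), reusing existing rows instead of replacing them."""
    existing = dict((m.metakey, m) for m in author.author_metadata)
    for key, value in desired.items():
        if key in existing:
            existing[key].metavalue = value      # update the row in place
        else:
            author.author_metadata.append(
                AuthorMetadata(metakey=key, metavalue=value))
    for key, meta in existing.items():
        if key not in desired:                   # drop keys no longer wanted
            author.author_metadata.remove(meta)

engine = create_engine('sqlite://')              # in-memory DB for the demo
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

author = Author(username='Godfrey')
sync_metadata(author, {'location': 'New York', 'posts': '5'})
session.add(author)
session.commit()

first_ids = sorted(m.id for m in author.author_metadata)
# Running the sync again with identical data reuses the same rows,
# so the primary keys do not change and no orphans are created.
sync_metadata(author, {'location': 'New York', 'posts': '5'})
session.commit()
assert sorted(m.id for m in author.author_metadata) == first_ids
```

The key difference from the question's script is that the existing collection is mutated rather than reassigned, so SQLAlchemy never detaches the old rows in the first place.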
Your use case matches attribute_mapped_collection and association_proxy quite well, so you probably want to go with one of them.
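A sketch of that dict-style mapping, assuming the question's models: `attribute_mapped_collection` keys the collection by `metakey`, and `association_proxy` exposes it as a plain dict of `metakey -> metavalue`. The proxy attribute name `meta` is illustrative; assigning to an existing key updates the existing row rather than inserting a new one:

```python
from sqlalchemy import create_engine, Column, Integer, String, Text, ForeignKey
from sqlalchemy.orm import declarative_base, relationship, sessionmaker
from sqlalchemy.orm.collections import attribute_mapped_collection
from sqlalchemy.ext.associationproxy import association_proxy

Base = declarative_base()

class Author(Base):
    __tablename__ = 'authors'
    id = Column(Integer, primary_key=True)
    username = Column(String)
    # dict collection keyed by metakey; delete-orphan so replaced
    # entries are deleted instead of left with a NULL author_id
    author_metadata = relationship(
        'AuthorMetadata',
        collection_class=attribute_mapped_collection('metakey'),
        cascade='all, delete-orphan')
    # expose the collection as a plain metakey -> metavalue dict
    meta = association_proxy(
        'author_metadata', 'metavalue',
        creator=lambda key, value: AuthorMetadata(metakey=key, metavalue=value))

class AuthorMetadata(Base):
    __tablename__ = 'author_metadata'
    id = Column(Integer, primary_key=True)
    author_id = Column(Integer, ForeignKey('authors.id'))
    metakey = Column(String)
    metavalue = Column(Text)

engine = create_engine('sqlite://')              # in-memory DB for the demo
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

author = Author(username='Godfrey')
author.meta['location'] = 'New York'
author.meta['posts'] = '5'
session.add(author)
session.commit()
posts_id = author.author_metadata['posts'].id

# Setting an existing key updates that row in place: same primary key,
# no orphaned record with a NULL author_id.
author.meta['posts'] = '6'
session.commit()
assert author.author_metadata['posts'].id == posts_id
```

With this setup the "only change what differs" behavior falls out of ordinary dict assignment, since the proxy routes writes to the matching existing row.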