
Schema qualified tables with SQLAlchemy, SQLite and Postgresql?

https://www.devze.com 2022-12-28 05:05 Source: web

I have a Pylons project and a SQLAlchemy model that implements schema qualified tables:

class Hockey(Base):
    __tablename__ = "hockey"
    __table_args__ = {'schema':'winter'}
    hockey_id = sa.Column(sa.types.Integer, sa.Sequence('score_id_seq', optional=True), primary_key=True)
    baseball_id = sa.Column(sa.types.Integer, sa.ForeignKey('summer.baseball.baseball_id'))

This code works great with PostgreSQL, but fails with SQLite on the table and foreign key names (due to SQLite's lack of schema support):

sqlalchemy.exc.OperationalError: (OperationalError) unknown database "winter" 'PRAGMA "winter".table_info("hockey")' ()



I'd like to continue using SQLite for dev and testing.

Is there a way to have this fail gracefully on SQLite?

It's hard to know where to start with that kind of question. So . . .

Stop it. Just stop it.

There are some developers who don't have the luxury of developing on their target platform. Their life is a hard one--moving code (and sometimes compilers) from one environment to the other, debugging twice (sometimes having to debug remotely on the target platform), gradually coming to an awareness that the gnawing in their gut is actually the start of an ulcer.

Install PostgreSQL.

When you can use the same database environment for development, testing, and deployment, you should.

Not to mention the QA team. Why on earth are they testing stuff they're not going to ship? If you're deploying on PostgreSQL, assure the quality of your work on PostgreSQL.

Seriously.


I'm not sure if this works with foreign keys, but one could try SQLAlchemy's multi-tenancy schema translation for Table objects. It worked for me, though I was using custom primaryjoin and secondaryjoin expressions in combination with composite primary keys.

The schema translation map can be passed directly to the engine creator:

...

if dialect == "sqlite":
    url = lambda: "sqlite:///:memory:"
    execution_options = {"schema_translate_map": {"winter": None, "summer": None}}
else:
    # note: "pass" is a reserved word in Python, so use e.g. "password"
    url = lambda: f"postgresql://{user}:{password}@{host}:{port}/{name}"
    execution_options = None

engine = create_engine(url(), execution_options=execution_options)

...

Here is the doc for create_engine. There is another question on Stack Overflow which might be related in that regard.

But one might get colliding table names if all schema names are mapped to None.
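For what it's worth, here is a small self-contained sketch of the idea (assuming SQLAlchemy 1.4+; the "winter" schema name comes from the question). The translate map rewrites the qualified name at execution time, so the table lands in SQLite's default database:

```python
import sqlalchemy as sa

metadata = sa.MetaData()
hockey = sa.Table(
    "hockey", metadata,
    sa.Column("hockey_id", sa.Integer, primary_key=True),
    schema="winter",
)

# The map rewrites "winter"."hockey" to plain "hockey" whenever
# this engine executes SQL.
engine = sa.create_engine(
    "sqlite:///:memory:",
    execution_options={"schema_translate_map": {"winter": None}},
)

# checkfirst=False skips the reflection check, which may not honor
# the translate map in all versions (it issues the schema-qualified
# PRAGMA from the question's traceback).
metadata.create_all(engine, checkfirst=False)

tables = sa.inspect(engine).get_table_names()
print(tables)  # ['hockey']
```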


I'm just a beginner myself, and I haven't used Pylons, but...

I notice that you are combining the table and the associated class together. How about if you separate them?

import sqlalchemy as sa

engine = sa.create_engine('sqlite:///tutorial.sqlite')
meta = sa.MetaData()
schema = None
hockey_table = sa.Table('hockey', meta,
                        sa.Column('score_id', sa.types.Integer, sa.Sequence('score_id_seq', optional=True), primary_key=True),
                        sa.Column('baseball_id', sa.types.Integer, sa.ForeignKey('summer.baseball.baseball_id')),
                        schema=schema,
                        )

meta.create_all(engine)

Then you could create a separate

class Hockey(object):
    ...

and

mapper(Hockey, hockey_table)

Then just set schema = None everywhere when you are using SQLite, and the value(s) you want otherwise.

Since the question doesn't include a complete working example, the example above isn't a complete working one either. However, as other people have pointed out, trying to maintain portability across databases is in the end a losing game. I'd add a +1 to the people suggesting you just use PostgreSQL everywhere.
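A minimal runnable sketch of that idea, picking the schema from the dialect actually in use (the "winter" schema name is taken from the question; everything else is illustrative):

```python
import sqlalchemy as sa

engine = sa.create_engine("sqlite:///:memory:")

# No schema on SQLite, the real one everywhere else.
schema = None if engine.dialect.name == "sqlite" else "winter"

meta = sa.MetaData()
hockey_table = sa.Table(
    "hockey", meta,
    sa.Column("score_id", sa.Integer, primary_key=True),
    schema=schema,
)
meta.create_all(engine)

tables = sa.inspect(engine).get_table_names()
print(tables)  # ['hockey']
```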

HTH, Regards.


I know this is a 10+ year old question, but I ran into the same problem recently: Postgres in production and SQLite in development.

The solution was to register an event listener for when the engine calls the "connect" method.

@sqlalchemy.event.listens_for(engine, "connect")
def connect(dbapi_connection, connection_record):
    dbapi_connection.execute('ATTACH "your_data_base_name.db" AS "schema_name"')

Running the ATTACH statement only once will not work, because it affects only a single connection. That's why we need the event listener: it runs the ATTACH statement on every new connection.
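A self-contained sketch of this approach (the "winter" and "summer" schema names come from the question; in-memory databases and StaticPool are used here only so the example runs anywhere — with real file paths the data would persist):

```python
import sqlalchemy
from sqlalchemy.pool import StaticPool

# StaticPool keeps a single connection, so the attached in-memory
# databases survive for the whole example.
engine = sqlalchemy.create_engine("sqlite://", poolclass=StaticPool)

@sqlalchemy.event.listens_for(engine, "connect")
def attach_schemas(dbapi_connection, connection_record):
    # Attach one database per schema the model expects.
    dbapi_connection.execute("ATTACH DATABASE ':memory:' AS winter")
    dbapi_connection.execute("ATTACH DATABASE ':memory:' AS summer")

with engine.connect() as conn:
    conn.execute(sqlalchemy.text(
        "CREATE TABLE winter.hockey (hockey_id INTEGER PRIMARY KEY)"))
    names = conn.execute(sqlalchemy.text(
        "SELECT name FROM winter.sqlite_master")).scalars().all()

print(names)  # ['hockey']
```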
