I have a table in a postgres database that's being used in a testing environment where we need to add and drop several columns at a time. The problem is that postgres has a maximum of 1600 columns per table, and this count includes dropped columns. My table will never have 1600 un-dropped columns at once, but over time the dropped columns accumulate and push it past the 1600 limit.
I've tried using VACUUM and VACUUM FULL, and I've tried recasting an existing column as its own type (ALTER TABLE table ALTER COLUMN anycol TYPE anytype) to cause postgres to scan all of the columns and clean up the storage from dropped columns, but none of these reset postgres' column numbering.
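For reference, the statements I tried look roughly like this (mytable and anycol are placeholders, and the recast assumes anycol is an integer column):

VACUUM mytable;
VACUUM FULL mytable;
-- recast the column to its own type, hoping to force a table rewrite
ALTER TABLE mytable ALTER COLUMN anycol TYPE integer;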
I know that this could be solved by copying the entire table, but that approach has its own issues and is covered in a separate question.
Do you know of a way to make postgres forget it had dropped columns?
I know that postgres was not designed for applications like this, but I'm not going to get into why we chose to implement it this way. If you have an alternate tool to use, I'd be interested to hear about it, but I'd still like to find a solution to this.
This is not possible, other than by recreating the table, as you already found out.
Otherwise the database system would somehow have to keep track of when the storage used by a dropped column has been cleared everywhere, and then renumber the attributes. That would be incredibly expensive and complicated.
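If you do go the recreation route, a minimal sketch looks something like this (mytable is a placeholder; note that CREATE TABLE ... AS does not carry over indexes, constraints, defaults, triggers, or privileges, so those have to be recreated by hand):

BEGIN;
-- copies only the live (un-dropped) columns, so the new table
-- gets a fresh attribute numbering in pg_attribute
CREATE TABLE mytable_new AS SELECT * FROM mytable;
DROP TABLE mytable;
ALTER TABLE mytable_new RENAME TO mytable;
COMMIT;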
The system table pg_attribute still shows the old (dropped) columns. I don't know why, but it looks like a bug to me.
SELECT relnatts,
       attname,
       attisdropped
FROM pg_class
JOIN pg_attribute att ON attrelid = pg_class.oid
WHERE relname = 'your_table_name';
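If you just want to know how many attribute slots the dropped columns are consuming, a count like this should do it (same placeholder table name):

SELECT count(*) AS dropped_columns
FROM pg_attribute
WHERE attrelid = 'your_table_name'::regclass
  AND attisdropped;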
Could you send a bug report, including a simple example, to pgsql-bugs@postgresql.org or via http://www.postgresql.org/support/submitbug?