My program is sucking up a meg every few seconds. I read that Python's garbage collector doesn't see cursors, so I have a feeling that I might be doing something wrong with my use of pyodbc and SQLAlchemy, and maybe not closing something somewhere?
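One thing worth checking first: pyodbc cursors have an explicit close() method, so they don't have to wait for garbage collection. A minimal sketch of the pattern (conn here stands for any open pyodbc connection, not a name from the code below):

cur = conn.cursor()    # conn: an already-open pyodbc connection (assumed)
try:
    cur.execute('SELECT 1')
    cur.fetchall()
finally:
    cur.close()        # frees the ODBC statement handle deterministically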
#Set up SQL Connection
import atexit
import pyodbc
from sqlalchemy import create_engine, MetaData, Table

def connect():
    conn_string = 'DRIVER={FreeTDS};Server=...;Database=...;UID=...;PWD=...'
    return pyodbc.connect(conn_string)

metadata = MetaData()
e = create_engine('mssql://', creator=connect)
c = e.connect()
metadata.bind = c
log_table = Table('Log', metadata, autoload=True)
...
atexit.register(cleanup)
#Core Loop
line_c = 0
inserts = []
insert_size = 2000

while True:
    #line = sys.stdin.readline()
    line = reader.readline()
    line_c += 1
    m = line_regex.match(line)
    if m:
        fields = m.groupdict()
        ...
        inserts.append(fields)
    if line_c >= insert_size:
        c.execute(log_table.insert(), inserts)
        line_c = 0
        inserts = []
Should I maybe move the metadata block, or part of it, into the insert block and close the connection after each insert?
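If the long-lived connection turns out to be the culprit, one option is to check out a connection per batch so driver state is released after every flush. A minimal sketch with a hypothetical flush() helper, assuming the same e engine and log_table as above and the pre-2.0 SQLAlchemy autocommit behavior this code already relies on:

def flush(engine, table, rows):
    if not rows:
        return
    conn = engine.connect()    # short-lived connection per batch
    try:
        conn.execute(table.insert(), rows)
    finally:
        conn.close()           # return it to the pool, free cursor state

# in the loop, replacing c.execute(log_table.insert(), inserts):
# flush(e, log_table, inserts)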
Edit:
Q: Does it ever stabilize? A: Only if you count Linux blowing away the process :-) (The graph excludes buffers/cache from memory usage.)
I would not necessarily blame SQLAlchemy; it could also be a problem in the underlying driver. Memory leaks are generally hard to detect. In any case, you should ask on the SQLAlchemy mailing list, where the core developer Michael Bayer responds to almost every question; you probably have a better chance of getting real help there.
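To narrow down whether the growth is on the Python side at all, the standard-library tracemalloc module (Python 3.4+) can diff heap snapshots between batches. A minimal sketch, not specific to this program:

import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()
# ... run a few thousand inserts ...
after = tracemalloc.take_snapshot()
for stat in after.compare_to(before, 'lineno')[:10]:
    print(stat)    # top allocation sites that grew between snapshots

If nothing grows on the Python heap while the process size keeps climbing, that points at the C side (pyodbc or FreeTDS), which would support the driver theory.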