Good way to read CSV data using psycopg2


I am trying to find a fast way, i.e. fast and without a lot of code, to get CSV data into a Postgres database. I am reading the file in Python using csv.DictReader, which works fine. Then I need to somehow generate code that takes the dicts and puts them into a table. I want to do this automatically, as my tables often have hundreds of variables. (I don't want to load directly into Postgres because in many cases I must transform the data first, and Python is good for that.)

This is some of what I have got:

import csv
import sys

import psycopg2
import psycopg2.extras

csvReader = csv.DictReader(open('/home/matthew/Downloads/us_gis_data/statesp020.csv', newline=''), delimiter=',')

connection_string = "host='localhost' dbname='mydb' user='postgres' password='######'"
try:
    connection = psycopg2.extras.DictConnection(connection_string)
    print("connecting")
except psycopg2.Error as e:
    sys.exit("did not work: %s" % e)

dict_cur = connection.cursor()

#dict_cur.execute("CREATE TABLE test (id serial PRIMARY KEY, num integer, data varchar);")
x = 0
for i in range(1, 50):
    x = x + 1
    print(x)
    dict_cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)", (x, str(3.6)))  # or a string like "abc'def"
    ### how do I create the table and insert values using the DictReader?

dict_cur.execute("SELECT * FROM test")
for rec in dict_cur:
    print(rec['num'], rec['data'])


Say you have a list of field names (presumably you can get this from the header of your CSV file):

fieldnames = ['Name', 'Address', 'City', 'State']

Assuming they're all VARCHARs, you can create the table "TableName":

sql_table = 'CREATE TABLE TableName (%s)' % ','.join('%s VARCHAR(50)' % name for name in fieldnames)
cursor.execute(sql_table)
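
With the example fieldnames above, this builds the following statement (shown here just to make the string manipulation concrete):

CREATE TABLE TableName (Name VARCHAR(50),Address VARCHAR(50),City VARCHAR(50),State VARCHAR(50))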

You can insert a row from a dictionary "row_dict" (avoid calling it dict, which shadows the built-in):

sql_insert = ('INSERT INTO TableName (%s) VALUES (%s)' %
              (','.join(fieldnames),
               ','.join('%%(%s)s' % name for name in fieldnames)))
cursor.execute(sql_insert, row_dict)
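
For the same fieldnames, sql_insert expands to named placeholders that psycopg2 fills in from the dictionary's keys:

INSERT INTO TableName (Name,Address,City,State) VALUES (%(Name)s,%(Address)s,%(City)s,%(State)s)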

Or do it in one go, given a list of dictionaries:

dictlist = [dict1, dict2, ...]
cursor.executemany(sql_insert, dictlist)

You can adapt this as necessary based on the type of your fields and the use of DictReader.
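
Putting this together with the question's code, here is a minimal end-to-end sketch. It assumes the CSV's first row is a header and every column is loaded as VARCHAR; the table name "states" is a placeholder. It uses psycopg2's sql module (psycopg2 >= 2.7) to quote the column names instead of splicing them in with %:

import csv

import psycopg2
from psycopg2 import sql

connection = psycopg2.connect("host='localhost' dbname='mydb' user='postgres' password='######'")
cursor = connection.cursor()

with open('/home/matthew/Downloads/us_gis_data/statesp020.csv', newline='') as f:
    reader = csv.DictReader(f, delimiter=',')
    fieldnames = reader.fieldnames

    # Build CREATE TABLE from the CSV header; sql.Identifier quotes each
    # column name safely.
    create = sql.SQL('CREATE TABLE states ({})').format(
        sql.SQL(', ').join(
            sql.SQL('{} VARCHAR(50)').format(sql.Identifier(name))
            for name in fieldnames))
    cursor.execute(create)

    # Named placeholders, one per column, filled from each DictReader row.
    insert = sql.SQL('INSERT INTO states ({}) VALUES ({})').format(
        sql.SQL(', ').join(sql.Identifier(name) for name in fieldnames),
        sql.SQL(', ').join(sql.Placeholder(name) for name in fieldnames))

    # Each row is a dict keyed by column name; transform values here if needed.
    for row in reader:
        cursor.execute(insert, row)

connection.commit()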


I am a novice but this worked for me. I used pgAdmin to create the 'testCSV' table.

import psycopg2 as dbapi

con = dbapi.connect(database="testpg", user="postgres", password="secret")

cur = con.cursor()

import csv
csvObject = csv.reader(open(r'C:\testcsv.csv', 'r'), dialect='excel', delimiter=',')

passData = "INSERT INTO testCSV (param1, param2, param3, param4, param5) VALUES (%s,%s,%s,%s,%s);"

# If the file has a header row, skip it first with: next(csvObject)
for row in csvObject:
    cur.execute(passData, row)

con.commit()
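
Executing one INSERT per row is fine for small files, but the same statement and rows can also be handed to cursor.executemany in one call, as in the previous answer:

rows = list(csvObject)  # the remaining rows, after any header is skipped
cur.executemany(passData, rows)
con.commit()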
