I have a MySQL database that is populated via a C# application, mass-uploading records on the scale of 100+ million records. After these records are imported, further analysis and other scoring tools are run on the data. When I choose a smaller subset of the data, the records go in fine; however, when I use the full dataset, this error occurs once record #16777216 is reached. Before I had a unique primary key in place, duplicate ID records were being created here and there, but all the data was getting in. However, with the duplicate records in place, further processing was producing incorrect results.
My question is simple: has anyone seen this problem before, and if so, what is going on? Is this a bug in my version of MySQL? I am running MySQL 5.0.67 on Windows XP.
Thanks so much!!
just curious...
any particular reason you're not using LOAD DATA INFILE to populate your tables?
if you need to process the data in your application before loading, you can still do that, but output to a CSV file instead of calling a sproc 100 million times. LOAD DATA INFILE will be much faster!
see here - http://dev.mysql.com/doc/refman/5.1/en/load-data.html
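a minimal sketch of that workflow (the table name, columns, file name, and sample rows here are all made up for illustration): the application writes its pre-processed rows to a CSV file, and MySQL then ingests the whole file with one statement.

```python
import csv

# Write pre-processed rows to a CSV file (illustrative data only).
rows = [(1, 97.5), (2, 88.0), (3, 91.25)]  # (id, score) pairs
with open("records.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# The single statement MySQL then runs server-side -- far faster than
# issuing 100 million individual INSERTs or sproc calls from C#.
load_sql = """LOAD DATA INFILE 'records.csv'
INTO TABLE records
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\\n'
(id, score)"""
print(load_sql)
```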
do you really need a BIGINT UNSIGNED primary key (8 bytes) vs. an INT UNSIGNED (4 bytes) with a max value of 4294967295 (4 billion)?
see here - http://dev.mysql.com/doc/refman/5.0/en/numeric-types.html
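the ranges follow directly from the storage sizes: an unsigned column of n bytes tops out at 2^(8n) - 1. a quick check (Python used just for the arithmetic):

```python
# Unsigned maximum for each MySQL integer type is 2**(8*bytes) - 1.
sizes = {"TINYINT": 1, "SMALLINT": 2, "MEDIUMINT": 3, "INT": 4, "BIGINT": 8}
for name, nbytes in sizes.items():
    print(f"{name:9s} unsigned max = {2**(8 * nbytes) - 1}")

# INT UNSIGNED already holds over 4 billion ids in half the space of BIGINT.
assert 2**(8 * 4) - 1 == 4294967295
```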
just thoughts...
Are you sure there isn't a MEDIUMINT involved somewhere? Run SHOW CREATE TABLE on the table that's showing this problem.
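That would fit the numbers exactly: an unsigned MEDIUMINT is 3 bytes, so it saturates one below the record number where the import breaks, and in non-strict SQL mode MySQL clamps out-of-range values to that maximum, which would also explain the duplicate IDs seen before the unique key was added. The arithmetic:

```python
# MEDIUMINT is 3 bytes; its unsigned maximum is 2**24 - 1.
mediumint_unsigned_max = 2**24 - 1
print(mediumint_unsigned_max)  # 16777215

# Record #16777216 is the first value that no longer fits, which matches
# exactly where the import starts failing in the question.
assert 16777216 == mediumint_unsigned_max + 1
```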