Today I've encountered one of the strangest things with MySQL I've seen. I have a trivial table:
CREATE TABLE `features`
(
`feature_id` mediumint(6) unsigned NOT NULL AUTO_INCREMENT,
`feature_name` varchar(100) CHARACTER SET latin1 COLLATE latin1_general_cs NOT NULL,
PRIMARY KEY (`feature_id`),
UNIQUE KEY `feature_name_key` (`feature_name`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci;
I am inserting data with Java using the mysql-connector-java-5.1.15 library. The data in feature_name may contain duplicates and I want only unique values. I could use INSERT IGNORE, but then I might silently miss rows that fail because the data is too long, so I use this instead:
pstmt = conn.prepareStatement(
    "INSERT INTO features (feature_name) VALUES (?)" );
for ( String featureName : data4db.keySet() )
{
    pstmt.setString( 1, featureName );
    try
    {
        pstmt.executeUpdate();
    }
    catch ( SQLException se )
    {
        if ( se.getErrorCode() == 1062 ) // duplicate entry
        {
            continue; // ignore duplicates
        }
        throw se; // do not ignore anything else
    }
}
After the data had been inserted into the db I noticed some problems I had not expected. There are roughly 4000 records in the table above, which is fine. The problem is that some data could not be inserted due to a duplicate primary key, so I looked at how the auto-increment values in this table actually look. It turns out that for most of the data the id of adjacent rows increases by 1, as expected. But for reasons I do not know, sometimes feature_id jumped by 3, 5, 1000, even 100000, a seemingly random amount. Hence I have 'run out of space in this table', since nothing could be inserted once the id reached the maximum value for MEDIUMINT.
How can this happen? Has anyone encountered something similar? It is worth noting that only one program, with one thread, was writing to this table. I have one more table that is almost identical, only the column widths and names differ, and it shows the same problem.
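(For reference, the jumps can be spotted with a small loop like the sketch below; `conn` is assumed to be the same open connection as in the insert code above, and the method name is just for illustration.)

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Illustration only: walk feature_id in ascending order and print every
// jump greater than 1 between adjacent rows.
static void printIdGaps( Connection conn ) throws SQLException
{
    Statement st = conn.createStatement();
    try
    {
        ResultSet rs = st.executeQuery(
            "SELECT feature_id FROM features ORDER BY feature_id" );
        long prev = -1;
        while ( rs.next() )
        {
            long id = rs.getLong( 1 );
            if ( prev != -1 && id - prev > 1 )
            {
                System.out.println( "gap after " + prev + ", next id is " + id );
            }
            prev = id;
        }
    }
    finally
    {
        st.close(); // also closes the ResultSet
    }
}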
BTW - some more data:
mysql> show global variables like 'auto_inc%';
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| auto_increment_increment | 1     |
| auto_increment_offset    | 1     |
+--------------------------+-------+
2 rows in set (0.01 sec)
mysql> show global variables like 'ver%';
+-------------------------+------------------------------+
| Variable_name           | Value                        |
+-------------------------+------------------------------+
| version                 | 5.5.10                       |
| version_comment         | MySQL Community Server (GPL) |
| version_compile_machine | x86                          |
| version_compile_os      | Win32                        |
+-------------------------+------------------------------+
Thank you for any hints in advance.
It's normal MySQL behavior. What happened is the following: you inserted data up to auto_increment key 3, then you got a duplicate key error because feature_name_key is defined as unique. The thing is, MySQL will "waste" integer 4 and move on to the next one; it does not reuse auto-increment values whose INSERT failed because of a key constraint.
If you had something like this:
PK | feature_name_key
1 | key1
2 | key2
3 | key3
4 | key1 (fails due to the unique constraint; 4 will not be reused for the next successful insertion, hence the gaps in your primary key)
then you lose some of the integers available for the primary key / auto_increment. Rethink your insertion strategy, or how you construct the table that holds your data.
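If wasting ids is the concern, one option is to not issue the failing INSERTs at all: check whether the name already exists and insert only when it is missing. A minimal sketch of that idea (method and parameter names are made up; since you have a single writer thread the SELECT-then-INSERT pair is not racy here, but with concurrent writers you would still want the duplicate-key handling):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Set;

// Sketch: insert only names that are not present yet, so no INSERT fails
// on the unique key and no auto_increment value is consumed by a failure.
static void insertUniqueFeatures( Connection conn, Set<String> featureNames )
        throws SQLException
{
    PreparedStatement check = conn.prepareStatement(
        "SELECT 1 FROM features WHERE feature_name = ?" );
    PreparedStatement insert = conn.prepareStatement(
        "INSERT INTO features (feature_name) VALUES (?)" );
    try
    {
        for ( String featureName : featureNames )
        {
            check.setString( 1, featureName );
            ResultSet rs = check.executeQuery();
            boolean exists = rs.next();
            rs.close();
            if ( !exists )
            {
                insert.setString( 1, featureName );
                insert.executeUpdate(); // no failed INSERTs, so no wasted ids
            }
        }
    }
    finally
    {
        check.close();
        insert.close();
    }
}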
Between your insertions, have you deleted rows? MySQL might simply be remembering the autoincrement counter.