I am using a MySQL-driven application where all data comes from MySQL and is displayed using PHP.
When designing the DB, what precautions must be taken so that MySQL gives the best performance?
I have a small doubt. I have a text column in my form, and I don't know how much data will be entered. So instead of varchar(255), if I create it as varchar(2000), does it affect performance?
It doesn't. VARCHAR columns define a maximum length; they use as much or as little space (plus a small fixed overhead) as the data they contain.
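If you want to convince yourself, here's a minimal sketch (the table and column names are invented for illustration; on InnoDB the reported sizes are page-based estimates, but both tables should come out essentially the same):

CREATE TABLE t_small (s VARCHAR(255));
CREATE TABLE t_large (s VARCHAR(2000));

INSERT INTO t_small VALUES ('hello'), ('world');
INSERT INTO t_large VALUES ('hello'), ('world');

-- Compare Data_length / Avg_row_length: declared maximum length
-- makes virtually no difference, only the stored data counts
SHOW TABLE STATUS LIKE 't_small';
SHOW TABLE STATUS LIKE 't_large';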
The only thing you may need to consider is the total row length and this varies depending on what storage engine you use. See 13.2.14. Restrictions on InnoDB Tables:
The maximum row length, except for variable-length columns (VARBINARY, VARCHAR, BLOB and TEXT), is slightly less than half of a database page. That is, the maximum row length is about 8000 bytes. LONGBLOB and LONGTEXT columns must be less than 4GB, and the total row length, including BLOB and TEXT columns, must be less than 4GB.
If a row is less than half a page long, all of it is stored locally within the page. If it exceeds half a page, variable-length columns are chosen for external off-page storage until the row fits within half a page, as described in Section 13.2.11.2, “File Space Management”.
and 10.5. Data Type Storage Requirements:
The maximum size of a row in a MyISAM table is 65,535 bytes. This figure excludes BLOB or TEXT columns, which contribute only 9 to 12 bytes toward this size.
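To see that row-size limit in practice, here is a hedged example (it assumes the utf8 character set at 3 bytes per character; the table names and engine choice are illustrative):

-- Fails with "Row size too large": worst case is 2 x 20,000 x 3 bytes,
-- well over the 65,535-byte limit
create table too_wide (
  a VARCHAR(20000) CHARACTER SET utf8,
  b VARCHAR(20000) CHARACTER SET utf8
) ENGINE=MyISAM;

-- Works: TEXT columns count only 9 to 12 bytes toward the row limit,
-- since their data is stored apart from the rest of the row
create table fits_fine (
  a TEXT CHARACTER SET utf8,
  b TEXT CHARACTER SET utf8
) ENGINE=MyISAM;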
VARCHAR takes the number of bytes of the data plus a length prefix, regardless of the length you define.
E.g.
'xyz' in a VARCHAR(255) column takes 4 bytes: 3 bytes for the data plus a 1-byte length prefix. A column uses a 1-byte prefix if its values can never require more than 255 bytes, and a 2-byte prefix if they may require more. So the same 'xyz' stored in a VARCHAR(2000) column would take 5 bytes.
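For instance (column names invented for illustration):

create table prefix_demo (
  short_col VARCHAR(255),  -- 'xyz' here costs 3 + 1 = 4 bytes
  long_col  VARCHAR(2000)  -- 'xyz' here costs 3 + 2 = 5 bytes (2-byte prefix)
);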
So go on, buddy; the effect on performance is negligible. That's the beauty of VARCHAR.
So instead of varchar(255) if i create it as varchar(2000) does it effects the performance ?
It rather depends. While the system can accommodate this, consider the following table:
create table example (
  id int NOT NULL AUTO_INCREMENT,
  smallstr VARCHAR(20),
  anumber mediumint(6) DEFAULT 0,
  bigstr VARCHAR(2000),
  PRIMARY KEY (id),
  KEY lookup (anumber, smallstr)
);
As others have said, the size of each row will vary; let's say there's an average of 1 KB per row, and a million rows. If your queries mostly do NOT look at that big varchar field, the structure is very inefficient: the field still has to be read off the disk each time, even though it never reaches the result set, so you need to read at least a gigabyte of data. But if you split this:
create table example2 (
  id int NOT NULL AUTO_INCREMENT,
  smallstr VARCHAR(20),
  anumber mediumint(6) DEFAULT 0,
  PRIMARY KEY (id),
  KEY lookup (anumber, smallstr)
);
create table example2_sub (
  ex2_id int NOT NULL,
  bigstr VARCHAR(2000),
  PRIMARY KEY (ex2_id)
);
The average row size would drop to around 25 bytes - i.e. 1/40th of that using a consolidated table. OTOH, when you do need to perform a bulk query on example2 and example2_sub it will be slower than the consolidated table (I would expect that it would slow down by something of the order of 30% - but I've not tested it).
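For example, a lookup that never touches the big column now scans only the narrow table, and the join is there when you do need it (the query shapes and the value 42 are illustrative):

-- Fast path: reads only the ~25-byte rows of example2
SELECT id, smallstr
FROM example2
WHERE anumber = 42;

-- When bigstr is actually wanted, join the sub-table in
SELECT e.id, e.smallstr, s.bigstr
FROM example2 e
JOIN example2_sub s ON s.ex2_id = e.id
WHERE e.anumber = 42;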
(note that you should restrict the size of HTML input fields to the corresponding database attributes wherever possible).
HTH
C.