Partition and subpartition a MySQL table by primary key and 2 indexes to increase performance

I have a very large table and I perform very simple queries on it, mostly id lookups. The queries have been getting slower as the number of rows grows. I run around 300 queries per second, which makes my script slow and pushes memory usage to 99% (the available memory is lower than the size of the DB).

I want to partition and subpartition the table for maximum performance. Here is my table (one of about 30 tables like it); please include code, because I am very new to partitioning and don't know much about it. The SELECT queries are only WHERE-clause id lookups, plus simple INSERTs. I could upgrade the RAM to be larger than the DB, but I'd like to avoid that, and I don't know whether it would solve the problem better than partitioning.

CREATE TABLE `books` (
    `id` INT(10) NOT NULL AUTO_INCREMENT,
    `name` VARCHAR(200) NULL DEFAULT NULL COLLATE 'utf8_unicode_ci',
    `picture` VARCHAR(500) NULL DEFAULT NULL COLLATE 'utf8_unicode_ci',
    `url` VARCHAR(500) NULL DEFAULT NULL COLLATE 'utf8_unicode_ci',
    `created` TIMESTAMP NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (`id`),
    INDEX `url` (`url`(333)),
    INDEX `name` (`name`)
)
COLLATE='utf8_unicode_ci'
ENGINE=MyISAM
ROW_FORMAT=DEFAULT
AUTO_INCREMENT=937

Here are some examples of my queries:

SELECT id FROM books WHERE url = 'blabla';
INSERT INTO user_books SET book_id = '3', user_id = '10';

Each query takes around 0.05-0.2 seconds.

I have around 5-10 million rows in each table.

The DB size is 10 GB; I thought about upgrading the RAM to 16 GB.


If you just want to try adding partitions, I'd recommend using PARTITION BY KEY. This essentially partitions the table using an internal hash function applied to the column(s) specified in the partitioning clause.

In your case, the queries seem to use the id the most, and since this is the primary key, the syntax is very simple:

CREATE TABLE `books` (
    `id` INT(10) NOT NULL AUTO_INCREMENT,
    `name` VARCHAR(200) NULL DEFAULT NULL COLLATE 'utf8_unicode_ci',
    `picture` VARCHAR(500) NULL DEFAULT NULL COLLATE 'utf8_unicode_ci',
    `url` VARCHAR(500) NULL DEFAULT NULL COLLATE 'utf8_unicode_ci',
    `created` TIMESTAMP NULL DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (`id`),
    INDEX `url` (`url`(333)),
    INDEX `name` (`name`)
)
COLLATE='utf8_unicode_ci'
ENGINE=MyISAM
ROW_FORMAT=DEFAULT
AUTO_INCREMENT=937
PARTITION BY KEY()
PARTITIONS 8;

The above will create 8 partitions. Since I've not specified any column to use with the partitioning clause, MySQL defaults to using the primary key, which would be fine in your case.
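Since your table already exists, you don't have to drop and recreate it; the same partitioning can be added in place with ALTER TABLE. A minimal sketch (note that this rebuilds the table, so on 5-10 million rows it can take a while and the table is unavailable for writes during the rebuild):

ALTER TABLE `books`
PARTITION BY KEY()   -- empty column list: MySQL uses the primary key (`id`)
PARTITIONS 8;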

Assuming a reasonably balanced spread of the integer values used in the id column, each partition would be approx 1.25 GB. Firing a query for a specific id should now be selecting data only from one partition, which would make for faster access.
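If you want to verify that this is actually happening, MySQL 5.5 lets you see which partitions a query touches and how the rows are spread across partitions. A small sketch (the id value is just an example):

-- the "partitions" column should list a single partition, e.g. p3
EXPLAIN PARTITIONS
SELECT id FROM books WHERE id = 3;

-- approximate row count per partition, to check the spread is balanced
SELECT PARTITION_NAME, TABLE_ROWS
FROM INFORMATION_SCHEMA.PARTITIONS
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'books';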

You can refer to MySQL documentation here: http://dev.mysql.com/doc/refman/5.5/en/partitioning-key.html

Needless to say, id is the primary key, so these lookups should already have been quite performant. I'm not sure what additional caveats may apply to indexes + partitions with MyISAM; I work mostly with InnoDB, and this is what I'd do, with InnoDB at least, to improve query performance.
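If you do end up trying InnoDB, the partitioning clause stays the same; only the engine changes. A minimal sketch of the conversion (assuming you have the disk space for the rebuild, and ideally a buffer pool sized to hold your hot data):

-- converts the table, including all its partitions, to InnoDB
ALTER TABLE `books` ENGINE=InnoDB;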
