
MySQL cluster error 1114 (HY000): The table 'users' is full

https://www.devze.com 2023-01-25 05:38 Source: web
I have a MySQL Cluster setup with two data nodes and one management node. We are now getting errors on our data nodes when doing inserts:
ERROR 1114 (HY000): The table 'users' is full

Any help is appreciated. Is this a config issue, and if so, on which node? Each node is an Ubuntu 9 server.


The answer is here: http://dev.mysql.com/doc/refman/5.0/en/faqs-mysql-cluster.html#qandaitem-B-10-1-13

In short: the NDB engine holds all data in RAM. If your nodes have 1 GB of RAM and you are trying to load a 4 GB database, you are (nearly) out of luck.
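In-memory capacity is governed by the DataMemory and IndexMemory parameters in the management node's config.ini; a sketch (the values here are illustrative assumptions, not recommendations, and must be sized for your hardware):

```ini
[ndbd default]
# RAM on each data node reserved for row data
DataMemory=2048M
# RAM reserved for hash indexes (primary and unique keys)
IndexMemory=256M
```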

There is a way to configure NDB to store data on disk so that RAM is used only for indexes (those still have to live in RAM); here is how: http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-disk-data-objects.html
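As a sketch of the Disk Data approach from that page: you create a logfile group and a tablespace, then declare the table with STORAGE DISK. All names, paths, and sizes below are example assumptions:

```sql
-- Logfile group for the undo log (sizes are illustrative)
CREATE LOGFILE GROUP lg_1
    ADD UNDOFILE 'undo_1.log'
    INITIAL_SIZE 128M
    ENGINE NDBCLUSTER;

-- Tablespace backed by a data file on disk
CREATE TABLESPACE ts_1
    ADD DATAFILE 'data_1.dat'
    USE LOGFILE GROUP lg_1
    INITIAL_SIZE 1G
    ENGINE NDBCLUSTER;

-- Non-indexed columns of this table are stored on disk;
-- indexed columns remain in memory
CREATE TABLE users_disk (
    id INT NOT NULL PRIMARY KEY,
    name VARCHAR(255)
)
TABLESPACE ts_1 STORAGE DISK
ENGINE NDBCLUSTER;
```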

If you do that, however, performance will be far lower.


Just in case someone passes by this issue: the root cause is the value of MaxNoOfConcurrentOperations, which should be raised in proportion to the number of rows the table is expected to hold.

The following are possible solutions:

  1. Change the table's initial CREATE TABLE statement to include a MAX_ROWS value of double the number of rows the table is expected to hold.

  2. Increase MaxNoOfConcurrentOperations and MaxNoOfConcurrentTransactions to values higher than the row count of the largest table in the cluster. This is probably because MySQL Cluster operates on all rows of a large table in parallel when applying an operation to it.
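The two suggestions above can be sketched as follows; the table definition, row counts, and parameter values are example assumptions to be sized for your own data:

```sql
-- 1. Hint the expected size at creation time
--    (here: a table expected to hold roughly 5M rows, so MAX_ROWS is doubled)
CREATE TABLE users (
    id INT NOT NULL PRIMARY KEY,
    name VARCHAR(255)
)
ENGINE NDBCLUSTER
MAX_ROWS = 10000000;
```

```ini
# 2. In config.ini on the management node; a rolling restart of the
#    data nodes is needed for the change to take effect
[ndbd default]
MaxNoOfConcurrentOperations=1000000
MaxNoOfConcurrentTransactions=16384
```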


Check the innodb_data_file_path setting: this error suggests you have exceeded the available space defined by that option. Check this link for redefining InnoDB tablespace size.
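For the InnoDB case, the shared tablespace is sized in my.cnf; a minimal sketch using an autoextending data file (file name and sizes are assumptions):

```ini
[mysqld]
# Let the shared tablespace grow as needed instead of capping it
# at a fixed size; restart mysqld after changing this
innodb_data_file_path=ibdata1:1G:autoextend
```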

Alternatively, you could just have run out of disk space on the partition that data is stored on.


I faced the same error when I was loading only the DB tables' structure, which means DataMemory and IndexMemory were not the problem here. The number of tables also didn't reach the MaxNoOfTables limit, so that wasn't the issue either. The solution for me was to increase the values of MaxNoOfOrderedIndexes and MaxNoOfUniqueHashIndexes, which set the maximum number of indexes you can have in the cluster. So if there are many indexes in your DB, try increasing those variables accordingly. Of course, a rolling restart must be done after that change for it to take effect!
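A config.ini sketch for this case; the values are illustrative assumptions (both defaults are much lower) and should be scaled to the number of indexes in your schema:

```ini
[ndbd default]
# Raise these if "table is full" appears while loading only the
# schema, before any data is inserted
MaxNoOfOrderedIndexes=512
MaxNoOfUniqueHashIndexes=256
```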

I hope this might help someone!

