Is there a way to speed up this query with no WHERE clause?

Source: https://www.devze.com — 2023-01-07 13:50
I have about 1 million rows so its going pretty slow. Here's the query:

$sql = "SELECT `plays`,`year`,`month` FROM `game`";

I've read up on indexes, but they only seem to make sense to me when there's a WHERE clause.

Any ideas?


Indexes can make a difference even without a WHERE clause, depending on what other columns your table has. If the three columns you're selecting make up only a small proportion of each row, a covering index on them lets the engine scan the index instead of the full table, reducing the number of pages that have to be read.
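As a sketch (the index name is made up, and this assumes the table is exactly as shown in the question), a covering index on the three selected columns might look like:

```sql
-- Hypothetical covering index over the three selected columns.
-- The query can then be answered from the index alone -- EXPLAIN
-- should show "Using index" -- without reading the full table rows.
ALTER TABLE `game`
  ADD INDEX `idx_plays_year_month` (`plays`, `year`, `month`);

EXPLAIN SELECT `plays`, `year`, `month` FROM `game`;
```

Whether this actually helps depends on how wide the rest of the row is; if these three columns are most of the row, the index scan won't be much cheaper than the table scan.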

That said, moving less data around in the first place, either by adding a WHERE clause or by doing the processing inside the database, would be better if it's possible.


If you don't need all 1 million records, you can pull n records:

$sql = "SELECT `plays`,`year`,`month` FROM `game` LIMIT 0, 1000";

Where the first number is the offset (where to start from) and the second is the number of rows to return. You'll probably want an ORDER BY as well if you're only pulling a subset, so the rows come back in a predictable order.
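Putting those pieces together, a paging sketch might look like this (it assumes the table has a primary key `id`, which the question doesn't show):

```sql
-- Page through the table 1000 rows at a time in a stable order.
-- Assumes a primary key `id` exists (hypothetical -- not in the question).
SELECT `plays`, `year`, `month`
FROM `game`
ORDER BY `id`
LIMIT 1000 OFFSET 0;   -- next page: OFFSET 1000, then 2000, and so on
```

Note that large OFFSET values get progressively slower, since MySQL still has to walk past the skipped rows; paging on `WHERE id > last_seen_id` scales better for deep pages.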


You won't be able to make that query much faster, short of fetching the data from a memory cache instead of the database. Fetching a million rows takes time. If you need more speed, figure out whether the database can do some of the work for you, e.g. summing or grouping rows before they're sent to the application.
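For example, if the application is ultimately just totalling plays per month (an assumption; the question doesn't say what the rows are used for), pushing the aggregation into the database might look like:

```sql
-- Instead of shipping ~1M rows and summing in PHP, let MySQL aggregate.
-- Returns at most one row per (year, month) pair.
SELECT `year`, `month`, SUM(`plays`) AS total_plays
FROM `game`
GROUP BY `year`, `month`
ORDER BY `year`, `month`;
```

The result set shrinks from a million rows to a few hundred, which is usually a far bigger win than anything you can do to the full scan itself.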

If you're not using all the rows, you should use the LIMIT clause in your SQL to fetch only a certain range of those million rows.


If you really need all the 1 million rows to build your output, there's not much you can do from the database side.

However you may want to cache the result on the application side, so that the next time you'd want to serve the same output, you can return the processed output from your cache.


The realistic answer is no. With no restrictions on your query (i.e. a WHERE clause or a LIMIT), you're almost guaranteed a full table scan every time.

The only way to decrease the scan time is to have less data (or perhaps a faster disk). It's possible that you could rework your schema to make your rows more compact (CHAR instead of VARCHAR in some cases, TINYINT instead of INT, etc.), but you're really not going to see much of a speed difference from that kind of micro-optimization. Indexes are where the real gains are.

Generally if you're stuck with a case like this where you can't use indexes, but you have large tables, then it's the business logic that requires some re-working. Do you always need to select every record? Can you do some application-side caching? Can you fragment the data into smaller sets or tables, perhaps organized by day or month? Etc.
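One way to "fragment the data by month or year", sketched here as MySQL range partitioning (the partition names and boundaries are made up, and this assumes `year` is an integer column that is part of any unique keys, which MySQL partitioning requires):

```sql
-- Hypothetical sketch: range-partition the table by year so that
-- queries restricted to recent years only scan the relevant partitions.
ALTER TABLE `game`
PARTITION BY RANGE (`year`) (
    PARTITION p_old VALUES LESS THAN (2009),
    PARTITION p2009 VALUES LESS THAN (2010),
    PARTITION p2010 VALUES LESS THAN (2011),
    PARTITION p_max VALUES LESS THAN MAXVALUE
);
```

Partitioning only pays off once your queries carry a predicate on the partitioning column; the bare SELECT from the question would still scan every partition.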
