I've been wrestling with this problem for quite a while now, and the automatic emails with 'Slow Query' warnings are still coming in. Basically, I have blogs with a corresponding table, as well as a table that keeps track of how many times each blog has been viewed. This last table has a huge number of records, since the page is relatively high-traffic and every hit is logged as an individual row. I have tried indexes on the fields included in the WHERE clause, but it doesn't seem to help. I have also tried cleaning the table each week by removing old (> 1 week) records. So I'm asking you: how would you solve this?
The query that I know is causing the slowness is generated by Rails and looks like this:
SELECT count(*) AS count_all
FROM blog_views
WHERE (created_at >= '2010-01-01 00:00:01' AND blog_id = 1);
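A useful first diagnostic step (my suggestion, not part of the original question) is to run EXPLAIN on the slow query to see which index MySQL actually chooses:

```sql
-- Ask MySQL how it plans to execute the slow COUNT query.
-- With only single-column indexes on blog_id and created_at,
-- the 'key' column will typically show just one of them being
-- used, and 'rows' will show a large number of rows to examine.
EXPLAIN
SELECT COUNT(*) AS count_all
FROM blog_views
WHERE created_at >= '2010-01-01 00:00:01'
  AND blog_id = 1;
```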
The tables have the following structures:
CREATE TABLE IF NOT EXISTS `blogs` (
  `id` int(11) NOT NULL auto_increment,
  `name` varchar(255) default NULL,
  `perma_name` varchar(255) default NULL,
  `author_id` int(11) default NULL,
  `created_at` datetime default NULL,
  `updated_at` datetime default NULL,
  `blog_picture_id` int(11) default NULL,
  `blog_picture2_id` int(11) default NULL,
  `page_id` int(11) default NULL,
  `blog_picture3_id` int(11) default NULL,
  `active` tinyint(1) default '1',
  PRIMARY KEY (`id`),
  KEY `index_blogs_on_author_id` (`author_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
And
CREATE TABLE IF NOT EXISTS `blog_views` (
  `id` int(11) NOT NULL auto_increment,
  `blog_id` int(11) default NULL,
  `ip` varchar(255) default NULL,
  `created_at` datetime default NULL,
  `updated_at` datetime default NULL,
  PRIMARY KEY (`id`),
  KEY `index_blog_views_on_blog_id` (`blog_id`),
  KEY `created_at` (`created_at`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
Most likely, you'll need one composite index covering both columns, blog_id and created_at,
not two indexes with one column each:
...
KEY `whizzy` (`blog_id`, `created_at`),
...
In the CREATE TABLE syntax, composite indexes/keys take this form:
...
{INDEX|KEY} [index_name] [index_type] (index_col_name,...)
...
You can also try reversing the column order if this does not help.
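A minimal sketch of adding such a composite index to the existing table (the index name is arbitrary):

```sql
-- Add a composite index: the equality column (blog_id) first,
-- then the range column (created_at), so MySQL can use both
-- columns of the index for the WHERE clause in the slow query.
ALTER TABLE blog_views
  ADD INDEX index_blog_views_on_blog_id_and_created_at (blog_id, created_at);
```

After adding it, the single-column indexes on blog_id and created_at may become redundant for this query and could be considered for removal.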
I wouldn't log every hit as a unique row....
If all you want is to count how many times each blog has been viewed, why don't you create one row per blog and add a view_count field to your table? Then only that field gets updated...
If you need to keep track of users/IPs, and you want to know how many times each user clicked on each blog, then create an extra table with a structure similar to this:
id (pk)
blog_id
user_id / ip_address
dates
hit_counts
With such a structure you can see not only how many times a blog has been viewed, but also by how many users and how many times each user clicked...
Otherwise, if you really want to create a row for every hit, and indexes did not speed things up enough, you might consider a more powerful server or a different database management system...
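A sketch of such an aggregate table and the upsert that maintains it (table and column names here are illustrative, adapted from the field list above):

```sql
-- One row per (blog, visitor, day) instead of one row per hit.
CREATE TABLE blog_view_counts (
  id INT NOT NULL AUTO_INCREMENT,
  blog_id INT NOT NULL,
  ip VARCHAR(45) NOT NULL,
  view_date DATE NOT NULL,
  hit_count INT NOT NULL DEFAULT 0,
  PRIMARY KEY (id),
  UNIQUE KEY uniq_blog_ip_date (blog_id, ip, view_date)
) ENGINE=InnoDB;

-- On each hit, insert the row or bump its counter.
INSERT INTO blog_view_counts (blog_id, ip, view_date, hit_count)
VALUES (1, '203.0.113.7', CURDATE(), 1)
ON DUPLICATE KEY UPDATE hit_count = hit_count + 1;
```

The unique key is what makes ON DUPLICATE KEY UPDATE work here: a second hit from the same IP on the same day updates the existing row instead of inserting a new one.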
Hope this helps!
My idea is the same as piero's, with one difference.
You can add a column named "hits" to the blogs table and increase its value every time a hit occurs, while still inserting a new record into blog_views as you do now.
That way you don't need to run a COUNT query for the total, and you can still look at the hit details when needed.
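A sketch of that approach (the "hits" column name is from the answer above; wrapping the two statements in a transaction is my addition, to keep the counter and the detail log consistent):

```sql
-- Denormalized counter on the blogs table.
ALTER TABLE blogs ADD COLUMN hits INT NOT NULL DEFAULT 0;

-- On each page view: keep the detail row and bump the counter
-- together, so the total always matches the logged hits.
START TRANSACTION;
INSERT INTO blog_views (blog_id, ip, created_at, updated_at)
VALUES (1, '203.0.113.7', NOW(), NOW());
UPDATE blogs SET hits = hits + 1 WHERE id = 1;
COMMIT;
```

Reading the total then becomes a single-row lookup (`SELECT hits FROM blogs WHERE id = 1`) instead of a COUNT over a large table.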