How to get load time in milliseconds or microseconds in MySQL

I've searched and searched, but I wasn't able to find an easy way to get this:

Query OK, 50000 rows affected (0.35 sec) 

in milliseconds or microseconds.

How can I achieve it?


I ran into the same problem. I ran my queries from a Linux console using time:

$ time mysql --user="user" -D "DataBase" -e "SELECT SQL_NO_CACHE COUNT(1) FROM table"

+----------+
| count(1) |
+----------+
|      750 |
+----------+

real    0m0.269s
user    0m0.014s
sys     0m0.015s

or

$ /usr/bin/time -f "%e" mysql --user="user" -D "DataBase" -e "SELECT SQL_NO_CACHE COUNT(1) FROM table"

+----------+
| count(1) |
+----------+
|      750 |
+----------+
0.24

It gives slightly different values from the ones mysql itself reports, but at least it is something you can work with, for example with this script:

#!/bin/bash
temp=1
while [ $temp -le 1000 ]
do
    # -f "%e" prints only the elapsed (real) time; -o/-a append it to the output file
    /usr/bin/time -f "%e" -o "/home/admin/benchmark.txt" -a mysql --user="user" -D "DataBase" -e "SELECT SQL_NO_CACHE COUNT(1) FROM table" > /dev/null 2> /dev/null
    let temp=$temp+1
done

This executes the query 1000 times; -f prints only the real (elapsed) time, -o sets the output file, -a appends to it instead of overwriting, and > /dev/null 2> /dev/null discards the query output so it isn't printed to the console on each run.
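
If you then want a single number out of those 1000 runs, one simple way (just a sketch, assuming benchmark.txt ends up with one elapsed time per line, as written by -o/-a above) is to average the file with awk:

$ awk '{ sum += $1; n++ } END { if (n > 0) printf "runs: %d  avg: %.3f s\n", n, sum / n }' /home/admin/benchmark.txt

That prints the number of runs and the mean elapsed time in seconds; multiply by 1000 if you prefer milliseconds.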


That time is calculated by the mysql monitor (the client application), not by the MySQL server. It's not something you can retrieve programmatically by doing, say, select last_query_execution_time() (which would be nice).

You can simulate it in a coarse way by doing the timing in your application: take the system time before and after calling the query function. Hopefully the client-side overhead will be minimal compared to the MySQL portion.


You could time it yourself in the code that runs the query:

Pseudo code:

double StartTime = <now>
Execute SQL Query 
double QueryTime = <now> - StartTime
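
As a concrete illustration of that pseudo code (just a sketch on the shell side, reusing the example user, database and query from the answers above, and relying on GNU date's %N nanosecond format, which is available on Linux):

start=$(date +%s%N)     # nanoseconds since the epoch (GNU date)
mysql --user="user" -D "DataBase" -e "SELECT SQL_NO_CACHE COUNT(1) FROM table" > /dev/null
end=$(date +%s%N)
echo "query took $(( (end - start) / 1000000 )) ms"

Like the time-based approaches above, this measures the whole client invocation (connection setup included), so it is only a coarse approximation of the server-side execution time.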
