I am using the pg_buffercache module to find out which relations are hogging my RAM cache. For example, when I run this query:
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b INNER JOIN pg_class c
ON b.relfilenode = c.relfilenode AND
b.reldatabase IN (0, (SELECT oid FROM pg_database WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY 2 DESC
LIMIT 10;
I discover that sample_table is using 120 buffers. How much is 120 buffers in bytes?
PostgreSQL has a compile-time block size of 8192 bytes by default -- see the pre-defined block_size parameter. This used to be a number worth keeping in mind whenever you edited the config to set shared_buffers, etc., but the config now supports suffixes like MB, which do the conversion for you.
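So 120 buffers is 120 × 8192 = 983,040 bytes, i.e. about 960 kB. A minimal sketch of the conversion (plain Python, outside the database):

```python
BLOCK_SIZE = 8192  # PostgreSQL's default block size in bytes


def buffers_to_bytes(buffers: int) -> int:
    """Convert a pg_buffercache buffer count into bytes."""
    return buffers * BLOCK_SIZE


print(buffers_to_bytes(120))         # 983040 bytes
print(buffers_to_bytes(120) / 1024)  # 960.0 kB
```

Inside PostgreSQL itself, `SELECT pg_size_pretty(120 * current_setting('block_size')::bigint);` reports the same figure in human-readable form.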
It is possible, with some effort, to build PostgreSQL with a different block size. A minority of applications might benefit from a more optimal size, but the code makes assumptions about the default in a large number of places.
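For reference, the block size is chosen at build time; PostgreSQL's configure script takes it in kilobytes. A sketch only -- changing it means recompiling and re-running initdb for every cluster, so it is not something to do casually:

```shell
# Build PostgreSQL with 16 kB blocks instead of the default 8 kB
./configure --with-blocksize=16
make && make install
# Existing clusters cannot be reused: run initdb again with the new binaries
```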
Following up on what Edmund said, we can run this query in our own database:
SELECT c.relname,
       pg_size_pretty(count(*) * 8192) AS buffered,
       round(100.0 * count(*) /
             (SELECT setting
              FROM pg_settings
              WHERE name = 'shared_buffers')::integer, 1) AS buffers_percent,
       round(100.0 * count(*) * 8192 /
             pg_relation_size(c.oid), 1) AS percent_of_relation
FROM pg_class c
INNER JOIN pg_buffercache b
        ON b.relfilenode = c.relfilenode
INNER JOIN pg_database d
        ON b.reldatabase = d.oid
       AND d.datname = current_database()
WHERE pg_relation_size(c.oid) > 0
GROUP BY c.oid, c.relname
ORDER BY 3 DESC
LIMIT 10;
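The derived columns are plain arithmetic over the buffer counts. A small sketch of the same math, using hypothetical numbers -- 120 cached buffers, the default shared_buffers of 16384 buffers (128 MB), and a 2 MB table:

```python
BLOCK_SIZE = 8192  # PostgreSQL's default block size in bytes


def cache_stats(buffers: int, shared_buffers: int, relation_bytes: int):
    """Mirror the query's computed columns: bytes cached, percent of
    the buffer cache used, and percent of the relation that is cached."""
    buffered_bytes = buffers * BLOCK_SIZE
    buffers_percent = round(100.0 * buffers / shared_buffers, 1)
    percent_of_relation = round(100.0 * buffered_bytes / relation_bytes, 1)
    return buffered_bytes, buffers_percent, percent_of_relation


print(cache_stats(120, 16384, 2 * 1024 * 1024))
# → (983040, 0.7, 46.9): ~960 kB cached, 0.7% of shared_buffers,
#   46.9% of the table
```

Note that the shared_buffers setting in pg_settings is itself expressed in blocks, which is why the query can divide the buffer count by it directly.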