Archive for category Tuning

PGREPLAY (replicating production load in a test environment)

Ever wondered why all the testing passes OK, but when it comes to real production load things start to look bad on the database? If only we knew how to replicate the database load in real time in a testing environment, we would have some data to analyze. I was given exactly that task, and pgreplay came to the rescue.
I find this tool very helpful when:

–> I want to test a new version of the PostgreSQL database.
–> I need some statistics on how the database would behave on new hardware.
–> I make a configuration change to the database and want to see how it behaves under a real-time load scenario.

All of the above scenarios are worth spending time on, and they give you a clear idea of how your database will behave once it is live in production.

About the module:

pgreplay reads the SQL statements in a PostgreSQL log file, extracts them, and executes them in the same order and with the same relative timing against a PostgreSQL database cluster.

Installation:

Assuming you have a test box with the same data as production and PostgreSQL as your database, download and install pgreplay as shown below (I would install this on a test box, not on production):

su - postgres
wget http://pgfoundry.org/frs/download.php/3345/pgreplay-1.2.0.tar.gz
tar -xvf pgreplay-1.2.0.tar.gz
cd pgreplay-1.2.0
./configure --with-postgres=/opt/PostgreSQL/9.2.4/bin
make
sudo make install

Configuration:

pgreplay needs to read a Postgres log file, and that log has to be in a specific format. The change requires a restart of your Postgres database. Make a copy of the postgresql.conf file before you go ahead with this change.

log_destination = 'stderr'
logging_collector = on
log_directory = '/somemountpoint/pg_log' (this is the location where your Postgres log files will be generated; make sure the postgres user has write privileges)
log_min_duration_statement = 100 (the value is in milliseconds; set it to 0 if you want to log all statements)
log_min_messages = error  (or more)
   (if you know that you have no cancel requests, 'log' will do)
log_min_error_statement = log  (or more)
log_connections = on
log_disconnections = on
log_line_prefix = '%m|%u|%d|%c|'  (if you don't use CSV logging)
log_statement = 'all'
lc_messages must be set to English (the encoding does not matter)
bytea_output = escape  (from version 9.0 on, only if you want to replay
                        the log on 8.4 or earlier)

That's it! Restart Postgres and logging will start in a format that pgreplay can parse.

Usage:

There are two ways to replay the Postgres logs against another Postgres database:

1. pgreplay [<parse options>] [<replay options>] [<infile>] 

This parses a PostgreSQL log file and replays the statements against a database.

2. pgreplay -f [<parse options>] [-o <outfile>] [<infile>]

This parses a PostgreSQL log file and writes the contents to a “replay file” that can later be replayed with -r.

I like the second option better because I can copy the parsed file to any database server and do whatever testing is necessary.
Now that you have a couple of Postgres log files in the log location you specified in postgresql.conf, all you have to do is copy the logs over to a test box that has pgreplay installed and run the procedure below:

First, I create a replay file by parsing the Postgres log:

pgreplay -f -o test.replay /somemountpoint/pg_log/postgresql.log
Note that you will find the test.replay file in the current working directory.

Now just replay the test.replay file and watch the test database replay all the statements captured from production:

pgreplay -r -j test.replay

There are a lot more options that you can try out with pgreplay. I will not explain them all, but feel free to ask any questions about the module. More information on pgreplay is available here.
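For example, you can point the replay at a different server and speed it up or slow it down. A minimal sketch, using the replay options as I remember them from the pgreplay documentation (-h and -p pick the target host and port, -s is a speed factor, so -s 2 replays twice as fast); the hostname is a placeholder, and you should double-check the flags against your pgreplay version:

pgreplay -r -h testbox.example.com -p 5432 -s 2 test.replay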


pgstattuple

I got a very interesting question today from a developer: what is the size of a tuple in a table? You kind of pause for a while and think, hmm, I thought I knew this one. Well, PostgreSQL has a contrib module that can help you out with this. Let's take a look at pgstattuple; this contrib module will give you information like table length, number of live tuples in a table, total length of live tuples, number of dead tuples, free space in a table, and much more.

All this information is extremely helpful when you want to check the stats on a specific table. Let's go ahead and get this contrib module installed in our current database.

Assuming you have the PostgreSQL source install available:

$ cd postgresql-8.4.9/contrib/pgstattuple
$ make
$ make install

Now we need the pgstattuple functions to be available in the database whose tuples we want to check. For that, run the SQL file:

$ psql dbname -f pgstattuple.sql
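On PostgreSQL 9.1 and later, pgstattuple ships as an extension, so instead of running the SQL file you can register it with a single command:

$ psql dbname -c "CREATE EXTENSION pgstattuple;"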

That's it, we are all set to check the tuples. Connect to your database and call the function:

test=# SELECT * FROM pgstattuple('test.account_services');
-[ RECORD 1 ]------+---------
table_len          | 50683904
tuple_count        | 597431
tuple_len          | 47914138
tuple_percent      | 94.54
dead_tuple_count   | 0
dead_tuple_len     | 0
dead_tuple_percent | 0
free_space         | 207320
free_percent       | 0.41

I got the approximate/average length of a tuple as tuple_len/tuple_count (not the smartest way to get that information, but it worked for me).
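If you would rather let the database do the math, here is a small sketch of the same calculation against the table from the example above:

test=# SELECT round(tuple_len::numeric / NULLIF(tuple_count, 0), 1) AS avg_tuple_len FROM pgstattuple('test.account_services');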

You can also use pgstatindex to get information on the indexes.

hdap=# SELECT * FROM pgstatindex('pg_cast_oid_index');
-[ RECORD 1 ]------+------
version            | 2
tree_level         | 0
index_size         | 8192
root_block_no      | 1
internal_pages     | 0
leaf_pages         | 1
empty_pages        | 0
deleted_pages      | 0
avg_leaf_density   | 37.54
leaf_fragmentation | 0


Maintaining indexes

Over the course of database and application maturity, there are times when we try hard to improve the performance of the database to meet customer expectations in accordance with the agreed standards. These are the times when we think, let's add an index on that old table to make it work faster.

There are a couple of things you should keep in mind when you are looking at adding an index to your table. Ask yourself the question: will this really help performance? Consider a scenario where you have a table that is busy with inserts all day. You have a requirement for a report from that table during the night and add an index to speed up that report. It works perfectly for your little report running during off hours. But what about the inserts happening all day on that heavily used table? You will notice a drop in performance there. The reason is that you added another index, and that index also has to be updated on every insert/update, increasing the time each write takes and reducing performance.

Make sure you run the query below once a month to get a report on the unused indexes in your database. This can be pretty useful when trying to boost your database's performance. You will also find that you free up quite a bit of space that was lying around, unused by the database.

I would recommend running this off of crontab and sending the output via email every month (a sample crontab entry follows the query below).

Unused indexes:

select
    indexrelid::regclass as index, relid::regclass as table
from
    pg_stat_user_indexes
    JOIN pg_index USING (indexrelid)
where
    idx_scan = 0 and indisunique is false;
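Here is a sketch of the kind of crontab entry I mean, assuming you save the query above as /home/postgres/scripts/unused_indexes.sql and have a mail command available (the path, database name and email address are placeholders):

# run the unused-index report at 02:00 on the 1st of every month and mail it out
0 2 1 * * psql -d mydb -f /home/postgres/scripts/unused_indexes.sql | mail -s "unused index report" dba@example.com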
 

NOTE: don’t just blindly drop the indexes listed here; make sure they are useless first. Also, running this right after a fresh restore or right after calling pg_stat_reset() is pointless. If your stats have been reset, let at least one full business cycle run on the database before running these SQL queries.
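Once you have confirmed an index really is dead weight, you can drop it without blocking writers. A minimal sketch with a hypothetical index name (DROP INDEX CONCURRENTLY is available from PostgreSQL 9.2 onwards; on older versions use a plain DROP INDEX during a quiet window):

DROP INDEX CONCURRENTLY some_unused_idx;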

Index maintenance also covers removing duplicates. A lot of times I have seen a unique index on a table column that already has a primary key constraint; primary keys are already unique. Run the SQL below to identify duplicate indexes that can be removed from the database. This will also help improve performance and reclaim lost space.

Duplicate Indexes:

SELECT pg_size_pretty(sum(pg_relation_size(idx))::bigint) AS size,
       (array_agg(idx))[1] AS idx1, (array_agg(idx))[2] AS idx2,
       (array_agg(idx))[3] AS idx3, (array_agg(idx))[4] AS idx4
FROM (
    SELECT indexrelid::regclass AS idx, (indrelid::text ||E'\n'|| indclass::text ||E'\n'|| indkey::text ||E'\n'||
                                         coalesce(indexprs::text,'')||E'\n' || coalesce(indpred::text,'')) AS KEY
    FROM pg_index) sub
GROUP BY KEY HAVING count(*)>1
ORDER BY sum(pg_relation_size(idx)) DESC;

If you have ever thought about finding the foreign key columns that are missing indexes, below is something that might help. Again, check whether you really need to add the index; don't just assume based on the query below:

Missing indexes:

select src_table, dst_table, fk_name, pg_size_pretty(s_size) as s_size, pg_size_pretty(d_size) as d_size, d
    from (
        select
                distinct on (1,2,3,4,5)
                textin(regclassout(c.conrelid))  as src_table,
                textin(regclassout(c.confrelid)) as dst_table,
                c.conname                        as fk_name,
                pg_relation_size(c.conrelid)     as s_size,
                pg_relation_size(c.confrelid)    as d_size,
                array_upper(di.indkey::int[], 1) + 1 - array_upper(c.conkey::int[], 1) as d
        from pg_constraint c
        left join pg_index di on di.indrelid = c.conrelid
                and array_to_string(di.indkey, ' ') ~ ('^' || array_to_string(c.conkey, ' ') || '( |$)')
        join pg_stat_user_tables st on st.relid = c.conrelid
        where c.contype = 'f'
        order by 1,2,3,4,5,6 asc
    ) mfk
    where mfk.d is distinct from 0 and mfk.s_size > 1000000
    order by mfk.s_size desc, mfk.d desc;
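If the query above does surface a foreign key worth indexing, you can build the index without locking out writes. A sketch with hypothetical table and column names:

CREATE INDEX CONCURRENTLY orders_customer_id_idx ON orders (customer_id);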


pgbench

Today I am going to write about a benchmarking tool that has been part of the Postgres contrib modules for a couple of years now, and I have found it very useful for a few reasons:

–> If you have a specific batch of queries that you would like to benchmark, it can be done using pgbench.

–> You are upgrading to a new Postgres version and would like to set up a couple of benchmark jobs to check the performance.

–> You are thinking about changing memory parameters and would like to run some tests before implementing them in production.

–> Testing new hardware.

First, let's talk about the install process:

I am assuming you have Postgres set up from source and that the source tree is available to build pgbench.

Change working directory to contrib/pgbench

postgres@debian:~$make

postgres@debian:~$make install

Once you have pgbench installed, it's time to initialize pgbench on the database you would like to run it against.

postgres@debian:~$pgbench -i test

-i initializes pgbench: it creates and populates the pgbench_* tables (pgbench_accounts, pgbench_branches, pgbench_tellers, pgbench_history) that the benchmarking tests will use.

Next, we tell pgbench the number of clients to connect (say 10) and the number of transactions per client (10) for this test.

postgres@debian:~$pgbench -c 10 -t 10 test

If you get the error

ERROR:  relation "pgbench_branches" does not exist at character 22

you probably need to initialize pgbench (-i) for the database you are running the test on.

By the end of the test you will receive the result set as below:

number of clients: 10
number of transactions per client: 10
number of transactions actually processed: 100/100
tps = 964.822569 (including connections establishing)
tps = 1145.317940 (excluding connections establishing)

A more realistic result comes from increasing the number of transactions and connected clients to match your current production numbers. Run the test a couple of times and take an average to find out what kind of results you can expect.
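A quick way to do that from the shell is to simply repeat the same run a few times and compare the tps numbers:

postgres@debian:~$for i in 1 2 3; do pgbench -c 10 -t 1000 test; done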

Apart from the standard test using pgbench you can also use it to test some of your current queries:

postgres@debian:~$pgbench -f testscript.sh test

The script file can contain any production SQL; pgbench will execute it and report the transaction rate and time taken for those statements.
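As a sketch, a custom script (the file name and its contents here are just an example) could reuse the pgbench_accounts table that pgbench -i created:

-- testscript.sql: the statement pgbench runs for every transaction
SELECT abalance FROM pgbench_accounts WHERE aid = 1;

postgres@debian:~$pgbench -c 10 -t 100 -f testscript.sql test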

There are many more options, such as how long the test should last, whether vacuum should run before the benchmark, the scale factor, select-only tests, and so on. You can use any of those options to get the desired test results.
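For example, a select-only run that lasts five minutes with 20 clients and skips the pre-test vacuum would look roughly like this (as far as I recall, -T is the duration in seconds, -S uses the built-in select-only script and -n skips vacuuming; verify against your pgbench version):

postgres@debian:~$pgbench -c 20 -T 300 -S -n test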

If you have any questions then leave a comment and I will be sure to get back.


Maintaining system tables

This is probably something you learn as you work more with PostgreSQL, rather than from any user guide out there.

The system catalog tables in Postgres hold the metadata and statistics Postgres uses to devise an execution plan. These tables also need maintenance from time to time, as their indexes get bloated too and performance degrades. We often run vacuum and reindex on all our user tables, but it is important to keep the system tables in good shape as well.

Below is a script that can be put in crontab; it is advisable to run it every other month (it takes only a couple of seconds to process, but still run it during off-production hours).

for db in `psql -l -t|awk '{print $1}'|grep -v -e ":" -e "template0"|sed '/^$/d'`; do reindexdb -s -d $db; done
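If you only care about one database, the SQL equivalent is a single command run while connected to that database (it takes exclusive locks on the catalogs, so keep it to off-hours; the database name is a placeholder):

REINDEX SYSTEM your_database;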


Tuning shared_buffers

It is always at the back of a DBA’s mind: "have I tuned Postgres shared_buffers properly?" And we start looking at all the blogs about how to set it, using a hit-and-trial method of increasing and then decreasing it to find the optimal value. While the common advice is to set it to at most about 25% of total memory and do the math around the database size, there is a tool that can tell you whether it is time to increase shared_buffers, or whether you have made it so big that performance is falling off.

The pg_buffercache contrib module will help you make that decision. How? Let's see:

Assuming you have the source install in place, navigate to the contrib directory in the source tree; there you will find a module called pg_buffercache.

All you need to do to install it is:

$ make

$ make install

To register the functions with your database:

$ psql -d <database> -f pg_buffercache.sql
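On PostgreSQL 9.1 and later, pg_buffercache is packaged as an extension, so the registration step is simply:

$ psql -d <database> -c "CREATE EXTENSION pg_buffercache;"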

It's now time to check whether we need to increase shared_buffers. Remember to run the SQL below a couple of times during peak load to get more realistic data.

The query below will show usage count data:

select usagecount, count(*), isdirty
from pg_buffercache
group by isdirty, usagecount
order by isdirty, usagecount;

 usagecount | count  | isdirty
------------+--------+---------
          0 | 167072 | f
          1 | 125923 | f
          2 |  22361 | f
          3 |  14248 | f
          4 |  13284 | f
          5 | 156717 | f
          1 |  11605 | t
          2 |   2528 | t
          3 |   1996 | t
          4 |   1239 | t
          5 |   7315 | t

If a large number of buffers have a high usage count of 4 or 5, your shared buffers are being used well.

This query shows objects (tables and indexes) in cache:
SELECT
 c.relname, count(*) AS buffers,usagecount
FROM pg_class c
 INNER JOIN pg_buffercache b
 ON b.relfilenode = c.relfilenode
 INNER JOIN pg_database d
 ON (b.reldatabase = d.oid AND d.datname = current_database())
GROUP BY c.relname,usagecount
ORDER BY c.relname,usagecount;

This shows how much of each relation is in cache:
SELECT
 c.relname,
 pg_size_pretty(count(*) * 8192) as buffered,
 round(100.0 * count(*) /
 (SELECT setting FROM pg_settings WHERE name='shared_buffers')::integer,1)
 AS buffers_percent,
 round(100.0 * count(*) * 8192 / pg_table_size(c.oid),1)
 AS percent_of_relation
FROM pg_class c
 INNER JOIN pg_buffercache b
 ON b.relfilenode = c.relfilenode
 INNER JOIN pg_database d
 ON (b.reldatabase = d.oid AND d.datname = current_database())
GROUP BY c.oid,c.relname
ORDER BY 3 DESC
LIMIT 20;
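To relate the cache contents back to the shared_buffers setting itself, here is a small query of my own (not part of the contrib module) that shows what fraction of the configured buffers each usage count occupies; shared_buffers in pg_settings is expressed as a number of 8kB buffers, which is why a plain count works:

SELECT usagecount,
       count(*) AS buffers,
       round(100.0 * count(*) /
             (SELECT setting::int FROM pg_settings WHERE name = 'shared_buffers'), 1)
       AS percent_of_shared_buffers
FROM pg_buffercache
GROUP BY usagecount
ORDER BY usagecount;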


pg_reorg

In any 24x7 OLTP database there is a constant need for maintenance to meet performance targets. As the database and its tables grow in size, it gets difficult to maintain the tables while production is running. As DBAs we script vacuum analyze on tables during off-peak hours, and that does update the statistics and helps performance. But how do we get rid of all the bloat caused by updates and rollbacks? And how do we rebuild our indexes in such a scenario, when downtime is not possible at all?

Well, the answer is pg_reorg. pg_reorg reorganizes tables in a Postgres database without holding locks for long, so you can still retrieve or update rows in the tables being reorganized. It places the data of a table in a work table, recreates the indexes on that work table, and then swaps it in, leaving the primary production table available for production use during the process. This is a very useful tool that reorganizes tables quickly and with minimal impact on production. Make sure you have roughly twice the space of the table you are performing a pg_reorg on available.

pg_reorg can be downloaded from here. They maintain RPM and source installers.

Once you have pg_reorg downloaded follow the steps below to install it:

$ cd pg_reorg
$ make
$ su
$ make install

Next, you need to run the following pg_reorg script to register it with your database:

$ psql -f $PGSHARE/contrib/pg_reorg.sql -d your_database

Now you are all set, so let the games begin:

$ pg_reorg --no-order --table test1 -d mydb

This will reorganize the test1 table in the database mydb.
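A couple of other invocations I find handy, going by the pg_reorg documentation as I remember it (double-check the flags with pg_reorg --help; the column name below is hypothetical):

$ pg_reorg -d mydb
(reorganizes every eligible table in mydb, ordered by primary key)

$ pg_reorg --table test1 --order-by some_col -d mydb
(rewrites test1 physically ordered by some_col)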

