How to do stress test on PostgreSQL?

Posted on 2008-10-11
Medium Priority
Last Modified: 2010-10-05
I have a PostgreSQL 8.2 database running on an UltraSPARC T2 processor (Sun T5120). All applications connecting to this DB will use stored procedures. We have SPs for INSERT, DELETE, SELECT, etc.
What would be the best way of:
1) Test all the SPs and their performance
2) Run the most complex ones, or the ones we expect the most concurrency on, from a tool, so we can measure degradation, performance issues, etc., when they run at the same time, multiple times.
I've been looking at stress-test solutions, from hardware to software, but I cannot seem to find one that allows me to run the SPs and take their parameters from text files (for example, for INSERTs).

What would be the best strategy here?

Thanks for the help.

Question by:manuel2002m
LVL 62

Expert Comment

ID: 22698470
So, what are you trying to test?
What is the EXACT database version, and which OS does the database run on (Linux, NetBSD, ...)?
What sort of clients will connect - Windows, Solaris, etc.?

I'd suggest running EXPLAIN on most common statement(s) and index accordingly.

If you delete massively, tuning the free space map settings (max_fsm_pages, max_fsm_relations) helps.
For SELECTs you have to give PostgreSQL enough memory (shared_buffers, work_mem).

PostgreSQL is optimal when you reuse prepared statements.
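A minimal sketch of what statement reuse looks like from psql; the table and column names mirror the asker's later example and are otherwise hypothetical:

```shell
# Write a psql script that prepares a statement once and reuses the plan
# for each execution within the session.
cat > prepared_demo.sql <<'EOF'
PREPARE get_by_name(text) AS SELECT * FROM table1 WHERE name = $1;
EXECUTE get_by_name('alice');
EXECUTE get_by_name('bob');
EOF
# Run it against a live server (commented out here, needs a database):
# psql -f prepared_demo.sql mydb
```

PREPARE plans the query once; each EXECUTE reuses that plan for the lifetime of the session.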
LVL 22

Expert Comment

by:earth man2
ID: 22698834
You can do a lot with the command-line utility psql.

You can schedule scripts to start in the future using the at command:

psql -f testfile.sql

awk can be used to create SQL script files.

The SQL commands NOTIFY and LISTEN can be used to synchronize different threads in your testing scheme, and a function call such as SELECT pg_sleep(1.5); can delay thread execution.

Alternatively, you can write C libpq programs or Java JDBC programs to do the SQL processing.
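For instance, the awk suggestion above can turn a plain list of parameter values into a psql script; lookup_person is a hypothetical stored function standing in for one of the asker's SPs:

```shell
# Build a parameter file, then expand each value into one function call.
printf 'alice\nbob\ncarol\n' > names.txt
awk '{ printf "SELECT lookup_person('\''%s'\'');\n", $1 }' names.txt > test.sql
cat test.sql
# Feed the generated script to the server (needs a live database):
# psql -f test.sql mydb
```

test.sql now contains one `SELECT lookup_person('...');` line per name in the input file.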
LVL 62

Expert Comment

ID: 22699054
One should watch for the most commonly used queries and stress those in proportion, against a closely imitated (but much larger) data set.
Will your DB performance keep up with your DB growth?



Author Comment

ID: 22699587
Thanks guys.
Is there a tool (hardware or software) that we can use to run those stored procedures, taking their parameters from a text file or something like that?
For example, let's say I have a SELECT statement like this:
SELECT * FROM table1 WHERE name=var1
Is there any tool I can give a txt file with a list of names, so that it executes the stored procedure taking names randomly from the file and obeying my settings - for example, 100 simultaneous executions during 3 hours?
Then I can log into the PG machine and record its performance.
A tool I can use to simulate what's going to happen on my DB.
By the way, the apps connecting to that PG are web services running on Sun Java Application Server 9.1 (about 24 instances, each on a separate server), and the HTTPS traffic is balanced by Sun Web Server 7 running on top of Sun Cluster.
Everything runs on Sun Solaris 10, SPARC.
I'm using T2 processors for the DB and T1 for the app servers.
Thanks all for the help.
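One candidate worth trying is pgbench, which ships in PostgreSQL's contrib directory and replays a custom SQL script from many concurrent clients. It cannot read parameters from a text file directly, so one workaround is to load the names into a numbered staging table and pick rows by random id. A sketch, with hypothetical table and function names (param_names, lookup_person); note that flag and meta-command support varies by pgbench version - 8.x-era pgbench uses \setrandom and only counts transactions with -t, while newer releases use \set with random() and also accept -T for a fixed duration:

```shell
# Custom pgbench script: pick a random id, look up its name, and call
# the stored function with it. Load names.txt into param_names(id, name)
# beforehand; both names here are hypothetical.
cat > bench.sql <<'EOF'
\setrandom id 1 1000
SELECT lookup_person((SELECT name FROM param_names WHERE id = :id));
EOF
# 100 concurrent clients, 1000 transactions each (needs a live server):
# pgbench -n -c 100 -t 1000 -f bench.sql mydb
```

While it runs, record CPU, memory, and I/O on the T5120 with the usual Solaris tools (vmstat, iostat, prstat).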

LVL 62

Expert Comment

ID: 22703733
Looks simple in Java - run the database access loop in a thread. You may mix requests in the same thread, or make each thread take a request as a parameter.
The Sun app server does not add anything on top of JDBC, so it is easy to build a benchmark.

CPU power is of small importance unless you do heavy DSS queries (more than a couple of joins).

It actually boils down to some scarce internal resource, like WAL or prepared-query storage.

Normally, set all monitoring up:
when optimizing the SQL is not possible anymore, stop monitoring and ask for better hardware.

Actually there is a problem: PostgreSQL does not keep track of internal memory allocations in a visible way like others do, so it is hard to find internal bottlenecks that would be obvious on commercial databases.
LVL 62

Expert Comment

ID: 22703749
i.e. internal memory monitoring such as vmstat -m provides on common UNIX systems.

Author Comment

ID: 22704251
Thanks gheist. On your first comment today:
"run database access loop in thread. You may mix requests in same thread or make every thread a request as parameter"
What exactly do you mean? How can we do that using the current SPs?
Those SPs take values from arguments that you pass when you call them.
How can I do that in batch (without having to execute them manually)?
I want some way to execute combinations of those SPs at certain times; then I will use the monitoring tools to observe the behavior.

LVL 62

Expert Comment

ID: 22704402
You are asking for the impossible: you talk about stored procedures in a database that does not support stored procedures (PostgreSQL has functions). It would be hard to get reasonable code that uses a nonexistent feature.

Author Comment

ID: 22705842
Let's just use 'function' instead of 'stored procedure', shall we?
The question is still the same.
Maybe I'm explaining this too confusingly. I wonder how people test their DBs - do they just test manually, or do they stress-test the DB with their application layer in the equation?
Thanks, and sorry for my confusing way of asking.
LVL 62

Expert Comment

ID: 22706028
One monitors the production database for long-running queries, deadlocks, etc.

There is no advantage in storing functions in the database - they go through the query optimizer at each preparation anyway; no optimization is done on them when they are stored.

Author Comment

ID: 22706077
Understood. But let's say you have already optimized the queries and the DB has the right parameters. How do you know whether your DB backend behaves properly when hundreds of those queries are executed at the same time, or combinations of them, like 20% this, 80% that?
That's the kind of tool, commands, or strategy I'm talking about.

LVL 62

Accepted Solution

gheist earned 2000 total points
ID: 22706113

So you emit a thread for each connection, be it 100, 1000 or 10000.
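That per-connection-worker idea can be sketched even in shell, without Java: spawn one worker per simulated connection, each looping until a deadline and picking a script with a weighted mix. A sketch with hypothetical file names; RUN_SQL is a stand-in that would really be psql -f against the database:

```shell
# One background worker per simulated connection. Each worker loops until
# the deadline, choosing select_test.sql 8 times out of 10 and
# insert_test.sql 2 times out of 10 (a deterministic 80/20 mix; swap in
# $RANDOM for true randomness). RUN_SQL stands in for: psql -f "$1" mydb
WORKERS=4            # use 100+ against a real server
DURATION=2           # seconds; 10800 for a 3-hour run
RUN_SQL() { echo "ran: $1" >> results.log; }

: > results.log
end=$(( $(date +%s) + DURATION ))
for w in $(seq "$WORKERS"); do
  (
    i=0
    while [ "$(date +%s)" -lt "$end" ]; do
      i=$(( (i + 1) % 10 ))
      if [ "$i" -lt 8 ]; then
        RUN_SQL select_test.sql
      else
        RUN_SQL insert_test.sql
      fi
    done
  ) &
done
wait
wc -l < results.log   # total number of simulated procedure calls
```

With real psql calls in RUN_SQL, you would watch vmstat, iostat, and prstat on the database host while this runs.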
