How do I stress test PostgreSQL?

I have a PostgreSQL 8.2 DB running on an UltraSPARC T2 processor (Sun T5120). All applications connecting to this DB will use stored procedures. We have SPs for INSERT, DELETE, SELECT, etc.
What would be the best way of:
1) Test all the SPs and their performance
2) Run the most complex ones, or the ones for which we expect the most concurrency, from a tool, so we can measure degradation, performance issues, etc., when they run at the same time, multiple times.
I've been looking at stress test solutions, from hardware to software, but I can't seem to find one that allows me to run the SPs and take parameters from text files (for example, for INSERTs).

What would be the best strategy here?

Thanks for the help.

gheist Commented:

So you spawn a thread for each connection, be it 100, 1,000 or 10,000.

So, what are you trying to test?
What is the EXACT database version, and which OS does the database run on (Linux, NetBSD, ...)?
What sort of clients will connect (Windows, Solaris, etc.)?

I'd suggest running EXPLAIN on the most common statement(s) and indexing accordingly.

If you delete massively, adjusting the FSM (free space map) settings helps.
For SELECTs you have to give PostgreSQL enough memory.

PostgreSQL performs best when you reuse prepared statements.
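As a sketch of the EXPLAIN and prepared-statement advice above (table1, its name column, and mydb are placeholders for your own schema and database):

```shell
# Write a small check script; table1/name/mydb are hypothetical names.
cat > explain_check.sql <<'SQL'
-- See which plan the most common statement gets:
EXPLAIN ANALYZE SELECT * FROM table1 WHERE name = 'alice';

-- A prepared statement is planned once per session and then reused:
PREPARE by_name(text) AS SELECT * FROM table1 WHERE name = $1;
EXECUTE by_name('alice');
EXECUTE by_name('bob');
SQL
# psql -d mydb -f explain_check.sql    # run against your database
```

If EXPLAIN ANALYZE shows a sequential scan on the filtered column, that is usually the place to add an index.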
earth man2 Commented:
You can do a lot with the command-line utility psql.

You can schedule scripts to start in the future using the at command, e.g.:

psql -f testfile.sql
awk can be used to create SQL script files.

The SQL commands NOTIFY and LISTEN can be used to synchronize different threads in your testing scheme.
A function call such as SELECT pg_sleep(1.5); can delay a thread's execution.

Alternatively, you can write C libpq programs or Java JDBC programs to do the SQL processing.
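The awk-plus-psql idea above can be sketched like this (names.txt and my_select are hypothetical placeholders for your parameter file and function):

```shell
# Turn a parameter file into a SQL script with awk, then replay it
# with psql. names.txt and my_select are placeholder names.
printf 'alice\nbob\ncarol\n' > names.txt

# Each input line becomes one call to the function.
awk '{ printf "SELECT my_select('\''%s'\'');\n", $0 }' names.txt > stress.sql

cat stress.sql
# psql -d mydb -f stress.sql    # replay against your database
```

The generated stress.sql contains one SELECT per line of the input file, so the same generator works for INSERT parameters as well.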

One should watch for the commonly used queries and stress those in proportion, closely imitating the (much larger) production data set.
manuel2002m (Author) Commented:
Thanks guys.
Is there a tool (hardware or software) that we can use to run those stored procedures, taking their parameters from a text file or something like that?
For example, let's say I have a SELECT statement like this:
SELECT * FROM table1 WHERE name=var1
Is there any tool I can give a txt file with a list of names, so that it executes the stored procedure taking names randomly from that file and obeying my settings, for example 100 simultaneous executions during 3 hours?
Then I can log into the PG machine and record its performance.
In short, a tool I can use to simulate what's going to happen on my DB.
By the way, the apps connecting to that PG are web services running on Sun Java Application Server 9.1 (about 24 instances, each on a separate server), and the HTTPS traffic is balanced by Sun Web Server 7 running on top of Sun Cluster.
Everything runs on Sun Solaris 10, SPARC.
I'm using T2 processors for the DB and T1 for the app servers.
Thanks all for the help.
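The kind of driver described above (random parameters from a text file, N concurrent clients, fixed duration) can be sketched in shell; names.txt, my_select and mydb are placeholders for your own file, function and database:

```shell
# Minimal home-grown concurrency driver sketch.
CLIENTS=100
DURATION=$((3 * 3600))    # 3 hours, in seconds

pick_random_line() {
    # Print one random line of the file given as $1.
    awk 'BEGIN { srand() }
         { line[NR] = $0 }
         END { print line[int(rand() * NR) + 1] }' "$1"
}

run_client() {
    end=$(( $(date +%s) + DURATION ))
    while [ "$(date +%s)" -lt "$end" ]; do
        name=$(pick_random_line names.txt)
        psql -d mydb -q -c "SELECT my_select('$name');" >/dev/null
    done
}

# Uncomment to launch the run (spawns CLIENTS parallel workers):
# for i in $(seq 1 "$CLIENTS"); do run_client & done; wait
```

Each backgrounded run_client holds its own psql session, so the server sees CLIENTS concurrent connections while you record performance on the PG machine.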

Looks simple in Java: run a database access loop in a thread. You may mix requests in the same thread or make each thread take a request as a parameter.
The Sun app server adds nothing on top of JDBC, so the benchmark is easy to build.

CPU brainpower is of small importance unless you run heavy DSS queries (more than a couple of joins).

It actually boils down to some scarce internal resource like WAL, prepared-query storage, etc.

Normally you set all monitoring up first.
When optimizing the SQL is no longer possible, stop monitoring and ask for better hardware.
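A minimal monitoring setup for such a run might look like the following sketch, saved as a script to start alongside the test. mydb is a placeholder, and the pg_stat_activity columns shown (procpid, current_query) are the 8.2-era names:

```shell
# Write a small monitoring script; run it in parallel with the test.
cat > monitor.sh <<'EOF'
#!/bin/sh
vmstat 5 &      # CPU and paging
iostat -xn 5 &  # per-device latency (Solaris)
while :; do
    # Active backends and long-running queries (PostgreSQL 8.2 columns):
    psql -d mydb -c "SELECT procpid, query_start, current_query
                     FROM pg_stat_activity
                     WHERE current_query <> '<IDLE>';"
    sleep 30
done
EOF
chmod +x monitor.sh
```

Correlating the pg_stat_activity snapshots with vmstat/iostat output shows whether the bottleneck is CPU, disk, or a long-running query.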

Actually, there is a problem: PostgreSQL does not keep track of its internal memory allocations in a visible way like other databases do, so it is hard to find bottlenecks inside it that would be obvious on commercial databases.
That is, there is no internal memory monitoring like vmstat -m provides on common UNIX systems.
manuel2002m (Author) Commented:
Thanks gheist. On your first comment today:
"run database access loop in thread. You may mix requests in same thread or make every thread a request as parameter"
What exactly do you mean? How can we do that using the current SPs?
Those SPs take values from arguments that you pass when you call them.
How can I do that in batch (without having to execute them manually)?
I want some way to execute combinations of those SPs at certain times; then I will use the monitoring tools to watch the behavior.

Since you keep talking about stored procedures in a database that does not support stored procedures (PostgreSQL has functions), it would be hard to write reasonable code that uses a nonexistent feature.
manuel2002m (Author) Commented:
Fine, let's say 'function' instead of 'stored procedure', shall we?
The question is still the same.
Maybe I'm explaining this too confusingly. I wonder, how do people test their DBs? Do they just test manually, or do they stress test the DB with the application layer in the equation?
Thanks, and sorry for my confusing way of asking.
One monitors the production database for long-running queries, deadlocks, etc.

There is no advantage in storing functions in the database: they go through the query optimizer at each preparation anyway; no optimization is done on them when they are stored.
manuel2002m (Author) Commented:
Understood. But let's say you have already optimized the queries and the DB has the right parameters. How do you know whether your DB backend behaves properly when hundreds of those queries are executed at the same time, or combinations of them, like 20% this, 80% that?
That's the kind of tool, or commands, or strategy I'm talking about.
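One stock option for weighted mixes like the 20%/80% case above is pgbench, which ships in contrib/ of PostgreSQL 8.2. It cannot read string parameters from a text file, but \setrandom generates random integers per transaction; table1, my_select and mydb below are placeholders:

```shell
# Two custom pgbench scripts; :id is substituted per transaction.
cat > select.sql <<'EOF'
\setrandom id 1 100000
SELECT my_select(:id);
EOF
cat > insert.sql <<'EOF'
\setrandom id 1 100000
INSERT INTO table1(name) VALUES ('stress-' || :id);
EOF
# Old pgbench picks one -f script at random per transaction, so an
# 80/20 select/insert mix means listing the select script four times:
# pgbench -n -c 100 -t 1000 \
#     -f select.sql -f select.sql -f select.sql -f select.sql \
#     -f insert.sql mydb
```

-c sets the number of concurrent clients and -t the transactions per client, so run length is controlled by transaction count rather than wall-clock duration in this version.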

