Solved

CPU bound

Posted on 2006-11-17
Medium Priority
390 Views
Last Modified: 2012-06-22
I have a 48M PostgreSQL database on a 2GB dedicated Linux server with one CPU. It is taking a pounding and I've been seeing very sluggish performance. Due to the importance of the load (it's the run-up to a live event and that's the reason for the traffic), I want to do as little experimentation as possible.

It's not too bad here...

top - 23:26:38 up 144 days, 10:26,  4 users,  load average: 1.35, 1.21, 1.10
Tasks:  75 total,   2 running,  73 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0% us,  0.0% sy,  0.0% ni, 99.7% id,  0.0% wa,  0.3% hi,  0.0% si
Mem:   3631740k total,  1969348k used,  1662392k free,    92852k buffers
Swap:  4008120k total,        0k used,  4008120k free,  1757768k cached

...but I've seen those load averages get up to > 5.0 for long periods, and yes it *is* postmaster munching the mips :-(

The postgresql.conf file has been set up with defaults, and I'm planning to apply the following changes:

  #shared_buffers = 1000          # min 16, at least max_connections*2, 8KB each
  shared_buffers = 8192            # Rob S. 2006-11-17 - it is only a 50M data set, but we can spare 64M of RAM
  #vacuum_mem = 8192           # min 1024, size in KB
  vacuum_mem = 16384            # Rob S. 2006-11-17 - 16MB
  #effective_cache_size = 1000    # typically 8KB each
  effective_cache_size = 10240    # Rob S. This is 80M, which is plenty for our 50M database!
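
Assuming these need a full server restart rather than just a reload (as far as I know shared_buffers can't be changed on a running server), I'll sanity-check the live values from psql afterwards with something like:

  SHOW shared_buffers;            -- should come back as 8192 (8kB pages, i.e. 64MB)
  SHOW effective_cache_size;      -- should come back as 10240 (i.e. 80MB)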

The database is: PostgreSQL 8.1.4 on i386-pc-linux-gnu, compiled by GCC cc (GCC) 3.3.5 (Debian 1:3.3.5-13)

I have never vacuumed. I assume that PostgreSQL does that for me now.

Here are my questions:

(1) Are these sane changes and in any way likely to reduce the CPU load?

(2) Would explicit VACUUMing be worth it?
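
For what it's worth, if the answer to (2) is yes, I'm assuming an explicit vacuum is just something along these lines, run from psql or a cron job as the postgres user:

  VACUUM ANALYZE;                         -- whole database: reclaim dead rows and refresh planner stats
  VACUUM VERBOSE ANALYZE my_busy_table;   -- or a single hot table (table name is just a placeholder)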
Question by:rstaveley
6 Comments
 
LVL 22

Expert Comment

by:earth man2
ID: 17971344
You have to enable the autovacuum daemon; it is normally off by default. Look in the postgresql.conf file for the autovacuum setting.
see
http://www.postgresql.org/docs/8.1/static/runtime-config-autovacuum
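
Something like this in postgresql.conf (only a sketch - check it against the sample file shipped with 8.1; autovacuum also needs row-level statistics collection switched on, and a reload/restart afterwards):

  stats_start_collector = on   # stats collector must be running
  stats_row_level = on         # row-level stats are required by autovacuum
  autovacuum = on              # enable the autovacuum daemon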
 
LVL 22

Accepted Solution

by:
earth man2 earned 2000 total points
ID: 17971717
Since your database probably now fits in memory, the bottleneck is the CPU/memory!

You need to establish which statements are consuming the resources and create indexes to speed up those critical hogs.

http://www.postgresql.org/docs/8.1/static/runtime-config-statistics.html
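
Roughly the kind of thing I mean (table and column names below are only illustrative):

  -- is the planner doing a sequential scan on your busiest query?
  EXPLAIN ANALYZE SELECT * FROM bookings WHERE event_id = 42;

  -- if so, an index on the filtered column usually takes the CPU out of it
  CREATE INDEX bookings_event_id_idx ON bookings (event_id);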
 
LVL 17

Author Comment

by:rstaveley
ID: 17971810
Thanks earthman2. I've been picking through all the queries with explain and have found some indexing that improves matters. There's some horrible code in there.

Annoyingly, I can't seem to see any commands despite setting stats_command_string = true. I can't stop it from saying '<command string not enabled>'.
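
For completeness, these are the two settings I believe are relevant (stats_start_collector was already on, which I gather is the default):

  stats_start_collector = on
  stats_command_string = true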


 
LVL 22

Expert Comment

by:earth man2
ID: 17972102
Are you accessing the database as the postgres superuser? See

http://www.postgresql.org/docs/8.1/static/monitoring-stats.html
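
i.e. as the postgres superuser, something like this should show the live statements instead of the placeholder (column names as per the 8.1 docs):

  SELECT procpid, usename, current_query, query_start
  FROM pg_stat_activity;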
 
LVL 17

Author Comment

by:rstaveley
ID: 17972143
Yes, as user postgres.

My fiddling around with indexing has paid dividends, you know! My load averages have come plummeting down :-) :-)
 
LVL 17

Author Comment

by:rstaveley
ID: 17972304
earthman2, you've answered my question. I'll open a separate one about the stats_command_string. Many thanks.