

CPU Usage

Posted on 2010-01-03
Medium Priority
Last Modified: 2013-12-25
I have just noticed that on a dual quad-core Xeon server with Pervasive 9.7x, it only seems to use one of the 8 processors shown in Task Manager.
If I do something like view data, execute a SQL statement, or even just open a table in the Pervasive Control Center, the load peaks on only one CPU.
Could this explain why the database is slow?
Is there a way to specify how many CPUs the server engine may use?
Question by:Alex Angus
LVL 18

Accepted Solution

mirtheil earned 1200 total points
ID: 26165848
That is correct. PSQL itself is not multi-CPU (virtual or real) aware and cannot be changed to use multiple CPUs.  The underlying OS may use the other CPUs to balance load though.

High CPU usage may not be the root cause of performance problems.

If you open a table in the PCC, you are reading every single record in the table with no optimization.  If your disk system is slow or fighting for resources, you might see slow performance.  
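To picture why that unoptimized read hurts on a large table, here is a small, generic Python sketch (nothing Pervasive-specific; the data and timings are purely illustrative) contrasting a full scan with an indexed lookup on the same data:

```python
# Illustrative sketch -- nothing Pervasive-specific. It contrasts a full,
# unoptimized scan (roughly what opening a table in the PCC amounts to)
# with an indexed lookup on the same data.
import time

# Simulate a table of one million records keyed by "id".
records = [{"id": i, "value": i * 2} for i in range(1_000_000)]

# Full scan: touch every record until the match near the end is found.
t0 = time.perf_counter()
match_scan = next(r for r in records if r["id"] == 999_999)
scan_time = time.perf_counter() - t0

# Indexed lookup: a dict stands in for a B-tree index on "id".
index = {r["id"]: r for r in records}
t0 = time.perf_counter()
match_index = index[999_999]
index_time = time.perf_counter() - t0

assert match_scan is match_index
print(f"full scan: {scan_time:.4f}s, indexed lookup: {index_time:.6f}s")
```

The point is that the full-table read does work proportional to the table size, so it is a poor way to judge the server's normal performance.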

If you are having a performance problem, you should post more details about the behavior you are seeing.

Author Comment

by:Alex Angus
ID: 26168949
The client was running PSQL 9.5 before. Then they were hit by a virus, the server became unstable, and some functions were lost. So we decided to reload. At the same time we installed a new 500 GB SATA hard drive.

After installing everything, performance was slow. The database is about 11 to 12 GB. I installed updates to take it to 9.72 or thereabouts.

I first thought it was the server OS. The client is a Linux house with about 40 Linux servers and this one Windows 2003 box with the accounting app. It is/was also a Terminal Server, so not much has changed. Oh, I also loaded Avast where they used ClamAV before, but I disabled that and it made no difference.

I had one of the top Windows gurus in the country look at it. He made a small change to the NIC settings, but otherwise he says it is just overloaded and needs a new "dual quad, RAID, top-end" box. You know the story. Management are saying it worked faster before, so what is wrong, and fix that first. Being a Linux house, they are not keen on spending lots of money on a new Windows box. I could move the Pervasive engine to a Linux box, but I am not sure that would resolve the problem, and the question remains why it is 30 to 40% slower on the same box. Yes, the DB is growing daily, but would it cause a problem at this level? I have heard of 50 GB Pervasive databases, so 12 GB should not be a problem. When I stumbled on the CPU issue I thought that must be it, but alas, my bubble is burst.

A SATA drive would normally be faster than the old IDE drive. Could it be buffer size? I have not compared them but will find out today.

There are some 300 tables in the DB, but only about 8 are large, and two are about 4-5 GB. The rest are average to small. I have deleted as much history as possible. There are other databases with the history left for lookup. There are about 20 databases: 3 are live and the rest are lookup or archived. Archived means they are in Pervasive but not connected to the application menu, so nobody can access or open them.

Would appreciate any other pointers.
LVL 18

Expert Comment

ID: 26170890
Sometimes it's the simple things that cause problems.  Here are a few things I've seen:
- Make sure the data files are excluded from any Anti-Virus / Anti-Spyware programs.
- Make sure your MKDE cache is set to a decent level.  The accounting app might have some suggestions but it might be trial and error.
- Make sure that MKDE tracing is turned off in the Pervasive Control Center (PCC).  Also, make sure that ODBC tracing is turned off in the ODBC Administrator.  
- You might want to check the performance at the server.  Run PCC on the server, and compare the performance running from a client.  
- If you deleted records when you archived the tables, you might want to use the PSQL rebuild utility.    There are some cases where the performance drops after deleting a large number of records from the data file.
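As a starting point for the rebuild suggestion above, a small script like this can flag the large files worth rebuilding first. This is a hypothetical helper, not part of PSQL: the extensions listed are common Btrieve/PSQL data-file extensions, and the `C:\data` path is a placeholder, so adjust both for your accounting application.

```python
# Hypothetical helper: list large data files that may be worth rebuilding.
# The extensions below are common for Btrieve/PSQL data files, but your
# accounting app may use different ones -- adjust EXTENSIONS to match.
import os

EXTENSIONS = {".mkd", ".btr", ".dat"}

def rebuild_candidates(data_dir, min_bytes=1_000_000_000):
    """Yield (path, size) for data files larger than min_bytes."""
    for root, _dirs, files in os.walk(data_dir):
        for name in files:
            if os.path.splitext(name)[1].lower() in EXTENSIONS:
                path = os.path.join(root, name)
                size = os.path.getsize(path)
                if size >= min_bytes:
                    yield path, size

if __name__ == "__main__":
    # r"C:\data" is a placeholder -- point this at your real data directory.
    for path, size in sorted(rebuild_candidates(r"C:\data"),
                             key=lambda p: -p[1]):
        print(f"{size / 2**30:6.2f} GB  {path}")
```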
LVL 29

Assisted Solution

by:Bill Bach
Bill Bach earned 800 total points
ID: 26176303
A few more:

- You indicated that you deleted a bunch of data.  Did you ALSO rebuild the files afterwards?  Purging old data can leave a lot of "empty" pages in the file, especially towards the beginning, which can cause performance issues for SQL queries that are not indexed and thus use "Step" operations.  The result is that the engine has to step through all of the pages at the front of the file, finding only deleted records and skipping them.  Rebuilding not only shrinks the file, it can also reorganize the data for you.

- Check the relative performance of the network client using the Pervasive System Analyzer (PSA).

- Recheck the network settings -- look for cabling problems such as poor Cat5 cables on GbE, errors on the link reported by the switch, a duplex mismatch, or collision statistics on the link.  This is especially true if the PSA Stress Test indicates a periodic loss of packets.

- Re-verify memory usage.  You didn't indicate server RAM size, so I cannot suggest anything there, but if you have 4GB, then setting an L1 cache of 800MB, and an L2 cache of 40% should help.  

- Monitor for OS swapfile usage.  If you see the swapfile in frequent use, monitor the database engine's "Working Set" memory size versus Virtual Memory Size in Task Manager.  If you see the current memory suddenly decrease dramatically, then start to increase rapidly again, accompanied by high C: drive usage to the swapfile, then your OS is swapping out the process and it is reallocating memory.  This can have an AWFUL impact on performance, even for a small system.  Splitting the OS and data volume may help diagnose this, too.
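The memory-sizing rule of thumb above can be written as a quick calculation. This is my own extrapolation from the 4 GB example (800/4096 is roughly 20% for L1), not an official formula, so verify against your PSQL version's documentation before changing engine settings:

```python
# Back-of-the-envelope version of the cache-sizing rule above. The 20% L1
# figure is an extrapolation from the 4 GB -> 800 MB example; the 40% L2
# value is quoted directly from the suggestion. Verify against your PSQL
# version's documentation before applying.

def suggested_cache(ram_mb):
    """Return (l1_cache_mb, l2_cache_percent) for a given amount of server RAM."""
    l1_mb = int(ram_mb * 0.20)  # e.g. 4096 MB RAM -> 819 MB L1
    l2_percent = 40             # L2 cache as a percentage setting
    return l1_mb, l2_percent

print(suggested_cache(4096))  # -> (819, 40)
```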

Finally -- If this site is a Linux site, why not consider a move to Pervasive PSQL Linux?  You can cross to the Linux platform at the same time as you upgrade to PSQLv10 at no additional fee (only the upgrade cost is a factor, not the platform switch).  They obviously are comfortable with Linux -- why wouldn't they want to use it for PSQL, too?  

