TheNerdherder

asked on

Pervasive PSQL v11

I have upgraded everything (and I do mean everything): desktops, servers, cabling, switches, you get the point.  I have also upgraded Timberline to Sage 300 Construction (its new name), which comes with Pervasive PSQL v11.  Since the upgrade the software runs terribly: it freezes throughout the day, there are long delays processing data (reads and writes), and lots of complaints.  I have been working with Sage tech support, and they have made it better (some adjustments to the server and memory allocation), but we are at a standstill on further performance improvements.  Everything else on the server runs great, just not Sage Construction.  Does anyone have ideas about what could be happening and how to correct it?  Here are the details:

Server
     Lenovo RD630 with dual Xeon CPUs
     80 GB RAM
     1.8 TB of storage in RAID 1+0
     3 gigabit network cards: 1 for the users and 2 for the VM environment
     3 virtual servers (low-usage servers)
     Windows Server 2012 Standard

Workstations
     i3 CPU
     4 GB RAM
     160 GB hard drive
     1 Gbps NIC
     Windows 7 64-bit

Sage 300 Construction (Timberline) runs great from the server.  I have tested with limited users, which works OK; it is better with fewer users than our normal 9.  However, I have not done enough testing with limited users to make an accurate comparison.

Watching the server's performance tells me there is nothing wrong with the server.  File transfers yield the highest network throughput possible.  Working with the MSQL databases is fine.  Basically, I am at the WTF point with this software system.  Any help would be great.
Don Thomson

Just an off-the-wall question -- are they running Trend Micro AV?
Can you grab a Wireshark trace of a database process running, so that we can see it?  This will give you a good idea of actual response times, both server and network/client, and give you an idea of whether you need to adjust the server or the network/client.
TheNerdherder

ASKER

DTH,
 thanks -- we are currently at version 13.1.24 rev 5.  We received another update from Sage yesterday and will be installing it today.
Bill,
we do not use Trend products.  I have also added all of the exclusions to our current AV software and have removed AV from the server.
Bill,
I will have to set that up.  What I can tell you from the new Performance Monitor in Windows Server 2012 is that the clients have anywhere from 0 to 213 ms of latency.  As far as network throughput goes, the clients have no issues with speed; I can move large files at gigabit speeds.
With Wireshark, what are you looking for?  SQL transaction start and end times, network issues, etc.?

Thanks
My understanding is that a majority of Timberline still uses the Btrieve interface, so you'll want to look at the traffic coming in on TCP port 3351 on the server side.  Look for the nominal response time on both sides of the link, and try to calculate an average response time.  If you take the trace from the server, you'll see the true database engine response time on the server side, while the client side will include client think time as well as network round trip time (RTT).  If you need to isolate the RTT, take a second trace from the client side instead.  Then subtract the server-side response time from the client-side response time, and you're left with the RTT.
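The subtraction described above can be sketched in a few lines.  The numbers here are illustrative assumptions, not measurements from this thread:

```python
# Hypothetical response times (seconds), assuming a 0.1 ms engine response
# seen in the server-side trace and a 0.4 ms response seen at the client.
server_side_response = 0.0001   # true database engine response time
client_side_response = 0.0004   # same request observed in the client-side trace

# The network round trip time is what's left after removing the engine time.
rtt = client_side_response - server_side_response
print(f"Estimated RTT: {rtt * 1000:.2f} ms")
```

With these assumed inputs, the estimated RTT works out to 0.3 ms, which matches the "decent network" figure used later in this answer.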

Most database applications will issue thousands of small requests -- VERY different from network file I/O over SMB.  If your RTT is high, then performance will suffer.  Further, if you are losing any packets within the network, the retransmission time can be 250ms or more.  

If your server is "typical", then you'll probably see server side response time at 0.1ms or lower, and with a decent network (RTT time of 0.3ms or less), you're looking at 0.4ms total per request.  Based on this estimate, a process reading 1000 records will complete in 0.4 seconds.  Now, let's assume that you lose just TWO packets out of that 1000 -- adding in the 250ms delay for each lost packet changes your total process time to 0.9s -- more than DOUBLE the time!  Now, extend this to 100,000 requests, and you see where even minimal packet loss can be REALLY bad.  (Now you see why I worry about your 213ms latency estimates.)
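The arithmetic in the paragraph above can be checked directly.  This is just the worked example restated, using the same assumed 0.4 ms per request and 250 ms retransmission delay:

```python
# Illustrative timing: 0.1 ms server response + 0.3 ms RTT per request,
# and a typical 250 ms TCP retransmission delay per lost packet.
requests = 1000
per_request_s = 0.0004
retransmit_delay_s = 0.250
lost_packets = 2

baseline_s = requests * per_request_s                      # no packet loss
with_loss_s = baseline_s + lost_packets * retransmit_delay_s

print(f"No loss:   {baseline_s:.1f} s")
print(f"2 losses:  {with_loss_s:.1f} s")
```

Losing just 2 packets out of 1000 takes the run from 0.4 s to 0.9 s, more than doubling it, exactly as described.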

A few tips: check the Expert Info analysis for TCP Retransmission counts in Wireshark, because this is much easier than digging through the trace, and change the Time Display Format to "Seconds Since Previous Displayed Packet" so that you don't have to calculate response times in your head.
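If the Btrieve traffic really is on TCP 3351 (an assumption based on the default port mentioned above), a Wireshark display filter along these lines isolates retransmissions on just that conversation:

```
tcp.port == 3351 && tcp.analysis.retransmission
```

Clearing the filter and comparing the retransmission count against total packet volume gives a quick loss-rate estimate before doing any per-request timing.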

Another thing is to check your PSQL configuration.  I need this data:
- Which PSQL engine is installed -- Workgroup or Server?
- How big is your database file set?
- What is your database engine setting for:
   - Cache Allocation
   - Max Microkernel Memory Usage
   - Use System Cache
- How much FREE memory is available in your OS?
I started to gather all of your requested information; however, the director of the accounting department has now gone over my head, and we have a new tech person from Sage working on the issue.  I will try to get the info, but I have been taken off the problem.
Understandable.  Depending on who you get, they will either suggest reinstalling and then give up, or they can troubleshoot the environment properly.  You might do well to keep this open in the meantime...
ASKER CERTIFIED SOLUTION
TheNerdherder