  • Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 321

Timing queries/updates non-intrusively in a web application

I have inherited a fairly complicated web application, which I want to make minimal changes to. It is PHP-based. It isn't as fast as it needs to be, so I'm throwing some new hardware at it and will edit the minimum that I can get away with to improve its performance.

Unfortunately, the application's PHP hasn't been structured in a way that makes it easy to time database queries/updates/appends, so I'd like to see if I can get a sense of where the inefficient operations are by looking at the PostgreSQL logs.

I notice that there is a log file analyser written in Ruby that looks to be good for my purposes: http://pqa.projects.postgresql.org/

Would enabling all of the logging required for this analyser be likely to bring the site to its knees? Or do you reckon I could run the site for a week or so with logging enabled to collect a fair picture of the bottlenecks?
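
As far as I can tell, the analyser wants the statements and their durations in the log, so I imagine I'd be enabling something along these lines in postgresql.conf (setting names are from a reasonably recent PostgreSQL, and this is only a sketch):

    # Log every statement and how long it took
    log_statement = 'all'
    log_duration = on
    # Timestamp and process ID on each line, so entries can be matched up
    log_line_prefix = '%t [%p] '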
Asked by rstaveley

1 Solution
 
earth man2 commented:
You have not had any replies so far, methinks, because your question is analogous to asking how long a piece of string is.

Like the uncertainty principle, you cannot time queries non-intrusively. The impact depends on how maxed out your DB server is, i.e. whether it is CPU bound or disk I/O limited.

Minimise the effect of statistics collection by writing the statistics to a different disk from your DB, indexes and OS.
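
For example, if it is the query log you end up enabling, something along these lines in postgresql.conf keeps it off the data disk (directive names vary with PostgreSQL version, and the path is only an example):

    # Capture the server log and write it to a directory on a separate spindle
    redirect_stderr = on
    log_directory = '/mnt/logdisk/pg_log'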
 
rstaveley (Author) commented:
What I'm really trying to get is a list of the individual queries/updates which take longer than the rest, so I can focus on them. MySQL has the "slow query log" (see http://dev.mysql.com/doc/refman/5.0/en/slow-query-log.html), which would probably suffice for me if such a thing existed for PostgreSQL. It would also be nice to see which style of query is executed most often. Armed with this knowledge, I could concentrate on optimising those queries alone.
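
(Having said that, I notice PostgreSQL has a log_min_duration_statement setting, which sounds like it might be roughly the equivalent. If it does what the name suggests, something like this in postgresql.conf would log only the statements slower than half a second, together with their durations:)

    # Log any statement that runs for longer than 500 ms, along with its duration
    log_min_duration_statement = 500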

I'm not really worried about the overhead of statistics collection so much as messing around with unfamiliar code prematurely. I don't want to sprinkle the code with timing hooks to find out which bits are slow. If I can see some obvious SQL baddies before digging into the PHP code, it could allow me to locate the offending code more quickly.

An alternative approach would be to look at the Apache logs to see which pages are taking the longest to render, but taking timings from the PostgreSQL logs would be much more focused. It *may* be that no optimisation is required in the SQL and that all of the optimisation needed could be achieved by running an optimiser over the PHP code... but I'd like to be able to find out.
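
(If I did go down the Apache route, I believe the access log format can include the time taken to serve each request — %D is microseconds in Apache 2, if memory serves — e.g.:)

    # Common log format plus the time taken to serve the request, in microseconds
    LogFormat "%h %l %u %t \"%r\" %>s %b %D" timed
    CustomLog logs/access_timed_log timed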
 
rstaveley (Author) commented:
Thanks for the advice, earthman2. I left this open to see whether the question would attract any more responses, but it is time to close it now. That Ruby analyser is the best thing I've found, but it isn't ideal for identifying design weaknesses. There's really no substitute for timing the queries within the application, and that means improving its organisation so that all queries go through a common call wrapper.
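
For the record, the sort of wrapper I have in mind is something along these lines (the function name, threshold and log path are made up; it just illustrates the idea):

    <?php
    // Hypothetical wrapper: every query in the application gets routed through
    // this one function, so each query can be timed in a single place.
    function timed_query($conn, $sql)
    {
        $start = microtime(true);
        $result = pg_query($conn, $sql);
        $elapsed_ms = (microtime(true) - $start) * 1000.0;

        // Record anything slower than an arbitrary 200 ms threshold for later analysis.
        if ($elapsed_ms > 200) {
            error_log(sprintf("SLOW (%.1f ms): %s\n", $elapsed_ms, $sql),
                      3, '/tmp/slow_queries.log');
        }

        return $result;
    }
    ?>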
