
Grid performance


Sorry in advance for the wall of text ...

This is maybe a rather general question. A customer has asked for a performance update of an application; in particular, he wants the built-in lists to feel smoother.
These lists (grids) are based on a repository version that has been developed and updated for years (it originally started for IE6). Because they are quite old, the design isn't as polished as a modern web application.

They have many features:
- Row selection
- Sorting
- Filtering
- Sub-rows
- Paging / sliding, incl. page-size limiter
- Dynamic cell content for I18N / L10N / links / images
- Multiple header lines
- Tooltips (containing the full content of a long line, or special content)
- Column configuration UI (order, size, hide, text size)
- Export to CSV/XLS/PDF
- Multiple levels of caching

Maybe there is more that I have forgotten. The platform is a LAMP VM environment. Most users connect directly via their intranet; some of them need to use a VPN.

Now some details about one list:
- 74 different columns; most users display about 10 of them
- 7 joined tables; 10 columns require subselects to display additional/aggregated data
- A full set has ~25,000 entries. The actual number varies because of explicit/implicit permissions
- The default page size is 20
- 10 filters. Most are dropdown boxes that show the user all possible values and return the reference IDs to filter on. Some of them contain additional combined values. There is also one ID filter and one free-text filter
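For reference, the shape of such a page query is roughly as follows. This is a simplified, hypothetical sketch - the table and column names are invented, not from the real schema:

```sql
-- Hypothetical sketch of one page of the list: a couple of the 7 joins,
-- one of the ~10 subselect columns, permission filtering, and paging.
SELECT o.id,
       o.title,
       c.name AS customer,
       (SELECT COUNT(*)
          FROM order_item oi
         WHERE oi.order_id = o.id) AS item_count  -- one of the subselect columns
  FROM orders o
  JOIN customer c ON c.id = o.customer_id
  JOIN status   s ON s.id = o.status_id
 WHERE o.owner_group IN (1, 2, 3)  -- the groups this user may see
 ORDER BY o.created_at DESC
 LIMIT 20 OFFSET 0;                -- default page size of 20
```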


On my development PC, client and server run on the same machine. Times are the "Overall elapsed" values from Fiddler; content sizes are compressed.
- The initial load of a page with NumRows = 20 takes 1.7 s for 54 KB
- Changing NumRows to 100 takes 1 s for 30 KB
- Changing back to 20 takes 0.35 s for 15 KB

- The initial load of a page with NumRows = 100 takes 1.85 s for 70 KB
- Changing NumRows to 20 takes 0.85 s for 15 KB
- Changing back to 100 takes 0.45 s for 30 KB

- Applying a filter takes between 0.3 and 3 s, depending on the filter and value. Not every filter can use an index, and some are applied on a "joined table X"


Overall the test times are not that bad, but the customer reports a "bad experience" - the application is too slow for him and/or the end users. They use bigger page sizes (400 is the maximum), so they don't need to reload parts of the list that often - but this increases the initial load time overall (think about the number of DOM elements created for a grid of that size).

I looked at some demos of other grids (phpGrid, SenchaGrid, DojoGrid, SlickGrid). They all look prettier, and some have nice additional features - but none of them has all the features of the existing lists. To accept the loss of a feature, a grid would have to offer a huge performance bonus, which I don't believe any of them does.

Example: SlickGrid
It promises smooth performance for an "unlimited" number of records. But loading all the data of the example list above would be ~5 MB, so we would need AJAX requests to load the data on demand. The built-in filters don't support combo boxes - the user never knows which values actually exist in the list. And because we would use AJAX, we could no longer filter the existing list on the client and would have to do it on the server.
The search requests of the current list already deliver the ID the user is searching for. With the new approach, we would first have to run a query to collect the matching IDs (which may be ~20k in a bad case), and then use these IDs to query the user data.
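To make the concern concrete, here is a hedged sketch (hypothetical table and column names) of the two-step pattern versus folding the filter into a single round trip; which one wins depends on the indexes involved:

```sql
-- Two-step pattern forced by the server-side AJAX approach:
-- Step 1: resolve the matching IDs (may be ~20k in a bad case)
SELECT id FROM orders WHERE status_id = 5;
-- Step 2: fetch the visible rows for those IDs
-- SELECT * FROM orders WHERE id IN ( ...20k IDs... ) LIMIT 100;

-- Alternative: one combined query, letting the DB do the filtering
SELECT * FROM orders WHERE status_id = 5 LIMIT 100;
```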


On another project (.NET) I used DevExpress, which has all the features that the PHP lists have. But in the end those lists didn't have any performance advantage over our PHP ones.

Do you know of a grid that could deliver all these features with better performance? Or do you think the problem can only be solved another way (removing features/attributes, using precalculated instead of live data, upgrading clients, providing more "specialized" lists to reduce attributes/filters, ...)?

Kind regards
3 Solutions
Garry Glendown (Consulting and Network/Security Specialist) commented:
Have you worked out whether the performance problems are caused by the backend (database), the frontend (PHP code), or the transmission? On the intranet, I assume 100+ Mbit/s, so that shouldn't cause the issue ...
Can you isolate a typical MySQL request (or the bundle of requests required for a page) and see how long it takes? If the DB is responsible for more than - say - 60% of the waiting time (and do play with larger numbers of rows, like 1000 or so), then optimizing the DB settings may be an easy first step ... check out the MySQLTuner script, which can give you some very nice recommendations as far as optimizing the DB settings go ...
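One way to get raw DB timings in isolation (assuming a MySQL 5.x server, where the session profiling facility is available) would be, from a mysql client:

```sql
SET profiling = 1;            -- enable per-statement profiling for this session
-- ... run the grid's actual SELECT here ...
SHOW PROFILES;                -- lists the duration of each statement just run
SHOW PROFILE FOR QUERY 1;     -- stage-by-stage breakdown of statement 1
```

Comparing these numbers against Fiddler's overall times should show how much of the wait is DB versus PHP versus transmission.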
maxworx (Author) commented:
Hi Garry,

it seems the customer's problems are all infrastructure related. I don't have access to many parts, but I could find out some information:

- The query behind the 20-row load takes ~0.4 s on my own system initially, and then ~0.0005 s for each subsequent request when no data has changed. On the customer's server, the same request takes the full 0.4 s every time.
- Your tuner script doesn't run on my local machine (Windows; it doesn't work with installed ActivePerl or Cygwin either). On the server, some lines are marked with !!:
[!!] Query cache is disabled
[!!] Joins performed without indexes: 28002
[!!] Thread cache is disabled
[!!] Table cache hit rate: 0% (64 open / 289K opened)

Analysing the calculations behind these results shows that only one of these issues exists on my local PC: the table cache hit rate (40 open / 5k opened). But I don't know how to solve it, because the variable in my local my.ini is already set to "table_cache = 510". On the server it is probably 64, since mysqltuner suggests setting it higher than this value.

- Each HTTP/1.1 request results in an HTTP/1.0 response. Because of this, no connection is reused and the SSL handshake is redone for each sub-request (CSS, JS, images).
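If Apache itself is what prevents connection reuse (an assumption - a proxy in between could also be the cause of the HTTP/1.0 downgrade), the relevant httpd.conf directives would look like this config fragment (the values are examples, not a recommendation):

```apacheconf
# Allow several requests per TCP/SSL connection
KeepAlive On
# Requests allowed per persistent connection
MaxKeepAliveRequests 100
# Seconds to keep an idle connection open
KeepAliveTimeout 5
```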

- Once, when I requested the index page (containing only the latest release notes of the application), I had to wait 45 s for the result. I could not reproduce this. I started a heavy cron job that put MySQL at 100% CPU for one minute, but this did not add much waiting time. It seems to me that there are some network-related problems too.

- Some of the users use IE7, and I think their client machines are as old as the browser itself.

All of this leads to the same conclusion: the main problem isn't the application itself. There are some points to optimize (bundling of CSS+JS files, support for CDN servers), but nothing that would make this big a difference.

Do you have further advice like the tuner script? MySQL/PHP/Apache have tons of configuration parameters; they are all well documented, but the right values always depend on the application, the number of users, and so on. It seems to be a lifetime job to find good values, and nobody feels responsible for it. Our system integrators only set up the environments. The same applies to the customer's infrastructure provider - they only install a preconfigured system from their repositories and don't know what the MySQL/PHP/Apache configuration parameters do. Is it the programmer's job? We normally don't have access to the production environment and can't simulate realistic concurrent requests. Who else is responsible?

Kind regards
Garry Glendown (Consulting and Network/Security Specialist) commented:
Taking the output from the tuner script (for your development machine I guess it doesn't matter that much - but you should be able to run it on a Linux box and access the MySQL server over the network), one could pick a few points and change the settings.

[!!] Query cache is disabled

This means that subsequent requests for the same data will not be served from memory but are always re-run on the DB server backend. If the table has a low rate of changes, enabling the query cache should help quite a bit. Please note that write requests will invalidate the cache, of course. Even so, the overhead of using the cache should be more than outweighed by the performance gains.

[!!] Joins performed without indexes: 28002

That one I'd personally take care of first ... (depending on the total number of requests as a comparison, of course). When running those joins, the DB backend had to scan the actual table data instead of using index information to do the join - this will definitely have a big impact on performance. Check your main join queries with an EXPLAIN-enabled MySQL client and see whether your joins need to be changed, or whether you can add indexes that help the DB improve access. In the old days (using Informix 4.1 on then-fast 486 or Pentium 1 boxes), using the right index could mean the difference between a 1-second and a 5-minute request ... while I doubt the difference will be that large here, it's definitely an easy and quick fix, and it should save you quite some headache.
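As a hedged illustration (the table and column names are hypothetical), the workflow would be:

```sql
-- 1. Inspect the join plan. In the EXPLAIN output, look for
--    type=ALL (full table scan) and key=NULL on the joined tables.
EXPLAIN
SELECT o.id, c.name
  FROM orders o
  JOIN customer c ON c.id = o.customer_id
 WHERE o.status_id = 5;

-- 2. If a join/filter column has no index, add one, then re-check the plan.
ALTER TABLE orders ADD INDEX idx_orders_status (status_id);
```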

[!!] Thread cache is disabled

Not sure how much influence this will have ...

[!!] Table cache hit rate: 0% (64 open / 289K opened)

Understandable, with query cache disabled ...

mysqltuner should also have listed, at the end of its output, the settings you need to alter in the my.cnf file ... e.g.:

# * Query Cache Configuration
query_cache_limit       = 1M
query_cache_size        = 16M

thread_cache_size       = 8

table_cache                 = 64


Garry Glendown (Consulting and Network/Security Specialist) commented:
As for who is "responsible" for the correct configuration - I guess that depends on the requirements and the expertise. If one service provider just supplies a server with services on it (e.g. MySQL), that doesn't mean they alone can do specific tuning for a certain program or use case. If you, as another service provider who takes care of the programs running on the server, require specific settings for your application to run smoothly, I guess the two of you just have to talk to each other in order to provide the customer with the best service. If either of you fails to do that, the other has to make sure the requirements and reasons are documented, and anything else has to be escalated to the appropriate people at the customer. Then it's up to them to get the party that won't or can't do their part of the job to either get it done, or to move their money elsewhere.
I guess that's the typical situation when you split responsibilities between multiple parties ... I hate it when it happens to us, because it always means additional, more detailed documentation of every step requested and done, and it almost always takes longer to get things done ...
maxworx (Author) commented:
Thank you for your time and answers!
