• Status: Solved
  • Priority: Medium
  • Security: Public
  • Views: 251

memory, disk - scaling, performance - application architecture question

I am re-architecting an application that focuses on group collaboration around interactively building and annotating a complex diagram.

Average total data size for one group session will probably be somewhere between 150KB and 1MB.

Currently the group session data is stored entirely in memory in a large object containing datasets and supporting variables.  This data is synced to disk during diagram update/writes.  

The data is kept in memory for fast diagram form refreshes (one participant does a small write, which might change the relationships on the diagram, which then has to be refreshed to all other participants).
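As a concrete illustration of that pattern, here is a minimal sketch of a write-through session store, assuming an in-memory dictionary keyed by session id; the SessionStore class, the XML file path, and the use of WriteXml for the disk sync are illustrative assumptions, not details of the actual application:

    using System;
    using System.Collections.Concurrent;
    using System.Data;
    using System.IO;

    // Hypothetical write-through session store: all diagram refreshes are served
    // from memory, and every participant write is also persisted immediately so
    // the on-disk copy never lags behind.
    public class SessionStore
    {
        private readonly ConcurrentDictionary<string, DataSet> _sessions =
            new ConcurrentDictionary<string, DataSet>();

        // Fast path: refreshes read the in-memory DataSet only.
        public DataSet Get(string sessionId)
        {
            return _sessions[sessionId];
        }

        // Slow path: a small write updates memory and is synced to disk
        // (written here as XML purely for illustration).
        public void ApplyUpdate(string sessionId, Action<DataSet> update)
        {
            DataSet ds = _sessions.GetOrAdd(sessionId, _ => new DataSet("GroupSession"));
            lock (ds)
            {
                update(ds);
                ds.WriteXml(Path.Combine(@"C:\sessions", sessionId + ".xml"));
            }
        }
    }

The point of the pattern is that refresh traffic never touches disk; only the much rarer writes do.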

My question has to do with performance versus scaling:
>  If 1 GB can store 1000 concurrent sessions (in my dreams), it doesn't seem like that would be a scaling bottleneck i.e. the server would bog down first, or I could add servers if I ran out of memory.
>  If I had to retrieve say 50KB from disk per diagram refresh (say one per second for each group session), it seems like that would be a big disk performance/scaling constraint.
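A quick arithmetic check of those two figures, using the numbers quoted above:

    1,000 sessions x 1 MB      = ~1 GB of RAM               (comfortably held on one server)
    1,000 sessions x 50 KB/sec = ~50 MB/sec of disk reads   (~1,000 random reads per second)

A single conventional spinning disk serves only on the order of 100-200 random reads per second, so of the two resources the disk path would indeed be the tighter constraint.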

So it seems like keeping the data in memory is almost necessary, to achieve scaling and maintain performance.

I'm trying to figure out if I've made some basic error in this thinking.

Any comments on these thoughts and assumptions would be appreciated.

Thanks!
codequest Asked:

5 Solutions
 
AndyAinscow (Freelance programmer / Consultant) Commented:
I tend to agree with you - keep it in memory if possible.
 
Eugene Z Commented:
It depends on how much RAM you have and how much of it is set for SQL Server.

Please clarify SQL Server's part in this process:

"group collaboration around interactively building and annotating a complex diagram."

How did you calculate this?
"Average total data size for one group session will probably be somewhere between 150KB and 1MB."


On what server/PC is this stored?
"the group session data is stored entirely in memory in a large object containing datasets and supporting variables. This data is synced to disk during diagram update/writes."

What method are you using to ensure that it is in memory?

-------------------------

In any case you should use PerfMon, SQL Profiler, and the DMVs for SQL Server; they will help you see the real numbers.
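As one concrete illustration of the DMV suggestion, the sketch below reads SQL Server's memory counters from the sys.dm_os_process_memory DMV; the connection string is a placeholder, and PerfMon/Profiler would be run as separate tools:

    using System;
    using System.Data.SqlClient;

    class MemoryCheck
    {
        static void Main()
        {
            // Placeholder connection string; point it at the server holding the session data.
            using (var conn = new SqlConnection("Server=.;Database=master;Integrated Security=true"))
            {
                conn.Open();
                // DMV reporting how much memory the SQL Server process is actually using.
                var cmd = new SqlCommand(
                    "SELECT physical_memory_in_use_kb, memory_utilization_percentage " +
                    "FROM sys.dm_os_process_memory;", conn);
                using (var rdr = cmd.ExecuteReader())
                {
                    while (rdr.Read())
                    {
                        Console.WriteLine("In use: {0} KB, utilization: {1}%",
                            rdr["physical_memory_in_use_kb"],
                            rdr["memory_utilization_percentage"]);
                    }
                }
            }
        }
    }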
 
codequest (Author) Commented:
@EugeneZ

Thanks for input.  

1) I believe I considerably overestimated the amount of data that would be actively worked on and presented. A better estimate would be 10KB. This was calculated by considering table rows, fields, field usage and field sizes (the higher estimates did not account for null fields).

2) Data is currently maintained in memory in an ADO.NET DataSet that has 5 linked tables.

Unfortunately I have only a prototype and am unable to test high volumes in order to use the utilities you recommend.
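
For readers not familiar with that structure, here is a cut-down sketch of a DataSet with linked tables; the table and column names (Nodes, Annotations) are invented for illustration, and the real set has 5 tables rather than 2:

    using System.Data;

    // Invented two-table version of the in-memory session DataSet.
    public static class SessionDataSetFactory
    {
        public static DataSet Create()
        {
            var ds = new DataSet("GroupSession");

            var nodes = ds.Tables.Add("Nodes");
            nodes.Columns.Add("NodeId", typeof(int));
            nodes.Columns.Add("Label", typeof(string));
            nodes.PrimaryKey = new[] { nodes.Columns["NodeId"] };

            var annotations = ds.Tables.Add("Annotations");
            annotations.Columns.Add("AnnotationId", typeof(int));
            annotations.Columns.Add("NodeId", typeof(int));
            annotations.Columns.Add("Text", typeof(string));

            // The DataRelation links the tables so in-memory queries and refreshes
            // can navigate from a node to its annotations without touching disk.
            ds.Relations.Add("Node_Annotations",
                nodes.Columns["NodeId"], annotations.Columns["NodeId"]);

            return ds;
        }
    }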
 
Gerald Connolly Commented:
If you are holding updates in memory, what are you doing to guard against equipment and/or power failure and the subsequent corruption of the DB on disk?
 
Eugene Z Commented:
There are many more elements that you need to consider; check:

"out of memory exception ado.net dataset"
http://social.msdn.microsoft.com/Forums/en-US/41e1b19a-b5ee-4cf2-ac1e-ff0c9a35b961/out-of-memory-exception-adonet-dataset?forum=adodotnetdataset

Also review the possibility of using stored procedures for the calculations and SQL Server to hold the data.

Again, there are not many details here about your app's architecture/tiers.
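
A hedged sketch of the stored-procedure route suggested above; the procedure name usp_GetSessionDiagram and its parameter are invented stand-ins for whatever multi-table calculation the app actually needs:

    using System.Data;
    using System.Data.SqlClient;

    public static class DiagramQueries
    {
        // Lets SQL Server do the multi-table work and returns only the finished
        // result set; the stored procedure name here is hypothetical.
        public static DataTable GetDiagram(string connectionString, int sessionId)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("dbo.usp_GetSessionDiagram", conn))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                cmd.Parameters.AddWithValue("@SessionId", sessionId);

                var table = new DataTable("Diagram");
                new SqlDataAdapter(cmd).Fill(table);
                return table;
            }
        }
    }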
 
codequest (Author) Commented:
From a similar question I posted on another site:

I'm re-architecting an ASP.NET application from Web Forms into MVC, moving from 2006-era to 2013-era ASP.NET technologies. The primary function of the app is group collaborative construction of a complex graphic/text data set. The plan is to run multiple concurrent SaaS group work sessions from the cloud.

The core graphic/text set ("data set") would consist of about 5 related tables that need fairly complex business logic and associated multi-table queries to turn them into useful display information. The content of this data set needs to be sent to all participants, in a slightly customized way for each participant, every time it is updated by any participant.

In terms of volume, say 10 participants per group, one data set change every several seconds (in one session), the entire data set building up to approximately 10KB by the end of the session, so say an average of 5KB to retrieve the entire core data set from disk (if that were the path) for each send to the browser. That may be high but there could be a wide range of volumes.

The resulting pattern for a single session is then relatively infrequent, small updates to disk, followed by 10 times as many relatively large sends to the browser.
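
Spelling that pattern out with the numbers above: one change every few seconds in a 10-participant session triggers roughly 10 sends of about 5 KB each, i.e. on the order of 50 KB of outbound data per update, against a single small write to disk; so the send volume is roughly ten times the write volume, though both are modest in absolute terms.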
 
codequest (Author) Commented:
@connollyg

Re what about memory failure: the app currently uses DataSet/TableAdapter; the writes are all persisted to disk at the time they occur.
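
A minimal sketch of that persist-on-write behaviour, using the generic SqlDataAdapter/SqlCommandBuilder pair rather than a designer-generated TableAdapter; the table name dbo.Nodes and the connection string are placeholders:

    using System.Data;
    using System.Data.SqlClient;

    public static class SessionPersistence
    {
        // Pushes just the changed rows of one table back to the database
        // immediately after each in-memory edit, so a crash loses nothing
        // that has already been acknowledged.
        public static void PersistChanges(DataSet sessionData, string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                var adapter = new SqlDataAdapter("SELECT * FROM dbo.Nodes", conn);
                // The command builder generates the INSERT/UPDATE/DELETE commands
                // from the SELECT; designer TableAdapters do the same at design time.
                var builder = new SqlCommandBuilder(adapter);
                adapter.Update(sessionData, "Nodes");
            }
        }
    }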
 
Gerald Connolly Commented:
re memory - I covered that under Equipment failure! Although you could go with a server that has RAID Memory!

re updates - You implied in your first post that some kind of write gathering was taking place. If your in-memory DB is really read-only, it's fine.
0
 
codequest (Author) Commented:
I've concluded that performance and scaling questions are completely non-trivial, and so my question can't really be conclusively answered. Inputs here have been valuable in reaching that conclusion, so points are awarded accordingly.
