ODBC performance over a WAN

I have an application that uses MFC CDatabase and CRecordset objects (and thus ODBC) to access a SQL Server database.  The performance is fine when the database is local or nearby on the LAN.

When running over a WAN (using the ODBC Administrator to point the DSN at the SQL Server's IP address), the performance takes a nasty hit.  For instance, a process that normally finishes in about 1 second blows up to an unacceptable 30 seconds.

Most of the work is forward-only, read-only access.  It's reading sets of records and generating formatted text into a local memory buffer.
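
The access pattern is roughly the following (a sketch only; the DSN, table, and column names are illustrative, not the actual code):

    // Sketch of the pattern only -- names are made up, error handling omitted.
    CDatabase db;
    db.OpenEx(_T("DSN=MyRemoteSqlServer;UID=appuser;PWD=secret"), CDatabase::noOdbcDialog);

    CRecordset rs(&db);
    rs.Open(CRecordset::forwardOnly,
            _T("SELECT Col1, Col2, Col3 FROM SomeTable WHERE SomeKey = 42"),
            CRecordset::readOnly);

    while (!rs.IsEOF())
    {
        CString sCol1, sCol2;
        rs.GetFieldValue((short)0, sCol1);   // pull each column value
        rs.GetFieldValue((short)1, sCol2);
        // ...append formatted text to the local memory buffer...
        rs.MoveNext();                       // one row at a time
    }
    rs.Close();
    db.Close();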

My questions:  
1) Is this type of ODBC access known to be vastly slower than, say, ADO access when used over a WAN?  

2) Are there any settings or options that might improve the performance?  It would need to be a considerable improvement... e.g., a 10% boost won't help at all.

Thanks!
DanRollins Asked:

Anthony Perkins Commented:
How much data are you returning (number of rows and width of row)?
All things being equal, I cannot see ADO (or ADO.NET, for that matter) giving you any major improvement, not even 10%.
DanRollins (Author) Commented:
A typical scenario is to read a total of about 150 records from about 5 tables (perhaps 30-50 columns per table access).  While processing the result sets, I'll do perhaps 50-100 ad-hoc lookups in a few other tables.

It's not really doing any heavy lifting, and it all goes smooth as snot when the db is local.

My client's position is that ADO recordsets are much more efficient, that ODBC is "a chatty protocol," and that that is what's increasing the latency.   The Profiler shows that the statements execute quickly, but the start-to-finish time is taking forever.  My position is that it's all about network access latency, but I'd love to have more than an opinion -- i.e., some hard facts.

I'm thinking that there may be some option (such as using client-side cursors) that might decrease the number of round trips.  Thoughts?
Anthony Perkins Commented:
I agree with you.  However, you should clarify whether the client means ADO or ADO.NET and, depending on that, build a quick proof of concept (it should not take more than an hour or two to write) to see if there is any truth in it.  I would be very surprised if there was even more than a 5% difference.

But I am curious also to see if anyone has any additional info on the subject.
Dave Baldwin (Fixer of Problems) Commented:
I agree with @acperkins' idea of building two 'identical' test programs that differ only in the access method.  I've found that clients aren't happy unless you've tried their idea.

Also, how busy with other traffic is the remote server and the network?
DanRollins (Author) Commented:
I agree.  The only way to prove the point is to build a test harness.  I can even use the SQL statements from the profiler so that it is a valid test, then just dump them over an ADO connection (and ignore the responses).
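
Something like this rough sketch is what I have in mind (ADO via #import; the server name, database, and statements file are placeholders, and error handling is omitted):

    #include <windows.h>
    #include <fstream>
    #include <string>

    #import "C:\Program Files\Common Files\System\ado\msado15.dll" no_namespace rename("EOF", "adoEOF")

    void ReplayStatements(const char* szStatementsFile)
    {
        CoInitialize(NULL);
        {
            _ConnectionPtr pConn(__uuidof(Connection));
            pConn->Open(L"Provider=SQLOLEDB;Data Source=REMOTE_SERVER;"
                        L"Initial Catalog=MyDb;Integrated Security=SSPI;",
                        L"", L"", adConnectUnspecified);

            std::ifstream in(szStatementsFile);   // one statement per line, captured from the Profiler
            std::string sSql;
            while (std::getline(in, sSql))
            {
                // Execute and discard the result set -- only the elapsed time matters here.
                pConn->Execute(_bstr_t(sSql.c_str()), NULL, adCmdText);
            }
            pConn->Close();
        }
        CoUninitialize();
    }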

DaveBaldwin,
In this case, the SQL Server is quite busy.  The actual scenario is that the client wants a concurrently-running second instance of my software to be hosted on a remote site.  Its primary (and company policy-mandated) purpose is to act as a fail-over server, but in these tests, it is also running as a peer to the first.  

The performance is dismal on this second (remote) app instance, but the primary app is running at full throttle.  However, the SQL Server box is nowhere close to maxing out the CPU meter or memory available (these guys have hardware that I can only dream about).   It really must be about bandwidth and data transport.
Dave Baldwin (Fixer of Problems) Commented:
Can you use 'tracert' to see how far away, in both time and hops, the remote server is?  The server on the local LAN should be only one hop away with minimal delay.  Firewalls that do packet inspection can slow things down because they delay the packets until they can review them.

And the effective bandwidth difference can account for a lot also if the remote connection is significantly slower.
Jim P. Commented:
Actually, the best idea is to put the work back on the SQL Server and transport just the result set, using stored procs, views and such.  But ODBC versus ADO won't make much difference.
DanRollins (Author) Commented:
My client mentioned the idea of using stored procs to speed things up, too.  I open a recordset, then use next-record to cycle through it.   I'm not at all clear how a stored proc could make that go faster.  Can you describe a scenario that could make the process faster?

I'll suggest using tracert and have him re-examine his firewalls (and anything else that stands between the two locations).

Thanks!
Jim P. Commented:
With ODBC, depending on the design of the tables/views you are generally dragging all the data to the client and then processing it there.

With ADO and stored procs you are generally sending an order to the DB to run the SP and return only the needed results.
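
Something along these lines, given an already-open ADO connection pConn like the one in the harness sketch earlier in the thread (a sketch only -- the proc name and parameter are made up, error handling omitted):

    _CommandPtr pCmd(__uuidof(Command));
    pCmd->ActiveConnection = pConn;               // the open ADO connection
    pCmd->CommandType = adCmdStoredProc;
    pCmd->CommandText = L"usp_GetFormattedBlock"; // hypothetical proc doing the joins/lookups server-side
    pCmd->Parameters->Append(
        pCmd->CreateParameter(L"@RecordId", adInteger, adParamInput, 0, _variant_t(42L)));
    _RecordsetPtr pRs = pCmd->Execute(NULL, NULL, adCmdStoredProc);
    // pRs now holds only the rows the proc chose to send back over the wire.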

But let's split it out a little to clear up some confusion: ADO can use either an ODBC DSN or a connection string built in the program.  Either method is valid; all skipping the DSN means is that you have to build, by hand, the connection string that is already stored in the ODBC DSN.  It takes practically no added time to use the DSN as the connection string.  Basically, all you are doing is substituting DSN=MyODBCCall for Server=MyServerName;DBName=MyDbName;.....
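
For example (made-up names), the two forms of the same connection look like:

    DSN-based:  "DSN=MyODBCCall;UID=appuser;PWD=secret;"
    DSN-less:   "Driver={SQL Server};Server=MyServerName;Database=MyDbName;UID=appuser;PWD=secret;"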

Now DAO can't really fire off a stored proc and take the result set. The preference for ADO is obvious.

The ODBC-versus-built-connect-string debate is sort of a red herring unless you are willing to hard-code the connect string, which is foolish even in an in-house custom app.  I will guarantee that within five years' time the server, the DB, the IP, the DB version, the user authentication, or something else will change in a way that affects it.  So the connect string will have to be read from a local DB, a config (text) file, or the registry anyway.

There won't be a massive change in speed either way, ODBC DSN or connect string.  An SP versus a direct query might make one, but I don't really see how.

My semi-OT article related to this question, about creating DSNs, will give you some of the guts of ODBC; you can see the elements of the connect string there.
Anthony Perkins Commented:
Just a word of caution (so that we do not get too confused with terminology): ADO is the old interface typically used by VB.  The driver is what counts: ODBC or OLEDB.

The author is not using ADO, but rather MFC from C++, and the question is whether they can benefit from changing the driver (if that is even an option with MFC) or switching entirely to an interface such as ADO or ADO.NET in order to use some other driver.  In other words, if, as in my view, the driver will have little impact, then the choice of interface (MFC, ADO, or ADO.NET) will have zero effect, all things being equal.

The question of whether stored procedures are any faster than, say, dynamic SQL really depends on the use.  If, for example, the dynamic SQL is doing something as trivial as SELECT Col1, Col2, Col3 FROM MyTable WHERE SomeCondition ORDER BY MySort and (as I suspect) returning thousands of rows, then, if coded correctly, I cannot see any significant improvement in performance from using stored procedures.

So the answer, as always, is: it depends.  We cannot assume that using a different interface (for example ADO or ADO.NET) or a different driver will make any difference.

I cannot overemphasize that the key lies in whether it is coded correctly.  If you return an updatable result set and proceed to make individual edits to the records, only to then send them all the way back to the database over a WAN, then sure, you might as well take up knitting.  But Dan has been in this business long enough to know that, and I am sure he knows better than to make that mistake.
Jim P. Commented:
Thanks for clarifying my confusion, AC.

Regarding the difference between ODBC or OLEDB over a distance -- I doubt there would be a difference.
DanRollins (Author) Commented:
I appreciate the help.  It was most useful to at least partially confirm my belief that the ODBC transport (in and of itself) is not the bottleneck.   My client has agreed to do some simulations to (try to) prove their point, since I was able to sound convincing in the conference.

Also, we are working on writing some SPs that may speed up one part of the process.  I estimate we may see maybe a 10% overall speedup from that, so it doesn't really solve the problem, but it shows we are working toward a solution.

BTW... I tried numerous variations of the CDatabase and CRecordset open options, including useCursorLib, snapshot, and readOnly... but there was no astonishing speedup.
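
For anyone who finds this later, the variations looked roughly like this (a sketch with made-up names, not the actual code):

    // ODBC cursor library on the CDatabase:
    db.OpenEx(_T("DSN=MyRemoteSqlServer;UID=appuser;PWD=secret"),
              CDatabase::useCursorLib | CDatabase::noOdbcDialog);

    // Snapshot instead of forward-only, still read-only, on the CRecordset:
    rs.Open(CRecordset::snapshot,
            _T("SELECT Col1, Col2, Col3 FROM SomeTable WHERE SomeKey = 42"),
            CRecordset::readOnly);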

-- Dan