1- Which database works better with an ISAPI.dll?

2- How can I increase my database performance if I use an ISAPI.dll?

Each question has a 30 point value.




Better performance - imho - is never only a matter of
application type.

The point is - imvho - how you design and implement it.
I would say that using a relational database rather than a
file-based one surely gives you much more power.

Performance can be greatly increased using stored procedures and
triggers, plus UDFs for very special tasks.

When you plan an application that makes use of databases, keep in
mind that the different layers (GUI, business and backend) must be
strongly separated.
This can also be achieved in plain C/S apps, provided that you
design them to behave correctly.

Some tips:

1) Avoid using datasets in your forms; use datamodules.
2) Never use datasets directly; instead, expose methods that abstract you
from the specific type. Example:

procedure TMyModule.AddUser( AUser: TUser );
begin
  // Use the dataset here
end;

TUser should be a class; that way you won't change the interface
part but only the implementation, making the application more
maintainable.
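To illustrate, here is a minimal sketch of what such a data module method might look like (TUser's fields and the qryAddUser query component are made-up names for the example, not from any specific framework):

```pascal
type
  // Plain class passed across the interface; no dataset types leak out
  TUser = class
  public
    Name: string;
    Email: string;
  end;

procedure TMyModule.AddUser(AUser: TUser);
begin
  // Only the data module knows which dataset component is in use.
  // Switching drivers later means changing this body, not the callers.
  qryAddUser.ParamByName('NAME').AsString  := AUser.Name;
  qryAddUser.ParamByName('EMAIL').AsString := AUser.Email;
  qryAddUser.ExecSQL;
end;
```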
3) Use threads if you can

Using threads makes things way faster since you send several
requests at the same time. Say you have to retrieve data
from several unrelated tables to join in one single form: using threads
lets you request all the necessary data in one step, without
having to join (which is always slower than a plain select).
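A rough sketch of one such worker thread in Delphi (the names are illustrative, and note that each thread needs its own database session/connection):

```pascal
type
  TFetchThread = class(TThread)
  private
    FSQL: string;
  protected
    procedure Execute; override;
  public
    constructor Create(const ASQL: string);
  end;

constructor TFetchThread.Create(const ASQL: string);
begin
  inherited Create(False); // start running immediately
  FSQL := ASQL;
end;

procedure TFetchThread.Execute;
begin
  // Open a private connection, run FSQL, and park the result
  // where the form can pick it up once every worker has finished.
end;

// Usage: fire one thread per unrelated table, then WaitFor each.
```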


4) Don't allow the RDBMS to choose an index for you

Always define an index when you're going to select one or
more fields many times, so that the database will know
which index to use in order to speed things up.
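For example, if customers are constantly looked up by last name, you might define the index yourself rather than hope the optimizer picks a good one (the table and column names here are hypothetical):

```pascal
// One-off DDL, run from any query component
qryDDL.SQL.Text :=
  'CREATE INDEX IDX_CUST_LASTNAME ON CUSTOMERS (LASTNAME)';
qryDDL.ExecSQL;
```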


5) Examine the plans generated by the database

During testing of the application, have a close look at the
plans that the database generates. This can be a huge help
in finding bottlenecks and lets you see what's happening
behind the scenes.

6) Remember that you are the one who controls things

The programmer is responsible for the performance of an
application. Someone once said, "Computers are frightening
because you tell them what to do and they do it completely
honestly". Keep it in mind :-)

7) Some optimization tips

Use stored procs for the most common queries.
Stored procedures are precompiled by the database,
so they run a lot faster.
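Calling one from Delphi might look like this (the procedure and parameter names are invented for the example):

```pascal
// The plan is compiled once on the server, not re-parsed per call
spGetOrders.ParamByName('CUST_ID').AsInteger := 42;
spGetOrders.Open;  // use Open for procedures that return a result set
```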

8) The super tip


Nothing optimizes like knowledge of the
system you are using.
It helps you design the database so that it
outperforms anything you know less well.


ImbeginnerAuthor Commented:
Andrew, thanks for your comment.

About choosing a DB:
I agree with you, but I read a book that has this section:

Web Programming in Delphi
by Bryan Valencia
Chapter 4 Choosing a Database for Web Use

Interbase: The most compelling thing about Interbase is that it's thoroughly integrated
in Memo fields using Interbase. Oracle won't do text matches in memo fields, so you have to
iterate through the entire DB. Also, with Interbase, it's easy to back up and transmit an
Interbase database, data, schema, and all.  It doesn't take excessively long to log in, so
CGI apps can be written to access an Interbase DB. It doesn't lose track of indexes, and
it's very reliable. There are some other reasons to choose Interbase*, but the one big
drawback is this... Most non-Borland developers have never heard of it. Ask a room full
of VC++ snobs about Interbase, and they'll shrug at you.  * and it's free (with Delphi C/S)!
Oracle:  I use Oracle regularly in business apps, but not for the web. Oracle is a great
system for business, but it has a few drawbacks from a development standpoint.
It takes forever to log in. I am working on a program right now for a client that takes
45 seconds from the time the user types their password to when the app is ready to come up.
In my code, all it does is call a database1.open. Now the LAN traffic has an effect here,
but as far as I can tell, this slowness would force me to write all ISAPI DLL's, or write a
middle tier.  So for now, I'll use Interbase for Web, and Oracle for business apps.

I want the EE experts' opinions on this case.
And how can I reduce the drawbacks of the Oracle DB that Bryan described in his book?

About IMHO and IMVHO:
what do these words mean?

About UDFs:
I haven't heard of them; can you tell me what they are?

About >> 2) Never use datasets directly <<:
can you describe it a bit more?

About threads:
I don't think they are a good solution for DB purposes (you add more seek time with threading).
In my view, threading is good for two different kinds of work at once (such as calculation and DB work together).
In addition, ISAPI.dll itself does threading for concurrent users.

Finally, for now you earn all of the points + 20 = 80 with an A grade.

But I can increase the points if I see good comments in this thread.

Final points = 250 + ....

>> you added more seek time with threading

That depends on how many CPUs you have available.  I typically aim for one "business thread" per CPU, plus one communications thread, one monitoring thread, and sometimes one GUI thread.

Since databases are typically disk I/O intensive, you may be able to benefit from more threads per CPU, since during the disk I/O the other threads can run.  Since all serious DBMSs keep a large buffer pool in memory, tuning the number of threads for optimal performance is not as simple as my initial rule of thumb.

You neglected to tell us the size of your user base.

For a small installation supporting up to a hundred or so concurrent users, it doesn't matter.  They're all pretty much the same.  If your database is really complex you may want something that is vertically scalable.  On x86 hardware, the most vertically scaled you can get is the AMD Opteron 844 series with 8 CPUs.

For a midsized installation supporting a thousand concurrent users, you start to see some benefit from the pricier DBMSs such as DB2, MS SQL Server, and Oracle.

For a large installation, say 10,000 concurrent users or more, you want something that is both vertically and horizontally scalable.  DB2 and Oracle are your best bets.

Linux servers tend to perform much faster and more reliably than Windows servers.  Windows is optimized around GUI I/O, whereas Linux is optimized for CPU and disk performance.  Linux is now vertically scalable from low-end Wintel hardware to mainframe sysplexes.

Microsoft runs their website on 23 ganged AS/400s, after they were unable to move to 1,200 Windows NT servers.

Interbase has some really nice and progressive features, but it is strictly for the small scale systems or simple midscale systems.  It is vertically scalable to the limit of the Wintel platforms, but is not horizontally scalable.

So far as I know, only Interbase supports searches against memo fields.  The search that was described would be one that would be avoided if at all possible, and one I would design out of my structures if possible.  If you actually need that functionality, it is a prime reason you want to support threads.  You don't want to lock out the other <n> users from getting their data because one person ties up the database for 20 minutes searching  for a pattern match (slow) in a memo field (slow).

About using Oracle as a back end for the web: Yahoo is written on Oracle.

About using a middle tier: If you are writing a web app using ISAPI.DLL, you are writing a middle tier.  That is a given.  That is what an ISAPI application is.

Typically, your ISAPI app will have its own authority for the database, and will control authentication and authorization separately from the DBMS authorization and authentication.  The application will log into the back end database, whatever it is, and hold the connection until it is terminated.

As a Yahoo user, your user ID does not give you authority to log into the Oracle database directly.  The Yahoo application uses your ID and password to look up your user profile in a set of tables and act on those profiles, under its own Oracle authority.

If you were to go into Yahoo's offices, sit down at a support programmer's desktop, and try to log into the Oracle database directly with your Yahoo ID and password, you would be denied access because Oracle does not know you.  However, the Yahoo app knows you, so you can still log in to the web application through any browser.


ImbeginnerAuthor Commented:
Any feedback from you to complete your first comment?

swift99, thanks for your comment.
>>Typically, your ISAPI app will have its own authority for the database, and will control authentication and authorization separately from the DBMS authorization and authentication.  The application will log into the back end database, whatever it is, and hold the connection until it is terminated.<<
Can you describe this statement in more detail?

Can you tell me a good strategy for authorizing users (important for me)?

>>You neglected to tell us the size of your user base.<<
Maximum: 60 concurrent users.



I really thought I had expanded on it?
Ok, anyway, here's the follow-up.

UDFs (User Defined Functions) are extensions you can attach to a database
to let it perform special actions, for example encrypting
data or connecting to the net to retrieve some data.


Expressions meaning In My Humble Opinion and In My Very
Humble Opinion.

Never use the datasets directly

I said this because using them directly in your ISAPI actions can be
counterproductive: say you change databases; you've got to change all the
actions' code, and that is an error-prone task with a high risk of
introducing bugs.

Keeping the datasets apart, and using interface methods of a data
module, keeps this risk low since you don't touch the actions' code.


ImbeginnerAuthor Commented:
>>about keeping datasets apart<<
With this approach we increase the parts shared between the various sections;
do you think this could be unreliable?

With this approach, you'll have fewer headaches when you go to modify the code.
Ideally, you could use an OPF (object persistence framework), but that would degrade performance.

The idea behind the concept is to keep the interface unaltered, which means far fewer bugs in the
first place.


Wim ten BrinkSelf-employed developerCommented:
Increasing database performance? Then forget about using Delphi and use ASP or PHP instead. Both scripting languages offer very high performance for handling recordsets and databases. They are just better optimized than your average Delphi ISAPI DLL.

But if you want to use Delphi, think about this: use many indices. Make sure the database server has lots of memory, a very fast processor and preferably a very fast SCSI disk drive. Choose the best computer you have available, and preferably buy a new one that's even better. Perhaps a system with more than one processor. A dual Pentium 4 at over 2 GHz with about 4 GB of RAM and a SCSI disk at over 10,000 RPM would give nice performance, I think...

Another trick: keep the result recordsets as small as possible. Consider the connection speed between your server and all the clients. If you have a fast connection then you can create a chatty server, possibly sending over data record by record. A fast connection makes small data packets very useful. If you have a slow connection (e.g. a dial-up internet connection) then you would prefer to develop in a more chunky way. You would have to send larger data blocks to the client and you'll need to cache them on the client side. It means reducing the number of server calls and adding more logic to the client. Synchronizing with the server becomes more important then.

And try to check if your SQL code is optimized enough. Don't return any fields that you won't use anyway. If you use any "Select * from ..." then replace them with "Select field1, field2, ... from ..." and only use the fields you need! This will reduce the amount of data traffic in your network.
Consider the use of stored procedures and views. If you just use queries then the SQL database keeps parsing and compiling your request. Stored procedures and views are precompiled, thus often faster.

And finally, if your application needs to scale well, consider using an alternative webserver instead of IIS. Or learn more about load balancing and start using more than one webserver system. Make sure you split the webserver from the database server so the webserver can stay focused on serving webpages while the database server stays focused on retrieving data. Both server types demand quite a lot of resources from the system, and if they are both running on the same system they just get in each other's way.

Finally, if you use IIS and ADO, consider using MS SQL server too. These MS systems do cooperate with each other quite well. If you decide to use a different database system, check if there's a MS version of the related database driver. Oracle for example has two flavors of database drivers. One created by Oracle and one created by Microsoft. The MS version does seem to be a bit faster.
To the DBMS, whatever you use, your ISAPI application will have a logon ID and password (say userID myApp).

Your end-user IDs will never be registered with the DBMS.  Instead, you will have an internal users/passwords table that the app authorizes and authenticates against.  Call the table AppUserSecurity; as a minimum it will have a user ID and a password entry (probably encrypted, but you can do that later).  Other things in the table might include privilege level and so on.  Note that your application security table bears no relation to the DBMS security table, so an application user cannot just log into the DBMS.

On startup, the ISAPI application establishes its connection to the DBMS using its own user ID and password.  On exit, the application closes the connection.

When invoked by a user, ISAPI application is responsible for collecting a user ID and password and looking those up against the AppUserSecurity table with a query (select * from AppUserSecurity where UserID=:UserID and Password =:Password).  If there is a row then they can proceed with the app.  If not, then they cannot use the app.  It is deliberate that the query is unable to say "why" the user cannot use the app.  What the app doesn't know it can't tell, so security is maintained.
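A sketch of that lookup in Delphi (qryAuth is a hypothetical query component; the table and columns are the ones described above):

```pascal
function TSecurityModule.Authenticate(const AUserID, APassword: string): Boolean;
begin
  qryAuth.SQL.Text :=
    'select 1 from AppUserSecurity ' +
    'where UserID = :UserID and Password = :Password';
  qryAuth.ParamByName('UserID').AsString   := AUserID;
  qryAuth.ParamByName('Password').AsString := APassword;
  qryAuth.Open;
  try
    // A row means the user may proceed; the query deliberately
    // cannot say *why* access was denied.
    Result := not qryAuth.IsEmpty;
  finally
    qryAuth.Close;
  end;
end;
```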
ImbeginnerAuthor Commented:
Thanks for your comments;
I am happy to be learning so many things from them.

>> Then forget about using Delphi and use ASP or PHP instead <<
I read somewhere that PHP, because of its scripting nature, has lower performance
than ISAPI, because a CGI or ISAPI app loads on IIS as a (precompiled) executable file.
An interesting page on the Microsoft site that compares ASP and ISAPI performance,
scalability etc. is:

>>If you have a slow connection (e.g. a dial-up internet connection) then you would prefer to
develop in a more chunky way. You would have to send larger data blocks to the
client and you'll need to
cache them on the client side. It means reducing the number of server calls and
adding more logic to the client. Synchronizing with the server becomes more
important then.<<
Can you describe it in more detail or point me to an article about it?

>>Oracle for example has two flavors of database drivers. One created by Oracle
and one created by Microsoft. The MS version does seem to be a bit faster<<

How can I find it (my Oracle version is 8.1.7)?
Very, very important:
Finally, what are your ideas about maintaining common sections between actions (critical sections
such as the datamodule, variables, arrays and the printer (for any request I should create a printed report for
our server side, etc.))?

For now:
swift99 gets 100 points
deerbear gets 80 points
workshop_alex gets 50 points

Best regards to the EE experts.
I am not sure I understand your last question.  Please clarify.

Each technology you listed is appropriate in different circumstances, and none are exclusive of any others.
ImbeginnerAuthor Commented:
>>I am not sure I understand your last question.  Please clarify<<
anyone i say or ...

For this scale of work, where the bulk of the load is likely to be in the DBMS, the scripting/programming language is only important in terms of what services it brings to the table.  This is not a performance-intensive app, with a max of 60 users, unless the database is poorly designed or implemented.

The J2EE world has proven that interpreted languages have sufficient performance to handle the majority of database-intensive, enterprise-scale web applications.  Interpreted languages typically have a performance hit of 3 to 4 times a compiled language.

BTW: J2EE as a web application server is typically not run with a hotspot compiler, so timings of the straight interpreted model have to be used when comparing J2EE web app performance.  

Each technology you listed is a good technology in its place.  None is a cure all, and all may/should be used in conjunction with each other.

critical sections - are Windows operating system level constructs used to guarantee that code is not interrupted while the critical section is in effect.  This is a necessary thing in multi-threaded programming.

datamodule - is a convenient Delphi VCL class that allows you to visually gather objects descended from TComponent into an aggregate for programming convenience.  A datamodule is a good way to organize your code, but it can also be a deathtrap if you try to use it as a "favorite hammer"

variables - yes ... variables are necessary - without them there is no point to computing.  I am not sure what opinion there is to have here?

arrays - are good constructs too where they are appropriate.  

printer - There are places for central printing and places for remote printing.  The consideration that you have to face is which is the most efficient use of your resources, and that varies from application to application, and installation to installation.  Where communications is the bottleneck, and the amount of data to be reported is large, I would be tempted to generate the reports on the server and download only the visual representation for viewing by the client.  Quickreports can support this functionality with just a little bit of conceptualization.
I do agree with the principle of isolating your business logic from both the user interaction and the DBMS layers.  

I am a firm believer in the MVC (Model View Controller) paradigm, which enforces this separation, and I even like to take it a few steps further by separating the Model entirely from the data persistence layer (typically the DBMS).

I once confused a number of Delphi programmers with an application prototype that used a Datamodule as its "main form".  It was a nifty idea, and it worked well for what it was supposed to do, but the datamodule was overkill for a controller layer.  It was too easy to start mixing the data and model layers into the controller and muddying the design, and since most Delphi programmers are not used to keeping those concepts separate it was no more than a novelty.
ImbeginnerAuthor Commented:
swift99 thanks for your comments,
workshop_alex and deerbear ,
do you have ideas about my last question ?

Hi ImBeginner,

Would you please clarify which is your last question?


ImbeginnerAuthor Commented:
What are your ideas about keeping common sections from affecting each other (critical sections such as the datamodule, variables, arrays and the printer (for any request I should create a printed report for our server side, etc.))?

When using the printer on the server side, how can I avoid keeping my server computer busy (I need a strategy)?
1. Build the report in a separate thread.  I have a logging class that is built around a thread.  It handles approximately 10,000 lines of plain text per second.

2. Ensure that you have an extra CPU on the motherboard to handle the requirements of logging in a separate thread.

3. Marshal your resources, apply read locks to all at once, use resources as quickly as possible, and release resources as a batch.
Wim ten BrinkSelf-employed developerCommented:
> I read somewhere PHP for its scripting nature has a lower performance
> than ISAPI,because CGI or ISAPI load on IIS as executable file(precompiled).

In the past, scripts used to be considered slower than precompiled binaries, but modern scripting languages are actually compiled too; the compiled code is just stored in memory instead of on disk. In general, this means the first person who calls a script has a slight delay while the script is compiled, but the second and subsequent users access the compiled code for as long as it's cached in memory. This technique brings the speed of scripts up to the same level as compiled CGIs and other executables. The advantage for scripts is that they're often smaller, thus faster to load. A script file of 10 KB versus an executable of 1 MB: there's a big difference in load time.

> can you describe it with more details or introduce me to an article about it?
I found the terms 'Chunky vs. Chatty' mentioned in the book "Microsoft .NET Remoting", which contains a useful section about distributed application development. Before that, I was familiar with the technique but didn't know the right terms for it. .NET Remoting is an interesting technique too, but it's still very dependent on Windows and requires the installation of .NET on the system, which not everyone likes.
The book also gives a nice example of these techniques. Let's say you and a colleague are working on the same project. If the two of you work in the same room then it's easy to exchange information. If you have a question you can ask it immediately and get an immediate response back. But if your colleague works in a faraway place, like another state or even another country, then exchanging information is a bit more difficult. You tend to collect questions until you feel the need for answers, and then you call your colleague to clearly explain your problems and wait for him to provide all the answers.

> How can i find it(my oracle version 8.1.7) ?
I think you already have them at this moment. I never noticed that I had to do something special to get them. I think the Oracle version is part of the Oracle Client and the Microsoft version might be part of the default ODBC installation.

> Finally, what are your ideas about maintaining common sections between actions (critical sections
> such as the datamodule, variables, arrays and the printer (for any request I should create a printed report for
> our server side, etc.))?
I learned that it is very useful to have things very loosely coupled so when one thing goes down (crashes) it won't bring down the other parts. I also know that there are many, many techniques to do exactly the same with a webserver. It starts with a good and clear design of the required functionality and then you have to determine which limitations you will have. In a loose environment you could theoretically install your whole system over multiple systems, using load balancing and other techniques to divide tasks over multiple computers. If one server is too busy, let the other server handle the next action. In general, servers aren't that expensive to use compared to the costs it might take to develop faster code.
ImbeginnerAuthor Commented:
How can I increase performance when we have only one CPU on the server side (my concurrent users are at most 60-70)?

For now, how can I configure my program to make the best use of the server-side
computer?
See items 1 and 3 above.  Batch things up.

I have previously posted a high performance logging thread that could be adapted to this purpose.

Use queues so that the logging thread can operate asynchronously from the main processes and not lose data.

ImbeginnerAuthor Commented:
My purpose is only the best use of printers for reporting.
Batch up the data that you want reported.  Get a snapshot of it off the database and into clientdatasets or value objects (memory-only constructs), dump the pointer(s) into an aggregate container object (a datamodule or a TList, for example), and add the aggregate to a queue.  Your server is now free to proceed with the next request.

If you choose to print from the server, then in a separate thread, monitor the queue for contents.  Pop the aggregate container object, generate the report immediately from the snapshot memory constructs, dump it to the printer, and release the resources in the snapshot.  The actual report generation does not impact the DBMS managed resources at all.

You could also send the aggregate container object and all of its contents to a queue on a (any) workstation.  The process and reasoning are the same.

It is also reasonable, especially where security is a concern, to generate the report centrally and then distribute only the end-result report rather than the data.

With a little attention to detail, and about 100 to 200 extra lines of code, you should be able to build this so that you can make these decisions after construction, on a report-by-report and machine-by-machine basis:

1. Determine how you will build your aggregate container
2. Determine how you will tell the report generator which report to generate - this may be part of your aggregate container
3. Build your report generator thread - this piece will monitor a TQueue for Count>0.  While count > 0, it will pop the next aggregate container from the queue, determine which report to generate, and pass the aggregate to the generate method for the specific report.  The generate method will construct a QuickReport, attach the datasources from the Quickreport to the datasets in the aggregate object, and fire QuickReport's methods to generate pages.  Initial disposition of the report (save to file, print, display on screen) will be in the aggregate container.

At this point, you have the server side report generation and printing fully operational.
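Step 3 might be sketched roughly like this (TReportJob and GenerateReport are placeholder names, and access to the shared TQueue is serialized with a critical section since Contnrs.TQueue is not thread-safe):

```pascal
uses Classes, Contnrs, SyncObjs;

type
  TReportJob = class
    // the aggregate container: snapshot datasets, report id, disposition
  end;

  TReportThread = class(TThread)
  private
    FQueue: TQueue;           // queue of TReportJob references
    FLock: TCriticalSection;  // guards FQueue
    procedure GenerateReport(AJob: TReportJob);
  protected
    procedure Execute; override;
  end;

procedure TReportThread.GenerateReport(AJob: TReportJob);
begin
  // attach the QuickReport datasources to AJob's datasets,
  // generate the pages, and dispose of the report per AJob's settings
end;

procedure TReportThread.Execute;
var
  Job: TReportJob;
begin
  while not Terminated do
  begin
    Job := nil;
    FLock.Acquire;
    try
      if FQueue.Count > 0 then
        Job := TReportJob(FQueue.Pop);
    finally
      FLock.Release;
    end;
    if Job = nil then
      Sleep(50)              // nothing queued; don't spin the CPU
    else
      try
        GenerateReport(Job);
      finally
        Job.Free;            // release the snapshot resources
      end;
  end;
end;
```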

4. Construct a mechanism for serializing the aggregate container and its contained datasets.  Once the data is serialized, it can be transmitted across any connection.  MIDAS already provides 99% of this for the TClientDatasets, so it should be just a few lines of code to handle the aggregation.
5. Construct a mechanism for receiving serialized aggregate containers and rebuilding them at the receiving end.  Once again, MIDAS already handles most of the low-level stuff, so it should be just a few lines of code to handle the aggregation.

As an alternative to MIDAS in 4 and 5, check out MidWare (http://overbyte.delphicenter.com).

Since QuickReports also includes a viewer, the report can be generated on one machine and only the binary result transmitted to the desktop.

At this point, the identical code is capable of running on any of client or server, or even on a specialized "report server", and producing a report at any location for disposition (print, save, display) at any location.  And you don't have to decide now what is best for your user - they can decide at run time.  The cost to you is just a hundred or so lines of code and some heavy thought.

It is possible to configure this so that the user requests a report, the data is collected at the database server and queued to a report server, and the report server generates and saves the report and forwards only the image to the user.

It is also possible to configure this so that all of the work happens at the user's machine, or at the database server.  Hypothetically, this could be configured so that the report is directed to a machine other than the requesting user's (say, an internal process monitoring the stats of the server itself).

Imvho, the best option for avoiding an overly busy computer is to
write an NT service that will handle printing, i.e. a layer on top of the print spooler.

Creating reports in separate threads will help a lot; then pass them to your
service and it will issue the print.


ImbeginnerAuthor Commented:
Hi, thanks for your assistance.

Finally, I split the points as below:

Best Regards