Solved

VFP - Parameterised Views - "sql: where clause is invalid" - Why?

Posted on 2014-09-11
358 Views
Last Modified: 2014-09-27
I'm trying to pass arbitrary filter code to a parameterised view. It's rejecting even simple, obviously valid examples.

I have a global variable (startit.filt_expr) which holds the filter code and the parameter in the view is simply ?startit.filt_expr

I can manually run "Select * from ADDRESS WHERE CUSTOMER INTO DBF WHATEVER" with no problem

but if startit.filt_expr is set to "ADDRESS.CUSTOMER", or just "CUSTOMER" or "ADDRESS.CUSTOMER=.T." or just "CUSTOMER=.T." and I try
"USE ADDRESS_USER_QUERY" (the name of the parameterised view) it fails every time with the error message
"SQL: WHERE clause is invalid" despite the fact that I can see startit.filt_expr is still happily sitting there waiting to be used.

Looks like I'm missing a step or misunderstanding how parameterised views work.

Kindly enlighten me!
Question by:mjacobs2929
26 Comments
 

Expert Comment

by:Olaf Doschke
ID: 40317204
View parameters are not substituted as text. You can do macro substitution, but the details change from version to version.

First an example of a working view parameter:

SELECT * from ADDRESS WHERE CITY = ?lcCity

This view query could be used after setting lcCity = "NY" and will result in ...WHERE CITY = "NY".

So you see the view parameter is only taken as a value, the same as if the view query were part of your code and could see all variables: SELECT * from ADDRESS WHERE CITY = lcCity

If you want to use an expression, you need to combine this with &. And macro substitution doesn't work with object properties: you can't do &Thisform.Macroexpression. You first need the value in a variable, lcMacroExpression = Thisform.Macroexpression, then you can do &lcMacroExpression. The same applies to your startit object with its filt_expr property: that won't work directly.

So use a simple variable, and then read about macro substitution in views.

You also shouldn't put clauses like INTO DBF into a view. If you want to persist the data the view queries into a DBF, simply use COPY TO on the view cursor. Or repeat the USE of the view; the data is already persisted in the table(s) the view queries. And you can use the view cursor as a controlsource, browse it, etc., like any DBF. There's no real need to create one on the HDD.

Defining a view query with INTO DBF, you are creating a copy of a copy of a book before reading it: if BOOK = underlying table, then 1st copy = view query and 2nd copy = INTO DBF.
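If you really do need a DBF on disk at some point, a minimal sketch (alias and file name invented for illustration):

USE address_user_query IN 0 ALIAS quresult   && opening the view runs its query
SELECT quresult
COPY TO c:\temp\whatever.dbf                 && persist the view cursor, only if needed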

Bye. Olaf.

Author Comment

by:mjacobs2929
ID: 40317286
aha. Thanks for the link. That's the kind of view I'm talking about. I have no problem with the "standard" parameterized views.

from the link, it looks like point 4
"
4. Since using macro-substitution in Views was not "intended" functionality (and is not documented), you still cannot use the much improved View/Query Designer when working with macro-substituted parameters (huge bummer), these views must be created via code (or a cool utility such as View Editor or EView).
"
might be the key to my problem. I'm using the View Designer to try to create the view, so instead of a field name I'm using an expression (startit.filt_expr), then choosing "Is True" from the logical condition dropdown and compiling the result into my DBC. The code looked kosher:

"CREATE SQL VIEW "ADDRESS_USER_QUERY" ;
                  AS SELECT * FROM  vizbiz!address Address_a WHERE  ( ?startit.filt_expr )"

but looking more closely (having read that point 4) I've just spotted the "Address_a", which doesn't look like it could work.

surely it should be simply:

"CREATE SQL VIEW "ADDRESS_USER_QUERY" ;
                  AS SELECT * FROM  vizbiz!address  WHERE  ( ?startit.filt_expr )"

or am I still missing something?

Expert Comment

by:Olaf Doschke
ID: 40317331
Read all points.

You're still missing that a) you can't expand object.property and b) you need & as in &?cFilter

Then, in regard to point 2, you have to define the variable before creating the view:

lcFilter = ".T." && that can be some useful expression later on
CREATE SQL VIEW ADDRESS_USER_QUERY AS SELECT * FROM  vizbiz!address  WHERE  ?&lcFilter

If you do this in VFP8 or earlier, then in regard to point 2 you would instead need to make sure lcFilter does NOT exist before creating the view.

But the most important point is you have to combine ? and & to get macro substitution via view parameters.

Bye, Olaf.

PS: And forget the view designer anyway. Its limitations don't just start with parameters; you can't even do a UNION with the designer. You're much better off putting together your query code in a PRG and then putting CREATE SQL VIEW Viewname AS in front of it.

Expert Comment

by:pcelba
ID: 40317871
It should be possible to define the WHERE expression in the EVALUATE() function. [Screenshot: view definition in the View Designer] Of course, you have to design the WHERE clause in code:
SELECT Files.path, Files.size, Files.dat;
 FROM ;
     files;
 WHERE  EVALUATE(Startit.filt_expr) 



Additional problems can arise when you change the filter expression... In such a case REQUERY() does not work and you have to close and reopen the view with the USE command.
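A minimal sketch of that close-and-reopen cycle (names as used in this thread, filter values invented):

Startit.filt_expr = "customer = .T."
USE address_user_query IN 0    && opening the view evaluates the filter
* ... later the filter changes ...
Startit.filt_expr = "city = 'London'"
USE IN address_user_query      && close the view
USE address_user_query IN 0    && reopen it to apply the new expression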

Do you still think SQL views are a good option? Creating a cursor sounds better to me.

Author Comment

by:mjacobs2929
ID: 40318969
Olaf, I know you can't use & with a global variable, but that's what I needed.
Fortunately pcelba's reminder to use EVAL gets around the problem.

However, using
CREATE SQL VIEW "ADDRESS_USER_QUERY" AS SELECT * FROM  vizbiz!address  WHERE  EVALUATE(STARTIT.FILT_EXPR)

I can now create the view, and, if I browse the dbc, it has all the right properties and looks proper.

But the damn thing doesn't work anyway!

It picks up every record in the table, instead of just "Customers" or whatever else I feed into STARTIT.FILT_EXPR

Incidentally, I take the point about not using the View Designer. What I do is let it do the bulk of the work, then cut and paste the result into my own code and modify it accordingly. Which is how I got the above to create the view.

On the question of using cursors, I'll come back to you on that later, but the chief problem is that the data in the cursor needs to behave EXACTLY like the underlying data in the relevant forms. Which means, for example, it has to have all the relevant indexes, relate properly to the other tables in the database, and so on. It seems easier to me just to create it as a DBF and then use it, temporarily, like the master table from which it originated, do the dirty, update the master table with any changes, delete the temp, and Bob's yer father's brother. If cursors can be made to do all that without significant recoding of what I've already got working, I'd be happy to learn the new trick.

In a sense, that (trying to learn a new trick) is what the attempted parameterised view is all about. I already have perfectly satisfactory home-grown code which can create appropriate updateable queries based on user-driven arbitrary filter code, from either free or DBC tables, but I'm trying to explore whether using PVs might be more efficient or offer other advantages.

Author Comment

by:mjacobs2929
ID: 40318982
And, just to eliminate the possibility that EVAL produces different results from ?&filt_expr, I set up a public variable "filt_expr" and tried creating the view that way. It worked to the extent of creating the view without complaint, but it failed the same way as EVAL: the view doesn't appear to take any notice of the filter code...

Expert Comment

by:Olaf Doschke
ID: 40319549
A dot in a name isn't possible, so a "name" like STARTIT.FILT_EXPR is specifying a property of an object.

As I wrote earlier: "macro substitution doesn't work with object properties, you can't do &Thisform.Macroexpression, you'll need it in a variable lcMacroExpression = Thisform.Macroexpression, then you can do &lcMacroexpression. The same applies to your startit object with the filt_expr property. That won't work."

You haven't understood this.

It doesn't depend on STARTIT.FILT_EXPR being public; it's not a variable, it's a property of an object held in a variable. STARTIT is an object with the property FILT_EXPR.

And as said, you can't use this in the view definition nor in its execution. You can keep any code that sets this property, but you need to define the view using a simple string variable lcFilter, use ?&lcFilter in the view definition, and set lcFilter = STARTIT.FILT_EXPR before using/executing the view.

Bye, Olaf.

PS: A simple example:
_screen.Tag = "messagebox('hello world')"
lcCode = _screen.Tag
&lcCode && works
&_screen.Tag && causes error

You can't use the macro substitution operator & on a property of an object.

Expert Comment

by:Olaf Doschke
ID: 40319570
In regard to cursors:
You use views; views are (a form of) cursors already, mostly called view cursors, but you can simply say cursor for short.
A query INTO DBF does not copy any index. So if you could live without indexed query results, you can also live without indexed cursors.

If not, that's no problem, because cursors ARE DBFs; you can simply index them.
Bye, Olaf.

Expert Comment

by:pcelba
ID: 40319666
The STARTIT.FILT_EXPR works in the view, but it has to be inside EVALUATE(), AND every time you change the expression you have to close and reopen the view with the USE command. The macro could also work, with exactly the same problems...

You may think about a different approach. How many records do you have in your table?

If there are fewer than, let's say, 10,000 records in the local table, then I would recommend a simple SET FILTER. A table placed on a network share is not so good for SET FILTER, but it depends on the network speed and the number of concurrent users.

SET FILTER can use macro substitution in its definition, and it has one big advantage (for you): it works on the original table, so all indexes are available and all changes are immediately saved. (This advantage can easily become a disadvantage under certain circumstances.)

So instead of creating a view, simply issue the following commands:

lcFilter = "CUSTOMER=.T."
SELECT YourTableAlias
SET FILTER TO &lcFilter

and that's it.

Of course, one disadvantage exists: the filter is ignored by SQL commands, so you have to add it to all relevant WHERE clauses... see the sketch below.
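For example, reusing lcFilter from above (a sketch; the date field is invented):

SELECT * FROM YourTableAlias ;
    WHERE &lcFilter AND invdate >= DATE(2014,1,1) ;
    INTO CURSOR crsTemp    && the SET FILTER does not apply here, so repeat it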

Author Comment

by:mjacobs2929
ID: 40320752
Sorry Olaf, I obviously didn't make myself clear. STARTIT.FILT_EXPR is indeed a property of my global object STARTIT. That's why I refer to it as a global variable, but I realise that is sloppy use of language and can easily be confused with "public variable". Clear?

And, on the cursor issue, the point is, I can't live without indexes or properly related tables, so one way or the other I need to index them.

On the substantive issue, I'm not getting any clue as to why the view, having been constructed as above (using either WHERE EVAL(STARTIT.FILT_EXPR) or WHERE ?&Filt_Expr), still fails to do the job. I create SQL or FoxPro queries like that routinely with no problem, so why would it not work as part of a DBC view? I take the point about having to re-USE the view every time the filter code changes, but that's not a problem. The problem is that the view doesn't even work the first time it's opened.

Incidentally, in answer to the suggestion about using filters: I generally deprecate them except for trivial enquiries or searching for subsets within queries. The tables I work with tend to have up to a million records and frequently over 100k, so queries are imperative, especially in networked applications.

And, as I suggested earlier, I'm not really looking for a solution to a problem. I already have code that works very efficiently at pulling whatever I want out of one or more of the relevant tables. My quest is to learn about a new tool, or rather an old tool in a new context (for me), viz. the use of parameters to pass arbitrary filter code to the view. The hope is that it will streamline my code and perhaps make it more robust. For example, I won't have to take responsibility for updating the data in the base table, and it will improve network accessibility.

Can I ask either of you if you've actually got this form of parameterised query in one of your own working applications, or are we just theorising about what OUGHT to happen?

Expert Comment

by:Olaf Doschke
ID: 40320769
Have you taken everything into account, including the VFP version, whether to put ?& or &? in front of the variable, and whether to define the variable before creating the view or not? Simply reread.

I have done such views and it works, of course.

* View definition
*-----------------
* a view on northwind data
Open Database _samples+"northwind\northwind.dbc"

* optional: a separate view database
* (could also be in the same database as the tables; I just don't want to modify the Northwind sample database)
Cd GetEnv("TEMP")
Create Database myviews

* creating the view parameter before the view (important for VFP9)
lcFilter = ".T."
* creating the view using &? (important for VFP9)
Create SQL View v_sample as Select * from northwind!customers where &?lcFilter

* View usage
*-----------
* using the view with arbitrary where clauses
lcFilter = "companyname like 'A%'"
Use v_sample in 0 alias v_customerswithA


lcFilter = "city = 'London'"
Use v_sample in 0 alias v_londoncustomers



You can name the view parameter differently, but you can't use STARTIT.FILT_EXPR in the view definition. You can simply set viewparameter = STARTIT.FILT_EXPR, and that solves the problem without changing any code addressing STARTIT.FILT_EXPR so far. So you can stay with your global variable and its properties; just copy the value to the view parameter when using it. What's so difficult about this?
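In terms of the v_sample definition above, that is just (a sketch):

lcFilter = Startit.filt_expr          && copy the property into the parameter variable
Use v_sample In 0 Alias v_filtered    && the view only sees the plain variable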

Now why didn't I come up with such a sample in the first place? a) I don't know what VFP version you have. b) You should know the how and why, so I still strongly recommend reading the wiki article.

OK?

What Pavel says is important, of course: a view is always a snapshot of data, refreshing only via REQUERY() or re-USE, not just by changing the view parameter. But you have well understood that implication already, I think.

So what's stopping you? What did you try, and why didn't you post your code so far for our review? It's quite fruitless to come back and just say "it doesn't work".

You're more following Pavel's idea to use EVALUATE(), because it works in the view designer, too. Fine:

Set Database To myviews
Create SQL View v_sample2 as Select * from northwind!customers where Evaluate(?gcFilter)
Public gcFilter

gcFilter = "city = 'London'"
Use v_sample2 in 0 alias v_londoncustomers2



Works, too. But you can't do EVALUATE("companyname like 'A%'"), because LIKE is SQL syntax rather than a valid VFP expression, so you are more limited with EVALUATE().

Bye, Olaf.

PS: Gendbc (Home()+"Tools\Gendbc\gendbc.prg") will give you code to regenerate all your views, and you can easily add parameters with macro substitution. EView was also mentioned already; it's nice for getting view code and modifying it from there.

If you want to test queries, I prefer SQL Server Management Studio or Access over VFP's View Designer.

Expert Comment

by:pcelba
ID: 40321706
mjacobs, you are trying to use something which is not well documented, or which is even not recommended... You have to realize such code can result in less app stability, because it was not tested as heavily as other parts of VFP.

I have to say I avoid views in my apps because I prefer cursors. Cursors do not need a database, and they are also more stable. I create all necessary indexes on cursors because it is very fast. (Some of them are created at the time they are needed.)

It seems Olaf has more experience in the view field, and I agree macro substitution is better than EVALUATE() because it can use Rushmore optimization. (OTOH, the LIKE operator has a non-SQL equivalent.)

100,000 or even millions of records sounds like too many for today's operating systems and networks. Networks are fast and reliable enough, but Microsoft is degrading shared file access (DBFs especially) to something "not supported or even worse"... The preferred way is client-server access.

The relevant way of FoxPro application usage today is a local desktop app, or VFP used as an OLE server.

Expert Comment

by:Olaf Doschke
ID: 40321967
Pavel, you're right, there also is the LIKE() function. But you also put your finger on the most important advantage of macro substitution vs EVALUATE(): Rushmore won't optimize EVALUATE("field=value"), even though it can optimize field = value.
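To illustrate the difference (a sketch reusing gcFilter from the samples above; note that VFP's LIKE() function uses * wildcards, not SQL's %):

* macro substitution: the filter becomes part of the SQL, so Rushmore can optimize it
gcFilter = "companyname like 'A%'"     && for a view defined with &?gcFilter

* EVALUATE() needs a valid VFP expression, so use the LIKE() function instead
gcFilter = "LIKE('A*', companyname)"   && for Evaluate(?gcFilter); not optimized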

I also rather like the successor of views, the CursorAdapter. It still lacks a visual query designer, but as already said, other tools have nice replacements, and in the end you should be able to write SQL.

Advantages of views/CAs are repeatable queries with a defined result cursor schema: you don't repeat yourself. In that aspect views also have their advantages over directly using SQL.

The only project where I used views with macro substitution was for their advantages as grid recordsource and for a less complex search mechanism, a rather small project. It was easier for the customer (also a VFP developer) to maintain the queries as views than in code, though you have maximum freedom to define any query in a variable and execute it as &lcSQL.

Bye, Olaf.

Author Comment

by:mjacobs2929
ID: 40336215
Apologies for the delay in responding. I should have mentioned that I was away last week on family business.

I'm still struggling to get any kind of view to behave as Olaf says it should, but given other comments in the thread, I think I'm losing interest in pursuing that route.

I'm intrigued by two comments pcelba makes.

First, about cursors, which I've rarely used: you tell me that they're "more stable" than views. That interests me, as I too have not been impressed with how views work. They seem inherently prone to failed updates in particular. So can you expand on that greater stability?

Also, as I've never trodden that path, a key concern would be the ability to rename the cursor to the arbitrary name I would need within the app (for purposes of relations and indexing). Can we do that with cursors?

More significantly, you say:
"Microsoft is degrading shared file access (DBFs especially) to something 'not supported or even worse'... The preferred way is client-server access."

This was news to me. Where can I read up on it?

I might be partly shielded, as the larger datasets involved are running on a very solid Novell network and thus not vulnerable to Microsoft tweaking. That said, I have a number of other clients using the software on Windows networks (some of which don't even use Windows servers).

Expert Comment

by:pcelba
ID: 40342339
Cursors are handled in memory or on the local drive. And they are opened exclusively, so they are fast and reliable, BUT you have some extra work to create the cursors and to update the DBF data when needed.

If you ask whether there are bugs in cursor processing, then I have to say "not everything is 100% gold" and VFP is not bug free... Cursors created by SQLEXEC() are not as reliable as cursors created by the CREATE CURSOR command. SQLEXEC() creates a cursor which is closer to a view than to a standard cursor, and it happened to me (under certain circumstances) that the SEEK() function failed on such a cursor. The reason is unknown, and LOCATE solved the problem... which was probably in the indexing.

You can assign any alias to a cursor, you can create indexes on cursors (even on read-only ones), you can use relations between cursors, etc. You cannot open one cursor in two data sessions (as with any other exclusively opened file).
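A minimal sketch of those capabilities (invented table and field names):

SELECT * FROM orders INTO CURSOR crsOrders      && pick any alias you need
SELECT * FROM customer INTO CURSOR crsCust
SELECT crsCust
INDEX ON custid TAG custid                      && index a cursor like a DBF
SELECT crsOrders
SET RELATION TO custid INTO crsCust             && relate two cursors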

Olaf pointed to CursorAdapters, which are, let's say, an OO extension of cursors.

Microsoft decided to cease DBF support around the year 2000, when they removed VFP from Visual Studio. We don't need any explicit paper saying this, but it surely exists. You may test it yourself: call Microsoft support and ask about some problem with a DBF file created in FoxPro...

Microsoft also removed DBF support from Office 2013.

Microsoft is not testing shared file access sufficiently in new Windows versions; thus Windows Vista and 7 were damaging DBF and CDX files accessed from more than one computer (it also happened to shared Excel sheets and Access databases). Just google for SMB2 and oplocks and you'll see.

Client-server is the ideal topology even for FoxPro applications. The main advantage is local data access. Visual FoxPro can work as an OLE server (with no CAL fees) and the speed is very good even on weak servers and slow networks. It just needs different coding, similar to C-S applications.

You are a happy man if your Novell network works without problems on Windows clients! Some Novell drivers for W7 were not so stable, and DBF file damage was described in the past.

Visual FoxPro applications will work for another 10-15 years, and then users may use some OS virtualization, like they do now for 16-bit applications under a 64-bit OS.

Xbase++ and Lianja seem to be possible FoxPro followers, but who knows...

Expert Comment

by:Olaf Doschke
ID: 40342589
If you say "I too have not been impressed with how the views work. They seem to be inherently prone to failed updates in particular.", you're confusing update conflicts with failing data writes.

If you use DBFs directly and bind to them without buffering, you're always writing your changes straight to the tables; you're working with an update strategy that doesn't scale well for many users, and one where the last change wins. Concurrent updates (and that doesn't mean just the small chance of updates in the same split second) are not revealed by directly binding to DBFs.

Client/server always makes a clean cut/separation of the three layers of an application: 1. the database, 2. the business logic (code, the core application working on loaded data) and 3. the frontend (the presentation layer, the user interface). These layers are also called tiers.
The main thing to prevent is having the frontend talk directly to the database, but that's one of the main easy ways of working with VFP. It just doesn't scale well for multi-user applications.

A web server like Pavel uses for his VFP applications typically has a frontend in the browser, and code and data stay close together on the server side. This way you rather have a series of scripts on a web server than an EXE; no state is preserved; each request from the browser triggers a restart and build-up of the code needed to create the HTML page for presentation, and a tear-down of all the objects. To make this have the look and feel of a desktop app, you have to learn quite a lot about HTML/CSS/JavaScript and AJAX.

The other major client/server design is to forget about DBFs and put your data in a database server. You load things into cursors one way or another, and views are one of those ways. You stay with a VFP EXE and can keep all your code logic; just instead of binding to DBFs, you bind to data loaded into cursors, and you have the additional task of storing the data back into the database server.

As you like relations, you won't like either of these two scenarios, because in neither can you just make the relations of DBFs do the automatic navigation between parent and child or main and detail data. You don't see live data, as you always make requests to pull a current state and only update central data at the times you save. That can also be autosave, but the main thing is you don't do it live, neither by saving every second nor by reloading the browser page every second. This aspect of non-live data is what makes both approaches scale better for more users and more data.

If you like what you do, then stay with just the two tiers of user interface/forms plus code, with data tightly coupled; but the file protocol problems and non-scalable applications will limit what you can do, and you'd better only build single/few-user applications this way.

The next level then is having a database server, a web server or application server, and a frontend, e.g. a browser. This way of decoupling things helps to build a service-oriented architecture for a bigger company, where functionalities overlap in applications and you reuse not by building applications with the same code, but by centralising operations and making them available to many frontends or tiers. Same data, same code, many users and devices/platforms. Ideally. It's typically a long way to get there.

One thing is for sure: This all needs other ways of thinking about solutions, other strategies of problem solving.

Bye, Olaf.

Author Comment

by:mjacobs2929
ID: 40343544
Both very interesting responses.

I'll go away and study the pros and cons of re-engineering to the C-S model.

Just a couple of questions and comments and I'll wrap this one up and split the points between you.

Olaf, when you refer to a database server, are we talking about something like the various flavours of SQL Server, and alternatives like Oracle?

Pcelba, your reference to cursors seems to imply that, when created, they automatically include the indexes defined for the DBF. Or am I over-interpreting what you wrote? If it is true, then that would be a significant improvement on my own code, where I have to recreate the indexes on every query before moving on to "real" activity.

And, on the question of Novell clients: yes, the first W7 client was a mess, but they fixed that after a couple of months. The current W8 client appears to be flawless. I've been using it on a number of stations for about 18 months with only a single glitch: it doesn't integrate as well as previous versions with the Windows login screen, but the effect is trivial.

Expert Comment

by:pcelba
ID: 40343598
Good to know Novell is back!

Cursors are created "naked"... so you have to create all necessary indexes and relations again. But it is very fast. (Of course, the temp folder must reside on a local drive.)

FoxPro working on a web server is represented by ActiveVFP: http://activevfp.codeplex.com/

We are using a VFP client-server app where VFP is used on both sides:
1) The server runs as a multi-threaded VFP OLE application
2) Fat clients run under the standard VFP runtime

But it needed some extra effort to make it all work together...
Clients call functions on the server and data are returned as strings. So all query results are converted to strings, zipped, and returned to clients, and clients must unzip, create indexes, and use them... Even this level of complexity does not degrade the performance compared to SQL Server. Clients can access the server via almost any network, from LAN connections down to modems. Thanks to the fact that data access is local (on the server), 150 concurrent users pose no problem. The 2 GB limitation is solved by splitting the DBF files.

I am sure this application will work another 10 years.

And when I imagine that the cost of the software licenses involved represents just ONE copy of Visual FoxPro, then I understand why Microsoft does not support FoxPro any more.

Author Comment

by:mjacobs2929
ID: 40343709
I downloaded ActiveVFP for a look. But are you really saying that a user working, for example, in my Sales Invoice module would have to pull all the data they might need to work with across the network before they could interrogate it?

I have, for instance, a neat report which builds its own query (which, like the cursors, is stored locally) and presents a year-on-year comparison of performance by an arbitrary number of sales reps, by code and account. I've just tested that on the Novell net and it took 15 seconds to create the query of 40,000 records (from a table with 850,000).

What sort of time would it take to perform that task using the model you describe?

Expert Comment

by:pcelba
ID: 40343773
To speed up something which takes 15 seconds does not make sense in this case.

OTOH, you may test how long the query takes when executed directly at the server, and then send just the result to a client. It should show a certain difference.

Author Comment

by:mjacobs2929
ID: 40344687
No, I was asking how long such a query might have taken on your VFP-based fat client/server model. I'm guessing it would be a lot longer than 15 seconds.

I can't, of course, run the same query directly on the Novell server, but on my own workstation the query took 8 seconds, so the network latency approximately halves the speed. But 15 seconds for a complex query like that is acceptable.

On a Windoze network, the latency reduces the speed further, to about one third.

Not sure that clarifies anything, but the critical question remains: if we were to throw a similar task at your fat client/server version, all other things being equal, how long would you expect it to take, zipping 40,000 records from a table of 850,000, pulling them across the network, unzipping, reassembling and reindexing them for the fat client? If the honest answer is something less than 30 seconds, I could still be interested. Anything above that and I can't justify the drop in performance.

Accepted Solution

by:pcelba (earned 250 total points)
ID: 40344754
1)
The server probably has faster disks and a faster CPU than your local machine...
The query running on the server will be faster. On that we both agree.

2)
Forget about indexes. Neither way creates indexes on the client.

3)
When executed your way, the network traffic consumed by the SQL command consists of:
a) index transfer (used by Rushmore optimization)
b) data transfer - not only the result but more (and uncompressed)

When executed my way, we are talking just about:
a) result data transfer (compressed), and nothing else

4)
Zipping/unzipping you don't need at all with your way, and that's the main difference. BUT zipping a DBF file is very fast: simply load it into memory with FILETOSTR() and zip it by calling one function from the VFPCompression.FLL library.
Now you have data representing about 1/4 or less of the original result size, and you have to transfer it over the network to the client.
The client unzips the data and saves it to a disk file (DBF).
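A sketch of that round trip (file, table, and variable names invented; I'm assuming VFPCompression.FLL exposes ZipString()/UnZipString() style helpers, so check your copy's function list):

* server side
SET LIBRARY TO vfpcompression.fll ADDITIVE
SELECT * FROM sales WHERE repcode = lcRep INTO TABLE c:\temp\result.dbf
USE IN result
lcZipped = ZipString(FILETOSTR("c:\temp\result.dbf"))   && roughly 1/4 the size
* ... return lcZipped to the client as a string ...

* client side
STRTOFILE(UnZipString(lcZipped), "c:\temp\result.dbf")
USE c:\temp\result IN 0    && then index and use it like any local DBF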

So the resulting difference depends on the network speed first, and then on the server speed. If you have a slow server and a very fast network, then the difference is marginal, because the overhead caused by zipping/unzipping degrades the whole process.
If you have a fast server and a slow network, then the time saved by the zipped transfer is much higher than the time used for zipping/unzipping.

To say how long it would take is impossible. If you post some data about the resulting DBF size and your network speed (in MB/s), then we may calculate more... BTW, do you need all 40,000 records on the client's side? What are they doing with such an amount of data?

Everyday experience says zipping and unzipping do not take much time, but I have no exact measurements now. Our application uses LAN, local data access (for development), and slow cell modems across the whole country. Modems need zipping for sure, and skipping this step just for the fast connections would mean additional programming (a few lines).

A further trick can be to retrieve the zipped result data from the server in the background, in small chunks. You may display the first set of data immediately, and the rest can be transferred when needed, or continuously. This is not possible with a file server.

The only problem is when users don't enter selection criteria and the server sends the whole data table (when the code allows it)...

BUT, again, the whole C-S data access needs a lot of programming... which is easier to do when you are using SQL Server than a VFP OLE server.

And remember, a VFP OLE server requires Windows Server...

Assisted Solution

by:Olaf Doschke (earned 250 total points)
ID: 40344940
>Olaf, when you refer to a database server, are we talking about something like the various flavours of SQL, and alternatives like Oracle?

Yes, and mainly I have experience with MS SQL Server (often simply abbreviated to SQL Server) and MySQL, with small experience of some other flavors.

Pavel has explained what the advantages are. With a query in VFP via views, you also retrieve data into a view cursor, so you already know and use cursors. And like a directly executed query, a view only makes use of the indexes; it doesn't copy them. The query result may look totally different and only contain part of the data, so the index file won't match the query result cursor for this reason alone, besides others. So you need to index a view or cursor after retrieving the data if you want to make relations between cursors or run further optimizable queries on those cursors. But indeed, if the query results are large, that's a sign you should rather do the work on the server side, if you have already programmed in the client/server architecture. And even if not, such cases could be handled by a program running on the file server; even if you stay with DBFs, you remove the network bottleneck. This is what ActiveVFP could do, but also a separate EXE which runs there. Important: runs there. If you start an EXE via a mapped network drive, it runs locally on the client again, as your main application does.

Like Pavel says, the big difference using a server (in his case a COM server running on the IIS web server side; in the case of a database server, the database service) is that the server hardware executes your request.

Most often the server has a faster CPU and more memory, as Pavel says, but it's also shared by all clients. Using the clients' CPUs can be an advantage, but then you have the network bottleneck: as no server-side code executes your query, the VFP SQL engine has to cleverly analyse the query for possible (Rushmore) optimisations and first look into the index files. So you have file access and network traffic before you retrieve the first record. If everything is fine-tuned, you can have surprisingly little additional file access compared to knowing the record numbers you need to read in advance (essentially that is what index usage results in: a list of record numbers to read).

As you said you want indexed results to make relations, you are talking about the VFP-exclusive technique of "live queries" or "live joins". You just USE two tables (not loading any data), you set an index order on one and set a relation, which joins the two tables. Now a record pointer movement in one table causes the record pointer in the other to move according to the relation. This is not only deferred until the moment you move in the first table, and therefore has the advantage of not needing any start-up time to see the first result(s), but it also always reads live data from the DBFs. If you start this in the morning and keep it open all the time, all records added, changed, or removed by other users will be reflected in your form.
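A minimal sketch of such a live join (invented table and field names):

USE customer IN 0                     && parent, no data loaded
USE orders IN 0 ORDER TAG custid      && child, ordered by its custid index
SELECT customer
SET RELATION TO custid INTO orders    && moving in customer repositions orders
BROWSE                                && the child record pointer follows live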

A query is always just a copy of data, no matter if you run SQL SELECTs in VFP on DBFs or on a database server. You then have a small subset of the data locally, which you can also index and use in the same way with relations, but the live data advantage is lost. Nevertheless, this is the way that scales better. You don't clutter the network with reading back and forth in DBF files; you concentrate the reads in the (hopefully) optimized query, and then the network can serve more clients.

So in the end all technical solutions have their pros and cons.

But just to give an example of what client-server means, take a look at Google's search box. You type something and get suggestions of what you might want to search for. This is done by querying data on the server, not by using tables and relating them, not by setting filters or things like that. I can only assume it's also not simple SQL and not a simple database behind it, but it works within your internet bandwidth and looks like IntelliSense or incremental search techniques. It takes a lot of programming, as Pavel says, but it can feel very fast and live, as if most of the responses you get while typing had already been preloaded.

It's said CPUs mostly spend their time waiting for human input. That's part of what makes this feel so fast, but the bottleneck of internet bandwidth is added to the huge number of concurrent users, and it still is very fast.

You can have an almost no-code approach to VFP development if you simply bind data directly to controls. You only set a grid's RecordSource to a table and some textbox ControlSources to single fields, then you only put the table in the DE or USE it, and no code is needed to show the data as a list in the grid, scroll through it, select a record, and let the textboxes show the fields of the selected record. That takes literally 0 lines of code. It's impossible to do the same thing that fast with server-side data: if you query the full table into a local file and then bind to that, you have a long start time; if you only load a few records, you have a lot of code that needs to decide what data to fetch and react to scrolls and other actions to load further data. This is all done under the hood of the grid control: it only reads from the DBF what it displays. And you don't have that in the client-server architecture.

That's not to say VFP always wins. The disadvantages have been explained already, too. And the best of both worlds can be achieved by fetching more than just what you need instantly, but less than everything, and then working on this local data (cursors) the way you do with all data in VFP. This way you can recycle most of your code and only need to add code to handle reading from the server and writing back. That includes indexing the local cursors again and again, and other things you won't do with DBFs.

This of course still means using queries. But queries are mostly needed for another aspect of databases anyway: you store data normalised, with redundancy removed, and put it together for display and modification. E.g. you have a cities table with one NY record, and if you query addresses, your result will most probably have many records displaying NY as the city of the address, while you store addresses only referencing NY as their city by a key. If you make data normalisation one of the most important rules of your database, because it's done for data integrity and other such high-quality properties, you need to put data together anyway, and then it's no big advantage to be able to directly show a table inside a form. That's also a reason client-server is the enterprise solution for working on data.

Bye, Olaf.

Author Comment

by:mjacobs2929
ID: 40345800
Thanks, chaps. Plenty to think about and new tricks to learn.

Just in answer to pcelba's question re why the 40,000 records are needed: it's a fairly typical requirement for users of my software that they spend a considerable amount of time performing fine-grained analysis. In the sales rep year-on-year example, they need to know the volume and value of sales, by product, made by each rep to each of their accounts for the most recent 12 months, and to see the results for the previous 12 months alongside. That requires every single line item, for both periods, to be retrieved for the analysis to be possible. Furthermore, all the data has to be available prior to the start of the analysis, so we can't cheat by starting it and allowing the remaining data to arrive later. Nor, indeed, given that the network query only takes 15 seconds, do we need to.

My conclusion, following this enlightening thread, is that I'll stick with pure VFP for this particular application. It's already close on 2 million lines of code, and the prospect of re-engineering that for client-server looks like far more effort than it's worth, particularly as my target market is SMEs with, typically, fewer than 50 workstations, for whom VFP on a LAN seems to be more than adequate. But I do work on other projects, and some of those do indeed need scaling to, potentially, millions of users, so I'm going to start the process of migrating that kind of development to the client-server model.

I now propose to close this question and split the points.

Thanks a lot for sharing your insights.

Expert Comment

by:pcelba
ID: 40345967
Yes, porting an application is always very expensive... but that's life.

I also like Olaf's comments.

Thanks for the points.

Expert Comment

by:Olaf Doschke
ID: 40347345
First, thanks for closing this, it was a nice discussion.

>fine grained analysis
As a user you'd typically not do analysis of so many records in a form, but by exporting them to e.g. Excel and working on them there. Well, Excel is always the example with the bad reputation; there are other things, like software specialised in statistical analysis.

I also have to do some of these yearly exports, and I wonder if it wouldn't be faster to pre-aggregate most of the data in the different ways it needs to be viewed. But it's true: you only get ideas of what to look at from the detail; with just monthly or even quarterly key values, you still don't know what to change to make the next quarter or year a bigger business success.

I assume it's that way because scrolling through 40,000 grid lines will not give you an overview; you'll use that data to look at it from different angles, and management doing that isn't bad at it, they just should sometimes work more directly with database experts. Another hint at what technology offers for this kind of number crunching is a different form of MSSQL databases: Online Analytical Processing (OLAP) cubes.

Bye, Olaf.