SQL SERVER DSN QUESTIONS

Eileen Murphy asked:

Need some advice.

Is a local File DSN (ODBC to a SQL Server) a better solution than a network-based File DSN, or a System or Machine DSN?

To access the application, the users click on a desktop shortcut. I use Robocopy to copy the front-end to a local folder (if it has changed) and then open it. I was thinking I would add a copy of the File DSN to the same batch file so the users always had the most current copy available locally.

When I tested the batch file, the permissions didn't copy down, so none of the data was editable; when I manually changed the DSN file's permissions, the data became editable using that DSN.

I have a client with a mixed (OS and Office) environment and multiple locations. The remote-location users VPN in, which is causing a slew of other issues, but my focus is on making the Access apps play nice with the SQL Server (v 17.9.1).

They presently have SQL Server 2008 R2 Standard but are installing a Windows Server 2019 Standard box with updated SQL Server software (v. 18), and I'm in the process of migrating data over. Because of the remote locations and the performance (very poor where the database is concerned, and it frequently locks up), I'm looking for suggestions.
Darrell Porter replied:
Would it make more sense to use a web front end for the SQL database rather than using Access?
If Access is the necessary tool, have you considered RDP sessions instead of locally-installed Access to reduce bandwidth requirements?
It sounds as if the actual underlying issue is bandwidth utilization.  The only way to resolve this is to either move all queries to server-side or shrink the client footprint.
I wish I could give you a better answer.
I, too, live in a first-world country that has third-world Internet service...
ASKER CERTIFIED SOLUTION by Dale Fye (solution text available to members only).
Just to add one more option to the table: you could have some kind of configuration file (.ini?) stored on a shared drive and use it to perform all the checks/actions -- check the server, credentials, and tables, change the linking if needed, and so on.
To expand on it, you could have an internal webpage that serves as a portal to download configurations, etc.
Eileen Murphy (ASKER):
As always I get the help I need from you Experts. I cannot thank you enough.
You are welcome!
Hoping I can get some additional help by writing a comment here -- I am implementing DSN-less connections and it is fantastic. I am having some issues and thought I'd ask y'all if you knew what I was doing wrong and/or could answer my questions.

1. Is it good practice to disconnect tables when forms close?

2. I am getting "can't save design changes... another user has the file open..." -- but I am in a new db with no other users connected to it. I suspect this has to do with some connection to the SQL Server that I am not releasing? There is no lock file on the front-end I'm working in. Has anyone else experienced this?

3. I have some forms open with pass-through or SQL Views used in combo-box lists and subforms, etc. In testing I'm connecting to all the required tables/views, etc. on form open and that's working great.

4. Should I link all tables on the startup of the app or is it smarter to open (and close?) them as needed?
Are you coding the connections, or are you using my Linked Table Manager add-in that I mentioned in my previous post?
Hey Dale -- Somewhere in this thread I found a link to some examples of DSN coding -- this is what I'm using:


Public Const SQL1 = "ODBC;Description=SQL;DRIVER=SQL Server;SERVER=SQL1;UID=dev;Trusted_Connection=Yes;DATABASE=Data1;LANGUAGE=us_english;Network=DBMSSOCN"

Public Const SQL2 = "ODBC;Description=SQL;DRIVER=SQL Server;SERVER=SQL2;UID=dev;Trusted_Connection=Yes;DATABASE=Data1;LANGUAGE=us_english;Network=DBMSSOCN"

Function AttachDSNLessTable(stLocalTableName As String, stRemoteTableName As String, stServer As String, stDatabase As String, Optional stUsername As String, Optional stPassword As String) As Boolean
On Error GoTo AttachDSNLessTable_Err

    Dim db As DAO.Database
    Dim td As DAO.TableDef
    Dim stConnect As String

    Set db = CurrentDb

    ' Drop any existing link with the same local name
    For Each td In db.TableDefs
        If td.Name = stLocalTableName Then
            db.TableDefs.Delete stLocalTableName
            Exit For
        End If
    Next

    ' Build the DSN-less connection string from the arguments
    ' (rather than a global), so the server/database passed in are used
    If Len(stUsername) = 0 Then
        ' Windows (trusted) authentication
        stConnect = "ODBC;DRIVER=SQL Server;SERVER=" & stServer & _
                    ";DATABASE=" & stDatabase & ";Trusted_Connection=Yes"
    Else
        ' SQL Server authentication
        stConnect = "ODBC;DRIVER=SQL Server;SERVER=" & stServer & _
                    ";DATABASE=" & stDatabase & _
                    ";UID=" & stUsername & ";PWD=" & stPassword
    End If

    Set td = db.CreateTableDef(stLocalTableName, dbAttachSavePWD, stRemoteTableName, stConnect)
    db.TableDefs.Append td
    AttachDSNLessTable = True

    Forms!form1!ODBCString = stConnect   ' assumes form1 is open with an ODBCString control
    Set td = Nothing
    Exit Function

AttachDSNLessTable_Err:

    AttachDSNLessTable = False
    MsgBox "AttachDSNLessTable encountered an unexpected error: " & Err.Description

End Function

Example of my call to connect a table:

If AttachDSNLessTable("tb_CompanyInformation", "dbo.tb_CompanyInformation", strSQLDB, "Data1", "", "") = True Then

End If
Going to post a new question because I'm having other issues :)
1.  I generally just link the tables at design time (to a dev database which matches the structure of the production database).  I do this using my Linked Table Manager add-in, because it is easier than writing the code for each application I develop, which is what I used to do. I do all of my development in this configuration and don't ever actually "drop" any of the linked tables.

2.  When I'm ready to deploy my application, I use my Linked Table Manager to relink the currently linked tables to the tables in the production database and deploy the application for my users.

3.  I do know some developers who think they gain some degree of security by dropping all of the linked tables at the end of each session (when you close the application), but if you do this, you must maintain a list of the tables to be linked, along with the database they should be linked to.  If you only grant permissions on the SQL Server based on Windows Authentication, though, users should not be able to access the data either way, unless someone with access fails to lock their computer and another user works from that machine.

4.  There might be occasions where you use the same FE application with multiple backends.  For example you use the same application for different clients, and must have physical separation of the data (you cannot simply have a clientID field in your table to identify data belonging to each client).  In this case, I will implement code similar to what you are using and allow the user to select the BE that they want to connect to, but this is rare, and in these cases, I prefer to have two identical FE files linked to the separate BE databases on SQL Server.
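A minimal sketch of the relink step described above, in DAO. The server names are hypothetical placeholders, and this assumes the existing links already use DSN-less connect strings:

```vba
' Relink every ODBC-linked table from the dev server to the production
' server by rewriting its DSN-less .Connect string.
' "DevSQL" and "ProdSQL" are placeholder server names.
Public Sub RelinkToProduction()
    Dim db As DAO.Database
    Dim td As DAO.TableDef
    Set db = CurrentDb
    For Each td In db.TableDefs
        ' Only touch ODBC-linked tables
        If Left$(td.Connect, 5) = "ODBC;" Then
            td.Connect = Replace(td.Connect, "SERVER=DevSQL", "SERVER=ProdSQL")
            td.RefreshLink   ' re-establish the link with the new string
        End If
    Next td
End Sub
```

An add-in like Dale's Linked Table Manager wraps this same idea in a UI so you don't have to maintain the code per application.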
Wow. Thank you so much for taking the time to give me a detailed response. I really appreciate it. Let me ask you this: is there any benefit to killing the connections, in terms of the burden left on the server, network traffic, etc.?

For example: let's say a form has 100k records behind it -- filtered or unfiltered -- and there are 100 people opening different instances of this same form (from their own front-ends). If I disconnect/kill the connection when the form is closed as the users move around the application, would that not reduce traffic some?

In another situation, I have certain users who will always ONLY get to see their subset of records, so a filtered recordset could be provided to them, which could also reduce the network load, no?

Besides moving to a new server, this client wants to improve performance overall, and I'm trying to think of all the ways in which I could do this. They frequently experience bottlenecks and freezing when certain large reports/queries are run, especially now that they have a lot of remote users logging in...

Again -- thanks for letting me pick your brain!!

Eileen
I would not bind forms directly to tables or to unfiltered queries.  An Access form bound to a table or unfiltered query opens as soon as it has sufficient data to populate it, but then sits there with a straw, sucking data from the server until the entire recordset is brought down.  This is the single biggest difference from old-style Access apps, where tables or queries with no criteria were used and all filtering happened locally.  The point of using an RDBMS is to restrict, as much as possible, the data that is brought down from the server, so queries with criteria are used to let the server do the heavy lifting and keep the excess records off the network.
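To illustrate, the edit form can stay bound to a criteria-restricted record source and be opened filtered to a single record, so the server only returns one row. The form, control, and field names here are hypothetical:

```vba
' In the search/list form: open the edit form for the one selected
' customer instead of binding it to the whole table.
' frmCustomerEdit, lstResults, and CustomerID are illustrative names.
Private Sub cmdEdit_Click()
    Dim lngID As Long
    lngID = Me.lstResults        ' primary key chosen in the list
    DoCmd.OpenForm "frmCustomerEdit", , , "CustomerID = " & lngID
End Sub
```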

Most of my applications have several different search forms.  Each major entity has its own search, so Customers, Providers, Employees, etc. would each have its own search form.  The SQL behind the search is built on the fly: if one result is returned, the edit form is opened, but if multiple results are returned, an intermediate list form is opened.  The user then clicks (or filters further) on the record he wants to view/edit, and the edit form opens.

If you still have a copy of the DEA app I built, it includes a form that relinks tables and uses the DSN-less method since as you recall, the app was sold to the public so we never had any control over their environment.  It was custom for this app so it does something that no other relinker does in that it allows you to swap between Jet/ACE and SQL Server.  It handles this by keeping a table of linked tables with remote and local names so that the RDBMS tables can be renamed to always be the same as their Jet/ACE equivalents.  The client had the option during install of using SQL Server (or some other RDBMS) or ACE for their data store.  

I also have two more generic relinkers that work with either an RDBMS or Jet/ACE.  The ODBC relinker relies on a table of sources, so each time you have to add a new connection string, you can type it once and add it to the table.  It relinks ALL tables from a single source at one time.  I was prompted to build this form when I had an app that linked to two different SQL Server databases and we had three environments: Test, Quality Assurance, and Production.  This particular version of the relinker also relinks all pass-through queries, so if you are using those, this might be of interest to you.  If you don't relink your pass-through queries, they will continue to execute against the old data source, which could be a disaster.
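A bare-bones version of that pass-through relink, assuming DAO (the new connect string is supplied by the caller):

```vba
' Point every pass-through query at the new data source by rewriting
' its .Connect string. Only pass-through QueryDefs have a non-empty
' Connect property, so ordinary queries are skipped.
Public Sub RelinkPassThroughs(ByVal stNewConnect As String)
    Dim db As DAO.Database
    Dim qd As DAO.QueryDef
    Set db = CurrentDb
    For Each qd In db.QueryDefs
        If Len(qd.Connect) > 0 Then
            qd.Connect = stNewConnect
        End If
    Next qd
End Sub
```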
Eileen,

Pat has answered most of your questions.  I have inherited a number of applications which were migrated from Access to SQL Server without any of the modifications Pat mentions.  In many cases, these are actually slower than using the Access BE.

When I get these, I encourage my clients to let me optimize them, especially the most egregious forms with the most records.  Sometimes I will create a local table (tblClients_Local as an example) to store a ClientID and ClientName that I intend to use in numerous combo boxes.  This will speed things up some, and I generally do this when the application loads, with tables that do not change often.  If you have users who will be in the database for long periods of time, then you might add an option somewhere that allows them to update the local tables.
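A sketch of that load-time refresh, using the tblClients_Local name from the example above (the linked server table name tb_Clients is my assumption):

```vba
' Refresh the local cache of a small, rarely-changing lookup table at
' application startup, so combo boxes can bind to the local copy
' instead of hitting the server each time they drop down.
Public Sub RefreshLocalClients()
    With CurrentDb
        .Execute "DELETE FROM tblClients_Local", dbFailOnError
        .Execute "INSERT INTO tblClients_Local (ClientID, ClientName) " & _
                 "SELECT ClientID, ClientName FROM tb_Clients", dbFailOnError
    End With
End Sub
```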

As Pat mentioned in her most recent post, ensuring that you relink the pass-through queries is also critical when working with SQL Server.  This is a feature which is built into my LTM.  All you have to do is select those queries in the data source and Select All in the list at the bottom of the page, then relink.
If you are hunting performance, then the key is "LESS".
The less data you are returning to the users, the faster the application will run.
So you have a table with 100K records: no matter what, you should never allow the users to pull down all the data, only what matters. Even filtering by a single letter will make a great difference.
Last but not least: if you have recurring data whose modifications are scarce, then the use of local temp tables is necessary. Why go all the way to SQL Server to get, e.g., a list of countries when you can pull it once per session, or with clever design even less often (checksums)? Whatever you can avoid, just avoid.
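One way to implement the checksum idea: a temporary pass-through query asks the server for a single aggregate checksum, and the local lookup copy is re-pulled only when the value changes. The table name is illustrative, and SQL1 is the connection constant shown earlier in the thread:

```vba
' Cheap change detection for a rarely-changing lookup table: one round
' trip returns a single number instead of the whole table. Compare the
' result against the value saved at the last refresh.
Public Function LookupChecksum() As Long
    Dim qd As DAO.QueryDef
    Set qd = CurrentDb.CreateQueryDef("")   ' temporary (unsaved) QueryDef
    qd.Connect = SQL1                       ' DSN-less pass-through connection
    qd.SQL = "SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) FROM dbo.tb_Countries"
    qd.ReturnsRecords = True
    LookupChecksum = Nz(qd.OpenRecordset()(0), 0)
End Function
```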
Thank you, thank you, thank you... I am heeding all of your advice and suggestions. These apps of course are in production and have been for eons, so I have to work in copies and then test things myself -- and working on the server doesn't give me a taste of what the users see. So I remote into machines at all of their locations to test things. We have issues with the VPN at one location, which is being addressed by others, and it may be changed to Remote Desktop for those 30+ users. So, as I'm sure you have all experienced, this is challenging and won't be done quickly. Once I get the foundation (data access) rebuilt, I'll start renovating the structure :)