John Sheehy

asked on

Best Database Practice

This isn't a question per se but more of a request for best-practice methods for tables and databases.

I have been working with databases for well over 20 years now, and everyone has their own style for creating, maintaining, upgrading, and planning for growth.

Since I am almost 100% self-taught, I wanted to get the view of experts like yourselves.

Here is the scenario:

1) Upsizing to SQL is not an option, nor is using SharePoint
2) The database is in Microsoft Access 2010 format (ACCDB)
3) There is a front end and a back end

Here are the questions:
1) Should all look up tables be in a separate database?
2) Should all working data be in a separate database?
3) Should a memo field be in a separate table?
4) Should attachments be in a separate table?

5) Should the auto number be sequential or should it be random as far as ID goes?
6) Naming conventions:
     a) Should two words be separated by an underscore, as in Last_Name, or should it be one word: LastName?
     b) Should saved queries be used, or should VBA be used to execute a query?  Is there a performance difference?
7) Performance: What are the best practices to get the most out of this type of application?


The database I am working with has all the tables in one database: the back end.  The front end holds the queries, forms, reports, macros and modules.  I am worried about the size.  There are a lot of attachments, and that has increased the size of the database to almost 500 MB.  I know Access can do 2 GB but I really want to avoid that.

Also, if I did a separate database for the attachments, how would maintenance be done on that database, like compacting it?

I want the customer to be able to customize the database as much as possible, like choosing the path for attachments and pictures and where the front end is stored, and to be able to change it as needed.

What are the best practices for fields?

I have a lot of books for Access and I have read them.  I'm asking others to increase my skill set and see what really works in practice.  Some concepts work great in theory and in a lab, but when they are fielded they seem to fall short.  Like replication.

Thanks

John
ASKER CERTIFIED SOLUTION
Jim Dettman (EE MVE)
This solution is only available to Experts Exchange members.
John Sheehy (ASKER)

Jim,

It's good to see you are still active on this site.  You have saved me numerous times in the past.  And I did not forget about the coin.  Right after the last time you helped me I broke my back and was in the hospital.  Took about 5 months to learn to walk again.  Still not that good.  But I'm 40 so I blame it on my age.  LOL

You mention RI; what exactly is that?  (Referential integrity?)

You mention a Diary system?  I have not heard of that; can you explain it a bit better?

And the Naming convention you mentioned is one that I have been trying to adopt, which makes a lot of sense now.  Just didn't know there was a name for it.

When you say always keep the connection to the BE open, do you mean keeping the FE open all the time and never closing it?  Or is there another way to go about keeping the connection alive?

When using sequential is it best to start from 1 or from 1001?  Or is it developer specific?

John
SOLUTION
This solution is only available to Experts Exchange members.
I'm sure you'll get a lot of variations, but here are my answers.  I started out in the mainframe world working with ISAM and then VSAM and eventually IMS and DB2 before I found Access in the 90's.
1.  No/maybe.  Referential integrity can only be defined for tables in the same database, so you would lose RI if you moved them to a different BE.  However, under some conditions, I copy the tables and store them in both the FE and BE.  The BE tables are the official copies, and those are what the maintenance forms are bound to.  Those tables are also used whenever a join is used in a query.  But having copies in the FE that you use as the RowSource for combos or listboxes can make them quicker to load on your forms.  When I do this, I generally refresh the tables the first time the FE is opened each day, and I give the user a button to force a refresh if they need it.
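That daily refresh can be sketched in a couple of DAO statements; the table names here (tblStates as the linked BE table, tblStates_Local as the FE copy) are hypothetical examples, not from this thread:

```vba
' Refresh a local FE copy of a lookup table from the linked BE table.
' tblStates_Local (local copy) and tblStates (linked BE table) are
' hypothetical names used for illustration.
Public Sub RefreshLocalLookup()
    With CurrentDb
        .Execute "DELETE FROM tblStates_Local", dbFailOnError
        .Execute "INSERT INTO tblStates_Local SELECT * FROM tblStates", _
                 dbFailOnError
    End With
End Sub
```

Wire that to a "Refresh" button and call it once on FE startup each day, as described above.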
2.  I'm not sure what this means.  The only time I use a second BE is if I have to import data from a remote source and replace it on a regular basis.  Large companies frequently use SAP or some other mega-application to run the backbone applications for the company, but SAP doesn't ever allow links to "their" tables.  They force you to do extracts.  In cases like that, I build a new BE every day for the imported data so I don't have to worry about compacting.  The data is never updated and doesn't participate in any RI, so it can be isolated.
3.  I've never had trouble but if the memo is not used all the time, you will probably get better performance if you use a separate table.  You can make the memo control a subform or you can join to the memo table for the forms/reports that need the memo field.
4.  I don't use attachments or any of the other abomination data types.  But if you use them, put them in a separate table so they'll be easier to ditch if you ever do have to upsize.
5.  I prefer sequential, because in some cases the user does see the ID, and random numbers can be very large and even negative, so the user could never be shown those.
6a.  My personal preference is CamelCase.  That is based on my typing skills.  Even after all these years, it still disrupts my flow to have to type an underscore.  So I only use the underscore on rare occasions when I want to highlight a separation.
6b.  My personal preference is querydefs.  There is a performance difference, but it is small.  Querydefs are compiled, and the execution plan is saved when they are compiled.  Embedded SQL must be compiled each and every time it runs, so the execution plan must be calculated.  Access has gotten better, though.  In earlier versions, in addition to a slight timing hit, you got a huge bloating hit, since it takes a lot of workspace to calculate an execution plan.  That problem seems to have been cured in either A2007 or A2010.  The other reason I use querydefs is that once I trained myself to use the QBE, I found it far easier mentally to drag and drop than to try to remember all the table and column names.  I remember back in the 80's, when I first started working with DB2, how I dreamt of a tool like the QBE and wished I had one.  Now I do, so I use it. :)  Also, if you are disciplined in your naming convention, you can create base queries and reuse them in various places.
7.  The vast majority of my apps either start out with SQL Server/DB2/Oracle/whatever BE's or get converted after the fact, so I always design with client/server methods.  They do not make a significant difference when your tables are Jet/ACE, but I can convert ANY application I built to SQL Server et al. in an hour or two.  The most basic concept is that all forms are bound to queries, and all queries use criteria to limit the rows and columns returned.  I never use Access filters.  When I provide filtering capability on a form, I do it by adjusting the where clause and rerunning the query.  A huge factor when working with remote tables is limiting the amount of data that crosses the wire, and that is the best way to do it.  The form requests one main record at a time.  Of course, there are cases where we want to bring back multiple records, such as all the orders this week or all the orders for a particular customer; for those, I use intermediate forms that bring back minimal information.  I give the user a button or "hyperlink" to click on to pick a row and open the main form.
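A minimal sketch of that filtering style, with hypothetical form, query, and control names (the key point is that the criteria live in the SQL, so only matching rows cross the wire):

```vba
' Re-point the form at a narrower query instead of applying an
' Access filter.  qryOrders and txtCustomerID are hypothetical names.
Private Sub cmdFilter_Click()
    Me.RecordSource = "SELECT * FROM qryOrders " & _
                      "WHERE CustomerID = " & Me.txtCustomerID
    ' Assigning RecordSource requeries the form automatically.
End Sub
```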

All the BE's should be set to compact on a regular basis.  Use a tool such as the one sold by FMS, Inc.

It is very important to be consistent.  That saves a lot of brain cells for you and will keep your successor from cursing your offspring to eternity.  Be orderly and use a naming convention.  Be boring in your coding style.  "Elegant" isn't required but efficient is, so watch what you do inside code loops and queries.  I tend to use the curmudgeon approach in code because when I leave, I don't know who will be taking over the app I created.  But whoever it is, I want them to understand it first and foremost.

I don't write a lot of code comments, but I do document anything I think someone new might not understand.  I also remove dead code and unused objects.  I start by prefixing them with "xxx" and then, if nothing breaks after a few weeks, I get rid of them.  I've worked with and after people who thought a line of code written was a line of code that must be saved for posterity, and their code was impossible to read there was so much commented out.  Code is actually pretty easy.  It is deciding what needs to be done that is the difficult part of our jobs.  If you want to save old code because you might want to use it later, put it in a separate "save for posterity" module where it won't get in the way.

And my final piece of advice is: writing code should not be your first choice.  I figure I've written my million lines of code, so I don't need the practice.  What I love most about Access is how much it will do for me essentially for free by simply tweaking settings and making use of built-in features.  For example, I never add navigation buttons unless the client insists.  Access has perfectly good navigation buttons that are free, so that is code I don't need to write or to test.  The most horrifying app I ever took over was written by an experienced C++ programmer.  He didn't understand how Access worked and so was trying to make it work like a C++ app.  There were places where there was one line of code, SEVEN classes deep.  Now that is ridiculous.
Always go with the flow.  Do things the "Access" way.
Nick,

You brought up a lot of interesting facts I never considered before.  I didn't take offense at your bonehead statement.  As a matter of fact, I did store attachments in a table.  This was done because they kept being deleted from the folder.  Now that I am working on the permissions for the folder, this can be discontinued and the path can be stored again.

The lookup tables would be static, and I think storing them in the front end would be a performance enhancer, as much of what a user does is coming from a drop-down.

I try to stick to the 8.3 naming convention.

Now, as far as integration goes with other Microsoft applications such as Word, is it best to store the Word templates in their own folder or in a table within the database?

John
OK, well I don't want this topic to go on forever, but I received great advice and great do's and don'ts.  I will split the points accordingly.  I love learning new things, and you have all taught me something new today.  Thank you.

John
I've requested that this question be closed as follows:

Accepted answer: 0 points for John's comment #a40501402
Assisted answer: 167 points for Jim Dettman (EE MVE / MS Access MVP)'s comment #a40501313
Assisted answer: 167 points for Nick67's comment #a40501359
Assisted answer: 166 points for PatHartman's comment #a40501398

for the following reason:

The advice given was incredible.  Something I have come to expect from this community.

John
2G is the upper limit for a file.
I prefer to keep that for the stuff that CANNOT live anywhere else: Forms, reports, code, and QueryDefs.
So no, I'd put Word templates in folders and store paths.

Although -- being an Access VBA guy -- there isn't anything I can't do with reports or exports to Excel that would necessitate the use of Word.  Other people's mileage does vary.

This was done because they kept being deleted from the folder.
Naughty people.
But there are ways to code filesystemobject to copy things to safe places -- and copy them back when boneheads go on a killing spree.
You mention a Diary system?  I have not heard of that can you explain that a bit better?
Me either!

When you say always keep the connection to the BE open do you mean by keeping the FE open all the time an never closing it?  Or is there another way to go about keeping the connection alive?

What is meant is that for as long as the frontend is open, it should maintain an active connection to the backend.  I have a table, tblDummy.  It has one field, Dummy.  I have a form, frmDummy.  The Open event of the startup form opens frmDummy as hidden.  It is the first call made to the backend, and the user can't see it to monkey with it.  Hence, the connection to the backend will always remain open for that particular front-end client until they exit the app, no matter what objects they open or close.
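The trick described above needs only a couple of lines; something like this in the startup form's Open event (using the frmDummy name from the explanation):

```vba
' Open the dummy form hidden so the FE holds one connection to the
' BE for the whole session.
Private Sub Form_Open(Cancel As Integer)
    DoCmd.OpenForm "frmDummy", acNormal, , , , acHidden
End Sub
```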
Chose the wrong answer, myself.  I did this one other time too.
<<And I did not forget about the coin.  Right after the last time you helped me I broke my back and was in the hospital.  Took about 5 months to learn to walk again.  Still not that good.  But I'm 40 so I blame it on my age.  LOL>>

 Yikes! Sorry to hear that.   Hopefully it's not giving you too many problems...

<<You mention RI, what exactly is that?  (referential Integrity)>>

 Yes, referential integrity.

<<You mention a Diary system?  I have not heard of that can you explain that a bit better?>>

   Keeping multiple records rather than one big memo field for notes.  One thing I find, besides the locking issues, is that users like to dump anything and everything in there, then don't understand why they can't get information back out.

 So I usually had a table like:

tblOperatorNotes
NoteID - AN PK
CreatedOn - D/T
CreatedBy - Long - FK tblUsers
Note - Text (255)

 keeping as much as possible out of the notes themselves: for example, a customer ID, a phone # to call back on, etc.  I really stress that the note field should be reference data only.  Keeping it in a text field forces them to keep it short.
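The table outlined above could be built with Jet DDL along these lines; the FK column name UserID in tblUsers is an assumption on my part (AN = AutoNumber, D/T = Date/Time):

```vba
' Create the diary table sketched above.  tblUsers(UserID) is an
' assumed name for the users table's key.
Public Sub CreateOperatorNotes()
    CurrentDb.Execute _
        "CREATE TABLE tblOperatorNotes (" & _
        "NoteID AUTOINCREMENT CONSTRAINT PK_Note PRIMARY KEY, " & _
        "CreatedOn DATETIME, " & _
        "CreatedBy LONG CONSTRAINT FK_CreatedBy REFERENCES tblUsers (UserID), " & _
        "Note TEXT(255))", dbFailOnError
End Sub
```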

<<When you say always keep the connection to the BE open do you mean by keeping the FE open all the time an never closing it?  Or is there another way to go about keeping the connection alive?>>

 Open a hidden form at startup, bound to any table, or open a global recordset in code (I find the hidden form the simplest).

<<When using sequential is it best to start from 1 or from 1001?  Or is it developer specific?>>

 Doesn't matter.  The only real requirement is that the number is unique, so it doesn't matter what it is.

 Autonumbers don't actually serve as a primary key, even though they are labeled as such.  In fact, they're not even really surrogate keys, because they are meaningless and have no relation to the data.  All they do is serve as a tag or pointer (if you want to run through that, see https://www.experts-exchange.com/Database/MS_Access/A_2041-The-great-PK-debate-Natural-Keys-vs-Surrogates-again.html )

Jim.
BTW, on this:

<<This was done because they kept being deleted from the folder.  Now that I am working on the permissions for the folder this can be discontinued and the path can be stored again.>>

Create a structure like this:

C:\MyApp
       \ExternalData
           \Youllneverguessthis
             
 On ExternalData, give users only the right to traverse the folder.  When they use something like Explorer, they will drill down to ExternalData and see a blank directory.

  In the app, you have the complete path, and as long as you don't expose it, it will be protected.  You can control CRUD (Create, Read, Update, and Delete) privileges through the app even though users would have full rights to the actual directory.

Jim.
Jim, those are great ideas.  Thanks for explaining the Diary; that is also another great idea.

The memo field was never indexed in any of the apps I used it in.  It was just a comment block.  But using your suggestion will make it easier to upsize in the future if the need arises.  Which I hope it will.  Also your idea on the folder is great.

As far as the back goes, it was a stupid break.  I slipped and fell directly on my butt.  Broke my S1, L5 and L4.  A bone fragment went right into the left SI nerve, and I lost all feeling down my left leg almost instantly.  Thought I was paralyzed, but in one leg.  Four operations later and I can walk.  No running, though.  Doc says that isn't in my future.  I was a Navy Chief; telling me I can't do something is like telling a fat kid he can't have cake.  LOL

John
Just to clarify on this:

Create a structure like this:

C:\MyApp
       \ExternalData
           \Youllneverguessthis
             
  You would have one or more subfolders under the "Youllneverguessthis".  So:

C:\MyApp
       \ExternalData
           \Youllneverguessthis
                \ClientData
                \Employee data

in the app, the full path would be:

C:\MyApp\ExternalData\YoullNeverGuessThis\ClientData

  and with only Traverse rights on the ExternalData directory, they won't be able to see YoullNeverGuessThis.  They could hop over it if they knew the full path to something, but they won't be able to do that if you don't ever expose the full path.

Jim.
Consistent use of a naming convention (the usual one is the LNC, or Leszynski Naming Convention) is very helpful.  I have created add-ins for various Office versions to semi-automatically apply the LNC to database objects and controls; see the Code Samples page of my Website for the version you need:

http://www.helenfeddema.com/Code Samples.htm
As far as storing Word templates in Attachment fields is concerned, I generally do this if an application (database writing data to docs created from templates) is being delivered to a client who can't be relied on to install templates in the correct location, or indeed doesn't understand what a template is.  I have code that extracts the required template from the table field and places it in the default Word UserTemplates folder, so it can be used to create Word docs.  Here is a portion of the code from the sample database for my Working with Word ebook:

   'Get default Templates path from Word Options dialog
   strDefaultTemplatesPath = _
      appWord.Options.DefaultFilePath(wdUserTemplatesPath) & "\"
   strTemplateNameAndPath = strDefaultTemplatesPath & strTemplateName
   Debug.Print "Template name and path: " & strTemplateNameAndPath

   'Set Documents path to current database path
   strDocsPath = Application.CurrentProject.Path & "\"
      
On Error Resume Next
   'Check for existence of template in templates folder,
   'and exit if not found

   Set fil = fso.GetFile(strTemplateNameAndPath)
   
On Error GoTo ErrorHandler
   
   If fil Is Nothing Then
      'Extract template from Attachments field and save it to the Templates path
      blnEmbedded = True
      Call SaveAttachment(strTemplateName)
   End If
         
======================

Public Sub SaveAttachment(strTemplate As String)
'Created by Helen Feddema 1-Nov-2009
'Last modified by Helen Feddema 9-Jun-2014

On Error GoTo ErrorHandler
   
   Dim rstAttachments As DAO.Recordset
   Dim rstTable As DAO.Recordset
   Dim strDefaultTemplatesPath As String
   Dim strSearch As String
   Dim strFileAndPath As String
   
   Set appWord = GetObject(, "Word.Application")
   
   'Get default Templates path from Word Options dialog
   strDefaultTemplatesPath = _
      appWord.Options.DefaultFilePath(wdUserTemplatesPath) & "\"
   strFileAndPath = strDefaultTemplatesPath & strTemplate
   
   Set rstTable = CurrentDb.OpenRecordset("tlkpWordTemplates", _
      dbOpenDynaset)
   strSearch = "[TemplateName] = " & Chr(39) & strTemplate & Chr(39)
   'Debug.Print "Search string: " & strSearch
   
   rstTable.FindFirst strSearch
   If rstTable.NoMatch = False Then
      'Create recordset of attachments for this record
      Set rstAttachments = _
         rstTable.Fields("WordTemplate").Value
      With rstAttachments
         Do While Not .EOF
            'Save this attachment to a file in the default Templates folder
            'Debug.Print "Saving " & strFileAndPath
            .Fields("FileData").SaveToFile strFileAndPath
            .MoveNext
         Loop
         .Close
      End With
   End If
   
   rstTable.Close
   
ErrorHandlerExit:
   Exit Sub

ErrorHandler:
   If Err = 429 Then
      'Word is not running; open Word with CreateObject
      Set appWord = CreateObject("Word.Application")
      Resume Next
   Else
      MsgBox "Error No: " & Err.Number _
         & " in SaveAttachment procedure" _
         & "; Description: " & Err.Description
      Resume ErrorHandlerExit
   End If

End Sub


The lookup tables would be static and I think storing them in the front end would be a performance enhancer as much of what a user does is coming from a  draw down.
Don't forget, you are only storing COPIES of the lookup tables in the FE.  You need the originals in the BE to enforce RI.
So what are your views on the use of a switchboard, menus, or designing your own ribbon for a 2010 application?

John
<<So what are your view on the use of a switchboard, Menus or designing your own ribbon for a 2010 application>>

 Whatever works<g>  Don't think there is any real right or wrong there, or any real advantage or disadvantage to one or the other.

Jim.
designing your own ribbon for a 2010 application
That can be a highly complex undertaking.
Three years on, the best discussion, summary and collection of resources for that task is here
https://www.experts-exchange.com/questions/27231448/Access-2010-disable-or-modify-Quick-Access-Toolbar.html

Consistent use of a naming convention (the usual one is the LNC, or Leszynski Naming Convention) is very helpful.
I don't suspect you'd be guilty of this, but my engineer created an app he wanted to interface with mine.
'How hard could it be?'
Well, for unfathomable reasons, his data was in three different backends and he never bothered to rename any control dragged to a form or report -- and he loved tabbed controls.
So he had code like
Me.TabCtl2.Page2.Text65.Value = Me.TabCtl1.Page6.Text11.Value

'Sorry, not touching that, not with a 10 ft. stick'

Now, another thing some folk may vehemently disagree with is the names given to bound controls.  By default, Access gives the controls the name of the field.  I started with A2003, this has never been a problem, and I leave it that way.  Unbound controls I prefix (cboSomeField, txtOtherField, chkSomeBoolean).  I can see immediately that they are unbound and what field value/purpose they are meant to serve.  Labels I only rename if a) I am bored or b) I am going to manipulate them in code (change caption or visibility) in some way.

One of the very nicest things you can do for yourself and those who inherit your apps is to give every field in every table a caption.  By default, the caption of the label created for a bound control is the field's caption.  Do it at table creation time and you need never mess with renaming labels again.
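If you need to back-fill captions on an existing table, it can be done through DAO.  Note that Caption is a user-defined property, so it must be created the first time; this helper is my own sketch, not from the thread:

```vba
' Set a field's Caption, creating the property if it doesn't exist yet
' (error 3270 = "Property not found").
Public Sub SetFieldCaption(strTable As String, strField As String, _
                           strCaption As String)
    Dim fld As DAO.Field
    Set fld = CurrentDb.TableDefs(strTable).Fields(strField)
    On Error Resume Next
    fld.Properties("Caption") = strCaption
    If Err.Number = 3270 Then
        fld.Properties.Append fld.CreateProperty("Caption", dbText, strCaption)
    End If
    On Error GoTo 0
End Sub
```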
<<Now, another thing some folk may vehemently disagree with is the names given to bound controls.  By default, Access gives the controls the name of the field. >>

 Yeah... the problem with that being you're never sure if you're referring to the control or the field.  There are some subtle places where that difference is important (which at the moment I can't think of off the top of my head), but they are few and far between at this point.

 You can do this without too much problem...but I'm "old school" and still rename all my controls, bound or unbound.

Jim.
I have rarely named my controls and fields in the tables.  I normally go behind and do it in the form as it's being created.  I could easily do it in the table and save the extra work.  But one way or another it will be done.  I do agree with Jim: if you don't do it, you don't know which one you are referring to.  If it's a label for a field and not the actual field I will use LBL, but if it's the field I will use FLD.

I also agree with code cleanup.  I have an app right now where one module has two or three lines of code being used and about 200 that are commented out.  I need to create a non-used module and place them in there.

So what about Global and Option Explicit?  I never really understood what they did for the code.

John
SOLUTION
This solution is only available to Experts Exchange members.
Yeah... the problem with that being you're never sure if you're referring to the control or the field.
Oh, I am ALWAYS sure.
If the control doesn't exist, then Me.TheControlNamedTheSameAsTheField won't pop in Intellisense.
Me!FieldName will exist and sometimes even show in Intellisense, but rarely if ever will it work in code.
So really, for me, I've never found a bulletproof case where I can refer to a field NOT bound to a control.
Hence, I don't try, and it's never an issue.

The Name property of the control should be set to something that distinguishes it from the bound field.  But be careful if you are not used to this, because you may be surprised some day when you notice that Me.FieldName <> Me.txtFieldName.  You need to be conscious of what you are referencing, and this naming convention helps you keep everything straight.  If you don't prefix the assigned name in the Name property, most things will still work correctly, but make sure you don't leave the odd arbitrary name you get if you drag the control off the ribbon or copy and paste a different control.


See, I don't agree with Pat and Jim on this -- and it is an old and well-worn debate.  I've never encountered an occasion where the field and control being named the same has been a problem.  I think before A2003, it was not as bulletproof as it is now.  Now, in the hierarchy a control ALWAYS outranks the field -- so if you have a field SomeField in the RecordSource and a textbox named SomeField, Me.SomeField will ALWAYS refer to the control.  In previous versions, that was not quite so bulletproof -- so you get folks who have been burned by past behavior who can be quite adamant about it.

Never having been burnt -- and only using A2003+, very unlikely ever to be burnt -- I get to use prefixing to denote unbound status.  I don't know how folks who prefix their bound control use their naming conventions to signal that, or if they even do.  But I have a lot of code like

With rs
    .AddNew
    !SomeField = Me.txtSomeField
    !SomeOther = Me.cboSomeOther
    .Update
End With


I know, looking at that, that I am adding a new record to a table and that the values are coming from a pair of unbound controls on a form.

On reports, I have code like
Me.txtDate.Value = Format(Me.[Date], "dd-mmm-yyyy")
I am reminded right away that years ago I was an idiot and named a field Date, and that I am now taking the value from that hidden bound control, formatting it, and displaying it in an unbound control.  (That's a simplistic example; yes, you could do that in a textbox's Format property, but wicked things like printing '-' instead of 0, or 'N/A' instead of "", can't be done that way.  And yes, you could throw IIf's into the query, but that's a little less fun to suss out and maintain -- and sometimes, dammit, I have an 18-part Select Case True on the go for what will actually show, and then NO, you have to do it this way. :)

Global
Variables (and procedures and functions) have what is called scope.

In a form module

Private Sub SomeSub()
    Dim x As Integer
End Sub

Only SomeSub knows from, and can use x
And only the form knows from, and can use SomeSub.

In a form module
Option Explicit
Dim x As Integer

Public Sub SomeSub()
    'some code
End Sub

Now, every sub or function in the form's module can use x, but only the form's module
But, so long as the form is loaded, all code can use SomeSub

In a code module
Public x As Integer

All code, everywhere can use x
x is now a global variable.

They look nice -- but any error, even ones trapped and handled by code, potentially sets them to Null or Nothing.
It's prudent, if you use them, to check their value immediately before using them and re-initialize them if they've gone south.  Unexpected BANGs occur otherwise.
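One common way to follow that advice is to hide the global behind an accessor function that re-initializes it when it has been reset (the names here are hypothetical):

```vba
' Guarded global: callers use AppPath() instead of touching the
' variable directly, so an error that resets module state can't
' hand them an empty value.
Private mstrAppPath As String

Public Function AppPath() As String
    If Len(mstrAppPath) = 0 Then
        mstrAppPath = CurrentProject.Path   're-initialize if reset
    End If
    AppPath = mstrAppPath
End Function
```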
How do you all deal with errors or trapping for errors?  I know in the past I was never sure if I needed to trap for an error.  Well, not until Access threw up a run-time error and then I needed to research what the error meant.  Which then really didn't make sense.  So it was by trial and error that I realized some errors are user-created and some are developer-created.  The developer-created ones are easy to fix, but the user ones take some time.

Do you all write the error trap in the module for the code running, or do you have a separate module with your own unique error codes and return them instead?  I have seen that scenario twice, where the developer created his own error codes.  It helped a lot, but there were a lot of error codes.  Not sure if they were all encountered, but there were easily 20 error codes.

John
I do the error trapping in each procedure, because which errors you want to ignore depends on where you are.  I generally trap specifically for "expected" errors and then generically for others.  The minimum would be:
...
On Error GoTo Err_Proc
...
Err_Proc:

    Select Case Err.Number
        Case Else
            MsgBox "Error " & Err.Number & " (" & Err.Description & ") in procedure cmdSelect_Click of VBA Document Form_frmBatchCounts"
    End Select
    Resume Exit_Proc
    Resume Next
End Sub


Here's another example where one code is ignored and another has a specific error message:
    Select Case Err.Number
        Case 3024
            MsgBox "Tables have moved.  Please relink.", vbOKOnly
            DoCmd.OpenForm "frmReLinkJetTables"
            Resume ExitProc
        Case 2501
            Resume ExitProc
        Case Else
            MsgBox Err.Number & "--" & Err.Description
            Resume ExitProc
    End Select


In most cases, I start with the basic example that just displays the Access error message along with the number.  Then I decide if I need to trap specific errors in specific situations and expand the Select Case accordingly.
Because I am an in-house developer, I don't trap for errors that should -- and therefore can and must -- be eliminated.  I want things to blow up big and ugly when I haven't done my homework and sewn my code up to be bulletproof.  This is another case of 'burnt early in career, not going back there.'  Early on, I had error-checking code and some unavoidable, occasional snafu that occasioned the use of On Error Resume Next.  Then came the day that some unexpected error cascaded through, and the code came to a halt in a place that was supposedly bulletproof.  After a couple of days of heartache, tracing back one procedure at a time, trying to divine where the HELL the problem was, I found it.

And I refactored the code to get rid of that On Error Resume Next and made a mental note to myself to never use that again.  And then I started looking at my code

Private Sub Command1_Click()
On Error GoTo Err_Command1_Click

    Dim stDocName As String
    Dim stLinkCriteria As String

    stDocName = "Form1"
    DoCmd.OpenForm stDocName, , , stLinkCriteria

Exit_Command1_Click:
    Exit Sub

Err_Command1_Click:
    MsgBox Err.Description
    Resume Exit_Command1_Click
   
End Sub


And I said: 'Just why would it be useful to have THIS return an error message?  And how's it going to go wrong anyway?'
And the answer was: 'It won't; and it would only fail if the form is in design view (not in production), or doesn't exist (not in production).'
And I can trap any error that might happen in the form Open event.
So the code became this:
Private Sub Command1_Click()
DoCmd.OpenForm "Form1"
End Sub


No extraneous cruft. Much easier to read and maintain when you've got a lot of code in a module.

Now, opening Outlook always involves the unavoidable possibility  of a 429 error.  This article
https://www.experts-exchange.com/Programming/Microsoft_Development/A_17466-Properly-open-Outlook-as-an-Application-object-in-VBA.html
talks about how I go about minimizing the damage that a well-known trappable error can cause to the rest of one's code.

The idea is still the same: 'Write error-free code, and then deal with the consequences of your possible failure.'  As an in-house developer, I put a lot more emphasis on the first part, because the consequences are actually a helpful feedback loop and easily dealt with.

YMMV
If you thought the procedure I posted included a Resume Next, read it again.  The Resume Next immediately follows a Resume Exit_Proc which will ALWAYS execute and so in the normal course of events, the Resume Next will NEVER execute.  It is there as a testing aid.  If I get an error, I can stop the code, move the execution point to the Resume Next and figure out exactly which statement caused the error.

The Else statement, which is the unknown error should always exit the sub.  However, certain trapped errors can safely resume Next although I haven't included any in the example.
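Assembled from that description, the pattern looks something like this (a sketch; SomeProc is a placeholder):

```vba
Public Sub SomeProc()
On Error GoTo ErrProc
    ' ... the work of the procedure ...

ExitProc:
    Exit Sub

ErrProc:
    Select Case Err.Number
        Case 2501           ' the open/action was cancelled -- safe to ignore
            Resume ExitProc
        Case Else
            MsgBox Err.Number & "--" & Err.Description
            Resume ExitProc
            Resume Next     ' never runs normally; drag the execution point here while debugging
    End Select
End Sub
```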
No Pat,
I didn't fall off the turnip truck.
I don't see from your code snippet where
Resume ExitProc
would actually send me, but if it is like the MS boilerplate code I posted, I can guess.

For me
when I have error-handling code, it is a signal that there exists the possibility of VBA throwing an error no matter what I've done to eliminate it -- like the 429 when opening Outlook.  You can GET the object and fail if it is not open, or SET the object and fail when it is.  And no, I am not going to do an API call to see if it is running.  That's a bit much.
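For discussion's sake, the get-or-create shape that runs into 429 usually looks something like this (a sketch, not the linked article's exact code):

```vba
Dim olApp As Object

On Error Resume Next
' GetObject throws error 429 when Outlook is not already running
Set olApp = GetObject(, "Outlook.Application")
On Error GoTo 0

If olApp Is Nothing Then
    ' Not running: start a new instance instead
    Set olApp = CreateObject("Outlook.Application")
End If
```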
So the absence of error-checking code is a signal that the procedure is either simple enough or well-tested enough that I have complete confidence that it will always complete.

I've also had problems with trapped errors still resetting module and global public variables to Nothing or Null, so I am not big on error-handling code.  My folks know that when they hit the End or Debug dialog, it's time to call me.
The phone doesn't ring that often, and with 111,000 lines of code, it means I do a REALLY good job of catching all the possibilities.

And really -- what's the practical difference between the end/debug dialog, which brings things to a halt, and
MsgBox Err.Number & "--" & Err.Description, which also alerts the user that something the developer didn't handle has happened -- albeit in a much less spectacular fashion than the VBE popping up?
I guess the difference comes down to - I deliver an .accdr which forces Access into runtime mode and you deliver an .accdb which allows access to everything.  Using runtime or an .accdr, you have to handle all errors or the app simply shuts down.  On the rare occasions when a strange error pops up, the users tell me what it is and hopefully what they were trying to do and then I trap that specific error and probably fix the bad data that caused it.
Because I am an in-house developer, I don't trap for errors that should -- and therefore can and must -- be eliminated. And that's exactly it.  I can, and very rarely do, rev multiple times per day and everyone gets new bits on a close & open of the app.  So the code never leaves my purview--and I deliver an .mdb, so I can walk to the person who threw the error, hunt it down and fix it there and get them going.  Then I go back to the dev station, and have a look at the fix, and perhaps code it up similarly, or look at some extra underlying issues for root cause, before testing and pushing out change.

Very different environments.
But even so, I am not sure that I'd reflexively put error-handling into every single function & sub.
But the threshold for possible error would be lower than it is for me now.
Couple of points:

@Nick on this:

<<Private Sub Command1_Click()
DoCmd.OpenForm "Form1"
End Sub

No extraneous cruft. Much easier to read and maintain when you've got a lot of code in a module.>>

 That code will fail if in the form's Open event you cancel the opening for any reason, and since there is no error handler here, you're forcing VBA to walk up the stack to find whether an active error handler is in place.  If there is, that procedure will handle the error, which would make it tough to pin down where it occurred.

 And you do need On Error Resume Next as in some places, in-line error handling makes sense when you expect an error.  For example:

 On Error Resume Next
 Kill strFile
 On Error GoTo 0

 Now you could say that I should have done a Dir() and done this:

 If Dir(strFile) <> "" then
    Kill strFile
 End If

 so no error can occur.   But there are a couple of problems:

1. I'm now going out to the disk twice
2. I may not be able to delete the file because of permissions, so I'm not really further ahead by doing the Dir() and would need to do additional checks to make it bulletproof.

But using the in-line handling, I have more efficient code.

@Pat on this:

<< If I get an error, I can stop the code, move the execution point to the Resume Next and figure out exactly which statement caused the error.>>

 You can do that a bit differently, and I'm not sure whether it would work better for you, but get MZ-Tools, number all the lines of code, and then in your error handler use the undocumented VBA Erl call, which pinpoints the line that caused the error.

 Many times, I find I want to jump back not to the line that had an error, but one before it (i.e. creation of a SQL statement for a recordset, where the error failed on the open).  Also, in production, it pinpoints the error exactly.  I have the software log and/or e-mail me, so I get something like this:

(screenshot: a logged error report)
I think that more than anything has helped me in handling errors.
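With the lines numbered, an Erl-based handler might look like this (a sketch; the path and names are hypothetical):

```vba
Public Sub ProcessFile()
On Error GoTo ErrHandler
10    Dim strFile As String
20    strFile = "C:\Temp\data.txt"    ' hypothetical path
30    Kill strFile                    ' an error here reports Erl = 30

ExitProc:
      Exit Sub

ErrHandler:
      ' Erl returns the number of the last numbered line executed before the error
      Debug.Print "Error " & Err.Number & " (" & Err.Description & ")" & _
                  " in ProcessFile at line " & Erl
      Resume ExitProc
End Sub
```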

and I pretty much put error handlers in everything like you.  There are some procedures I skip, but not too often.

Jim.
But using the in-line handling, I have more efficient code.
Nope, sorry don't agree, I'd never code that way or do that.
...and would need to do additional checks to make it bullet proof.
That's my job and that's what gets done.  This would never happen
If Dir(strFile) <> "" then
     Kill strFile
  End If


We have our friend the filesystemobject
Dim fs As Object
Dim BuiltPath As String
Set fs = CreateObject("Scripting.FileSystemObject")
BuiltPath = Nz(SomePath, "")
If fs.FileExists(BuiltPath) Then
    fs.DeleteFile BuiltPath, True
End If


And if I figured that there was a shot for the file to be open, or its permissions to be out of whack, I'd put the checks for those in, too.  Error handling is like PPE: it's there for when an unforeseeable failure of the hazard mitigation measures occurs, not as a means to deal with a hazard.

That code will fail if in the form's Open event you cancel the opening for any reason
That, too, is my job.  If I allow the possibility of a Cancel event, then I have to deal with the consequences.  With forms that's very rare indeed.  Reports and their NoData happen more often, and I do then deal with the consequences.

And you do need On Error Resume Next as in some places, in-line error handling makes sense when you expect an error.  Nope.  If I expect that an error can occur, then I code to ensure it doesn't.  That's my job, not an error handler's.

And as I stated earlier YMMV (Your Mileage May Vary.)  The circumstances you are developing under, the audience your app has, and the control you have over the audience's environment can make a BIG difference to how you decide to handle errors.  For me, brevity and clarity of code are very important.  The existence of an error handler in a given sub is a clear signal of the complexity of the sub, its logic and its operation.  Its absence is a clear signal that I believe there is a ZERO probability of a production error occurring.  Because I am in-house, the first thing that is going to happen upon report of an error is the disablement of error handling as the source of the error is pinpointed, prevented from occurring or coded to be handled.  It makes little sense then, to put enabled error handling all over the place.  For me, it really doesn't serve any purpose.

But that's me.
John was looking for a varied viewpoint and in no way am I evangelizing  -- but I am not open to conversion, either!
@nick,

 I'm a little confused by your post....

<<But using the in-line handling, I have more efficient code.
Nope, sorry don't agree, I'd never code that way or do that.>>

<<That's my job and that's what gets done.  This would never happen>>

<<We have our friend the filesystemobject>>

So how is this:

If fs.FileExists(BuiltPath) Then
    fs.DeleteFile BuiltPath, True
End If

any different than:

 If Dir(strFile) <> "" then
    Kill strFile
 End If

 You've got more overhead (you've instantiated a file system object) and you're still going to disk twice.   Far better and more efficient simply to do:

On Error Resume Next
Kill strFile
On Error Goto ProcessFile_Error

 I don't see how you can say it's not.

 And if you don't believe in in-line error handling, then why did everyone make such a fuss over the fact that when VB.Net came out, we finally got Try/Catch/Finally handling?

 In-line error handling logically is always better than a generic error handler, but VBA sucks at it.  For example, if you open more than one form in a procedure and you have a generic handler, how do you know which form's open may have been cancelled (you get a 2501)?

Much better to do:

On Error Resume Next
Err = 0
DoCmd.OpenForm "frmA"
If Err<>0 then
   '  need to check error
End If

Err = 0
DoCmd.OpenForm "frmB"
If Err<>0 then
   '  need to check error
End If

Without in-line handling, you'd have to refactor into two procedures.
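For contrast, that refactoring might look like this (a sketch using the form names from the example above):

```vba
Private Sub OpenBothForms()
    OpenOneForm "frmA"
    OpenOneForm "frmB"
End Sub

' Each open gets its own handler, so a cancelled open is unambiguous
Private Sub OpenOneForm(ByVal strForm As String)
On Error GoTo ErrHandler
    DoCmd.OpenForm strForm
    Exit Sub
ErrHandler:
    If Err.Number <> 2501 Then   ' 2501 = the open was cancelled
        MsgBox Err.Number & "--" & Err.Description & " opening " & strForm
    End If
End Sub
```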

Jim.
I'm a little confused by your post....
Ok.

I have more efficient code.
I don't know that I agree with that at all, and I certainly don't know that it is correct
You are forcing the code to change what the error handler is doing on the fly, attempt your action, ignore any error that occurs, and then change how the error handler behaves again.  There's no saying that it's more or less efficient than another setup on its face.

You've got more overhead (you've instantiated a file system object) and you're still going to disk twice.   Far better and more efficient simply to do:
Again, I don't agree, and again I don't know if that is correct.  Under the hood, how is Kill accomplished?  Ultimately, the compiler is telling the OS to do a disk operation.  Is Kill much more efficient at that task than fs.DeleteFile?  Doubtful.  And Kill still has to hunt down and verify the existence of the file before deleting it; fs.FileExists(BuiltPath) just does that explicitly.  In compiled form, are either of those two operations likely to be that different?  I doubt it.  Nor do I know that I am going to disk twice.  What is the compiler going to do in the end?  With Kill, it is going to check for a file, delete it if it exists, and return an error if it does not.  With FileSystemObject, it's going to check for a file, delete it only if it exists, and return nothing.  Which is more efficient?  Unknown!

And if you don't believe in in-line error handling, then why did everyone make such a fuss over the fact that when VB.Net came out, we finally got Try/Catch/Finally handling?
I don't know for certain.  I am primarily a VBA guy.  Try-Catch-Finally, in the courses I did take, was pushed because
a) Code could now be made to look consistent, and legibility is very important in complex projects
b) You were discouraged from swallowing exceptions (On Error Resume Next), so there should always be stuff in Catch -- and presumably better commenting about why it's there
c) Finally is the place for cleanup code and compartmentalizes it
d) All 'real' languages have try/catch/finally -- look, VB.NET does too!  What do you mean it's still not a real language?
e) IDE formatting.  Try/Catch/Finally blocks look very nice in the editor.

in-line error handling logically is always better than a generic error handler
No argument there, but in-line error handling is, to me, fundamentally flawed.
In-procedure handling is best when you need to handle errors:
On Error GoTo ErrHandler
is easy to comment out when it's time to investigate the evil.
Hunting through a complex sub (Ctrl-F) for 'error' is a bitch and to be avoided.
Moreover, presumably those in-line things are there for expected errors, and you can't turn them off without tripping on the now-unhandled expected errors.  It makes hunting down that unexpected error much harder.

Been there, done that.
Thanks, but I'll write code that will not throw an error that I can prevent.
It's much easier to maintain over the long run.
But that's me
YMMV
WOW, that is a lot to consider in writing code to be error free, or as close as possible.  No matter how good you are, you just can't consider every possible situation.  I used to think, when I designed my last application for the Navy, that I had worked out all the possible errors the user would encounter.  But knowing Sailors like I do, they found a way to break it.  Sometimes it's just a simple misspelled word or a bracket where a parenthesis should have been.  Just examples.

But I can see where you all are coming from.  The complexity of your application and the number of users you have will determine how you handle errors, if you encounter any.

One thing that I have noticed is that, as the designer of an application, you may not experience all the errors, if any at all.  You cannot possibly use the program the same way a user will.  So having users test the application in a real scenario may help with errors.

John
So, after this last question and its answers have been exhausted, I will bring this thread to a close.

VB vs. VBA

Which do you prefer and explain why?

Would you rather design an application in Visual Basic/Visual Studio using Access for tables,

or

Would you rather design an application in MS Access using VBA?

John
Well, VB "Classic" and VBA are almost one and the same.  There is very little difference.

VB.Net is a totally different animal than VB Classic or VBA.   It's OOP based and works differently.  With VB.Net, you have complete control and can do things you can't do with Access and VBA (n-tier designs, get a true standalone program, use any 3rd party control, etc.).

As for your last question, like any other of that type, "it depends."   Access lets you get things done pretty quickly, but then you give up things as well.

It's not really a choice of either or, but which tool is best to get the job done with.

Jim.
Jim,

I didn't even think of .Net when asking the question.  But that brings up another interesting point.  I am pretty good in VB Classic (VB6).  For me it's easy to navigate, and I feel I have more control over my forms and the user interface.  Whereas in VBA I can do the same, but it just seems to take longer.  I have dabbled very little in VB.Net.  It was something I wanted to learn, but then I would have needed to purchase yet another suite: Visual Studio.  I'm not sure I could justify that purchase just for the sake of learning, without it really being worth it monetarily in the long run.  Although, seeing beyond my scope now, I might be able to justify it.

In your experience and if given the choice which one would you use to create an application?

John
For better and worse, VB6 is long gone, and still grieved by many.
It is NOT a choice for a new project.

As for .NET vs Access that's like asking
'Do you want to go to the moon?  Would you like to walk there?'
'Do you want to go to the corner store?  Would you like to take a rocket ship there?'

If the app will never see more than 100 concurrent users or need a robust presence on the internet, Access is a very fine RAD (rapid application development) platform.  If you foresee thousands of users and a WAN implementation then Visual Studio is a heavyweight full-featured development platform.

It depends on the ultimate destination.
The number of concurrent users is irrelevant regarding the choice of FE technology.  The BE is impacted both by total database size and user count.  At some point Access (Jet/ACE) won't cut it and you need to move to a server based BE.  

Each user will always have his own copy of the .exe in the case of VB/VB.Net or .mdb/.accdb in the case of Access.  Distribution of the FE needs to be centralized.  I keep the "master" copy in a shared network folder.  Each user has a shortcut that points to a batch file in that shared folder.  The batch file copies the master down to the user's workstation and opens the app.  If I want to distribute a new version, I update the "master".  You can add bells and whistles but that is a very basic scenario.
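The batch file described above can be sketched in a few lines (the paths here are hypothetical):

```bat
@echo off
rem Copy the master FE down to this workstation, then open the local copy
copy /y "\\server\apps\MyApp\MyApp.accdb" "%LOCALAPPDATA%\MyApp.accdb"
start "" "%LOCALAPPDATA%\MyApp.accdb"
```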

FE technology is dictated by features required.  Access is an excellent RAD FE creator but it does have its limits.  It isn't at this point a good choice for outward facing apps that need anonymous user access via the web.  It does have (with A2013 and SharePoint) limited ability to create inward facing web apps but "limited" is the operative word.  It is still not a robust environment.

The biggest thing Access has going for it is its RAD capability.  Access does an enormous amount of work behind the scenes which you can't even appreciate unless you have worked in a different technology where you had to do it yourself.

Back to the menu/switchboard question.  I almost always use the old Switchboard - notice I said "old".  I have made certain modifications that make it much more usable, such as increasing the number of items on one page and adding some different types of options.  What I give up is the ability to use the wizard to add items beyond 8 or that use my added features.  When I start a new app, I copy the Switchboard and Switchboard Items from an old app and change the look to match that of the new app.
(screenshot: the modified Switchboard)
@PatHartman
https://support.office.com/en-ca/article/Access-2010-specifications-1e521481-7f9a-46f7-8ed9-ea9dff1fa854
You are out of choices in the backend department at 255 concurrent users.  I think you'll move backends long before you get that far, but it is a hard limit. Most sources suggest that you'll get tired of running the app long before hitting 255.

I don't know what the upper limit for Access FE's connecting to a SQL Server would be.  I don't think anyone does, because I don't know that anyone has ever tried the stress-testing of a SQL Server by Death-by-Access test methods to see.  There's just this huge enterprise disdain for Access, without hard numbers.

I just don't know.
Although here
https://www.experts-exchange.com/questions/28583053/Max-of-Modules-in-Access-2013-Desktop-Applicaiton.html?anchorAnswerId=40508191#a40508191
This guy's running almost 1100 code modules with 100 concurrent FE users over a WAN and has been doing it for years.
@Nick,
I was referring to concurrent users for the FE since there are NONE - each user has his OWN copy.  Users NEVER open a shared copy of the FE in a properly structured setup.  I did mention that the BE would at some point have to be upsized, and that depends on how many concurrent BE users there are and the size of the BE.

SQL server BE is limited by the licenses the company owns.   It doesn't matter what app is using them.

The size of the FE is irrelevant.  What is relevant is data access.  If that is optimized, you can support a large number of concurrent users.  The app in the link you referenced has about 25 concurrent users even though it has 150 authorized users.
...backend department at 255 concurrent users
Agreed.  The FE is local and has a single user.

SQL server BE is limited by the licenses the company owns.   It doesn't matter what app is using them.
For discussions sake:
A company has licensing for 100,000 SQL users and 100,000 desktops with Access.
An FE gets distributed to all 100,000 desktops.
How many can successfully connect the app to the server(s) before none of them can get anything done?
And how will this compare to an FE built and created in .NET to do the same job?

I don't know the answer.
I don't know that anyone has hard data on the answer.
@Pat,
I like how you have taken the 'old' style switchboard and modified it to look more appealing to the eye.

I have used many various ways of accessing the app: a generic switchboard, tabs on the top of a customized form, tabs on the side of a customized form, and a menu bar that was an add-on.

I have always felt the way the application presents itself, from the opening screen to the switchboard/ribbon, is what will attract the customer and what will keep the customer's attention on the program.  The less bland the better.  But too much and you could cause it to be too busy.  Finding that balance depends on who the app is being designed for and its users.

As far as the FE/BE I have used both Access and SQL.  I do believe they both have their pros and cons.

I will continue to use Access for developing the front end and the back end.  I like using the upsize wizard to go from Access to SQL when the need arises.

And one thing I was surprised wasn't discussed was the need to use SQL.  For most customers that use Access for both the FE and BE, the number of concurrent connections will almost always be low.  But just because the concurrent connections will be low doesn't always mean that Access is the best choice for a back end.

For example: when I was in Iraq and we designed the Help Desk Application, we started off using Access as the BE and FE.  Even though the concurrent connections were always going to be less than 30, we still had issues.  Not all the servers were seen by all the different entities that needed to use the application; however, everyone did have access to a SQL server.  So upsizing was a great choice, and it did speed up data access times.  In a multi-domain/server environment it doesn't do much good if not all the potential users can access a common server.

But as I am in the civilian world now, I see that multiple domains/servers are not really an issue.

John
John,

Keep in mind a couple of things:

<<As far as the FE/BE I have used both Access and SQL.  I do believe they both have their pros and cons.>>

  As time has moved on, SQL has become easier and easier to use, to the point where you almost always want to start a new app using SQL Server.   The niche that JET occupied has shrunk considerably.

<<Even though the concurrent connections were always going to be less than 30, we still had issues.>>

  That's highly dependent on the environment.  The JET DB engine was designed as a department-level database engine.  That means basically it was intended to run on a LAN and with a small group of users.   What you'll typically find is that with a well-developed Access app which is read/write, 30-40 users is about the max.

 However, it's not so much about performance as it is stability.  If the app is read-only/reporting and developed well, you can easily do 200+ users.

 Fast forward to today, and you have many situations where it is not being used the way it was intended, and then blamed for failing.   I can't count the number of times, for example, that folks try to run it over a WAN, such as using a VPN into an office.

<< I like using the upsize wizard to go from Access to SQL when the time is needed.>>

 The Upsizing Wizard in Access has been removed.  A better tool to use for upsizing is the SQL Server Migration Assistant (SSMA).  It offers a lot more detail and control over the process.
With SQL Server Express Edition being free, I don't think I'd ever start a major project again where the backend wasn't immediately in SQL Server from the start.  SSMA for Access is a very good tool  -- but even so, with the skill set I presently have, I would short-circuit the need to use it by starting the backend in SQL Server immediately.

A customer environment can be a different thing though.  SQL Server backups & configuration might be needless complexity. ;)  'The premium edition of our app comes with a SQL Server installation & backend and costs just $X.XX more than the standard edition'
I start most of my apps with embedded ACE tables, not even split.  Why?  Because it is easy to send out a single file for user testing.  In the early stages, I'm not worrying about contention and common data; I'm worrying about how everything looks and whether it is working correctly.

Also, many of my clients have IT departments that exert extreme control over the SQL Server environment, to the point of refusing to allow me to create/modify tables, let alone a whole database.  In those environments, I develop all the way to the end with ACE, and only when I am certain that the schema is solid will I convert to SQL.  Having to wait for a DBA to get around to adding a column or changing the length of a text field gets to be annoying and wastes time.

I had one client, a major insurance company, where the DBA insisted on naming columns.  I couldn't even choose my own names!!  For them, I had to start with SQL because they had to name everything before I could even begin.  I was very happy I was being paid by the hour for that job.  It took almost a month from the time I gave the DBA the initial schema until I got access to the test database.
So what do you all feel about indexes?  This is something that has stumped me for many years, and I never can remember to ask when I am in a room with someone that knows more about them.

So here it is:
I understand the need to index a key field, such as the autonumber primary key.  It helps keep things sequential and speeds up searches.

But what about indexing other fields?  Like in this example.

These are a few fields in one of our databases:
PID - Autonumber - Indexed (Primary)
Last Four - Number - Indexed
Last Name - Text - Indexed
PayGrade - Text - Indexed.

What are the pros of indexing more than one field, and what are the cons of indexing so many fields?

I guess what I am asking is: why would you index more than one field?

John
Even now, choosing indexes remains a bit of an art form.
Access will index all primary keys
Access will also index all foreign keys involved in a relationship
(The limit of 32 indexes per table therefore causes the limit of no more than 32 relationships between one table and others)
Indexes on numeric fields are more efficient.
(There are those who like to use natural primary keys, including things that may be text -- I am not among those folks.  I stick to sequential autonumber PKs.  It's an old, unresolvable discussion.  Pick a side!)

So, what else should you index?
That's the art form.  There are tools that make suggestions, of course.  Even so, look at the tool's recommendation with a jaundiced eye.  Indexes on fields that have very few unique values (boolean fields, fields that contain a limited number of values chosen from a listbox, etc.) provide little benefit.  Indexes on fields where almost all values are unique may also be of limited value (datetime fields where you store Now(), for example).

What to look for: presently unindexed fields -- especially text fields -- that are used in joins or criteria.  These can be especially helpful.

I guess what I am asking is why would you index more than one field
Because you are going to use the values from it to limit the set of records returned by a SQL statement -- and an index scan is much more efficient than a table scan, and a numeric scan & match (which an index is) is much more efficient than a text scan and match.
If you are using Jet/ACE, Access automatically indexes any foreign key fields.  The primary key is always indexed regardless of what the BE is.  So, if you are using SQL Server, you are responsible for creating indexes on foreign keys.  Beyond that, you should index any column that is used frequently in searches as long as it has more than a few distinct values.  For example, it makes no sense to index Gender since at most it has three values.  If 80% of your customers are in one city, indexing on city probably wouldn't help either.  If fields are frequently used together such as first name and last name, I will create compound indexes with the primary field being listed first in the index so in this case it would be last then first.
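A compound last-then-first index like the one described can be created with Jet DDL; a sketch (the table and field names are hypothetical):

```vba
' One-time DDL, run in the database that actually holds the table
' (open the BE directly if the app is split)
CurrentDb.Execute _
    "CREATE INDEX idxNameLastFirst ON tblCustomers (LastName, FirstName);", _
    dbFailOnError
```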

You have to weigh the burden of updating all the indexes when records are added/updated versus the need to optimize searching.  Each index takes space and that eats into the 2G limit for Access.

It is important to compact and repair the BE on a regular schedule and then compact and repair the FE.  The compact of the BE, reorganizes all the data into PK sequence and removes dead space. It also updates the table statistics which are used by the query engine in developing execution plans.  The FE needs to be compacted to force the querydefs to "recompile".  As tables change size, the query engine might decide to change its execution plan for how to retrieve data in a particular query.
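A scheduled BE compact can be scripted; a minimal sketch, assuming exclusive access and hypothetical paths (DBEngine.CompactDatabase writes to a new file, so the sketch swaps it back into place):

```vba
Public Sub CompactBackEnd()
    Const SRC As String = "\\server\data\MyData.accdb"      ' hypothetical BE path
    Const DST As String = "\\server\data\MyData_new.accdb"

    If Dir(DST) <> "" Then Kill DST
    DBEngine.CompactDatabase SRC, DST   ' fails if anyone has the BE open

    Kill SRC                            ' swap the compacted copy into place
    Name DST As SRC
End Sub
```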
Jim,

No problem.  I wasn't sure where to start a Discussion like this.  I will close it out today.

Thank you.

John