Microsoft Access

Microsoft Access is a rapid application development (RAD) relational database tool. Access can be used for both desktop and web-based applications, and uses VBA (Visual Basic for Applications) as its coding language.

Introduction

One of the earliest concepts learned by database developers is parent-child (one-to-many) relationships.  These relationships are integral in countless database applications.  A purchase request containing one or more items ordered is a very common example.  

Child records are identifiable through key fields, such as Purchase Order ID, which link them back to the Parent record.    This relationship can easily be displayed using a main form for top-level purchase order information with a subform to list the items ordered.

This scenario gets a little more complex if any of the child records contain children of their own (i.e.,  grandkids).   This type of data structure is common in engineering and manufacturing, where individual parts can make up sub-assemblies which are pieced together to form some final product.  

For example, a bicycle is a top-level assembly that contains parts such as posts, wheels, brakes, and more.  Each of these parts can itself contain multiple parts.  A wheel is a sub-assembly of the bicycle, composed of a rim, tire, spokes, and so on.  These sub-assemblies can further contain component parts and sub-assemblies of their own (e.g., hubs, chain rings, and bearings).

The connection between each assembly and its parts can be defined with a one-to-many relationship as described previously with the purchase order example.   A Parts List table is used to store child records for the parts, relating them to parent …
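The walk from an assembly down through its parts and sub-assemblies lends itself to simple recursion. As a rough sketch in VBA/DAO (the table and field names PartsList, AssemblyID, PartID, and Qty are invented for illustration and are not from the article):

```vba
' Recursive bill-of-materials "explosion": print every part under
' an assembly, indented by level. Hypothetical schema: PartsList
' with fields AssemblyID, PartID, Qty.
Public Sub ExplodeBOM(ByVal lngAssemblyID As Long, _
                      Optional ByVal intLevel As Integer = 0)
    Dim db As DAO.Database
    Dim rs As DAO.Recordset
    Set db = CurrentDb
    Set rs = db.OpenRecordset( _
        "SELECT PartID, Qty FROM PartsList " & _
        "WHERE AssemblyID = " & lngAssemblyID, dbOpenSnapshot)
    Do Until rs.EOF
        ' Indent each level to show the tree structure
        Debug.Print String(intLevel * 2, " ") & _
            rs!PartID & "  x" & rs!Qty
        ' Recurse: each child part may itself be an assembly
        ExplodeBOM rs!PartID, intLevel + 1
        rs.MoveNext
    Loop
    rs.Close
End Sub
```

Calling ExplodeBOM with the top-level assembly's ID prints the whole indented parts tree to the Immediate window.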

Expert Comment

by:Kevin Cross
Josiah,

Here is a message that queued up while I was in the air:
“Please forgive my confusion as I replied from my phone.  I have many iterations of this solution, most of them in SQL Server, so please ignore the GROUP BY comment, but the issue is the same.  It is a matter of handling each line separately vs. assuming unique part numbers, which is what most of the people I deal with want to see in the end.  In other production uses, I either aggregate multiple rows of the same part before recursion, so the quantity per assembly reflects the total requirement (e.g., 4 wheels), or I use a unique key.  For example, in my production BOM, each component of the bill has a unique ID as the key, and then still lists the parent item ID, component item ID, and route sequence if applicable, along with the QPA.

I’m sure if you give this a shot yourself, it will make sense.  If not, just post your exact scenario with sample database and we’ll help you ... on the Q&A side of the house.”

Regarding your recent post, the system I use is Intuitive ERP running on SQL Server.  You may be able to replicate the same structure in MS Access, though.

Expert Comment

by:Jim Dettman (EE MVE)
<<Could you provide some clarification to your last comment based on the below?>>

It boils down to this: if any part number can appear more than once within a single level of a BOM, then you need a line number/routing sequence #.  Example:

Assembly "Z"

Line  / Part  / Qty Per Assy
1  Part A   1
2  Part B   1
3  Part A   1
4  Part B   1

Instead of:

Part   /   Qty Per Assy
Part A   2
Part B   2


The article is based on the second approach.

<< Do you think that Access can handle tracking inventory and work in process over multiple manufacturing and assembly steps for some 1500 different parts?>>

If your entire parts list was 1,500 items (assemblies and raw materials), then yes. It would not be a problem even with the default database engine.  But with that said, I would start off using SQL Server for the data store, as projects like that usually grow quickly.

Jim.
I titled this "The great PK debate: Natural Keys vs. Surrogates (again)", because on almost any developer list or forum that deals with databases, this topic always comes up at one point or another, and usually with very strong opinions on both sides.  What started me on this article was that several days ago I tripped over an article on primary keys that was just flat-out wrong.

After poking around a bit on the net I realized that over the years, a lot of myths and misconceptions have grown up around this topic.  It seems that as the years go by, more and more gets published and discussed on this topic, but things only get cloudier.  For example, are auto number fields really a surrogate key?  Can they function as a primary key?   I hope that through this article, I will be able to clarify and explain fully enough the answers to questions such as those and hopefully, another great PK debate will not ensue (or at least if it does, you'll have plenty of ammo for the debate<g>).

When I was looking at articles out on the net, one thing that struck me about almost everything I read is that not many started off on the right foot. So the first thing to clear up is what relational theory actually is. Relational theory (the big "R", as some call it) was developed by E.F. (Ted) Codd while working at IBM.  His first paper, "Derivability, Redundancy, and Consistency of Relations", was published in 1969 as a research paper. While Codd was working on ways to store data in a…

Expert Comment

by:developingprogrammer
Jim, thanks so much for taking the time to write this article!! It has really helped me a lot, and your writing style has transformed something abstract and difficult to understand for a beginner like me into something easy to digest and logical!! Thanks once again; your efforts are greatly appreciated by me and all the rest!! = ))

Expert Comment

by:Bitsqueezer
Hi,

good article about the difference between "internal" and "external" keys, in the sense of "exposed" or "hidden" keys as harfang said above.

But I completely disagree with the statement that "PKs are not 'special'". That is in most cases wrong. The big difference between PKs and other indices made unique (where logically there is no difference; you can apply a PK to an autonumber field and also a unique key) is the way they influence the physical order of records in a database.

In most relational databases, including Access and SQL Server, the PK is a "clustered index" (in SQL Server you can create PKs without clustering, as far as I remember, but in Access you can't, and in SQL Server the default is that a PK is always a clustered index).

That means: if you have an ID field with "1,2,3" as records, the physical order of the records will be "1,2,3". If the ID field used for the PK is not an autonumber field, so that you can change it manually, and you change the first record to "4", then the physical order of all records will change; after changing the ID it will be "2,3,4" and not, as with a simple unique index, "4,2,3" (meaning the physical order would not have changed).

This is a VERY important special property of PKs which is often forgotten in such discussions, as it makes a big difference in performance. Using a "natural" key with information like a phone number or a currency code means: if it is changed, the complete table will be reordered physically on the data storage volume. This is the most common reason why developers mostly use a PK on an autonumber field: it guarantees that the physical order will never change, as the next saved value is of course higher than the last. For the same reason it is generally not a very good idea to use a GUID as PK: it is a very random value, very long, text based, and useless for communication (it can seldom be used as, for example, a customer ID). Used as a PK, that means it is guaranteed to reorder the physical structure of the table each time you write a new record.
(Access does not have the same problem here so much: it reorders the records physically only when you use "compact & repair", but a database server does this permanently (of course with optimization about the point in time).)

The physical sorting of a table in the order of a PK is not "just for fun" - it could have been created like a normal unique key without clustering, but the idea behind the PK is to make it the fastest index of all. This is the reason why it IS the Primary Key, not because it is the "most often used" unique key or the key with the highest importance. If that were the case, the PK would have no advantage over other unique keys; you could create more than one unique key and use one this time and another the next, with no difference. In that case, the statement would be true that you can use a combination of two ID fields in an m:n table as PK without any difference.
The reason why it normally should NEVER be a combined index is exactly the physical sorting of the table. m:n tables often change their contents, and each time you change or add data to such a table it would be, for example (in order of creation), "2,3 / 4,2 / 1,3" (each one a pair of IDs in an m:n table). So if you use a PK on both columns, the table will be resorted as "1,3 / 2,3 / 4,2", which is not needed for any reason - no one ever looks into such a table and wants such a sort order. But if you instead add an autonumber PK (although such a PK is mostly never used for a relation of its own), you have a PK which never changes; the only exception would be the deletion of a row.

Another reason why combined columns as PK are mostly a bad idea is that you would always need both columns (in a 1:n relation) to uniquely identify the wanted row in the other table; that's unnecessary overhead.

In real-life scenarios it is most often the case that if you do not have a single column with a PK that never changes and in most cases has no "natural" meaning (an exception would be, e.g., a CustomerID), you will always add one later in the development of an application/database. It is even often the case that you would create an additional artificial ID with a meaning, although you already have an autonumber ID, in cases where you must make sure that the created unique number has no gap in the numbering - like invoice numbers, where a finance office would not be satisfied if you have a gap in the numbers and cannot explain why a number was never an invoice number (because someone started to add a record, the ID was created, and he undid that - the ID will not be used again in an autonumber field).

I also sometimes thought about using a meaningful attribute as PK when it is sure to be unique, but in most cases I regretted that later for the reasons stated, so nowadays I don't think much about PKs: I always create a long integer autonumber field for each and every table and never have problems doing so.

Cheers,

Christian
Open Database Connectivity (ODBC) was established based on the idea that many disparate applications needed a "consistent" way to talk to their databases in various RDBMS back-ends. It also allows different RDBMS systems to exchange data directly without having to resort to text-file imports and exports: the database can be connected to directly to extract the data.

Note that this is oriented towards Windows 2003, Windows 2000, and Windows XP 32-bit (x86) architecture. I have not verified these steps on the following operating systems: Windows 2008, Vista, or Windows 7.
 

1. Acronyms and Definitions:

DSN: Data Source Name
MDAC: Microsoft Data Access Components -- the set of SQL and other default drivers. With Windows XP SP1 and higher, they are automatically updated to the newest version via Windows Update Services or WSUS.
HKLM & HKCU: HKEY_LOCAL_MACHINE & HKEY_CURRENT_USER -- references to sections of the registry hive.
RDBMS: Relational DataBase Management System

2. Manually Configuring:

There are two ways to start the ODBC Administrator:
-->      Go into the Control Panel --> Administrative Tools --> Data Sources (ODBC)
-->      Alternative method: Start --> Run --> type ODBCAD32 --> OK
On a 64-bit operating system you need to start the ODBC Administrator via %WINDIR%\SysWOW64\odbcad32.exe to create ODBC data sources usable by 32-bit (Office) applications.

The first screen that you will come to is the User DSN tab.
ODBC Admin User DSN Tab
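Besides the ODBC Administrator dialogs, a DSN can also be created programmatically from Access VBA using DAO's RegisterDatabase method. A minimal sketch; the DSN, server, and database names below are placeholders, and the attribute keywords shown are the common ones for the SQL Server driver:

```vba
' Create a machine DSN from code instead of the ODBC Administrator.
Public Sub CreateSqlServerDSN()
    Dim strAttribs As String
    ' Driver-specific attributes, one per line (vbCr-separated)
    strAttribs = "Description=Sample DSN" & vbCr & _
                 "Server=MyServer" & vbCr & _
                 "Database=MyDatabase" & vbCr & _
                 "Trusted_Connection=Yes"
    ' True = register silently (no driver setup dialog)
    DBEngine.RegisterDatabase "MySampleDSN", "SQL Server", _
        True, strAttribs
End Sub
```

This is often handier than the GUI when a front-end has to set up its own connectivity on each user's machine.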

Expert Comment

by:Qlemo
If that is on a 64bit system, remember to use the appropriate ODBC Administrator tool, depending on whether you have 32bit or 64bit drivers installed, and which one you want to use for ODBC. The 32bit ODBC Admin is %SystemRoot%\SysWow64\odbcad32.exe .

Expert Comment

by:Mitch Swetsky
I am using a windows 7 32bit client with the runtime package installed.
Introduction

In a modern interface, the user expects all controls to be "smart". The interface should understand a user entry even if it doesn't conform to the obvious logic, if it contains a typo, or when it's a clever attempt to accelerate the process.

This article shows a method to make an Access combo box "smarter". The main example comes from a database using a French interface, and dealing with countries. It's been simplified to a single data table (Données), linked to a table of countries (Pays). Internally, the ISO code is used as key, but the interface normally shows the French country names in full.

However, the source of the data being entered is varied: documents in several languages, the Internet, other databases. Using a simple combo, the user is thus expected to translate on the fly: Uganda to Ouganda, Chad to Tchad, or even DZ to Algeria and HR to Croatia, depending on the source material.

So, if the database uses ISO codes and contains the English names, why can't the combo accept a valid ISO code, or an English name, even if the combo displays only French names? "GR" will work fine: it expands to "Grèce", close enough to "Greece". But "LK" should be accepted for "Sri Lanka", and "Korea" should be understood even if the French call it "Corée".

This article explains how to do that. The demo uses a French interface, but will accept English country names for data entry as well as ISO codes and even partial search strings. It's …

Author Comment

by:harfang

Additional Demo File


There has been a question today about using the “smart combo” technique in a more traditional setting, namely the addition of a new record from the “not in list” event.

The idea is that the user can type “Baggins, Bilbo” in the combo box and that this name is added to the list feeding the combo box. This is straightforward. However, the user might forget the space as in “Doe,John” or correct a typo in the dialogue box used to validate the new name. In both cases, the exact string entered in the combo will not match the new row added to the source.

This demo shows how to overcome the problem. If a new name is validated (if the user presses [OK] in the dialogue), the row source of the combo is rewritten so that whatever it currently contains is accepted and translated to the newly created record.

The user can basically just type “X” and tab out, fill in the last and first names in the dialogue box, and see the new name selected in the combo. This avoids many tedious checks for the programmer, and a few potential error messages for the user.

This demo file is in Access 2000 format, although it was developed in Access 2007.
Q-27776325.mdb

If you find it useful, please vote the article useful instead!

Markus G Fischer — (°v°)

Expert Comment

by:Karen Schaefer
MARKUS IS DEFINITELY AN ACCESS GURU - GOD - AND ALL AROUND GREAT EXPERT.

THANKS.
Applies to: Access 95 and up     Level: Intermediate.

Ever had one of your Access databases stop executing code correctly, or refuse to compile?  Or been unable to create an MDE no matter what?  Or do other strange things related to code?  An undocumented command line switch exists within Access that may help.

Access uses VBA (Visual Basic for Applications) for its code support. The VBA project file, which contains all the code for your Access application, is stored by Access in a JET database and used when needed.  In that project file is a copy of the source code, and also a copy of the same code in a compiled state (if you have compiled your application).  This is done so that at run time code can execute faster, because it has already been compiled.

========================================================
TIP: Always make sure your Access applications are compiled when released.  Your code will execute faster, your app will be snappy, and as a result you'll look better!  Also be aware that there are things you can do in your application that will cause it to become un-compiled at runtime (like switching a form into design mode and altering it).  If you want to make sure your app is always compiled, consider distributing it as an MDE; MDEs lack source code, so the DB must run in a compiled state.  There are drawbacks to MDEs, though, so be aware of those issues when developing.
========================================================
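As a quick aside to the tip above, the current compile state can be checked from code via the Application.IsCompiled property, for example as a pre-release sanity check:

```vba
' Report whether the VBA project is currently in a compiled state.
Public Sub CheckCompiledState()
    If Application.IsCompiled Then
        Debug.Print "VBA project is compiled."
    Else
        Debug.Print "VBA project is NOT compiled - recompile before release."
    End If
End Sub
```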

Often …
By Patrick Matthews (matthewspatrick)

1. Introduction


A very common database task is to compute aggregate statistics (such as Count, Sum, Min, Max, etc.) for some column of data.  For example, the user may need to determine the largest value associated with a particular customer as in the data below:

CustID   SaleAmt
----------------
1        15
1        6
1        92
2        42
3        6
3        17
3        84
3        26



In this case, the SQL is very simple:

SELECT CustID, Max(SaleAmt) AS MaxSale
FROM SomeTable
GROUP BY CustID
 
returns:
CustID   MaxSale
----------------
1        92
2        42
3        84



This is an example of a column-wise aggregation.

From time to time, Access users may find it necessary to compute row-wise aggregate statistics.  This can usually be avoided by building a normalized database schema.  However, at times, the user may be constrained to a denormalized design: the data may be coming from a data warehouse; there may be some performance reason for denormalizing the data; or the Access user may be stuck with another developer's design, and that design cannot be changed.

For example, suppose the data set is similar to the following:

CustID  H01  H02  H03  ...  H24
-------------------------------
1       22   44   66   ...  81
2       7    38   55   ...  99
3       49   12   38   ...  107
4       100  83   9    ...  1



If you have to determine an aggregate statistic based on the values in H01, H02, etc. for each row, there are some techniques available using native Access functionality, but pursuing them can result in difficult and unreadable SQL, if it can be done at all.

Whatever the reason, this article offers two user-defined functions, RowStats and RowStatsFieldList, that will enable Access users to compute the usual aggregate statistics on a row-wise basis.  The article includes the source code for these functions, as well as a sample Access database.
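To give a feel for the approach, here is a deliberately simplified sketch of a row-wise aggregate using a ParamArray. This is NOT the article's actual RowStats code, just an illustration of the idea:

```vba
' Row-wise Max over an arbitrary list of field values.
' Call from a query: RowMax([H01], [H02], [H03], ...)
Public Function RowMax(ParamArray varValues() As Variant) As Variant
    Dim varItem As Variant
    RowMax = Null                    ' an all-Null row returns Null
    For Each varItem In varValues
        If Not IsNull(varItem) Then
            If IsNull(RowMax) Or varItem > RowMax Then
                RowMax = varItem
            End If
        End If
    Next varItem
End Function
```

In SQL this might be used as `SELECT CustID, RowMax(H01, H02, H03) AS MaxH FROM SomeTable`, sidestepping nested IIf gymnastics.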

2. Row-Wise Aggregates Using Native Functions


Expert Comment

by:Eric AKA Netminder
Where was this article about six years ago when I was writing innumerable update queries just to get a report to show four salespeople's commissions?

Outstanding work, Patrick -- and thank you.

ep

Administrative Comment

by:tigermatt
Editor's Choice Awarded. Thanks, Patrick!

tigermatt
Page Editor
ACCESS IS A TOY: Fact or fiction?

Over the years, many have had a love/hate relationship with Microsoft Access; you either love it or hate it.  Many love it for its ease-of-use features and polished interface, which make it one of the best Rapid Application Development (RAD) tools around.  However, many professional developers and IT managers hate it and consider it to be a toy, for a variety of reasons.  Some are justified, but many are not.  How did these two different views come about?

Access has been around now for over fifteen years, and in that time many misconceptions about how it works and what it does have come into being.  In this article we're going to explore some of the fact and fiction of the Access world, and you can decide for yourself if Access should be considered a toy or not.

First, what is it exactly?  Microsoft Access started out as a relational database product designed to produce departmental-level applications running over a Local Area Network (LAN).  What everyone thinks of as Access is actually three separate components: the Access User Interface (UI), Visual Basic for Applications (VBA), and JET, the default database engine.

Over the years, Access has changed fundamentally in a number of ways:
Visual Basic for Applications replaced Access Basic in Access 95.
Database Replication was added in Access 95.
Support of class modules was added in Access 97.
MSDE/SQL Server was supported as an alternate database engine to JET in Access 2000.

Expert Comment

by:badgerfire
I have written several applications in Access.  And you can deploy them using tools provided by Microsoft for the developer (at additional cost).  But I always tell the executives that if we want to take this to the next level, we need to use a true development tool.

The biggest reason is the stink Access gets in the industry.  But in my industry a funny quandary has developed.  Excel, an honestly weaker tool, gets high praise.  There are two things that cause this paradox.  

The first is the ease and versatility of Excel.  Why would you want to move your applications to Access when Excel does a mighty fine job?  Excel has a full chart wizard, making it easy for the novice user to create stunning charts.  Excel even uses the VBA engine, so you can code complete applications in it.  You can secure the spreadsheets.  Really, why do anything in a more complex and cumbersome tool?

The second is the complexity, and the lack of development on the Microsoft side.  Access is a complex tool and can do a lot with multiple data sources.  It is an excellent tool for taking sources from different data engines and combining and reporting on them.  Access provides quick and rapid development, but sometimes at a price.  If you are a novice Access user, the VBA system can be daunting.  Where in Excel you whipped something out in a couple of hours, you find yourself stumped and working for days.  What a user like that does not see is that the days spent prepping the system to work properly will save him the days of hours he was spending manipulating Excel.

The other part of the problem was Microsoft's decision not to include a chart interface as robust as the one in Excel.  The funny thing about that is that the core charting engine is the same in both applications.  Yet an Access developer finds that he has fewer options in the chart wizard than an Excel user.  To truly manipulate the chart system in Access you need to delve into VBA and the object model, something a novice user would not do and an intermediate user may find frustrating.

It sounds like I am touting the benefits of Excel over Access.  I am not.  Excel has its place and so does Access.  I consider Access an interim solution, a test bed if you will, to greater things.  The problem arises when executives see the test bed and assume it is a final solution.  And to me there is no final solution.  You should always improve the application by listening to the users and applying well thought out, value added features.

Expert Comment

by:developingprogrammer
Thanks for writing this article Jim, it really helps a lot! = )
Introduction

In database applications, it's often useful to retrieve a single item of information from a table: for example, the full name of a company from its ID number, the current price of a product, or the number of visits of a patient. When this is needed in a query, several SQL techniques are available to obtain and use this information. In the interface, or from Visual Basic in Access, the domain lookup functions are used: DLookup, DSum, DCount, etc.

The present article shows that DLookup is versatile, but comparatively slow. If the area of application is restricted and focused on very simple table lookups, it becomes possible to write functions with much better performance. However, that possibility is subject to the following constraints:

  »  The table is in a Jet (Access) database.
  »  The search doesn't involve wild-cards or any calculations.
  »  The field(s) in the criteria are (or should be) indexed.

If these conditions are met, a single item of information can be retrieved from a table up to 100 times faster than by using DLookup. This sort of speed opens up new possibilities, and offers solutions to problems typically considered too complex for Access.

Incidentally, the first sections show how plain DLookup can be implemented in stand alone Visual Basic or in VB.NET -- for that matter in any program using the Jet Engine as database manager -- when Access built-in functions are not available. The other domain functions can …
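To illustrate the kind of fast, index-based lookup the article develops, here is a hedged sketch using DAO's Seek; the table, index, and field names are placeholders, not the article's code:

```vba
' Seek-based lookup on a local Jet table. Assumes an index named
' "PrimaryKey" on the searched field. Seek does NOT work on
' linked (e.g. ODBC) tables -- the recordset must be table-type.
Public Function FastLookup(ByVal lngID As Long) As Variant
    Dim rs As DAO.Recordset
    Set rs = CurrentDb.OpenRecordset("SomeTable", dbOpenTable)
    rs.Index = "PrimaryKey"
    rs.Seek "=", lngID
    If rs.NoMatch Then
        FastLookup = Null          ' no matching record
    Else
        FastLookup = rs!SomeField
    End If
    rs.Close
End Function
```

Because Seek goes straight to the index, this avoids the query-parsing overhead that makes DLookup comparatively slow.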

Expert Comment

by:LukeB
Hi Markus

Great article. I have an Access FE with SQL Server 2005 backend tables linked into it. Of course, DAO's Seek does not work with these. Could you provide an example FE that uses Seek on these sorts of backend links, please?

regards

Luke

Expert Comment

by:newholyman
fast search
Introduction

Concatenation is always explained in a footnote, or as a section of something else. It is rarely explained as such, as a concept. I was making exactly that mistake in another article, when the section started to outgrow the core topic... I still needed some time to realise that concatenation is a topic.

This article is specifically written for expressions on reports (and forms). Everything said here holds true for queries (the syntax is slightly different) but not everything for Visual Basic. In particular, the handling of quotes and newlines is very different.

Forms are normally meant for editing, and concatenated results are not editable. Although concatenation has its place on forms, this article will use 'reports' instead of 'forms and reports' in most places.

It should prove useful to anyone who needs to build text from text elements in reports (or forms), and it might even hold one or two surprises for experienced Access developers.

The examples and suggestions aren't version-specific.



What is Concatenation?

The word is used in computer languages to describe the process of building text from smaller elements. For example, the name "Prof. Abraham T. Callahan Jr." is likely to be stored in a database in several different fields: first name, initial, last name, suffix, title... When the full name is required, the individual fields need to be assembled again, not forgetting the spacing and perhaps the …
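Much of what follows turns on how the two concatenation operators treat Null, which is easy to verify in the Immediate window:

```vba
' & ignores Null; + propagates it. This difference drives many
' report expressions (the expression below is illustrative).
Public Sub DemoConcat()
    Debug.Print "Hello " & Null          ' prints "Hello "
    Debug.Print IsNull("Hello " + Null)  ' prints True
    ' Typical report control source: the separator and the suffix
    ' vanish together when [Suffix] is Null:
    '   =[LastName] & (", " + [Suffix])
End Sub
```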

Expert Comment

by:Nick67
Hey,

Something else that could use an entire article is format
<@;;[Blue]"-- enter your name --">

That looks entirely cool...but I have no idea, at all, about the syntax of that line, or good resources to research it either.  It's always like pulling teeth when I try creating custom formats.

Also,
With your multi-line concatenation, where are you doing that?
In VBA code, you use vbCrLf to send the text to a new line.
Are your examples on an unbound textbox on a report?
And you're using CTRL+Enter like in an Excel cell to move to a new line?
That isn't at all clear.

But,
Thank you for the pointers on using + instead of &
Mega-quotes drive me nuts--they're tough to debug.
But nested IIF's and Nz's aren't a lot of fun either.

Nick

Author Comment

by:harfang
Yes, formatting could be the topic of another article. It's all in the help pages, but they don't explain what you can do with formatting. A “how to” approach to formatting would be useful...

Multi-line concatenation was the reason for writing the article, initially a (long) section in Printing Labels. So, yes, the expression is meant for a text box on a report (when it's problematic or inconvenient to use “can shrink” techniques). As such, it is entered in the property sheet, the control itself, or the “zoom” box, all of which require Ctrl+Enter to create a new line.

Rereading that section, I see that it could have been explained better; I hope I haven't scared away too many readers!

(°v°)
Introduction

Printing labels is one of those things that can be surprisingly fun or extremely tedious, depending on the program being used.

The first time I had to teach "labels", it was fun. The program was WordPerfect -- some antique DOS version without even a print preview feature -- and it was fun because it was easy, and it worked. Before the morning break, every student had printed a first sheet of "fake" labels, in reality photocopies of an original with hand-drawn rectangles.

Once I (reluctantly) switched to Word, the fun was gone. Labels had become complicated: something to be feared and somewhat magical, like many other features of that program. It was no longer possible to cover labels, envelopes, columns, and basic mail-merge in one day; labels alone required a day.

This article is about Access. What will it be: fun or tedious?

If you have used the "label wizard", you probably think the latter. There is obviously some magic involved: sometimes it works, and other times you are informed that "some data may not be displayed" or that "the section width is greater than the page width", you get the wrong paper size, or you experience some other mishap.

There are explanations for all of them, just as tedious as the symptoms, but their fundamental cause is simple. The "wizard", just like Word, doesn't use virtual pages, but tries to solve everything using the physical page. A typographer will automatically treat a label …

Expert Comment

by:Nick67
The labels print OK and well within the label itself, but they start to creep a little farther down the label with each one.
A 4145 is technically called 2.5" Wide x 15/16" Long
The actual fold on the label measures to 1.0"
The driver will not believe that the Okidata can print to a 15/16" page, so I am stuck creating a 1" high custom paper type.
I can create a 15/16" high paper size, but the Access GUI won't permit it to be selected as a paper size
I can try to force Access to use via VBA--but that is tough given the physical environment.
Any time you touch the report in design view it reverts to Letter paper size, so I have code to force it back to a proper page size

One Okidata is on WinXP-32 bit and shared while the other is on Win7-64 bit and shared.
This makes deploying a common front-end that will just work...challenging

The labels themselves print singly, one at a time.
Reports that need to be snail mailed are generated.
A form is opened with a combo box to select one of 2600 clients and one of three possible mailing addresses.
A command button is pushed, a label prints, a new client is selected.
Repeat, until the mailing labels have all been printed.
Stick them on envelopes, stuff the envelopes and mail.

So, the report is strictly a one page affair.
Getting them to print in exactly the same place is the challenge :)

Nick67

Expert Comment

by:Sean Ster
If Access won't let you set small margins with your current printer, a nice little trick is to print the labels to the Acrobat PDF printer. Its driver has no problem using tiny margins. You can also preview the PDF on screen, zoom it to exact width, and hold the label paper up against the screen to check for perfect alignment. Then print the PDF to your printer at 100% size.
Introduction

Many Access users try, at one time or another, to use domain lookup functions to answer questions which are just outside of their application range. Many ideas present themselves, but after testing them and reading the relevant help pages, nothing seems to work. But it always seems so close!
[ Read on Experts-Exchange ]

Question, by Asker: I need to find the last appointment of a patient.

Expert: Sure: DMax("AppDate", "Appointments", "PatientID=101") -- Enjoy!

Asker: Thanks, but that gives me the date, I need the appointment ID.

Expert: Oh... wait... I think I got it: You need to get the date like before, store it in a date variable (unless there is no previous date, then it would have to be a variant, and you can test for Null). Say the variable is datPrev, then lookup the ID, with something like this : DLookup("AppointmentID", "Appointments", "PatientID=101 And AppointmentDate = #" & datLast & "#"). Notice the #, this is needed for dates. -- Good Luck!

Expert: Typo. Meant datPrev.

Asker: Huh? This seems awfully complicated. What do I type in the text box?

Member: Just use DLast("AppointmentID", "Appointments", "PatientID=101")

Asker: Thanks, Member! Exactly what I was looking for!

Expert: With all due respect, Member, have you read the help page on DLast? E.g. the words "return[s] a random record" and "[use it] when you simply need any value from that field"?

Member: Try it!
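
For the record, the Expert's two-step approach can be wrapped in a small function. This is a sketch using the table and field names from the dialogue, and it assumes AppointmentDate holds no time component:

```vba
' Sketch: return the ID of the patient's latest appointment, or Null.
Public Function LastAppointmentID(ByVal lngPatientID As Long) As Variant
    Dim varDate As Variant
    varDate = DMax("AppointmentDate", "Appointments", _
                   "PatientID=" & lngPatientID)
    If IsNull(varDate) Then
        LastAppointmentID = Null   ' patient has no appointments
    Else
        LastAppointmentID = DLookup("AppointmentID", "Appointments", _
            "PatientID=" & lngPatientID & _
            " And AppointmentDate=#" & Format(varDate, "yyyy-mm-dd") & "#")
    End If
End Function
```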


Author Comment

by:harfang
I would like to thank again aikimark, the acting editor for this article, for his excellent suggestions and thorough reviews. I understand and appreciate the amount of work this represents, and appreciate how much it improves the quality of the published result. I'm also grateful for the immediate "EE Approved" status, which really means "approved by the acting editor(s)".

During the editorial process, I already wanted to link to this article in two or three question threads, and ended up just posting the relevant function. One of them actually made me revisit the code, and add the evaluation of Access objects used in the arguments, making the code more useful and less error-prone.

I hope the article will prove to be useful.

Markus -- (°v°)
Introduction

I often hear the opinion that "Access isn't fit for serious applications". Finance is of course very serious... and extremely serious in Switzerland; needless to say that I never expect much business from that particular branch. The main reasons are of course security and volume. You don't want numbered account information travelling on a laptop, and you won't handle a million daily transactions or fifty thousand cash distributors with Access. Let's be serious.

But there are also many small financial institutions like fund managers, and clients of finance like pension funds. Their needs in terms of data management often come down to a few dozen items, each with a couple of thousand numbers in a small selection of currencies, and only a few active users. They still hold the opinion that "Access can't handle it", even while happily attempting to do so with Excel...

When analysing a project in this context, it appears that the main, or only remaining, problem is the difficulty of performing lookups. In a spreadsheet architecture, this is solved by dependency chains. Since financial historical data is very much static, the update chain of dependent cells is rarely triggered, so the overall performance is good even when numerous complex lookup operations are involved. Not so with a database architecture, where all derived data is recomputed every time a query is run.

This article will show that the nature of financial …

Author Comment

by:harfang
Thank you for the very fast "EE approved" and "Community Pick" awards; I didn't expect them, or at least not before I got some positive votes from the "was this question helpful" survey. I'm grateful to all "community members" for their approval and welcome.

The acting editor was again aikimark, who did an excellent job in reviewing the content and the language. More than language, in fact, because he pointed to some flaws in graphical semiology as well, when the charts and illustrations could be misunderstood. If you do understand the article at all, please join your thanks to mine for his time and his pertinence.

If the article also helped, please click 'yes' above. If it helped only a little, click only a little!

(°v°)
Summary
It all started when I found a useful little code snippet on this site for measuring time with millisecond precision. I got carried away and developed it into a more complex timer solution, which I used to measure the performance of a form in MS Access. Then I submitted it as an article, and the comments I received made me work on it a lot more. Now I think it is a super deluxe timer solution... :)

This timer implementation
- is capable of measuring time using multiple timers identified by string labels
- is capable of maintaining statistics for each timer (min, max, average, total)
- can exclude the first measurement from statistics calculation
- is accurate to 1 ms resolution
- is extremely easy to use.

The code is packaged now in a MS Access class module and comes with a wrapper module to make it easily accessible for use from anywhere in your code without needing to create an object. I have also attached a test module to demonstrate the use.

About some of the more important design decisions
Originally the class used the GetTickCount() API call; however, as 'aikimark' pointed out to me, this API has an accuracy problem: its value is not updated frequently enough to truly represent millisecond resolution. So instead of GetTickCount(), the timeGetTime() API is now used, which can be configured to provide true 1 ms accuracy. This, however, necessitates putting the code in a class module, as only this way can it be ensured that on Terminate, …
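
The API plumbing described above might look like this inside the class module (a sketch; 32-bit declarations, add PtrSafe for 64-bit Office):

```vba
' Sketch of the timeGetTime-based timing described above.
Private Declare Function timeGetTime Lib "winmm.dll" () As Long
Private Declare Function timeBeginPeriod Lib "winmm.dll" (ByVal uPeriod As Long) As Long
Private Declare Function timeEndPeriod Lib "winmm.dll" (ByVal uPeriod As Long) As Long

Private Sub Class_Initialize()
    timeBeginPeriod 1   ' request true 1 ms timer resolution
End Sub

Private Sub Class_Terminate()
    timeEndPeriod 1     ' restore the default resolution on teardown
End Sub

Public Function ElapsedMs(ByVal lngStart As Long) As Long
    ElapsedMs = timeGetTime() - lngStart   ' milliseconds since lngStart
End Function
```

Putting timeEndPeriod in Class_Terminate is exactly why a class module is needed: the cleanup runs automatically when the object goes out of scope.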

Expert Comment

by:Matthew Smith
Was an mdb file with the code supposed to be attached? It seems to be missing.
"Top 10 customers by salesperson" sound familiar?  Yes!  I would expect so, and am sure there are a number of other great business analyses with a similar principle now flowing through your mind.

So, how do you query this data in your SQL system?  

No answer; no worries!  This article aims to answer that question for you.  As the title suggests, this comes down to knowing a customer's rank with respect to other customers for a given salesperson and then selecting just those customers with a rank no greater than 10.

Presuming you already have SQL knowledge, we will go through the different methods of getting this rank and subsequently selecting our data by it in various database systems.  To this end, you will find the article organized as follows to ease the navigation to the content most pertinent to your environment.

The Data.
Everyone Ranks The Same!
MS SQL and Oracle Move Up The Ranks!
MS Access Dense to Ranking.
MySQL Emulates The Best of Them.


1. The Test Data.


The following attachments are a set of SQL scripts to create the table structure and data we will be working with throughout this article.
MS SQL:
rankdata.sql.txt
MySQL:
rnkmysql.sql.txt
Oracle:
rankora.sql.txt
And to include our MS Access friends:
rankmdb.zip
For those comfortable just running the script appropriate to your environment and who understand the structure to be used in this article's queries, grab the file you need and move on to the next section; otherwise, I will briefly go through the table design and a sample of the data.
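
As a preview of the windowed form covered in the MS SQL and Oracle section, the core pattern looks like this (the table and column names here are illustrative, not the article's script):

```sql
-- Top 10 customers per salesperson via a window function
-- (MS SQL Server / Oracle syntax; names are illustrative).
SELECT SalespersonID, CustomerID, TotalSales
FROM (
    SELECT SalespersonID, CustomerID, TotalSales,
           RANK() OVER (PARTITION BY SalespersonID
                        ORDER BY TotalSales DESC) AS SalesRank
    FROM SalesSummary
) ranked
WHERE SalesRank <= 10;
```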


Author Comment

by:Kevin Cross
Thanks, Ray! Very much appreciated. --Kevin

Expert Comment

by:jwfollette
Kevin, great article and a very valuable approach (as you have shown me).  Thanks again for the help!
Applies to: All versions of Access   Skill Level: Beginner

  One of the common questions we see in the Access zone is how to get a value from a table.  One way to do that is with a set of built-in functions called Domain functions.  These functions work against a set of records (a domain).  Most of these functions perform some type of aggregate operation (Sum, Count, Avg, etc).  The domain functions are DAvg, DCount, DLookup, DFirst, DLast, DMax, DMin, DStdev, DStdevp, DSum, DVar, and DVarp.  You can find specifics on each of these in the on-line help.

 With all of these functions you can carry out the same logic (i.e. summing a value for a given field) yourself by opening a recordset, scanning through the records, and analyzing them appropriately as you go or by executing a SQL statement (a query).  In some cases it makes sense to do this, but it is a lot of extra work in terms of coding.  The beauty of the domain functions is that they are so simple to use.  They wrap up commonly needed SQL logic in a neat easy to use function.  What is also nice is that they can be used anywhere an expression is allowed (where Access expects a value).  For example you can use a DLookup() for the controlsource of a text control on a form.
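
For instance, a text box could display a looked-up value directly (table and field names here are illustrative):

```vba
' As a ControlSource expression (names illustrative):
'   =DLookup("CompanyName", "Customers", "CustomerID=" & [CustomerID])
' The equivalent in code, without opening a recordset yourself:
Dim varName As Variant
varName = DLookup("CompanyName", "Customers", "CustomerID=" & Me.CustomerID)
```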

  One popular myth in the Access world is that domain functions are always slower than other methods.  This is not true.  In some cases, the domain functions can be as fast as or faster than other methods.  If performance is a consideration, then you…

Expert Comment

by:STERLING SUTTON
Best explanation I've found yet. Thank you for clarifying the quotes in Domain Aggregate functions.

I'm currently fighting with a DLookup function with criteria that references a variable of date type. I can't get it to work. How do I get it to recognize the date? It isn't a string. I tried putting the field in quotes, but that made the comparison useless.

"[Expiration_Date]=" & [Expiration_Date]

Got it.

"[Expiration_Date]=#" & [Expiration_Date] & "#"

Author Comment

by:Jim Dettman (EE MVE)
<<"[Expiration_Date]=#" & [Expiration_Date] & "#">>

  Nice...you figured it out.   And glad you enjoyed the article.

Jim.
Article Description

Many new developers struggle with utilizing SQL when working with a SQL backend and an Access frontend. Running stored procedures on SQL Server vs. canned queries in MS Access will increase performance of your application.

In this article I will review step-by-step how to pass parameters from a user form in MS Access to a stored procedure in SQL Server, returning the results.

Part One: Creating the Pass-Thru Query
Part Two: Dynamically Changing the SQL
Part Three: Create Test Report
Part Four: Passing Values from User Form

Setup for the Example

1. You will need to create a table tblTest that has the following fields:

EmployeeName  (Example: Adria Bynum)
HoursLogged (Example: 60)
ReportMonth (Example: Dec-08)


2. You will need a stored procedure on the server that includes a parameter. This will allow you to pass the parameter from MS Access to SQL. Here is an example of a stored procedure that you may create:

CREATE PROCEDURE [dbo].[upMySprocName]
(
@MyParam1 Varchar(50)
)
AS
SELECT *
FROM tblTest t
WHERE ((t.ReportMonth)= @MyParam1)

In this example, the stored procedure will be pulling everything from the table for a specific reporting month.

Part One: Creating the Pass-Thru Query

About Pass-Thru Queries:

Pass-Thru queries bypass JET and are a direct connection to the server. The results of these queries are read only but they are very useful as data sources for reports and non-updateable form views …
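
In DAO, creating such a pass-through query might look like this (the connection string and object names are illustrative):

```vba
' Sketch: create a pass-through query in DAO.  Setting Connect is what
' makes the QueryDef pass-through; the SQL is sent to the server
' verbatim, so it uses T-SQL syntax.
Dim qdf As DAO.QueryDef
Set qdf = CurrentDb.CreateQueryDef("qryPT")
qdf.Connect = "ODBC;DRIVER={SQL Server};SERVER=MyServer;" & _
              "DATABASE=MyDB;Trusted_Connection=Yes;"
qdf.ReturnsRecords = True
qdf.SQL = "EXECUTE upMySprocName 'Dec-08'"
qdf.Close
```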

Expert Comment

by:Nick67
For my part, I would like to see you amend this article significantly, as it gives short shrift to many complex issues.

"Running stored procedures on SQL Server vs. canned queries in MS Access will increase performance of your application." "Pass-Thru queries bypass JET and are a direct connection to the server. The results of these queries are read only but they are very useful as data sources for reports and non-updateable form views and offer improved performance when working with a SQL backend."

That is a vast over-simplification of a very complex subject.  The Jet/ACE engine has become very sophisticated.  There is very little, if any, performance difference between an Access SELECT statement from linked tables, an Access SELECT statement from a linked, indexed view and a stored procedure when the query does not wind up invoking a full-table download to the client-end.  More accurate and more succinct would be to say "Converting queries that you find to be long-running in Access can yield significant performance enhancements."  because that is certainly true.

For those who know nothing about the subject and are exploring it, they are likely to have installed SQL Server Express Edition.  The 2008 R2 version has gone away from the venerable Northwind sample database to PrescriptionContoso.  To have built your example from the MS sample database installed in whatever YOU are using would have made your article more useful.  You then could have provided an mdb/accdb sample that linked to that database.  The reader would then need to use the Linked Table Manager to re-connect their tables locally, but you have discussed DSN requirements in small detail already.  To then have provided sample sprocs, where in testing them  vs. their Access query counterparts you would be able to demonstrate clear performance enhancements would have been outstanding.  Cases where that can be the case are things like joins on unindexed text fields, and the use of LIKE in the WHERE clause on unindexed (usually text) fields as well.  Research would have suggested other examples, as well.  Very complex UNION queries come to mind.

Your code is sound, but you didn't comment it at all.  You didn't explain to the reader that every saved query can be accessed in code through the QueryDefs collection.  Nor did you explain that you can permanently alter the SQL statement of that query using your code.  Nor did you point out to the reader that SQL Server has different requirements for enclosing parameters: strings must be enclosed in single quotes; dates should also be passed as strings and MUST be in US date format if sent in as 'xx/xx/xx'; and SQL Server's wildcard is '%', not '*'.
All of which would have been really good.

And then you conclude with the questionable "In this example, your user is able to select a report and report month for the data that they want returned. The report month is successfully passed to the SQL stored procedure thru the Pass Thru query, bypassing JET and improving performance."

Bypassing Jet/ACE does not always enhance performance.  Bypassing Jet/ACE can result in performance enhancements when a) the nature of the query would force Jet/ACE to request all the data in all the tables involved to complete the processing, b) the amount of data requested is large and/or the network is slow, and/or c) the server has the horsepower to complete the query more expeditiously than the client.

Your article is a good first draft, but only that.  Don't take my word for it: as of now, 5523 people have viewed it, and no one has voted it helpful, although that may be a site artifact, as you first submitted it years ago, and perhaps this commentary comes out of the blue.  I do not mean to offend, but rather to constructively criticize. I am looking at the article because it was linked in my EE Newsfeed, and it is now likely drawing significant traffic.  That's my 2 cents, anyway.

Nick67

Expert Comment

by:stevengrogan
Private Sub cmdButton_Click()
Dim strQry as string
Dim strSQL as string
Dim strOldSQL as string
Dim strReport as string

On Error GoTo tagErr

            strQry = "qryPT"
            strSQL = "execute upMySprocName"
            strOldSQL = ChangeSQL(strQry, strSQL & "'" & _
                    Forms![frmTest]![cmbReportMonth] & "'") 'etc. for more parameters
            strReport = "rptTest"
           
            DoCmd.OpenReport strRpt, acPreview
           
            Exit Sub

tagErr:
            msgbox err.description

End Sub



There is a serious error in your code here. The call to open the report should be:
DoCmd.OpenReport strReport, acPreview



Your code references a nonexistent variable: strRpt is never declared or assigned (strReport is).

Also, you may want to reorder some of the steps, as you reference adding function code to a form whose creation isn't covered until farther down in the instructions.
