Cam Raben
asked on
need help with an MS Access query
I need a query that
1) looks for all unique instances where the combination of “column A + column B” is the same,
and then
2) displays all columns but only those rows where “column A” begins with the letter A, B, or C,
and then;
3) of those displayed rows, places the contents of “column E” into “column G” only if “column C” contains the text “Fe” somewhere in its field,
and finally,
4) places a less than sign (<) before the number in “column G” only if “column F” contains the letter U and "column C" contains the text “Fe” somewhere in its field.
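The four rules can be sketched outside Access to make the logic concrete. This is an illustrative Python sketch, not the actual query; the dictionary keys "A", "C", "E", "F", "G" are stand-ins for the columns named above, and rule 1's grouping is assumed to have already happened:

```python
def transform(rows):
    """Apply rules 2-4 to rows that survive rule 1's grouping."""
    out = []
    for row in rows:
        # Rule 2: keep only rows where column A starts with A, B, or C.
        if row["A"][:1].upper() not in ("A", "B", "C"):
            continue
        # Rule 3: copy column E into column G when column C contains "Fe".
        if "Fe" in row["C"]:
            row["G"] = row["E"]
            # Rule 4: prefix "<" when column F also contains "U".
            if "U" in row.get("F", ""):
                row["G"] = "<" + str(row["G"])
        out.append(row)
    return out
```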
Col A Col B Col C Col D Col E Col F Col G Col H col I
LOCATION DATE ANALYTE PQL RESULT QUALIFY Fe Mn Zn
Asite 1 1/1/14 Fe 0.1 220
Asite 1 1/1/14 Mn 0.05 33
Asite 1 1/1/14 Zn 0.001 20 U
Asite 1 7/31/09 Fe 0.1 100
Asite 2 3/14/09 Zn 0.01 20
Asite 2 6/8/12 Mn 0.1 50 U
Asite 2 6/8/12 Zn 0.05 33
Thanks in advance!
ASKER
Unfortunately, my SQL is not strong enough to translate your answer into a usable product. I understand there will be a select statement built into an append statement. Can you help? My table name is data.
Thanks in advance.
Given the sample data you provided, what do you want the final solution to look like?
ASKER
See attached. This is what I need to end up with, based on the example I provided above.
for-EE-072814.xlsx
I don't see the requirements for the MN and ZN columns in your description above.
I think what you really need is probably a crosstab query based on something like:
TRANSFORM First(Temp.ColValue) AS FirstOfColValue
SELECT Temp.Location, Temp.MyDate
FROM (SELECT [Location], [MyDate], [Analyte] as ColHead, IIF([Qualify] = "U", "<", "") & cstr([Result]) as ColValue
FROM tbl_Chemicals
UNION ALL
SELECT [Location], [MyDate], [Analyte] & "-PQL" as ColHead, cstr([PQL]) as ColValue
FROM tbl_Chemicals
) AS Temp
GROUP BY Temp.Location, Temp.MyDate
ORDER BY Temp.Location, Temp.MyDate DESC
PIVOT Temp.ColHead;
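The crosstab above works in two steps: the UNION ALL subquery stacks (Location, MyDate, ColHead, ColValue) tuples, and TRANSFORM/PIVOT then spreads ColHead out into columns, keeping the First() value per cell. The same reshaping can be sketched in plain Python (field names mirror the query; purely illustrative):

```python
def crosstab(records):
    """records: dicts with Location, MyDate, Analyte, Result, PQL, Qualify."""
    pivot = {}
    for r in records:
        key = (r["Location"], r["MyDate"])
        row = pivot.setdefault(key, {})
        # "<" prefix when Qualify is "U", as in the IIF() expression.
        value = ("<" if r.get("Qualify") == "U" else "") + str(r["Result"])
        row.setdefault(r["Analyte"], value)              # First() semantics
        row.setdefault(r["Analyte"] + "-PQL", str(r["PQL"]))
    return pivot
```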
ASKER
I am getting "invalid use of null" error. ?
ASKER
i could send the actual database if that would help.
Don't send the entire database, just the table that we are dealing with above. Chances are there is a NULL in the [Result] column, so trying to transform that to a string (so I can concatenate the "<" to some values) is generating an error.
Try changing:
IIF([Qualify] = "U", "<", "") & cstr([Result]) as ColValue
to:
IIF([Qualify] = "U", "<", "") & cstr(NZ([Result], "")) as ColValue
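The NZ() guard matters because CStr(Null) raises "Invalid use of Null" in Access, while NZ([Result], "") substitutes an empty string first. The equivalent null-safe logic, as an illustrative Python sketch:

```python
def col_value(qualify, result):
    """Mirror IIF([Qualify]="U","<","") & CStr(NZ([Result],""))."""
    prefix = "<" if qualify == "U" else ""
    # NZ-style coalesce: a missing result becomes "", never an error.
    return prefix + ("" if result is None else str(result))
```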
ASKER
I get the same error. Attached is the database. (Note that there are more columns than the simplified version I described.....once I understand how to transform this, I will incorporate the other components of the table.)
ASKER
Here it is.
for-EE-072814.accdb
Can you save that file in 2003 format? I'm running 2007 and am getting an error when I try to open the file.
I would also recommend a compact and repair before you post it here.
ASKER
I compacted and repaired it and saved it as a 2003 mdb version.
for-EE-072814.mdb
SOLUTION
1. Create a table of eligible groups
I created a tblEligibleLOCCODEs table with this make-table query:
SELECT TCampProject.LOCCODE, TCampProject.MYDATE INTO tblEligibleLOCCODEs
FROM TCampProject
GROUP BY TCampProject.LOCCODE, TCampProject.MYDATE
HAVING Count(TCampProject.LOCCODE)>1;
2. Run the Update query
Joining the tblEligibleLOCCODEs table to the TCampProject table, I can now do the updates with this query:
UPDATE TCampProject INNER JOIN tblEligibleLOCCODEs
ON (TCampProject.MYDATE = tblEligibleLOCCODEs.MYDATE)
AND (TCampProject.LOCCODE = tblEligibleLOCCODEs.LOCCODE)
SET TCampProject.Fe = IIf([QUALIFY]="U","<","") & [Result]
WHERE (((TCampProject.LOCCODE) Like "[ABC]*") AND ((TCampProject.ANALYTE) Like "*Fe*"));
I could do the update in a single query, but the DCount() function is slow. I could have compensated by using Harfang's fast lookup function.
Please note that your existing Fe column is a long integer. Trying to prepend the "<" character causes a failure to update that row. If you wanted a select query with an Fe column, it would look like this:
SELECT TCampProject.ID, TCampProject.SAMPNO, TCampProject.LOCCODE,
TCampProject.LOCATION, TCampProject.MYDATE, TCampProject.OWNER, TCampProject.COUNTY,
TCampProject.ANALYTE, TCampProject.PQL, TCampProject.RESULT, TCampProject.QUALIFY,
TCampProject.PROJECT, TCampProject.VALDATE, IIf([QUALIFY]="U","<","") & [Result] AS Fe
FROM TCampProject INNER JOIN EligibleLOCCODEs
ON (TCampProject.MYDATE = EligibleLOCCODEs.MYDATE) AND (TCampProject.LOCCODE = EligibleLOCCODEs.LOCCODE)
WHERE (((TCampProject.LOCCODE) Like "[ABC]*") AND ((TCampProject.ANALYTE) Like "*Fe*"));
This assumes that I've changed my make-table query into a regular Select query:
SELECT TCampProject.LOCCODE, TCampProject.MYDATE
FROM TCampProject
GROUP BY TCampProject.LOCCODE, TCampProject.MYDATE
HAVING Count(TCampProject.LOCCODE)>1;
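The GROUP BY / HAVING Count(...) > 1 step keeps only (LOCCODE, MYDATE) pairs that occur more than once. The same grouping as an illustrative Python sketch:

```python
from collections import Counter

def eligible_groups(rows):
    """Return the (LOCCODE, MYDATE) pairs that appear more than once."""
    counts = Counter((r["LOCCODE"], r["MYDATE"]) for r in rows)
    return {key for key, n in counts.items() if n > 1}
```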
ASKER
This is excellent. Thanks very much! Since I am a fairly novice SQL user, it will take me a little time to go through and execute these. I will begin working on it this morning - hopefully I'll have worked through these by this afternoon.
ASKER
What are you, a genius?
In response to your two questions:
1. Your [MyDate] column contains date and time values. Do you want to track these times as different from one another if they occur on the same date? Use just the date, or the Date/Time value?
times do not matter, only the date. In fact, I don't even need the time stamp in the date field.
2. Do you want all of the different values of [Analyte], or are you only concerned with specific versions? Do you want each of these columns ("Al by ICP", "Al by ICP-PQL", "Al - Dissolved Aluminum by ICP-PQL", "Al - Dissolved Aluminum by ICP"), or do you want specific values within those columns?
Actually, I only need unique *elements* (Al, Fe, Mn, Zn, etc), rather than all versions of all elements. So if aluminum (Al) is associated with several versions such as "Al by ICP", "Dissolved Aluminum by ICP", "Al by ICPMS", etc, I would only keep a column titled "Al" and put all aluminum data into that single column, EXCEPT FOR any versions with the word "solid" in it (solid is a different analysis and does not belong with these water results....it can just be deleted/ignored from the database).
The same treatment for all of the elements. But I'm not sure if this is too complicated to do in a SQL command? So I would end up with a table that contains a column for each element....and each row of the resulting table would contain a location, sample date, and all the results (for each element) for that sample date.
@Ted
That still doesn't resolve the problem of your need to have a "<" character in a numeric field.
ASKER
A simple solution would be to make any value with a "<" sign instead be equal to 1/2 the value plus 0.999. The .999 would be my flag that the value represents a "less than". For example,
Fe < 100, would be Fe = 50.999
You could retain the value and just negate it. If we did that, then line 5 would have an Fe column value of -20 and line 8 would have an Fe column value of -50
ASKER
The former approach (1/2 value + 0.999) is preferred as it lets me do statistical analyses on the data for each element, column by column.
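The asker's encoding keeps the column numeric while still flagging non-detects: a "<X" censored value is stored as X/2 + 0.999, and the .999 fraction is the flag. A small illustrative sketch of the encode and detect steps:

```python
def encode(result, qualify):
    """Store '<result' non-detects as result/2 + 0.999, else as-is."""
    return result / 2 + 0.999 if qualify == "U" else result

def is_nondetect(value):
    # The .999 fractional part is the asker's "less than" flag.
    return round(value % 1, 3) == 0.999
```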
@Ted
I really hope this project succeeds. I live in NC as do many of my relatives. (mom in W-S) Our environmental laws, and their enforcement, have taken a hit with the current legislators and governor. You are fighting the good fight and I'm on your side.
I've revised my make-table query to exclude the solid items and filter for eligible LOCCODE items.
You will have to run three separate update queries.
SELECT TCampProject.LOCCODE, TCampProject.MYDATE INTO tblEligibleLOCCODEs
FROM TCampProject
WHERE (((TCampProject.ANALYTE) Not Like "*solid*"))
GROUP BY TCampProject.LOCCODE, TCampProject.MYDATE
HAVING (((TCampProject.LOCCODE) Like "[ABC]*") AND ((Count(TCampProject.LOCCODE))>1));
UPDATE TCampProject INNER JOIN tblEligibleLOCCODEs
ON (TCampProject.LOCCODE = tblEligibleLOCCODEs.LOCCODE) AND (TCampProject.MYDATE = tblEligibleLOCCODEs.MYDATE)
SET Fe = IIF([QUALIFY]="U",.999+ [Result]/2, [Result])
WHERE ANALYTE Like "*Fe*"
UPDATE TCampProject INNER JOIN tblEligibleLOCCODEs
ON (TCampProject.LOCCODE = tblEligibleLOCCODEs.LOCCODE) AND (TCampProject.MYDATE = tblEligibleLOCCODEs.MYDATE)
SET Mn = IIF([QUALIFY]="U",.999+ [Result]/2, [Result])
WHERE ANALYTE Like "*Mn*"
UPDATE TCampProject INNER JOIN tblEligibleLOCCODEs
ON (TCampProject.LOCCODE = tblEligibleLOCCODEs.LOCCODE) AND (TCampProject.MYDATE = tblEligibleLOCCODEs.MYDATE)
SET Zn = IIF([QUALIFY]="U",.999+ [Result]/2, [Result])
WHERE ANALYTE Like "*Zn*"
ASKER
Aikimark - This is amazing work....and much appreciated! One quick point that I hope will wrap this up.
If I wanted to grab ALL the columns from ANALYTE, instead of just selected ones, how would I alter your select statement below? Maybe I should use an OR in the statement, e.g. WHERE [Analyte] LIKE "?? by ICP" OR WHERE [ANALYTE] LIKE "?? by ICPMS".......? I am trying to get all the columns represented in one fell swoop, rather than doing it multiple times and cutting and pasting.
TRANSFORM First(Temp.ColValue) AS FirstOfColValue
SELECT Temp.Location, Temp.MyDate
FROM (SELECT [Location], DateValue(tCampProject.[My Date]) as MyDate, [Analyte] as ColHead, [Result] as ColValue
FROM tCampProject
WHERE [Analyte] LIKE "?? by ICP"
UNION ALL
SELECT [Location], DateValue(tCampProject.[My Date]) as MyDate, [Analyte] & "-PQL" as ColHead, [PQL] as ColValue
FROM tCampProject
WHERE [Analyte] LIKE "?? by ICP"
) AS Temp
GROUP BY Temp.Location, Temp.MyDate
PIVOT Temp.ColHead;
ps This effort is to create a dataset of background data that represents ambient concentrations. I'll then apply it to sites with potential issues to compare their levels to background levels. The only way to do statistics on the data is to get all constituents for a given location and sample date on one row. I really appreciate your help. I'm out of the office this afternoon and tomorrow and will try to finish this up on Friday.
@Ted
I do not understand what you want from your description.
"ALL the columns from ANALYTE"
There is only one ANALYTE column.
You have 91 distinct ANALYTE values (see below). How do you need these treated?
Ag by ICPMS
Ag-Dissolved Silver by ICPMS
Al - Dissolved Aluminum by ICP
Al by ICP
Al in solid samples by ICP
Antimony by ICPMS
Arsenic solid samples by ICPMS
As by furnace
As by ICPMS
As Dissolved by furnace
As- Dissolved by ICPMS Arsenic
B by ICP
Ba by ICP
Ba- Dissolved Barium by ICP
Be by ICP
Be-Beryllium Dissolved by ICP
Ca by ICP
Ca-Dissolved Calcium by ICP
Cadmium in solids samples by ICPMS
Cd by ICPMS
Cd Dissolved by furnace
Cd- Dissolved Cadmium by ICPMS
Cobalt by ICP
Cobalt Dissolved by ICP
Cr by ICP
Cr by ICPMS
Cr Dissolved by ICP
Cr- Dissolved Chromium by ICPMS
Cr in solids samples by ICPMS
Cu by furmace
Cu by ICPMS
Cu dissolved by furmace
Cu dissolved by furnace
Cu- Dissolved Copper by ICPMS
Cu solid samples by ICPMS
Dissolved Mn by ICP
Fe by ICP
Fe- Dissolved Iron by ICP
Hardness by Calculation
Hardness by Titration
Hg 1631
Hg 245.1
Hg 245.1 Dissolved
Iron in solids samples by ICPMS
K by ICP
K-Dissolved Potassium by ICP
Lead in solids samples by ICPMS
Li ICP
Li DISSOLVED ICP
Mg by ICP
Mg- Dissolved Magnesium by ICP
Mn by ICP
Mn by ICPMS
Mn- Disolved Manganese by ICPMS
Mn solid samples by ICPMS
Mo by ICPMS
Mo Dissolved by ICPMS
Na by ICP
Na-Dissolved Sodium by ICP
Ni by furnace
Ni by ICP
Ni by ICPMS
Ni Dissolved by furnace
Nickel in solids samples by ICPMS
Ni-Dissolved Nickel by ICP
Ni-Dissolved Nickel by ICPMS
Pb by ICP
Pb by ICPMS
Pb-Dissolved Lead by ICP
Pb-Dissolved Lead by ICPMS
Sb Dissolved by ICPMS
Se by furnace
Se by ICPMS
Se Dissolved by furnace
Se-Dissolved Selenium by ICPMS
Selenium in solids samples by ICPMS
Sn by ICPMS
Sn Dissloved by ICPMS
Sr by ICPMS
Stronium Dissolved by ICPMS
Thallium (Tl) Dissolved by ICPMS
Thallium (Tl) ICPMS
Ti (Titanium) by ICP
Ti (Titanium) Dissolved by ICP
V by ICP
V Dissolved by ICP
Zinc in solids samples by ICPMS
Zn by ICP
Zn by ICPMS
Zn-Dissolved Zinc by ICP
Zn-Dissolved Zinc by ICPMS
Ted,
1. I think you are looking at my SQL, not Aikimark's. I've backed off on this because I thought your comment was replying to Aikimark's code, not my query.
When you are getting feedback from more than one expert, it is a good idea to indicate who you are responding to by starting your reply with an address:
@fyed
@Aikimark
Then we will know who you are talking to.
2. Yes, you could add an OR in the WHERE clause:
WHERE [Analyte] LIKE "?? by ICP"
like:
WHERE ([Analyte] LIKE "?? by ICP") OR ([Analyte] LIKE "?? by ICPMS")
But you are going to end up with column names like:
CU by ICP, CU by ICP-PQL, CU by ICPMS, CU by ICPMS-PQL, CU- Dissolved Copper by ICPMS, CU- Dissolved Copper by ICPMS-PQL
when I think what you really want is simply CU and CU-PQL. Am I wrong in this assessment?
3. You have 91 different values in the [Analyte] column of your database. You can identify the element in most of those by simply looking at the first two characters, but there are a number of values ("Antimony by ICPMS", Cobalt (2 versions), Stronium, Thallium) which do not use the 2-letter chemical symbol as the first 2 characters (some don't include the chemical symbol at all), making any query challenging (which is why my example simply uses "?? by ICP"). If you would normalize your data (a column for the chemical and a separate column for the Method; I have no idea what "by ICP" or "ICPMS" means, but that should probably be another field in your table, not a description in the [Analyte] field), we could probably get you closer to what you are looking for.
Dale
ASKER
Dale - You are correct. Rather than 91 columns, I only want 2 columns for each unique element (there are 30 unique elements in the ANALYTE column if I am counting correctly).....one column for the element itself (the Cu part) and one for the method of analysis (the "ICP" part). So I would end up with 60 columns for these.
And yes, I double-checked and the first two letters in the ANALYTE field will correctly capture all instances except for these outliers:
antimony ?? = ignore/omit/not needed
"B space" ("B ") = B (i.e. the first two characters capture the result which is only a one letter symbol, boron)
?? solid = ignore/omit/not needed
Cadmium?? = Cd
Cobalt?? = Co
Dissolved Mn ?? = Mn
Hardness ?? = Hardness
Iron ?? = Fe
"K space" = K
Lead ?? = Pb
Strontium ?? = Sr
Thallium ?? = Tl
"V space" = V
Zinc ?? = Zn
Then once the 60 new fields have been added (30 elements and 30 element-methods), I need to populate data into them. Recall that to populate data into the 30 new elements' fields, we use the value from the RESULT column for that element....and to populate the 30 new ELEMENT-METHOD fields we use the value from the PQL field corresponding to that element.
So, if I were you, what I would do is add a new column ([Element]) to my table.
Then I would write a query that fills in that column with the appropriate [Element] symbol.
UPDATE yourTable SET [Element] = TRIM(Left([Analyte], instr([Analyte], " ")))
WHERE instr([Analyte], " ") <= 3
with similar update queries to add the "Cd", "Co", "Mn", "Fe", "Pb", "Sr",... when the element name is present.
UPDATE yourTable SET [Element] = "Cd" WHERE instr([Analyte], "Cadmium") > 0
Then we can work from there.
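The parsing strategy being discussed, take the text up to the first space or hyphen, then apply explicit overrides for the spelled-out names in the asker's outlier list, and skip "solid" analyses entirely, can be sketched in Python (illustrative only; the override table is built from the outliers listed above):

```python
import re

# Spelled-out names that do not start with the 2-letter symbol.
OVERRIDES = {
    "Cadmium": "Cd", "Cobalt": "Co", "Iron": "Fe", "Lead": "Pb",
    "Stronium": "Sr", "Thallium": "Tl", "Zinc": "Zn", "Antimony": "Sb",
    "Dissolved": "Mn",   # the "Dissolved Mn ..." rows
}

def element_of(analyte):
    """Derive the ELEMENT symbol from an ANALYTE description."""
    if "solid" in analyte.lower():
        return None                       # solid analyses are excluded
    token = re.split(r"[ \-]", analyte.strip(), maxsplit=1)[0]
    return OVERRIDES.get(token, token)
```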
great minds... Dale. I was in the process of doing that.
I've parsed the ANALYTE column on " " and "-" and populated a new ELEMENT column.
Applying a filter to the new ELEMENT column
Not In ('As','Hg','Al','Fe','Mn','Cd','Cr','Cu','Ni','Pb','Zn','Na','Ca','K','Se','Be','Mg','Ba','B','Sr','Ag','Ti','V','Li','Mo','Sb','Sn')
what I'm left with are
Cadmium
Iron
Nickel
Lead
Selenium
Arsenic
Zinc
Dissolved
Hardness
All the "Dissolved" values are Mn.
There are 11 rows that will be ignored due to their ANALYTE column containing "solid". They are a mixture of element letters and element names:
ANALYTE ELEMENT
Al in solid samples by ICP Al
Cadmium in solids samples by ICPMS Cadmium
Cr in solids samples by ICPMS Cr
Iron in solids samples by ICPMS Iron
Nickel in solids samples by ICPMS Nickel
Lead in solids samples by ICPMS Lead
Selenium in solids samples by ICPMS Selenium
Arsenic solid samples by ICPMS Arsenic
Cu solid samples by ICPMS Cu
Mn solid samples by ICPMS Mn
Zinc in solids samples by ICPMS Zinc
ASKER
Excellent ....so far so good. I now have a column named ELEMENT populated with Fe, Cd, Zn, Mn, etc ...all elements are accounted for. Next?
@Ted,
Please post that database. It will save us some work and shorten the time-to-solution.
ASKER
Here it is. I had to temporarily delete a lot of rows so that it would not be too big to upload, so some of the unique elements from the ANALYTE column may not be there....which is fine. Once I know how to do these steps I will do them myself on the original large, master DB.
073014TCampProjectALL.mdb
073014TCampProjectALL.mdb
Have to run out of the office for a couple of hours. Will check back in then.
After the ELEMENT column has been created, here are the queries that should properly populate the column.
UPDATE TCampProject
SET TCampProject.ELEMENT = Left([ANALYTE], InStr([ANALYTE],"-")-1)
Where InStr([ANALYTE],"-") <> 0
And ELEMENT Is Null
UPDATE TCampProject
SET TCampProject.ELEMENT = Left([ANALYTE], InStr([ANALYTE]," ")-1)
Where InStr([ANALYTE]," ") <> 0
And ELEMENT Is Null
UPDATE TCampProject
SET TCampProject.ELEMENT = trim(ELEMENT)
UPDATE TCampProject
SET TCampProject.ELEMENT = "Mn"
Where ANALYTE Like "Dissolved Mn*"
UPDATE TCampProject
SET TCampProject.ELEMENT = null
Where ANALYTE Like "*solid*"
Or ANALYTE Like "Hardness*"
UPDATE TCampProject
SET TCampProject.ELEMENT = Switch(ELEMENT='Antimony', 'Sb', ELEMENT='Thallium', 'Tl', ELEMENT='Stronium', 'Sr', ELEMENT='Cobalt', 'Co')
Where ELEMENT IN ('Antimony','Thallium','Stronium','Cobalt')
Excluding the Null values, we have the following distribution of ELEMENT values:
ELEMENT CountOfELEMENT
Ag 16
Al 261
As 3630
B 33
Ba 33
Be 3401
Ca 3253
Cd 3628
Co 15
Cr 3599
Cu 3629
Fe 3404
Hg 1663
K 3197
Li 15
Mg 3253
Mn 3529
Mo 15
Na 3167
Ni 3634
Pb 3596
Sb 15
Se 3174
Sn 15
Sr 33
Ti 33
Tl 33
V 33
Zn 3637
@Ted
Please invoke the queries I posted to your table and repost the database.
What should happen when the PQL column is Null?
ASKER
Yes - that part was done in the db I just sent.
Now I need the next step which is to create two new columns for each of the represented elements. Take Cr for example. One new column, titled "Cr" will contain and be populated by the value in the RESULT column. The other new column, "Cr-method" will contain and be populated by the value in the ANALYTE field. If the value in the QUALIFY column contains "U", then the result is actually 1/2 result + 0.999. And so on for each of the elements represented in the ELEMENT field....
It looks like there are 19 elements represented, so I would end up with 19 x 2 = 38 new columns, populated as described.
ASKER
aikimark - When the value in PQL is null, the value in the new "2-letter element-method" column will be null.
This code will add the required element and element-PQL column pairs
Sub Q_28485399_AddColumns()
Dim rs As Recordset
Dim fld As Field, tbl As TableDef
Dim dicFields As Object
Dim vFields As Variant, vItem As Variant
Set dicFields = CreateObject("scripting.dictionary")
Set tbl = DBEngine(0)(0).TableDefs("TCampProject")
For Each fld In tbl.Fields
dicFields.Add fld.Name, 1
Next
Set rs = DBEngine(0)(0).OpenRecordset("Select Distinct ELEMENT From TCampProject Where ELEMENT Is Not Null", dbOpenDynaset, dbReadOnly, dbReadOnly)
rs.MoveLast  'dynasets count records lazily; populate RecordCount fully
rs.MoveFirst 'before asking GetRows for that many rows
vFields = rs.GetRows(rs.RecordCount)
rs.Close
For Each vItem In vFields
If dicFields.exists(vItem) Then
Else
'add the field
Set fld = tbl.CreateField(vItem, dbSingle)
tbl.Fields.Append fld
End If
If dicFields.exists(vItem & "-PQL") Then
Else
'add the field
Set fld = tbl.CreateField(vItem & "-PQL", dbSingle)
tbl.Fields.Append fld
End If
Next
End Sub
"that part was done in the db I just sent"
You have null ELEMENT values where there should be actual element names. Look at the dissolved rows.
ASKER
aikimark - this does not seem to be a SQL command; rather, it looks like VBA code. (??) I know very little about VBA and didn't think it was applicable to this effort. Please advise - thanks.
ASKER
I just tried to run it as a macro and it asked me to enter a macro name. ??
You have to add the code to a module in your database.
ASKER
got it. beautifully done! Last step is to populate it. I'm checking that part now.
ASKER
The earlier UPDATE command did not seem to populate the columns, so they are still blank. ?
ASKER
To clarify. I now have all of the necessary columns accounted for. But the data in those columns is all null.
Is your ELEMENT column populated via the update queries I posted?
Please notice that the routine I posted is named Q_28485399_AddColumns. The columns must be present before they can be updated. They should be null.
ASKER
"Is your ELEMENT column populated via the update queries I posted?" YES
"Please notice that the routine I posted is named Q_28485399_AddColumns. The columns must be present before they can be updated. They should be null." YES, they are all there and all contain null values.
Here's the updating routine. It takes a while to run. Please be patient.
Sub Q_28485399_UpdateColumns()
Dim rs As Recordset
Dim vFields As Variant, vItem As Variant
Set rs = DBEngine(0)(0).OpenRecordset("Select Distinct ELEMENT From TCampProject Where ELEMENT Is Not Null", dbOpenDynaset, dbReadOnly, dbReadOnly)
rs.MoveLast  'dynasets count records lazily; populate RecordCount fully
rs.MoveFirst 'before asking GetRows for that many rows
vFields = rs.GetRows(rs.RecordCount)
rs.Close
For Each vItem In vFields
DBEngine(0)(0).Execute "Update TCampProject As E INNER JOIN tblEligibleLOCCODEs " & _
"ON (E.LOCCODE = tblEligibleLOCCODEs.LOCCODE) AND (E.MYDATE = tblEligibleLOCCODEs.MYDATE) " & _
"Set [" & vItem & "] = IIF(E.QUALIFY Like 'U*', .999 + E.[RESULT]/2, E.[RESULT]), " & _
"[" & vItem & "-PQL] = E.PQL WHERE (E.ELEMENT = '" & vItem & "')"
Debug.Print DBEngine(0)(0).RecordsAffected, vItem
Next
End Sub
These are the updated row counts from my test.
1 Ag
69 Al
758 As
0 B
0 Ba
695 Be
635 Ca
761 Cd
0 Co
762 Cr
758 Cu
674 Fe
171 Hg
607 K
0 Li
635 Mg
736 Mn
0 Mo
607 Na
762 Ni
764 Pb
0 Sb
609 Se
0 Sn
0 Sr
0 Ti
0 Tl
0 V
758 Zn
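Per element, the UpdateColumns routine fills two of the new columns on each matching row: the element column gets RESULT (with the asker's 1/2 + 0.999 non-detect transform when QUALIFY starts with "U"), and the element-PQL column gets PQL. An illustrative per-row Python sketch (field names mirror the thread):

```python
def fill_wide_columns(row):
    """Populate row[ELEMENT] and row[ELEMENT-PQL] from RESULT and PQL."""
    elem = row.get("ELEMENT")
    if elem is None:
        return row                        # excluded rows stay untouched
    result = row["RESULT"]
    if (row.get("QUALIFY") or "").startswith("U"):
        result = 0.999 + result / 2       # the asker's non-detect encoding
    row[elem] = result
    if row.get("PQL") is not None:
        row[elem + "-PQL"] = row["PQL"]   # stays null when PQL is null
    return row
```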
@Ted
Why are only the [ABC]* locations included? This is only 1/5 of the total rows.
ASKER
Aikimark - Almost there! It populates everything but does not place the results in a single row (the goal of this effort). In other words, I have several rows for "Henry Fork Riv at SR 1143 NR Brookford" (one of the values in LOCATION) for MYDATE = 2/9/2006. (On 2/9/2006 our office collected one sample at "Henry Fork Riv at SR 1143 NR Brookford" and got some results that should appear on a single row for that location-date. Instead, I get 11 rows for that location-date. (And strangely, it turns out that at least one element, Fe, showed up on two rows....it would not be possible to get 2 different Fe results for a single sample.....so I'm not sure how the second Fe got into the results.)
ASKER
Aikimark - I'm not sure why only the ABC locations are included. This is what I used to create the eligible loccodes table:
SELECT TCampProject.LOCCODE, TCampProject.MYDATE INTO tblEligibleLOCCODEs
FROM TCampProject
WHERE (((TCampProject.ANALYTE) Not Like "*solid*"))
GROUP BY TCampProject.LOCCODE, TCampProject.MYDATE
HAVING (((TCampProject.LOCCODE) Like "[ABC]*") AND ((Count(TCampProject.LOCCODE))>1));
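The logic of that make-table query can be sketched in sqlite3. sqlite lacks Access's `Like "[ABC]*"` character-class pattern, so a `substr()` test stands in for it; the sample rows below are invented.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE TCampProject (LOCCODE TEXT, MYDATE TEXT, ANALYTE TEXT)")
con.executemany("INSERT INTO TCampProject VALUES (?, ?, ?)", [
    ("Asite 1", "1/1/14", "Fe by ICP"),
    ("Asite 1", "1/1/14", "Mn by ICP"),
    ("Asite 1", "7/31/09", "Fe by ICP"),   # only one row for this date
    ("Qsite 9", "1/1/14", "Fe by ICP"),    # LOCCODE does not start with A/B/C
    ("Bsite 2", "2/2/14", "Cu solids"),    # excluded by the *solid* filter
    ("Bsite 2", "2/2/14", "Cu by ICP"),
])

# Keep location/date pairs that (a) survive the solids filter,
# (b) start with A, B, or C, and (c) have more than one row.
eligible = con.execute("""
    SELECT LOCCODE, MYDATE
    FROM TCampProject
    WHERE ANALYTE NOT LIKE '%solid%'
    GROUP BY LOCCODE, MYDATE
    HAVING substr(LOCCODE, 1, 1) IN ('A', 'B', 'C')
       AND COUNT(LOCCODE) > 1
    ORDER BY LOCCODE""").fetchall()
print(eligible)
# → [('Asite 1', '1/1/14')]
```

Only "Asite 1" on 1/1/14 qualifies: the 7/31/09 visit has a single row, "Qsite 9" fails the A/B/C test, and "Bsite 2" has only one non-solids row left after the WHERE clause.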
ASKER CERTIFIED SOLUTION
@Ted
My question to you was why you wanted to limit the locations to those beginning with the letters A, B, or C. The queries and code reflect that requirement. I was curious to know WHY the requirement even exists, seeing as how it only covers about 1/5 of the rows with elements.
ASKER
Given the amount of work that's gone into this question, I am happy to mark it as completed (it basically is, with a few more tweaks), award points, and open a new question to finish any final aspects.
ASKER
It asks me to enter a parameter value for E.field1. ??
There was an extra field in your table. You can delete it from the query.
ASKER
You nailed it! Really excellent work Aikimark. Thanks very much.
Thanks also to Dale!
Both of you were a huge help to me and participated substantially so I will split the points (I wish I could award both of you with the full amount.)
I will also post a follow up question to average the values for rows with the same location but different dates, which was beyond the scope of this question. I think this one will be very clean and easy.
ASKER
Thanks again Aikimark. Really excellent help!
ASKER
Dale - I am working on getting you your 250 points now. It didn't go through the first time. Thanks again for your help as well. You guys are amazing programmers.
"Both of you were a huge help to me and participated substantially so I will split the points"
Did you mean to accept comments from both of us or only me? My comments were both the accepted and assisted solutions.
@Ted
What about my ABC location restriction question?
ASKER
That was an artifact from one of Fyed's code postings. I will revise that to make sure I pull all records. Thanks for the follow up.
Personally, Aikimark did 90% of the effort on this. I was fine with you awarding all of the points to him.
@Dale
I would be curious to see if you could get a pivot table solution out of this. Associating the PQL data seems problematic, but I might not have sufficient perspective on the problem at this point.
So, do I understand correctly that you only want to include those where the term ICP or ICPMS is included in the [Analyte] column.
And that you want columns that look like [Fe] and [Fe-PQL]
and not columns that look like [Fe by ICP], [Fe by ICP-PQL], [Fe by ICPMS], [Fe by ICPMS-PQL]
ASKER
Actually I want to include those from ANALYTE column with ANY term except "solids". (I only want water methods/analyses, and all of them except those with "solids" are water methods.)
And I want columns that look like [Fe] and [Fe-method], [Mn] and [Mn-method], etc.
So, this is not what you are looking for, because it does not include the Method in the column header, is that correct?
TRANSFORM First(Temp.ColValue) AS FirstOfColValue
SELECT Temp.Location, Temp.MyDate
FROM (SELECT tCampProject.[Location]
, DateValue(tCampProject.[My Date]) AS MyDate
, tCampProject.ELEMENT AS ColHead
, tCampProject.[Result] AS ColValue
FROM tCampProject
WHERE (tCampProject.ELEMENT Is Not Null) AND (tCampProject.[Analyte] Not Like "*solid*")
UNION ALL
SELECT tCampProject.Location
, DateValue(tCampProject.[My Date]) AS MyDate
, [ELEMENT] & "-PQL" AS ColHead
, IIf(NZ([Qualify],"")="U"," <",NULL) + [PQL] AS ColValue
FROM tCampProject
WHERE (tCampProject.ELEMENT Is Not Null) AND (tCampProject.[Analyte] Not Like "*solid*")
) AS Temp
GROUP BY Temp.Location, Temp.MyDate
PIVOT Temp.ColHead;
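sqlite has no TRANSFORM/PIVOT, but the union-then-pivot idea in the crosstab above can be sketched in Python: one UNION ALL emits a (row key, column head, value) triple per output cell, and a dict groups the triples into wide rows. The crosstab's First() is approximated by keeping the first value seen; sample rows are invented.

```python
import sqlite3
from collections import defaultdict

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE tCampProject (Location TEXT, MyDate TEXT,
    Analyte TEXT, ELEMENT TEXT, Result REAL, Qualify TEXT, PQL REAL)""")
con.executemany("INSERT INTO tCampProject VALUES (?, ?, ?, ?, ?, ?, ?)", [
    ("Asite 1", "1/1/14", "Fe by ICP", "Fe", 220, None, 0.1),
    ("Asite 1", "1/1/14", "Zn by ICP", "Zn", 20, "U", 0.001)])

# First branch: the Result cells; second branch: the -PQL cells,
# with the ' <' prefix for 'U'-qualified rows.
pairs = con.execute("""
    SELECT Location, MyDate, ELEMENT, CAST(Result AS TEXT)
    FROM tCampProject WHERE ELEMENT IS NOT NULL AND Analyte NOT LIKE '%solid%'
    UNION ALL
    SELECT Location, MyDate, ELEMENT || '-PQL',
           CASE WHEN IFNULL(Qualify, '') = 'U'
                THEN ' <' || PQL ELSE CAST(PQL AS TEXT) END
    FROM tCampProject WHERE ELEMENT IS NOT NULL AND Analyte NOT LIKE '%solid%'
""").fetchall()

# The PIVOT step: one wide row per (Location, MyDate) key.
pivot = defaultdict(dict)
for loc, dt, colhead, colvalue in pairs:
    pivot[(loc, dt)].setdefault(colhead, colvalue)  # First(): keep first value
print(dict(pivot))
```

Each (Location, MyDate) key ends up with Fe, Zn, Fe-PQL, and Zn-PQL entries, which is the wide one-row-per-sample shape the asker is after.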
"And I want columns that look like [Fe] and [Fe-method], [Mn] and [Mn-method], etc. "
I'm not sure what you mean by [Fe-method] and [Mn-method]? What do you mean by method?
is "Dissolved by furnace" a method? How is that different than "by furnace"? If so, then you would end up with columns:
[As by furnace], [As by furnace - PQL]
[As by ICPMS], [As by ICPMS - PQL]
[As Dissolved by furnace], [As Dissolved by furnace - PQL]
You need to fix the records that look like "Cu by furmace"
Are you considering "Hardness" an "Element"?
You also have records in the latest recordset where the Element field is NULL, examples:
As- Dissolved by ICPMS Arsenic
BE-Beryllium Dissolved by ICP
CA-Dissolved Calcium by ICP
Cr- Dissolved Chromium by ICPS
....
Personally, I think you need to create a Method column so that you can define precisely which methods you want, and then finish filling in the Element column and the Method column for each record. Then, you can concatenate the [Element] and [method] columns to get precisely what you are looking for as your column headers.
Is [HG 1631] a different "method" than [HG 245.1]?
ASKER
I'm not sure what you mean by [Fe-method] and [Mn-method]? What do you mean by method?
PQL is probably the best descriptor. It would be better to leave it like it is.
is "Dissolved by furnace" a method? yes
How is that different than "by furnace"? If so, then you would end up with columns:
[As by furnace], [As by furnace - PQL]
[As by ICPMS], [As by ICPMS - PQL]
[As Dissolved by furnace], [As Dissolved by furnace - PQL]
You need to fix the records that look like "Cu by furmace"
WILL DO.
Are you considering "Hardness" an "Element" YES
You also have records in the latest recordset where the Element field is NULL, examples:
As- Dissolved by ICPMS Arsenic
BE-Beryllium Dissolved by ICP
CA-Dissolved Calcium by ICP
Cr- Dissolved Chromium by ICPS
....
Thanks for the heads up. I'll check those.
Personally, I think you need to create a Method column so that you can define precisely which methods you want, and then finish filling in the Element column and the Method column for each record. Then, you can concatenate the [Element] and [method] columns to get precisely what you are looking for as your column headers.
Is [HG 1631] a different "method" than [HG 245.1]? YES
Thanks Fyed. I'm out tomorrow and back on Friday. I really appreciate your help. I think I have enough information to get what I need now, though I have not put it all together yet into one final product.
If you can refine the [Element] and [Method] columns, then doing this as a crosstab is very doable.
There are only 54 variants of the [Analyte] column in your latest recordset. If you could fill in the [Element] and [Method] columns for each of these records, I can update the records in the table you provided and update the query I sent you earlier.
Chemicals-Unique-Analyte-Values.xlsx
ASKER
Great - thanks Fyed! Sorry I'm slow getting back. See attached.
for-EE-0801----Chemicals-Unique-Analyte-
After adding the [Method] column from the spreadsheet and updating all of the Element and Method values in your table, from the matching values in the spreadsheet, this is what I came up with for the crosstab query.
Chemicals2.zip
If you really need a separate table, you could use the xTab query as the source for a MakeTable query.
ASKER
@ Dale -
I was out of the office the last couple of days. I need to clarify your last post, but I'm guessing you may be tired of this question! I have checked my final product and I need something tweaked. Can I open a new question in order to do this? I just want to make sure I can "get there from here".
Would that be possible? Would you be willing to give it one more look? If so, I will create a new question, upload my Access db, get you to have a look, and wrap up this effort. I would really appreciate it.
@Ted,
Agree with aikimark. I'll be glad to take a look.
Don't specifically address me, your question should address this question, and the database I provided so that any expert with the time and inclination can assist you.
Dale
ASKER
@Dale - I just wanted to thank you (and Aikimark) again for all your help on this question. I know you each spent a whole lot of time and thought power on it. Judging from the couple of responses I have gotten on my new, follow up post, I'm guessing folks have moved on to other solutions. I think that EE is maybe not the place for super novices like myself, but your responses were a huge help and are much appreciated. Contrary to the responses on the new post, I have worked very hard on this problem trying to understand what your responses meant and how to employ them. In retrospect I realize I was trying to implement both sets of solutions at the same time which may have confused things for both of you (and me). Anyway, thank you both again. : )
TC
@TC,
That is why I generally backed out when aikimark and you got engaged. It tends to get confusing for the poster if he/she is trying to implement the suggestions of more than one Expert at a time.
I would disagree about EE not being the place for super novices. One of the advantages of EE over many of the other boards is that you can post screen shots and exchange files. Most of the experts here are more than willing to stick with an issue until the posters problem is resolved, and the poster understands the response. If you would like to discuss the cross-tab query I wrote so you understand that better, we can do that here in this thread.
Hopefully we didn't confuse you too much. I saw your other post but have been busy today and will not get a chance to look at it until later this evening or tomorrow.
Dale
ASKER
@Dale - Thanks much. So, where I am as of 8/6/14 at 5:50 pm (I've been working on this much of the day), I added a new ELEMENT column as you suggested and now have a db with the following ELEMENT entries culled and brought over correctly from my ANALYTE field (see below for the unique elements in my new ELEMENT field). I now simply need to add these as new fields, along with their counterpart XX-PQL (e.g. Ag, Ag-PQL, Al, Al-PQL, As, As-PQL, etc), and then populate all of them from the RESULT (and QUALIFY) and PQL fields, respectively. This can be done as a query or as a transformation and population of the table itself. I recapped my need in the new post. https://www.experts-exchange.com/questions/28491892/need-help-with-Access-update-query.html?anchorAnswerId=40244731#a40244731 I'll watch for that new post tomorrow in case you are able to have a look.
Ag
Al
As
B
Ba
Be
Ca
Cd
Co
Cr
Cu
Fe
Hardness
Hg
K
Li
Mg
Mn
Mo
Na
Ni
Pb
Sb
Se
Sn
Sr
Ti
Tl
V
Zn
Hardness is an element?
So, you no longer want the "Method" in that computation?
How do you want to handle the fact that there are multiple records for each Location, LocCode, MyDate combination for many of the elements? Do you simply want to take the average? We could do that for the [Result] values, but for the -PQL columns, we would have to average the values and then assess whether there were any "U" values in the [Qualify] column, and if there is, add the "<" symbol to the [Element]-PQL columns.
ASKER
@Dale - Yes, if a single combination of Location, Loccode, MyDate has more than one value for a given element, then I need to average those from the [RESULT] values (if [QUALIFY] has a "U", then we would use 1/2 of the value from [RESULT] in that averaging computation). As for the -PQL column value, we would simply use the maximum of the PQLs with a "-max" suffix (e.g. if Al-PQL had 0.2, 0.01, and 0.001, we would use "0.2-max" as the value for Al-PQL; this tells me there were multiple lab method limits and the largest of those was 0.2).
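That rule can be sketched as a small Python function. This is an assumption-laden sketch: 'U'-qualified results count as RESULT/2 + 0.999 (the convention agreed later in the thread), and differing PQLs collapse to the group maximum with a "-max" suffix.

```python
def aggregate(rows):
    """rows: (result, qualify, pql) tuples for one Location/Date/Element group."""
    # 'U' (undetected) results are replaced by half the value plus 0.999
    adjusted = [r / 2 + 0.999 if q == "U" else r for r, q, _ in rows]
    avg = sum(adjusted) / len(adjusted)
    # Report the largest PQL, flagged with "-max" only when the PQLs differ
    pqls = [p for _, _, p in rows]
    pql = "{0}-max".format(max(pqls)) if len(set(pqls)) > 1 else pqls[0]
    return avg, pql

# The Al-PQL example from the thread: PQLs 0.2, 0.01, 0.001 -> "0.2-max"
print(aggregate([(50, "U", 0.2), (670, None, 0.01), (50, "U", 0.001)]))
```

Here the two "U" rows each contribute 25.999, so the average is (25.999 + 670 + 25.999) / 3 ≈ 240.666, and the PQL comes back as "0.2-max".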
ASKER
@Dale - I just wanted to see if you were still tuned in or if you moved on to other questions. I have spent quite a bit of time trying to re-create what you and Aikimark were helping me with above, but I have not been able to put the pieces back together to get this to work. Since I am a state employee with no budget for this service, I'll continue to wade through on EE. Anyway, I think the "well is poisoned" on my followup post....two experts replied to tell me I simply wanted an expert to give me the answer instead of helping myself to the solution (it would take me years of training to do what you guys are doing!), and it seems the question has gone silent.
@Ted
Click on the Request Attention link and ask for more expert participation.
@Ted,
I'm still with you on this original question, I have just been slammed with work and family issues this week. I'll try to take a look at building the new xTab query that includes the Avg([Results]) and Max([PQL]) values this evening.
ASKER
Thanks Dale - I totally understand - those come first.
If I solve it during the day today i'll let you know!
But this question thread is closed. A new question should be asked if you want to pursue the crosstab query solution.
I'm trying to understand what you meant with regards to averaging the [Result] column, but if there is a "U" in the Qualify column, only use 1/2 of the [Result] in the average.
I found a group of records:
LocCode:Q3452000
MyDate: 5/23/14
Element: FE
With [Result] and [Qualify] values of
50 U
670
50 U
There is a huge variance here, because of the method used, do you really want to average all three of those, or do you want to add the Method back into the query as one of the row heading (group by) columns so that you would have records that look like:
Loc Date Method Fe Fe-PQL
Q3452000 5/23/2014 ICP 670
Q3452000 5/23/2014 dissolved - ICP 50
based on the above values, would the dissolved average actually be 50, or would it be 25, since those records have a "U" in the "Qualify" column?
ASKER
Fe analyzed by the lab method "Dissolved - ICP" is totally different than Fe analyzed by the lab method "ICP". That is why I essentially separate the two. They measure different things. (When I do my statistical analyses on these data I would not mix those two.) That said, any time I have a "U" in the [QUALIFY] field, I use 1/2 * RESULT + 0.999 as the value. Thus, 50 U would become 25.999. Any averaging would use the 25.999 as the value.
Every time you add some new rule to your computations, it makes this significantly more difficult, and makes me very suspect of how you plan on using these results. There are many good statistical packages on the street, but I would not take these values as statistically valuable because there are so few results for any given location/element/method.
I'm attaching version 3 of the database. I've broken the two parts ([Results]) of the process into subqueries (Query_01_Results and Query_02_Results_PQL), so that you can take a look at those, and play with them to get them "correct". As long as you keep your columns consistent between those two queries, Query_03_xTab should work. In the crosstab query, I have appended an asterisk (*) to the end of the value to indicate whether that value is an aggregate (avg or max). You could replace that with some other indicator, including the [RecCount] column itself, to indicate how many records were used to achieve the aggregate value. This would look something like:
TRANSFORM First(Format([SomeValue],"#.000") & IIf(IsNull([RecCount]),Null," (" & [RecCount] & ")")) AS Expr1
I'm going to have to cut you loose on this project. Good luck
Chemicals3.mdb
ASKER
Thanks very much Dale. I appreciate all of your help and patient assistance. Yes, I will only "keep" the statistical analyses that make sense, based on a large enough n. You've been a huge help and I appreciate it. Best regards.
If you write SQL queries you can use GROUP BY to collapse rows that share the same values into a single row. I'm not much of a specialist with Access,
for 1.)
but in standard SQL this would look like:
select Col_A, Col_B from whatevertable group by Col_A, Col_B;
If you need any other value that is not grouped, you need to use group functions like MAX(), MIN(), SUM() in the query. You cannot access them directly, as multiple differing values might appear; a group function unifies them.
this might look similar to this:
select Col_A, Col_B, MAX(Col_C) from whatevertable group by Col_A, Col_B;
for 2.)
to get all rows where the value starts with "A", "B", or "C" you can use the following statement:
select * from whatevertable where (Col_a like 'A%' or Col_a like 'B%' or Col_a like 'C%');
Of course you can combine them with other logical conditions...
for 3.) you want to run an update, if I understood correctly.
The query should look like the following:
update whatevertable set Col_G = Col_E where (Col_C like '%Fe%');
if you want to update only selected rows from 2.) it should look similar to this...
update whatevertable set Col_G = Col_E where (Col_C like '%Fe%') and (Col_a like 'A%' or Col_a like 'B%' or Col_a like 'C%');
Brackets at this point have to be set carefully.
for 4.)
use update again, similar to 3.), but prepend the sign rather than replace the value (in Access, use "<" & Col_G and * wildcards):
update whatevertable set Col_G = '<' || Col_G where Col_F like '%U%' and Col_C like '%Fe%';
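Putting steps 2 through 4 together, here is a runnable sqlite3 sketch against rows modeled on the question's sample data. Note that Access itself would use * wildcards with Like and & for concatenation; "data" is the asker's table name.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE data (Col_A TEXT, Col_B TEXT, Col_C TEXT,
               Col_D REAL, Col_E REAL, Col_F TEXT, Col_G TEXT)""")
con.executemany(
    "INSERT INTO data (Col_A, Col_B, Col_C, Col_D, Col_E, Col_F) "
    "VALUES (?, ?, ?, ?, ?, ?)",
    [("Asite 1", "1/1/14", "Fe", 0.1, 220, None),
     ("Asite 1", "1/1/14", "Zn", 0.001, 20, "U"),
     ("Asite 1", "7/31/09", "Fe", 0.1, 100, None),
     ("Asite 2", "3/1/14", "Fe", 0.1, 50, "U")])

# step 3: copy Col_E into Col_G where Col_C contains "Fe"
con.execute("UPDATE data SET Col_G = Col_E WHERE Col_C LIKE '%Fe%'")
# step 4: prefix '<' where Col_F has a U and Col_C contains "Fe"
con.execute("UPDATE data SET Col_G = '<' || Col_G "
            "WHERE Col_F LIKE '%U%' AND Col_C LIKE '%Fe%'")
# step 2: display only rows whose Col_A begins with A, B, or C
out = con.execute("SELECT Col_A, Col_C, Col_G FROM data "
                  "WHERE substr(Col_A, 1, 1) IN ('A', 'B', 'C')").fetchall()
print(out)
# → [('Asite 1', 'Fe', '220.0'), ('Asite 1', 'Zn', None),
#    ('Asite 1', 'Fe', '100.0'), ('Asite 2', 'Fe', '<50.0')]
```

Only the Fe rows get a Col_G value, and only the "U"-qualified Fe row picks up the "<" prefix, which matches requirements 3 and 4 in the question.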
Please let me know if you have additional questions....
br
Peter