Query help for Inpatient, Outpatient and Alternate Care data for letters.

Hello,
I want to know the best way to approach this scenario.
I currently have a SQL statement with three big SELECT statements joined by UNION, like:
Select Inpatient data
Union
Select Outpatient data
Union
Select Alternate Care data

This works fine for creating a file with a single letter generated for each patient or case.
Now the specs have changed: a letter needs to be generated for each Mail-to-Type.
A Mail-to-Type is one of 'Facility', 'Doctor', or 'Patient'.
So basically I now need a letter to print for each Mail-to-Type, and the current logic I have will not do this.
In other words, three letters are to be generated per patient or case, one for each mail-to-type, not just one letter.  Or can I keep the same logic above and still print out a letter for each mail-to-type?

I tried the scenario like this for the Inpatient data:

Select Inpatient - Facility
Union
Select Inpatient - Doctor
Union
Select Inpatient - Patient

But this does not seem to work. Any other suggestions?
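Would something like this work instead? For illustration only (the table and column names here are made up, not from my real query), I was thinking of cross joining the case rows against the three mail-to-type values, so that each case produces one row per letter:

    -- Illustrative sketch with invented names (patient_case, case_id,
    -- patient_name). On Oracle, the constant SELECTs would need FROM dual.
    SELECT pc.case_id,
           pc.patient_name,
           mt.mail_to_type
    FROM   patient_case pc   -- stand-in for my existing unioned result
    CROSS JOIN
         ( SELECT 'Facility' AS mail_to_type
           UNION ALL SELECT 'Doctor'
           UNION ALL SELECT 'Patient' ) mt
    ORDER BY pc.case_id, mt.mail_to_type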

Thanks
Machinegunner Asked:
Machinegunner (Author) Commented:
Thanks for the replies. I went ahead and "chopped" the SQL file down, as each UNION branch had about 140 columns in it.  :-)
Anyway, I kept the logic intact, as the logic differs between the outpatient, inpatient, and alternate care members.

Thanks again.
Test-Query.sql
 
awking00 Commented:
Can you post your original union query? You can leave out the irrelevant fields and personal information for simplicity, but include the fields that define the inpatient, outpatient, and alternate care data, plus the mail-to data.
 
corvetteguru Commented:
Agreed. I know this isn't impossible to resolve... it's all in the data. Once the records have been unioned, I am sure a CASE WHEN can determine which letter each row is to receive.
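Something along these lines, maybe (every table and column name below is a placeholder I'm inventing, not from your actual query):

    SELECT u.case_id,
           u.mail_to_type,
           CASE u.mail_to_type
                WHEN 'Facility' THEN 'Facility letter'
                WHEN 'Doctor'   THEN 'Doctor letter'
                ELSE 'Patient letter'
           END AS letter_to_send
    FROM ( -- your existing three selects, unioned as before
           SELECT case_id, mail_to_type FROM inpatient_data
           UNION ALL
           SELECT case_id, mail_to_type FROM outpatient_data
           UNION ALL
           SELECT case_id, mail_to_type FROM alternate_care_data ) u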

 
corvetteguru Commented:
whoa... machinegunner... dude... KISS method... Ockham's Razor... Take your pick!

That thing is MASSIVE! I can see your problem already. I am looking at it and it's a tough one to follow.

Let me share something from my days in the mainframe biz - ADABAS/Natural, to be exact.

ADABAS is a very powerful, fast and stable DBMS, but it is bound to the mainframe. When you want to move tons of data quickly, ADABAS is as good as it gets.

However, when you were dealing with very complex multi-table lookups, it became a bit of a pig.

What I did to fix that problem was simple - break it up into steps.

Step 1 was my main extraction of data that I needed to filter through. High level, primary key values, dates, things like that.

Step 2 was to filter that by looking for exclusionary values in the next table.

Step 3 was then to add values - names, status values, things like that.

Step 4 was to do a sort and then save the results to the PC for conversion to Excel or something.

Each of the above steps would create a new flat file (think temp table, but it could be retained) that fed the next step.

Here is where I would win... one of my colleagues (he has since passed away) was very brilliant. He would create these massive reports using all the power of ADABAS...

ADABAS had these things called "super descriptors", which were keys made up of multiple fields. Building indexes for those was pretty piggy, but they were powerful. He would make one of these massive reports, start it running at 4:30pm on Friday, and it MIGHT be done by 8:00am on Monday!

I would take the exact same assignment, create a small job to extract, run it, and whilst it was running, I would start writing the next step, and so on.

The upside was that if mine failed, I could restart at any step; if his failed, the results were lost or had to be deleted, and the job restarted! The punchline? My process would take no more than an hour!!!! DONE! It might take half a day to write, but I had it to reuse if needed, of course (as did he - his approach was quicker to write, at times, but many times slower to run).

This process works just as well with SQL... extract into a temp table, then keep adding data to it, step by step, until you reach the end point. It's as much about assembling the data as querying it. You can use either temp tables or a series of reusable permanent tables. That's your call.
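A rough sketch of what I mean, using SQL Server-style temp tables (all table and column names here are invented; on Oracle you would use global temporary tables or CREATE TABLE AS SELECT instead):

    -- Step 1: main extraction - high-level rows, primary keys, dates.
    SELECT case_id, admit_date
    INTO   #step1
    FROM   source_cases;

    -- Step 2: filter by looking for exclusionary values in the next table.
    SELECT s.case_id, s.admit_date
    INTO   #step2
    FROM   #step1 s
    WHERE  NOT EXISTS (SELECT 1 FROM exclusions e WHERE e.case_id = s.case_id);

    -- Step 3: add values - names, status values, things like that.
    SELECT s.case_id, s.admit_date, p.patient_name, p.status
    INTO   #step3
    FROM   #step2 s
    JOIN   patients p ON p.case_id = s.case_id;

    -- Step 4: sort and hand off the final result.
    SELECT *
    FROM   #step3
    ORDER BY case_id;

If any step fails, you fix it and rerun from that step, not from the top.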

Think about it... I will continue to stare at your existing proc... but just looking at the complexity, breaking it down into bite-sized chunks could really help you maintain it, help a future analyst understand it, and speed up any processing you are doing. Especially when dealing with some 140 columns!

Best of luck... I genuinely mean it.
 
Machinegunner (Author) Commented:
Please close, thanks.
 
Steve Wales, Senior Database Administrator, Commented:
This question has been classified as abandoned and is closed as part of the Cleanup Program. See the recommendation for more details.