Solved

MySQL query returns far too many rows (2 million)!

Posted on 2014-02-05
356 Views
Last Modified: 2014-02-05
Experts,

I have a query that should return at most 25 rows - the results for one location in January 2013.

However the resultset has 2.5 million or so!

Obviously I'm missing something, b/c it looks like every row in every join table is being included in the results.

Here's the query.  Help!
sql.txt
Question by:Glen Gibb
5 Comments
 
LVL 3

Expert Comment

by:Shane Kahkola
ID: 39836328
Can you try SELECT DISTINCT?
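
Something along these lines — table and column names here are placeholders, since we haven't seen your schema yet:

```sql
-- Hypothetical sketch: DISTINCT collapses rows that are
-- identical across every selected column. If duplicates remain,
-- the join itself is multiplying rows (a fan-out problem).
SELECT DISTINCT loc.name, r.reading_date, r.amount
FROM location loc
JOIN reading r ON r.location_id = loc.id
WHERE loc.id = 1
  AND r.reading_date BETWEEN '2013-01-01' AND '2013-01-31';
```

Note DISTINCT only masks the symptom; if it shrinks the result but doesn't get you to 25 rows, the join condition is the real culprit.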

If not, can you post some sample data that gets pulled from that query?
 
LVL 1

Expert Comment

by:SarahDaisy8
ID: 39836353
This is probably way too simple a thought to be the solution, but I'll throw it out there heh.  What about GROUP BY?
 
LVL 11

Accepted Solution

by:
John_Vidmar earned 500 total points
ID: 39836718
Pull the SQL into a database tool, replace the SELECT list with SELECT COUNT(*), comment out all tables except the first join, and run the query.  Add the next join, run the query... repeat these steps until you hit the table that makes your query blow up to 2 million records.  At that point, review that join (perhaps a typo).
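
For example (table names are made up just to illustrate the steps):

```sql
-- Step 1: base table only. Should be a small, expected number.
SELECT COUNT(*) FROM consumption c;

-- Step 2: add the first join and re-run.
SELECT COUNT(*)
FROM consumption c
JOIN consumption_item ci ON ci.consumption_id = c.id;

-- Step 3: add the next join, and so on, one at a time.
-- The step where the count jumps from ~25 to millions is where
-- the join condition is wrong or incomplete.
```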

Also, I would qualify every field reference, especially in the ON clause.
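
By "qualify" I mean prefixing every column with its table alias, for instance (generic names, not your schema):

```sql
-- Unqualified: if both tables happen to have a column with the
-- same name, it is easy to join on the wrong one without noticing.
-- JOIN order_item ON order_id = id

-- Qualified: no doubt which table each column comes from.
-- JOIN order_item oi ON oi.order_id = o.id
```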
 

Author Comment

by:Glen Gibb
ID: 39836937
Tnx, Experts, for your quick reply.

1.  (ccb) Yup, tried the DISTINCT modifier and reduced from a couple of million records to about 180 K.

2.  (sara) Attempted the GROUP BY, but things got REALLY slow, and still had superfluous results.

3. (john) Sounds like a plan.  I'll get started here.  Tell me please, how does one "qualify" a field reference?  I have made sure there are no ambiguous references, etc.  Am I missing something important?

I'll post sample results - no one needs a couple million of the same row!

Capt
 

Author Closing Comment

by:Glen Gibb
ID: 39838046
Thanks for the approach to solving this problem.

Turns out it was bad DB design.  Merged fields in the "consumption" and "consumption_item" tables to eliminate unnecessary duplication.
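
In case it helps anyone else hitting this: the row explosion was classic join fan-out.  A minimal demonstration (toy tables, not my actual schema):

```sql
-- One row on the left, three matching rows on the right:
CREATE TABLE a (id INT);
CREATE TABLE b (a_id INT, val INT);
INSERT INTO a VALUES (1);
INSERT INTO b VALUES (1, 10), (1, 20), (1, 30);

-- The single row in a matches every row in b with a_id = 1,
-- so the result has 3 rows. With duplicated fields spread across
-- two tables, the multiplication compounds with every join:
-- 25 expected rows x ~100,000 matches gets to millions fast.
SELECT * FROM a JOIN b ON b.a_id = a.id;
```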

Presto!  MySQL workbench was able to help me analyse the difficulty.

Question has a verified solution.

If you are experiencing a similar issue, please ask a related question

Many CHPs use the buzzword ‘Cloud Hosting’ to sell the idea of reliability. Most consumers have the opinion that cloud hosting is easily scalable and can handle just about anything. Further, most CHPs are not transparent and hide the underlying arch…
There’s a movement in Information Technology (IT), and while it’s hard to define, it is gaining momentum. Some call it “stream-lined IT;” others call it “thin-model IT.”
Connecting to an Amazon Linux EC2 Instance from Windows Using PuTTY.
Sending a Secure fax is easy with eFax Corporate (http://www.enterprise.efax.com). First, just open a new email message. In the To field, type your recipient's fax number @efaxsend.com. You can even send a secure international fax — just include t…

726 members asked questions and received personalized solutions in the past 7 days.

Join the community of 500,000 technology professionals and ask your questions.

Join & Ask a Question