I have the following query, which I use as part of a custom search engine on my site. I've stripped it down for readability; the joined tables will have columns in the select clause, which is why they're there:
select distinct ( u.userID ), u.lastLogin, totalUsers = COUNT( u.userID ) OVER()
from ZipCodes z
inner join tblUsers u on z.zipCode = u.zipCode
inner join tblUserOffers uo on u.userID = uo.userID
where <a big long where clause>
order by u.lastLogin
offset 0 rows fetch next 500 rows only
My problem is with the totalUsers field. It's counting all of the rows, when I want a count based on the distinct( userID ) results.
For example, when I run the query with certain parameters, I get back 21 rows of data, but the totalUsers column has a value of 46. If I remove the DISTINCT, I get back 46 rows of data, but there are duplicates (and I'm not sure I'm enough of a SQL ninja to tune this complex query to prevent the dupes). This query (part of a stored procedure) will be used often, so it needs to be as efficient as possible.
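Here's a minimal repro of the behavior, with a made-up temp table (the names are just for illustration). It shows that the window COUNT is evaluated before DISTINCT removes duplicates, which is why the count comes back higher than the number of rows returned:

create table #t ( userID int );
insert into #t values (1), (1), (2);

-- DISTINCT collapses this to two rows (userID 1 and 2),
-- but totalUsers is 3 in both, because COUNT(*) OVER()
-- is computed over all three rows before DISTINCT runs.
select distinct userID, totalUsers = COUNT(*) OVER()
from #t;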
Cheers in advance for any help :)