SQL

SQL (Structured Query Language) is designed to be used in conjunction with relational database products as a means of working with sets of data. SQL consists of data definition, data manipulation, and procedural elements. Its scope includes data insert, query, update and delete, schema creation and modification, and data access control.

I am having trouble figuring out how to order the following SQL query. When I add "order by H.Agency, H.ActivityType" at the end of the query, I get a syntax error near the keyword 'order'. Please help!!

With CTE_Hours as (
    select distinct
        H.AgencyID, H.Agency, H.Hours,
        isnull(H.Classification, '') Classification,
        isnull(H.Objectives, '') Objectives,
        H.ActivityType, H.ActivityOther, H.Narrative, H.Outcome, H.Duration,
        H.Frequency, H.Strategy, H.Need, H.NeedOther, H.ActivityID, H.RegID,
        R.AgeCurrent, R.CommunityCommittee, R.YouthCommittee, R.Parentcheck,
        R.CommunityResident, R.Race, R.Gender, R.Sector,
        cast(H.ActivityDate as Date) ActivityDate, H.Fiscal
    from tblOrgHours H
    inner join tblOrgRegistrations R
        on H.Regid = R.RegID
        and R.AgeCurrent between 0 and 99
    Where (H.Agency = 'Administrator' OR H.Agency = 'Chicago Area Project -')
        And H.Fiscal = 2019
        And H.ActivityDate >= '07/01/2018' And H.ActivityDate < '12/31/2018'
        And R.RegDate >= '07/01/2018' And R.RegDate < '12/31/2018'
)
select
    H.Agency, A.ActivityName, H.Classification, H.ActivityDate, H.Objectives, H.Hours,
    count(A.ActivityName) over (partition by H.Agency, A.ActivityName order by A.activityName) as [ActivityCount],
    Count(H.RegID) as [# individuals],
    H.[ActivityType], H.ActivityOther, H.Narrative, H.Outcome, H.Duration,
    H.Frequency, H.Strategy, H.Need, H.NeedOther,
    COUNT(CASE when R.CommunityCommittee = '1' then 1 end) as [CommunityCommittee],
    COUNT(CASE when R.YouthCommittee = '1' then 1 end) as [YouthCommittee],
    COUNT(CASE when
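In case it helps narrow it down: with a CTE, the ORDER BY has to be the very last clause of the final outer SELECT (after its WHERE/GROUP BY/HAVING), and it is not allowed inside the CTE definition itself unless TOP or OFFSET is used. A minimal sketch of the placement, reusing a few of the names above:

With CTE_Hours as (
    select AgencyID, Agency, ActivityType
    from tblOrgHours              -- no ORDER BY inside the CTE definition
)
select H.Agency, H.ActivityType, count(*) as ActivityCount
from CTE_Hours H
group by H.Agency, H.ActivityType
order by H.Agency, H.ActivityType;    -- ORDER BY goes last, after GROUP BY/HAVING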

I'm currently working on a stored procedure (SQL Server 2014) and could use a second set of eyes to make sure it's efficient. The SP returns different logging data we collect throughout a user's session on our site - essentially a home-grown, custom (and much simpler) version of Google Analytics.

Here's a brief summary of what's going on in the SP so far:
- a temporary table (#tmp) is created, and then populated with the results of another SP (this SP contains all of the basic logging data that we tracked way before I came along). I cannot currently modify this SP as it's called elsewhere in our site, so I do need to populate the results in a temp table and add on to it.

- After it's populated, I add some additional columns to the #tmp table to take advantage of the new fields that have been added into the logging functionality

- Next (and the part I'm not sure is the best way to go), I update the groupID column. Essentially, I want to match up the groupID from the LogOfQueries table to the records already in the #tmp table. This update takes between 7 and 10 seconds to execute, which is a lot slower than I expected (for comparison, if I just run the Admins_History SP, it runs in under a second).

Both tables have an ID column that links them to each other (visitID = visitLogID). I've tried doing a simple select * from #tmp statement, instead of implementing the paging in the dynamic SQL creation, and I get the same results: between 7 and 10 seconds.

The sql …
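Even without seeing the rest of the SQL, the usual shape for that groupID step is a joined UPDATE against the temp table, with an index on the temp table's join column so the match doesn't have to scan. A sketch, assuming #tmp carries visitID and LogOfQueries carries visitLogID and groupID (swap the column names if they are the other way around):

CREATE INDEX IX_tmp_visitID ON #tmp (visitID);

UPDATE t
SET t.groupID = q.groupID
FROM #tmp AS t
INNER JOIN LogOfQueries AS q
    ON q.visitLogID = t.visitID;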
Hi,
I have the following code
USE [<database_name>]
GO
SELECT name AS Assembly_Name, permission_set_desc
FROM sys.assemblies
WHERE is_user_defined = 1;
GO

This is used to test if CLR assemblies are in use for a database.
I need to run this against an installation of SQL Server which houses about 50 Databases.
I would like to code this so that it automatically changes the database name, rather than me having to input each database name manually.
Any guidance appreciated.
Thanks
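One way that might work (a sketch, untested against your instance) is to build the statement dynamically for every user database from sys.databases and run it with sp_executesql:

DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + N'SELECT ''' + name + N''' AS database_name, name AS Assembly_Name, permission_set_desc '
    + N'FROM ' + QUOTENAME(name) + N'.sys.assemblies WHERE is_user_defined = 1;'
FROM sys.databases
WHERE database_id > 4;      -- skip master, tempdb, model, msdb

EXEC sys.sp_executesql @sql;

This returns one result set per database. The undocumented sp_MSforeachdb procedure is another common shortcut for running a command in every database, though it is unsupported.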
Hello.  I want to write a SQL statement that will check one table against another and provide matching results.

  select t1.ID, t1.Name from Table1 as t1
  where exists (select t2.Name from Table2 as t2 where t1.name like t2.name)

The above code will work, but only if the full name matches. I also want partial matches. For example, if I have "Mickey Mouse" in Table1 and "Mouse, Mickey" in Table2, then it should consider that a partial match and show it to me. I want to be able to use a wildcard character - for example: select t2.Name from Table2 as t2 where t1.name like %t2.name% or similar.

I also want to return values from both tables in the output so I can manually view the information.

Thanks for your help.

===========================
Table1:
John Smith
Jane Doe
Mickey Mouse


Table2:
John Smith
Mouse, Mickey
John Doe

===========================
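A possible starting point, as a sketch only: wildcard-match in both directions, and normalise the "Last, First" form in Table2 back to "First Last" so "Mouse, Mickey" can match "Mickey Mouse" (this assumes the comma format shown above):

SELECT t1.ID, t1.Name AS Table1Name, t2.Name AS Table2Name
FROM Table1 AS t1
CROSS JOIN Table2 AS t2
CROSS APPLY (
    SELECT CASE WHEN CHARINDEX(',', t2.Name) > 0
                THEN LTRIM(SUBSTRING(t2.Name, CHARINDEX(',', t2.Name) + 1, LEN(t2.Name)))
                     + ' ' + LEFT(t2.Name, CHARINDEX(',', t2.Name) - 1)   -- "Mouse, Mickey" -> "Mickey Mouse"
                ELSE t2.Name
           END AS NormalizedName
) AS x
WHERE t1.Name LIKE '%' + x.NormalizedName + '%'
   OR x.NormalizedName LIKE '%' + t1.Name + '%';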
I need to convert this from an MS Access Query to a SQL Server view. How would I write this in SQL to get the format in hours & Minutes?

[PRTTotalMins]\60 & Format([PRTTotalMins] Mod 60,"\:00") AS PRT_Total_Mins
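A possible T-SQL equivalent for the view, assuming PRTTotalMins is an integer number of minutes (integer division gives the hours, modulo gives the minutes):

CAST(PRTTotalMins / 60 AS varchar(10))
    + ':' + RIGHT('0' + CAST(PRTTotalMins % 60 AS varchar(2)), 2) AS PRT_Total_Mins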

What is the best option to interface SQL with Visual Studio's .NET? I have a project that requires a connection to SQL data.

If I wanted a free SQL download, where could I find it?
I have a simple stored procedure which generates insert statements, and I want to know how to execute them automatically.

I am calling this stored procedure from a button on a Classic ASP page, which is the reason for wanting the INSERT statements to be executed after they are created.

In its simplest form,  here is the statement that generates the INSERT statements :

SELECT 'INSERT INTO DEV_Test (''' + txtSchoolID + ''', '''', '''', '''', '''' )' FROM DEV_Test2

… and this is what is produced :

INSERT INTO DEV_Test ('150721346050', '', '', '', '' )
INSERT INTO DEV_Test ('3456407213', '', '', '', '' )
INSERT INTO DEV_Test ('3721313250', '', '', '', '' )
INSERT INTO DEV_Test ('3721388839', '', '', '', '' )

… and I want the stored procedure to execute these statements.

Can anyone help?

Thank you

Jim
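Two possible routes, sketched below. Since everything comes from DEV_Test2 anyway, a plain INSERT ... SELECT inside the procedure avoids generating statement text at all; if the generated-statement route is preferred, the statements can be concatenated into a variable and run with sp_executesql. (Note the generated statements shown above would also need a VALUES keyword before the parenthesis to be executable.)

-- option 1: set-based, no dynamic SQL (assumes DEV_Test has five compatible columns)
INSERT INTO DEV_Test
SELECT txtSchoolID, '', '', '', ''
FROM DEV_Test2;

-- option 2: build the statements into one batch and execute it
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql
    + 'INSERT INTO DEV_Test VALUES (''' + txtSchoolID + ''', '''', '''', '''', '''');' + CHAR(10)
FROM DEV_Test2;

EXEC sys.sp_executesql @sql;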
I have two tables. Tblreadinglist contains a list of books:

ReadID autonumber
Level text
Title text
word count number
File location text

Table 2, Tblreadlistlink, contains books kids have read:

Readlist_id autonumber
Book_ID_FK id of book read from list of books
Student_id  fk of student
readdate date

I am trying to get a list of books the kids have NOT read but I am going around in circles.
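A sketch that might help untangle it - books with no matching read record for a given student ([Enter Student ID] is just a parameter placeholder; drop that condition to list books nobody has read at all):

SELECT r.ReadID, r.Title
FROM Tblreadinglist AS r
WHERE r.ReadID NOT IN (
    SELECT l.Book_ID_FK
    FROM Tblreadlistlink AS l
    WHERE l.Student_id = [Enter Student ID]
);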
Hello everyone.
When "upsizing" an Access application to SQL Server, the Upsizing Wizard offers the possibility concerning adding timestamp field to the tables: "Let system decide". Likewise, SSMA offers the same possibility: "Let SSMA decide". I'd like to ask: how exactly they "decide" about which table "needs" a timestamp field and which doesn't?
The result of such operations is that some of the tables get timestamps and some of them don't. But I couldn't tell how that decision was "taken".
First I thought that small tables (in the sense of the amount of fields they have) don't get timestamp field and big tables do, but then I noticed some small tables that got timestamp by SSMA, and some big ones that didn't. So what is the consideration that SSMA takes when making that "decision"? And when I want to make that decision myself, without SSMA or the Upsizing Wizard, what is the best practice? In other words: When is it recommended to add a timestamp (/rowversion) field to SQL tables? (Assume that it's going to be used as linked tables to MS Access in a multi-user environment, which demands to take good care of concurrency)... I was thinking it to be good practice to add rowversion to all tables. Or isn't it?
Thank you.
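For reference, if you do decide a table needs one, adding the column yourself is a single statement per table (dbo.SomeLinkedTable is a placeholder name):

ALTER TABLE dbo.SomeLinkedTable
ADD RowVer rowversion;

The reasoning usually given is that Access uses this column to detect row changes on linked tables, so it is often recommended for any linked table that gets edited - particularly ones containing floating-point columns.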
I am using the following Access SQL and I receive an error 'could not delete from specified tables'.
DELETE tblGlobal.*
FROM tblGlobal LEFT JOIN tblExclusions ON tblGlobal.Account = tblExclusions.Emails
WHERE (((tblExclusions.Emails) Is Null));
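Access typically raises 'could not delete from specified tables' when the DELETE is based on a joined query it does not consider updatable. A subquery form with the same logic (delete rows whose Account has no match in tblExclusions) usually avoids it - a sketch:

DELETE tblGlobal.*
FROM tblGlobal
WHERE tblGlobal.Account NOT IN (
    SELECT tblExclusions.Emails
    FROM tblExclusions
    WHERE tblExclusions.Emails Is Not Null
);

Another commonly suggested variant is adding DISTINCTROW to the original join, i.e. DELETE DISTINCTROW tblGlobal.* FROM ...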

Hey experts....

(Ms Access 2K)

I need help writing a query that displays prospects in our database who have never placed an order.
Each prospect is assigned a customer number in our Prospects table.
We maintain a second table, our Orders table, which I call InvoiceHistory.
I need a way to get a query of Customer Numbers that do not ever appear in the Orders table.

The query below does a good job of showing me how many invoices we have generated for each customer; however, your help is needed to include customers who do not show up in the query because no record with their customer number exists in the InvoiceHistory table.
Thanks

SELECT Prospects.ContactName, Count(Left([InvoiceNumber],5)) AS Expr1
FROM Prospects INNER JOIN InvoiceHistory ON Prospects.CustomerNumber = InvoiceHistory.CustomerNumber
WHERE (((InvoiceHistory.CustomerNumber)>0))
GROUP BY Prospects.ContactName, Prospects.CustomerNumber
ORDER BY Count(Left([InvoiceNumber],5)) DESC;
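A common pattern for "prospects with no orders" is an unmatched-records query - a sketch in Access SQL using the names above:

SELECT Prospects.CustomerNumber, Prospects.ContactName
FROM Prospects LEFT JOIN InvoiceHistory
    ON Prospects.CustomerNumber = InvoiceHistory.CustomerNumber
WHERE InvoiceHistory.CustomerNumber Is Null;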
In a stored procedure I would like to examine 3 SQL char data fields in a table that contain phone numbers. Some of the fields have no value (but they are not null). I want to find the first field that has a value (a phone number) and then return that value as a newly created field that I can pass on to my Visual Studio 2017 C# program.

Example:
table1.field1 = ""
table1.field2 = 1234567891
table1.field3 = 2345678966

table1.new_field = 1234567891
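One way to express "the first field with a value" in T-SQL is NULLIF plus COALESCE, treating blank or whitespace-only values as NULL - a sketch using the names from the example:

SELECT COALESCE(
           NULLIF(LTRIM(RTRIM(field1)), ''),
           NULLIF(LTRIM(RTRIM(field2)), ''),
           NULLIF(LTRIM(RTRIM(field3)), '')
       ) AS new_field
FROM table1;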
Hi,

I am using the Redgate Data Generator to populate a table.

The field I am populating checks whether another column is not empty and then randomly selects a value from the list.

random.choice('a','b','c') if Approved == "Yes" or Approved == "No" else "Fail"

Any help is appreciated.
I would like to find out how to secure the tables in MS SQL Server 2016 Express. I was totally shocked at what I saw today when a prospective client called me to showcase our software, which offers both an MS Access database and SQL Server as back-end options. The client has a standard off-the-shelf product, yet can move around freely within the tables in SQL Server 2014 and was able to change settings from the same software. Because of what I saw, I cannot supply my software with a SQL Server back end until I know how to protect the tables from intruders. It is now clear that the reason he was calling for our software is that he had messed with the unprotected back end. Thank God I have not yet started supplying SQL Server as a back end, only MS Access 2016, which has its back end encrypted with a password.
(1) If in MS Access we can encrypt the back end with a password, how can we do the same with MS SQL Server 2016 Express?
(2) If there are ways of protecting the tables, doesn't that affect the users and the re-linking of the tables? With MS Access the Linked Table Manager has no problem with that; how about MS SQL Server 2016?

Regards

Chris
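In SQL Server the protection usually comes from logins and least-privilege permissions rather than a single database password: the application connects as an account that can only do what the application needs, so anyone poking at the back end with that account cannot do more. A rough sketch, with the names and password as placeholders:

CREATE LOGIN AppLogin WITH PASSWORD = 'Str0ng!Placeholder#Pw';
CREATE USER AppUser FOR LOGIN AppLogin;

CREATE ROLE AppRole;
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO AppRole;
ALTER ROLE AppRole ADD MEMBER AppUser;

Relinking from Access then just means using that login in the ODBC connection; the permissions travel with the account, not with the links.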
Hello Expert,

A large web-based application is connected to an Oracle 12c database.
The application has a feature which puts a record in Draft status to isolate the record while changes are made.
The Draft record is deleted when the record is Saved and Approved. Or it is supposed to be.
For some reason, under investigation, the Draft records are sometimes not deleted.
This creates an invalid data state we lovingly call a Stray Draft.
A Stray Draft causes an error if a user ever tries to edit the record again, I think
because the new Draft record tries to use the same primary key as the old, undeleted draft.

That is the context.

So the boss says 'Delete all old (last_update_date > than timestamp - 90) drafts.
The only fly in the ointment is there are about eighty draft tables with name syntax
D_tablename and it is known by this writer that some of them have parent-child
relationships. That is, for instance, the D_PAYER table is Parent to the Child D_PAYER_ADDRESS
table and the D_PAYER_ADDRESS table is Parent to the Child D_PAYER_ADDRESS_TYPE and
the  D_PAYER_ADDRESS_USAGE tables.
One cannot delete Child records that have a foreign key relationship to Parent
records.

With eighty tables, getting everything in order empirically is pretty tough. So,
visiting with one of the developers, he said 'If you look at the table metadata (Columns, Data Model,
Constraints, Grants, Statistics, Triggers, Flashback, Dependencies,
Details, Partitions, Indexes, SQL) for the…
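Not a full answer, but for working out a safe child-before-parent delete order among the D_ tables, the Oracle data dictionary can list the foreign-key relationships directly - a sketch, assuming the draft tables all follow the D_ naming pattern:

SELECT c.table_name  AS child_table,
       p.table_name  AS parent_table,
       c.constraint_name
FROM   all_constraints c
JOIN   all_constraints p
       ON  p.constraint_name = c.r_constraint_name
       AND p.owner = c.r_owner
WHERE  c.constraint_type = 'R'                    -- 'R' = referential (foreign key)
  AND  c.table_name LIKE 'D\_%' ESCAPE '\';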
In SQL Server 2008,
I'm trying to figure out the best way to build a table based on this pic?

  freight
any help appreciated

Fordraiders
I have 2 or 3 applications that are only officially compatible up to SQL Server 2014. I would like to move them to a SQL Server 2016 instance. I would test, of course. Is there a way to make a 2016 server behave exactly like a 2014 SQL Server at the database level? I.e. some databases on this instance would be at 2016, some at 2014, and some at 2012, but the instance itself is officially running 2016.
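The per-database knob for this is the compatibility level; it controls T-SQL and optimizer behaviour per database, although it does not make the engine literally identical to the older version. A sketch, with YourDb as a placeholder name:

ALTER DATABASE YourDb SET COMPATIBILITY_LEVEL = 120;   -- 120 = SQL Server 2014
-- other values: 110 = SQL Server 2012, 130 = SQL Server 2016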
I have these four tables. The first 2 are Oracle tables and contain a column called FNOTES.
The query needs to get the FNOTES by FCUSTOMERID, FSNID and FPAYMENTID and insert them into
the PP.Notes table. The primary table is OBO.TASP_CUSTPP.

I need to have a query that will return all the notes into a file that can be used to import
the data into the SQL Server table (PP.Notes) below. Sample data is included at the bottom
of the post.

CREATE TABLE [PP].[CustPPlan] (
	  CustPPId      [int] IDENTITY(1,1) NOT NULL --+    
	, PPId          [int] NULL                     |
	, CustomerId    [varchar](50) NOT NULL --------|----+
    , SNId          [int] NULL --------------------|----|-----+
	, PaymentId     [varchar](50) NOT NULL         |    |     |
	, StartDate     [datetime] NOT NULL            |    |     |
	, EndDate       [datetime] NULL                |    |     |
	, AspCycle      [char](1) NOT NULL             |    |     |
	, AspFrequency  [tinyint] NOT NULL             |    |     |
	, AspDay        [tinyint] NULL                 |    |     |
	, AspMonth      [tinyint] NULL                 |    |     |
	, AspWeekday    [tinyint] NULL                 |    |     |
)                                                  |    |     |
                                                   |    |     |
CREATE TABLE [PP].[Notes] (                        |    |     |
      NotesId        [int] IDENTITY(1,1) NOT NULL  |    |     |
    , SNId           [int] NULL 

SQL Server 2012

I have a datetime field, examples of data in below

2018-10-17 23:00:00.000
2018-11-12 00:00:00.000

I want to add an hour to the field if it has a time of 23:00:00.000 attached to it, but leave it as is if it has a time of 00:00:00.000. I know I can do case when time = 23:00:00 etc. Just wondered if there's a neater way of doing it?

Thanks
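One reasonably tidy option is to cast the datetime to time so the check stays short - a sketch, with dbo.YourTable and YourDateCol as placeholder names:

SELECT CASE WHEN CAST(YourDateCol AS time) = '23:00'
            THEN DATEADD(HOUR, 1, YourDateCol)
            ELSE YourDateCol
       END AS AdjustedDate
FROM dbo.YourTable;

-- or, as an in-place update
UPDATE dbo.YourTable
SET YourDateCol = DATEADD(HOUR, 1, YourDateCol)
WHERE CAST(YourDateCol AS time) = '23:00';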
Hello dear friends.
Recently I've tried to follow the instructions regarding "Cached connection", described in this article:
https://www.microsoft.com/en-us/microsoft-365/blog/2011/04/08/power-tip-improve-the-security-of-database-connections/
and indeed - the "miracle" did happen:
All my linked tables from Access to SQL Server now have a connection string which doesn't reveal anything about the username & password. The tables use the cached connection that was prepared when the application was opened.

Now I have to include in this Access application some tables which reside in another database, on the same SQL Server. They won't use the cached connection.
1. Is there a way to create another cached connection for these new tables? without "harming" the first cached connection?
2. If these tables use a different connection string (so they won't use that cached connection) - is it going to "harm" the cached connection used for the old tables in any way?

Thank you very much!
Can someone advise me on a SQL management/monitoring solution? I am looking at Redgate and Idera at the moment. I have two main concerns: isolating SQL Server performance issues, and monitoring my maintenance plans in SQL Server, i.e. did my backups fire off and whatnot.

I have about 20 SQL servers.
Could someone post a link to some recommendations that fulfill those two needs? The exact solution would be appreciated, i.e. sqltoolkit from Redgate instead of just replying with Redgate.
I have the below/attached employee table in an Access database. I need a query to find all the direct reports and indirect reports of a particular employee.

EmpID      Fname      Lname      MngID
1      Fred      Flinstone      
2      Linda      Thompson      1
3      Sam      Adams      1
4      Jason      Williams      1
5      Michael       Jordan      2
6      Katey      Perry      5


If I wanted to find all the direct and indirect reports of EmpID 2, it should return Michael and Katey. Excel or Access should be OK.


This is what I want to do, but in Access: "SQL Query to find all the subordinates of a superior".
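Access SQL has no recursive query support, so this usually means either one self-join per level or doing the recursion elsewhere (VBA, or SQL Server). For reference, the recursive CTE form in SQL Server syntax, with Employees as a placeholder name for the table shown above:

WITH Reports AS (
    -- direct reports of the chosen manager
    SELECT EmpID, Fname, Lname, MngID
    FROM Employees
    WHERE MngID = 2
    UNION ALL
    -- reports of those reports, and so on
    SELECT e.EmpID, e.Fname, e.Lname, e.MngID
    FROM Employees AS e
    INNER JOIN Reports AS r ON e.MngID = r.EmpID
)
SELECT EmpID, Fname, Lname
FROM Reports;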
I need to run an update query with a join with multiple matching criteria. Relevant tables and columns for reference are as follows.

OPMAT TABLE
arinvt_id
stdcost
backflush

ARINVT TABLE
id
itemno

I am provided with a list of itemno values for which I need to locate the corresponding ID in ARINVT, join that ID to opmat.arinvt_id, and update the values of stdcost and backflush.

Normally, I accomplish this by running multiple select statements against arinvt using the itemno's provided to locate the id's
select id from arinvt where itemno ='12312312321'
I run this over and over in a begin / end loop and get a giant list of arinvt.id's returned.

I then run an update statement: update opmat set stdcost = 'y', backflush = 'y' where arinvt_id = '232132123' (the values returned), and I repeat all of that in a big update loop.


This requires a lot of excel junk to make my statements.


I'm trying to simplify this with the update and join combination with multiple case statements so I'm more efficient with this.
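A single set-based UPDATE with a subquery may replace both loops - a sketch (the itemno list below just repeats the example value from above; paste in the full list you are given):

UPDATE opmat
SET stdcost   = 'y',
    backflush = 'y'
WHERE arinvt_id IN (
    SELECT id
    FROM arinvt
    WHERE itemno IN ('12312312321')
);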
I have a stored procedure which is expecting a data table with one column (user defined table type in SQL Server).
The data table has had to include an extra column, so the stored proc is failing due to the extra column.
How can I pass the first column of the data table to the stored proc?
I'm using C# in Visual Studio 2017.
I have the query below:

SELECT message, type, count(1) as total, 
       STUFF( (SELECT N',' + CONVERT(NVARCHAR(MAX), id)
               FROM dbo.testing t2
               WHERE t2.message = t.message and t2.type = t.type
               FOR XML PATH(''), TYPE
              ).value('.', 'NVARCHAR(MAX)'), 1, 1, ''
            )
FROM testing t
GROUP BY message,type 
HAVING count(1) > 1;

I want to populate a field called eid starting from 1 up to the number of records I have, and each record has 1, 5, or 10 ids, which we just figured out, like: if the ids column has 5 ids such as 18,19,20,21,22, then for all of these the eid will be 1.

like this screenshot

http://prntscr.com/m4w5r5

I want the errorid to start from 1000 and increment for every row, but the IDs associated with a row (the comma-separated values) should all share the same errorid for that row.
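If I am reading that right, one option is to wrap the grouped query and add a ROW_NUMBER-based errorid starting at 1000, so each grouped row - and therefore every id in its comma-separated list - carries the same number. A sketch built on the query above:

WITH grouped AS (
    SELECT message, type, COUNT(1) AS total,
           STUFF((SELECT N',' + CONVERT(NVARCHAR(MAX), id)
                  FROM dbo.testing AS t2
                  WHERE t2.message = t.message AND t2.type = t.type
                  FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'), 1, 1, '') AS ids
    FROM dbo.testing AS t
    GROUP BY message, type
    HAVING COUNT(1) > 1
)
SELECT 999 + ROW_NUMBER() OVER (ORDER BY message, type) AS errorid,
       message, type, total, ids
FROM grouped;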