Schuttend asked:
Cast problem string into integer
Hello,
The attached code is giving me an error. Could somebody explain why?
In table 'Data', [close] is a string value. In table 'test', [Close] is an integer.
Regards,
Dennis
insert into test ([Date],cast[Close] AS integer)
select [DATE],[close] from data
Missing a '(' for CAST.
If the number is bigger than 2 billion (more than 10 digits, or 10 digits beginning with 3 or higher), then you may need to use CAST([Close] AS BIGINT).
http://msdn.microsoft.com/en-us/library/ms187745.aspx
mwvisa1 is always quick and precise
ASKER
Might be, but the code is still incorrect ;) Please see the attached pic.
casterror.jpg
That's because it's a double, not an integer.
Your data has a decimal in it.
INSERT INTO test([Date], [Close])
SELECT [DATE], cast([Close] AS MONEY)
FROM data
If the datatype of [Close] is INT, then you will have to do this:
INSERT INTO test([Date], [Close])
SELECT [DATE]
, ROUND(cast([Close] AS MONEY), 0)
FROM data
Hope that helps.
or
cast(unit_price as numeric(10,2)) or whatever precision it should be
Correct. The point is you cannot do the cast in the column definition of the insert statement, it must be done to the value and the value must be valid for the type you are casting to. Therefore, for an INT, you cannot cast 1666.53 since it is not a whole number.
Hopefully that makes sense.
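Putting the two points together, a sketch of the corrected statement might look like this (table and column names are taken from the question; the MONEY-then-ROUND step follows the earlier suggestion, since a decimal string cannot be cast straight to INT):

```sql
-- Sketch: CAST goes on the value in the SELECT list,
-- never inside the INSERT column list.
INSERT INTO test ([Date], [Close])
SELECT [DATE],
       -- A string such as '1666.53' cannot be cast directly to INT,
       -- so convert via MONEY and round to a whole number first.
       CAST(ROUND(CAST([close] AS MONEY), 0) AS INT)
FROM data;
```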
ASKER
How about this one?
The destination field is now formatted as decimal(10,10). However, I get an error. Any reason for this?
regards,
Dennis
cast2.jpg
decimal[(p[, s])] and numeric[(p[, s])]
p is the total number of digits and s is the number of digits after the decimal point, so normally p > s; decimal(10,10) leaves no room for digits before the point.
Try decimal(20,10).
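A quick sketch of why the precision matters (the literal values here are chosen purely for illustration):

```sql
-- decimal(p, s): p = total digits, s = digits after the decimal point,
-- so p - s digits remain for the integer part.
SELECT CAST('0.9934323420' AS DECIMAL(10,10)); -- fits: no digits needed before the point
SELECT CAST('1112.45'      AS DECIMAL(20,10)); -- fits: 10 digits available before the point
-- SELECT CAST('1112.45' AS DECIMAL(10,10));   -- arithmetic overflow: 0 digits before the point
```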
What are you trying to do?
If you control the target column data type, then figure out what you want the data to represent and the data type will be self-explanatory. For example, if close is, say, the stock price at the last bell, then MONEY would make sense, or a DECIMAL(p, 4), since dollars typically go out to 4 places max.
Regards,
Kevin
ASKER
I have multiple strings which need to be converted before calculations can be made.
The problem is that sometimes the string contains a money value (1112.45, for example), but sometimes it contains values like 0.993432342034. What would be the correct format to choose for the destination field?
The DECIMAL(20, 10) should fit both of those cases.
http://msdn.microsoft.com/en-us/library/ms187746.aspx
Best regards,
Kevin
ASKER
Hi,
This is probably more difficult than I thought.
The decimal(20,10) does work on the columns open, high, low, close, system ident.
But not on [delta] and [signal]. How can this be?
I changed the field format to decimal(30,22) to give it a try, but as you can see, it still generates an error.
cast3.jpg
cast4.jpg
Run this and see if you get any records:
SELECT [DATE], [Close]
FROM data
where isnumeric([Close]) = 0
ASKER
no errors, but no records are returned...
You may have to find which record(s) are causing the problem.
Can you add more conditions to locate the record(s)?
SELECT ...
FROM data
where [date] between .. and ..
I recommend you change all the columns from decimal to double and try it. Wrap the date column in CONVERT, and LTRIM/RTRIM the strings to get rid of spaces, just in case. You don't need to cast in the SELECT statement, as the values will be cast automatically if they are in the correct format for the type. You do, however, have to convert the date: being in Europe, you might have settings on the server that don't recognize the mm/dd/yyyy format by default:
insert into test
(
date,
[close]
)
SELECT
convert(datetime, ltrim(rtrim([DATE])), 101),
ltrim(rtrim([Close]))
FROM
data
Think you are bouncing to a new question here, so just to clarify: the DECIMAL(20, 10) worked for [Close], so that is not an issue.
>>But not on [delta], [signal]. How can this be?
You will need to check data for [delta] and [signal] using james' suggestion.
SELECT *
FROM data
where isnumeric([delta]) = 0 OR isnumeric([signal]) = 0
Post the data you find.
ASKER
SELECT *
FROM data
WHERE LEN([delta]) > 20 OR LEN([signal]) > 20
Whether or not the above query finds anything, what you need to look for is the largest decimal value in your data, now that you have confirmed all of it is numeric.
Ultimately, you can just try DECIMAL(38, 28). If values have decimals greater than that then adjust accordingly.
You may have to look at FLOAT or REAL as well.
http://msdn.microsoft.com/en-us/library/ms173773.aspx
and try this
select max(len(substring( [delta], charindex('.', [delta]), 100))) as delta_len1
, max(len(substring( [signal], charindex('.', [signal]), 100))) as signal_len1
, max(len([signal]))
, max(len([delta]))
from data
ASKER
Please find attached the result of jamesqu's query.
len.jpg
Try DECIMAL(38, 19) then. If that doesn't work, it means you have values with more than 19 places before the decimal as well as values with 19 places after it; 38 is the maximum precision for DECIMAL, from my understanding, so you would have to move to FLOAT.
Is that what you want?
Check this:
If you have very long values, you may have to use varchar instead of decimal and do the conversion in your client programming language, if you can.
select max(len(substring( [delta], charindex('.', [delta]), 100))) as delta_len1
, max(len(substring( [signal], charindex('.', [signal]), 100))) as signal_len1
, max(len(substring( [delta], 1, charindex('.', [delta])))) as delta_len2
, max(len(substring( [signal], 1, charindex('.', [signal])))) as signal_len2
, max(len([signal]))
, max(len([delta]))
from data
Schuttend,
Why don't you try my suggestion with type double for your numeric columns instead of decimal? You could build a new table to test. Double is the most flexible numeric type and will deal with all kinds of values, even those with an exponent in them like 1.3E02, and I noticed you had a few such values. The decimal type will invariably fail on those values.
Use double and then try my query, again here:
insert into test
(
date,
[close]
-- ... rest of the columns here
)
SELECT
convert(datetime, ltrim(rtrim([DATE])), 101),
ltrim(rtrim([Close]))
-- ... rest of the columns here, but still use the ltrim-rtrim wrap
FROM
data
Zberteoc,
double is an approximate numeric type; it loses information.
Are you suggesting that double is a useless type? This is a first to me.
Schuttend, just try it. You can also use float, not necessarily double.
It's not useless, but it has its limitations.
If possible, give it a try with a value like 1.123456789123456789:
store it into a double variable and check the value.
ASKER
Indeed, that's my experience as well with double. However, the same also applies to floats. Am I correct?
I actually meant float. Double doesn't even exist in SQL Server.
I was going to keep quiet since I figured you knew something I didn't. ;)
Yes, SQL Server implements double precision as the FLOAT(53) datatype.
Yesterday I tried to post something, but it was lost because of problems with the website. Here:
I checked, and the decimal type actually works fine with values like 1.123E-02, and in the picture attached some posts earlier the values in the Data table look OK. I suspect that the conversion problem comes from the source file that was loaded into the Data table's varchar columns. It is possible that some invisible special characters, like tabs or garbage, were loaded along with the number values. They are there but not visible, and they cause the conversion error.
To check this, open the original source file in a text editor that can make special characters visible. Try WordPad, or even Word, and click the ¶ (show formatting marks) button. If you have anything other than a carriage return and line feed at the end of a line, or tabs within the comma delimiters, there is your problem. You need to clean the source file and load it again.
There is nothing else I can see for now.
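If you'd rather check from within SQL Server than in an editor, a hypothetical sketch along these lines (column names taken from the thread; CHAR(9), CHAR(13), and CHAR(10) are tab, carriage return, and line feed) can flag rows carrying invisible characters:

```sql
-- Hypothetical check for invisible characters hiding in the loaded varchar columns.
SELECT [DATE], [delta], [signal]
FROM data
WHERE [delta]  LIKE '%' + CHAR(9)  + '%'  -- tab
   OR [delta]  LIKE '%' + CHAR(13) + '%'  -- carriage return
   OR [delta]  LIKE '%' + CHAR(10) + '%'  -- line feed
   OR [signal] LIKE '%' + CHAR(9)  + '%'
   OR [signal] LIKE '%' + CHAR(13) + '%'
   OR [signal] LIKE '%' + CHAR(10) + '%';
```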
ASKER CERTIFIED SOLUTION
As a memory jogger - you might want to close off a couple of these (or reply to them), see: https://www.experts-exchange.com/Microsoft/Development/MS-SQL-Server/SQL_Server_2008/Q-23889285-Bulk-INSERT-with-different-field-types.html
ASKER
Thanks...
I didn't understand what happened here; Mark Wills got the points twice for the same question? I don't blame Mark here, but I kind of feel that I just wasted my time on this question simply because the author had a lapse of memory. And Mark had no input on this question whatsoever.
Besides that, I also suggested making the import as float and then doing whatever they needed with it, and, regardless, this is a totally different question than the referenced one.
Hi zberteoc,
Might seem that way; it is just that there was another problem (a couple, actually) that started out as discrete bona fide questions, and, in answering the import one, it did get into formatting columns. I ended up writing an import / data load which also took care of formats. So, I think I did have a lot of input into resolving this question, just not necessarily on this thread (if that makes sense). My answer in this thread is a legitimate answer to this question, and in providing it, I did raise the question of whether this is a different problem. I was concerned.
In fairness to schuttend, this data file appears to have been an absolute pain, and what seem to be different (albeit related) questions happened to get their answers slightly differently to the way in which they were asked.
In fact, I do think schuttend should be commended - instead of one question morphing into several different Q+A, he actually thought about how to segment a problem into discrete questions - it doesn't often happen that way... And here is irony for you: it is the experts that have morphed a few answers into one!
So, I do not think it was a case of getting double-up points as much as being able to provide double-up answers. Probably does not relieve the frustration, but I thought an explanation was in order.
Cheers,
Mark Wills
ASKER
Hi Zberteoc,
I'm sorry you feel this way. I sure appreciate your great work. I am still working on your comments in the question below:
https://www.experts-exchange.com/questions/23883001/Column-calculations-and-inserting-into-table.html
This issue is still not resolved. I am even willing to offer you 1000 points more if this issue can finally be solved. Sharath has been working on this as well, but he is not responding anymore.
I am trying to break up problems and post them separately. However, sometimes they are linked together for some reason.
If you please can solve my problem at the link below I think we can arrange the extra 1000 points somehow.
Regards,
Dennis
INSERT INTO test([Date], [Close])
SELECT [DATE], cast([Close] AS INT)
FROM data