
I am trying to cast a varchar to bigint and then insert it into an int column, but I am not getting the expected value. So I tried the following statements:

DECLARE @varData VARCHAR(50)
SET @varData = '0000019.33'

SELECT CAST(@varData * CAST(100 AS float) AS bigint)
SELECT CAST(@varData * CAST(100 AS float) AS varchar)
SELECT CAST(CAST(@varData * CAST(100 AS float) AS varchar) AS float)

Results:

1932
1933
1933

I am not able to understand why this difference occurs. But when I change the value to 0000029.33, all three results are the same.

I am stuck and have no idea why this is happening. Help needed!


2 Answers


Please see this: Difference between numeric, float and decimal in SQL Server.

For your question, you should try casting @varData as numeric, like this:

SELECT CAST(CAST(@varData AS numeric(27,9)) * 100  AS bigint)
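For reference, a minimal comparison sketch (assuming SQL Server and reusing the question's value). The float product ends up just below 1933 and the bigint cast truncates it, which matches the 1932 the question reports, while the numeric route represents 19.33 exactly:

DECLARE @varData VARCHAR(50)
SET @varData = '0000019.33'

-- float path: the product is stored as a value just below 1933,
-- and CAST to bigint truncates, so this returns 1932 (as in the question)
SELECT CAST(@varData * CAST(100 AS float) AS bigint)

-- numeric path: 19.33 is held exactly as a decimal, so 19.33 * 100 = 1933
SELECT CAST(CAST(@varData AS numeric(27,9)) * 100 AS bigint)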
Answered 2013-03-30T15:48:29.397

It happens because SQL Server rounds or truncates the data during conversion: for example, converting 2.97 to an integer the wrong way gives you 3 instead of 2. Now imagine how easily the result changes when the float value is off by only something like 0.000002 :) I hope this makes it clearer.

DECLARE @varData VARCHAR(50),
        @float   float,
        @bigint  bigint

SET @varData = '0000019.33'

-- convert the varchar through float to bigint in a single expression
SET @bigint = CONVERT(bigint, CONVERT(float, @varData))
PRINT @bigint

-- or do it in two explicit steps
SET @float  = CONVERT(float, @varData)
SET @bigint = CONVERT(bigint, @float)
PRINT @bigint
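As a rough illustration of the rounding loss described above (a sketch, assuming SQL Server; the 1932 and 1933 outputs are the ones the question reports):

DECLARE @varData VARCHAR(50)
SET @varData = '0000019.33'

-- the float product is not exactly 1933: subtracting 1933 leaves a tiny
-- negative remainder, CAST to bigint truncates it down to 1932, and CAST to
-- varchar rounds it back up to '1933'
SELECT @varData * CAST(100 AS float) - 1933 AS float_error,
       CAST(@varData * CAST(100 AS float) AS bigint)  AS truncated,
       CAST(@varData * CAST(100 AS float) AS varchar) AS rounded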
Answered 2013-03-31T14:51:49.790