Difference Between SInteger and Integer
jason_the_adams
Posts: 108
I tried to pass a value of -1, upon which I was given a warning that I was converting an SINTEGER to an INTEGER. I was curious about this, so I looked up what the help file says about the difference between the two:
Integer - "An intrinsic data type representing a 16-bit unsigned integer. This is the default data type if a non-array variable is declared without a data type specifier."
SInteger - "An intrinsic data type representing a 16-bit signed integer."
This struck me as remarkably unhelpful, so I hoped one of the forum folk would know what the difference is, and why passing a value of -1 makes the compiler assume I mean it as an SINTEGER.
Thanks!
Integer - "An intrinsic data type representing a 16-bit unsigned integer. This is the default data type if a non-array variable is declared without a data type specifier."
SInteger - "An intrinsic data type representing a 16-bit signed integer."
This struck me as remarkably unhelpful, so I hoped one of the forum folk would know what the difference is, and why by passing a value of -1 it would assume I mean it as an sinteger.
Thanks!
0
Comments
For example, using an 8-bit number: 10000001 equals 129 if it is read as an unsigned 8-bit integer, but -127 if it is read as a signed (two's complement) 8-bit integer.
Jeff
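To make Jeff's bit-pattern point concrete in NetLinx itself, here is a minimal sketch - the variable names are made up for illustration, and SEND_STRING 0 is just used to echo results to the master's diagnostics:

DEFINE_VARIABLE

INTEGER  nValue   // 16-bit unsigned
SINTEGER snValue  // 16-bit signed

DEFINE_START

// The same 16-bit pattern $8001 (binary 1000000000000001):
nValue  = $8001             // read as unsigned: 32769
snValue = TYPE_CAST($8001)  // read as signed:  -32767

SEND_STRING 0, "'as INTEGER:  ', ITOA(nValue)"
SEND_STRING 0, "'as SINTEGER: ', ITOA(snValue)"

Same sixteen bits both times; only the interpretation changes.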
The compiler doesn't seem to appreciate the mathematical principle that adding a negative is the same as subtracting - subtraction is really just the addition of a positive and a negative number. If I change the type of the parameter I pass, I get a slew of warnings saying that I'm treating an SINTEGER as an INTEGER in cases like the one sketched below - nOffSet being the variable in question.
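For reference, a sketch of the kind of statement that draws that warning (the declarations here are assumptions based on the snippets later in this thread):

DEFINE_VARIABLE

INTEGER  nCurrentEvent[10]  // unsigned 16-bit
SINTEGER nOffSet            // signed 16-bit

DEFINE_START

nOffSet = -1

// Mixing the two types in one expression is what triggers the
// signed-to-unsigned conversion warning at compile time:
nCurrentEvent[1] = nCurrentEvent[1] + nOffSet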
From the Netlinx Help File:
nCurrentEvent[nTpIndex] = 3 and
nOffSet = -1
Would the two be properly added to become 2, or would the TYPE_CAST produce the absolute value of nOffSet? A TYPE_CAST wouldn't do any good here on nCurrentEvent[nTpIndex], as its type is already defined... I sure hope that makes sense.
Maybe you want nCurrentEvent[nTpIndex] = TYPE_CAST(TYPE_CAST(nCurrentEvent[nTpIndex]) + nOffset)
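If it helps, here is a small self-contained test of both forms - a sketch under this thread's assumptions, with nTpIndex fixed at 1 for illustration, and the expected values worked out by hand rather than taken from the original post:

DEFINE_VARIABLE

INTEGER  nCurrentEvent[10]
SINTEGER nOffSet

DEFINE_START

nOffSet = -1

// Form 1: one TYPE_CAST around the whole expression
nCurrentEvent[1] = 3
nCurrentEvent[1] = TYPE_CAST(nCurrentEvent[1] + nOffSet)
SEND_STRING 0, "'form 1: ', ITOA(nCurrentEvent[1])"  // expect 2

// Form 2: the double-cast version suggested above
nCurrentEvent[1] = 3
nCurrentEvent[1] = TYPE_CAST(TYPE_CAST(nCurrentEvent[1]) + nOffSet)
SEND_STRING 0, "'form 2: ', ITOA(nCurrentEvent[1])"  // expect 2

Both forms land on 2 because 16-bit arithmetic wraps modulo 65536: treating -1 as 65535, 3 + 65535 = 65538, and 65538 mod 65536 = 2.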
With the following results: both versions compiled and produced the expected value of 2.
So apparently both methods work... Why? Not sure. How? I know that even less than 'why'. Apparently the difference between the two, by the time the math is actually performed, is rather arbitrary. My total assumption would be that at some lower level the integer reserves a bit somewhere which establishes its negative or positive value, and simply assumes positive... But that, ladies and gentlemen, is an absolute guess.
From the manual for TYPE_CAST: "This may involve truncating the high order byte(s) when converting to a smaller size variable or sign conversion when converting signed values to unsigned or vice versa." My guess is that the most logical conversion of -1 to an unsigned integer is actually 65535, because they have the same internal representation (hex FFFF). If you add 65535 and 5, you get 65540, which overflows (hex 10004 is more than 16 bits). This is truncated to 0004, which is decimal 4.
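That wraparound is easy to confirm in code (another hypothetical sketch; ITOHEX is NetLinx's built-in integer-to-hex-string conversion):

DEFINE_VARIABLE

INTEGER  nResult
SINTEGER nMinusOne

DEFINE_START

nMinusOne = -1

// -1 reinterpreted as unsigned is $FFFF, i.e. 65535
nResult = TYPE_CAST(nMinusOne)
SEND_STRING 0, "'-1 as INTEGER: ', ITOA(nResult), ' ($', ITOHEX(nResult), ')'"

// 5 + 65535 = 65540 ($10004); truncated to 16 bits that is $0004 = 4
nResult = TYPE_CAST(5 + nMinusOne)
SEND_STRING 0, "'5 + (-1) = ', ITOA(nResult)"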
Signed/unsigned overflow/underflow gets tricky very quickly... google 2's complement if you want to know more about the internal representation.
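One last illustration of that representation: two's complement negation is just "invert the bits and add 1", which NetLinx can show with its bitwise-NOT operator BNOT (a small sketch, not from the original thread):

DEFINE_VARIABLE

INTEGER nPattern

DEFINE_START

// Two's complement of 1: flip all 16 bits, then add 1
nPattern = BNOT 1        // $FFFE
nPattern = nPattern + 1  // $FFFF, i.e. 65535

// 65535 is exactly the bit pattern an SINTEGER reads as -1
SEND_STRING 0, "'pattern: $', ITOHEX(nPattern)"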