Integer question.
Spire_Jeff
Posts: 1,917
I am in the middle of coding a routine and I don't want to switch gears to test this out (and the potential for the situation to occur is slim to none in this job). If I have an integer variable and I need to make sure that its value is between 0 and 100, could this be accomplished simply by doing if(variable <=100)?
I do have to TYPE_CAST a SINTEGER value in the assignment, but if the variable is an INTEGER, then the minute it gets assigned a negative value it should be either 0 or at least 32768, correct?
Let me know if my ramblings are not clear and I will try to expand my thoughts.
Jeff
Comments
I've found odd little anomalies in how the compiler and NetLinx deal with integers, things like setting a loop counter to zero to get out of a loop and ending up with 64K instead. I catch myself doing this all the time. In other environments it doesn't produce this effect, but I keep forgetting and putting it in NetLinx code.
In theory, once you declare an integer as an INTEGER you should get no negatives. However, sometimes I've seen the math operations forget this and go ahead and do the math and roll the value over.
If it were me, I'd go ahead and add in >0 and <=100. That way you can be sure.
That sounds like your loop variable got decremented again after you set it to 0.
Did your loop look like this?
If so, x will get set to 0 in the "end" statement, then decremented (wrapping to 65535), then tested for > 0. This is because of the order of operations for the three clauses of the for loop (initialization, loop-top test, loop-end increment). The above loop effectively gets "rewritten" as shown:
Now do you see why the "end" statement won't work? Much better to use the NetLinx keyword designed to do exactly that: "break".
Back on topic, an integer variable can't hold a negative value. If it was 0 and you subtracted 1 from it, it would wrap to 65535, so your <= 100 test is still good.
Yes, I understand that. That's what I was getting at, I suppose. In NetLinx you can set the variable to 1 and get out, and that's what I do. It's just old habits from other programming environments, where a zero won't get turned into 64K. The type integrity also carries into conditionals and math: in other environments the for loop will not decrement past zero, because zero is the end of the line.
The first bit (reading from left to right) is the sign bit; if that bit is turned on, your number is negative.
In binary terms, -1 in a sinteger should be:
1000 0000 0000 0001
+2:
0000 0000 0000 0010
Problems will happen if you directly pass numbers greater than +32767. If you pass +32769 to a sinteger
(in binary, 1000 0000 0000 0001), it is the same thing as -1.
The same thing happens if you pass -1 to an integer: it will turn into +32769. Even if you use TYPE_CAST it may not work; I need to test to make sure.
Not quite... that is called 1's complement encoding, and was probably used in Cobol. All current languages that I'm aware of use 2's complement because of its advantages: you can use many of the same mathematical operations, such as addition, on a number whether or not it is negative, and you don't have separate values for +0 and -0.
To change the sign of a number in 2's complement, you flip the bits and add one, carrying where necessary. (the operation itself is also known as 2's complement)
So +1 is:
0000 0000 0000 0001
and -1 is:
1111 1111 1111 1111
The high bit is still the sign bit, but the interpretation of the bits following it is "inverted" if it is set, rather than just being the absolute value of the integer.
Not quite…that’s not an anything encoding.
The 1's complement of a number is a BNOT, a bitwise flip. The 2's complement (which we use for negative numbers) is the 1's complement of the number plus 1. So, as you showed, -1 =
1111 1111 1111 1111
Diogo – The reason you can't take a number and just set the sign bit to make it its negative counterpart is that it's mathematically incorrect. For the math to work out, adding a number to its negative should produce 0. So, for example, adding 1 to -1 should equal 0.
Just setting the sign bit method:
0000 0000 0000 0001 // 1 +
1000 0000 0000 0001 // not really -1
// =
1000 0000 0000 0010 // not zero
Doing it the 2’s complement way:
0000 0000 0000 0001 // 1 +
1111 1111 1111 1111 // - 1
// =
0000 0000 0000 0000 // 0
One more example for 15 and -15.
0000 0000 0000 1111 // 15
1111 1111 1111 0000 // 1’s complement
0000 0000 0000 0001 // +1
// =
1111 1111 1111 0001 // -15
0000 0000 0000 1111 // 15 +
1111 1111 1111 0001 // -15
// =
0000 0000 0000 0000 // 0
Actually, it is, I just mixed up my terms. Setting a sign bit without changing the rest of the value is called sign-and-magnitude encoding, and is used in floating point encodings. 1's complement is yet another way to represent negative numbers, used in some older machines. Details here: http://en.wikipedia.org/wiki/Signed_number_representations
I accidentally typed *not an anything* when I meant to type *sign and magnitude.*
I do that all the time.