
Difference Between SInteger and Integer

I tried to pass a value of -1, upon which I was given a warning that I was converting an sinteger to an integer. I was curious about this, so I looked up what the help file says about the difference between the two:

Integer - "An intrinsic data type representing a 16-bit unsigned integer. This is the default data type if a non-array variable is declared without a data type specifier."

SInteger - "An intrinsic data type representing a 16-bit signed integer."


This struck me as remarkably unhelpful, so I hoped one of the forum folk would know what the difference is, and why, when I pass a value of -1, the compiler assumes I mean it as an sinteger.

Thanks!

Comments

  • Spire_Jeff Posts: 1,917
    Sinteger is a signed integer. This means that the first bit (or last bit depending on how you look at it) is used to determine if the number is positive or negative. Integer is used for positive numbers only.

    For example, using an 8-bit number, 10000001 = 129 if it is an unsigned integer, but 10000001 = -127 if it is a signed (two's complement) value.
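
    For instance, here's a quick, untested sketch (the function name is made up) that shows the same 16-bit pattern read both ways in NetLinx:
    define_function fShowSameBits()
    {
        stack_var sinteger s
        stack_var integer  n

        s = -1           // stored as the bit pattern $FFFF, read as signed
        n = type_cast(s) // same bits, read as unsigned: 65535

        send_string 0,"'unsigned view of -1 = ',itoa(n)"
    }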

    Jeff
  • Does that mean anything needs to be taken into consideration when handling these values, or ought I just use it the same as I would a plain integer?

    The compiler doesn't seem to appreciate the mathematical principle that adding a negative is the same as subtracting, since subtraction is really just the addition of a negative number. If I change the parameter type I pass, then I get a slew of warnings saying that I'm treating an sinteger as an integer in cases such as:
    active((nCurrentEvent[nTpIndex] + nOffSet) < 1):
    
    -- nOffSet being the variable in question.
  • Joe Hebert Posts: 2,159

    From the NetLinx Help File:
    Intrinsic Types
    The data types inherently supported by the NetLinx language are described below:

    Keyword   Data Type                        Sign      Size    Range
    CHAR      Byte                             Unsigned  8-bit   0 to 255
    WIDECHAR  Integer                          Unsigned  16-bit  0 to 65535
    INTEGER   Integer                          Unsigned  16-bit  0 to 65535
    SINTEGER  Integer                          Signed    16-bit  -32768 to +32767
    LONG      Long Integer                     Unsigned  32-bit  0 to 4,294,967,295
    SLONG     Long Integer                     Signed    32-bit  -2,147,483,648 to +2,147,483,647
    FLOAT     Floating Point                   Signed    32-bit  1.17549435 E-38 to 3.40282347 E+38
    DOUBLE    Double Precision Floating Point  Signed    64-bit  2.22507385 E-308 to 1.79769313 E+308
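
    To tie that back to the original question: a literal -1 doesn't fit in the INTEGER range above, so handing it to something declared INTEGER forces a signed-to-unsigned conversion, which is what the compiler is warning about. A rough, untested sketch with made-up names:
    define_function fTakesInteger(integer nValue)
    {
        // -1 presumably arrives here as 65535, since both share the bit pattern $FFFF
        send_string 0,"'got ',itoa(nValue)"
    }

    define_function fCaller()
    {
        fTakesInteger(-1) // compiler warns about converting sinteger to integer
    }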
  • mpullin Posts: 949
    I've verified that this warning results whenever an SINTEGER is added to an INTEGER, as in:
    active((nCurrentEvent[nTpIndex] + nOffSet) < 1):
    
    To the compiler, different data types are different data types, so I guess it's not that odd. This is how you'd have to do it:
    active((TYPE_CAST(nCurrentEvent[nTpIndex]) + nOffSet) < 1):
    
    (TYPE_CAST(nCurrentEvent[nTpIndex]) + nOffSet) now evaluates to an SINTEGER. Since the hard-coded 1 can be interpreted as either type (it's just a literal, with no data type attached to it), it should be fine with whatever comes out of the parentheses. But if there were a variable instead of a 1, you would need to make sure it was an SINTEGER, or TYPE_CAST it to one.
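
    Putting that into a compilable context (the declarations and function name below are my guesses at what the original variables look like, not tested):
    define_variable
    volatile integer  nCurrentEvent[10] // current event index per panel (size is a guess)
    volatile integer  nTpIndex = 1
    volatile sinteger nOffSet  = -1     // signed so it can step backwards

    define_function fCheckOffset()
    {
        select
        {
            active ((type_cast(nCurrentEvent[nTpIndex]) + nOffSet) < 1):
            {
                // applying the offset would drop the result below 1 - handle that case here
            }
            active (1):
            {
                // safe to apply the offset here
            }
        }
    }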
  • Hmm, would the following code cause problems then?
    nCurrentEvent[nTpIndex] = nCurrentEvent[nTpIndex] + type_cast(nOffSet);
    
    If
    nCurrentEvent[nTpIndex] = 3 and
    nOffSet = -1

    Would the two be properly added to become 2, or would the type_cast create an absolute value of nOffSet? A type_cast wouldn't do any good here on nCurrentEvent[nTpIndex], since it's already been defined as an integer... I sure hope that makes sense.
  • mpullin Posts: 949
    I don't know what would happen if you TYPE_CASTed a negative SINTEGER to an INTEGER. Probably not what you want to happen.

    Maybe you want nCurrentEvent[nTpIndex] = TYPE_CAST(TYPE_CAST(nCurrentEvent[nTpIndex]) + nOffset)
  • mpullin wrote: »
    Maybe you want nCurrentEvent[nTpIndex] = TYPE_CAST(TYPE_CAST(nCurrentEvent[nTpIndex]) + nOffset)
    Hmm... I think I'll test that out and post what happens. Incidentally, you probably can't use the itoa function on a sinteger either... I didn't realize using negative values would become so messy. Sheesh.
  • Alright, I threw together a little test with the following code:
    define_device
    dvTP = 10001:1:0
    
    define_event
    button_event[dvTP,1] {
      push: {
    	stack_var integer IntVar;
    	stack_var sinteger SintVar;
    	
    	IntVar = 5;
    	SintVar = -1;
    	
    	IntVar = type_cast(type_cast(IntVar) + SintVar); // do the math as sinteger, then cast the result back to integer
    	
    	send_string 0,"'Double Type_Cast: IntVar = ',ITOA(IntVar)";
    	
    	IntVar = 5
    	SintVar = -1
    	
    	IntVar = IntVar + type_cast(SintVar); // convert the sinteger operand to integer before adding
    	
    	send_string 0,"'Single Type_Cast: IntVar = ',ITOA(IntVar)";
      }
    }
    

    With the following results:
    Line     45 (17:50:11.846):: Double Type_Cast: IntVar = 4
    Line     46 (17:50:11.846):: Single Type_Cast: IntVar = 4
    

    So apparently both methods work... Why? Not sure. How? I know that even less than 'why'. Apparently the difference between the two, by the time the math actually gets done, is rather arbitrary. My total assumption would be that at the base level the integer assumes a bit somewhere which establishes its negative or positive value, and simply assumes positive... But that, ladies and gentlemen, is an absolute guess. :D
  • I tend to avoid signed variables as much as possible for a lot of these reasons. It does get very messy when doing conversions, and I usually have to experiment to get it right.

    From the manual for type_cast: "This may involve truncating the high order byte(s) when converting to a smaller size variable, or sign conversion when converting signed values to unsigned or vice versa." My guess is that the most logical conversion of -1 to an unsigned integer is actually 65535, because they have the same internal representation (hex FFFF). If you add 65535 and 5, you get 65540, which overflows (hex 10004 is more than 16 bits). This is truncated to 0004, which is decimal 4.

    Signed/unsigned overflow/underflow gets tricky very quickly... google 2's complement if you want to know more about the internal representation.
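
    Here's a little sketch of that arithmetic (untested; itohex is just there to show the bit patterns):
    define_function fOverflowDemo()
    {
        stack_var sinteger s
        stack_var integer  n
        stack_var long     l

        s = -1           // internally $FFFF
        n = type_cast(s) // same bits read as unsigned: 65535

        l = 65535 + 5    // 65540 = $10004, which still fits in a 32-bit LONG
        n = type_cast(l) // high bits truncated, low 16 bits remain: $0004 = 4

        send_string 0,"'n = ',itoa(n),' ($',itohex(n),')'"
    }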