
HomeWorksQS DbXmlInfo.xml file

I can pull the XML file from a browser but for the life of me I can't get any return in my code.

After a modest startup delay I call:
DEFINE_FUNCTION fnLUTRON_Connect_XML()  

     {
     fnLUTRON_DeBug("'LUTRON ###############################################:DEBUG<',ITOA(__LINE__),'>'");
     fnLUTRON_DeBug("'LUTRON          AMX IS OPENING XML CONNECTION         :DEBUG<',ITOA(__LINE__),'>'");
     fnLUTRON_DeBug("'LUTRON ###############################################:DEBUG<',ITOA(__LINE__),'>'");
     //CLEAR_BUFFER cLutron_XML;
     IP_Client_Open(dvLUTRON_XML.port,LUT_IP_ADDRESS,80,IP_TCP);

     RETURN;
     }


And when I get my online_event I call:
DEFINE_FUNCTION fnLutron_GetXML()

     {//192.168.1.33/DbXmlInfo.xml
     fnLUTRON_DeBug("'LUTRON ###############################################:DEBUG<',ITOA(__LINE__),'>'");
     fnLUTRON_DeBug("'LUTRON              ATTEMPTING TO GET XML             :DEBUG<',ITOA(__LINE__),'>'");
     fnLUTRON_DeBug("'LUTRON ###############################################:DEBUG<',ITOA(__LINE__),'>'");
     SEND_STRING dvLutron_XML,"'GET /DbXmlInfo.xml HTTP/1.1',STR_CRLF";
     SEND_STRING dvLutron_XML,"'Accept: text/html, application/xhtml+xml, image/jxr, */*',STR_CRLF";
     SEND_STRING dvLutron_XML,"'Accept-Language: en-US',STR_CRLF";
     SEND_STRING dvLutron_XML,"'User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko',STR_CRLF";
     SEND_STRING dvLutron_XML,"'Accept-Encoding: gzip, deflate',STR_CRLF";
     SEND_STRING dvLutron_XML,"'Host: ',LUT_IP_ADDRESS,STR_CRLF";
     SEND_STRING dvLutron_XML,"'If-Modified-Since: ',sMyTime.cWebCurDateTime,STR_CRLF";
     SEND_STRING dvLutron_XML,"'Connection: Keep-Alive',STR_CRLF";
     SEND_STRING dvLutron_XML,"STR_CRLF";
     }


I get nothing in my string_event handler.

All I see in diagnostics is:
Line   2894 2018-04-20 (11:29:34)::  LUTRON AXI DEBUG (1): LUTRON ###############################################:DEBUG<1104>
Line   2895 2018-04-20 (11:29:34)::  LUTRON AXI DEBUG (1): LUTRON          AMX IS OPENING XML CONNECTION         :DEBUG<1105>
Line   2896 2018-04-20 (11:29:34)::  LUTRON AXI DEBUG (1): LUTRON ###############################################:DEBUG<1106>
Line   2897 2018-04-20 (11:29:34)::  CIpEvent::OnLine 0:13:2
Line   2898 2018-04-20 (11:29:34)::  LUTRON AXI DEBUG (1): LUTRON ###############################################:DEBUG<2749>
Line   2899 2018-04-20 (11:29:34)::  LUTRON AXI DEBUG (1): LUTRON                   GET XML ONLINE               :DEBUG<2750>
Line   2900 2018-04-20 (11:29:34)::  LUTRON AXI DEBUG (1): LUTRON ###############################################:DEBUG<2751>
Line   2901 2018-04-20 (11:29:34)::  LUTRON AXI DEBUG (1): LUTRON ###############################################:DEBUG<1576>
Line   2902 2018-04-20 (11:29:34)::  LUTRON AXI DEBUG (1): LUTRON              ATTEMPTING TO GET XML             :DEBUG<1577>
Line   2903 2018-04-20 (11:29:34)::  LUTRON AXI DEBUG (1): LUTRON ###############################################:DEBUG<1578>
Line   2904 2018-04-20 (11:29:34)::  CIpEvent::OffLine 0:13:2
Line   2905 2018-04-20 (11:29:34)::  LUTRON AXI DEBUG (1): LUTRON ###############################################:DEBUG<2756>
Line   2906 2018-04-20 (11:29:34)::  LUTRON AXI DEBUG (1): LUTRON                   GET XML OFFLINE              :DEBUG<2757>
Line   2907 2018-04-20 (11:29:34)::  LUTRON AXI DEBUG (1): LUTRON ###############################################:DEBUG<2758>


Sometimes I also get an ip_error for an invalid port. I've changed the port several times, but no other DEV on this master uses any of the numbers I've tried.

The GET strings are exactly what I've seen in Wireshark intercepts. I know I'm real rusty, but WTF? I'm using an NX processor too, which I've only played with a few years ago to do some testing and comparisons to NIs.

Anyone got a working code sample to share for pulling the XML file off a QS processor?

#EDIT. FYI, I'm using a different dev port to pull the XML file than the telnet dev port I use for control and queries.

Comments

  • Think I've only had success with http comm sending the whole of the message as a single send_string

    SEND_STRING dvLutron_XML,"'GET /DbXmlInfo.xml HTTP/1.1',STR_CRLF,'Accept: text/html, application/xhtml+xml, image/jxr, */*',STR_CRLF,'Accept-Language: en-US',STR_CRLF,'User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko',STR_CRLF,'Accept-Encoding: gzip, deflate',STR_CRLF,'Host: ',LUT_IP_ADDRESS,STR_CRLF,'If-Modified-Since: ',sMyTime.cWebCurDateTime,STR_CRLF,'Connection: Keep-Alive',STR_CRLF,STR_CRLF";
  • vining Posts: 4,368
    I've always broken up these GET strings line by line and never had a problem, but I can't say I've ever tried it on an NX processor. At this point I'll try anything.
  • vining Posts: 4,368
    icraigie wrote: »
    Think I've only had success with http comm sending the whole of the message as a single send_string

    SEND_STRING dvLutron_XML,"'GET /DbXmlInfo.xml HTTP/1.1',STR_CRLF,'Accept: text/html, application/xhtml+xml, image/jxr, */*',STR_CRLF,'Accept-Language: en-US',STR_CRLF,'User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko',STR_CRLF,'Accept-Encoding: gzip, deflate',STR_CRLF,'Host: ',LUT_IP_ADDRESS,STR_CRLF,'If-Modified-Since: ',sMyTime.cWebCurDateTime,STR_CRLF,'Connection: Keep-Alive',STR_CRLF,STR_CRLF";
    This actually worked! I never would have thought of even trying this. Like I said, I've always done it the other way and never once had a problem.

    Now I should port this over to an NI processor and try both methods again, just to see if this is something peculiar to the NX processor or maybe just a Lutron quirk in the way it handles incoming strings. I can see it wanting everything smaller than a single MTU sent in a single segment, not broken up into multiple RX events.
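    The working fix above amounts to assembling the complete request before anything touches the socket. The thread's language is NetLinx, but the same assembly can be sketched language-neutrally in Python (header values mirror the Wireshark capture quoted earlier; the host is the example address from this thread):

```python
# Build the full HTTP GET request as one buffer before any send,
# mirroring the single-send_string approach that worked above.
CRLF = "\r\n"

def build_get_request(host, path):
    # Header set is illustrative, taken from the posts above.
    lines = [
        "GET %s HTTP/1.1" % path,
        "Accept: text/html, application/xhtml+xml, image/jxr, */*",
        "Accept-Language: en-US",
        "Host: %s" % host,
        "Connection: Keep-Alive",
    ]
    # A trailing blank line terminates the header block.
    return CRLF.join(lines) + CRLF + CRLF

request = build_get_request("192.168.1.33", "/DbXmlInfo.xml")
```

    Writing the result in a single call keeps the whole header block inside one TCP segment whenever it fits under the MSS, which is what the Lutron end appears to expect here.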

  • zack.boyd Posts: 94
    If you wanted to clean up your code, you could build the string first, then send it....
  • vining Posts: 4,368
    zack.boyd wrote: »
    If you wanted to clean up your code, you could build the string first, then send it....

    That's probably what I'll do to keep it more readable and looking more like my Wireshark intercepts.

  • a_riot42 Posts: 1,624
    I've never sent that particular string to a Lutron device, but I always do something like this when creating IP packets to ensure all data is sent as one string.
    define_function char [2048] buildDataHeader(char cmd[])
    {
        stack_var char header[2048]    // stack_vars are non-persistent by definition; VOLATILE isn't needed here

        // stHeader is assumed to be a global struct holding the header fields
        stHeader.sGet = "'GET /rstt/onedaytable?form=1&ID=AA&', cmd,' HTTP/1.1'"
    
        header =    "stHeader.sGet,cnNL,
                             stHeader.sHost,cnNL,
                             stHeader.sUserAgent,cnNL,
                             stHeader.sAccept,cnNL,
                             stHeader.sAcceptLanguage,cnNL,
                             stHeader.sAcceptEncoding,cnNL,
                             stHeader.sDNT,cnNL,
                             stHeader.sReferer,cnNL,
                             stHeader.sConnection,cnNLNL"
    
        return header
    }
    
    
  • ericmedley Posts: 4,177
    I think we also suffer from some 'under-the-hood' shenanigans in how the NetLinx processor actually ends up sending out string messages that exceed the normal packet size of typical IP comms. A while back I was having some issues trying to move some stuff on and off an SQL server, where some of the XML traffic got quite large. Like everyone here, I experimented with breaking up the send_strings, combining them into one big blob, etc...

    What eventually seemed to work was taking the large messages and breaking them into consistent 2K chunks, with the last chunk, of course, being the remainder of whatever size under 2K.

    Each 2K chunk was sent in its own send_string at that size. I never really figured out whether it had to do with the max string size within the NetLinx interpreter or the max packet size in IP. After all, there are several methods in the IP layer to send large packets that work quite well. That doesn't mean the AMX firmware is using them.

    All this to say, for really large messages, I still use the 2K method and have not run into any issues since. But don't ask me why it works. I do not know.
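    The 2K-chunk workaround described above is straightforward to sketch; in Python it amounts to fixed-size slicing (the 2048 figure is the empirical value from this post, not a documented limit):

```python
# Split a large outbound message into fixed 2K chunks, with the last
# chunk carrying the remainder -- the workaround described above.
CHUNK = 2048

def to_chunks(payload):
    # Slice the payload into CHUNK-sized pieces; the final slice is short.
    return [payload[i:i + CHUNK] for i in range(0, len(payload), CHUNK)]

pieces = to_chunks(b"x" * 5000)
# Each piece would then go out in its own send_string-equivalent call.
```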
  • vining Posts: 4,368
    a_riot42 wrote: »
    I've never sent that particular string to a Lutron device, but I always do something like this when creating IP packets to ensure all data is sent as one string.
    define_function char [2048] buildDataHeader(char cmd[])
    {
    stack_var char header[2048]

    stHeader.sGet = "'GET /rstt/onedaytable?form=1&ID=AA&', cmd,' HTTP/1.1'"

    header = "stHeader.sGet,cnNL,
              stHeader.sHost,cnNL,
              stHeader.sUserAgent,cnNL,
              stHeader.sAccept,cnNL,
              stHeader.sAcceptLanguage,cnNL,
              stHeader.sAcceptEncoding,cnNL,
              stHeader.sDNT,cnNL,
              stHeader.sReferer,cnNL,
              stHeader.sConnection,cnNLNL"
    
    return header
    }
    
    

    That's pretty much what I ended up doing and after thinking more about it if the string is under a single MTU (1500 less header crap) it really should be sent in a single frame.
  • vining Posts: 4,368
    ericmedley wrote: »
    I think we also suffer from some 'under-the-hood' shenanigans in how the NetLinx processor actually ends up sending out string messages that exceed the normal packet size of typical IP comms. A while back I was having some issues trying to move some stuff on and off an SQL server, where some of the XML traffic got quite large. Like everyone here, I experimented with breaking up the send_strings, combining them into one big blob, etc...

    What eventually seemed to work was taking the large messages and breaking them into consistent 2K chunks, with the last chunk, of course, being the remainder of whatever size under 2K.

    Each 2K chunk was sent in its own send_string at that size. I never really figured out whether it had to do with the max string size within the NetLinx interpreter or the max packet size in IP. After all, there are several methods in the IP layer to send large packets that work quite well. That doesn't mean the AMX firmware is using them.

    All this to say, for really large messages, I still use the 2K method and have not run into any issues since. But don't ask me why it works. I do not know.

    2K is an AMX limit or something, I can't remember, but an Ethernet frame is much smaller unless it's jumbo. The normal MTU is 1500, and that allows for some header overhead, but it's safer to stay a bit below that. I believe when I was watching the Lutron responses in Wireshark they were sending 1024-byte frames.

    I found this in a quick google:
    The original Ethernet IEEE 802.3 standard defined the minimum Ethernet frame size as 64 bytes and the maximum as 1518 bytes. The maximum was later increased to 1522 bytes to allow for VLAN tagging. The minimum size of an Ethernet frame that carries an ICMP packet is 74 bytes.

    After looking at this I never even considered a minimum size.

    Like you said, who knows how AMX is handling this under the hood; it might be best to let it manage the segmenting, or at least make sure you're under the now-1522-byte max. Like you, I did this based on a 1500 or 1492 MTU in a few programs, but who knows if it helped or hurt? I can't think of what I was talking to that needed that much data. Actually, it must be my daily email logs that are sent to my office master that I would have segmented into frames.
  • a_riot42 Posts: 1,624
    TCP takes care of all that stuff, so there's really no need to try and second guess the firmware. Back when I was writing a Sonos module, the XML files were up to 1 MB and I never had any issues sending/receiving data through a socket as long as I didn't go past the 16k array limit when trying to parse the response. My post above was more about convenience for the programmer, not how it had to be done for TCP's sake.
    Paul
  • ericmedley Posts: 4,177
    a_riot42 wrote: »
    TCP takes care of all that stuff, so there's really no need to try and second guess the firmware. Back when I was writing a Sonos module, the XML files were up to 1 MB and I never had any issues sending/receiving data through a socket as long as I didn't go past the 16k array limit when trying to parse the response. My post above was more about convenience for the programmer, not how it had to be done for TCP's sake.
    Paul

    Yes, this is true. I was referring more to the outgoing messages from the AMX master. When I Wiresharked (is that a word???) the traffic, I seemed to see that the AMX box would sometimes arbitrarily break up a large message into chunks by terminating the packet with the end-of-packet block and then starting a new packet right where it left off. As I said, I never really spent much time sussing out the behavior. Sometimes big packages (as you say, up to 16K) would come through just fine.

    I didn't mess with it much beyond figuring out that 2K seemed to always work well. I figured it was the usual NetLinx limit going out as well as coming back in.

    TCP is a pretty forgiving protocol, as it turns out. I once wrote a little routine in NetLinx trying to fake a ping message from the NetLinx layer, and I kinda got it to work on a few devices. My method was to write a hex string that started with the container that would end the first message packet currently being sent out by the NetLinx master. Then I immediately added the entire packet for a ping message, and finally the beginning packet of a third TCP message, which would end just prior to the actual end-of-TCP message generated by the NetLinx master. It actually worked on some devices. Obviously the three messages would arrive at the other device a bit too fast for it to respond, especially since the third message would fail. But it did work.
  • vining Posts: 4,368
    a_riot42 wrote: »
    TCP takes care of all that stuff, so there's really no need to try and second guess the firmware. Back when I was writing a Sonos module, the XML files were up to 1 MB and I never had any issues sending/receiving data through a socket as long as I didn't go past the 16k array limit when trying to parse the response. My post above was more about convenience for the programmer, not how it had to be done for TCP's sake.
    Paul
    Has the 16k issue changed with NX processors? When I started writing this code last week I was trying my best to pull out segments of the response as quickly as possible, looking for the end tag of each segment in data.text instead of the concatenated local var I use as my buffer, but some of these segments are 70+ k long and it's been working fine. In the old days I remember having to use the ConcatString function that's been around for a long time, but now my local var is doing just fine holding strings way above 16k. I'm so rusty I don't remember half of what I thought I used to know.

    I would prefer to hold the entire XML file and then start parsing on a completed return, so I can parse without assuming everything will be returned in the same order all the time, since some protocols mandate randomization. I know a recent Perl update in another device screwed up my code that was set up for a consistent order. Devices that use randomization have to be coded much differently, and you really need a complete return before you can start.




  • vining Posts: 4,368
    Well out of curiosity I ran a quick test:
    STRING:
          {
          LOCAL_VAR CHAR cTmpBuf[65535];
          STACK_VAR INTEGER nFound;
    
          cTmpBuf = "cTmpBuf,DATA.TEXT";
          nFound++;
          (*
    
    I commented out my code to see what my local var could actually hold, and making it 65535 works. In debug it shows a length of 65534, most of the returned string is there, and the code is running fine. It's still too short to hold the entire return, which right now is over 75k, and right now I hardly have any devices defined in this Lutron system yet, a few TStats and a few dimmers. Eventually I'll have 24 TStats and a couple hundred other devices, so I imagine the XML file will grow to possibly 200k+. I did try to make my local var 250000, but debug crashed every time I tried to display that var. The code still ran fine, but I couldn't see how much was being held by the var. I also have a suspicion that the max it could possibly hold would be 65535 anyway. I just don't remember what the limit should be, but 65535 seems likely.

    Is there a way to hold the entire return so I can then parse using a randomized approach? I do prefer an ordered approach so I can advance my find_string pointer or just remove strings as I go, but like I said in the previous post, I'd rather not bank on the returns always being in a fixed order, since nowadays it seems that's not a reliable method of writing a parsing routine. It's definitely more efficient to do it in order, but if I later have to rewrite everything to accommodate randomizing, that will suck.
  • vining Posts: 4,368
    Maybe the 16k limit just relates to string expressions?
    "A string expression is a string enclosed in double quotes containing a series of constants and/or variables evaluated at run-time to form a string result. String expressions can contain up to 16000 characters consisting of string literals, variables, arrays, and ASCII values between 0 and 255."
  • a_riot42 Posts: 1,624
    vining wrote: »
    Maybe the 16k limit just relates to string expressions?

    Yes. I use integer arrays longer than that, but to send strings around, I think the 16k cap hasn't changed. My guess is it has to do with the definition in UnicodeLib.axi
    // Longest WC string
    #IF_NOT_DEFINED WC_MAX_STRING_SIZE
    WC_MAX_STRING_SIZE      = 16000
    #END_IF // WC_MAX_STRING_SIZE
    
    Paul
  • vining Posts: 4,368
    So for argument's sake, if I need to wait for a complete return to parse what I receive and the return is greater than 65534 bytes, what are my options? Obviously I could create multiple buffers and parse across them, but that seems like a hack-job approach. Maybe I could write to a file as it comes in and try parsing the file, which seems like another hack; does anyone do that? It seems stupid to be limited to such a small variable size in this day and age of IP comms and devices that send large files.

    I guess I could add the ability to guard against randomization by searching for every possible starting tag on each data event, and whichever is lowest but above 0 would be declared the winner; then wait for that tag's end tag, cross it off the search list, and continue. Hopefully they don't repeat in nested objects, so I don't have to count beginnings and ends to find the actual matching end. That doesn't sound too messy, and I'm pretty sure I can stay ahead of the incoming strings so nothing is truncated by the 65534 limit. I wonder where the missing byte went; it should be 65535, but all it will store is 65534.
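    That "lowest start tag wins" idea can be sketched language-neutrally; this Python version carries the same caveat noted above, that tags don't repeat inside nested objects (the tag names here are made up for illustration):

```python
# "Lowest start tag wins": scan the buffer for every known start tag,
# take whichever occurs first, then wait for its matching end tag.
def next_segment(buf, tags):
    best = None
    for tag in tags:
        pos = buf.find("<%s>" % tag)
        if pos != -1 and (best is None or pos < best[1]):
            best = (tag, pos)
    if best is None:
        return None, buf
    tag, start = best
    end_tag = "</%s>" % tag
    end = buf.find(end_tag, start)
    if end == -1:
        return None, buf          # segment not complete yet; keep buffering
    end += len(end_tag)
    # Return the complete segment and the remainder of the buffer.
    return buf[start:end], buf[end:]

seg, rest = next_segment("junk<Dimmer>d1</Dimmer><TStat>t</TStat>", ["TStat", "Dimmer"])
```

    Crossing a found tag off the search list, as suggested above, would just mean removing it from `tags` after each hit.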
  • a_riot42 Posts: 1,624
    I've written to files before, but that can take a while due to IO latency. When I was working on the Sonos module with those large data dumps, I used this function to join large strings as they came in, to parse once complete. It's a bit of a hack, but what choice do you have?
    (**************************************)
    (* Call Name: concatBigString         *)
    (* Function:  concats strings > 16K   *)
    (* Params:    sConcatFinal, sConcatNew *)
    (* Return:    n/a                     *)
    (**************************************)
    define_function concatBigString(char sConcatFinal[],char sConcatNew[])
    {
            stack_var char sPieces[3];
            stack_var char sBuffer[262144];    // stack_vars are non-persistent by definition; VOLATILE isn't needed here
            stack_var long lPos, lOldPos;
    
            lpos = 1;
    
            variable_to_string(sConcatFinal,sBuffer,lPos);
            sPieces[1] = sBuffer[lPos-3];
            sPieces[2] = sBuffer[lPos-2];
            sPieces[3] = sBuffer[lPos-1];
    
            lOldPos = lpos;
            lpos = lpos - 3;
    
            variable_to_string(sConcatNew,sBuffer,lPos);
    
            sBuffer[lOldPos-3] = sPieces[1];
            sBuffer[lOldPos-2] = sPieces[2];
            sBuffer[lOldPos-1] = sPieces[3];
    
            get_buffer_string(sBuffer,3)
    
            sConcatFinal = sBuffer;
    
    }
    
    
    

    Paul
  • vining Posts: 4,368
    Yeah, I've used that function before, back when I think I used create_buffer for buffers, and maybe that had a 16k limit too; I don't recall the details since it seems like ages ago. I can easily use a local var, which is my normal routine, to concatenate with data.text, and that can hold up to a 65534-byte string no problem, but if I set its length higher I'm truncated at that number. I can't say whether it used to work that high or not, cuz again I just don't remember, so I'm wondering if that limit can be increased somehow or if I should pursue a workaround.

  • a_riot42 Posts: 1,624
    If you are grabbing the Lutron file, then I think I might store it on the master in a file and then parse it from disk. It's not like you would need fast, random access to it all the time. I'd guess you only parse it once, so you can slurp the file, then create your data structure, and then you don't need the file thereafter.
    Paul
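    A rough Python sketch of that spool-to-disk approach (the file name is arbitrary here; in NetLinx the equivalents would be FILE_OPEN / FILE_WRITE / FILE_READ):

```python
# Spool the incoming XML to a file as it arrives, then slurp and parse
# it once the transfer is complete -- the approach suggested above.
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "DbXmlInfo.xml")

# Simulated string events appending to the spool file as data arrives.
with open(path, "wb") as f:
    for chunk in (b"<Project>", b"<Name>Demo</Name>", b"</Project>"):
        f.write(chunk)

# One-time slurp once the socket closes; parsing would start from here.
with open(path, "rb") as f:
    whole = f.read()
```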
  • vining Posts: 4,368
    a_riot42 wrote: »
    If you are grabbing the Lutron file, then I think I might store it on the master in a file and then parse it from disk. It's not like you would need fast, random access to it all the time. I'd guess you only parse it once, so you can slurp the file, then create your data structure, and then you don't need the file thereafter.
    Paul

    I noticed in your concat string function you set your buffer length to 262144; were you actually able to hold that much?

    I thought about writing to a file and parsing from there, and it would normally be a one-time deal unless the file's date-time stamp changed, but parsing from a file sounds awkward. It just seems I should be able to have a var hold at least 1 MB worth of data.
  • a_riot42 Posts: 1,624
    vining wrote: »

    I noticed in your concat string function you set your buffer length to 262144; were you actually able to hold that much?

    I thought about writing to a file and parsing from there, and it would normally be a one-time deal unless the file's date-time stamp changed, but parsing from a file sounds awkward. It just seems I should be able to have a var hold at least 1 MB worth of data.

    Yes, I've had char arrays as large as a megabyte before. I don't think there is any built-in limitation beyond how much memory your controller has.
    Paul
  • vining Posts: 4,368
    a_riot42 wrote: »

    Yes I've had char arrays as a large as a megabyte before. I don't think there is any builtin limitation beyond how much memory your controller has.
    Paul
    Well, the concat string function does allow the local var that I use for a buffer to go above 65535. It just doesn't make sense. My local var has no problem with strings above 16k if I concatenate with data.text, but truncates at 65534. If I concatenate data.text and my local var using the concat string function written to overcome the 16k limit, it works fine and I can hold strings larger than 65534. Has the 16k limit become 65534 on NX processors? I really should test this on an NI processor, cuz it's not working the way I remember. I started writing this code expecting the 16k limit but didn't see it, so I figured the concat string function was useless, but it does seem to be needed to overcome a 65534 limit.

    I wish AMX engineers still trolled this forum with some insight like the good old days.
  • a_riot42 Posts: 1,624
    Why don't you post your relevant code? 65535 is one less than a power of two (65536), but beyond that I am unaware of any built-in 65535 limit. I never use create_buffer, though, so there may be some limit I am not aware of. The only limit I run across on a regular basis is the 16k string limit.
    Paul
  • ericmedley Posts: 4,177
    I may be sticking my nose in where it's not welcome, and may not be following the thread well... but I've found that when doing large XML returns, I have to use a large char array to receive the string (in chunks or whole) and then use that chunk to build a much larger buffer that is really where I carve out the full XML message. I've found that a single XML message can come in three or more data-event string events. So, I end up just letting the string events build the bigger buffer and then watch it, to peel out another large buffer for parsing. It essentially doubles or triples the RAM usage. But it is reliable.
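    That two-buffer pattern (append every incoming chunk to a rolling buffer, then peel off a document only when its terminator shows up) can be sketched in Python; the end-of-document marker here is illustrative:

```python
# Two-buffer pattern: string events append raw chunks to a rolling
# buffer; a complete message is peeled off only when its terminator appears.
TERMINATOR = "</Project>"   # illustrative end-of-document marker

class XmlAssembler:
    def __init__(self):
        self.buf = ""

    def feed(self, chunk):
        # Called once per string event; returns a full document or None.
        self.buf += chunk
        end = self.buf.find(TERMINATOR)
        if end == -1:
            return None
        end += len(TERMINATOR)
        doc, self.buf = self.buf[:end], self.buf[end:]
        return doc

a = XmlAssembler()
results = [a.feed(c) for c in ("<Project>", "<Name>x</Name>", "</Proj", "ect>")]
```

    Note the terminator itself can arrive split across two events, which is why the search runs against the accumulated buffer rather than each chunk.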