Data.text or Buffer Problems
sridley
Posts: 21
I am writing some code for a media server and am controlling it over IP. I am sending a request for a status update from the server every 3 seconds and capturing the response into a buffer. To check that this code is working, I have been writing the contents of the buffer out with SEND_STRING 0. (I'm new to this, so I am just trying to check every stage at this point.) What I see in the telnet session, however, is a new string every 3 seconds, but never the entire string; part of it is always missing.
The buffer is declared with a size of 1000, but the string is only coming through at about 120 characters.
Any ideas what could be causing this?
Comments
Use CREATE_BUFFER in DEFINE_START, then test the buffer for a length or a delimiter in DEFINE_PROGRAM.
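Something like this, as a minimal sketch (dvServer, the buffer size, and the $0D delimiter are placeholders for illustration):

DEFINE_DEVICE
dvServer = 0:3:0    // placeholder IP socket, opened elsewhere with IP_CLIENT_OPEN

DEFINE_VARIABLE
VOLATILE CHAR cBuffer[1000]

DEFINE_START
CREATE_BUFFER dvServer, cBuffer    // incoming bytes now accumulate in cBuffer

DEFINE_PROGRAM
// once a complete, delimiter-terminated response has arrived, pull it out
IF (FIND_STRING(cBuffer, "$0D", 1))
{
    SEND_STRING 0, REMOVE_STRING(cBuffer, "$0D", 1)    // echo it to the telnet session
}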
If I do SEND_STRING 0, DATA.TEXT, then I see the whole string in the telnet session?
An operation like cBuffer = DATA.TEXT will overwrite cBuffer.
So depending on which instruction executes first, you may lose your previous data, or you may get the new incoming data doubled up in cBuffer.
To append new data manually, you can do it like sMyBuffer = "sMyBuffer, NewData".
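For example, accumulating manually in the device's string event (only do this if sMyBuffer is not also registered with CREATE_BUFFER, otherwise the data gets doubled as described; dvServer is again a placeholder):

DEFINE_EVENT

DATA_EVENT[dvServer]
{
    STRING:
    {
        // append each incoming packet rather than overwriting
        sMyBuffer = "sMyBuffer, DATA.TEXT"
    }
}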
// proper CREATE_BUFFER syntax (it must be called from DEFINE_START)
CREATE_BUFFER dvDevice, cBuffer
Pretty neat, now it's working!
Here is the output when button 1 is pushed:
Line 1 :: Length of cBuffer = 15999 - 10:12:27
I was using this routine, but it appears to have a bug if you get a buffer past 65,536 bytes. Once that limit is passed, the routine starts leaving binary data at the beginning of the buffer. I don't really understand how it works, but it appears that whatever binary data is being added to the buffer isn't being removed by the final get_buffer_string(sBuffer,3); I assume it is using more than 3 bytes to store that data. Has anyone seen this bug or know how to fix it?
Thanks,
Paul
http://www.amx.com/techsupport/techNote.asp?id=886
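The routine isn't quoted in this thread, but going by that get_buffer_string(sBuffer,3) call, it presumably relies on VARIABLE_TO_STRING writing a 3-byte header ahead of a CHAR array's payload. A hypothetical reconstruction of the trick follows; this is not the original code, it assumes the 3-byte header and a cDest that already holds at least 3 bytes, and it is only valid for strings up to 65,535 bytes, which is exactly the bug described above.

// Hypothetical reconstruction, NOT the original routine.
// Appends cSrc to cDest via the stream marshaller, dodging the
// 16,000-byte string expression limit. Assumes a 3-byte header.
DEFINE_FUNCTION SLONG appendString(CHAR cDest[], CHAR cSrc[])
{
    STACK_VAR LONG lLen
    STACK_VAR LONG lPos
    STACK_VAR CHAR cSaved[3]
    STACK_VAR SLONG slErr

    lLen = LENGTH_ARRAY(cDest)

    // the marshalling header will clobber the last 3 bytes of cDest, so save them
    cSaved = "cDest[lLen - 2], cDest[lLen - 1], cDest[lLen]"

    // encode cSrc so that its payload lands exactly at position lLen + 1
    lPos = lLen - 2
    slErr = VARIABLE_TO_STRING(cSrc, cDest, lPos)
    IF (slErr)
    {
        RETURN slErr
    }

    // restore the clobbered bytes over the top of the header
    cDest[lLen - 2] = cSaved[1]
    cDest[lLen - 1] = cSaved[2]
    cDest[lLen] = cSaved[3]

    SET_LENGTH_ARRAY(cDest, lLen + LENGTH_ARRAY(cSrc))
    RETURN 0
}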
Thanks, I'll take a look. However, the function posted seems to work fine, so I would rather just modify it to work past 65,535 bytes. There isn't much point in a function designed to work around a 16,000-byte limit only for it to break at 65,535. I'm a little worried that a byte-for-byte for loop would tie up the processor for too long with big files, and I don't want to write to the flash drive constantly either. There should really be a built-in concat function.
Paul
Here's a revised version which should be more memory efficient and is virtually unlimited in the size of the resulting string.
The 63,999-byte chunks come from a documented 64kB limitation of the variable_to_string stream marshaller. The documentation didn't specify what it classed as a kB, so the actual value can probably be increased to (2^16)-1, but I couldn't be arsed testing it.
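The code itself didn't survive in this capture, so here is a rough sketch of the chunked approach as described, reusing the hypothetical appendString() from earlier; the byte-by-byte chunk copy is naive, so the posted version was presumably smarter about memory:

DEFINE_CONSTANT
CHUNK_MAX = 63999    // stays under the marshaller's documented 64kB limit

DEFINE_VARIABLE
VOLATILE CHAR cChunk[CHUNK_MAX]    // scratch space for one chunk

DEFINE_FUNCTION SLONG appendBig(CHAR cDest[], CHAR cSrc[])
{
    STACK_VAR LONG lDone
    STACK_VAR LONG lTake
    STACK_VAR LONG i
    STACK_VAR SLONG slErr

    lDone = 0
    WHILE (lDone < LENGTH_ARRAY(cSrc))
    {
        // take at most CHUNK_MAX bytes on this pass
        lTake = LENGTH_ARRAY(cSrc) - lDone
        IF (lTake > CHUNK_MAX)
        {
            lTake = CHUNK_MAX
        }

        FOR (i = 1; i <= lTake; i++)
        {
            cChunk[i] = cSrc[lDone + i]
        }
        SET_LENGTH_ARRAY(cChunk, lTake)

        // appendString() is the 3-byte-header sketch above; it needs
        // cDest to already hold at least 3 bytes before the first call
        slErr = appendString(cDest, cChunk)
        IF (slErr)
        {
            RETURN slErr
        }

        lDone = lDone + lTake
    }
    RETURN 0
}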
I tested it by reading in a couple of ebooks from file, concatenating them, writing them back to another file, and diffing the final file against the source files to check for any anomalies. The resulting file size was around 4MB, so it should be pretty reliable.
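For the curious, the test described might have looked roughly like this (file names and sizes are made up, and error handling is omitted):

DEFINE_VARIABLE
VOLATILE CHAR cBook1[2097152]
VOLATILE CHAR cBook2[2097152]
VOLATILE CHAR cResult[4194304]

DEFINE_FUNCTION testBigConcat()
{
    STACK_VAR SLONG hFile
    STACK_VAR SLONG slBytes

    hFile = FILE_OPEN('book1.txt', FILE_READ_ONLY)
    slBytes = FILE_READ(hFile, cBook1, MAX_LENGTH_ARRAY(cBook1))
    SET_LENGTH_ARRAY(cBook1, slBytes)
    FILE_CLOSE(hFile)

    hFile = FILE_OPEN('book2.txt', FILE_READ_ONLY)
    slBytes = FILE_READ(hFile, cBook2, MAX_LENGTH_ARRAY(cBook2))
    SET_LENGTH_ARRAY(cBook2, slBytes)
    FILE_CLOSE(hFile)

    cResult = cBook1               // direct assignment; assumed free of the expression limit
    appendBig(cResult, cBook2)     // the chunked append sketched above

    // write the result back out, then diff it against the sources off-box
    hFile = FILE_OPEN('result.txt', FILE_RW_NEW)
    FILE_WRITE(hFile, cResult, LENGTH_ARRAY(cResult))
    FILE_CLOSE(hFile)
}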
Does length_array on the concatenated string return the correct number after the function completes?
Paul
Actually, if you're going to use it for the NCLs, here is a more efficient version which negates a couple of issues that exist in the first version:
I can't see any reason why it wouldn't, but I haven't checked that explicitly. The test code I ran against it exported each result to a file, and the file did not contain any garbage beyond the end of the result, so I have to believe that this works correctly.
From what I've heard in the past about the way NetLinx manages arrays behind the scenes, it keeps the array size in a separate chunk of memory (hence length_array() etc. being nice and fast). If this is only 2 bytes, to match the variable serialisation restrictions, then there may be some interesting behaviour once you start playing with strings longer than (2^16)-1 characters.
Umm, I just got very confused after running a test. I looked at the reported length of the result, and it was ~38,000 bytes short of what I expected after adding the file sizes reported by Windows for the two files, which totalled ~2MB.
The reason was that the length reported by file_read() for each file was lower than that reported by Windows.
But here's where it got weird:
- The resulting file is an exact concatenation of the two source files, and the size reported by Windows is exactly the sum of both file sizes (as reported by Windows)
- The buffer containing the concatenated files (presumably) contains an exact concatenation of the two files and NetLinx reports its length as being exactly the sum of the reported file sizes when they're read in
- The byte counts in NetLinx land are short of the Windows byte counts by exactly the number of line breaks in the source files being read in
What's going on, I hear you ask?
Silly me forgot that the files were transferred between my laptop and the NetLinx master using an ASCII-mode FTP transfer, so the line endings were converted going in each direction (presumably each two-byte CRLF collapsing to a single byte on the way in, and back again on the way out).
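A quick sanity check of that theory, assuming each two-byte CRLF became a single LF on the way in: the NetLinx byte count plus the number of line feeds should equal the Windows byte count. A throwaway helper along these lines would confirm it:

DEFINE_FUNCTION LONG countLineFeeds(CHAR cText[])
{
    STACK_VAR LONG i
    STACK_VAR LONG lCount

    lCount = 0
    FOR (i = 1; i <= LENGTH_ARRAY(cText); i++)
    {
        IF (cText[i] = $0A)    // count the LF bytes
        {
            lCount = lCount + 1
        }
    }
    RETURN lCount
}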
So, the good news is that length_array() works perfectly above 2^16 bytes (including on the output of the function above).
I looked it up and length_array returns a LONG, so as long as you are concatenating files totalling less than ~4GB it should return the correct value. I was more concerned about errant characters left behind after marshalling with large arrays. As an aside, I had forgotten that NetLinx has a DOUBLE that is 64-bit. Can't think of when I would need that very often, though.
Paul