My first Gigantic Project - EEK!
fogled@mizzou
Posts: 549
I'm working on my first really gigantic project: 5 controllers, 25 output devices, 9 inputs, 5 touchpanels (1 big TPI4, 4 Pinnacles), etc. spread across 5 different main locations (4 of them actual rooms, one is a building-wide digital signage system). The TPI4 needs to be able to control all the locations, but the others only need to control their own rooms. I've got a couple questions...
First: I've never worked with more than one controller at a time before. Could a few of you chime in on whether it's better to set this up as one big system with a master and several slaves, or set them all up as masters and communicate between them? I've started my programming approach with each controller being a master, and using master-master communications, but I haven't gotten very far and I'm questioning my sanity (on this point and several other things :-).
Second: I've got input and output buffers set up for all my devices, and was planning to process the buffers in DEFINE_PROGRAM. But that leaves me with as many as 60 WHILE(FIND_STRING(buff,delimiter)) statements in DEFINE_PROGRAM, as well as feedback statements. Could I possibly run into performance issues doing this?
I think that's enough for now. A few responses will probably lead to more questions.
THANKS!
Comments
I personally like to keep all the code in one main master. I've never had problems with processors being slow or network traffic and all that myself. I like to do this because it makes remote management much easier and it also makes managing the code much easier.
I understand the rationale for splitting up the code. But in my mind, it's trying to prevent problems that typically don't happen anyway.
That's my 2 cents worth. Fire away...
Well, I would have all the rooms run their own code. As far as the TPI4 goes, I would use G4 Computer Control to connect to the room panels' web control. Very little code is needed.
That's how I do big jobs too. A central controller for "whole house" stuff, and separate code running on the local masters for local systems. The locals all tie to the central for shared functions. The only exception is that I like, when possible, to run code for devices on the master they are physically connected to. For example: in the "one central" master scenario, there may be an alarm panel that is in the same mechanical room as a "local" master (real example). I would put the module for that alarm panel on that local master, making sure all the panels on the central master are also defined there. Once master-to-master is properly set up, they can all share panels and even devices, so that part is completely transparent to the end user. I like to imagine that local code for devices, however, translates to better responsiveness; it's probably negligible, but I can't imagine the extra traffic doesn't add up eventually if you are running code on one master, and the device is plugged into another.
wait 8 PARSE_DENON();
wait 11 PARSE_XANTECH();
wait 12 PARSE_COFFEEPOT();
wait 20 PARSE_NOTSOIMPORTANTDEVICE();
etc.
In the meantime, I'd like to delve into the buffers and DEFINE_PROGRAM just a little more. Here's my best understanding / implementation of buffers to handle flow control and device input parsing...
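(The original code attachment isn't shown here, so what follows is a hedged sketch of the pattern being described: a buffer created in DEFINE_START and drained with WHILE(FIND_STRING()) in DEFINE_PROGRAM. The device number and the $0D delimiter are placeholder assumptions.)

```netlinx
DEFINE_DEVICE
dvProjector = 5001:1:0    // hypothetical serial device

DEFINE_VARIABLE
VOLATILE CHAR cProjBuff[1000]
VOLATILE CHAR cResponse[255]

DEFINE_START
CREATE_BUFFER dvProjector, cProjBuff

DEFINE_PROGRAM
// Drain every complete, delimiter-terminated response on each pass
WHILE (FIND_STRING(cProjBuff, "$0D", 1))
{
    cResponse = REMOVE_STRING(cProjBuff, "$0D", 1)
    // act on cResponse (update feedback variables, etc.)
}
```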
There's a lot of other processing that can go on in the BUTTON_EVENT and in the parsing call, and there is some additional error handling that should be done as well. But... this is how I'm handling serial communications for smaller single-controller systems now, and it's working reasonably well. I just want to be sure the system isn't going to slow to a crawl if I add 30 more pairs of the WHILE() statements in DEFINE_PROGRAM, when I'll have probably close to 500 basic feedback statements in there as well. And, if this is a really poor way to do this, I'm open to other suggestions. At this point, you're probably all screaming "Modules! MODULES!" at me ;-)
It looks like I need to get a lot more comfortable with modules. I haven't had that much experience with them, and what experience I've had hasn't been very good (MFG modules that ended up having name conflicts, and/or that lacked the functionality, or performed very slowly, for the one or two things I actually needed the device to do).
Thanks!
--D
That's not always strictly reliable. What triggers a string event is a burst of data then a pause ... but if your device pauses before all the data is in, you may not have all of it when the event fires. Conversely, if it doesn't pause enough, you might have multiple "packets" of data in one event. The timing for what constitutes a full event hasn't been published by AMX that I know of, but the mechanism definitely works that way. The only time I use the string or command handlers of a data event is when it is entirely internal to the master - as in strings between virtual devices and modules. For real devices, I test the buffer in mainline to see if there is data I need to act on. Sometimes you can get away with it, and sometimes you can't, and I would rather not experiment, so that has become my standard way of doing it.
Interesting. I wonder if this is an issue if: 1) I'm still using a buffer; and 2) I'm still using a WHILE statement to look for the termination character in the buffer.
So the STRING event fires every time there's data, then a pause. If the event fires before the entire response string is received, it will just fire again when the rest of the string comes in. But what if I get a burst of 2 full responses? Will the WHILE only run once and process only the first response, or will it keep going (and process a 2nd response) until the statement is false? I would hope the latter, but taking the word "while" at face value is a dangerous assumption. ;-)
As long as I don't end up with a command sitting in a buffer, unprocessed until the next command comes in (and once the buffer got a command behind, it would stay that way until it got a partial string with a pause long enough to fire the event), it seems like putting the WHILE statement in the STRING handler of the data event is a fair solution.
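A minimal sketch of that approach, assuming a device buffer (cDevBuff) created with CREATE_BUFFER in DEFINE_START and a $0D delimiter:

```netlinx
DATA_EVENT[dvDevice]
{
    STRING:
    {
        // The WHILE keeps draining, so if two complete responses arrive
        // in one burst, both are processed before the handler exits.
        WHILE (FIND_STRING(cDevBuff, "$0D", 1))
        {
            STACK_VAR CHAR cResponse[255]
            cResponse = REMOVE_STRING(cDevBuff, "$0D", 1)
            // process cResponse here
        }
    }
}
```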
It looks like I've got some testing to do...
Thanks!
I agree with your description of how data arrives, but I must strongly disagree that the Data_Event is not a reliable place to parse the data. I only use Data_Events, and I've never had a problem. Even when parsing in mainline you would still have the possibility that an incomplete message is in the buffer, or that there is more than one message in the buffer. Regardless of whether you parse in the Data_Event or in mainline, you must still check for incomplete and multiple messages.
And just to clarify, I do not use Data.Text in the Data_Event (unless, as you say, I'm dealing with a virtual device). I always create a buffer in Define_Start and parse that buffer in the Data_Event.
--D
Parsing: I always create a buffer for real devices and leave Data.Text for virtuals. Mostly I don't use the string event handler and instead parse from Define_Program, but sometimes I will if I need the data parsed immediately and/or for smoother feedback. Otherwise I wait until the next pass of mainline to check my buffer for the next complete response, using the if(find_string(buffer,"CRLF",1)) type of approach. This method removes the need for while loops, since it will take the next chunk out of the buffer on every pass if the delimiter for a complete response is present. You can also handle this in the string event and still use the if(find_string(buffer,"CRLF",1)) in Define_Program to ensure nothing gets left behind in case the string event handler doesn't fire properly, and/or to avoid a while loop. Mind you, I'll still use while loops and have nothing against them, and will on occasion put a while(find_string(buffer,"CRLF",1)) in Define_Program if I want to process the contents of the buffer in one shot and not wait the extra millisecond for mainline to repeat.
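For illustration, the one-response-per-pass approach described above might look like this (buffer and variable names are assumptions; the buffer is filled via CREATE_BUFFER in DEFINE_START):

```netlinx
DEFINE_VARIABLE
VOLATILE CHAR cDevBuff[1000]    // filled via CREATE_BUFFER in DEFINE_START
VOLATILE CHAR cRsp[255]

DEFINE_PROGRAM
IF (FIND_STRING(cDevBuff, "$0D,$0A", 1))    // CRLF-delimited responses
{
    // No WHILE needed: mainline repeats continuously, so a second queued
    // response is pulled out on the very next pass.
    cRsp = REMOVE_STRING(cDevBuff, "$0D,$0A", 1)
    // handle cRsp
}
```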
That may be your experience, but I have been specifically burned by this. My previous habit was to have a buffer, then trigger a re-iterative parse on the buffer when a STRING event occurred. I frequently had commands that did not fire until the second event happened, because the STRING event only caught a partial command. I'm sure it's very device dependent, and a well-behaved device won't have any issues. But I don't have the time or inclination to test and verify every new device I need to install, so I just stopped doing it that way.
I have to say that I tend to use both of your methods. I have kind of a built-in limit of 10 or so lines of code for parsing strings in the DATA_EVENT. After building my own telnet weather harvester, I found that all the X_EVENTS do time out. If you have a whole raft of stuff going on in the event, there is a point when it quits going through the stack. This was one of the things that drove me nuts at first. The compiler will let you throw a huge stack of stuff into the event. That doesn't mean it gets done.
For example, my weather string coming from WUnderground had the usual suspects: date, time, current temp, current humidity, wind speed and direction, etc. The last few data items were current conditions and the 5-day forecast.
The string would come in just fine, and I know for a fact that the parsing worked by soloing each routine out. However, when all the routines were present, the last two wouldn't produce a value. I bashed around for quite some time trying to figure out if I had one of those hidden errors that you can stare at for hours and not see. No luck.
Then I happened upon the solution by accident. I moved the larger chunk of code to the top of the data_event to make it easier to work on. (I had to scroll up and down to tweak things and it was driving me nuts.) Suddenly it began working, but the last two routines, which were now something like wind speed and wind direction, quit working.
I then realized that the DATA_EVENT was timing out before completing the last hunk of commands. I tried all kinds of things to trick it into staying in the event, with no success. I then put the whole parsing routine inside an IF statement in DEFINE_PROGRAM and just triggered the IF with a flag, and the whole thing magically worked fine.
So, if the parsing is quite simple and quick, I do it in the event. If it gets more involved, I move it out to either a function/call or run down to DEFINE_PROG.
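A hypothetical shape of that flag arrangement (the device, flag, and routine names here are made up for illustration):

```netlinx
DEFINE_DEVICE
dvWeather = 0:3:0    // hypothetical IP client that receives the data

DEFINE_VARIABLE
VOLATILE INTEGER nParseWeather

DEFINE_EVENT
DATA_EVENT[dvWeather]
{
    STRING:
    {
        nParseWeather = 1    // just raise the flag; defer the heavy work
    }
}

DEFINE_PROGRAM
IF (nParseWeather)
{
    nParseWeather = 0
    CALL 'PARSE_WEATHER'     // the large parsing routine runs from mainline
}
```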
You can test this yourself by putting in a crazy-big FOR loop and seeing how far it gets through the loop before timing out. It actually doesn't get too far.
I do large systems all the time, and doing all the code in one master is crazy and dangerous. For most huge systems it's simply not a good idea, especially for large residential jobs where you may have live cover art going to a panel.
It is far better to have the individual code for each room on each processor, then if you have a network glitch on one side of the building all of your rooms won't die.
For large corporations putting all code on one master for many rooms would be suicide. I routinely work on systems with 50 to 100 conference rooms.
You are much better off having each room run its own code, and then just doing virtual pushes and communications between masters.
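As a sketch of what those cross-master pushes can look like once the masters' URL lists are set up: in NetLinx, a device on another master is simply addressed with that master's system number in the D:P:S triplet. The system number and device assignments below are assumptions.

```netlinx
DEFINE_DEVICE
dvTP          = 10001:1:0    // touch panel on this (local) master
dvRemoteRelay = 5001:8:3     // relay port on the master with system #3

DEFINE_EVENT
BUTTON_EVENT[dvTP, 1]
{
    PUSH:
    {
        ON[dvRemoteRelay, 1]  // executes on the remote master transparently
    }
}
```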
You can also, of course, use RMS to monitor all your systems.
Crazy and dangerous??? I have several large systems with the code in one master (systems of $500K to $1M). I have never run into issues.
I even have one system that has a Lutron Homeworks running at 115K baud on a sub-master's serial port that works just fine. I had my doubts about it working, but after testing, it ran like a champ.
I personally think it has much more to do with how you handle TP feedback and how you organize your touch panels in general than the processing power issues of an NI master.
The only code I put in the sub-masters is typically a quick bit to set up the serial ports to the correct baud rate, etc.
If you think about it, the processor should be running fast enough to do most things. In my experience, most issues occur when AMX meets the IP network for communications.
We always insist upon enterprise-level networking gear in our large systems. The customer doesn't have much say in this area. If I have to work with cheaper networking gear, things usually go wrong fairly quickly.
Having any event handler time out would be a huge gap in the stability of any program running on NetLinx, and I have never experienced such behavior. I went back and looked at a previous program of mine where I had a huge parser in a Data_Event... to the tune of nearly 5000 lines (this was a circa 2001 ME master). This program never experienced any issues with timeouts in the parser. Perhaps this was a bug in a particular version of firmware?
I am running a test right now on a new NI-700, running a 32-bit FOR loop triggered in the string handler of a Data_Event. So far it's up to 80,000,000 iterations and it hasn't timed out. When it completes its march up to 32 bits, I'll post the code and results.
And for those of you interested, the 700 is churning through 50000 iterations approximately every second with a single mod operation.
--D
I agree with Seth 100%.
If you are on someone else's ethernet connection, there is always some clown in the next area watching a YouTube video while you are trying to pass info between masters.
Code each room separately, just in case of ethernet failure or equipment failure in your central control area. That way, each area will perform independently in case something happens, and believe me, it will!
I'm checking this behaviour out as well, and agree that it does seem to work as you say. The weather module I wrote is from a long time ago (probably around 2001 or so).
I'm running a test with a FOR loop going to 3M. It's working just fine. This is on an old ME-260 running 2.31.139, and it does behave the way you describe.
I'm really sure I wasn't crazy when I tested before. I'd swear it was acting the way it did. Hmmm....
The other thing, too: when I put the old weather routine back to parsing everything in the data_event, it now works fine without modifying the code. It must be something different in the firmware, perhaps.
I may have to rethink this. I don't like the idea of spaghetti code myself and if I could do all the parsing in one spot, that'd be great.
interesting....
here's the little routine I'm running on it.
Here is the complete test file that I am using:
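(The attachment didn't survive, so here is a hedged reconstruction of the kind of test being described: a 32-bit FOR loop in a DATA_EVENT string handler with a single mod operation, watched via a counter in debug. The device and variable names are assumptions.)

```netlinx
DEFINE_DEVICE
dvTest = 0:3:0    // hypothetical IP device whose traffic triggers the event

DEFINE_VARIABLE
VOLATILE LONG lProgress

DEFINE_EVENT
DATA_EVENT[dvTest]
{
    STRING:
    {
        STACK_VAR LONG i
        FOR (i = 1; i < 4294967295; i++)
        {
            IF (i % 50000 = 0)
                lProgress = i    // watch in debug; if the event timed out,
                                 // this would stop climbing short of 32 bits
        }
    }
}
```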