ME260/64 and RMS Enterprise
mstocum Posts: 120
What exactly is the support status of RMS Enterprise and the ME260/64? In the Installation Guide, it's listed as a supported controller for both versions 4.x and 3.x of the SDK, but firmware version 4 is required for version 4 of the SDK. There is, of course, no download for version 4 of the firmware for the ME260/64. So, what exactly is the deal?
The requirements are somewhat misleading. RMS SDK v4 requires NetLinx Master version 4 firmware; connection instability and Master lockups may result if you use version 3 NetLinx firmware with SDK v4. ME260/64 version 4 firmware is not planned due to platform limitations, so we will remove the ME from the RMS v4 supported system controller list. You can use the ME with the v3 legacy SDK, though, and it will work fine.
The reason I don't want to use version 3 of the SDK is that we have multiple rooms running off the same master (up to four on some floors), and from what I remember, handling that in v3 wasn't the easiest thing in the world. v4 seems to deal with multiple rooms on a single master much better.
To explain: one of the main differences in Enterprise is that the controllers now essentially become proxies for device communication with the RMS server. Assets no longer have to be connected to an instance of the RMS client that defines a room. This is all handled server side; assets simply register with the server and carry a bit of data that says 'hey, I belong in location x'. Which location that is is completely dynamic and can be changed at runtime from the RMS web interface. This is great because it gives you complete abstraction from physical connectivity for your location structuring. No multiple instantiation required.
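To make the registration model above concrete, here is a minimal conceptual sketch (plain Python, not RMS SDK code - all class, method, and asset names are illustrative) of a server that owns the asset-to-location mapping, so an asset can be re-homed at runtime without touching controller code:

```python
# Conceptual sketch only: the server owns the asset -> location mapping.
# Assets register with a location reference; moving an asset is purely
# a server-side change, mirroring the RMS Enterprise web-interface flow.

class RmsServerModel:
    def __init__(self):
        self.assets = {}  # asset_id -> location_id

    def register_asset(self, asset_id, location_id):
        # Asset says "hey, I belong in location x" at registration time.
        self.assets[asset_id] = location_id

    def move_asset(self, asset_id, new_location_id):
        # Done from the web interface; no controller change required.
        self.assets[asset_id] = new_location_id

    def assets_in(self, location_id):
        return [a for a, loc in self.assets.items() if loc == location_id]

server = RmsServerModel()
server.register_asset("projector-1", "room-101")
server.register_asset("display-2", "room-101")
server.move_asset("display-2", "room-102")  # dynamic, at runtime
```

The point of the sketch is only that location structure lives server side; the physical controller never encodes which room an asset belongs to.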
Now, at the moment you are thinking 'this is awesome and sounds exactly like what I want'. Well... almost.
If you are using SNAPI-compliant NetLinx modules for your device comms then yes, this will allow you to set up M2M and run all of your RMS monitoring code from the NI-700. You may want to set up some additional communication for handling things like system power on / power off, but the base functionality should all be there. Which, by the way, is a super neat way to add RMS to systems where you may not have access to the original source code.
Now, if the system is somewhat less structured and you are using modules with a non-standard API, or are handling all your device communication within the global scope, you can flip this around a bit. If you instantiate the RMS client on the NI-700, this opens up your gateway to the RMS server. You can then implement the rest of the RMS-related code (device registration, parameter updates, etc.) on the ME260s* and point it at the RMS virtual device on the NI-700. I would highly recommend using RmsApi.axi and RmsEventListener.axi for this. Basically, any component of the SDK that is implemented in NetLinx (that's everything but the client itself and the GUI module) is safe to run on v3.x firmware.
If you are running Duet modules for device comms, though, the RMS client will need to be on the same NI as the device modules it's talking to (there are some funky things that go on in the Java backend).
[size=-2]* I have not personally tested this, however to the best of my knowledge should not present any issues. If you find otherwise I would love to hear it.[/size]
Thanks. I have an NI-700 on order now, so I'll hopefully be able to report back with some good news before too long. Fortunately (for me), the only Duet modules I'm using in the system are for an audio conference system, which I'm not too worried about connecting to RMS. Just about everything else is my own code, and I always follow SNAPI as closely as possible.
I don't think there would be any issue with running Duet modules on the ME and RmsNetlinxDeviceMonitor modules on the NI. It's the RmsDuetDeviceMonitor modules that call the OSGi layer; otherwise it's just SNAPI on the virtual device.
I seem to recall some weird issues with Duet modules eating commands sent to the module, so NetLinx code never gets a chance to see that the command was sent. I'm not sure if that'll cause any issues. Guess I'll find out. Thanks for the tip.
Is there any documentation on how to separate this out per location? Looking at RmsEventListener.axi, the code that actually calls RmsEventSystemPowerChangeRequest() is operating on vdvRMS, and doesn't pass along any location information. This would seem to make it a global state per instance of the RmsNetLinxAdapter_dr4_0_0 module. All of the documentation I've found seems to treat System Power as being bound to the master controller.
If you have a module that doesn't set DEVICE_COMMUNICATING and DATA_INITIALIZED properly, make sure you do not have SNAPI_MONITOR_MODULE defined, or else it will never register (I'm looking at you Extron_Crosspoint_Comm_dr1_0_0).
System power seems to just set system.power to On or Off on the module you're tracking. Don't bother with the built-in HAS_SYSTEM_POWER stuff if you run multiple rooms off one master, as it seems to assume you have a single room attached to your controller.
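The workaround described above amounts to tracking power state per room rather than per master. A tiny illustrative sketch (plain Python, hypothetical room names - the `system.power` On/Off convention is the only piece taken from the SDK):

```python
# Illustrative only: per-room power state for several rooms sharing
# one master, instead of a single master-wide HAS_SYSTEM_POWER flag.

room_power = {}  # room name -> "On" / "Off"

def set_system_power(room, state):
    # Mirrors setting system.power on the tracked module for that room.
    assert state in ("On", "Off")
    room_power[room] = state

# Four rooms on one master, each with an independent power state:
for room in ("room-a", "room-b", "room-c", "room-d"):
    set_system_power(room, "Off")

set_system_power("room-b", "On")  # powering one room leaves the others off
```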
The Netlinx monitor modules do in fact seem to run perfectly fine on the version 3.xx firmware (as really they should, they're not doing anything that complicated), and they work just fine for Duet modules with my previous caveat about DATA_INITIALIZED.
So far it seems like you can only track a single master per RMS module instance. I need to dive a bit deeper into RmsControlSystemMonitor to see exactly what is going on with that, but that's at the bottom of my to-do list right now.
Control methods are SLOOOOOOOW to execute. Here's hoping there's something I can tweak to make that process happen faster.
The NI does not maintain a connection back to the server like RMS 3.x did. There is an adjustable reconnect time; when the connection is re-established, all updates and control function calls are exchanged.
Is that CONFIG.CLIENT.HEARTBEAT? Unfortunately, according to the documentation, 15 seconds is the minimum value. Why on earth would they move from a persistent connection to polling? 15 seconds is an eternity when you have an angry professor on the phone.
That's the one. And yup, it's slow.
The communication architecture has changed significantly from v3.x. All RMS Enterprise clients communicate via a web services API that the server exposes. This is waaaaay more scalable than v3.x's persistent-connection approach and also makes network guys despise you a whole lot less. It's a standard protocol that they deal with every day and know inside out; none of this weird AV voodoo.
Now, as there is no longer a connection sitting there ready for the server to shout things at the clients at any time, it has to wait until the next time the client gets in touch. Think of the clients like one of those annoying people who have their phone number set to private: when they (the client) want to get in touch with you (the server) it can happen straight away, but when it's the other way round you have to wait for them. What this means is that any client-initiated events cause action to take place instantly (ish). Parameter updates, device registration, metadata updates, etc. happen as soon as you execute the appropriate command, so your monitoring data is always real time (you do have the option to queue a batch of these, though - see the enqueue methods in RmsApi.axi - to optimise your communications). Server-initiated events, such as control methods that need executing, are queued on the server side and collected by the client during its next contact. The frequency of this contact is defined by the heartbeat interval. As a result, if the server executes a control method, it can take *up to* 15 seconds to be executed by the client, depending on where in the heartbeat cycle this happens.
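The "up to 15 seconds" behaviour above can be sketched as a toy timing model (illustrative Python, not SDK code - the function and variable names are mine): a server-queued control method waits until the next heartbeat, so its worst-case latency is one full heartbeat interval.

```python
# Toy model of server-initiated control latency under heartbeat polling.
# Client-initiated traffic (parameter updates etc.) goes out immediately;
# server-queued control methods wait for the next client contact.

HEARTBEAT = 15.0  # seconds (the documented minimum interval)

def control_latency(queued_at, last_heartbeat):
    """Seconds until the client collects a control method queued at
    `queued_at`, given the time of the previous heartbeat contact."""
    next_heartbeat = last_heartbeat + HEARTBEAT
    return next_heartbeat - queued_at

# Queued just after a heartbeat: waits almost the full interval.
worst = control_latency(queued_at=0.1, last_heartbeat=0.0)
# Queued just before the next heartbeat: picked up almost at once.
best = control_latency(queued_at=14.9, last_heartbeat=0.0)
```

So the latency is uniformly spread between roughly zero and one heartbeat interval, depending on where in the cycle the control method lands.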
Now, there is a very good reason why this is set to a 15-second minimum. Let's say we had a system with 500 clients (NIs, signage players, etc.). If these all had a 5-second heartbeat, that would be 12 requests per minute, per client. Across 500 clients that comes to 360,000 requests per hour, or 100 requests per second. Remember also, this is a base load - you still need to add all of your parameter updates on top. That's going to require some pretty beefy server infrastructure.
If we scale that back to a 15-second heartbeat interval, that drops us back to a much more manageable 33 requests per second. If you want to use CLIENT.MESSAGES.RETRIEVE to overcome this minimum, by all means, it will provide the functionality you are after - just make sure your server side has the grunt to handle it.
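The arithmetic above reduces to one line (heartbeat traffic only; parameter updates would come on top, as noted):

```python
# Base heartbeat load: each client polls once per interval, so the
# aggregate rate is simply clients / interval.

def requests_per_second(clients, heartbeat_s):
    return clients / heartbeat_s

five_sec = requests_per_second(500, 5)    # 100 requests per second
fifteen  = requests_per_second(500, 15)   # ~33 requests per second
per_hour = five_sec * 3600                # 360,000 requests per hour
```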
I get the logic for why they're doing things this way for a huge deployment; it just annoys me when companies sacrifice functionality at the small end (which will probably be most installs) just to get performance at the high end. Why not let me set the heartbeat to a shorter interval and deal with the server catching fire? Even better, why not have the server send out a minimum poll time based on current system load? Again, 15 seconds doesn't seem very long on paper, but when you have a frustrated user on the phone, it's an eternity.