Custom Modules
staticattic
Posts: 200
I have worked with pre-made modules in the past, not very often though. I figured I would take a shot at writing my own. I was wondering, if other devices have to do something in relation to the device being controlled by the module, would all of that be written in the module?
For example, I am using a Tandberg 6000MXP, an Extron 128 video switch, and two plasmas. If the user selects to transmit the main camera, not only does the Tandberg need to switch to the camera input, the Extron switch needs to make some moves, and the plasma screens need to power on (if not on already) and move to the proper input selection. My initial guess would be to only have one device controlled per module. So the array of buttons used for the Tandberg would be sent to the module for usage. Then, in the main program, any of those same buttons that would require the Extron switches to do something or the plasma screens to do something, that particular button array would also be used. I was thinking if I put control of all of the other devices into the module, the module would lose its portability. So, what would be the best way to handle something like that?
Comments
If the three devices need to change when the user does something then your code will have to do three do_pushes on each of the three devices to accomplish the task.
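A minimal sketch of that fan-out, assuming one virtual device fronting each comm module (the device numbers and channel numbers here are illustrative, not from the original post):

```netlinx
DEFINE_DEVICE
dvTP        = 10001:1:0
vdvTandberg = 41001:1:0    // virtual devices fronting each comm module
vdvExtron   = 41002:1:0
vdvPlasma1  = 41003:1:0
vdvPlasma2  = 41004:1:0

DEFINE_EVENT
BUTTON_EVENT[dvTP,1]    // "Transmit Main Camera"
{
    PUSH:
    {
        DO_PUSH(vdvTandberg, 11)   // codec: select camera input (channel numbers assumed)
        DO_PUSH(vdvExtron,   21)   // switcher: make the route
        DO_PUSH(vdvPlasma1,  31)   // displays: power on + input select
        DO_PUSH(vdvPlasma2,  31)
    }
}
```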
Paul
The traditional method was to have the button_event in the main, include, or UI module and communicate with the comm module via send_commands, using commands that you create yourself. The module parses the received command in a COMMAND data_event handler and sends information back to the main, include, or UI module with send_strings, again using commands of your own design.
So there are many ways to do this, but regardless of the method I usually try to keep the modules and the include files clean, and I control the various devices that need to work together in a Systems include file or module with a Systems virtual device. That handles system-wide button events affecting multiple devices, either by do_pushes on the individual devices or by send_commands directly into the modules. I tend to favor the do_pushes because I often have additional tracking or TP feedback outside the module.
A button press in the MAIN Code using COMM modules may look like this.
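The code block from the original post was not preserved; here is a hedged guess at its shape, with invented command strings and a hypothetical nBtnsSrcSelect button array:

```netlinx
// Hypothetical sketch -- the command strings and array are assumptions
BUTTON_EVENT[dvTP, nBtnsSrcSelect]    // source-select button array
{
    PUSH:
    {
        SWITCH(BUTTON.INPUT.CHANNEL)
        {
            CASE 1:
            {
                SEND_COMMAND vdvTandberg, 'INPUT-MAINCAM'
                SEND_COMMAND vdvExtron,   'ROUTE-1,1'
            }
            CASE 2:
            {
                SEND_COMMAND vdvTandberg, 'INPUT-PC'
                SEND_COMMAND vdvExtron,   'ROUTE-2,1'
            }
        }
    }
}
```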
The COMM modules handle the commands and buffer them where needed. COMM modules also make troubleshooting simpler: using Diagnostics to control a device, you simply send the command to the virtual device and the control is tested.
The channel states can also be viewed in the port status of a device in the online tree via NS2, or through a telnet session, so you can get a fair amount of info about what the module thinks it's doing without having to go into debug. If I have a device where an input select means nothing or does no good unless the unit is powered on (projectors, TVs, receivers, etc.), then I write the module so that I just need to pulse the input channel I want from my main code, and the module takes care of turning the unit on if it isn't on already and waiting the appropriate amount of time before the input select.
So instead of doing this:
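The "instead of" listing was not preserved; a sketch of what it might have looked like, where the main code has to know about power state and warm-up timing itself (channel 31 for input select, channel 255 for power feedback, and the warm-up time are assumptions; 27 is the usual AMX power-on channel):

```netlinx
BUTTON_EVENT[dvTP,5]    // select an input on the projector
{
    PUSH:
    {
        IF(![vdvProj,255])            // not powered on yet?
        {
            PULSE[vdvProj,27]         // power on
            WAIT 150 'PROJ_WARMUP'    // main code has to guess the warm-up time
            {
                PULSE[vdvProj,31]     // input select channel
            }
        }
        ELSE
        {
            PULSE[vdvProj,31]
        }
    }
}
```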
I like to do this: I'll still support channels 27 and 28 for simple on and off, but I like to take as much thought process out of the main code as possible and let the module do the legwork for me.
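As a sketch of the preferred version: the main code only pulses the input channel, and the module owns the power-on and the wait. Channels 27/28 for power are from the post above; channel 31, channel 255, the warm-up time, and the protocol strings are assumptions:

```netlinx
// Main code: one pulse, no power logic
BUTTON_EVENT[dvTP,5]
{
    PUSH:
    {
        PULSE[vdvProj,31]    // input select; the module does the rest
    }
}

// Inside the module, a CHANNEL_EVENT does the legwork
CHANNEL_EVENT[vdvProj,31]
{
    ON:
    {
        IF(![vdvProj,255])                     // not powered yet?
        {
            SEND_STRING dvProj, "'PWR ON',$0D" // invented protocol string
            WAIT 150 'PROJ_WARMUP'             // module owns the warm-up delay
            {
                SEND_STRING dvProj, "'SOURCE 1',$0D"
            }
        }
        ELSE
        {
            SEND_STRING dvProj, "'SOURCE 1',$0D"
        }
    }
}
```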
Food for thought…
This module was for an AVR and I only used channel and level events for feedback from the module to my include file where I used channel and level events for feedback to the TPs. I used multi state bargraph buttons to display my fixed variable text such as inputs, surround modes and volume levels and don't send variable text at all with send_commands.
It's something that works for me. Normally the channel feedback in Notifications is saturated with button feedback and channel feedback, which makes it difficult for me to see the event I'm looking for. Breaking it out to a send_command allows me to quickly verify that the program logic triggered the action. The cost of parsing for the send_command is minimal, as there is no parsing at all.
No more than a channel event would require in the same module.
This is how modules were originally conceived by AMX, until conformity to channel presses (as in IR devices) was required by users of the modules to make a 232 device behave like an IR device.
If I am correctly understanding you guys, I should ditch the idea of having separate UI modules for each device. Instead, I should have comm modules for each device and then pass the commands to them via the main code or through DO_PUSHes of a virtual TP.
A press of button 1 needs to switch the Tandberg to the main cam input, wake the Tandberg if asleep, switch the plasma screens to the proper input, select Input 1 to Output 1 video on the Extron, hide the current pop-up on the TP and open the camera PTZ pop-up, and light the Main Cam input selection button. If I give all of the horsepower to the comm modules, then the only thing the main program needs to do is pass the info to the Tandberg, Extron, and plasma comm modules, and send the pop-up commands to the TP. The comm modules would take that info, do the manual labor required to make the action happen and pass the feedback back to the main program for parsing. Is that the right idea?
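A sketch of that division of labor, assuming invented command strings and pop-up names (only the device concepts come from the example above):

```netlinx
BUTTON_EVENT[dvTP,1]    // "Main Cam"
{
    PUSH:
    {
        // the comm modules do the manual labor...
        SEND_COMMAND vdvTandberg, 'INPUT-MAINCAM'    // module wakes the unit if asleep
        SEND_COMMAND vdvExtron,   'ROUTE-1,1'        // input 1 to output 1
        SEND_COMMAND vdvPlasma1,  'INPUT-RGB1'       // module powers on first if needed
        SEND_COMMAND vdvPlasma2,  'INPUT-RGB1'

        // ...the main program only manages the UI
        SEND_COMMAND dvTP, "'@PPF-Current Source'"   // hide current pop-up (name assumed)
        SEND_COMMAND dvTP, "'@PPN-Camera PTZ'"       // show the PTZ pop-up
    }
}

DEFINE_PROGRAM
[dvTP,1] = [vdvTandberg,11]    // light "Main Cam" from the module's feedback channel
```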
It goes on and on, but I am not going to write the whole thing. Line by line:
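The listing being walked through was not preserved. Here is a hedged reconstruction pieced together from the line-by-line notes, with 'CALLSTATUS-' standing in for whatever command prefix the real module used:

```netlinx
DATA_EVENT[vdvTandberg]    // virtual Tandberg, declared as 41001:1:0
{
    COMMAND:
    {
        STACK_VAR INTEGER mPort
        STACK_VAR INTEGER line
        STACK_VAR CHAR cChunk[50]

        mPort = DATA.DEVICE.PORT
        SEND_STRING 0, "'UI rcvd from COMM on Port ',ITOA(mPort),' :',DATA.TEXT"

        SWITCH(REMOVE_STRING(DATA.TEXT,'-',1))
        {
            CASE 'CALLSTATUS-':    // placeholder; the real prefix was not shown
            {
                line = ATOI(DATA.TEXT)               // call/line number
                IF(line <= MAX_LINES)
                {
                    cChunk = REMOVE_STRING(DATA.TEXT,',',1)    // = '(status=Synced,'
                    IF(FIND_STRING(cChunk,'Synced',1))
                        mLineState[line] = CONFERENCE_STATE[3]  // 'Connected'
                }
            }
        }
    }
}
```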
COMMAND: defines a section in the data event handler for processing send command instructions.
mPort = DATA.DEVICE.PORT = in this instance, mPort is going to equal 1 because my virtual Tandberg is declared as 41001:1:0.
SEND_STRING 0, "'UI rcvd from COMM on Port ',ITOA(mPort),' :',data.text" — this sends to the master 'UI rcvd from COMM on Port 1 :' followed by whatever data.text happens to be.
SWITCH(remove_string(data.text,'-',1)) — this removes everything in data.text up to and including the hyphen, and switches on the removed portion.
This is where I think the horsepower from the comm module comes into play. Somewhere in the program, some event triggered an "are you in a call?" status check. Data.text is going to be the reply back from the comm module, mixed with data from the real Tandberg. If the comm module queries call 1, the real Tandberg is going to reply back with *s Call 1 (status=Synced, type=whatever, whatever, and on and on).
line = atoi(data.text): this will now equal 1, and since it is not greater than the max lines supported, we move on.
remove_string(data.text,',',1) is now going to = (status=Synced,
The UI module will find its target word, "Synced" and report back that mLineState[1] = CONFERENCE_STATE[3].
mLineState is declared as: volatile char mLineState[MAX_LINES][15]
CONFERENCE_STATE is declared as: char CONFERENCE_STATE[5][15] = {'Idle', 'Negotiating', 'Connected', 'Ringing', 'Dialing'}
MAX_LINES = 11
So, the TP would then show that call number 1 was connected.
Am I right in my calculations?
If the main program is going to serve as the UI to the comm modules, data events such as this will need to remain in the main program, right? Or, if I used includes to handle Tandberg functions, would these types of things go there instead?
Feedback for the Local system might look like this
[dvTP_Tandberg,1] = [dvTandberg,101]
This could be in a timeline_event or in define program. The beauty of writing your own module is that you can make it do one thing or many things.
The beauty of having virtual devices, comm modules, UI modules, or includes is that I, as the programmer, have complete control over them. If I ever go to a job site where some guy has created a crazy TP, or a job site with multiple TP's, or any other type of situation, I would still be able to maintain my level of control without having to stress too much about it. All I would need to do is drop my modules, includes, and virtual devices into the source code, then have the main source code interact with the real TP's and either send DO_PUSHes to my virtual TP's (which will always be the same) or SEND_COMMANDs directly to the virtual devices themselves. The feedback on the real TP(s) would be based on the feedback to the virtual TP's. That way, my modules would be totally "modular", and all I would need to do is some tweaking of the main source code to make sure the button arrays are correct.
Following that logic and using the same example with the Tandberg and the Extron switch, I think I have my plan. The comm modules for both are included in my program via AXI's. Included in those AXI's are the button arrays for the virtual TP's needed for the levels of control I want. The main source code uses DO_PUSHes, DO_RELEASEs, and DO_PUSH_INFINITEs to poke the buttons of the virtual Tandberg TP and the virtual Extron TP. The feedback on the real TP(s) would be something like:
[dvTP,1] = [vdvTP_Tandberg,1]
So when you select input 5 on the Extron and call TANDinput(5), the Tandberg would select input 4. This keeps the AXI modular but lets you see that you're selecting an input on the Tandberg. This could be used anywhere in your project.
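A hypothetical sketch of what a TANDinput() helper in the AXI might look like; the channel base is an assumption, invented for illustration:

```netlinx
DEFINE_FUNCTION TANDinput(INTEGER nInput)
{
    DO_PUSH(vdvTP_Tandberg, 30 + nInput)    // poke the virtual TP's input-select buttons
}
```

It could then be called from anywhere in the main code, for example right after the Extron route is made.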
Any other time I have tried to sneak in a GOTO, that was serious point deductions.
I have gotten away from SEND_COMMAND and SEND_STRING for simpler module functions. I have gotten more into DO_PUSH_TIMED_INFINITE and DO_RELEASE, as some other programmers like to use. I find it much easier to report a status with a channel event or a level than with a character string that has to be parsed. Strings have their place in more complex communications.
When writing a module, I try to build in all possible functionality that a piece of hardware could provide in the current project and those in the future.
It isn't very often I write a do_push on a TP, although it has come in handy on a few occasions. Far more often I do a do_push_timed to a virtual with a constant identifier, like this: do_push_timed(vdvRcvr, cnPowerOn, 1). No need to look anything up to determine what is going on.
Paul
OK, this is where I am. I ditched everything I had up to this point and started with a clean slate. I wrote a comm module for an NEC LCD4010, and right now I am working with volume ramping. I created the comm module so that in the main program or include (or wherever the proper place for interfacing with a module is), the command sent to the virtual LCD is VOLUME-xx. When that is passed to the module, it strips VOLUME- and uses the digits to create the volume level strings. Inside the comm module I have all of the regular data events for the LCD, like setting the baud rate, etc. I also created a buffer for the real LCD to parse reply strings and pass that info back to a text button on the TP, so if the volume is 50, the text button displays 50. Everything works as is.

I am a little confused on how to interact with it to keep it entirely "modular". There are no button events in the comm module; the button events, I think, should either be in a separate AXI or in the main program, and wherever they are, that is where the SEND_COMMANDs would originate. The feedback is what I am hung up on. Where should the comm module pass the feedback? Would it be better if I made my comm module channel driven? Things like input source or on/off states would be easy to handle via channel events, but what about variable text events, like driving a bargraph based on the current volume setting?
You could do this same thing if you want to send out text status of the display. But as for channel status, I typically use a channel on the real device. AMX uses channels on the virtual device. I use channels on the virtual device to control the LCD and I use send_commands to the virtual device.
I believe this allows for a fully independent module.
Main Code:
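The posted listing was not preserved; this sketch reconstructs it from the surrounding discussion. The channel numbers are guesses, and nVol is deliberately sent before it is incremented, matching what a later reply points out:

```netlinx
DEFINE_VARIABLE
VOLATILE INTEGER nVol    // current volume level

DEFINE_EVENT
BUTTON_EVENT[dvTP,24]    // volume up (channel number is a guess)
{
    PUSH:
    {
        SEND_COMMAND vdvLCD, "'VOLUME-',ITOA(nVol)"
        nVol++    // note: nVol is sent before this increment
    }
}

BUTTON_EVENT[dvTP,25]    // volume down
{
    PUSH:
    {
        SEND_COMMAND vdvLCD, "'VOLUME-',ITOA(nVol)"
        nVol--
    }
}
```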
Eventually I was going to put together a button array, but I had not done it yet. That is why I was sending a direct BUTTON_EVENT instead of a reference to an array.
Module:
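This listing was not preserved either; here is a hedged reconstruction of the module side. The fnSendVolume helper is hypothetical, standing in for whatever builds the hex string the NEC expects, and the TP text command is an invented example:

```netlinx
DEFINE_VARIABLE
VOLATILE CHAR cLCDBuffer[500]    // reply buffer for the real LCD

DEFINE_START
CREATE_BUFFER dvLCD, cLCDBuffer

DEFINE_EVENT
DATA_EVENT[vdvLCD]
{
    COMMAND:    // commands arriving from the main program or include
    {
        STACK_VAR CHAR cCmd[50]
        cCmd = DATA.TEXT
        IF(FIND_STRING(cCmd,'VOLUME-',1))
        {
            REMOVE_STRING(cCmd,'-',1)
            fnSendVolume(ATOI(cCmd))    // hypothetical helper: builds and sends the NEC hex string
        }
    }
}

DATA_EVENT[dvLCD]
{
    STRING:    // replies from the real LCD land in cLCDBuffer and get parsed here
    {
        // then the result is passed back out, e.g. as text for the TP volume button:
        // SEND_COMMAND dvTP, "'^TXT-10,0,',ITOA(nCurVol)"
    }
}
```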
That was round number 1. It worked great; it just did not feel like a true module to me, so I was going to redo it. My plan was to use CHANNEL_EVENTs to send the strings to the real LCD. The main program, through an include, would press the virtual buttons that control the actual SEND_STRINGs to the real LCD. Maybe I am skipping a step somewhere. My plan was to have the button array in an include. The buttons in the array could match up to the channels in the module or not; it really wouldn't matter. If dvTP,1 was pressed, that would activate whatever channel needed to be activated in the module. Something like:
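A sketch of that include-side idea (channel 24 as the module's volume-up channel is an assumption):

```netlinx
BUTTON_EVENT[dvTP,1]
{
    PUSH:
    {
        TO[vdvNEC_LCD, 24]    // channel follows the button press and release
    }
}
```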
Or ON it or OFF it or whatever. When you do it, you would send a channel event to a virtual device that controls the real LCD and you would issue SEND_COMMANDs to that virtual device? Something like:
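One guess at that shape: the include sends a command, and the module's command handler maps it onto a channel (the command names and channel numbers are assumptions):

```netlinx
// include side, inside some button handler:
//     SEND_COMMAND vdvNEC_LCD, 'VOLUME UP'

DATA_EVENT[vdvNEC_LCD]
{
    COMMAND:
    {
        SWITCH(DATA.TEXT)
        {
            CASE 'VOLUME UP':   { ON[vdvNEC_LCD, 24] }
            CASE 'VOLUME DOWN': { ON[vdvNEC_LCD, 25] }
            CASE 'VOLUME STOP':
            {
                OFF[vdvNEC_LCD, 24]
                OFF[vdvNEC_LCD, 25]
            }
        }
    }
}
```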
And in the module, 'VOLUME UP' would cause an ON of whatever channel ramps the volume? Then the real TP, I assume, would get its feedback from the channel actions of the module? I think you were referring to that earlier. Something like:
[dvTP, 1] = [vdvNEC_LCD,101]
As far as the volume control, it's probably far more effective to use two channels for volume ramping. Instead of sending a command to start and stop the ramp, just hold the channel with a TO command. The module code would probably have a statement in a repeating timeline or DEFINE_PROGRAM like this...
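One sketch of that repeating check; the TL_RAMP timeline, the channel constants, and the fnSendVolume helper are all assumptions:

```netlinx
DEFINE_CONSTANT
VOL_UP  = 24    // assumed ramp channels on the virtual device
VOL_DN  = 25
TL_RAMP = 1

DEFINE_VARIABLE
VOLATILE INTEGER nVol

DEFINE_EVENT
// assumes TIMELINE_CREATE started TL_RAMP repeating every ~300 ms in DEFINE_START
TIMELINE_EVENT[TL_RAMP]
{
    IF([vdvNEC_LCD,VOL_UP] && nVol < 100)
    {
        nVol++
        fnSendVolume(nVol)    // hypothetical helper that sends the NEC volume string
    }
    ELSE IF([vdvNEC_LCD,VOL_DN] && nVol > 0)
    {
        nVol--
        fnSendVolume(nVol)
    }
}
```

From the include, the button's PUSH just does TO[vdvNEC_LCD,VOL_UP], and the ramp stops when the button is released.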
The ramp occurs when the channel goes high.
Please keep in mind that you as a programmer can do whatever you want to do with modules. AMX has set standards for module creation. I don't always adhere to them, but I think it is wise to interact with a real device through a virtual device using channels and send_commands. The module interacts with the real device through send_strings and manages statuses with channels on the real device. Channels reported to the programmer should be on the virtual device; for example, channel 255 on the virtual device reports the power status of displays (ON or OFF). Notice I make no mention of a touch panel. That functionality is left for the UI module. (AMX)
In your 1st code example you're not incrementing your nVol variable prior to your send_command, but I'm sure you've already noticed. I usually use arrays too, but they aren't all that necessary for most modules; since we have so many ports available, most devices with a lot of buttons won't usually be combined with another device on the same port, so button channel number changing is rarely needed. I'll often just do this:
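A guess at the pattern being described: list every channel explicitly instead of using the wildcard channel 0 (the channel range is an assumption; fill in the real one):

```netlinx
DEFINE_VARIABLE
// every channel the module listens for, listed explicitly
VOLATILE INTEGER nBtns[] = {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20}

DEFINE_EVENT
BUTTON_EVENT[dvTP, nBtns]
{
    PUSH:
    {
        SEND_STRING 0, "'Push on channel ',ITOA(BUTTON.INPUT.CHANNEL)"
    }
    HOLD[5,REPEAT]:    // HOLD works here, unlike with BUTTON_EVENT[dvTP,0]
    {
        SEND_STRING 0, "'Holding channel ',ITOA(BUTTON.INPUT.CHANNEL)"
    }
}
```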
And trap all the button pushes and it doesn't have the hold and other problems that BUTTON_EVENT[dvTP,0] has.
Either way works fine, whether it's a channel_event or button_event. The last module I did, I used the do_push_timed that I mentioned earlier to control the module, and then channel and level events in the module to handle my feedback back to the main/AXI. I guess the only reason I used the button_event method was to avoid a timeline in the module. I think I was just hell-bent on getting a hold to work in the module.
In the real code, nVol is a value that is sent to a call that increments or decrements it. It then gets sent to a function that does all of the combobulations and the function sends it to the LCD as a hex string in the format the NEC is looking for. Sorry if I caused any confusion.
Thanks everyone for all of the help. I am going to let all of this simmer in my head for a bit. I'll send my next revision tomorrow.