
Efficiency of running code in DEFINE_PROGRAM


Comments

  • a_riot42 Posts: 1,624
    Nerieru wrote: »
    Glad you responded, but that wasn't what I was saying/asking.

    I accidentally quoted you when I was really just responding to the thread, my mistake.
    Paul
  • ROO Posts: 46
    Event driven FB -
    Nerieru wrote: »
    Yeah this is also what makes the programming complicated and fun to do :P. But do we ever get the time to make it as brilliant as we want it to be? :(

    On that note I wonder how you guys handle feedback? True return events from functions? Or through Strings you get back from devices? Of course it depends on the situation, but imo feedback through rs232 can be slow depending on the device and if it's too slow it looks as if the button wasn't triggered etc.

    I'd like to hear what you guys generally do.

    I don't use the Define_Program section much at all. I usually write feedback functions that are called when the page or popup is opened, and button feedback immediately in button events. If the device responds (maybe eventually) and the response conflicts with the internal states of the program, I may handle it as an error, but not always. Since some devices can be externally controlled, I do use timeline events to update internal system variables to ensure the program stays in sync. Power on IR devices monitored through an IO (PCS2 or VSS2) helps keep the ON/OFF state straight, but otherwise they're unknowns anyway.
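
    Something along these lines for the immediate button feedback (the device names and channel number here are just placeholders, not my exact code):
    DEFINE_EVENT
    BUTTON_EVENT[dvTP, 1]    // e.g. a display power button
    {
        push:
        {
            ON [dvTP, 1]                             // light the button immediately
            SEND_STRING dvDisplay, "'POWER ON',$0D"  // then fire off the command
        }
    }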

    I figure the control system and devices really work when you send them a command, so why wait on a device response? Feedback to an operator on a TP that something happened when he pushed the button is critical to overcome the "human pushes the button 2-N times" syndrome :) I believe it gives the user confidence that the program and system are responding.

    I've also used tri-state button feedback on projectors for cooldown period lockouts and such, so the timeline feedback is needed to request the state (usually every second) until I time out or the projector responds with an expected result. I usually have one timeline running for a second count and several timeline event processors, but I'll create a unique timeline when I think the processing of the events needs priority or special handling.
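
    The one-second poll looks roughly like this (the timeline ID, device, and query string are placeholders):
    DEFINE_CONSTANT
    long TL_SECOND = 1

    DEFINE_VARIABLE
    volatile long lSecondTimes[] = {1000}   // fire every 1000ms
    volatile integer bCoolingDown           // set when the projector is told to power off

    DEFINE_START
    TIMELINE_CREATE(TL_SECOND, lSecondTimes, 1, TIMELINE_RELATIVE, TIMELINE_REPEAT)

    DEFINE_EVENT
    TIMELINE_EVENT[TL_SECOND]
    {
        if (bCoolingDown)
            SEND_STRING dvProjector, "'POWSTATUS ?',$0D"  // keep asking until it answers or we time out
    }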

    Most of the time the program isn't doing anything but waiting on the user to touch a button, so the Mega processor isn't being burdened at all. TP response, page flips, popups, and button feedback are my priority. Everything that happens then uses the heck out of the processor for a couple hundred milliseconds.

    ROO
  • PhreaK Posts: 966
    ROO wrote: »
    TP response, page flips, popups, and button feedback are my priority.
    Nice! That's exactly the way it should be. In terms of comms timing the user is (generally) the fussiest device in the system.

    For most of the UI feedback here, where possible, the interface state will change straight away to reflect the action the user requested. Everything is then queued out to the devices to make them do their thing as quickly as possible. If this fails (i.e. the device response times out, etc.) it will be retried until the devices do what they are told (up to a limit of attempts); if that still fails, an alert is flagged and the UI changes to reflect the actual system state. So the basic flow is:
    User action -> update UI based on intended state -> set devices to intended state -> wait -> update UI with actual state

    The easiest example of this is, say, a volume control. The user manipulates the control, which reflects their intended action in (more or less) real time on the interface. As this happens the device is told to move towards the target level; then, once the interaction is finished, everything gets time to breathe and receive feedback (say 800ms) before the level on the UI is updated with the actual device level. This way the user always gets a nice experience, the device does what the user is asking it to do (even if this doesn't happen straight away) and the UI always remains in sync with the system.
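
    In sketch form (the device, level number, and protocol string are made up for illustration):
    DEFINE_VARIABLE
    volatile integer nTargetVol
    volatile integer nActualVol   // tracked from the amp's feedback in its DATA_EVENT

    DEFINE_EVENT
    LEVEL_EVENT[dvTP, 1]    // user drags the volume bar
    {
        nTargetVol = level.value
        SEND_LEVEL dvTP, 1, nTargetVol                    // reflect the intent immediately
        SEND_STRING dvAmp, "'VOL',itoa(nTargetVol),$0D"   // tell the device to chase it
        cancel_wait 'VOL_SYNC'
        wait 8 'VOL_SYNC'                                 // ~800ms of breathing room
        {
            SEND_LEVEL dvTP, 1, nActualVol                // then sync the UI to the real level
        }
    }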

    As far as the DP argument goes, I try to avoid it like the plague. If there's a need to throw something together quickly for some basic UI channel updates, I'll set up a timeline to run at around 70ms (which sits just below the good ol' 100ms interruption barrier) and update the UI based on tracked boolean values. This way it can be stopped and started as required, and it keeps everything a bit neater.
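
    The timeline version looks something like this (the IDs and channel numbers are placeholders):
    DEFINE_CONSTANT
    long TL_UI_FB = 100

    DEFINE_VARIABLE
    volatile long lUiFbTimes[] = {70}   // just under that 100ms threshold
    volatile integer bSystemOn
    volatile integer bMuted

    DEFINE_START
    TIMELINE_CREATE(TL_UI_FB, lUiFbTimes, 1, TIMELINE_RELATIVE, TIMELINE_REPEAT)

    DEFINE_EVENT
    TIMELINE_EVENT[TL_UI_FB]
    {
        // drive channel feedback from the tracked booleans
        [dvTP, 1] = bSystemOn
        [dvTP, 2] = bMuted
    }
    TIMELINE_PAUSE(TL_UI_FB) and TIMELINE_RESTART(TL_UI_FB) give you the stop/start control that DEFINE_PROGRAM can't.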
  • mpullin Posts: 949
    PhreaK wrote: »
    As far as the DP argument goes, I try to avoid it like the plague. If there's a need to throw something together quickly for some basic UI channel updates, I'll set up a timeline to run at around 70ms (which sits just below the good ol' 100ms interruption barrier) and update the UI based on tracked boolean values. This way it can be stopped and started as required, and it keeps everything a bit neater.
    How does a structure that requires you to write code for it in three different places 'keep everything a bit neater'?
  • PhreaK Posts: 966
    mpullin wrote: »
    How does a structure that requires you to write code for it in three different places 'keep everything a bit neater'?

    It separates out the UI and core system code. Say you have the value
    uSystemState.uVC.callActive
    
    this value could be updated from a number of places (outgoing call, incoming call, call disconnection, etc.) and used in a number of places (signal routing, system utilization tracking, UI, etc.). The UI code then just references
    [vcStatusIndicator] = uSystemState.uVC.callActive
    
    in the UI update timeline.

    When you are debugging you can also add uSystemState to your watch list and have a complete hierarchical view of what your code is interpreting as the current system status.
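
    In sketch form (the structure members and indicator channel here are assumptions for illustration):
    DEFINE_TYPE
    structure _VcState
    {
        integer callActive
    }
    structure _SystemState
    {
        _VcState uVC
    }

    DEFINE_VARIABLE
    volatile _SystemState uSystemState
    volatile devchan vcStatusIndicator[] = { {dvTP, 10} }   // hypothetical channel

    DEFINE_EVENT
    TIMELINE_EVENT[TL_UI_FB]    // the UI update timeline
    {
        [vcStatusIndicator] = uSystemState.uVC.callActive
    }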
  • sonny Posts: 208
    Just found this little nugget in the system I've taken over...any thoughts on the overall behavior? I know there is a 'wait' stack that is monitored, but since this wait is in a call, I'm not sure if the entire program is stopped waiting on the return from the call?
    DEFINE_CALL 'A_LONG_WAIT'
    {
         if (bSomeState)
         {
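                // note: WAIT times are in 0.1s units, so 180000 = 18,000 seconds (5 hours)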
                wait 180000
                {
                      bSomeState = 0
                }
         }
    }
    
    .....
    
    DEFINE_PROGRAM
    
    if (bSomeState)
        CALL 'A_LONG_WAIT'
    
    
  • ericmedley Posts: 4,177
    sonny wrote: »
    Just found this little nugget in the system I've taken over...any thoughts on the overall behavior? I know there is a 'wait' stack that is monitored, but since this wait is in a call, I'm not sure if the entire program is stopped waiting on the return from the call?
    DEFINE_CALL 'A_LONG_WAIT'
    {
         if (bSomeState)
         {
                wait 180000
                {
                      bSomeState = 0
                }
         }
    }
    
    .....
    
    DEFINE_PROGRAM
    
    if (bSomeState)
        CALL 'A_LONG_WAIT'
    
    

    hmmm.
    I just ran it on a master here and it didn't blow anything up.
  • vining Posts: 4,368
    Shouldn't have any negative effect at all. Waits don't hold up anything but the code that executes when the wait expires.
  • sonny Posts: 208
    vining wrote: »
    Shouldn't have any negative effect at all. Waits don't hold up anything but the code that executes when the wait expires.

    It just seems like this would be more of a hung call with respect to mainline code, as opposed to a wait. I'm having a long processor lockup that seems consistent with the set of circumstances around this.

    I've removed it, so I'll know soon.
  • vining Posts: 4,368
    The call would add a little more overhead than just checking the value of bSomeState, but it's only checking the same value again and then checking the wait queue to see if this wait is already pending and, if not, adding it to the queue. If you want to keep the processor from checking the boolean value of bSomeState and running the call on every pass while it's true, just put it behind a wait 2 in DP so it only checks every 200ms.

    I don't see the need for anything being in DP without it being behind at least a wait 1. It's really just as good as having a timeline; the timing just isn't as reliable, but for what DP is used for it shouldn't matter. If it runs every 100ms, or 101ms, or 98ms, it doesn't change anything we'll notice for feedback.
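
    i.e. something like:
    DEFINE_PROGRAM

    wait 2          // re-queued each time it expires, so the body runs at most every 200ms
    {
        if (bSomeState)
            CALL 'A_LONG_WAIT'
    }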
  • sonny Posts: 208
    My concern was that the CALL wouldn't complete until the wait had expired. I didn't realize the system just set that block aside and moved on. I don't use waits much outside of timing a sequence of IR commands, for example.

    Thanks. Back to the drawing board....
  • vining Posts: 4,368
    My understanding of waits is that they are all given a unique ID, either by us or by the master. When the processor comes across a wait, it first goes to a special queue (the wait queue) where all pending waits are placed and runs through all the waits already in this queue looking for an ID match. If it finds a match, it returns to the location where it ran into the wait and then skips over the code associated with that wait. If it doesn't find a match, it adds this wait to the queue, logging its ID, a location pointer, and the wait time. Again it returns and skips over the code associated with this wait. Constantly running in the background, the system timeline checks the wait queue every 100ms and decrements the stored time value of each wait in the queue by 100ms; when this time reaches 0, it executes the code at that wait's pointer from the opening brace to its matching closing brace (or one line if there's no opening brace directly after the wait) and the queue entry is cleared.

    At least this is how it works in my head. Other heads may vary! :)
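
    That ID is also what you work with when naming waits yourself; a small illustration (the name and command string are made up):
    // an explicit ID means only one copy of this wait can ever be pending in the queue
    wait 50 'PROJ_POLL'
    {
        SEND_STRING dvProjector, "'POWSTATUS ?',$0D"
    }

    // and a pending entry can be pulled out of the queue early by the same ID
    CANCEL_WAIT 'PROJ_POLL'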
  • Great explanation of how a WAIT is processed
    vining wrote: »
    My understanding of waits is they are all given a unique ID either by us or the master...
    Nice job, vining!
  • Joe Hebert Posts: 2,159
    PhreaK wrote: »
    I'll set up a timeline to run at around 70ms (which sits just below the good ol' 100ms interruption barrier)
    Can you expand on that? What barrier are you referring to?
  • Hedberg Posts: 671
    Joe Hebert wrote: »
    Can you expand on that? What barrier are you referring to?

    Thank you for asking that question. I've been wondering about this also, but was afraid of looking stupid.
  • vining Posts: 4,368
    Hedberg wrote: »
    Thank you for asking that question. I've been wondering about this also, but was afraid of looking stupid.

    x 2.......................................
  • PhreaK Posts: 966
    Joe Hebert wrote: »
    Can you expand on that? What barrier are you referring to?

    It's based on the human information processor model. The mean perceptual processor cycle time sits at around 100ms for adults; that is, if two events occur within 100ms, most adult humans will perceive them as a single event. As the time between the events increases, it starts to cause a disconnect between the action and the reaction.

    In the case of an AMX UI there will be additional delays between the button state change in code and the change happening on the display; however, the time taken for the motor processor to deliver the haptic feedback of the touch to your brain counteracts this nicely. I haven't done any decent research on the comms timing from a master to a TP, but using a timeline at 70ms allows 100ms for the code to do its thing (70ms motor processor cycle time + 100ms perceptual processor cycle time) before most users will start feeling as though there's any lag.
    Hedberg wrote: »
    Thank you for asking that question. I've been wondering about this also, but was afraid of looking stupid.
    "He who asks a question once is a fool for five minutes; he who does not ask a question remains a fool forever."
  • ericmedley Posts: 4,177
    PhreaK wrote: »
    It's based on the human information processor model. The mean perceptual processor cycle time sits at around 100ms for adults; that is, if two events occur within 100ms, most adult humans will perceive them as a single event. As the time between the events increases, it starts to cause a disconnect between the action and the reaction.

    In the case of an AMX UI there will be additional delays between the button state change in code and the change happening on the display; however, the time taken for the motor processor to deliver the haptic feedback of the touch to your brain counteracts this nicely. I haven't done any decent research on the comms timing from a master to a TP, but using a timeline at 70ms allows 100ms for the code to do its thing (70ms motor processor cycle time + 100ms perceptual processor cycle time) before most users will start feeling as though there's any lag.


    "He who asks a question once is a fool for five minutes; he who does not ask a question remains a fool forever."

    You know, this is fascinating... I've always known this myself by trial and error when doing feedback on panels. I just tweaked and played with the timing until I found a place where it seemed to work. But I figured it was just 'me' and my silly perception. I never thought that there was any research to it. If I had bothered to look it up, I might have saved myself a lot of trial-and-error time. Thanks for sharing!
    e