When does DEFINE_PROGRAM run
PhreaK
Tech note #993
Some interesting notes - most significantly DEFINE_PROGRAM will execute *every time* a global variable is written to.
Comments
The only trigger that bugs me is the undefined event trigger. That means I will have to go back into my systems and add a lot of empty handlers (and a ton of channel events) to really take advantage of the info in the tech note.
I doubt that this will gain you anything given that DEFINE_PROGRAM also runs when all event processing is completed.
It's also not really an issue if you keep it down to a minimum, which is best practice anyway.
I do agree though that as long as you don't get too zealous in what you're doing and aren't repeating yourself a zillion times a second, it shouldn't be too much of a problem.
Kostas.
You should see what it looks like with a Nuvo tuner module from AMX . . . every 5 seconds the CPU would spike to 55% - it was interesting to look at on a graph. I took that baby out and put my own module in . . . 8% all the time. Now granted, it was a Duet module - but still. The idea is the same: their define_program was effectively an internal timer that ran every five seconds to GET the data . . . so much for their argument that Duet is more CPU friendly than NetLinx.
The note recommends putting code in DEFINE_PROGRAM within a small wait, which is what I have been doing for quite a while now. After reading it, I checked my code to make sure nothing was outside of a wait in DEFINE_PROGRAM; in fact, my DEFINE_PROGRAM section is all waits. The first fatal mistake I ever made in a NetLinx program when I started was a SEND_COMMAND to a touchpanel in DEFINE_PROGRAM to set button text, just to make sure it was always up to date. Ah, those were the days.
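A minimal sketch of what "all waits" looks like (the device, channels and the half-second interval here are just for illustration, not anything from the tech note):

DEFINE_DEVICE
dvTP = 10001:1:0                        // placeholder touch panel

DEFINE_VARIABLE
VOLATILE INTEGER nSystemPower           // placeholder globals that drive feedback
VOLATILE CHAR cRoomName[32]

DEFINE_PROGRAM
WAIT 5                                  // half a second; re-queues itself each time it expires
{
    [dvTP, 1] = (nSystemPower)          // variable-driven button feedback
    SEND_COMMAND dvTP, "'^TXT-2,0,', cRoomName"   // text refresh, now rate-limited instead of running every pass
}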
It seems that they write the modules with the idea that only one module will run per controller. And how many times I have found myself following your path... writing my own modules for a device/protocol instead of using the provided ones...
Kostas
I'm working on an old job where we're upgrading two old G3 panels to iPads, so I checked my CPU usage - almost 90% while running idle! So I removed the several FOR loops in there and used DEFINE_PROGRAM itself as the loop.
Attached are my before and after results for CPU usage.
I should mention - I thought I got all the FOR loops - well there were two I missed, so there's a before, middle and after.
(The big hump in the beginning of the final one is part of the startup process - it wasn't finished - ouch, huh?)
Summary:
With FOR loops: ~90%
Two FOR loops remaining: ~25%
Zero FOR loops: ~10%
@jjames Those benchmarking graphs look great - what are you using?
Just another one of my 'for-fun' projects; NetLinx pays the bills, R&D keeps me sane.
Not to diminish the value of efficient program design, but let's not freak out and frantically go looking to fix old stuff that isn't broken.
For data purposes, I had:
DEFINE_PROGRAM
// loop counters and bounds were declared in DEFINE_VARIABLE; bodies omitted here - the point is the structure
FOR (nA = 1; nA <= nCountA; nA++)                    // loop a
{
    FOR (nA1 = 1; nA1 <= nCountA1; nA1++) { }        // loop a1
    FOR (nA2 = 1; nA2 <= nCountA2; nA2++) { }        // loop a2
}
FOR (nB = 1; nB <= nCountB; nB++)                    // loop b
{
    FOR (nB1 = 1; nB1 <= nCountB1; nB1++) { }        // loop b1
    FOR (nB2 = 1; nB2 <= nCountB2; nB2++) { }        // loop b2
    FOR (nB3 = 1; nB3 <= nCountB3; nB3++) { }        // loop b3
}
Getting rid of the loops with nested loops helped with the CPU (obviously). If the CPU was that high while idle, it's possible that during heavier events the CPU could climb even higher. What this means - not sure. I think I'm going to run some tests to try to get the CPU as high as possible, then throw in some intensive functions and call them to see what happens.
The way I read that is it will only run if the global variable is contained within DEFINE_PROGRAM.
Otherwise a global won't trigger DEFINE_PROGRAM.
Anyone disagree?
DEFINE_PROGRAM section. After all, nowadays I am writing my feedback in functions that run on events.
Kostas
Not disagreeing, but having a different understanding.
They use the explanation that many programmers are syncing button states based on variable values, and that would be a good reason to run DEFINE_PROGRAM "just in case" for ANY variable that changes, whether it's referenced in DEFINE_PROGRAM or not.
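The pattern being described is the plain variable-driven feedback most of us started with - something like this (the names are illustrative; dvTP and the globals would be declared elsewhere):

DEFINE_PROGRAM
// Feedback read straight from globals. The firmware has no way of knowing
// which variables mainline depends on, so re-running it after ANY variable
// write is the only safe way to keep this style of feedback current.
[dvTP, 1] = (nPowerState)
[dvTP, 2] = (nMuteState)
[dvTP, 3] = (nCurrentSource == 2)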
"What actually is"
"What is explained to be like"
"What we understand out of it"
The very reason for having a lengthy dissertation on something that's supposed to throw light on everything :-)
..there is no try..
I read it as any global. I tend not to use DP, so if it's only referenced variables that'd be all good; however, if it is in fact any global, it would explain some odd behavior I've seen in systems that utilized some closed-source modules which may have had expensive code in DP.
Since reading TN870, which relates to creating a queue and using it to send strings to a device, I have been using the method described in the tech note:
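A minimal sketch of the pattern, assuming the usual TN870 shape and the names mentioned below; dvLutron, the $0D delimiter and the busy window are placeholders, and nBusy stands in for the 'Busy' flag:

DEFINE_DEVICE
dvLutron = 5001:1:0                     // placeholder serial port

DEFINE_VARIABLE
VOLATILE CHAR cLutronQue[2048]          // pending command strings
VOLATILE INTEGER nBusy                  // the 'Busy' flag

DEFINE_FUNCTION AddToLutronQue(CHAR cCmd[])
{
    cLutronQue = "cLutronQue, cCmd"     // appending to a global - mainline will run again
}

DEFINE_FUNCTION SendLutronQue()
{
    STACK_VAR INTEGER nPos
    nPos = FIND_STRING(cLutronQue, "$0D", 1)   // commands assumed CR-terminated
    IF (nPos)
    {
        nBusy = 1
        SEND_STRING dvLutron, GET_BUFFER_STRING(cLutronQue, nPos)
        WAIT 20 { nBusy = 0 }           // crude pacing; placeholder interval
    }
}

DEFINE_PROGRAM
IF (!nBusy && LENGTH_STRING(cLutronQue))
    SendLutronQue()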
Since this method is calling a function that is writing a global variable, does that mean that it is force-looping itself? Not to mention calling DP on the button release and the channel events...
If I read this correctly, the first call to 'AddToLutronQue' causes DP to run by writing a global var; DP in turn calls 'SendLutronQue', which reads/modifies the global var 'cLutronQue' (causing DP to run), then sets the global var 'Busy' (causing DP to run), then turns 'Busy' off, which causes DP to run yet again....
How many times does this method of queueing cause DP to run per instance of adding to the queue?
Lately, I have been using a timeline to trigger the function call instead of placing it in the DP section.
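Something along these lines (the timeline ID and the 200 ms repeat interval are arbitrary choices; it reuses the names from the queue sketch above):

DEFINE_CONSTANT
LONG TL_QUEUE = 1

DEFINE_VARIABLE
LONG lQueueTimes[] = {200}              // service the queue every 200 ms

DEFINE_START
TIMELINE_CREATE(TL_QUEUE, lQueueTimes, 1, TIMELINE_RELATIVE, TIMELINE_REPEAT)

DEFINE_EVENT
TIMELINE_EVENT[TL_QUEUE]
{
    IF (!nBusy && LENGTH_STRING(cLutronQue))
        SendLutronQue()                 // same queue logic, just not driven from mainline
}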
Do it when needed . . .
Back in the "old days" when all sorts of crap lived in D_P, this counter would read off some abysmally low numbers (under 10 wasn't unusual), and there were definitely performance issues with those systems. I have put this same counter into problematic systems where I have been contracted to "fix the system" and, without fail, will see very low reported values.
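The counter itself is nothing more than a pass counter in mainline, snapshotted periodically - a minimal sketch (the names and the one-second window are illustrative):

DEFINE_VARIABLE
VOLATILE LONG lPassCount                // bumped on every pass through mainline
VOLATILE LONG lPassesPerSecond          // last one-second snapshot

DEFINE_PROGRAM
lPassCount++                            // note: writing a global here also re-triggers mainline
WAIT 10                                 // snapshot once per second
{
    lPassesPerSecond = lPassCount
    lPassCount = 0
}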
The current code model has minimal programming in D_P, with all UI feedback triggered only when a change that actually affects it has occurred (device events and UI device and room focus changes calling common functions), and the counter now reports significantly "improved" values (>1000 is the norm, with 2500 and higher observed) on large residential systems (24+ rooms with UIs) with great user-perceived performance.
The caveat being that CPU usage is consistently "maxed" out - 99.x% instantaneous readings, 100.0% 30-second max values and ~10% 30-second averages.
After commenting out that "mainline" counter increment, CPU usage consistently shows a 1.8% 30-second average and a 3.5% 30-second max.
Question is - does it matter? There is a valid use case for having the performance parameterized then reported and logged - but which performance parameter?