WAIT Behavior Question
TurnipTruck
Posts: 1,485
Greetings,
I am trying to understand the behavior of statements within WAITs that contain variables.
For example:
nX=0
WAIT 10 nVALUE=nX
nX=1
If nX changes values after the start of the wait, but before the execution of the delayed operation, will nVALUE equal 0 or 1? And does this rule apply across all variable types, waits, etc?
Thanks.
Comments
If it were in the DEFINE_PROGRAM section, then nX would constantly be changing from 0 to 1 to 0 to 1 over and over, and the result would depend upon which clock cycle the wait hit on.
However, if this chunk were part of a command or event, then nVALUE=nX will equal 1.
Since a Wait will never expire in the middle of mainline, you can expect that nValue will always be 1 in this example, even if this code is in DEFINE_PROGRAM.
--D
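To make that concrete, here is a minimal sketch of the DEFINE_PROGRAM case (untested; the VOLATILE declarations are just my assumption about how the variables were defined):

DEFINE_VARIABLE
VOLATILE INTEGER nX
VOLATILE INTEGER nVALUE

DEFINE_PROGRAM
nX = 0
WAIT 10 nVALUE = nX    // queued here; the wait list is only checked after this mainline pass finishes
nX = 1                 // so nX is already 1 when the queued statement finally runs, and nVALUE becomes 1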
I should have said that the above code would be within an event that would run once.
Yes, you are correct that this will toggle nFlash. However, in the code presented:
nValue will never be equal to 0 in this context; it has no chance at all to become 0. When the WAIT is encountered, it is placed into a queue, and that queue is checked for expired waits only after mainline has executed. The line nValue=nX will therefore always resolve to 1, because mainline will have completed (and set nX to 1) before the waits are evaluated.
--D
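Since the original code runs once inside an event, a rough sketch of that case looks like this (dvTP and button channel 1 are placeholder names, not from the original post):

DEFINE_DEVICE
dvTP = 10001:1:0           // hypothetical touch panel

DEFINE_VARIABLE
VOLATILE INTEGER nX
VOLATILE INTEGER nVALUE

DEFINE_EVENT
BUTTON_EVENT[dvTP,1]
{
    PUSH:
    {
        nX = 0
        WAIT 10            // one second; the block below is queued, not executed now
        {
            nVALUE = nX    // evaluated when the wait expires, well after nX = 1 has run
        }
        nX = 1             // the event handler always finishes before any wait is serviced
    }
}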
This usually doesn't matter, but it is important to understand in situations like this one. It means that you can regard each chunk of code as atomic - that is, guaranteed to complete without any other chunk of code interfering with the variables your code is using.
Whenever this situation arises in my code, I always write a big fat comment to clarify it, as it is easy to miss.
By "chunk of code" I mean the code you find in define_start, define_program, and the various define_events. A function call is not a separate chunk. When you code a wait, you are creating another separate chunk of code which is internally atomic but is independent of the code on either side of it.
The runtime system presumably IS multithreaded, to allow everything except the code you write to do stuff in the background. This might imply that (e.g.) the state of a buffer could change halfway through the code that is using it. Can anyone clarify that?
While the OS runs in its own thread and can service the outside world at the same time our program is looping around, any messages that the OS generates have to be put into the queue first before our program can see or handle them. Our program handles one message (event) at a time, first come, first served. So even though data might be waiting at the door, it doesn't affect our program, because we have a built-in buffer (the queue).
This is only speculation, but I think CREATE_BUFFER in DEFINE_START registers an event handler before any of our defined event handlers. Whatever the case, I don't believe that the buffer contents will be altered by any outside force other than a message from the queue.
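As a concrete illustration of that pattern (the device number, buffer size, and carriage-return delimiter are assumptions for the sketch, not anything stated in the thread):

DEFINE_DEVICE
dvDev = 5001:1:0               // hypothetical serial device

DEFINE_VARIABLE
VOLATILE CHAR cBuffer[1024]

DEFINE_START
CREATE_BUFFER dvDev, cBuffer   // the system appends incoming bytes to cBuffer for us

DEFINE_EVENT
DATA_EVENT[dvDev]
{
    STRING:
    {
        STACK_VAR CHAR cLine[256]
        // One queued message is handled at a time; per the explanation above,
        // new data should not alter cBuffer in the middle of this handler
        WHILE (FIND_STRING(cBuffer, "$0D", 1))
        {
            cLine = REMOVE_STRING(cBuffer, "$0D", 1)
            SEND_STRING 0, "'Received: ', cLine"
        }
    }
}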
End of long answer...