
Why do I have to cycle my power after installing or updating my modules?

About 75% of the time I must go to the NetLinx processor and manually reset the power after loading a new module. It seems like the processor locks up. Any thoughts on what I'm doing wrong?

Comments

  • mpullin Posts: 949
    Which modules are you installing? Which controller are you using? What happens if you reboot the master via NetLinx Studio?
  • bano Posts: 173
    Try using this file, if you haven't already.

    Good Luck!
  • bano wrote:
    Try using this file, if you haven't already.

    Good Luck!

    Better still, use these queue sizes, as posted in Tips and Tricks three days ago (a sketch of the CheckQueueSize() helper these calls rely on follows the listing):
    CheckThresholdSize(internal_threshold_index_interpreter         ,2000 ,'Interpreter'           )    
    CheckThresholdSize(internal_threshold_index_lontalk             ,50   ,'Lontalk'               )    
    CheckThresholdSize(internal_threshold_index_ip                  ,600  ,'IP'                    )    
    CheckQueueSize    (internal_queue_size_index_interpreter        ,3000 ,'Interpreter'           )
    CheckQueueSize    (internal_queue_size_index_notification_mgr   ,3000 ,'Notification Manager'  )
    CheckQueueSize    (internal_queue_size_index_connection_mgr     ,3000 ,'Connection Manager'    )    
    CheckQueueSize    (internal_queue_size_index_route_mgr          ,400  ,'Route Manager'         )    
    CheckQueueSize    (internal_queue_size_index_device_mgr         ,1500 ,'Device Manager'        )    
    CheckQueueSize    (internal_queue_size_index_diagnostic_mgr     ,500  ,'Diagnostic Manager'    )    
    CheckQueueSize    (internal_queue_size_index_tcp_tx             ,600  ,'TCP Transmit Threads'  )    
    CheckQueueSize    (internal_queue_size_index_ipconnection_mgr   ,800  ,'IP Connection Manager' )    
    CheckQueueSize    (internal_queue_size_index_message_dispatcher ,1000 ,'Message Dispatcher'    )    
    CheckQueueSize    (internal_queue_size_index_axlink_tx          ,3000 ,'Axlink Transmit'       )    
    CheckQueueSize    (internal_queue_size_index_phastlink_tx       ,3000 ,'PhastLink Transmit'    )    
    CheckQueueSize    (internal_queue_size_index_icsplontalk_tx     ,500  ,'ICSNet Transmit'       )    
    CheckQueueSize    (internal_queue_size_index_icsp232_tx         ,500  ,'ICSP 232 Transmit'     )     
    CheckQueueSize    (internal_queue_size_index_icspip_tx          ,500  ,'UDP 232 Transmit'      )    
    CheckQueueSize    (internal_queue_size_index_ni_device          ,500  ,'NI Device Manager'     )       
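
    The CheckThresholdSize() / CheckQueueSize() helpers themselves aren't reproduced in this thread; they're defined in that Tips and Tricks post. As a rough sketch only, CheckQueueSize() amounts to something like the following, where internal_get_queue_size() / internal_set_queue_size() are hypothetical stand-ins for whatever getter/setter the real helpers call:

    DEFINE_VARIABLE
    VOLATILE INTEGER nQueueResized   // set when a queue was grown; a reboot is then needed

    // Sketch only: grow one of the master's internal queues if its current
    // size is below the requested minimum. The getter/setter names here are
    // hypothetical; see the Tips and Tricks post for the actual bodies.
    DEFINE_FUNCTION CheckQueueSize(INTEGER nIndex, LONG lWanted, CHAR sName[])
    {
        STACK_VAR LONG lCurrent
        lCurrent = internal_get_queue_size(nIndex)     // hypothetical getter
        IF (lCurrent < lWanted)
        {
            internal_set_queue_size(nIndex, lWanted)   // hypothetical setter
            SEND_STRING 0,"'Queue [',sName,'] raised from ',ITOA(lCurrent),' to ',ITOA(lWanted)"
            nQueueResized = 1   // new sizes take effect after the next reboot
        }
    }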
    
  • davegrov Posts: 114
    Power Cycle

    I am just running an AMX Apex Destiny RS-232 module. It is an NI-2000 with the latest NetLinx Studio version.
  • davegrov wrote:
    I am just running an AMX Apex Destiny RS-232 module. It is an NI-2000 with the latest NetLinx Studio version.

    As noted above, set the modified thresholds. If your master runs Duet firmware (v3.xx), you also need to resize the Duet memory. This can be done either from a terminal (set duet mem) or automatically with the following few lines of code:
    DEFINE_START

    // Make sure at least 8 MB is allocated to the Duet virtual machine;
    // the new allocation only takes effect after a reboot.
    IF (DUET_MEM_SIZE_GET() < 8)
    {
        DUET_MEM_SIZE_SET(8)
        REBOOT()
    }
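
    Note the guard: DUET_MEM_SIZE_SET() only takes effect after the REBOOT(), so on the next startup DUET_MEM_SIZE_GET() returns 8, the IF fails, and the master boots normally instead of rebooting in a loop.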
    
  • DHawthorne Posts: 4,584
    I've found this sometimes to be a function of a large number of messages backlogging the queue on startup. 99% of my jobs never have the problem, but that last 1% is pretty consistent, and sometimes takes 2-3 cold boots to get running again. In every problem project there are multiple masters, many touch panels (more than ten), and at least a half-dozen modules running. Duet or non-Duet doesn't make a difference.

    I haven't been able to find a solution, but since these systems are completely stable once they get going, I haven't spent any time tracking it down. They are as optimized as I know how to make them. My strong suspicion is that the real issue is updates to the panels, not anything in the code itself, but I have no way of testing that at this point. If you don't have a large system where this is happening, telnet into the master when it starts and turn on the diagnostic messages; perhaps you will get a clue as to which devices or modules are clogging up the queue at startup.
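
    For reference, a session looks something like this (the address is a placeholder and the responses are paraphrased; msg on / msg off toggle the diagnostic stream on a NetLinx master's telnet session):

    telnet 192.168.1.100
    >msg on
    (diagnostic messages now stream to this session; watch which devices
     and modules flood the queues while the system starts)
    >msg off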