
HTML on panels instead of TPDesign, when/if is it coming?

Now that other brands are releasing HTML5/CSS3/JavaScript support on their panels, I hope that AMX will do the same.
Most customers want their panels to look like what they browse every day on their phone or tablet.
But it seems to be quiet about this; could we get an update, please?!

Comments

  • SVSi Panel Builder has supported HTML5/CSS/JavaScript for several years. Panel Builder is available on the N-Touch panels or the N-Command appliances. The UIs can be rendered on any device that has access to the server device and the appropriate credentials. There are in-class and on-demand training options for Panel Builder and JavaScript available from the learning portal.

  • John Nagy Posts: 1,734

    I see the N-Touch is set for end of life in 13 days, and the G5 line is the suggested replacement. So I'd guess the question really needs a response reflecting the plans for NetLinx system panels.

  • sentry07 Posts: 77

    Man, if Panel Builder is the future, I hope it's doing better than the offering we were given a couple of years ago. It was pathetic. We sank over 100 hours into a single-room project and eventually pulled all the N-Touch hardware and fell back on NetLinx hardware.

  • Panel Builder is not the future as far as the tool goes. But there is nothing preventing anyone from developing the HTML5/CSS/JavaScript within their preferred IDE and then leveraging PB simply as a deployment convenience. As far as the hardware goes, G5 panels have a built-in web browser app that does a fantastic job rendering UIs hosted on the N-Command appliance.

  • Dennis E. Posts: 27
    edited March 2019

    Crestron's new firmware on their panels actually runs a Chromium HTML renderer (WebView/Blink) plus an engine, "CRUX", for special signal tags to the master.
    You just use whatever tools you like to build web pages and then FTP the page files to the panel.
    Since the G5 panels run Android as far as I know, and any Android phone can run a browser that renders HTML5, JavaScript, CSS, etc. from files on the device, this shouldn't be that hard for the devs to set up.
    You could, for example, use XML to the WDMComm in the Duet/Java library to communicate with the master, similar to "CRUX".
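
    As a rough sketch of that signal-tag idea, the JavaScript below shows what page files FTP'd to a panel might contain: the page renders locally, and button presses are forwarded to the master as small semantic messages. The WebSocket port, the JSON message shape, and the data-channel convention are all invented for illustration; this is not the actual CRUX protocol or any real AMX API.

        // Hypothetical panel-side bridge. The page lives on the panel, but
        // every press is reported to the master as a small semantic event.
        // The endpoint and message format below are assumptions.
        const master = new WebSocket('ws://192.168.1.10:9000');

        function sendToMaster(channel) {
          // Send only "channel N was pushed"; the master decides what it means.
          master.send(JSON.stringify({ type: 'push', channel: channel }));
        }

        // Wire up every element marked like <button data-channel="1">Play</button>
        document.querySelectorAll('[data-channel]').forEach(function (el) {
          el.addEventListener('click', function () {
            sendToMaster(Number(el.dataset.channel));
          });
        });

        // Feedback from the master (e.g. light a button) arrives the same way.
        master.addEventListener('message', function (event) {
          const msg = JSON.parse(event.data);
          if (msg.type === 'feedback') {
            const el = document.querySelector('[data-channel="' + msg.channel + '"]');
            if (el) el.classList.toggle('active', Boolean(msg.state));
          }
        });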

  • Dennis E. Posts: 27
    edited March 2019

    One thing about this: if it were supported, you wouldn't need to use TPDesign at all to make a complete panel design, if you didn't want to.
    And the ability to use all the libraries available for JavaScript, on the backbone of HTML5, the DOM, JavaScript, and CSS3, would let you make the panel design look like any top modern website.
    You would either do the design:
    -in TPDesign, as previously,
    OR
    -as a complete design, with the tools you normally use to build a web page, FTP-uploading the files to the panel.

    If this were implemented, and Duet were pushed forward with documentation that made it possible for everyone to understand the architecture, then I really think that AMX could floor any other brand right now.

  • @Dennis E. said:

    -as a complete design, with the tools you normally use to build a web page, FTP-uploading the files to the panel.

    If this were implemented, and Duet were pushed forward with documentation that made it possible for everyone to understand the architecture, then I really think that AMX could floor any other brand right now.

    In that model, I would think the files would be loaded to a server (perhaps an NX master), then delivered and rendered in the panel's web browser as required. Ultimately the master hardware's purpose transitions from its current role to that of a simple web server and physical interface host, with most if not all of the logical programming instanced at the UI.
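
    A minimal sketch of that hosting model, written in Node.js purely for illustration (an NX master does not run Node, and the folder name and port are assumptions): the server's only job is handing out static UI files, which is the "simple web server" role described above.

        // Hypothetical static UI host: serve the panel's HTML/CSS/JS files
        // and nothing else; all control logic lives elsewhere.
        const http = require('http');
        const fs = require('fs');
        const path = require('path');

        const UI_ROOT = path.join(__dirname, 'panel-ui'); // assumed folder of UI files

        http.createServer(function (req, res) {
          const name = req.url === '/' ? 'index.html' : req.url.slice(1);
          fs.readFile(path.join(UI_ROOT, name), function (err, data) {
            if (err) { res.writeHead(404); res.end('Not found'); return; }
            res.writeHead(200);
            res.end(data);
          });
        }).listen(8080); // panels point their browser at http://<server>:8080/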

  • John Nagy Posts: 1,734
    edited March 2019

    @HARMAN_icraigie said:
    In that model, I would think the files would be loaded to a server (perhaps an NX master), then delivered and rendered in the panel's web browser as required. Ultimately the master hardware's purpose transitions from its current role to that of a simple web server and physical interface host, with most if not all of the logical programming instanced at the UI.

    Depending on the programmer, that's true now. Many systems depend on panel logic and "dumb" back ends that simply cause an I/O event upon single button codes. And many recent requests for help here on the forum over the last months and years have been asked from a perspective of "how do I make a button that will do X" instead of "how do I have the NetLinx do X", with a button simply telling it when to do it. Each UI is built to match only the task in the room it lives in... each and every panel in every room being a singular custom exercise.

    In my view, it's due to the universal remote mindset, where every button tells its own story and the NetLinx is just there to look up the ending. It's really rare to see systems embrace the power of the central intelligence that NetLinx makes possible.

    On the other hand, being a web server does not preclude central processing. There is no inherent difference between the central reception of, and reaction to, button commands whose definitions are stored in a panel project and button commands defined via a vended web page. Where these architectures part ways is when the web pages use Java, JavaScript, and other "local" processing to generate the actual commands at the UI, and pass them back for execution by a dumb back end.

    Such UI-logic-centric designs strain the custom work to the extreme. No two panels will ever be alike; any system change requires intense and repetitive rewrites of every panel.

    Imagine instead that the PLAY button in EVERY device page design in EVERY panel had one common code, let's say 1,1 in G4-speak. When a NetLinx gets a 1,1 from a panel, if the NetLinx knows what room the panel is currently controlling, and thus knows what source is in use there, and by the way knows the status and permissions of the user and the state of the source, and also knows how that source is communicated with via a definition library, then it can look up and issue whatever command, in whatever form, causes that source to engage PLAY. ALL panels then only need a means to issue the 1,1... and PLAY will happen as the user intends. That is exactly the same result whether the "page" with the "button" with code 1,1 got displayed from an internal repository ("panel project"), a vended web page, a button map relating to dry contacts, or a JSON payload. And if the source is ever traded out later for a wholly different device with IP vs. IR vs. REST vs. IFTTT relay... the panel never needs to know about the change... only the NetLinx needs the new device definition. One change, in one place, and a hundred panels, button arrays, or whatever the UI array is continue to work uninterrupted. Design it right, and such changes in infrastructure can occur without taking the system offline... it all just keeps working. What you have is a process engine that can adapt to change and apply it intelligently to action requests from unchanging UIs.
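
    A minimal sketch of that lookup model, in JavaScript only for readability (on a real system this would be NetLinx or Duet code on the master); every table, name, and transport function below is invented for illustration:

        // Central dispatch: the UI only ever reports (panelId, channel).
        // Which room, which source, and how that source is commanded are
        // all resolved here, in one place.
        function sendIr(device, code) { /* transport stub */ }
        function sendIp(host, cmd) { /* transport stub */ }

        const panelToRoom = { 'panel-12': 'theater' };
        const roomToSource = { theater: 'bluray' };
        const channelToAction = { 1: 'PLAY' }; // the universal 1,1 in G4-speak

        // Device definition library: one entry per device, any transport.
        const deviceLibrary = {
          bluray: { PLAY: function () { sendIr('bluray', 'PLAY'); } },
          mediaServer: { PLAY: function () { sendIp('10.0.0.5', 'play\r'); } },
        };

        function onButton(panelId, channel) {
          const source = roomToSource[panelToRoom[panelId]];
          const action = channelToAction[channel];
          const device = deviceLibrary[source];
          if (device && device[action]) device[action]();
        }

        // Trade the theater's Blu-ray for an IP media server later and no
        // panel changes at all -- only one table entry does:
        // roomToSource.theater = 'mediaServer';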

    Lather, rinse, repeat for all commands for all sources and system devices. Put the intelligence where the biggest brain is, design once, reap the benefits forever. And let UI devices just say, "the guy pressing my button where I am wants X to happen where he is, to what he is doing." And a vended web page is as fine for that as a TPDesign page.

    Now blend in the other benefits of a central intelligence that can weigh time of day, resources already in use, user preferences, conditional circumstances, location information, and so much more... that potentially affect what is the "RIGHT" command to issue upon any single UI event code... no matter where it comes from.

    This is the power of NetLinx. This is all lost if the UI is the brain. One of many, for the most part each unique and unable to know or understand interrelated dependencies anywhere else in the system. Local processing makes local answers. We live in a larger world than that.

    Sorry. Triggered. TL;DR: See CineTouch. (I'm promoting a model, not a product here...)

  • JasonS Posts: 229

    At some point you have to ask yourself why you would buy hardware from a control system vendor to serve web pages. Why limit yourself to the vendor's language, framework, licensing model, whims, etc.?

  • John Nagy Posts: 1,734

    @JasonS said:
    At some point you have to ask yourself why you would buy hardware from a control system vendor to serve web pages. Why limit yourself to the vendor's language, framework, licensing model, whims, etc.?

    One reason would be easy access to I/O. But you can get that these days without a central back end (Global Caché, IoT, IFTTT). So yeah.

  • Dennis E. Posts: 27
    edited March 2019

    The reason we upload the graphics to the panel is that the panels are static; there won't be any NEW panels connecting to the system the way a random laptop connects to a web server.
    That is also why we have dedicated panels with the graphics resident in them: they will not change.
    The master should be the central point for controlling the devices.
    The panel is the link between a user and the master.
    BUT it's much more efficient to have the code that handles the graphics in the panel itself, so instead of JUST uploading graphics you also upload the code that drives the panel.

    And today the best solution is HTML; think of the global development effort, with millions and millions of people pushing HTML technology forward.
    A web browser on any device loads all the HTML, JavaScript, CSS, and GRAPHICS data into the DOM and runs the code locally on the display device.
    And having the web server on the panel is better, because you don't need to send data over the network just to handle graphics that already live in the panel; otherwise you might as well have an expensive backlit graphical keypad.
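
    As a sketch of that "code in the panel" point: here is page navigation handled entirely in the panel's own browser, with zero network traffic. The data-page markup convention is invented for illustration; only events the master must act on would go out on the wire, as in the earlier WebSocket sketch.

        // Local-only UI logic: flipping pages touches the DOM, not the network.
        function showPage(pageId) {
          document.querySelectorAll('.page').forEach(function (p) {
            p.style.display = (p.id === pageId) ? 'block' : 'none';
          });
        }

        // Wire up buttons marked like <button data-page="sources">Sources</button>
        document.querySelectorAll('[data-page]').forEach(function (btn) {
          btn.addEventListener('click', function () {
            showPage(btn.dataset.page); // pure local work, zero packets
          });
        });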

    If you want to be able to browse to a "virtual" panel from a computer or the like, then you could have a web server on the master that could even serve multiple "virtual panels".

    Why try to invent something different when there is an open standard for this solution?

    The common use of an infrastructure is also the reason phones, tablets, etc. have taken off so fast in the last few years.
    Because so many people around the world are working on the same BASE environment.
    So instead of having a team of 5 people working in a 3-year cycle on improving things, you have millions of people.

  • JasonS Posts: 229

    There is a company that has been doing HTML/JavaScript AV control for a long time. Why isn't everyone using it? Why hasn't it pushed everyone else in this direction sooner?

  • I would definitely like to see support for HTML and, even more so, Duet. There are so many more possibilities with Duet. In the UK there is simply no support for it, which is a bit frustrating. I've worked in NetLinx for many years, and whilst it covers most avenues, there is some stuff you simply can't do - or do well.

  • a.theofilu Posts: 16

    There is a promising new project at https://github.com/TheLord45/amxpanel. The software lets you use any device with an HTML5 browser; it emulates a panel. However, you need a small device, which should run Linux, to run the software that serves the web pages to the browser and communicates with a controller. If you're interested, visit the page and read the documentation there.

  • ericmedley Posts: 4,177

    There is a subtle thing about the notion of making the JS on a panel actually 'be' the main code of the system: in AV, quite often there are many UIs connected to the central controller. In most large systems I do, the panels are essentially dumb devices that rely on the central controller to do everything. When I have needed to do systems with the central controller code residing in a UI panel, there is no built-in method for one panel's code to talk to another's, and I am forced to create an inter-panel API. With standard NetLinx code this inter-system API is already built and works quite well.
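
    As a sketch of what that hand-rolled inter-panel API tends to involve (everything here is hypothetical, using Node.js and the 'ws' package purely for illustration): some box still has to relay state between panels, which is exactly what NetLinx already provides for free.

        // Hypothetical inter-panel relay: every panel connects here, and any
        // state one panel reports is re-broadcast to all the others.
        const { WebSocketServer } = require('ws'); // npm install ws
        const hub = new WebSocketServer({ port: 9100 }); // port is an assumption

        hub.on('connection', function (panel) {
          panel.on('message', function (data) {
            // e.g. '{"room":"theater","source":"bluray"}' from one panel
            hub.clients.forEach(function (other) {
              if (other !== panel && other.readyState === 1) {
                other.send(data.toString());
              }
            });
          });
        });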

    In my world, very few AV systems are single-use/single-space. The current development environment we have for JS in AMX is pretty cumbersome and clunky. I know everyone picks on NetLinx as not being a 'real' language, but I feel like the same crowd is not recognizing how well it is adapted to its environment. I also feel like we control system programmers put too much stock in our UI features and not enough in the control code. Very few people come into a space and spend a ton of time at the UI. They mainly want to power up the room and get their meetings going. They spend little time admiring the sublime beauty of our UI navigation and flashy motion graphics.

    We have a saying in the music production business that I feel applies here. "Don't bore us! Get to the chorus!"

  • My fav music biz saying was "Well we can always fix it in the mix" :D

  • John Nagy Posts: 1,734
    edited July 2019

    Eric is right, central intelligence distinguishes a "system" from a collection of universal remotes.

    We've often said that the UI is the only thing standing between the user and satisfaction, so the sooner it gets out of the way, the sooner the user gets satisfied (hence minimizing pushes-to-completion - instant gratification can't be 6 button presses away). But customers presume that the appearance of the panel pages directly corresponds to the level of competence and power of the system. At buying time, the customer generally has no idea how it's all going to work, and if the UI isn't "modern and cool", no sale. "Easy to use" and "does what I want" often don't come into consideration until after deployment.
