Alternative methods of user interaction
PhreaK
Posts: 966
Recently I've been putting a bit of thought and time into ways of interacting with the current AMX interface hardware, aside from 'oh look, a button, I might push it'.
One of the areas I've been experimenting with is gestural control. I'm aware that the current-gen TPs and software don't support it, but there are a couple of ways I've begun to implement basic gestural control in my interfaces. The most effective use I've found so far is a 'swipe' gesture for locking TPs. The way I go about it is to place a transparent drag-centering bar graph below all my active buttons. Using some basic threshold logic in the level events, this allows the user to touch a 'non-active' area of the screen and swipe a finger horizontally to lock it.
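As a rough illustration of the threshold logic (in Python rather than NetLinx, and with a made-up threshold value), the level events boil down to tracking how far the bar graph's level travels during a single touch:

```python
# Hypothetical sketch of the swipe-to-lock detection described above.
# Assumes the invisible drag-centering bar graph reports levels on a
# 0-255 scale while the finger is down; the threshold is an assumption.

SWIPE_THRESHOLD = 100  # minimum horizontal travel, in level units


def is_lock_swipe(levels):
    """levels: successive values reported by the bar graph during one touch.
    Treat it as a deliberate swipe only if the total horizontal travel
    exceeds the threshold (a simple tap barely moves the level)."""
    if not levels:
        return False
    return max(levels) - min(levels) >= SWIPE_THRESHOLD
```

In a real level event you would feed each incoming value into a buffer and lock the panel the moment this condition becomes true, then clear the buffer on release.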
The other area I've been experimenting with is the way the panels behave when you have multiple points of contact with the screen. Although multi-touch is not supported, I've noticed that when using a joystick control with multiple contact points present, the coordinates of the control are an interpolation of the points, weighted by the quality of each contact. This means you can directly control the output coordinates through the relative pressure of the contact points.

The interface concept I'm currently experimenting with is lighting control. The basic idea is to lay out labels for the lighting zones in a circle, with a single 'pressure-sensitive button' in the middle of the circle, and to place a transparent joystick control over the whole area. To control the lighting, the user presses and holds the zone they wish to control, then simply presses the central button and varies their touch pressure to set the level. Programming-wise, when the user initially touches the screen, the coordinate is picked up by the joystick control's levels, and from this you can work out which zone was touched. When the user then touches the central button, the pressure they apply shifts the reported joystick position between their two points of contact, allowing you to calculate a usable 'level' from the coordinates of the initial touch, the coordinates of the central button, and the levels coming back from the joystick control. You also need some filtering so that only points lying within acceptable bounds of the path between the initial contact and the central button are used to set the level; this disregards touches in other areas of the screen (or lets you use such events for other control) and filters out the sudden jumps caused by the user lifting a finger.
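A sketch of the geometry involved (Python for illustration only; the tolerance value and coordinate scale are assumptions, not anything from the panel docs): project the pressure-blended joystick coordinate onto the segment from the zone's touch point to the central button, and reject anything too far off that path:

```python
import math


def level_from_touch(p0, c, j, tol=15.0):
    """p0: initial zone touch (x, y); c: central button (x, y);
    j: coordinate reported by the joystick control, which sits somewhere
    between p0 and c depending on relative finger pressure.
    Returns a 0.0-1.0 level, or None when j strays off the p0-c path
    (a stray touch elsewhere, or the jump from a lifted finger)."""
    vx, vy = c[0] - p0[0], c[1] - p0[1]
    seg_len = math.hypot(vx, vy)
    if seg_len == 0:
        return None
    # How far along the p0 -> c segment the reported point lies (0..1).
    t = ((j[0] - p0[0]) * vx + (j[1] - p0[1]) * vy) / (seg_len ** 2)
    # Perpendicular distance from the segment: the filtering step.
    px, py = p0[0] + t * vx, p0[1] + t * vy
    if math.hypot(j[0] - px, j[1] - py) > tol or not (0.0 <= t <= 1.0):
        return None
    return t  # fraction of the way toward the centre = relative pressure
```

The returned fraction maps directly to a dimmer level for whichever zone the initial touch selected.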
My reasoning and drive to experiment with these alternative control methods is to continuously develop interfaces that are less cluttered, more functional and, most importantly, feel natural to use. I'm interested to hear what others think of these ideas, as well as any similar techniques you've developed or experimented with.
Comments
Which TP are you using? I didn't think they were pressure sensitive, and I've never seen documentation of it (which I would assume they'd highlight). I've done some cool stuff with joystick controls, but never found them to be multi-touch pressure sensitive. Are you sure you aren't seeing level changes as you press harder due to unavoidable finger movement? Have you watched Minority Report a few too many times, maybe? Perhaps I'm misunderstanding what you mean.
Paul
Your post, Kim, got me thinking. This is just a WAG, but if you were to define several invisible buttons to completely cover your screen in a grid, say 50x50 each, and for each of those buttons the PUSH event calls a function to 'register' that square on the grid and then calls DO_RELEASE() on itself, would that allow another button_event later down the stack to fire, therefore creating the illusion of multi-touch?
This function that those buttons call to register themselves would add the x,y of that button in the grid to some array, and the bulk of your program would do something cool with that information.
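A quick sketch of that bookkeeping (Python as a stand-in for the NetLinx side; the channel numbering and grid width are made-up assumptions): map each invisible button's channel back to its cell origin and collect the touches:

```python
# Hypothetical illustration of the invisible-button grid idea.
# Assumes the grid buttons get consecutive channels, row-major,
# starting at FIRST_CHANNEL, over 50x50 px cells.

GRID_COLS = 16       # e.g. an 800 px wide panel at 50 px per cell
CELL = 50
FIRST_CHANNEL = 100  # channel of the top-left grid button (assumption)


def cell_of(channel):
    """Return the (x, y) pixel origin of the grid cell for this channel."""
    idx = channel - FIRST_CHANNEL
    col, row = idx % GRID_COLS, idx // GRID_COLS
    return (col * CELL, row * CELL)


touched = []  # each PUSH would append here before calling DO_RELEASE()


def register(channel):
    touched.append(cell_of(channel))
```

The rest of the program would then read `touched` and "do something cool" with the accumulated coordinates.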
It feels like this should not work, that there is some hardware limitation that prevents the panel from reporting the second touch. Like I said, it's a guess. Want to try it out Kim? I have to deal with a memory leak today -_-
I have tried this, and the second touch doesn't register until the first finger leaves the screen. At least with the panel/firmware I was using. I don't really see the big deal about multi-touch. Having to use more fingers or both hands to do something seems like a step backward. How about a no-touch screen?
Paul
Have you used an iPhone? Not exactly apples to apples regarding use, but multi-touch for zooming maps and pictures is awesome. Touch and drag to flip pages? Cool.
How about an AMX system with multiple sources and multiple displays...touch and drag the desired source to the desired display?
I think there could be some very cool uses for multi touch on AMX panels....it would take a different mindset than the current approach but it would be nice to have the option to experiment -- as this thread is doing.
I was about to jump in - iPhone / iTouch has had a massive impact on user expectations. I'd hope that AMX is working on this for the near future as multi-touch is where they need to go...
Exactly, and it offers massive scope for complex user functions; list searching would be another example.
For me the iPhone / iPod touch has pretty well redefined the mechanics of touchscreen UI. An iPhone user with a Sonos system can have a significantly better multiroom audio solution than an AMX 5200i user with a Kaleidescape...The game has changed.
It's interesting that you mention that. Another area I've been thinking about is ways of utilizing the panel's built in movement and light sensors to control things other than waking up the TP.
As a_riot42 pointed out, the current touch panels have no way of registering multi-touch. From my understanding, the DO_RELEASE() function is simply like a GOTO that runs that bit of code rather than affecting anything on the panel side. I believe the lack of multi-touch is due to a physical limitation in the way the panel's hardware functions.
If anyone from AMX happens to be listening it would be incredible to see the addition of accelerometers and a digital compass to the next gen wireless panels, that would really open up some cool possibilities.
I have seen this on demonstration systems at AMX roadshows / dealer weekends before. Dragging the source around was a bit slow and clunky, but it worked.
Like selling lots more flat-panel displays to replace the ones that have had MVPs embedded in them Wii-mote style while someone was trying to turn the volume up.
I don't know if the magic map (or whatever they call it) on CNN is an offshoot of iTouch, but whenever I see it in action I realize how boring and shortsighted the one-finger thing is.
I have also tried something like the "swipe to lock" level event, and also the ability to change pages by "swiping" the lower part of the touchpanel to go forward (swipe to the right) and backward (swipe to the left) but in my opinion it didn't feel necessary for an AMX touchpanel.
Why would you want to lock the screen? You're not putting it in your back pocket, right?
About the multitouch: an iPhone/iPod Touch is a small device, which can be held with one hand and controlled with the other. I don't see myself holding an MVP8400 with one hand and using the other hand to "multi-touch" control the lights... but that's just me
I couldn't agree with you more. The whole idea of interface development is to make things easier and more natural to use. When you design gestures you need to take into account the ergonomics of the physical device. If it doesn't feel natural, it's instantly scrapped in my books.
http://news.cnet.com/8301-13579_3-10161312-37.html?part=rss&subj=news&tag=2547-1_3-0-20
Given the long list of major players working with Multi touch control, it seems unlikely that this could remain Apple's preserve for long - patent or not.
I have a touch panel in use at local bar/restaurant. They want the panel locked so not everyone is able to change music/cable channels. I just went with the KISS method and put a button on a panel that says "LOCK PANEL" (I also have it auto-locking after 10 minutes of no use).
As for multi-touch, I got a new MacBook Pro and I must say I LOVE the multi-touch touch pad. When I have to use a standard touch pad on one of the PCs, it feels unnatural and unnecessarily complicated. I don't use the zoom function very often because I use Firefox, but scrolling happens all the time and it is so much easier with multi-touch.
Jeff
I posted this a few years back. Very cool, worth a look:
http://mrl.nyu.edu/~jhan/ftirtouch/
By the way, this video is from 2006, apparently before the 2007 Apple patent filing.
Joe
Seriously? Apple has a patent on the concept of controlling a touchpanel with gestures? That's ridiculous. The point of the patent system should not be to cripple innovation.
Why bother innovating if you can't protect it?
Paul
I believe CNN uses a Windows product to accomplish their tasks.
As far as patents go, they are meant to protect concepts and ideas. Sonance pretty much owns the idea of the iPort (very broad patent scope). Lutron owns the concept of the triac dimmer along with many other aspects of dimming/packaging/manufacturing of dimmers. Meridian owns a **** ton of patents in digital processing, MLP (Meridian Lossless Packing) and DVD-Audio. It's in a lot of places in our industry. I don't believe it cripples the industry, but everyone now needs to pay Apple licensing fees to use their method.
The "Magic Wall" was invented by Jeff Han, founder and chief scientist of Perceptive Pixel.
http://www.cnn.com/2008/TECH/11/04/magic.wall/
If you invented The Clapper™, you'd already be rich, but why stop there?
Why not try to get money whenever anyone uses sound to trigger an event?
This way we would discourage people from coming up with new ideas in the direction we started, such as sound-triggered burglar alarms, control systems, toilets! We can cut off a whole tree of ideas at its root!
I'm sure that would appeal to you, all in the name of protecting your 'intellectual property'.
That's also the guy in the video link I posted.
Joe
I do recall you posting about that a long while back. I didn't put two and two together.
But you need rev 1.1 of the mind reading programmer to take advantage of that technology!
http://www.ocztechnology.com/products/ocz_peripherals/nia-neural_impulse_actuator
There's plenty of other similar style products starting to become available.
Give it a few years and you'll be able to offer clients the option of a TP or an 'AMX bio-mod' to have Matrix-style plugs installed on the back of users' heads, with the option to go wifi-enabled.
Ideas aren't patentable or subject to copyright so I am not sure what you mean by cutting off a whole tree of ideas. Patents or copyright can't stop people from thinking up new innovations if that is what you are saying.
Paul
Apple is attempting to patent the idea of using gestures to control a touchpanel (read: http://news.cnet.com/8301-13579_3-10161312-37.html), which is why I made my original post. After you made your post asking "why innovate if you can't protect it", I assumed you were defending Apple. Ideally, only implementations of ideas would be protected. For instance, if Apple wrote code to recognize gestures, that code would obviously be copyrighted by Apple, but not the idea of using gestures.
Clearly we don't live in an ideal world.