Since the news about Windows 7's multi-touch capabilities spread, I haven't had enough time to keep up with all of the information related to multi-touch interaction. Fortunately, a few bloggers out there are doing a great job filling in the gaps.
The Touch User Interface blog has a wealth of information in the form of pictures, video clips, slides, and links that I'd like to share. The following slideshow/videos were highlighted in the Touch User Interface blog post, "Touch UI: HCI Viewpoint":
Untold Stories of Touch, Gesture, & NUI - Joe Fletcher, Design Manager, Microsoft Surface
I've added some information about UX, interactive multimedia, multi-touch, and gesture interaction to my Multimedia and Interaction Resources page, which is a work in progress.
Joel Eden is a User Experience Consultant at Infragistics. He recently wrote a detailed article in the Architecture & Design section of Dr. Dobb's Portal, "Designing for Multi-Touch, Multi-User and Gesture Based Systems". I thought I'd share the link, since I've been writing on the same topic. In his article, Joel explains the differences between traditional WIMP (Window, Icon, Menu, Pointer) interaction and gesture, multi-touch, and multi-user systems, also known as Natural User Interfaces, or NUIs. He recommends that "rather than trying to come up with new complicated ways to interact with digital objects, your first goal should be to try to leverage how people already interact with objects and each other when designing gesture based systems." Joel goes on to outline the UX (User Experience), IxD (Interaction Design), and HCI (Human-Computer Interaction) concepts that designers should consider when developing new systems - Affordances, Engagement, Feedback, and "Don't Make Us Think" - which he summarizes in the conclusion of his article.
"Our lack of attention to place, time, function, and human considerations means these fancy new technologies fail to deliver their real potential to real people." - Bill Buxton
Microsoft's newest operating system, Win7 (beta), allows for multi-touch interaction and application development. From what I can tell from the news from the Consumer Electronic Show (CES 2009), touch and gesture interaction holds quite a bit of interest among consumers and application developers alike.
Hopefully people will think carefully about user experience and usability factors when developing the applications we'll be using in the not too distant future!
Because of my background as a school psychologist, I'm interested in how touch/gesture applications can address accessibility and universal usability issues.
I wasn't able to locate user experience or usability information for Win7. Here are a few links from the Microsoft website that I think are worth reading:
Regarding touch/gesture interaction, know that “19% (24.4 million) of computer users have a mild dexterity difficulty or impairment, and 5% (6.8 million) of computer users have a severe dexterity difficulty or impairment.”
Christian Moore, from the NUI-Group, shared this nice graphic that depicts the various fields that intersect or converge with interface design, which I think is useful when thinking about how we will design, develop, and use emerging technologies:
Concepts such as accessibility and universal usability can fit into this design!
Laurence Muller, M.Sc. is a scientific programmer at the Universiteit van Amsterdam who develops scientific software for multi-touch devices. His blog, Multigesture.Net, provides good information regarding multi-touch and gesture interaction hardware and software applications.
Laurence links to the DIY tabletop computing bootcamp that was held at IEEE Tabletops and Interactive Surfaces 2008. From there, you can find links to the event's organizers, as well as additional information.
Here is a link to the group's libavg wiki that includes open-source code and "how-to" instructions.
If you are interested in multi-touch and multi-gesture computing from an academic point of view, Florian Echtler, of the Technische Universität München, has a series of publications listed on his website. Here is the abstract of one of his papers. He is on the right track; I especially like the fact that he's thought about widget layers. (I have, too, but mine are only sketches in my idea book.)
F. Echtler, G. Klinker. A Multitouch Software Architecture. NordiCHI 2008: Using Bridges, 18-22 October, Lund, Sweden. (bib)
"In recent years, a large amount of software for multitouch interfaces with various degrees of similarity has been written. In order to improve interoperability, we aim to identify the common traits of these systems and present a layered software architecture which abstracts these similarities by defining common interfaces between successive layers. This provides developers with a unified view of the various types of multitouch hardware. Moreover, the layered architecture allows easy integration of existing software, as several alternative implementations for each layer can co-exist. Finally, we present our implementation of this architecture, consisting of hardware abstraction, calibration, event interpretation and widget layers."
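Echtler and Klinker's layered design can be sketched as a simple pipeline in which each layer only talks to its neighbors. The classes and names below are my own illustration of the idea, not code from the paper:

```python
# A minimal sketch of a layered multi-touch architecture, loosely
# following the hardware abstraction -> calibration -> event
# interpretation -> widget pipeline. All names are illustrative.

class HardwareLayer:
    """Abstracts a touch device; yields raw (id, x, y) points in device units."""
    def __init__(self, raw_events):
        self.raw_events = raw_events

    def poll(self):
        yield from self.raw_events

class CalibrationLayer:
    """Maps normalized device coordinates to screen coordinates."""
    def __init__(self, scale_x, scale_y):
        self.scale_x, self.scale_y = scale_x, scale_y

    def transform(self, event):
        touch_id, x, y = event
        return (touch_id, x * self.scale_x, y * self.scale_y)

class InterpretationLayer:
    """Turns calibrated points into higher-level events (here: a 'tap')."""
    def interpret(self, event):
        touch_id, x, y = event
        return {"type": "tap", "id": touch_id, "pos": (x, y)}

class WidgetLayer:
    """Routes events to the widget whose region contains the touch."""
    def __init__(self, widgets):
        self.widgets = widgets  # list of (name, (x0, y0, x1, y1))

    def dispatch(self, event):
        x, y = event["pos"]
        for name, (x0, y0, x1, y1) in self.widgets:
            if x0 <= x <= x1 and y0 <= y <= y1:
                return name
        return None

# Wire the layers together; swapping any one layer (say, a different
# touch device) leaves the others untouched.
hw = HardwareLayer([(1, 0.5, 0.5)])
cal = CalibrationLayer(1024, 768)
interp = InterpretationLayer()
widgets = WidgetLayer([("photo", (0, 0, 640, 480)), ("menu", (640, 0, 1024, 768))])

for raw in hw.poll():
    event = interp.interpret(cal.transform(raw))
    print(widgets.dispatch(event), event["pos"])
```

The point of the abstraction is the one the abstract makes: alternative implementations of any single layer can co-exist behind the same interface.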
In the past, our fingers did the walking, sifting through files, papers, pamphlets, and phonebooks; later, we point-and-clicked with a mouse to interact with images and text, in essence electronic imitations of the paper-based world. Traditional forms, brochures, ad inserts, and posters informed much of the design. How much have things changed? It is 2013, but you'd think it was 1997 from the PowerPoint look and feel of many apps and websites! Touch is everywhere, but from what I can tell, not enough designers and developers have stepped up to the plate to think more deeply about ways their applications can support human endeavors through touch and gesture interactions. For an overview of this topic, take a look at my 2011 post, written after a number of ugly encounters with user-unfriendly applications: Why bother switching from GUI to NUI?
For an in-depth look into the history of multi-touch, the wisdom of Bill Buxton is well worth absorbing. He's worked with all sorts of interfaces, and has been curating the history of multi-touch and gesture systems since 2007:
If you have an interest in music technology and innovative gesture/multi-touch applications, you'll appreciate the details shared on the Subcycle website. In the meantime, take the time to watch a few of Christian's videos. Enjoy!
Blip Shaper Walkthrough
Blip Shaper Walkthrough from christian bannister on Vimeo. "a) creating percussive patterns with monome b) shaping the individual sounds that make up the patterns with multitouch gestures c) recording touchscreen gestures as automation d) storing, duplicating and navigation patterns e) recording the resulting audio to a dynamic buffer f) manipulating the buffer with a multitouch cut-up approach g) visualizing everything with dual screens"
Subcycle Walkthrough
The following information describing the Blip Shaper is from Christian's Subcycle website/blog: "For the drum sounds I have Drumaxx running for synthesized sounds and Battery running for sampled sounds. These are running in parallel so for each voice there is a separate patch running in each VST. The Parameters are modified with the touchscreen independently but in all cases a single touch gesture on the X-Axis will cross fade between the sampled version of the sound and the synthesized version of the sound. I love this because I have never seen this before and I can never decide which technique I like better. The synthesized drums are more malleable and have more interesting parameters to play with but the sampled sounds seem more substantial. I will post a detailed list of parameters and gestures in the future."
Below is a video of an interactive media wall at the Wisconsin Institutes for Discovery, winner of the 2011 Award of Excellence from the Digital Screenmedia Association in the category of Best Government/Education/Non-Profit Agency Deployment.
Detailed information about the Discovery Wall, including an overview of the technology, the objectives of the deployment, and the positive outcomes of this project, can be found on the Digital Screenmedia website.
"Float4 Interactive is a creative technology company that develops interactive systems for entertainment, advertising and design applications."
Jim Spadaccini, of Ideum, shared information about his company's product, Gestureworks, highlighting how it provides better multi-touch and gesture support than Adobe AIR 2 and Flash Player 10.1. Gestureworks supports simultaneous multiple-point drag, rotate, and scale. In the video, the application is demonstrated on an HP Touchsmart 600 and a 3M multitouch screen.
Adobe AIR 2 and Flash Player 10.1 vs Gestureworks 1.0: A direct comparison of multitouch and gesture support
"A direct comparison between the built-in support for multitouch found in Adobe Flash Player 10.1 beta / Adobe AIR 2 and that of the Gestureworks multitouch framework for Flash. More about this comparison can be found on the Gestureworks website (http://www.gestureworks.com) and the Ideum website (http://www.ideum.com) There is a blog post with more about this comparison and links to all of the example files at: www.ideum.com/2010/01/true-multitouch-with-adobe-flash/ "
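Simultaneous drag, rotate, and scale can all be recovered from just two touch points by comparing the segment they form before and after a movement. Here is a minimal sketch of that geometry (my own illustration, not Gestureworks code), treating points as complex numbers so rotation and scale fall out of a single division:

```python
import cmath

def two_finger_transform(p1, p2, q1, q2):
    """Given two touches moving from (p1, p2) to (q1, q2), return the
    (translation, rotation_radians, scale) that maps the first segment
    onto the second. Points are (x, y) tuples."""
    a = complex(*p2) - complex(*p1)   # segment between the fingers, before
    b = complex(*q2) - complex(*q1)   # segment between the fingers, after
    z = b / a                         # rotation and scale as one complex factor
    scale = abs(z)
    rotation = cmath.phase(z)
    # translation is the movement of the segment's midpoint
    mid_before = (complex(*p1) + complex(*p2)) / 2
    mid_after = (complex(*q1) + complex(*q2)) / 2
    t = mid_after - mid_before
    return (t.real, t.imag), rotation, scale

# Two fingers move apart while the segment turns 90 degrees:
translation, rotation, scale = two_finger_transform(
    (0, 0), (1, 0),   # before: horizontal, length 1
    (0, 0), (0, 2))   # after: vertical, length 2
print(translation, rotation, scale)
```

Applying the resulting transform to a photo each frame gives the familiar pinch-zoom-rotate behavior with no explicit gesture classification step.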
Soldier Knows Best produces great tech-oriented videos. Here's his spin on all of the rumors about the possibility of the Apple iSlate.
I just inherited a 10-month-old MacBook, installed Snow Leopard, and upgraded to iLife 2009. I'm so used to touching the screen on my HP TouchSmart PC that I found myself touching my MacBook's screen from time to time, especially when editing video clips in iMovie. I think the latest version of iMovie was designed with touch/gesture interaction in mind!
From what I can tell, Snow Leopard and iLife 2009 will be able to support a range of touch interactions, if not gesture input as well.
Here are some rumors that have been conjured up and distributed on the web:
I want to see if the iSlate would be good for collaborative educational games, assistive technology, augmentative communication, and alternative assessment for students who have multiple/severe disabilities.
That is a huge goal, so I'm going to start simple. I am not giving up on Windows 7 multi-touch programming. I just have an urge to find out for myself what works, what doesn't, and which platform works best for specific "personas" and "scenarios".
I plan to make a little app for the iPhone/iPod Touch, based on "Shoes Your Battles", a game I made several years ago for a game class. I think I'd like to make this game for the Apple iTablet!
The first version of Shoes Your Battles was created with Game Maker, and the second version in Flash, back in the days of ActionScript 2.0. I started on a third version, one that could be used as an advergame for people to play while shopping for shoes during shoe sales, but it never got past the planning stage.
The idea for the third version came to me when my elderly aunt came to visit from out of town and just had to go shoe shopping on the day after Thanksgiving. It was extremely difficult to figure out what was on sale, how much it cost after the previous mark-downs were taken off, and which sale items hadn't had their prices marked down yet.
Adding to the confusion was the fact that there were few salespeople and herds of women. It was madness. There were pairs of shoes in the wrong boxes, and no way to quickly find out the true prices! We were in the shoe department for hours, and it wasn't as fun as you'd think. If you've ever been in a crowded women's shoe department trying to buy that special pair of shoes during a fantastic shoe sale, you'll know what I mean.
At any rate, I wanted my little "Shoes Your Battles" game to help with this dreadful scenario by somehow incorporating a shoe shopping advisor and a means of figuring out the REAL sale prices of those awesome, to-die-for shoes. Unfortunately, the technology wasn't where it needed to be at the time. I am always dreaming up things that are too d--- futuristic!
Four years later, we have iPhones and smartphones and 3G internet and RFID and ubiquitous WiFi and the Wii and more women who like to play games and...and... The time is ripe.
NUI-Group member Christian Bannister, the musician, designer, and developer behind Subcycle Labs, has been experimenting with music and multi-touch technology and interactions. Here's a demonstration of what he's come up with so far:
multi-touch the storm - interactive sound visuals - subcycle labs from christian bannister on Vimeo. "The big picture goal of this project is to bridge the gap between sound visualization and musical instrument. With multi-touch interaction it is possible to manipulate multiple characteristics of a sound—directly and visually. Right now a lot of electronic music involves staring at the back of the performers laptop. This is a shame because in many cases a lot of really interesting things are happening on the computer that the audience is completely unaware of. This project hopes to create a common visual language and experience for the electronic musician and the audience by enhancing the perception of sound and music on both sides...These sketches are built with Processing and Max/MSP networked with OSC on a single computer..." -Christian Bannister
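OSC, the protocol networking Christian's Processing sketches with Max/MSP, has a simple binary layout: a null-padded address pattern, a null-padded type-tag string, then big-endian arguments. A hand-rolled encoder makes the format concrete (my own sketch; the address pattern is made up for illustration):

```python
import struct

def osc_pad(b):
    """Null-terminate a byte string and pad it to a multiple of
    4 bytes, as OSC 1.0 requires for strings."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, *floats):
    """Encode an OSC message carrying float32 arguments."""
    msg = osc_pad(address.encode("ascii"))
    # type-tag string: a comma followed by one 'f' per float argument
    msg += osc_pad(("," + "f" * len(floats)).encode("ascii"))
    for f in floats:
        msg += struct.pack(">f", f)  # big-endian 32-bit float
    return msg

# A touch-position message like one a Processing sketch might send to
# Max/MSP over UDP:
packet = osc_message("/touch/1/xy", 0.25, 0.75)
print(len(packet), packet[:12])
```

In practice the packet would be handed to a UDP socket aimed at the Max/MSP patch's listening port; on the Max side the `udpreceive` object unpacks messages like this.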
JazzMutant Lemur
The multi-touch tech company now known as Stantum evolved from JazzMutant, which was founded in 2002 by Guillaume Largillier and Pascal Joguet and joined by Julien Olivier in 2003. The original focus was to create a multi-touch controller for music applications, and the Lemur was born. It is now in version 2, with features such as a gesture object that provides three ways for people to interact with sound, extended scripting abilities, and remote control of your computer's mouse cursor or keyboard. Stantum recently developed a next-gen multi-touch screen system for use in mobile devices such as smartphones and netbooks. (I'll discuss this further in a future post.)
Mapping Ableton Live to Jazz Mutant's multi-touch Lemur for sound design:
Additional tutorials can be found on the JazzMutant YouTube channel.
Ableton Live, the software used in the above video, will include Max. Max/MSP is now known as MAX 5: "In use for over twenty years by performers, composers, artists, scientists, teachers, and students, Max is the way to make your computer do things that reflect your individual ideas and dreams. Version 5 is a dramatic transformation of Max that begins a new chapter in the program's history."
I came across the picture below today, and discovered more about it in a CNET article:
"Gesture-recognition technology, such as that seen in Project Natal, has changed the face of gaming, but Microsoft's Craig Mundie (pictured) believes it will also transform the office." (Ina Fried, CNET)
This approach looks like it incorporates visual thinking and conceptualization in a nice and fluid way.
According to Fried, the author of the linked CNET article, Craig Mundie demonstrated the set-up in the above picture at the Microsoft Research Faculty Summit this past July. The demonstration "...included hologram-like videoconferencing, a virtual digital assistant, and multiple surface computers, along with voice-, touch- and gesture-recognition technology. The desk in the demo was a multitouch surface computer, and the office's walls were also a display that could easily switch from being a virtual window or collection of digital photos to a corkboard of sticky notes or various workspaces. In one case, Mundie also used Project Natal-like depth cameras to put himself in the middle of an architectural demo, essentially putting himself inside a building that was not yet built."
The author of the above article was also at the Microsoft Research Faculty Summit and touched on the concept of natural user interfaces:
“All the things we talk about as natural user interfaces have been largely used one at a time as enhancements to [graphical user interfaces],” Mundie said. Gesture recognition, expressive responses, immersive 3-D virtual environments, and understanding of context—these advances in computing algorithms will lead to software that is “better at anticipating what you might want.”
When I find a video of the Project Natal/Surface/Natural User Interface Office in action, I'll be sure to post it!
Why this is important:
Natural user interfaces might have the potential to support more efficient and effective collaboration among people in the workplace, and the visual display of information may also support better decision-making and problem solving.
Taking 3D interaction further, Softkinetic has developed middleware that uses a 3D camera to support full-body gesture interaction with games and other applications. No controllers or devices are needed!
The following video is narrated in Portuguese, I think, but you can understand the content in any language. If you love the Wii, you'll probably like this!
Here is a video that demonstrates how Softkinetic and Silverfit partnered to develop rehabilitative games for the elderly and others:
The following table is from the Silverfit website:
Game - Movement trained
Puzzle - While sitting down, bend the whole body left and right, and stand up. Cognitive/visual component.
Mole - Balance exercise: stepping with one leg while standing.
Catching grapes - Walking movement left and right.
Walking - Walking in place, while avoiding obstacles and thresholds. Activity of Daily Life (ADL) component.
Arm exercise - Arm stretching and reaching in all directions with one or both arms. ADL component.
Picking flowers - Walking backwards, forwards, and sideways. Optionally, bending down.
Memory - Arm stretching left, right, forwards, and upwards. Cognitive component.
The Interactive Displays Conference, sponsored by Intertech Pira, will highlight an interesting mix of existing and emerging interactive display technologies and applications. The conference will be held at the Hilton in San Jose, California, from Tuesday, April 21st through Thursday, April 23rd.
The pre-conference seminar will feature Sakuya Morimoto, of CANESTA, who will present his company's innovative single-chip 3D image sensor technology that supports gesture interaction. Keynote speakers will be Jeff Han, of Perceptive Pixel, and Steven Bathiche, of Microsoft, US.
Some Highlights:
Pre-conference Seminar: Gesture Navigation in the World of Digital Contents, Enabled by a Single-Chip 3D Image Sensor
Presenter: Sakuya Morimoto, Senior Director, Business Development in Asia, CANESTA, Japan
Related: Hitachi at CES 2009: Use of Canesta's 3D sensor to control television and home systems using hand gestures.
"With the wave of a hand, with the shake of a hand, you can control volume, you can actually change the channels, watch your favorite program...the most exciting thing, I think, is that you can actually control your temperature and the lighting in the room, the environmental lighting. So..it is very unique technology that is out there.."
Another demonstration of Hitachi's gesture interaction using Canesta's 3D depth camera:
How does Canesta's Electronic Perception Technology Work? "Canesta’s electronic perception technology forms 3-D, real time moving images in a single chip through patented methods which use light photons to “range” the image, similar to radar. The silicon sensor chip develops 3-D depth maps at a rate in excess of 30 frames per second, and then performs additional processing on these depth maps to resolve the images into application specific information that can easily be processed by embedded processor(s) in the end-use device or machine. Since Canesta’s software starts with a three-dimensional view of the world, provided immediately by the hardware, it has a substantial advantage over classical image processing software that struggles to construct three-dimensional representations using complex mathematics, and images from multiple cameras or points of view. This dramatic reduction in complexity makes it possible to embed the processing software directly into the chips themselves so they may be used in the most cost-conscious applications."
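The "additional processing" on depth maps that Canesta describes typically begins with something as simple as thresholding: keep only the pixels nearer than some cutoff, which isolates a hand held toward the camera. A toy sketch of that first step on a tiny depth map (all values are made up, in centimeters):

```python
# A 4x4 depth map in centimeters; small values are close to the camera.
# A real sensor would produce such a map 30+ times per second.
depth_map = [
    [200, 200, 200, 200],
    [200,  45,  50, 200],
    [200,  48,  52, 200],
    [200, 200, 200, 200],
]

def segment_foreground(depth, cutoff_cm):
    """Return a binary mask of pixels nearer than cutoff_cm: a first
    step toward resolving a depth map into a hand blob."""
    return [[1 if d < cutoff_cm else 0 for d in row] for row in depth]

mask = segment_foreground(depth_map, 100)
for row in mask:
    print(row)
# Only the four near pixels in the middle survive, giving a blob whose
# centroid and frame-to-frame motion a gesture recognizer could track.
```

The advantage Canesta claims follows directly: with depth delivered by the hardware, this kind of trivial per-pixel test replaces the multi-camera reconstruction that classical image processing needs before it can even ask "what is in front?".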
I will highlight some of the featured presentations in future blog posts:
Steven Bathiche, Director of Research, Applied Sciences Group, Entertainment and Devices Division, MICROSOFT, US
Guillaume Largillier, Chief Strategy Officer and Co-Founder, STANTUM, France
Jeff Han, PERCEPTIVE PIXEL, US
Mark Fihn, Publisher, VERITAS ET VISUS, US
Derek Mitchell, Conference Producer, INTERTECHPIRA, US
Vinita Jakhanwal, Principal Analyst, Small/Medium Displays, ISUPPLI CORPORATION, US
Joseph Carsanaro, President and CEO, F-ORIGIN, US
Tommi Ilmonen, CEO, MULTITOUCH OY, Finland
Stephen Sedaker, Director of Component Sales, WACOM TECHNOLOGY CORPORATION, US
Brad Gleeson, Managing Director, Business Development, TARGETPATH GLOBAL LLC., US
Henry Kaufman, President and Founder, TACTABLE, US
Christophe Ramstein, Chief Technology Officer, IMMERSION CORPORATION, US
Mary Lou Jepsen, CEO, PIXEL QI, US
John Newton, Chief Technology Officer, NEXTWINDOW, New Zealand
Herve Martin, CEO, SENSITIVE OBJECT, France
Scott Hagermoser, Gaming Business Unit Manager, 3M TOUCH SYSTEMS, US
Bob Cooney, Vice President, Business Development, ECAST, US
Brent Bushnell, Chief Technology Officer, UWINK, US
Stephan Durach, Head, Technology Office, BMW GROUP, US
Jeff Doerr, Senior Manager, Business Development, Self Service Solutions Group, FLEXTRONICS, US
Andy Wilson, Senior Researcher, Adaptive Systems and Interaction Group, MICROSOFT, US
Mats W. Johansson, Chief Executive Officer, EON REALITY, US
Lenny Engelhardt, Vice President for Business Development, N-TRIG, Israel
Dr Paul Diefenbach, Director, RePlay Lab, DREXEL UNIVERSITY, US
Andrew Hsu, Technical Marketing and Strategic Partnerships Manager, SYNAPTICS, US
Dean LaCoe, Business Development Manager, GESTURETEK, Canada
Keith Pradhan, Global Director of Product Management, TYCO ELECTRONICS, ELO TOUCHSYSTEMS, US
Jerry Bertrand, Managing Member/Acting CEO, MICROSCENT, LLC, US
Frederic Kaplan, CEO and Co-Founder, OZWE, Switzerland
Ubiq'window, by LM3LABS, is a gesture-based system that is used for interactive show windows, interactive in-store marketing, museum installations, and more.
The slides provide details of the Ubiq'window system's specifications, including a gesture recognition set. The slides also highlight "Airstrike", a system that allows for free-air, touchless interaction.
If you are a new visitor to this blog and interested in interactive multimedia, you'll want to know more about the NUI-Group. The Natural User Interface (NUI) Group is an interactive media group researching and creating open source machine sensing techniques to benefit artistic and educational applications. (For related information, please read my recent post, Usability, Accessibility, and User Experience in a Win7 Environment.)
Seth Sandler, of the NUI-Group, sent out a great email with links and resources for people who are interested in multi-touch/gesture interaction, hardware, and/or software development. NUI-Group members who have completed projects are listed below, with links to project websites as well as related threads on the NUI-Group forum.
(The information can be found on the NUI-Group Wiki, which boasts a nice icon-based front page.)
Thanks, Seth, for organizing this wealth of information!
I received the following videos and links from Anthony Uhrick, who happens to be at WinHEC this week and was at PDC 2008 last week. Touch screen, multi-touch, and gesture technology is starting to catch on. (Anthony is the VP of sales for NextWindow, the company that created the touch screen for the HP TouchSmart and other multi-touch enabled displays.)
Below is a video clip of a multi-touch photo presentation system, Gesture + Touch, running on Windows 7; it has gesture and physics engines.
Apparently the application can run on Vista, Win 7, and Win 7 Touch.
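The "physics engine" in photo applications like this usually boils down to giving a flicked photo momentum that decays with friction each frame. A minimal sketch of that loop (my own illustration; all constants are made up):

```python
def simulate_flick(x, velocity, friction=0.9, stop_speed=0.5):
    """Advance a flicked object one frame at a time along one axis,
    scaling velocity by a friction factor each frame until it drops
    below stop_speed. Returns the positions visited."""
    positions = [x]
    while abs(velocity) >= stop_speed:
        x += velocity          # move by the current velocity
        velocity *= friction   # lose energy to "friction" each frame
        positions.append(x)
    return positions

# A photo flicked at 20 pixels/frame glides and coasts to a stop:
path = simulate_flick(0.0, 20.0)
print(f"{len(path)} frames, settles near x={path[-1]:.1f}")
```

The exponential decay is what makes a flick feel natural: the photo keeps the direction and energy of the finger's last movement instead of stopping dead when the touch ends.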
Here is an HP TouchSmart PC, running a Touch Map application on Windows 7:
The following clip shows a newscaster using a multi-touch transparent screen. The display is from U-Touch Ltd., a partner of NextWindow. In my opinion, the application enhances viewers' understanding of the various news topics, and is visually appealing as well.
The graphics engine used in this application was developed by Vizrt, the same folks who were behind CNN's video hologram. Here are a few pictures from the Vizrt website: