I like this demonstration of Adam Somers' AirHarp music application for use with the Leap Motion 3D controller:
AirHarp is being developed in C++ using Adam Somers' audio-processing toolkit, MusKit. This looks interesting! Things have changed since I last took a computer music technology course (back in 2003). Adam Somers is a senior software engineer at Universal Audio. He has a graduate degree in music technology from Stanford and a background in computer science, electronics, human-computer interaction, and signal processing. Leap Motion is a motion-control software and hardware start-up located in San Francisco, California. According to promotional information on its website, the company's first product, the Leap Motion controller, is 200 times more sensitive than existing technologies. It will be interesting to see how this claim plays out. (I'm still waiting for my pre-order.)

RELATED
AirHarp (links to GitHub)
Leap FAQs
Leap Motion Website
Leap Motion Developer Portal
Leap Motion Leadership Team
Leap Motion goes retail: Motion controller sold exclusively at Best Buy - Michael Gorman, Engadget, 1/16/13
Here's an interesting use of technology for health - the Xbox Kinect in the OR!
Thanks to Harry van der Veen for the link!

RELATED
Kinect sensor poised to leap into everyday life - Niall Firth, NewScientist, 1/17/13

For the tech-curious:
PrimeSense (Company that developed the 3D depth sensor that powers the Kinect, the sensor in Ava, a healthcare robot by iRobot, and more.)
OpenNI (Framework for the development of 3D sensing middleware libraries and applications.)
NiTE: Natural Interface Technology for End User (Perception-algorithms layer for 3D computer vision; supports hand locating and tracking, scene analysis, and skeleton joint tracking.)
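Frameworks like NiTE hand applications a stream of per-frame skeleton joint positions, and those readings are typically noisy. A common first step in any 3D-sensing pipeline is to smooth them. Below is a minimal sketch of exponential smoothing over a stream of (x, y, z) joint coordinates; this is a generic technique written in Python for illustration, not the actual NiTE API (which exposes its own smoothing parameters):

```python
# Exponential moving average smoothing for noisy 3D joint positions,
# as commonly applied to skeleton-tracking output. Generic sketch only;
# real frameworks provide their own filtering.

def smooth_joints(frames, alpha=0.5):
    """Smooth a stream of (x, y, z) joint positions.

    alpha near 1.0 trusts new readings; near 0.0 favors history.
    """
    smoothed = []
    prev = None
    for x, y, z in frames:
        if prev is None:
            prev = (x, y, z)  # first frame passes through unchanged
        else:
            prev = tuple(alpha * new + (1 - alpha) * old
                         for new, old in zip((x, y, z), prev))
        smoothed.append(prev)
    return smoothed

# A jittery hand-joint track: the spike at frame 3 gets damped.
track = [(0.0, 0.0, 1.0), (0.1, 0.0, 1.0), (0.1, 0.1, 1.0), (5.0, 0.1, 1.0)]
result = smooth_joints(track, alpha=0.5)
```

Tuning alpha trades responsiveness against jitter: too low and the skeleton lags the user's movement, too high and the cursor shakes.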
"GML is an extensible markup language used to define gestures that describe interactive object behavior and the relationships between objects in an application. Gesture Markup Language has been designed to enhance the development of multiuser multi-touch and other HCI device driven applications." -Gesture ML Wiki
Photo credit: Ideum

RELATED
Ideum Blog
OpenExhibits (Free multitouch and multiuser software initiative for museums, education, nonprofits, and students)
GestureWorks (Multi-touch authoring for Windows 8 & Windows 7)
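GML itself is XML-based, but the core idea in the quote above - declaring a gesture as conditions on touch input rather than hard-coding it in application logic - can be sketched in a few lines. The gesture names and thresholds below are my own illustration, not GML's actual vocabulary or schema:

```python
# Toy declarative gesture matcher: each gesture is a set of thresholds on
# a stroke's displacement and duration, in the spirit of markup-driven
# gesture definitions like GML. Names and numbers are illustrative only.

GESTURES = [
    {"name": "tap",   "max_dist": 10, "max_time": 0.3},   # short, stationary
    {"name": "swipe", "min_dist": 50, "max_time": 1.0},   # fast, long travel
]

def classify(dx, dy, duration):
    """Match a single touch stroke against the declared gesture set."""
    dist = (dx ** 2 + dy ** 2) ** 0.5
    for g in GESTURES:
        if duration > g["max_time"]:
            continue
        if "max_dist" in g and dist <= g["max_dist"]:
            return g["name"]
        if "min_dist" in g and dist >= g["min_dist"]:
            return g["name"]
    return None  # stroke matched no declared gesture
```

The point of the declarative style is that adding or retuning a gesture means editing the definition list, not the recognition code - which is what makes a markup language a sensible home for it.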
It just might be the right time for everyone to brush up on 21st-century tech skills. iPads and touchscreen phones are ubiquitous. Touch-enabled interactive whiteboards and displays are in schools and boardrooms. With Microsoft's Windows 8 and the news that the company recently acquired Jeff Han's company, Perceptive Pixel, I think that there will be good support - and more opportunities - for designers and developers interested in moving from GUI to NUI.
In the video below, from CES 2012, Jeff Han provides a good overview of where things are heading. We are in a post-WIMP world and there is a lot of catching up to do!
CES 2012 Perceptive Pixel and the Future of Multitouch (IEEE Spectrum YouTube Channel)
During the video clip, Jeff explains how far things have come during the past few years:
"Five and a half years ago I had to explain to everybody what multi-touch was and meant. And then, frankly, we've seen some great products from folks like Apple, who really have executed so brilliantly, that everyone really sees what a good implementation can be, and has come to expect it. I also think, though, that the explosion of NUI is less about just multi-touch, but an awareness that finally people have that you don't have to use a keyboard and mouse, you can demand something else beside that. People are now willing to say, 'Oh, this is something I can try, you know, touch is something I can try as my friendlier interface.'"
Who wouldn't want to interact with a friendlier interface? Steve Ballmer doesn't curb his enthusiasm about Windows 8 and Perceptive Pixel. Jeff Han is pleased with how designs created in Windows 8 scale for use on screens large and small. He explains how Windows 8 can support collaboration. The Story Board application (7:58) on the large touchscreen display looks interesting.
I continue to be frustrated by the poor usability of many web-based and desktop applications. I like my iPad, but only because so many dedicated souls have given some thought to the user experience when creating their apps. I am often disappointed by the interactive displays I encounter when I'm out and about during the day. It is 2012, and it seems that there are a lot of application designers and developers who have never read Don Norman's The Design of Everyday Things!
I enjoy making working prototypes and demo apps, but my skill set is stuck in 2008, the last year I took a graduate-level computer course. I was thinking about taking a class next semester, something hands-on, creative, and also practical, to move me forward. I can only do so much when I'm in the DIY mode alone in my "lab" at home. I need to explore new tools, alongside like-minded others.
There ARE many more tools available to designers and developers than there were just four years ago. Some of them are available online, free or for a modest fee. I was inspired by a link posted by my former HCI professor, Celine Latulipe, to her updated webpage devoted to Rapid Prototyping tools. The resources on her website look like a good place to start for people who are interested in creating applications for the "NUI" era. (Celine has worked on many interesting projects that explore how technology can support new and creative interaction, such as Dance.Draw.) Below is her description of her updated HCI resources:
"New HCI resource to share: I have created a few pages on my web site devoted to Rapid Prototyping tools, books, and methods. These pages contain reviews of various digital tools, including 7 different desktop prototyping apps, and including 8 different iPad apps for wireframing/prototyping. I hope it's useful to others. Feel free to share... and please send me comments and suggestions if you find anything inaccurate, or if you think there is stuff that I should be adding. I will be continuing to update this resource." -http://www.celinelatulipe.com (click on the rapid prototyping link at the top)
IDEAS

Below are just a few of my ideas that I'd like to implement in some way. I can't claim ownership of these ideas - they are mash-ups of what comes to me in my dreams, usually after reading scholarly publications from ACM or IEEE, or attending tech conferences.
An interactive timeline (multi-dimensional, multi-modal, multimedia) for off-the-desktop interaction, collaboration, and data/info analysis and exploration. It might be useful for medical researchers, historians, genealogists, or people who are into the "history of ideas". Big Data folks would love it, too. It would handle data from a variety of sources, including sensor networks. It would be beautiful to use.
A web-based system of delivering seamless interactive, multi-modal, immersive experiences, across devices, displays, and surfaces. The system would support multi-user, collaborative interaction. The system would provide an option for tangible interaction.
A visual/auditory display interface that presents network activity, including potential intrusions, malfunctions, or anything that needs immediate attention that would be likely to be missed under present monitoring methods.
Interactive video tools for creation, collaboration, storytelling. (No bad remote controllers needed.)
A "wearable" that provides new ways for people to express and communicate creatively, through art, music, dance, with wireless capability. (It can interact with wireless sensor networks.)*
A public health application designed to provide information useful in understanding sepsis and supporting prevention efforts. This application would utilize the timeline concept described at the top of this list. This concept could also be useful in analyzing other medical puzzles, such as autism.
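The timeline idea at the top of this list hinges on one technical step: normalizing events from heterogeneous sources (sensor networks, archives, annotations) into a single sortable, filterable stream. Below is a minimal sketch of such a data model; the field names and source labels are my own invention, not a real schema:

```python
# Minimal multi-source timeline model: events from different sources are
# normalized into one record shape, then merged chronologically and
# filtered by time window (the basis of zoom/brush interactions).
# Field names and sources are illustrative only.

from dataclasses import dataclass, field

@dataclass(order=True)
class Event:
    timestamp: float                       # seconds since epoch; sole sort key
    source: str = field(compare=False)     # "sensor", "archive", "notes", ...
    label: str = field(compare=False)
    data: dict = field(compare=False, default_factory=dict)

def merge_timelines(*streams):
    """Merge several event streams into one chronological timeline."""
    return sorted(e for stream in streams for e in stream)

def window(timeline, start, end):
    """Events within [start, end), for interactive zooming."""
    return [e for e in timeline if start <= e.timestamp < end]

# Two sources merged into one timeline.
sensors = [Event(3.0, "sensor", "temp spike"), Event(1.0, "sensor", "baseline")]
notes = [Event(2.0, "notes", "annotation")]
tl = merge_timelines(sensors, notes)
```

Everything interactive (brushing, multi-modal rendering, collaboration) layers on top of this one normalized stream, which is what lets a single timeline serve historians and sensor-network data alike.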
Most of these ideas could translate nicely to educational settings, and the focus on natural user interaction and multi-modal i/o aligns with the principles of Universal Design for Learning, something that is important to consider, given the number of "at-risk" learners and young people who have disabilities.
I welcome comments from readers who are working on similar projects, or who know of similar projects. I also encourage graduate students and researchers who are interested in natural user interfaces to move forward with an off-the-desktop NUI project. I hope that my efforts can play a part in helping people make the move from GUI to NUI!
Below are a few videos of some interesting projects, along with a list of a few references and links.
Le Chal is a haptic/vibrational-feedback shoe designed for the blind and visually impaired, one of the projects of Anirudh Sharma, a member of the NUI Group (Natural User Interface Group). Le Chal was conceptualized at the 2011 MIT Media Lab Design and Innovation Workshop at COEP. Take Me There Shoe / Le Chal
October 11, 2011 is a special day. A number of software programmers will be working to develop "innovative, touch-enabled applications for the autism community and make this software available for free on HackingAutism.org." Take a moment to watch the following video clip, and then explore the Hacking Autism website! "When touch-enabled computing was introduced to the world, no one could have anticipated that this technology might help open up a new world of communication, learning and social possibilities for autistic children. Yet it has.
Hacking Autism is a story of technology and hope and the difference it's making in the lives of some people who need it most. Hacking Autism doesn't seek to cure autism, but rather it aims to facilitate and accelerate technology-based ideas to help give those with autism a voice." -hackingautism.org

Touch technology + people with autism spectrum disorders = one of the reasons why I returned to school to take computer courses and explore natural user interfaces and interaction.
The people at Stantum have been working hard to improve multi-touch technology, focusing on smaller tablet-sized systems. Stantum is a company I've been following for several years, from the time it was known as Jazz Mutant. I have been impressed by Stantum's focus on the needs of people as well as the company's careful attention to important details.
I'm pleased to see that the company has an idea of how its multi-modal technology can support multi-touch in education: "Ambidexterity and multi-modality are the two pillars of Stantum's core project – making the use of touch-enabled devices more creative and productive. Amongst others, there is one field of application where we truly see a soaring need for ambidexterity and multi-modality – augmented textbooks." -Guillaume Largillier
At the Society for Information Display's Display Week exhibition this past May, Stantum introduced a new palm rejection feature for its Interpolated Voltage Sensitivity technology. This technology provides users with a more natural way to interact with the interface and application content on tablets. The technology supports Android's multi-touch framework and is also Windows 7 certified. The palm rejection feature will be a welcome improvement for future multi-touch applications designed for education settings, where it is likely that more than one hand - or person - might be interacting with content on the screen at the same time.
Below are two videos that provide a glimpse of Stantum's innovations:
Stantum's technology can enable ten simultaneous touches, is highly responsive, and supports high-resolution content. According to a May press release, "Palm rejection is available as an API (application programming interface) to Windows and Android operating systems on x86 and ARM platforms. IVSM touch modules are offered to OEMs through the company’s Qualified Manufacturers Partners, comprising tier-one touch-screen manufacturers with high-volume production capabilities. More information is available at info@stantum.com"
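Stantum has not published how its palm rejection works, but the general idea behind any palm-rejection feature can be sketched: distinguish intentional fingertip touches from resting palms using contact geometry. The heuristic and the size threshold below are an arbitrary illustration of the concept, not Stantum's actual algorithm:

```python
# Naive palm-rejection heuristic: discard contacts whose reported area is
# too large to be a fingertip. Generic illustration only; the cutoff value
# is assumed, and real controllers use far more sophisticated methods.

FINGERTIP_MAX_AREA_MM2 = 150.0  # assumed finger/palm cutoff

def filter_touches(contacts):
    """Keep only contacts small enough to be fingertips.

    Each contact is a dict with 'x', 'y' (position) and 'area' in mm^2.
    """
    return [c for c in contacts if c["area"] <= FINGERTIP_MAX_AREA_MM2]

# One frame of raw contacts: two fingers plus a resting palm.
frame = [
    {"x": 120, "y": 340, "area": 55.0},    # index finger
    {"x": 180, "y": 520, "area": 900.0},   # resting palm -> rejected
    {"x": 150, "y": 360, "area": 60.0},    # middle finger
]
accepted = filter_touches(frame)
```

In an education setting this matters because a child writing with a stylus or finger naturally rests the other hand on the screen; without some form of rejection, every resting palm becomes a spurious touch event.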
NIME 2011 OSLO: The International Conference on New Interfaces for Musical Expression - NIME is an outgrowth of a workshop held at CHI 2001 (Human Factors in Computing Systems). "The NIME conference draws a varied group of participants, including researchers (musicology, computer science, interaction design, etc.), artists (musicians, composers, dancers, etc.) and developers (self-employed and industrial). The common denominator is the mutual interest in groundbreaking technology and music, and contributions to the conference cover everything from basic research on human cognition through experimental technological devices to multimedia performances." Just take a look at all of the presentations that were at NIME 2011! NIME 2011 Program (pdf)
Touch the Web 2011: 2nd International Workshop on Web-Enabled Objects, June 20-24, 2011, Paphos, Cyprus (in conjunction with the International Conference on Web Engineering, ICWE) "The vision of the Internet of Things builds upon the use of embedded systems to control devices, tools and appliances. With the addition of novel communications capabilities and identification means such as RFID, systems can now gather information from other sensors, devices and computers on the network, or enable user-oriented customization and operations through short-range communication. When the information gathered by different sensors is shared by means of open Web standards, new services can be defined on top of physical elements. In addition, the new generation of mobile phones enables a true mobile Internet experience. These phones are today's ubiquitous information access tool, and the physical token of our 'Digital Me'. These meshes of things and 'Digital Me' will become the basis upon which future smart living, working and production places will be created, delivering services directly where they are needed."
Upcoming Conferences and Workshops
Ubicomp 2011, September 17-21, 2011, Beijing, China
"Ubicomp is the premier outlet for novel research contributions that advance the state of the art in the design, development, deployment, evaluation and understanding of ubiquitous computing systems. Ubicomp is an interdisciplinary field of research and development that utilizes and integrates pervasive, wireless, embedded, wearable and/or mobile technologies to bridge the gaps between the digital and physical worlds. The Ubicomp 2011 program features keynotes, technical paper sessions, specialized workshops, live demonstrations, posters, video presentations, panels, industrial exhibition and a Doctoral Colloquium."
"Given its multi-disciplinary nature, Ubicomp has developed a broad base of audience over the past 12 years. Key audience communities are: Human Computer Interaction, Pervasive Computing, Distributed and Mobile Computing, Real World Modeling, Sensors and Devices, Middleware and Systems research, Programming Models and Tools, and Human Centric Validation and Experience Characterization. More detailed information about the topical focus of Ubicomp can be found in the Call for Papers."
Eurodisplay 2011: XXXI International Display Research Conference, September 19-22, 2011, Bordeaux-Arcachon, France "Eurodisplay 2011 in Bordeaux/Arcachon provides researchers, engineers, and technical managers a unique opportunity to present their results and update their knowledge in all display-related research fields... The two keynote addresses on the first day of the Eurodisplay 2011 Symposium, on September 20th, will be a unique chance to hear a global overview of the future of the display market (Samsung LCD) and of display applications in the automotive industry (Daimler AG).
Eurodisplay '11 in Bordeaux-Arcachon will be the right spot to learn more about the latest results in display research and related fields such as organic electronics. Cognitive science will give a new vision of the impact of displays on our day-to-day life, not only from a perception standpoint but from an information standpoint, since SID deals with 'Information Display' and not only technologies. The two invited talks, by Prof. Bernard Claverie and Dr. Lauren Palmateer, will focus on this aspect... A dedicated business-oriented session will address two important aspects of our business, from display company creation by Thierry Leroux to end-user trend analysis for the TV market by Dr. Jae Shin. This conference will provide a global vision of our display world, including not only the well-known display leaders but also the very active BRIC countries... Dr. V. Pellegrini Mammana will give a vivid illustration of this display-industry dynamic in Brazil."
UIST Symposium, October 16-19, 2011, Santa Barbara, California "UIST (ACM Symposium on User Interface Software and Technology) is the premier forum for innovations in the software and technology of human-computer interfaces. Sponsored by ACM's special interest groups on computer-human interaction (SIGCHI) and computer graphics (SIGGRAPH), UIST brings together researchers and practitioners from diverse areas that include traditional graphical & web user interfaces, tangible & ubiquitous computing, virtual & augmented reality, multimedia, new input & output devices, and CSCW. The intimate size, single track, and comfortable surroundings make this symposium an ideal opportunity to exchange research results and implementation experiences."
VisWeek 2011: Vis, InfoVis, VAST, October 23-28, 2011, Providence, RI "Computer-based information visualization centers around helping people explore or explain data through interactive software that exploits the capabilities of the human perceptual system. A key challenge in information visualization is designing a cognitively useful spatial mapping of a dataset that is not inherently spatial, and accompanying the mapping with interaction techniques that allow people to intuitively explore the dataset. Information visualization draws on the intellectual history of several traditions, including computer graphics, human-computer interaction, cognitive psychology, semiotics, graphic design, statistical graphics, cartography, and art. The synthesis of relevant ideas from these fields with new methodologies and techniques made possible by interactive computation are critical for helping people keep pace with the torrents of data confronting them."
6th Annual ACM Conference on Interactive Tabletops and Surfaces (ITS 2011), November 13-16, 2011, Portopia Hotel, Kobe, Japan "The Interactive Tabletops and Surfaces 2011 Conference (ITS) is a premier venue for presenting research in the design and use of new and emerging tabletop and interactive surface technologies. As a new community, we embrace the growth of the discipline in a wide variety of areas, including innovations in ITS hardware, software, design, and projects expanding our understanding of design considerations of ITS technologies and of their applications."
AFFINE: 4th International Workshop on Affective Interaction in Natural Environments (ICMI 2011), November 17, 2011, Alicante, Spain (CFP deadline is August 19, 2011) Scope: "Computer gaming has been acknowledged as one of the computing disciplines which proposes new interaction paradigms to be replicated by software engineers and developers in other fields. The abundance of high-performance, yet lightweight and mobile devices and wireless controllers has revolutionized gaming, especially when taking into account the individual affective expressivity of each player and the possibility to exploit social networking infrastructure. As a result, new gaming experiences are now possible, maximizing users' skill level, while also maintaining their interest in the game's challenges, resulting in a state which psychologists call flow: "a state of concentration or complete absorption with the activity at hand and the situation". The result of this amalgamation of gaming, affective and social computing has brought increased interest in the field in terms of interdisciplinary research... Natural interaction plays an important role in this process, since it gives game players the opportunity to leave behind traditional interaction paradigms, based on keyboards and mice, and control games using the same concepts they employ in everyday human-human interaction: hand gestures, facial expressions and head nods, body stance and speech. These means of interaction are now easy to capture, thanks to low-cost visual, audio and physiological signal sensors, while models from psychology, theory of mind and ergonomics can be put to use to map features from those modalities to higher-level concepts, such as desires, intentions and goals.
In addition to that, non-verbal cues such as eye gaze and facial expressions can serve as valuable indicators of player satisfaction and help game designers provide the optimal experience for players: games which are not frustratingly hard, but still challenging and not boring... Another aspect which makes computer gaming an important field for multimodal interaction is the new breed of multimodal data it can generate: besides videos of people playing games in front of computer screens or consoles, which include facial, body and speech expressivity, researchers in the field of affective computing and multimodal interaction may benefit from mapping events in those videos (e.g. facial signs of frustration) to specific events in the game (large number of enemies or obstacles close to the player) and infer additional user states such as engagement and immersion. Individual and prototypical user models can be built based on that information, helping produce affective and immersive experiences which maintain the concept of 'flow'. This workshop will cover real-time and off-line computational techniques for the recognition and interpretation of multimodal verbal and non-verbal activity and behaviour, modelling and evolution of player and interaction contexts, and synthesis of believable behaviour and task objectives for non-player characters in games and human-robot interaction. The workshop also welcomes studies that provide new insights into the use of gaming to capture multimodal, affective databases, low-cost sensors to capture user expressivity beyond the visual and speech modalities and concepts from collective intelligence and group modelling to support multi-party interaction."
"Major Topics of Interest to IUI include: Intelligent interactive interfaces, systems, and devices, Ubiquitous interfaces, Smart environments and tools, Human-centered interfaces, Mobile interfaces, Multimodal interfaces, Pen-based interfaces, Spoken and natural language interfaces, Conversational interfaces, Affective and social interfaces, Tangible interfaces, Collaborative multiuser interfaces, Adaptive interfaces, Sensor-based interfaces, User modeling and interaction with novel interfaces and devices, Interfaces for personalization and recommender systems, Interfaces for plan-based systems, Interfaces that incorporate knowledge- or agent-based approaches, Help interfaces for complex tasks, Example- and demonstration-based interfaces, Interfaces for intelligent generation and presentation of information, Intelligent authoring systems, Synthesis of multimodal virtual characters and social robots, Interfaces for games and entertainment, learning-based interactions, health informatics, Empirical studies and evaluations of IUI interfaces, New approaches to designing Intelligent User Interfaces, and related areas"
IXDA: Interaction 12, Dublin, Ireland, February 1-4, 2012 "Interaction|12 is an ideal venue to showcase the most inspiring and original stories in interaction design. We have a world of creative talent to tap into, so our conference roster will fill up fast. We’re on the lookout for thoughtful, original proposals that will inspire our community of interaction designers from all over the world. Do you have an uncommon or enlightening design story, valuable lessons learned from hands-on experience and want to be a part of the programme at Interaction|12? You do? Great!"
"Sifteo cubes are 1.5 inch computers with full-color displays that sense their motion, sense each other, and wirelessly connect to your computer. You, your friends, and your family can play an ever-growing array of interactive games that get your brain and body engaged.
Sifteo’s initial collection of titles includes challenging games for adults, fun learning puzzles for kids, and games people can play together." -Sifteo website
About two years ago, I was interviewed about my thoughts on the interactive, hands-on, programmable cubes, then called Siftables, for an article published in IEEE's Computing Now magazine: Siftables Offer New Interaction Mode (James Figueroa, Computing Now, 3/2009).
For those of you who'd like more information about tangible user interfaces (TUIs) and the development of Siftables, I've copied my 2009 post, Tangible User Interfaces, Part I: Siftables, below:
TANGIBLE USER INTERFACES, PART I: SIFTABLES (2009)
In 1997, the vision of tangible user interfaces, also known as TUIs, was outlined by Hiroshi Ishii and Brygg Ullmer of the Tangible Media Group at MIT in their paper, "Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms" (pdf). According to this vision, "the goal of Tangible Bits is to bridge the gaps between both cyberspace and the physical environment, as well as the foreground and background of human activities." This article is a must-read for anyone interested in "new" interactive technologies.
The pictures in the article of the metaDesk, transBoard, activeLENS, and ambientRoom, along with the references, are worth a look for those interested in this seminal work.
Another must-read is Hiroshi Ishii's 2008 article, Tangible Bits: Beyond Pixels (pdf). In this article, Ishii provides a good overview of TUI concepts as well as the contributions of his lab to the field since the first paper was written.
Related to tangible user interface research is the work of the Fluid Interfaces Group at MIT. The Fluid Interfaces Group was formerly known as the Ambient Intelligence Group, and many of the group's projects incorporate concepts related to TUI and ambient intelligence.
According to the Fluid Interfaces website, the goal of this research group is to "radically rethink the human-machine interactive experience. By designing interfaces that are more immersive, more intelligent, and more interactive we are changing the human-machine relationship and creating systems that are more responsive to people's needs and actions, and that become true "accessories" for expanding our minds."
The Siftables project is an example of how TUI and fluid interface (FI) interaction can be combined. Siftables is the work of David Merrill and Pattie Maes, in collaboration with Jeevan Kalanithi, and was brought to popular attention through David Merrill's recent TED talk, "Siftables: Making the digital physical - Grasp Information Physically." "Siftables aims to enable people to interact with information and media in physical, natural ways that approach interactions with physical objects in our everyday lives. As an interaction platform, Siftables applies technology and methodology from wireless sensor networks to tangible user interfaces. Siftables are independent, compact devices with sensing, graphical display, and wireless communication capabilities. They can be physically manipulated as a group to interact with digital information and media. Siftables can be used to implement any number of gestural interaction languages and HCI applications... Siftables can sense their neighbors, allowing applications to utilize topological arrangement... No special sensing surface or cameras are needed."
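The neighbor-sensing behavior described in that quote - each device detecting which cubes sit beside it, so the application can react to the group's arrangement - can be modeled in a few lines. This is a toy sketch of the concept on an integer grid, not the real Siftables SDK:

```python
# Toy model of tile-style neighbor sensing: cubes placed on an integer
# grid detect adjacency on four sides, and the application reads out the
# resulting topology. Illustrative only; not an actual Siftables API.

NEIGHBOR_OFFSETS = {
    "left": (-1, 0), "right": (1, 0), "top": (0, -1), "bottom": (0, 1),
}

def neighbors(positions):
    """Map each cube id to {side: neighbor_id}, given grid positions."""
    by_pos = {pos: cid for cid, pos in positions.items()}
    result = {}
    for cid, (x, y) in positions.items():
        result[cid] = {
            side: by_pos[(x + dx, y + dy)]
            for side, (dx, dy) in NEIGHBOR_OFFSETS.items()
            if (x + dx, y + dy) in by_pos
        }
    return result

# Three cubes placed in a row: A - B - C
layout = {"A": (0, 0), "B": (1, 0), "C": (2, 0)}
adj = neighbors(layout)
```

A word game, for instance, could check whether the letters on a row of adjacent cubes spell a word simply by walking the "right" links - the physical arrangement becomes the input.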
It seems that people either really like the Siftables concept or don't see the point. I found the following humorous critique of Siftables on YouTube:
"Imagine if all the little programs you had on your iphone were little separate chicklets in your pocket.
You'd lose em.
Your cat would eat em.
You'd vacuum them up.
They'd fall down in the sofa.
They'd be all over the car floor.
You'd throw them away by mistake..."
In my opinion, it is exciting to learn that some of this technology has the potential to become mainstream.
"It’s easy to forget that the computer mouse is over 45 years old."
"What’s not as easy to forget is that we’re now collectively getting used to interacting with computers via means and interfaces that have moved way beyond the keyboard and the mouse — the iPhone and Wii being the most prominent examples."
"The truth is that we stand on the verge of a major revolution in the models of Human Computer Interaction (HCI). A revolution that will fly right past academia and into a world of retail, medical, gaming, military, public event, sporting, personal and marketing applications."
"From multi-touch to motion capture to spatial operating environments, over the next 10 years, everything we know about HCI will change."
"Blur is the only conference that is exploring the line of interaction between computers and humans in a substantive, real-world and hands-on way."
"At Blur, vendors, strategists, buyers and visionaries assemble to not only discuss the larger issues of HCI, but also to lay their hands on the latest in HCI technology. Blur is the only forum for a focused, hands-on exploration of the varied technologies evolving in HCI."
"Come play, investigate, learn and apply at Blur — where we’re changing how you interact with computers forever." -Blur
BLUR Conference Agenda (Note: I added the links to conference participants and/or their organizations. Feel free to leave a comment if you know of any corrections or better links!) Keynotes:
Neuroergonomics: How an Understanding of the Brain is Changing the Practice of Human Factors Engineering - Dr. Kay Stanney, Design Interactive