In Chapter 1 of Natural User Interfaces in .NET, Josh Blake asks and answers a question posed by many people who have been under the spell of keyboard input and GUI/WIMP interaction:
Why bother switching from GUI to NUI? The answer? Read Chapter 1 (pdf) of the book - the chapter is free.
Here are a few of my personal reasons:
1. I want to buy the next version of the iPad or something like it.
2. I want to buy a new large-screen Internet HD TV.
3. I want to buy a Kinect.
4. I do NOT want to interact with my new TV with a Sony remote. Too many tiny buttons!
5. I do NOT want to interact with my new TV with a keyboard, because it reminds me of...work.
6. Most importantly:
I want to design apps for the people I care about, and others with similar needs:
My mom.
My grandson.
Moms and dads with kids in tow.
People with special needs and/or health concerns, and the people who care for and guide them.
Knowledge sharers and (life-long) learners....
"It’s easy to forget that the computer mouse is over 45 years old."
"What’s not as easy to forget is that we’re now collectively getting used to interacting with computers via means and interfaces that have moved way beyond the keyboard and the mouse — the iPhone and Wii being the most prominent examples."
"The truth is that we stand on the verge of a major revolution in the models of Human Computer Interaction (HCI). A revolution that will fly right past academic and into a world of retail, medical, gaming, military, public event, sporting, personal and marketing applications."
"From multi-touch to motion capture to spatial operating environments, over the next 10 years, everything we know about HCI will change."
"Blur is the only conference that is exploring the line of interaction between computers and humans in a substantive, real-world and hands-on way."
"At Blur, vendors, strategists, buyers and visionaries assemble to not only discuss the larger issues of HCI, but also to lay their hands on the latest in HCI technology. Blur is the only forum for a focused, hands-on exploration of the varied technologies evolving in the HCI."
"Come play, investigate, learn and apply at Blur — where we’re changing how you interact with computers forever." -Blur
BLUR Conference Agenda (Note: I added the links to conference participants and/or their organizations. Feel free to leave a comment if you know of any corrections or better links!) Keynotes:
Neuroergonomics: How an Understanding of the Brain is Changing the Practice of Human Factors Engineering - Dr. Kay Stanney, Design Interactive
I love to dance. I studied dance through college, and off and on as an adult. I have a DDR (Dance Dance Revolution) game-floor pad somewhere in my attic gathering dust. I'm ready for new challenges.
I'm planning on buying a couple of new dance games for the Wii and the Kinect. There is more to this story: given my interest in off-the-desktop, post-WIMP HCI (human-computer interaction), interactive multimedia and games, and my career as a school psychologist dedicated to young people with disabilities, I'm excited to see where new technologies, interfaces, and interactions will take us.
So what do the wise men of usability have to say about new ways of interacting with games and other applications?
"Kinect has many great design elements that clearly show that the team (a) knows usability, (b) did user testing, and (c) had management support to prioritize usability improvements, even when they required extra development work." -Jakob Nielsen
Jakob Nielsen, one of the godfathers of usability, shared a few words of wisdom about the Kinect in his 12/27/10 Alertbox post: Kinect Gestural UI: First Impressions. Although he did not review Dance Central, he concludes that the game he reviewed, Kinect Adventures, was fun to play, despite usability problems.
If this is a topic that interests you, I recommend you read Nielsen's post and take a look at the usability issues outlined there. Also take a look at a recent essay Nielsen co-authored with Don Norman, another godfather of usability: Gestural Interfaces: A Step Backwards In Usability
Why is this topic important to me?
I have been involved in the Games for Health and Game Accessibility movement for many years. Lately I've been exploring the OpenKinect project with an aim to create ways of making movement-oriented games accessible for young people with more complex disabilities. For example, there is a need to have dance and movement games modified for students (and adults!) who need wheelchairs or walkers. There are students who have milder mobility challenges who love to dance, and the current games don't address their needs. Some of my students have vision or hearing impairments, too. They deserve a chance to play things designed for the Kinect.
"OpenKinect is an open community of people interested in making use of the amazing Xbox Kinect hardware with our PCs and other devices. We are working on free, open source libraries that will enable the Kinect to be used with Windows, Linux, and Mac."
Note: I currently work as a school psychologist with students up to age 22. My main office is adjacent to a large OT and PT room at Wolfe, a program for students who have special needs. We just had a large interactive whiteboard installed in the room that is begging for us to connect it with the school's Wii, and soon (we hope), a Kinect. If we are going to use dance games to help promote healthy activities among our special students, the games need to be accessible for students with cognitive, motor, and other limitations.
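One concrete way a dance game could be made more accessible, along the lines discussed above, is to widen the timing window in which a move "counts", so students with motor coordination challenges aren't penalized for slower responses. Here is a minimal sketch of that idea; the function name, parameters, and numbers are all hypothetical, not taken from any actual game:

```python
# Hypothetical accessibility tweak: widen the timing window for scoring a
# dance move. A player or caregiver could raise accessibility_scale
# (e.g. to 3.0) so slower responses still score.

def move_is_scored(move_time, player_time, base_window=0.25, accessibility_scale=1.0):
    """Return True if the player's move lands inside the timing window.

    base_window:          seconds of tolerance for a typical player
    accessibility_scale:  multiplier that widens the window for players
                          who need more response time
    """
    window = base_window * accessibility_scale
    return abs(player_time - move_time) <= window

# With the default window, a move 0.6 s late is missed...
print(move_is_scored(10.0, 10.6))                           # False
# ...but with a 3x accessibility window it still scores.
print(move_is_scored(10.0, 10.6, accessibility_scale=3.0))  # True
```

The same pattern could apply to other adjustments, such as accepting seated or upper-body-only versions of a move.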
FIRST STEPS
Although I can dance, I understand what the world is like through the eyes of many of the young people I work with, whose motor coordination and sensory integration problems interfere with their ability to move and dance, let alone access fast-paced dance games on the Wii or Kinect.
My initial plan is to look at what the new dance games might be like from the view of someone who doesn't know how to dance, admits to having "two left feet", and, perhaps, has no sense of rhythm. Where would I start?
The Wii's Just Dance 2 seems to offer some support for learning how to dance through simple movement icons, in the form of outlined figures, that show how to move along with the dancer on the screen. As you can see from the video below, the gamer is given information about upcoming moves throughout the game.
I decided to take a look at Just Dance 2's MIKA "Big Girl" (You Are Beautiful) because some of the adolescent females I work with have weight concerns that interfere with their health. During the teen years, this can become a vicious cycle, resulting in less movement and less participation with peers in physical activities, such as playing dance games. If a teen has depression as part of this mix, we know that exercise can help, and a fun dance game might be a life-saver, in more ways than one.
The screen shots below show how the movement icons are used in the game:
I thought it would be useful to learn more about the story behind the making of Just Dance 2. At 2:22, Alexia, the project's usability expert, makes her presence known. From what I can tell, she focused on aspects of the game that would make it easier for non-dancers, including those with "two left feet", to play. (I don't know if anyone was consulted about accessibility concerns for the game.)
Kinect Dance Central
Dance Central uses a different approach when it comes to "teaching" people how to dance along through the game. It would be interesting to test out Dance Central and Just Dance 2 with the same set of people to get a better feel for what works and what doesn't. Below is a video that previews, in split-screen, the interaction that takes place in Dance Central:
Dance Central Full Motion Preview
In Dance Central, gamers are provided with information about the moves through icons that cycle up the right hand side of the screen. The level of dance-coordination to keep up with the moves is challenging at times, even for people who are OK at dancing. Players can select dances according to level of difficulty.
Kinect Usability with Regular People
Steve Cable (CX Partners) shared his team's look at usability issues related to the Kinect, testing several games, including Dance Central, with groups of people, in his article, "Designing for Xbox Kinect - a usability study". The quote below is from Steve's article:
"We’ve loved playing with the Kinect. There’s no doubt that the game play is lots of fun. In-game menus are a barrier to that fun. Kinect should allow players to move through menus quickly and compensate for inaccuracy.
We felt the Kinect would benefit from some standardised global controls – much like a controller uses the A button to select and the B button to move backwards. We also think it needs a more responsive pause gesture – one that doesn’t interfere with the user’s game play.
Most of our participants found the Dance Central menu to be more effective, more efficient and more satisfying to use. Here are our recommendations for designing a Kinect menu interface:
Allow users to make selections through positive gestures, rather than timed positions
Place options on a single axis to make them easier and quicker to select
Allow users to control menus with the game pad if they prefer
Use large easy to read text
Don’t make users scroll through options unnecessarily – it takes too long
Users will be distracted if used in a social setting – test your menus in a social context to see if they are prone to errors
Avoid the cursor metaphor, it’s not what gamers are used to seeing in game menus, and makes it harder to implement alternative joypad controls"
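The first recommendation above, selecting through positive gestures rather than timed positions, can be illustrated with a small sketch. The idea is to fire a menu selection only when the hand moves deliberately toward the sensor (a "push"), never from hovering alone. Everything here is invented for illustration: the function, the thresholds, and the sample depth readings are not from any real Kinect SDK.

```python
# Hypothetical sketch: select via a positive "push" gesture instead of a
# timed hover. We watch the hand's depth (distance from the sensor, in
# meters) over recent frames and fire a selection only when it moves
# toward the screen far enough, fast enough.

def detect_push(depth_samples, min_travel=0.12, max_frames=8):
    """Return True if the hand moved >= min_travel meters toward the
    sensor within the last max_frames samples (a deliberate push)."""
    recent = depth_samples[-max_frames:]
    if len(recent) < 2:
        return False
    # Depth decreases as the hand approaches the sensor.
    travel = recent[0] - min(recent)
    return travel >= min_travel

# A hand hovering in place never selects, no matter how long it waits...
hover = [1.50, 1.51, 1.49, 1.50, 1.50, 1.51, 1.50, 1.49]
print(detect_push(hover))   # False
# ...while a quick push toward the screen does.
push = [1.50, 1.48, 1.42, 1.35]
print(detect_push(push))    # True
```

A design like this also sidesteps the accidental selections that timed-hover menus cause in social settings, which the study calls out.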
Below are screen shots that provide examples of how the movement icons are displayed in Dance Central:
"Our research shows that the vast majority of people polled in both developed and emerging markets see great potential for NUI applications beyond entertainment. This is especially true in China and India, where 9 out of 10 respondents indicate they are likely to use NUI technology across a range of lifestyle areas – from work, education and healthcare, to social connections, entertainment and the environment. We believe that taking technology to the next billion can be aided by NUI – making technology more accessible and more intuitive to a wider audience". - Steve Clayton, Microsoft
The people at Microsoft don't own the concept! I'm a member of the NUI Group (since May 2007) and SparkOn. Both are on-line communities where you can find people who live and breathe NUI, learn about their work, and even share designs and code. If you are intrigued by NUI - as a designer, developer, or user - please join us.
Note: I've been an evangelist and cheerleader for the NUI cause for many years. If you search this blog for "post-WIMP", "NUI", "multi-touch", "gesture", "off-the-desktop", "natural user interaction", "natural user interface", or even "DOOH", you'll be provided with an overwhelming number of posts that include videos, photographs, and links to NUI-related resources, including scholarly articles. There is a small-but-growing number of people from many disciplines, quietly working on NUI-related projects.
Nicolas Loeillot, of LM3LABS, has been ahead of the natural user interaction/interface game for many years as his company has expanded. He's done quite a bit of deep thinking about the work of his company, and has used this wisdom to create a nice concept map that describes how LM3LABS' solutions fit into the world of gesture-based control and interaction:
In my opinion, this chart would make a great template for mapping out other natural interaction applications and products!
Here is the description of the concepts outlined in the chart:
"If all of them belong to the “gesture control” world, the best segmentation is made from 4 categories:
Finger tracking: precise finger tracking, it can be single touch or multi-touch (this latest not always being a plus). Finger tracking also encompasses hand tracking which comes, for LM3LABS products, as a gestures.
Body tracking: using one’s body as a pointing device. Body tracking can be associated to “passive” interactivity (users are engaged without their decision to be) or “active” interactivity like 3D Feel where “players” use their body to interact with content.
Face tracking: using user face as a pointing device. It can be mono user or multiple users. Face tracking is a “passive” interactivity tool for engaging user in an interactive relationship with digital content.
Image Tracking: Augmented Reality (AR) lets users use images (flyers, real products, t-shirts, faces,…) to interact with digital content. AR can be markerless or marker-based. Markerless technology has advantages but marker-based AR is easier for users to understand. (Please note here that Markerless AR is made in close collaboration with AR leader Total Immersion)." -LM3LABS
If you are interested in this subject and want to view some good examples of off-the-desktop interfaces and interactions, take a look at the LM3LABS blog, as well as Nicolas Loeillot's Vimeo channel. Also take a look at the sample of posts I've written about LM3LABS over the last few years - the links are at the end of this post.
I first discovered LM3LABS when I was taking a VR class and researching interactive, immersive large displays in 2005 or 2006. Back then, there wasn't much information about this sort of technology. A lot has changed since then!
I've learned quite a bit from watching LM3LABS (and others) grow, given my passion for postWIMP interactive technology and my commitment to blogging about this subject. Nicolas has really worked hard in this arena. As early as 2005, LM3LABS was working with Scala to provide "smart" interactive displays, and his company's applications have been supported by computer vision technologies for many years, allowing for gesture-based, or "touch-less" interaction, as demonstrated by the Catchyoo Interactive Table. This application caught my eye back in early 2007, when I was working on projects for large interactive displays for my HCI and Ubicomp classes, and was thinking about creating a table-top application.
My hunch is that LM3LABS has set the foundation for further growth in the future, given the lessons they've learned by taking risks with postWIMP technologies over the past few years!
"Founded in 2003 by a team of passionate researchers, engineers, designers, and marketers from various international backgrounds, focused on fast transformation of innovation into unique products, LM3LABS is a recognized pioneer in computer vision-based interactivity solutions. Keeping a strong customer focus, LM3LABS' team of unique people pioneers new directions, explores new concepts, new technologies and new interactions. Engaging, playful and magic, LM3LABS' products and solutions are always scalable and reliable"
info@lm3labs.com
Note to readers:
Over the past couple of years there has been an explosion of postWIMP technologies and applications, and with this pace, it has been difficult for me to keep abreast of it all. There is quite a bit I miss, given my full time job and daily life!
I welcome information about postWIMP interactive technologies and applications from my readers. Due to time constraints, not interest, I am not always able to post about a topic as soon as I'd like. That is OK, as my intention is not to be the first blogger to spread the latest tech news. I like to dig in deep when I can and make connections between innovative, interesting technologies and the people and ideas behind them.
Josh Blake recently interviewed Tamir Berliner, one of the founders of PrimeSense. If you haven't heard, Microsoft's Kinect was based on PrimeSense's work, and Microsoft licensed the company's technology. PrimeSense provides consumer electronics with natural user interaction capabilities. The good news is that the company recently released open-source middleware for natural interaction, along with depth-camera drivers. It will be interesting to see how this plays out in the near future!
In the interview, Tamir discussed a number of topics related to postWIMP technologies. He also announced the newly created OpenNI, "an industry-led, not-for-profit organization formed to certify compatibility and interoperability of Natural Interaction (NI) devices, applications, and middleware." It is good to see this level of support for the cause!
Here is a quote from the interview that I especially liked:
"I believe that till today the devices we’ve been using, made us learn greatly lot about them before we could use them and gain their value. I’m pretty sure everyone who is reading this has got at least 3 remotes sitting on his living room table, and at least once a week needs to help someone use their computer/media center/phone/etc. It’s time for that to change and it’s up to us, the technologists to make this revolution happen, it’s time for the devices to take the step of understanding what we want and making sure we get that, even without asking if it’s a trivial task as opening a door when we approach, closing the lights when we leave the room, even making sure we have hot water to shower with when we return from work or wake up in the morning, depends on what we normally do." -Tamir
RELATED Here are a couple of videos from the OpenNI website that demonstrate OpenNI-compliant applications:
OpenNI-compliant real time skeleton tracking by PrimeSense
OpenNI-compliant real time SceneAnalyzer by PrimeSense
FYI:
Josh Blake is the author of the Deconstructing the NUI blog. Over the past couple of years, he's explored natural user interfaces and interactions through his work on applications designed for Microsoft Surface and Win7 with Windows Presentation Foundation.
About a month ago, Josh organized OpenKinect, an on-line community to support collaboration among people interested in exploring ways to use Kinect with PCs and other devices. An example of this effort is the open source code, libfreenect, which includes drivers and libraries for Windows, Linux, and OS X.
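For anyone curious what working with libfreenect data involves: the Kinect reports depth not in meters but as raw 11-bit values (0-2047). One approximation circulated in the OpenKinect community, often attributed to Stéphane Magnenat, converts a raw reading to an approximate distance. The sketch below shows that conversion; treat the formula as a community approximation rather than an official specification:

```python
import math

# Community approximation for converting a raw 11-bit Kinect depth value
# (as reported by libfreenect) to an approximate distance in meters.

def raw_depth_to_meters(raw):
    """Approximate distance in meters for an 11-bit Kinect depth value.
    Raw values at the 2047 ceiling mean 'no reading', so return NaN."""
    if raw >= 2047:
        return float("nan")
    return 0.1236 * math.tan(raw / 2842.5 + 1.1863)

# A mid-range raw reading maps to roughly a meter from the sensor.
print(round(raw_depth_to_meters(800), 2))
```

Conversions like this are the first step toward the kind of gesture and movement processing the accessibility projects above would need.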
Here are a couple of new natural user interface videos. The first video, by Evoluce, demonstrates gesture interaction/navigation in Windows 7 applications supported by Kinect. The second video, by Immersive Labs, shows multi-touch product browsing interaction on a large display.
The information below was taken from the website for the 13th International Conference on Multimodal Interaction. I'm excited about the range of topics that the conference will cover. I look forward to sharing more about the work of the members of this group on this blog in the future! (I've highlighted the topics that interest me the most.)
INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION CALL FOR PAPERS
The International Conference on Multimodal Interaction, ICMI 2011, will take place in Alicante (Spain), November 14-18, 2011, just after ICCV 2011 (in Barcelona, Spain). This is the thirteenth edition of the International Conference on Multimodal Interfaces, which for the last two years joined efforts with the Workshop on Machine Learning for Multimodal Interaction (ICMI-MLMI 2009 and 2010). Starting with this edition, the conference uses the new, shorter name.
The new ICMI is the premier international forum for multimodal signal processing and multimedia human-computer interaction. The conference will focus on theoretical and empirical foundations, varied component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and system development. ICMI 2011 will feature a single-track main conference which includes: keynote speakers, technical full and short papers (including oral and poster presentations), special sessions, demonstrations, exhibits and doctoral spotlight papers. The conference will be followed by workshops. The proceedings of ICMI 2011 will be published by ACM as part of their series of International Conference Proceedings and will also be distributed to the attendees on USB memory sticks.
Topics of interest include but are not limited to:
Multimodal and multimedia interactive processing
Multimodal fusion, multimodal output generation, multimodal interactive discourse and dialogue modeling, machine learning methods for multimodal interaction.
Multimodal input and output interfaces
Gaze and vision-based interfaces, speech and conversational interfaces, pen-based and haptic interfaces, virtual/augmented reality interfaces, biometric interfaces, adaptive multimodal interfaces, natural user interfaces, authoring techniques, architectures.
Multimodal and interactive applications
Mobile and ubiquitous interfaces, meeting analysis and meeting spaces, interfaces to media content and entertainment, human-robot interfaces and interaction, audio/speech and vision interfaces for gaming, multimodal interaction issues in telepresence, vehicular applications and navigational aids, interfaces for intelligent environments, universal access and assistive computing, multimodal indexing, structuring and summarization.
Human interaction analysis and modeling
Modeling and analysis of multimodal human-human communication, audio-visual perception of human interaction, analysis and modeling of verbal and nonverbal interaction, cognitive modeling.
Multimodal and interactive data, evaluation, and standards
Evaluation techniques and methodologies, annotation and browsing of multimodal and interactive data, standards for multimodal interactive interfaces.