I love to dance. I studied dance through college, and off and on as an adult. I have a DDR (Dance Dance Revolution) game floor pad somewhere in my attic gathering dust. I'm ready for new challenges.
I'm planning on buying a couple of new dance games for the Wii and the Kinect. There is more to this story: given my interest in off-the-desktop, post-WIMP HCI (human-computer interaction), interactive multimedia and games, and a career as a school psychologist dedicated to young people with disabilities, I'm excited to see where new technologies, interfaces, and interactions will take us.
So what do the wise men of usability have to say about new ways of interacting with games and other applications?
"Kinect has many great design elements that clearly show that the team (a) knows usability, (b) did user testing, and (c) had management support to prioritize usability improvements, even when they required extra development work." -Jakob Nielsen
Jakob Nielsen, one of the godfathers of usability, shared a few words of wisdom about the Kinect in his 12/27/10 Alertbox post: Kinect Gestural UI: First Impressions. Although he did not review Dance Central, he concludes that the game he reviewed, Kinect Adventures, was fun to play, despite usability problems.
If this is a topic that interests you, I recommend you read Nielsen's post and the usability issues outlined in it. Also take a look at a recent essay Nielsen co-authored with Don Norman, another godfather of usability: Gestural Interfaces: A Step Backwards In Usability.
Why is this topic important to me?
I have been involved in the Games for Health and Game Accessibility movement for many years. Lately I've been exploring the OpenKinect project with an aim to create ways of making movement-oriented games accessible for young people with more complex disabilities. For example, there is a need to have dance and movement games modified for students (and adults!) who need wheelchairs or walkers. There are students who have milder mobility challenges who love to dance, and the current games don't address their needs. Some of my students have vision or hearing impairments, too. They deserve a chance to play things designed for the Kinect.
"OpenKinect is an open community of people interested in making use of the amazing Xbox Kinect hardware with our PCs and other devices. We are working on free, open source libraries that will enable the Kinect to be used with Windows, Linux, and Mac."
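While I wait for the OpenKinect work to mature, I've been sketching what one accessibility adaptation might look like in code. The snippet below is purely hypothetical - the function name, coordinate boxes, and numbers are all mine, not from any existing game - but it illustrates a simple idea: remapping a dance game's full-body target positions into the smaller reach envelope of a seated player.

```python
# Hypothetical sketch: remap a dance game's target positions into a
# smaller "reach envelope" for a seated player. All names and numbers
# are illustrative; real skeleton data would come from an OpenKinect-based
# driver, which this sketch does not depend on.

def remap_target(x, y, standing_box, seated_box):
    """Linearly map a target point from a standing player's reach box
    into a seated player's (smaller) reach box.

    Boxes are (x_min, y_min, x_max, y_max), e.g. in meters.
    """
    sx0, sy0, sx1, sy1 = standing_box
    tx0, ty0, tx1, ty1 = seated_box
    # Normalize the point within the standing box, then scale it into
    # the seated box.
    u = (x - sx0) / (sx1 - sx0)
    v = (y - sy0) / (sy1 - sy0)
    return (tx0 + u * (tx1 - tx0), ty0 + v * (ty1 - ty0))

standing = (0.0, 0.0, 2.0, 2.2)   # roughly arm span x standing height
seated = (0.4, 0.6, 1.6, 1.6)     # area reachable from a wheelchair

# A move that asks a standing player to reach the top-right corner
# becomes a shorter reach at the seated box's top-right corner.
print(remap_target(2.0, 2.2, standing, seated))  # approximately (1.6, 1.6)
```

The same linear remapping could shrink or shift targets for players with limited range of motion on one side, simply by choosing a different seated box.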
Note: I currently work as a school psychologist with students up to age 22. My main office is adjacent to a large OT and PT room at Wolfe, a program for students who have special needs. We just had a large interactive whiteboard installed in the room that is begging for us to connect it with the school's Wii, and soon (we hope), a Kinect. If we are going to use dance games to help promote healthy activities among our special students, the games need to be accessible for students with cognitive, motor, and other limitations.
FIRST STEPS
Although I can dance, I understand what the world is like through the eyes of many of the young people I work with, who have motor coordination and sensory integration problems that interfere with their ability to move and dance, let alone access fast-paced dance games on the Wii or Kinect.
My initial plan is to look at what the new dance games might be like from the view of someone who doesn't know how to dance and admits to having "two left feet" - and perhaps no sense of rhythm. Where would I start?
The Wii's Just Dance 2 seems to offer some support for learning how to dance through the use of simple movement icons, in the form of outlined figures, that provide information about how to move along with the dancer on the screen. As you can see from the video below, the gamer is provided with information about upcoming moves throughout the game.
I decided to take a look at Just Dance 2's routine for MIKA's "Big Girl (You Are Beautiful)" because some of the adolescent females I work with have weight concerns that interfere with their health. During the teen years, this can become a vicious cycle, resulting in less movement and less participation with peers in physical activities, such as playing dance games. If a teen has depression as part of this mix, we know that exercise can help, and a fun dance game might be a life-saver, in more ways than one.
The screen shots below show how the movement icons are used in the game:
I thought it would be useful to learn more about the story behind the making of Just Dance 2. At 2:22, Alexia, the project's usability expert, makes her presence known. From what I can tell, she focused on aspects of the game that would make it easier for non-dancers, including those with "two left feet", to play. (I don't know if anyone was consulted about accessibility concerns for the game.)
Kinect Dance Central
Dance Central uses a different approach when it comes to "teaching" people how to dance along with the game. It would be interesting to test Dance Central and Just Dance 2 with the same set of people to get a better feel for what works and what doesn't. Below is a video that previews, in split-screen, the interaction that takes place in Dance Central:
Dance Central Full Motion Preview
In Dance Central, gamers are provided with information about the moves through icons that cycle up the right-hand side of the screen. The level of coordination needed to keep up with the moves is challenging at times, even for people who are reasonably good dancers. Players can select dances according to level of difficulty.
Kinect Usability with Regular People
Steve Cable (CX Partners) shared his team's look at usability issues related to the Kinect by testing several games, including Dance Central, with groups of people in his article, "Designing for Xbox Kinect - a usability study". The quote below is from Steve's article:
"We’ve loved playing with the Kinect. There’s no doubt that the game play is lots of fun. In-game menus are a barrier to that fun. Kinect should allow players to move through menus quickly and compensate for inaccuracy.
We felt the Kinect would benefit from some standardised global controls – much like a controller uses the A button to select and the B button to move backwards. We also think it needs a more responsive pause gesture – one that doesn’t interfere with the user’s game play.
Most of our participants found the Dance Central menu to be more effective, more efficient and more satisfying to use. Here are our recommendations for designing a Kinect menu interface:
Allow users to make selections through positive gestures, rather than timed positions
Place options on a single axis to make them easier and quicker to select
Allow users to control menus with the game pad if they prefer
Use large easy to read text
Don’t make users scroll through options unnecessarily – it takes too long
Users will be distracted if used in a social setting – test your menus in a social context to see if they are prone to errors
Avoid the cursor metaphor, it’s not what gamers are used to seeing in game menus, and makes it harder to implement alternative joypad controls"
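The first recommendation - positive gestures rather than timed positions - is easy to sketch in code. The snippet below is my own hypothetical illustration, not anything from the CX Partners study (the thresholds and sample numbers are invented): it confirms a menu selection only when the hand makes a deliberate push toward the sensor, instead of triggering after a timed hover.

```python
# Hypothetical sketch of "positive gesture" selection: confirm a choice
# with a deliberate push toward the sensor (a drop in hand depth) rather
# than a timed hover. Thresholds and sample values are invented.

PUSH_DISTANCE = 0.15   # meters the hand must move toward the sensor
PUSH_WINDOW = 10       # number of recent depth samples to consider

def is_push(depth_samples, distance=PUSH_DISTANCE, window=PUSH_WINDOW):
    """Return True if the hand moved at least `distance` closer to the
    sensor within the last `window` samples (depth decreases on a push)."""
    recent = depth_samples[-window:]
    return len(recent) >= 2 and max(recent) - recent[-1] >= distance

hovering = [1.20, 1.21, 1.19, 1.20, 1.21]   # jitter over a menu item
pushing = [1.20, 1.19, 1.15, 1.10, 1.04]    # deliberate push forward
print(is_push(hovering), is_push(pushing))  # False True
```

Because the player has to do something, small tracking jitters over a menu item never trigger an accidental selection - which is exactly the failure mode timed hovers suffer from in a social setting.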
Below are screen shots that provide examples of how the movement icons are displayed in Dance Central:
Today I came home and found one of my favorite publications, Interactions, on my kitchen counter. Imagine my excitement when I saw that the cover article for the Jan-Feb issue was "Proxemic Interactions: The New Ubicomp?". (pdf) The authors of this article are Saul Greenberg, Nicolai Marquardt, Till Ballendat, Rob Diaz-Marino, and Miaosen Wang, from the University of Calgary. (A list of some of the articles from the current issue of Interactions can be found at the end of this post.)
On the topic of proxemic interactions... I welcome the research that is germinating in this area, and agree with the authors of the Interactions cover article that there is much work to be done in this space!
I've been searching for great examples of interactive displays in public spaces. Since there is much room for improvement in this field, much of what I share on this blog is a bit disheartening. Most of my user-unfriendly encounters with large displays happen when I'm in an airport, mall, or health care facility, or on a cruise ship. Things are improving, but at a slow pace. The best work I've seen so far has been in museums.
A few weeks ago I wrote a post about my nearly comical interaction with a large display that featured a QR tag that simply would NOT cooperate with my smartphone's tag reader. What were they thinking when they decided that the best place for a QR tag on a large display should be at knee level? Interactive Display with QR Tag: Close Encounter at the Orlando Airport
There is more to this story! On my way home, I had a few hours to kill at the Orlando airport. I stopped by a convenience store there and discovered the non-static display below, which I hadn't noticed the first time I was in the store that day.
I spotted another QR tag! There were a few problems with this set-up. First, the display was located on a wall close to the ceiling, making it very inconvenient to scan with my phone. Second, the page with the QR tag didn't stay up long enough for me to attempt a scan, although I waited around for it to cycle back into view - three times. I gave up and took a picture instead, standing on my tip-toes with my arms extended over my head, the same position that scanning the QR tag would have required, if it would stay put!
Another problem was that the display was perched above a high-traffic area, right above the refrigerated drinks. The only place to scan the QR tag was in a narrow aisle, behind the throngs of thirsty travelers making a bee-line to the cold drink section of the store. I was in the way of a number of people in this bee-line.
What were they thinking when they decided that the best place for this large display featuring a QR tag should be several feet above eye level in this particular location? The displays in front of the store were located on either side of the doorway, at "body" level. This is a logical place to put a display: potential customers can view the displays without getting in the way of other travelers.
I didn't notice QR tags on these displays. QR tags linking to web-based reviews of books I'd like to purchase would be useful in this scenario.
Non-static displays outside of the store:
PART II: Interactive Touch Screen Display Encounter at JC Penney
Interactive displays are cropping up everywhere. The display featured below is part of the Find More at JC Penney campaign. The following is a quote from a JC Penney press release: "JC Penney has announced the launch of a new mobile initiative that connects mobile commerce with social and in-store media applications that include location-based check-in offers and enhancements to its mobile apps. JCPenney's mobile initiatives are part of the company's investment in its overall digital platform, which also includes jcp.com, social media, and its in-store FindMore™ smart fixture."
"The smart fixture integrates JCPenney's online and offline stores by providing customers with a 52-inch interactive touch screen experience that allows them to view and purchase items from jcp.com while in a JCPenney store. Customers can also scan a bar code to get product details, access recommendations, or email data about an item of interest to themselves or friends."
Can you find the large interactive touch screen display? After shopping around in the mall, I decided to go to JC Penney and check out the sales. Below is what I passed as I entered the store. I went up and down the main aisle a few times before I noticed a large interactive display along a side aisle.
The stack of red holiday boxes obstructed my view:
I didn't notice the display the first few times I walked in the other direction. From the back, the display looked like a poster. When I finally discovered the display, I was impressed with the slick graphics and stylish models.
I wasn't so impressed by the placement of the display. To interact with the screen, I had to stand in a narrow aisle, in the path of a steady stream of slow-walking shoppers, often moving in small groups, including parents with babies in strollers.
I wasn't impressed by the confusing array of input methods on the display. I noticed that the light blue square in the lower left-hand corner of the display is designed to provide access for people with disabilities. I couldn't figure it out. Accessibility: There is a need for research in this area!
The accessibility feature was confusing to use. The virtual trackball activated a cursor on the screen, but the cursor did not accurately track the trackball's movements. This section of the screen was located in the lower left-hand corner, which would be convenient for someone in a wheelchair. However, the placement of the "accessibility window" was too low for someone who might want to interact with the screen in an accessible manner but doesn't happen to use a wheelchair.
I shot video of my interaction with the screen, but found that I was in the way of shoppers, so I stopped. Not to worry. Below is a video taken by Mike Cearley, shared on his 11th Screen blog post, "Out and About: JC Penney's "Find More" Touch Screen". (Mike's impressions of this display were similar to mine and are worth reading if this is an area that interests you.)
Mike uses his 11th Screen Scorecard to evaluate the technology he encounters in public spaces. I am updating a similar rubric to use when evaluating displays in public spaces. It is important to look at multiple dimensions, such as the physical placement of the display, weather and lighting conditions, how the display interoperates with mobile devices or a system of displays, how the content is presented/accessed across channels and devices, and how users share the experience with others.
Components of the 11th Screen Scorecard - Mike Cearley
Purpose: "What is the purpose of the solution? Is it to drive awareness? Acquisition? Loyalty? What is the brand trying to accomplish in this medium?"
Drama: "Does the solution make a big impact on the user? Does it make them stop and interact?"
Usability: "Can the user navigate through the experience with ease? Are the paths to information intuitive? There's also an element of functions, too, but I think that is much more subjective. Do the functions enhance the user experience?"
Interactivity: "How does the user interact with it? Is it gesture based? Is it touch-based? Can the user interact with it through any other enabling technology?"
Information: "How much and what kind of content is available for the user to interact with? Generally speaking, the more information and the different formats of information, the better."
Personalization: "What level of personalization does the experience provide?"
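To make my own rubric concrete, here is a minimal sketch of how the scorecard's dimensions could be turned into a single score. The 1-5 scale, the equal weighting, and the sample ratings are entirely my assumptions, not Mike's.

```python
# A minimal sketch of a scorecard-style rubric. The dimension names come
# from the 11th Screen Scorecard; the 1-5 scale, equal weighting, and the
# illustrative ratings below are my own assumptions.

DIMENSIONS = ["purpose", "drama", "usability",
              "interactivity", "information", "personalization"]

def score_display(ratings):
    """Average 1-5 ratings across all scorecard dimensions.

    `ratings` maps each dimension name to an integer from 1 to 5;
    every dimension must be rated before a score is produced.
    """
    missing = [d for d in DIMENSIONS if d not in ratings]
    if missing:
        raise ValueError(f"unrated dimensions: {missing}")
    return sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)

# Illustrative ratings only - not a real evaluation of any display:
example = {"purpose": 4, "drama": 4, "usability": 2,
           "interactivity": 3, "information": 4, "personalization": 2}
print(score_display(example))  # about 3.17 out of 5
```

A weighted average would be a natural next step - for a display in a narrow, high-traffic aisle, I'd want physical placement and usability to count for more than drama.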
COMMENT
I've been obsessed with large displays and the amazing potential they hold for supporting all sorts of interactions - collaboration, planning, creating, information sharing, playing, viewing, communicating, learning, shopping, way-finding, artistic expression.
I'm also drawn to smaller screens and how we use our mobile devices - smartphones, iPhones, e-readers, digital cameras, etc. - to interact with the displays we encounter in public spaces. We are just beginning to figure out how technology can effectively - and seamlessly - support interaction between people who are co-located as well as those in different places.
INTERACTIONS Volume XVIII.1
Here is a list of some articles from the current issue of Interactions:
"Our research shows that the vast majority of people polled in both developed and emerging markets see great potential for NUI applications beyond entertainment. This is especially true in China and India, where 9 out of 10 respondents indicate they are likely to use NUI technology across a range of lifestyle areas – from work, education and healthcare, to social connections, entertainment and the environment. We believe that taking technology to the next billion can be aided by NUI – making technology more accessible and more intuitive to a wider audience". - Steve Clayton, Microsoft
The people at Microsoft don't own the concept! I've been a member of the NUI Group (since May 2007) and SparkOn. Both are online communities where you can find people who live and breathe NUI, learn about their work, and even share designs and code. If you are intrigued by NUI - as a designer, developer, or user - please join us.
Note: I've been an evangelist and cheerleader for the NUI cause for many years. If you search this blog for "post-WIMP", "NUI", "multi-touch", "gesture", "off-the-desktop", "natural user interaction", "natural user interface", or even "DOOH", you'll be provided with an overwhelming number of posts that include videos, photographs, and links to NUI-related resources, including scholarly articles. There is a small-but-growing number of people from many disciplines, quietly working on NUI-related projects.
The following video and article focus on research that I think will be very useful for creating interactive multimedia timelines in the future. I'd like to learn more about ways this interaction could be implemented in gesture-based systems!
Content-Aware Dynamic Timeline for Video Browsing From Gonzalo's YouTube channel: "When videos have more frames than pixels in the player's timeline slider, frames become inaccessible and scrolling actions cause sudden jumps in a video's continuity. We propose a content-aware dynamic timeline control that decouples video speed and playback speed and allow salient shots to be presented at an intelligible speed."
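As I understand it, the core idea can be sketched in a few lines: map each shot's saliency score to a playback-speed multiplier, slowing down for salient shots and speeding up over the rest. The scores and speed bounds below are invented for illustration; the researchers' actual model is surely more sophisticated than this linear mapping.

```python
# A rough sketch of the decoupling idea: timeline position advances at a
# variable rate, slowing over salient shots so they stay intelligible.
# Saliency scores and speed bounds here are invented for illustration.

MIN_SPEED, MAX_SPEED = 1.0, 8.0   # playback multipliers

def playback_speeds(saliency):
    """Map per-shot saliency scores in [0, 1] to playback-speed
    multipliers: saliency 1.0 -> MIN_SPEED (slow, watchable),
    saliency 0.0 -> MAX_SPEED (skim quickly)."""
    return [MAX_SPEED - s * (MAX_SPEED - MIN_SPEED) for s in saliency]

# E.g. an establishing shot, an action shot, dialogue, dead air:
shots = [0.1, 0.9, 0.5, 0.0]
print(playback_speeds(shots))  # slow over salient shots, fast elsewhere
```

The point of the decoupling is that dragging the timeline slider at a constant rate no longer skips frames uniformly - the salient material gets proportionally more screen time.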
Note: The researchers are investigating the use of audio/sound in a similar manner, making this form of video browsing/search truly multimedia. An example of this is included near the end of the above video clip.
The following video is a demonstration of something I could use in my work as a school psychologist, since I use a lot of video for assessing students with disabilities, including those with autism. I also use video to create digital social stories for many of the students.
Take a look at the video demonstration of the new SMARTBoard (800 series) that offers multi-touch and gesture interaction support so that two students can interact with the board at the same time.
Students can use 2 finger gestures to enlarge objects and move them around.
Two students can interact with the board at the same time to complete activities.
SMARTInk/Calligraphic Ink creates stylized print as you write. Whatever is written or drawn on the SMARTBoard becomes an object in the SMARTNotebook, allowing for things to be resized or rotated. (2:04)
Multi-touch gestures enabled in Windows 7 and Snow Leopard work with the SMARTBoard.
Software development kit (3:28): an example of a physics application developed by a 3rd-party developer. The application supports two students working at the SMARTBoard at the same time.
This video, in my opinion, does not show viewers the full range of possibilities that the new features provide. I'd like to see a "redo" of this video using a live teacher and a group of students. For example, I'd be interested in seeing how the physics application could be incorporated into a broader lesson or science unit. I'd love to hear what real students have to say as they interact with the physics application, too.
Comment:
I think a multi-user interactive timeline would be a great application for the new SMARTBoard, because students could work together to create and recreate events. This would be ideal for history, literature, and humanities activities, across a wide span of grade levels.
Nicolas Loeillot, of LM3LABS, has been ahead of the natural user interaction/interface game for many years as his company has expanded. He's done quite a bit of deep thinking about the work of his company, and has used this wisdom to create a nice concept map that describes how LM3LABS' solutions fit into the world of gesture-based control and interaction:
In my opinion, this chart would make a great template for mapping out other natural interaction applications and products!
Here is the description of the concepts outlined in the chart:
"If all of them belong to the “gesture control” world, the best segmentation is made from 4 categories:
Finger tracking: precise finger tracking, it can be single touch or multi-touch (this latest not always being a plus). Finger tracking also encompasses hand tracking which comes, for LM3LABS products, as a gestures.
Body tracking: using one’s body as a pointing device. Body tracking can be associated to “passive” interactivity (users are engaged without their decision to be) or “active” interactivity like 3D Feel where “players” use their body to interact with content.
Face tracking: using user face as a pointing device. It can be mono user or multiple users. Face tracking is a “passive” interactivity tool for engaging user in an interactive relationship with digital content.
Image Tracking: Augmented Reality (AR) lets users use images (flyers, real products, t-shirts, faces,…) to interact with digital content. AR can be markerless or marker-based. Markerless technology has advantages but marker-based AR is easier for users to understand. (Please note here that Markerless AR is made in close collaboration with AR leader Total Immersion)." -LM3LABS
If you are interested in this subject and want to view some good examples of off-the-desktop interfaces and interactions, take a look at the LM3LABS blog, as well as Nicolas Loeillot's Vimeo channel. Also take a look at the sample of posts I've written about LM3LABS over the last few years - the links are at the end of this post.
I first discovered LM3LABS when I was taking a VR class and researching interactive, immersive large displays in 2005 or 2006. Back then, there wasn't much information about this sort of technology. A lot has changed since then!
I've learned quite a bit from watching LM3LABS (and others) grow, given my passion for postWIMP interactive technology and my commitment to blogging about this subject. Nicolas has really worked hard in this arena. As early as 2005, LM3LABS was working with Scala to provide "smart" interactive displays, and his company's applications have been supported by computer vision technologies for many years, allowing for gesture-based, or "touch-less" interaction, as demonstrated by the Catchyoo Interactive Table. This application caught my eye back in early 2007, when I was working on projects for large interactive displays for my HCI and Ubicomp classes, and was thinking about creating a table-top application.
My hunch is that LM3LABS has set the foundation for further growth in the future, given the lessons they've learned by taking risks with postWIMP technologies over the past few years!
"Founded in 2003 by a team of passionate researchers, engineers, designers, and marketers from various international backgrounds, focused on fast transformation of innovation into unique products, LM3LABS is a recognized pioneer in computer vision-based interactivity solutions. Keeping a strong customer focus, LM3LABS' team of unique people pioneers new directions, explores new concepts, new technologies and new interactions. Engaging, playful and magic, LM3LABS' products and solutions are always scalable and reliable"
info@lm3labs.com
Note to readers:
Over the past couple of years there has been an explosion of postWIMP technologies and applications, and with this pace, it has been difficult for me to keep abreast of it all. There is quite a bit I miss, given my full time job and daily life!
I welcome information about postWIMP interactive technologies and applications from my readers. Due to time constraints, not interest, I am not always able to post about a topic as soon as I'd like. That is OK, as my intention is not to be the first blogger to spread the latest tech news. I like to dig in deep when I can and make connections between innovative, interesting technologies and the people and ideas behind them.
I came across this demonstration of i/o Data Centers' 12-foot by 7-foot interactive video wall, which makes playing around with views of data center modules... interesting! The display is a gesture-based "multi-touch" system. (I'll update this post when I get more information.)
Here is the description from the Datacenter YouTube channel:
"Instead of hauling a 40-foot long modular data center to a trade show, i/o Data Centers is taking a high-tech approach to customer tours of their i/o Anywhere modular data center. The i/o team has created a 12-foot by 7-foot touchscreen video wall to provide interactive tours of the company's facilities. Selecting a "hot spot" pops up a virtual data center, complete with cross sections and product info, following the concept of the touch screens in the sci-fi movie "Minority Report.""
FYI: I/O Data Centers has an application that runs on the Surface.
I recently posted about the Therenect, a gesture-controlled digital theremin created for Microsoft's Kinect by Martin Kaltenbrunner - Therenect: Theremin for the Kinect! (via Martin Kaltenbrunner). It looks like Martin has been busy polishing up the application over the past few days, as you can see from the video below:
Here are a couple of new natural user interface videos. The first video, by Evoluce, demonstrates gesture interaction/navigation in Windows 7 applications supported by Kinect. The second video, by Immersive Labs, shows multi-touch product browsing interaction on a large display.
Martin Kaltenbrunner's description of TuioKinect: "TuioKinect tracks simple hand gestures using the Kinect controller and sends control data based on the TUIO protocol. This allows the rapid creation of gesture enabled applications with any platform or environment that supports TUIO tuio.org/ You can download the application from: code.google.com/p/tuiokinect/ Music: Jabon Jabon by El Club de los Astronautas (Institut Fatima)"
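For a feel of what the TUIO protocol actually puts on the wire, here is a rough Python sketch of encoding a single /tuio/2Dcur "set" message - the message a tracker sends for each visible cursor (a hand or finger, in TuioKinect's case). Note that a real TUIO tracker wraps its set, alive, and fseq messages together in an OSC bundle sent over UDP; this sketch covers only the one message, as an illustration of the OSC encoding.

```python
import struct

def osc_pad(data):
    """NUL-terminate and pad bytes to a multiple of 4, as OSC requires."""
    return data + b"\x00" * (4 - len(data) % 4)

def tuio_2dcur_set(session_id, x, y, vx=0.0, vy=0.0, accel=0.0):
    """Encode a single TUIO '/tuio/2Dcur' 'set' message as raw OSC bytes:
    the address pattern, the type-tag string, the string argument 'set',
    then an int32 session id and five big-endian float32 values
    (normalized position, velocity, and acceleration)."""
    return (osc_pad(b"/tuio/2Dcur")
            + osc_pad(b",sifffff")
            + osc_pad(b"set")
            + struct.pack(">ifffff", session_id, x, y, vx, vy, accel))

# A cursor (hand) at the center of the surface, session id 1:
packet = tuio_2dcur_set(1, 0.5, 0.5)
print(len(packet))  # 52 bytes, a multiple of 4 as OSC requires
```

Because TUIO rides on top of plain OSC, any environment with an OSC library (Pure Data, openFrameworks, Processing, Max/MSP) can consume these packets, which is exactly why gesture-enabled applications can be prototyped so quickly against it.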
I've played around with Tuio and OpenFrameworks, but it has been a while. I can't wait until I have time to dig into this with a Kinect. I think this has great potential for supporting learning and communication among students with special needs.