Apr 1, 2013

The Uncanny Valley is Here! Activision's real-time character demo is chillingly real. (Not an April 1st joke.)

Up with Activision in the Uncanny Valley

I first heard the term 'uncanny valley' about eight or nine years ago, when I was taking a 3D-modeling class.  At that time, the available technology was nowhere near this valley - the point where robots or computer-generated characters look almost, but not quite, human, and the near-realism becomes unsettling, even repulsive.

A lot has changed over the years.

The following video, recently shown by Activision Blizzard during the 2013 Game Developers Conference, has attracted much attention in just a few days, partly because the character looks so real.



Although the teeth still need a little more work, I was impressed, particularly by the quality of the eye shaders used to create this demo.  Examples of faces rendered with this feature turned on and off can be found on Jorge Jimenez's blog, and Jorge's slides from a SIGGRAPH 2012 course provide additional detail.

Computer processors have become powerful enough to handle this kind of rendering in real time, and the tech world has been spreading the word. Below is a presentation by Jen-Hsun Huang, CEO of Nvidia, about the company's work simulating the human face, touching on the 'uncanny valley':


Characters this realistic still strike me as a bit creepy, but that might fade with refinement.  A few questions remain unanswered.  What would be the impact on children or teens who spend many hours each week playing games with such realistic characters?  I'd hate to have a nightmare featuring one of these guys!

I think that this technology might have some potential for use in serious games and simulations, such as preparing emergency workers to handle a variety of realistic scenarios. Games with realistic digital characters, capable of generating a range of facial expressions, might be useful to support the learning of social interaction skills among young people with autism spectrum disorders.


RELATED/SOMEWHAT RELATED
Is It Real?  With New Technology Has Activision Crossed the 'Uncanny Valley'?
Eyder Peralta, The Two-Way, NPR, 3/28/13
Activision Reveals Animated Human That Looks So Real, It's Uncanny
Charlie White, Mashable, 3/28/13
Karl F. MacDorman's Writings (some focus on the uncanny valley)
Advances in Real-Time Rendering in Games Course (SIGGRAPH 2012)
Separable Subsurface Scattering and Photorealistic Eyes Rendering (pptx)
Jorge Jimenez, Presenter, SIGGRAPH 2012
Next Generation of Character Rendering Teaser (pptx)
Jimenez and Team
Crossing the 'uncanny valley': Nvidia's Faceworks renders realistic human faces
Dean Takahashi, VentureBeat/GamesBeat, 3/18/13
Real-Time Realistic Skin Translucency
Jimenez, J., Whelan, D., Sundstedt, V., & Gutierrez, D., IEEE Computer Graphics and Applications, 2010
Exploring the Uncanny Valley Research Website (Indiana University School of Informatics)
Gaze-based Interaction for Virtual Environments (pdf)
Jimenez, J., Gutierrez, D., & Latorre, P., Journal of Universal Computer Science
Mori, Masahiro (1970). Bukimi no tani [the uncanny valley] (K. F. MacDorman & T. Minato, Trans.). Energy, 7(4), 33-35. 
2013 GPU Technology Conference Keynote Presentations

What happens when a 2-year-old wakes up to the sound of the Google Map Lady? "I CAN'T turn left right now!"

Google Map Lady says, "Turn Left", toddler yells from the back seat, "I CAN'T..."

If you are new to this blog, you might not know that I'm the grandmother of a 2-year-old little boy.  Watching him grow in an increasingly technology-enriched world has been an eye-opener at times, from his first fingers-and-toes interaction with my iPad at 7 months of age, to his attempts at rafting down a digital river in the Kinect Adventures! River Rush game.

Technology is rapidly changing how we learn, interact, and navigate our world.  Designers, developers, and others involved in creating for the near future must be mindful of how newer technologies might play out in the real world, where the "user" is not always the person the "user experience" was intended for.  Off-the-desktop technologies are advancing rapidly, and they affect people of all ages, wherever they happen to be.

Today's story is just one example.

I'm fortunate to live about a 35-minute drive from my grandson, so I sometimes take him out and about, especially when his parents have a lot of errands to run.

Toddler with a replica of the Eiffel Tower, Amélie's French Bakery, NoDa, Charlotte, NC

Toddler dancing around a floor mural

After a nice lunch at Amélie's French Bakery near the NoDa neighborhood (Charlotte, NC), and exploring the floor murals in the little mall behind the bakery, I told my grandson that we were going to the "Big Park" (Freedom Park).

He was so excited, but within a few minutes, he was fast asleep.


Toddler smiling and happy in the back seat


Toddler asleep in the back car seat
I drove up towards the airport to kill time, thinking that he'd wake up and we'd watch the planes. He was still sleeping.  Now what?

I opened up Google Maps on my cell phone to get directions from the airport to the Carolina Raptor Center at Latta Plantation Park, since I wasn't sure how to get there from the airport.

About 15 minutes later, as the Google Map Lady gave directions, Levi woke up, asking, "What's that sound? A lady's voice?" The Google Map Lady spoke again, saying something like, "In 1,000 feet, take a left turn."

Levi replied emphatically, "I CAN'T turn left right now!" The Google Map Lady responded with the next direction, and Levi replied, "I CAN'T do that!"

The little guy was visibly upset, because he thought the lady was telling him what to do. It was obvious to him that he could not comply with her request.

What to do?  How do I explain the "Google Map Lady" to a 2-year-old?

This is how I handled the situation:

I told him that the lady's voice was there to help me know where to turn so I could drive to the raptor center.  I gently explained that the directions were just for me, not for little boys who can't turn the car because they are in car seats and can't drive. He nodded and said, with relief, "Lady's voice for Mi-Mi, NOT for little boys," and was fine after that.

Note:
Although I did not know it at the time, my grandson had somehow wriggled out of the left harness of his car seat. I discovered the problem as I went to unfasten him, and wondered how long he had been riding without being safely secured.  It hadn't occurred to me that this could happen - everything was in place at the beginning of our ride, as you can see from the first picture.

As I lifted my grandson out of the car seat, it crossed my mind that it would be a good idea if car seats came with sensors to let the driver know if the straps, snaps, or buckles became unsecured. (Systems like Forget Me Not warn parents if a child has been left in the car.)

After conducting a quick search, I found that Sherine Elizabeth Thomas has applied for a patent that includes the use of a sensor to alert an adult that a child has unbuckled their seat belt.  I think a similar system could be developed to provide an alert whenever a child is not safely secured, as in the case of my wiggly grandson.
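Just to think it through (this is a back-of-the-napkin sketch, not a product design): if a buckle included a simple contact switch, a few lines of code on a hobbyist board could watch the circuit and chirp at the driver the moment it opens. The example below assumes a Raspberry Pi with the RPi.GPIO library, a switch on one pin, and a buzzer on another - the pin numbers and wiring are entirely made up for illustration:

    import time

    import RPi.GPIO as GPIO  # assumes a Raspberry Pi; pin numbers below are made up

    BUCKLE_PIN = 17   # contact switch wired across the buckle (closed = secured)
    BUZZER_PIN = 27   # small piezo buzzer to alert the driver

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(BUCKLE_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
    GPIO.setup(BUZZER_PIN, GPIO.OUT)

    try:
        while True:
            # With the pull-up resistor, the input reads LOW while the buckle switch is closed.
            buckled = GPIO.input(BUCKLE_PIN) == GPIO.LOW
            GPIO.output(BUZZER_PIN, not buckled)  # sound the buzzer whenever the buckle opens
            time.sleep(0.5)                       # check twice a second
    finally:
        GPIO.cleanup()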


RELATED AND SOMEWHAT RELATED
(Self-activating, self-aware digital wireless safety system)
John Polaceck, 3/24/13
Grandma Got STEM blog (More info to come on this topic!)

Mar 16, 2013

UPDATE: What's New for Kinect? Fusion, real-time 3D digitizing, design considerations, and more.

The Evolution of Microsoft Kinect

I've been following the evolution of Microsoft's Kinect, and recently discovered a few interesting videos that show how far the system has come. According to Josh Blake, the founder of the OpenKinect community and author of the Deconstructing the NUI blog, the Kinect for Windows SDK v1.7 will be released on Monday, March 18th, at http://www.kinectforwindows.com.  More details about this version can be found on Josh's blog as well as the official Kinect for Windows blog.


It is possible to create applications for desktop systems that work with the Kinect in interesting ways, as you'll see in the following videos. I think there is potential here for use in education/edutainment!

Below is a video of Toby Sharp, of Microsoft Research Cambridge, demonstrating Kinect Fusion.  The software lets you use a regular Kinect camera to reconstruct real-world scenes as 3D models in real time.



KinÊtre: A Novel Way to Bring Computer Animation to Life
According to information from the YouTube description, "KinÊtre is a research project from Microsoft Research Cambridge that allows novice users to scan physical objects and bring them to life in seconds by using their own bodies to animate them. This system has a multitude of potential uses for interactive storytelling, physical gaming, or more immersive communications."




The following videos are quite long, so feel free to re-visit this post when you have time to relax and take it all in!

Kinect Design Considerations
This video covers Microsoft's Human Interface Guidelines, scenarios for interaction and use, and best practices for user interactions.  It also includes a preview of the next major version of the Kinect SDK. 


Kinect for Windows Programming Deep Dive
This video discusses how to build Windows Desktop apps and experiences with the Kinect, and also previews some future work.




RELATED
Kinect for Windows Developer Downloads
Kinect for Windows Blog
Deconstructing the NUI Blog (Josh Blake)
Microsoft Kinect Learns to Read Hand Gestures, Minority Report-Style Interface Now Possible
Celia Gorman, IEEE Spectrum, 3/13/13
Kinect hand recognition due soon, supports pinch-to-zoom and mouse click gestures.
Tom Warren, The Verge, 3/6/13
Microsoft's KinEtre Animates Household Objects
Samuel K. Moore, IEEE Spectrum, 8/8/12
Kinect Fusion Lets You Build 3-D Models of Anything
Celia Gorman, IEEE Spectrum, 3/6/13
Description of Kinect sessions at Build 2012
Kinect for every developer!
Tom Kerkhove, Kinecting for Windows, 2/15/13
Kinect in the Classroom
Kinect Education

Note: Although I recently received my developer kit for Leap Motion, another gesture-based interface, I haven't lost interest in following news for Kinect.

Interactive MaKey MaKey: "An Invention Kit for Everyone" - Video Preview!

Interactive Invention:  MaKey MaKey for All


MaKey MaKey is a hands-on "maker" kit created by Jay Silver and Eric Rosenbaum of MIT, based on research from the MIT Media Lab's Lifelong Kindergarten group.

After watching the lively video today,  I ordered my very own kit!


MaKey MaKey - An Invention Kit for Everyone from jay silver on Vimeo.

How does MaKey MaKey work?  The computer sees the board as a standard keyboard and mouse: it supports six keyboard keys plus mouse control, and alligator clips connect those inputs to anything even slightly conductive - fruit, play dough, people.  Touch a connected object, and the board sends the corresponding key press or click to the computer.  The board itself is built on Arduino, an open-source electronics prototyping platform that supports multi-modal interactive input and output.
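Because the board shows up as an ordinary keyboard, no special driver or library is needed - any program that listens for key presses will respond to a banana just as it would to the space bar. Here's a rough sketch of what a four-key "fruit piano" could look like in Python with the pygame library (the sound file names are placeholders for whatever samples you'd record):

    import pygame

    # MaKey MaKey's front panel maps to the arrow keys, space, and click,
    # so an ordinary keyboard event loop is all we need.
    KEY_TO_SOUND = {
        pygame.K_UP: "c_note.wav",      # placeholder file names -
        pygame.K_DOWN: "e_note.wav",    # substitute your own samples
        pygame.K_LEFT: "g_note.wav",
        pygame.K_RIGHT: "high_c.wav",
    }

    pygame.init()
    pygame.display.set_mode((200, 200))  # a window is needed to receive key events
    sounds = {key: pygame.mixer.Sound(name) for key, name in KEY_TO_SOUND.items()}

    running = True
    while running:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                running = False
            elif event.type == pygame.KEYDOWN and event.key in sounds:
                sounds[event.key].play()  # touch a banana, hear a note
    pygame.quit()

Clip one end of an alligator clip to a piece of fruit and the other to an arrow-key input, hold the ground wire, and each touch plays a note.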

I see endless possibilities and fun maker-crafting with my little grandson!


In the following video, musician/visual artist j.viewz uses his MaKey MaKey kit to hook up fruits and veggies to his music system.  

Watch j.viewz play a bunch of grapes!  The strawberries sound nice. 

j.viewz playing Teardrop with vegetables from j.viewz on Vimeo.


RELATED
MaKey MaKey
Lifelong Kindergarten (MIT Media Lab)
Arduino
Phidgets
How to Start Making Your Own Electronics with Arduino and Other People's Code
Thorin Klosowski, Lifehacker, 1/12/12

Mar 12, 2013

Google Glass and Kids - BYOGG? Quick Links to MIT Tech Review Post and more

Now that I'm set to experiment with Leap Motion, I started thinking about Google Glass - I know that if I visited my grandson wearing a pair, he'd figure he should wear them, too.  Why not?  He expects to play with my iPad for at least a short while during our visits.

I can see the potential for active educational game applications with this device.

I wonder if Google Glass will follow the path of cell phones into classroom settings. Once banned, cell phones are now embraced by many schools through BYOD (Bring Your Own Device) programs. BYOGG?  Time will tell.


Here is a video from Google that provides a view of what the Google Glass experience might be like for a variety of people:
 



Here is an example of a mother wearing Google Glass as she shares special moments with her 2-month-old infant:

I'm guilty of BTV (baby TV) - in the form of Betamax and VHS tape recordings.  If Google Glass had been around when my kids were babies, I'd probably have done the same.


Here is another example of "Project Glass": 



I work with a number of students who are non-verbal and have severe autism, and I think there is potential for use with children and adults with disabilities.


Some ideas that come to mind:

Facial expression translator/decoder (for people with autism spectrum disorders)
Two-way sign language translator
Augmented device for the visually-impaired, elderly, etc.
Accessible games, active games
Travel guide, museum guide, health care/hospital stay guide
Exercise companion
InfoVis advisor
Shopping trip/fashion advisor for people like me who hate shopping


RELATED
Growing Up with Google Glass: When Google Glass launches it will be used by kids as well as adults.
Tom Simonite, MIT Technology Review, 3/5/13

Can You See How Google Glass Will Disrupt Higher Education?
Jimmy Daly, EDTECH, 2/26/13

The rise of smartglasses in education: or, A shameless plea to Jaime Casap
Thomas B. Segal, Education Week, 3/5/13



Mar 11, 2013

Leap Motion: My Dev Kit Arrived - Now What?! Thoughts About "NUI" Child-Computer-Tech-Interaction - and More



My Leap Motion developer kit arrived last week. I carefully unboxed the small device and tried out the demo apps that came with the SDK.  I'm doing more looking than leaping at this point.

I'd like to create a simple cause-and-effect music, art, and movement application for my 2-year-old grandson, knowing that he'll be turning three near the end of this year.  It would be nice if my app could provide young children with enough scaffolding to support gameplay and learning over a few years of development.
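As a first step, I've been looking at the Python bindings that ship with the Leap SDK. Below is my own rough outline of a cause-and-effect starting point, not a finished app - the print statement is a stand-in for whatever sound or splash of color the hand position would eventually trigger (the listener callbacks follow the SDK's sample code):

    import time

    import Leap  # Python bindings included with the Leap Motion SDK


    class CauseAndEffectListener(Leap.Listener):
        """Reports palm height; a real app would map this to sound or color."""

        def on_connect(self, controller):
            print("Leap Motion connected - wave a hand over the device")

        def on_frame(self, controller):
            frame = controller.frame()
            if not frame.hands.is_empty:
                hand = frame.hands[0]
                pos = hand.palm_position  # millimeters: x (left/right), y (height), z (depth)
                # Placeholder "effect": report the height; swap in a tone or a color change.
                print("hand height: %.0f mm" % pos.y)


    listener = CauseAndEffectListener()
    controller = Leap.Controller()
    controller.add_listener(listener)
    try:
        time.sleep(60)  # let the listener run for a minute
    finally:
        controller.remove_listener(listener)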

Now that I'm a grandmother, I've spent some time thinking about what the evolution of natural user interfaces will mean for young children like my grandson.  Family and friends captured his first moments after birth with iPhones and shared them across the Internet.  Born into the iWorld, he knows how to use an iPad or smartphone to view his earlier digital self on YouTube, without ever touching a mouse or a physical keyboard.

The little guy is pretty creative in his method of interacting with technology, as I've informally documented on video.   He was seven months old when he first encountered my first iPad.  It was fingers-and-toes interaction from the start.  

In the first picture below, he's playing with NodeBeat.  In the second picture, he's 27 months old, experimenting with hand-and-foot interaction on a variety of apps.

My grandson is new to motion-control applications, so I'm just beginning to learn what he likes and what he is capable of doing. A couple of weeks ago, we played River Rush, from the Kinect Adventures! game. He loved jumping up and down as he tried to hit the adventure pins. Most of the time, he kept jumping right out of the raft!  (I think next time we'll try Kinect Sesame Street TV or revisit Kinectimals.)


One of the steps I'm taking to prepare for my Leap Motion adventure is to take a look at what people have done with it so far.  At least 12,000 developer kits have been released, so hopefully there will be some interesting apps to go along with the retail version of Leap Motion when it arrives in Best Buy stores on May 19th of this year.

One app I really like is Adam Somers' AirHarp, featured in the video clip below:


I also like the idea behind the following app, developed by undergraduate students:

Social Sign: Multi-User sign language gesture translator using the Leap Motion Controller (git.to/socialSign)
 
"Built at the PennApps Spring 2013 hackathon, Social Sign is a friendly tool for learning sign language! By using the Leap Motion device, the BadApples team implemented a rudimentary machine learning algorithm to track and identify American Sign Language from a user's hand gestures."

"Social Sign visualizes these hand gestures and broadcasts them in textual and visual representations to other signers in a signing room. In a standard chat room fashion, the interface permits written communication but with the benefit of enhanced learning in mind. It's all about learning a new way to communicate."-BadApples Team



A few NUI-focused tech companies have experimented with Leap Motion. Today, I received a link from Joanna Taccone of IntuiLab to the following video clip featuring their most recent work:
Gesture recognition with Leap Motion using IntuiFace Presentation

"Preview of our work with the Leap Motion controller. In the same spirit as our support for Microsoft Kinect, we have encoded true gesture support, not just mouse emulation, for the creation of interactive applications by non-programmers. The goal is to hide complexity from designers using our product, IntuiFace Presentation (IP). Through the use of IP's trigger/action syntax, designers simply select a gesture as a trigger - Swipe Left, Swipe Right, Point, etc. - and associate that gesture with an action like "turn the page" or "rotate the carousel". As you can see in this video, it works quite well. :-) We will offer Leap support as soon as it ships." -IntuiLab



Below is a demonstration of Drop Chord, a collaboration between Leap Motion and Double Fine.  From the video, you can tell that the players had a blast!

Here is an excerpt from the chatter:  "The thing is that everyone just looks cool..Yeah, I know, it doesn't matter what you are doing...it's got the right amount of speed-up-slow-down stutter-y stuff...it is like a blend of art and science.."

According to the website, Drop Chord is "a music-driven score challenge game for the Leap Motion controller, coming soon for PC, Mac, & iOS from the creators of Kinect Party."

The following video is a demonstration of the use of Leap Motion to control an avatar and other interaction in Second Life:



Below are a few more videos featuring Leap Motion:


Control Your Computer With a Chopstick: Leap Motion Hands On (Mashable)


The Leap Motion Experience at SXSW 2013


LEAP Motion demo: Visualizer, Windows 8, Fruit Ninja, and More...



RELATED
Air Harp for Leap Motion, Responsive Interaction
Leap Motion and Double Fine team on Dropchord, give air guitar skills an outlet
John Fingas, Engadget, 3/7/13
Leap Motion Controller Set To Ship May 13 for Global Pre-Orders, In Best Buy Stores May 19.
Hands on With Leap Motion's Controller
Lance Ulanoff, Mashable, 3/10/13
Leap Motion website
Social Sign
IntuiLab
Leap Motion: Low Cost Gesture Control for Your Computer Display

SOMEWHAT RELATED
Kinect for Windows Academic: Kaplan Early Learning
"3 years & up. Hands-on play with a purpose -- the next generation way. This unique learning tool uses your body as the game controller making it a great opportunity to combine active play and learning all in one. Use any surface to actively engage kinesthetic, visual, and audio learners. Bundle includes the following software: Word Pop, Directions, Patterns, and Shapes."

Comment:
I've been an enthusiastic supporter of natural user interfaces and interaction for years - back in 2007, I worked on touch-screen applications for large displays as a graduate student and became an early member of the NUI Group.  I'm also a school psychologist, and from my experience, I understand how NUI-based applications and technologies, such as interactive whiteboards and touch tablets like the iPad, can support the learning, communication, and leisure needs of students who have significant special needs.  It looks like Leap Motion and similar technologies have the potential to support a wide range of applications that target special populations of all ages.