Apr 20, 2013

Little Digits Counting and Early Math App, by Cowly Owl - Chris O'Shea

Fun with Early Math:  App by Chris O'Shea - Cowly Owl


Chris O'Shea is an artist and designer who uses technology in creative, playful ways. Over the past year or so, he's devoted some of his attention to designing engaging iPad apps and digital toys.  Below is a video of his Little Digits app in action:


Little Digits from Cowly Owl on Vimeo.

Here is the information from the Vimeo site:

"Little Digits is a fun educational app that teaches children about numbers by putting a new spin on finger counting. Using the iPad multi-touch screen, Little Digits displays number characters by detecting how many fingers you put down.  Children can learn to associate the number on the screen with the number of fingers they place down, whilst enjoying the unique characters and animations of the Little Digits world...There are also games that introduce small addition and subtraction calculations, where you can work out the answer using the same multi-touch finger detection."





RELATED
Cowly Owl
Chris O'Shea's Website
My iPad Pinterest Board





Apr 11, 2013

Interesting Videos I Almost Missed (Future/Emerging/Creative Tech)

Creative Tech Videos I Almost Missed


I admit that sometimes I just don't have the time to hang out and watch interesting or quirky tech/future tech videos on the web.  Here are a few that passed me by the first time around.  

Enjoy!


The first video for this post shows an interactive game permanently installed for children at the Royal London Hospital.  Woodland Wiggle is a work commissioned by Vital Arts, in collaboration with Nexus Interactive Arts, Chris O'Shea, Felix Massie, and Brains & Hunch.  The game was created in C++ using openFrameworks, and relies on an Xbox Kinect camera.   The installation is part of play and garden spaces designed as healing environments for young patients.  (See links in the "Related" section for more information.)




The next video is the creation of Igor Labutov, Jason Yosinski, and Hod Lipson, of the Cornell Creative Machines Lab.

AI vs. AI:  Two chatbots talking to each other


I liked this video because I once created a chatbot video game for an AI for Games class I took several years ago, and I have fond memories of the hours I spent reading the textbook shown on the right side of the display, Artificial Intelligence: A Modern Approach.
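For readers wondering how two chatbots can "talk" to each other at all, here is a minimal sketch of the idea: a toy keyword-matching bot conversing with a copy of itself. (This is nothing like the Cornell system; the rules and responses below are made up purely for illustration.)

```python
import random

# A toy rule-based chatbot: replies are canned responses keyed on
# keywords found in the other bot's last message.
RULES = {
    "hello": ["Hi there!", "Hello to you too."],
    "robot": ["I am not a robot.", "Are YOU a robot?"],
    "?": ["Good question.", "I was about to ask you that."],
}
DEFAULT = ["Tell me more.", "Interesting.", "Why do you say that?"]

def reply(message: str) -> str:
    """Pick a response for the first keyword found in the message."""
    for keyword, responses in RULES.items():
        if keyword in message.lower():
            return random.choice(responses)
    return random.choice(DEFAULT)

def converse(opening: str, turns: int = 4) -> list[str]:
    """Let two identical bots talk to each other for a few turns."""
    transcript = [opening]
    message = opening
    for _ in range(turns):
        message = reply(message)
        transcript.append(message)
    return transcript

for line in converse("Hello, are you a robot?"):
    print(line)
```

Once the two bots' replies feed each other, the conversation drifts in the same oddly circular way the video shows, just with far less charm.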

Tom Jenkins and Simon Sharp, of The Theory, created the following two video shorts. Address Is Approximate is a stop-motion video about a lonely desk toy who makes a journey across the US via Google Maps Street View.   Speed of Light uses a pocket projector, a video feed, and creativity to create an augmented reality-like police-chase short.  According to information from the Vimeo website, Speed of Light was filmed using a Canon 5D MkII + HD MiniCam, with MicroVision projectors.

Address Is Approximate, from The Theory

Address Is Approximate from The Theory on Vimeo.


Speed of Light / aka / The World's Tiniest Police Chase from The Theory on Vimeo.

RELATED
Woodland Wiggle:  Interactive games on a giant television at the Royal London Hospital
Interactive Woodland at Royal London Hospital (Nexus Productions Website)
Giant tigers and rooftop teepees: the Royal London Hospital play space
Oliver Wainwright, The Guardian, 2/21/13
Note: I especially liked that in his article about the Royal London Hospital's play space, Oliver Wainwright shared this quote from Florence Nightingale's 1859 Notes on Nursing: "variety of form and brilliancy of colour in objects presented to patients are an actual means of recovery".
Cornell Creative Machines Lab
Robot-To-Robot Chat Yields Curious Conversation
Robert Siegel, Host, All Things Considered, 9/1/11
Introduction to Artificial Intelligence (Udacity Course)
Meet the Creators: Tom Jenkins and Simon Sharp Trade Viral Shorts for A Studio Film
Joe Berkowitz, Co.Create


Apr 1, 2013

The Uncanny Valley is Here! Activision's real-time character demo is chillingly real. (Not an April 1st joke.)

Up with Activision in the Uncanny Valley

I first heard the term 'uncanny valley' about eight or nine years ago when I was taking a 3D-modeling class.  At that time, the technology available was not close to reaching this valley - the point where robots or computer-generated characters look almost, but not quite, human, and become unsettling as a result.

A lot has changed over the years.

The following video, recently featured by Activision Blizzard during the 2013 Game Developers Conference, has attracted much attention in just a few days, partly because it is so real.



Although I noticed that the teeth still need a little more work, I was impressed. I especially liked the quality of the eye shaders used in the creation of this demo.  Examples of faces rendered with this feature turned on and off can be found on Jorge Jimenez's blog.  Jorge's slides from a SIGGRAPH 2012 course provide additional information.

Computer processors have become powerful enough to handle quite a lot of processing, and the tech world has been spreading the word. Below is a presentation by Jen-Hsun Huang, CEO of Nvidia, about the company's work to simulate the human face, touching on the 'uncanny valley':


Although using this technology to create characters this realistic seems a bit creepy, it might be OK after some refinement.  A few questions remain unanswered.   What would be the impact on children or teens who might spend many hours each week playing games with such realistic characters?  I'd hate to have a nightmare featuring one of these guys!

I think that this technology might have some potential for use in serious games and simulations, such as preparing emergency workers to handle a variety of realistic scenarios. Games with realistic digital characters, capable of generating a range of facial expressions, might be useful to support the learning of social interaction skills among young people with autism spectrum disorders.


RELATED/SOMEWHAT RELATED
Is It Real?  With New Technology Has Activision Crossed the 'Uncanny Valley'?
Eyder Peralta, the two-way, NPR, 3/28/13
Activision Reveals Animated Human That Looks So Real, It's Uncanny
Charlie White, Mashable, 3/28/13
Karl F. MacDorman's Writings (some focus on the uncanny valley)
Advances in Real-Time Rendering in Games Course (SIGGRAPH2012)
Separable Subsurface Scattering and Photorealistic Eyes Rendering (pptx)
Jorge Jimenez, Presenter, SIGGRAPH 2012
Next Generation of Character Rendering Teaser (pptx)
Jimenez and Team
Crossing the 'uncanny valley': Nvidia's Faceworks renders realistic human faces
Dean Takahashi, VB/Gamesbeat, 3/18/13
Real-Time Realistic Skin Translucency
Jimenez, J., Whelan, D., Sundstedt, V., Gutierrez, D., IEEE Computer Graphics and Applications, 2010
Exploring the Uncanny Valley Research Website (Indiana University School of Informatics)
Gaze-based Interaction for Virtual Environments pdf
Jimenez, J., Gutierrez, D., Latorre, P.  Journal of Universal Computer Science
Mori, Masahiro (1970). Bukimi no tani [the uncanny valley] (K. F. MacDorman & T. Minato, Trans.). Energy, 7(4), 33-35. 
2013 GPU Technology Conference Keynote Presentations

What happens when a 2-year-old wakes up to the sound of the Google Map Lady? "I CAN'T turn left right now!"

Google Map Lady says, "Turn Left", toddler yells from the back seat, "I CAN'T..."

If you are new to this blog, you might not know that I'm the grandmother of a 2-year-old little boy.  Watching him grow in an increasingly technology-enriched world has been an eye-opener at times, from his first interaction with my iPad, fingers-and-toes at 7 months of age, to his attempts at rafting down a digital river while playing the Kinect Adventures! River Rush game.

Technology is rapidly changing how we learn, interact, and navigate our world.  Designers, developers, and others who are involved in the process of creating for the near future must be mindful of the ways newer technologies might play out in the real world, where the "user" is not always the person intended for the "user experience".  Off-the-desktop technologies are rapidly advancing, and impact people of all ages, wherever they happen to be.

Today's story is just one example.

I'm fortunate to live about a 35 minute drive from my grandson, and for this reason, I sometimes take him out and about, especially when his parents have a lot of errands to run.

Toddler with replica of the Eiffel Tower, Amélie's French Bakery, NoDa, Charlotte, NC

Toddler dancing around a floor mural

After a nice lunch at Amélie's French Bakery near the NoDa neighborhood (Charlotte, NC), and exploring the floor murals in the little mall behind the bakery, I told my grandson that we were going to the "Big Park" (Freedom Park).

He was so excited, but within a few minutes, he was fast asleep.


Toddler smiling and happy in the back seat


Toddler asleep in the back car seat
I drove up towards the airport to kill time, thinking that he'd wake up and we'd watch the planes. He was still sleeping.  Now what?

I opened up Google Maps on my cell phone to get directions from the airport to the Carolina Raptor Center at Latta Plantation Park, since I wasn't sure how to get there from the airport.

About 15 minutes later, as the Google Map Lady gave directions, Levi woke up, saying "What's that sound? A lady's voice?". The Google Map Lady spoke again, and said something like, "In 1000 feet, take a left turn." 

Levi replied emphatically, "I CAN'T turn left right now!". The Google Map Lady responded with the next direction, and Levi replied, "I CAN'T do that!".

The little guy was visibly upset, because he thought the lady was telling him what to do. It was obvious to him that he could not comply with her request.

What to do?   How do I explain the "Google Map Lady" to a 2-year-old?

This is how I handled the situation:

I told him that the lady's voice was there to help me know where to turn so I could drive to the raptor center.  I kindly told him that the directions were just for me, not for little boys who can't turn the car because they are in car seats and can't drive. He nodded and said, with relief, "Lady's voice for Mi-Mi, NOT for little boys", and was fine after that.

Note:
Although I did not know it at the time, my grandson had somehow wriggled out of the left harness of his car seat. I discovered the problem as I went to unfasten him from the car seat, and wondered how long he'd not been secured safely.  It hadn't occurred to me that this would happen - everything was in place at the beginning of our ride, as you can see from the first picture.  

As I lifted my grandson out of the car seat, it crossed my mind that it would be a good idea if car seats came with sensors to let the driver know if the car seat straps, snaps, or buckles became unsecured. (Systems like Forget Me Not alert parents if a child is left behind in the car.)

After conducting a quick search, I found that Sherine Elizabeth Thomas has applied for a patent that includes the use of a sensor to alert the adult that a child has unbuckled their seat belt.  I think that a system could be developed to provide an alert if the child was not safely secured, as in the case of my wiggly grandson.  
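The decision logic for such an alert could be quite simple. Here is a rough sketch of the idea; the sensor inputs, names, and thresholds below are all hypothetical, not from any actual product or the patent application:

```python
from dataclasses import dataclass

@dataclass
class SeatStatus:
    """Hypothetical latch sensors: one switch per buckle point."""
    left_harness_latched: bool
    right_harness_latched: bool
    crotch_buckle_latched: bool

def should_alert(seat: SeatStatus, speed_mph: float) -> bool:
    """Alert the driver if any latch opens while the car is moving."""
    all_latched = (seat.left_harness_latched
                   and seat.right_harness_latched
                   and seat.crotch_buckle_latched)
    moving = speed_mph > 0
    return moving and not all_latched

# A wiggly toddler slips the left harness while the car is doing 35 mph:
print(should_alert(SeatStatus(False, True, True), 35.0))  # True
```

In a real system the switches would feed a small microcontroller that chimes or flashes a dashboard light, much like today's seat-belt reminders do for adults.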


RELATED AND SOMEWHAT RELATED
(Self-activating, self-aware digital wireless safety system)
John Polaceck, 3/24/13
Grandma Got STEM blog (More info to come on this topic!)

Mar 16, 2013

UPDATE: What's New for Kinect? Fusion, real-time 3D digitizing, design considerations, and more.

The Evolution of Microsoft Kinect

I've been following the evolution of Microsoft's Kinect, and recently discovered a few interesting videos that show how far the system has come. According to Josh Blake, the founder of the OpenKinect community and author of the Deconstructing the NUI blog,  the Kinect for Windows SDK v1.7 will be released on Monday, March 18th, from http://www.kinectforwindows.com.  More details about this version can be found on Josh's blog as well as the official Kinect for Windows blog.


It is possible to create applications for desktop systems that work with the Kinect in interesting ways, as you'll see in the following videos. I think there is potential here for use in education/edutainment!

Below is a video of Toby Sharp, of Microsoft Research, Cambridge, demonstrating Kinect Fusion.  The software allows you to use a regular Kinect camera to reconstruct the world in 3D.
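At the heart of Kinect Fusion is a simple idea: every cell (voxel) of the 3D model keeps a running weighted average of the noisy depth observations accumulated across many frames, so the fused surface ends up far cleaner than any single Kinect frame. A minimal sketch of that averaging step, with illustrative names and values:

```python
# One voxel's update in miniature: fold each new depth observation into
# a running weighted average, capping the weight so the model can still
# adapt if the scene changes.
def fuse(voxel_value: float, voxel_weight: float,
         new_value: float, new_weight: float = 1.0,
         max_weight: float = 64.0) -> tuple[float, float]:
    """Return the voxel's new (value, weight) after one observation."""
    total = voxel_weight + new_weight
    fused = (voxel_value * voxel_weight + new_value * new_weight) / total
    return fused, min(total, max_weight)

# Three noisy observations of a surface that is really at distance 0.5:
value, weight = 0.0, 0.0
for observation in (0.48, 0.53, 0.49):
    value, weight = fuse(value, weight, observation)
print(round(value, 3))  # 0.5
```

The real system does this per voxel over an entire volume, in parallel on the GPU, while also tracking the camera's pose from frame to frame.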



KinÊtre: A Novel Way to Bring Computer Animation to Life
According to information from the YouTube description, "KinÊtre is a research project from Microsoft Research Cambridge that allows novice users to scan physical objects and bring them to life in seconds by using their own bodies to animate them. This system has a multitude of potential uses for interactive storytelling, physical gaming, or more immersive communications."




The following videos are quite long, so feel free to re-visit this post when you have time to relax and take it all in!

Kinect Design Considerations
This video covers Microsoft's Human Interface Guidelines, scenarios for interaction and use, and best practices for user interactions.  It also includes a preview of the next major version of the Kinect SDK. 


Kinect for Windows Programming Deep Dive
This video discusses how to build Windows Desktop apps and experiences with the Kinect, and also previews some future work.




RELATED
Kinect for Windows Developer Downloads
Kinect for Windows Blog
Deconstructing the NUI Blog (Josh Blake)
Microsoft Kinect Learns to Read Hand Gestures, Minority Report-Style Interface Now Possible
Celia Gorman, IEEE Spectrum, 3/13/13
Kinect hand recognition due soon, supports pinch-to-zoom and mouse click gestures.
Tom Warren, The Verge, 3/6/13
Microsoft's KinEtre Animates Household Objects
Samuel K. Moore, IEEE Spectrum, 8/8/12
Kinect Fusion Lets You Build 3-D Models of Anything
Celia Gorman, IEEE Spectrum, 3/6/13
Description of Kinect sessions at Build 2012
Kinect for every developer!
Tom Kerkhove, Kinecting for Windows, 2/15/13
Kinect in the Classroom
Kinect Education

Note: Although I recently received my developer kit for Leap Motion, another gesture-based interface, I haven't lost interest in following news for Kinect.

Interactive MaKey MaKey: "An Invention Kit for Everyone" - Video Preview!

Interactive Invention:  MaKey Makey for All


MaKey MaKey is a hands-on "maker" kit created by Jay Silver and Eric Rosenbaum of MIT, based on research from the MIT Media Lab's Lifelong Kindergarten group.

After watching the lively video today,  I ordered my very own kit!


MaKey MaKey - An Invention Kit for Everyone from jay silver on Vimeo.

How does MaKey MaKey work?  It is powered by a board that supports six keyboard keys and mouse control.  It is built on Arduino, an open-source electronics prototyping platform that supports multi-modal interactive input and output.
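As a rough simulation of the idea: each input pad on the board maps to a key, and touching a pad (closing its circuit through a conductive object, like a banana, while holding the earth wire) "presses" that key. The real board does this in firmware, acting as a USB keyboard; the pad names and key map below are just illustrative.

```python
# Map each pad on the front of the board to the key it "presses".
PAD_TO_KEY = {
    "up": "UP_ARROW", "down": "DOWN_ARROW",
    "left": "LEFT_ARROW", "right": "RIGHT_ARROW",
    "space": "SPACE", "click": "MOUSE_CLICK",
}

def scan(closed_pads: set[str]) -> list[str]:
    """Return the key events for whichever pad circuits are closed."""
    return sorted(PAD_TO_KEY[pad] for pad in closed_pads if pad in PAD_TO_KEY)

# Touching the "space" pad while holding the earth (ground) wire:
print(scan({"space"}))  # ['SPACE']
```

Because the computer just sees ordinary key presses, anything conductive - fruit, Play-Doh, pencil graphite, people - can control any program that already listens to the keyboard.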

I see endless possibilities and fun maker-crafting with my little grandson!


In the following video, musician/visual artist j.viewz uses his MaKey MaKey kit to hook up fruits and veggies to his music system.  

Watch j.viewz play a bunch of grapes!  The strawberries sound nice. 

j.viewz playing Teardrop with vegetables from j.viewz on Vimeo.


RELATED
MaKey MaKey
Lifelong Kindergarten
Arduino
Phidgets
How to Start Making Your Own Electronics with Arduino and Other People's Code
Thorin Klosowski, Lifehacker, 1/12/12