Showing posts sorted by date for query NUI. Sort by relevance Show all posts

Sep 6, 2013

Eye Tribe Eye Tracker Dev Kit, $99; Open Source ITU Gaze Tracker Grows Up!

The Eye Tribe Eye Tracker developer kit is available for pre-order for $99.00. The kit comes with an SDK with C++, C#, and Java bindings, full source code included.

I've been waiting for a while to see this happen! 

The Eye Tribe Eye Tracker is an outgrowth of the work of a group of researchers at the IT University of Copenhagen, where it was known as the open-source ITU Gaze Tracker.
I came across it a few years ago in a NUI-Group forum, and later wrote a post about it when the 2.0 version was released. 

Although the Eye Tribe Tracker was originally developed to meet the needs of people with disabilities who could not otherwise access computers, it turned out to have potential for many other uses that were not practical before the spread of mobile technologies such as touch-screen tablets and smartphones.

To get a better understanding of eye-gaze/tracking technology, take a look at the following videos and follow the related links.
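Raw gaze estimates from camera-based trackers are noisy, so applications usually smooth them before mapping gaze to the screen. Below is a minimal sketch of one common approach, a moving-average filter; the window size and data format are illustrative placeholders, not taken from the Eye Tribe SDK:

```python
from collections import deque

class GazeSmoother:
    """Moving-average filter over the last n (x, y) gaze samples."""

    def __init__(self, window=5):
        self.samples = deque(maxlen=window)

    def update(self, x, y):
        # Add the newest sample and return the average of the window.
        self.samples.append((x, y))
        n = len(self.samples)
        sx = sum(p[0] for p in self.samples)
        sy = sum(p[1] for p in self.samples)
        return sx / n, sy / n

smoother = GazeSmoother(window=3)
print(smoother.update(100, 100))  # (100.0, 100.0)
print(smoother.update(110, 90))   # (105.0, 95.0)
print(smoother.update(120, 110))  # (110.0, 100.0)
```

Real trackers often use fancier filters (e.g., weighting recent samples more heavily), but the idea is the same: trade a little latency for a steadier cursor.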



Below is a demonstration of the gaze UI on an Android smartphone:


Here is another look at this technology running on a Windows 8 Tablet:





RELATED
The Eye Tribe (website)
Eye Tribe starts taking pre-orders for $99 Windows eye tracker
Senseye will let you control your mobile phone with your eyes
Martin Bryant, The Next Web, 12/2/11
Open-source Eye-tracking: The ITU Gaze Tracker 2.0 Beta, via Martin Tall, NUI-Group Member
Lynn Marentette, Interactive Multimedia Technology, 11/1/10
ITU GazeGroup
Gaze Tracker Development
GazeGroup Forum
Martin Tall


RELATED VIDEOS
Eye Tribe was formerly known as Senseye. Below is an earlier video that shows how it worked with a web-cam on a mobile device:



Open-Source ITU Gaze Tracker

ITU Gaze Tracker from ITUcph on Vimeo.


Earlier Videos of the ITU Gaze Tracker:
Technical Demonstration 




Seeking Sustainable Innovation

Jun 6, 2013

Interactive Displays and "Billboards" in Public Spaces; Pervasive Displays 2013

The 2013 International Symposium on Pervasive Displays (PerDis 2013) recently convened in Mountain View, California. Since I couldn't attend the conference, I was happy to learn from Albrecht Schmidt that the proceedings were recently uploaded to the ACM Digital Library. There are many exciting things going on in this interdisciplinary field!

Researchers involved with the Instant Places project, described in the video below, presented their work at PerDis 2013. The Instant Places project was part of PD-Net, a series of research efforts exploring the future of pervasive display networks in Europe. (See the "Related" section for additional references and links.)


Instant Places: Tools and Practices for Situated Publication in Display Networks

Below is information from the Instant Places video and website:
"The video describes a novel screen media system that explores new practices for individual publication and identity projection in public digital displays." 

"Instant Places has been developed by the Ubicomp group of the Information Systems Department, at the University of Minho, and has been funded within the scope of pd-net: Towards Future Pervasive Display Networks, by the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 244011."

Saul Greenberg was the keynote speaker at PerDis 2013.  His keynote, "Proxemic Interactions: Displays and Devices that Respond to Social Distance", highlights how far off-the-desktop our digital/physical lives have become, and how this has influenced recent research in human-computer interaction. Saul is a professor at the University of Calgary and leads research in Human Computer Interaction, Computer Supported Cooperative Work, and Ubiquitous Computing.

Although the video of Saul Greenberg's presentation below is not from PerDis 2013, it touches on the same topics and is worth taking an hour to watch. In this video, Greenberg presents an overview of the history of human-computer interaction. He also discusses how an understanding of social theory, the perception of spatial relationships, and embodied interaction can be applied to the design of natural user interfaces and interactive systems. Useful examples of interaction design explorations, within an ecological context, come later in the video.
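The core proxemic idea, a display that responds to social distance, can be sketched as a mapping from sensed distance to display behavior. The zone names and thresholds below are my own illustrative placeholders, not values from Greenberg's research:

```python
def display_mode(distance_m):
    """Pick a display behavior from the sensed user distance (meters).

    Zones and thresholds are illustrative; real proxemic systems are
    tuned per installation and often track orientation, not just range.
    """
    if distance_m > 4.0:
        return "ambient"   # far away: show an attract loop
    if distance_m > 2.0:
        return "subtle"    # passing by: hint that the display is interactive
    if distance_m > 0.8:
        return "personal"  # engaged: show interactive content
    return "direct"        # within reach: enable touch input

print(display_mode(5.0))  # ambient
print(display_mode(1.5))  # personal
```

A Kinect-style depth camera can supply the distance estimate; the interesting design work is deciding what each zone should reveal.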

Proxemic Interactions: the New Ubicomp?




RELATED


My Backstory
Regular readers of this blog know that the subject of interactive displays in public spaces holds my interest. When I was taking computer courses during the mid-2000s, I focused some of my energy on projects designed for large interactive displays, inspired by reading articles like "Physically Large Displays Improve Performance on Spatial Tasks" (Desney S. Tan, Darren Gergle, Peter Scupelli, and Randy Pausch) and "Dynamo: public interactive surface supporting the cooperative sharing and exchange of media" (Shahram Izadi, Harry Brignull, Tom Rodden, Yvonne Rogers, Mia Underwood).

Jeff Han's 2006 TED talk was another inspiration. I remember my excitement as I watched his demonstration of an interactive multi-touch screen the size of a drafting board, before the iPhone/iPad was born. Another inspiration was Hans Rosling's TED Talk about health statistics, with his animated interactive data visualizations presented on a huge screen.

The following year, I stumbled upon the NUI-Group while searching for information about multi-touch displays, and was inspired by many of the early members of the group. I also became acquainted with a world-wide network of people who share similar interests, such as Albrecht Schmidt and his team of researchers at the University of Stuttgart. This busy group recently presented at PerDis 2013 and at CHI 2013 and is involved in a wide range of ongoing projects.

INTERACTIVE DISPLAYS
Alt, F., Sahami, A., Kubitza, T., Schmidt, A. Interaction Techniques for Creating and Exchanging Content with Public Displays. In: Proceedings of the 2013 ACM Annual Conference on Human Factors in Computing Systems
Hinrichs, U., Carpendale, S., Valkanova, N., Kuikkaniemi, K., Jacucci, G., Vande Moere, A. Interactive Public Displays. IEEE Computer Graphics and Applications, 33(2), 25-27
PerDis 2013 Program
Sample Papers:
Otero, N., Muller, M., Alissandrakis, A., and Milrad, M. Exploring video-based interactions around digital public displays to foster curiosity about science in the schools. PerDis 2013 (pdf)
Alt, F., Schneegass, S., Girgis, M., Schmidt, A. Cognitive Effects of Interactive Public Display Applications. Proceedings of the 2nd ACM International Symposium on Pervasive Displays. 2013
Langheinrich, M., Schmidt, A., Davies, N., and Jose, R. A practical framework for ethics: the PD-net approach to supporting ethics compliance in public display studies. Proceedings of the 2nd ACM International Symposium on Pervasive Displays. 139-143

Note:  Members of ACM have access to all of the proceedings of PerDis2013 in the ACM Digital Library. Non-members have access to the abstracts.

PD-NET
PD-Net 
PD-NET Publications - a great reference list, with links to many papers
Reading List on Pervasive Public Displays
About Instant Places
About the Living Lab for Screens Set

DOOH-DIGITAL OUT-OF-HOME
Daily Digital Out of Home post "Billboards That Look Back" : Could miniature cameras embedded in ads lead to Big Brother at the mall? The World Is My Interactive Interface, 5/28/08
J. Müller et al., "Looking Glass: A Field Study on Noticing Interactivity on a Shop Window," Proc. 2012 SIGCHI Conf. Human Factors in Computing Systems (CHI 12), ACM, 2012, pp. 297–306
Michelis, D., Meckel, M. Why Do We Want to Interact With Electronic Billboards in Public Space?  First Workshop on Pervasive Advertising, Pervasive 2009, 5/11/09
The Rage of Interactive Billboards
The Print Innovator, 11/28/12
10 Brilliant Interactive Billboards (Videos)
Amy-Mae Elliot, Mashable, 8/21/11


SOME INTERESTING EARLIER WORK
Jeff Han's 2006 TED Talk (This is worth revisiting, as it came out before the iPhone, iPad, etc.)


Tan, D.S., Gergle, D., Scupelli, P., Pausch, R. Physically large displays improve performance on spatial tasks. ACM Transactions on Computer-Human Interaction, 13(1), 2006, 71-99

Revisiting promising projects: Dynamo an application for sharing information on large interactive displays in public spaces (blog post)
Lynn Marentette, Interactive Multimedia Technology, 09/16/07

Brignull, H., Izadi, S., Fitzpatrick, G., Rogers, Y., Rodden,  T. The introduction of a shared interactive surface into a communal space. Proceedings of the 2004 ACM conference on Computer supported cooperative work (CSCW'04), Chicago, ACM Press, 2004 (pdf)


Izadi, S., Brignull, H., Rodden, T., Rogers, Y. and Underwood, M. Dynamo: public interactive surface supporting the cooperative sharing and exchange of media. In Proc. User Interface Software and Technology (UIST '03), Vancouver, ACM Press, 2003, 159-168. (pdf)

Proxemics (Wikipedia)


Why Do We Want to Interact With Electronic Billboards in Public Space? 


May 5, 2013

Leap Motion Update: Slow-going progress for me, at least for now!

Leap Motion Progress

My Leap Motion dev kit arrived in March. With excitement, I installed it on my new 27-inch iMac. I decided that this would be the time to take the "leap" into Objective-C and explore the mysteries of Xcode. I had planned to make a simple iPad app for my 2-year-old grandson, but this inspired me to change my plans.

Why not learn Objective-C to make a simple music/art/dance Leap Motion app for little ones?  

My progress so far?  Slow.

I updated Xcode. I installed the Leap Motion SDK.  I updated the Leap Motion SDK.  I played with the samples that came with the Leap Motion kit.   

When it came time for me to try something on my own, I thought I had everything set up in Xcode.  I got error messages that I did not understand. My attempt to figure things out led me to the Stack Overflow website, and by then, I had to get back to my paperwork in order to prepare for the next work day.

Today I realized that I had missed the link about installing the Leap API docs for Objective-C in Xcode. Other things needed to be updated, so at that point, I decided to write this post...

Reflection:
After writing some code and making repeated errors, I realized how much I had let Microsoft take root in my head. Until 2003, the coding part of my brain was a pristine slate. It wasn't cluttered with bits and pieces of previous coding languages.

Since I tend to be a knowledge junkie, my brain soaked up more than I needed when I was taking computer courses. If you could peek inside, you'd see C# code snippets for multi-touch and NUI, a few algorithms for AI and data visualization, trivia from MSDN, and images of the Visual Studio workspace. There would be odds and ends from VB.Net, JavaScript, ActionScript, CSS, Java, C++, and pseudocode for a variety of computational thought experiments.

A lot of stuff, and for most of it, no place to go, except for an occasional technology dream.


What's ahead?
In the short term, I'll be doing what I always do this time of year. For many school psychologists, the last couple of months of the school year are sort of like tax season for accountants. I have lots of students to see, lots of psychological evaluation reports to write, and meetings to attend. The paperwork will fill many evenings and weekends, but there is an end in sight.

Summer.  This will be my summer of code.

I'll be in NYC for one week in June, attending the Interaction Design and Children conference (IDC 2013). Many of the workshops I'd like to attend will be held at the same time. Take a look at the program and you'll see why!

Decisions to be made... 
Although I am pretty good at keeping a lid on my desire to design and code during my day-to-day life as a school psychologist, I'm finding that it is getting more difficult to ignore. I have some thinking to do. In the not-too-distant future, it is possible that I'll leap out of my K-12 cocoon.

I don't think I'll leap too far, because I'd like to focus my work on projects that enhance the lives of children and families. I will make sure that some of my work benefits people of all ages who have disabilities or encounter barriers in their lives.



SOMEWHAT RELATED
Joy of Computing, 1985

My daughter, who was just two years old in the above picture, returned to school to take computer courses after working in the non-profit arts management field.  I'm happy about this, but I know that she'll face many hidden barriers when she starts working in a male-dominated environment. She is not alone.

I'm working on a future post about computer and technology-related careers. Things have changed rapidly over the past several years: there are many new ways to learn how to code, and, over time, more opportunities for creative computational thinkers, male and female, to take the lead.

In Google's Inner Circle, a Falling Number of Women
Claire Cain Miller, NYT, 8/22/13

So You Don't Want to be a Programmer After All
Jeff Atwood, Coding Horror, 4/29/13

StackExchange  (Includes StackOverflow, helpful when I troubleshoot coding problems)

An Overview of HTML 5, PhoneGap, and Mobile Apps: Understanding how web languages are used for apps and how they work with native code
Dan Bricklin



Apr 23, 2013

Google Earth and Leap Motion - I'll experiment with this after work today!

Leap Motion + Google Earth


I have the Leap Motion dev kit and can't wait until I can use it with Google Earth. Hopefully I'll find time tonight after I get home from work! For now, here is the promotional video:


RELATED
Leap Motion
Leap Motion: My Dev Kit Arrived - Now What?!   Thoughts About "NUI" Child-Computer-Tech-Interaction -- and More

Mar 16, 2013

UPDATE: What's New for Kinect? Fusion, real-time 3D digitizing, design considerations, and more.

The Evolution of Microsoft Kinect

I've been following the evolution of Microsoft's Kinect, and recently discovered a few interesting videos that show how far the system has come. According to Josh Blake, the founder of the OpenKinect community and author of the Deconstructing the NUI blog,  the Kinect for Windows SDK v1.7 will be released on Monday, March 18th, from http://www.kinectforwindows.com.  More details about this version can be found on Josh's blog as well as the official Kinect for Windows blog.


It is possible to create applications for desktop systems that work with the Kinect in interesting ways, as you'll see in the following videos. I think there is potential here for use in education/edutainment!

Below is a video of Toby Sharp, of Microsoft Research, Cambridge, demonstrating Kinect Fusion.  The software allows you to use a regular Kinect camera to reconstruct the world in 3D.



KinEtre: A Novel Way to Bring Computer Animation to Life
According to information from the YouTube description, "KinÊtre is a research project from Microsoft Research Cambridge that allows novice users to scan physical objects and bring them to life in seconds by using their own bodies to animate them. This system has a multitude of potential uses for interactive storytelling, physical gaming, or more immersive communications."




The following videos are quite long, so feel free to re-visit this post when you have time to relax and take it all in!

Kinect Design Considerations
This video covers Microsoft's Human Interface Guidelines, scenarios for interaction and use, and best practices for user interactions.  It also includes a preview of the next major version of the Kinect SDK. 


Kinect for Windows Programming Deep Dive
This video discusses how to build Windows Desktop apps and experiences with the Kinect, and also previews some future work.




RELATED
Kinect for Windows Developer Downloads
Kinect for Windows Blog
Deconstructing the NUI Blog (Josh Blake)
Microsoft Kinect Learns to Read Hand Gestures, Minority Report-Style Interface Now Possible
Celia Gorman, IEEE Spectrum, 3/13/13
Kinect hand recognition due soon, supports pinch-to-zoom and mouse click gestures.
Tom Warren, The Verge, 3/6/13
Microsoft's KinEtre Animates Household Objects
Samuel K. Moore, IEEE Spectrum, 8/8/12
Kinect Fusion Lets You Build 3-D Models of Anything
Celia Gorman, IEEE Spectrum, 3/6/13
Description of Kinect sessions at Build 2012
Kinect for every developer!
Tom Kerkhove, Kinecting for Windows, 2/15/13
Kinect in the Classroom
Kinect Education

Note: Although I recently received my developer kit for Leap Motion, another gesture-based interface, I haven't lost interest in following news for Kinect.

Mar 11, 2013

Leap Motion: My Dev Kit Arrived - Now What?! Thoughts About "NUI" Child-Computer-Tech-Interaction - and More



My Leap Motion developer kit arrived last week. I carefully unboxed the small device and tried out the demo apps that came with the SDK.  I'm doing more looking than leaping at this point.

I'd like to create a simple cause-and-effect music, art and movement application for my 2-year-old grandson, knowing that he'll be turning three near the end of this year.  It would be nice if my app could provide young children with enough scaffolding to support gameplay and learning over a few years of development.
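One way to sketch the cause-and-effect idea is to map hand height to pitch, so waving higher plays higher notes. Everything below (the height range, the MIDI note range) is a hypothetical placeholder rather than actual Leap Motion SDK code:

```python
def note_for_height(y, y_min=50.0, y_max=450.0, low_note=48, high_note=72):
    """Map a hand height reading to a MIDI note number.

    The height range (in arbitrary sensor units) and the two-octave note
    span are made up for illustration; a real app would calibrate them.
    """
    y = max(y_min, min(y, y_max))              # clamp to the playable range
    frac = (y - y_min) / (y_max - y_min)       # 0.0 at the bottom, 1.0 at the top
    return low_note + round(frac * (high_note - low_note))

print(note_for_height(50))   # 48 (C3, hand at the bottom of the range)
print(note_for_height(250))  # 60 (middle C)
print(note_for_height(450))  # 72 (C5, hand at the top)
```

For a toddler, the coarse mapping matters less than immediate, obvious feedback: any hand movement should make a sound right away.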

Now that I'm a grandmother, I've spent some time thinking about what the evolution of NUI will mean for young children like my grandson. Family and friends captured his first moments after birth with iPhones and shared them across the Internet. Born into the iWorld, he knows how to use an iPad or smartphone to view his earlier digital self on YouTube, without ever touching a mouse or a physical keyboard.

The little guy is pretty creative in his method of interacting with technology, as I've informally documented on video.   He was seven months old when he first encountered my first iPad.  It was fingers-and-toes interaction from the start.  

In the first picture below,  he's playing with NodeBeat.  In the second picture, he's 27 months old, experimenting with hand and foot interaction, on a variety of apps.

My grandson is new to motion control applications, so I'm just beginning to learn what he likes, and what he is capable of doing. A couple of weeks ago, we played River Rush, from the Kinect Adventures game. He loved jumping up and down as he tried to hit the adventure pins. Most of the time, he kept jumping right out of the raft! (I think next time we'll try Kinect Sesame Street TV or revisit Kinectimals.)


One of the steps I'm taking to prepare for my Leap Motion adventure is to take a look at what people have done with it so far. At least 12,000 developer kits have been released, so hopefully there will be some interesting apps to go along with the retail version of Leap Motion when it arrives at Best Buy on May 19th of this year.

One app I really like is Adam Somers' AirHarp, featured in the video clip below:


I also like the idea behind the following app, developed by undergraduate students:

Social Sign: Multi-User sign language gesture translator using the Leap Motion Controller (git.to/socialSign)
 
"Built at the PennApps Spring 2013 hackathon, Social Sign is a friendly tool for learning sign language! By using the Leap Motion device, the BadApples team implemented a rudimentary machine learning algorithm to track and identify American Sign Language from a user's hand gestures."

"Social Sign visualizes these hand gestures and broadcasts them in textual and visual representations to other signers in a signing room. In a standard chat room fashion, the interface permits written communication but with the benefit of enhanced learning in mind. It's all about learning a new way to communicate."-BadApples Team
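A rudimentary recognizer along the lines the BadApples team describes could work by nearest-neighbor matching on hand-feature vectors. The features, values, and labels below are invented for illustration; the real Social Sign code surely differs:

```python
import math

# Tiny labeled "training set": each gesture is a feature vector, e.g.
# (extended finger count, normalized fingertip spread). Values invented.
TRAINING = [
    ((5, 0.9), "open hand"),
    ((0, 0.0), "fist"),
    ((2, 0.4), "peace sign"),
]

def classify(features):
    """Return the label of the nearest training example (1-NN)."""
    return min(TRAINING, key=lambda t: math.dist(t[0], features))[1]

print(classify((4, 0.8)))  # open hand
print(classify((1, 0.1)))  # fist
```

One-nearest-neighbor is about the simplest "machine learning" there is, which makes it a plausible hackathon starting point before moving to anything statistical.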



There are a few NUI-focused tech companies that have experimented with Leap Motion. Today, I received a link to the following video clip from Joanna Taccone of IntuiLab, featuring their most recent work:
Gesture recognition with Leap Motion using IntuiFace Presentation

"Preview of our work with the Leap Motion controller. In the same spirit as our support for Microsoft Kinect, we have encoded true gesture support, not just mouse emulation, for the creation of interactive applications by non-programmers. The goal is to hide complexity from designers using our product, IntuiFace Presentation (IP). Through the use of IP's trigger/action syntax, designers simply select a gesture as a trigger - Swipe Left, Swipe Right, Point, etc. - and associate that gesture with an action like "turn the page" or "rotate the carousel". As you can see in this video, it works quite well. :-) We will offer Leap support as soon as it ships." -IntuiLab
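The trigger/action pairing IntuiLab describes amounts to a lookup from recognized gesture to UI action. It can be sketched like this; the gesture names and actions are placeholders, not IntuiFace syntax:

```python
# Map recognized gestures to UI actions (names are placeholders,
# not IntuiFace's actual trigger/action vocabulary).
actions = {
    "swipe_left":  lambda ui: ui.append("next page"),
    "swipe_right": lambda ui: ui.append("previous page"),
    "point":       lambda ui: ui.append("select item"),
}

def on_gesture(name, ui_log):
    """Dispatch a recognized gesture to its bound action, if any."""
    handler = actions.get(name)
    if handler:              # gestures with no binding are simply ignored
        handler(ui_log)

log = []
on_gesture("swipe_left", log)
on_gesture("shrug", log)     # unbound gesture: no effect
print(log)  # ['next page']
```

The point of this indirection is exactly what IntuiLab highlights: a designer only picks gesture-action pairs, and never touches the recognition code.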



Below is a demonstration of a couple of guys playing Dropchord, a collaboration between Leap Motion and Double Fine. From the video, you can tell that they had a blast!

Here is an excerpt from the chatter:  "The thing is that everyone just looks cool..Yeah, I know, it doesn't matter what you are doing...it's got the right amount of speed-up-slow-down stutter-y stuff...it is like a blend of art and science.."

According to the website, Dropchord is "a music-driven score challenge game for the Leap Motion controller, coming soon for PC, Mac, & iOS from the creators of Kinect Party."

The following video is a demonstration of the use of Leap Motion to control an avatar and other interaction in Second Life:



Below are a few more videos featuring Leap Motion:


Control Your Computer With a Chopstick: Leap Motion Hands On (Mashable)


The Leap Motion Experience at SXSW 2013


LEAP Motion demo: Visualizer, Windows 8, Fruit Ninja, and More...



RELATED
Air Harp for Leap Motion, Responsive Interaction
Leap Motion and Double Fine team on Dropchord, give air guitar skills an outlet
John Fingas, Engadget, 3/7/13
Leap Motion Controller Set To Ship May 13 for Global Pre-Orders, In Best Buy Stores May 19.
Hands on With Leap Motion's Controller
Lance Ulanoff, Mashable, 3/10/13
Leap Motion website
Social Sign
IntuiLab
Leap Motion: Low Cost Gesture Control for Your Computer Display

SOMEWHAT RELATED
Kinect for Windows Academic: Kaplan Early Learning
"3 years & up. Hands-on play with a purpose -- the next generation way. This unique learning tool uses your body as the game controller making it a great opportunity to combine active play and learning all in one. Use any surface to actively engage kinesthetic, visual, and audio learners. Bundle includes the following software: Word Pop, Directions, Patterns, and Shapes."

Comment:
I've been an enthusiastic supporter of natural user interfaces and interaction for years - back in 2007, I worked on touch-screen applications for large displays as a graduate student and became an early member of the NUI-Group. I'm also a school psychologist, and from my experience, I understand how NUI-based applications and technologies, such as interactive whiteboards and touch tablets like the iPad, can support the learning, communication, and leisure needs of students who have significant special needs. It looks like Leap Motion and similar technologies have the potential to support a wide range of applications that target special populations of all ages.

Feb 15, 2013

Designing for Touch & Gesture: Tips for Apps and the Web (Updated)

In the past, our fingers did the walking: sifting through files, papers, pamphlets, and phonebooks, and then pointing and clicking with a mouse to interact with images and text, in essence, electronic imitations of the paper-based world. Traditional forms, brochures, ad inserts, and posters informed much of the design.

How much have things changed? It is 2013, but you'd think it was 1997 from the PowerPoint look and feel of many apps and web sites! Touch is everywhere, but from what I can tell, not enough designers and developers have stepped up to the plate to think more deeply about the ways their applications can support human endeavors through touch and gesture interactions.

For an overview of this topic, take a look at my 2011 post, written after a number of ugly encounters with user-unfriendly applications:  Why bother switching from GUI to NUI?  

For an in-depth look at the history of multi-touch, the wisdom of Bill Buxton is well worth absorbing. He's worked with all sorts of interfaces and has been curating the history of multi-touch and gesture systems since 2007:


Multi-Touch Systems that I have Known and Loved
Bill Buxton, Microsoft Research, Updated 8/30/12



Even if you are not a designer or developer, I encourage you to explore some of the links below:

Touch Gestures for Application Design
Luke Wroblewski, 10/9/12

Common Misconceptions About Touch
Steven Hoober, 3/18/13

Designing With Tablets in Mind:  Six Tips to Remember
Connor Turnbull, Webdesign tuts+, 9/27/11

Finger-Friendly Design: Ideal Mobile Touchscreen Target Sizes
Anthony T, Smashing Magazine, 2/21/12

Best Practices: Designing Touch Tablet Experiences for Preschoolers (pdf)
Sesame Workshop


Are Touch Screens Accessible?
AccessIT, National Center on Accessible Information Technology in Education

iOS Human Interface Guidelines
Apple

Android User Interface Guidelines
Using Touch Gestures
Handling Multi-Touch Gestures
Android

Designing for Tablets?  We're Here to Help!
Roman Nurik, Android Developers Blog 11/26/12

Touch interaction design (Windows Store apps)
Microsoft - MSDN



Jan 10, 2013

Gesture Markup Language (GML) for Natural User Interaction and Interfaces

Quick post:
"GML is an extensible markup language used to define gestures that describe interactive object behavior and the relationships between objects in an application.  Gesture Markup Language has been designed to enhance the development of multiuser multi-touch and other HCI device driven applications." -Gesture ML Wiki

GestureML was created and is maintained by Ideum.
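Since GML is XML-based, gesture definitions can be inspected with any XML parser. The fragment below is a made-up, GML-flavored example, not verbatim markup from the GestureML wiki:

```python
import xml.etree.ElementTree as ET

# A made-up, GML-flavored fragment (element and attribute names are
# illustrative, not copied from the GestureML specification).
doc = """
<GestureMarkupLanguage>
  <Gesture id="n-drag" type="drag">
    <match><action><initial><cluster point_number_min="1"/></initial></action></match>
  </Gesture>
  <Gesture id="n-rotate" type="rotate"/>
</GestureMarkupLanguage>
"""

root = ET.fromstring(doc)
for g in root.iter("Gesture"):
    print(g.get("id"), g.get("type"))
# n-drag drag
# n-rotate rotate
```

Because gesture behavior lives in markup rather than application code, a framework can load, swap, or extend gestures without recompiling, which is the design appeal of an approach like GML.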

More information to come!
The Pano

Photo credit: Ideum

RELATED
Ideum Blog

OpenExhibits Free multitouch and multiuser software initiative for museums, education, nonprofits, and students

GestureWorks  Multi-touch authoring for Windows 8 & Windows 7



Nov 17, 2012

Human Computer Interaction + Informal Science Education Conference (NUI News)

I recently learned of the HCI + ISE conference, funded by the National Science Foundation and organized by Ideum and Independent Exhibitions, which will provide the groundwork for the future design and development of interactive computer-based science exhibits.
Science museums have a long history of interactivity, well suited to groups of "explorers" such as families or students visiting on a field trip.

What is really exciting is that new interactive applications and technologies have the power to transform the way people learn and understand science in a collaborative and social way. Innovations in the field of HCI (human-computer interaction), such as multi-touch and gesture interaction, are well suited to meet the goal of science education for all, beyond the school doors and wordy textbooks.

Below is a screen-shot of the conference website, a description about the conference, quoted from the site, and some related resources.



About the HCI+ISE Conference
"HCI technologies, such as motion capture, multitouch, augmented reality, RFID, and voice recognition are beginning to change the way computer-based science exhibits are designed and developed. Human Computer Interaction in Informal Science Education (HCI+ISE) is a first-of-its-kind gathering to explore and disseminate effective practices in developing a new generation of digital exhibits that are more intuitive, interactive, and social than their predecessors."
"The HCI+ISE Conference, to be held in Albuquerque, New Mexico June 11-14 2013, will bring together 60 museum exhibit designers and developers, learning researchers, and technology industry professionals to share effective practices, and to explore both the enormous potential and possible pitfalls that these new technologies present for exhibit development in informal science education settings."
"HCI+ISE will focus on the practical considerations of implementing new HCI technologies in educational settings with an eye on the future. Along with a survey of how HCI is shaping the museum world, participants will be challenged to envision the museum experience a decade into future. The conference results will provide a concrete starting point for exhibit developers and informal science educators who are just beginning to investigate these emerging technologies and design challenges in creating these new types of exhibits."
Why HCI+ISE?
"Since the mid-1980s informal educational venues have increasingly incorporated computer-based exhibits into their science communication offerings in an effort to keep pace with public expectations and make use of the expanding opportunities these technologies provide. The advent and popularity of once novel HCI technologies are becoming commonplace: the Wii and Microsoft Kinect now allow for motion capture video games, tablet PCs have multitouch interaction, and smart phones and other devices come standard with voice recognition. Yet many museums are still developing single-touch and trackball-driven, single-user computer kiosks."
"Science museums have a long history of championing hands-on, physical, and inquiry-based activities and exhibits. This vast experience has only just begun to be applied to interactive computer interfaces. Along with seasoned science exhibit developers, the Conference will draw upon individuals outside of ISE who will provide fresh insight into the technologies, design issues, and audience expectations that these visitor experiences present."
Involvement and Findings
"HCI+ISE will bring together a diverse group of practitioners and other professionals to discuss (and in some cases share and prototype) new design approaches utilizing emerging HCI technology. Please see our Apply page to learn how you can participate. Conference news and findings will be distributed through a variety of ISE and museum websites, including this one."
"We welcome your questions and comments about the HCI+ISE Conference."
CONTACTS
Kathleen McLean of Independent Exhibitions
& Jim Spadaccini of Ideum
HCI+ISE Co-chairs
"Open Exhibits is a multitouch, multi-user tool kit that allows you to create custom interactive exhibits."
CML:  Creative Mark-up Language
GML: Gesture Mark-up Language
GestureWorks
Ideum

Jul 21, 2012

Musings about NUI, Perceptive Pixel and Microsoft, Rapid Creative Prototyping (Lots of video and links) Revised

It just might be the right time for everyone to brush up on 21st-century tech skills. iPads and touch phones are ubiquitous. Touch-enabled interactive whiteboards and displays are in schools and boardrooms. With Microsoft's Windows 8 and the news that the company recently acquired Jeff Han's company, Perceptive Pixel, I think that there will be good support - and more opportunities - for designers and developers interested in moving from GUI to NUI.


In the video below, from CES 2012, Jeff Han provides a good overview of where things are headed. We are in a post-WIMP world and there is a lot of catching up to do!

CES 2012  Perceptive Pixel and the Future of Multitouch (IEEE Spectrum YouTube Channel)



During the video clip, Jeff explains how far things have come during the past few years:
 "Five and 1/2 years ago I had to explain to everybody what multi-touch was and meant. And then, frankly, we've seen some great products from folks like Apple, and really have executed so brilliantly, that everyone really sees what a good implementation can be, and have come to expect it.  I also think though, that the explosion of NUI is less about just multi-touch, but an awareness that finally people have that you don't have to use a keyboard and mouse, you can demand something else beside that.  People are now willing to say, "Oh, this is something I can try, you know, touch is something I can try as my friendlier interface"."

Who wouldn't want to interact with a friendlier interface? Steve Ballmer doesn't curb his enthusiasm about Windows 8 and Perceptive Pixel. Jeff Han is pleased with how designs created in Windows 8 scale for use on screens large and small. He explains how Windows 8 can support collaboration. The Story Board application (7:58) on the large touchscreen display looks interesting.

I continue to be frustrated by the poor usability of many web-based and desktop applications. I like my iPad, but only because so many dedicated souls have given some thought to the user experience when creating their apps. I often meet with disappointment when I encounter interactive displays when I'm out and about during the day. It is 2012, and it seems that there are a lot of application designers and developers who have never read Don Norman's The Design of Everyday Things!



I enjoy making working prototypes and demo apps, but my skill set is stuck in 2008, the last year I took a graduate-level computer course.  I was thinking about taking a class next semester, something hands-on, creative, and also practical, to move me forward. I can only do so much when I'm in the DIY mode alone in my "lab" at home.  I need to explore new tools, alongside like-minded others.  


There ARE many more tools available to designers and developers than there were just four years ago. Some of them are available online, free, or for a modest fee. I was inspired by a link posted by my former HCI professor, Celine Latulipe, to her updated webpage devoted to Rapid Prototyping tools. The resources on her website look like a good place to start for people who are interested in creating applications for the "NUI" era. (Celine has worked on many interesting projects that explore how technology can support new and creative interaction, such as Dance.Draw.) Below is her description of her updated HCI resources:

"New HCI resource to share: I have created a few pages on my web site devoted to Rapid Prototyping tools, books, and methods. These pages contain reviews of various digital tools, including 7 different desktop prototyping apps, and including 8 different iPad apps for wireframing/prototyping. I hope it's useful to others. Feel free to share... and please send me comments and suggestions if you find anything inaccurate, or if you think there is stuff that I should be adding. I will be continuing to update this resource." -http://www.celinelatulipe.com (click on the rapid prototyping link at the top)



IDEAS
Below are just a few of my ideas that I'd like to implement in some way. I can't claim ownership of these ideas; they are mash-ups of what comes to me in my dreams, usually after reading scholarly publications from ACM or IEEE, or attending tech conferences.
  • An interactive timeline (multi-dimensional, multi-modal, multimedia) for off-the-desktop interaction, collaboration, and data/information analysis and exploration. It might be useful for medical researchers, historians, genealogists, or people who are into the "history of ideas". Big Data folks would love it, too. It would handle data from a variety of sources, including sensor networks. It would be beautiful to use.
  • A web-based system of delivering seamless interactive, multi-modal, immersive experiences, across devices, displays, and surfaces. The system would support multi-user, collaborative interaction.  The system would provide an option for tangible interaction.
  • A visual/auditory display interface that presents network activity, including potential intrusions, malfunctions, or anything that needs immediate attention that would be likely to be missed under present monitoring methods. 
  • Interactive video tools for creation, collaboration, storytelling.  (No bad remote controllers needed.)
  • A "wearable" that provides new ways for people to express and communicate creatively, through art, music, dance, with wireless capability. (It can interact with wireless sensor networks.)*
  • A public health application designed to provide information useful in understanding sepsis and supporting prevention efforts. This application would utilize the timeline concept described at the top of this list. This concept could also be useful in analyzing other medical puzzles, such as autism.
Most of these ideas could translate nicely to educational settings, and the focus on natural user interaction and multi-modal I/O aligns with the principles of Universal Design for Learning, something that is important to consider, given the number of "at-risk" learners and young people who have disabilities.
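As a small thought experiment, the first idea on the list above (a timeline that merges heterogeneous sources onto one time axis) could start with something as simple as a shared event record. The names below are mine, purely illustrative, and not drawn from any existing project:

```python
from dataclasses import dataclass, field

@dataclass
class TimelineEvent:
    """One entry on a multi-dimensional, multimedia timeline."""
    timestamp: float           # seconds since some shared epoch
    title: str
    source: str                # e.g. "sensor", "archive", "manual"
    media: list = field(default_factory=list)  # attached images, video, audio

def merge_streams(*streams):
    """Fold events from heterogeneous sources onto one time axis."""
    return sorted((e for s in streams for e in s), key=lambda e: e.timestamp)

# Events arriving from a sensor network and from hand-entered notes
sensors = [TimelineEvent(30.0, "Temperature spike", "sensor")]
notes = [TimelineEvent(10.0, "Patient admitted", "manual"),
         TimelineEvent(45.0, "Culture results", "archive")]
print([e.title for e in merge_streams(sensors, notes)])
# ['Patient admitted', 'Temperature spike', 'Culture results']
```

The interesting HCI work, of course, is everything layered on top: the multi-modal rendering, collaboration, and off-the-desktop interaction the idea calls for.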

I welcome comments from readers who are working on similar projects, or who know of similar projects. I also encourage graduate students and researchers who are interested in natural user interfaces to move forward with an off-the-desktop NUI project. I hope that my efforts can play a part in helping people make the move from GUI to NUI!



Below are a few videos of some interesting projects, along with a list of a few references and links.


SMALLab (Multi-modal embodied immersive learning)


PUPPET PARADE: Interactive Kinect Puppets (CineKid 2011)



MEDIA FACADES: When Buildings Start to Twitter

HUMANAQUARIUM (CHI 2012)

 

NANOSCIENCE NRC Cambridge (Nokia's Morph project)






 
Examples: YouTube Playlists
POST WIMP EXPLORERS' CLUB
POST-WIMP EXPLORER'S CLUB II

Web Resources
Celine Latulipe's Rapid Prototyping Resources 
Creative Applications
NUI Group: Natural User Interface Group
OpenFrameworks and Interactive Multimedia: Funky Forest Installation for CineKid
SMALLab Learning
OpenExhibits: Free multi-touch + multiuser software initiative for museums, education, nonprofits, and students.
OpenSense Wiki 
CINEKID 2012 Website 
Multitouch Systems I Have Known and Loved (Bill Buxton)
Windows 8
Perceptive Pixel
Books
Natural User Interfaces in .NET: WPF 4, Surface 2, and Kinect (Josh Blake, Manning Publications)
Chapter 1 pdf (Free)
Brave NUI World: Designing Natural User Interfaces for Touch and Gesture (Daniel Wigdor and Dennis Wixon)
Designing Gestural Interfaces (Dan Saffer)
Posts
Bill Snyder, ReadWrite Web, 7/20/12

I noticed some interesting tools on the Chrome web store - I plan to devote a few more posts to NUI tools in the future.