Showing posts with label gesture interaction. Show all posts

Dec 12, 2010

LM3LAB's Useful Map of Interactive Gesture-Based Technologies: Tracking fingers, bodies, faces, images, movement, motion, gestures - and more

Nicolas Loeillot, of LM3LABS, has been ahead of the natural user interaction/interface game for many years as his company has expanded. He's done quite a bit of deep thinking about the work of his company, and has used this wisdom to create a nice concept map that describes how LM3LABS' solutions fit into the world of gesture-based control and interaction:




In my opinion, this chart would make a great template for mapping out other natural interaction applications and products!


Here is the description of the concepts outlined in the chart:


"If all of them belong to the “gesture control” world, the best segmentation is made from 4 categories:
  • Finger tracking: precise finger tracking, it can be single touch or multi-touch (this latest not always being a plus). Finger tracking also encompasses hand tracking which comes, for LM3LABS products, as gestures.
  • Body tracking: using one’s body as a pointing device. Body tracking can be associated to “passive” interactivity (users are engaged without their decision to be) or “active” interactivity like 3D Feel where “players” use their body to interact with content.
  • Face tracking: using user face as a pointing device. It can be mono user or multiple users. Face tracking is a “passive” interactivity tool for engaging user in an interactive relationship with digital content.
  • Image Tracking: Augmented Reality (AR) lets users use images (flyers, real products, t-shirts, faces,…) to interact with digital content. AR can be markerless or marker-based. Markerless technology has advantages but marker-based AR is easier for users to understand. (Please note here that Markerless AR is made in close collaboration with AR leader Total Immersion)."  -LM3LABS
If you are interested in this subject and want to view some good examples of off-the-desktop interfaces and interactions, take a look at the LM3LABS blog, as well as Nicolas Loeillot's Vimeo channel. Also take a look at the sample of posts I've written about LM3LABS over the last few years - the links are at the end of this post.

I love LM3LABS' Interactive Balloon:

Interactive balloons from Nicolas Loeillot on Vimeo.


Interactive Balloons v lm3 labs v2 (SlideShare)



Background
I first discovered LM3LABS when I was taking a VR class and researching interactive, immersive large displays in 2005 or 2006.  Back then, there wasn't much information about this sort of technology.  A lot has changed since then!


I've learned quite a bit from watching LM3LABS (and others) grow, given my passion for postWIMP interactive technology and my commitment to blogging about this subject. Nicolas has really worked hard in this arena. As early as 2005, LM3LABS was working with Scala to provide "smart" interactive displays, and his company's applications have been supported by computer vision technologies for many years, allowing for gesture-based, or "touch-less", interaction, as demonstrated by the Catchyoo Interactive Table. This application caught my eye back in early 2007, when I was working on projects for large interactive displays for my HCI and Ubicomp classes and was thinking about creating a table-top application.


My hunch is that LM3LABS has set the foundation for further growth in the future, given the lessons they've learned by taking risks with postWIMP technologies over the past few years!


Previous Blog Posts Related to LM3LABS:
Interactive Retail Book: Celebrating the History of Christian Dior from 1948-2010 (video)
Ubiq Motion Sensor Display at Future Ready Singapore (video)
Interactive Virtual DJ on a Transparent Pane, by LM3LABS and Brief Ad
LM3LABS' Catchyoo Interactive Koi Pond: Release of ubiq'window 2.6 Development Kit and Reader
A Few Things from LM3LABS
LM3LABS, Nicolas Loeillot, and Multi-touch
More from LM3LABS: Ubiq'window and Reactor.cmc's touch screen shopping catalog, Audi's touch-less showroom screen, and the DNP Museum Lab.


About LM3LABS
"Founded in 2003 by a team of passionate researchers, engineers, designers, and marketers from various international backgrounds, focused on fast transformation of innovation into unique products, LM3LABS is a recognized pioneer in computer vision-based interactivity solutions. Keeping a strong customer focus, LM3LABS' team of unique people pioneers new directions, explores new concepts, new technologies and new interactions.  Engaging, playful and magic, LM3LABS' products and solutions are always scalable and reliable."

info@lm3labs.com

Note to readers:
Over the past couple of years there has been an explosion of postWIMP technologies and applications, and with this pace, it has been difficult for me to keep abreast of it all. There is quite a bit I miss, given my full time job and daily life!

I welcome information about postWIMP interactive technologies and applications from my readers.  Due to time constraints, not interest, I am not always able to post about a topic as soon as I'd like.  That is OK, as my intention is not to be the first blogger to spread the latest tech news.  I like to dig in deep when I can and make connections between innovative, interesting technologies and the people and ideas behind them. 




Dec 6, 2010

Air Presenter Plus, for the Kinect, for Presentations, developed by Evoluce and So touch

As soon as Kinect was released by Microsoft, there was a flurry of app development. Evoluce and So Touch partnered to create a presentation application for the Kinect that could be used in work settings. Take a look!


Information about Air Presenter Plus, from So touch's YouTube channel:

"So touch, the leading creative software company for new digital technologies, in partnership with Evoluce, the leading provider of advanced multi-touch screen technologies, present: So touch Air Presenter for Kinect. The world's first presentation software optimized for Kinect.

Turn your corporate presentations, welcome areas, trade show booths and point of sales into mind boggling experiences, controlling your presentation with multi-touch gestures leveraging So touch Air Presenter gestures software and Evoluce Kinect Windows 7 software.

Integrate your usual PDF, Power point, JPG and video materials into So touch multi-touch minority report's style interface and control it with gestures in the air.

So touch Air Presenter is delivered with a very graphic player, featuring a multi-touch zoom mode and an integrated video player as well as a very easy to use content manager.

So touch Air Presenter content, sourced locally or from the network, can be played on multiple screens at the same time. So touch Air Presenter content manager can deliver customized or generic content to each player.

So touch Air Presenter packaged with Evoluce Kinect Windows 7 software will be released soon. So touch Air Presenter is already available for TUIO based gestures devices. To know more and download a free trial version, visit http://www.so-touch.com/air-presenter"




So touch
Evoluce

Nov 30, 2010

TuioKinect, by Martin Kaltenbrunner: "A simple TUIO hand gesture tracker for Kinect"

More Kinect from Martin Kaltenbrunner:


Martin Kaltenbrunner's description of TuioKinect:
"TuioKinect tracks simple hand gestures using the Kinect controller and sends control data based on the TUIO protocol. This allows the rapid creation of gesture enabled applications with any platform or environment that supports TUIO (tuio.org). You can download the application from code.google.com/p/tuiokinect/. Music: Jabon Jabon by El Club de los Astronautas (Institut Fatima)"


I've played around with TUIO and OpenFrameworks, but it has been a while. I can't wait until I have time to dig into this with a Kinect. I think this has great potential for supporting learning and communication among students with special needs.
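If you're curious what working with TUIO data looks like in code, here's a rough Python sketch of interpreting /tuio/2Dcur cursor messages (per the TUIO 1.1 spec, "set" carries a session id and normalized position, and "alive" lists the sessions still touching). It assumes the OSC messages have already been decoded into (address, args) tuples; a real client would receive them over UDP with an OSC/TUIO library. The handler is my own illustration, not part of TuioKinect.

```python
# Minimal TUIO 1.1 /tuio/2Dcur interpreter (sketch).
# Assumes OSC messages are already decoded into (address, args) tuples.

def handle_2dcur(message, cursors):
    """Update `cursors` {session_id: (x, y)} from one /tuio/2Dcur message."""
    address, args = message
    if address != "/tuio/2Dcur":
        return cursors
    kind = args[0]
    if kind == "set":
        # set: session_id, x, y, velocity X, velocity Y, acceleration m
        session_id, x, y = args[1], args[2], args[3]
        cursors[session_id] = (x, y)
    elif kind == "alive":
        # alive lists the sessions still present; drop the rest (lifted fingers)
        alive = set(args[1:])
        for sid in list(cursors):
            if sid not in alive:
                del cursors[sid]
    return cursors

cursors = {}
handle_2dcur(("/tuio/2Dcur", ["set", 7, 0.25, 0.5, 0.0, 0.0, 0.0]), cursors)
handle_2dcur(("/tuio/2Dcur", ["alive", 7]), cursors)
print(cursors)  # {7: (0.25, 0.5)}
```

The nice thing about this protocol design, as Kaltenbrunner says, is that any environment that speaks TUIO can consume the Kinect data without caring where it came from.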

RELATED/SOMEWHAT RELATED
TuioKinect:  TUIO Hand tracker for Kinect
Martin Kaltenbrunner, Tangible Interaction Frameworks 11/27/10
Therenect: Theremin for the Kinect! (via Martin Kaltenbrunner)
Xbox Kinect Interactive Puppet Prototype, from Theo Watson and Emily Gobeille, creators of Funky Forest
Hacked Kinect Multitouch using libFreenect and libTISCH (via Florian Echtler)

Oct 31, 2010

Microsoft is acquiring Canesta, Inc., a developer of 3-D electronic perception technology for natural user interaction, gaming, and more.

Microsoft to Acquire 3-D Chip Firm Canesta
Michael Baron, TheStreet 10/29/10

Thanks to Harry Van Der Veen, of NUITEQ, for this link!

RELATED
The following video is from the Canesta3D YouTube channel. It demonstrates the 3D input sensor in action, with four people moving around in a living room. The chip used in the system depicted in the video was the precursor to the current chip, called the "Cobra 320x200".


Below is a demo of gesture interaction using Canesta3D technology to control and select information and content on a large display. In my opinion, this will change the way we interact with our TVs, at least for those of us who hate using bad remotes! Microsoft's acquisition of Canesta is good news, especially if they allow this technology to be used by the masses. I'm pretty sure it is capable of supporting interaction with HD TVs that are internet-ready, and can support GoogleTV, Leanback, and Vimeo's Couch Mode.




Canesta Announces Definitive Agreement to be Acquired by Microsoft
Press Release, 10/29/10, Canesta

About Canesta (From the Canesta website)
"Canesta (www.canesta.com) is the inventor of revolutionary, low cost electronic perception technology and leading provider of single chip CMOS 3-D sensors that fundamentally change the relationship between devices and their users. This capability makes possible true 3-D perception as input to everyday devices, rather than the widely understood 3-D representational technologies as output. Canesta’s 3-D input technology, based upon tiny, CMOS 3-D imaging chips or “sensors”, enables fine-grained, 3-dimensional depth-perception in a wide range of applications. Products based on this capability can then react on sight to the actions or motions of individuals and objects in their field of view, gaining levels of functionality and ease of use that were simply not possible in an era when such devices were blind. Canesta’s focus is on mass market consumer electronics, but many applications exist in other markets as well. Canesta is located in Sunnyvale, CA. The company has filed in excess of fifty patents, 44 of which have been granted so far."
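For a sense of how this kind of depth sensing works in general (I don't know the specifics of Canesta's chips, so this is the textbook phase-shift time-of-flight method, not necessarily theirs): the sensor emits modulated light, measures the phase delay of the reflection, and since the light covers the distance twice, d = c·φ/(4πf). The 30 MHz modulation frequency below is just an illustrative number.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(phase_shift_rad, mod_freq_hz):
    """Distance from the measured phase shift of modulated light.
    The round trip covers 2d, hence d = c * phi / (4 * pi * f)."""
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

# A quarter-cycle phase shift at 30 MHz works out to about 1.25 m:
print(round(tof_distance(math.pi / 2, 30e6), 3))  # 1.249
```

Doing this per pixel, on a single CMOS chip, is what gives you a full depth image of the scene in one shot.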


Canesta Corporate Fact Sheet (pdf)
Videos: http://canesta.com/applications/consumer-electronics/gesture-controls

I posted some videos about Canesta's technologies in the following post. There are two videos that show how Canesta's 3D depth camera works on a Hitachi flat-panel display: Interactive Displays 2009 Conference

For more information about interactive TV, GoogleTV, Leanback and Couch Mode, see the second section of my recent post:
Philipp Geist: Blending the Physical with the Digital;  Google TV/Leanback, Vimeo's new Couch Mode, oh..and ViewSonic's 3D (glasses-less) pocket camcorder...

Sep 26, 2010

Essential Interaction Design Essays and Articles: Dan Saffer's Lists, Don Norman, and Interactions Magazine

I came across a link to Dan Saffer's recent post, Essential Interaction Design Essays and Articles. Equally important is Dan Saffer's list: Top Ten Essential Interaction Design Books


Dan Saffer is one of my "important influences".  When I was taking HCI and Ubiquitous Computing courses, I bought the first edition of his book,  Designing for Interaction:  Creating Innovative Applications and Devices.  In today's world of technical convergence, it is an important read, as Saffer's content crosses a number of disciplines.

Thoughts:
It doesn't surprise me to learn that the #1 book on Saffer's Essential Interaction Design Books list is Don Norman's The Design of Everyday Things. According to Saffer, "there’s no getting around it: this is the book. Affordances, mental models, and other bits that have all become part of the general lexicon all started with The Don’s book. A must read."

Don Norman's book was required reading in the Human-Computer Interaction class I took a few years ago.  As I read through the book, I sensed a familiar tone.  I later learned that Don Norman was the co-author of a required textbook for one of the psychology courses I took when I was a university student the first time around.    



Don Norman's thinking has influenced me for decades - he continues to be an influence, because he writes articles for one of my favorite publications, Interactions Magazine:



It brightens up my day when I open up my mailbox - the one at the end of my real-life driveway - and find my Interactions magazine, in all of its well-designed, well-written, semi-glossy-paged glory, waiting for me to open up and read. The September/October 2010 issue includes articles on topics related to authenticity in new media, the complexity of "advancement", design and usability, and the politics of development.


A must-read is Gestural Interfaces: A Step Backwards in Usability, co-authored by Don Norman and his collaborator, Jakob Nielsen.


Here is an excerpt from the article, which highlights some of the problems of rushing to get products with natural-user interfaces out to market:
"Why are we having trouble? Several reasons:
  • The lack of established guidelines for gestural control
  • The misguided insistence by companies (e.g., Apple and Google) to ignore established conventions and establish ill-conceived new ones.
  • The developer community’s apparent ignorance of the long history and many findings of HCI research, which results in their feeling empowered to unleash untested and unproven creative efforts upon the unwitting public"
(Interactions Magazine is a publication of ACM SIGCHI - the Association for Computing Machinery's Special Interest Group on Computer-Human Interaction.)


Other articles by Don Norman, published in Interactions Magazine:
The Research-Practice Gap: The Need for Translational Developers 
Natural User Interfaces are not Natural 
The Transmedia Design Challenge: Technology that is Pleasurable and Satisfying
Technology First, Needs Last: The Research-Product Gulf
To be published, available on the jnd website:
Systems Thinking:  A Product is More Than The Product  


SOMEWHAT RELATED
My resource pages:
RESOURCES: Natural User Interaction, InfoViz, Multi-touch, Blog roll, and More - a huge mega-list of links! 
Conferences, Research, Resources page


Living with Complexity
Donald Norman, to be released in October 2010


Interactions Archives


Here is a list of books/articles suggested by Dan Saffer's readers:


Designing for Interaction – Saffer, D. (2nd Edition; 2009)
Thoughts on Interaction Design – Kolko, J. (2009)
The Humane Interface – Raskin, J.
Digital Ground – McCullough, M.
The Inmates are Running the Asylum – Cooper, A.
Designing Interactions – Moggridge, B (ed.)
Everyware – Greenfield, A.
Designing Social Interfaces – Crumlish & Malone
Emotional Design – Norman, D.
Invisible Computer – Norman, D.
Persuasive Technology – Fogg, BJ
Thoughtful Interaction Design: A Design Perspective on Information Technology – Lowgren, J. & Stolterman, E. (2007)

Designing Visual Interfaces by Mullet/Sano
Steve Krug – Don’t Make Me Think: A Common Sense Approach to Web Usability
Design Research: Methods and Perspectives edited by Brenda Laurel 
Information Architecture (“The Polar Bear Book”) by Peter Morville.


Thanks to Putting People First for the link to Dan Saffer's list!

Sep 18, 2010

Interactive 360 Degree Glass-less 3D Video Display with Gesture Sensor: Demo of Sony's RayModeler

The video below gives a demo of Sony's RayModeler, "A 360-Degree Display that doesn't require glasses". The video shows how the auto-stereoscopic 3D content is filmed. It also shows how items within the display respond to gesture interaction. The first prototype was introduced in 2009 and then brought out at the SIGGRAPH conference this summer.



According to an article written by Richard Lawler, Core77 created "Breakout" for the RayModeler, a game similar to Pong.  I'll have to think more about this technology before I form an opinion!

RELATED
Sony's 360-degree RayModeler 3D display brings its glasses-free act to LA, plays Breakout
Richard Lawler, Engadget 7/28/10

Sony's 360-degree 3D display prototype makes virtual pets more lifelike, expensive
Thomas Ricker, Engadget 10/19/09

Jun 22, 2010

Kinect Sensor for Xbox 360 Offers Full-Body and Gesture Interaction: No controllers or remotes!

Project Natal was the code name for the Kinect Sensor for Xbox 360. For $149.99 you can pre-order your very own system from the Microsoft Store that will allow you to interact with video games with your body alone. No need for controllers or 'motes!

Presentation about the fitness benefits of the Kinect Sensor for Xbox 360:



This video is a preview of a dance game for the Xbox using the Kinect Sensor:


It would be great if I could do my Zumba moves with Kinect Sensor system and a great Xbox application!

Here's another video that explains the system in more detail, with brief interviews of innovators from Microsoft:


Here is a copy of my previous post about Project Natal:

How It Works: Microsoft's Project Natal for the Xbox 360 video from Scientific American


Microsoft gathered a wealth of biometric data to recognize the range of human movement in order to develop an algorithm for the next generation of controller-less gaming. "Natal will consist of a depth sensor that uses infrared signals to create a digital 3-D model of a player's body as it moves, a video camera that can pick up fine details such as facial expressions, and a microphone that can identify and locate individual voices."


The technology behind Natal has the potential for a range of uses beyond gaming.
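To make the "digital 3-D model" idea a bit more concrete: once the depth sensor gives you a distance for each pixel, back-projecting that pixel into a 3D point is simple pinhole-camera math. This little Python sketch uses made-up camera intrinsics (fx, fy, cx, cy), not the Kinect's actual calibration.

```python
def depth_to_point(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a depth pixel (u, v) into camera-space metres
    using the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Illustrative intrinsics for a 640x480 sensor (not Kinect's real values):
fx = fy = 575.0
cx, cy = 320.0, 240.0
print(depth_to_point(320, 240, 2.0, fx, fy, cx, cy))  # (0.0, 0.0, 2.0)
```

Do that for every pixel of every frame and you have the moving point cloud from which a skeleton can be fitted.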

Scientific American article:
Binary Body Double:  Microsoft Reveals the Science Behind Project Natal for Xbox 360

Dec 26, 2009

A few things from LM3Labs

I just noticed an interactive section of Lm3labs' website that features a range of interesting pictures providing a nice overview of the company's work within the retail sector. Lm3labs has offices in France and Japan.

The company, run by Nicolas Loeillot, is also involved in non-retail projects, such as museum exhibits, focusing on "touch-less" interactivity.  I've included a few pictures and videos below.

Microsoft's Photosynth on Lm3lab's touch-less Ubiq'window:

Demo of video-guide on a Ubiq'window:

Ubiq'window Demo in the US from Nicolas Loeillot on Vimeo.

Pictures of Lm3Lab's installations at Toshiba:

Somewhat Related
Nicolas Loeillot's Photos

Oct 28, 2009

Interactive Multimedia Across Platforms and Screens: Adobe's Open Screen Project; MEX Mobile User Experience Manifesto.... (Please don't annoy the user!)

I'm not sure what I think about ubiquitous Flash 10.1 and Adobe's Open Screen Project. I like the idea of anything that is seamlessly cross-platform, but I shudder to think that this might unleash a wave of unwanted or annoying "push" advertising on on-the-go screens of all sizes. I'm assuming web developers, along with TV ad producers, will be jumping on this train without fully thinking about how their applications and designs will play out in the off-the-desktop, digital-out-of-home world.

I decided to take a look, drill down through the hype, and share a few links related to this topic.

Adobe Pushes for a Flash-ier Mobile Web
Rob Pegoraro, Faster Forward, Washington Post (10/5/09)
"Are you anxious to bring Flash to the mobile Web, even if it means being subjected to some over-eager Web coder's song-and-dance routine? Or would you rather do without it on the go, even if that means having to switch to a "real" computer to use some Web sites' features?"

Hopefully the "over-eager web coders" will heed the MEX Manifesto:

MEX:  Mobile User Experience 2009 Manifesto (pdf)
"The Manifesto sets out our beliefs as to how user-centred design principles can enhance the experience of multi-platform digital services."

A framework for user journeys in a multi-platform world: Marek Pawlowski, founder of MEX

MEX: User experience journeys in a multi-platform environment from Marek Pawlowski on Vimeo.

"User experiences are evolving into increasingly complex sets of interactions between multiple devices.  In this video presentation, Marek Pawlowski of the MEX Mobile User Experience strategy forum, shows how a framework can be used to map user journeys through the multi-platform environment."

"Unencumbered by wires, information is flowing into every corner of our world at an ever increasing rate and through an ever increasing range of digital platforms. The single greatest challenge facing digital industries is understanding how this explosion of data will be woven into the fabric of consumers' lives." -- Marek Pawlowski, founder of MEX.

MEX Blog 

OPEN SCREEN VIDEO

Open Screen Project from Vyshak V on Vimeo.
"The Open Screen Project is an industry-wide initiative, led by Adobe and backed by other industry leaders who all share one clear vision: Enable consumers to engage with rich Internet experiences seamlessly across any device, anywhere. Partners in the Open Screen Project are working together to provide a consistent runtime environment for open web browsing and standalone applications — taking advantage of Adobe® Flash® Player and, in the future, Adobe® AIR®. This consistent runtime environment will remove barriers to publishing content and applications across desktops, mobile phones, televisions, and other consumer electronics." Learn more

Reinventing Storytelling in the Digital Age Across Platforms, Across Screens

NAB 2009 presentation by Shantanu Narayen of Adobe and A.D. Albers, of Disney Interactive Media Group, from NAB 2009
Adobe and NVIDIA Deliver Rich Web Experiences on Netbooks and Mobile Devices
Reuters (10/5/09)
"At Adobe MAX, Adobe's worldwide developer conference, Adobe Systems Incorporated and NVIDIA Corporation...announced that both companies are bringing uncompromised browsing of rich Web content to netbooks, smartphones and smartbooks built with NVIDIA GPUs. The companies have been working closely together as part of the Open Screen Project to optimize and dramatically improve performance of Flash Player 10.1 by taking advantage of GPU video and graphics acceleration on a wide range of mobile Internet devices. NVIDIA customers embracing Flash Player 10.1 for their new devices include HP, Lenovo, Samsung, Acer, Asus and more..."


RIM Joins Open Screen Project  Reuters (10/4/09)


Honey I Shrunk the Flash Player Simon Bisson and Mary Branscombe, ZDNET, 10/12/09
Teaming up with Adobe and the Open Screen Project -Google Blog   (10/5/09)


Paramount Digital Entertainment Launches Interactive Thriller on MySpace  Tracy Swedlow, InteractiveTV Today (10/28/09)
"The company says that it can deliver the show's multiple interactive elements to viewers across devices using Adobe Flash Player and Adobe AIR, 'because of efforts by the Open Screen Project, an industry-wide initiative led by Adobe and supported by PDE and close to 50 other industry leaders, to enable people to engage with rich Internet experiences across any device, anywhere.'"

Oct 23, 2009

Two good articles by Bill Buxton: The Mad Dash Towards Touch Technology; The Long Nose of Innovation

I came across links to a couple of interesting articles via the Putting People First blog, both written by Microsoft Research principal scientist Bill Buxton. If you've never heard of Bill Buxton, he's the guy who was doing multi-touch research way back in the 1980s.

The Mad Dash Toward Touch Technology
Bill Buxton, Business Week, 10/21/09
"True innovators need to know as much about when, why, and how not to use trendy technology as when to use it."

The Long Nose of Innovation
Bill Buxton,  Business Week, 1/2/08
"The bulk of innovation is low-amplitude and takes place over a long period. Companies should focus on refining existing technologies as much as on creation." 

RELATED
Updated!

Multi-Touch Systems that I Have Known and Loved
(Bill Buxton)


I came across Bill Buxton's Multi-Touch website in early 2007 when I was taking HCI and Ubicomp.  I was searching for information about large touch-screen displays and applications for a couple of class projects.  The website was the answer to my graduate student prayers.  On the site, you'll find a fantastic overview of the history of "multi-touch", including gesture recognition and related surface technologies. 

The website has loads of interesting links. If you have the time, take a look at Buxton's main website, http://www.billbuxton.com/. I especially like the links to his Business Week articles.



Bill Buxton is the author of "Sketching User Experiences:  Getting the design right and the right design", a book that I own and recommend.

Oct 16, 2009

Jonathan Kessler's Hand Eye Technologies: Coordinating your cell phone with Interactive TV

Hand Eye Technologies is developing ways to use your smartphone in place of remote-control-driven interaction. Jonathan Kessler, the CEO of the company, was interviewed by Tracy Swedlow, of ITTV, about his background and his ideas for the future of interactive television.

Podcast Link:  Hand Eye Technologies Interview
Here is a video from the Hand Eye Technologies website:



If you happen to have an HIT-enabled mobile device near an HIT-enabled display, two-way communication is established via LAN, WiFi, or a wireless 3G carrier. The mobile device's camera is used to manipulate things on the interface, and the set-top box takes care of some of the rest.

Interactions include selecting text and objects, "drag and drop", insert/delete, inputting text or annotations, and drawing on the screen. Hand Eye offers a drawing application called Video Graffiti, which traces the movements you make with your mobile device.
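As a toy illustration of the Video Graffiti idea (my own sketch, not Hand Eye's code): however the handset's camera estimates its own motion, the display side just has to accumulate the per-frame motion deltas into a stroke, clamped to the screen bounds.

```python
def trace_stroke(start, deltas, width=1920, height=1080):
    """Accumulate per-frame motion deltas (dx, dy) into a drawn stroke.
    The deltas stand in for whatever motion estimate the handset's
    camera produces; points are clamped to the screen bounds."""
    x, y = start
    stroke = [(x, y)]
    for dx, dy in deltas:
        x = min(max(x + dx, 0), width - 1)
        y = min(max(y + dy, 0), height - 1)
        stroke.append((x, y))
    return stroke

print(trace_stroke((100, 100), [(50, 0), (0, 50), (-25, 0)]))
# [(100, 100), (150, 100), (150, 150), (125, 150)]
```
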


"Hand Eye Technologies' mission is to create and communicate the premier software platform that enables mobile devices to interact with the digital world around them... any time, anywhere." - Hand Eye Technologies

"It is more about human-computer interface than remote control". -Jonathan Kessler


This looks like it is moving towards the next level of 2-way TV interactivity, much better than what the traditional remote control can do.

RELATED


Hand Eye Technologies Management Team
CNET: Hand Eye wants your smartphone to watch TV with you
VentureBeat: DEMO: Hand Eye Technologies lets your mobile phone watch TV with you
TheWrap.com: Coming Soon: Real-Time Interactivity Between TVs and Smartphones
Ubergizmo: With Hand Eye Technologies, the TV show continues in your handset

Interactive TV Today
About InteractiveTV Today:
"Founded in 1998 by Tracy Swedlow and co-owned by Richard Washbourne, InteractiveTV Today [itvt] is the most widely read and trusted news source on the rapidly emerging medium of multiplatform, broadband interactive television (ITV). We provide concise, original coverage of industry developments, technologies, content projects, and the people building the business. Our readership is mostly made up of hundreds of thousands of executives from around the world."



Sep 6, 2009

Oblong's g-speak Spatial Operating Environment: Gesture interaction, massive datasets, film production, and more.


g-speak overview 1828121108 from john underkoffler on Vimeo.

What is g-speak?


From the Oblong website:  "Spatial semantics at the platform level"

"Every graphical and input object in a g-speak environment has real-world spatial identity and position. Anything on-screen can be manipulated directly. For a g-speak user, "pointing" is literal."


"The g-speak implementation of spatial semantics provides application programmers with a single, ready-made solution to the interlocking problems of supporting multiple screens and multiple users. It also makes control of real-world objects (vehicles, robotic devices) trivial and allows tangible interfaces and customized physical tools to be used for input."


"The g-speak platform is display agnostic. Wall-sized projection screens co-exist with desktop monitors, table-top screens and hand-held devices. Every display can be used simultaneously and data moves selectively to the displays that are most appropriate. Three-dimensional displays can be used, too, without modification to application code."
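"Pointing is literal" is easy to picture in code: if the system knows the hand's 3D position and pointing direction, the cursor lands wherever that ray intersects the screen plane. This Python sketch is my own illustration of the geometry, not g-speak's implementation (I'm assuming a screen sitting on the plane z = 0 for simplicity).

```python
def point_on_screen(hand_pos, hand_dir, screen_z=0.0):
    """Intersect a pointing ray with the screen plane z = screen_z.
    Returns the (x, y) hit point, or None if the ray is parallel to
    the screen or pointing away from it."""
    px, py, pz = hand_pos
    dx, dy, dz = hand_dir
    if dz == 0:
        return None          # parallel to the screen plane
    t = (screen_z - pz) / dz
    if t < 0:
        return None          # pointing away from the screen
    return (px + t * dx, py + t * dy)

# Hand 2 m from the screen, pointing slightly up and to the right:
print(point_on_screen((0.0, 1.0, 2.0), (0.1, 0.05, -1.0)))  # (0.2, 1.1)
```

With multiple screens, you would hold one such plane per display in a shared room coordinate system, which is essentially what "real-world spatial identity and position" buys you.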

Origins of Oblong

g-speak was born at the MIT Media Lab, and Oblong was started in 2006. The work behind g-speak's gestural I/O began over 15 years ago. For more information, read g-speak in slices.

Oblong developed Tamper, a prototype for film production, on top of the g-speak system. Below is the demo. At 0:08, the video displays sketches of the gestures used in g-speak.



I hate wearing gloves, but I'd gladly put them on to play with the system for a few days!

Aug 8, 2009

More about Project Natal: Ricochet - Great Gaming for Fitness, Johnny Chung Lee's Contribution


(Credit: CNET News)
Ina Fried, in a recent CNET Beyond Binary post, reviewed her experience playing Ricochet, a 3D game developed by Microsoft for Natal, the company's new gesture-recognition, controller-less Xbox gaming system. Above is a screenshot from Fried's article, Exclusive: Getting up close and personal with Natal.

Here is the video:


Ina Fried had a chance to spend some time in Redmond, Washington to explore the games in development at Microsoft, and hang out with the people responsible for Project Natal.

In her Beyond Binary article, Fried notes that the Ricochet game provides quite a workout, and this has had a positive effect on the Natal team:


"Since I started working on this project, I've lost almost like 10 pounds," said Kudo Tsunoda, general manager of Microsoft Game Studios and the creative director for Project Natal. "We're going to have the most in-shape development team you've ever seen."


Fans of Johnny Chung Lee will be happy to know that his work at Microsoft contributed to this game in some way, if they don't know this by now!

Who is Johnny Chung Lee? Read my post, "I wish I could be Johnny Chung Lee for a Day!".


RELATED

Speaking of Natal, it should be out next year (Ina Fried, CNET)

Gates: Natal to bring gesture recognition to Windows too



Jul 12, 2009

NUI-Group Members: What are they doing now?

Multitouch Media Application Pro v3.0 from Falcon4ever on Vimeo.

MMA Pro is a multitouch photo and video organizer built with Adobe AIR (Flex 3), with new features such as Google Maps integration and support for uploading pictures on the fly via Bluetooth. For more information, visit Laurence Muller's website, Multigesture.net, where you can download the application. Make sure you read the install instructions included in the readme.txt, and make sure you have the latest Adobe AIR 1.5.x. Laurence also recommends installing BlueSoleil to handle the pairing of devices and file transfers. (If you've never programmed for Bluetooth, take his advice!)

Laurence Muller (M.Sc.) is a Scientific Programmer at the University of Amsterdam who develops scientific software for multi-touch devices. He is a member of the NUI-Group.

The following video highlights some of the applications from the University of Amsterdam from about a year ago:

Multitouch Applications from Falcon4ever on Vimeo.

Feel free to leave a comment and a link or two if you are a NUI-Group member and would like to share your recent projects!

Jun 2, 2009

Updates about NextWindow and Stantum; Upcoming Emerging Displays Technologies Conference

Here is a brief update about two companies that I follow:

NextWindow Granted Key Optical Touch Screen Patents (pdf)
Pleasanton, CA – June 1, 2009 – "NextWindow, the leader in optical touch screens for all-in-one PCs and large-format displays, has been awarded two key technology patents, one in the US and another in China. The newly granted patents, which refer to optical touch systems incorporating light emitters, reflectors and detection methods, help cement NextWindow's leadership positions in the important Chinese manufacturing and US sales markets."

"The US patent, number 7,538,759, issued by the United States Patent & Trademark Office on May 26, 2009, is titled, “Touch Panel Display System with Illumination and Detection Provided from a Single Edge.” NextWindow previously was granted this patent in Australia, and a request for patent is pending in Canada, Europe, Hong Kong and Japan..."

You can follow NextWindow on Twitter

Stantum Granted Key Patents on Its Multi-Touch Technology

BORDEAUX, France, June 1, 2009 – "Stantum, a pioneer developer of multi-touch solutions and systems since 2002, announced today that both the European Patent Office and the China Patent & Trademark Office have granted patents (EP1719047 and CN100447723C, respectively) to Stantum on its multi-touch technology."

"In 2004, under its former name, JazzMutant, Stantum became the first company ever to develop and bring to market a multi-touch electronic device – the award-winning Lemur remote controller for creative professionals. The recently granted European and Chinese patents extend the original patent filed in France in February 2004."

"The patents describe a method and a system for controlling electronic devices by manipulating graphic objects on a transparent multi-contact touch panel. Beyond the process enabling the detection and tracking of an unlimited number of simultaneous contact points on a touch screen, the patents disclose various multi-touch interaction techniques, such as applying specific behavior to graphic objects according to finger gestures...."
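The core problem the patent language alludes to - keeping track of many simultaneous contact points so that finger gestures can drive graphic objects - can be illustrated with a minimal sketch. This is not Stantum's implementation; the nearest-neighbor matching, the `max_dist` threshold, and all coordinates below are invented for illustration.

```python
# Minimal sketch of tracking simultaneous contact points across frames:
# each newly detected touch is matched to the nearest previously known
# contact, so a finger keeps its ID while it moves; unmatched detections
# are treated as new touches, and lifted fingers simply drop out.

def track_contacts(known, detected, max_dist=30.0):
    """Match detected (x, y) points to known {id: (x, y)} contacts.

    Returns an updated {id: (x, y)} dict.
    """
    updated, unclaimed = {}, list(detected)
    for cid, (x, y) in known.items():
        if not unclaimed:
            break
        nearest = min(unclaimed,
                      key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
        if (nearest[0] - x) ** 2 + (nearest[1] - y) ** 2 <= max_dist ** 2:
            updated[cid] = nearest   # same finger, slightly moved
            unclaimed.remove(nearest)
    next_id = max(known, default=-1) + 1
    for p in unclaimed:              # brand-new touches get fresh IDs
        updated[next_id] = p
        next_id += 1
    return updated

# Two fingers move slightly; a third finger touches down:
prev = {0: (10, 10), 1: (100, 100)}
print(track_contacts(prev, [(12, 11), (98, 103), (200, 50)]))
# → {0: (12, 11), 1: (98, 103), 2: (200, 50)}
```

Once contacts have stable IDs across frames, per-finger trajectories can be interpreted as gestures (pinch, rotate, drag) and applied as behaviors to graphic objects.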

Stantum's Quarterly Newsletter

Here is an industry-related 1-day conference that looks interesting!

2009 Emerging Display Technologies Conference: Innovation for the Next Wave of Growth

"Emerging display technologies offer alternative performance, cost, design, and business models to mainstream display technologies. From touch screens, flexible displays, OLED displays, e-paper displays, and pocket projectors to 3D displays, this 1-day conference will explore how new display technologies can bring innovative form factors, attractive visual performance, power saving, and potentially drive growth in the near future."

Thursday, September 3, 2009 8:00 AM - 5:30 PM

San Jose Marriott
301 S. Market Street
San Jose, California 95113
USA
408-280-1300


May 26, 2009

GestureTek's "Cube": A Compact Interactive Gesture-Based Display System

GestureTek's "Cube"




From the GestureTek website:

"Introducing The Cube - a compact, turnkey, 'plug and play' interactive display unit that brings the power of gesture control to a variety of display spaces. Project the interactive 80” diagonal display onto almost any floor, wall, table or counter for branding, advertising, entertainment and product promotion. The Cube will engage customers, turn heads and drive business results."

Brochure
(pdf)

Apr 4, 2009

Put-That-There: Voice and Gesture at the Graphics Interface and more Blasts from the 1980's HCI Past


bigkif's notes about "Put-That-There" give a good description of this video:

Put-That-There at CHI '84

"In 1980, Richard A. Bolt from MIT wrote Put-that-there : voice and gesture at the graphics interface. It was a pioneering multimodal application that combined speech and gesture recognition.

This demo shows users commanding simple shapes about a large-screen graphics display surface. Because voice can be augmented with simultaneous pointing, the free usage of pronouns becomes possible, with a corresponding gain in naturalness and economy of expression. Conversely, gesture aided by voice gains precision in its power to reference."

Richard A. Bolt "Put-That-There": Voice and Gesture at the Graphics Interface
(pdf) SIGGRAPH '80
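Bolt's key insight - that a pointing gesture lets the system resolve deictic pronouns like "that" and "there" - can be sketched in a few lines. This is only a toy illustration of the fusion step, not Bolt's system; the object names, coordinates, and `resolve_command` helper are all hypothetical.

```python
# Toy sketch of Put-That-There-style multimodal fusion: a spoken command
# containing deictic words ("that", "there") is resolved against whatever
# the pointing gesture indicates at the moment each word is spoken.

def nearest_object(objects, point):
    """Name of the object closest to the pointed (x, y) location."""
    return min(objects, key=lambda name: (objects[name][0] - point[0]) ** 2
                                       + (objects[name][1] - point[1]) ** 2)

def resolve_command(utterance, pointing_trace, objects):
    """Replace each deictic word with the target of the synchronized point."""
    resolved = []
    for word, point in zip(utterance, pointing_trace):
        if word == "that":
            resolved.append(nearest_object(objects, point))  # pronoun -> object
        elif word == "there":
            resolved.append(f"({point[0]}, {point[1]})")     # pronoun -> location
        else:
            resolved.append(word)
    return " ".join(resolved)

# "Put that there", pointing first at the circle, then at (5.0, 2.0):
objects = {"circle": (1.0, 1.0), "square": (8.0, 8.0)}
trace = [(0, 0), (1.1, 0.9), (5.0, 2.0)]  # one pointing sample per word
print(resolve_command(["put", "that", "there"], trace, objects))
# → put circle (5.0, 2.0)
```

The sketch shows why Bolt's combination works: neither channel alone carries the full command, but the time alignment between speech and gesture makes the pronouns unambiguous.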

Here is another blast from the '80's:

Kankaanpaa, A., "FIDS: A Flat-Panel Interactive Display System," IEEE Computer Graphics and Applications, March 1988 (Nokia Information Systems)

"Although the needs and expectations of these various users are very diverse, they all have a common requirement: more natural and easier methods for communicating with the computer than are available today. Furthermore, they do not want to interact with the computer; they want to communicate with the application they are using. They do not want to use computer jargon; they want to use the same natural methods that they use when they perform the same tasks without a computer."

“We believe that only three of the flat-panel technologies described above, namely LCD, EL, and plasma, will be sufficiently advanced for mass production within this decade.”

Bill Buxton was working on multi-touch and gesture interaction in the 1980's, but his dreams did not become a reality until this century, for a variety of reasons. He shared his thoughts about the paradox of the speed of technology in a presentation at the 2008 IEEE International Solid-State Circuits Conference: "Surface and Tangible Computing, and the 'Small' Matter of People and Design" (pdf)

"Carrying on from an earlier thesis in our department (Mehta, 1982), we built a tablet that was sensitive to simultaneous touches at multiple locations, and with the ability to sense the degree of each touch independently (Lee, Buxton & Smith, 1984). We stopped the work in late 1984 when I saw a much better implementation at Bell Labs – one that was transparent and mounted over a CRT. The problem was that they never released the technology, so, the whole multi-touch venture went dormant for 20 years. But, I never stopped dreaming about it. (Lesson: don't stop your research just because someone else is way ahead of you. It might be transitory, and anyhow, remember the story of the tortoise and the hare.)

"I spoke earlier about the paradox in the speed of technology development: it goes at rocket speed, but that of a glacier as well; simultaneously! In the perfect world, this would be ideal: we could go through several iterations of ideas so that by the time the new paradigms of interaction, such as Surface and Tangible computing, are ready for prime time, everything will be in place. But, the rapid iteration is more directed at supporting the old paradigms faster and cheaper, rather than helping shape the new ones. The reasons are not hard to understand. From the perspective of circuit design, the problems are really hard. So, one has to have one's head down working flat out to get anything done. But, there is a side of me that motivated this paper that asks: If it is so hard, then isn't it worth making sure that the things one is working on are things that are worthy of one's hard-earned skills?"

SOMEWHAT RELATED

Bill Buxton's Haptic Input References
(pdf)

Feb 15, 2009

Interactive Displays 2009 Conference: Tuesday, April 21 -Thursday April 23, Hilton San Jose, California

The Interactive Displays Conference, sponsored by Intertech Pira, will highlight an interesting mix of existing and emerging interactive display technologies and applications. The conference will be held at the Hilton in San Jose, California, from Tuesday, April 21st through Thursday, April 23rd.

The pre-conference seminar will feature Sakuya Morimoto, of Canesta, who will present his company's innovative single-chip 3D image sensor technology that supports gesture interaction. Keynote speakers will be Jeff Han, of Perceptive Pixel, and Steven Bathiche, of Microsoft.

Some Highlights:

Pre-conference Seminar: "Gesture Navigation in the World of Digital Contents, Enabled by a Single-Chip 3D Image Sensor." Presenter: Sakuya Morimoto, Senior Director, Business Development in Asia, CANESTA, Japan

Related:
Hitachi at CES 2009: Use of Canesta's 3D sensor to control television and home systems using hand gestures.



"With the wave of a hand, with the shake of a hand, you can control volume, you can actually change the channels, watch your favorite program...the most exciting thing, I think, is that you can actually control your temperature and the lighting in the room, the environmental lighting. So..it is very unique technology that is out there.."

Another demonstration of Hitachi's gesture interaction using Canesta's 3-D depth camera:



When a TV Remote is Just Too Much Effort, Wave -
Jennifer Bergen, PC Magazine
CANESTA Corporate Fact Sheet (pdf)

How does Canesta's Electronic Perception Technology Work?
"Canesta’s electronic perception technology forms 3-D, real time moving images in a single chip through patented methods which use light photons to “range” the image, similar to radar. The silicon sensor chip develops 3-D depth maps at a rate in excess of 30 frames per second, and then performs additional processing on these depth maps to resolve the images into application specific information that can easily be processed by embedded processor(s) in the end-use device or machine. Since Canesta’s software starts with a three-dimensional view of the world, provided immediately by the hardware, it has a substantial advantage over classical image processing software that struggles to construct three-dimensional representations using complex mathematics, and images from multiple cameras or points of view. This dramatic reduction in complexity makes it possible to embed the processing software directly into the chips themselves so they may be used in the most cost-conscious applications."
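The pipeline Canesta describes - a per-pixel depth map reduced to application-specific information an embedded processor can consume - can be illustrated with a toy example. This is not Canesta's software; the depth threshold, frame values, and `hand_centroid` helper are all invented for illustration.

```python
# Toy illustration of processing a depth map into application-level
# information: segment the nearest object (e.g. a hand) by a simple
# depth threshold and reduce it to a single centroid, the kind of
# compact output a cost-conscious embedded device could act on.

def hand_centroid(depth_map, max_range_m=0.6):
    """Centroid (row, col) of pixels closer than max_range_m, or None."""
    rows, cols, count = 0.0, 0.0, 0
    for r, row in enumerate(depth_map):
        for c, depth in enumerate(row):
            if depth < max_range_m:   # pixel belongs to the near object
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None                   # nothing within range
    return rows / count, cols / count

# A 4x4 depth frame in meters; the "hand" occupies the top-left corner
# while the background sits at 2.0 m:
frame = [
    [0.4, 0.4, 2.0, 2.0],
    [0.4, 0.4, 2.0, 2.0],
    [2.0, 2.0, 2.0, 2.0],
    [2.0, 2.0, 2.0, 2.0],
]
print(hand_centroid(frame))  # → (0.5, 0.5)
```

Comparing centroids across successive frames then yields simple gestures: a centroid sweeping left to right at 30 frames per second reads as the kind of hand wave Hitachi demonstrated for channel and volume control.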



I will highlight some of the featured presentations in future blog posts:

Steven Bathiche, Director of Research, Applied Sciences Group, Entertainment and Devices Division MICROSOFT, US
Guillaume Largillier, Chief Strategy Officer and Co-Founder, STANTUM, France
Jeff Han, PERCEPTIVE PIXEL, US
Mark Fihn, Publisher, VERITAS ET VISUS, US
Derek Mitchell, Conference Producer, INTERTECHPIRA, US
Vinita Jakhanwal, Principal Analyst, Small/Medium Displays, ISUPPLI CORPORATION, US
Joseph Carsanaro, President and CEO F-ORIGIN, US
Tommi Ilmonen, CEO MULTITOUCH OY, Finland
Stephen Sedaker, Director of Component Sales WACOM TECHNOLOGY CORPORATION, US
Brad Gleeson, Managing Director, Business Development TARGETPATH GLOBAL LLC., US
Henry Kaufman, President and Founder, TACTABLE, US
Christophe Ramstein, Chief Technology Officer, IMMERSION CORPORATION, US
Mary Lou Jepsen, CEO, PIXEL QI, US
John Newton, Chief Technology Officer, NEXTWINDOW, New Zealand
Herve Martin, CEO, SENSITIVE OBJECT, France
Scott Hagermoser, Gaming Business Unit Manager 3M TOUCH SYSTEMS, US
Bob Cooney, Vice President, Business Development, ECAST, US
Brent Bushnell, Chief Technology Officer UWINK, US
Stephan Durach, Head, Technology Office, BMW GROUP, US
Jeff Doerr, Senior Manager, Business Development Self Service Solutions Group, FLEXTRONICS, US
Andy Wilson, Senior Researcher, Adaptive Systems and Interaction Group, MICROSOFT, US
Mats W. Johansson, Chief Executive Officer, EON REALITY, US
Lenny Engelhardt, Vice President for Business Development, N-TRIG, Israel
Dr Paul Diefenbach, Director, RePlay Lab, DREXEL UNIVERSITY, US
Andrew Hsu, Technical Marketing and Strategic Partnerships Manager, SYNAPTICS, US
Dean LaCoe, Business Development Manager, GESTURETEK, Canada
Keith Pradhan, Global Director of Product Management, TYCO ELECTRONICS, ELO TOUCHSYSTEMS, US
Jerry Bertrand, Managing Member/Acting CEO, MICROSCENT, LLC, US
Frederic Kaplan, CEO and Co-Founder, OZWE, Switzerland


Related

Visionary Jeff Han and Microsoft's Steven Bathiche to Keynote at Interactive Displays 2009