Here are a couple of new natural user interface videos. The first, by Evoluce, demonstrates gesture-based interaction and navigation in Windows 7 applications using Kinect. The second, by Immersive Labs, shows multi-touch product browsing on a large display.
Kinect Treatment of Windows 7, by Evoluce
Evoluce: Leading Surface Technologies
Immersive Labs - Multi-touch Product Browser
Immersive Labs
Focused on interactive multimedia and emerging technologies to enhance the lives of people as they collaborate, create, learn, work, and play.
Showing posts with label NUI.
Dec 3, 2010
More gesture and multi-touch interaction! Windows 7 navigation with Kinect; product browser by Immersive Labs
Posted by Lynn Marentette
Labels: evoluce, gesture, immersive labs, kinect, multi-touch, NUI, product browser, touch, Windows 7
Nov 30, 2010
Therenect: Theremin for the Kinect! (via Martin Kaltenbrunner)
Yet another reason why I need to get a Kinect!
Martin Kaltenbrunner's video demonstrates how the Kinect can be transformed into a virtual Theremin.
Therenect - Kinect Theremin from Martin Kaltenbrunner on Vimeo.
Here's Martin's description of the Therenect:
"The Therenect is a virtual Theremin for the Kinect controller. It defines two virtual antenna points, which allow controlling the pitch and volume of a simple oscillator. The distance to these points can be controlled by freely moving the hand in three dimensions or by reshaping the hand, which allows gestures that are quite similar to playing an actual Theremin."
"This musical instrument has been developed by Martin Kaltenbrunner at the Interface Culture Lab at the University of Art and Industrial Design in Linz, Austria. The software has been developed using the Open Frameworks and OpenKinect libraries."
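For the programmers out there: here's a rough Python sketch of the idea described above, mapping the hand's distance to two virtual "antenna" points onto the pitch and volume of an oscillator. The positions, ranges, and mapping are purely illustrative assumptions on my part; the actual Therenect is built with openFrameworks and OpenKinect, not this code.

```python
import math

# Hypothetical antenna positions in 3D space (meters). In the Therenect
# idea, one virtual point controls pitch and the other controls volume.
PITCH_ANTENNA = (0.5, 0.0, 1.0)
VOLUME_ANTENNA = (-0.5, 0.0, 1.0)

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def hand_to_oscillator(hand_pos, min_hz=110.0, max_hz=880.0, max_reach=1.0):
    """Map hand position to (frequency, volume), Theremin-style:
    closer to the pitch antenna -> higher frequency,
    closer to the volume antenna -> louder."""
    d_pitch = min(distance(hand_pos, PITCH_ANTENNA), max_reach)
    d_vol = min(distance(hand_pos, VOLUME_ANTENNA), max_reach)
    freq = max_hz - (max_hz - min_hz) * (d_pitch / max_reach)
    volume = 1.0 - d_vol / max_reach
    return freq, volume
```

Moving the hand toward one point raises the pitch; moving it toward the other raises the volume, which is roughly how a real Theremin's two antennas behave.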
Posted by Lynn Marentette
Nov 29, 2010
International Conference on Multimodal Interaction: ICMI 2011 Call for Papers
The information below was taken from the website for the 13th International Conference on Multimodal Interaction. I'm excited about the range of topics that the conference will cover. I look forward to sharing more about the work of the members of this group on this blog in the future! (I've highlighted the topics that interest me the most.)
INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION CALL FOR PAPERS
The International Conference on Multimodal Interaction, ICMI 2011, will take place in Alicante, Spain, November 14-18, 2011, just after ICCV 2011 (in Barcelona, Spain). This is the thirteenth edition of the International Conference on Multimodal Interfaces, which for the last two years joined efforts with the Workshop on Machine Learning for Multimodal Interaction (ICMI-MLMI 2009 and 2010). Starting with this edition, the conference uses the new, shorter name.
The new ICMI is the premier international forum for multimodal signal processing and multimedia human-computer interaction. The conference will focus on theoretical and empirical foundations, varied component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and system development. ICMI 2011 will feature a single-track main conference which includes keynote speakers, technical full and short papers (including oral and poster presentations), special sessions, demonstrations, exhibits, and doctoral spotlight papers. The conference will be followed by workshops. The proceedings of ICMI 2011 will be published by ACM as part of its series of International Conference Proceedings and will also be distributed to attendees on USB memory sticks.
Topics of interest include but are not limited to:
- Multimodal and multimedia interactive processing
Multimodal fusion, multimodal output generation, multimodal interactive discourse and dialogue modeling, machine learning methods for multimodal interaction.
- Multimodal input and output interfaces
Gaze and vision-based interfaces, speech and conversational interfaces, pen-based and haptic interfaces, virtual/augmented reality interfaces, biometric interfaces, adaptive multimodal interfaces, natural user interfaces, authoring techniques, architectures.
- Multimodal and interactive applications
Mobile and ubiquitous interfaces, meeting analysis and meeting spaces, interfaces to media content and entertainment, human-robot interfaces and interaction, audio/speech and vision interfaces for gaming, multimodal interaction issues in telepresence, vehicular applications and navigational aids, interfaces for intelligent environments, universal access and assistive computing, multimodal indexing, structuring and summarization.
- Human interaction analysis and modeling
Modeling and analysis of multimodal human-human communication, audio-visual perception of human interaction, analysis and modeling of verbal and nonverbal interaction, cognitive modeling.
- Multimodal and interactive data, evaluation, and standards
Evaluation techniques and methodologies, annotation and browsing of multimodal and interactive data, standards for multimodal interactive interfaces.
- Core enabling technologies
Pattern recognition, machine learning, computer vision, speech recognition, gesture recognition.
Important dates
| Workshop proposals | March 1, 2011 |
| Paper and demo submission | May 13, 2011 |
| Author notification | August 5, 2011 |
| Camera ready deadline | September 2, 2011 |
| Conference | November 14-16, 2011 |
| Workshops | November 17-18, 2011 |
General Chairs
Hervé Bourlard (Idiap)
Thomas S. Huang (Univ. of Illinois)
Enrique Vidal (Tech. Univ. of Valencia)
Program Chairs
Daniel Gatica-Perez (Idiap)
Louis-Philippe Morency (Univ. South. California)
Nicu Sebe (Univ. of Trento)
Demo Chairs
Kazuhiro Otsuka (NTT Comm. Sci. Lab.)
Jordi Vitrià (UB/CVC, Barcelona)
Workshop Chairs
Fernando de la Torre (Carnegie Mellon Univ.)
Alejandro Jaimes (Yahoo! Research, Barcelona)
Publication Chair
Jose Oncina (Univ. of Alicante)
Student & Doctoral Spotlight Chair
Li Deng (Microsoft Research and Univ. of Washington)
Sponsorship Chair
Nuria Oliver (Telefónica I+D)
Publicity Chair
Helen Mei-Ling Meng (CUHK, Hong Kong)
Local Organization Chair
Luisa Micó (Univ. of Alicante)
Treasurer
Jorge Calera (Univ. of Alicante)
Local organizers
Xavier Anguera (Telefónica I+D)
A. Javier Gallego Sánchez (Univ. of Alicante)
Ida Hui (CUHK, Hong Kong)
Jose Manuel Iñesta (Univ. of Alicante)
Alejandro Toselli (Tech. Univ. of Valencia)
RELATED
Accepted Papers for ICMI-MLMI 2010
NOTE: ICMI 2011 will be held after ICCV 2011, the 13th International Conference on Computer Vision in Barcelona.
Posted by Lynn Marentette
Nov 24, 2010
Microsoft Surface Light and Physics App for Kids at the Smithsonian
Microsoft Surface at the Smithsonian
The Surface is located in the Smithsonian's Castle, and is part of "The Wonder of Light: Touch and Learn!" exhibit, which opened on Tuesday, November 9th (2010). Microsoft donated the Surface unit to the Smithsonian.
Below is a slideshow of the interactive exhibit:
The video below provides a closer look at the applications created by Infostrat for the Smithsonian exhibit:
RELATED
New Interactive Exhibit Opens in Smithsonian's Castle, Bringing Light To Life
Smithsonian News Release, 11/9/20
Josh Blake's post, Microsoft Surface and Magical Object Interaction.
Posted by Lynn Marentette
Nov 23, 2010
Light Touch Interactive Projector; Holographic Laser Projection (HLP) "How it Works": Update on Light Blue Optics (Videos, links)
It has been about a year since I wrote about Light Blue Optics, "a privately-funded company developing and supplying miniature projection systems for use in high volume applications in markets including automotive, digital signage and consumer electronics." Light Blue Optics is located in Cambridge, UK, and has a development facility in Colorado Springs.
Light Touch Interactive Projector
Holographic Laser Projection (HLP): How it Works
RELATED
A Touch Screen Table
Brendan O'Brian, QSR 11/23/10
"Light Blue Optics, which rolled out the Light Touch in January, is working with several restaurant chains to put its technology on tables...“You can project menus onto the table so the customer can sit down and order their meal,” says Tamara Roukaerts, director of marketing communications at Light Blue Optics. “They can also watch videos of the chef preparing their meal through a live video feed.”"
Light Blue Optics Turns KFC Tables into Touch Screens
Roland Gribben, The Telegraph 10/11/10
HLP technology, and how it can be used for practical purposes, is further explained in the following white papers:
Buckley, E., Lacoste, L., & Stindt, D. Rear-view virtual image displays. SID (Society for Information Display), Vehicles and Photons - 16th Annual Symposium on Vehicle Displays, 10/15/09
Abstract: "Light Blue Optics holographic laser projection technology can be utilised to create a virtual image display which, with a volume enclosing less than 700cc, exhibits a form-factor consistent with integration into a rear-view mirror. By combining the visual accommodation and concomitant reaction time benefits of a head-up display with the ability to present high resolution safety-critical information in a rear-view off-axis configuration with large eyebox, significant potential safety benefits can result."
Buckley, E., Stindt, D., & Isele, R. Novel Human-Machine Interface (HMI) Design Enabled by Holographic Laser Projection. SID 2009 Symposium, 6/2/09
Abstract: "Despite the current proliferation of in-car flat panel displays, designers continue to investigate alternatives to flat and rectangular thin-film transistor (TFT) panels – principally to obtain differentiation by freedom of design using, for example, free-form shapes, round displays, flexible displays or mechanical 3D solutions. A perfect demonstration was provided at the 2008 Paris Motor Show by the BMW Mini Center Globe, a novel instrument cluster design which combines lighting, a circular flat panel and a holographic laser projector provided by Light Blue Optics (LBO) to redefine the state of the art in human-machine interface (HMI)...In this paper, the authors will show how the incorporation of LBO's holographic laser projection technology can allow the construction of a unique display technology like the Mini Center Globe, and how such a combination of technologies represents a significant advance in the current state of the art in automotive displays."
The Story Behind this Post
I was having one of my occasional vivid "technology dreams" just before my dog woke me up in the middle of the night. I was driving around in a futuristic car that had all sorts of cool technologies, including a holographic side-view mirror, similar to the one I blogged about in a 2009 post about Light Blue Optics. This inspired me to take a quick look at what the company is doing now.
The dream was probably triggered by what I read just before I went to sleep: a call for papers posted by Albrecht Schmidt on Facebook, "Call for Papers - Theme Issue on Automotive User Interfaces," for an upcoming edition of Personal and Ubiquitous Computing. If you are curious, here's an example of one of my blog posts that was inspired by one of my geek-tech-dreams: "Last Night I Dreamt about Haptic Touch Screen Overlays".
Posted by Lynn Marentette
Nov 15, 2010
Human-Machine-Music Interaction: KarmetiK Machine Orchestra (Video, links)
Here is an example of innovative interaction between humans, machines, and music:
KarmetiK Machine Orchestra - Live at REDCAT Walt Disney Hall - Los Angeles - Jan 27, 2010 from KarmetiK on Vimeo.
Information from the KarmetiK Machine Orchestra Vimeo page:
"On January 27th, 2010, KarmetiK and California Institute of the Arts brought together a group of interdisciplinary artists to perform in a revolutionary production. During this performance, The Machine Orchestra, a collective of musicians, engineers, dancers, and theatre designers, gave an audience at the Walt Disney Concert Hall's REDCAT performance space a glimpse of the future: one in which computers, robots, and humans join forces to make music. Featuring a cast of musicians, new musical interfaces, and musical robotics, The Machine Orchestra fused a wide array of musical styles ranging from free electronic improvisation to world dance music. This DVD features uninterrupted footage of The Machine Orchestra's debut concert, a performance exploring human interaction with KarmetiK's collection of musical robots: MahaDeviBot, GanaPatiBot, Tammy, Raina, and ReyongBot. Directed by Ajay Kapur and Michael Darling."
Music Director, Co-Creator: Ajay Kapur
Production Director, Co-Creator: Michael Darling
Guest Electronic Artists: Curtis Bahn & Perry Cook
World Music Performers: Ustad Aashish Khan, Pak Djoko Walujo, & I Nyoman Wenten
Multimedia Performer-Composers: Charlie Burgin, Dimitri Diakopoulos, Jordan Hochenbaum, Jim Murphy, Owen Vallis, Meason Wiley, and Tyler Yamin
Visual Design: Jeremiah Thies
Dance: Raakhi Sinha & Kieran Heralall
Lighting Design: Tiffany Williams
Sound Design: John Baffa
Production: Lauren Pratt
Editing: Meason Wiley
Filming: Benny Schuetze
machineorchestra.com
Follow KarmetiK on Facebook and Twitter:
facebook.com/karmetik
twitter.com/karmetik
Detailed information about this performance and Machine Orchestra:
MACHINE ORCHESTRA
Lisa Zyga, Physorg.com
KarmetiK Machine Orchestra
RELATED
Building a Hybrid Man/Machine Orchestra, Pt 1
Jordan Hochenbaum, Create Digital Music 1/25/10
Jordan Hochenbaum, Create Digital Music 4/22/10
Direct links to the publications listed below, and more, on the Publications: Refereed Journals and Conference Papers page of the Karmetik website.
Kapur, A., & M. Darling, A Pedagogical Paradigm for Musical Robotics, Proceedings of the International Conference on New Interfaces for Musical Expression, Sydney, Australia, June 2010.
Hochenbaum, J., Kapur, A., & M. Wright, Multimodal Musician Recognition, Proceedings of the International Conference on New Interfaces for Musical Expression, Sydney, Australia, June 2010.
Vallis, O., Hochenbaum, J., & A. Kapur, A Shift Towards Iterative and Open-Source Design for Musical Interfaces, Proceedings of the International Conference on New Interfaces for Musical Expression, Sydney, Australia, June 2010.
Hochenbaum, J., Vallis, O., Diakopoulos, D., Murphy, J., & A. Kapur, On Designing Expressive Musical Interfaces for TableTop Surfaces, Proceedings of the International Conference on New Interfaces for Musical Expression, Sydney, Australia, June 2010.
Murphy, J., Kapur, A., & C. Burgin, The Helio: A Study of Membrane Potentiometers and Long Force Sensing Resistors for Musical Interfaces, Proceedings of the International Conference on New Interfaces for Musical Expression, Sydney, Australia, June 2010.
Posted by Lynn Marentette
Labels: Ajay Kapur, HCI, Jordan Hochenbaum, KarmetiK, Machine Orchestra, Michael Darling, NUI, robot, TUI, video
Juggling and Music: JAM meets the ReacTable
Enjoy!
Juggling and Music Meets the ReacTable, With Carles Lopez
I could play around with a ReacTable all day long!
RELATED
Need an 8 and 1/2 minute dance/exercise break? Get up out of your chair and dance to this video of the Brainwater ReacTable Live Performance 1
Posted by Lynn Marentette
Nov 11, 2010
NY Times article and Video: iPad Opens World to a Disabled Boy
I meant to post a link to this article a while ago:
iPad Opens World to a Disabled Boy
Emily B. Hager, New York Times, 10/12/10
Cross-posted on the TechPsych blog
Posted by Lynn Marentette
Nov 10, 2010
New Version of Surface from Microsoft?
Next Gen Microsoft Surface 'Imminent'
Seamus Byrne, Gizmodo 11/11/10
Here is a quote from the Gizmodo article:
"Iain McDonald of agency Amnesia Razorfish, owned by Microsoft until late 2009 and now part of the Publicis Groupe, told Gizmodo the next generation Microsoft Surface will indeed be a flat surface concept, not the entire coffee table system with cameras and projectors living underneath. The new Surface will also have higher resolution cameras so that special codes will no longer be required to identify objects. And the new Surface will also be around $8,000 (whether this was USD or AUD wasn’t specified)." - Seamus Byrne
More to come...
Posted by Lynn Marentette
Labels: 3M touch, amnesia, byrne, gizmodo, imminent, multitouch, new, next gen surface, NUI, razorfish, surface, surface computing
Nov 2, 2010
EyeTube for YouTube! Eye-gaze interaction software, free and downloadable from GazeGroup
Gaze interaction systems provide access to computers and the rich content now available on the web for many people with disabilities. Unfortunately, commercial gaze tracking systems are very expensive and at times, difficult to calibrate. There is hope!
Following up on my recent post, "Open-source Eye-tracking: The ITU Gaze Tracker 2.0 Beta", I thought I'd share the GazeGroup's EyeTube for YouTube interface.
What is great about EyeTube for YouTube is that it provides two different interfaces. The simplified version looks good for younger children or people with cognitive disorders, and is icon-based. The second version is appropriate for people who can navigate through more complex visual representations of content.
EyeTube currently requires a Windows-based system and .NET 3.5. It can be downloaded from the GazeGroup website. You'll also need a YouTube account, and to get the application up and running, you'll need to edit the settings file (EyeTubeSettings.xml) to match your account. (If you aren't comfortable editing settings files or XML, ask someone you know who works in IT.)
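If you'd rather script the change, here's a small Python sketch using the standard library's ElementTree. Be warned that the <Account>/<Username> element names below are hypothetical placeholders I made up for illustration; open your own copy of EyeTubeSettings.xml to see the real tag names before adapting this.

```python
import xml.etree.ElementTree as ET

def set_account(settings_path, username):
    """Update a YouTube account name in an XML settings file.
    NOTE: the Account/Username structure is a hypothetical example;
    check the actual tags in your EyeTubeSettings.xml."""
    tree = ET.parse(settings_path)
    node = tree.getroot().find("./Account/Username")
    if node is None:
        raise ValueError("No Account/Username element found in " + settings_path)
    node.text = username
    tree.write(settings_path, encoding="utf-8", xml_declaration=True)
```

For example, `set_account("EyeTubeSettings.xml", "mychannel")` would rewrite the file in place with the new account name.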
Below is the icon-based version of the eye-gaze interface for YouTube:

Feature-rich version of the EyeTube interface for YouTube:

From the GazeGroup site:
"The EyeTube prototype offers a feature-rich, eye-controlled interface for the popular YouTube service. Instead of emulating a mouse pointer and interacting with a web browser, the EyeTube interface is especially designed to be driven by gaze input. It offers a wide range of features such as keyword searching, popular video feeds, favorites, and social aspects such as subscriptions, friends, and commenting on videos. The highly optimized interface allows for streamlined interaction which is alleviated from the Midas Touch problem. In most previous gaze interfaces, selection is made by a dwell-time activator, e.g., fixate a button for a specific amount of time and it will execute the function. In the EyeTube interface, a fixation on a UI element will highlight it, and a second fixation on the activation button is required to execute the function. This removes the stress of having to constantly move the eyes to avoid unintentional activation."
"The EyeTube also exists in another simplified incarnation, developed for users who are distracted by a larger number of options. It supports basic features such as browsing categories, optional keyword searching, and favorites."
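The two-fixation selection scheme described above can be sketched as a tiny state machine. This is my own hedged illustration of the concept, not EyeTube's actual code: a fixation on a normal element only highlights ("arms") it, and the action executes only when a later fixation lands on a dedicated activation target, sidestepping the Midas Touch problem of plain dwell-time activation.

```python
class TwoStepGazeSelector:
    """Sketch of two-fixation gaze selection: fixating an element
    highlights it; the action fires only when the user then fixates
    a dedicated activation target."""

    ACTIVATE_BUTTON = "ACTIVATE"

    def __init__(self):
        self.highlighted = None   # element armed by the most recent fixation
        self.activated = []       # log of executed actions

    def on_fixation(self, target):
        if target == self.ACTIVATE_BUTTON:
            # Second step: execute whatever is currently highlighted.
            if self.highlighted is not None:
                self.activated.append(self.highlighted)
                self.highlighted = None
        else:
            # First step: (re)highlight only; nothing executes yet,
            # so stray glances across the interface are harmless.
            self.highlighted = target
        return self.highlighted
```

Glancing across several elements changes only the highlight; fixating the activation button then executes the last-highlighted element, and nothing else.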
RELATED
The GazeGroup
(The individuals mentioned below may be currently working elsewhere, but involved in the gaze research in some way.)
- John Paulin Hansen, Head of group (Ph.D. in Psychology)
- Dan Witzner Hansen, Associate professor
- Javier San Agustin, PhD student
- Sune Alstrup Johansen, PhD student
- Henrik Hegner Tomra Skovsgaard, PhD student
- Martin Tall, PhD student (Now working at Duke University, North Carolina)
GazeGroup Research Areas
COGAIN (Communication by Gaze Interaction)
ACM CHI Conference Articles
San Agustin, J., Skovsgaard, H., Hansen, J. P., and Hansen, D. W. 2009. Low-cost gaze interaction: ready to deliver the promises. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI EA '09. ACM, New York, NY, 4453-4458. DOI= http://doi.acm.org/10.1145/1520340.1520682
San Agustin, J., Hansen, J. P., Hansen, D. W., and Skovsgaard, H. 2009. Low-cost gaze pointing and EMG clicking. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI EA '09. ACM, New York, NY, 3247-3252. DOI= http://doi.acm.org/10.1145/1520340.1520466
Tall, M., Alapetite, A., San Agustin, J., Skovsgaard, H. H., Hansen, J. P., Hansen, D. W., and Møllenbach, E. 2009. Gaze-controlled driving. InProceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI EA '09. ACM, New York, NY, 4387-4392. DOI= http://doi.acm.org/10.1145/1520340.1520671
UPDATE
Eye-controlled games and leisure applications from the COGAIN wiki: http://www.cogain.org/wiki/Leisure_Applications
- EyeArt - EyeArt eye-drawing program, developed by Andre Meyer and Markus Dittmar, Technical University of Dresden, Applied Cognitive Research Unit, Germany.
- GazeTrain - Gaze-controlled action oriented puzzle game, developed by Lasse Farnung Laursen, Technical University of Denmark
- Puzzle - Simple puzzle game that can be played with eye movements, developed by Vytautas Vysniauskas, Siauliai University, Lithuania
- Road to Santiago - Gaze-controlled adventure game (full game), developed by Javier Hernandez Sanchiz, Universidad Publica de Navarra, Spain
- Snap Clutch - An application that uses eye gaze data to generate key and mouse events for playing games such as World of Warcraft and Second Life.
- ASE: Accessible Surfing Extension for Firefox - Follow this link to access ASE, an Accessible Surfing Extension for Firefox, developed by Emiliano Castellina and Fulvio Corno at Politecnico di Torino. (Note that this is a beta version.)
- Eye Gaze Music (SAW Selection Sets) - Point and Play – eye gaze (direct pointing) musical activities, developed by DART. Please note that the SAW (Special Access to Windows) framework application is needed to play these 15 music selection sets. SAW is available for free at http://www.oatsoft.org/Software/SpecialAccessToWindows
- EyeTube - Gaze interaction for YouTube - Follow this link to get more information and download EyeTube at ITU GazeGroup's web pages
- Eye3D and other head eye mouse software - Eye3D for education, and a collection of links to free software that works with head or eye mouse. Includes links to downloads and original sites.
- Gaze-controlled Breakout - Follow this link to access a modified version of the LBreakout2 game which can be operated by an SMI eye tracker, developed by Michael Dorr et al. at University of Luebeck
- Oleg Spakov's Freeware games for MyTobii - Follow this link to access MyTobii compatible games developed by Oleg Spakov, University of Tampere, Finland
- Free ITU Gaze Tracker and applications - Download a webcam based open-source gaze tracker and several applications that work with it, developed at IT University of Copenhagen
- GameBase - Check out the Eye-Gaze Games category at the SpecialEffect GameBase!
- More information about Gaze-Controlled Games - Follow this link to see a list of online information resources on using gaze for the control of games and other leisure applications
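Several of the applications above, Snap Clutch in particular, work by turning raw gaze data into ordinary key and mouse events so that unmodified games can be played by eye. A common building block for this is dwell-time selection: if the gaze point stays within a small radius for long enough, the application fires a click. Here is a minimal sketch of that idea in Python; the class name, radius, and dwell threshold are my own illustrative choices, not taken from any of the projects listed:

```python
import math

class DwellDetector:
    """Fire a 'click' when gaze stays within `radius_px` for `dwell_ms`."""

    def __init__(self, radius_px=30, dwell_ms=500):
        self.radius_px = radius_px
        self.dwell_ms = dwell_ms
        self.anchor = None   # (x, y) where the current fixation started
        self.start_t = None  # timestamp (ms) of that fixation's first sample

    def update(self, x, y, t_ms):
        """Feed one gaze sample; return (x, y) of a click, or None."""
        if self.anchor is None or math.dist((x, y), self.anchor) > self.radius_px:
            # Gaze moved away: start a new candidate fixation here.
            self.anchor = (x, y)
            self.start_t = t_ms
            return None
        if t_ms - self.start_t >= self.dwell_ms:
            click_at = self.anchor
            self.anchor = None  # reset so one fixation yields one click
            return click_at
        return None
```

In a real system the click coordinates would be handed to an event injector; the dwell threshold is the key usability knob, trading selection speed against accidental "Midas touch" clicks.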
Posted by
Lynn Marentette
Nov 1, 2010
Open-source Eye-tracking: The ITU Gaze Tracker 2.0 Beta Via Martin Tall, NUI-Group Member
I came across the first version of the open-source ITU Gaze Tracker on the NUI Group forum in April of 2009 and played around with it a bit. I was impressed. I'm happy to say that the new version looks even better, although I haven't had the time to try it out. Below are two recent videos that will give you a better understanding about gaze tracking.
For the tech-curious, make sure you take the time to view the second video! Links to info & code are below.
GT2 High speed remote eye tracking "Pushing the limits"
Technical Demonstration
Info about the ITU Gaze Tracker 2.0 Beta from the NUI Group Forum, posted by Martin Tall:
Introducing the ITU Gaze Tracker 2.0 Beta
"We’ve made great progress since the initial release; today we open the doors for version 2.0. Internally we’ve rewritten major parts of the platform to gain flexibility and higher performance. The first version was DIY playtime; this version is nothing short of a screamer. High performance, very accurate tracking. People are telling us we are crazy giving it away, but we’re dedicated to the mission: Accessible eye tracking for all, regardless of nationality and means. We’re making it happen."
Important highlights for GT2.0b:
- Supports three modes of operation: head-mounted, remote monocular, and remote binocular
- Vastly improved performance: 500+ fps head-mounted, 170+ fps remote binocular (both eyes)
- Awesome accuracy: avg. 0.3-0.7 degrees of visual angle (remote binocular)
- New UI, looks so... 2010
- Automatic tuning (optimization of algorithm parameters)
- Relatively low CPU utilization and memory footprint (12%, 170 MB on a Core i7 860, Windows 7 64-bit)
- Many enhancements, bug fixes, etc.
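To put that accuracy figure in concrete terms: an error quoted in degrees of visual angle maps to an on-screen distance that depends on how far the user sits from the display and on the screen's pixel density. A quick back-of-the-envelope conversion, where the 60 cm viewing distance and ~38 px/cm pixel pitch (a typical ~96 dpi monitor) are my own illustrative assumptions:

```python
import math

def angle_to_pixels(angle_deg, distance_cm, px_per_cm):
    """On-screen error (px) subtended by `angle_deg` at `distance_cm`."""
    error_cm = 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)
    return error_cm * px_per_cm

# Assumed setup: 60 cm from the screen, ~38 px/cm.
for deg in (0.3, 0.7):
    print(f"{deg} deg -> {angle_to_pixels(deg, 60, 38):.0f} px")
```

At those assumed values, the 0.3-0.7 degree range works out to roughly 12-28 pixels of error, which is why gaze-driven interfaces tend to use large targets rather than pixel-precise pointing.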
Posted by
Lynn Marentette
Labels:
2.0,
accessibility,
accessible,
Beta,
eye tracking,
gaze tracking,
gazegroup,
interaction,
ITU Gaze Tracker,
Martin Tall,
NUI,
NUI Group
1 comment:
Unlocking the Future of Cities through Multi-Touch Interactive Visualization at RENCI (UNC-Charlotte)
Here is a link to an article that was in the SciTech section of my morning paper today!
Unlocking the Future of Cities: UNCC scientists work across disciplines to predict how urban areas will use open land. Tyler Dukes, Charlotte Observer, 10/31/10
"As part of a three-year, $286,000 grant from the National Science Foundation, the group of scientists from UNC Charlotte is researching the complex relationship between the Queen City and its surrounding forest and pastoral lands. Using a combination of social, natural and computer science, they're working to build an interactive map-based simulation capable of showing the impact of future development and policy on land use....It's a project requiring Meentemeyer's team to peel back multiple layers of cultural and economic values surrounding land in the South. The research will have implications beyond the Charlotte area...By allowing the public to explore those possibilities visually on anything from a laptop to a touch-screen table, the research team is hoping its work will mean more informed decisions about how people use the land around them." -Charlotte Observer
Image Source: Charlotte Observer
Wouldn't this be a great tool to use to support collaborative learning projects in the schools?
RELATED
RENCI at UNC-Charlotte has a Multi-touch Table in the Visualization Center
RENCI Visualization Center Update
Visualization Resources at RENCI UNC-Charlotte
RENCI at UNC Charlotte
Multi-Touch at RENCI
Research by Touch: RENCI Multitouch Table Gives Computer Science Research an Intuitive Interface
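The article describes an interactive, map-based simulation of land-use change. The UNCC team's actual model isn't detailed in the article, but the general flavor of grid-based land-change simulation can be conveyed with a toy cellular model in which developed cells make neighboring open land more likely to develop. Everything here, thresholds included, is invented for illustration and is not the research team's method:

```python
def step(grid, threshold=2):
    """One step of a toy land-use model on a grid of 0 (open) / 1 (developed).

    An open cell becomes developed when at least `threshold` of its
    4 neighbors are already developed. Purely illustrative.
    """
    rows, cols = len(grid), len(grid[0])
    nxt = [row[:] for row in grid]  # update synchronously from a copy
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 0:
                developed_neighbors = sum(
                    grid[nr][nc]
                    for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                    if 0 <= nr < rows and 0 <= nc < cols
                )
                if developed_neighbors >= threshold:
                    nxt[r][c] = 1
    return nxt
```

Iterating `step` shows development spreading outward from existing clusters; a real model like the one described would layer in economic, policy, and ecological drivers, and a touch-screen table lets stakeholders poke at those drivers directly.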
Posted by
Lynn Marentette