
Nov 2, 2010

EyeTube for YouTube! Eye-gaze interaction software, free and downloadable from GazeGroup

Gaze interaction systems give many people with disabilities access to computers and the rich content now available on the web.  Unfortunately, commercial gaze-tracking systems are very expensive and, at times, difficult to calibrate.  There is hope!


Following up on my recent post, "Open-source Eye-tracking: The ITU Gaze Tracker 2.0 Beta", I thought I'd share the GazeGroup's EyeTube for YouTube interface.  


What is great about EyeTube for YouTube is that it provides two different interfaces. The simplified version is icon-based and looks well suited for younger children or people with cognitive disorders.  The second version is appropriate for people who can navigate more complex visual representations of content. 


EyeTube currently requires a Windows-based system and .NET 3.5. It can be downloaded from the GazeGroup website.  If you plan to download the application, make sure you also have a YouTube account. To get the application up and running, you'll need to edit the settings file (EyeTubeSettings.xml) to match your account.   (If you aren't comfortable editing settings or XML, ask someone you know who works in IT.)
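As a rough illustration, the account-related entries in EyeTubeSettings.xml might look something like the fragment below. The element names here are hypothetical - check the file that ships with the EyeTube download for the actual structure.

```xml
<!-- Hypothetical sketch only: the real EyeTubeSettings.xml that ships
     with EyeTube defines the actual element names and structure. -->
<EyeTubeSettings>
  <!-- Replace with your own YouTube account details -->
  <YouTubeUsername>your-username</YouTubeUsername>
  <YouTubePassword>your-password</YouTubePassword>
</EyeTubeSettings>
```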


Below is the icon-based version of the eye-gaze interface for YouTube:
EyeTube - Gaze Interaction for YouTube (simplified version)


Feature-rich version of the EyeTube interface for YouTube:
EyeTube - Gaze Interaction for YouTube

From the GazeGroup site:

"The EyeTube prototype offers a feature-rich eye-controlled interface for the popular YouTube service. Instead of emulating a mouse pointer and interacting with a web browser, the EyeTube interface is especially designed to be driven by gaze input. It offers a wide range of features such as keyword searching, popular video feeds, favorites, and social aspects such as subscriptions, friends, and commenting on videos. The highly optimized interface allows for streamlined interaction that avoids the Midas Touch problem. In most previous gaze interfaces, selection is made by a dwell-time activator, e.g., fixate on a button for a specific amount of time and it will execute the function. In the EyeTube interface, a fixation on a UI element will highlight it, and a second fixation on the activation button is required to execute the function. This removes the stress of having to constantly move the eyes to avoid unintentional activation."
"The EyeTube also exists in another, simplified incarnation developed for users who are distracted by a larger number of options. It supports basic features such as browsing categories, optional keyword searching, and favorites."
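The two-step selection scheme described above can be sketched as a small state machine: a first fixation only highlights (arms) an element, and only a fixation on a dedicated activation button executes it, so resting the gaze on content can never trigger an action by itself. This is a minimal illustrative sketch in Python; the class and method names are my own, not from the EyeTube source.

```python
# Sketch of two-fixation gaze selection (avoids the "Midas Touch" of
# dwell-time activation). Names are illustrative, not EyeTube's API.

class TwoStepGazeSelector:
    ACTIVATE_BUTTON = "activate"

    def __init__(self):
        self.highlighted = None   # element armed by the first fixation
        self.executed = []        # log of executed commands

    def on_fixation(self, element):
        """Handle a completed fixation on a named UI element."""
        if element == self.ACTIVATE_BUTTON:
            if self.highlighted is not None:
                # Second fixation on the activation button runs the
                # armed command, then disarms.
                self.executed.append(self.highlighted)
                self.highlighted = None
        else:
            # A fixation on content only highlights it; nothing
            # executes until the activation button is fixated.
            self.highlighted = element

sel = TwoStepGazeSelector()
sel.on_fixation("play")       # highlights "play"
sel.on_fixation("search")     # re-highlights "search" instead
sel.on_fixation("activate")   # executes the highlighted "search"
```

Compared with a dwell-time activator, the extra fixation costs a little speed but removes the pressure to keep the eyes moving.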

RELATED
The GazeGroup
(The individuals mentioned below may currently be working elsewhere, but they remain involved in gaze research in some way.)

GazeGroup Research Areas

COGAIN (Communication by Gaze Interaction)

ACM CHI Conference Articles
San Agustin, J., Skovsgaard, H., Hansen, J. P., and Hansen, D. W. 2009. Low-cost gaze interaction: ready to deliver the promises. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI EA '09. ACM, New York, NY, 4453-4458. DOI= http://doi.acm.org/10.1145/1520340.1520682
San Agustin, J., Hansen, J. P., Hansen, D. W., and Skovsgaard, H. 2009. Low-cost gaze pointing and EMG clicking. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI EA '09. ACM, New York, NY, 3247-3252. DOI= http://doi.acm.org/10.1145/1520340.1520466 
Tall, M., Alapetite, A., San Agustin, J., Skovsgaard, H. H., Hansen, J. P., Hansen, D. W., and Møllenbach, E. 2009. Gaze-controlled driving. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI EA '09. ACM, New York, NY, 4387-4392. DOI= http://doi.acm.org/10.1145/1520340.1520671

UPDATE

Eye-controlled games and leisure applications from the COGAIN wiki: http://www.cogain.org/wiki/Leisure_Applications
  • EyeArt - EyeArt eye-drawing program, developed by Andre Meyer and Markus Dittmar, Technical University of Dresden, Applied Cognitive Research Unit, Germany.
  • GazeTrain - Gaze-controlled action oriented puzzle game, developed by Lasse Farnung Laursen, Technical University of Denmark
  • Puzzle - Simple puzzle game that can be played with eye movements, developed by Vytautas Vysniauskas, Siauliai University, Lithuania
  • Road to Santiago - Gaze-controlled adventure game (full game), developed by Javier Hernandez Sanchiz, Universidad Publica de Navarra, Spain
  • Snap Clutch - An application that uses eye gaze data to generate key and mouse events for playing games such as World of Warcraft and Second Life.
  • ASE: Accessible Surfing Extension for Firefox - Follow this link to access ASE, an Accessible Surfing Extension for Firefox, developed by Emiliano Castellina and Fulvio Corno at Politecnico di Torino. (Note that this is a beta version.)
  • Eye Gaze Music (SAW Selection Sets) - Point and Play – eye gaze (direct pointing) musical activities, developed by DART. Please note that the SAW (Special Access to Windows) framework application is needed to play these 15 music selection sets. SAW is available for free at http://www.oatsoft.org/Software/SpecialAccessToWindows
  • EyeTube - Gaze interaction for YouTube - Follow this link to get more information and download EyeTube at ITU GazeGroup's web pages
  • Eye3D and other head eye mouse software - Eye3D for education, and a collection of links to free software that works with head or eye mouse. Includes links to downloads and original sites.
  • Gaze-controlled Breakout - Follow this link to access a modified version of the LBreakout2 game which can be operated by an SMI eye tracker, developed by Michael Dorr et al. at University of Luebeck
  • Oleg Spakov's Freeware games for MyTobii - Follow this link to access MyTobii compatible games developed by Oleg Spakov, University of Tampere, Finland
  • Free ITU Gaze Tracker and applications - Download a webcam based open-source gaze tracker and several applications that work with it, developed at IT University of Copenhagen
  • GameBase - Check out the Eye-Gaze Games category at the SpecialEffect GameBase!
  • More information about Gaze-Controlled Games - Follow this link to see a list of online information resources on using gaze for the control of games and other leisure applications

Nov 1, 2010

3D Browser-based Science Games from Muzzy Lane: The ClearLab Project

"ClearLab is a project to create innovative 3D science games for middle school students. ClearLab games will be immersive and educational, and can be played in the browser - at school, the library, at home - anywhere with access to the internet. Teachers will be able to assign, manage and assess student game play from the web."


"ClearLab is being developed by Muzzy Lane Software, Inc., in partnership with the Federation of American Scientists, curriculum developers from K12, Inc. and science teachers around the country. The project's primary goal is to develop games that improve student performance on standardized assessment and that foster lifelong passion for science."


The ClearLab Project is funded by DARPA.  It is an open development project.


Thanks to Eliane Alhadeff, of Serious Games Market, for the link!


ClearLab Blog



Cross-posted on the TechPsych blog.

Open-source Eye-tracking: The ITU Gaze Tracker 2.0 Beta Via Martin Tall, NUI-Group Member

I came across the first version of the open-source ITU Gaze Tracker on the NUI Group forum in April of 2009 and played around with it a bit.  I was impressed.  I'm happy to say that the new version looks even better, although I haven't had the time to try it out.  Below are two recent videos that will give you a better understanding about gaze tracking.  


For the tech-curious, make sure you take the time to view the second video!  Links to info & code are below.


GT2 High speed remote eye tracking "Pushing the limits"


Technical Demonstration


Info about the ITU Gaze Tracker 2.0 Beta from the NUI Group forum, posted by Martin Tall:



Introducing the ITU Gaze Tracker 2.0 Beta
"We’ve made great progress since the initial release; today we open the doors for version 2.0. Internally we’ve rewritten major parts of the platform to gain flexibility and higher performance.  The first version was DIY playtime; this version is nothing short of a screamer. High-performance, very accurate tracking. People are telling us we are crazy giving it away, but we’re dedicated to the mission: accessible eye tracking for all, regardless of nationality and means. We’re making it happen."
Important highlights for GT2.0b:
- Supports three modes of operation: head-mounted, remote mono/binocular
- Vastly improved performance: 500+ fps head-mounted, 170+ fps remote binocular (both eyes)
- Awesome accuracy: avg. 0.3 - 0.7 degrees of visual angle (remote binocular)
- New UI, looks so.. 2010
- Automatic tuning (optimization of algorithm parameters)
- Relatively low CPU utilization and memory footprint (12%, 170 MB; Core i7 860, Win7 64-bit)
- Many enhancements, bug fixes, etc.

Unlocking the Future of Cities through Multi-Touch Interactive Visualization at RENCI (UNC-Charlotte)

Here is a link to an article that was in the SciTech section of my morning paper today!


Unlocking the Future of Cities:  UNCC scientists work across disciplines to predict how urban areas will use open land. Tyler Dukes, Charlotte Observer, 10/31/10


"As part of a three-year, $286,000 grant from the National Science Foundation, the group of scientists from UNC Charlotte is researching the complex relationship between the Queen City and its surrounding forest and pastoral lands. Using a combination of social, natural and computer science, they're working to build an interactive map-based simulation capable of showing the impact of future development and policy on land use....It's a project requiring Meentemeyer's team to peel back multiple layers of cultural and economic values surrounding land in the South. The research will have implications beyond the Charlotte area...By allowing the public to explore those possibilities visually on anything from a laptop to a touch-screen table, the research team is hoping its work will mean more informed decisions about how people use the land around them."  -Charlotte Observer




Image Source: Charlotte Observer


Wouldn't this be a great tool to use to support collaborative learning projects in the schools?


RELATED
RENCI at UNC-Charlotte has a Multi-touch Table in the Visualization Center
RENCI Visualization Center Update
Visualization Resources at RENCI UNC-Charlotte
RENCI at UNC Charlotte
Multi-Touch at RENCI
Research by Touch:  RENCI Multitouch Table Gives Computer Science Research an Intuitive Interface



Oct 31, 2010

Microsoft is acquiring Canesta, Inc., a developer of 3-D electronic perception technology for natural user interaction, gaming, and more.

Microsoft to Acquire 3-D Chip Firm Canesta
Michael Baron, TheStreet 10/29/10

Thanks to Harry Van Der Veen, of NUITEQ, for this link!

RELATED
The following video is from the Canesta3D YouTube channel. It demonstrates the 3D input sensor in action, with four people moving around in a living room. The chip used in the system depicted in the video was the precursor to the current chip, called the "Cobra 320x200".


Below is a demo of gesture interaction using Canesta3D technology to control and select information and content on a large display.  In my opinion, this will change the way we interact with our TVs, at least for those of us who hate using bad remotes!  Microsoft's acquisition of Canesta is good news, especially if the technology is made available to the masses.   I'm pretty sure it is capable of supporting interaction with internet-ready HD TVs, and of supporting GoogleTV, Leanback, and Vimeo's Couch Mode.




Canesta Announces Definitive Agreement to be Acquired by Microsoft
Press Release, 10/29/10, Canesta

About Canesta (From the Canesta website)
"Canesta (www.canesta.com) is the inventor of revolutionary, low cost electronic perception technology and leading provider of single chip CMOS 3-D sensors that fundamentally change the relationship between devices and their users. This capability makes possible true 3-D perception as input to everyday devices, rather than the widely understood 3-D representational technologies as output. Canesta’s 3-D input technology, based upon tiny, CMOS 3-D imaging chips or “sensors”, enables fine-grained, 3-dimensional depth-perception in a wide range of applications. Products based on this capability can then react on sight to the actions or motions of individuals and objects in their field of view, gaining levels of functionality and ease of use that were simply not possible in an era when such devices were blind. Canesta’s focus is on mass market consumer electronics, but many applications exist in other markets as well. Canesta is located in Sunnyvale, CA. The company has filed in excess of fifty patents, 44 of which have been granted so far."


Canesta Corporate Fact Sheet (pdf)
Videos: http://canesta.com/applications/consumer-electronics/gesture-controls

I posted some videos about Canesta's technologies in the following post; two of the videos show how Canesta's 3D depth camera works with a Hitachi flat-panel display: Interactive Displays 2009 Conference

For more information about interactive TV, GoogleTV, Leanback and Couch Mode, see the second section of my recent post:
Philipp Geist: Blending the Physical with the Digital;  Google TV/Leanback, Vimeo's new Couch Mode, oh..and ViewSonic's 3D (glasses-less) pocket camcorder...

Technology and Education, a Temporal Approach -Link to Dan Sutch's article, plus info about FutureLab

I'm interested in topics related to school reform and how it impacts the intersection of education/learning and emerging/innovative interactive technologies.  There are many changes going on that will impact the future of education, and I thought I'd devote a post or two to this topic on the Interactive Multimedia Technology and TechPsych blogs.

Over the past few years, I've noticed a recurring theme, sort of a self-perpetuating "myth" - or hope - that if we could just fire/tweak/transform the teachers, and if we just had the right kind of technologies and applications at hand, the multiple problems of education would be solved.  Of course, we know it is much more complicated than throwing innovative technologies, teaching strategies, and new "highly qualified" teachers into the educational mix!

On this note, I'd like to share a link to an article written by Dan Sutch on the Flux blog, hosted by FutureLab. (I've included some links to resources from the FutureLab website. I've also added my own "2 cents" on technology and education reform, which can be found at the top of this blog under "My 2-cents: Innovative Technologies, Education Reform" - still in draft form.)


In his article, Dan Sutch touches on some key problems facing education. Like the little boy in the children's story "The Emperor's New Clothes", he points out that the polarizing debates about education blind us to what we really need to think about and understand.  


Dan Sutch,  Flux (FutureLab)  October 7, 2010

Here are the three "meta-functions" of education discussed in Dan's article:

  • How the world is as it is. Which requires exploration of what is already known about the world: knowledge domains, histories, cultural differences etc.  [The past] This only makes sense in relation to how learners
  • Understand their place in the world. This is a focus on the individual, their culture and context, their interests, knowledges and relationships etc. [The present] This then leads to a need to understand
  • How they act within the world and how they can change it. This is about developing personal identities and agency, and the skills to enact them – for themselves, their communities and for wider global challenges. [The future]
By using this framework, it might be possible for us to generate meaningful ways to use technology to support the business/science/art of teaching and learning. This framework might be something for university-level teacher educators to consider.


RELATED AND SOMEWHAT RELATED
About Dan Sutch
"Dan's main research interests are in mobile learning, radical innovation and the role of the teacher in technology-rich learning environments. Dan’s current work involves investigating new models of innovation in the design and application of digital learning resources and the capacity of teachers to act as innovators in the use of digital learning resources."

Panel of Flux Contributors
I encourage you to take a look at the other people who contribute to Flux!  They are on the forefront of education and emerging technologies, and come from a wide range of disciplines.


About FutureLab
FutureLab is an organization located in the UK that focuses on "the way people learn through innovative technology and practice".   The FutureLab website has a wealth of information about interactive and immersive technologies that support- or have the potential to support- learning. 


FutureLab's Free Online Tools
One of many examples is Create-A-Scape:
"Create-A-Scape is a website that provides resources for creating digitally-enhanced learning experiences, using mobile technology to experience location-sensitive sounds and images that have been 'attached to' the local landscape. Can be used right across the curriculum with all age groups." -FutureLab


Links to FutureLab topics, from the home page of the FutureLab website:

Links to FutureLab's current projects: