Sep 11, 2009

Ru Zarin's Interactive Multi-touch Applications for the Cognitively Disabled: Social stories, talking symbols, a piano, and more!

 http://uidresearch.files.wordpress.com/2009/07/multi_touch_trollskogen_1024.jpg


Trollskogen Final Demo from Ru Zarin on Vimeo.

The above video is a demonstration of Ru Zarin's master's degree project at Umeå University.

This project examined ways to use multi-touch technologies to support communication and learning among children aged 5-8 with Down syndrome, autism, and related diagnoses. I especially liked the animated social story application.

"Trollskogen is a communicative framework designed to enhance communication among people with cognitive disabilities. The forest is split up into interactive modules that provide a fun and engaging learning environment while helping improve on certain aspects of speech, reading/writing and symbol based languages. This framework has been deployed on a custom multi-touch table prototype built at the Interactive institute Umeå, enabling the children to interact with their fingers in a more natural, intuitive way rather than a traditional keyboard/mouse setup."


Ru collaborated with the illustrator Linda Arvidsson for the characters and objects found in the troll forest theme.  

I especially liked the Troll Forest Cabin, which is shown in the social stories section of the video.

http://uidresearch.files.wordpress.com/2009/05/sketch_cabin.jpg

Cross-posted on the TechPsych blog.

Sep 10, 2009

Concave Scream's "Soundtrack for a Book": Interactive music and visuals, nice and relaxing on a touch screen!

I found a link to an interesting audio-based interactive website today. Concave Scream's "Soundtrack for a Book" is a promotional website for the band's new album of the same name. I had never heard of this band before, but I'm impressed with the touch-screen interactivity the website provides!

It was awarded the FWA (Favorite Website Award) earlier this year.

 

Sep 7, 2009

Stantum's Mobile Phone Multi-touch Interface: Demonstration of Precise Interactions on a Resistive Touch Screen



For more information, read Paul Miller's Engadget article, "Stantum's mind-blowing multitouch interface on video!"

http://www.blogcdn.com/www.engadget.com/media/2009/02/stantum-top-001.jpg

Paul Miller says:

"The PMatrix multitouch firmware allows for unlimited inputs, detection of any contacting object (a finger, a stylus or even a paintbrush) and pressure sensitivity. We played with the demo unit for a bit and were frankly blown away, it's far and away the best touch experience we've ever seen or felt, and the multitouch functionality is just gravy on top. Stantum is targeting resistive touchscreens because they're still considerably cheaper to build than capacitive ones, and from our perspective there seems to be zero tradeoff -- for sensitivity and accuracy this destroys everything else we've seen on the market, capacitive or not."

How it works: Tech Specs from Stantum:

PMatrix: The unparalleled Multi-Touch IP core and detection firmware (pdf)


"PMatrix™ is the most powerful software engine for Multi-Touch technology. It runs as a chip firmware or software on controllers or any embedded processing devices. PMatrix scans and delivers an exact representation of what’s happening on the touch-panel. The Multi-Touch driver dynamically updates the cursor list, enabling any application to control the Graphical User Interface objects."
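The "cursor list" idea in the quote (each scan of the panel updating a set of tracked touch points) can be sketched roughly. PMatrix is proprietary firmware, so the class name, the matching-by-proximity approach, and the parameters below are all my own hypothetical illustration, not Stantum's implementation:

```python
import math

class TouchTracker:
    """Maintain a list of active touch 'cursors' across sensor scans.

    Hypothetical sketch: on each scan, match new contact points to
    existing cursor IDs by proximity, so a moving finger keeps its ID.
    """

    def __init__(self, match_radius=30.0):
        self.match_radius = match_radius  # max movement (px) per scan
        self.cursors = {}                 # cursor id -> (x, y)
        self._next_id = 0

    def update(self, contacts):
        """contacts: list of (x, y) points from one sensor scan.
        Returns the updated {id: (x, y)} cursor list."""
        new_cursors = {}
        unmatched = list(contacts)
        # Greedily match each existing cursor to its nearest new contact.
        for cid, (cx, cy) in self.cursors.items():
            if not unmatched:
                break
            nearest = min(unmatched,
                          key=lambda p: math.hypot(p[0] - cx, p[1] - cy))
            if math.hypot(nearest[0] - cx, nearest[1] - cy) <= self.match_radius:
                new_cursors[cid] = nearest
                unmatched.remove(nearest)
        # Remaining contacts are new touches: assign fresh IDs.
        for point in unmatched:
            new_cursors[self._next_id] = point
            self._next_id += 1
        self.cursors = new_cursors  # cursors absent this scan are lifted
        return self.cursors
```

A driver built this way can expose the returned dictionary directly to applications, which is essentially what the quote's "cursor list" suggests.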

Stantum's rationale for choosing resistive rather than capacitive touch technology:

Why resistive?
In comparison to capacitive sensing, resistive technology brings:

• Versatility: it detects fingers, gloves, a stylus, and any contact object
• A lower solution cost
• Lower power consumption
• A faster scanning rate
• Lower overall latency
• Linear precision (the same in the center and at the borders)
• Performance unaffected by the size of the touch-panel or the number of touches
• Transparency and soft touch, enabling a user experience equivalent to the best capacitive touch-panels

I'll update this post when I have more information!


RELATED

For in-depth, detailed information about resistive multi-touch technology, read the following posts via Touch User Interface:

How Stantum's Resistive Multi-Touch Screen Works
Pressure Sensitive Multi-touch Surfaces


Interactive Light and Dance Installations using Flexible OLEDs: You Fade to Light

http://www.random-international.com/storage/IMG_2700.jpg?__SQUARESPACE_CACHEVERSION=1252332787714

Dance and Interactive Light Installation: You Fade to Light / edit 1 (rAndom International, Kristin Knappstein, Royal Philips Electronics' Lumiblade team)

You Fade To Light / edit 1 from rAndom International on Vimeo.

"Lumiblade OLEDs are large area diffuse light sources made from extremely thin glass and feature a perfect mirror finish. OLEDs are very close to the quality of natural light."

Installation and Art Direction by rAndom International
Choreography: Laïla Diallo in collaboration with Khamlane Halsackda
Performance: Laïla Diallo and Khamlane Halsackda
Film Production and Director of Photography: Mark Purnell at Hogarth Worldwide
Related versions can be found on the rAndom International Vimeo site.
Photos of the making of You Fade to Light


Chris O'Shea created the software for You Fade to Light using C++, openFrameworks, and openCV. A hidden camera embedded in the installation fed video to the software for processing, and the software also generated video files as output. The hardware drivers, created by rAndom International, allowed the software to control the brightness of each OLED.
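As a rough illustration of that camera-to-light pipeline, here is a minimal Python sketch (the real software was C++ with openFrameworks and openCV) of the core mapping it implies: downsample a camera frame to the OLED grid and turn each region's average luminance into a panel brightness level. The function name and the grid model are my own assumptions, not details from the installation:

```python
def frame_to_oled_levels(frame, grid_cols, grid_rows):
    """Map one grayscale camera frame to an OLED grid of brightness levels.

    frame: 2D list of 0-255 pixel values.
    Each panel's level (0.0-1.0) is the average luminance of the image
    region it covers. Hypothetical sketch, not the actual installation code.
    """
    height, width = len(frame), len(frame[0])
    cell_h, cell_w = height // grid_rows, width // grid_cols
    levels = []
    for row in range(grid_rows):
        level_row = []
        for col in range(grid_cols):
            total = 0
            for y in range(row * cell_h, (row + 1) * cell_h):
                for x in range(col * cell_w, (col + 1) * cell_w):
                    total += frame[y][x]
            level_row.append(total / (cell_h * cell_w) / 255.0)
        levels.append(level_row)
    return levels
```

Each level in the returned grid would then be handed to the panel driver, which is the role the rAndom International hardware drivers played in the real piece.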

Chris is the author of the Pixelsumo blog and is also behind This Happened, a "series of events focusing on the stories behind interaction design", which aims to encourage interaction designers to be more open about their methods and ideas.

For more information about interactive OLEDs, read the NY Times article:

Panels of Light Fascinate Designers

Eric A. Taub, New York Times, 9/6/09
http://graphics8.nytimes.com/images/2009/09/07/business/bulb_600.jpg

Thanks to Celine Latulipe for the link to the NY Times article!

Multi-touch, multimedia, multi-modal: Fujitsu LIFEBOOK T4310 has a multitude of possibilities!

Multi-touch, multimedia, multi-modal... the Fujitsu LIFEBOOK T4310 looks like it provides multiple possibilities for people from all walks of life.

-GestureWorks

The LIFEBOOK T4310 comes with an integrated web cam and fingerprint reader, and a variety of I/O options, including HDMI, USB, Firewire, BlueTooth,  LAN, analog video output, SD card reader, line in/out, a wireless switch for the integrated 3G and UMTS, and an express card reader. There is an optional modular bay that can accommodate an additional drive or battery.


The fun part is that the Fujitsu LIFEBOOK T4310 comes with Microsoft's Touch Pack applications, which are demonstrated in the video below:
  • Microsoft Surface Globe
  • Surface Collage
  • Microsoft Rebound touch game
  • Surface Lagoon screensaver, which provides a water-ripple effect and little fish that respond to touch interaction.



As demonstrated in the video, the LIFEBOOK supports gesture, multi-touch, stylus, and traditional keyboard interaction. The capacitive display has a bi-directional hinge, allowing it to be turned 360 degrees and positioned flat, facing up over the keyboard. (This feature would be welcome in educational settings, as it could support paired and group collaborative learning activities.)


What I like about the LIFEBOOK is that it has an integrated ambient light sensor, which automatically adjusts the brightness of the display according to the level of light in the environment. This feature is important for people like me who are on the go and must use their computing devices under a range of lighting conditions.
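A sketch of how such ambient-light-driven adjustment typically works: interpolate the backlight level between a dim-room floor and a bright-room ceiling, clamped at both ends. The function name and every threshold value below are illustrative assumptions, not Fujitsu's actual tuning:

```python
def auto_brightness(ambient_lux, low_lux=10.0, high_lux=500.0,
                    min_level=0.2, max_level=1.0):
    """Map an ambient-light sensor reading (lux) to a display brightness
    level between min_level and max_level.

    Hypothetical sketch: linear interpolation between a dim floor and a
    bright ceiling, clamped at both ends so the screen never goes fully
    dark indoors and never exceeds full brightness in sunlight.
    """
    if ambient_lux <= low_lux:
        return min_level
    if ambient_lux >= high_lux:
        return max_level
    fraction = (ambient_lux - low_lux) / (high_lux - low_lux)
    return min_level + fraction * (max_level - min_level)
```

Real implementations usually also smooth the sensor reading over time so the backlight doesn't flicker when a shadow passes over the sensor.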


I would love to get my hands on the Fujitsu LIFEBOOK T4310 for a month and test the system in depth in my day job as a school psychologist, and in my leisure pursuits as a UX designer/developer/musician/gamer/etc. "hobbyist".


Can anyone make this dream come true?


RELATED:
Fujitsu's multitouch LIFEBOOK T4310 tablet makes quick work of Microsoft's Touch Pack 
(Paul Miller, Engadget, 9/6/09)
Microsoft Touch Pack Gallery (Engadget)
Engadget's Reader Comments


I visited Fujitsu's website to gather additional information, and found a couple of press releases in German:


IFA 2009: Fujitsu setzt auf Fingerspitzengefühl und präsentiert Notebook mit Multiple-Touch-Funktion (pdf) (in English: "IFA 2009: Fujitsu focuses on fingertip feel and presents a notebook with multi-touch function")


Here is the press contact:


Fujitsu Technology Solutions
Melanie Wolf, Senior PR Manager
Tel.: + 49 (0) 89 62060 4458, Mobile: + 49 (0) 171 33 42 882,
Melanie.wolf@ts.fujitsu.com

Sep 6, 2009

Oblong's g-speak Spatial Operating Environment: Gesture interaction, massive datasets, film production, and more.


g-speak overview 1828121108 from john underkoffler on Vimeo.

What is g-speak?


From the Oblong website:  "Spatial semantics at the platform level"

"Every graphical and input object in a g-speak environment has real-world spatial identity and position. Anything on-screen can be manipulated directly. For a g-speak user, "pointing" is literal."


"The g-speak implementation of spatial semantics provides application programmers with a single, ready-made solution to the interlocking problems of supporting multiple screens and multiple users. It also makes control of real-world objects (vehicles, robotic devices) trivial and allows tangible interfaces and customized physical tools to be used for input."


"The g-speak platform is display agnostic. Wall-sized projection screens co-exist with desktop monitors, table-top screens and hand-held devices. Every display can be used simultaneously and data moves selectively to the displays that are most appropriate. Three-dimensional displays can be used, too, without modification to application code."
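The "pointing is literal" claim above can be grounded with a small geometric sketch: if the hand defines a ray in room coordinates and a display is a plane in those same coordinates, the pointed-at spot is simply a ray-plane intersection. This is a generic illustration of the idea, not Oblong's actual g-speak code:

```python
def ray_hits_screen(origin, direction, screen_point, screen_normal):
    """Return the 3D point where a pointing ray meets a display plane,
    or None if the ray is parallel to it or points away from it.

    Hypothetical sketch of 'literal pointing': the screen is modeled as
    a plane (a point on it plus its normal vector), and the hand as a
    ray from `origin` along `direction`, all in room coordinates.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    denom = dot(direction, screen_normal)
    if abs(denom) < 1e-9:          # ray parallel to the screen plane
        return None
    diff = [p - o for p, o in zip(screen_point, origin)]
    t = dot(diff, screen_normal) / denom
    if t < 0:                      # screen is behind the pointer
        return None
    return tuple(o + t * d for o, d in zip(origin, direction))
```

With every screen registered as a plane like this, the same pointing gesture works across wall projections, desktop monitors, and table-top displays, which is what the display-agnostic claim amounts to.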

Origins of Oblong

g-speak was born at the MIT Media Lab, and Oblong was started in 2006. The work behind g-speak's gestural I/O began over 15 years ago. For more information, read g-speak in slices.

Oblong developed Tamper, a prototype for film production built on top of the g-speak system. Below is the demo. At 0:08, the video displays sketches of the gestures used in g-speak.



I hate wearing gloves, but I'd gladly put them on to play with the system for a few days!