Nov 17, 2008

Interactive Information Visualization: Flare & Flex; Visualization Links from Crisis Fronts

A big part of interactive multimedia is visualization, and with the latest tools, interesting things are happening!

Flare is a visualization toolkit for the web. It is an ActionScript 3 library built with Adobe's Flex SDK (which includes an ActionScript 3 compiler) and Flex Builder, and the resulting applications run in the Adobe Flash Player.

It was developed by the University of California, Berkeley Visualization Lab; the lab's website contains a wealth of resources and information about its projects and presentations.

Additional information, including tutorials, source code, sample applications, API documentation, and a help forum, can be found on the Flare website.

An interactive visualization created with Flare.


Here are some cool links about data visualization, via Sebastian Misiurek, of the Crisis Fronts: Cognitive Infrastructures blog:

Infosthetics

Wordle

Simple Complexity

Strange Maps

Sebastian also recommends the following papers (pdf):

Information Aesthetics in Information Visualization


Artistic Data Visualization: Beyond Visual Analytics


I especially like the description of the Crisis Fronts project:

"Crisis Fronts is the Degree Project studio and seminar run by Michael Chen and Jason Lee, with Gil Akos and Ronnie Parsons at Pratt Institute’s School of Architecture.

Crisis Fronts is an ongoing inquiry into contemporary global crises that suggest new demands and agendas for architecture, and the potential afforded by parametric and generative digital design tools to engage them."

Nov 16, 2008

OpenFrameworks & Interactive Multimedia: Funky Forest Installation for CineKid

The Funky Forest was created by Emily Gobeille and Theodore Watson for the 2007 CineKid festival in the Netherlands, using openFrameworks, an open-source C++ toolkit used to build multimedia and multi-touch applications. Take a look at the video and pictures of the children interacting with this technology!

"It “is a wild and crazy ecosystem where you manage the resources to influence the environment around you. Streams of water flowing on the floor can be diverted to make the different parts of the forest grow. If a tree does not receive enough water it withers away but by pressing your body into the forest you create new trees based on your shape and character. As you explore and play you discover that your environment is inhabited by sonic life forms who depend on a thriving ecosystem to survive.”

The trees and creatures in the installation look really beautiful; just abstract enough to make it look like a strange magical forest, but the processes of our real ecosystems are still recognisable. A really wonderful project. And it sure looks like a lot of fun!" -Tanja, from the TakeBigBites blog

Every Surface a Computer: "Scratch Input" Captures Finger Input on Surfaces Using Sound. Video by Chris Harrison and Scott Hudson - UIST '08

Chris Harrison and Scott Hudson, from the Human-Computer Interaction Institute at Carnegie Mellon University, presented their latest research at the UIST '08 conference. Take a look at the video below to see how gestures that produce sound on unpowered surfaces can be transformed into finger input, using a stethoscope-based sensor and filters:



Yes, every surface is a computer!
(Even your pants...)

For detailed information, read the paper presented at UIST '08 by Chris Harrison and Scott E. Hudson:
Scratch Input: Creating Large, Inexpensive, Unpowered, and Mobile Finger Input Surfaces
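The core signal-processing idea is simple enough to caricature: scratching a surface produces bursts of acoustic energy, which can be segmented by watching the signal's amplitude envelope. The sketch below is a toy Python illustration of that idea, not the authors' implementation; the envelope window, threshold, and synthetic waveform are all invented for the example.

```python
# Toy sketch of acoustic gesture segmentation in the spirit of Scratch Input:
# count amplitude peaks in a signal envelope to distinguish, say, one scratch
# stroke from two. Thresholds and helpers are illustrative, not from the paper.

def envelope(samples, window=8):
    """Rectify the signal and smooth it with a moving average."""
    rect = [abs(s) for s in samples]
    out = []
    for i in range(len(rect)):
        lo = max(0, i - window + 1)
        out.append(sum(rect[lo:i + 1]) / (i + 1 - lo))
    return out

def count_strokes(samples, threshold=0.3):
    """Count how many times the envelope rises above the threshold."""
    strokes, above = 0, False
    for v in envelope(samples):
        if v > threshold and not above:
            strokes += 1
            above = True
        elif v <= threshold:
            above = False
    return strokes

# Synthetic input: two bursts of "scratch" energy separated by silence.
burst = [0.9, -0.8, 0.7, -0.9, 0.8, -0.7] * 4
quiet = [0.01, -0.02] * 20
signal = burst + quiet + burst
print(count_strokes(signal))  # 2 distinct strokes
```

The real system works from live audio and much smarter filtering, of course; the point is just that "unpowered" input needs nothing more than a microphone and some envelope logic on the receiving end.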

RELATED:

The Best Paper Award at UIST '08 was "Bringing Physics to the Surface", by Andrew Wilson, of Microsoft Research, and Shahram Izadi, Otmar Hilliges, Armando Garcia-Mendoza, and David Kirk, of Microsoft Research, Cambridge.

Here is the abstract:

"This paper explores the intersection of emerging surface technologies, capable of sensing multiple contacts and of-ten shape information, and advanced games physics engines. We define a technique for modeling the data sensed from such surfaces as input within a physics simulation. This affords the user the ability to interact with digital objects in ways analogous to manipulation of real objects. Our technique is capable of modeling both multiple contact points and more sophisticated shape information, such as the entire hand or other physical objects, and of mapping this user input to contact forces due to friction and collisions within the physics simulation. This enables a variety of fine-grained and casual interactions, supporting finger-based, whole-hand, and tangible input. We demonstrate how our technique can be used to add real-world dynamics to interactive surfaces such as a vision-based tabletop, creating a fluid and natural experience. Our approach hides from application developers many of the complexities inherent in using physics engines, allowing the creation of applications without preprogrammed interaction behavior or gesture recognition."
Preparation for the Internet of Surfaces & Things?




(Cross-posted on the Technology-Supported Human World Interaction blog)

Nov 15, 2008

Multi-touch and Flash: Links to resources, revisiting Jeff Han's TED 2006 presentation

Even with the recent surge of interest in systems that support multi-touch, multi-user multimedia interaction, there is still a need for creative, tech-savvy types to develop innovative applications. Why? This technology has the potential to make a powerful impact on how people learn, communicate, solve "big picture" problems, and do their various jobs.

CNN's Magic Wall was one of the first applications to gain the attention of the masses, as it was used as an interactive map during the US presidential election process. Touch-screen interaction gained even more notice after the recent SNL parody by Fred Armisen.

If you think about it, the multi-touch applications you see on the news aren't much different from what you'd get with a "single-touch" program.

Fancy, yes. Truly innovative, no.

Just imagine a 3D multi-touch, multi-user, multimedia version of Google Search. I did. I put my sketches in my idea book and hurt my brain thinking about how it could be coded.

Jeff Han, the man behind Perceptive Pixel and CNN's magic wall, had much more up his sleeve when he demonstrated his work at TED 2006. Even if you've previously seen this video, it is worth looking at again. (I've provided a link to the transcript below.)



Transcript of Jeff Han's TED 2006 Presentation

This video presentation had a transformational effect on me when I watched it for the first time. Jeff Han brought to life ideas similar to my own as a beginning computer student thinking about collaborative educational games and multimedia applications that could be played on interactive whiteboards.

Here are some selected quotes from the video:

"
I really really think this is gonna change- really change the way we interact with the machines from this point on."

"
Again, the interface just disappears here. There's no manual. This is exactly what you kind of expect, especially if you haven't interacted with a computer before."

"Now, when you have initiatives like the hundred dollar laptop, I kind of cringe at the idea that we're gonna introduce a whole new generation of people to computing with kind of this standard mouse-and-windows pointer interface. This is something that I think is really the way we should be interacting with the machines from this point on. (applause)"

"Now this is going to be really important as we start getting to things like data visualization. For instance, I think we all really enjoyed Hans Rosling's talk, and he really emphasized the fact that I've been thinking about for a long time too, we have all this great data, but for some reason, it's just sitting there. We're not really accessing it. And one of the reasons why I think that is, is because of things like graphics- will be helped by things like graphics and visualization and inference tools. But I also think a big part of it is gonna be- starting to be able to have better interfaces, to be able to drill down into this kind of data, while still thinking about the big picture here."

So now what?

A recent post by "Alex", on the AFlex World blog, discusses a few solutions. Alex had a chance to meet with Harry van der Veen and Pradeep George from the NUI Group, and Georg Kaindl, a multi-touch interaction designer from the Technical University of Vienna. The focus of the discussion was to come up with ideas to encourage Adobe/Flash designers and developers to learn more about multi-touch technology and interaction, and take steps to create innovative applications.

I especially like the following quote from the post:

"...A quick quote from our conversations: “When our children will walk up to a display, they will touch it and expect to do something.”

As a techie and a school psychologist, I see an immediate need for innovative applications. I know that there is a built-in market in the schools, at least for low-cost applications. Despite economic constraints, many school districts continue to invest in interactive whiteboards (IWBs). They are cropping up in preschool and K-12 settings, and teachers are searching for more than what's currently available.

Interactive, collaborative applications are needed in fields such as health care, patient education, finance & economics, urban planning, civil engineering, travel & tourism, museums & exhibitions, special events, entertainment, and more.

Smart Technologies, the company behind SmartBoards, has a new interactive multi-touch, multi-user table designed for K-6 education, the Smart Table. Hewlett Packard has several versions of the TouchSmart PC, which can support at least dual-touch, if not multi-touch, multi-user applications. There are numerous all-in-one large screen displays on the market that support multi-touch and multi-user interaction.

Quotes from Harry van der Veen, of Multitouch NL:

"In 10 years from now when a child walks up to a screen he expects it to be a multi-touch screen with which he can interact with by using gestures."

"...multi-touch screens will be as common as for children is the internet nowadays, as common as mobile phones are for us."


Here is a quote from a conversation I had with Spencer, who blogs at TeacherLED.

"It was interesting this week as I was in a classroom with a teacher who I've not worked with before... he had 2 students using the whiteboard who kept touching it together by mistake. The teacher, exasperated, said to himself, "Why can't they make these things to accept 2 touches without going crazy!"

Proof of the demand! I think you are right when teachers spot the limitations and then see the technology on visits to museums, that might stimulate demand."


Spencer creates cool interactive Flash mini-applications, mostly for math, that teachers (and students) love to use on interactive whiteboards. (He's interested in multi-touch, too.)


So what are we waiting for?!

Related:
Natural User Interface Europe AB meets Adobe
Georg's Touche Framework
NUI Group
TeacherLED
Interactive Touch-Screen Technology, Participatory Design, and "Getting It".
Hans Rosling's 2007 TED talk

Nov 13, 2008

RENCI at Duke University: Multi-Touch Collaborative Wall and Table utilizing TouchLib; More about UNC-C's Viz lab...

RENCI is a multi-disciplinary collaboration between several universities in North Carolina, with centers located at the Europa Center, Duke University, N.C. State, UNC Chapel Hill, East Carolina University, UNC-Asheville, UNC-Charlotte, and the Health Sciences Library at UNC-Chapel Hill. Many of the centers focus on visualization and collaborative technologies, and have been involved in multi-touch "surface" computing.

The pictures below are from the RENCI center at Duke University:


Duke Multi-Touch Collaborative Wall

The multi-touch wall is 13 x 5 feet, utilizes six high-definition projectors for a combined resolution of 5760 x 2160, and supports multiple users. According to information on the RENCI website, the design is scalable and applicable to non-flat surfaces. The wall system runs on Windows and Linux.

Duke Multitouch Wall. (Photo credit: Josh Coyle)

The Wall is positioned at the end of the primary collaboration space. (Photo credit: Josh Coyle)

DI, or Direct Illumination, is used for touch detection in both the wall and the table. A separate instance of Touchlib runs for each of the eight cameras used to detect touches, with each camera handled separately for image processing and blob tracking. A gesture engine then interprets the information about touches on the screen as gesture events.
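Since each camera is tracked by its own Touchlib instance, the blobs each instance reports have to be merged into a single wall-wide coordinate space. The sketch below shows one plausible way to do that tile mapping in Python; the 4 x 2 camera layout and the normalized-coordinate convention are my assumptions for illustration, not details from RENCI's code.

```python
# Hypothetical sketch of stitching touch points from several per-camera
# trackers into one wall-wide coordinate space.

CAMERA_GRID = (4, 2)  # assumed: 8 cameras tiling the wall, 4 columns x 2 rows

def to_wall_coords(camera_index, x, y):
    """Map a blob's normalized (0..1) position within one camera's view
    to normalized coordinates across the whole wall."""
    cols, rows = CAMERA_GRID
    col = camera_index % cols
    row = camera_index // cols
    wall_x = (col + x) / cols
    wall_y = (row + y) / rows
    return wall_x, wall_y

# A touch in the middle of camera 0 lands in the wall's upper-left tile...
print(to_wall_coords(0, 0.5, 0.5))  # (0.125, 0.25)
# ...while the same local position on camera 7 maps to the lower-right tile.
print(to_wall_coords(7, 0.5, 0.5))  # (0.875, 0.75)
```

A real system would also have to deal with lens distortion and blobs that straddle two cameras' overlapping views, which is presumably where the gesture engine's unified view of the wall earns its keep.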

Direct Illumination (DI)

Graphics from the RENCI Vis Group Multi-Touch Blog

The Duke Multi-Touch Wall System

Here is a cool picture of the "Multi-touch Calibration Device", which uses a built-in Touchlib utility.

Calibrating using the utility built into TouchLib.

Additional information can be found on the RENCI Vis Group Multi-Touch Blog.

FYI

Touchlib is a multi-touch development kit that can be found on the NUI-Group website.

"Touchlib is a library for creating multi-touch interaction surfaces. It handles tracking blobs of infrared light, and sends your programs these multi-touch events, such as 'finger down', 'finger moved', and 'finger released'. It includes a configuration app and a few demos to get you started, and will interace with most types of webcams and video capture devices. It currently works only under Windows but efforts are being made to port it to other platforms."

If you are interested in creating your own multi-touch table, the NUI-Group website and forums are a great place to start.

Related:

If you follow my blog, you probably know that I've taken several graduate courses at UNC-Charlotte. Some of my professors and a classmate or two have been involved in some exciting visualization research over the past year. (If you are serious about multi-touch and other visually-based applications, it is worth taking some time to familiarize yourself with visualization and interaction research.)

News from the UNC-Charlotte Vis Center:

At the University of North Carolina at Charlotte, RENCI is a collaboration between the UNC Charlotte Urban Institute, the Center for Applied Geographic Information Science, and the Charlotte Visualization Center.

11/06/2008
Robert Kosara’s group wins two awards at IEEE VisWeek

Caroline Ziemkiewicz and Robert Kosara won Honorable Mention (the second-highest award) at the IEEE InfoVis Conference for their paper, “The Shaping of Information by Visual Metaphors”. Also, Alex Godwin, Kosara’s student, won Best Poster for his submission, “Visual Data Mining of Unevenly-Spaced Event Sequences”.

The Vis Center is pretty fascinating, as you can see by the group of visitors at an open house.

If you are just as fascinated by this stuff as the guys in the picture, here are links to some recent papers by UNC-Charlotte faculty affiliated with the Vis Center:

The Shaping of Information by Visual Metaphors (Caroline Ziemkiewicz and Robert Kosara)

Evaluating the Relationship Between User Interaction and Financial Visual Analysis (Dong Hyun Jeong, Wenwen Dou, Felesia Stukes, William Ribarsky, Heather Richter Lipford, Remco Chang)

Visual Analytics for Complex Concepts Using a Human Cognition Model (Tera Marie Green, William Ribarsky, and Brian Fisher)

Nov 6, 2008

Multi-Touch News from WinHEC and PDC

I received the following videos and links from Anthony Uhrick, who happens to be at WinHEC this week and was at PDC 2008 last week. Touch screen, multi-touch, and gesture technology is starting to catch on. (Anthony is the VP of sales for NextWindow, the company that created the touch screen for the HP TouchSmart and other multi-touch enabled displays.)

Below is a video clip of a multi-touch photo presentation system running on Windows 7. It combines gesture and touch input, and includes gesture and physics engines.


Apparently the application can run on Vista, Win 7, and Win 7 Touch.

Here is an HP TouchSmart PC, running a Touch Map application on Windows 7:


The following clip is of a newscaster using a multi-touch transparent screen. The display is from U-Touch Ltd., a partner of NextWindow. In my opinion, the application enhances the viewer's understanding of the various news topics, and is visually appealing as well.


The graphics engine used in this application was developed by Vizrt, the same folks who were behind CNN's video hologram. Here are a few pictures from the Vizrt website:








The workflow behind the CNN hologram

The holograph "transporter" room during setup.

For more videos using Windows 7 apps, see creamhackered's YouTube channel. (Videos appear to be from NeoWin Net.)

Windows 7 Design Concepts and Usability Tests