Showing posts with label HCI. Show all posts

Jul 17, 2009

The new iPhone icons can speak: "Voiceover" makes it accessible to people with vision impairments - Via David Pogue

This is good news. According to David Pogue, in his Pogue's Posts column in the New York Times, "You’d never suspect that the iPhone 3GS, which has no physical keys at all, is one of the easiest smartphones in the world for a blind person to use. But now it’s true, thanks to VoiceOver."

Apple is mindful of people with disabilities. The virtual tour of the new 3GS has a closed-captioned option.




The iPhone 3GS. I want one.

Jul 16, 2009

Interactive Multimedia Technology Themes -Update on Travel Technologies

Over the next month or so I will be re-organizing this blog. I'll be analyzing the various themes that have emerged since I started this on 4/11/06, over three years ago, as part of an assignment for a class about distance education and on-line communication tools.

My first topic was "Games, Simulations, and Virtual Worlds". Although I continue to cover those themes, I now focus mostly on off-the-desktop interactive, collaborative, and emerging technologies that support interaction and activities in public spaces.

One theme that interests me is technology that supports travel experiences. Since I've had the opportunity to travel a great deal (before the economy started to go downhill), I've had a chance to explore this arena as a participant-observer*, and have documented my findings through photographs and video.

It is a joke in my family that if I disappear from the tribe, I can usually be found nearby, poking at an interactive touch screen, photographing something related to technology, or sneaking in a few shots of other people interacting with technology, and sometimes even talking to strangers as they use technology. (I usually ask permission to take pictures of people who are in my view finder, but sometimes they just happen to be in my line of sight.)

It is amazing what an earful you can get about technology as a fellow traveller!

I came across the work of Nanonation when I was on a Royal Caribbean cruise ship, and was a little disappointed with the touch-screen content and interaction around the ship. From what I can tell from the Nanonation website, the applications have been improved somewhat, especially the way-finding application on the Freedom of the Seas:

Wayfinding Application, Freedom of the Seas

Nanonation was also involved in the development of a "Discovery Wall" at the Umpqua bank. This system incorporates tangible icons that sit on a shelf located near the Discovery Wall that trigger an interactive flash presentation on a screen. The icons represent various bank products, and are RFID enabled.

Discovery Wall, Umpqua Bank
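A system like the Discovery Wall can be sketched in a few lines. This is a toy illustration of the general pattern (my own reconstruction, not Nanonation's implementation, which is not public): each tangible icon carries an RFID tag, and when a tag is read, its id is mapped to the presentation to launch. The tag ids and asset names below are hypothetical.

```python
# Hypothetical tag-id-to-content mapping; placing an icon near the
# reader triggers the presentation bound to its tag.
TAG_TO_PRESENTATION = {
    "04:A3:1F": "home_loans.swf",
    "04:B7:22": "small_business.swf",
}

def on_tag_read(tag_id, launch):
    """Launch the presentation bound to the detected tag, if any."""
    asset = TAG_TO_PRESENTATION.get(tag_id)
    if asset is not None:
        launch(asset)   # e.g. start the Flash presentation on the wall
    return asset

launched = []
on_tag_read("04:A3:1F", launched.append)   # icon placed on the shelf
# launched == ["home_loans.swf"]; unknown tags are simply ignored
```

The interesting design point is that the dispatch table, not the reader, carries all the meaning: swapping the content on the wall only requires rebinding tag ids.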

Back to the topic of cruise ship/travel technology:
When I was on the Ruby Princess cruise in December of 2008, I was impressed with the "Movies Under the Stars" set-up. At night, the sunning decks are transformed into outdoor movie-viewing spots, where you can lounge around, basking under the stars as you watch movies on the gigantic silver screen, backed by an excellent sound system.

During the day, the system is used to display games that people play on the Wii, which provides the non-playing sunbathers with additional entertainment.

I recently learned that the Movies Under the Stars system was developed and installed by FUNA, an international company that focuses on marine-related industries, as well as land-based industries.

Take a look at my "Wii-OOH" Flickr set slideshow to see the Wii in action on the large screen of the Ruby Princess, and on smaller screens in the food court of the Concord Mills (NC) mall:

(Note, the mall pictures were taken with my cell phone.)

I want to go back!

HCI Note
*I was trained in the use of participant observation long ago, when I was studying social science and psychology at the University of Michigan. It is a method that was developed early on by anthropologists and sociologists, and adopted later by researchers in other fields. Some human-computer interaction researchers use this method, and related techniques, such as ethnography, in their work.

Jun 6, 2009

Information about Touch Screens, Multi-Touch, & Gesture Interaction is Spreading

Since the news about Windows 7 multi-touch capabilities has spread around, I haven't had enough time to keep up with all of the information related to multi-touch interaction. Fortunately there are a few bloggers out there who are doing a great job filling in the gaps.

The Touch User Interface blog has a wealth of information in the form of pictures, video clips, slides, and links that I'd like to share.

The following slideshow/videos were highlighted in the Touch User Interface blog post, "Touch UI: HCI Viewpoint":

Untold Stories of Touch, Gesture, & NUI

Joe Fletcher, Design Manager, Microsoft Surface

Touch and Gesture Computing, What You Haven't Heard
Dan Saffer



Other posts of interest on the Touch User Interface blog:
Touch screens and vision impairment
Link: Designing the Palm Pre: An Interview with Michelle Koh

Touch User Interface Overview

I've added some information about UX, interactive multimedia, multi-touch, and gesture interaction to my Multimedia and Interaction Resources page, which is a work in progress.


May 20, 2009

xXtraLab's Multi-touch Projects

xXtraLab is an interaction design firm located in Taiwan. The xXtraLab team has been working on some interesting multi-touch projects. Take a look!

Multi-touch wall for briefing and real-time info sharing
Multisensory iTea-table



"xXtraLab Design Co. is one of the leading multimedia company in Taiwan, focusing on the design & engineering of HCI (Human-Computer Interaction) interfaces in museum, exposition, and showrooms (client lists here). Members of xXtraLab come from diversifying fields such as visual design, digital media, architecture, interior design, information engineering, design computing, industrial design, and fine art. we respect different cultural views and work as a multi-disciplinary team to offer inclusive design services."

May 5, 2009

World Builder: Interaction of the Future?

World Builder from Bruce Branit on Vimeo.



If you've seen this 9-minute video, you won't mind taking another look. It was created by Bruce Branit and depicts a man who builds a holographic world sometime in the future. Branit is a visual effects artist who reportedly used the Lightwave 3D graphics platform for post-production.

The music was composed by Randy Skach.

Apr 4, 2009

Put-That-There: Voice and Gesture at the Graphics Interface and more Blasts from the 1980's HCI Past


bigkif's notes about "Put-That-There" give a good description of this video:

Put-That-There at CHI '84

"In 1980, Richard A. Bolt from MIT wrote Put-that-there : voice and gesture at the graphics interface. It was a pioneering multimodal application that combined speech and gesture recognition.

This demo shows users commanding simple shapes about a large-screen graphics display surface. Because voice can be augmented with simultaneous pointing, the free usage of pronouns becomes possible, with a corresponding gain in naturalness and economy of expression. Conversely, gesture aided by voice gains precision in its power to reference."

Richard A. Bolt "Put-That-There": Voice and Gesture at the Graphics Interface
(pdf) SIGGRAPH '80
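The idea in the quote above can be made concrete with a toy sketch. This is my own reconstruction for illustration, not Bolt's implementation; the function names, object names, and coordinates are all hypothetical. Each deictic word in the spoken command is paired with the pointing gesture that accompanied it, so "that" resolves to the nearest object and "there" to the pointed location.

```python
# Toy deictic-reference resolver in the spirit of "Put-That-There".
import math

def nearest_object(objects, point):
    """Return the name of the object whose position is closest to the pointed location."""
    return min(objects, key=lambda name: math.dist(objects[name], point))

def resolve_command(tokens, pointing_events, objects):
    """Pair each deictic word ('that', 'there') with the next pointing event,
    resolving 'that' to an object and 'there' to a raw location."""
    events = iter(pointing_events)
    resolved = []
    for tok in tokens:
        if tok == "that":
            resolved.append(("object", nearest_object(objects, next(events))))
        elif tok == "there":
            resolved.append(("location", next(events)))
        else:
            resolved.append(("word", tok))
    return resolved

objects = {"blue circle": (2.0, 3.0), "yellow square": (8.0, 1.0)}
command = resolve_command(
    ["put", "that", "there"],
    [(2.1, 2.9), (5.0, 5.0)],   # one pointing event per deictic word
    objects,
)
# 'that' binds to the blue circle; 'there' binds to the pointed location
```

Even this tiny version shows the "gain in naturalness and economy" the quote describes: the speech channel alone could never disambiguate "that" and "there", but a simultaneous pointing event makes both trivially precise.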

Here is another blast from the '80's:

Kankaanpaa, A. "FIDS: A Flat-Panel Interactive Display System." IEEE Computer Graphics and Applications, March 1988. (Nokia Information Systems)

"Although the needs and expectations of these various users are very diverse, they all have a common requirement: more natural and easier methods for communicating with the computer than are available today. Furthermore, they do not want to interact with the computer; they want to communicate with the application they are using. They do not want to use computer jargon; they want to use the same natural methods that they use when they perform the same tasks without a computer."

“We believe that only three of the flat-panel technologies described above, namely LCD, EL, and plasma, will be sufficiently advanced for mass production within this decade.”

Bill Buxton was working on multi-touch and gesture interaction in the 1980's, but his dreams did not become a reality until this century, for a variety of reasons. He shared his thoughts about the paradox of the speed of technology in a presentation at the 2008 IEEE International Solid-State Circuits Conference: Surface and Tangible Computing, and the “Small” Matter of People and Design (pdf)

“Carrying on from an earlier thesis in our department (Mehta, 1982), we built a tablet that was sensitive to simultaneous touches at multiple locations, and with the ability to sense the degree of each touch independently (Lee, Buxton & Smith, 1984). We stopped the work in late 1984 when I saw a much better implementation at Bell Labs – one that was transparent and mounted over a CRT. The problem was that they never released the technology, so, the whole multi-touch venture went dormant for 20 years. But, I never stopped dreaming about it. (Lesson: don’t stop your research just because someone else is way ahead of you. It might be transitory, and anyhow, remember the story of the tortoise and the hare.)”

“I spoke earlier about the paradox in the speed of technology development: it goes at rocket speed, but that of a glacier as well; simultaneously! In the perfect world, this would be ideal: we could go through several iterations of ideas so that by the time the new paradigms of interaction, such as Surface and Tangible computing, are ready for prime time, everything will be in place. But, the rapid iteration is more directed at supporting the old paradigms faster and cheaper, rather than helping shape the new ones. The reasons are not hard to understand. From the perspective of circuit design, the problems are really hard. So, one has to have one’s head down working flat out to get anything done. But, there is a side of me that motivated this paper that asks: If it is so hard, then isn’t it worth making sure that the things one is working on are things that are worthy of one’s hard-earned skills?”

SOMEWHAT RELATED

Bill Buxton's Haptic Input References
(pdf)

Mar 18, 2009

More for Multi-touch: NextWindow Plug-in for Natural User Interface's Snowflake Multi-touch Software - and more.



Those of you who have an HP TouchSmart, Dell Studio One PC, or a NextWindow display might be interested in the new NUI plug-in that supports the NUI Suite Snowflake software. Here are the features of the plugin, according to information from the Natural User Interface website:
  • Detailed user manual included with FAQ
  • Developed on fast and reliable C++ platform
  • Intuitive
  • Customizable
  • Gesture recognition library
  • TUIO/OSC (Open Sound Control) support (sending and receiving events)
  • Low level API
  • Hardware accelerated rendering
  • Support for wide variety of media types
  • Advanced window handler that supports scaling and rotation
  • Suitable for Windows® XP and Windows® Vista (Mac OSX and Linux can be developed on request)
  • Audio support
  • Single, dual support
  • Multi-threaded resource handler (For fast data visualization)

"NUI has partnered up with NextWindow™, an international leader in the development of optical multi-touch technology and the manufacturer of optical multi-touch screens, overlays and OEM touch components."

"NextWindow™'s integrated technology allows for natural and intuitive interaction of digital content on flat TFT, LCD and Plasma solutions."

"The NUI NextWindow™ plug-in can be used with any programming language that supports TUIO, i.e. C/C++/C#, Java, Flash, Python, VVVV etc, meaning that software developers can run their own applications on NextWindow™, utilizing the NUI NextWindow™ plug-in."

Comment:
I became a fan of NextWindow touch-screen displays in early 2007 when I worked on a couple of touch-screen projects in my HCI and Ubicomp classes at UNC-Charlotte.


I've been using my HP TouchSmart PC at work with students with disabilities. I'm experimenting with the NUI Suite Snowflake on my TouchSmart, and found that interacting with the Particles application delighted students with severe autism. The activities provided opportunities to establish joint attention. I also noticed an increase in the number of vocalizations and/or verbalizations among the students. Of course, this was NOT a scientific study.

RELATED
Definition of Joint Attention from UConn:

"Joint Attention is the process of sharing one’s experience of observing an object or event, by following gaze or pointing gestures. It is critical for social development, language acquisition, cognitive development…"

http://eigsti.psy.uconn.edu/jt_attn.JPG


Establishing joint attention is an important step in the development of social interaction skills among young people who have autism spectrum disorders.

More about joint attention:

Joint Attention Study Has Implication for Understanding Autism
Science Daily, 9/29/07

Asperger-Advice: Joint Attention

Autism Games: Joint Attention and Reciprocity

Why is joint attention a pivotal skill in autism?
Tony Charman
Philos Trans R Soc Lond B Biol Sci. 2003 February 28; 358(1430): 315–324.
doi: 10.1098/rstb.2002.1199.

Feb 27, 2009

Tangible User Interfaces Part II: More Examples, Resources, and Use for TUI's in Education

In Part I of my "mini-series" about Tangible User Interfaces, I discussed the origins of TUI and provided some examples of Siftables. In this section, I've provided some links to information about Tangible User Interfaces and the abstracts of two articles pertaining to TUI's in educational settings.

Zen Waves: A Digital (musical) Zen Garden



reactable from Nick M. on Vimeo.

Reactable
http://upload.wikimedia.org/wikipedia/commons/e/e3/Reactable_Multitouch.jpg
More about the Reactable
"The reactable hardware is based on a translucent, round multi-touch surface. A camera situated beneath the table, continuously analyzes the surface, tracking the player's finger tips and the nature, position and orientation of physical objects that are distributed on its surface. These objects represent the components of a classic modular synthesizer, the players interact by moving these objects, changing their distance, orientation and the relation to each other. These actions directly control the topological structure and parameters of the sound synthesizer. A projector, also from underneath the table, draws dynamic animations on its surface, providing a visual feedback of the state, the activity and the main characteristics of the sounds produced by the audio synthesizer."


The Bubblegum Sequencer: Making Music with Candy



Jabberstamp: Embedding Sound and Voice in Children's Drawings
(pdf)
(A TUI application to support literacy development in children)

Affective TouchCasting
(pdf)

TapTap: A Haptic Wearable for Asynchronous Distributed Touch Therapy
(pdf)

BodyBeats: Whole-Body, Musical Interfaces for Children
(pdf)

Telestory is a Siftables application that looks like it would be quite useful for supporting children who have communication disorders or autism spectrum disorders.

Telestory Siftables application from Jeevan Kalanithi on Vimeo.

"Telestory is an educational, language learning application created by Seth Hunter. In this video, the child is looking at a television screen. He can control onscreen characters, events and objects with the siftables. For example, he has the dog and cat interact by placing the dog and cat siftables next to each other."
TeleStory Project Website

Here is a video of how Siftables can be used as equation editors:


Siftables Equation Editor from Jeevan Kalanithi on Vimeo.

RESOURCES ABOUT TUI'S:


5 lessons about tangible interfaces, GDC Lyon, December 2007(pdf) Nicolas Nova


Special Issue on Tangible and Embedded Interaction (Guest Editors: Eva Hornecker, Albrecht Schmidt, Brygg Ullmer), International Journal of Arts and Technology (IJART), Volume 1, Issue 3/4, 2008


Reality-Based Interaction: A Framework for Post-WIMP Interfaces (pdf)


Here are a couple of abstracts of articles related to the use of TUI's in education:

Evaluation of the Efficacy of Computer-Based Training Using Tangible User Interface for Low-Functioning Children with Autism Proceedings of the 2008 IEEE International Conference on Digital Games and Intelligent Toys

"Recently, the number of children having autism disorder increases rapidly all over the world. Computer-based training (CBT) has been applied to autism spectrum disorder treatment. Most CBT applications are based on the standard WIMP interface. However, recent study suggests that a Tangible User Interface (TUI) is easier to use for children with autism than the WIMP interface. In this paper, the efficiency of the TUI training system is considered, in comparison with a conventional method of training basic geometric shape classification. A CBT system with TUI was developed using standard computer equipment and a consumer video camera. The experiment was conducted to measure learning efficacy of the new system and the conventional training method. The results show that, under the same time constraint, children with autism who practiced with the new system were able to learn more shapes than those participating in the conventional method."

Towards a framework for investigating tangible environments for learning. Sara Price, Jennifer G. Sheridan, Taciana Pontual Falcao, George Roussos. London Knowledge Lab, 2008

"External representations have been shown to play a key role in mediating cognition. Tangible environments offer the opportunity for novel representational formats and combinations, potentially increasing representational power for supporting learning. However, we currently know little about the specific learning benefits of tangible environments, and have no established framework within which to analyse the ways that external representations work in tangible environments to support learning. Taking external representation as the central focus, this paper proposes a framework for investigating the effect of tangible technologies on interaction and cognition. Key artefact-action-representation relationships are identified, and classified to form a structure for investigating the differential cognitive effects of these features. An example scenario from our current research is presented to illustrate how the framework can be used as a method for investigating the effectiveness of differential designs for supporting science learning"

Jan 24, 2009

Digital Storytelling Platforms and Multiple Perspectives: A look at the work of Jonathan Harris - Food for Thought for Interactive Timeline Design

I'm in the process of creating an interactive timeline, and as I revisited my links and bookmarks, I came across a link to a video of Jonathan Harris discussing his ideas regarding digital storytelling, overlapping threads, and multiple perspectives.

Jonathan explores real-life stories and celebrates the interconnections between events, ideas, feelings, and people. Linear narrative and linear time lines do not do justice to the richness and complexities of human experience.


"Combining elements of computer science, anthropology, visual art and storytelling, Jonathan Harris designs systems to explore and explain the human world."



"Jonathan Harris is redefining the idea of what it means to tell a story. Take a ride through an arctic whale hunt and plunge headfirst into the feelings Harris finds running rampant in cyberspace as he describes what he calls “storytelling platforms.” "

Below are links to two story-telling platforms described in the presentations. The Whale Hunt is organized so that the user can explore the story through a variety of perspectives and interfaces, and at different points in time.

The first screenshot shows how the user can select one of the "cast" members to see how the story unfolds from that person's point of view.





"Every few minutes, the system searches the world's newly posted blog entries for occurrences of the phrases "I feel" and "I am feeling". When it finds such a phrase, it records the full sentence, up to the period, and identifies the "feeling" expressed in that sentence (e.g. sad, happy, depressed, etc.). Because blogs are structured in largely standard ways, the age, gender, and geographical location of the author can often be extracted and saved along with the sentence, as can the local weather conditions at the time the sentence was written. All of this information is saved...The result is a database of several million human feelings, increasing by 15,000 - 20,000 new feelings per day. Using a series of playful interfaces, the feelings can be searched and sorted across a number of demographic slices, offering responses to specific questions..."


Jonathan Harris also presented at the December 2007 EG Conference. The video and related information can be found on the TED website.

Jonathan Harris: The art of collecting stories


If you have some time on your hands, explore Jonathan's Universe project:

http://universe.daylife.com/common/statement-universe.gif
This photo depicts the nine stages of the Universe environment.

"Universe is a system that supports the exploration of personal mythology, allowing each of us to find our own constellations, based on our own interests and curiosities. Everyone's path through Universe is different, just as everyone's path through life is different. Using the metaphor of an interactive night sky, Universe presents an immersive environment for navigating the world's contemporary mythology, as found online in global news and information from Daylife. Universe opens with a color-shifting aurora borealis, at the center of which is a moon, and through which thousands of stars slowly move. Each star has a specific counterpart in the physical world — a news story, a quote, an image, a person, a company, a team, a place — and moving the cursor across the star field causes different stars to connect, forming constellations. Any constellation can be selected, making it the center of the universe, and sending everything else into its orbit."

Universe was created using Processing, an open-source programming environment used by people from all sorts of disciplines to create interesting interactive information visualizations and more. The data used in Universe comes from Daylife. For more information about Daylife, visit the Daylife Labs.

Sidenote:
Jonathan Harris collaborated with Sep Kamvar on the We Feel Fine project. Sep Kamvar teaches classes like "Social Software" and "Computational Methods in Data Mining" at Stanford University. Sep is part of the Stanford Human Computer Interaction (HCI) Group.

Stanford's HCI group's weekly seminars highlight a variety of interesting speakers. Current and previous talks are available via Stanford OnLine. You can link to current presentations and videos from the Human Computer Interaction Seminar website. If you are curious, past presentations/abstracts can be accessed on-line alphabetically or by date, going as far back as 1990.

Dan Saffer, author of Designing Gestural Interfaces, presented "Tap is the New Click" on January 23rd, 2009 at Stanford. You can access the video directly.

Pop! Tech PopCast

Jan 3, 2009

Phenom's All-in-One Touch-Screen Watch Phone: A perfect observational research tool!?



Note: Other pictures from the Phenom site gave a message, "Sorry, Our Photos are Copyrighted", so this is the only photo I was able to obtain. You can see more photos on Phenom's online photo-gallery.

I might have found the ultimate HCI - ubicomp research tool! The Phenom Watch-Phone. It might also come in handy when I'm conducting observations "across settings" in my job as a school psychologist. (Maybe Phenom will give me one for free to test out for a while...I'd be happy to develop some apps for it if it works out for me.)




I was delighted with some of the user-friendly marketing features on the Phenom website. I didn't have to dig, get lost, and dig some more to find what I needed. The above video clip is featured on the home page of the site, and gives a good info-tease about the advantages of the watch. The FAQ section is fairly extensive and easy to navigate.

When you navigate to the "Gadget Freaks" page, you are provided with an audio presentation, with musical accompaniment, as you view pictures and prices. (You can turn off the web page audio if you don't want to listen to the blurb, or turn it on and listen again if you missed something.)

When you click on the picture of the "SpecialOPS Black" version of the watch phone, you are taken to another page where you can inspect different features more closely as you move your mouse around the photo. (Since I have an HP TouchSmart PC, I just moved my finger around the photo - a great effect!)

Here is the description of the SpecialOps phone, taken from the Phenom website:

"The ultimate watch phone for those who like to live on the edge. The SpecialOps is a fully functional GSM cell phone that has a touch screen and an external key pad. The SpecialOps has an MP3&MP4, built-in microphone and speakers, digital and video camera, MicroSD slot and built-in Bluetooth. You can even take notes with your convenient and compact stylus or record your thoughts on the run. See full list of features for more details."

More info from the Phenom website:

Features
-External keypad
-LCD: 1.3 inch TFT260k Pixels
-Touch Screen
-Language: English
-Ring tone: 64 Polyphonic, Supported Formats: Mp3, MIDI, Wave
-Music Format: MP4, Full Screen
-Camera: 130 Pixels
-T-Flash Supported
-Built-in Bluetooth
-Picture Format: JPG, GIF

Basic Functions
-Notebook: 250 Groups
-SMS and MMS Messaging
-User-defined on-off switch
-Game: Picture Mosaic
-Other Function: MP3, MP4, Built-in Speakerphone, Group Messaging MMS, Call Barring

Basic Parameters:
-Network: GSM, GPRS, WAP
-Frequencies: 900/1800/1900MHZ
-Call Time: 2 hours (estimated - per battery)
-Standby Time: 120 hours (estimated - per battery)

Accessories Included
-256MB Micro SD Memory Card
-Data Wire
-Battery
-USB Charger
-Stylus

More about Phenom

Nov 23, 2008

Need for Multi-touch, Multi-user Interactive Multimedia Applications, and the Miracle Question

Last week I received a few comments on my post, "Multi-touch and Flash: Links to Resources; Revisiting Jeff Han's Presentation". I started to respond to a thoughtful comment by Spencer, of TeacherLED, and I wanted to share it as a post:

Spencer is a teacher and instructional technology consultant who develops web-based interactive applications for use on interactive whiteboards (IWB's). He's interested in multi-touch applications for education and has some good insights into what HCI researchers call the "problem space".

Here are Spencer's comments:

..."I agree that Flash could have a very important role to play here. I chose Flash as my development tool because it allows quick development of ideas and then easy distribution of the product. The importance of this is that it allows people who have a profession other than software developer to create software with the insight of their main role. In my case, as a teacher, I can identify things I wish I had and then make them. Often I find that other teachers had the same wish and they then appreciate the product."

"The unfortunate thing with multi-touch is that it is far from the technology most of us outside the industry/research areas have to work with. An app created in Flash for single touch follows the mouse and pointer method so it can be developed easily. When done it can be easily tested on a standard IWB for the feel (which is often surprisingly different on the IWB compared to using a mouse)."

"The Flash developer community has a very experimental and creative characteristic and I’m sure would be a great driving force for multi-touch but first there needs to come a reason for more people to have some sort of multi-touch display for general use, beyond facilitating experiments. When the various operating systems support it and have the apps to make having a supporting display viable then the experimentation and ideas will really flow."

"In addition, the display makers need to recognize the benefits of Flash and ensure they address them. At the moment it seems to be too often an afterthought if considered at all. SDKs and APIs make no reference to Flash or they remain indefinitely in beta for older versions of Flash only."

"It is a pity that all of this will take time. The more time that passes the more single touch IWBs are bought and installed which will delay the uptake of the eventual multi-touch ones. Meanwhile children continue to have to keep reminding themselves that they can only touch the board in one place when it is clear that every bit of their brain is telling them to interact with the board in a much more natural multi-touch way."

My response to Spencer's comment:

Spencer,

You make good points regarding the barriers to getting the multi-touch approach adopted by the "mainstream". You're right about what the commercial display makers need to do. If they want to market displays that will have more appeal, they must think about the different sorts of applications and programming environments that the displays should support.

Display makers also need to think more about the bigger picture - in what sort of environments will the displays be located? Indoors? Outdoors? Near bright sunlight? What about people with disabilities, children, or the elderly?

I can see that in the future, multi-touch displays and other devices would operate within an embedded systems environment and support mobile computing activities as well. There are existing examples of this concept, of course, but there is much room for creative improvement. An embedded systems approach is complex, and would need to handle input from sensors, support multi-modal signal processing, and also provide users with a range of connectivity modes, including RFID. (Data management and storage needs would have to be addressed, along with privacy and security concerns.)

Most importantly, in my opinion, these systems would need to have the flexibility required to support human activities and interactions that have not yet emerged! Certainly this will need to take a multidisciplinary approach.

There are many unanswered questions....How does this fit in with mobile computing and "cloud" computing? What sort of middleware needs to be developed?

Even if we don't have solutions to the bigger problems, there are many smaller problems that I think could be somewhat easily solved.

As you mentioned, many applications that are designed for single-touch screens don't fully support the way people identify, select, and move items around the screen. Although educators access websites every day for use on interactive whiteboards, they are hungry for more. There are not enough websites that are optimized for single-touch interaction, or touch-screen interaction in 3-D "space".

Teachers who are successful users of interactive whiteboards know exactly what we are talking about. They spend quite a bit of time searching for new on-line resources they can use with their students. They know how much the students want to interact with the screen at the same time, and would be so excited to have that capability at their fingertips!

Optimizing websites for touch-screen applications is possible, but this idea hasn't occurred to most web developers. Their jobs don't require it, so there is no incentive. Google is developing FlareBrowser, which can support multi-touch interactions, but according to information on the website, it runs on Mac OS X Leopard (10.5) and nothing else. The present version is bare-bones. I haven't yet tested FlareBrowser.

I think that another barrier to getting multi-touch off the ground is that the people who might have the knack for multi-touch application development simply don't know it! We've mentioned that Flash developers have the potential to create good multi-touch applications. I also think that game developers and designers could make good contributions to the multi-touch movement. Just think about the thought that goes into programming interactions and event handling for 3-D web-based multi-player games!

Yet another barrier is that people who work in lower-tech fields could benefit from collaborative multi-touch applications, but they don't know it, either. The research I've reviewed tells me that multi-touch applications can support a wide range of human endeavors: work, creativity, data analysis, education, collaboration, planning, and so forth.

What is missing is the input of potential end users from a variety of fields. No specific discipline "owns" multi-touch, so it is hard to figure out how we can make this happen.

Could we set up multi-touch technology playgrounds at professional and trade conferences? What about airports and hospital lobbies? Libraries and museums? Shopping centers? Sports events and rock concerts?

This leads me to my next idea, which is jumping ahead a bit:

One of the barriers to the development of multi-touch applications is that it is not easy to gather user requirements when the users are not familiar with the technology.
That is when my "Miracle Question" technique comes into play. I learned this technique when I studied brief solution-focused counseling, and found that, if modified, it can be useful for figuring out user requirements. (The process still needs some fleshing out.)
Why the Miracle Question?
The questions that a developer uses to guide the client during the initial planning stages are very important. Keep in mind that people want to use technology because it meets a need and solves a problem, which is similar to the reason a person might seek counseling.
The Miracle Question technique (actually, a series of questions) might help to tease things out. The goal of this type of questioning is to help the client use their own creativity, resources, and problem-solving skills so they can become effective partners throughout the development cycle.
(People with human-computer interaction training might have an easier time understanding how this technique might be modified and applied to different fields.)

FYI
A good example of the Miracle Question process, as used in therapy and counseling, can be found on the Network of Social Construction Therapies website, in an article written by the late Steve de Shazer:

http://brianmft.talkspot.com/aspx/templates/topmenuclassical.aspx/msgid/366482

There aren't many resources about the use of the Miracle Question in IT or business. Here are a couple:

Solution Focused Management of Unplanned IT Outages (see page 132 and the references)
http://conferences.vu.edu.au/web2006/images/CDProceedings06.pdf
Proceedings of the 7th International We-B (Working for E-Business) Conference, 2006
Katherine O'Callaghan and Sugumar Mariappandar, Ph.D.
School of Business and Informatics, Australian Catholic University

Miracle Question in Executive Coaching
http://www.1to1coachingschool.com/Coaching_Miracle_Question.htm

Nov 16, 2008

Every Surface a Computer: "Scratch Input" Captures Finger Input on Surfaces Using Sound - Video by Chris Harrison and Scott Hudson, UIST '08

Chris Harrison and Scott Hudson, from the Human-Computer Interaction Group at Carnegie Mellon University, presented their latest research at the UIST '08 conference. Take a look at the video below to see how gestures that produce sound on unpowered surfaces can be transformed into finger input, using a modified stethoscope sensor and filters:



Yes, every surface is a computer!
(Even your pants...)

For detailed information, read the paper presented at UIST '08 by Chris Harrison and Scott E. Hudson:
Scratch Input: Creating Large, Inexpensive, Unpowered, and Mobile Finger Input Surfaces
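To give a feel for the core idea, here is a toy sketch of segmenting an audio amplitude envelope into bursts, so that (for example) a double-scratch can be told apart from a single one. The real Scratch Input system uses a modified stethoscope and analog filtering; the threshold and sample format below are invented purely for illustration:

```typescript
// Toy sketch of the idea behind Scratch Input: segment an amplitude
// envelope into bursts, so a double-scratch can be distinguished from
// a single scratch. The threshold and sample format are invented for
// illustration; the real system uses analog sensing and filtering.

function countScratchBursts(envelope: number[], threshold = 0.3): number {
  let bursts = 0;
  let inBurst = false;
  for (const a of envelope) {
    if (a >= threshold && !inBurst) {
      bursts += 1;     // rising edge: a new burst begins
      inBurst = true;
    } else if (a < threshold) {
      inBurst = false; // falling edge: the burst has ended
    }
  }
  return bursts;
}
```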

RELATED:

The Best Paper Award at UIST '08 went to "Bringing Physics to the Surface", by Andrew Wilson, of Microsoft Research, and Shahram Izadi, Otmar Hilliges, Armando Garcia-Mendoza, and David Kirk, of Microsoft Research, Cambridge.

Here is the abstract:

"This paper explores the intersection of emerging surface technologies, capable of sensing multiple contacts and of-ten shape information, and advanced games physics engines. We define a technique for modeling the data sensed from such surfaces as input within a physics simulation. This affords the user the ability to interact with digital objects in ways analogous to manipulation of real objects. Our technique is capable of modeling both multiple contact points and more sophisticated shape information, such as the entire hand or other physical objects, and of mapping this user input to contact forces due to friction and collisions within the physics simulation. This enables a variety of fine-grained and casual interactions, supporting finger-based, whole-hand, and tangible input. We demonstrate how our technique can be used to add real-world dynamics to interactive surfaces such as a vision-based tabletop, creating a fluid and natural experience. Our approach hides from application developers many of the complexities inherent in using physics engines, allowing the creation of applications without preprogrammed interaction behavior or gesture recognition."
Preparation for the Internet of Surfaces & Things?
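To illustrate the kind of mapping the abstract describes, here is a minimal sketch of turning a sensed contact's frame-to-frame movement into a force vector that a physics engine could apply to an object. The stiffness constant and the contact record are my own inventions for illustration; the paper's actual technique models friction and collisions far more richly:

```typescript
// Sketch of the core idea: treat each sensed contact as something that
// exerts force inside a physics simulation. Here a contact's movement
// between frames becomes a friction-like force vector. The stiffness
// constant is invented for illustration, not taken from the paper.

interface Contact {
  x: number;
  y: number;
}

// Force proportional to how far the contact moved since the last frame.
function contactForce(prev: Contact, curr: Contact, stiffness = 10): Contact {
  return {
    x: (curr.x - prev.x) * stiffness,
    y: (curr.y - prev.y) * stiffness,
  };
}
```

A physics engine fed forces like this would let a dragged finger push, spin, or flick virtual objects without any preprogrammed gesture recognition, which is exactly the appeal the authors describe.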




(Cross-posted on the Technology-Supported Human World Interaction blog)