Apr 4, 2009

Put-That-There: Voice and Gesture at the Graphics Interface and more Blasts from the 1980's HCI Past


bigkif's information about "Put-That-There" gives a good description of this video:

Put-That-There at CHI '84

"In 1980, Richard A. Bolt from MIT wrote Put-that-there : voice and gesture at the graphics interface. It was a pioneering multimodal application that combined speech and gesture recognition.

This demo shows users commanding simple shapes about a large-screen graphics display surface. Because voice can be augmented with simultaneous pointing, the free usage of pronouns becomes possible, with a corresponding gain in naturalness and economy of expression. Conversely, gesture aided by voice gains precision in its power to reference."

Richard A. Bolt "Put-That-There": Voice and Gesture at the Graphics Interface
(pdf) SIGGRAPH '80
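The core idea in the quote above is that a deictic pronoun is resolved against where the user was pointing at the moment the word was spoken. A minimal sketch of that resolution step, with all names and data structures invented for illustration (this is not Bolt's actual system):

```python
def nearest_object(scene, point):
    """Return the id of the scene object closest to the pointed-at location."""
    return min(scene, key=lambda oid: (scene[oid][0] - point[0]) ** 2
                                      + (scene[oid][1] - point[1]) ** 2)

def resolve_command(tokens, pointing, scene):
    """tokens: list of (word, timestamp) from speech recognition;
    pointing: dict timestamp -> (x, y) pointing sample on the display;
    scene: dict object id -> (x, y) position.
    Returns (verb, object id, destination)."""
    verb, obj, dest = None, None, None
    for word, t in tokens:
        if word in ("put", "move", "copy", "delete"):
            verb = word
        elif word == "that":          # deictic object reference
            obj = nearest_object(scene, pointing[t])
        elif word == "there":         # deictic location reference
            dest = pointing[t]
    return verb, obj, dest

scene = {"blue square": (10, 10), "red circle": (80, 40)}
pointing = {0.5: (11, 9), 1.2: (50, 50)}
tokens = [("put", 0.0), ("that", 0.5), ("there", 1.2)]
print(resolve_command(tokens, pointing, scene))
# -> ('put', 'blue square', (50, 50))
```

Because the pointing gesture carries the reference, the spoken command never needs to name the object or the coordinates, which is exactly the "economy of expression" the abstract describes.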

Here is another blast from the '80s:

Kankaanpää, A. "FIDS: A Flat-Panel Interactive Display System." IEEE Computer Graphics and Applications, March 1988. (Nokia Information Systems)

"Although the needs and expectations of these various users are very diverse, they all have a common requirement: more natural and easier methods for communicating with the computer than are available today. Furthermore, they do not want to interact with the computer; they want to communicate with the application they are using. They do not want to use computer jargon; they want to use the same natural methods that they use when they perform the same tasks without a computer."

“We believe that only three of the flat-panel technologies described above, namely LCD, EL, and plasma, will be sufficiently advanced for mass production within this decade.”

Bill Buxton was working on multi-touch and gesture interaction in the 1980s, but his dreams did not become a reality until this century, for a variety of reasons. He shared his thoughts about the paradox of the speed of technology in a presentation at the 2008 IEEE International Solid-State Circuits Conference: "Surface and Tangible Computing, and the 'Small' Matter of People and Design" (pdf)

"Carrying on from an earlier thesis in our department (Mehta, 1982), we built a tablet that was sensitive to simultaneous touches at multiple locations, and with the ability to sense the degree of each touch independently (Lee, Buxton & Smith, 1984). We stopped the work in late 1984 when I saw a much better implementation at Bell Labs – one that was transparent and mounted over a CRT. The problem was that they never released the technology, so, the whole multi-touch venture went dormant for 20 years. But, I never stopped dreaming about it. (Lesson: don’t stop your research just because someone else is way ahead of you. It might be transitory, and anyhow, remember the story of the tortoise and the hare.)

“I spoke earlier about the paradox in the speed of technology development: it goes at rocket speed, but at that of a glacier as well; simultaneously! In the perfect world, this would be ideal: we could go through several iterations of ideas so that by the time the new paradigms of interaction, such as Surface and Tangible computing, are ready for prime time, everything will be in place. But, the rapid iteration is more directed at supporting the old paradigms faster and cheaper, rather than helping shape the new ones. The reasons are not hard to understand. From the perspective of circuit design, the problems are really hard. So, one has to have one’s head down working flat out to get anything done. But, there is a side of me that motivated this paper that asks: if it is so hard, then isn’t it worth making sure that the things one is working on are things that are worthy of one’s hard-earned skills?”

SOMEWHAT RELATED

Bill Buxton's Haptic Input References
(pdf)

The Internet of Things can be Cute: MIR:ROR by Violet

The Mir:ror application supports memory and interaction with your computer, as well as your Web 2.0 applications, via an RFID Ztamp. You can even figure out the last time you fed your fish!


Here is some of the promotional information from the Violet website:
"Mir:ror makes your everyday objects interactive, intelligent, communicant. Stick RFID Ztamps on them and show them to the Mir:ror: your keys will send e-mail to tell someone you’ve got home, your pills know when you’ve swallowed them, your toys play videos… Thousands of uses you can easily program through a Website."
http://idleparis.co.uk/wp-content/uploads/2008/10/violet_mir_ror_nabaztag.jpg
http://www.violet.net/img/mirror.gif
http://idleparis.co.uk/wp-content/uploads/2008/12/mirror-300x219.jpg
http://www.violet.net/img/ztamps_banner.gif
http://www.violet.net/img/nanoztag_home.jpg
"Nano:ztags are lovable micro-rabbits with a RFID Ztamp in their tummy. Program them to play any content or application you choose each time you show them to a Nabaztag:tag or Mir:ror."

Apr 3, 2009

Albrecht Schmidt's User Interface Engineering Blog: Great Links, References, and Resources

I really like Albrecht Schmidt's User Interface Engineering blog.

Albrecht Schmidt is a professor at the University of Duisburg-Essen who focuses his research on "novel user interfaces and innovative applications enabled by ubiquitous computing." Dr. Schmidt previously headed the Embedded Interaction Research Group at the Ludwig-Maximilians University in Munich.


Albrecht Schmidt will be working in collaboration with Chris Kray's group at the Culture Lab at Newcastle University in the UK. The work will focus on creating and building interactive appliances. He also mentioned the work of Jayne Wallace, one of the researchers at the Culture Lab, who creates digital jewelry.

The best thing about Albrecht's recent post was his short list of references:

[1] Wallace, J. and Press, M. (2004). All This Useless Beauty. The Design Journal, 7(2). (PDF)

[2] Jayne Wallace. Journeys. Intergeneration Project.

[3] Kern, D., Harding, M., Storz, O., Davis, N., and Schmidt, A. 2008. Shaping how advertisers see me: user views on implicit and explicit profile capture. In CHI '08 Extended Abstracts on Human Factors in Computing Systems (Florence, Italy, April 05 - 10, 2008). CHI '08. ACM, New York, NY, 3363-3368. DOI= http://doi.acm.org/10.1145/1358628.1358858

Another recent post that I liked was "Teaching, Technical Training Day at the EPO". The following topics were covered during a training event at the European Patent Office in Munich, Germany:
  • Merging the physical and digital (e.g. sentient computing and dual reality [1])
  • Interlinking the real world and the virtual world (e.g. Internet of things)
  • Interacting with your body (e.g. implants for interaction, brain computer interaction, eye gaze interaction)
  • Interaction beyond the desktop, in particular sensor-based UIs, touch interaction, haptics, and interactive surfaces
  • Device authentication, with a focus on spontaneity and ubicomp environments
  • User authentication, with a focus on authentication in public
  • Location-Awareness and Location Privacy
I liked the references that Dr. Schmidt posted, given my growing interest in topics related to interactive wireless sensor networks:

[1] Lifton, J., Feldmeier, M., Ono, Y., Lewis, C., and Paradiso, J. A. 2007. A platform for ubiquitous sensor deployment in occupational and domestic environments. In Proceedings of the 6th International Conference on Information Processing in Sensor Networks (Cambridge, Massachusetts, USA, April 25 - 27, 2007). IPSN '07. ACM, New York, NY, 119-127. DOI= http://doi.acm.org/10.1145/1236360.1236377

[2] Naohiko Kohtake, et al. u-Texture: Self-organizable Universal Panels for Creating Smart Surroundings. The 7th Int. Conference on Ubiquitous Computing (UbiComp2005), pp.19-38, Tokyo, September, 2005. http://www.ht.sfc.keio.ac.jp/u-texture/paper.html

[3] Schwesig, C., Poupyrev, I., and Mori, E. 2004. Gummi: a bendable computer. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vienna, Austria, April 24 - 29, 2004). CHI '04. ACM, New York, NY, 263-270. DOI= http://doi.acm.org/10.1145/985692.985726

[4] Wigdor, D., Forlines, C., Baudisch, P., Barnwell, J., and Shen, C. 2007. Lucid touch: a see-through mobile device. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology (Newport, Rhode Island, USA, October 07 - 10, 2007). UIST '07. ACM, New York, NY, 269-278. DOI= http://doi.acm.org/10.1145/1294211.1294259

[5] Campbell, A. T., Eisenman, S. B., Lane, N. D., Miluzzo, E., Peterson, R. A., Lu, H., Zheng, X., Musolesi, M., Fodor, K., and Ahn, G. 2008. The Rise of People-Centric Sensing. IEEE Internet Computing 12, 4 (Jul. 2008), 12-21. DOI= http://dx.doi.org/10.1109/MIC.2008.90

If you take a peek at the Culture Lab website, be sure to look at the Ambient Kitchen, designed to support older people with memory difficulties:



The Ambient Kitchen would probably be a great way to help busy families get meals on the table, too!

Also look at Jayne Wallace's digital jewelry website. The following pictures are of a neckpiece that triggers silent film sequences on digital displays in its vicinity:

Some of Jayne's digital jewelry was created in response to "physical, clinical, social, and emotional dynamics of memory loss".

I hope you enjoy exploring Albrecht Schmidt's User Interface Engineering blog and the Culture Lab website!

Apr 2, 2009

Explore Chicago Installation at O'Hare Airport: Collaboration between HP TouchSmart PCs, GigaPan, and Others

I was going to post the pictures I took today of touch-screens and other displays from the Cleveland airport, but the news about the HP TouchSmart installation at the O'Hare airport was more exciting.

http://www.cmu.edu/news/images/ExploreChicago_news1.jpg

Via Carnegie Mellon: Explore Chicago Via GigaPan
3/30/09
"Panoramas created with GigaPan, a technology developed by Carnegie Mellon’s Robotics Institute and NASA, are featured on a new city Web site. The imagery of iconic Chicago locations can be explored in detail with 50 HP TouchSmart PCs installed throughout the airport by HP and the Chicago Department of Aviation and Office of Tourism."

http://farm4.static.flickr.com/3431/3399009311_15cccec5f8.jpg?v=0


http://farm4.static.flickr.com/3648/3399862418_bb14d2f7b2.jpg?v=0
Via Flickr
"Chicago's Mayor Daley, left, and Hewlett-Packard's, Stephen DeWitt, interact with on of the kiosks for Explore Chicago, a state of the art installation featuring touch screen HP computers and high-tech lounges, offering travelers a way to connect to the Chicago Tourism Center at O'Hare Airport Monday, March 30, 2009. During the unveiling, Mayor Daley, announced plans to use Chicago's recently awarded economic stimulus package of $12M for runway expansions at O'Hare Airport. (AP Images for HP/Stacie Freundberg)"

Mar 30, 2009

Softkinetic 3D Gesture Recognition for Games and Rehabilitative Play

Taking 3D interaction further, Softkinetic has developed middleware that uses a 3D camera to support full-body gesture interaction with games and other applications. No controllers or devices are needed!
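Softkinetic's middleware is proprietary, but camera-based gesture systems like this commonly work by tracking body-joint positions in 3D and classifying poses from their geometry. A tiny sketch of that idea, where the joint names, coordinates, and pose labels are all assumptions for illustration:

```python
def classify_pose(joints):
    """joints: dict joint name -> (x, y, z) position in metres,
    with y increasing upward. Returns a coarse pose label."""
    head_y = joints["head"][1]
    right_up = joints["right_hand"][1] > head_y
    left_up = joints["left_hand"][1] > head_y
    if right_up and left_up:
        return "both hands up"
    if right_up:
        return "right hand raised"
    return "neutral"

# One tracked frame: right hand above the head, left hand at the waist.
frame = {"head": (0.0, 1.6, 2.0),
         "right_hand": (0.3, 1.8, 2.0),
         "left_hand": (-0.3, 1.0, 2.0)}
print(classify_pose(frame))   # -> right hand raised
```

A real system would run a classifier like this on every depth-camera frame and smooth the labels over time, which is what lets a game respond to whole-body movement without any handheld controller.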



The following video is narrated in Portuguese, I think, but you can understand the content in any language. If you love the Wii, you'll probably like this!


Here is a video that demonstrates how Softkinetic and Silverfit partnered to develop rehabilitative games for the elderly and others:


The following table of games and the movements they train is from the Silverfit website:
  • Puzzle: While sitting down, bend whole body left and right, and stand up. Cognitive/visual component.
  • Mole: Balance exercise by stepping with one leg while standing.
  • Catching grapes: Walking movement left and right.
  • Walking: Walking in place, while avoiding obstacles and thresholds. Activity of Daily Life (ADL) component.
  • Arm exercise: Arm stretching and reaching in all directions with one or both arms. ADL component.
  • Picking flowers: Walking backwards, forwards and sideways. Optionally, bending down.
  • Memory: Arm stretching left, right, forwards and upwards. Cognitive component.

RELATED

Softkinetic and Silverfit Introduce Senior-Targeted Gaming

(Danny Cowan, Gamasutra, 12/19/08)

Softkinetic's Gesture-Based Interactive TV Action: