Showing posts with label NUI. Show all posts

Feb 9, 2011

If you haven't seen it yet, here is a video of Microsoft Surface 2.0. A new usability feature in this version: you can sit with your legs under the table!


"Where am I in this information space?" Bifocal Display Concept Video, 1982, via the Interaction Design.org Encylopedia

The following video about the Bifocal Display concept is an interesting look at how a set of ideas unfolded in the early 1980s. These ideas took flight and influenced many of the design concepts we view as "new" today.
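The essence of the bifocal idea is a one-dimensional distortion: a central focus region is displayed at full scale, while the strips on either side are horizontally compressed so the whole information space stays visible. Here is a minimal Python sketch of such a mapping (the region boundaries and compression factor are illustrative choices, not values from Spence and Apperley):

```python
def bifocal_x(u, focus_start=0.35, focus_end=0.65, side_scale=0.2):
    """Map a document position u in [0, 1] to a screen position in [0, 1].

    Positions inside [focus_start, focus_end] are shown at full scale;
    positions in the two wings are compressed by side_scale.
    """
    focus_width = focus_end - focus_start
    left = focus_start * side_scale            # width of the compressed left wing
    right = (1.0 - focus_end) * side_scale     # width of the compressed right wing
    total = left + focus_width + right         # total distorted width before normalizing

    if u < focus_start:
        x = u * side_scale                     # squeezed into the left wing
    elif u <= focus_end:
        x = left + (u - focus_start)           # full scale in the focus region
    else:
        x = left + focus_width + (u - focus_end) * side_scale  # right wing
    return x / total                           # normalize so the strip fills the screen
```

Items inside the focus map linearly at full scale; everything else is squeezed into the wings, which is why peripheral items remain visible as context even though they are too small to read in detail.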


The video was posted on the on-line Interaction Design Encyclopedia, a free, open source multimedia resource that includes a range of chapters written by leading researchers and practitioners from fields related to Human-Computer Interaction (HCI), Interaction Design (IXD), User Experience (UX), Information Architecture (IA), Usability, and more.



"People want to see context...." -Robert Spence

1980s paper prototype of the Carousel concept (screen shot from video)
"Let's apply this idea to my in-tray,  an information that is quite varied, and often full of surprises..." -Robert Spence


For more information, see the article by Robert Spence and Mark Apperley on the Interaction-Design.org website. Scroll down to "The Bifocal Display Explained" for great sketches of the concepts discussed in the video.


Also take a look at the recent videos from Interaction-Design.org related to this topic. What a great way to gain insight about the design and development process!


Video 1: Introduction to the Bifocal Display
Video 2: Main guidelines and future directions
Video 3: How the Bifocal Display was invented and launched


REFERENCE
Spence, Robert and Apperley, Mark (2011). Encyclopedia entry on Bifocal Display. Retrieved 9 February 2011 from Interaction-Design.org: http://www.interaction-design.org/encyclopedia/bifocal_display.html


RESOURCES
Interaction-Design.org YouTube Channel
Interaction-Design.org

Feb 8, 2011

Toyota Vision Multi-Touch Wall at the 2011 Detroit Autoshow: Good example of a collaboration between PUSH, Juxt Interactive, GPJ, and Hush (updated)

GOOD EXAMPLE!

I didn't have the chance to get up to the Detroit Auto Show and interact with the iWall, so I watched the videos.


Toyota iWall from Justin Micklish on Vimeo.


From Justin Micklish's Vimeo site:
"Named "Belle of the Ball" at the 2011 Detroit Auto Show, the Toyota Vision Wall is a large-format interactive installation leveraging a great deal of UX intelligence and bleeding edge technology."


This project was a collaboration between PUSH, Juxt Interactive (lead agency), GPJ (experience marketers), and Hush (motion graphics).


Toyota Vision Multi-Touch Wall Case Study:  A good explanation of the "how and why" of this project:

Toyota Vision Multi-Touch Wall Case Study from JUXT Interactive on Vimeo.

From the JUXT Interactive Vimeo site:
"In January, Toyota debuted the largest digital multi-touch installation ever seen at the 2011 North American International Auto Show in Detroit. GPJ and JUXT Interactive created the digital touch wall experience to deliver a feast for the senses, allowing consumers to explore videos, photos and custom content through a larger-than-life, interactive surface."


RELATED
PUSH
"PUSH is a marketing & design research firm based in Venice, CA. We shirk labels and defy classifications. We have been called strategists, writers, technologists, marketers, typographers, entrepreneurs, and designers. Above all, we are curious. We make a living creating compelling experiences with our clients. Recent collaborations include work with Yahoo, West, and Sony."
Juxt Interactive 
"JUXT is a company that creates from the ground up, top to bottom, or even sideways. It kinda makes sense since we were started by a couple of architects....Lots of people ask us if we only do digital work. Well we like to think and play where people are, and these days that just happens to be the Internet. That suits us just fine because the online space has far less restrictions and barriers than other media. JUXT does not take kindly to limits, or being put in a box."
GPJ: George P. Johnson
"GPJ is the world's premier experience marketing agency, working with leading companies who are looking for innovative marketing solutions that motivate their audiences and activate their brands through live experiences."
Hush
"Hush is a design and direction studio crafting stories for digital, broadcast, mobile and physical installations. Whether using the latest technology or the most classic of techniques, we unifyu stories for agencies and brands alike."


Comment: This post was revised on 2/11/11 to reflect a clarification from Justin Micklish. He's a partner and producer with PUSH and worked on this project with others, providing technology consulting and software development.

Jan 28, 2011

"Microsoft is Imagining a NUI Future". You can, too!

Microsoft is Imagining a NUI Future
Steve Clayton, Next at Microsoft Blog, 1/26/11


"Our research shows that the vast majority of people polled in both developed and emerging markets see great potential for NUI applications beyond entertainment. This is especially true in China and India, where 9 out of 10 respondents indicate they are likely to use NUI technology across a range of lifestyle areas – from work, education and healthcare, to social connections, entertainment and the environment. We believe that taking technology to the next billion can be aided by NUI – making technology more accessible and more intuitive to a wider audience". - Steve Clayton, Microsoft


The people at Microsoft don't own the concept! I've been a member of the NUI Group since May 2007, and I also belong to SparkOn. Both are on-line communities where you can find people who live and breathe NUI, learn about their work, and even share designs and code. If you are intrigued by NUI as a designer, developer, or user, please join us.


Note: 
I've been an evangelist and cheerleader for the NUI cause for many years. If you search this blog for "post-WIMP", "NUI", "multi-touch", "gesture", "off-the-desktop", "natural user interaction", "natural user interface", or even "DOOH", you'll find an overwhelming number of posts that include videos, photographs, and links to NUI-related resources, including scholarly articles. There is a small-but-growing number of people from many disciplines quietly working on NUI-related projects.


RELATED
Microsoft Plans a Natural Interface Future Full of Gestures, Touchscreens, and Haptics
Kit Eaton, Fast Company, 1/26/11
Rethinking Computing (video)
Craig Mundie, Microsoft
Interactive Touch-Screen Technology, Participatory Design, and "Getting It" - Revised
Touch Screen Interaction in Public Spaces:  Room for Improvement, if "every surface is to be a computer".

Jan 21, 2011

MIT Media Lab's DepthJS: Now your web page can interact with the Microsoft Kinect using JavaScript (link to code, more)

MIT Media Lab's DepthJS website

Info from the DepthJS website:
"Navigating the web is only one application of the framework we built - that is, we envision all sorts of applications that run in the browser, from games to specific utilities for specific sites. The great part is that now web developers who specialize in Javascript can work with the Kinect without having to learn any special languages or code. We believe this will allow a new set of interactions beyond what we first developed."


DepthJS is open source under the AGPL license. Code: https://github.com/doug/depthjs

RELATED
Gestures that your TV Will Understand (Features information about PrimeSense)
Tom Simonite, MIT Technology Review, 1/21/11


Hackers Take the Kinect to New Levels
Timothy Carmody, MIT Technology Review 12/2/10


Microsoft Kinect: How the device can respond to your voice and gestures
Erica Naone, MIT Technology Review, January/February 2011






Comment:
I went to Best Buy today and almost bought a Kinect.  Unfortunately, the demo hadn't come in yet, so I decided to wait until I could give it a try.  I'm curious to experiment with what it can do. 

Jan 18, 2011

"Hi, Google. My name is Johnny" Johnny Chung Lee leaves Microsoft. (I still wish I could be Johnny Chung Lee for a day.)

"Hi, Google.  My name is Johnny"  Johnny Chung Lee announced on his Procrastineering blog that he's accepted a position at Google as a "Rapid Evaluator".   I'm not sure what he will be doing in this position, but his title is intriguing!

Here are some of my previous posts devoted to the work of Johnny Chung Lee:

I wish I could be Johnny Chung Lee for a Day!  Tracking fingers with the Wii Remote
Video Clips of Projects Inspired by Johnny Chung Lee
More about Project Natal: Ricochet - Great Gaming for Fitness, Johnny Chung Lee's Contribution


I STILL wish I could be Johnny Chung Lee for a day!

RELATED
Microsoft Kinect Developer Johnny Chung Lee Jumps Ship and Lands at Google
Leena Rao, TechCrunch, 1/18/11
What Microsoft Kinect Defection to Google Means
Rich Tehrani, TMCnet Blog 1/18/11
Microsoft Loses a Top Kinect Researcher to Google
Tricia Duryee, Yahoo! Finance, 1/18/11

Jan 12, 2011

Multi-modal Interactive Maps for People with Visual Impairments: Featuring a Stantum multitouch screen with a tactile layer.

To learn more about this project, take a look at the video and related publications below. This is a great example of a team that is harnessing emerging technologies to improve the lives of people with disabilities.


Video: "Multimodal Maps for Blind People"


Website


Publications
Anke Brock, Philippe Truillet, Bernard Oriola, Christophe Jouffrais (IRIT CNRS and Université de Toulouse), Usage of Multimodal Maps for Blind People: Why and How
ITS’10, November 7–10, 2010, Saarbrücken, Germany
Paper: http://www.irit.fr/~Philippe.Truillet/projects/doc/MultimodalMapsForTheBlind-ITS10.pdf
Poster: http://www.irit.fr/~Philippe.Truillet/projects/doc/Poster-ITS10.pdf

Stantum (Multi-touch screen used for the application.)
Ivy Middleware (Used in this application.)

Cross-posted on the TechPsych blog.

Jan 9, 2011

New Microsoft Surface 2.0 and InfoStrat's Surface 2.0 Information Visualization Controls

Microsoft Surface 2.0 was unveiled at CES 2011 a few days ago, the result of a collaboration between Microsoft and Samsung. Surface 2.0 is a step up! The 40-inch 1080p high-definition LCD display no longer requires a projection/camera system, which clears the area below the screen of bulky hardware. The best part about Surface 2.0, in my opinion, is that it doesn't have to be used as a table. It can be configured in a variety of ways, even mounted on walls. For this reason, it will be useful in a variety of settings and situations.


Below is a quote about Surface 2.0 from Steve Ballmer's recent keynote address at CES 2011 that outlines the new technology that is embedded in the Surface 2.0 display:


"But what's really amazing about this technology, what really makes it magical, is the sensor itself. So, those first-generation Surface PCs needed cameras underneath that would look up to try to see what was going on. But what we have here is called PixelSense. PixelSense is new technology we've invented where there's infrared sensors all across this screen. Every single pixel is actually acting as a camera. The PC, the Surface here, can actually see." -Steve Ballmer:  My Keynote Address at the 2011 International Consumer Electronics Show" (Huffington Post, 1/6/2011)





The good news is that developers have been hard at work creating applications for Surface 2.0. Below is a video demonstration of what the folks at InfoStrat have recently created to support collaborative information visualization activities:







Here's the information about the controls from the Infostratcville YouTube channel:

"This is a sneak preview of a suite of data visualization controls developed by InfoStrat for Microsoft Surface 2.0. The controls will be made available as open source software at no charge on CodePlex.com in the first half of 2011."


"This data visualization control suite provides multi-touch versions of the following controls:
- DeepZoom multi-resolution image control that allows high performance display of very high-resolution imagery
- PowerPoint Viewer which enables slide decks to be arranged and presented using multi-touch
- PivotViewer chart control that allows dynamic sorting and categorization of data
- PhysicsCanvas which provides an infinite, dynamic canvas for viewing and organizing content"




RELATED
Josh Blake's post:  "Microsoft Surface 2.0 Data Visualization Controls by InfoStrat" 
Microsoft Surface Blog: "Microsoft and Samsung Unveil the Next Generation of Surface"


PRESS RELEASE
For Immediate Release

9 a.m. PST
January 6, 2011
InfoStrat Releases Next-Generation Data Visualization Controls for Microsoft Surface 2.0
Washington DC – January 6, 2011 – InfoStrat today announced plans to support Microsoft Surface 2.0 by releasing a control suite that accelerates the development of next-generation multi-touch data visualizations. The controls will be made available as open source software at no charge on CodePlex.com in the first half of 2011.
This data visualization control suite provides multi-touch versions of the following controls:
§ Deep Zoom multi-resolution image control that allows high performance display of very high-resolution imagery
§ PowerPoint Viewer which enables slide decks to be arranged and presented using multi-touch
§ Pivot Viewer chart control that allows dynamic sorting and categorization of data
§ Physics Canvas which provides an infinite, dynamic canvas for viewing and organizing content
Other features of the controls:
§ Works on both Microsoft Surface and Microsoft Windows 7 with touch
§ A single application built with the data visualization framework can support multiple hardware form factors including: horizontal multi-touch tables, tablets, and large format vertical touch screens
§ Innovative object recognition to enable rapid data manipulations (only on Microsoft Surface)
Watch a sneak preview of the control suite on YouTube: http://www.youtube.com/watch?v=lEVtjHlrf4I
InfoStrat is a member of Microsoft's Technology Adoption Program (TAP) for Microsoft Surface. As a Microsoft Surface 2.0 TAP member, InfoStrat receives early access to hardware and software, allowing InfoStrat to gain expertise and influence the development of the product before it is released to the public.
In 2008, InfoStrat solved the problem of using Bing Maps 3D on Microsoft Surface in a way that performed well and was WPF-friendly. InfoStrat open-sourced the solution as a reusable control for the WPF and Surface community. Since then, the control has received over 120,000 page views and has over 8200 downloads, and has also been featured in many of our own applications. This control, known as InfoStrat.VE, has become one of the most popular controls for building mapping applications on Microsoft Surface: http://bingmapswpf.codeplex.com
“We are proud to be part of the Microsoft Surface development community,” according to Jim Townsend, president of InfoStrat, “and excited about the possibilities of Microsoft’s new version of Surface.”
Microsoft Surface provides a new way to experience and use information and digital content, engaging the senses, improving collaboration and empowering people to interact. Microsoft Surface is at the forefront of developing software and hardware that uses vision-based technology to fundamentally change the way people use computing devices. More information can be found at http://www.surface.com.
Information Strategies ("InfoStrat") is an award-winning Microsoft Gold Certified Partner and a Microsoft Surface Strategic Partner and member of the Technology Adopter Program.
For more information, press only:
Josh Wall, InfoStrat, (202) 364-8822 ext. 202, joshw@infostrat.com

Dec 22, 2010

Multi-touch SmartBoard! (SMARTBoard 800 Series)

Take a look at the video demonstration of the new SMARTBoard (800 series) that offers multi-touch and gesture interaction support so that two students can interact with the board at the same time.

  • Students can use 2 finger gestures to enlarge objects and move them around.
  • Two students can interact with the board at the same time to complete activities.
  • SMARTInk/Calligraphic Ink creates stylized print as you write. Whatever is written or drawn on the SMARTBoard becomes an object in the SMARTNotebook, allowing for things to be resized or rotated.   (2:04)
  • Multi-touch gestures enabled in Windows 7 and Snow Leopard work with the SMARTBoard.
  • Software development kit (3:28):  Example of a physics application developed by a 3rd-party developer.  The application supports two students working at the SMARTBoard at the same time
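The two-finger resize and rotate gestures mentioned above come down to simple vector geometry: track the segment between the two touch points, and the change in its length gives a scale factor while the change in its orientation gives a rotation angle. A rough Python sketch of that calculation (this is the generic math, not SMART's SDK):

```python
import math

def pinch_transform(p1_old, p2_old, p1_new, p2_new):
    """Derive the scale factor and rotation (in degrees) implied by two
    touch points moving from their old positions to their new positions."""
    def vec(a, b):
        return (b[0] - a[0], b[1] - a[1])

    vx0, vy0 = vec(p1_old, p2_old)   # segment between fingers, before
    vx1, vy1 = vec(p1_new, p2_new)   # segment between fingers, after

    scale = math.hypot(vx1, vy1) / math.hypot(vx0, vy0)   # length ratio
    angle = math.atan2(vy1, vx1) - math.atan2(vy0, vx0)   # orientation change
    return scale, math.degrees(angle)
```

For example, if the fingers start at (0, 0) and (2, 0) and end at (0, 0) and (0, 4), the object should be scaled by 2 and rotated 90 degrees, which is exactly what the function returns.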
This video, in my opinion, does not show viewers the full range of possibilities the new features provide. I'd like to see a "redo" of this video using a live teacher and a group of students. For example, it would be interesting to see how the physics application could be incorporated into a broader lesson or science unit. I'd love to hear what real students have to say as they interact with the physics application, too.

Comment:
I think a multi-user interactive timeline would be a great application for the new SMARTBoard, because students could work together to create and recreate events.  This would be ideal for history, literature, and humanities activities, across a wide span of grade levels.

Dec 14, 2010

Short documentary of the story behind the Reactable, a tangible user interface for creating music. (Includes an interview of Joel Bonasera, of Charlotte's Discovery Place museum.)

The following video provides a look into the history of the Reactable, from the initial paper prototypes to the present, including the Reactable Mobile application designed for the iPad, iPhone, and iPod touch. The video includes interviews with Sergi Jorda and Gunter Geiger, members of the original team at Pompeu Fabra University (Barcelona) that created the Reactable. The other team members are Martin Kaltenbrunner and Marcos Alonso.




FYI:  At about 2:34 in the video, Joel Bonasera briefly discusses the Reactable installation at Charlotte's Discovery Place museum. Joel is a project manager at Discovery Place.


RELATED
How the Reactable Works
John Fuller, howstuffworks
Music Technology Group, Pompeu Fabra University
Reactable Website
Reactable Concepts
Reactable History
Discovery Place
Interactive Technology in the Carolinas: Discovery Place Science Center
(Includes a short video clip I took of the Reactable at Discovery Place)

Dec 12, 2010

LM3LAB's Useful Map of Interactive Gesture-Based Technologies: Tracking fingers, bodies, faces, images, movement, motion, gestures - and more

Nicolas Loeillot, of LM3LABS, has been ahead of the natural user interaction/interface game for many years as his company has expanded. He's done quite a bit of deep thinking about the work of his company, and has used this wisdom to create a nice concept map that describes how LM3LABS' solutions fit into the world of gesture-based control and interaction:




In my opinion, this chart would make a great template for mapping out other natural interaction applications and products!


Here is the description of the concepts outlined in the chart:


"If all of them belong to the “gesture control” world, the best segmentation is made from 4 categories:
  • Finger tracking: precise finger tracking, it can be single touch or multi-touch (this latest not always being a plus). Finger tracking also encompasses hand tracking which comes, for LM3LABS products, as a gestures.
  • Body tracking: using one’s body as a pointing device. Body tracking can be associated to “passive” interactivity (users are engaged without their decision to be) or “active” interactivity like 3D Feel where “players” use their body to interact with content.
  • Face tracking: using user face as a pointing device. It can be mono user or multiple users. Face tracking is a “passive” interactivity tool for engaging user in an interactive relationship with digital content.
  • Image Tracking: Augmented Reality (AR) lets users use images (flyers, real products, t-shirts, faces,…) to interact with digital content. AR can be markerless or marker-based. Markerless technology has advantages but marker-based AR is easier for users to understand. (Please note here that Markerless AR is made in close collaboration with AR leader Total Immersion)."  -LM3LABS

If you are interested in this subject and want to view some good examples of off-the-desktop interfaces and interactions, take a look at the LM3LABS blog, as well as Nicolas Loeillot's Vimeo channel. Also take a look at the sample of posts I've written about LM3LABS over the last few years - the links are at the end of this post.

I love LM3LABS' Interactive Balloon:

Interactive balloons from Nicolas Loeillot on Vimeo.


Interactive Balloons v lm3 labs v2 (SlideShare)



Background
I first discovered LM3LABS when I was taking a VR class and researching interactive, immersive large displays in 2005 or 2006.  Back then, there wasn't much information about this sort of technology.  A lot has changed since then!


I've learned quite a bit from watching LM3LABS (and others) grow, given my passion for postWIMP interactive technology and my commitment to blogging about this subject.   Nicolas has really worked hard in this arena.  As early as 2005, LM3LABS was working with Scala to provide "smart" interactive displays, and his company's applications have been supported by computer vision technologies for many years, allowing for gesture-based, or "touch-less" interaction, as demonstrated by the Catchyoo Interactive Table.  This application caught my eye back in early 2007, when I was working on projects for large interactive displays for my HCI and Ubicomp classes, and was thinking about creating a table-top application.


My hunch is that LM3LABS has set the foundation for further growth in the future, given the lessons they've learned by taking risks with postWIMP technologies over the past few years!


Previous Blog Posts Related to LM3LABS:
Interactive Retail Book (Celebrating the history of Christian Dior from 1948-2010) (video)
Ubiq Motion Sensor Display at Future Ready Singapore (video)
Interactive Virtual DJ on a Transparent Pane, by LM3LABS and Brief Ad
LM3LABS' Catchyoo Interactive Koi Pond: Release of ubiq'window 2.6 Development Kit and Reader
A Few Things from LM3LABS
LM3LABS, Nicolas Loeillot, and Multi-touch
More from LM3LABS: Ubiq'window and Reactor.cmc's touch screen shopping catalog, Audi's touch-less showroom screen, and the DNP Museum Lab.


About LM3LABS
"Founded in 2003 by a team of passionate researchers, engineers, designers, and marketers from various international backgrounds, focused on fast transformation of innovation into unique products, LM3LABS is a recognized pioneer in computer vision-based interactivity solutions. Keeping a strong customer focus, LM3LABS' team of unique people pioneers new directions, explores new concepts, new technologies and new interactions.  Engaging, playful and magic, LM3LABS' products and solutions are always scalable and reliable"

info@lm3labs.com

Note to readers:
Over the past couple of years there has been an explosion of postWIMP technologies and applications, and with this pace, it has been difficult for me to keep abreast of it all. There is quite a bit I miss, given my full time job and daily life!

I welcome information about postWIMP interactive technologies and applications from my readers.  Due to time constraints, not interest, I am not always able to post about a topic as soon as I'd like.  That is OK, as my intention is not to be the first blogger to spread the latest tech news.  I like to dig in deep when I can and make connections between innovative, interesting technologies and the people and ideas behind them. 




Dec 11, 2010

Gesture "multitouch" 12 x 7 interactive video wall provides tours of I/O Data Centers' facilities

I came across this demonstration of I/O Data Centers' 12-foot by 7-foot interactive video wall that makes playing around with views of data center modules...interesting! The display is a gesture-based "multi-touch" system. (I'll update this post when I get more information.)



Here is the description from the Datacenter YouTube channel:


"Instead of hauling a 40-foot long modular data center to a trade show, i/o Data Centers is taking a high-tech approach to customer tours of their i/o Anywhere modular data center. The i/o team has created a 12-foot by 7-foot touchscreen video wall to provide interactive tours of the company's facilities. Selecting a "hot spot" pops up a virtual data center, complete with cross sections and product info, following the concept of the touch screens in the sci-fi movie "Minority Report.""


FYI: I/O Data Centers has an application that runs on the Surface.

UPCOMING:
Stay tuned for my upcoming posts! 


News about LM3LABS (Previous post)
Interactive Surveillance: Celine Latulipe (technologist) and Annabel Manning (artist)

Dec 9, 2010

Interested in the OpenNI Initiative? OpenKinect? To learn more, read Josh Blake's Interview of Tamir Berliner of PrimeSense




Josh Blake, Deconstructing the NUI, 12/9/10



Josh Blake recently interviewed Tamir Berliner, one of the founders of PrimeSense. If you haven't heard, Microsoft's Kinect is based on technology licensed from PrimeSense. PrimeSense provides consumer electronics with natural user interaction capabilities. The good news is that the company recently released open-source middleware for natural interaction, along with depth-camera drivers. It will be interesting to see how this plays out in the near future!




In the interview, Tamir discussed a number of topics related to postWIMP technologies. He also announced the newly created OpenNI, "an industry-led, not-for-profit organization formed to certify compatibility and interoperability of Natural Interaction (NI) devices, applications, and middleware." It is good to see this level of support for the cause!


Here is a quote from the interview that I especially liked:

"I believe that till today the devices we’ve been using, made us learn greatly lot about them before we could use them and gain their value. I’m pretty sure everyone who is reading this has got at least 3 remotes sitting on his living room table, and at least once a week needs to help someone use their computer/media center/phone/etc. It’s time for that to change and it’s up to us, the technologists to make this revolution happen, it’s time for the devices to take the step of understanding what we want and making sure we get that, even without asking if it’s a trivial task as opening a door when we approach, closing the lights when we leave the room, even making sure we have hot water to shower with when we return from work or wake up in the morning, depends on what we normally do." -Tamir


RELATED
Here are a couple of videos from the OpenNI website that demonstrate OpenNI-compliant applications:

OpenNI-compliant real time skeleton tracking by PrimeSense


OpenNI-compliant real time SceneAnalyzer by PrimeSense



FYI: 
Josh Blake is the author of the Deconstructing the NUI blog. Over the past couple of years, he's explored natural user interfaces and interactions through his work on applications designed for Microsoft Surface and Win7 with Windows Presentation Foundation.
About a month ago, Josh organized OpenKinect, an on-line community to support collaboration among people interested in exploring ways to use Kinect with PCs and other devices.  An example of this effort is the open source code, libfreenect, which includes drivers and libraries for Windows, Linux, and OS X. 


The Natural User Interface Revolution
Josh Blake, 1/5/09


Kinect for Xbox 360: The inside story of Microsoft's secret 'Project Natal' (long, but worth reading) David Rowan, Wired UK, 10/29/10


People of libfreenect

OpenNI User Guide (pdf)

Dec 6, 2010

ICE PAD: Interactive Multitouch Ice Sculpture by Art Below Zero (video)

ICE PAD: Interactive Multitouch Ice Sculpture by Art Below Zero


Here is the information about the interactive sculpture from the Art Below Zero YouTube Channel:

"Created by David Sauer & Max Zuleta for the Lake Forest Tree Lighting Festival.This Ice Crystal Display was the 1st to be created in the USA, Transforming 300 pounds of ice into the equivalent of a giant Ipad touch screen. "People always want to touch our Ice Sculptures, This Interactive Display gave them the perfect reason to get their hands cold." said Max Zuleta owner of Art Below Zero. The public response was amazement and interest in the workings of the touch screen in ice. Our favorite guess was "It must work by sensing body heat!"..."

"...The system is known as Rear Diffused Illumination or Rear DI. It works because an Infrared light is shone from the opposite side of the ice wall through the ice. When an object such as a finger, hand, or mitten stops the infrared light it reflects the light back to a custom camera built by Peau Productions. The illuminated objects are then converted to points of interaction using an open source program Community Core Vision which outputs TUIO data streams to a Flash program for animation. We like the look and feel of the Fluid Solver flash application. The output from the computer is then projected into the ice and ice diffracts the light into something beautiful. By this method the user can manipulate a visible light screen via an invisible light that only the camera can see..."



Thanks to Nolan Ramseyer, of PeauProductions, for the link!
PeauProductions Blog: Multitouch and Technology


RELATED
Ubice = Multi-touch On Ice at the Nokia Research Center in Finland (Video + Pic via Albrecht Schmidt)
Art Below Zero

Dec 3, 2010

Workshop on Mobile and Personal Projection: Call for Papers, CHI 2011, May 8, 2011, Vancouver, CA

I can't wait to attend CHI 2011! There will be lots to learn about emerging technologies and interactions at the conference. Here's another call for papers/participation for a workshop session at the conference, via Markus Löchtefeld.



CALL FOR PAPERS: MP²: Workshop on Mobile and Personal Projection, a workshop to be held at CHI 2011, Vancouver, CA, May 8, 2011


Objectives

The workshop will provide an open forum to share information, results, and ideas on current research on mobile and personal projection. The participants will explain, demonstrate and discuss their current research with others in order to receive feedback, criticism and ideas for future work. Concrete selected questions, ideas and concepts will be addressed in various group sessions in which the participants will work on topics such as a design space for mobile and personal projection; user interface, interaction design and application sketches; paper prototypes; or ad-hoc studies using the provided mobile and personal projector hardware. The results of these group sessions will be discussed with all workshop participants. Finally, we will discuss future research areas, challenges and the potential for mobile and personal projection in order to lay the foundations for a research agenda in this field.

Workshop Topics

The workshop looks for contributions on the following and related topics:
  • Applications and interaction techniques for mobile and wearable projection.
  • Personal projection in augmented reality.
  • Interaction with projected interfaces.
  • Projector phones and wearable projectors.
  • Multi-user interactions and applications.
  • Multimodal and personalized (mobile) interfaces.
  • New application areas of mobile projection.
  • Social implications when interacting with projected interfaces.
  • Artistic and unusual ways to utilize mobile projection.
  • New forms of interaction with the environment.

Research Questions

Mobile and personal projection is at a relatively early stage of research. Reflecting this state, the workshop specifically addresses the following fundamental research questions:
  • What are the unique properties and affordances of mobile and personal projection? What are suitable interaction metaphors?
  • What are core application domains that benefit the most from the usage of mobile and personal projection? What are the application contexts and usage requirements that support mobile and personal projection?
  • What are suitable interaction techniques for mobile and personal projection? How can gestures be incorporated? How should visualizations be structured? How can the projected virtual and real images of objects coexist? What is the role of augmented and mixed reality?
  • What is the social impact of mobile and personal projection technologies? How can users manage privacy when using mobile and personal projectors? How does public behavior change with the introduction of mobile and personal projection technologies?
  • How can spontaneous co-located collaboration be supported by mobile and personal projection technologies? How can designs support the exchange of media items between mobile projector phones?
  • What are suitable strategies and methodologies for evaluating mobile and personal projection interfaces? What aspects impact the user experience?

Submission

We ask for papers that address one or more of the research questions mentioned above, or that describe findings that relate to these research questions based on systems the authors have built. We welcome position papers (2 pages) as well as papers reporting novel concepts, (first) prototypes, studies, applications or interaction concepts (up to 4 pages). All submissions should be prepared according to the standard HCI Archive format.
Each paper will receive at least two reviews. All accepted papers will be made available online and will be published at Sun SITE Central Europe (CEUR) Workshop Proceedings.
INFORMATION:

Mobile and personal projection interfaces are no longer fiction and have received considerable attention recently. Integrated pico-projectors in mobile and wearable devices could make mobile projection ubiquitous within the next few years. Walls, desks, floors, ceilings, t-shirts or palms will act as projection surfaces for these kinds of new devices.
These technological developments offer new opportunities and challenges for novel forms of interaction. Virtual displays can extend beyond physical device boundaries and augment existing objects. There are also new opportunities for spontaneous multi-user interaction. However, issues such as lighting conditions, privacy, and social acceptability also come into play.
We will bring together researchers and practitioners who are concerned with design, development, and implementation of new applications and services using personal mobile and wearable projectors in their user interfaces.

Important Dates

  • January 10, 2011 - Submission Deadline
  • February 4, 2011 - Acceptance Notification
  • March 11, 2011 - Revised Manuscript Due
  • May 8, 2011 - Workshop Date
Organizers

Otto-von-Guericke-Universität Magdeburg(Germany)
Nokia Research Center,Tampere (Finland)
Swansea University (UK)
DFKI (Germany)
University of Munich (Germany)
University of Duisburg Essen (Germany) & Lancaster University (UK)