Showing posts sorted by relevance for query "natural user interaction".

Nov 29, 2010

International Conference on Multimodal Interaction: ICMI 2011 Call for Papers

The information below was taken from the website for the 13th International Conference on Multimodal Interaction. I'm excited about the range of topics that the conference will cover.  I look forward to sharing more about the work of the members of this group on this blog in the future!  (I've highlighted the topics that interest me the most.)

INTERNATIONAL CONFERENCE ON MULTIMODAL INTERACTION CALL FOR PAPERS

The International Conference on Multimodal Interaction, ICMI 2011, will take place in Alicante (Spain), November 14-18, 2011, just after ICCV 2011 (in Barcelona, Spain). This is the thirteenth edition of the International Conference on Multimodal Interfaces, which for the last two years joined efforts with the Workshop on Machine Learning for Multimodal Interaction (ICMI-MLMI 2009 and 2010). Starting with this edition, the conference uses the new, shorter name.

The new ICMI is the premier international forum for multimodal signal processing and multimedia human-computer interaction. The conference will focus on theoretical and empirical foundations, varied component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and system development. ICMI 2011 will feature a single-track main conference which includes: keynote speakers, technical full and short papers (including oral and poster presentations), special sessions, demonstrations, exhibits, and doctoral spotlight papers. The conference will be followed by workshops. The proceedings of ICMI 2011 will be published by ACM as part of their series of International Conference Proceedings and will also be distributed to attendees on USB memory sticks.


Topics of interest include but are not limited to:

  • Multimodal and multimedia interactive processing
    Multimodal fusion, multimodal output generation, multimodal interactive discourse and dialogue modeling, machine learning methods for multimodal interaction.
  • Multimodal input and output interfaces
    Gaze and vision-based interfaces, speech and conversational interfaces, pen-based and haptic interfaces, virtual/augmented reality interfaces, biometric interfaces, adaptive multimodal interfaces, natural user interfaces, authoring techniques, architectures.
  • Multimodal and interactive applications
    Mobile and ubiquitous interfaces, meeting analysis and meeting spaces, interfaces to media content and entertainment, human-robot interfaces and interaction, audio/speech and vision interfaces for gaming, multimodal interaction issues in telepresence, vehicular applications and navigational aids, interfaces for intelligent environments, universal access and assistive computing, multimodal indexing, structuring and summarization.
  • Human interaction analysis and modeling
    Modeling and analysis of multimodal human-human communication, audio-visual perception of human interaction, analysis and modeling of verbal and nonverbal interaction, cognitive modeling.
  • Multimodal and interactive data, evaluation, and standards
    Evaluation techniques and methodologies, annotation and browsing of multimodal and interactive data, standards for multimodal interactive interfaces.
  • Core enabling technologies
    Pattern recognition, machine learning, computer vision, speech recognition, gesture recognition.

Important dates

Workshop proposals: March 1, 2011
Paper and demo submission: May 13, 2011
Author notification: August 5, 2011
Camera-ready deadline: September 2, 2011
Conference: November 14-16, 2011
Workshops: November 17-18, 2011


General Chairs

Hervé Bourlard (Idiap)
Thomas S. Huang (Univ. of Illinois)
Enrique Vidal (Tech. Univ. of Valencia)

Program Chairs

Daniel Gatica-Perez (Idiap)
Louis-Philippe Morency (Univ. South. California)
Nicu Sebe (Univ. of Trento)

Demo Chairs

Kazuhiro Otsuka (NTT Comm. Sci. Lab.)
Jordi Vitrià (UB/CVC, Barcelona)

Workshop Chairs

Fernando de la Torre
(Carnegie Mellon Univ.)
Alejandro Jaimes (Yahoo! Research, Barcelona)

Publication Chair

Jose Oncina (Univ. of Alicante)

Student & Doctoral Spotlight Chair

Li Deng (Microsoft Research and Univ. of Washington)

Sponsorship Chair

Nuria Oliver (Telefónica I+D)

Publicity Chair

Helen Mei-Ling Meng (CUHK, Hong Kong)

Local Organization Chair

Luisa Micó (Univ. of Alicante)

Treasurer

Jorge Calera (Univ. of Alicante)

Local organizers

Xavier Anguera (Telefónica I+D)
A. Javier Gallego Sánchez (Univ. of Alicante)
Ida Hui (CUHK, Hong Kong)
Jose Manuel Iñesta (Univ. of Alicante)
Alejandro Toselli (Tech. Univ. of Valencia)



RELATED
Accepted Papers for ICMI-MLMI 2010


NOTE:  ICMI 2011 will be held after ICCV 2011, the 13th International Conference on Computer Vision in Barcelona.

Sep 26, 2009

More Multi-touch and Gesture-based Natural User Interfaces: Bamboo Wacom Tablet; Multi-touch PresTop Kiosk and Snowflake Suite software

Wacom Tablets Get Multi-Touch, Gestures
(Charlie Sorrel, Wired, 9/24/09)
"For the tech-curious, the new tablets have 512 pressure levels in the pen tip and the active area of the tablet is 5.8 x 3.6 inches, and all lose the in-pack mouse (for obvious reasons). The Touch and the Pen models are both $70, and the Pen & Touch is $100. Also, if you were thinking of buying Photoshop Elements 7 for the same price, get a tablet instead — Elements comes in the box."




http://www.wired.com/images_blogs/gadgetlab/2009/09/cth460k_3-660x371.jpg

Official Wacom Video

"Bamboo Touch is a new type of computer input device by Wacom that lets you navigate and perform commands like zoom, scroll, rotate and more with a series of simple finger taps and hand gestures. Bamboo Touch brings Multi-Touch capability to your Mac or PC"

Video from a Wacom user:

A nice alternative to a mouse.  I'm going to get one for my laptop!


Multi-touch Kiosks!
Press release:  Dutch touchscreen supplier PresTop partners with Natural User Interface (NUITEQ)
 
http://prestop.nl/images/gallery/products/st_UU_zuil_wit.png
http://prestop.nl/images/gallery/products/st_DSC02106.png

RELATED

I couldn't find any video clips of PresTop's multi-touch interaction. From what I can tell, PresTop multi-touch screens will be using SnowFlake Suite from Natural User Interface Technologies AB.

How-to: SnowFlake Suite Flash multi-touch Interactable component (NUIversity)

Without a single line of code, you can do quite a bit with Snowflake Suite.

"This video covers how to make a rotatable and scalable image. The beauty about this is, that we have developed a Flash mouse input simulator, so that there is no need for multi-touch hardware in order to develop your applications. Simply simulate multiple mouse inputs for multi-touch. This project is still in alpha phase and a download will become available with the next release of Snowflake Suite 1.7 for the NextWindow platform and camera based multi-touch solutions."
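The rotatable/scalable image described in the quote boils down to deriving a scale factor and a rotation angle from a pair of touch points. Snowflake Suite is a Flash product, but the underlying math is language-agnostic; here is a minimal Python sketch (my own illustration, not NUITEQ code) of how two moving touch points yield the scale and rotation to apply to an image:

```python
import math

def transform_from_touches(p1_old, p2_old, p1_new, p2_new):
    """Derive the scale factor and rotation angle (degrees) implied by two
    touch points moving from their old positions to their new positions.
    This is the core math behind a pinch-to-zoom/twist-to-rotate widget."""
    def vec(a, b):
        return (b[0] - a[0], b[1] - a[1])

    old = vec(p1_old, p2_old)  # vector between the two fingers, before
    new = vec(p1_new, p2_new)  # vector between the two fingers, after

    # Scale: ratio of finger separations (pinch out > 1, pinch in < 1).
    scale = math.hypot(*new) / math.hypot(*old)

    # Rotation: angle between the old and new inter-touch vectors.
    rotation = math.degrees(math.atan2(new[1], new[0]) -
                            math.atan2(old[1], old[0]))
    return scale, rotation

# Two fingers spread apart and twist a quarter turn:
scale, rot = transform_from_touches((0, 0), (100, 0), (0, 0), (0, 200))
print(scale, rot)  # scale 2.0, rotation 90.0 degrees
```

The same arithmetic works whether the two points come from real multi-touch hardware or from a simulated second mouse cursor, which is exactly why a mouse input simulator is enough for development.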


Below is a video of single-touch interaction for PresTop, from Omnivision:


PresTop offers interactive hardware and software solutions that can be used indoors as well as in outdoor environments.

Nov 15, 2008

Multi-touch and Flash: Links to resources, revisiting Jeff Han's TED 2006 presentation

Despite the increase in interest in systems that support multi-touch, multi-user multimedia interaction, there is a need for creative, tech-savvy types to develop innovative applications. Why? This technology has the potential to make a powerful impact on how people learn, communicate, solve "big picture" problems, and do their various jobs.

CNN's Magic Wall was one of the first applications to gain the attention of the masses, as it was used as an interactive map during the US presidential election process. Touch-screen interaction gained even more notice after the recent SNL parody by Fred Armisen.

If you think about it, the multi-touch applications you see on the news aren't much different from what you'd get from a "single-touch" program.

Fancy, yes. Truly innovative, no.

Just imagine a 3D multi-touch, multi-user, multimedia version of Google Search. I did. I put my sketches in my idea book and hurt my brain thinking about how it could be coded.

Jeff Han, the man behind Perceptive Pixel and CNN's magic wall, had much more up his sleeve when he demonstrated his work at TED 2006. Even if you've previously seen this video, it is worth looking at again. (I've provided a link to the transcript below.)



Transcript of Jeff Han's TED 2006 Presentation

This video presentation had a transformational effect on me as I watched for the first time. Jeff Han brought to life ideas that were similar to my own as a beginning computer student thinking about collaborative educational games and multimedia applications that could be played on interactive whiteboards.

Here are some selected quotes from the video:

"I really really think this is gonna change- really change the way we interact with the machines from this point on."

"Again, the interface just disappears here. There's no manual. This is exactly what you kind of expect, especially if you haven't interacted with a computer before."
"Now, when you have initiatives like the hundred dollar laptop, I kind of cringe at the idea that we're gonna introduce a whole new generation of people to computing with kind of this standard mouse-and-windows pointer interface. This is something that I think is really the way we should be interacting with the machines from this point on. (applause)"

"Now this is going to be really important as we start getting to things like data visualization. For instance, I think we all really enjoyed Hans Rosling's talk, and he really emphasized the fact that I've been thinking about for a long time too, we have all this great data, but for some reason, it's just sitting there. We're not really accessing it. And one of the reasons why I think that is, is because of things like graphics- will be helped by things like graphics and visualization and inference tools. But I also think a big part of it is gonna be- starting to be able to have better interfaces, to be able to drill down into this kind of data, while still thinking about the big picture here."

So now what?

A recent post by "Alex" on the AFlex World blog discusses a few solutions. Alex had a chance to meet with Harry van der Veen and Pradeep George from the NUI Group, and Georg Kaindl, a multi-touch interaction designer from the Technical University of Vienna. The focus of the discussion was to come up with ideas to encourage Adobe/Flash designers and developers to learn more about multi-touch technology and interaction, and take steps to create innovative applications.

I especially like the following quote from the post:

"...A quick quote from our conversations: “When our children will walk up to a display, they will touch it and expect to do something.”

As a techie and a school psychologist, I see an immediate need for innovative applications. I know that there is a built-in market in the schools, at least for low-cost applications. Despite economic constraints, many school districts continue to invest in interactive whiteboards (IWB's). They are cropping up in preschool and K-12 settings, and teachers are searching for more than what's currently available.

Interactive, collaborative applications are needed in fields such as health care, patient education, finance & economics, urban planning, civil engineering, travel & tourism, museums & exhibitions, special events, entertainment, and more.

Smart Technologies, the company behind SmartBoards, has a new interactive multi-touch, multi-user table designed for K-6 education, the Smart Table. Hewlett Packard has several versions of the TouchSmart PC, which can support at least dual-touch, if not multi-touch, multi-user applications. There are numerous all-in-one large screen displays on the market that support multi-touch and multi-user interaction.

Quotes from Harry van der Veen, of Multitouch NL:

"In 10 years from now when a child walks up to a screen he expects it to be a multi-touch screen with which he can interact with by using gestures."

"...multi-touch screens will be as common as for children is the internet nowadays, as common as mobile phones are for us."


Here is a quote from a conversation I had with Spencer, who blogs at TeacherLED.

"It was interesting this week as I was in a classroom with a teacher who I've not worked with before... he had 2 students using the whiteboard who kept touching it together by mistake. The teacher, exasperated, said to himself, "Why can't they make these things to accept 2 touches without going crazy!"

Proof of the demand! I think you are right when teachers spot the limitations and then see the technology on visits to museums, that might stimulate demand."


Spencer uses Flash to create cool interactive mini-applications, mostly for math, that teachers (and students) love to use on interactive whiteboards. (He's interested in multi-touch, too.)


So what are we waiting for?!

Related:
Natural User Interface Europe AB meets Adobe
Georg's Touche Framework
NUI Group
TeacherLED
Interactive Touch-Screen Technology, Participatory Design, and "Getting It".
Hans Rosling's 2007 TED talk

Jan 28, 2011

"Microsoft is Imagining a NUI Future". You can, too!

Microsoft is Imagining a NUI Future
Steve Clayton, Next at Microsoft Blog, 1/26/11


"Our research shows that the vast majority of people polled in both developed and emerging markets see great potential for NUI applications beyond entertainment. This is especially true in China and India, where 9 out of 10 respondents indicate they are likely to use NUI technology across a range of lifestyle areas – from work, education and healthcare, to social connections, entertainment and the environment. We believe that taking technology to the next billion can be aided by NUI – making technology more accessible and more intuitive to a wider audience". - Steve Clayton, Microsoft


The people at Microsoft don't own the concept!  I'm a member of the NUI Group (May, 2007) and SparkOn. Both are online communities where you can find people who live and breathe NUI, learn about their work, and even share designs and code. If you are intrigued by NUI - as a designer, developer, or user - please join us.


Note: 
I've been an evangelist and cheerleader for the NUI cause for many years. If you search this blog for "post-WIMP", "NUI", "multi-touch", "gesture", "off-the-desktop", "natural user interaction", "natural user interface", or even "DOOH", you'll be provided with an overwhelming number of posts that include videos, photographs, and links to NUI-related resources, including scholarly articles. There is a small-but-growing number of people from many disciplines quietly working on NUI-related projects.


RELATED
Microsoft Plans a Natural Interface Future Full of Gestures, Touchscreens, and Haptics
Kit Eaton, Fast Company, 1/26/11
Rethinking Computing (video)
Craig Mundie, Microsoft
Interactive Touch-Screen Technology, Participatory Design, and "Getting It" - Revised
Touch Screen Interaction in Public Spaces:  Room for Improvement, if "every surface is to be a computer".

Dec 12, 2010

LM3LABS' Useful Map of Interactive Gesture-Based Technologies: Tracking fingers, bodies, faces, images, movement, motion, gestures - and more

Nicolas Loeillot, of LM3LABS, has been ahead of the natural user interaction/interface game for many years as his company has expanded. He's done quite a bit of deep thinking about the work of his company, and has used this wisdom to create a nice concept map that describes how LM3LABS' solutions fit into the world of gesture-based control and interaction:




In my opinion, this chart would make a great template for mapping out other natural interaction applications and products!


Here is the description of the concepts outlined in the chart:


"If all of them belong to the “gesture control” world, the best segmentation is made from 4 categories:
  • Finger tracking: precise finger tracking, it can be single touch or multi-touch (this latest not always being a plus). Finger tracking also encompasses hand tracking which comes, for LM3LABS products, as a gestures.
  • Body tracking: using one’s body as a pointing device. Body tracking can be associated to “passive” interactivity (users are engaged without their decision to be) or “active” interactivity like 3D Feel where “players” use their body to interact with content.
  • Face tracking: using user face as a pointing device. It can be mono user or multiple users. Face tracking is a “passive” interactivity tool for engaging user in an interactive relationship with digital content.
  • Image Tracking: Augmented Reality (AR) lets users use images (flyers, real products, t-shirts, faces,…) to interact with digital content. AR can be markerless or marker-based. Markerless technology has advantages but marker-based AR is easier for users to understand. (Please note here that Markerless AR is made in close collaboration with AR leader Total Immersion)."  -LM3LABS

If you are interested in this subject and want to view some good examples of off-the-desktop interfaces and interactions, take a look at the LM3LABS blog, as well as Nicolas Loeillot's Vimeo channel. Also take a look at the sample of posts I've written about LM3LABS over the last few years - the links are at the end of this post.

I love LM3LABS' Interactive Balloon:

Interactive balloons from Nicolas Loeillot on Vimeo.


Interactive Balloons v lm3 labs v2 (SlideShare)



Background
I first discovered LM3LABS when I was taking a VR class and researching interactive, immersive large displays in 2005 or 2006.  Back then, there wasn't much information about this sort of technology.  A lot has changed since then!


I've learned quite a bit from watching LM3LABS (and others) grow, given my passion for postWIMP interactive technology and my commitment to blogging about this subject.   Nicolas has really worked hard in this arena.  As early as 2005, LM3LABS was working with Scala to provide "smart" interactive displays, and his company's applications have been supported by computer vision technologies for many years, allowing for gesture-based, or "touch-less" interaction, as demonstrated by the Catchyoo Interactive Table.  This application caught my eye back in early 2007, when I was working on projects for large interactive displays for my HCI and Ubicomp classes, and was thinking about creating a table-top application.


My hunch is that LM3LABS has set the foundation for further growth in the future, given the lessons they've learned by taking risks with postWIMP technologies over the past few years!


Previous Blog Posts Related to LM3LABS:
Interactive Retail Book Celebrating the History of Christian Dior from 1948-2010 (video)
Ubiq Motion Sensor Display at Future Ready Singapore (video)
Interactive Virtual DJ on a Transparent Pane, by LM3LABS and Brief Ad
LM3LABS' Catchyoo Interactive Koi Pond: Release of ubiq'window 2.6 Development Kit and Reader
A Few Things from LM3LABS
LM3LABS, Nicolas Loeillot, and Multi-touch
More from LM3LABS: Ubiq'window and Reactor.cmc's touch screen shopping catalog, Audi's touch-less showroom screen, and the DNP Museum Lab.


About LM3LABS
"Founded in 2003 by a team of passionate researchers, engineers, designers, and marketers from various international backgrounds, focused on fast transformation of innovation into unique products, LM3LABS is a recognized pioneer in computer vision-based interactivity solutions. Keeping a strong customer focus, LM3LABS' team of unique people pioneers new directions, explores new concepts, new technologies and new interactions.  Engaging, playful and magic, LM3LABS' products and solutions are always scalable and reliable"

info@lm3labs.com

Note to readers:
Over the past couple of years there has been an explosion of postWIMP technologies and applications, and with this pace, it has been difficult for me to keep abreast of it all. There is quite a bit I miss, given my full time job and daily life!

I welcome information about postWIMP interactive technologies and applications from my readers.  Due to time constraints, not interest, I am not always able to post about a topic as soon as I'd like.  That is OK, as my intention is not to be the first blogger to spread the latest tech news.  I like to dig in deep when I can and make connections between innovative, interesting technologies and the people and ideas behind them. 




Mar 29, 2011

SIFTEO, the next-gen Siftables! (Tangible User Interfaces for All)

Despite my enthusiasm for TUI's, I somehow missed the news about the transformation of Siftables into a commercial version, Sifteo:

Sifteo Inc. Debuts Sifteo™ Cubes - A New Way To Play (PDF)



"Sifteo cubes are 1.5 inch computers with full-color displays that sense their motion, sense each other, and wirelessly connect to your computer. You, your friends, and your family can play an ever-growing array of interactive games that get your brain and body engaged.
Sifteo’s initial collection of titles includes challenging games for adults, fun learning puzzles for kids, and games people can play together." -Sifteo website
For more information, see the Sifteo website, blog, and YouTube channel. If you can't wait to get your own set, take a look at Josh Blake's Sifteo Cube Unboxing Video!

RELATED
About two years ago, I was interviewed about my thoughts on the interactive, hands-on, programmable cubes, then called Siftables, for an article published in IEEE's Computing Now magazine: Siftables Offer New Interaction Mode (James Figueroa, Computing Now, 3/2009).

For those of you who'd like more information about tangible user interfaces (TUIs) and the development of Siftables, I've copied my 2009 post, Tangible User Interfaces, Part I: Siftables, below:

TANGIBLE USER INTERFACES, PART I: SIFTABLES (2009)
In 1997, the vision of tangible user interfaces, also known as TUI's, was outlined by Hiroshi Ishii and Brygg Ullmer of the Tangible Media Group at MIT, in their paper, "Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms" (pdf). According to this vision, "the goal of Tangible Bits is to bridge the gaps between both cyberspace and the physical environment, as well as the foreground and background of human activities." This article is a must-read for anyone interested in "new" interactive technologies.

The pictures in the article of the metaDesk, transBoard, activeLENS, and ambientRoom, along with the references, are worth a look, for those interested in this seminal work.

Another must-read is Hiroshi Ishii's 2008 article, Tangible Bits: Beyond Pixels (pdf). In this article, Ishii provides a good overview of TUI concepts as well as the contributions of his lab to the field since the first paper was written.

Related to Tangible User Interface research is the work of the Fluid Interfaces Group at MIT. The Fluid Interfaces Group was formerly known as the Ambient Intelligence Group, and many of the group's projects incorporate concepts related to TUI and ambient intelligence. 



According to the Fluid Interfaces website, the goal of this research group is to "radically rethink the human-machine interactive experience. By designing interfaces that are more immersive, more intelligent, and more interactive we are changing the human-machine relationship and creating systems that are more responsive to people's needs and actions, and that become true "accessories" for expanding our minds."

The Siftables project is an example of how TUI and fluid interface (FI) interaction can be combined. Siftables is the work of David Merrill and Pattie Maes, in collaboration with Jeevan Kalanithi, and was brought to popular attention through David Merrill's recent TED talk:

David Merrill's TED Talk: Siftables - Making the digital physical
-Grasp Information Physically

"Siftables aims to enable people to interact with information and media in physical, natural ways that approach interactions with physical objects in our everyday lives. As an interaction platform, Siftables applies technology and methodology from wireless sensor networks to tangible user interfaces. Siftables are independent, compact devices with sensing, graphical display, and wireless communication capabilities. They can be physically manipulated as a group to interact with digital information and media. Siftables can be used to implement any number of gestural interaction languages and HCI applications....
Siftables can sense their neighbors, allowing applications to utilize topological arrangement... No special sensing surface or cameras are needed."
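The neighbor-sensing idea in the quote is easy to picture in code. The sketch below is purely illustrative (it is not the Siftables/Sifteo SDK): it models tiles at positions on a unit grid and derives the topological arrangement, i.e. the adjacency relation, that an application could react to:

```python
# Hypothetical model of neighbor-aware tiles (not Siftables SDK code).

def neighbors(positions):
    """Given tile positions on a unit grid as {tile_id: (x, y)}, return
    the set of adjacent pairs -- tiles touching horizontally or
    vertically (Manhattan distance of exactly 1)."""
    adjacent = set()
    ids = list(positions)
    for i, a in enumerate(ids):
        ax, ay = positions[a]
        for b in ids[i + 1:]:
            bx, by = positions[b]
            if abs(ax - bx) + abs(ay - by) == 1:
                adjacent.add(frozenset((a, b)))
    return adjacent

# Three tiles in a row: A touches B, B touches C, but A and C do not touch.
layout = {"A": (0, 0), "B": (1, 0), "C": (2, 0)}
print(neighbors(layout))
```

Real Siftables do this in hardware, each cube sensing which cubes abut its edges, so an application only ever sees the resulting adjacency graph, much like the return value here.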





Siftables Music Sequencer from Jeevan Kalanithi on Vimeo.

http://web.media.mit.edu/~dmerrill/images/music-against-wood-320x213.jpg


More about Siftables:
Rethinking display technology (Scott Kirsner, Boston Globe, 7/27/08)
TED: Siftable Computing Makes Data Physical
Siftables: Toward Sensor Network User Interfaces (pdf)

It seems that people really like the Siftables concept, or they don't see the point. I found the following humorous critique of Siftables on YouTube:

"Imagine if all the little programs you had on your iphone were little separate chicklets in your pocket.
You'd lose em.
Your cat would eat em.
You'd vacuum them up.
They'd fall down in the sofa.
They'd be all over the car floor.
You'd throw them away by mistake..."

In my opinion, it is exciting to learn that perhaps some of this technology has the potential of becoming mainstream.


Dec 26, 2009

DIY multi-touch...

If you follow this blog, you know I like to share what people are doing with multi-touch and related natural user interfaces/interaction. In this post, I'd like to share an article about two students who decided to build and market a multi-touch table. The article below explains the story in depth, and the video shows the nuts and bolts.


Enterprising roommates build multi-touch LCD, market their business to West Coast
Walter Valencia, Collegiate Times 12/1/09



According to the above article, Aaron Bitler and Brady Simpson were inspired by CNN's Magic Wall during the 2008 election. Bitler and Simpson learned more about natural user interface/interaction during a presentation in a business class that featured a video about the Microsoft Surface table and natural user interface technologies. They formed a company, 3M8, to build and market multi-touch displays/tables.


Vision x32 from Aaron Bitler on Vimeo.


From what I can tell, it looks like Bitler and Simpson relied on the DIY information and support from the NUI Group website to carry out their ideas. Bitler and Simpson met with representatives of 22Miles, a company located in San Jose that provides interactive solutions, including multi-touch, for web, mobile, and touch screen implementations.

I'll post more about 22Miles in an upcoming post.

Until then, take a look at 22Miles' promo video, featuring a huge 3D interactive multi-touch heart:

Oct 12, 2010

Update on Josh Blake, newly designated Microsoft Surface MVP

Josh Blake is the Tech Lead of the InfoStrat Advanced Technology Group in DC. He has been creating multi-touch applications for Microsoft's Surface multi-user table-tops for a while. Recently, his team built a suite of applications designed for use by young children at a museum. Below is a video demonstration of some of this work. It really looks exciting!


Microsoft Surface and Magical Object Interaction

Josh Blake's blog is called Deconstructing the NUI. For those of you new to this blog, NUI stands for Natural User Interface (also known as Natural User Interaction). See his post, Microsoft Surface and Magical Object Interaction, for more information!

RELATED
Here is a plug for Josh Blake's book, "Multitouch on Windows"

Book Ordering Information

FYI: InfoStrat is hiring WPF experts as well as Microsoft CRM and Microsoft SharePoint experts.


Microsoft Surface MVPs
Dr. Neil Roodyn
Dennis Vroegop
Rick Barraza
Joshua Blake