
Oct 25, 2009

GDIF: Gesture Description Interchange Format, a tool for music-related movements, actions, and gestures.

There has been a flurry of work in the computer music technology world related to recent developments in interactive display technology and multi-touch and gesture interaction. I came across a link to the GDIF website while searching for information about interactive music and the use of multi-touch technologies for a future blog post.

So what is GDIF?

"The Gesture Description Interchange Format (GDIF) is being developed as a tool for streaming and storing data of music-related movements, actions, and gestures.  Current general purpose formats developed within the motion capture industry and biomechanical community (e.g. C3D) focus mainly on describing low-level motion of body joints.  We are more interested in describing gesture qualities, performer-instrument relationships, and movement-sound relationships in a coherent and consistent way.  A common format will simplify working with different software, platforms and devices, and allow for sharing data between institutions."  (The Jamoma environment is used to prototype GDIF.)
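To make the layered idea concrete, here's a rough sketch (in Python) of what streaming the same movement data at two description levels might look like. The address names are just my illustration, not the official GDIF namespace - see the GDIF site for the actual proposals.

```python
# A minimal sketch of the layered idea behind GDIF: the same motion data can
# be exposed at a "raw" device level and at a higher "descriptive" level.
# The OSC-style addresses below are illustrative only.

def gdif_messages(device, accel_xyz, quantity_of_motion):
    """Return OSC-style (address, value) pairs for one frame of motion data."""
    x, y, z = accel_xyz
    return [
        # Raw layer: unprocessed sensor values, tied to a specific device.
        (f"/gdif/raw/{device}/accel", (x, y, z)),
        # Descriptive layer: device-independent movement qualities.
        ("/gdif/descriptive/quantity_of_motion", quantity_of_motion),
    ]

frame = gdif_messages("wiimote", (0.1, -0.4, 0.98), 0.37)
for address, value in frame:
    print(address, value)
```

Separating the layers like this is what lets institutions share data: a raw accelerometer stream is device-specific, while a "quantity of motion" stream means the same thing no matter which sensor produced it.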


Alexander Refsum Jensenius is the man who initiated the GDIF project.  He's written a variety of articles about music, gestures, movement, and emerging technologies.  


Here's Alexander's bio: "Alexander (BA, MA, MSc, PhD) is a music researcher and research musician working in the fields of embodied music cognition and new interfaces for musical expression (NIME) at the University of Oslo and at the Norwegian Academy of Music. He studied informatics, mathematics, musicology, music performance and music technology at UiO, Chalmers, UC Berkeley and McGill. Alexander is active in the international computer music community through a number of collaborative projects, and as the initiator of GDIF. He performs on keyboard instruments and live electronics in various constellations, including the Oslo Laptop Orchestra (OLO)."




Related Publications
Godoy, R. I., E. Haga, and A. R. Jensenius (2006). Playing 'air instruments': Mimicry of sound-producing gestures by novices and experts. In S. Gibet, N. Courty, and J.-F. Kamp (Eds.), Gesture in Human-Computer Interaction and Simulation, GW 2005, Volume LNAI 3881, pp. 256–267. Berlin: Springer-Verlag.
Jensenius, A. R. (2009). Motion capture studies of action-sound couplings in sonic interaction. STSM COST Action SID report. fourMs lab, University of Oslo.
Jensenius, A. R. (2007). Action - Sound: Developing Methods and Tools to Study Music-related Body Movement. PhD thesis, Department of Musicology, University of Oslo, Norway.
Jensenius, A. R., K. Nymoen, and R. I. Godoy (2008). A Multilayered GDIF-Based Setup for Studying Coarticulation in the Movements of Musicians. In Proceedings of the International Computer Music Conference, 24-29 August 2008, Belfast.
Jensenius, A. R., T. Kvifte, and R. I. Godoy (2006). Towards a gesture description interchange format. In N. Schnell, F. Bevilacqua, M. Lyons, and A. Tanaka (Eds.), NIME '06: Proceedings of the 2006 International Conference on New Interfaces for Musical Expression, Paris, pp. 176–179. Paris: IRCAM - Centre Pompidou.
Kvifte, T. and A. R. Jensenius (2006). Towards a coherent terminology and model of instrument description and design. In N. Schnell, F. Bevilacqua, M. Lyons, and A. Tanaka (Eds.), Proceedings of New Interfaces for Musical Expression, NIME 06, IRCAM - Centre Pompidou, Paris, France, June 4-8, pp. 220–225. Paris: IRCAM - Centre Pompidou. [PDF]
Marshall, M. T., N. Peters, A. R. Jensenius, J. Boissinot, M. M. Wanderley, and J. Braasch (2006). On the development of a system for gesture control of spatialization. In Proceedings of the 2006 International Computer Music Conference, 6-11 November, New Orleans. [PDF]

RELATED
"Sonic Interaction Design is the exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic/emotional qualities in interactive contexts."
SID Action has four working groups:
WG1: Perceptual, cognitive, and emotional study of sonic interactions
WG2: Product sound design
WG3: Interactive art and music
WG4: Sonification



    "SoundHack was my main thing for a long time, and I poured a lot of effort into it. It was the place I put my ideas. I did have something of a mission with SoundHack. I wanted to take some computer music techniques that were only used in academia, and get them out there so that all types of musicians could use them."-Tom Erbe  SoundHack Spectral Shapers


Csound Blog "Old School Computer Music"
"Csound is a sound and music synthesis system, providing facilities for composition and performance over a wide range of platforms. It is not restricted to any style of music, having been used for many years in the creation of classical, pop, techno, ambient, experimental, and (of course) computer music, as well as music for film and television."-Csound on Sourceforge


Quote from Dr. Richard Boulanger (Father of CSound):
"For me, music is a medium through which the inner spiritual essence of all things is revealed and shared. Compositionally, I am interested in extending the voice of the traditional performer through technological means to produce a music which connects with the past, lives in the present and speaks to the future. Educationally, I am interested in helping students see technology as the most powerful instrument for the exploration, discovery, and realization of their essential musical nature - their inner voice."


Upcoming post about innovations at Stantum:
I'll be focusing on Stantum and its music and media technologies division, JazzMutant, in my next post. It is interesting to note that the co-founders of Stantum, Guilliam Largilleir and Pascal Joget, have a background in electronic music.  Guilliam specializes in multi-modal user interfaces and human-machine interface technologies. Pascal has a background in physics and electronics, and has worked as a sound engineer.


My music back-story:



The very first computer-related course I took was Computer Music Technology (in 2003). I play an electronic MIDI/digital keyboard and had previously tried to teach myself a few things, long before computers and related technologies were "easy" for me to figure out.  During the mid-90's, I tried my hand at Dr. Richard Boulanger's Csound, and tried to acquaint myself with tools from Cycling '74, but I gave up.  Not long after that, I bought the first version of MOTU's Freestyle, which worked nicely on my Performa 600, hooked up to my Ensoniq 32, after the nice people at MOTU sent me an update that was compatible with my set-up.  Later on, I came across Tom Erbe's SoundHack freeware.


A lot has changed since then! 




Apr 4, 2009

Put-That-There: Voice and Gesture at the Graphics Interface and more Blasts from the 1980's HCI Past


bigkif's information about "Put-That-There" gives a good description of this video:

Put-That-There at CHI '84

"In 1980, Richard A. Bolt from MIT wrote Put-that-there : voice and gesture at the graphics interface. It was a pioneering multimodal application that combined speech and gesture recognition.

This demo shows users commanding simple shapes about a large-screen graphics display surface. Because voice can be augmented with simultaneous pointing, the free usage of pronouns becomes possible, with a corresponding gain in naturalness and economy of expression. Conversely, gesture aided by voice gains precision in its power to reference."

Richard A. Bolt "Put-That-There": Voice and Gesture at the Graphics Interface
(pdf) SIGGRAPH '80

Here is another blast from the '80's:

Kankaanpaa, A. FIDS: A Flat-Panel Interactive Display System. IEEE Computer Graphics and Applications, March 1988. (Nokia Information Systems)

"Although the needs and expectations of these various users are very diverse, they all have a common requirement: more natural and easier methods for communicating with the computer than are available today. Furthermore, they do not want to interact with the computer; they want to communicate with the application they are using. They do not want to use computer jargon; they want to use the same natural methods that they use when they perform the same tasks without a computer."

“We believe that only three of the flat-panel technologies described above, namely LCD, EL, and plasma, will be sufficiently advanced for mass production within this decade.”

Bill Buxton was working on multi-touch and gesture interaction in the 1980's, but his dreams did not become a reality until this century, for a variety of reasons. He shared his thoughts about the paradox of the speed of technology in a presentation at the 2008 IEEE International Solid-State Circuits Conference: "Surface and Tangible Computing, and the 'Small' Matter of People and Design" (pdf)

‘Carrying on from an earlier thesis in our department (Mehta, 1982), we built a tablet that was sensitive to simultaneous touches at multiple locations, and with the ability to sense the degree of each touch independently (Lee, Buxton & Smith, 1984). We stopped the work in late 1984 when I saw a much better implementation at Bell Labs – one that was transparent and mounted over a CRT. The problem was that they never released the technology, so, the whole multi-touch venture went dormant for 20 years. But, I never stopped dreaming about it. (Lesson: don’t stop your research just because someone else is way ahead of you. It might be transitory, and anyhow, remember the story of the tortoise and the hare.)

“I spoke earlier about the paradox in the speed of technology development: it goes at rocket speed, but at that of a glacier as well; simultaneously! In the perfect world, this would be ideal: we could go through several iterations of ideas so that by the time the new paradigms of interaction, such as Surface and Tangible computing, are ready for prime time, everything will be in place. But, the rapid iteration is more directed at supporting the old paradigms faster and cheaper, rather than helping shape the new ones. The reasons are not hard to understand. From the perspective of circuit design, the problems are really hard. So, one has to have one’s head down working flat out to get anything done. But, there is a side of me that motivated this paper that asks, If it is so hard, then isn’t it worth making sure that the things one is working on are things that are worthy of one’s hard-earned skills?”

SOMEWHAT RELATED

Bill Buxton's Haptic Input References
(pdf)

Nov 19, 2008

Video of touch interaction on a HP TouchSmart, with NextWindow's Gesture Server Technology

Here is a short video clip of some TouchSmart interaction:



The video shows the new NextWindow Gesture Server Application.

Info from the NextWindow website:

"NextWindow Gesture Server Application in conjunction with a NextWindow touch screen enables two-touch gestures to be used on the Microsoft Windows Vista desktop and certain applications.

You perform a gesture by double-tapping or dragging two fingers on the touch surface. The Gesture Server interprets these actions as commands to the operating system. For example a two-touch vertical drag on the Vista desktop can adjust the computer's audio volume control up or down as required."


Also from the website:

Vertical scroll: drag two fingers up or down the touch screen.

Horizontal scroll: drag two fingers left or right on the touch screen.

Zoom: move two fingers apart or together.

Double tap: double-tap two fingers on the screen.

"You can enable or disable the two-touch functionality and adjust the sensitivity of each of the four two-touch gestures. You can also select the command that is executed with the double-tap gesture."
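Out of curiosity, here's a rough sketch of how a gesture server might tell these two-touch gestures apart from raw finger positions. This is just my own illustration of the general technique, not NextWindow's actual algorithm:

```python
import math

# Sketch: classify a two-finger gesture from start and end touch positions.
# If the distance between the fingers changes markedly, call it a zoom;
# otherwise use the fingers' common movement direction as a scroll.

def classify_two_touch(start, end, zoom_threshold=20.0):
    """start/end: ((x1, y1), (x2, y2)) finger positions in pixels."""
    d0 = math.dist(start[0], start[1])  # finger separation at the start
    d1 = math.dist(end[0], end[1])      # finger separation at the end
    if abs(d1 - d0) > zoom_threshold:
        return "zoom-in" if d1 > d0 else "zoom-out"
    # Average displacement of both fingers (screen y grows downward).
    dx = sum(e[0] - s[0] for s, e in zip(start, end)) / 2
    dy = sum(e[1] - s[1] for s, e in zip(start, end)) / 2
    if abs(dy) >= abs(dx):
        return "scroll-down" if dy > 0 else "scroll-up"
    return "scroll-right" if dx > 0 else "scroll-left"

print(classify_two_touch(((100, 100), (200, 100)), ((100, 200), (200, 200))))  # both fingers dragged down
print(classify_two_touch(((100, 100), (200, 100)), ((50, 100), (250, 100))))   # fingers spread apart
```

A real gesture server would run this kind of test continuously over a stream of touch frames, with per-gesture sensitivity settings like the ones described above.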

Apr 26, 2011

Multi-touch and Gesture Interaction News and Updates You Might Have Missed (Part I)

Over the past couple of months, I've come across many interesting links related to multi-touch and gesture interaction, but I haven't had time to devote a thoughtful post to each one.  "Part I" is a collection of experimental, commercial, and non-commercial efforts by a variety of creative technologists, with a smattering of industry news that might be of interest to IMT readers.


Ideum's MT55 HD Multitouch Table 4/19/11

New MT55 HD Multitouch Table Now Shipping,  Jim Spadaccini, Ideum Blog 4/11/11

Smithsonian American Art Museum to Open Education Center, Sara Beladi, NBC Washington News, 4/4/11 (Rumor has it that the Smithsonian American Art Museum might include touch and multi-touch displays in its plans for a new education center.  The center was funded by an anonymous $8 million gift.)

Bill Buxton, Microsoft Research, 4/7/11 - Includes lots of pictures, links to videos, and more information about what might be the first touch screen.  Also see Bill Buxton's companion website, Multi-Touch Systems that I have Known and Loved, updated 3/21/11.  Bill Buxton knows all (almost!)


"The MTbiggie uses the "Front Diffused Illumination" multitouch technique, with ambient infrared light and a DIY infrared webcam. The MTbiggie is similar to the MTmini, but includes a projected image and infrared webcam (rather than a normal webcam)...The MTbiggie isn’t the most stable and robust setup, but it is the easiest to build. To see other methods of building more stable multitouch displays, view the full multitouch display list." -Seth Sandler

(Also check out NodeBeat, a multi-touch music/audio sequencer/generator app by Seth Sandler and Justin Windle)

Intuilab, 4/13/11
"IntuiLab, a global leader in surface computing software applications, today announced support for the revolutionary Microsoft Kinect device across its full line of IntuiFace products and solutions including IntuiFace Presentation and IntuiFace Commerce...Microsoft Kinect brings distant gesture control to interactive solutions. These gesture controls allow users to interact with displayed digital assets from a distance at their own pace and path – for example, browsing through a large quantity of products in a store catalog or manipulating 3D models (such as a mobile phone) – all without having to actually touch the screen..."  -IntuiLab (Take a look at the IntuiLab team- an interactive page!)




Sparkon:  Videos and links related to multi-touch and gesture-based applications



Official Kinect SDK to be Open Source, Josh Blake, Deconstructing the NUI, 4/18/11
(This bit of news excited me, but don't get your hopes up. If anyone knows what will happen with the Kinect SDK, please leave a comment.)
"Update 4/18 7:34pm: Mary Jo Foley picks up this story, but the Microsoft spokesperson she talked to denied that the Kinect SDK will be open source. As she notes, Microsoft has pulled 180’s before regarding Kinect. After spokespeople initially were hostile to the idea of Kinect hacking, Xbox executives later embraced the idea that people are using Kinect for non-gaming purposes on the PC. Let’s hope Microsoft stays open to this idea." -Josh Blake

Kenrick Kin, Tom Miller, Bjoern Bollensdorff, Tony DeRose, Bjoern Hartmann, Maneesh Agrawala (Pixar Online Library)

Flight Race Game on 3DFeel lm3Labs, 4/18/11


JazzMutant Lemur Version 2: "The only multi-touch and modular controller for sequencers, synthesizers, virtual instruments, vjing and lights, now even better."


Harry van der Veen's Multitouch Blog (NUITEQ)


Stantum "Unlimited Multi-Touch" Latest News

At Immersive Labs, Ads Watch Who Looks at Them Amy Lee, Huffington Post, 4/26/11 

Immersive Labs

Hard Rock Cafe International Using NextWindow Touch Screens:  "Rock Wall Solo displays enhance music lovers' experience in Seattle, Dallas, Detroit and Berlin" 4/12/11 (Full press release pdf)
Music on Touch Screens (NextWindow)

Razorfish: Thoughts on MIX 11 ,James Ashley, Razorfish Blog, 4/20/11  Also see: Razorfish Lab's Prototypes




"The multitouch microscope brings new dimensions into teaching and research. Researchers at the Institute for Molecular Medicine Finland (FIMM) and Multitouch Ltd have created a hand and finger gesture controlled microscope. The method is a combination between two technologies: web-based virtual microscopy and a giant size multitouch display."
"The result is an entirely new way of performing microscopy: by touching a table- or wall-sized screen the user can navigate and zoom within a microscope sample in the same way as in a conventional microscope. Using the touch control it is possible to move from the natural size of the sample to a 1000-fold magnification, at which cells and even subcellular details can be seen."  -Multitouch.fi  Also see the Multitouch website.



Big Size Multitouch Display Turned into a Microscope, Microscopy-News, 3/28/11
Mac OS X 10.7 Lion: new multi-touch gestures, Dock integration for Expose, Launchpad, Mission Control, AppleInsider, 4/14/11


Vectorform App featured in Royal Caribbean's Video Promotion: James Brolin, Dean Cain get hands-on with Vectorform app Alison Weber, Vectorform Blog, 3/3/11


3M Touch Systems's YouTube Channel

Social Mirror 3D Gestural Display, Now Using Kinect: Snibbe Interactive




Mar 11, 2013

Leap Motion: My Dev Kit Arrived - Now What?! Thoughts About "NUI" Child-Computer-Tech-Interaction - and More



My Leap Motion developer kit arrived last week. I carefully unboxed the small device and tried out the demo apps that came with the SDK.  I'm doing more looking than leaping at this point.

I'd like to create a simple cause-and-effect music, art and movement application for my 2-year-old grandson, knowing that he'll be turning three near the end of this year.  It would be nice if my app could provide young children with enough scaffolding to support gameplay and learning over a few years of development.
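Here's a rough sketch of the kind of cause-and-effect mapping I have in mind, independent of any particular SDK. The height range and the choice of scale are my own assumptions, not Leap Motion API calls:

```python
# Sketch: normalize a tracked hand height, then snap it to a note in a
# pentatonic scale so that anything a small child does sounds musical.
# The millimeter range and base note are illustrative assumptions.

PENTATONIC = [0, 2, 4, 7, 9]  # semitone offsets of a major pentatonic scale

def hand_height_to_midi(height_mm, low=100.0, high=500.0, base_note=60):
    """Map a hand height (mm above the sensor) to a MIDI note number."""
    t = min(max((height_mm - low) / (high - low), 0.0), 1.0)  # clamp to 0..1
    step = int(t * (len(PENTATONIC) * 2 - 1))  # spread over two octaves of steps
    octave, degree = divmod(step, len(PENTATONIC))
    return base_note + 12 * octave + PENTATONIC[degree]

for h in (100, 300, 500):
    print(h, "mm ->", hand_height_to_midi(h))
```

Snapping to a pentatonic scale is the "scaffolding" part: a toddler waving a hand anywhere in range always lands on a consonant note, while an older child can start aiming for particular pitches.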

Now that I'm a grandmother, I've spent some time thinking about what the evolution of NUI will mean for young children like my grandson.   Family and friends captured his first moments after birth with iPhones, and shared across the Internet.  Born into the iWorld, he knows how to use an iPad or smart phone to view his earlier digital self on YouTube, without ever touching a mouse or a physical keyboard.

The little guy is pretty creative in his method of interacting with technology, as I've informally documented on video.   He was seven months old when he first encountered my first iPad.  It was fingers-and-toes interaction from the start.  

In the first picture below,  he's playing with NodeBeat.  In the second picture, he's 27 months old, experimenting with hand and foot interaction, on a variety of apps.



My grandson is new to motion control applications, so I'm just beginning to learn what he likes, and what he is capable of doing.  A couple of weeks ago, we played River Rush, from the Kinect Adventures game. He loved jumping up and down as he tried to hit the adventure pins. Most of the time, he kept jumping right out of the raft!  (I think next time we'll try Kinect Sesame Street TV or revisit Kinectimals.)


One of the steps I'm taking to prepare for my Leap Motion adventure is to take a look at what people have done with it so far.  At least 12,000 developer kits have been released, so hopefully there will be some interesting apps to go along with the retail version of Leap Motion when it is released at Best Buy on May 19th of this year.

One app I really like is Adam Somers' AirHarp, featured in the video clip below:


I also like the idea behind the following app, developed by undergraduate students:

Social Sign: Multi-User sign language gesture translator using the Leap Motion Controller (git.to/socialSign)
 
"Built at the PennApps Spring 2013 hackathon, Social Sign is a friendly tool for learning sign language! By using the Leap Motion device, the BadApples team implemented a rudimentary machine learning algorithm to track and identify American Sign Language from a user's hand gestures."

"Social Sign visualizes these hand gestures and broadcasts them in textual and visual representations to other signers in a signing room. In a standard chat room fashion, the interface permits written communication but with the benefit of enhanced learning in mind. It's all about learning a new way to communicate."-BadApples Team



There are a few NUI-focused tech companies that have experimented with Leap Motion. Today, I received a link to the following video clip from Joanna Taccone of IntuiLab, featuring their most recent work:
Gesture recognition with Leap Motion using IntuiFace Presentation

"Preview of our work with the Leap Motion controller. In the same spirit as our support for Microsoft Kinect, we have encoded true gesture support, not just mouse emulation, for the creation of interactive applications by non-programmers. The goal is to hide complexity from designers using our product, IntuiFace Presentation (IP). Through the use of IP's trigger/action syntax, designers simply select a gesture as a trigger - Swipe Left, Swipe Right, Point, etc. - and associate that gesture with an action like "turn the page" or "rotate the carousel". As you can see in this video, it works quite well. :-) We will offer Leap support as soon as it ships." -IntuiLab



Below is a demonstration of guys playing Drop Chord, a collaboration between Leap Motion and Double Fine.  From the video, you can tell that they had a blast!

Here is an excerpt from the chatter:  "The thing is that everyone just looks cool..Yeah, I know, it doesn't matter what you are doing...it's got the right amount of speed-up-slow-down stutter-y stuff...it is like a blend of art and science.."

According to the website, Drop Chord is "a music-driven score challenge game for the Leap Motion controller, coming soon for PC, Mac, & iOS from the creators of Kinect Party."

The following video is a demonstration of the use of Leap Motion to control an avatar and other interaction in Second Life:



Below are a few more videos featuring Leap Motion:


Control Your Computer With a Chopstick: Leap Motion Hands On (Mashable)


The Leap Motion Experience at SXSW 2013


LEAP Motion demo: Visualizer, Windows 8, Fruit Ninja, and More...



RELATED
Air Harp for Leap Motion, Responsive Interaction
Leap Motion and Double Fine team on Dropchord, give air guitar skills an outlet
John Fingas, Engadget, 3/7/13
Leap Motion Controller Set To Ship May 13 for Global Pre-Orders, In Best Buy Stores May 19.
Hands on With Leap Motion's Controller
Lance Ulanoff, Mashable, 3/10/13
Leap Motion website
Social Sign
IntuiLab
Leap Motion: Low Cost Gesture Control for Your Computer Display

SOMEWHAT RELATED
Kinect for Windows Academic: Kaplan Early Learning
"3 years & up. Hands-on play with a purpose -- the next generation way. This unique learning tool uses your body as the game controller making it a great opportunity to combine active play and learning all in one. Use any surface to actively engage kinesthetic, visual, and audio learners. Bundle includes the following software: Word Pop, Directions, Patterns, and Shapes."

Comment:
I've been an enthusiastic supporter of natural user interfaces and interaction for years - back in 2007, I worked on touch-screen applications for large displays as a graduate student and became an early member of the NUI Group.  I'm also a school psychologist, and from my experience, I understand how NUI-based applications and technologies, such as interactive whiteboards and touch tablets like the iPad, can support the learning, communication, and leisure needs of students who have significant special needs.  It looks like Leap Motion and similar technologies have the potential to support a wide range of applications that target special populations of all ages.

May 13, 2010

Gesture Vocabulary from N-Trig: "N-act Hands-on"

N-Trig, a company founded in 1999, provides pen and multi-touch solutions that integrate into LCDs and other devices, giving independent software vendors (ISVs) and original equipment manufacturers (OEMs) opportunities to create new interactive and hands-on computing experiences, according to the company's profile. The latest news about N-Trig's interactive capabilities was outlined in a recent article by Dana Wollman in Laptop:


I found the following video from N-Trig on YouTube, released on 5/11/10, that shows the new gesture set that is supported by N-Trig:




The N-act Gesture Set (depicted in the video below)
N-act3SideSweep: use fingers together for browsing
N-act2+1: select from a displayed menu
N-act3Tap: displays open windows in a 3D carousel
N-act3Hold: rotates the 3D carousel
N-act2Scroll: scrolls through a document
N-act2Tap: minimizes the open window and displays the desktop
N-act1Touch: selects an item on the screen
N-act4Tap: displays a customized, relevant list of web page icons; selected text/item is pasted into the chosen app
N-act4Zoom: magnifies a movable selected area of the screen
N-act4Select: selects an area and opens a context-sensitive menu

Here is the promotional information from the YouTube video:
"This video demonstrates the N-trig N-act Gesture Vocabulary, a set of true multi-touch gestures for two plus one, three- and four-fingers, enabling users to perform an action directly on the screen, and providing a rich set of hand movements that enhance the overall user experience, enabling a whole new approach to how we interact with our computing devices, for a true Hands-on computing experience."


RELATED

Dana Wollman, 5/1/10, Laptop

www.n-trig.com
N-trig DuoSense Technology
The Future is Now:  Creating and Developing a Touch-Enabled World (pdf)
N-trig N-act Hands-On Gesture Vocabulary (N-Trig website)
Better Multi-Touch Displays Coming 
Mike Miller, Forward Thinking Blog, PC Mag (3/3/10)
DuoSense: Creating a Multi-touch Enabled World (November 2009)

Jan 10, 2013

Gesture Markup Language (GML) for Natural User Interaction and Interfaces

Quick post:
"GML is an extensible markup language used to define gestures that describe interactive object behavior and the relationships between objects in an application.  Gesture Markup Language has been designed to enhance the development of multiuser multi-touch and other HCI device driven applications." -Gesture ML Wiki

GestureML was created and is maintained by Ideum.
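To give a flavor of the idea, here's a toy gesture definition built with Python's ElementTree. The element and attribute names below are just my illustration, not the official GML schema - see the GestureML wiki for the real thing:

```python
import xml.etree.ElementTree as ET

# Sketch: a GML-style document pairs a way to *match* a gesture (how many
# touches, what kind of motion) with a *mapping* that describes what the
# gesture does to an interactive object. All names here are hypothetical.

root = ET.Element("GestureMarkupLanguage")
gesture = ET.SubElement(root, "Gesture", id="one-finger-drag", type="drag")

match = ET.SubElement(gesture, "match")
ET.SubElement(match, "action", touches="1", motion="translate")

mapping = ET.SubElement(gesture, "mapping")
ET.SubElement(mapping, "update", property="position", dimensions="x,y")

print(ET.tostring(root, encoding="unicode"))
```

The point of keeping gestures in declarative markup like this is that the same definitions can be reused across applications and devices without touching application code.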

More information to come!
The Pano
Photo credit: Ideum

RELATED
Ideum Blog

OpenExhibits: Free multitouch and multiuser software initiative for museums, education, nonprofits, and students

GestureWorks: Multi-touch authoring for Windows 8 & Windows 7



Dec 12, 2010

LM3LABS' Useful Map of Interactive Gesture-Based Technologies: Tracking fingers, bodies, faces, images, movement, motion, gestures - and more

Nicolas Loeillot, of LM3LABS, has been ahead of the natural user interaction/interface game for many years as his company has expanded. He's done quite a bit of deep thinking about the work of his company, and has used this wisdom to create a nice concept map that describes how LM3LABS' solutions fit into the world of gesture-based control and interaction:




In my opinion, this chart would make a great template for mapping out other natural interaction applications and products!


Here is the description of the concepts outlined in the chart:


"If all of them belong to the “gesture control” world, the best segmentation is made from 4 categories:
  • Finger tracking: precise finger tracking; it can be single touch or multi-touch (the latter not always being a plus). Finger tracking also encompasses hand tracking, which comes, for LM3LABS products, as gestures.
  • Body tracking: using one’s body as a pointing device. Body tracking can be associated to “passive” interactivity (users are engaged without their decision to be) or “active” interactivity like 3D Feel where “players” use their body to interact with content.
  • Face tracking: using user face as a pointing device. It can be mono user or multiple users. Face tracking is a “passive” interactivity tool for engaging user in an interactive relationship with digital content.
  • Image Tracking: Augmented Reality (AR) lets users use images (flyers, real products, t-shirts, faces,…) to interact with digital content. AR can be markerless or marker-based. Markerless technology has advantages but marker-based AR is easier for users to understand. (Please note here that Markerless AR is made in close collaboration with AR leader Total Immersion)."  -LM3LABS
If you are interested in this subject and want to view some good examples of off-the-desktop interfaces and interactions, take a look at the LM3LABS blog, as well as Nicolas Loeillot's Vimeo channel.  Also take a look at the sample of posts I've written about LM3LABS over the last few years - the links are at the end of this post.

I love LM3LABS' Interactive Balloon:

Interactive balloons from Nicolas Loeillot on Vimeo.


Interactive Balloons v lm3 labs v2 (SlideShare)



Background
I first discovered LM3LABS when I was taking a VR class and researching interactive, immersive large displays in 2005 or 2006.  Back then, there wasn't much information about this sort of technology.  A lot has changed since then!


I've learned quite a bit from watching LM3LABS (and others) grow, given my passion for postWIMP interactive technology and my commitment to blogging about this subject.   Nicolas has really worked hard in this arena.  As early as 2005, LM3LABS was working with Scala to provide "smart" interactive displays, and his company's applications have been supported by computer vision technologies for many years, allowing for gesture-based, or "touch-less" interaction, as demonstrated by the Catchyoo Interactive Table.  This application caught my eye back in early 2007, when I was working on projects for large interactive displays for my HCI and Ubicomp classes, and was thinking about creating a table-top application.


My hunch is that LM3LABS has set the foundation for further growth in the future, given the lessons they've learned by taking risks with postWIMP technologies over the past few years!


Previous Blog Posts Related to LM3LABS:
Interactive Retail Book: Celebrating the history of Christian Dior from 1948-2010 (video)
Ubiq Motion Sensor Display at Future Ready Singapore (video)
Interactive Virtual DJ on a Transparent Pane, by LM3LABS and Brief Ad
LM3LABS' Catchyoo Interactive Koi Pond: Release of ubiq'window 2.6 Development Kit and Reader
A Few Things from LM3LABS
LM3LABS, Nicolas Leoillot, and Multi-touch
More from LM3LABS: Ubiq'window and Reactor.cmc's touch screen shopping catalog, Audi's touch-less showroom screen, and the DNP Museum Lab.


About LM3LABS
"Founded in 2003 by a team of passionate researchers, engineers, designers, and marketers from various international backgrounds, focused on fast transformation of innovation into unique products, LM3LABS is a recognized pioneer in computer vision-based interactivity solutions. Keeping a strong customer focus, LM3LABS' team of unique people pioneers new directions, explores new concepts, new technologies and new interactions.  Engaging, playful and magic, LM3LABS' products and solutions are always scalable and reliable."

info@lm3labs.com

Note to readers:
Over the past couple of years there has been an explosion of postWIMP technologies and applications, and at this pace, it has been difficult for me to keep abreast of it all. There is quite a bit I miss, given my full-time job and daily life!

I welcome information about postWIMP interactive technologies and applications from my readers.  Due to time constraints, not interest, I am not always able to post about a topic as soon as I'd like.  That is OK, as my intention is not to be the first blogger to spread the latest tech news.  I like to dig in deep when I can and make connections between innovative, interesting technologies and the people and ideas behind them.