The following video provides a look into the history of the Reactable, from the initial paper prototypes to the present, including the Reactable Mobile application designed for the iPad, iPhone, and iPod touch. The video includes interviews with Sergi Jorda and Gunter Geiger, members of the original team at Pompeu Fabra University (Barcelona) that created the Reactable. The other team members are Martin Kaltenbrunner and Marcos Alonso.
FYI: At about 2:34 in the video, Joel Bonasera briefly discusses the Reactable installation at Charlotte's Discovery Place museum. Joel is a project manager at Discovery Place.
RELATED
How the Reactable Works
John Fuller, howstuffworks
Music Technology Group, Pompeu Fabra University
Reactable Website
Reactable Concepts
Reactable History
Discovery Place
Interactive Technology in the Carolinas: Discovery Place Science Center
(Includes a short video clip I took of the Reactable at Discovery Place)
Dec 14, 2010
Dec 12, 2010
Interactive Surveillance: Live digital art installation by Annabel Manning and Celine Latulipe
Interactive Surveillance, a live installation by artist Annabel Manning and technologist Celine Latulipe, was held at the Dialect Gallery in the NoDa arts district of Charlotte, N.C. on Friday, December 10th, 2010. I attended this event with the intention of capturing some of the interaction between the participants and the artistic content during the experience, but I came away with so much more. The themes embedded in the installation struck a chord with me on several different levels.
Friday's version of Interactive Surveillance provided participants the opportunity to use wireless gyroscopic mice to manipulate simulated lenses on a large video display. The video displayed on the screen was a live feed from a camera located in the stairway leading to the second-floor gallery. When both lenses converged on the screen, a picture was taken of the stairway scene, and then automatically sent to Flickr. Although it was possible for one person to take a picture of the scene holding a mouse in each hand, the experience was enhanced by collaborating with a partner.
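For readers curious about how a convergence trigger like this might work under the hood, here is a minimal Python sketch; the function names, distance threshold, and capture/upload hooks are my own assumptions for illustration, not the installation's actual code:

```python
import math

# Each wireless gyroscopic mouse drives one on-screen "lens"; when the
# two lenses converge, the current frame of the live stairway feed is
# captured and uploaded (in the installation, to Flickr).

CONVERGE_RADIUS = 40  # pixels; assumed threshold for "converged"

def lenses_converged(lens_a, lens_b, radius=CONVERGE_RADIUS):
    """Return True when the two lens centers (x, y) overlap."""
    dx = lens_a[0] - lens_b[0]
    dy = lens_a[1] - lens_b[1]
    return math.hypot(dx, dy) <= radius

def on_frame(lens_a, lens_b, capture, upload):
    """Called once per video frame with the current lens positions."""
    if lenses_converged(lens_a, lens_b):
        photo = capture()   # grab the current camera frame
        upload(photo)       # e.g. send to the photostream
        return True
    return False
```

The interesting design point is that the trigger requires cooperation: with one lens per participant, a photo only happens when two people steer their lenses to the same spot.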
Video Reflection of Interactive Surveillance (Lynn Marentette, 12/10/10)
Live Installation: Interactive Surveillance, by Annabel Manning and Celine Latulipe from Lynn Marentette on Vimeo.
Interactive Surveillance Website

In another area of the gallery, guests had the opportunity to use wireless mice to interact with previously recorded surveillance video on another large display. The video depicted people crossing desert terrain at night from Mexico to the U.S. In this case, the digital lenses on the screen functioned as searchlights, illuminating, and targeting, people who would prefer not to be seen or noticed in any way. On a nearby wall was another, smaller screen displaying the same video content as the larger screen. This interaction is demonstrated in the video below:
A smaller screen was set out on the refreshment table so participants could view the Flickr photostream of the "surveillance" pictures taken of the stairway. On a nearby wall was a smaller digital picture frame that provided a looping video montage of Manning's photo/art of people crossing the border.
The themes explored in the original Interactive Surveillance include border surveillance, shadow, and identity, delivered in a way that creates an impact beyond the usual chatter of pundits, politicians, and opinionators. The live installation added another layer to the event by allowing participants to be the target of the "stairway surveillance", as well as to play the role of someone who conducts surveillance.
Reflections:
In a way, the live component of the present installation speaks to the concerns of our present era, where the balance between freedom and security is shaky at best. It is understandable that video surveillance is used in our nation's efforts to protect our borders. But in our digital age, surveillance is pervasive. In most public spaces it is no longer possible to avoid the security camera's eye. Our images are captured and stored without our explicit knowledge. We do not know the identities or the intentions of those who view us, or our information, remotely.
We are numb to the ambient surveillance that surrounds us. We go about our daily activities without notice. We are silently tracked as we move across websites, dart in and out of supermarkets and shopping malls, and pay for our purchases with plastic. Our smartphones know where we are located and will give out our personal information if we are not vigilant, as our default settings are often "public".
It is easy to forget that the silent type of surveillance exists. It is not so easy to ignore more invasive types of "surveillance". We must agree to submit to a high degree of inspection in the form of metal detectors, baggage searches, and in recent weeks, uncomfortable physical pat-downs, for the privilege of traveling across state borders by plane, within our own country. In some airports, we are subject to whole-body scans that provide strangers with views of our most private spaces. We go along with this effort and prove our innocence on-the-spot, for the greater good. Conversely, we have multiple means of conducting our own forms of surveillance, through Internet searches, viewing pictures and videos posted to the web, and playing around with Google Streetview.
As I wandered around the Dialect Gallery with my video camera, I realized that I was conducting my own form of surveillance, adding another layer to the mix. Unfortunately, some of the time I had my camera set to "pause" when I thought I was filming, and vice versa, and as a consequence, I did not capture people using the wireless mice to interact with the content on the displays. I went ahead with my mission and created a short video reflection of my impressions of Interactive Surveillance. If you look closely at the video between :40 and :47, you'll see some people across the street from the gallery whom I unintentionally captured; now they, too, are part of my surveillance.
Although the video below was hastily edited, it includes music and sounds from the iMovie library that approximated the "soundtrack" that formed in my mind as I experienced the exhibit.
To get a better understanding of Interactive Surveillance, I recommend the following links:
Barbara Schrieber, Charlotte Viewpoint
Interactive Surveillance Flickr Photostream
Posted by
Lynn Marentette
LM3LABS' Useful Map of Interactive Gesture-Based Technologies: Tracking fingers, bodies, faces, images, movement, motion, gestures - and more
Nicolas Loeillot, of LM3LABS, has been ahead of the natural user interaction/interface game for many years as his company has expanded. He's done quite a bit of deep thinking about his company's work, and has used this wisdom to create a nice concept map that describes how LM3LABS' solutions fit into the world of gesture-based control and interaction:

I love LM3LABS' Interactive Balloon:
Interactive balloons from Nicolas Loeillot on Vimeo.
Interactive Balloons v lm3 labs v2 (SlideShare)
Background
I first discovered LM3LABS when I was taking a VR class and researching interactive, immersive large displays in 2005 or 2006. Back then, there wasn't much information about this sort of technology. A lot has changed since then!
I've learned quite a bit from watching LM3LABS (and others) grow, given my passion for postWIMP interactive technology and my commitment to blogging about this subject. Nicolas has really worked hard in this arena. As early as 2005, LM3LABS was working with Scala to provide "smart" interactive displays, and his company's applications have been supported by computer vision technologies for many years, allowing for gesture-based, or "touch-less" interaction, as demonstrated by the Catchyoo Interactive Table. This application caught my eye back in early 2007, when I was working on projects for large interactive displays for my HCI and Ubicomp classes, and was thinking about creating a table-top application.
My hunch is that LM3LABS has set the foundation for further growth in the future, given the lessons they've learned by taking risks with postWIMP technologies over the past few years!
Previous Blog Posts Related to LM3LABS:
Interactive Retail Book Celebrating the History of Christian Dior from 1948-2010 (video)
Ubiq Motion Sensor Display at Future Ready Singapore (video)
Interactive Virtual DJ on a Transparent Pane, by LM3LABS and Brief Ad
LM3LABS' Catchyoo Interactive Koi Pond: Release of ubiq'window 2.6 Development Kit and Reader
A Few Things from LM3LABS
LM3LABS, Nicolas Leoillot, and Multi-touch
More from LM3LABS: Ubiq'window and Reactor.cmc's touch screen shopping catalog, Audi's touch-less showroom screen, and the DNP Museum Lab.
In my opinion, this chart would make a great template for mapping out other natural interaction applications and products!
Here is the description of the concepts outlined in the chart:
"If all of them belong to the “gesture control” world, the best segmentation is made from 4 categories:
- Finger tracking: precise finger tracking; it can be single touch or multi-touch (the latter not always being a plus). Finger tracking also encompasses hand tracking, which comes, for LM3LABS products, as gestures.
- Body tracking: using one’s body as a pointing device. Body tracking can be associated to “passive” interactivity (users are engaged without their decision to be) or “active” interactivity like 3D Feel where “players” use their body to interact with content.
- Face tracking: using user face as a pointing device. It can be mono user or multiple users. Face tracking is a “passive” interactivity tool for engaging user in an interactive relationship with digital content.
- Image Tracking: Augmented Reality (AR) lets users use images (flyers, real products, t-shirts, faces,…) to interact with digital content. AR can be markerless or marker-based. Markerless technology has advantages but marker-based AR is easier for users to understand. (Please note here that Markerless AR is made in close collaboration with AR leader Total Immersion)." -LM3LABS
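As a rough sketch of how this four-category taxonomy might be encoded in software (the names below are my own invention, not an LM3LABS API):

```python
from enum import Enum

class TrackingCategory(Enum):
    """The four gesture-control categories described above."""
    FINGER = "finger"  # single- or multi-touch finger/hand tracking
    BODY = "body"      # the whole body as a pointing device
    FACE = "face"      # the face as a pointing device; mono- or multi-user
    IMAGE = "image"    # AR image tracking, marker-based or markerless

# Per the description, body and face tracking can engage users
# "passively", i.e. without their decision to interact.
PASSIVE_CAPABLE = {TrackingCategory.BODY, TrackingCategory.FACE}

def is_passive_capable(category: TrackingCategory) -> bool:
    return category in PASSIVE_CAPABLE
```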
If you are interested in this subject and want to view some good examples of off-the-desktop interfaces and interactions, take a look at the LM3LABS blog, as well as Nicolas Loeillot's Vimeo channel. Also take a look at the sample of posts I've written about LM3LABS over the last few years - the links are at the end of this post.
About LM3LABS
"Founded in 2003 by a team of passionate researchers, engineers, designers, and marketers from various international backgrounds, focused on fast transformation of innovation into unique products, LM3LABS is a recognized pioneer in computer vision-based interactivity solutions. Keeping a strong customer focus, LM3LABS' team of unique people pioneers new directions, explores new concepts, new technologies and new interactions. Engaging, playful and magic, LM3LABS' products and solutions are always scalable and reliable"
info@lm3labs.com
Note to readers:
Over the past couple of years there has been an explosion of postWIMP technologies and applications, and with this pace, it has been difficult for me to keep abreast of it all. There is quite a bit I miss, given my full time job and daily life!
I welcome information about postWIMP interactive technologies and applications from my readers. Due to time constraints, not interest, I am not always able to post about a topic as soon as I'd like. That is OK, as my intention is not to be the first blogger to spread the latest tech news. I like to dig in deep when I can and make connections between innovative, interesting technologies and the people and ideas behind them.
Dec 11, 2010
SMALLab Update: Embodied and Engaged Learning - ASU researchers partner with GameDesk
SMALLab is an interdisciplinary collaborative project at the Arts, Media and Engineering program at Arizona State University, and includes people from fields such as education, art, theatre, computer science, engineering, and psychology. The SMALLab provides students with a multi-sensory, multi-modal way of learning concepts in an immersive environment, and uses a motion capture system that tracks the position of the students as they move and interact within the environment.
SMALLab's project lead is David Birchfield, a media artist, researcher, and educator who focuses on K-12 learning, media art installations, and live computer music performances. SMALLab researchers have recently partnered with GameDesk to develop a 6th grade curriculum for a GameDesk charter school in 2012. (Information and links related to GameDesk are located in the RELATED section of this post.)
Below is a detailed excerpt from an overview of SMALLab:
"In today’s world, digital technology must play a central role in students’ learning. A convergence of trends in the learning science and human-computer interaction (HCI) research offers new theoretical and technological frameworks for learning. In particular, mixed-reality, experiential media systems can support learning in a way that is social, collaborative, multimodal, and embodied. These systems comprise a new breed of student-centered learning environments [SCLE’s]. Importantly, they must address the practicalities of today’s classrooms and informal learning environments (eg.: space, infrastructure, financial resources) while embracing the innovative forms of interactivity that are emerging from our media research communities (eg: multimodal sensing, real time interactive media, context aware computing)...
...SMALLab is an extensible platform for semi-immersive, mixed-reality learning. By semi-immersive, we mean that the mediated space of SMALLab is physically open on all sides to the larger environment. Participants can freely enter and exit the space without the need for wearing specialized display or sensing devices such as head-mounted displays (HMD) or motion capture markers. Participants seated or standing around SMALLab can see and hear the dynamic media, and they can directly communicate with their peers that are interacting in the space. As such, the semi-immersive framework establishes a porous relationship between SMALLab and the larger physical learning environment. By mixed-reality, we mean that there is an integration of physical manipulation objects, 3D physical gestures, and digitally mediated components. By extensible, we mean that researchers, teachers, and students can create new learning scenarios in SMALLab using a set of custom designed authoring tools and programming interfaces."
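To make the idea of authoring a position-driven learning scenario concrete, here is a toy Python sketch; the classes and functions are invented for illustration and do not reflect SMALLab's actual authoring tools. A participant's tracked floor position is mapped onto the projected scene, and the scenario reacts when they stand inside a region:

```python
from dataclasses import dataclass

@dataclass
class Region:
    """A circular hot spot projected onto the SMALLab floor."""
    name: str
    x: float
    y: float
    radius: float

    def contains(self, px: float, py: float) -> bool:
        # Squared-distance test against the region's radius.
        return (px - self.x) ** 2 + (py - self.y) ** 2 <= self.radius ** 2

def scenario_step(position, regions):
    """Return the names of the scene regions the participant stands in."""
    px, py = position
    return [r.name for r in regions if r.contains(px, py)]
```

Because participants wear no markers or head-mounted displays, the motion-capture system supplies positions like this continuously, and the scenario logic stays a simple per-frame mapping from bodies to media responses.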
Below are a few videos about SMALLab, and information about GameDesk, an organization that is collaborating with SMALLab in California.
SMALLab Learning from SMALLab on Vimeo.
Below is a demonstration of a SMALLab learning activity:
Gamedesk Smallab Session from Gamedesk on Vimeo.
RELATED
Sara Corbett, NYTimes Magazine, 9/15/10
Info about GameDesk, from the GameDesk website:
"GameDesk is a 501(c)3 nonprofit research and outreach organization that seeks to reshape models for learning through game-play and game development. The organization looks to help close the achievement gap and engage students to learn core STEM curriculum. It develops project-based learning with a strong focus on purpose, ownership, and personal value. The organization (originally developed out of research and support at the University of Southern California's IMSC) has now been in development, practice, and/or evaluation for over two years in various schools in the Los Angeles area." -Gamedesk
Gamedesk Concept Chart
Very Cute! Department of Defense Acquisition Mini Learning Games
If you are the Department of Defense, how do you make sure your workers in the Acquisition Department engage in required learning activities?
Games! You can access the games via the Defense Acquisition University game portal. Below are some screenshots, descriptions, and links:
Procurement Fraud Indicators

"Investigate potential Procurement Fraud Indicators in this game which allows you to form hypotheses, test your theories, even question individuals who might have something to hide!" -CLC DAU
Homeward Bound

"Join Ratner's friends and help guide him back to the Pentagon; across rivers, highways, and highly guarded walls using your knowledge of Acquisition Strategy and Contract Execution." -CLC DAU
Acquisition Proposition

"How well do you know the Acquisition Lifecycle? Test your knowledge in this fast paced game!" -CLC DAU
About the Defense Acquisition University
"The Defense Acquisition University is the one institution that touches nearly every member of the Defense Acquisition Workforce throughout all career stages. The university provides a full range of basic, intermediate, and advanced certification training, assignment-specific training, applied research, and continuous learning opportunities. The university also fosters professional development through mission assistance, rapid-deployment training on emerging acquisition initiatives, online knowledge-sharing tools, and continuous learning modules." - DAU Website
RELATED
Listen to the DoD Roundtable: Interview and discussion about the casual learning games, featuring Dr. Alicia Sanchez, Games Czar, Defense Acquisition University
DoD Roundtable Transcript (pdf)
Defense video games perfectly capture excitement of acquisition process
Stephen Losey, Fedline, 12/10/10
DoD launches its own casual games site
milgamer, 12/8/10
Quick Post: Journey, the next game from thatgamecompany (developers of Flower, flOw, and Cloud).
I've been following the work of some of the people behind thatgamecompany since they were graduate students at USC, working on Cloud, an enchanting and relaxing game. They went on to develop Flower and flOw, and are now working on Journey, the next game planned for release:
To view video trailers of other games by thatgamecompany, see the following post:
Games to Lift Stress Away: Flower, flOw (and Cloud), from thatgamecompany
Also visit thatgamecompany's website!