Dec 31, 2010
VIDEO GAMES LIVE! - With the Charlotte Symphony Orchestra - An immersive multimedia event.
Video game music plays in the background of many people's lives, and it might be the "comfort food" music of younger generations, given the number of hours video games are played in homes across the U.S.
VIDEO GAMES LIVE will be performed in conjunction with the Charlotte Symphony Orchestra on Saturday, February 19, 2011, at 7:30 PM at Ovens Auditorium. I'll be there!
"Video Games Live! is a multimedia concert experience featuring the best music and exclusive synchronized video clips from the most popular games from the beginning of video gaming to the present. Performed with the Charlotte Symphony, the show combines exclusive video footage and music arrangements with synchronized lighting, solo performers, stage show production, special FX, electronic percussionists, and unique interactive segments.
Whether it’s the power and passion of the more recent blockbusters or the excitement of remembering the sentimental classics, it will truly be a special night to remember for the entire family." -Charlotte Symphony Orchestra website
Promo video from VIDEO GAMES LIVE! (Brazil)
RELATED
VIDEO GAMES LIVE website
VIDEO GAMES LIVE tour dates
Information about VIDEO GAMES LIVE from the "About" page of the website:
"Video Games Live™ is an immersive concert event featuring music from the most popular video games of all time. Top orchestras and choirs perform along with exclusive video footage and music arrangements, synchronized lighting, solo performers, electronic percussionists, live action and unique interactive segments to create an explosive entertainment experience!
This is a concert event put on by the video game industry to help encourage and support the culture and art that video games have become. Video Games Live™ bridges a gap for entertainment by exposing new generations of music lovers and fans to the symphonic orchestral experience while also providing a completely new and unique experience for families and/or non-gamers. The show is heralded and enjoyed by the entire family. It's the power & emotion of a symphony orchestra mixed with the excitement and energy of a rock concert and the technology and interactivity of a video game all completely synchronized to amazing cutting edge video screen visuals, state-of-the-art lighting and special on-stage interactive segments with the audience.
If you or someone you know is into video games you won't want to miss this highly acclaimed one-of-a-kind concert experience. Or maybe you are looking for something cultural and exciting that the whole family will enjoy? Video Games Live™ is not just a concert, but a celebration of the entire video game industry that people of all ages will adore.
Video Games Live™ is created and produced by industry veteran and world famous video game composer Tommy Tallarico.
Video Games Live™ features the best music and exclusive synchronized video clips from the most popular games from the beginning of video gaming to the present. Game franchises include:
Mario™
Zelda™
Halo®
Final Fantasy®
Warcraft®
StarCraft®
Diablo®
Sonic™
Metal Gear Solid®
Kingdom Hearts
Chrono Trigger™
Chrono Cross™
Mega Man™
Myst®
Tron
Castlevania®
Metroid®
Interactive Frogger
Interactive Space Invaders
Interactive Guitar Hero™
Interactive Donkey Kong™
Medal of Honor™
God of War™
BioShock™
Civilization IV
Tomb Raider®
Beyond Good & Evil™
Advent Rising
EverQuest® II
Mass Effect™
Shadow of the Colossus
Silent Hill™
Crysis®
Monkey Island
Earthworm Jim
End of Nations™
Afrika™
Assassin's Creed™ II
Uncharted™ II
Portal™
Lair
Conan
Command & Conquer: Red Alert
Headhunter
Splinter Cell®
Ghost Recon™
Rainbow Six®
Jade Empire
Contra
OutRun
Gears of War
Need For Speed®: Undercover
Harry Potter and the Order of the Phoenix™
Classic Arcade Medley featuring over 20 games from Pong® to Donkey Kong®, including such classics as Dragon's Lair, Tetris, Duck Hunt, Ghosts 'n Goblins, Gauntlet, Punch-Out, OutRun and MANY MORE!"
Posted by
Lynn Marentette
Joy of Stats: Hans Rosling's information visualization show, plus more. Enjoy!
RELATED
GAPMINDER:
GAPMINDER World
Information Visualization
SOMEWHAT RELATED
HTML5 and Visualization on the Web
Robert Kosara, Eager Eyes, 12/21/2010
Dec 29, 2010
UPDATE: CALL FOR PAPERS: Workshop on UI Technologies and Educational Pedagogy, Child-Computer Interaction (in conjunction with CHI 2011, May)
CALL FOR PAPERS
Workshop on UI Technologies and Educational Pedagogy, Child-Computer Interaction
In conjunction with CHI 2011, Vancouver, Canada
May 8, 2011
Topic: Given the emergence of Child Computer Interaction and the ubiquitous application of interactive technology as an educational tool, there is a need to explore how next generation HCI will impact education in the future. Educators are depending on the interaction communities to deliver technologies that will improve and adapt learning to an ever-changing world. In addition to novel UI concepts, the HCI community needs to examine how these concepts can be matched to contemporary paradigms in educational pedagogy. The classroom is a challenging environment for evaluation, so new techniques need to be established to prove the value of new HCI interactions in the educational space. This workshop provides a forum to discuss key HCI issues facing next generation education.
We invite authors to present position papers about potential design challenges and perspectives on how the community should handle the next generation of HCI in education. Topics of interest include:
- Gestural input, multitouch, large displays, multi-display interaction, response systems
- Mobile devices / mobile & pervasive learning
- Tangible, VR, AR & MR, multimodal interfaces, universal design, accessibility
- Console gaming, 3D input devices, 3D displays
- Co-located interaction, presentations, tele-presence, interactive video
- Child Computer Interaction, educational pedagogy, learner-centric, adaptive "smart" applications
- Empirical methods, case studies, linking of HCI research with educational research methodology
- Usable systems to support learning and teaching: ecology of learning, anywhere, anytime (UX of cloud computing to support teaching and learning)
Submission: The deadline for workshop paper submissions is January 14, 2011. Interested researchers should submit a 4-page position paper in the ACM CHI adjunct proceedings style to the workshop management system. Acceptance notifications will be sent out February 20, 2011. The workshop will be held May 7 or May 8, 2011, in Vancouver, Canada. Please note that at least one author of an accepted position paper must register for the workshop and for one or more days of the CHI 2011 conference.
Website: http://www.dfki.de/EducationCHI2011
Organizers:
Edward Tse, SMART Technologies
Johannes Schöning, DFKI GmbH
Yvonne Rogers, Pervasive Computing Laboratory, The Open University
Jochen Huber, Technische Universität Darmstadt
Max Mühlhäuser, Technische Universität Darmstadt
Lynn Marentette, Union County Public Schools, Wolfe School
Richard Beckwith, Intel
Posted by
Lynn Marentette
Dec 27, 2010
ūmi from Cisco - Forget about the video phone; why not try home teleconferencing on your huge HDTV?
You've probably seen the series of commercials for ūmi telepresence featuring Ellen Page. In one, she's trying to learn how to play the spoons. In another, she's engaging in a tea party. The one I like best is the one in which she tries to "telepresently" converse with her friend, Steve. From what I can tell from the commercials and the Cisco website, the ūmi is a much better option than the video phones that have been brought to market over the past two years!
If you want to be telepresent, and if you have $599.00, you can add a ūmi to your cart while visiting the CISCO ūmi telepresence website. With free shipping and a 30-day return policy, why not try it? If your friends and relatives have a laptop and Google Chat, you can still use your ūmi with them. (My hunch is that the experience is much better if both parties have a ūmi.)
It might cost a bit more than $599.00 to get your ūmi up and running. First, you'll need an HDTV with an HDMI input port and a resolution of 1080p or 720p. You'll also need a top-tier broadband connection: a 1080p HDTV requires a minimum of 3.5 Mbps upload and download speed, while a 720p HDTV requires a minimum of 1.5 Mbps.
FYI: I connect to the Internet from my computer on the second floor of my home via a wireless router located on the first floor. My upload speed is just 0.965 Mbps, which is not fast enough for the ūmi. My download speed is 8.991 Mbps, which is more than I need. I checked Time Warner Cable and found out that they now offer Wideband, with download speeds up to 50 Mbps and upload speeds up to 5 Mbps. If I really, really want a ūmi, I'll have to solve this problem with Time Warner/Roadrunner.
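For anyone doing the same arithmetic at home, here is a minimal sketch (in Python) of the check I ran in my head; the thresholds are Cisco's published minimums quoted above, and the sample speeds are my own measurements:

```python
# Cisco's published minimums for the umi, in Mbps (both upload and download).
REQUIRED_MBPS = {"1080p": 3.5, "720p": 1.5}

def umi_ready(download_mbps, upload_mbps, resolution="1080p"):
    """True if both directions meet the minimum for the given HDTV resolution."""
    minimum = REQUIRED_MBPS[resolution]
    return download_mbps >= minimum and upload_mbps >= minimum

# My measured speeds: the download is more than enough, but the upload
# is the bottleneck at either resolution.
print(umi_ready(8.991, 0.965, "1080p"))  # False (0.965 < 3.5)
print(umi_ready(8.991, 0.965, "720p"))   # False (0.965 < 1.5)
```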
Below are a couple of videos about the ūmi from Cisco's YouTube site. The first includes a short discussion of TV UI and usability testing; the team tossed a bag of rice at the TV to simulate a cat jumping at the screen.
In the Lab: The Innovators Behind Cisco ūmi
RELATED
Wikipedia's definition of telepresence:
"Telepresence refers to a set of technologies which allow a person to feel as if they were present, to give the appearance that they were present, or to have an effect, via telerobotics, at a place other than their true location."
Cisco Broadband Speed Test
Cisco Blog
How is your network running?
Brenna Karr, Cisco Blog 12/17/10
Previous post: Like Neil Steinberg once said, "Dude, Where's My Video Phone?"
Dude, Where's My Video Phone?
Neil Steinberg, Forbes, 10/15/07
Posted by
Lynn Marentette
Labels: cisco, commercial, ellen page, HDTV, interactive, telepresence, umi, video conferencing, video phone
Dec 23, 2010
Hans Rosling Interacts with Health Data: 200 Countries, 200 Years, 4 Minutes
Hans Rosling's enthusiasm for data visualization has increased my appreciation for statistics. In the video below, Rosling interacts with 120,000 data points related to 200 countries over 200 years. I especially like the "Alternate Reality" effect.
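For readers curious about how a "motion chart" like Rosling's works under the hood, here is a minimal sketch in Python with matplotlib. The data here is synthetic, invented purely for illustration; the real Gapminder chart plots income against life expectancy, with population as bubble size, animated over the years:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Synthetic stand-in data: 5 "countries" drifting over 200 "years".
rng = np.random.default_rng(0)
n_countries, n_years = 5, 200
income = 20 + np.cumsum(rng.normal(0.5, 1.0, (n_years, n_countries)), axis=0)
lifespan = 40 + np.cumsum(rng.normal(0.1, 0.3, (n_years, n_countries)), axis=0)
population = rng.uniform(50, 500, n_countries)  # bubble sizes

fig, ax = plt.subplots()
scat = ax.scatter(income[0], lifespan[0], s=population, alpha=0.6)
ax.set(xlim=(0, 150), ylim=(30, 80),
       xlabel="Income (synthetic)", ylabel="Life expectancy (synthetic)")

def update(year):
    # A motion chart is just a scatter plot whose points move each frame.
    scat.set_offsets(np.column_stack([income[year], lifespan[year]]))
    ax.set_title(f"Year {1810 + year}")
    return (scat,)

anim = FuncAnimation(fig, update, frames=n_years, interval=50)
plt.show()
```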
"Unveiling the beauty of statistics for a fact based world view"
Hans Rosling is a Professor of Global Health in Stockholm, Sweden, and the Director of the Gapminder Foundation. The Gapminder World website has a wealth of resources for teachers, students, and anyone who is interested in learning about things through the use of information visualization.
According to information from the website, "Gapminder is a non-profit venture – a modern “museum” on the Internet – promoting sustainable global development and achievement of the United Nations Millennium Development Goals. Gapminder was founded in Stockholm by Ola Rosling, Anna Rosling Rönnlund and Hans Rosling on February 25, 2005. Gapminder is registered as a Foundation at Stockholm County Administration Board (Länstyrelsen i Stockholm) with registration number (organisationsnummer) 802424-7721."
Below is a list of annotated links to various Gapminder webpages:
Gapminder Labs: "Gapminder Labs is where we experiment with new features, visualizations and tools. Some of these might later gain a more prominent place on Gapminder.org."
Gapminder for Teachers: "This section is for educators who want to use Gapminder in their education. You'll find shortcuts to tools and guides for Gapminder in a classroom."
Gapminder Downloads: This section includes links to downloadable content, such as Gapminder Desktop, handouts, and lesson plans (including teacher guides), as well as a good number of interesting interactive presentations.
Gapminder Videos: The videos include interesting presentations as well as a number of Hans Rosling's TED talks. The material is free to use and distribute under the Creative Commons License.
Data in Gapminder World: This section includes all of the indicators displayed in Gapminder World.
Gapminder World
Gapminder FAQs
Cross-posted on the TechPsych and The World Is My Interface blogs
"Unveiling the beauty of statistics for a fact based world view"
Hans Rosling is a Professor of Global Health in Stockholm, Sweden, and the Director of the Gapminder Foundation. The Gapminder World website has a wealth of resources for teachers, students, and anyone who is interested in learning about things through the use of information visualization.
According to information from the website, "Gapminder is a non-profit venture – a modern “museum” on the Internet – promoting sustainable global development and achievement of the United Nations Millennium Development Goals.Gapminder was founded in Stockholm by Ola Rosling, Anna Rosling Rönnlund and Hans Rosling on February 25, 2005. Gapminder is registered as a Foundation at Stockholm County Administration Board (Länstyrelsen i Stockholm) with registration number (organisationsnummer) 802424-7721."
Below is a list of annotated links to various Gapminder webpages:
Gapminder Labs: "Gapminder Labs is where we experiment with new features, visualizations and tools. Some of these might later gain a more prominent place on Gapminder.org."
Gapminder for Teachers: "This section is for educators who want to use Gapminder in their education. You'll find shortcuts to tools and guides for Gapminder in a classroom."
Gapminder Downloads: This section includes links to downloadable content, such as Gapminder Desktop, handouts, lesson plans, including teacher guides, and a good number of interesting interactive presentations.
Gapminder Videos: The videos include interesting presentations as well as a number of Hans Rosling's TED talks. The material is free to use and distribute under the Creative Commons License.
Data in Gapminder World: This section includes all of the indicators displayed in Gapminder World.
Gapminder World
Gapminder FAQs
Cross-posted on the TechPsych and The World Is My Interface blogs
Posted by
Lynn Marentette
Dec 22, 2010
Teach Parents Tech website by Google employees - gotta love it - it includes tech "how-to" care package videos!
Google employees know what it is like to play the role of the extended family tech support person. For the holidays, and beyond, they've created a series of how-to videos that might prove useful to parents and other extended family members who are interested in joining World 2.0 but need some sort of useful roadmap.
Teach Parents Tech is a great website to visit to learn the basics and a bit more. Here is the introductory video:
There is a "how-to"video for nearly everything. Below is a screen shot of the home page, that lets you create a customized tech support "care package" that you can email to a parent:
Posted by
Lynn Marentette
Multi-touch SmartBoard! (SMARTBoard 800 Series)
Take a look at the video demonstration of the new SMARTBoard (800 series) that offers multi-touch and gesture interaction support so that two students can interact with the board at the same time.
- Students can use two-finger gestures to enlarge objects and move them around (see the sketch after this list).
- Two students can interact with the board at the same time to complete activities.
- SMART Ink/Calligraphic Ink creates stylized print as you write. Whatever is written or drawn on the SMARTBoard becomes an object in SMART Notebook, allowing things to be resized or rotated. (2:04)
- Multi-touch gestures enabled in Windows 7 and Snow Leopard work with the SMARTBoard.
- Software development kit (3:28): an example of a physics application developed by a third-party developer. The application supports two students working at the SMARTBoard at the same time.
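As a developer-oriented aside, the two-finger resize in the first item above boils down to simple geometry: the object's scale follows the ratio of the current distance between the two touch points to their starting distance. Here is a board-agnostic sketch in Python; the touch coordinates are hypothetical values, not output from SMART's actual SDK:

```python
import math

def distance(p, q):
    """Euclidean distance between two touch points (x, y)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_scale(start_touches, current_touches):
    """Scale factor implied by a two-finger pinch/stretch gesture."""
    d0 = distance(*start_touches)
    d1 = distance(*current_touches)
    return d1 / d0 if d0 else 1.0

# Two fingers move apart: the object should grow to 1.8x its size.
print(pinch_scale([(100, 100), (200, 100)], [(80, 100), (260, 100)]))  # 1.8
```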
This video, in my opinion, does not show viewers the full range of possibilities the new features provide. I'd like to see a "redo" of this video using a live teacher and a group of students. For example, it would be interesting to see how the physics application could be incorporated into a broader lesson or science unit. I'd love to hear what real students have to say as they interact with the physics application, too.
Comment:
I think a multi-user interactive timeline would be a great application for the new SMARTBoard, because students could work together to create and recreate events. This would be ideal for history, literature, and humanities activities, across a wide span of grade levels.
Posted by
Lynn Marentette
Video School Online: Free from Vimeo
Prosumers, DIYers, hobbyists, multimedia wannabes, and even a few film or video pros might want to take a look at Vimeo's Video School Online.
I'd like to use a dolly for a couple of projects, and I found the following video on the Vimeo Video School website that gives a great step-by-step demonstration of how to make your very own dolly for about $45.00:
My DIY Dolly from Knut Uppstad on Vimeo.
Posted by
Lynn Marentette
Labels: dolly workshop, dyi dolly, film, multimedia, on line, techniques, video, video production, video school, vimeo
Interesting animation made with Google Docs presentation app. (Google Demo Slam), via Flowing Data
The video below was an entry in the Google Demo Slam, an effort started by Google to spread the word about their innovative technologies. By the time I learned of Epic Docs Animation, the video had over 800,000 views. I plan to view a few more Google Demo Slam videos over the holiday break!
-Tu+, Namroc, and Metcalf
For more information and Demo Slam videos, visit Google's Demo Slam website: "Welcome to Demo Slam, Where Amazing Tech Demos Battle for Your Enjoyment"
RELATED
Epic animation in Google Docs
Nathan Yau, Flowing Data, 12/22/10
Google's rationale for creating Demo Slam:
"We spend our time making a whole bunch of technologies that are free for the world, but a lot of people dont even know about them. And that kind of sucks. So, we thought organizing the world's most creative tech demo battle would be a great way to help spread the word and teach people about tech. Not to mention, it is a lot of fun."
About Demo Slam
Hall of Demo Champs
Posted by
Lynn Marentette
Dec 14, 2010
"Design is the Solution-From Visual Clarity to Clarity in the Mind" (gem of an article by Gerd Waloszek, SAP User Experience)
Design is the Solution - From Visual Clarity to Clarity in the Mind
Gerd Waloszek, SAP User Experience, 12/7/10
In this article, Gerd Waloszek provides an overview of traditional usability principles and shares his thoughts about broadening the concept of clarity to include mental states and models. His article includes charts/concept maps as well as links to great resources.
If this topic interests you, plan to block out some time to read this article and explore the links.
Posted by
Lynn Marentette
Short documentary of the story behind the Reactable, a tangible user interface for creating music. (Includes an interview with Joel Bonasera of Charlotte's Discovery Place museum.)
The following video provides a look into the history of the Reactable, from the initial paper prototypes to the present, including the Reactable Mobile application designed for the iPad, iPhone, and iPod touch. The video includes interviews with Sergi Jorda and Gunter Geiger, members of the original team at Pompeu Fabra University (Barcelona) that created the Reactable. The other team members are Martin Kaltenbrunner and Marcos Alonso.
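For a rough sense of how a tangible interface like this works: the table's camera tracks fiducial-tagged pucks, and each puck's position and rotation are mapped to sound parameters. The sketch below is my own illustration of that idea in Python, not the Reactable's actual code; the specific mappings are invented:

```python
import math
from dataclasses import dataclass

@dataclass
class Puck:
    """A tangible object on the table, as a vision system might report it."""
    kind: str     # e.g. "oscillator" or "filter"
    x: float      # normalized table position, 0..1
    y: float
    angle: float  # rotation in radians

def puck_to_params(puck):
    """Invented mapping from a puck's pose to synthesizer parameters."""
    return {
        "pitch_hz": 110 * 2 ** (puck.y * 4),  # vertical position spans 110-1760 Hz
        "volume": puck.x,                     # horizontal position sets the volume
        "mod_depth": (puck.angle % (2 * math.pi)) / (2 * math.pi),  # rotation = modulation
    }

print(puck_to_params(Puck("oscillator", x=0.5, y=0.25, angle=1.57)))
# {'pitch_hz': 220.0, 'volume': 0.5, 'mod_depth': 0.249...}
```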
FYI: At about 2:34 in the video, Joel Bonasera briefly discusses the Reactable installation at Charlotte's Discovery Place museum. Joel is a project manager at Discovery Place.
RELATED
How the Reactable Works
John Fuller, howstuffworks
Music Technology Group, Pompeu Fabra University
Reactable Website
Reactable Concepts
Reactable History
Discovery Place
Interactive Technology in the Carolinas: Discovery Place Science Center
(Includes a short video clip I took of the Reactable at Discovery Place)
Posted by
Lynn Marentette
Dec 12, 2010
Interactive Surveillance: Live digital art installation by Annabel Manning and Celine Latulipe
Interactive Surveillance, a live installation by artist Annabel Manning and technologist Celine Latulipe, was held at the Dialect Gallery in the NoDa arts district of Charlotte, N.C. on Friday, December 10th, 2010. I attended this event with the intention of capturing some of the interaction between the participants and the artistic content during the experience, but I came away with so much more. The themes embedded in the installation struck a chord with me on several different levels.
Friday's version of Interactive Surveillance gave participants the opportunity to use wireless gyroscopic mice to manipulate simulated lenses on a large video display. The video displayed on the screen was a live feed from a camera located in the stairway leading to the second-floor gallery. When both lenses converged on the screen, a picture was taken of the stairway scene and then automatically sent to Flickr. Although it was possible for one person to take a picture of the scene by holding a mouse in each hand, the experience was enhanced by collaborating with a partner.
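The interaction reduces to a simple trigger: track two cursor positions, and when they come close enough together, capture a frame and upload it. Here is a sketch of that logic as I reconstruct it from watching the piece; it is not the artists' code, and `capture_frame` and `upload_to_flickr` are hypothetical stand-ins:

```python
import math

CONVERGENCE_RADIUS = 40  # pixels; the real threshold is a guess

def lenses_converged(lens_a, lens_b):
    """True when the two on-screen lenses overlap enough to count as converged."""
    return math.dist(lens_a, lens_b) < CONVERGENCE_RADIUS

def on_mouse_update(lens_a, lens_b, capture_frame, upload_to_flickr):
    """Called whenever either gyroscopic mouse moves its lens."""
    if lenses_converged(lens_a, lens_b):
        photo = capture_frame()   # grab the live stairway feed
        upload_to_flickr(photo)   # post to the installation's photostream

# Example: lenses 30 px apart would trigger a photo.
print(lenses_converged((400, 300), (424, 318)))  # True (distance = 30.0)
```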
In another area of the gallery, guests had the opportunity to use wireless mice to interact with previously recorded surveillance video on another large display. The video depicted people crossing desert terrain at night from Mexico to the U.S. In this case, the digital lenses on the screen functioned as searchlights, illuminating - and targeting - people who would prefer not to be seen or noticed in any way. On a nearby wall was another smaller screen with the same video content displayed on the larger screen. This interaction is demonstrated in the video below:
A smaller screen was set out on the refreshment table so participants could view the Flickr photostream of the "surveillance" pictures taken of the stairway. On a nearby wall was a smaller digital picture frame that provided a looping video montage of Manning's photo/art of people crossing the border.
The themes explored in the original Interactive Surveillance include border surveillance, shadow, and identity, delivered in a way that creates an impact beyond the usual chatter of pundits, politicians, and opinionators. The live installation added another layer to the event by allowing participants to be the targets of the "stairway surveillance" as well as to play the role of someone who conducts surveillance.
Reflections:
In a way, the live component of the present installation speaks to the concerns of our present era, where the balance between freedom and security is shaky at best. It is understandable that video surveillance is used in our nation's efforts to protect our borders. But in our digital age, surveillance is pervasive. In most public spaces it is no longer possible to avoid the security camera's eye. Our images are captured and stored without our explicit knowledge. We do not know the identities or the intentions of those who view us, or our information, remotely.
We are numb to the ambient surveillance that surrounds us. We go about our daily activities without taking notice. We are silently tracked as we move across websites, dart in and out of supermarkets and shopping malls, and pay for our purchases with plastic. Our smartphones know where we are and will give out our personal information if we are not vigilant, as our default settings are often "public".
It is easy to forget that the silent type of surveillance exists. It is not so easy to ignore more invasive types of "surveillance". We must agree to submit to a high degree of inspection in the form of metal detectors, baggage searches, and in recent weeks, uncomfortable physical pat-downs, for the privilege of traveling across state borders by plane, within our own country. In some airports, we are subject to whole-body scans that provide strangers with views of our most private spaces. We go along with this effort and prove our innocence on the spot, for the greater good. Conversely, we have multiple means of conducting our own forms of surveillance, through Internet searches, viewing pictures and videos posted to the web, and playing around with Google Street View.
As I wandered around the Dialect Gallery with my video camera, I realized that I was conducting my own form of surveillance, adding another layer to the mix. Unfortunately, some of the time I had my camera set to "pause" when I thought I was filming, and vice versa; as a consequence, I did not capture people using the wireless mice to interact with the content on the displays. I went ahead with my mission and created a short video reflection of my impressions of Interactive Surveillance. If you look closely at the video between :40 and :47, you'll see some people across the street from the gallery whom I unintentionally captured; now they are part of my surveillance.
Although the video below was hastily edited, it includes music and sounds from the iMovie library that approximated the "soundtrack" that formed in my mind as I experienced the exhibit.
To get a better understanding of Interactive Surveillance, I recommend the following links:
Barbara Schrieber, Charlotte Viewpoint
Video Reflection of Interactive Surveillance (Lynn Marentette, 12/10/10)
Live Installation: Interactive Surveillance, by Annabel Manning and Celine Latulipe from Lynn Marentette on Vimeo.
Interactive Surveillance Website
Interactive Surveillance Flickr Photostream
Posted by
Lynn Marentette
LM3LAB's Useful Map of Interactive Gesture-Based Technologies: Tracking fingers, bodies, faces, images, movement, motion, gestures - and more
Nicolas Loeillot, of LM3LABS, has been ahead of the natural user interaction/interface game for many years as his company has expanded. He's done quite a bit of deep thinking about his company's work, and has used this wisdom to create a nice concept map that describes how LM3LABS' solutions fit into the world of gesture-based control and interaction:
In my opinion, this chart would make a great template for mapping out other natural interaction applications and products!
Here is the description of the concepts outlined in the chart:
"If all of them belong to the “gesture control” world, the best segmentation is made from 4 categories:
- Finger tracking: precise finger tracking; it can be single touch or multi-touch (the latter not always being a plus). Finger tracking also encompasses hand tracking, which comes, for LM3LABS products, as gestures.
- Body tracking: using one's body as a pointing device. Body tracking can be associated with "passive" interactivity (users are engaged without deciding to be) or "active" interactivity like 3D Feel, where "players" use their body to interact with content.
- Face tracking: using the user's face as a pointing device. It can be mono-user or multi-user. Face tracking is a "passive" interactivity tool for engaging users in an interactive relationship with digital content.
- Image tracking: Augmented Reality (AR) lets users use images (flyers, real products, t-shirts, faces,…) to interact with digital content. AR can be markerless or marker-based. Markerless technology has advantages, but marker-based AR is easier for users to understand. (Please note that markerless AR is made in close collaboration with AR leader Total Immersion)." -LM3LABS
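Translated into a data structure, the segmentation above is a small taxonomy, which suggests how the chart could serve as the template mentioned earlier. In this Python sketch, the category names come from the quote; the example product assignments are my own guesses, except for 3D Feel, which the quote names explicitly:

```python
# LM3LABS' four gesture-control categories as a lookup table.
GESTURE_TAXONOMY = {
    "finger_tracking": {"modes": ["single-touch", "multi-touch", "hand gestures"],
                        "examples": ["ubiq'window"]},   # assumption on my part
    "body_tracking":   {"modes": ["passive", "active"],
                        "examples": ["3D Feel"]},       # named in the quote
    "face_tracking":   {"modes": ["mono-user", "multi-user"],
                        "examples": []},
    "image_tracking":  {"modes": ["markerless AR", "marker-based AR"],
                        "examples": []},
}

def categorize(interaction_mode):
    """Return the tracking category a given interaction mode belongs to."""
    for category, info in GESTURE_TAXONOMY.items():
        if interaction_mode in info["modes"]:
            return category
    return None

print(categorize("marker-based AR"))  # image_tracking
```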
If you are interested in this subject and want to view some good examples of off-the-desktop interfaces and interactions, take a look at the LM3LABS blog, as well as Nicolas Loeillot's Vimeo channel. Also take a look at the sample of posts I've written about LM3LABS over the last few years - the links are at the end of this post.
I love LM3LABS' Interactive Balloon:
Interactive balloons from Nicolas Loeillot on Vimeo.
Interactive Balloons v lm3 labs v2 (SlideShare)
Background
I first discovered LM3LABS when I was taking a VR class and researching interactive, immersive large displays in 2005 or 2006. Back then, there wasn't much information about this sort of technology. A lot has changed since then!
I've learned quite a bit from watching LM3LABS (and others) grow, given my passion for postWIMP interactive technology and my commitment to blogging about this subject. Nicolas has really worked hard in this arena. As early as 2005, LM3LABS was working with Scala to provide "smart" interactive displays, and his company's applications have been supported by computer vision technologies for many years, allowing for gesture-based, or "touch-less," interaction, as demonstrated by the Catchyoo Interactive Table. This application caught my eye back in early 2007, when I was working on projects for large interactive displays for my HCI and Ubicomp classes and was thinking about creating a table-top application.
My hunch is that LM3LABS has set the foundation for further growth in the future, given the lessons they've learned by taking risks with postWIMP technologies over the past few years!
Previous Blog Posts Related to LM3LABS:
Interactive Retail Book (Celebrating history of Christian Dior from 1948-2010 (video)
Ubiq Motion Sensor Display at Future Ready Singapore (video)
Interactive Virtual DJ on a Transparent Pane, by LM3LABS and Brief Ad
LM3LABS' Catchyoo Interactive Koi Pond: Release of ubiq'window 2.6 Development Kit and Reader
A Few Things from LM3LABS
LM3LABS, Nicolas Leoillot, and Multi-touch
More from LM3LABS: Ubiq'window and Reactor.cmc's touch screen shopping catalog, Audi's touch-less showroom screen, and the DNP Museum Lab.
About LM3LABS
"Founded in 2003 by a team of passionate researchers, engineers, designers, and marketers from various international backgrounds, focused on fast transformation of innovation into unique products, LM3LABS is a recognized pioneer in computer vision-based interactivity solutions. Keeping a strong customer focus, LM3LABS' team of unique people pioneers new directions, explores new concepts, new technologies and new interactions. Engaging, playful and magic, LM3LABS' products and solutions are always scalable and reliable"
info@lm3labs.com
Note to readers:
Over the past couple of years there has been an explosion of postWIMP technologies and applications, and at this pace, it has been difficult for me to keep abreast of it all. There is quite a bit I miss, given my full-time job and daily life!
I welcome information about postWIMP interactive technologies and applications from my readers. Due to time constraints, not interest, I am not always able to post about a topic as soon as I'd like. That is OK, as my intention is not to be the first blogger to spread the latest tech news. I like to dig in deep when I can and make connections between innovative, interesting technologies and the people and ideas behind them.
Posted by
Lynn Marentette
Dec 11, 2010
SMALLab Update: Embodied and Engaged Learning - ASU researchers partner with GameDesk
SMALLab is an interdisciplinary collaborative project at the Arts, Media and Engineering program at Arizona State University, and includes people from fields such as education, art, theatre, computer science, engineering, and psychology. The SMALLab provides students with a multi-sensory, multi-modal way of learning concepts in an immersive environment, and uses a motion capture system that tracks the position of the students as they move and interact within the environment.
SMALLab's project lead is David Birchfield, a media artist, researcher, and educator who focuses on K-12 learning, media art installations, and live computer music performances. SMALLab researchers have recently partnered with GameDesk to develop a 6th grade curriculum for a GameDesk charter school in 2012. (Information and links related to GameDesk are located in the RELATED section of this post.)
Below is a detailed excerpt from an overview of SMALLab:
"In today’s world, digital technology must play a central role in students’ learning. A convergence of trends in the learning science and human-computer interaction (HCI) research offers new theoretical and technological frameworks for learning. in particular, mixed-reality, experiential media systems can support learning in a way that is social, collaborative, multimodal, and embodied. These systems comprise a new breed of student-centered learning environments [SCLE’s]. Importantly, they must address the practicalities of today’s classrooms and informal learning environments (eg.: space, infrastructure, financial resources) while embracing the innovative forms of interactivity that are emerging from our media research communities (eg: multimodal sensing, real time interactive media, context aware computing)...
...SMALLab is an extensible platform for semi-immersive, mixed-reality learning. By semi-immersive, we mean that the mediated space of SMALLab is physically open on all sides to the larger environment. Participants can freely enter and exit the space without the need for wearing specialized display or sensing devices such as head-mounted displays (HMD) or motion capture markers. Participants seated or standing around SMALLab can see and hear the dynamic media, and they can directly communicate with their peers that are interacting in the space. As such, the semi-immersive framework establishes a porous relationship between SMALLab and the larger physical learning environment. By mixed-reality, we mean that there is an integration of physical manipulation objects, 3D physical gestures, and digitally mediated components. By extensible, we mean that researchers, teachers, and students can create new learning scenarios in SMALLab using a set of custom designed authoring tools and programming interfaces."
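To make the "motion capture drives the media" idea concrete, here is a toy sketch of the kind of mapping such a platform might let a scenario author define. It is entirely illustrative, written in Python rather than SMALLab's actual authoring tools, and the parameter choices are invented:

```python
from dataclasses import dataclass

@dataclass
class TrackedStudent:
    """One participant's pose, as a motion-capture system might report it."""
    student_id: int
    x: float  # meters across the floor space
    y: float
    z: float  # height of a tracked wand or marker

def scene_update(students, space_width=5.0, max_height=2.5):
    """Invented mapping from tracked positions to media parameters."""
    events = []
    for s in students:
        events.append({
            "id": s.student_id,
            "projected_x": s.x / space_width,           # student's cursor in the floor projection
            "sound_level": min(s.z / max_height, 1.0),  # raising the wand raises the volume
        })
    return events

print(scene_update([TrackedStudent(1, x=2.5, y=1.0, z=1.25)]))
# [{'id': 1, 'projected_x': 0.5, 'sound_level': 0.5}]
```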
Below are a few videos about SMALLab, and information about GameDesk, an organization that is collaborating with SMALLab in California.
SMALLab Learning from SMALLab on Vimeo.
Below is a demonstration of a SMALLab learning activity:
Gamedesk Smallab Session from Gamedesk on Vimeo.
RELATED
Sara Corbett, NYTimes Magazine, 9/15/10
Info about GameDesk, from the GameDesk website:
"GameDesk is a 501(c)3 nonprofit research and outreach organization that seeks to reshape models for learning through game-play and game development. The organization looks to help close the achievement gap and engage students to learn core STEM curriculum. It develops project-based learning with a strong focus on purpose, ownership, and personal value. The organization (originally developed out of research and support at the University of Southern California's IMSC) has now been in development, practice, and/or evaluation for over two years in various schools in the Los Angeles area." -Gamedesk
Gamedesk Concept Chart
Posted by
Lynn Marentette