
Aug 10, 2013

Kinect Interaction to support people with disabilities: DoubleFine's Happy Action Theater/Kinect Party; OAK Air Switch and Face Switch, resources

I've been experimenting with gesture and touch-based applications for many years and I'm excited to see how things have unfolded over the past couple of years, especially in the field of special education.

Last week I downloaded DoubleFine's Kinect-based Happy Action Theater/Kinect Party to use during some group activities with students I work with who have significant disabilities (including severe autism). I wish I had discovered this suite of games sooner!

I had loads of fun with students and colleagues as we explored some of the 36 creative, and sometimes zany, minigames.  I had heard that DoubleFine had launched something special, but didn't realize how awesome it was until I spent some serious playful time with it at home last weekend. I then tried it out at work this past week.  

If you are planning to explore Happy Action Theater/Kinect Party, keep in mind that it plays best when there are at least two people and an audience to cheer everything along. Through the use of blob detection algorithms, the games can handle up to 6 players at a time, which is perfect for small-group special classes.
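As an aside, "blob detection" here just means finding connected clumps of foreground pixels in the camera image and treating each clump as a player. The toy Python sketch below is my own illustration of the idea (not DoubleFine's actual code): it counts 4-connected blobs in a binary foreground mask and caps the count at six, as the game does.

```python
from collections import deque

MAX_PLAYERS = 6  # Kinect Party tracks at most six people at once

def count_players(mask):
    """Count connected foreground blobs in a binary mask (list of lists
    of 0/1), capped at MAX_PLAYERS, using 4-connected flood fill."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                blobs += 1
                q = deque([(r, c)])
                seen[r][c] = True
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
    return min(blobs, MAX_PLAYERS)
```

In a real pipeline, the mask would come from the Kinect's depth-based background subtraction rather than a hand-written grid.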

The following trailer gives just a little hint of what this suite of mini-games is all about!


I noted that many of the games were effective in helping students become more aware of their peers. They began to play and interact with one another in ways I hadn't previously imagined. I especially liked the fact that many of the mini-games made it possible for students in wheelchairs to participate.

I look forward to exploring more of the games over the next few months and will follow up with a future post after I get more input from my colleagues (and students).

I learned about Kinect Party through my contact with people involved with the GestureSEN wiki. The wiki was created as part of a Professional Learning Community (PLC) for people who work with students in specialized schools, similar to the school where I work. It contains a wealth of information about the use of newer and emerging technologies, such as the iPad, Leap Motion, the Kinect, and eye-gaze systems, to support young people with significant disabilities, including autism. Some members of the GestureSEN wiki have learned to code or are in the process of doing so, motivated by what they've experienced so far with their students. (More information and links are listed in the "RELATED" section of this post.)

OAK

OAK was developed by RCAST at the University of Tokyo in collaboration with Microsoft Japan Co., Ltd. It uses the motion-tracking capabilities of Microsoft's Kinect sensor to create non-contact switches for people with limited mobility, enabling them to access computers and other electronic devices and systems. The video below provides a nice overview of the OAK system.

The OAK Pro bundle includes the following applications:

The Air Switch software uses the distance/depth capabilities of the Kinect sensor to support gestures of the head, hands, or larger body parts to turn things on or off. Because the Kinect senses in infrared, the Air Switch also works in the dark. The color mode function captures movements from smaller parts of the body, such as a fingertip.
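To make the idea concrete, a depth-based "air switch" can be modeled as a threshold plus a dwell time and a debounce. The Python sketch below is my own simplification, not OAK's actual algorithm; the threshold and frame counts are made-up values:

```python
class AirSwitch:
    """Toy non-contact switch: toggles when depth readings stay below
    `threshold_mm` for `dwell_frames` consecutive frames, then requires
    the hand to pull back past the threshold before it can fire again."""

    def __init__(self, threshold_mm=600, dwell_frames=3):
        self.threshold = threshold_mm
        self.dwell = dwell_frames
        self.count = 0      # consecutive frames under the threshold
        self.armed = True   # ready to fire?
        self.on = False     # current switch state

    def update(self, depth_mm):
        if depth_mm < self.threshold:
            self.count += 1
            if self.armed and self.count >= self.dwell:
                self.on = not self.on   # fire the switch
                self.armed = False      # ignore the hand until it retreats
        else:
            self.count = 0
            self.armed = True
        return self.on
```

The dwell time filters out momentary flickers in the depth image, and the debounce keeps a hand held in place from toggling the switch repeatedly.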

The Face Switch software uses facial recognition to track the movements of the face, mouth, tongue, and eyes. It can identify facial parts that have moved significantly, and it records motion data.

The Motion History software observes the movement of a person's body using the video component of the Kinect sensor. This customizes the system to the individual and ensures the accuracy of the switch. Movements are color-coded, giving the person setting up the system a means to fit it to the specific capabilities and needs of the user.

The OAK system can be enhanced by the use of peripherals, such as a USB 4-channel relay box, an IR remote control device or outlet, or other on/off switches/outlets.

The Assist-i corporation has made the OAK system and peripherals available on Amazon Japan.  From what I can tell from the company's website, the OAK software can be downloaded free for a 30-day trial.   I'd love to see how it would work with some of the students I work with who have difficulty accessing conventional switches!  It would be wonderful to come up with ways for these students to access a wider range of digital media activities and games.


RELATED
University adapting videogame technology to help physically disabled computer users
Philip Kendall, Japan Today, 10/10/12
OAK Air Switch (PC Kinect)
OneSwitch.org.uk 4/30/13
OAK Air Switch, Face Switch, Motion History Pro Bundle (pdf)
Assist-i Corporation
Amazon Ai store: Assist-i Corporation (Prices are in Yen.)

Below is a partial list of links to resources related to using or creating engaging interactive applications and games for people with special needs: 

Using Kinect in Special Ed Classrooms: Advice from Loudoun County, Virginia Teachers
Microsoft in Education Team, Microsoft in Education Blog, 6/1/12

KinectSEN-Kinect and Special Educational Needs round-up
Greg Duncan, Microsoft Coding4Fun, 10/11/12

Monkeying Around with Autism Assessments: Kinect-based game by Vectorform and Kaiser Permanente therapists offers a barrel of possibilities!
Lynn Marentette, Interactive Multimedia Technology, 7/23/13

Behind the Scenes: Creating Marty the Monkey (The character from Vectorform's autism assessment app) John Einselen, Vectorform Blog, 7/24/13


Kinect Party Review: More Fun from the Fun Kings
Casey Lynch, IGN, 12/20/12


The Power of Kinect in Special Needs Education
Willemijn de Lint,  Hans Smeele, mytylschool De Ruimte

Sign Language Recognition and Translation with Kinect (pdf)
Ming Zhou, et. al.

Cool Kinect move: Reading sign language in real time
Christopher MacManus, CNET, 7/18/13

Anthony Rhys, Trinity Fields ICT

James Winchester, SENClassroom blog

PMLD Eyegaze Project at Trinity Fields

Kinect hacking using Processing

Kinect SEN and Processing Resources
Keith Manville, Oak Grove College OpenSEN

Mat's Classroom Blog

GestureSEN Wiki
KinectSEN Wiki; KinectSEN News
ProcessingSEN wiki
LeapSEN Wiki
EyegazeSEN Wiki

SEN Students and Coding
OpenSEN, 3/5/13

Processing2

Kinect for Windows Blog

Kinect For Windows
DoubleFine

Understanding Engagement, Module 3.2: Training materials for teachers of learners with severe, profound and complex learning difficulties, UK Dept. for Education



May 27, 2013

Leap Motion and Google Earth Experiment: Cute Doggie Photo-globe Mashup

I finally experimented with my Leap Motion controller and Google Earth, using a mashup I created a few years ago with pictures of cute dogs from my Flickr photo-stream.  In the video below, you can see that my gesture navigation skills still need some practice!

I should have watched the following video of Leap Motion in action with Google Earth before trying this experiment at home : )  

I am pretty sure that developers will be able to tweak Leap Motion + Google Earth interaction in the near future.  I'd like to adapt it for use with kids as well as adults who have mild motor impairments.


Cute Doggies Photo-Globe Mash-up using Google Earth and a Flickr Set (How-to)

If you'd like to make your very own photo-globe using Google Earth and Flickr photos, here are the directions, ported and updated from a previous post:


This image is a screenshot showing photos of just about every dog I know, and some that happened to cross my path. In this post, I'll share some information about how to create a photo-globe in Google Earth.

The first step is to make sure you have lots of pictures related to your theme uploaded to a site such as Flickr.  (You can also create a photo-globe using pictures from your computer's hard drive.)

To get the pictures into Google Earth, I used the Image Overlay feature, and in the "link" textbox, I entered the image URL for each picture that I'd previously loaded as a set in Flickr.



To prepare for this, go to the "View" tab in the upper left-hand section of your screen and make sure that "Toolbar" is checked. Also make sure that "Grid" is selected, as this will make it easier to arrange and align your pictures. (You can turn off this feature later.) Near the top of the screen, click on the Image Overlay icon. (I've highlighted it in the picture.)



You'll have to enter the URL of the image you'd like to add to the globe in the "Link" textbox, which I've highlighted in the above picture.  In this case, I've used a link to one of my pictures in a Flickr set I created for this project.

One thing to keep in mind is that the picture will take up a much larger space than you might prefer, so you'll have to adjust the size using the green markers:

Positioning the Overlay in the Viewer
The following directions are from the "Positioning the Imagery in the Viewer" section of Google Earth's help documentation:


  1. Use the center cross-hair marker to slide the entire overlay on the globe and position it from the center. (Tip: do this first.)
  2. Use the triangle marker to rotate the image for better placement.
  3. Use any of the corner cross-hair markers to stretch or skew the selected corner. If you press the Shift key when selecting this marker, the image is scaled from the center.
  4. Use any of the four side anchors to stretch the image in or out from the selected side. If you press the Shift key when doing this, the image is scaled from the center.

TIP:  Try positioning the center of the image as a reference point first, and then use the Shift key in combination with one of the anchors to scale the image for best positioning.

Directions updated to reflect latest version of Flickr, as of 5/27/13:

To find the image URL for a photo in Flickr that you wish to link on your photo-globe, select the desired photo, right-click it, and choose "Copy Image URL".

Put your cursor in the Link section of the "New Image Overlay" dialog box in Google Earth, and right-click to select "Paste" from the drop-down menu.

Then repeat the process.  It helps to name each picture so that you can find it easily in Google Earth.

To enhance your mash-up, you can add place-marks that contain URLs that link to additional information about the subject of a picture, such as blog posts with embedded videos and/or text related to a picture, and so forth. Directions can be found in Google Earth's help section.

The process of building a photo-globe in Google Earth is a bit tedious.  If someone has a short-cut to share, please let me know!
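One possible shortcut is to script the overlay step instead of clicking through the Image Overlay dialog for every photo. The Python sketch below is a hypothetical example that writes a KML file Google Earth can open directly, with one GroundOverlay per photo; the names, URLs, and latitude/longitude bounds are placeholders you would replace with your own. The element names (GroundOverlay, Icon, href, LatLonBox) come from the standard KML schema.

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def photo_globe_kml(photos):
    """Build a KML document with one GroundOverlay per
    (name, image_url, north, south, east, west) tuple, mimicking
    what Google Earth's Image Overlay dialog creates by hand."""
    ET.register_namespace("", KML_NS)
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    for name, url, n, s, e, w in photos:
        ov = ET.SubElement(doc, f"{{{KML_NS}}}GroundOverlay")
        ET.SubElement(ov, f"{{{KML_NS}}}name").text = name
        icon = ET.SubElement(ov, f"{{{KML_NS}}}Icon")
        ET.SubElement(icon, f"{{{KML_NS}}}href").text = url  # image URL
        box = ET.SubElement(ov, f"{{{KML_NS}}}LatLonBox")
        for tag, val in (("north", n), ("south", s), ("east", e), ("west", w)):
            ET.SubElement(box, f"{{{KML_NS}}}{tag}").text = str(val)
    return ET.tostring(kml, encoding="unicode")
```

Save the returned string to a .kml file and open it in Google Earth; each overlay can then be fine-tuned by hand using the markers described above.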


RESOURCES
Google Earth
Flickr
Programmable Web (My hunch is that this site might provide some information about shortcuts for creating a photo-globe in Google Earth.)
LEAP Motion

Mar 16, 2013

UPDATE: What's New for Kinect? Fusion, real-time 3D digitizing, design considerations, and more.

The Evolution of Microsoft Kinect

I've been following the evolution of Microsoft's Kinect, and recently discovered a few interesting videos that show how far the system has come. According to Josh Blake, the founder of the OpenKinect community and author of the Deconstructing the NUI blog,  the Kinect for Windows SDK v1.7 will be released on Monday, March 18th, from http://www.kinectforwindows.com.  More details about this version can be found on Josh's blog as well as the official Kinect for Windows blog.


It is possible to create applications for desktop systems that work with the Kinect in interesting ways, as you'll see in the following videos. I think there is potential here for use in education/edutainment!

Below is a video of Toby Sharp, of Microsoft Research, Cambridge, demonstrating Kinect Fusion.  The software allows you to use a regular Kinect camera to reconstruct the world in 3D.



KinEtre: A Novel Way to Bring Computer Animation to Life
According to information from the YouTube description, "KinÊtre is a research project from Microsoft Research Cambridge that allows novice users to scan physical objects and bring them to life in seconds by using their own bodies to animate them. This system has a multitude of potential uses for interactive storytelling, physical gaming, or more immersive communications."




The following videos are quite long, so feel free to re-visit this post when you have time to relax and take it all in!

Kinect Design Considerations
This video covers Microsoft's Human Interface Guidelines, scenarios for interaction and use, and best practices for user interactions.  It also includes a preview of the next major version of the Kinect SDK. 


Kinect for Windows Programming Deep Dive
This video discusses how to build Windows Desktop apps and experiences with the Kinect, and also previews some future work.




RELATED
Kinect for Windows Developer Downloads
Kinect for Windows Blog
Deconstructing the NUI Blog (Josh Blake)
Microsoft Kinect Learns to Read Hand Gestures, Minority Report-Style Interface Now Possible
Celia Gorman, IEEE Spectrum, 3/13/13
Kinect hand recognition due soon, supports pinch-to-zoom and mouse click gestures.
Tom Warren, The Verge, 3/6/13
Microsoft's KinEtre Animates Household Objects
Samuel K. Moore, IEEE Spectrum, 8/8/12
Kinect Fusion Lets You Build 3-D Models of Anything
Celia Gorman, IEEE Spectrum, 3/6/13
Description of Kinect sessions at Build 2012
Kinect for every developer!
Tom Kerhove, Kinecting for Windows, 2/15/13
Kinect in the Classroom
Kinect Education

Note: Although I recently received my developer kit for Leap Motion, another gesture-based interface, I haven't lost interest in following news for Kinect.

Feb 20, 2013

AirHarp for Leap Motion, a Responsive Musical Natural User Interface

I like this demonstration of Adam Somers' AirHarp music application for use with the Leap Motion 3D controller:


AirHarp is being developed in C++ using Adam Somers' audio processing toolkit, MusKit. This looks interesting! Things have changed since I last took a computer music technology course (back in 2003).

Adam Somers is a senior software engineer at Universal Audio.  He has a graduate degree in music technology from Stanford, and a background in computer science, electronics, human-computer interaction, and signal processing.

Leap Motion is a motion-control software and hardware start-up company located in San Francisco, California. According to promotional information from the website, the company's first product, the Leap Motion controller, is 200 times more sensitive than existing technologies.  It will be interesting to see how this plays out.  (I'm still waiting for my pre-order.)

RELATED
AirHarp (links to GitHub)
Leap FAQs
Leap Motion Website
Leap Motion Developer Portal
Leap Motion Leadership Team
Leap Motion goes retail: Motion controller sold exclusively at Best Buy
Michael Gorman, engadget, 1/16/13

Leap Motion: Low Cost Gesture Control for your Computer Display
Asus partners up with Leap Motion, PCs with 3D motion control to debut in 2013
Michael Gorman, engadget, 1/3/13
Stanford Center for Computer Research in Music and Acoustics


Jan 17, 2013

Xbox Kinect in the OR: Kinect supports gesture interaction with 3D imaging of the patient during surgery.

Here's an interesting use of technology for health - the Xbox Kinect in the OR!

Thanks to Harry van der Veen for the link!


RELATED
Kinect sensor poised to leap into everyday life
Niall Firth, NewScientist, 1/17/13

For the tech-curious:
PrimeSense (Company that developed the 3D depth sensor that powers the Kinect, the sensor in Ava, a healthcare robot by iRobot, and more.)

OpenNI (Framework for the development of 3D sensing middleware libraries and applications.)

NiTE: Natural Interface Technology for End User (Perception algorithms layer for 3D computer vision, allows for hand locating, tracking, analyzing scenes, and tracking skeleton joints.)

Jul 29, 2012

Blast from the 2009 past: News, Videos, and Links about Multi-touch and Screen Technologies

One of the things I like to do is share updates about the world of multimedia, multi-touch, gesture, screen, surface, and interactive technologies, focusing on off-the-desktop applications and systems. When I started this blog, I had to put forth quite a bit of effort just to FIND interesting things to blog about.  


These days, there are so many sources that focus on emerging - and now commonplace- interactive technologies, my main challenge is to filter the noise.  Where do I begin?


My archives are vast.   I randomly picked the year 2009 and came across one of my previous posts, "News, Videos, and Links about Multitouch and Screen Technologies."   The post is long, and contains a number of videos and links that probably will be of value to a future curator of the history of technology.


I welcome comments from readers who might be able to help me update information about various applications and systems I've featured on this blog in the past. 

The pictures are screenshots from the results of an  image search for "interactivemultimediatechnology".  Over the past 6 years, I've posted quite a few!

Jul 21, 2012

Musings about NUI, Perceptive Pixel and Microsoft, Rapid Creative Prototyping (Lots of video and links) Revised

It just might be the right time for everyone to brush up on 21st century tech skills. iPads and touch-phones are ubiquitous. Touch-enabled interactive whiteboards and displays are in schools and boardrooms. With Microsoft's Windows 8 and the news that the company recently acquired Jeff Han's company, Perceptive Pixel, I think that there will be good support - and more opportunities - for designers and developers interested in moving from GUI to NUI.


In the video below, from CES 2012, Jeff Han provides a good overview of where things are moving in the future.  We are in a post-WIMP world and there is a lot of catching up to do!

CES 2012  Perceptive Pixel and the Future of Multitouch (IEEE Spectrum YouTube Channel)



During the video clip, Jeff explains how far things have come during the past few years:
 "Five and 1/2 years ago I had to explain to everybody what multi-touch was and meant. And then, frankly, we've seen some great products from folks like Apple, and really have executed so brilliantly, that everyone really sees what a good implementation can be, and have come to expect it.  I also think though, that the explosion of NUI is less about just multi-touch, but an awareness that finally people have that you don't have to use a keyboard and mouse, you can demand something else beside that.  People are now willing to say, "Oh, this is something I can try, you know, touch is something I can try as my friendlier interface"."

Who wouldn't want to interact with a friendlier interface? Steve Ballmer doesn't curb his enthusiasm about Windows 8 and Perceptive Pixel. Jeff Han is pleased with how designs created in Windows 8 scale for use on screens large and small, and he explains how Windows 8 can support collaboration. The Story Board application (7:58) on the large touchscreen display looks interesting.

I continue to be frustrated by the poor usability of many web-based and desk-top applications.  I like my iPad, but only because so many dedicated souls have given some thought to the user experience when creating their apps.  I often meet with disappointment when I encounter interactive displays when I'm out and about during the day.  It is 2012, and it seems that there are a lot of application designers and developers who have never read Don Norman's The Design of Everyday Things!



I enjoy making working prototypes and demo apps, but my skill set is stuck in 2008, the last year I took a graduate-level computer course.  I was thinking about taking a class next semester, something hands-on, creative, and also practical, to move me forward. I can only do so much when I'm in the DIY mode alone in my "lab" at home.  I need to explore new tools, alongside like-minded others.  


There ARE many more tools available to designers and developers than there were just four years ago. Some of them are available online, free, or for a modest fee. I was inspired by a link posted by my former HCI professor, Celine Latulipe, to her updated webpage devoted to Rapid Prototyping tools. The resources on her website look like a good place to start for people who are interested in creating applications for the "NUI" era. (Celine has worked on many interesting projects that explore how technology can support new and creative interaction, such as Dance.Draw.) Below is her description of her updated HCI resources:

"New HCI resource to share: I have created a few pages on my web site devoted to Rapid Prototyping tools, books, and methods. These pages contain reviews of various digital tools, including 7 different desktop prototyping apps, and including 8 different iPad apps for wireframing/prototyping. I hope it's useful to others. Feel free to share... and please send me comments and suggestions if you find anything inaccurate, or if you think there is stuff that I should be adding. I will be continuing to update this resource." -http://www.celinelatulipe.com (click on the rapid prototyping link at the top)



IDEAS
Below are just a few of my ideas that I'd like to implement in some way. I can't claim ownership of these ideas - they are mash-ups of what comes to me in my dreams, usually after reading scholarly publications from ACM or IEEE, or attending tech conferences.
  • An interactive timeline, (multi-dimensional, multi-modal, multimedia) for off-the-desktop interaction, collaboration, data/info analysis exploration.  It might be useful for medical researchers, historians, genealogists, or people who are into the "history of ideas".  Big Data folks would love it, too. It would handle data from a variety of sources, including sensor networks. It would be beautiful to use.
  • A web-based system of delivering seamless interactive, multi-modal, immersive experiences, across devices, displays, and surfaces. The system would support multi-user, collaborative interaction.  The system would provide an option for tangible interaction.
  • A visual/auditory display interface that presents network activity, including potential intrusions, malfunctions, or anything that needs immediate attention that would be likely to be missed under present monitoring methods. 
  • Interactive video tools for creation, collaboration, storytelling.  (No bad remote controllers needed.)
  • A "wearable" that provides new ways for people to express and communicate creatively, through art, music, dance, with wireless capability. (It can interact with wireless sensor networks.)*
  • A public health application designed to provide information useful in understanding and preventing sepsis. This application would utilize the timeline concept described at the top of this list. This concept could also be useful in analyzing other medical puzzles, such as autism.
Most of these ideas could translate nicely to educational settings, and the focus on natural user interaction and multi-modal i/o aligns with the principles of Universal Design for Learning, something that is important to consider, given the number of "at-risk" learners and young people who have disabilities.

I welcome comments from readers who are working on similar projects, or who know of similar projects. I also encourage graduate students and researchers who are interested in natural user interfaces to move forward with an off-the-desktop NUI project. I hope that my efforts can play a part in helping people make the move from GUI to NUI!



Below are a few videos of some interesting projects, along with a list of a few references and links.


SMALLab (Multi-modal embodied immersive learning)


PUPPET PARADE: Interactive Kinect Puppets (CineKid 2011)



MEDIA FACADES: When Buildings Start to Twitter

HUMANAQUARIUM (CHI 2012)

NANOSCIENCE NRC Cambridge (Nokia's Morph project)

Examples: YouTube Playlists
POST WIMP EXPLORERS' CLUB
POST-WIMP EXPLORER'S CLUB II

Web Resources
Celine Latulipe's Rapid Prototyping Resources 
Creative Applications
NUI Group: Natural User Interface Group
OpenFrameworks and Interactive Multimedia: Funky Forest Installation for CineKid
SMALLab Learning
OpenExhibits: Free multi-touch + multiuser software initiative for museums, education, nonprofits, and students.
OpenSense Wiki 
CINEKID 2012 Website 
Multitouch Systems I Have Known and Loved (Bill Buxton)
Windows 8
Perceptive Pixel
Books
Natural User Interfaces in .NET: WPF 4, Surface 2, and Kinect (Josh Blake, Manning Publications)
Chapter 1 pdf (Free)
Brave NUI World: Designing Natural User Interfaces for Touch and Gesture (Daniel Wigdor and Dennis Wixon)
Designing Gestural Interfaces (Dan Saffer)
Posts
Bill Snyder, ReadWrite Web, 7/20/12

I noticed some interesting tools on the Chrome web store - I plan to devote a few more posts to NUI tools in the future.

Jul 12, 2012

TechCrunch Charlotte Highlights: T1 Vision; inTouch Collaborative Software


Yesterday evening I attended a meetup of TechCrunch and Charlotte-area techies, held at the uptown Packard Place entrepreneurial center. It was jam-packed with people all abuzz with tech start-up fever, fueled by awesome food provided by Zen Fusion. Although my main purpose for attending the TechCrunch meet-up was to learn more about innovative technology start-ups in my region, I was also hoping to capture a few shots of interesting people. I like to keep my eye open for tee-shirt slogans, and one worn by a young gentleman caught my eye, proclaiming that he'd seen the future, and it is in his browser. On the back of his tee-shirt was a bright HTML5 logo, something that is dear to my heart, as I am moving from HTML4 to HTML5. He was polite and agreed to pose for a couple of photographs:

It turned out that the HTML5 guy was at the TechCrunch event with one of his colleagues from T1 Visions, a social touchscreen solutions company that I've featured previously on this blog.  They caught me up on the growth of this start-up company, which now has 15 employees and has broadened its reach beyond table-top restaurant applications to the healthcare, education, corporate, retail, and broader hospitality sectors.

What I like about table-top systems is that they provide support for "natural user interaction". They allow for multiple modes of interaction with, and presentation of, multimedia content. Over the past several years, these systems have proven to be useful to a wide range of people and settings. Interfaces that support touch and gesture interaction are no longer viewed as novelties, given the pervasiveness of touch-phones and tablets and their ease of use for most people.

A useful product from T1 Visions is the T1 Collaboration Table. It supports touch-screen interaction and can also handle up to four simultaneously connected laptops.   The table system provides a media viewer that supports sharing of photos across screens, devices, and surfaces.  It also contains a web browser, a presentation viewer, and a whiteboard that is compatible with video conferencing.  The company provides customized applications for its clients.   In the Charlotte area, some of the tables can be found in restaurants, such as the Mellow Mushroom, Cowfish, and Harpers.  A few were recently installed in the Atkins library at UNC-Charlotte, to support group-work among students.

To learn more about what T1 Visions has to offer, take a few minutes to view the following videos and follow the links at the end of this post!

Demonstration of how the collaboration table can work within a business environment:


Demonstration of the T1VISION touch wall:
RELATED
T1 Visions Gallery
T1 Visions: Social Touchscreen Solutions
Interactive tabletops bring people together
Marty Minchin, Charlotte Observer, South Charlotte News, 2/20/12
Interactive Technology in the Carolinas: T-1 Visions Update

NOTE:
TechCrunch is a technology media group founded in 2005 that focuses on innovative technologies.  This summer, a group of TechCrunchers are visiting cities in the south that were previously not under their radar, such as Savannah and my home region, Charlotte, N.C.   The Charlotte TechCrunch meetup was held on Wednesday, July 11, 2012.  I plan to devote a few more blog posts to share what I learned.

Jul 8, 2012

PO-MO, a creative group that combines digital art, interaction, movement, and play to create engaging surfaces and spaces.

I recently learned more about PO-MO, a relatively new start-up tech company based in Winnipeg, Canada. According to the company's information, PO-MO "specializes in interactive digital display solutions, including gesture and motion based interactivity, interactive display content creation and management, and large interactive display and projection services for advertisers, educators, and events."


Po-Motion was a finalist in an elevator pitch video contest last fall. It has several advantages over potential competitors. The system is easy to use and priced within reach of schools, museums, and other cost-conscious groups that would like to provide technology-supported immersive interactive experiences for people of all ages. The PO-MOtion software designed for interactive floors and walls starts at $39.99 and works on any computer with any USB web camera and a projector. Other applications make use of Kinect sensors.


I especially like one of PO-MO's recent projects, the Impossible Animals Museum Exhibit, created using Unity 3-D, for the Manitoba Children's Museum.  How does it work?  Children create a colored egg using crayons and paper, which is then scanned into the exhibit and digitally embedded into the system, which includes an interactive wall and floor.  When the egg is touched, it is activated to hatch, and then becomes a motion reactive animal.  The environment includes things like water, landscapes, and even a spaceship.  The system has a "reset world" button for museum staff to use when needed.  

Impossible Animals Exhibit

Impossible Animals Interactive Museum Installation from PO-MO Inc. on Vimeo.



The following video explains how the PO-MO system works:


PO-MO is also involved in promotional projects, assisting retailers, ad agencies, and brand managers with creative ways to engage customers and clients:
Ragpickers Kinect-based Window Display

Ragpickers Kinect Window Display from PO-MO Inc. on Vimeo.


The following video provides a scrolling description about PO-MO's work, including promising data collected during implementation:

Other products and services provided by PO-MO include mobile app development. I especially like the augmented reality business card depicted in the following video clip:

Augmented Reality Business Card from PO-MO Inc. on Vimeo


Imagine if your local shopping centers, museums, libraries, or even schools offered this level of immersive interaction on a regular basis!

RELATED
The PO-MOtion system has a wide range of uses. It is currently used in an educational setting in a sensory room for students with special needs, something that I'd like to try out in the near future with students at Wolfe School. I plan to share more about this in another post.


PO-MO Case Studies


PO-MO Bios:
Meghan Athavale – Director/CEO, PO-MO Inc.
"Meghan has been a professional designer and animator since graduating from Red River College in 1997. After graduation, she moved to Calgary, where she spent almost two years directing projects at Aurenya Studios, a start-up animation company. In 2001, Meghan was engaged by Community Connections to support community-based IT development projects in rural Manitoba and in Winnipeg’s inner city.  In 2008, Meghan joined Manlab, developing educational interactive games and resources for Immigrate Manitoba. She also launched Meghan PO-MO Project, a sole proprietorship which provided sound reactive visuals for DJs and venues across Canada. In 2009, Meghan was contracted as the User Experience Designer at Tipping Canoe, a multinational internet marketing company.

In 2010, Meghan formed PO-MO Inc. in partnership with Curtis Wachs. She began working exclusively for the company in December, 2010. Today, Meghan is the driving force behind PO-MO Inc."


Curtis Wachs – Technical Director/COO, PO-MO Inc.
"Curtis graduated from Assiniboine Community College in 2003 where he studied object oriented programming. Directly upon graduating, Curtis was hired by Assiniboine Community College to help design and develop software for online classes. Curtis relocated to Winnipeg in 2006 to create interactive training material for sales staff at E.H. Price. During the course of his work, Curt was apprenticed in 3D modelling and animation by Liem Ngyuen, a former Frantic Films resident. In 2008, Curtis joined Manlab, where he created online educational games for Travel Manitoba, Immigrate Manitoba, and other clients. In 2010, Curtis formally joined PO-MO Project, and the company became a partnership. In June 2010, PO-MO Inc. was founded.

Curtis is currently the technical director at PO-MO Inc., overseeing the project management and workflow of contracted and R&D development projects."


Feb 4, 2012

Razorfish Gesture and Touch Platform for the "Retail Experience"


Razorfish Connected Retail Experience Platform (codename "5D") from Razorfish - Emerging Experiences on Vimeo.


The above video is an overview of the "5D" connected retail experience platform by Razorfish Emerging Experiences. This concept looks like it was designed for me - someone who loves tech and has a high need for hassle-free shopping. Someday I hope to have the ultimate technology-supported shopping experience : )




RELATED
Razorfish Press Release
Razorfish


SOMEWHAT RELATED 
Previous posts:
Interactive Visual Merchandising
Another close encounter with in-store digital display marketing at Best Buy...
Interactions (ACM) Cover Article - "Proxemic Interactions: The New Ubicomp?" Plus - Close encounters with displays at the airport and JC Penney
Pervasive Retail Part 1: Web UX  Meets Retail CX - Screens Large and Small at the Mall, Revisited
Interactive Displays in Public Spaces
Interactive Display with QR Tag: Close Encounter at the Orlando Airport

Other:
Retail Customer Experience website
Pervasive Retail

GestureTek: Retail Marketing Solutions: Interactive Screen and Window Display Systems for Advertising in Stores, Malls and Shopping Centers
JC Penney Remodel  Interactive Video
Window Shopping Goes High-Tech With Motion-Sensing Interactive Displays
Bridgette Meinhold, Ecouterre, 9/22/11

Nov 28, 2011

FlatFrog Multitouch Videos: Point Separation, Multi-input, Multi-user input

FlatFrog Multitouch is a company based in Sweden, founded by Ola Wassvic and Christer Fåhraeus.  The technologies support 20+ simultaneous touches and recognize object size, a useful feature. FlatFrog screens can be optimized for a wide range of light conditions.  FlatFrog's multi-touch and gesture interaction is featured in the short video clips below.


FlatFrog is gearing up for commercial release. According to the FAQs on the website, "all sizes are possible, from 5" to 100" and upward."  Promethean is one of the company's investors.  There is also a volume manufacturing agreement with Kortek Corporation, known for industrial and gaming displays.
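Out of curiosity about how an application might consume this kind of rich touch data, here is a minimal sketch of size-based contact classification. This is purely illustrative and is not FlatFrog's actual API; the `Touch` event structure, field names, and the millimeter thresholds are all hypothetical assumptions, but they show why reporting object size per contact is useful - the app can treat a fingertip, a stylus, and a tangible object placed on the screen differently.

```python
# Illustrative sketch, NOT FlatFrog's API: classifying touch contacts
# by reported contact size, as a screen that "recognizes object size"
# makes possible. All names and thresholds here are hypothetical.
from dataclasses import dataclass


@dataclass
class Touch:
    touch_id: int
    x: float
    y: float
    size_mm: float  # contact diameter reported by the screen (assumed)


def classify(touch: Touch) -> str:
    """Rough size-based classification of a single contact."""
    if touch.size_mm < 5:
        return "stylus"
    if touch.size_mm < 15:
        return "finger"
    return "object"  # e.g. a tangible token resting on the display


def classify_frame(touches: list[Touch]) -> dict[int, str]:
    # A 20+ point screen can report many simultaneous contacts per frame;
    # map each contact id to its classification.
    return {t.touch_id: classify(t) for t in touches}


frame = [
    Touch(1, 10.0, 10.0, 3.0),    # thin tip -> stylus
    Touch(2, 50.0, 40.0, 9.0),    # fingertip-sized -> finger
    Touch(3, 200.0, 120.0, 40.0), # large contact -> object
]
print(classify_frame(frame))  # {1: 'stylus', 2: 'finger', 3: 'object'}
```

In a multi-user setting, this per-contact metadata is what lets one surface support several people using fingers and physical objects at the same time.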




Thanks to Touch User Interface for sharing this information! (Touch User Interface is the blog of Sensible UI, known for the ArduMT, aka the Arduino Multi-touch Development Kit.)