Sep 6, 2009
Oblong's g-speak Spatial Operating Environment: Gesture interaction, massive datasets, film production, and more.
[Video: "g-speak overview 1828121108" from john underkoffler on Vimeo]
What is g-speak?
From the Oblong website: "Spatial semantics at the platform level"
"Every graphical and input object in a g-speak environment has real-world spatial identity and position. Anything on-screen can be manipulated directly. For a g-speak user, "pointing" is literal."
"The g-speak implementation of spatial semantics provides application programmers with a single, ready-made solution to the interlocking problems of supporting multiple screens and multiple users. It also makes control of real-world objects (vehicles, robotic devices) trivial and allows tangible interfaces and customized physical tools to be used for input."
"The g-speak platform is display agnostic. Wall-sized projection screens co-exist with desktop monitors, table-top screens and hand-held devices. Every display can be used simultaneously and data moves selectively to the displays that are most appropriate. Three-dimensional displays can be used, too, without modification to application code."
Origins of Oblong
g-speak was born at the MIT Media Lab, and Oblong was founded in 2006; the work behind g-speak's gestural I/O began more than fifteen years ago. For more information, read "g-speak in slices."
Oblong built Tamper on top of the g-speak platform as a prototype for film production. The demo is below; at 0:08, the video shows sketches of the gestures used in g-speak.
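As a rough illustration of the glue a Tamper-like tool implies between gesture recognition and editing commands, here is a dispatch sketch. The gesture names and commands are placeholders I invented, not Oblong's vocabulary; the point is only that recognized gestures arrive as events and get mapped to application actions.

```cpp
#include <cstdio>
#include <functional>
#include <map>
#include <string>

int main() {
    // Hypothetical gesture vocabulary -> editing commands. None of these
    // names come from Oblong; they only show the dispatch pattern.
    std::map<std::string, std::function<void()>> commands = {
        {"point",           [] { std::puts("select the clip under the cursor"); }},
        {"pinch-drag",      [] { std::puts("scrub the clip along the timeline"); }},
        {"two-hand-spread", [] { std::puts("zoom the timeline"); }},
        {"flick",           [] { std::puts("send the clip to another display"); }},
    };

    // A real recognizer would emit these events from tracked hand poses;
    // here we fake a short stream of recognized gestures.
    for (const std::string g : {"point", "pinch-drag", "flick"})
        if (auto it = commands.find(g); it != commands.end())
            it->second();  // run the command bound to this gesture
}
```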
I hate wearing gloves, but I'd gladly put them on to play with the system for a few days!
Posted by Lynn Marentette