I came across this demonstration of i/o Data Centers' 12-by-7-foot interactive video wall, which makes playing around with views of data center modules...interesting! The display is a gesture-based multi-touch system. (I'll update this post when I get more information.)
Here is the description from the company's YouTube channel:
"Instead of hauling a 40-foot long modular data center to a trade show, i/o Data Centers is taking a high-tech approach to customer tours of their i/o Anywhere modular data center. The i/o team has created a 12-foot by 7-foot touchscreen video wall to provide interactive tours of the company's facilities. Selecting a "hot spot" pops up a virtual data center, complete with cross sections and product info, following the concept of the touch screens in the sci-fi movie "Minority Report.""
FYI: I/O Data Centers has an application that runs on the Surface.
UPCOMING:
Stay tuned for my upcoming posts!
News about LM3LABS (Previous post)
Interactive Surveillance: Celine Latulipe (technologist) and Annabel Manning (artist)
Focused on interactive multimedia and emerging technologies to enhance the lives of people as they collaborate, create, learn, work, and play.
Dec 11, 2010
Dec 9, 2010
Interested in the OpenNI Initiative? OpenKinect? To learn more, read Josh Blake's Interview of Tamir Berliner of PrimeSense
Josh Blake, Deconstructing the NUI, 12/9/10
Josh Blake recently interviewed Tamir Berliner, one of the founders of PrimeSense. If you haven't heard, Microsoft's Kinect is based on technology licensed from PrimeSense, a company that provides consumer electronics with natural user interaction capabilities. The good news is that the company recently released open-source middleware for natural interaction, along with depth-camera drivers. It will be interesting to see how this plays out in the near future!
In the interview, Tamir discussed a number of topics related to post-WIMP technologies. He also announced the newly created OpenNI, "an industry-led, not-for-profit organization formed to certify compatibility and interoperability of Natural Interaction (NI) devices, applications, and middleware." It is good to see this level of support for the cause!
Here is a quote from the interview that I especially liked:
"I believe that till today the devices we’ve been using, made us learn greatly lot about them before we could use them and gain their value. I’m pretty sure everyone who is reading this has got at least 3 remotes sitting on his living room table, and at least once a week needs to help someone use their computer/media center/phone/etc. It’s time for that to change and it’s up to us, the technologists to make this revolution happen, it’s time for the devices to take the step of understanding what we want and making sure we get that, even without asking if it’s a trivial task as opening a door when we approach, closing the lights when we leave the room, even making sure we have hot water to shower with when we return from work or wake up in the morning, depends on what we normally do." -Tamir
RELATED
Here are a couple of videos from the OpenNI website that demonstrate OpenNI-compliant applications:
OpenNI-compliant real-time skeleton tracking by PrimeSense
OpenNI-compliant real-time SceneAnalyzer by PrimeSense
FYI:
Josh Blake is the author of the Deconstructing the NUI blog. Over the past couple of years, he's explored natural user interfaces and interactions through his work on applications designed for Microsoft Surface and Windows 7, built with Windows Presentation Foundation.
About a month ago, Josh organized OpenKinect, an online community that supports collaboration among people interested in exploring ways to use the Kinect with PCs and other devices. An example of this effort is libfreenect, an open-source project that includes drivers and libraries for Windows, Linux, and OS X.
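The raw output those drivers expose is a per-pixel depth image, and turning a depth pixel into a 3D point is simple pinhole-camera math. Here is a minimal sketch; the intrinsic values below are approximate figures often quoted for the Kinect depth camera, not official calibration data:

```python
# Back-project a Kinect-style depth pixel into a 3D point using the
# pinhole camera model. The intrinsics are approximate figures often
# quoted for the Kinect depth camera (assumptions, not calibrated values).
FX = 594.2   # focal length in pixels, x axis
FY = 591.0   # focal length in pixels, y axis
CX = 339.3   # principal point x (for a 640x480 depth image)
CY = 242.7   # principal point y

def depth_pixel_to_point(u, v, depth_mm):
    """Map pixel (u, v) with depth in millimeters to (x, y, z) in meters."""
    z = depth_mm / 1000.0     # distance along the optical axis
    x = (u - CX) * z / FX     # horizontal offset grows with depth
    y = (v - CY) * z / FY     # vertical offset grows with depth
    return (x, y, z)

# A pixel at the principal point lands on the optical axis:
point = depth_pixel_to_point(339.3, 242.7, 2000)  # -> (0.0, 0.0, 2.0)
```

Running this over every pixel in a depth frame yields the kind of point cloud that skeleton trackers and scene analyzers take as input.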
The Natural User Interface Revolution
Josh Blake, 1/5/09
Kinect for Xbox 360: The inside story of Microsoft's secret 'Project Natal' (long, but worth reading) David Rowan, Wired UK, 10/29/10
People of libfreenect
OpenNI User Guide (pdf)
Posted by
Lynn Marentette
Plug for Computer Science Education Week: an informative series of short video clips, resources, and links to promote understanding of the importance of computer science and related fields
This week is Computer Science Education Week, part of an effort by the ACM (Association for Computing Machinery) and the CSTA (Computer Science Teachers Association) to promote awareness of the importance of computer science in K-12 education. The CSTA developed a series of short videos to share with students as part of this effort. The videos highlight the multitude of ways that computer scientists impact our world. In my opinion, they would also be appropriate for sharing with parents, teachers, school counselors, school administrators, and school board members.
Computer Science and Entertainment
Computer Science and the Environment
Computer Science and Communications
Computer Science and Medicine
Computer Science and Empowerment
To dig deeper into this topic, read Running On Empty: The Failure to Teach K-12 Computer Science in the Digital Age (pdf)
RELATED
CSEdWeek: Computer Science Education Week
Computing in the Core
Computer Science in Education Facebook Page
Anita Borg Institute for Women and Technology
ACM/CSTA's Recommendations
A Model Curriculum for K-12 Computer Science (PDF)
Google: Exploring Computational Thinking
ACM Computing Careers Website
Cross-posted on the Tech Psych blog.
Dec 6, 2010
UPDATE: Demo 2 of the Kinect Theremin, Therenect, by Martin Kaltenbrunner
I recently posted about the Therenect, a gesture-controlled digital theremin for Microsoft's Kinect, created by Martin Kaltenbrunner (Therenect: Theremin for the Kinect!). It looks like Martin has been busy polishing the application over the past few days, as you can see from the video below:
Therenect - Kinect Theremin - 2nd Demo from Martin Kaltenbrunner on Vimeo.
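A theremin maps hand position continuously to pitch and volume, and a Kinect theremin does the same with depth-camera hand coordinates. The sketch below is a hypothetical mapping, not Martin's actual code: the ranges are made up for illustration, and the pitch curve is exponential so that equal hand movements span equal musical intervals.

```python
# Hypothetical hand ranges (assumptions for illustration only):
NEAR, FAR = 0.5, 1.5            # usable depth range for the pitch hand, meters
F_LOW, F_HIGH = 110.0, 1760.0   # A2 to A6, four octaves

def hand_to_pitch(depth_m):
    """Map pitch-hand distance to a frequency in Hz, exponentially,
    so equal hand movements span equal musical intervals."""
    t = (FAR - depth_m) / (FAR - NEAR)   # 0.0 at the far edge, 1.0 up close
    t = min(1.0, max(0.0, t))            # clamp to the playable range
    return F_LOW * (F_HIGH / F_LOW) ** t

def hand_to_volume(height_m, low=0.8, high=1.6):
    """Map volume-hand height to an amplitude between 0 and 1, linearly."""
    a = (height_m - low) / (high - low)
    return min(1.0, max(0.0, a))

# A hand at the midpoint of the depth range plays A4:
freq = hand_to_pitch(1.0)  # -> 440.0
```

Feeding these two values into an oscillator each frame gives the continuous, glissando-style response that makes a theremin sound like a theremin.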
RELATED
Virtual Theremin Made with Kinect; Real Thereminists Will Make it Useful
Peter Kirn, Create Digital Music, 11/30/10
Labels: create digital music, demo, kinect, martin kaltenbrunner, theremin, therenect, update, video
ICE PAD: Interactive Multitouch Ice Sculpture by Art Below Zero (video)
Here is the information about the interactive sculpture from the Art Below Zero YouTube Channel:
"Created by David Sauer & Max Zuleta for the Lake Forest Tree Lighting Festival.This Ice Crystal Display was the 1st to be created in the USA, Transforming 300 pounds of ice into the equivalent of a giant Ipad touch screen. "People always want to touch our Ice Sculptures, This Interactive Display gave them the perfect reason to get their hands cold." said Max Zuleta owner of Art Below Zero. The public response was amazement and interest in the workings of the touch screen in ice. Our favorite guess was "It must work by sensing body heat!"..."
"...The system is known as Rear Diffused Illumination or Rear DI. It works because an Infrared light is shone from the opposite side of the ice wall through the ice. When an object such as a finger, hand, or mitten stops the infrared light it reflects the light back to a custom camera built by Peau Productions. The illuminated objects are then converted to points of interaction using an open source program Community Core Vision which outputs TUIO data streams to a Flash program for animation. We like the look and feel of the Fluid Solver flash application. The output from the computer is then projected into the ice and ice diffracts the light into something beautiful. By this method the user can manipulate a visible light screen via an invisible light that only the camera can see..."
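TUIO, the stream format CCV emits, is just a set of messages carried over OSC (Open Sound Control), so it can be decoded with a few lines of byte-level parsing. The sketch below encodes and decodes a single TUIO 1.1 cursor "set" message (session id, position, velocity, acceleration); it is a simplified illustration of the wire format, not a full TUIO client:

```python
import struct

def _read_osc_string(data, pos):
    """Read a null-terminated OSC string padded to a 4-byte boundary."""
    end = data.index(b'\x00', pos)
    text = data[pos:end].decode('ascii')
    pos = end + 1
    pos += (4 - pos % 4) % 4           # skip padding
    return text, pos

def parse_osc_message(data):
    """Decode one raw OSC message into (address, argument list)."""
    address, pos = _read_osc_string(data, 0)
    typetags, pos = _read_osc_string(data, pos)
    args = []
    for tag in typetags.lstrip(','):
        if tag == 'i':                 # 32-bit big-endian int
            args.append(struct.unpack_from('>i', data, pos)[0])
            pos += 4
        elif tag == 'f':               # 32-bit big-endian float
            args.append(struct.unpack_from('>f', data, pos)[0])
            pos += 4
        elif tag == 's':               # padded string
            text, pos = _read_osc_string(data, pos)
            args.append(text)
    return address, args

def build_osc_message(address, args):
    """Encode an address and int/float/str arguments as raw OSC bytes."""
    def pad(raw):
        return raw + b'\x00' * (4 - len(raw) % 4)   # terminate and pad
    tags, body = ',', b''
    for a in args:
        if isinstance(a, int):
            tags += 'i'
            body += struct.pack('>i', a)
        elif isinstance(a, float):
            tags += 'f'
            body += struct.pack('>f', a)
        else:
            tags += 's'
            body += pad(a.encode('ascii'))
    return pad(address.encode('ascii')) + pad(tags.encode('ascii')) + body

# A TUIO 1.1 cursor "set" message: session id, x, y, x/y velocity, acceleration.
raw = build_osc_message('/tuio/2Dcur', ['set', 7, 0.25, 0.5, 0.0, 0.0, 0.0])
addr, fields = parse_osc_message(raw)
```

In the ice-wall setup, the Flash animation on the receiving end simply listens for these messages over UDP and treats each cursor's normalized x/y as a touch point.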
Thanks to Nolan Ramseyer, of PeauProductions, for the link!
PeauProductions Blog: Multitouch and Technology
RELATED
Ubice = Multi-touch On Ice at the Nokia Research Center in Finland (Video + Pic via Albrecht Schmidt)
Art Below Zero
Interactive Information Visualization for the Kinect? Something like Jer Thorp's "Just Landed - 36 Hours" might work nicely if revamped!
I follow the O'Reilly Radar blogs and came across a recent post about an information visualization created by blprnt two years ago using Processing. I think it would have great potential if it were repurposed for the Kinect! In the article, Edd Dumbill discusses the advantages of using Processing to create data and information visualizations.
One example of the power of Processing is "Just Landed - 36 Hours," an information visualization created by Jer Thorp. Jer gathered tweets that included the phrase "just landed," along with location information for each tweet, over a 36-hour period, to create the visualization.
"Just Landed - 36 Hours" is a great 3D visualization of air travel on our planet. I especially like the different views that the application provides. As soon as I watched the video, I thought it would be great if it could be revamped for use on the Kinect! (Leave a comment if you know of anyone working on a project in this area.)
Just Landed - 36 Hours from blprnt on Vimeo.
Information about the video from blprnt's Vimeo site:
"I was discussing H1N1 with a bioinformatics friend of mine last weekend, and we ended up talking about ways that epidemiologists model transmission of disease. I wondered how some of the information that is shared voluntarily on social networks might be used to build useful models of various kinds...I'm also interested in visualizing information that isn't implicitly shared - but instead is inferred or suggested...This piece looks for tweets containing the phrases 'just landed in...' or 'just arrived in...'. Locations from these tweets are located using MetaCarta's Location Finder API. The home location for the traveling users are scraped from their Twitter pages. The system then plots these voyages over time...I'm not entirely sure where this will end up going, but I am reasonably happy with the results so far. Built with Processing (processing.org) You can read more about this project on my blog - blog.blprnt.com"
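The pipeline Jer describes (match a phrase, geolocate the endpoints, plot the voyage) can be sketched in miniature. The code below is a generic illustration, not the actual project, which was built in Processing and used MetaCarta's geocoder: a regex pulls the destination out of a tweet, and the haversine formula gives the great-circle distance an animated arc would cover.

```python
import math
import re

# Pull a destination out of a "just landed in ..." style tweet.
# This pattern is a generic stand-in, not the one the project used.
LANDED = re.compile(r"just (?:landed|arrived) in ([A-Za-z .'-]+)", re.IGNORECASE)

def extract_destination(tweet):
    """Return the place name following 'just landed/arrived in', or None."""
    m = LANDED.search(tweet)
    return m.group(1).strip() if m else None

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance between two lat/lon points (haversine formula)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

dest = extract_destination("Whew - just landed in Vancouver, time for coffee")
```

A geocoder then turns each extracted place name into coordinates, and interpolating along the great circle between origin and destination frame by frame produces the animated arcs you see in the video.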
RELATED
Strata Gems: Write your own visualizations: The Processing language is an easy way to get started with graphics
Edd Dumbill, O'Reilly Radar, 12/3/10