Info from the DepthJS website: "Navigating the web is only one application of the framework we built - that is, we envision all sorts of applications that run in the browser, from games to specific utilities for specific sites. The great part is that now web developers who specialize in Javascript can work with the Kinect without having to learn any special languages or code. We believe this will allow a new set of interactions beyond what we first developed."
Comment: I went to Best Buy today and almost bought a Kinect. Unfortunately, the demo hadn't come in yet, so I decided to wait until I could give it a try. I'm curious to experiment with what it can do.
"Hi, Google. My name is Johnny" Johnny Chung Lee announced on his Procrastineering blog that he's accepted a position at Google as a "Rapid Evaluator". I'm not sure what he will be doing in this position, but his title is intriguing!
Here are some of my previous posts devoted to the work of Johnny Chung Lee:
Josh Blake recently interviewed Tamir Berliner, one of the founders of PrimeSense. If you haven't heard, Microsoft's Kinect is based on technology licensed from PrimeSense, a company that provides consumer electronics with natural user interaction capabilities. The good news is that the company recently released open-source middleware for natural interaction, along with depth-camera drivers. It will be interesting to see how this plays out in the near future!
In the interview, Tamir discussed a number of topics related to post-WIMP technologies. He also announced the newly created OpenNI, "an industry-led, not-for-profit organization formed to certify compatibility and interoperability of Natural Interaction (NI) devices, applications, and middleware." It is good to see this level of support for the cause!
Here is a quote from the interview that I especially liked:
"I believe that till today the devices we’ve been using, made us learn greatly lot about them before we could use them and gain their value. I’m pretty sure everyone who is reading this has got at least 3 remotes sitting on his living room table, and at least once a week needs to help someone use their computer/media center/phone/etc. It’s time for that to change and it’s up to us, the technologists to make this revolution happen, it’s time for the devices to take the step of understanding what we want and making sure we get that, even without asking if it’s a trivial task as opening a door when we approach, closing the lights when we leave the room, even making sure we have hot water to shower with when we return from work or wake up in the morning, depends on what we normally do." -Tamir
RELATED Here are a couple of videos from the OpenNI website that demonstrate OpenNI-compliant applications:
OpenNI-compliant real-time skeleton tracking by PrimeSense
OpenNI-compliant real-time SceneAnalyzer by PrimeSense
FYI:
Josh Blake is the author of the Deconstructing the NUI blog. Over the past couple of years, he's explored natural user interfaces and interactions through his work on applications designed for Microsoft Surface and Win7 with Windows Presentation Foundation.
About a month ago, Josh organized OpenKinect, an on-line community to support collaboration among people interested in exploring ways to use Kinect with PCs and other devices. An example of this effort is the open source code, libfreenect, which includes drivers and libraries for Windows, Linux, and OS X.
I recently posted about the Therenect, a gesture-controlled digital theremin for Microsoft's Kinect created by Martin Kaltenbrunner - Therenect: Theremin for the Kinect! (via Martin Kaltenbrunner). It looks like Martin has been busy polishing up the application over the past few days, as you can see from the video below:
I follow the O'Reilly Radar blogs and came across a recent post about an information visualization created by blprnt two years ago using Processing. I think it would have great potential if it were repurposed for use with the Kinect! In the article, Edd Dumbill discusses the advantages of using Processing to create data and information visualizations.
One example of the power of Processing is the information visualization "Just Landed - 36 Hours," created by Jer Thorp. Jer gathered tweets from Twitter that included the phrase "just landed," along with location information for each tweet, over a 36-hour period, to create the visualization.
Just Landed - 36 Hours is a great 3D visualization of air travel on our planet. I especially like the different views that the application provides. As soon as I watched the Just Landed video, I thought it would be great if it could be revamped for use with the Kinect! (Leave a comment if you know of anyone working on a project in this area.)
Information about the video from blprnt's Vimeo site:
"I was discussing H1N1 with a bioinformatics friend of mine last weekend, and we ended up talking about ways that epidemiologists model transmission of disease. I wondered how some of the information that is shared voluntarily on social networks might be used to build useful models of various kinds...I'm also interested in visualizing information that isn't implicitly shared - but instead is inferred or suggested...This piece looks for tweets containing the phrases 'just landed in...' or 'just arrived in...'. Locations from these tweets are located using MetaCarta's Location Finder API. The home location for the traveling users are scraped from their Twitter pages. The system then plots these voyages over time...I'm not entirely sure where this will end up going, but I am reasonably happy with the results so far. Built with Processing (processing.org) You can read more about this project on my blog - blog.blprnt.com"
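For the tech-curious: the heart of a visualization like this is plotting arcs between pairs of geocoded points. Here's a minimal Python sketch (not Jer's actual code, which was built in Processing, and with made-up coordinates) of the great-circle distance calculation such a piece relies on:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    R = 6371.0  # mean Earth radius in km
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

# A "just landed in London" tweet from a user whose home location is New York:
nyc = (40.71, -74.01)
london = (51.51, -0.13)
print(haversine_km(*nyc, *london))  # roughly 5,570 km
```

Once you have the arc between home and destination, animating a point along it over the 36-hour window gives you the voyage plot.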
As soon as Kinect was released by Microsoft, there was a flurry of app development. Evoluce and So Touch partnered to create a presentation application for the Kinect that could be used in work settings. Take a look!
Information about Air Presenter Plus, from So touch's YouTube channel:
"So touch, the leading creative software company for new digital technologies, in partnership with Evoluce, the leading provider of advanced multi-touch screen technologies, present: So touch Air Presenter for Kinect. The world's first presentation software optimized for Kinect.
Turn your corporate presentations, welcome areas, trade show booths and point of sales into mind boggling experiences, controlling your presentation with multi-touch gestures leveraging So touch Air Presenter gestures software and Evoluce Kinect Windows 7 software.
Integrate your usual PDF, Power point, JPG and video materials into So touch multi-touch minority report's style interface and control it with gestures in the air.
So touch Air Presenter is delivered with a very graphic player, featuring a multi-touch zoom mode and an integrated video player as well as a very easy to use content manager.
So touch Air Presenter content, sourced locally or from the network, can be played on multiple screens at the same time. So touch Air Presenter content manager can deliver customize or generic content to each player.
So touch Air Presenter packaged with Evoluce Kinect Windows 7 software will be released soon. So touch Air Presenter is already available for TUIO based gestures devices. To know more and download a free trial version, visit http://www.so-touch.com/air-presenter"
Here are a couple of new natural user interface videos. The first video, by Evoluce, demonstrates gesture interaction/navigation in Windows 7 applications supported by Kinect. The second video, by Immersive Labs, shows multi-touch product browsing interaction on a large display.
Martin Kaltenbrunner's description of TuioKinect: "TuioKinect tracks simple hand gestures using the Kinect controller and sends control data based on the TUIO protocol. This allows the rapid creation of gesture enabled applications with any platform or environment that supports TUIO tuio.org/ You can download the application from: code.google.com/p/tuiokinect/ Music: Jabon Jabon by El Club de los Astronautas (Institut Fatima)"
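For the tech-curious, here's a rough idea of what travels over the wire. This is a hand-rolled Python sketch of a single TUIO "/tuio/2Dcur" "set" message for one tracked cursor; a real TUIO sender like TuioKinect wraps these in OSC bundles together with "alive" and "fseq" messages, which this sketch omits:

```python
import struct

def osc_string(s: str) -> bytes:
    """OSC strings are null-terminated and padded to a multiple of 4 bytes."""
    b = s.encode() + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def tuio_2dcur_set(session_id, x, y, vx=0.0, vy=0.0, accel=0.0):
    """Encode a TUIO '/tuio/2Dcur set' OSC message for one tracked cursor.
    Positions are normalized to [0, 1] per the TUIO spec."""
    msg = osc_string("/tuio/2Dcur")
    msg += osc_string(",sifffff")  # typetags: string, int32, five float32s
    msg += osc_string("set")
    msg += struct.pack(">i", session_id)           # big-endian int32
    msg += struct.pack(">fffff", x, y, vx, vy, accel)
    return msg

packet = tuio_2dcur_set(1, 0.5, 0.5)
# A sender would transmit this over UDP to a TUIO client on the default port 3333:
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(packet, ("127.0.0.1", 3333))
```

Because TUIO rides on plain OSC/UDP, any environment with an OSC library can consume these messages, which is exactly why it enables "rapid creation of gesture enabled applications with any platform."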
I've played around with Tuio and OpenFrameworks, but it has been a while. I can't wait until I have time to dig into this with a Kinect. I think this has great potential for supporting learning and communication among students with special needs.
"The Therenect is a virtual Theremin for the Kinect controller. It defines two virtual antenna points, which allow controlling the pitch and volume of a simple oscillator. The distance to these points can be controlled by freely moving the hand in three dimensions or by reshaping the hand, which allows gestures that are quite similar to playing an actual Theremin."
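To sketch the idea in code: the mapping Martin describes boils down to measuring the hand's distance to two fixed points and turning those distances into pitch and volume. The Therenect itself is built in C++ with openFrameworks; this is only an illustrative Python sketch with made-up antenna positions and ranges:

```python
from math import sqrt

# Hypothetical virtual antenna positions in normalized camera space:
PITCH_ANTENNA = (0.8, 0.5, 0.5)
VOLUME_ANTENNA = (0.2, 0.5, 0.5)

def dist(a, b):
    return sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def theremin_params(hand, f_min=110.0, f_max=1760.0, reach=1.0):
    """Map one 3D hand position to (frequency_hz, amplitude), theremin-style:
    nearer to the pitch antenna -> higher pitch; nearer to the volume
    antenna -> quieter, as on a real theremin."""
    p = min(dist(hand, PITCH_ANTENNA) / reach, 1.0)
    v = min(dist(hand, VOLUME_ANTENNA) / reach, 1.0)
    freq = f_max - p * (f_max - f_min)  # closer = higher pitch
    amp = v                             # closer = quieter
    return freq, amp
```

Feeding the output into a simple oscillator each frame gives the continuous pitch/volume control the description mentions.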
"This musical instrument has been developed by Martin Kaltenbrunner at the Interface Culture Lab at the University of Art and Industrial Design in Linz, Austria. The software has been developed using the Open Frameworks and OpenKinect libraries."
I was watching a DVR'd episode of NCIS tonight with my husband and noticed that a Cisco TelePresence video conferencing system was a player in the story line, as was a Cisco Cius touch-screen tablet. It seems that on TV, laptops, desktops, and old-fashioned cell phones are history. "Emerging" technologies are woven into the story lines of more and more television episodes, including CSI, NCIS, Bones, Grey's Anatomy, and others.
The original intention of this post was to discuss the concept of emerging technologies and product placement/embedded advertising in television programs and movies, and to share a few interesting examples related to this topic. I quickly realized that there is much more to this story. Why? More people access video and interactive multimedia content on the go, using laptops, smart phones, iPads, and similar tablets. New televisions, such as Sony Internet TV, are internet-enabled, and many people already access web content on their televisions through devices such as game consoles or Apple TV.
It is a marketer's dream.
Unfortunately, we might not have ways to "opt out" of all of the indirect (and direct) advertising that will come our way as we access video and related content across multiple platforms. It won't be as easy as blocking pop-up ads or fast-forwarding the DVR!
Below are some examples of ways some emerging technologies are "placed" in television/film, grouped by company. In the "Apple" section, I've included video of Steve Jobs introducing iAd. Near the end of this post, I've included links that relate to legal and ethical issues regarding product placement and embedded advertising.
Food for thought. I'm still digesting what I've found!
Apple iAd - Mobile advertising that delivers interaction and emotion, 1 billion ad impressions a day, within your app. Apple's iAd isn't really product placement; it is about ads embedded in your mobile devices. "Who wants to get yanked out of their app?" -Steve Jobs
"iAd is a breakthrough mobile advertising platform from Apple. With it, apps can feature rich media ads that combine the emotion of TV with the interactivity of the web. For developers, it means a new, easy-to-implement source of revenue. For advertisers, it creates a new media outlet that offers consumers highly targeted information." -Apple
As I searched for more information about product placement and embedded advertising, I came across a few posts/websites that suggest that in some circles, this is a hot/controversial topic:
Joseph Lewczak and Ann DiGiovanni, WLF Legal Backgrounder, 4/9/10
FIT Media FAQs (FIT= Fairness and Integrity in Telecommunications Media)
"FIT Media is a non-partisan coalition of health, media and child advocacy organizations and professionals supporting transparency and child protection in embedded TV advertising."
This is an interesting website - FIT Media covers topics such as "Advernews," "Embedded Propaganda," and "Deceptive Advertising," as well as ways that embedded advertising might be harmful.
Week Ahead: FCC meeting, Do Not Track hearing - Cecilia Kang, Washington Post, 11/29/10
What is Apple's New Privacy Policy? "Amidst all the glitz of releasing a new mobile operating system and iPhone, Apple quietly updated their privacy policy. Why?"
Michael Kassner, Tech Republic, 6/28/10
FYI: If you have an iPhone running iOS 4 and wish to opt out of iAd, you can do so at http://oo.apple.com
Xerox Patent Filing Make Product Placement Addressable The Media Buyer, 1/12/10 "The patent describes the system (via GoRumors) as having the ability to alter content within a program based on the viewer. For example, if a character on a show mentions Macy’s, that content could be shown to general viewers. But that small portion of the broadcast could be “marked,” and the content could be changed so that the character instead says the name of sporting goods store Modell’s. That portion of the broadcast would be served to viewers who are into sports. Similarly, if the storefront was shown during the program, general audiences would see the Macy’s store, while sports fans would see the Modell’s store."
I came across the following video in a post on the Creative Applications blog about Theo Watson and Emily Gobeille's recent work with OpenFrameworks, Kinect, and an interactive puppet prototype:
You can see from the video of their Funky Forest installation (Cinekid, 2007) that they create engaging interactive applications:
Funky Forest - Interactive Ecosystem from Theo Watson on Vimeo. (Information from Theo Watson's Vimeo site) "Funky Forest is a wild and crazy ecosystem where children manage the resources to influence the environment around them. By using their bodies or pillow "rocks" and "logs", water flowing from the digital stream on the floor can be dammed and diverted to the forest to make different parts grow. If a tree does not receive enough water it withers away but by pressing their bodies into the forest children create new trees based on their shape and character. As children explore and play they discover that the environment is inhabited by a variety of sonic life forms and creatures who appear and disappear depending on the health of the forest. As the seasons change the creatures also go through a metamorphosis." Credits: Theodore Watson Emily Gobeille Project Page: muonics.net/site_docs/work.php?id=41 zanyparade.com/v8/projects.php?id=12
Note: A version of Funky Forest, Moomah Edition, is permanently installed in New York City at the Moomah Children's Cafe. This version includes interactive environments related to the four seasons of the year, including an interactive particle system with falling leaves and snow.
FOR THE TECH-CURIOUS Libfreenect for OS X, by Theodore Watson
XBox Kinect running on OS X ( with source code ) from Theo Watson on Vimeo. The following information and links were taken from the Vimeo description of the above video: "This is a port/adaptation of Hector Martin's libfreenect for OS X made by Theodore Watson. Hector Martin's Libfreenect project page is here: git.marcansoft.com/?p=libfreenect.git Most of the code is unchanged but there are some changes to libusb which was needed to get it running (and a few extra libusb commands) as well as some tweaking of the transfer sizes. It should be self contained and you shouldn't need to install libusb (the app links it directly ). Grab the Source Code: UPDATE: (Tested on OS X 10.6.3 - 32bit now and with fixes) theo.tw/deliver/kinect/001-libfreenect-modded-osx-updated.zip ofxKinect for OF users - thanks Dan! openframeworks.cc/forum/viewtopic.php?p=24948#p24948 Tips: - Try both usb ports. - Try not to have too many other devices plugged in (or any)."
(Check the Vimeo website to see if there are updates)
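For the tech-curious: libfreenect hands you raw 11-bit disparity values, not meters. A commonly used empirical conversion from the OpenKinect community looks roughly like this; note that the constants are community calibration values, not official figures, so treat this as an approximation:

```python
def raw_depth_to_meters(raw: int) -> float:
    """Approximate conversion of an 11-bit Kinect raw disparity value to
    meters, using an empirical linear-inverse fit circulated in the
    OpenKinect community. Values at or above 2047 mark pixels where the
    sensor got no depth reading (shadows, reflective surfaces, etc.)."""
    if raw >= 2047:
        return float("inf")  # no valid reading at this pixel
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)
```

Applying this per pixel to a 640x480 depth frame gives you a metric depth map you can feed into tracking or visualization code.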
MULTI-TOUCH WITH HACKED KINECT Here is NUI-Group member Florian Echtler's proof-of-concept HD video of multitouch-like interaction using a hacked Kinect camera. The application runs on Ubuntu Linux and was written using libfreenect, by marcan42, and Florian's own creation, libTISCH.
Florian decided to use picture-browsing interaction to demonstrate the proof of concept, so "everybody can focus on more interesting things :-)"
(I have SO many ideas for this! I'll throw a few out there in an upcoming post....maybe someone can run with them!)
FOR THE TECH-CURIOUS: TISCH stands for Tangible Interactive Surfaces for Collaboration between Humans, and is a cross-platform, cross-device multi-touch development framework. You can download the source package for Windows, MacOS X, and Linux from the TISCH Sourceforge website. The Ubuntu Lucid/Karmic version has "superquick installation via PPA" - the instructions can be found on the TISCH Sourceforge website.
For techies (and the tech-curious) who like technologies that support collaboration and multi-touch interaction, this is great news!
Florian Echtler announced the first stable release of libTISCH, a multi-touch development framework, which can be found on Sourceforge. TISCH stands for Tangible Interactive Surfaces for Collaboration between Humans. libTISCH, a C++ software framework, is included in this project. It provides a means for creating GUIs based on multi-touch and/or tangible input devices.
Here is information from the libTISCH announcement:
Highlights of this release are, among others, the following features:
- ready-to-use multitouch widgets based on OpenGL
- reconfigurable, hardware-independent gesture recognition engine
- support for widely used, pre-defined gestures (move, scale, rotate..) as well as custom-defined gestures
- hardware drivers for FTIR, DI, Wiimote, DiamondTouch..
- TUIO converters: source and sink
- cross-platform: Linux, MacOS X, Windows (32 and 64 bit)
- cross-language: C++ with bindings for C#, Java, Python
libTISCH has a lot to offer the multitouch developer. For example, the textured widgets enable rapid development of applications for many kinds of multi-touch or tangible interfaces. The separate gesture recognition engine allows the translation of a wide range of highly configurable gestures into pre-defined or custom events, which are then acted on by the widgets. While the lower layers of libTISCH provide functionality similar to tbeta, touche, etc. (you can interface existing TUIO-based software with libTISCH in both directions), it goes far beyond them.
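To give a feel for what a gesture recognition engine like this computes under the hood, here's an illustrative Python sketch (not libTISCH's actual API) deriving the "scale" and "rotate" values that a widget would receive from two tracked contact points:

```python
from math import atan2, degrees, hypot

def pinch_transform(p1_old, p2_old, p1_new, p2_new):
    """Derive scale and rotation (degrees, counterclockwise) from two tracked
    touch points, the way a multitouch gesture engine turns raw contacts
    into 'scale'/'rotate' events. Each point is an (x, y) tuple."""
    def span(a, b):
        return (b[0] - a[0], b[1] - a[1])
    old = span(p1_old, p2_old)  # vector between the two old contacts
    new = span(p1_new, p2_new)  # vector between the two new contacts
    scale = hypot(*new) / hypot(*old)
    rotation = degrees(atan2(new[1], new[0]) - atan2(old[1], old[0]))
    return scale, rotation

# Fingers move apart while the pair rotates a quarter turn:
s, r = pinch_transform((0, 0), (1, 0), (0, 0), (0, 2))
```

A real engine layers contact tracking, event dispatch, and configurable gesture definitions on top of this kind of geometry, which is what makes the "reconfigurable, hardware-independent" design notable.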