Dec 11, 2010

Very Cute! Department of Defense Acquisition Mini Learning Games

If you are the Department of Defense, how do you make sure that members of your acquisition workforce engage in required learning activities?


Games!  You can access the games via the Defense Acquisition University game portal. Below are some screenshots, descriptions, and links:


Procurement Fraud Indicators

"Investigate potential Procurement Fraud Indicators in this game which allows you to form hypotheses, test your theories, even question individuals who might have something to hide!" -CLC DAU

Homeward Bound
"Join Ratner's friends and help guide him back to the Pentagon; across rivers, highways, and highly guarded walls using your knowledge of Acquisition Strategy and Contract Execution." -CLC DAU


Acquisition Proposition

"How well do you know the Acquisition Lifecycle? Test your knowledge in this fast paced game!" -CLC DAU


About the Defense Acquisition University
"The Defense Acquisition University is the one institution that touches nearly every member of the Defense Acquisition Workforce throughout all career stages. The university provides a full range of basic, intermediate, and advanced certification training, assignment-specific training, applied research, and continuous learning opportunities. The university also fosters professional development through mission assistance, rapid-deployment training on emerging acquisition initiatives, online knowledge-sharing tools, and continuous learning modules." - DAU Website


RELATED
Listen to the DoD Roundtable:  Interview and discussion about the casual learning games, featuring Dr. Alicia Sanchez, Games Czar, Defense Acquisition University
DoD Roundtable Transcript (pdf)
Defense video games perfectly capture excitement of acquisition process
Stephen Losey, Fedline, 12/10/10
DoD launches its own casual games site
milgamer, 12/8/10

Quick Post: Journey, the next game from thatgamecompany (developers of Flower, flOw, and Cloud).

I've been following the work of some of the people behind thatgamecompany since they were graduate students at USC, working on Cloud, an enchanting and relaxing game. They went on to develop Flower and flOw, and are now working on Journey, the next game planned for release:





To view video trailers of other games by thatgamecompany, see the following post:


Games to Lift Stress Away: Flower, flOw (and Cloud), from thatgamecompany

Also visit thatgamecompany's website!

Gesture-based "multitouch" 12 x 7-foot interactive video wall provides tours of I/O Data Centers' facilities

I came across this demonstration of I/O Data Centers' 12-foot by 7-foot interactive video wall, which makes playing around with views of data center modules...interesting! The display is a gesture-based "multi-touch" system. (I'll update this post when I get more information.)



Here is the description from the Datacenter YouTube channel:


"Instead of hauling a 40-foot long modular data center to a trade show, i/o Data Centers is taking a high-tech approach to customer tours of their i/o Anywhere modular data center. The i/o team has created a 12-foot by 7-foot touchscreen video wall to provide interactive tours of the company's facilities. Selecting a "hot spot" pops up a virtual data center, complete with cross sections and product info, following the concept of the touch screens in the sci-fi movie "Minority Report.""


FYI: I/O Data Centers has an application that runs on the Surface.

UPCOMING:
Stay tuned for my upcoming posts! 


News about LM3LABS (Previous post)
Interactive Surveillance: Céline Latulipe (technologist) & Annabel Manning (artist)

Dec 9, 2010

Interested in the OpenNI Initiative? OpenKinect? To learn more, read Josh Blake's Interview of Tamir Berliner of PrimeSense




Josh Blake, Deconstructing the NUI, 12/9/10



Josh Blake recently interviewed Tamir Berliner, one of the founders of PrimeSense. If you haven't heard, Microsoft's Kinect is based on technology licensed from PrimeSense, a company that provides consumer electronics with natural user interaction capabilities. The good news is that the company recently released open-source middleware for natural interaction, along with depth-camera drivers. It will be interesting to see how this plays out in the near future!




In the interview, Tamir discussed a number of topics related to post-WIMP technologies. He also announced the newly created OpenNI, "an industry-led, not-for-profit organization formed to certify compatibility and interoperability of Natural Interaction (NI) devices, applications, and middleware." It is good to see this level of support for the cause!


Here is a quote from the interview that I especially liked:

"I believe that till today the devices we’ve been using, made us learn greatly lot about them before we could use them and gain their value. I’m pretty sure everyone who is reading this has got at least 3 remotes sitting on his living room table, and at least once a week needs to help someone use their computer/media center/phone/etc. It’s time for that to change and it’s up to us, the technologists to make this revolution happen, it’s time for the devices to take the step of understanding what we want and making sure we get that, even without asking if it’s a trivial task as opening a door when we approach, closing the lights when we leave the room, even making sure we have hot water to shower with when we return from work or wake up in the morning, depends on what we normally do." -Tamir


RELATED
Here are a couple of videos from the OpenNI website that demonstrate OpenNI-compliant applications:

OpenNI-compliant real time skeleton tracking by PrimeSense


OpenNI-compliant real time SceneAnalyzer by PrimeSense



FYI: 
Josh Blake is the author of the Deconstructing the NUI blog. Over the past couple of years, he's explored natural user interfaces and interactions through his work on applications designed for Microsoft Surface and Win7 with Windows Presentation Foundation.
About a month ago, Josh organized OpenKinect, an online community that supports collaboration among people interested in exploring ways to use the Kinect with PCs and other devices. An example of this effort is libfreenect, open-source code that includes drivers and libraries for Windows, Linux, and OS X.


The Natural User Interface Revolution
Josh Blake, 1/5/09


Kinect for Xbox 360: The inside story of Microsoft's secret 'Project Natal'  (long, but worth reading) David Rowan, Wired UK, 10/29/10


People of libfreenect

OpenNI User Guide (pdf)

Plug for Computer Science Education Week: Informative series of short video clips, resources, and links to promote understanding of the importance of computer science and related fields

This week is Computer Science Education Week, part of an effort by ACM (Association for Computing Machinery) and CSTA (Computer Science Teachers Association) to promote awareness of the importance of computer science in K-12 education. CSTA developed a series of short videos to share with students as part of this effort. The videos highlight the multitude of ways that computer scientists impact our world. In my opinion, the videos would also be appropriate for sharing with parents, teachers, school counselors, school administrators, and school board members.

Computer Science and Entertainment


Computer Science and the Environment


Computer Science and Communications


Computer Science and Medicine


Computer Science and Empowerment


To dig deeper into this topic, read Running On Empty: The Failure to Teach K-12 Computer Science in the Digital Age (pdf)

RELATED
CSEd: Computer Science in Education Week
Computing in the Core
Computer Science in Education Facebook Page
Anita Borg Institute for Women and Technology
ACM/CSTA's Recommendations
A Model Curriculum for K-12 Computer Science (PDF)
Google: Exploring Computational Thinking
ACM Computing Careers Website

Cross-posted on the Tech Psych blog.

Dec 6, 2010

UPDATE: Demo 2 of the Kinect Theremin, Therenect, by Martin Kaltenbrunner

I recently posted about the Therenect, a gesture-controlled digital theremin for Microsoft's Kinect created by Martin Kaltenbrunner - Therenect: Theremin for the Kinect! (via Martin Kaltenbrunner). It looks like Martin has been busy polishing the application over the past few days, as you can see in the video below:

Therenect - Kinect Theremin - 2nd Demo from Martin Kaltenbrunner on Vimeo.

RELATED
Virtual Theremin Made with Kinect; Real Thereminists Will Make it Useful
Peter Kirn, Create Digital Music, 11/30/10

ICE PAD: Interactive Multitouch Ice Sculpture by Art Below Zero (video)

ICE PAD: Interactive Multitouch Ice Sculpture by Art Below Zero


Here is the information about the interactive sculpture from the Art Below Zero YouTube Channel:

"Created by David Sauer & Max Zuleta for the Lake Forest Tree Lighting Festival. This Ice Crystal Display was the 1st to be created in the USA, Transforming 300 pounds of ice into the equivalent of a giant Ipad touch screen. "People always want to touch our Ice Sculptures, This Interactive Display gave them the perfect reason to get their hands cold." said Max Zuleta owner of Art Below Zero. The public response was amazement and interest in the workings of the touch screen in ice. Our favorite guess was "It must work by sensing body heat!"..."

"...The system is known as Rear Diffused Illumination or Rear DI. It works because an Infrared light is shone from the opposite side of the ice wall through the ice. When an object such as a finger, hand, or mitten stops the infrared light it reflects the light back to a custom camera built by Peau Productions. The illuminated objects are then converted to points of interaction using an open source program Community Core Vision which outputs TUIO data streams to a Flash program for animation. We like the look and feel of the Fluid Solver flash application. The output from the computer is then projected into the ice and ice diffracts the light into something beautiful. By this method the user can manipulate a visible light screen via an invisible light that only the camera can see..."
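The hand-off the description mentions - from camera blobs, to TUIO's normalized coordinates, to the projected image - is easy to sketch. This is a minimal illustration of that coordinate conversion, not Art Below Zero's actual code; the function names and resolutions are hypothetical:

```python
# A tracker like Community Core Vision reports touch "blobs" in camera
# pixels; TUIO cursors carry normalized 0..1 coordinates, which a client
# (here, the projected animation) maps into its own display space.

def blob_to_tuio(px, py, cam_w, cam_h):
    """Normalize a blob centroid (camera pixels) to the TUIO 0..1 range."""
    return (px / cam_w, py / cam_h)

def tuio_to_display(nx, ny, disp_w, disp_h):
    """Map a normalized TUIO cursor into display pixels."""
    return (nx * disp_w, ny * disp_h)

# A touch seen at (320, 240) by a 640x480 IR camera...
nx, ny = blob_to_tuio(320, 240, 640, 480)      # -> (0.5, 0.5)
# ...lands at the center of a 1280x720 projected image.
x, y = tuio_to_display(nx, ny, 1280, 720)      # -> (640.0, 360.0)
```

Normalizing in the middle is what lets the camera and projector run at different resolutions - and lets any TUIO-speaking client consume the same touch stream.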



Thanks to Nolan Ramseyer, of PeauProductions, for the link!
PeauProductions Blog: Multitouch and Technology


RELATED
Ubice = Multi-touch On Ice at the Nokia Research Center in Finland (Video + Pic via Albrecht Schmidt)
Art Below Zero

Interactive Information Visualization for the Kinect? Something like Jer Thorp's "Just Landed - 36 Hours" might work nicely if revamped!

I follow the O'Reilly Radar blogs and came across a recent post about an information visualization that blprnt created two years ago using Processing. I think it would have great potential if it were repurposed for use on the Kinect! In the article, Edd Dumbill discusses the advantages of using Processing to create data and information visualizations.


One example of the power of Processing is the information visualization "Just Landed - 36 Hours," created by Jer Thorp. To create it, Jer gathered 36 hours' worth of tweets containing the phrase "just landed," along with location information for each tweet.


"Just Landed - 36 Hours" is a great 3D visualization of air travel on our planet. I especially like the different views that the application provides. As soon as I watched the Just Landed video, I thought it would be great if it could be revamped for use on the Kinect! (Leave a comment if you know of anyone working on a project in this area.)


Just Landed - 36 Hours from blprnt on Vimeo.


Information about the video from blprnt's Vimeo site:


"I was discussing H1N1 with a bioinformatics friend of mine last weekend, and we ended up talking about ways that epidemiologists model transmission of disease. I wondered how some of the information that is shared voluntarily on social networks might be used to build useful models of various kinds...I'm also interested in visualizing information that isn't implicitly shared - but instead is inferred or suggested...This piece looks for tweets containing the phrases 'just landed in...' or 'just arrived in...'. Locations from these tweets are located using MetaCarta's Location Finder API. The home location for the traveling users are scraped from their Twitter pages. The system then plots these voyages over time...I'm not entirely sure where this will end up going, but I am reasonably happy with the results so far.   Built with Processing (processing.org) You can read more about this project on my blog - blog.blprnt.com"
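The arcs at the heart of a piece like this come from great-circle interpolation: each voyage is drawn along the shortest path over the sphere between origin and destination. Here is a small sketch of that idea (Jer's piece was built in Processing; this is an illustrative Python version, with hypothetical coordinates):

```python
import math

def to_vec(lat, lon):
    """Convert latitude/longitude (degrees) to a 3D unit vector."""
    la, lo = math.radians(lat), math.radians(lon)
    return (math.cos(la) * math.cos(lo), math.cos(la) * math.sin(lo), math.sin(la))

def to_latlon(v):
    """Convert a 3D unit vector back to latitude/longitude (degrees)."""
    x, y, z = v
    return (math.degrees(math.asin(z)), math.degrees(math.atan2(y, x)))

def slerp(a, b, t):
    """Spherical linear interpolation between two unit vectors."""
    dot = max(-1.0, min(1.0, sum(i * j for i, j in zip(a, b))))
    omega = math.acos(dot)
    if omega < 1e-9:          # coincident points: nothing to interpolate
        return a
    s = math.sin(omega)
    k0, k1 = math.sin((1 - t) * omega) / s, math.sin(t * omega) / s
    return tuple(k0 * i + k1 * j for i, j in zip(a, b))

def route(lat1, lon1, lat2, lon2, steps=32):
    """Sample the great-circle arc between two points as lat/lon pairs."""
    a, b = to_vec(lat1, lon1), to_vec(lat2, lon2)
    return [to_latlon(slerp(a, b, i / steps)) for i in range(steps + 1)]

# New York (40.7, -74.0) to London (51.5, -0.1): the sampled path
# bows north of both endpoints, just as transatlantic flights do.
path = route(40.7, -74.0, 51.5, -0.1)
```

Feeding each sampled point through a map projection (or plotting it on a 3D globe, as Just Landed does) yields the familiar curved flight paths.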


RELATED
Strata Gems:  Write your own visualizations:  The Processing language is an easy way to get started with graphics
Edd Dumbill, O'Reilly Radar, 12/3/10

Air Presenter Plus, for the Kinect, for Presentations, developed by Evoluce and So touch

As soon as Kinect was released by Microsoft, there was a flurry of app development. Evoluce and So Touch partnered to create a presentation application for the Kinect that could be used in work settings. Take a look!


Information about Air Presenter Plus, from So touch's YouTube channel:

"So touch, the leading creative software company for new digital technologies, in partnership with Evoluce, the leading provider of advanced multi-touch screen technologies, present: So touch Air Presenter for Kinect. The world's first presentation software optimized for Kinect.

Turn your corporate presentations, welcome areas, trade show booths and point of sales into mind boggling experiences, controlling your presentation with multi-touch gestures leveraging So touch Air Presenter gestures software and Evoluce Kinect Windows 7 software.

Integrate your usual PDF, Power point, JPG and video materials into So touch multi-touch minority report's style interface and control it with gestures in the air.

So touch Air Presenter is delivered with a very graphic player, featuring a multi-touch zoom mode and an integrated video player as well as a very easy to use content manager.

So touch Air Presenter content, sourced locally or from the network, can be played on multiple screens at the same time. So touch Air Presenter content manager can deliver customize or generic content to each player.

So touch Air Presenter packaged with Evoluce Kinect Windows 7 software will be released soon. So touch Air Presenter is already available for TUIO based gestures devices. To know more and download a free trial version, visit http://www.so-touch.com/air-presenter"




So touch
Evoluce

Dec 5, 2010

Video: DaVinci Surface Physics Illustrator Interface on Xbox Kinect, with gesture interaction, by Razorfish


DaVinci prototype on Xbox Kinect from Razorfish - Emerging Experiences on Vimeo.

RELATED
Razorfish ports DaVinci interface to Kinect, makes physics cool (video)
Tim Stevens, Engadget, 12/5/10
Razorfish
(I love this website.)

3D Multimedia Holiday Projection on Buildings in Amsterdam's City Center: Enjoy!



Info from Muse Amsterdam's YouTube channel, musedigital:


"On 22, 23 & 24 November 2010, H&M brought their flagship store in Amsterdam to live with a 3D projection mapping on the historic building. For over 3 minutes, guests and a gathered crowd enjoyed a surreal fairytale of light and magical effects. A red ribbon, wrapped around the building, untangles and transformed the building into a colorful dollhouse where nothing is what it seems."


Agency: Muse Amsterdam (www.muse.nl)


Production: MrBeam, Mickey Did It, BeamSystems.

(via Ambient Content)

"TV Everywhere": Google acquires Widevine to support adaptive streaming video, including DRM content.

This evening I followed a tweet to an article written by Ben Parr, of Mashable:    

Google Acquires Some Powerful Video-Streaming and DRM Technology. According to Google, Widevine's adaptive video streaming technology monitors and adapts to bandwidth changes as it delivers content. This technology makes accessing high-quality video content across the web more seamless, consistent, and convenient across platforms and locations; this approach is known as "TV Everywhere".
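In general terms, adaptive streaming works by encoding the same video at several bitrates and letting the player pick a rung before each segment based on measured bandwidth. This toy sketch illustrates the general technique, not Widevine's proprietary implementation; the bitrate ladder and safety margin are made up:

```python
# Hypothetical encoding ladder: the same video at five bitrates (kbps).
LADDER_KBPS = [300, 700, 1500, 3000, 6000]

def pick_bitrate(measured_kbps, safety=0.8):
    """Pick the highest rung the measured bandwidth can sustain,
    keeping a safety margin so playback doesn't stall."""
    budget = measured_kbps * safety
    viable = [r for r in LADDER_KBPS if r <= budget]
    return viable[-1] if viable else LADDER_KBPS[0]

print(pick_bitrate(4000))   # plenty of bandwidth -> the 3000 kbps rung
print(pick_bitrate(500))    # congested link -> drop to the 300 kbps rung
```

Re-running the decision for every few seconds of video is what makes the stream "adapt" as network conditions change.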

A range of technologies that Widevine has developed in recent years look like they will benefit Google; the company's intellectual property portfolio covers a lot of ground. The patent claims, as outlined on the company's website, include realtime piracy detection and response, fingerprinting, forensic watermarking, media tracking, evolving detectors (monitoring and response to piracy), security renewals, QoS, cross-domain content security, secure processor technology, trusted computing technology, grooming/transcoding, DCAS (Downloadable Conditional Access System), device certificates, application-level encryption, adaptive streaming, and usage controls.

Google will have a wide reach with the acquisition of Widevine, as it plans to continue the company's partnerships with the "entire ecosystem" of businesses related to digital video content. As more people access web-based video from smartphones and related devices, and discover they can watch video whenever they want, the demand for Google's cloud computing support will grow, along with the need for additional data centers and support to handle the demand for multimedia content and related software applications.

The acquisition of Widevine might give Google a great deal of power over the next generation of cable/airwaves. If so, and if done well, this will be a boon to advertisers. As it is, viewers must wait through an ad for 15 seconds or so before viewing a short video clip on websites such as the Wall Street Journal's. For some, this is just a minor annoyance, and certainly not as bad as garish banner ads and pop-ups. Marketers will have additional opportunities to reach potential customers through product placement/embedded ads as people access longer-playing videos and movies on the go. The technology exists to create customized embedded ads in videos based on data collected about the viewer, which is right up Google's alley.

Google's Data (on us) + Widevine = ?

FYI:
A recent post on the Google Blog explains the acquisition of Widevine in detail: On demand is in demand: we've agreed to acquire Widevine (12/03/10). According to the website, "Widevine is a privately held corporation headquartered in Seattle, WA, funded by Constellation Ventures, Cisco Systems, Charter Ventures, Dai Nippon Printing Co., Ltd. (DNP), Pacesetter Capital Group, The Phoenix Partners, TELUS (NYSE: TU) and VantagePoint Venture Partners."


RELATED
5 Reasons Google Bought Widevine -Ryan Lawler, GigaOm, 12/3/10
Google to acquire Widevine - Heaven sent or a Devil's Deal? (Includes list of Widevine's partners/customers) - Paul Johnson, AppMarketTV, 12/4/10
Google acquires Widevine - Colin Mann, Advanced Television, 12/4/10
Related Posts from Advanced Television

Are we moving to cloud-based DRM? Take a look at the content & links below:
Jacqui Cheng, Ars Technica  (Interesting- 122 comments)


SOMEWHAT RELATED

The Battle For Your Digital Living Room: Apple, Google and others are vying hard for this valuable real estate -Knowledge @Wharton, Forbes, 9/17/10

Why is this important to me?

I'm working on some ideas for web-based interactive educational videos and other interactive multimedia applications designed to be accessed across various screens and devices. Technology is changing rapidly, and I need to understand it better to make good decisions going forward. I'll return to this topic in future posts as I research it further.


Thanks to Pawel Solyga (Solydzajs) for the tweet that sent me down this rabbit hole ; )  
FYI: Pawel is a software engineer, focused on next gen mobile apps.  He is the VP of Technology at Numote. He also is a NUI Group co-founder.  

Dec 3, 2010

Workshop on Mobile and Personal Projection: Call for Papers, CHI 2011, May 8, 2011, Vancouver, CA

I can't wait to attend CHI 2011! There will be lots to learn about emerging technologies and interactions at the conference. Here's another call for papers/participation for a workshop session at the conference, via Markus Löchtefeld:



CALL FOR PAPERS: MP²: Workshop on Mobile and Personal Projection, a workshop to be held at CHI 2011, Vancouver, CA. May 8,  2011


Objectives

The workshop will provide an open forum to share information, results, and ideas on current research on mobile and personal projection. The participants will explain, demonstrate and discuss their current research with others in order to receive feedback, criticism and ideas for future work. Concrete selected questions, ideas and concepts will be addressed in various group sessions in which the participants will work on topics such as a design space for mobile and personal projection; user interface, interaction design and application sketches; paper prototypes; or ad-hoc studies using the provided mobile and personal projector hardware. The results of these group sessions will be discussed with all workshop participants. Finally, we will discuss future research areas, challenges and the potential for mobile and personal projection in order to lay the foundations for a research agenda in this field.

Workshop Topics

The workshop looks for contributions on the following and related topics:
  • Applications and interaction techniques for mobile and wearable projection.
  • Personal projection in augmented reality.
  • Interaction with projected interfaces.
  • Projector phones and wearable projectors.
  • Multi-user interactions and applications.
  • Multimodal and personalized (mobile) interfaces.
  • New application areas of mobile projection.
  • Social implications when interacting with projected interfaces.
  • Artistic and unusual ways to utilize mobile projection.
  • New forms of interaction with the environment.

Research Questions

Mobile and personal projection is at a relatively early stage of research. Reflecting this state, the workshop specifically addresses the following fundamental research questions:
  • What are the unique properties and affordances of mobile and personal projection? What are suitable interaction metaphors?
  • What are core application domains that benefit the most from the usage of mobile and personal projection? What are the application contexts and usage requirements that support mobile and personal projection?
  • What are suitable interaction techniques for mobile and personal projection? How can gestures be incorporated? How should visualizations be structured? How can the projected virtual and real images of objects coexist? What is the role of augmented and mixed reality?
  • What is the social impact of mobile and personal projection technologies? How can users manage privacy when using mobile and personal projectors? How does public behavior change with the introduction of mobile and personal projection technologies?
  • How can spontaneous co-located collaboration be supported by mobile and personal projection technologies? How can designs support the exchange of media items between mobile projector phones?
  • What are suitable strategies and methodologies for evaluating mobile and personal projection interfaces? What aspects impact the user experience?

Submission

We ask for papers that address one or more of the research questions mentioned above, or that describe findings that relate to these research questions based on systems the authors have built. We welcome position papers (2 pages) as well as papers reporting novel concepts, (first) prototypes, studies, applications or interaction concepts (up to 4 pages). All submissions should be prepared according to the standard HCI Archive format.
Each paper will receive at least two reviews. All accepted papers will be made available online and will be published in the Sun SITE Central Europe (CEUR) Workshop Proceedings.
INFORMATION:

Mobile and personal projection interfaces are no longer fiction and have received considerable attention recently. Integrated pico-projectors in mobile and wearable devices could make mobile projection ubiquitous within the next few years. Walls, desks, floors, ceilings, t-shirts or palms will act as projection surfaces for these kinds of new devices.
These technological developments offer new opportunities and challenges for novel forms of interaction. Virtual displays can extend beyond physical device boundaries and augment existing objects. There are also new opportunities for spontaneous multi-user interaction. However, issues such as lighting conditions, privacy, and social acceptability also come into play.
We will bring together researchers and practitioners who are concerned with design, development, and implementation of new applications and services using personal mobile and wearable projectors in their user interfaces.

Important Dates

  • January 10, 2011 - Submission Deadline
  • February 4, 2011 - Acceptance Notification
  • March 11, 2011 - Revised Manuscript Due
  • May 8, 2011 - Workshop Date
Organizers

Otto-von-Guericke-Universität Magdeburg(Germany)
Nokia Research Center,Tampere (Finland)
Swansea University (UK)
DFKI (Germany)
University of Munich (Germany)
University of Duisburg Essen (Germany) & Lancaster University (UK)