I've been intrigued by graphene's many possibilities for the future. It is a flexible, versatile material that harnesses nanotechnology to create flexible touch screens, "wearables", efficient energy-storage systems, and more. The following videos provide just two examples of graphene's potential. The details? If you are curious, follow the links at the end of this post.
Here is a short clip of a demo of a graphene touch screen on a Samsung Galaxy:
Leap Motion and Google Earth Experiment: Cute Doggie Photo-globe Mashup
I finally experimented with my Leap Motion controller and Google Earth, using a mashup I created a few years ago with pictures of cute dogs from my Flickr photo-stream. In the video below, you can see that my gesture navigation skills still need some practice!
I should have watched the following video of Leap Motion in action with Google Earth before trying this experiment at home : )
I am pretty sure that developers will be able to tweak Leap Motion + Google Earth interaction in the near future. I'd like to adapt it for use with kids as well as adults who have mild motor impairments.
Cute Doggies Photo-Globe Mash-up using Google Earth and a Flickr Set (How-to)
If you'd like to make your very own photo-globe using Google Earth and Flickr photos, here are the directions, ported and updated from a previous post:
This photo is a screen shot of photos of just about every dog I know, and some that happened to cross my path. In this post, I'll share some information about how to create a photo-globe in Google Earth. The first step is to make sure you have lots of pictures related to your theme uploaded to a site such as Flickr. (You can also create a photo-globe using pictures from your computer's hard drive.) To get the pictures into Google Earth, I used the Image Overlay feature, and in the "link" textbox, I entered the image URL for each picture that I'd previously loaded as a set in Flickr.
To prepare for this, go to the "View" tab in the upper left-hand section of your screen and make sure that "Toolbar" is checked. Also make sure that "Grid" is selected, as this will make it easier to arrange and align your pictures. (You can turn off this feature later.) Near the top of the screen, click on the Image Overlay icon. (I've highlighted it in the picture.)
You'll have to enter the URL of the image you'd like to add to the globe in the "Link" textbox, which I've highlighted in the above picture. In this case, I've used a link to one of my pictures in a Flickr set I created for this project. One thing to keep in mind is that the picture will take up a much larger space than you might prefer, so you'll have to adjust the size using the green markers:
Use the center cross-hair marker to slide the entire overlay on the globe and position it from the center. (Tip: do this first.)
Use the triangle marker to rotate the image for better placement.
Use any of the corner cross-hair markers to stretch or skew the selected corner. If you press the Shift key when selecting this marker, the image is scaled from the center.
Use any of the four side anchors to stretch the image in or out from the selected side. If you press the Shift key when doing this, the image is scaled from the center.
TIP: Try positioning the center of the image as a reference point first, and then use the Shift key in combination with one of the anchors to scale the image for best positioning.
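Behind the scenes, Google Earth stores each Image Overlay as a KML GroundOverlay element, so if placing dozens of photos by hand gets tedious, the repetitive part can be scripted. Here's a minimal Python sketch that writes a KML file you can open directly in Google Earth; the photo URLs and coordinates below are made up for illustration, so swap in your own Flickr image URLs and positions:

```python
# Sketch: generate a KML file with one GroundOverlay per photo.
# The photo names, URLs, and lat/lon boxes are placeholders.

photos = [
    ("Dog 1", "https://farm1.staticflickr.com/EXAMPLE/dog1.jpg", (35.0, 34.0, -78.0, -79.0)),
    ("Dog 2", "https://farm1.staticflickr.com/EXAMPLE/dog2.jpg", (36.0, 35.0, -77.0, -78.0)),
]

def overlay(name, url, box):
    north, south, east, west = box
    return f"""  <GroundOverlay>
    <name>{name}</name>
    <Icon><href>{url}</href></Icon>
    <LatLonBox>
      <north>{north}</north><south>{south}</south>
      <east>{east}</east><west>{west}</west>
    </LatLonBox>
  </GroundOverlay>"""

kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
       '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
       + "\n".join(overlay(*p) for p in photos)
       + "\n</Document>\n</kml>\n")

with open("photo_globe.kml", "w") as f:
    f.write(kml)
```

Naming each overlay (as the script does) also helps you find the pictures later in Google Earth's Places panel, and you can still fine-tune positions by hand with the green markers afterward.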
Directions updated to reflect the latest version of Flickr, as of 5/27/13: To find the image URL for a photo in Flickr that you wish to link on your photo-globe, select your desired photo, right-click it, and choose "Copy Image URL".
Put your cursor in the Link section of the "New Image Overlay" dialog box in Google Earth, and right-click to select "Paste" from the drop-down menu.
Then repeat the process. It helps to name each picture so that you can find it easily in Google Earth.
To enhance your mash-up, you can add place-marks that contain URLs that link to additional information about the subject of a picture, such as blog posts with embedded videos and/or text related to a picture, and so forth. Directions can be found in Google Earth's help section.
The process of building a photo-globe in Google Earth is a bit tedious. If someone has a short-cut to share, please let me know!
The big news in tech today is the unveiling of the new Xbox One/Kinect 2 system. For now, the video below might be the closest you'll get to the system. Wired senior editor Peter Rubin had a chance to interview Microsoft's Scott Evans as he demonstrated the system's fascinating technical details in a family-room-type setting.
Wired's interview of Scott Evans and demo of the new Xbox One and Kinect 2, using Active IR technology.
From what I learned, the new Kinect sensor has six times the fidelity of the previous version. Paired with the new Xbox One, it can do amazing things. Engineers from around the world collaborated on this project, providing expertise in facial recognition, digital signal processing, speech recognition, machine learning, and computer vision. The Xbox One is powered by an 8-core x86 processor and 8GB of RAM, which should handle the most demanding gamer's needs. It also includes a 500GB hard drive and a Blu-ray player.
The new system was designed to enhance the gaming/user experience. The 1080p camera provides a field of view that is 60 degrees larger than its predecessor's and can handle a high level of detail. It provides a better means of interpreting movement and orientation, and it processes skeleton and hand movements more precisely. The system features "muscle man", a human-based physics model that is layered over the skeleton and depth map; it senses and calculates the forces the player exerts while moving in a game. What I find interesting is that the camera can detect the player's pulse by measuring subtle changes in the skin that can't be perceived by the naked eye. It can also quickly identify each player (it handles up to six) and recognize facial expressions. The active IR (infrared) system gives the system better accuracy than the original Kinect.

I wasn't able to find much information regarding privacy issues with this system. This is a concern, since it can sense your physiological responses, movement patterns, and facial expressions. Over time, a good deal of very personal information would be gathered about each user. I shudder to think about the consequences if the data fell into the wrong hands.
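Microsoft hasn't said exactly how the pulse detection mentioned above works, but camera-based pulse sensing in general (the research term is remote photoplethysmography) boils down to averaging the skin pixels in each video frame and looking for a periodic component in that brightness signal. Here's a toy Python sketch using synthetic data in place of real video frames:

```python
# Toy sketch of camera-based pulse estimation (remote photoplethysmography).
# Real systems average skin-region pixel values per video frame; here we
# fake that signal: a 1.2 Hz "heartbeat" (72 bpm) buried in sensor noise.
import numpy as np

fps = 30.0                      # camera frame rate
t = np.arange(0, 10, 1 / fps)   # 10 seconds of frames
rng = np.random.default_rng(0)
skin_brightness = 0.05 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.02, t.size)

# Find the dominant frequency in a plausible heart-rate band (0.7-4 Hz,
# i.e. roughly 42-240 beats per minute).
spectrum = np.abs(np.fft.rfft(skin_brightness - skin_brightness.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fps)
band = (freqs > 0.7) & (freqs < 4.0)
pulse_hz = freqs[band][np.argmax(spectrum[band])]
print(round(pulse_hz * 60))     # prints 72, the estimated beats per minute
```

The real system has to do much more (find the skin in the first place, track it as the player moves, reject motion artifacts), but the frequency-analysis core is the same idea.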
Possibilities for Special Needs Populations
I can see that the Xbox One + Kinect 2 system has the potential for games and other interactive applications for use in physical rehabilitation and fitness. Since it can interpret facial expressions, it could also provide a way to support social skills learning among children and teens who have autism spectrum disorders.
RELATED
Microsoft invests a good deal of attention in proof-of-concept projects that may or may not become part of a commercial product. Below is an example, IllumiRoom:
Hrvoje Benko, of Microsoft Research, discusses the IllumiRoom concept during an interview at CHI 2013.
I like this demonstration of Adam Somers' AirHarp music application for use with the Leap Motion 3D controller:
AirHarp is being developed in C++ using Adam Somers' audio processing toolkit, MusKit. This looks interesting! Things have changed since I last took a computer music technology course (back in 2003). Adam Somers is a senior software engineer at Universal Audio. He has a graduate degree in music technology from Stanford, and a background in computer science, electronics, human-computer interaction, and signal processing. Leap Motion is a motion-control software and hardware start-up company located in San Francisco, California. According to promotional information from the website, the company's first product, the Leap Motion controller, is 200 times more sensitive than existing technologies. It will be interesting to see how this plays out. (I'm still waiting for my pre-order.)
RELATED
AirHarp (links to GitHub)
Leap FAQs
Leap Motion Website
Leap Motion Developer Portal
Leap Motion Leadership Team
Leap Motion goes retail: Motion controller sold exclusively at Best Buy - Michael Gorman, engadget, 1/16/13
The Affinity+ concept has the potential to be useful in educational settings such as schools, museums, and libraries. Although it was designed to support collaborative activities among software designers/developers, it could support a wide range of collaborative project-based learning activities. The clearly narrated video below was produced by a team from the Pacific Northwest National Laboratory.
"Affinity diagraming is a powerful method for encouraging and capturing lateral thinking in a group environment. The Affinity+ Concept was designed to improve the collaborative brainstorm process through the use of large display surfaces in conjunction with mobile devices like smart phones and tablets. The system works by capturing the ideas digitally and allowing users to sort and group them on a large touch screen manually. Additionally, Affinity+ incorporates theme detection, topic clustering, and other processing algorithms that help bring structured analytic techniques to the process without requiring explicit leadership roles and other overhead typically involved in these activities." -PNNL
RELATED
Affinity+ Semi-Structured Brainstorming on Large Displays - Russ Burtner, Richard May, Randy Scarberry, Ryan LaMothe, Alex Endert, Pacific Northwest National Laboratory
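PNNL doesn't spell out its theme-detection and clustering algorithms in the description above, but the core idea of automatically grouping short brainstorm notes can be illustrated with something as simple as word overlap. The notes and the similarity threshold below are made up for illustration; real systems use far more sophisticated text analysis:

```python
# Toy illustration of the "topic clustering" idea behind Affinity+:
# group short brainstorm notes whose word overlap (Jaccard similarity)
# with a group's first note exceeds a threshold.

notes = [
    "add touch gestures to the wall display",
    "support touch gestures on tablets",
    "detect themes in the submitted ideas",
    "cluster ideas by theme automatically",
]

def words(note):
    return set(note.lower().split())

groups = []
for note in notes:
    for group in groups:
        a, b = words(note), words(group[0])
        # Jaccard similarity: shared words / total distinct words
        if len(a & b) / len(a | b) > 0.15:
            group.append(note)
            break
    else:
        groups.append([note])       # no match: start a new group

print(len(groups))  # prints 3: the two "touch gestures" notes merge
```

Note how the last two ideas stay apart even though they are about the same theme ("themes" vs. "theme"); that's exactly the gap that proper topic-modeling algorithms are meant to close.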
I've been following a number of people who have been working in the area of natural user interfaces and interaction for many years. An example of this work is NUITEQ, a company led by Harry van der Veen. Below is NUITEQ's most recent show reel of Snowflake Suite, an off-the-shelf multitouch SDK.
Here is the description of the software from the naturaluserinterface YouTube channel:
"NUITEQ's award-winning multitouch software product Snowflake Suite comes off the shelf with 30+ apps, a free SDK to develop your own multitouch software apps and its content is easy to customize. The solution offers high performance, stability and quality, and comes with dedicated support. Apps include presentation, productivity and creativity tools as well as games. The software can be used in different scenarios such as corporate presentations, exhibitions, entertainment, education, public spaces, consumer electronics, retail and digital signage."
FYI: Tutorials about the use of Snowflake Suite can be found on the naturaluserinterface YouTube channel.
Harry van der Veen has been sharing his NUI journey since 2007 on his Multitouch blog.
I'd like to share the online demo of MindHabits' suite of serious games, which I've found useful in my work with teens and young adults who need support in the area of social-emotional skills.
What I like about the online demo is that it adjusts to the player's responses. This feature made it fun to use during the last few social skills groups I facilitated at work, since it could be played by students with a range of cognitive abilities. I had students take turns playing the game using a SMARTboard, and found that all of the students paid attention to what was going on. In my opinion, using the interactive whiteboard supported "off-the-shoulder" learning among the students who were not at the board.
MindGames is available for Windows and Mac; the full version is just $19.99, includes four games with 100 levels, and tracks your progress.
Here's some information from the company's website: "Based on social intelligence research conducted at McGill University, these stress busting, confidence boosting games use simple, fun-to-play exercises designed to help players develop and maintain a more positive state of mind." "Based on the principles of social intelligence: Inhibition - uses game mechanics to promote positive habits; Association - connects personal info to positive feedback; Activation - uses personal references"
Interview with Guillaume Largillier about Stantum's multi-touch tablet.
Collaboration with Stevens Institute of Technology, focusing on a serious game project to support learning of "on-the-job" social skills for teens and young adults with autism spectrum disorders and related challenges.
More news about large interactive displays, multi-touch, and gesture applications/installations.
"It's people who are now the interface." -Ole Bouman, cultural and architectural historian
I found the above quote from the Immersive Cocoon website and smiled.
When I first learned about the Immersive Cocoon in 2008, I thought it was just another technological fancy that probably would not come to market anytime soon. Although it still is in the concept stage, I think it has a chance of making it, given the rapid advances in interactive technology over the past few years.
It wouldn't surprise me to see i-Cocoons finding a place in libraries, educational settings, museums, and other public spaces within the next 5-8 years, given an economic turnaround.
What is the Immersive Cocoon? "The Immersive Cocoon is a future concept study by Tino Schaedler with design collective NAU; an idea to push the envelope and provoke a new conception of interface technology...Directed and 3D CG by Oliver Zeller. More info, behind the scenes and full credits at i-cocoon.com." -adNAU
"Please play fullscreen and LOUD! ...This spec teaser reveals an evolution in computing interaction, within a setting inspired by the penultimate scene from Stanley Kubrick's 2001: A Space Odyssey...Starring that film's lead actor, Keir Dullea; "2011" was developed over a two year period. Live action was filmed multi-camera, against green screen atop a backlit plexi floor on a shoestring budget. Mr. Dullea was then integrated into an entirely digitally created CG set rendered at 1080HD."
Here are some previous videos about the iCocoon concept:
RELATED
Immersive Cocoon Concept Website
Designers developing virtual-reality 'Cocoon' - Mark Tutton, 9/12/08, Telepresence Options / Human Productivity Lab
Immersive Cocoon - Facebook
"NAU is an international, multidisciplinary design firm, spanning the spectrum from architecture and interior design to exhibitions and interactive interfaces. As futurists creating both visual design and constructed projects, NAU melds the precision of experienced builders with the imagination and attention to detail required to create innovative exhibits, public events and architecture."
FYI: Concerning interactive technology, things have changed a bit in my corner of the world. As I write this post, there is a Kinect beckoning me to dance in my bonus room; the Kinect was something that came to market much sooner than I expected. I'll have an iPad 2 sometime in the near future, another example of how rapidly things are evolving. I skim the news by touching and swiping my now-outdated HTC Incredible. My 88-year-old aunt has used Skype more than once to "chat" with her baby great-nephew across the miles.
I use a Wii at work at least once a week to support social interaction skills with some students who have moderate-to-severe autism. Every classroom in the main school I serve has a huge, immersive, interactive whiteboard that relies on touch and kinesthetic interaction; my colleagues can't imagine going back to teaching without them.
To learn more about this project, take a look at the video and related publications below. This is a great example of a team that is harnessing emerging technologies to improve the lives of people with disabilities.
I recently posted about the Therenect, a gesture-controlled digital theremin for Microsoft's Kinect created by Martin Kaltenbrunner (Therenect: Theremin for the Kinect!). It looks like Martin has been busy polishing up the application over the past few days, as you can see from the video below:
"In this paper we present novel input devices that combine the standard capabilities of a computer mouse with multi-touch sensing. Our goal is to enrich traditional pointer-based desktop interactions with touch and gestures. To chart the design space, we present five different multi-touch mouse implementations. Each explores a different touch sensing strategy, which leads to differing form-factors and hence interactive possibilities. In addition to the detailed description of hardware and software implementations of our prototypes, we discuss the relative strengths, limitations and affordances of these novel input devices as informed by the results of a preliminary user study."
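The paper focuses on hardware sensing strategies, but the software side of any two-finger interaction comes down to the same geometry: compare the pair of contact points before and after a movement to recover how much the user pinched (scale) and twisted (rotation). A minimal sketch (the function and names are mine, not from the paper):

```python
# Sketch: derive pinch-zoom scale and rotation from two touch points.
# Compare the vector between the fingers at the start and end of a gesture.
import math

def pinch_transform(p1_start, p2_start, p1_end, p2_end):
    def delta(a, b):
        return (b[0] - a[0], b[1] - a[1])
    v0 = delta(p1_start, p2_start)   # vector between fingers, before
    v1 = delta(p1_end, p2_end)       # vector between fingers, after
    scale = math.hypot(*v1) / math.hypot(*v0)
    rotation = math.degrees(math.atan2(v1[1], v1[0]) - math.atan2(v0[1], v0[0]))
    return scale, rotation

# Fingers move apart from 100 px to 200 px along the x-axis:
print(pinch_transform((0, 0), (100, 0), (0, 0), (200, 0)))  # (2.0, 0.0)
```

A real driver would run this continuously against the latest touch frame and also decide when a contact is a gesture versus an accidental graze, which is where the form-factor differences the authors discuss come into play.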
According to the press release from Microsoft, Texas Health Resources and Microsoft partner Infusion Development have developed a prototype to assist with doctor-patient communication and collaboration:
Medhost has created an emergency department dashboard that can help medical professionals make decisions more efficiently:
Vectorform developed an application to assist children in rehabilitation at the Cook Children's Health System in Fort Worth, Texas. The application allows rehabilitation specialists to design their own evaluations for patients:
The only drawback is that the Surface has a very high price tag. I think I'll stick with HP TouchSmart PC projects with my students!
Strukt is a design studio in Vienna, Austria, that specializes in interactive and generative design for a variety of purposes, such as interactive environments and installations, ambient intelligent environments, games, and multi-touch tables, screens, and walls. The video is a demonstration of applications that were presented at the March 2009 TOCA ME Design Conference in Munich, Germany. The applications were developed using vvvv. (More information regarding vvvv can be found at the end of this post.)
INFO FOR THE TECH-SAVVY OR TECH-CURIOUS:
According to information from the vvvv website, vvvv is a "toolkit for real time video synthesis. It is designed to facilitate the handling of large media environments with physical interfaces, real-time motion graphics, audio and video that can interact with many users simultaneously. vvvv is a visual programming interface. Therefore it provides a graphical programming language for easy prototyping and development. vvvv is real time: where many other languages have distinct modes for building and running programs, vvvv only has one mode, run-time. vvvv is free for non-commercial use."
vvvv Screenshots
vvvv's Propaganda Page
Other projects using vvvv
Struktable: the 70-inch Multitouch Table
STRUKT ON A SPHERE: Interactive installation at a Mercedes Benz conference