Nov 30, 2008
Tech-Savvy Teachers at Classroom 2.0; Google SketchUp for math, and other creative ideas...including multi-touch
"Guzman", a member of Classroom 2.0, teaches math to middle school students in Florence, Italy, and also has an IT background. He used Google SketchUp to introduce and work with solids, and thinks SketchUp would be a good tool for teaching volumes, sections, and more.
Guzman's students worked on this project one hour a week for five weeks, and created all of the models using SketchUp. The animation was created with SketchUp by Guzman.
You can find the models Guzman's students made at the Google 3D Warehouse.
Tom Barrett, at Classroom 2.0, has started a Multi-Touch Interactive Desk Development group. Tom is involved in SynergyNet: Multi-touch in Education, a research project that is part of Durham University's (UK) Technology-Enhanced Learning Research Group.
You can find related pictures on Flickr: Multi-Touch Interaction: Applications and Gesture Ideas
Nov 26, 2008
Laurence links to the DIY tabletop computing bootcamp that was held at IEEE Tabletops and Interactive Surfaces 2008. From there, you can find a list of links to the organizers of the event, and additional information.
The picture below is from the MTC Multi-touch Console:
Here is a link to the group's libavg wiki that includes open-source code and "how-to" instructions.
If you are interested in multi-touch and multi-gesture computing from an academic point of view, Florian Echtler, of the Technische Universität München, has a series of publications listed on his website. Here is the abstract of one of his papers. He is on the right track. I especially like the fact that he's thought about widget layers. (I have, too, but they are only sketches in my idea book.)
TICH: Tangible Interactive Surfaces for Collaboration between Humans (SourceForge website, with links to libtisch.)
F. Echtler, G. Klinker
A Multitouch Software Architecture
NordiCHI 2008: Using Bridges, 18-22 October, Lund, Sweden. (bib)
"In recent years, a large amount of software for multitouch interfaces with various degrees of similarity has been written. In order to improve interoperability, we aim to identify the common traits of these systems and present a layered software architecture which abstracts these similarities by defining common interfaces between successive layers. This provides developers with a unified view of the various types of multitouch hardware. Moreover, the layered architecture allows easy integration of existing software, as several alternative implementations for each layer can co-exist. Finally, we present our implementation of this architecture, consisting of hardware abstraction, calibration, event interpretation and widget layers."
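The layered split that the abstract describes (hardware abstraction, calibration, event interpretation, widgets) is easy to picture as a pipeline. Here is a minimal sketch in Python; the class and method names are my own illustration, not libTISCH's actual API:

```python
# Rough sketch of the layered architecture described in the abstract.
# Class and method names are illustrative, not taken from libTISCH.

class HardwareLayer:
    """Turns device-specific sensor data into normalized (x, y) touch points."""
    def read(self):
        # A real implementation would poll a camera or touch controller here.
        return [(0.25, 0.5), (0.75, 0.5)]

class CalibrationLayer:
    """Maps normalized sensor coordinates onto screen pixels."""
    def __init__(self, width, height):
        self.width, self.height = width, height
    def transform(self, points):
        return [(x * self.width, y * self.height) for x, y in points]

class InterpretationLayer:
    """Classifies calibrated touch points into higher-level events."""
    def interpret(self, points):
        if len(points) == 2:
            return "two-finger gesture"
        return "single touch" if points else "no touch"

# Each layer only talks to the one below it, so alternative implementations
# (different hardware, different calibration) can be swapped in independently.
hw = HardwareLayer()
cal = CalibrationLayer(1920, 1080)
interp = InterpretationLayer()
event = interp.interpret(cal.transform(hw.read()))
print(event)
```

The point of the layering is exactly what the abstract claims: the widget layer never needs to know which kind of hardware produced the touches.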
Teliris InterAct TouchTable and TouchWall: Immersive Collaboration & Telepresence; DVE's Holographic Tele-Immersion Room
According to a whitepaper on the Teliris website, "Business Value of Telepresence", by S. Ann Earon, "Telepresence is what videoconferencing was meant to be: reliable, highly interactive, easy to operate, resulting in a natural meeting with transparent technology and an emphasis on human factors."
Teliris now offers something they call Immersive Collaboration, which involves the use of surface computing that supports document and multimedia content sharing across locations, as if all of the group members were in the same room.
Watch the demonstration of the Teliris Collaboration Touch Table in a telepresence meeting. In the video clip below, the narrator shares content from a local Teliris Collaboration Touch Table to a remote meeting participant who is at another table.
"Touch to Telepresence"
DVE (Digital Video Enterprises) developed a Tele-Immersion room that uses Christie Digital Systems Mirage HD 3D projectors to create holographic images of remotely located meeting participants:
DVE Telepresence: An Introduction (A plug from DVE, but informative.)
DVE Portable Virtual Presentation - A volumetric 3D image from a projector hidden from the audience's view:
This system can display 3D images on the stage, and supports 2-way interactive HD feeds.
The above examples demonstrate how newer technologies, including table-top surfaces, can be used for collaborative business meetings. I can envision this technology used for medical education, medical consultations, and collaboration between artists and musicians.
When the price comes down, perhaps we will have these systems in our family rooms!
Nov 23, 2008
For more information, take a look at the Touch TV Networks website.
For the Tech Curious: "Get in Touch with Touchless": Multi-touch with just a webcam and the free demo application!
You can find the demo code on the Codeplex website. Here is a quote:
"The Touchless SDK enables developers to create multi-touch based applications using a webcam for input. Touch without touching."
"Touchless started as Mike Wasserman’s college project at Columbia University. The main idea: to offer users a new and cheap way of experiencing multi-touch capabilities, without the need of expensive hardware or software. All the user needs is a camera, which will track colored markers defined by the user."
(I posted about the Touchless SDK previously, but I didn't have the video.)
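The color-marker tracking that Touchless performs boils down to finding the centroid of the pixels that match a marker color in each webcam frame. Here is a toy Python sketch of that idea (Touchless itself is a C# SDK; this stand-in uses a synthetic frame rather than a real webcam, and the tolerance value is arbitrary):

```python
# Toy illustration of color-marker tracking, the idea behind Touchless:
# find the centroid of pixels matching a user-defined marker color.

def track_marker(frame, color, tol=10):
    """frame: 2-D list of (r, g, b) tuples. Return the (row, col) centroid
    of pixels whose per-channel color difference sums to at most tol."""
    hits = [(y, x)
            for y, row in enumerate(frame)
            for x, px in enumerate(row)
            if sum(abs(a - b) for a, b in zip(px, color)) <= tol]
    if not hits:
        return None  # marker not visible in this frame
    ys, xs = zip(*hits)
    return (sum(ys) / len(ys), sum(xs) / len(xs))

# Synthetic 100x100 black "frame" with a green marker at rows 40-49, cols 60-69
frame = [[(0, 0, 0)] * 100 for _ in range(100)]
for y in range(40, 50):
    for x in range(60, 70):
        frame[y][x] = (0, 255, 0)
print(track_marker(frame, (0, 255, 0)))  # roughly the center of the blob
```

Run per frame, the moving centroid becomes the "touch" position, which is why all the user needs is a camera and a colored marker.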
Spencer is a teacher and instructional technology consultant who develops web-based interactive applications for use on interactive whiteboards (IWB's). He's interested in multi-touch applications for education and has some good insights into what HCI researchers call the "problem space".
Here are Spencer's comments:
..."I agree that Flash could have a very important role to play here. I chose Flash as my development tool because it allows quick development of ideas and then easy distribution of the product. The importance of this is that it allows people who have a profession other than software developer to create software with the insight of their main role. In my case, as a teacher, I can identify things I wish I had and then make them. Often I find that other teachers had the same wish and they then appreciate the product."
"The unfortunate thing with multi-touch is that it is far from the technology most of us outside the industry/research areas have to work with. An app created in Flash for single touch follows the mouse and pointer method so it can be developed easily. When done it can be easily tested on a standard IWB for the feel (which is often surprisingly different on the IWB compared to using a mouse)."
"The Flash developer community has a very experimental and creative characteristic and I’m sure would be a great driving force for multi-touch but first there needs to come a reason for more people to have some sort of multi-touch display for general use, beyond facilitating experiments. When the various operating systems support it and have the apps to make having a supporting display viable then the experimentation and ideas will really flow."
"In addition, the display makers need to recognize the benefits of Flash and ensure they address them. At the moment it seems to be too often an afterthought if considered at all. SDKs and APIs make no reference to Flash or they remain indefinitely in beta for older versions of Flash only."
"It is a pity that all of this will take time. The more time that passes the more single touch IWBs are bought and installed which will delay the uptake of the eventual multi-touch ones. Meanwhile children continue to have to keep reminding themselves that they can only touch the board in one place when it is clear that every bit of their brain is telling them to interact with the board in a much more natural multi-touch way."
My response to Spencer's comment:
You make good points regarding the barriers to getting the multi-touch approach adopted by the "mainstream". You're right about what the commercial display makers need to do. If they want to market displays that will have more appeal, they must think about the different sorts of applications and programming environments that the displays should support.
Display makers also need to think more about the bigger picture - in what sort of environments will the displays be located? Indoors? Outdoors? Near bright sunlight? What about people with disabilities, children, or the elderly?
I can see that in the future, multi-touch displays and other devices would operate within an embedded systems environment and support mobile computing activities as well. There are existing examples of this concept, of course, but there is much room for creative improvement. An embedded systems approach is complex, and would need to handle input from sensors, support multi-modal signal processing, and also provide users with a range of connectivity modes, including RFID. (Data management and storage needs would have to be addressed, along with privacy and security concerns.)
Most importantly, in my opinion, these systems would need to have the flexibility required to support human activities and interactions that have not yet emerged! Certainly this will need to take a multidisciplinary approach.
There are many unanswered questions....How does this fit in with mobile computing and "cloud" computing? What sort of middleware needs to be developed?
Even if we don't have solutions to the bigger problems, there are many smaller problems that I think could be somewhat easily solved.
As you mentioned, many applications that are designed for single-touch screens don't fully support the way people identify, select, and move items around the screen. Although educators access websites every day for use on interactive whiteboards, they are hungry for more. There are not enough websites that are optimized for single-touch interaction, or touch-screen interaction in 3-D "space".
Teachers who are successful users of interactive whiteboards know exactly what we are talking about. They spend quite a bit of time searching for new on-line resources they can use with their students. They know how much the students want to interact with the screen at the same time, and would be so excited to have that capability at their fingertips!
Optimizing websites for touch-screen applications is possible, but this idea hasn't occurred to most web developers. Their jobs don't require it, so there is no incentive. Google is developing FlareBrowser, which can support multi-touch interactions, but according to information on the website, it runs only on Mac OS X Leopard (10.5). The present version is bare-bones. I haven't yet tested FlareBrowser.
I think that another barrier to getting multi-touch off the ground is that the people who might have the knack for multi-touch application development simply don't know it! We've mentioned that Flash developers have the potential to create good multi-touch applications. I also think that game developers and designers could make good contributions to the multi-touch movement. Just think about what thought goes into programming interactions and event handling for 3-D web-based multi-player games!
Yet another barrier is that people who work in lower-tech fields could benefit from collaborative multi-touch applications, but they don't know it, either. The research I've reviewed tells me that multi-touch applications can support a wide range of human endeavors: work, creativity, data analysis, education, collaboration, planning, and so forth.
What is missing is the input of potential end users from a variety of fields. No specific discipline "owns" multi-touch, so it is hard to figure out how we can make this happen.
Could we set up multi-touch technology playgrounds at professional and trade conferences? What about airports and hospital lobbies? Libraries and museums? Shopping centers? Sports events and rock concerts?
This leads me to my next idea, which is jumping ahead a bit:
One of the barriers to the development of multi-touch applications is that it is not easy to gather user requirements when the users are not familiar with the technology.
That is when my "Miracle Question" technique comes into play. I learned this technique when I studied brief solution-focused counseling, and found that, if modified, it can be useful for figuring out user requirements. (The process still needs some fleshing out.)
Why the Miracle Question?
The questions that a developer uses to guide the client during the initial planning stages are very important. Keep in mind that people want to use technology because it meets a need and solves a problem, which is similar to the reason a person might seek counseling.
The Miracle Question technique (actually, a series of questions) might help to tease things out. The goal of this type of questioning is to help the client use their own creativity, resources, and problem-solving skills so they can become effective partners throughout the development cycle.
(People with human-computer interaction training might have an easier time understanding how this technique might be modified and applied to different fields.)
A good example of the Miracle Question process, as used in therapy and counseling, can be found on the Network of Social Construction Therapies website, in an article written by the late Steve de Shazer:
There aren't many resources about the use of the Miracle Question in IT or business. Here are a couple:
Solution Focused Management of Unplanned IT Outages (read page 132 and the references): http://conferences.vu.edu.au/web2006/images/CDProceedings06.pdf
Katherine O'Callaghan and Sugumar Mariappandar, Ph.D., School of Business and Informatics, Australian Catholic University. Proceedings of the 7th International We-B (Working for E-Business) Conference, 2006.
Miracle Question in Executive Coaching
Nov 22, 2008
Rome Reborn Update: New Google Earth layer of Ancient Rome - Great Idea for Engaging Interactive Whiteboard Activities
Visitors can explore inside the city's buildings, and obtain related historical information through pop-up windows. The 3-D interaction is great on the large screen or interactive whiteboard.
I posted about the Rome Reborn project previously.
Below is the "how-to" video:
Google Earth's Ancient Roman Holiday
Rome Curriculum Competition for Educators
Apple MacBook laptop
Digital classroom projector
3D Navigation mouse
$500 in gift cards to Target or Office Depot
Engraved Google "Top Educator" plaque
"We're accepting curricula from all grade levels and K-12 subject areas including art history, math, social studies, physics, and philosophy, so whether you teach 5th grade art or high school engineering, there's glory and a nice prize package waiting for you."
Nov 20, 2008
CNN's Magic Wall Conspiracy Thriller on the Daily Show: John Oliver, Jeff Han, John King and a cast of TouchScreens and Windows...
"It's good to be King." - John King, after disposing of John Oliver...
I just took a look at a hilarious episode about interactive multi-touch screens and a conspiracy theory on the Daily Show. The episode features Jeff Han, the creator of CNN's Magic Wall, John Oliver, John King, and others from CNN.
Via John Herrman and Gizmodo
If you are interested in multi-touch technology, feel free to do a search for additional information on this blog. The following post includes Jeff Han's demonstration of his multi-touch applications from TED 2006, along with resources and links:
Multi-touch and Flash: Links to Resources, Revisiting Jeff Han's TED 2006 Presentation
Note: If you are a parent, please screen the video clip before deciding if it is OK for your child to view.
Multitouch Space Invaders Basic Demo from multitouch-barcelona on Vimeo.
For more sights and sounds from this group, visit the Red Bull Music Academy Barcelona 2008 website.
For more about multi-touch technology, including DIY instructions for creating multi-touch tables and displays, open-source code, and tutorials, visit the NUI Group website.
Nov 19, 2008
According to the HP website, some models of the notebook come with a built-in fingerprint reader to assist with log-on or lock-up functions. It includes integrated Altec Lansing stereo speakers and supports multimedia entertainment applications. The screen is 12.1", with an HP BrightView LED display. It is capable of playing HD content.
Watch the video:
Explore the features in the interactive presentation.
Read the WSJ Market Watch article:
"The enhanced HP MediaSmart digital entertainment software suite on the tx2 allows users to more naturally select, organize and manipulate digital files such as photos, music, video and web content by simply touching the screen."
"Breezing through websites and enjoying photos or video at the tap, whisk or flick of a finger is an entirely new way to enjoy digital content on a notebook PC," said Ted Clark, senior vice president and general manager, Notebook Global Business Unit, Personal Systems Group, HP. "With the introduction of the TouchSmart tx2, HP is providing users an easier, more natural way to interact with their PCs, and furthering touch innovation."
The notebook uses capacitive touch technology, and supports gestures such as "pinch, rotate, arc, flick, press and drag, and single & double tap."
For more information, see Hugo Jobling's recent post on the TrustedReviews website.
The touch-screens in HP's products are from NextWindow. NextWindow now has drivers that will work with the upcoming Windows 7, which will allow for multi-touch applications.
FYI: Video clip of HP's TouchSmart single-touch interaction, from July 2008:
From Andy Vandervell's Trusted Reviews post, "Hands On with the HP TouchSmart"
The video shows the new NextWindow Gesture Server Application.
Info from the NextWindow website:
"NextWindow Gesture Server Application in conjunction with a NextWindow touch screen enables two-touch gestures to be used on the Microsoft Windows Vista desktop and certain applications.
You perform a gesture by double-tapping or dragging two fingers on the touch surface. The Gesture Server interprets these actions as commands to the operating system. For example a two-touch vertical drag on the Vista desktop can adjust the computer's audio volume control up or down as required."
Also from the website:
Vertical scroll: drag two fingers up or down the touch screen.
Horizontal scroll: drag two fingers left or right on the touch screen.
Zoom: move two fingers apart or together.
Double Tap: double-tap two fingers on screen.
"You can enable or disable the two-touch functionality and adjust the sensitivity of each of the four two-touch gestures. You can also select the command that is executed with the double-tap gesture."
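Distinguishing those two-touch gestures amounts to comparing how the finger spread and the average finger position change over the course of a drag. The sketch below is my own illustration of that logic, not NextWindow's actual implementation; the function name and threshold are made up:

```python
import math

def classify_two_touch(p1_start, p1_end, p2_start, p2_end, thresh=5.0):
    """Toy classifier for the two-touch gestures listed above.
    Each argument is an (x, y) point; thresh is an arbitrary cutoff."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # If the distance between the fingers changed a lot, it's a zoom.
    spread = dist(p1_end, p2_end) - dist(p1_start, p2_start)
    if abs(spread) > thresh:
        return "zoom in" if spread > 0 else "zoom out"
    # Otherwise the fingers moved together: a scroll along the larger axis.
    dx = (p1_end[0] - p1_start[0] + p2_end[0] - p2_start[0]) / 2
    dy = (p1_end[1] - p1_start[1] + p2_end[1] - p2_start[1]) / 2
    return "vertical scroll" if abs(dy) >= abs(dx) else "horizontal scroll"

# Two fingers moving apart -> zoom in
print(classify_two_touch((100, 100), (80, 100), (200, 100), (220, 100)))
# Two fingers dragged downward together -> vertical scroll
print(classify_two_touch((100, 100), (100, 140), (150, 100), (150, 140)))
```

A real gesture server would also debounce, track the gesture over many frames, and map the result to OS commands like the volume control mentioned above.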
Nov 17, 2008
Flare is a visualization toolkit for the web. It is an ActionScript library built with Adobe's Flex SDK (the ActionScript 3 compiler and Flex Builder), and the applications run in the Adobe Flash Player.
It was developed by the University of California, Berkeley Visualization Lab, whose website contains a wealth of resources and information about the lab's projects and presentations.
Additional information, including tutorials, source code, sample applications, API documentation, and a help forum, can be found on the Flare website.
An interactive visualization created with Flare.
Here are some cool links about data visualization, via Sebastian Misiurek, of the Crisis Fronts: Cognitive Infrastructures blog:
Sebastian also recommends the following papers (pdf):
Information Aesthetics in Information Visualization
Artistic Data Visualization: Beyond Visual Analytics
I especially like the description of the Crisis Fronts project:
"Crisis Fronts is the Degree Project studio and seminar run by Michael Chen and Jason Lee, with Gil Akos and Ronnie Parsons at Pratt Institute’s School of Architecture.
Crisis Fronts is an ongoing inquiry into contemporary global crises that suggest new demands and agendas for architecture, and the potential afforded by parametric and generative digital design tools to engage them."
Nov 16, 2008
"It “is a wild and crazy ecosystem where you manage the resources to influence the environment around you. Streams of water flowing on the floor can be diverted to make the different parts of the forest grow. If a tree does not receive enough water it withers away, but by pressing your body into the forest you create new trees based on your shape and character. As you explore and play you discover that your environment is inhabited by sonic life forms who depend on a thriving ecosystem to survive.”
The trees and creatures in the installation look really beautiful; just abstract enough to make it look like a strange magical forest, but the processes of our real ecosystems are still recognisable. A really wonderful project. And it sure looks like a lot of fun!" - Tanja, from the TakeBigBites blog
Every Surface a Computer: "Scratch" Capturing Finger Input on Surfaces using Sound. Video by Chris Harrison and Scott Hudson - UIST '08
Yes, every surface is a computer!
(Even your pants...)
For detailed information, read the paper presented at UIST '08 by Chris Harrison and Scott E. Hudson:
Scratch Input: Creating Large, Inexpensive, Unpowered, and Mobile Finger Input Surfaces
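The core idea of Scratch Input is that a fingernail dragged across a surface produces a distinctive burst of acoustic energy that a cheap contact microphone can pick up. Here is a deliberately simplified sketch of that idea; the real system analyzes the acoustic signature in far more detail than a bare energy threshold, and the numbers here are made up:

```python
# Toy sketch of Scratch Input's premise: scratches show up as bursts of
# energy in the audio signal from a microphone coupled to the surface.

def detect_scratches(samples, window=4, threshold=0.5):
    """Return the indices of fixed-size windows whose mean absolute
    amplitude exceeds the threshold."""
    events = []
    for i in range(0, len(samples) - window + 1, window):
        energy = sum(abs(s) for s in samples[i:i + window]) / window
        if energy > threshold:
            events.append(i // window)
    return events

# A quiet signal with a "scratch" burst in the second window
signal = [0.01, -0.02, 0.0, 0.01,
          0.9, -0.8, 0.85, -0.7,
          0.0, 0.01, -0.01, 0.02]
print(detect_scratches(signal))  # -> [1]
```

Because only the sound travels through the surface, any table, wall, or pant leg the microphone touches becomes an input surface, which is the "unpowered" part of the paper's title.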
The Best Paper Award at UIST '08 went to "Bringing Physics to the Surface", by Andrew Wilson, of Microsoft Research, and Shahram Izadi, Otmar Hilliges, Armando Garcia-Mendoza, and David Kirk, of Microsoft Research, Cambridge.
Here is the abstract:
"This paper explores the intersection of emerging surface technologies, capable of sensing multiple contacts and often shape information, and advanced games physics engines. We define a technique for modeling the data sensed from such surfaces as input within a physics simulation. This affords the user the ability to interact with digital objects in ways analogous to manipulation of real objects. Our technique is capable of modeling both multiple contact points and more sophisticated shape information, such as the entire hand or other physical objects, and of mapping this user input to contact forces due to friction and collisions within the physics simulation. This enables a variety of fine-grained and casual interactions, supporting finger-based, whole-hand, and tangible input. We demonstrate how our technique can be used to add real-world dynamics to interactive surfaces such as a vision-based tabletop, creating a fluid and natural experience. Our approach hides from application developers many of the complexities inherent in using physics engines, allowing the creation of applications without preprogrammed interaction behavior or gesture recognition."
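One common way to couple touch input to a physics engine is a spring "proxy" that pulls the touched object toward the finger, so friction and collisions emerge from the simulation rather than from hand-coded gestures. The one-dimensional sketch below illustrates that general idea only; it is my own simplification, not the paper's actual technique, and all constants are arbitrary:

```python
# Sketch of a spring "proxy" coupling a finger to a simulated object:
# the contact exerts a force proportional to the finger-object gap.

def step(obj_pos, obj_vel, finger_pos, k=50.0, damping=0.8, dt=0.016):
    """Advance a 1-D object one timestep, pulled toward the finger
    by a spring of stiffness k, with simple velocity damping."""
    force = k * (finger_pos - obj_pos)        # spring toward the finger
    obj_vel = (obj_vel + force * dt) * damping
    return obj_pos + obj_vel * dt, obj_vel

pos, vel = 0.0, 0.0
for _ in range(200):                          # finger held at x = 10
    pos, vel = step(pos, vel, 10.0)
print(round(pos, 2))                          # object settles near x = 10
```

The appeal of this style of coupling is exactly what the abstract claims: the application never defines a "drag gesture", the object just follows the forces.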
Preparation for the Internet of Surfaces & Things?
(Cross-posted on the Technology-Supported Human World Interaction blog)
Nov 15, 2008
CNN's Magic Wall was one of the first applications to gain the attention of the masses, as it was used as an interactive map during the US presidential election process. Touch-screen interaction gained even more notice after the recent SNL parody by Fred Armisen.
If you think about it, the multi-touch applications you see on the news aren't much different from what you'd get from a "single-touch" program.
Fancy, yes. Truly innovative, no.
Just imagine a 3D multi-touch, multi-user, multimedia version of Google Search. I did. I put my sketches in my idea book and hurt my brain thinking about how it could be coded.
Jeff Han, the man behind Perceptive Pixel and CNN's Magic Wall, had much more up his sleeve when he demonstrated his work at TED 2006. Even if you've previously seen this video, it is worth watching again. (I've provided a link to the transcript below.)
Transcript of Jeff Han's TED 2006 Presentation
This video presentation had a transformational effect on me as I watched it for the first time. Jeff Han brought to life ideas that were similar to my own as a beginning computer student thinking about collaborative educational games and multimedia applications that could be played on interactive whiteboards.
Here are some selected quotes from the video:
"I really really think this is gonna change- really change the way we interact with the machines from this point on."
"Again, the interface just disappears here. There's no manual. This is exactly what you kind of expect, especially if you haven't interacted with a computer before."
"Now, when you have initiatives like the hundred dollar laptop, I kind of cringe at the idea that we're gonna introduce a whole new generation of people to computing with kind of this standard mouse-and-windows pointer interface. This is something that I think is really the way we should be interacting with the machines from this point on. (applause)"
"Now this is going to be really important as we start getting to things like data visualization. For instance, I think we all really enjoyed Hans Rosling's talk, and he really emphasized the fact that I've been thinking about for a long time too, we have all this great data, but for some reason, it's just sitting there. We're not really accessing it. And one of the reasons why I think that is, is because of things like graphics- will be helped by things like graphics and visualization and inference tools. But I also think a big part of it is gonna be- starting to be able to have better interfaces, to be able to drill down into this kind of data, while still thinking about the big picture here."
So now what?
A recent post by "Alex", on the AFlex World blog discusses a few solutions. Alex had a chance to meet with Harry van der Veen and Pradeep George from the NUI Group, and Georg Kaindl, a multi-touch interaction designer from the Technical University of Vienna. The focus of the discussion was to come up with ideas to encourage Adobe/Flash designers and developers to learn more about multi-touch technology and interaction, and take steps to create innovative applications.
I especially like the following quote from the post:
"...A quick quote from our conversations: “When our children will walk up to a display, they will touch it and expect to do something.”
As a techie and a school psychologist, I see an immediate need for innovative applications. I know that there is a built-in market in the schools, at least for low-cost applications. Despite economic constraints, many school districts continue to invest in interactive whiteboards (IWB's). They are cropping up in preschool and K-12 settings, and teachers are searching for more than what's currently available.
Interactive, collaborative applications are needed in fields such as health care, patient education, finance & economics, urban planning, civil engineering, travel & tourism, museums & exhibitions, special events, entertainment, and more.
Smart Technologies, the company behind SmartBoards, has a new interactive multi-touch, multi-user table designed for K-6 education, the Smart Table. Hewlett Packard has several versions of the TouchSmart PC, which can support at least duo-touch, if not multi-touch, multi-user applications. There are numerous all-in-one large screen displays on the market that support multi-touch and multi-user interaction.
Quotes from Harry van der Veen, of Multitouch NL:
"In 10 years from now when a child walks up to a screen he expects it to be a multi-touch screen with which he can interact with by using gestures."
"...multi-touch screens will be as common as for children is the internet nowadays, as common as mobile phones are for us."
Here is a quote from a conversation I had with Spencer, who blogs at TeacherLED.
"It was interesting this week as I was in a classroom with a teacher who I've not worked with before... he had 2 students using the whiteboard who kept touching it together by mistake. The teacher, exasperated, said to himself, "Why can't they make these things to accept 2 touches without going crazy!"
Proof of the demand! I think you are right: when teachers spot the limitations and then see the technology on visits to museums, that might stimulate demand."
Spencer creates cool interactive mini-applications, mostly for math, using Flash, that teachers (and students) love to use on interactive whiteboards. (He's interested in multi-touch, too.)
So what are we waiting for?!
Natural User Interface Europe AB meets Adobe
Georg's Touche Framework
Interactive Touch-Screen Technology, Participatory Design, and "Getting It".
Hans Rosling's 2007 TED talk
Nov 13, 2008
RENCI at Duke University: Multi-Touch Collaborative Wall and Table utilizing TouchLib; More about UNC-C's Viz lab...
RENCI is a multi-disciplinary collaboration between several universities in North Carolina, with centers located at the Europa Center, Duke University, N.C. State, UNC Chapel Hill, East Carolina University, UNC-Asheville, UNC-Charlotte, and the Health Sciences Library at UNC-Chapel Hill. Many of the centers focus on visualization and collaborative technologies, and have been involved in multi-touch "surface" computing.
The pictures below are from the RENCI center at Duke University:
Duke Multi-Touch Collaborative Wall
The multi-touch wall is 13 x 5 feet and utilizes six high-definition projectors, resulting in a combined resolution of 5760 x 2160, and supports multiple users. According to information on the RENCI website, the design is scalable and applicable to non-flat surfaces. The wall system runs on Windows and Linux.
(Photo by Josh Coyle)
(Photo by Josh Coyle)
DI (Direct Illumination) is used for touch detection in both the wall and the table. A separate instance of Touchlib runs for each of the eight cameras used to detect touch, with each camera handled separately for image processing and blob tracking. A gesture engine interprets the information about touches on the screen as gesture events.
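Since each camera only sees a patch of the wall, per-camera blob positions have to be mapped into one shared wall coordinate space before a gesture engine can combine them. Here is a hypothetical sketch of that stitching step; the tile layout and coordinate convention are my assumptions, not RENCI's actual calibration:

```python
# Hypothetical sketch of stitching per-camera blob positions into one
# wall coordinate space. Assumes a 4 x 2 grid of cameras, each covering
# one tile of the wall, with camera-local (x, y) normalized to [0, 1].

TILE_W, TILE_H = 0.25, 0.5   # tile size as a fraction of the wall

def to_wall_coords(camera_index, local_x, local_y):
    """Offset a camera-local blob position by its tile's origin."""
    col, row = camera_index % 4, camera_index // 4
    return (col * TILE_W + local_x * TILE_W,
            row * TILE_H + local_y * TILE_H)

# A blob at the center of camera 5 (second row, second column)
print(to_wall_coords(5, 0.5, 0.5))  # -> (0.375, 0.75)
```

Real setups also need per-camera lens and keystone calibration, and blob merging where adjacent camera views overlap, which is presumably what the calibration device pictured below helps with.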
Graphics from the RENCI Vis Group Multi-Touch Blog
Here is a cool picture of the "Multi-touch Calibration Device", which uses a built-in TouchLib utility.
Additional information can be found on the RENCI Vis Group Multi-Touch Blog.
"Touchlib is a library for creating multi-touch interaction surfaces. It handles tracking blobs of infrared light, and sends your programs these multi-touch events, such as 'finger down', 'finger moved', and 'finger released'. It includes a configuration app and a few demos to get you started, and will interface with most types of webcams and video capture devices. It currently works only under Windows but efforts are being made to port it to other platforms."
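The event model the quote describes can be mimicked by comparing the blobs detected in each frame to those in the previous frame, and firing 'finger down', 'finger moved', and 'finger released' callbacks by blob id. A toy Python sketch (Touchlib itself is a C++ library; the names here are illustrative):

```python
# Toy version of the blob-event model described above: diff each frame's
# blobs against the previous frame and notify a listener of the changes.

class BlobTracker:
    def __init__(self, listener):
        self.listener = listener
        self.prev = {}               # blob id -> last known (x, y)

    def frame(self, blobs):
        """blobs: dict mapping blob id -> (x, y) detected this frame."""
        for bid, pos in blobs.items():
            if bid not in self.prev:
                self.listener("finger down", bid, pos)
            elif pos != self.prev[bid]:
                self.listener("finger moved", bid, pos)
        for bid, pos in self.prev.items():
            if bid not in blobs:
                self.listener("finger released", bid, pos)
        self.prev = dict(blobs)

events = []
tracker = BlobTracker(lambda kind, bid, pos: events.append((kind, bid)))
tracker.frame({1: (10, 10)})         # finger touches down
tracker.frame({1: (12, 11)})         # finger drags
tracker.frame({})                    # finger lifts
print(events)
```

An application built on this style of API only handles the three event types, and never touches the camera images or blob detection directly.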
If you are interested in creating your own multi-touch table, the NUI-Group website and forums are a great place to start.
If you follow my blog, you probably know that I've taken several graduate courses at UNC-Charlotte. Some of my professors and a classmate or two have been involved in some exciting visualization research over the past year. (If you are serious about multi-touch and other visually-based applications, it is worth taking some time to familiarize yourself with visualization and interaction research.)
News from the UNC-Charlotte Vis Center:
At the University of North Carolina at Charlotte, RENCI is a collaboration between the UNC Charlotte Urban Institute, the Center for Applied Geographic Information Science, and the Charlotte Visualization Center.
Robert Kosara’s group wins two awards at IEEE VisWeek
Caroline Ziemkiewicz and Robert Kosara won Honorable Mention (the second highest award) at the IEEE InfoVis Conference for their paper, “The Shaping of Information by Visual Metaphors”. Also, Alex Godwin, Kosara’s student, won Best Poster for his submission, “Visual Data Mining of Unevenly-Spaced Event Sequences”.
The Vis Center is pretty fascinating, as you can see by the group of visitors at an open house.
If you are just as fascinated by this stuff as the guys in the picture, here are links to some recent papers by UNC-Charlotte faculty affiliated with the Vis Center:
The Shaping of Information by Visual Metaphors (Caroline Ziemkiewicz and Robert Kosara)
Evaluating the Relationship Between User Interaction and Financial Visual Analysis (Dong Hyun Jeong, Wenwen Dou, Felesia Stukes, William Ribarsky, Heather Richter Lipford, Remco Chang)
Visual Analytics for Complex Concepts Using a Human Cognition Model (Tera Marie Green, William Ribarsky, and Brian Fisher)
Nov 6, 2008
Below is a video clip of a multi-touch photo presentation system running on Windows 7. The application supports gesture and touch input, and includes gesture and physics engines.
Apparently the application can run on Vista, Win 7, and Win 7 Touch.
Here is an HP TouchSmart PC, running a Touch Map application on Windows 7:
The following clip is of a newscaster using a multi-touch transparent screen. The display is from U-Touch Ltd., a partner of NextWindow. In my opinion, the application enhances the viewer's understanding of the various news topics, and is visually appealing as well.
The graphics engine used in this application was developed by Vizrt, the same folks who were behind CNN's video hologram. Here are a few pictures from the Vizrt website:
The hologram "transporter" room.
For more videos using Windows 7 apps, see creamhackered's YouTube channel. (Videos appear to be from NeoWin Net.)
Windows 7 Design Concepts and Usability Tests
Nov 5, 2008
CNN's Holographic Technology: Wolf Blitzer and Jessica Yellin, Anderson Cooper and Will.I.Am, and the music video.
Here is a video clip of CNN's holographic technology used to transmit a 3-D video image of Jessica Yellin, speaking with Wolf Blitzer on Election Day, November 2008. A partial transcript is included below. Before you get too excited about this technology, know that the correspondents see the image on a plasma TV, not in "real" 3-D, according to Gizmodo.
"There are 35 high-definition cameras ringing me, in a ring around me, I'm in the center, and they shoot my body at different angles, and I'm told that transmits what looks like an entire body image back there to New York. These cameras, I'm told, talk to the cameras in New York. So they move, and they know when to move when the cameras in New York move, and it looks a little different from a real person there, but it is pretty remarkable."
"It's still Jessica Yellin, and you look just like Jessica Yellin, and we know you are Jessica Yellin....You're a terrific hologram, thanks very much Jessica Yellin is in Chicago, she's not here in New York with us... but you know what, it looks like she was right here with us, it is pretty amazing technology."
Here is the clip of Anderson Cooper interviewing Will.I.Am's hologram, beamed in from Grant Park in Chicago.
"Let's see if we can beam him in now. There we go. Will, thanks very much for being with us. How is this night for you?"
"Ah, this is great. We're at an eve of a brand new day in America, and it feels great being here in Chicago. All this technology, I'm being beamed to you, like in Star Wars and stuff..."
"Yeah, it looks like, especially like, exactly like in Star Trek, when they'd beam people down, that's what it looks like here....We are doing this interview with you this way because it is a lot quieter than having you in the crowd, it's very hard to hear in this crowd, and we appreciate you being with us..."
Will.I.Am goes on to discuss the song he created from an inspirational speech by Obama.
How the CNN Holographic Interview System Works
Election Night TV: Networks Aim to Dazzle With Gadgetry
-Edward C. Baig and Jon Swartz, TechNewsworld
Behind the hologram:
"Vizrt creates leading-edge content production tools for the digital media industry - From award-winning 3D graphics & maps to integrated video workflow solutions."
FYI: Here is the video of the song, We Can Change. (My first view of this video was today, after the election.)
Picture via Engadget
Nov 4, 2008
Searching for Multi-Touch Info? Drivers for Windows 7 Available from NextWindow & HP TouchSmart... More about N-Trig... Multi-Touch Resources
NextWindow Releases Touch Screen Drivers for Microsoft Windows 7
"The technology to build multi-touch applications for next year's operating system is available today."
David Villarina, NextWindow
dvillarina @ nextwindow.com
KFAR SABA, Israel & AUSTIN, Texas--(BUSINESS WIRE)--"N-trig, providers of DuoSense™ technology, combining pen and capacitive touch in a single device, brings the power of technology and the human touch together to begin a new era in interface technologies and lead the Hands-on computing evolution. With the industry's only combined pen and multi-touch capabilities, N-trig is transforming the way people interact with computers."
"...Realizing the power of the human interface, N-trig’s DuoSense digitizers are designed to integrate easily, support any type of screen, keep devices slim, light and bright, and can support numerous applications from small notebooks to large LCDs. Combined, pen and touch enables users to open files, manipulate pictures and browse the desktop as they would the files on their desk...Currently available on the Dell Latitude XT and additional OEM designs planned to come to market in early 2009, N-trig is opening a window onto a world where multi-touch is the accepted standard for computer interfaces."
All you ever wanted to know about Multi-Touch:
Bill Buxton's Multi-Touch Systems that I Have Known and Loved
All you ever wanted to know about interactive tables that support collaboration:
Pasta & Vinegar's List of Interactive Tables
(From 2005, but has been updated.)
All you ever wanted to know about tangible user interfaces:
5 Lessons About Tangible Interfaces (pdf) - Nicolas Nova
All you ever wanted to know about interactive gestures:
Interactive Gestures (wiki)
All you ever wanted to know about open-source multi-touch & related technology:
NUI Group (Natural User Interface)
Resources and Links about Touch Screens, Tables, and Multi-touch
Note: I highlight news, thoughts, and reflections about interactive multimedia, multi-touch, and related emergent technologies on this blog.
If you don't see what you are looking for on this post, feel free to do a search on this blog, or my other blog, Technology-Supported Human-World Interaction.
For Multi-Touch Interaction Humor:
Multi-touch Parody of CNN's Magic Map Wall: Saturday Night Live's Weekend Update MegaPixel Giant Touch Map
Microsoft Surface Parody
Nov 1, 2008
Emerging Technologies: SHiFT 08 Conference - Sensor Networks and Data for the Open Internet of Things
I first learned about the "Internet of Things" nearly two years ago when I was taking a Ubicomp class. Since that time, things have sped quickly along in the research arena, but I don't think most folks are aware of how this technological transformation will impact our daily lives.
The video clip below is from David Orban's presentation at SHiFT 08, "Why We Need to Listen to our Things":
Orban discusses how we currently spend much of our time taking care of our mobile devices, but as the number of devices increases, it becomes difficult to manage things as we have in the past. There is just too much data... There is a need to obtain information from sensor networks. "We must derive deep knowledge of the environment from these sensors." In the video clip, Orban goes on to discuss the various challenges in this field:
- Signal to Noise problem.
- Signal to Signal problem.
- Management of the sheer volume of data that is generated, or will be generated - how data is filtered and analysed.
- Dependability - managing spime systems and sensor networks of tens of billions of elements.
- Aggregation of data to derive second order knowledge.
- New phenomena will surprise us in the future; we will learn more about our environment and listen to our planet more clearly.
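The first challenges on Orban's list can be made concrete with a toy example: a first pass separates signal from noise in each sensor's stream, and a second pass aggregates across sensors into higher-order knowledge. This is my own illustrative sketch, not anything from the talk; the sensor names and threshold are made up.

```python
# Toy filter-then-aggregate pipeline: per-sensor denoising addresses the
# signal-to-noise problem, and cross-sensor aggregation derives "second
# order" knowledge. Illustrative only; data and tolerance are invented.
from statistics import mean, median

def denoise(readings, tol=5.0):
    """Drop readings far from the sensor's median (treated as noise)."""
    m = median(readings)
    return [r for r in readings if abs(r - m) <= tol]

def aggregate(sensors):
    """Second-order knowledge: mean of each sensor's denoised level."""
    return mean(mean(denoise(r)) for r in sensors.values())

sensors = {
    "room-a": [20.1, 20.3, 99.0, 20.2],  # 99.0 is a sensor glitch
    "room-b": [21.0, 21.2, 21.1, 21.3],
}
print(aggregate(sensors))  # roughly 20.7 once the glitch is filtered out
```

At tens of billions of elements, of course, the interesting problems are exactly the ones this sketch ignores: where the filtering runs, how the data moves, and what happens when sensors disagree.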
SHiFT 08 was held in Lisbon, Portugal on October 15-17. The focus of this year's conference was Transient Technologies, "in the sense that technology is breaking up with its digital boundaries and it's becoming a vital part of a lot of the things we do and interact with in our daily lives."
The themes of SHiFT 08 included user experience, mobile computing, sustainability, the social web, web design, open technologies, digital media, artificial intelligence, spimes, and knowledge & innovation.
Picture from PCMag.com
The video clip below is a demonstration of a prototype of SecondLight, a Surface-like system developed by a team of researchers at Microsoft Research UK. If you place a tracing-paper-like object on the surface, secondary information about the content on the surface can be revealed. The system relies on a projector system and a special liquid-crystal diffuser in the display.
Here is another video from Slashgear that describes SecondLight. According to the video, SecondLight is a multi-point display, displaying images on a surface, and images through a surface. The system has two hi-res cameras, two projectors with optical shutters, infrared illumination, and an electrically switchable diffuser.
SecondLight can track images from a distance, track IR-reflective objects, and also track IR-emissive objects, with mobile, multi-point touch detection.
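The electrically switchable diffuser is what makes the two displays possible: when it is frosted, one projector's image lands on the surface; when it is switched clear, the other projector's image passes through to objects held above it. Alternate the two states faster than the eye can follow and you get both images "at once". Here is a toy sketch of that time-multiplexing loop; it is my own illustration of the idea, not Microsoft's implementation:

```python
# Toy simulation of SecondLight-style time multiplexing: the diffuser
# alternates between frosted (on-surface image) and clear (through-
# surface image) each frame. Illustrative only, not MSR's code.

def run_frames(n_frames):
    shown = []
    for frame in range(n_frames):
        diffuser_frosted = (frame % 2 == 0)  # toggle diffuser every frame
        if diffuser_frosted:
            shown.append("surface")          # on-surface projector fires
        else:
            shown.append("through")          # through-surface projector fires
    return shown

# Run fast enough (the shutters gate each projector to its own frames)
# and viewers perceive both image streams simultaneously.
print(run_frames(6))  # ['surface', 'through', 'surface', 'through', 'surface', 'through']
```

The same alternation is presumably why the system needs two cameras and optical shutters: each projector and camera only gets the frames in which the diffuser is in the right state for it.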