Intel's Oasis system uses object recognition to trigger applications that generate such things as shopping lists and recipes. The system can handle more than one item of food at a time. It includes videos showing how to prepare meals, a great feature for people just starting out, or for those learning to prepare healthier meals.
In my opinion, this sort of application would be useful to people with disabilities that affect memory.
(Previously posted on THE WORLD IS MY INTERACTIVE INTERFACE blog.)
Focused on interactive multimedia and emerging technologies to enhance the lives of people as they collaborate, create, learn, work, and play.
Showing posts sorted by date for query gesture.
Jul 2, 2010
Gesture and object recognition on your kitchen counter: The Oasis Project demo from Intel Labs
Posted by
Lynn Marentette
Jun 22, 2010
Kinect Sensor for Xbox 360 Offers Full-Body and Gesture Interaction: No controllers or remotes!
Project Natal was the code name for the Kinect Sensor for Xbox 360. For $149.99 you can pre-order your very own system from the Microsoft Store and interact with video games using your body alone. No need for controllers or remotes!
Presentation about the fitness benefits of the Kinect Sensor for Xbox 360:
This video is a preview of a dance game for the Xbox using the Kinect Sensor:
It would be great if I could do my Zumba moves with the Kinect Sensor system and a great Xbox application!
Here's another video that explains the system in more detail, with brief interviews of innovators from Microsoft:
Here is a copy of my previous post about Project Natal:
How It Works: Microsoft's Project Natal for the Xbox 360 video from Scientific American
Microsoft gathered a wealth of biometric data to recognize the range of human movement in order to develop an algorithm for the next generation of controller-less gaming. "Natal will consist of a depth sensor that uses infrared signals to create a digital 3-D model of a player's body as it moves, a video camera that can pick up fine details such as facial expressions, and a microphone that can identify and locate individual voices."
The technology behind Natal has the potential for a range of uses beyond gaming.
Scientific American article:
Binary Body Double: Microsoft Reveals the Science Behind Project Natal for Xbox 360
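The depth-sensing idea is simple to sketch in code. The toy example below is my own illustration (not Microsoft's actual algorithm): it thresholds a per-pixel depth map to pull a player's silhouette out of the background, which is the first step before any skeleton fitting can happen.

```python
# Toy illustration (not Microsoft's pipeline): a depth camera returns
# per-pixel distances; thresholding that map is the simplest way to
# separate a player's silhouette from the background.
def extract_silhouette(depth_mm, near=500, far=2500):
    """True for pixels whose depth (in mm) falls inside the play-space band."""
    return [[near <= d <= far for d in row] for row in depth_mm]

# Fake 3x4 frame: background wall at 3000 mm, "player" pixels at 1200 mm.
frame = [[3000, 3000, 3000, 3000],
         [3000, 1200, 1200, 3000],
         [3000, 1200, 1200, 3000]]
mask = extract_silhouette(frame)
print(sum(cell for row in mask for cell in row))  # 4 player pixels
```

A real system would then fit a skeletal model to that silhouette, but even this crude mask is enough to drive simple "touch the falling object" games.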
Posted by
Lynn Marentette
May 29, 2010
Preview: Update on Touch & Multitouch Technologies, Websites, and Touch-Interactive Multimedia Apps
It's about time for an update on touch- and gesture-interactive technologies.
I've been researching the latest in "touch" screens and new developments in interactive multimedia content. In just one year, a multitude of websites have been transformed from static to interactive.
Although the initial objective for some of these websites was to optimize the interface and navigation for people accessing websites via touch-screen cell phones, some are ideal for use on touch-enabled slates, the iPad, and even larger touch screen displays and surfaces.
Convergence seems to be the buzzword of the day. Interactive TV. Game sets with Internet access. Movies on your cell phone. Touch screen Coke machines displaying movie trailers. What's happening now, and what is next?
I welcome input from my readers in the form of links to websites, university labs with grad students and professors who are obsessed with emerging interactive technologies, proof-of-concept video clips, video clips of related technologies that are new-to-market, etc.
I will add video clips to the following playlist:
FYI: I'm also in the middle of writing a series of posts about 3D television technologies for the Innovative Interactivity blog, and welcome input from my readers about this topic.
RELATED (Previous posts)
(the above post includes links to various multi-touch developer kits and resources)
Multi-touch Linux on a Stantum Slate PC & More (links to a nice overview about multi-touch interaction from ENAC)
Multimedia, Multi-touch, Gesture, and Interaction Resources (needs a little updating)
Posted by
Lynn Marentette
Labels:
display,
interactive,
interactive website,
multi-touch,
post-WIMP,
research,
resources,
technology,
touch,
update
May 13, 2010
Gesture Vocabulary from N-Trig: "N-act Hands-on"
N-Trig is a company founded in 1999 that provides pen and multi-touch solutions that integrate into LCDs and other devices, and offers independent software vendors (ISVs) and original equipment manufacturers (OEMs) opportunities to create new interactive, hands-on computing experiences, according to the company's profile. The latest news about N-Trig's interactive capabilities was outlined in a recent article by Dana Wollman in Laptop:
I found the following video from N-Trig on YouTube, released on 5/11/10, that shows the new gesture set that is supported by N-Trig:
The N-act Gesture Set (depicted in the video below):
N-act3SideSweep - sweep three fingers together to browse
N-act2+1 - select from a displayed menu
N-act3Tap - displays open windows in a 3D carousel
N-act3Hold - rotates the 3D carousel
N-act2Scroll - scroll through a document
N-act2Tap - minimizes the open window and displays the desktop
N-act1Touch - select an item on the screen
N-act4Tap - displays a customized, relevant list of web page icons; selected text/item is pasted into the chosen app
N-act4Zoom - magnifies a movable selected area of the screen
N-act4Select - selects an area and opens a context-sensitive menu
Here is the promotional information from the YouTube video:
"This video demonstrates the N-trig N-act Gesture Vocabulary, a set of true multi-touch gestures for two plus one, three- and four-fingers, enabling users to perform an action directly on the screen, and providing a rich set of hand movements that enhance the overall user experience, enabling a whole new approach to how we interact with our computing devices, for a true Hands-on computing experience."
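For fun, here's how a gesture vocabulary like N-act could be wired up as a simple lookup table mapping a finger count plus a gesture name to an action. The gesture names come from the list above; the dispatch function and action strings are my own invention, not N-Trig's API.

```python
# Toy gesture dispatcher in the spirit of the N-act vocabulary.
# Keys are (finger-count, gesture) pairs; values are the actions
# described in the post. Unrecognized gestures are ignored.
ACTIONS = {
    ("3", "SideSweep"): "browse",
    ("2+1", "Tap"): "open menu",
    ("3", "Tap"): "show 3D carousel",
    ("3", "Hold"): "rotate carousel",
    ("2", "Scroll"): "scroll document",
    ("2", "Tap"): "minimize window",
    ("1", "Touch"): "select item",
    ("4", "Zoom"): "magnify region",
    ("4", "Select"): "open context menu",
}

def dispatch(fingers, gesture):
    """Look up the action for a recognized gesture, or 'ignore'."""
    return ACTIONS.get((fingers, gesture), "ignore")

print(dispatch("3", "Tap"))   # show 3D carousel
print(dispatch("5", "Wave"))  # ignore
```

Real frameworks do much more (recognizing the gesture from raw touch points in the first place), but the final step is often exactly this kind of table.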
RELATED
Dana Wollman, 5/1/10, Laptop
www.n-trig.com
N-trig DuoSense Technology
The Future is Now: Creating and Developing a Touch-Enabled World (pdf)
N-trig N-act Hands-On Gesture Vocabulary (N-Trig website)
Better Multi-Touch Displays Coming
Mike Miller, Forward Thinking Blog, PC Mag (3/3/10)
DuoSense: Creating a Multi-touch Enabled World (November 2009)
Posted by
Lynn Marentette
May 7, 2010
The atracTable is Coming Soon: Sony will launch a high-definition touch- and gesture-interactive tabletop using Atracsys's technology!
Sony will be introducing a full high-definition interactive table, a result of a collaboration with the Swiss company Atracsys.
(At about 2:14 in the video below, there is a demonstration of an application that recognizes facial features and expressions, which are used to control and manipulate images on the screen.)
Images from the Sony Stand at Vision 2009
Here is an "overview" video that shows a number of uses for the atracTable:
Here is a version of the atracTable, using a tangible user interface to create music:
Here is the "Nespresso" table, which provides information about the type of coffee you are drinking. It makes more sense as demonstrated in the video.
Atracsys @ Baselworld 2010
beMerlin: Interactive gesture-based application for retail:
Posted by
Lynn Marentette
Labels:
atracsys,
attractale,
beMerlin,
gestures,
HD,
interactive,
sony,
surface,
tabletop,
touch
Apr 25, 2010
LM3Labs' Catchyoo Interactive Koi Pond; release of ubiq'window 2.6 Development Kit and Reader
Catchyoo Koi FX, from LM3Labs
Catchyoo Koi FX from Nicolas Loeillot on Vimeo.
The music on the video clip is by the band Remioromen, from Japan.
LM3Labs recently released the ubiq'window 2.6 Pack, a development kit and reader that handles gesture interaction for proximity touch-less technology based on computer vision. It includes a calibration mode and usage statistics, and is compliant with Windows 7. In the near future, LM3Labs will release new software for their partners and ubiq'window developers.
About LM3Labs:
"Focused on fast transformation of innovation into unique products, LM3Labs is a recognized pioneer in computer vision-based interactivity solutions. LM3Labs is a fast growing company based in Tokyo, Japan and Sophia-Antipolis, France." -LM3Labs Blog
Posted by
Lynn Marentette
Jan 31, 2010
Flexible Interfaces & Useful Wearables for All - Combining Good Concepts: Slap Bracelet, flexible ePaper, Morph, Asus Waveface, the Porcupine, Sixth Sense, and the iPhone/iPad. (How about an iCuff?!)
One of the projects I toyed with for a Ubiquitous Computing class three years ago was an application that would work nicely on a PDA that I could somehow strap to my wrist. I wanted something that would keep my hands free and support some of my work functions as a school psychologist, such as observing and assessing students, counseling young people, and consulting with teachers and parents. The application would also be useful to my colleagues.
The second part of this application would support teens and young adults with more severe disabilities who participate in a community-based vocational training program. The application would provide a means of giving the students feedback during on-site work activities as well as in work adjustment simulation activities at school.
I abandoned the idea early on, due to frustrating Bluetooth issues and the lack of a suitable way to secure the PDA to various types of wrists.
It is 2010 and now we have the iPhone, iPad, touch-screen netbook/slates, e-readers, 3GS, consumer-ready RFID, low-cost portable GPS devices, and in some places, ubiquitous free Wi-Fi, low-cost digital cameras, and a range of devices that have the potential to play together in some way. Below are a few examples of how far things have come.
EXAMPLES

True Wearable, by Propeller (This was a prototype introduced in 2007, I think.)



(Belkin Sports Armband for iPhone; Trueband, by Grantwood Technology; Marware SportShell)
RIDGELINE W200
The water-resistant Ridgeline has many of the features I'd like, such as the touch screen interface, a backlit keypad, an adjustable strap, and a range of I/O options. I kind of liked the wearable scanner and imager feature. The scanner/imager can be rotated. If the imager also included a video camera, it would be a plus, since I use video quite a bit to develop video social stories for some of the students I work with who have autism spectrum disorders.
The Ridgeline W200 is too ugly and clunky for me to consider wearing! I'm sure the price of the Ridgeline would be out of the question for public school employees and community mental health workers who work with young people with special needs.


(Ridgeline W200 Wearable Touch-Screen Computer)


"Everybody had them or at least seen ‘em. Slap bracelets were usually made of thin piece of aluminum wrapped in fabric. Using the same form, Chocolate Agency came up with a mini multimedia device that snaps on with a slap. The entire surface is E-Paper and possesses all its thin, high contrast, power efficient qualities. The length can be adjusted by adding magnetic snaps to the ends. Best part is there’s no recharging needed. It gets all the power it needs via kinetic energy so go ahead, go slap happy." -Yanko Design
Nokia Morph (Concept)
The Nokia Morph is a concept project that integrates nanotechnology into mobile devices. I posted about the Morph last year: Last Night I Dreamt About Haptic Touch-Screen Overlays

Asus Waveface Smartphone (Video from CES 2010)
The Porcupine
This morning I devoted about 45 minutes skimming over the Proceedings for the Fourth International Conference on Tangible, Embedded, and Embodied Interaction, TEI '10, held January 25-27 in Cambridge, MA. A paper related to the Porcupine, a wearable sensing device, caught my eye:
Coming to Grips with the Objects We Grasp: Detecting Interactions with Efficient Wrist-Worn Sensors (Eugin Berlin, Jun Liu, Kristof van Laerhoven, Bernt Schield, TEI 2010)
From what I can tell, the features of the Porcupine, if embedded in a wearable iPhone-type device, would be extremely useful in a variety of fields, including special education, rehabilitation/habilitation, health care, mental health, vocational training for people with more complex disabilities, and so on.
Porcupine

Porcupine Project Documents
(The code for Porcupine is available on Sourceforge.net.)
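To give a flavor of what wrist-worn sensing involves, here is a toy sketch (my own illustration, not the Porcupine's actual code) that flags "active" windows of accelerometer data by their variance, the kind of simple feature such devices use to detect when the wearer is handling an object. The sample values and threshold are invented.

```python
# Toy activity detector for a wrist-worn accelerometer.
# A still wrist produces low-variance readings (~1 g of gravity);
# movement produces high-variance readings.
def window_variance(samples, size):
    """Variance of each non-overlapping window of `size` samples."""
    out = []
    for i in range(0, len(samples) - size + 1, size):
        w = samples[i:i + size]
        mean = sum(w) / size
        out.append(sum((x - mean) ** 2 for x in w) / size)
    return out

def active_windows(samples, size=4, threshold=0.5):
    """True for windows whose variance exceeds the activity threshold."""
    return [v > threshold for v in window_variance(samples, size)]

still = [1.0, 1.0, 1.1, 1.0]           # wrist at rest (~1 g)
moving = [0.2, 2.1, 0.1, 1.9]          # vigorous movement
print(active_windows(still + moving))  # [False, True]
```

Classifying *which* activity is happening takes far more sophisticated features, but this is the basic shape of the pipeline.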
Sixth Sense

I posted about Sixth Sense earlier in 2009:
Pattie Maes TED Talk: Sixth Sense - Mobile Wearable Interface and Gesture Interaction (for the price of a cell phone?!) Sixth sense allows you to use ANY surface for interaction, and can provide you relevant information about whatever is in front of you. This would be a great feature for people with disabilities and in the future might also function as a cognitive prosthesis.
Below is a TED Talk video of Pranav Mistry, the Ph.D. student who invented Sixth Sense, discussing open-source Sixth Sense and related applications:
So now what?
After the iPad was unveiled, several people who blog about assistive technology and augmentative communication were curious to see if the new device had the potential for use with people who have disabilities.
It does.
Here are a few links:
From what I understand, the iPad will work with Proloquo2Go, an alternative/augmentative communication program for Apple's iPhone and iPod Touch. Proloquo2Go is priced much lower than other PDA-based systems and can be purchased at the iTunes App Store. It can be downloaded for use on the iPad once the iPad is available to consumers.
This is great news.
Now someone just needs to get on the convergence train and develop a flexible, mobile device that incorporates the best features of the devices and applications that currently exist!
Posted by
Lynn Marentette
Jan 28, 2010
TEI '10 Info and Links: Fourth Annual International Conference on Tangible, Embedded, and Embodied Interaction
In my dreams, I am a full-time tech student. Fortunately, I can follow my inner geek and share what I find on this blog. The information below was inspired by links from a Facebook status update by Laurence Muller, author of the Multi-Gesture blog.
The video below is a montage of TEI'10 hands-on studio:
TEI Studios from jay silver on Vimeo.
"From TEI 2010. These are the hands-on studios (like workshops) where 200 people participated in building and making all day long elbow to elbow, getting into the details and taking perspectives."
About TEI:
TEI '10: Fourth International Conference on Tangible, Embedded, and Embodied Interaction, January 25-27, Cambridge, MA.
"TEI, the conference on tangible, embedded, and embodied interaction, is about HCI, design, interactive art, user experience, tools and technologies, with a strong focus on how computing can bridge atoms and bits into cohesive interactive systems."
Here is a link to the keynote:
http://www.vikmuniz.net/
Here is a link to one of the papers presented at TEI:
Electronic Popables: Exploring Paper-Based Computing through an Interactive Pop-Up Book (pdf)- Jie Qi and Leah Buechley, MIT Media Lab, High-Low Tech Group
More about Laurence Muller:
Laurence Muller (M.Sc.) is a Fellow at Harvard University (USA) in the School of Engineering and Applied Science (SEAS) / The Initiative in Innovative Computing (IIC), in the Scientists' Discovery Room Lab (SDR Lab). Currently he is working on innovative scientific software for multi-touch devices and display wall systems. (I took Laurence's information from his blog.)
More to come!
Posted by
Lynn Marentette
Jan 26, 2010
There is a need for multi-touch/gesture designers/developers!
If you are a talented interactive web designer/developer, game designer/developer, traditional programmer with a creative bent, or someone who is thinking about working with technology in the future as a programmer or designer, I urge you to consider designing and developing multi-touch applications in the near future.
In my opinion, there will be a need for multi-touch web applications as well as for multi-touch education and collaboration applications for the SMART Table, Microsoft's Surface, multi-touch tablets like the rumored iTablet from Apple, and the multi-touch laptops and all-in-ones (Dell, HP, etc.).
Below are direct links to some of my blog posts related to multi-touch applications and screens. If you are fairly new to multi-touch, I'm sure that looking through some of my blog posts will be helpful. All of the posts have links to resources, and most have photos and video clips of multi-touch in action.
If you are new to this blog, I have a great deal of information, links, photos, and video clips of various multi-touch screens and applications. The best way to find them is to enter a keyword in this blog's search box: multitouch, touch screen, gesture, multi-touch, etc.
Also do a search on my other blog: The World Is My Interface http://tshwi.blogspot.com
Here are some links:
Do you have an HP TouchSmart, Dell Studio One or NextWindow touch-screen? NUITech's Snowflake Suite upgrade provides a multi-touch plug-in
http://bit.ly/5tdlhc
The following blog post has a video clip that shows someone from Adobe painting with a multi-touch application in development:
More Multi-Touch!: Rumor of the mobile apple iTablet; AdobeXD & Multitouch; 10-finger Mobile Multitouch: http://bit.ly/4S9Upm
Ideum's GestureWorks: http://bit.ly/4C1p7M
Interactive Walls, Interactive Projection Systems, GestureTek's Motion-Based Games: http://bit.ly/6GRGtW
Intuilab's Interfaces: Multi-touch applications/solutions for presentations, collaboration, GIS, and commerce: http://bit.ly/7RK7qN
For software developers:
How to do Multitouch with WPF 4 in Visual Studio 2010: http://bit.ly/7c4YqC
Posted by
Lynn Marentette
Jan 23, 2010
More interactivity: Interactive Walls, Interactive Projection Systems, GestureTek's Motion-based Game
I recently discovered that Accenture's website has a few interactive web pages that provide information about the company's interactive wall technology. What I liked about the site is that I could interact with it by touching the screen of my HP TouchSmart PC, and it worked! (I'm always on the lookout for interactive websites that are good for touch-screen interaction.)
Below are screen shots of the on-line semi-functional demo of Accenture's Strategic Decision Interface:
(The website worked through touch-interaction via my HP TouchSmart PC!)
For more information: Interactive Wall Technology: Seeing the Big Picture
Newfangled Projector Systems:
New Projectors Make Any Wall an Interactive Whiteboard: Epson, Boxlight unveil potentially game-changing technology -Meris Stansbury, eSchool News 1/13/10
"In a move that could shake up the interactive whiteboard (IWB) market, two projector manufacturers have just released new products that can turn virtually any surface into an IWB...The development means schools no longer have to buy separate hardware to enjoy the benefits of IWBs, whose interactive surface and ability to engage students have made them quite popular in classrooms."
The article highlights Epson's BrightLink 450i, an ultra-short-throw projector that eliminates most shadows and can project images from 59 to 96 inches diagonally at WXGA resolution. The system requires an infrared pen.
Another system is the ProjectoWrite2/W from Boxlight, which is a short-throw LCD projector with XGA resolution that can project up to 80 inches diagonally.
GestureTek
I've written a few posts in the past about GestureTek. I wonder if their technology would work with the projection systems mentioned in the eSchool News article. Below are a few examples of what GestureTek's been doing lately:
GestureTek's Video Game Wall at the Child's Play Activity Center (Las Vegas)


The above pictures of the Child's Play Activity Center show how GestureTek's WallFX interactive display system can be used to create a fun environment for children. The system includes a ceiling projector and a camera that can capture full-body motion. The system provides 25 games and special effects. Wouldn't this concept be great for interactive and fun educational games?
For details about this system:
GestureTek's video game wall shows where gesture-based games can go
-Dean Takahashi, GamesBeat, 8/25/09
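A camera-based wall like WallFX has to detect motion somehow, and the textbook starting point is frame differencing: pixels that change by more than a threshold between consecutive camera frames are treated as "motion" for the effects engine to respond to. The sketch below is my own illustration, not GestureTek's algorithm.

```python
# Toy frame-differencing motion detector. `prev` and `curr` are
# grayscale frames (rows of 0-255 pixel values); the mask marks
# pixels that changed by more than the threshold.
def motion_mask(prev, curr, threshold=30):
    return [[abs(c - p) > threshold for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

prev = [[10, 10, 10],
        [10, 10, 10]]
curr = [[10, 200, 10],
        [10, 10, 180]]
mask = motion_mask(prev, curr)
print(sum(cell for row in mask for cell in row))  # 2 moving pixels
```

Production systems add background modeling and noise filtering on top, but this difference mask is what the projected effects (ripples, scattering fish, etc.) ultimately react to.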
GestureTek's Immersive Multi-platform Game: Head-butting Interactive Soccer
"Video gesture control pioneer GestureTek., unveiled its new Momo™ Software Development Kit for game developers and original equipment manufacturers at the 2010 Consumer Electronics Show. Gesture recognition software tracks motion and objects such as faces and hands and brings immersive, gesture-based interactivity to multiple platforms, such as PCs, laptops, mobile phones, toys and other devices. The video is a demonstration of a head-butting soccer game."
GestureTek Interactive City Flight Simulator Game
Below are screen shots of the on-line semi-functional demo of Accenture's Strategic Decision Interface:
(The website worked through touch-interaction via my HP TouchSmart PC!)
For more information: Interactive Wall Technology: Seeing the Big Picture
Newfangled Projector Systems:
New Projectors Make Any Wall an Interactive Whiteboard: Epson, Boxlight unveil potentially game-changing technology -Meris Stansbury, eSchool News 1/13/10
"In a move that could shake up the interactive whiteboard (IWB) market, two projector manufacturers have just released new products that can turn virtually any surface into an IWB...The development means schools no longer have to buy separate hardware to enjoy the benefits of IWBs, whose interactive surface and ability to engage students have made them quite popular in classrooms."
Another system is the ProjectoWrite2/W from Boxlight, which is a short-throw LCD projector with XGA resolution that can project up to 80 inches diagonally.
GestureTek
I've written a few blogs in the past about GestureTek. I wonder if their technology would work with the projection systems mentioned in the eSchool News article. Below are a few examples of what GestureTek's been doing lately:
GestureTek's Video Game Wall at the Child's Play Activity Center (Las Vegas)


The above pictures of the Child's Play Activity Center show how GestureTek's WallFX interactive display system can be used to create a fun environment for children. The system includes a ceiling projector and a camera that can capture full-body motion. The system provides 25 games and special effects. Wouldn't this concept be great for interactive and fun educational games?
For details about this system:
GestureTek's video game wall shows where gesture-based games can go
-Dean Takahashi, GamesBeat, 8/25/09
GestureTek's Immersive Multi-platform Game: Head-butting Interactive Soccer
"Video gesture control pioneer GestureTek unveiled its new Momo™ Software Development Kit for game developers and original equipment manufacturers at the 2010 Consumer Electronics Show. Gesture recognition software tracks motion and objects such as faces and hands and brings immersive, gesture-based interactivity to multiple platforms, such as PCs, laptops, mobile phones, toys and other devices. The video is a demonstration of a head-butting soccer game."
GestureTek Interactive City Flight Simulator Game
Jan 21, 2010
Ideum's GestureWorks vs. Adobe AIR 2 and Flash Player 10.1: a comparison of multitouch and gesture support
Jim Spadaccini of Ideum shared information about his company's product, GestureWorks, highlighting how it provides better multi-touch and gesture support than Adobe AIR 2 and Flash Player 10.1. GestureWorks supports multiple-point drag, rotate, and scale at the same time. In the video, the application is demonstrated on an HP TouchSmart 600 and a 3M multitouch screen.
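GestureWorks itself is an ActionScript/Flash framework, but the geometry behind simultaneous rotate and scale is framework-agnostic. Here's a minimal Python sketch of the underlying math (the function name and point format are my own, not part of any GestureWorks API): two touch points define a segment whose change in length gives scale and whose change in angle gives rotation.

```python
import math

def two_finger_transform(p1_start, p2_start, p1_now, p2_now):
    """Given two touch points at gesture start and now, return the
    (scale, rotation) implied by their motion. Points are (x, y)
    tuples; rotation is in radians."""
    def span(a, b):                      # distance between the two fingers
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):                     # angle of the line joining them
        return math.atan2(b[1] - a[1], b[0] - a[0])

    scale = span(p1_now, p2_now) / span(p1_start, p2_start)
    rotation = angle(p1_now, p2_now) - angle(p1_start, p2_start)
    return scale, rotation
```

For example, fingers starting at (0,0) and (1,0) and ending at (0,0) and (0,2) yield a scale of 2 and a rotation of 90 degrees (π/2 radians). Applying drag, rotate, and scale "at the same time" amounts to computing all three deltas from the same pair of touch points on every frame.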
Adobe AIR 2 and Flash Player 10.1 vs Gestureworks 1.0: A direct comparison of multitouch and gesture support
"A direct comparison between the built-in support for multitouch found in Adobe Flash Player 10.1 beta / Adobe AIR 2 and that of the Gestureworks multitouch framework for Flash. More about this comparison can be found on the Gestureworks website (http://www.gestureworks.com) and the Ideum website (http://www.ideum.com). There is a blog post with more about this comparison and links to all of the example files at: www.ideum.com/2010/01/true-multitouch-with-adobe-flash/"
True Multitouch with Adobe Flash - Jim Spadaccini
GestureWorks Supported Gestures
Example of Ideum's GestureWorks multi-touch, multi-user design for an exhibit at the Vancouver Aquarium:
Posted by
Lynn Marentette
Jan 16, 2010
Big Data: What are the possibilities for collaborative interactive information visualization? (Video interviews of Roger Magoulas, director of research at O'Reilly)
When I return to graduate school (hopefully I'll have the means to attend full-time), I want to flesh out my ideas for an "interactive multi-dimensional, multi-media, multi-user timeline" for use on interactive multi-touch/gesture tables and displays. Although I've limited my work to a prototype of a template, I know that this concept won't work unless the application can incorporate an efficient means of handling large volumes of data, as well as data in various formats.
I want this template to be useful to people in a variety of contexts, such as students studying world history and humanities, education administrators looking at educational data over time, producers and viewers of interactive documentary programs (think interactive TV), the health industry, urban planners, the military, serious games, etc.
One of my stumbling blocks is how all of the data would be stored and analysed. What I learned a few years ago in my computer classes simply won't work.
So now what?! I think that Roger Magoulas, the director of research at O'Reilly, has some good things to say about the critical problem of handling what he calls "Big Data". Here are a few videos that I think are worth watching.
The Future of Work
Part One
Next Device (SmartPhones, netbooks, creation & consumption factors - supporting usability in multiple contexts)
YouTube Series: O'Reilly Media
Big Data: Technologies & Techniques for Large-Scale Data (Emphasis on experimental approach) Part I
Part II (Discusses new forms of databases and the use of parallel processors to handle Big Data)
Part III Key Technology Dimensions
Part IV (Focus on hardware: solid-state disks, and a new data structure called the "triadic continuum" that handles real-time data and ongoing probability estimates of data)
I would be happy to hear from anyone who is working on a project similar to the one I'm working on as a "hobby".
RELATED
Triadic Continuum
"Phaneron, KStore, Knowledge store, or simply K, is a dynamic data model that is based on the cognitive theory of C. S. Peirce. Phaneron efficiently organizes data into a unique, compact, interconnected, and fully-related data model. Phaneron is constructed using the Triadic Continuum."
For those of you who like visual representations of geeky-techy concepts, here are a few visuals and related descriptions of KStore fundamentals from the Triadic Continuum website:
"The KStore data model is constructed using the basic triad. For example, the event sequence 'cat' would be recorded as shown in 'a sequence' below. A new level of nodes is created above a lower level of nodes as a result of the triadic process. In this case the lower level of nodes contains a node for each character of the alpha-numeric character set and the new nodes reference the lower level nodes to record the sequence 'cat'. Each sequence is initialized with a reference to a BOT (beginning of thought) and terminated with an EOT (end of thought) reference."

"The data set above was used to create the K structure below with the lowest level that contains the alpha-numeric character set, the second level is created to record sequences that represent the field variables. Then a third level is created using the field variables of the second level to record the record sequences. Records recorded in this K structure reuse the field variable nodes so that these field variable sequences never have to be recorded more than once. This is just one of the attributes of a K structure that makes it very efficient." -Triadic-continuum.com
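As a thought experiment, the node-chaining idea in that description can be sketched in a few lines of Python. This is a toy model only - the class and field names are mine, not KStore's - but it shows how a sequence like 'cat' can be recorded as a chain of triadic nodes over a lower level of character nodes, bounded by BOT and EOT references:

```python
class KNode:
    """A node in a toy K-structure: 'case' points to the previous node
    in the sequence on this level, 'result' points to the lower-level
    node (here, a character) being appended."""
    def __init__(self, case, result):
        self.case = case
        self.result = result

BOT, EOT = "BOT", "EOT"   # beginning-of-thought / end-of-thought markers

def record_sequence(seq, char_nodes):
    """Record a sequence such as 'cat' as a chain of triadic nodes
    over a lower level of character nodes."""
    node = BOT
    for ch in seq:
        node = KNode(node, char_nodes[ch])   # new node references lower level
    return KNode(node, EOT)                  # terminate with an EOT reference

def read_back(end_node):
    """Walk the 'case' pointers back to BOT to recover the sequence."""
    chars = []
    node = end_node.case
    while node is not BOT:
        chars.append(node.result)
        node = node.case
    return "".join(reversed(chars))

# The lower level: one "node" per character (just the character itself here).
char_nodes = {ch: ch for ch in "abcdefghijklmnopqrstuvwxyz"}
end = record_sequence("cat", char_nodes)
```

Walking the chain with read_back(end) recovers "cat". The higher levels described in the quote would work the same way, with "field variable" chains playing the role the characters play here, so repeated field values are recorded only once.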

Mazzagatti, J.C. (2006). The Potential for Recognizing Errors in a Dataset Using a Computer Memory Resident Data Structure Based on the Phaneron of C.S. Peirce (doc)
Personal Note:
Due to the economic downturn and its impact on my family (two kids in college), I returned to work full time in mid 2008. I have a very busy day job as a school psychologist, working at two high schools as well as a program for students with multiple, severe disabilities, including autism. This has limited my ability to work on my project.
Posted by
Lynn Marentette
Jan 1, 2010
Apple iSlate, iTablet, MacBook Touch: Will it support gesture interaction & haptic feedback?
Soldier Knows Best produces great tech-oriented videos. Here's his spin on all of the rumors about the possibility of the Apple iSlate.
I just inherited a 10-month-old MacBook, installed Snow Leopard, and upgraded to iLife 2009. I'm so used to touching the screen on my HP TouchSmart PC that I found myself touching my MacBook screen from time to time, especially when I was editing video clips in iMovie. I think the latest version of iMovie was designed with touch/gesture interaction in mind!
From what I can tell, Snow Leopard and iLife 2009 will be able to support a range of touch interactions, if not gesture input as well.
Here are some rumors that have been conjured up and distributed on the web:
The Exhaustive Guide to Apple Tablet Rumors (Matt Buchanan, Gizmodo, 12/26/09)
Apple Expects to Sell 10 Million Tablets in First Year (Pete Cashmore, Mashable, 1/1/10)
iGuide Emerges as Another Potential Apple Tablet Name (Adam Ostrow, Mashable, 12/29/09)
The Tablet (John Gruber, Daring Fireball, 12/31/09)
"And so in answer to my central question, regarding why buy The Tablet if you already have an iPhone and a MacBook, my best guess is that ultimately, The Tablet is something you’ll buy instead of a MacBook."
Apple Owns iSlate.com Domain: The Mystery Deepens (Dan Nosowitz, Gizmodo, 12/25/09)
What is the Ultimate Role of the Apple Tablet? (Arnold Kim, MacRumors, 12/31/09)
iPad, iTablet, iSlate, or MacTab (Cruz Miranda, 8/31/09)
Why am I excited about this?
I want to see if the iSlate would be good for collaborative educational games, assistive technology, augmentative communication, and alternative assessment for students who have multiple/severe disabilities.
That is a huge goal, so I'm going to start simple. I am not giving up on Windows 7 multi-touch programming. I just have an urge to find out for myself what works, what doesn't, and what platform works best for specific "personas" and "scenarios".
I plan to make a little app for the iPhone/iPod Touch based on "Shoes Your Battles", a game I made several years ago for a game class. I think I'd like to make this game for the Apple iTablet!
The first version of Shoes Your Battles was created with Game Maker, and the second version was in Flash, back in the days of ActionScript 2.0. I started on a third version, one that could be used as an advergame for people to play while shopping during shoe sales, but it never got past the planning stage.
The idea for the third version came to me when my elderly aunt came to visit from out of town and just had to go shoe shopping on the day after Thanksgiving. It was extremely difficult to figure out what was on sale, what each pair actually cost after the previous mark-downs were taken off, and which sale items hadn't been marked down yet.
Adding to the confusion was the fact that there were few salespeople and herds of women. It was madness. There were pairs of shoes in the wrong boxes, boxes of shoes everywhere, and no way to quickly find out the true prices! We were in the shoe department for hours, and it wasn't as fun as you'd think. If you've ever been in a crowded women's shoe department during a fantastic shoe sale, trying to buy that special pair of shoes, you'll know what I mean.
At any rate, I wanted my little "Shoes Your Battles" game to help with this dreadful scenario, by somehow incorporating a shoe shopping advisor and a means to figure out the REAL sales prices of those awesome, to-die-for shoes. Unfortunately, the technology wasn't where it needed to be at the time- I am always dreaming up things that are too d--- futuristic!
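The price-untangling part of that scenario is, at bottom, simple arithmetic: successive markdowns multiply rather than add, which is exactly what makes them hard to do in your head in a crowded store. A tiny sketch (the function name is mine, purely illustrative):

```python
def final_price(original, markdowns):
    """Apply a chain of percentage markdowns (as fractions) to a price.
    Successive discounts multiply: [0.30, 0.25] means 'an extra 25% off
    an item already marked down 30%', i.e. 47.5% off in total - not 55%."""
    price = original
    for d in markdowns:
        price *= (1 - d)          # each markdown applies to the reduced price
    return round(price, 2)
```

So a $100 pair at "30% off, plus an additional 25% off" comes to $52.50, not the $45.00 that adding the discounts would suggest - the kind of answer a shoe-shopping advisor app could supply instantly.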
4 years later, we have iPhones and SmartPhones and 3G internet and RFID and ubiquitous WiFi and the Wii and more women who like to play games and...and... The time is ripe.
Apple better come up with the iSlate!
SOMEWHAT RELATED
Thinking about post-WIMP HCI
It is always important to re-visit wisdom from the past when thinking about new interfaces and means of technology-supported human interaction. Here are a few resources from the field of Human-Computer Interaction found on the HCI Vistas website:
The Prism of User Experience - A nice graphic metaphor to help the conceptualization process. (Dinesh Katre, 2007)
Journal of HCI Vistas: Multi-disciplinary Perspective of Usability and HCI
Personas as part of a user-centered innovation process Lene Nielsen, 1/08 HCI Vistas Vol-IV
10 Steps to Personas (Lene Nielsen, 7/07, HCI Vistas Vol-III)
Posted by
Lynn Marentette
Labels:
accessible games,
Apple,
apps,
creative programming,
design,
games,
gizmodo,
iGuide,
iTablet,
mac,
Macbook Touch,
multi-touch,
NUI,
post-WIMP,
product,
rumors,
Soldier Knows Best,
touch
No comments:
Dec 31, 2009
The Post-WIMP Explorers' Club: Update of the Updates, Morning of 12/31/09
What is the Post WIMP Explorers Club?
I came up with the name of this semi-fictional club as a way to organize my thoughts (and blog posts) regarding the development of a new metaphor for post-WIMP applications and technologies, related specifically to natural user interfaces, natural user interaction design, and off-the-desktop user experience.
Update, morning of 12/31/09:
Josh Blake, author of the blog "Deconstructing the NUI", posted Metaphors and OCGM this morning. It fleshes out post-WIMP concepts, addressing metaphors & interfaces. The premise is that NUI metaphors will be less complex than GUI (WIMP) metaphors. My feeling is that on the surface, this will hold true, especially for consumers/users and people creating light-weight applications and software widgets.
Underneath the surface, where designers' and developers' brains spend more time than users' and consumers' do, things might be more complex. Why? The technology to support the required wizardry is more complex. With convergence, the creation of new technologies, applications, communication systems, and even electronic entertainment is now dependent upon the work and thinking of people from a wider range of disciplines. Each discipline brings to the table its own set of terms, rooted in theory and even research practices.
Update, late afternoon, 12/30/09:
START HERE FOR THE "ORIGINAL" POST FROM 12/29 & 12/20/09:
Background
About a year ago I responded to a conversation between Johnathan Brill, Josh Blake, and Richard Monson-Haefel discussing "post-WIMP" conceptualization regarding natural user interfaces and interaction, otherwise known as NUI. The focus of the discussion was on Johnathan's post, "New Multi-touch Interface Conventions". At the time, we were reading Dan Saffer's book, Designing Gestural Interfaces, and contemplating new ways that technology can support human interaction and activities in a more natural, enjoyable, and intuitive manner.
A few days later, I shared some of the concepts from the discussion on a post on this blog, "Why "new" ways of interaction?". The post includes video of Johnathan Brill discussing PATA, a post-WIMP analogue to assist with multi-touch/gesture based application development, which he describes as follows:
Places
"Lighting, focus, and depth, simplified searching and effecting hyperlinked content."
Animation "Using animation to subtly demonstrate what applications do and how to use them is a better solution than using icons. Animations make apps easier to learn."
Things "Back in the days of floppy disks, objects helped us organize our content. This limitation was forced by arcane technology, but it did have one huge advantage. We used our spatial memory to help us navigate content. Things will help us organize content and manipulate controllers across a growing variety of devices."
Auras "Auras will help us track what we are tracking and when an interaction has been successful."
(For reference, I've copied some of my responses to the first discussion, which can be found near the end of this post)
A year later....
What has changed? Everything post-WIMP has been covered like a blanket by the NUI-word. "NUI" now functions as a generic term for anything that is not exactly WIMP. There is a sense of urgency now to figure out how best to conceptualize post-WIMP interfaces and interactions. Newer, affordable technologies enable us to interact with friends and family while we are on-the go. Netbooks, e-Readers, SmartPhones, large touch screen displays, interactive HDTV, and new devices with multi-modal I/O's abound. Our grandparents are on Facebook and twitter from their iPhones. Our world no longer requires us to be slaves to the WIMP mentality.
So what is the problem?
The technology has moved along so fast that application designers and developers have not had a chance to catch up. (The iPhone is an exception.) The downturn in the economy has made it difficult for many to take the leap from traditional software or web development and gain new skill sets. On top of it all, most of us over the age of 15 have been brainwashed from years of working within the constraints of WIMP. It doesn't matter if we are users, consumers, students, designers, or developers.
Even the folks least likely to have difficulty expanding into the post-WIMP world have had some difficulty. If you've had training in HCI (Human-Computer Interaction), you were inadvertently brainwashed with the best. The bulk of the theory and research you contemplated was launched at a time when WIMP was king, even as the Web expanded. Many of the principles held dear to traditional HCI folks have been shattered, and no one has come up with a "theory of everything" that will cover all of the human actions and interactions that are supported or guided by new technologies.
The problem, in part, is that letting go of WIMP is hard to do, as illustrated by the following post from the Ars Technica website: Light Touch: A Design Firm Grapples with Microsoft Surface (Matthew Braga, 6/29/09): "Ditching the mouse and keyboard means a whole lot more than just doing without two common peripherals. As those who have worked with Microsoft Surface have found out, you have to jettison decades of GUI baggage and start with a whole new mindset...In actuality, few multi-touch gestures are really anything like what we experience in the physical world. There is no situation in which we pull on the corners of an image to increase its size, or swipe in a direction to reveal more content. So, in the context of real-world interaction, these types of gestures are far from natural...gestures should not only feel natural, but logical; the purpose that gestures like these serve, after all, is to replace GUI elements to the end of making interaction a more organic process." (Be sure to read the comments.)
Now that the Surface is taking root in more places, and touch-screen all-in-one PCs and tablets are starting to multiply, more people are giving "NUI" some thought. Ron George, an interaction and product designer with experience working with Microsoft's Surface team, has contributed to the post-WIMP discussion and spent some time sharing ideas with Josh Blake, a .NET, SharePoint, and Microsoft Surface consultant for InfoStrat and author of the Deconstructing the NUI blog. The outcome of this discussion was Ron George's December 28th blog post, "OCGM (pronounced Occam) is the replacement for WIMP", and Josh Blake's post, "WIMP is to GUI as OCGM (Occam) is to NUI". (Be sure to read the comments for both of these posts!)
OCGM (as conceptualized by Ron George)
Objects "are the core of the experience. They can have a direct correlation with something physical, or they can just be objects in the interface."
Containers "will be the “grouping” of the objects. This can manifest itself in whatever the system sees fit to better organize or instruct the user on interactions. They do not have to be, nor should they be, windows. They can be any sort of method of presentation or relationship gathering as seen fit."
Gestures "I went into detail about the differences in Gestures and Manipulations in a previous post [check it out for a refresher]. Gestures are actions performed by the user that initiate a function after its completion and recognition by the system. This is an indirect action on the system because it needs to be completed before the system will react to it."
Manipulations "are the direct influences on an object or a container by the user. These are immediate and responsive. They are generally intuitive and mimic the physical world in some manner. The results are expected and should be non-destructive. These are easily performed and accidental activations should be expected and frequent."
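Ron George's gesture/manipulation distinction lends itself to a code sketch. In this hypothetical Python illustration (class names and the flick threshold are my own, not from his post), a manipulation updates its object immediately on every input event, while a gesture buffers events and fires a command only after the whole motion completes and is recognized:

```python
class Manipulation:
    """Direct manipulation: every input event immediately and
    responsively updates the target object."""
    def __init__(self, obj):
        self.obj = obj

    def on_drag(self, dx, dy):
        self.obj["x"] += dx   # immediate, non-destructive, easily undone
        self.obj["y"] += dy

class FlickGesture:
    """Indirect gesture: touch events are buffered, and the system
    reacts only after the motion is completed and recognized."""
    def __init__(self, on_flick, threshold=100):
        self.points = []
        self.on_flick = on_flick
        self.threshold = threshold

    def on_touch_move(self, x, y):
        self.points.append((x, y))        # just record; no reaction yet

    def on_touch_up(self):                # completion triggers recognition
        if len(self.points) >= 2:
            dx = self.points[-1][0] - self.points[0][0]
            if abs(dx) >= self.threshold:
                self.on_flick("right" if dx > 0 else "left")
        self.points = []
```

The asymmetry is the point: a drag manipulation gives continuous feedback and can be abandoned harmlessly, while a flick gesture is an all-or-nothing command the system judges only after the fact.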
To illustrate a point regarding the validity of the OCGM analogy proposed by Ron George, Josh Blake shares the following video of a presentation from REMIX 2009, in which August de los Reyes, the Principal Director of User Experience for Surface Computing at Microsoft, briefly discusses the TOCA (Touch, Objects, Containers, and Actions) concept, suggested as a replacement for the WIMP concept:
The video wouldn't embed, so go to the following link:
Predicting the Past: A Vision for Microsoft Surface
"Natural User Interface (NUI) is here. New systems of interaction require new approaches to design. Microsoft Surface stands at the forefront of this product space. This presentation looks at one of the richest sources for inventing the future: the past. By analyzing preceding inflection points in user interface, we can derive some patterns that point to the brave NUI world."
The concepts outlined in the presentation are similar to Microsoft's Vision for 2019
Richard Monson-Haefel added his thoughts on the OCGM discussion in his recent blog post, "What is NUI's WIMP?" Richard disagrees with the OCGM concept, as he feels it doesn't encompass some important interactions, such as speech/direct voice input. He'd probably agree that NUI is NOT WIMP 2.0.
Post-NUI, Activity Theory, and Off-the-Desktop Interaction Design:
As I was reading the recent posts and discussions regarding NUI/OCGM, I also contemplated some of what I've been reading over my holiday break, "Acting With Technology: Activity Theory and Interaction Design", written by Victor Kaptelinin and Bonnie A. Nardi. Victor Kaptelinin is the co-editor of "Beyond the Desktop Metaphor: Designing Integrated Digital Work Environments" (MIT Press, 2007), and has an interest in computer-supported cooperative work. Bonnie Nardi brings to the IT world her background in anthropology, and is the co-author of "Information Ecologies: Using Technology with the Heart" (MIT Press, 1999). The authors know what they are talking about.
It is important to note that activity theory-based interaction design is viewed as "post-cognitivist," and it is informed by some of what I studied in psychology, education, and social science years ago. Within the field of activity theory there are some important differences, which I'll save for a future post.
Below are some concepts taken from the book. I am still mulling them over through the prism of NUI, post-WIMP, PATA, TOCA, OCGM, etc. That's why there will be a "Part II", with specific examples.
"Means and ends, the extent to which the technology facilitates and constrains attaining user's goals and the impact of the technology on provoking or resolving conflicts between different goals
Social and physical aspects of the environment - integration of target technology with requirements, tools, resources, and social rules of the environment
Learning, cognition, and articulation, internal vs external components of activity and support of their mutual transformations with target technology
Development -Developmental transformation of the above components as a whole"
"Taken together, these sections cover various aspects of the way the target technology supports, or is intended to support, human actions". (page 270)
I especially like the activity checklist included in the appendix of the book, as well as the concept of tool mediation. "The Activity Checklist is intended to be used at early phases of system design or for evaluating existing systems. Accordingly, there are two slightly different versions of the Checklist, the "evaluation version" and the "design version". Both versions are implemented as organized sets of items covering the contextual factors that can potentially influence the use of computer technology in real-life settings. It is assumed that the Checklist can help to identify the most important issues, for instance, potential trouble spots that designers can address". (page 269)
"The Checklist covers a large space. It is intended to be used first by examining the whole space for areas of interest, then focusing on the identified areas of interest in as much depth as possible...there is a heavy emphasis on the principle of tool mediation" (page 270).
Other Thoughts
What is missing from this picture is a Universal Design component, something that I think holds up across time and technologies. Following the principles of Universal design doesn't mean dumbing down or relying on simplicity. It is a multi-faceted approach, and relies on conctructing flexibility in use, one of the key concepts of Universal Design. I'd like to see this concept embedded in the post-WIMP conceptualization somehow.
Because of my background in education/psychology/ special education, I try to follow the principles of Universal Design for Learning (UDL) when I work on technology project. I've spent some time thinking about how the principles of UDL could be realized through new interaction/interface systems. Although this approach focuses on the educational technology domain, it is important to consider, given that a good percentage of our population - potential users, clients, consumers - has a temporary or permanent disability of one kind or another.
Components of Universal Design for Learning:
Multiple Means of Representation
Provide options for perception
Provide options for language and symbols
Provide options for comprehension
Multiple Means of Action and Expression
Provide options for physical action
Provide options for expressive skills and fluency
Provide options for executive functions
Multiple Means of Engagement
Provide options for recruiting interest
Provide options for sustaining effort and persistence
Provide options for self-regulation
-Adapted from the UDL Guidelines/Educator Checklist, which breaks down the components into more specific details.
Note: The concept of Universal Design for Learning shares historical roots with some of the work behind Activity Theory and Interaction Design. Obviously, there is still much to contemplate regarding OCGM and other permutations of post-WIMP concepts!
Here are my comments to the discussion on Johnathan Brill's blog from January 2009:
Thoughts: I'm seeing a trend to "optimize" WIMP-type applications so they can be manipulated by gesture and touch. People who program kiosks, ATM's and POS touch screens are examples of what I'm talking about. Touch and hold, two-fingered touch, and double-tap are just a slight transformation of the WIMP world-view, and in my opinion, are still WIMP (wimpy!). The mouse interaction "pretenders" are fine for using legacy productivity applications, OK in the short run.
For example, I have an HP TouchSmart, but I don't use the touch screen as often as I'd hoped. Try using using Visual Studio to code something on a touch screen. There is so much more that can be done! I know from the touch-screen prototype/demos I've worked on in various classes that applications that support collaboration and decision-making are important, and not just for work.
What do people DO, really? First of all, we are social beings, most of us. Think of what we share and discuss with others, and think about what sort of interactions on a display might best correspond with this interaction. Here are some of the things I've been DOING recently that involved some sort of technology and communication/collaboration with others:
---Travel planning - I recently went on a cruise and with various family members, selected activities I wanted to do on the ship as well plan my shore excursions (a complicated process)
---Picture sharing- I came back from the cruise with lots of pictures that I uploaded on Flickr. Related to this process: Picture annotating, tagging, choosing/comparing & editing it would be SO cool if I could use two sliders to enhance my pictures just so!
---Talking on the phone and responding to e-mail with friends and family members about the pictures, and what they wanted to see on Flickr- "you know, the ones of the dogs in Jamaica"... of which I had about 68!
---Financial planning with my husband. (I took info-viz last semester, so I know the possibilities are there.)
---Using the touch-screen to check-in at my eye-doctor's office: This was a user-unfriendly experience. Such a nice little screen. I was provided with a WIMPY PowerPoint-like interface which was confusing to use- and time consuming!
---Shopping at the new Super Wal-Mart: I asked the greeter if there was a map, and he said, "Food is over there, and the rest of the stuff is that way." Flat panel displays were all over the store, but of course,they weren't interactive. I had no idea where anything was, and the few items I was looking at had no bar-code tags. There wasn't a clerk in sight. Wal-Mart TV rolled on-and-on via the display above my head. If I could only harness the display and have the talking head answer my questions! I gave up on my shopping trip when I was in the facial lotion/potion section. Too many choices, and too much fine print to read.
---Shopping at the mall with my young-adult daughters... I'll have to hold my thoughts on that one for now!
Some suggestions:
I think the artist/designers, (even dancers,) who are interested in multi-touch and gesture interaction have some interesting things to consider. (I linked to some of my previous posts.)
Again:
I am still mulling things over through the prism of NUI, post-WIMP, PATA, TOCA, OCGM, etc. So that is why there will be at "Part II". With specific examples!
RELATED
Multimedia, Multi-touch, Gesture, and Interaction Resources
My thoughts:
2007 Letter to the Editor, Pervasive Computing
Useful Usability Studies (pdf)
2007 Blog Post
Usability/Interaction Hall of Shame (In a Hospital)
2008 Blog Posts
Emerging Interactive Technologies, Emerging Interactions, and Emerging Integrated Form Factors
Interactive Touch-Screen Technology, Participatory Design, and "Getting It"
An Example of Convergence: Interactive TV: uxTV 2008
2009 Blog Posts
Why "new" ways of interaction?
Microsoft: Are You Listening? Cool Cat Teacher (Vicki Davis) Tries out Microsoft's Multi-touch Surface Table
Haptic/Tactile Interface: Dynamically Changeable Physical Buttons
The Convergence of TV, the Internet, and Interactivity: Update
UX of ITV: The User Experience and Interactive TV (or Let's Stamp Out Bad Remote Controls)
Digital Convergence and Interactive Television; Boxee and Digital Convergence
ElderGadget Blog: Useful Tech and Tools
Other People's Thoughts
Ron George's blog, OCGM (pronounced Occam['s Razor] is the replacement for WIMP 12/28/09
Ron George: Welcome to the OCGM Generation! Part 2
Stephen, Microsoft Kitchen: OCGM, A New Windows User Experience
Richard Monson-Haefel's blog, Multi-touch and NUI: What is NUI's WIMP?
Richard Monson-Haefel: OCGM: George's Razor
Josh Blake's blog, Deconstructing the NUI: WIMP is to GUI as OCGM (Occam) is to NUI
Bill Buxton: Gesture Based Interaction (pdf) (Updated 5/2009)
Bill Buxton: "Surface and Tangible Computing, and the "Small" Matter of People and Design" (pdf) - ISSCC 2008
Dan Saffer, Designing for Gestural Interfaces: Touchscreens and Interactive Devices
Dan Saffer, Designing for Interaction
Mark Weiser, Computer for the 21st Century Scientific American, 09, 1991
Touch User Interface: Readings in Touch Screen, Multi-Touch, and Touch User Interface
Jacob O Wobbrock, Meredith Ringel Morris, Andrew D. Wilson User-Defined Gestures for Surface Computing CHI 2009, April 4–9, 2009, Boston, Massachusetts, USA.
I came up with the name of this semi-fictional club as a way to organize my thoughts (and blog posts) regarding the development of a new metaphor for post-WIMP applications and technologies, related specifically to natural user interfaces, natural user interaction design, and off-the-desktop user experience.
Update, morning of 12/31/09:
Josh Blake, author of the blog "Deconstructing the NUI", posted Metaphors and OCGM this morning. It fleshes out post-WIMP concepts, addressing metaphors & interfaces. The premise is that NUI metaphors will be less complex than GUI (WIMP) metaphors. My feeling is that on the surface, this will hold true, especially for consumers/users and people creating light-weight applications and software widgets.
Underneath the surface, where designers' and developers' brains spend more time than users' and consumers' do, things might be more complex. Why? The technology to support the required wizardry is more complex. With convergence, the creation of new technologies, applications, communication systems, and even electronic entertainment is now dependent upon the work and thinking of people from a wider range of disciplines. Each discipline brings to the table a set of terms rooted in theory, and even research practices.
Update, late afternoon, 12/30/09:
Richard Monson-Haefel's response to Ron George's "Part 2": The concept of OCGM might be growing on him now... OCGM: George's Razor : "If Ron George can explain how OCGM encompasses Affordances and Feedback than I'll be convinced that OCGM works for NUI. Otherwise, I think OCGM is a great start that would benefit from an added "A" and "F"." -Richard
- OCGM relates to Occam's Razor. It is helpful to read a bit about it if you are interested in the post-WIMP conversations. (The link is to an article from "How Stuff Works", via Richard Monson-Haefel.)
START HERE FOR THE "ORIGINAL" POST FROM 12/29 & 12/20/09:
Background
About a year ago I responded to a conversation between Johnathan Brill, Josh Blake, and Richard Monson-Haefel discussing "post-WIMP" conceptualization regarding natural user interfaces and interaction, otherwise known as NUI. The focus of the discussion was on Johnathan's post, "New Multi-touch Interface Conventions". At the time, we were reading Dan Saffer's book, Designing Gestural Interfaces, and contemplating new ways that technology can support human interaction and activities in a more natural, enjoyable, and intuitive manner.
A few days later, I shared some of the concepts from the discussion on a post on this blog, "Why "new" ways of interaction?". The post includes video of Johnathan Brill discussing PATA, a post-WIMP analogue to assist with multi-touch/gesture based application development, which he describes as follows:
Places "Lighting, focus, and depth, simplified searching and effecting hyperlinked content."
Animation "Using animation to subtly demonstrate what applications do and how to use them is a better solution than using icons. Animations makes apps easier to learn."
Things "Back in the days of floppy disks, objects helped us organize our content. This limitation was forced by arcane technology, but it did have one huge advantage. We used our spatial memory to help us navigate content. Things will help us organize content and manipulate controllers across a growing variety of devices."
Auras "Auras will help us track what we are tracking and when an interaction has been successful."
(For reference, I've copied some of my responses to the first discussion, which can be found near the end of this post.)
A year later....
What has changed? Everything post-WIMP has been covered like a blanket by the NUI-word. "NUI" now functions as a generic term for anything that is not exactly WIMP. There is a sense of urgency now to figure out how best to conceptualize post-WIMP interfaces and interactions. Newer, affordable technologies enable us to interact with friends and family while we are on the go. Netbooks, e-Readers, SmartPhones, large touch screen displays, interactive HDTV, and new devices with multi-modal I/O's abound. Our grandparents are on Facebook and Twitter from their iPhones. Our world no longer requires us to be slaves to the WIMP mentality.
So what is the problem?
The technology has moved along so fast that application designers and developers have not had a chance to catch up. (The iPhone is an exception.) The downturn in the economy has made it difficult for many to take the leap from traditional software or web development and gain new skill sets. On top of it all, most of us over the age of 15 have been brainwashed from years of working within the constraints of WIMP. It doesn't matter if we are users, consumers, students, designers, or developers.
Even the folks least likely to have difficulty expanding into the post-WIMP world have had some difficulty. If you've had training in HCI (Human-Computer Interaction), you were inadvertently brainwashed with the best. The bulk of the theory and research you contemplated was launched at a time when WIMP was king, even as the Web expanded. Many of the principles held dear to traditional HCI folks have been shattered, and no one has come up with a "theory of everything" that will cover all of the human actions and interactions that are supported or guided by new technologies.
The problem, in part, is that letting go of WIMP is hard to do, as illustrated by the following post from the Ars Technica website: Light Touch: A Design Firm Grapples with Microsoft Surface (Matthew Braga, 6/29/09) "Ditching the mouse and keyboard means a whole lot more than just doing without two common peripherals. As those who have worked with Microsoft Surface have found out, you have to jettison decades of GUI baggage and start with a whole new mindset...In actuality, few multi-touch gestures are really anything like what we experience in the physical world. There is no situation in which we pull on the corners of an image to increase its size, or swipe in a direction to reveal more content. So, in the context of real-world interaction, these types of gestures are far from natural...gestures should not only feel natural, but logical; the purpose that gestures like these serve, after all, is to replace GUI elements to the end of making interaction a more organic process." (Be sure to read the comments.)
Now that the Surface is taking root in more places, and touch-screen all-in-one PCs and tablets are starting to multiply, more people are giving "NUI" some thought. Ron George, an interaction and product designer with experience working with Microsoft's Surface team, has contributed to the post-WIMP discussion and spent some time sharing ideas with Josh Blake, a .NET, SharePoint, and Microsoft Surface Consultant for InfoStrat and author of the Deconstructing the NUI blog. The outcome of this discussion was Ron George's December 28th blog post, "OCGM (pronounced Occam['s Razor]) is the replacement for WIMP", and Josh Blake's post, "WIMP is to GUI as OCGM (Occam) is to NUI". (Be sure to read the comments for both of these posts!)
OCGM (as conceptualized by Ron George)
Objects "are the core of the experience. They can have a direct correlation with something physical, or they can just be objects in the interface."
Containers "will be the “grouping” of the objects. This can manifest itself in whatever the system sees fit to better organize or instruct the user on interactions. They do not have to be, nor should they be, windows. They can be any sort of method of presentation or relationship gathering as seen fit."
Gestures "I went into detail about the differences in Gestures and Manipulations in a previous post [check it out for a refresher]. Gestures are actions performed by the user that initiate a function after its completion and recognition by the system. This is an indirect action on the system because it needs to be completed before the system will react to it."
Manipulations "are the direct influences on an object or a container by the user. These are immediate and responsive. They are generally intuitive and mimic the physical world in some manner. The results are expected and should be non-destructive. These are easily performed and accidental activations should be expected and frequent."
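The gesture/manipulation distinction above is concrete enough to sketch in code. Below is a minimal, hypothetical Python illustration (the class and function names are my own, not from Ron George's posts): a manipulation changes the object immediately on every input update, while a gesture does nothing until the stroke is complete, at which point the system classifies it and fires a single action.

```python
class Photo:
    """An Object in OCGM terms: something the user acts on directly."""
    def __init__(self, scale=1.0):
        self.scale = scale

def manipulate_pinch(photo, pinch_ratio):
    """A Manipulation: direct, immediate, and responsive.
    Each incremental pinch update changes the object right away."""
    photo.scale *= pinch_ratio
    return photo.scale

def recognize_gesture(touch_path):
    """A Gesture: indirect. The system waits until the stroke is
    complete, then classifies it and triggers one function.
    Nothing happens to any object mid-stroke."""
    (x0, y0), (x1, y1) = touch_path[0], touch_path[-1]
    dx, dy = x1 - x0, y1 - y0
    if abs(dx) > abs(dy):
        return "swipe-right" if dx > 0 else "swipe-left"
    return "swipe-down" if dy > 0 else "swipe-up"
```

Note how the manipulation mutates the Photo on every call, which is why accidental activations are expected and the results must be non-destructive, while the gesture recognizer only ever sees a finished stroke.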
To illustrate a point regarding the validity of the OCGM analogy proposed by Ron George, Josh Blake shares the following video of a presentation from REMIX 2009, in which August de los Reyes, the Principal Director of User Experience for Surface Computing at Microsoft, briefly discusses the TOCA (Touch, Objects, Containers, and Actions) concept, suggested to replace the WIMP concept:
The video wouldn't embed, so go to the following link:
Predicting the Past: A Vision for Microsoft Surface
"Natural User Interface (NUI) is here. New systems of interaction require new approaches to design. Microsoft Surface stands at the forefront of this product space. This presentation looks at one of the richest sources for inventing the future: the past. By analyzing preceding inflection points in user interface, we can derive some patterns that point to the brave NUI world."
The concepts outlined in the presentation are similar to Microsoft's Vision for 2019.
Richard Monson-Haefel added his thoughts on the OCGM discussion in his recent blog post, "What is NUI's WIMP?" Richard disagrees with the OCGM concept, as he feels it doesn't encompass some important interactions, such as speech/direct voice input. He'd probably agree that NUI is NOT WIMP 2.0.
Post-NUI, Activity Theory, and Off-the-Desktop Interaction Design:
As I was reading the recent posts and discussions regarding NUI/OCGM, I also contemplated some of what I've been reading over my holiday break, "Acting With Technology: Activity Theory and Interaction Design", written by Victor Kaptelinin and Bonnie A. Nardi. Victor Kaptelinin is the co-editor of "Beyond the Desktop Metaphor: Designing Integrated Digital Work Environments" (MIT Press, 2007), and has an interest in computer-supported cooperative work. Bonnie Nardi brings to the IT world her background in anthropology, and is the co-author of "Information Ecologies: Using Technology with the Heart" (MIT Press, 1999). The authors know what they are talking about.
It is important to note that activity theory-based interaction design is viewed as "post-cognitivistic" and is informed by some of what I studied in psychology, education, and social science years ago. Within the field of activity theory are some important differences, which I'll save for a future post.
Below are some concepts taken from the book. I am still mulling them over through the prism of NUI, post-WIMP, PATA, TOCA, OCGM, etc. That's why there will be a "Part II", with specific examples.
"Means and ends, the extent to which the technology facilitates and constrains attaining user's goals and the impact of the technology on provoking or resolving conflicts between different goals
Social and physical aspects of the environment - integration of target technology with requirements, tools, resources, and social rules of the environment
Learning, cognition, and articulation, internal vs external components of activity and support of their mutual transformations with target technology
Development -Developmental transformation of the above components as a whole"
"Taken together, these sections cover various aspects of the way the target technology supports, or is intended to support, human actions". (page 270)
I especially like the activity checklist included in the appendix of the book, as well as the concept of tool mediation. "The Activity Checklist is intended to be used at early phases of system design or for evaluating existing systems. Accordingly, there are two slightly different versions of the Checklist, the "evaluation version" and the "design version". Both versions are implemented as organized sets of items covering the contextual factors that can potentially influence the use of computer technology in real-life settings. It is assumed that the Checklist can help to identify the most important issues, for instance, potential trouble spots that designers can address". (page 269)
"The Checklist covers a large space. It is intended to be used first by examining the whole space for areas of interest, then focusing on the identified areas of interest in as much depth as possible...there is a heavy emphasis on the principle of tool mediation" (page 270).
Other Thoughts
What is missing from this picture is a Universal Design component, something that I think holds up across time and technologies. Following the principles of Universal Design doesn't mean dumbing down or relying on simplicity. It is a multi-faceted approach, and relies on constructing flexibility in use, one of the key concepts of Universal Design. I'd like to see this concept embedded in the post-WIMP conceptualization somehow.
Because of my background in education/psychology/special education, I try to follow the principles of Universal Design for Learning (UDL) when I work on technology projects. I've spent some time thinking about how the principles of UDL could be realized through new interaction/interface systems. Although this approach focuses on the educational technology domain, it is important to consider, given that a good percentage of our population - potential users, clients, consumers - has a temporary or permanent disability of one kind or another.
Components of Universal Design for Learning:
Multiple Means of Representation
Provide options for perception
Provide options for language and symbols
Provide options for comprehension
Multiple Means of Action and Expression
Provide options for physical action
Provide options for expressive skills and fluency
Provide options for executive functions
Multiple Means of Engagement
Provide options for recruiting interest
Provide options for sustaining effort and persistence
Provide options for self-regulation
-Adapted from the UDL Guidelines/Educator Checklist, which breaks down the components into more specific details.
Note: The concept of Universal Design for Learning shares historical roots with some of the work behind Activity Theory and Interaction Design. Obviously, there is still much to contemplate regarding OCGM and other permutations of post-WIMP concepts!
Here are my comments to the discussion on Johnathan Brill's blog from January 2009:
Thoughts: I'm seeing a trend to "optimize" WIMP-type applications so they can be manipulated by gesture and touch. The people who program kiosks, ATMs, and POS touch screens are examples of what I'm talking about. Touch and hold, two-fingered touch, and double-tap are just a slight transformation of the WIMP world-view, and in my opinion, are still WIMP (wimpy!). The mouse interaction "pretenders" are fine for using legacy productivity applications, OK in the short run.
For example, I have an HP TouchSmart, but I don't use the touch screen as often as I'd hoped. Try using Visual Studio to code something on a touch screen. There is so much more that can be done! I know from the touch-screen prototype/demos I've worked on in various classes that applications that support collaboration and decision-making are important, and not just for work.
What do people DO, really? First of all, we are social beings, most of us. Think of what we share and discuss with others, and think about what sort of interactions on a display might best correspond with this interaction. Here are some of the things I've been DOING recently that involved some sort of technology and communication/collaboration with others:
---Travel planning - I recently went on a cruise and, with various family members, selected activities I wanted to do on the ship as well as plan my shore excursions (a complicated process)
---Picture sharing- I came back from the cruise with lots of pictures that I uploaded to Flickr. Related to this process: picture annotating, tagging, choosing/comparing, and editing. It would be SO cool if I could use two sliders to enhance my pictures just so!
---Talking on the phone and responding to e-mail with friends and family members about the pictures, and what they wanted to see on Flickr- "you know, the ones of the dogs in Jamaica"... of which I had about 68!
---Financial planning with my husband. (I took info-viz last semester, so I know the possibilities are there.)
---Using the touch-screen to check in at my eye-doctor's office: This was a user-unfriendly experience. Such a nice little screen. I was provided with a WIMPY PowerPoint-like interface which was confusing to use- and time consuming!
---Shopping at the new Super Wal-Mart: I asked the greeter if there was a map, and he said, "Food is over there, and the rest of the stuff is that way." Flat panel displays were all over the store, but of course, they weren't interactive. I had no idea where anything was, and the few items I was looking at had no bar-code tags. There wasn't a clerk in sight. Wal-Mart TV rolled on-and-on via the display above my head. If I could only harness the display and have the talking head answer my questions! I gave up on my shopping trip when I was in the facial lotion/potion section. Too many choices, and too much fine print to read.
---Shopping at the mall with my young-adult daughters... I'll have to hold my thoughts on that one for now!
Some suggestions:
I think the artist/designers, (even dancers,) who are interested in multi-touch and gesture interaction have some interesting things to consider. (I linked to some of my previous posts.)
Again:
I am still mulling things over through the prism of NUI, post-WIMP, PATA, TOCA, OCGM, etc. So that is why there will be a "Part II", with specific examples!
RELATED
Multimedia, Multi-touch, Gesture, and Interaction Resources
My thoughts:
2007 Letter to the Editor, Pervasive Computing
Useful Usability Studies (pdf)
2007 Blog Post
Usability/Interaction Hall of Shame (In a Hospital)
2008 Blog Posts
Emerging Interactive Technologies, Emerging Interactions, and Emerging Integrated Form Factors
Interactive Touch-Screen Technology, Participatory Design, and "Getting It"
An Example of Convergence: Interactive TV: uxTV 2008
2009 Blog Posts
Why "new" ways of interaction?
Microsoft: Are You Listening? Cool Cat Teacher (Vicki Davis) Tries out Microsoft's Multi-touch Surface Table
Haptic/Tactile Interface: Dynamically Changeable Physical Buttons
The Convergence of TV, the Internet, and Interactivity: Update
UX of ITV: The User Experience and Interactive TV (or Let's Stamp Out Bad Remote Controls)
Digital Convergence and Interactive Television; Boxee and Digital Convergence
ElderGadget Blog: Useful Tech and Tools
Other People's Thoughts
Ron George's blog, OCGM (pronounced Occam['s Razor]) is the replacement for WIMP 12/28/09
Ron George: Welcome to the OCGM Generation! Part 2
Stephen, Microsoft Kitchen: OCGM, A New Windows User Experience
Richard Monson-Haefel's blog, Multi-touch and NUI: What is NUI's WIMP?
Richard Monson-Haefel: OCGM: George's Razor
Josh Blake's blog, Deconstructing the NUI: WIMP is to GUI as OCGM (Occam) is to NUI
Bill Buxton: Gesture Based Interaction (pdf) (Updated 5/2009)
Bill Buxton: "Surface and Tangible Computing, and the "Small" Matter of People and Design" (pdf) - ISSCC 2008
Dan Saffer, Designing for Gestural Interfaces: Touchscreens and Interactive Devices
Dan Saffer, Designing for Interaction
Mark Weiser, "The Computer for the 21st Century," Scientific American, September 1991
Touch User Interface: Readings in Touch Screen, Multi-Touch, and Touch User Interface
Jacob O. Wobbrock, Meredith Ringel Morris, and Andrew D. Wilson, "User-Defined Gestures for Surface Computing," CHI 2009, April 4–9, 2009, Boston, Massachusetts, USA.
Posted by
Lynn Marentette
Labels:
Brill,
dan saffer,
gesture,
interaction design,
Josh Blake,
metaphors,
Monson-Haefel,
multi-touch,
NUI,
Occam's Razor,
OCGM,
post Wimp,
surface