Showing posts with label surface.

Feb 15, 2013

Designing for Touch & Gesture: Tips for Apps and the Web (Updated)

In the past, our fingers did the walking, sifting through files, papers, pamphlets, and phonebooks; later we pointed and clicked with a mouse to interact with images and text that were, in essence, electronic imitations of the paper-based world. Traditional forms, brochures, ad inserts, and posters informed much of the design. 

How much have things changed?  It is 2013, but you'd think it was 1997 from the PowerPoint look and feel of many apps and web sites!  Touch is everywhere, but from what I can tell, not enough designers and developers have stepped up to the plate to think more deeply about ways their applications can support human endeavors through touch and gesture interactions.  

For an overview of this topic, take a look at my 2011 post, written after a number of ugly encounters with user-unfriendly applications:  Why bother switching from GUI to NUI?  

For an in-depth look into the history of multi-touch, the wisdom of Bill Buxton is well worth absorbing.  He's worked with all sorts of interfaces, and has been curating the history of multi-touch and gesture systems since 2007:


Multi-Touch Systems that I have Known and Loved
Bill Buxton, Microsoft Research, Updated 8/30/12



Even if you are not a designer or developer, I encourage you to explore some of the links below (a small touch-gesture sketch in TypeScript follows the list):

Touch Gestures for Application Design
Luke Wroblewski, 10/9/12

Common Misconceptions About Touch
Steven Hoober, 3/18/13

Designing With Tablets in Mind:  Six Tips to Remember
Connor Turnbull, Webdesign tuts+, 9/27/11

Finger-Friendly Design: Ideal Mobile Touchscreen Target Sizes
Anthony T, Smashing Magazine, 2/21/12

Best Practices: Designing Touch Tablet Experiences for Preschoolers (pdf)
Sesame Workshop


Are Touch Screens Accessible?
AccessIT, National Center on Accessible Information Technology in Education

iOS Human Interface Guidelines
Apple

Android User Interface Guidelines
Using Touch Gestures
Handling Multi-Touch Gestures
Android

Designing for Tablets?  We're Here to Help!
Roman Nurik, Android Developers Blog 11/26/12

Touch interaction design (Windows Store apps)
Microsoft - MSDN

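Since several of the links above deal with touch targets and gesture handling on the web, here is a minimal sketch of my own (in TypeScript, using the standard DOM Touch Events API) that tells a tap apart from a horizontal swipe. The element id "gallery" and the 30-pixel / 500-millisecond thresholds are illustrative assumptions, not values taken from the guidelines above.

// Minimal swipe-vs-tap detector using the standard DOM Touch Events API.
// The movement and timing thresholds below are illustrative assumptions.
type SwipeHandler = (direction: "left" | "right") => void;

function attachSwipeOrTap(
  element: HTMLElement,
  onSwipe: SwipeHandler,
  onTap: () => void
): void {
  let startX = 0;
  let startY = 0;
  let startTime = 0;

  element.addEventListener("touchstart", (e: TouchEvent) => {
    const touch = e.touches[0]; // track the first finger only
    startX = touch.clientX;
    startY = touch.clientY;
    startTime = Date.now();
  });

  element.addEventListener("touchend", (e: TouchEvent) => {
    const touch = e.changedTouches[0]; // the finger that was lifted
    const dx = touch.clientX - startX;
    const dy = touch.clientY - startY;
    const elapsed = Date.now() - startTime;

    // A quick, mostly horizontal movement reads as a swipe;
    // very little movement reads as a tap.
    if (elapsed < 500 && Math.abs(dx) > 30 && Math.abs(dx) > Math.abs(dy)) {
      onSwipe(dx > 0 ? "right" : "left");
    } else if (Math.abs(dx) < 10 && Math.abs(dy) < 10) {
      onTap();
    }
  });
}

// Hypothetical usage: a photo gallery that pages on swipe and opens on tap.
// The actual hit area should be kept finger-friendly in CSS (roughly a
// fingertip-sized target), as discussed in the articles linked above.
const gallery = document.getElementById("gallery");
if (gallery) {
  attachSwipeOrTap(
    gallery,
    (dir) => console.log(`swipe ${dir}: show the ${dir === "left" ? "next" : "previous"} image`),
    () => console.log("tap: open the current image")
  );
}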


Nov 19, 2011

Camera-less Tabletop Computing with Samsung SUR40 for Microsoft® Surface® with PixelSense™

Here is the press release: 
Next Generation of Microsoft Surface Available for Pre-Order in 23 Countries
"Software developers interested in creating solutions for the Samsung SUR40 can get started immediately by visiting the Surface Developer Center. The site provides free and easy access to the Surface 2.0 software developer kit, featuring the Input Simulator, which enables developers to write Surface applications on any Windows 7 machine, as well as other helpful developer-related resources. There are already hundreds of highly skilled Surface software development partners that can be found at http://www.surface.com."


(I've listed Microsoft Surface partners at the end of this post and plan to share more about the latest applications for surface computing in the near future.)


The following product information was taken from the Samsung website:
40" Surface Experience 
"Samsung SUR40 is the new generation of Microsoft® Surface® experience featuring PixelSense™ technology, which gives LCD panels the power to see without the use of cameras. Building from the innovation of the first version of Microsoft® Surface® and Samsung’s leading display technology, it is now possible for people to share, collaborate and explore together using a large, thin display that recognizes fingers, hands and other objects placed on the screen." 


PixelSense™ 
"PixelSense™ allows an LCD display to recognize fingers, hands, and objects placed on the screen, including more than 50 simultaneous touch points. With PixelSense™, pixels in the display see what’s touching the screen and that information is immediately processed and interpreted."


Resolution: 1920 x 1080
Viewing Angle (H/V): 178° / 178° (CR ≥ 10)
CPU: AMD Athlon X2 Dual-Core 245e (2.9 GHz)
Operating System: Windows 7 Professional x64


GPU: AMD HD6750M
Northbridge (N/B): AMD RS780E
Southbridge (S/B): AMD SB710


Storage: SATA2 320 GB
Memory: DDR3 4 GB
USB: 4 x USB 2.0
Video Out: HDMI
Ethernet: 100 / 1000
Audio Codec: Realtek ALC262 Azalia CODEC


Product Dimensions (With Stand): 1,095 x 728 x 707.4 mm
Product Dimensions (Without Stand): 1,095 x 102.5 x 707.4 mm
Shipment Dimensions: 1,214 x 299 x 832 mm


Product Weight: 35 kg
Shipment Weight: 45.4 kg


RELATED
Samsung
Microsoft Surface
Microsoft Surface "What's New"
Microsoft Surface Partners:
Aftermous.com
AKT
AM Production
Black Marble
ETT
Headcandy
IdentityMine
Information Strategies
Infusion
Inhance Digital
Interknowlogy
Intuilab
nSquared
Object Consulting
Onwijs
Razorfish
Sevensteps
Stimulant
Touchtech
T-Systems
MultiMedia
UID
Vectorform
XFace

Dec 11, 2010

Gesture "multitouch" 12 x 7 interactive video wall provides tours of I/O Data Centers' facilities

I came across this demonstration of I/O Data Centers' 12-by-7-foot interactive video wall, which makes playing around with views of data center modules... interesting! The display is a gesture-based "multi-touch" system. (I'll update this post when I get more information.)



Here is the description from the I/O Data Centers YouTube channel:


"Instead of hauling a 40-foot long modular data center to a trade show, i/o Data Centers is taking a high-tech approach to customer tours of their i/o Anywhere modular data center. The i/o team has created a 12-foot by 7-foot touchscreen video wall to provide interactive tours of the company's facilities. Selecting a "hot spot" pops up a virtual data center, complete with cross sections and product info, following the concept of the touch screens in the sci-fi movie "Minority Report.""


FYI: I/O Data Centers has an application that runs on the Surface.

UPCOMING:
Stay tuned for my upcoming posts! 


News about LM3LABS (Previous post)
Interactive Surveillance: Celine Latulipe (technologist) and Annabel Manning (artist)

Nov 24, 2010

Microsoft Surface Light and Physics App for Kids at the Smithsonian

Microsoft Surface at the Smithsonian


The Surface is located in the Smithsonian's Castle,  and is part of "The Wonder of Light: Touch and Learn!" exhibit, which opened on Tuesday, November 9th (2010).  Microsoft donated the Surface unit to the Smithsonian.


Below is a slideshow of the interactive exhibit:



The video below provides a closer look at the applications created by Infostrat for the Smithsonian exhibit:


RELATED
New Interactive Exhibit Opens in Smithsonian's Castle, Bringing Light To Life
Smithsonian News Release, 11/9/10

Josh Blake's post, Microsoft Surface and Magical Object Interaction.

Nov 10, 2010

New Version of Surface from Microsoft?

Next Gen Microsoft Surface 'Imminent'
Seamus Byrne, Gizmodo  11/11/10


Here is a quote from the Gizmodo article:

"Iain McDonald of agency Amnesia Razorfish, owned by Microsoft until late 2009 and now part of the Publicis Groupe, told Gizmodo the next generation Microsoft Surface will indeed be a flat surface concept, not the entire coffee table system with cameras and projectors living underneath. The new Surface will also have higher resolution cameras so that special codes will no longer be required to identify objects. And the new Surface will also be around $8,000 (whether this was USD or AUD wasn’t specified)." - Seamus Byrne


More to come...

Nov 6, 2010

Interactive iPad Apps for Kids with Autism: Could some of these be transformed for multi-touch tabletop activities?

I came across a great post about interactive iPad apps for special needs:

Ten Apple iPad Apps to Help Children with Autism
Joanne Carter, MacCreate 11/5/10


In this article,  Joanne Carter shares screen shots and detailed descriptions of a variety of iPad apps that support learning and communication skills of young people with autism.  You can find additional information about the apps discussed in the article by visiting the following links: 

Proloquo2go, Story_Builder, "Off we go" book series, Soundtastic, Visual Impact, Living Safely,  Tapspeak Sequence for iPad, iCommunicate for iPad, Autoverbal Talking Soundboard Pro, Is that Gluten Free?, and I Dress for Weather


I think that some of these apps have the potential to be transformed and tweaked for use on multi-touch, multi-user tables such as the SMART Table or Microsoft's Surface.  The aim would be to encourage paired and group communication and social skills among children with special needs.  I'll share my thoughts on this topic in a future post.



Oct 21, 2010

Emerging Interactive Ed. Tech: ClassmateAssist and Wayang Outpost - Sensors, AI, and Context Awareness for Learning and Teaching

Brief background: I've been following developments in intelligent tutoring systems for a while, and find it interesting to see how researchers are combining artificial intelligence, learning theory, affective computing, and sensor networks to create applications that might prove to be useful and effective.

One advantage of intelligent tutoring applications is that they can provide students with additional support and feedback the moment it is needed, something that is difficult for teachers to provide to students in large classrooms. With the increase in use of smartphones and other mobile devices such as the iPad, there is a good chance that this sort of technology will be used to support learning anywhere, anytime.

Although most intelligent tutoring systems are geared for 1:1 computing, I think there are some components that could be tweaked and then transferred to create intelligent "tutoring" systems for collaborative learning. Students like game-based learning, and what could be more fun than playing AND learning with a partner or group of peers? (I plan to revisit the research in this area in an upcoming post.)

Some thoughts: I envision a system that could support learning as well as important skills useful to students in life beyond the school walls, such as positive social interaction, teamwork, and problem-solving. The path of least resistance? Most likely applications that support the learning of pairs or small groups of students working at one display. However, in this era of the "21st Century Learner", there is a growing need for applications that can support small groups of students in collaborative and project-based learning activities.

There are a few applications developed for collaborative learning activities around a multi-touch table, such as the SMART Table or the Surface, and more are needed. Also needed are intelligent systems that can support video conferencing and collaborative learning between students who are not physically co-located.

There are some problems that have yet to be solved. For example, the use of multiple sensors in an application designed for young people might be too intrusive. There are serious issues related to privacy and security. Who would have access to data regarding a student's emotional or physiological state? How would this data be utilized? How would this information be protected? Many school districts have security vulnerabilities, so it is possible that this information could be misused if it fell into the wrong hands.

Below I've highlighted two "intelligent" tutoring systems that incorporate sensors in one form or another to generate information about student learning in a way that simulates what good teachers do every day. The ClassmateAssist application was developed by researchers at Intel, in collaboration with several universities. The Wayang Outpost application was developed by researchers at UMass Amherst, and is aligned with the principles of Universal Design for Learning.

CLASSMATE ASSIST
ClassmateAssist is an application developed by Intel's Everyday Sensing and Perception team. Here is the description of the application from Intel Research: "The advent of 1:1 computing in the classroom opens the door for teachers to set up individualized learning for their students who have a wide spectrum of interests and skills. ClassmateAssist technology uses computer vision and image projection to assist and guide students in a 1:1 learning environment, helping them to independently accomplish tasks at their own pace, while at the same time allowing teachers to be apprised of student progress."

In the following video, Richard Beckwith, a developmental psychologist at Intel, demonstrates a prototype of an application that uses video sensing to track a student's hand movements during a coin-sorting lesson. The application provides feedback to the student, and also tracks data about the student's progress that can be transformed into a report for the teacher. The system can also monitor a student's facial expressions, note attention levels, and provide feedback.


SPAIS Publications:
Theocharous, G., Beckwith, R., Butko, N., Philipose, M. Tractable POMDP Planning Algorithms for Optimal Teaching in "SPAIS". International Joint Conference on Artificial Intelligence (IJCAI) workshop on Plan, Activity, and Intent Recognition (PAIR), Pasadena, California, July 2009.
Theocharous, G., Butko, N., Philipose, M. Designing a Mathematical Manipulatives Tutoring System using POMDPs (pdf). POMDP Practitioners Workshop: Solving Real-world POMDP Problems, International Conference on Automated Planning and Scheduling (ICAPS), Toronto, May 2010.



Wayang Outpost: Web-based Interactive Math/Intelligent Tutoring System with Sensors
I've followed the work of Beverly P. Woolf and her colleagues for some time.  Much of their research has centered around a web-based application, Wayang Outpost, an intelligent electronic tutoring system that incorporates multimedia and animated adventures while providing activities designed to prepare teens for standardized math tests, such as the SAT and state-mandated end-of-course exams.

In recent years, the team has been using non-invasive sensors in their research, including a camera that views facial expressions, a posture-sensing device located in the seat of the student's chair, a pressure-sensitive mouse, and a wireless skin conductance wristband. Data collected through all of these sensors can provide useful information about student learning.  The system can also note when students try to "game" the system.
Related Publications
Woolf, B.P., Arroyo, I., Muldner, K., Burleson, W., Cooper, D., Dolan, R., Christopherson, R.M. (2010) The Effect of Motivational Learning Companions on Low Achieving Students and Students with Disabilities (pdf). International Conference on Intelligent Tutoring Systems, Pittsburgh.
Abstract: "We report the results of a randomized controlled evaluation of the effectiveness of pedagogical agents as providers of affective feedback. These digital learning companions were embedded in an intelligent tutoring system for mathematics, and were used by approximately one hundred students in two public high schools. Students in the control group did not receive the learning companions. Results indicate that low-achieving students—one third of whom have learning disabilities—had higher affective needs than their higher achieving peers; they initially considered math problem-solving more frustrating, less exciting, and felt more anxious when solving math problems.  However, after they interacted with affective pedagogical agents, low-achieving students improved their affective outcomes, e.g., reported reduced frustration and anxiety."


Arroyo, I., Cooper, D.G., Burleson, W., Woolf, B.P., Muldner, K., Christopherson, R. (2009)
Emotion Sensors Go To School. AIED 2009, pp. 17-24. IOS Press.
Low-tech description of Wayang Outpost, the math application used in the above publication: Paul Franz, Recoder.Com, 5/16/09
Cooper, D.G., Arroyo, I., Woolf, B.P., Muldner, K., Burleson, W., Christopherson, R. Sensors Model Student Self-Concept in the Classroom (pdf). UMass Amherst, June 22, 2009 / UMAP 2009


Cross posted in the TechPsych Blog

Oct 12, 2010

Update on Josh Blake, newly designated Microsoft Surface MVP

Josh Blake is the Tech Lead of the InfoStrat Advanced Technology Group in DC.  He has been creating multi-touch applications for Microsoft's Surface multi-user tabletops for a while. Recently, his team built a suite of applications designed for use by young children at a museum.  Below is a video demonstration of some of this work. It really looks exciting!


Microsoft Surface and Magical Object Interaction

Josh Blake's blog is called Deconstructing the NUI. For those of you new to this blog, NUI stands for Natural User Interface (also known as Natural User Interaction).  See his post, Microsoft Surface and Magical Object Interaction, for more information!

RELATED
Here is a plug for Josh Blake's book, "Multitouch on Windows"

Book Ordering Information

FYI:  InfoStrat  is hiring  WPF experts as well as Microsoft CRM and Microsoft SharePoint experts.


Microsoft Surface MVPs
Dr. Neil Roodyn
Dennis Vroegop
Rick Barraza
Joshua Blake





Aug 2, 2010

New Hollywood Hard Rock Cafe Sparkles with Interactive Multi-touch Wall and Microsoft Surface Booths!

I came across a blog post entitled "Tourist in my own town". In this post, the author shares his positive experience of a visit to the new Hard Rock Cafe, located on the Hollywood Walk of Fame.  I loved his comment:  "A whole wall of Microsoft software running and not a single BSOD!"  In addition to the interactive wall, visitors have the chance to play with the content on Microsoft's interactive Surface tables. Below is a picture from the Sure Beats Work blog post:



-Sure Beats Work


A recent post on the Hard Rock Cafe blog provides more information about the interactive technologies at the Hollywood site: "Hard Rock International Rocks Its Way to Hollywood Boulevard":


New Look ~ New Vibe ~ New Memorabilia Technology
"In the latest example of Hard Rock’s concept-driven design evolution, the Hollywood Boulevard cafe was developed to integrate technology, creating a new look and vibe that will rock Hollywood. Hard Rock Cafe Hollywood on Hollywood Boulevard showcases new and unique interactive experiences for guests – from an 18’ x 4’ Rock Wall™ to touch screens in booths throughout the cafe to Microsoft Surface™, each featuring innovative multi-touch technology that enables fans to explore the world’s greatest rock ‘n’ roll memorabilia collection and virtually tour all of Hard Rock’s venues worldwide."

"In addition to the cutting-edge multimedia memorabilia experience, hundreds of items from Hard Rock’s iconic collection adorn the walls of Hard Rock Cafe Hollywood on Hollywood Boulevard, including items from many of the world’s most beloved and recognizable musicians, as well as contemporary artists with local ties. Key memorabilia items are now on display, from 
Jimi Hendrix’s purple crushed velvet hat; to Janis Joplin’s love letter to then boyfriend Peter LeBlanc; Jim Morrison’s leather pants and handwritten lyrics to “L.A. Woman”; to Katy Perry’s sparkly dress and Fergie’s tour outfit worn while on tour with the Black Eyed Peas."

The memorabilia wall was created for the Hard Rock Cafe by Obscura Digital, a company involved in off-the-desktop ubiquitous computing, including ambient technologies with natural user interfaces and interaction. Obscura Digital aims to "make data pervasive and accessible in almost any situation, allowing virtually any surface to be turned into a portal to the Internet".  


The Memorabilia Wall has been installed in several Hard Rock Cafes around the world; additional pictures can be found on the Obscura Digital website. The first installation of the wall was at the Hard Rock Cafe in Las Vegas in 2009. Below is a look at interaction with the wall at the Las Vegas Hard Rock Cafe:

-Obscura Digital

The following video, set to Beck's "Elevator Music", provides a great demonstration of the Hard Rock Cafe Memorabilia application as experienced on the Surface:

Hard Rock memorabilia app for Microsoft Surface (extended) from Duncan/Channon on Vimeo.
(The music in the video is "Elevator Music" by Beck.)


RELATED
My megapost about the Hard Rock Cafe interactive wall and website:
Interactive Memorabilia at the Hard Rock Cafe: 
Microsoft's Multi-touch Rock Wall, Companion Surface Installations, and Awesome Touch-Responsive Interactive Memorabilia Website

Below is a screenshot of the main portal of the Hard Rock Cafe interactive memorabilia website, which complements the "real" wall. You can interact with all 1532 items and learn more about the history behind the various artists.  It is fun to play with on a touch-screen display!


Duncan Channon: Sin City Memorabilia Interfaces



SOMEWHAT RELATED
Obscura Digital
Obscura Digital's CueLight, an interactive pool table at the SoHo Esquire House:

Cuelight from Obscura Digital on Vimeo.
"Featured at the Esquire House's "Ultimate Bachelor Pad" in NYC, the one-of-a kind Obscura CueLight projection system turns a game of pool into an amazing interactive art display"

May 7, 2010

The atracTable is Coming Soon: Sony will launch a high-definition touch- and gesture-interactive tabletop using Atracsys's technology!

Sony will be introducing a full high-definition interactive table, a result of a collaboration with the Swiss company Atracsys.


EXCLUSIVE: Sony atracTable to take on Microsoft Surface from June
atracTable Baselworld 2009 reference 3


(At about 2:14 in the video below, there is a demonstration of an application that recognizes facial features and expressions, which are used to control and manipulate images on the screen.)
Images from the Sony Stand at Vision 2009


Here is an "overview" video that shows a number of uses for the Attractable:



Here is a version of the atracTable, using a tangible user interface to create music:





Here is the "Nespresso" table, which provides people with information about the type of coffee that you are drinking. It makes more sense as demonstrated in the video.
Atracsys @ Baselworld 2010


beMerlin:  Interactive gesture-based application for retail:

Dec 31, 2009

The Post-WIMP Explorers' Club: Update of the Updates, Morning of 12/31/09

What is the Post-WIMP Explorers' Club?  
I came up with the name of this semi-fictional club as a way to organize my thoughts (and blog posts) regarding the development of a new metaphor for post-WIMP applications and technologies, related specifically to natural user interfaces, natural user interaction design, and off-the-desktop user experience.  


Update, morning of 12/31/09:
Josh Blake, author of the blog "Deconstructing the NUI", posted Metaphors and OCGM  this morning.  It fleshes out post-WIMP concepts, addressing metaphors & interfaces.  The premise is that NUI metaphors will be less complex than GUI (WIMP) metaphors.    My feeling is that on the surface, this will hold true, especially for consumers/users and people creating light-weight applications and software widgets.  


Underneath the surface, where designers' and developers' brains spend more time than users' and consumers', things might be more complex.  Why? The technology to support the required wizardry is more complex.  With convergence, the creation of new technologies, applications, communication systems, and even electronic entertainment is now dependent upon the work and thinking of people from a wider range of disciplines.  Each discipline brings to the table a set of terms rooted in theory, and even research practices.


Update,  late afternoon, 12/30/09:
Richard Monson-Haefel's response to Ron George's "Part 2".  The concept of OCGM might be growing on him now... OCGM: George's Razor: "If Ron George can explain how OCGM encompasses Affordances and Feedback then I'll be convinced that OCGM works for NUI. Otherwise, I think OCGM is a great start that would benefit from an added "A" and "F"." -Richard   
  • OCGM relates to Occam's Razor.  It is helpful to read a bit about it if you are interested in the post-WIMP conversations. (The link is to an article from "How Stuff Works", via Richard Monson-Haefel.)
UPDATE 12/30/09 -- This post is part of a discussion between several different bloggers, and was written before Ron George wrote his latest post, Welcome to the OCGM Generation! Part 2, which I recommend that you read now, or around the same time as this post.  Since I'm not ready to write "Part 2" of this post, I tweaked what I had and added some links to a handful of my previous posts that touch on this and related topics.  The links can be found at the bottom of this post.




START HERE FOR THE "ORIGINAL" POST FROM 12/29 & 12/30/09:


Background
About a year ago I responded to a conversation between Johnathan Brill, Josh Blake, and Richard Monson-Haefel discussing "post-WIMP" conceptualization regarding natural user interfaces and interaction, otherwise known as NUI.  The focus of the discussion was on Johnathan's post, "New Multi-touch Interface Conventions". At the time, we were reading Dan Saffer's book, Designing Gestural Interfaces, and contemplating new ways that technology can support human interaction and activities in a more natural, enjoyable, and intuitive manner.  

A few days later, I shared some of the concepts from the discussion on a post on this blog, "Why "new" ways of interaction?".  The post includes video of Johnathan Brill discussing PATA, a post-WIMP analogue to assist with multi-touch/gesture based application development, which he describes as follows:
Places "Lighting, focus, and depth, simplified searching and effecting hyperlinked content."
Animation "Using animation to subtly demonstrate what applications do and how to use them is a better solution than using icons. Animations make apps easier to learn."
Things "Back in the days of floppy disks, objects helped us organize our content. This limitation was forced by arcane technology, but it did have one huge advantage. We used our spatial memory to help us navigate content. Things will help us organize content and manipulate controllers across a growing variety of devices."

Auras "Auras will help us track what we are tracking and when an interaction has been successful."
(For reference, I've copied some of my responses to the first discussion near the end of this post.)


A year later....
What has changed?  Everything post-WIMP has been covered like a blanket by the NUI word.  "NUI" now functions as a generic term for anything that is not exactly WIMP.  There is a sense of urgency now to figure out how best to conceptualize post-WIMP interfaces and interactions.  Newer, affordable technologies enable us to interact with friends and family while we are on the go. Netbooks, e-readers, smartphones, large touch-screen displays, interactive HDTV, and new devices with multi-modal I/O abound.  Our grandparents are on Facebook and Twitter from their iPhones.  Our world no longer requires us to be slaves to the WIMP mentality.


So what is the problem?
The technology has moved along so fast that application designers and developers have not had a chance to catch up. (The iPhone is an exception.)  The downturn in the economy has made it difficult for many to take the leap from traditional software or web development and gain new skill sets.  On top of it all, most of us over the age of 15 have been brainwashed from years of working within the constraints of WIMP. It doesn't matter if we are users, consumers, students, designers, or developers.


Even the folks least likely to have difficulty expanding into the post-WIMP world have had some difficulty.  If you've had training in HCI (Human-Computer Interaction), you were inadvertently brainwashed with the best. The bulk of the theory and research you contemplated was launched at a time when WIMP was king, even as the Web expanded. Many of the principles held dear to traditional HCI folks have been shattered, and no one has come up with a "theory of everything" that will cover all of the human actions and interactions that are supported or guided by new technologies.


The problem, in part, is that letting go of WIMP is hard to do, as illustrated by the following post from the Ars Technica website:  Light Touch:  A Design Firm Grapples with Microsoft Surface  (Matthew Braga, 6/29/09) "Ditching the mouse and keyboard means a whole lot more than just doing without two common peripherals.  As those who have worked with Microsoft Surface have found out, you have to jettison decades of GUI baggage and start with a whole new mindset...In actuality, few multi-touch gestures are really anything like what we experience in the physical world. There is no situation in which we pull on the corners of an image to increase its size, or swipe in a direction to reveal more content. So, in the context of real-world interaction, these types of gestures are far from natural...gestures should not only feel natural, but logical; the purpose that gestures like these serve, after all, is to replace GUI elements to the end of making interaction a more organic process."   (Be sure to read the comments.)

Now that the Surface is taking root in more places, and touch-screen all-in-one PCs and tablets are starting to multiply, more people are giving "NUI" some thought. Ron George, an interaction and product designer with experience working with Microsoft's Surface team, has contributed to the post-WIMP discussion and spent some time sharing ideas with Josh Blake, a .NET, SharePoint, and Microsoft Surface consultant for InfoStrat and author of the Deconstructing the NUI blog. The outcome of this discussion was Ron George's December 28th blog post, "OCGM (pronounced Occam['s Razor]) is the replacement for WIMP", and Josh Blake's post, "WIMP is to GUI as OCGM (Occam) is to NUI".   (Be sure to read the comments for both of these posts!)



OCGM (as conceptualized by Ron George)


Objects "are the core of the experience. They can have a direct correlation with something physical, or they can just be objects in the interface."


Containers "will be the “grouping” of the objects. This can manifest itself in whatever the system sees fit to better organize or instruct the user on interactions. They do not have to be, nor should they be, windows. They can be any sort of method of presentation or relationship gathering as seen fit."


Gestures "I went into detail about the differences in Gestures and Manipulations in a previous post [check it out for a refresher]. Gestures are actions performed by the user that initiate a function after its completion and recognition by the system. This is an indirect action on the system because it needs to be completed before the system will react to it."


Manipulations "are the direct influences on an object or a container by the user. These are immediate and responsive. They are generally intuitive and mimic the physical world in some manner. The results are expected and should be non-destructive. These are easily performed and accidental activations should be expected and frequent."

To illustrate a point regarding the validity of the OCGM analogy proposed by Ron George, Josh Blake shares the following video of a presentation from REMIX 2009, in which August de los Reyes, the Principal Director of User Experience for Surface Computing at Microsoft, briefly discusses the TOCA (Touch, Objects, Containers, and Actions) concept, suggested to replace the WIMP concept:

The video wouldn't embed, so go to the following link:


Predicting the Past: A Vision for Microsoft Surface
"Natural User Interface (NUI) is here. New systems of interaction require new approaches to design. Microsoft Surface stands at the forefront of this product space. This presentation looks at one of the richest sources for inventing the future: the past. By analyzing preceding inflection points in user interface, we can derive some patterns that point to the brave NUI world." 


The concepts outlined in the presentation are similar to Microsoft's Vision for 2019


Richard Monson-Haefel added his thoughts on the OCGM discussion in his recent blog post, "What is NUI's WIMP?"  Richard disagrees with the OCGM concept, as he feels it doesn't encompass some important interactions, such as speech/direct voice input.   He'd probably agree that NUI is NOT WIMP 2.0.



Post-NUI, Activity Theory, and Off-the-Desktop Interaction Design:
As I was reading the recent posts and discussions regarding NUI/OCGM, I also contemplated some of what I've been reading over my holiday break: "Acting with Technology: Activity Theory and Interaction Design", written by Victor Kaptelinin and Bonnie A. Nardi.  Victor Kaptelinin is the co-editor of "Beyond the Desktop Metaphor: Designing Integrated Digital Work Environments" (MIT Press, 2007), and has an interest in computer-supported cooperative work.  Bonnie Nardi brings to the IT world her background in anthropology, and is the co-author of "Information Ecologies: Using Technology with Heart" (MIT Press, 1999). The authors know what they are talking about. 


It is important to note that activity theory-based interaction design is viewed as "post-cognitivist", and is informed by some of what I studied in psychology, education, and social science years ago. Within the field of activity theory there are some important differences, which I'll save for a future post. 


Below are some concepts taken from the book. I am still mulling them over through the prism of NUI, post-WIMP, PATA, TOCA, OCGM, etc.  That's why there will be a "Part II", with specific examples.


"Means and ends, the extent to which the technology facilitates and constrains attaining user's goals and the impact of the technology on provoking or resolving conflicts between different goals


Social and physical aspects of the environment - integration of target technology with requirements, tools, resources, and social rules of the environment
Learning, cognition, and articulation,  internal vs external components of activity and support of their mutual transformations with target technology


Development -Developmental transformation of the above components as a whole" 
"Taken together, these sections cover various aspects of the way the target technology supports, or is intended to support, human actions".  (page 270)


I especially like the activity checklist included in the appendix of the book, as well as the concept of tool mediation. "The Activity Checklist is intended to be used at early phases of system design or for evaluating existing systems.  Accordingly, there are two slightly different versions of the Checklist, the "evaluation version" and the "design version".  Both versions are implemented as organized sets of items covering the contextual factors that can potentially influence the use of computer technology in real-life settings.  It is assumed that the Checklist can help to identify the most important issues, for instance, potential trouble spots that designers can address". (page 269)


"The Checklist covers a large space.  It is intended to be used first by examining the whole space for areas of interest, then focusing on the identified areas of interest in as much depth as possible...there is a heavy emphasis on the principle of tool mediation"  (page 270).


Other Thoughts
What is missing from this picture is a Universal Design component, something that I think holds up across time and technologies.  Following the principles of Universal Design doesn't mean dumbing down or relying on simplicity. It is a multi-faceted approach, and relies on constructing flexibility in use, one of the key concepts of Universal Design. I'd like to see this concept embedded in the post-WIMP conceptualization somehow. 


Because of my background in education/psychology/special education, I try to follow the principles of Universal Design for Learning (UDL) when I work on technology projects.  I've spent some time thinking about how the principles of UDL could be realized through new interaction/interface systems.   Although this approach focuses on the educational technology domain, it is important to consider, given that a good percentage of our population - potential users, clients, consumers - has a temporary or permanent disability of one kind or another.


Components of Universal Design for Learning:
Multiple Means of Representation
Provide options for perception
Provide options for language and symbols
Provide options for comprehension
Multiple Means of Action and Expression
Provide options for physical action
Provide options for expressive skills and fluency
Provide options for executive functions
Multiple Means of Engagement
Provide options for recruiting interest
Provide options for sustaining effort and persistence
Provide options for self-regulation
-Adapted from the UDL Guidelines/Educator Checklist, which breaks down the components into more specific details.


Note:  The concept of Universal Design for Learning shares historical roots with some of the work behind Activity Theory and Interaction Design. Obviously, there is still much to contemplate regarding OCGM and other permutations of post-WIMP concepts!   


Here are my comments to the discussion on Johnathan Brill's blog from January 2009:
Thoughts: I'm seeing a trend to "optimize" WIMP-type applications so they can be manipulated by gesture and touch. Kiosks, ATMs, and POS touch screens are examples of what I'm talking about. Touch-and-hold, two-fingered touch, and double-tap are just a slight transformation of the WIMP world-view, and in my opinion, are still WIMP (wimpy!). The mouse interaction "pretenders" are fine for using legacy productivity applications, OK in the short run.

For example, I have an HP TouchSmart, but I don't use the touch screen as often as I'd hoped. Try using Visual Studio to code something on a touch screen. There is so much more that can be done! I know from the touch-screen prototype/demos I've worked on in various classes that applications that support collaboration and decision-making are important, and not just for work.

What do people DO, really? First of all, we are social beings, most of us. Think of what we share and discuss with others, and think about what sort of interactions on a display might best correspond with this interaction. Here are some of the things I've been DOING recently that involved some sort of technology and communication/collaboration with others:



---Travel planning - I recently went on a cruise and, with various family members, selected activities I wanted to do on the ship as well as plan my shore excursions (a complicated process)


---Picture sharing - I came back from the cruise with lots of pictures that I uploaded to Flickr. Related to this process: picture annotating, tagging, choosing/comparing, and editing. It would be SO cool if I could use two sliders to enhance my pictures just so!


---Talking on the phone and responding to e-mail with friends and family members about the pictures, and what they wanted to see on Flickr- "you know, the ones of the dogs in Jamaica"... of which I had about 68!


---Financial planning with my husband. (I took info-viz last semester, so I know the possibilities are there.)


---Using the touch-screen to check-in at my eye-doctor's office: This was a user-unfriendly experience. Such a nice little screen. I was provided with a WIMPY PowerPoint-like interface which was confusing to use- and time consuming!


---Shopping at the new Super Wal-Mart: I asked the greeter if there was a map, and he said, "Food is over there, and the rest of the stuff is that way." Flat-panel displays were all over the store, but of course, they weren't interactive. I had no idea where anything was, and the few items I was looking at had no bar-code tags. There wasn't a clerk in sight.  Wal-Mart TV rolled on and on via the display above my head. If I could only harness the display and have the talking head answer my questions! I gave up on my shopping trip when I was in the facial lotion/potion section. Too many choices, and too much fine print to read.

---Shopping at the mall with my young-adult daughters... I'll have to hold my thoughts on that one for now!

Some suggestions:
I think the artists/designers (even dancers) who are interested in multi-touch and gesture interaction have some interesting things to consider. (I linked to some of my previous posts.)


Again:
I am still mulling things over through the prism of NUI, post-WIMP, PATA, TOCA, OCGM, etc.  So that is why there will be a "Part II".  With specific examples!


RELATED
Multimedia, Multi-touch, Gesture, and Interaction Resources


My thoughts:
2007 Letter to the Editor, Pervasive Computing
Useful Usability Studies (pdf)
2007 Blog Post
Usability/Interaction Hall of Shame (In a Hospital)
2008 Blog Posts

Emerging Interactive Technologies, Emerging Interactions, and Emerging Integrated Form Factors
Interactive Touch-Screen Technology, Participatory Design, and "Getting It"
An Example of Convergence: Interactive TV: uxTV 2008
2009 Blog Posts

Why "new" ways of interaction?
Microsoft: Are You Listening?  Cool Cat Teacher (Vicki Davis) Tries out Microsoft's Multi-touch Surface Table
Haptic/Tactile Interface:  Dynamically Changeable Physical Buttons
The Convergence of TV, the Internet, and Interactivity:  Update
UX of ITV:  The User Experience and Interactive TV (or Let's Stamp Out Bad Remote Controls)
Digital Convergence and Interactive Television;  Boxee and Digital Convergence 

ElderGadget Blog: Useful Tech and Tools


Other People's Thoughts
Ron George's blog, OCGM (pronounced Occam['s Razor]) is the replacement for WIMP, 12/28/09
Ron George: Welcome to the OCGM Generation! Part 2 
Stephen, Microsoft Kitchen: OCGM, A New Windows User Experience
Richard Monson-Haefel's blog, Multi-touch and NUI:  What is NUI's WIMP?
Richard Monson-Haefel:  OCGM: George's Razor
Josh Blake's blog,  Deconstructing the NUI: WIMP is to GUI as OCGM (Occam) is to NUI
Bill Buxton: Gesture Based Interaction (pdf) (Updated 5/2009)
Bill Buxton: "Surface and Tangible Computing, and the "Small" Matter of People and Design" (pdf) - ISSCC 2008
Dan Saffer, Designing Gestural Interfaces: Touchscreens and Interactive Devices
Dan Saffer, Designing for Interaction 
Mark Weiser, The Computer for the 21st Century, Scientific American, September 1991
Touch User Interface:  Readings in Touch Screen, Multi-Touch, and Touch User Interface
Jacob O. Wobbrock, Meredith Ringel Morris, Andrew D. Wilson, User-Defined Gestures for Surface Computing, CHI 2009, April 4-9, 2009, Boston, Massachusetts, USA.