Interactive Touchscreens
Interactive Table
Mediateam Interactive Multitouch Table from mediateam on Vimeo.
Wall Screen
Mediateam Interactive Multitouch Screen from mediateam on Vimeo.
Video is from Mediateam
-via NUITEQ
I don't have much information about Mediateam. I think it might be MediaTeam Oulu, but I'm not sure. MediaTeam Oulu has quite a bit of research that focuses on ubiquitous computing.
Focused on interactive multimedia and emerging technologies to enhance the lives of people as they collaborate, create, learn, work, and play.
Jan 20, 2010
Jan 18, 2010
Facebook Settings and Privacy: Jeff Elder's post. "Walk through Facebook Privacy Settings". A must-read & do!
I use Facebook at least once a day to keep up with relatives, friends, colleagues, colleagues of colleagues, and interest groups. In many situations, I find that Facebook is much more efficient than relying on e-mail, Twitter, and RSS feeds. Sad to say, frequent use of Facebook without regularly inspecting and modifying privacy settings (and other settings) will result in exposing parts of your life to the world, seemingly without your informed consent.
One person I rely on for good advice regarding privacy issues and social networking sites is Jeff Elder. His recent blog post, "Walk through Facebook privacy settings," is something I recommend members of Facebook read and follow. It might take up to 30 minutes of your time, but the time you spend will be important.
Through blogging, so much of "me" is out there, and this is the case for many others. Even so, it is important for me to have control over what Jeff Elder calls "the giant peephole". What people can see through the peephole of Facebook changes, often in the periphery of our awareness, and as a result, we might be sharing more information to others, including marketers, than we would like.
(Jeff Elder is a longtime Charlotte Observer columnist who studied social media on a Knight fellowship at Stanford University and blogs about social media and networking for folks in the Charlotte, NC region.)
Posted by
Lynn Marentette
Special Effect's on-line Accessible Gamebase network, supporting accessible games for young people with disabilities.
"SpecialEffect is a charity dedicated to helping ALL young people with disabilities to enjoy computer games. For these children, the majority of computer games are simply too quick or too difficult to play, and we can help them and their parents to find out which games they CAN play, and how to adapt those games that they can't."
Here is a video that tells the story of how SpecialEffect created a game for a young woman, Helen, with a motor disability. Helen operates the computer with her eyes to play against her brother, who uses the touch-screen interface:
If you are interested in supporting accessible games, consider joining Accessible Gamebase, a new on-line community maintained by Special Effect. Below is the message I recently received from SpecialEffect regarding this opportunity to connect others regarding accessible games:
"Have you been wondering just what it is you could do to get involved with SpecialEffect? Well, that question is answered today with the launch of SpecialEffect's 'Accessible Gamebase.'
- It deals with all access devices for all physical and learning disabilities - from switch users to eye controllers.
- It's not just a place for gamers but a place where everyone - carers, gamers, developers and, of course, end-users themselves - can both share information and try out the latest games whether they are seasoned gamers or absolute beginners.
- It has the potential to be a great training tool, too, and we've already put up some example videos to illustrate how the games are played.
- As it's based on a social networking model, anyone can easily join up and share information.
- It tells you not only how to adapt mainstream games for use by everyone but also provides information on which special games are available - and for whom.
Go to http://www.gamebase.info to sign up and Be a Part of It!"
Posted by
Lynn Marentette
Jan 16, 2010
For a smile: Gain Detergent Container Looks Like Don Norman's User Unfriendly Teapot

The designers who created the Gain detergent container below obviously didn't read "The Design of Everyday Things."
No matter which way you try to pour the darn thing, it still makes a BIG mess. The "spout" is really an air vent, I've been told. More info on The World Is My Interface blog.
Posted by
Lynn Marentette
Big Data: What are the possibilities for collaborative interactive information visualization? (Video interviews of Roger Magoulas, director of research at O'Reilly)
When I return to graduate school (hopefully I'll have the means to attend full-time), I want to flesh out my ideas for an "interactive multi-dimensional multi-media multi-user timeline" for use on interactive multi-touch/gesture tables and displays. Although I've limited my work to a prototype of a template, I know that this concept won't work unless the application can incorporate an efficient means of handling large volumes of data, as well as data in various formats.
I want this template to be useful to people in a variety of contexts, such as students studying world history and humanities, education administrators looking at educational data over time, producers and viewers of interactive documentary programs (think interactive TV), the health industry, urban planners, the military, serious games, etc.
One of my stumbling blocks is how all of the data would be stored and analyzed. What I learned a few years ago in my computer classes simply won't work.
So now what?! I think that Roger Magoulas, the director of research at O'Reilly, has some good things to say about the critical problem of handling what he calls "Big Data". Here are a few videos that I think are worth watching.
The Future of Work
Part One
Next Device (SmartPhones, netbooks, creation & consumption factors - supporting usability in multiple contexts)
YouTube Series: O'Reilly Media
Big Data: Technologies & Techniques for Large-Scale Data (Emphasis on experimental approach) Part I
Part II (Discusses new forms of databases and the use of parallel processors to handle Big Data)
Part III Key Technology Dimensions
Part IV, Focus on hardware- Solid state disks, new data structure called "triadic continuum" which handles real-time data and ongoing probability estimates of data.
I would be happy to hear from anyone who is working on a project similar to the one I'm working on as a "hobby".
RELATED
Triadic Continuum
"Phaneron, KStore, Knowledge store, or simply K, is a dynamic data model that is based on the cognitive theory of C. S. Peirce. Phaneron efficiently organizes data into a unique, compact, interconnected, and fully-related data model. Phaneron is constructed using the Triadic Continuum."
For those of you who like visual representations of geeky-techy concepts, here are a few visuals and related descriptions of KStore fundamentals from the Triadic Continuum website:
"The KStore data model is constructed using the basic triad. For example, the event sequence 'cat' would be recorded as shown in 'a sequence' below. A new level of nodes is created above a lower level of nodes as a result of the triadic process. In this case the lower level of nodes contains a node for each character of the alpha-numeric character set and the new nodes reference the lower level nodes to record the sequence 'cat'. Each sequence is initialized with a reference to a BOT (beginning of thought) and terminated with an EOT (end of thought) reference."

"The data set above was used to create the K structure below with the lowest level that contains the alpha-numeric character set, the second level is created to record sequences that represent the field variables. Then a third level is created using the field variables of the second level to record the record sequences. Records recorded in this K structure reuse the field variable nodes so that these field variable sequences never have to be recorded more than once. This is just one of the attributes of a K structure that makes it very efficient." -Triadic-continuum.com
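To make the node-reuse idea above more concrete, here is a rough Python sketch of a two-level structure along the lines the quote describes: field-variable values become shared nodes, and each record is a chain of references to those nodes. This is my own simplification, not the actual KStore/Phaneron implementation, and the sample field values are invented (the original dataset figure isn't reproduced here):

```python
# Toy sketch (my own simplification, not actual KStore/Phaneron code) of
# the level idea quoted above: field-variable values become shared nodes,
# and each record is a chain of references to those shared nodes, so a
# repeated field value is never stored more than once.

class KSketch:
    def __init__(self):
        self.fields = {}    # field-variable level: value -> shared node
        self.records = []   # record level: chains of shared field nodes

    def field_node(self, value):
        # Reuse the node if this field value was already recorded.
        return self.fields.setdefault(value, {"value": value})

    def record(self, *values):
        chain = [self.field_node(v) for v in values]
        self.records.append(chain)
        return chain

# Hypothetical records, just to show the sharing:
k = KSketch()
k.record("Bill", "Monday", "103")
k.record("Tom", "Monday", "103")
k.record("Sue", "Tuesday", "103")

# "Monday" appears in two records but exists as one shared node:
print(k.records[0][1] is k.records[1][1])   # True
print(len(k.fields))                        # 6 distinct values across 9 fields
```

The point of the sketch is only the reuse property the quote emphasizes: three records reference nine field slots, yet only six nodes exist.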

Mazzagatti, J.C. (2006) The Potential for Recognizing Errors in a Dataset Using Computer Memory Resident data Structure Based on the Phaneron of C.S. Peirce (doc)
Personal Note:
Due to the economic downturn and its impact on my family (two kids in college), I returned to work full time in mid 2008. I have a very busy day job as a school psychologist, working at two high schools as well as a program for students with multiple, severe disabilities, including autism. This has limited my ability to work on my project.
Posted by
Lynn Marentette
Jan 14, 2010
Shared computing with Windows MultiPoint in classrooms: Why not use Mouse Mischief (beta version)?
I came across this post on Long Zheng's I Started Something blog:
Windows MultiPoint Server -- a multiseat computing solution worthy for the home?
Long Zheng points out that Windows MultiPoint Server is an outgrowth of the Multi-Mouse project, in which multiple students can work together to interact with content on a PC screen or a projected PC screen.
The picture below shows how a Windows MultiPoint server can work in a classroom.

-Microsoft
I'm not so sure I like the setup of the MultiPoint 2010 system in the above picture. The students all have huge monitors in front of them, so the opportunities for shared or collaborative interaction are limited. I like the multi-mice concept better, since the children can really be together.
Mouse Mischief
Neema Moraveji, of the Stanford University HCI group, has videos and information about the multiple-mice-related work on his project page:
Teachers provide content using an add-on for PowerPoint that allows for simultaneous input from multiple mice. The teacher can set up limits regarding how the mice are used by the students.

I tried this with a few students during the 2008-09 school year, and they liked it. Since I serve more schools this current year, I haven't had the opportunity to explore this further. I plan to download a newer version and try it out soon.
Good news!
The free beta version of Microsoft Mouse Mischief from the Microsoft website was recently released: Microsoft Mouse Mischief: Make your PowerPoint presentations interactive
Below is information about Mouse Mischief from the Microsoft website:
"Mouse Mischief is a tool that Microsoft makes available free of charge, and that allows teachers to work with Microsoft Office PowerPoint to make interactive presentations. With Mouse Mischief, teachers can add multiple choice questions to their presentations, and large groups of students can answer the questions using mice connected to the teacher’s PC."
"Mouse Mischief not only gives students the ability to engage, have fun, and learn in new, interactive ways, but it also provides teachers with a more affordable alternative to purchasing expensive student response systems, commonly known as clickers, by letting students use affordable wired or wireless USB mice that their school already own."
"It’s simple. After Mouse Mischief is installed, the Mouse Mischief toolbar will appear as part of the PowerPoint ribbon when a new or old PowerPoint presentation is opened. This intuitive Mouse Mischief toolbar lets teachers add interactive elements such as multiple-choice question slides with a single click. When the teacher opens a Mouse Mischief enabled presentation, students in the classroom can answer each question by clicking it with their uniquely designed mouse cursor. Once the students have selected their answers, the teacher can display the correct answer...The best part? Mouse Mischief gives teachers the option to have their students answer questions individually or as part of a team, in order to encourage both competition and collaboration in the classroom...Special teacher controls allow the teacher to disable student’s mouse cursors, navigate between slides, set timers, and more. With Mouse Mischief the teacher is always in control, whether there are two or 25 cursors on the screen."
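The individual-versus-team answering that Microsoft describes boils down to grouping each mouse's answer before counting. Here is a small Python sketch of that idea; it is a conceptual simulation only, not Mouse Mischief code, and the mouse ids and team names are made up:

```python
from collections import Counter

def tally(answers, teams=None):
    """Count multiple-choice answers per mouse, optionally grouped by team.

    `answers` maps mouse id -> chosen option; `teams` maps mouse id -> team.
    With no `teams` mapping, every mouse is counted individually.
    """
    if teams is None:
        return Counter(answers.values())          # individual mode
    by_team = {}                                  # team mode
    for mouse, choice in answers.items():
        by_team.setdefault(teams[mouse], Counter())[choice] += 1
    return by_team

# Three hypothetical student mice answering a multiple-choice slide:
clicks = {"mouse1": "B", "mouse2": "B", "mouse3": "A"}
print(tally(clicks))                              # Counter({'B': 2, 'A': 1})
print(tally(clicks, teams={"mouse1": "red", "mouse2": "red", "mouse3": "blue"}))
```

In team mode, the same three clicks come back grouped by team, which is all the "competition and collaboration" toggle really requires.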
If you are interested in developing applications for Mouse Mischief, you can download the Windows MultiPoint Software Development Kit 1.5. This kit allows developers to enable up to 25 mouse devices to work at the same time on one computer. It was released on 1/12/2010 and can be downloaded from the Microsoft website.
Here's a plug from Microsoft about the benefits of the MultiPoint Mouse SDK:
"Applications built on the MultiPoint Mouse SDK can provide teachers with tools to gain real-time assessment information to help them provide a personalized learning experience for each of their students...Applications built on the MultiPoint Mouse SDK can increase student learning comprehension through interactive methods. MultiPoint Mouse applications can further a student’s engagement, collaboration, interaction and overall cognitive and social skills within a classroom or lab environment."
Here is the information about the MultiPoint SDK:
"The Windows MultiPoint Mouse SDK version 1.5 is a development framework that allows developers to build applications that enable up to 25 individual mouse devices to work simultaneously on one computer. As a developer, you can use the MultiPoint Mouse SDK to create educational applications that take advantage of collaborative learning methodologies. In schools with minimum infrastructure, MultiPoint Mouse greatly enhances the shared computing experience. Initial pilot programs conducted in India by Microsoft Research show that for certain subjects, MultiPoint Mouse can enhance learning when compared to a 1:1 computing scenario."
"MultiPoint Mouse should not be confused with applications that allow multiple people to control multiple mouse devices to perform standard operations. In those cases, the system traditionally cannot identify which mouse has made which changes, and there is normally no option for controlling the permissions of the various devices. MultiPoint Mouse is a development framework that enables developers to build applications to take advantage of multiple mouse devices, including the ability to handle mouse clicks from different users independently and to assign different permissions to each mouse. For example, the mouse belonging to a teacher in a learning application might need additional permissions to control the activity."
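The per-mouse permission idea in that quote can be pictured as dispatching every click together with its device id and checking that device's rights. Below is a minimal Python sketch of the concept; it is not the real MultiPoint Mouse API (the SDK is a .NET framework with its own class names), and every identifier here is illustrative:

```python
# Conceptual sketch of per-device click dispatch with permissions,
# mirroring the idea described above. Not the actual MultiPoint SDK.

class MultiDeviceDispatcher:
    def __init__(self):
        self.permissions = {}   # device id -> set of allowed actions

    def register(self, device_id, allowed):
        self.permissions[device_id] = set(allowed)

    def click(self, device_id, action):
        # Every click carries its device id, so the system knows which
        # mouse made which change and can enforce that mouse's rights.
        if action in self.permissions.get(device_id, set()):
            return f"{device_id}: {action} ok"
        return f"{device_id}: {action} denied"

d = MultiDeviceDispatcher()
d.register("teacher", {"answer", "next_slide", "disable_cursors"})
d.register("student7", {"answer"})
print(d.click("student7", "answer"))      # student7: answer ok
print(d.click("student7", "next_slide"))  # student7: next_slide denied
```

The design choice worth noticing is that identity travels with every event; that is exactly what distinguishes this model from ordinary multi-mouse setups, where the system cannot tell which mouse did what.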
The MultiPoint SDK is compatible with Windows 7, Windows Vista Service Pack 2, and Windows XP Service Pack 3, and requires the .NET Framework version 3.5 SP1 or higher, Microsoft Expression Blend (you can use the trial version), Visual Studio 2008 or 2010 (you can use the free Express version), 2-4 mouse devices for testing, and USB ports on the computer.
Other thoughts:
Schools with money for advanced technology tools have purchased SMART Tables, and a few have Microsoft Surface tables. They are expensive, and they don't offer a range of form factors to choose from.
I sort of like the concept behind the multi-user poker table that was in the casino on my cruise ship:
Near the poker table is a display that shows the action from the poker game. In classroom settings, this display could be an interactive whiteboard, a projected display, or even a flat-panel screen.
There is a need for tables of different shapes in the schools. Speech pathologists, school psychologists, counselors, and others who provide guided group activities in the schools could use a multi-user table that follows this tried and true configuration:

I'd love to hear from anyone who is using MultiPoint or Mouse Mischief, and also from anyone who is experimenting with various multi-touch table form factors.
Related:
Multiple Mice for Computers in Education in Developing Countries (pdf)
Posted by
Lynn Marentette
Jan 12, 2010
Windows Embedded Intelligent Digital Signage for 2011.
My comments will be forthcoming.
Posted by
Lynn Marentette
Jan 4, 2010
Thoughts about technology on a cruise ship, and other reflections...
It is January 4, 2010, and I am enjoying my Caribbean cruise trip.
I’m a little disappointed that technology on cruise ships has not moved forward as I’d hoped over the past few years. On my ship, which is less than three years old, Wi-Fi is available in each stateroom, in addition to the common areas. This is a good thing, but it is very expensive. The pay-as-you-go rate is 75 cents a minute! If you have a 3G iPhone or smartphone, I’m told you’ll have to pay outrageously high fees to connect to the internet through the ship’s connection.
I was pleasantly surprised by some of the digital displays on the ship, especially the “show-reel” of the beautiful destination points and exciting activities that everyone looks forward to when going on a cruise. I was also impressed with the digital touch-screen poker table in the casino, even if I don't play poker.
I even liked some of the digital signage that were basically slide show posters of nice vacation pictures.
My biggest disappointments?
- The large touch-screen flat-panel display that served as an interactive shore excursion kiosk. It was tucked away in a poor location, and it didn’t make any sense!
- The interactive TV experience, specifically the shore excursion selection process. This experience made me hate TV remote controls more than ever!
- The cruise ship wayfinding system. Arrrggghh.
I guess I shouldn’t have had such high technological expectations for my trip. I’m on a Carnival cruise ship, and I know that the line is owned by the same company that owns the Holland America ships. From previous cruises on Holland America ships, I know that they are more upscale than Carnival. I guess I got too excited when I recently learned that a few Holland America ships provide cruise-goers with the magic experience of Microsoft Surface in their lounges, and also adopted the Windows 7 operating system. On the Carnival Freedom, things aren’t quite so advanced.
Why is this important to me?
- I’m interested in studying how technology can facilitate collaboration, communication, information-gathering, and decision making in public spaces, and, since I have plenty of cruise ship travel experience, in cruise ship spaces in particular.
- I’d like to follow up on the work I did on a student project. Three years ago, I did a lot of people-watching during a cruise-ship vacation, which inspired the topic of my Human Computer Interaction team project during the semester after my trip. I took another cruise during that semester, which further informed my thinking about this topic. Since then, I’ve been on 4 cruises.
- I think that much of the information I obtain from my observations related to travel experiences, including cruise ships, can inform work in other related domains, such as shopping malls, museums, historical points of interest, libraries, airports, bus, railroad, and subway terminals, parks and squares, and so forth. I also think this work can inform educational applications and simulations, such as 3D “Virtual Field Trip” games, following Universal Design for Learning principles.
- Acting With Technology: Activity Theory and Interaction Design (Victor Kaptelinin and Bonnie A. Nardi)
- Thoughts on Interaction Design (Editor: Jon Kolko)
UPDATE TO COME!
Posted by
Lynn Marentette
Jan 1, 2010
Digital Out of Home (DOOH): Screens Large and Small at the Mall (and some touch interactive Coke machines!)
I was at the Southpark Mall in Charlotte yesterday and noticed that screens of all sizes were everywhere I went. I happened to have my little HD video camera with me and thought I'd share what came across my path.
Most of what I saw wasn't too innovative or interactive. Many of the smaller video displays were located on the market carts in the main traffic areas of the mall. Scattered about the mall are cozy living-room-like areas, with comfy couches, WiFi access, and in one spot, a few large-screen HD televisions, perfect for watching sports or the news while other members of your social/family network do some serious shopping. I especially liked the infomercial about North Carolina's beaches around Wilmington.
I wasn't too excited about the mall's information display, which shows what looks like a version of the Southpark Mall website (at :44 in the first video clip). Located about 20 feet from a static mall directory, it didn't attract a single soul to look at the screen or use the keyboard and mouse to find out more about what the stores in the mall had to offer. The static directory, on the other hand, had groups of people looking at it the whole time I observed. (I added screenshots and pictures of the keyboard-and-mouse display near the end of this post.)
At the end of the first video clip, you'll see a new interactive touch-screen Coke vending machine; unfortunately, the one featured there is out of order.
Not to worry. I stopped to rest in another area of the mall, and right in my line of sight was another Coke machine, where a young man was trying to figure out how to get a Coke out of it. It took him 93 seconds. That might not seem like too long, but if you watch the second video, you'll see that it was almost painful to watch.
As the young man finished purchasing his soda, a family with two young children was nearby and figured out that the display wasn't just for ads. The second video clip has a few shots of the younger child playing with the touch screen, and later on, his dad. The little guy probably would have played with the touchable spinning Coke bottle for a long time! The dad commented, "They should have something like this for the home!", and mentioned that his kids liked the SMARTboards at school.
In my opinion, the interactive Coke machine didn't know what it wanted to be. An eye-catching, attention-grabbing infomercial? A useful interactive information display? A fun toy to touch and play with? A system designed to keep you from quickly quenching your thirst, the better to get the ads into your brain?
Marketers, designers, and developers, listen to this:
A lot of people still do not know about larger interactive touch screens. Even if they have an iPhone!
I told the parents about touch-enabled all-in-one PCs, touch-screen netbooks/laptops, and the rumor that Apple might come out with a touch-screen tablet. They'd never heard of such things. This mall is very upscale, and the families that come to shop there have money. They can still buy shoes at Nordstrom and drink specialty coffees, and of course, crowd around in the Apple Store.
The Videos
Note: The participants in the following two videos gave permission for me to video. The videos were not staged.
A Young Guy and an Interactive Coke Machine
A Kid and an Interactive Coke Machine
RELATED
Below is a picture of the web-connected directory at the Southpark Mall from about a year ago. No one used it then, and at the time, the display was not working. If you look closely, you'll see the keyboard-and-mouse setup. Although this display provides a lot of information about the mall via the web, it does not meet the needs of most shoppers, who travel in pairs, groups, families, and extended families during the holiday season.
Below are two screen shots of the SouthPark website, which can be accessed by using the keyboard and mouse on the information display, as I previously mentioned.
In my opinion, there is enough screen space on the touch-screen Coke machine to provide interactive information about the mall. Ripping content from the mall's website won't do, since it is text-based, boring, and oh-so WIMP-y!
Better yet, the mall should transform the large static directories into something useful, keeping in mind that most of the time they will need to support two or more people deciding where to go and what to do while they are at the mall. Beam a mini-map of the mall to the shoppers to use on their iPhones/smartphones, and give them a shopping advisor app while you are at it.
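To make the group-decision idea concrete, here is a toy Python sketch (store names and categories are made up for illustration): it ranks stores by how many members of the shopping group have at least one interest the store satisfies, which is the kind of support a shared directory screen or advisor app could offer.

```python
def plan_stop(directory, interests_by_shopper):
    """Rank stores by how many members of the group have at least one
    interest the store satisfies -- the group-decision support a shared
    mall directory screen could offer."""
    def score(store):
        categories = directory[store]
        return sum(1 for wants in interests_by_shopper.values()
                   if wants & categories)
    ranked = sorted(directory, key=score, reverse=True)
    return [store for store in ranked if score(store) > 0]

# Hypothetical directory data for illustration only.
directory = {
    "Dept Store": {"shoes", "electronics"},
    "Gadget Shop": {"electronics"},
    "Food Court": {"food"},
}
# "Dept Store" ranks first: it satisfies both shoppers at once.
picks = plan_stop(directory, {"mom": {"shoes"}, "kid": {"electronics", "food"}})
```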
Psssst....
There are too many talking head screens in the mall. Make them interactive, add some value, and see what might happen, especially if you want to target reluctant shoppers like myself.
For fun:
I Want the Giant iPhone! (Short Glimpse of the Apple Store)
RELATED
Coca-Cola Testing Interactive Vending Machines Patricia Odell, Promo, 4/2/09:
"Shoppers will come upon the units in high traffic locations and can use the large format touch screen displays to interact with and buy Coca-Cola products. People will also be learning about specials and promotions available at the mall and will be able to purchase the beverages using Simon Giftcards.
"The flat screen is set in the vending machine doors and is divided into three sections. The machines feature functionality similar to an iPhone. For example, the mid section of the screen is where people can buy drinks. Clicking on a product lets the shopper rotate the bottle to see the label. The top and bottom sections of the screen are used for running commercials for Coke and other Coca-Cola brands and for Simon Mall promos... This is just preliminary to see how the functionality goes," Coca-Cola spokesperson Ray Crockett said. Next-gen models of the machines will offer mobile phone downloads in the form of music files, ringtones and wallpaper, along with cashless payment and more, Coca-Cola said... The machines were first introduced at the Summer Olympics last year in Beijing and on the Simon dTour... "The new machines incorporate sight, sound and motion video to take the vending experience from transaction to true interaction," Anthony Phillips, global brand manager at Coca-Cola, said in a release. "We wanted the machines to be eye-catching in a way that would turn heads and command attention." The new vendors were developed by The Coca-Cola Co. in partnership with Samsung and interactive marketing agency Sapient.
Sapient Interactive Services
Sapient Interactive Mobile Group
Update: Some links to Bill Gerba's blog posts:
Digital Signage Screen Placement: Modeling Consumer Behavior http://bit.ly/4oXPWM
Digital Signage Screen Placement: Angle, Height and Text Size http://bit.ly/7hG6NZ
Making great digital signage content: A quick reference guide http://bit.ly/74rNL5
Posted by
Lynn Marentette
Apple iSlate, iTablet, MacBook Touch: Will it support gesture interaction & haptic feedback?
Soldier Knows Best produces great tech-oriented videos. Here's his spin on all of the rumors about the possibility of the Apple iSlate.
I just inherited a 10-month-old MacBook, installed Snow Leopard, and upgraded to iLife 2009. I'm so used to touching the screen on my HP TouchSmart PC that I found myself touching my MacBook screen from time to time, especially when I was editing video clips in iMovie. I think the latest version of iMovie was designed with touch/gesture interaction in mind!
From what I can tell, Snow Leopard and iLife 2009 will be able to support a range of touch interactions, if not gesture input as well.
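From a developer's point of view, the canonical multi-touch manipulation, pinch-to-zoom, boils down to simple geometry: the scale factor is the ratio of the current distance between two fingers to the distance when the gesture began. A minimal Python sketch of that arithmetic (the function names are my own illustration, not any Apple API):

```python
import math

def touch_distance(p1, p2):
    """Euclidean distance between two touch points, given as (x, y) tuples."""
    return math.hypot(p2[0] - p1[0], p2[1] - p1[1])

def pinch_scale(start_touches, current_touches):
    """Scale factor implied by a two-finger pinch: the ratio of the
    current finger spread to the spread when the gesture began."""
    start = touch_distance(*start_touches)
    current = touch_distance(*current_touches)
    if start == 0:
        return 1.0  # degenerate start; no meaningful scale
    return current / start

# Fingers start 100 px apart and spread to 150 px: zoom in by 1.5x.
scale = pinch_scale([(0, 0), (100, 0)], [(0, 0), (150, 0)])
```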
Here are some rumors that have been conjured up and distributed on the web:
The Exhaustive Guide to Apple Tablet Rumors (Matt Buchanan, Gizmodo, 12/26/09)
Apple Expects to Sell 10 Million Tablets in First Year (Pete Cashmore, Mashable, 1/1/10)
iGuide Emerges as Another Potential Apple Tablet Name (Adam Ostrow, Mashable, 12/29/09)
The Tablet (John Gruber, Daring Fireball, 12/31/09)
"And so in answer to my central question, regarding why buy The Tablet if you already have an iPhone and a MacBook, my best guess is that ultimately, The Tablet is something you’ll buy instead of a MacBook."
Apple Owns iSlate.com Domain: The Mystery Deepens (Dan Nosowitz, Gizmodo, 12/25/09)
What is the Ultimate Role of the Apple Tablet? (Arnold Kim, MacRumors, 12/31/09)
iPad, iTablet, iSlate, or MacTab (Cruz Miranda, 8/31/09)
Why am I excited about this?
I want to see if the iSlate would be good for collaborative educational games, assistive technology, augmentative communication, and alternative assessment for students who have multiple/severe disabilities.
That is a huge goal, so I'm going to start simple. I am not giving up on Windows 7 multi-touch programming. I just have an urge to find out for myself what works, what doesn't, and what platform works best for specific "personas" and "scenarios".
I plan to make a little app for the iPhone/iPod Touch, based on "Shoes Your Battles," a game I made for a game class several years ago. I think I'd like to make this game for the Apple iTablet!
The first version of Shoes Your Battles was created with Game Maker, and the second version was in Flash, back in the days of ActionScript 2.0. I started on a third version, one that could be used as an advergame for people to play while shopping for shoes during shoe sales, but it never got past the planning stage.
The idea for the third version came to me when my elderly aunt came to visit from out of town and just had to go shoe shopping on the day after Thanksgiving. It was extremely difficult to figure out what was on sale, what things actually cost once the previous markdowns were taken off, and which sale items had prices that hadn't been marked down yet.
Adding to the confusion was the fact that there were few salespeople and herds of women. It was madness. There were pairs of shoes in the wrong boxes, boxes of shoes and no way to quickly find out the true prices! We were in the shoe department for hours, and it wasn't as fun as you'd think. If you've been in a crowded women's shoe department to buy that special pair of shoes during a fantastic shoe sale, you'll know what I mean.
At any rate, I wanted my little "Shoes Your Battles" game to help with this dreadful scenario by somehow incorporating a shoe shopping advisor and a means to figure out the REAL sale prices of those awesome, to-die-for shoes. Unfortunately, the technology wasn't where it needed to be at the time. I am always dreaming up things that are too d--- futuristic!
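The "real price" problem is just chained percentage markdowns, and any shopping advisor app would need exactly this arithmetic. A minimal Python sketch (my own illustration, not code from the game):

```python
def final_price(original, markdowns):
    """Apply successive percent-off markdowns the way stores stack them:
    each markdown comes off the already-reduced price, not the original."""
    price = original
    for pct in markdowns:
        price *= (1 - pct / 100.0)
    return round(price, 2)

# $80 shoes, 30% off, then an extra 25% off at the register:
# 80 * 0.70 * 0.75 = 42.00, not the 80 * 0.45 = 36.00 many shoppers expect.
sale_price = final_price(80, [30, 25])
```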
Four years later, we have iPhones and smartphones and 3G internet and RFID and ubiquitous WiFi and the Wii and more women who like to play games and...and... The time is ripe.
Apple better come up with the iSlate!
SOMEWHAT RELATED
Thinking about post-WIMP HCI
It is always important to re-visit wisdom from the past when thinking about new interfaces and means of technology-supported human interaction. Here are a few resources from the field of Human-Computer Interaction found on the HCI Vistas website:
The Prism of User Experience -A nice graphic metaphor to help the conceptualization process. (Denish Katre, 2007)
Journal of HCI Vistas: Multi-disciplinary Perspective of Usability and HCI
Personas as part of a user-centered innovation process Lene Nielsen, 1/08 HCI Vistas Vol-IV
10 Steps to Personas (Lene Nielsen, 7/07, HCI Vistas Vol-III)
Posted by
Lynn Marentette
Labels:
accessible games,
Apple,
apps,
creative programming,
design,
games,
gizmodo,
iGuide,
iTablet,
mac,
Macbook Touch,
multi-touch,
NUI,
post-WIMP,
product,
rumors,
Soldier Knows Best,
touch
No comments:
Dec 31, 2009
Josh Blake's Nice Multi-touch and Natural User Interface Applications for Surface (Cross-Post)
Information from Josh's YouTube channel:
"This is a video of some of the cool multi-touch and Natural User Interface (NUI) applications I designed and developed for Surface and Windows 7." The InfoStrat.VE map control for WPF and Surface is available for free at http://virtualearthwpf.codeplex.com.
I especially like the moving ring-menu concept, as it facilitates smoother collaboration between people on an interactive table or surface, where flexible orientation control is important.
At 3:15, the demonstration of Josh's ink-shape recognition begins. This is a feature that would be great to incorporate in my applications for children with disabilities who have some fine-motor limitations.
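To give a sense of how even a crude version of shape recognition works, here is a toy Python heuristic (my own sketch, not Josh's implementation): it calls a stroke "closed" when its endpoints nearly meet relative to the stroke's total path length.

```python
import math

def classify_stroke(points):
    """Toy ink-shape heuristic: call a stroke 'closed' (circle-ish) when its
    endpoints nearly meet relative to its total path length, else 'open'
    (line-ish). Real ink recognizers are far more robust; this only
    illustrates the basic idea."""
    if len(points) < 2:
        return "dot"
    path_len = sum(math.dist(points[i], points[i + 1])
                   for i in range(len(points) - 1))
    if path_len == 0:
        return "dot"
    gap = math.dist(points[0], points[-1])
    return "closed" if gap / path_len < 0.2 else "open"
```

For children with fine-motor limitations, the 0.2 tolerance could be widened so that shaky, not-quite-closed strokes still count as shapes.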
Josh's Blog: Deconstructing the NUI
Josh's Recent Post about post-WIMP concepts:
Metaphors and OCGM
Josh works at InfoStrat
Posted by
Lynn Marentette
The Post-WIMP Explorers' Club: Update of the Updates, Morning of 12/31/09
What is the Post WIMP Explorers Club?
I came up with the name of this semi-fictional club as a way to organize my thoughts (and blog posts) regarding the development of a new metaphor for post-WIMP applications and technologies, related specifically to natural user interfaces, natural user interaction design, and off-the-desktop user experience.
Update, morning of 12/31/09:
Josh Blake, author of the blog "Deconstructing the NUI", posted Metaphors and OCGM this morning. It fleshes out post-WIMP concepts, addressing metaphors & interfaces. The premise is that NUI metaphors will be less complex than GUI (WIMP) metaphors. My feeling is that on the surface, this will hold true, especially for consumers/users and people creating light-weight applications and software widgets.
Underneath the surface, where designers' and developers' brains spend more time than users' and consumers', things might be more complex. Why? The technology to support the required wizardry is more complex. With convergence, the creation of new technologies, applications, communication systems, and even electronic entertainment is now dependent upon the work and thinking of people from a wider range of disciplines. Each discipline brings to the table a set of terms rooted in theory, and even research practices.
Update, late afternoon, 12/30/09:
START HERE FOR THE "ORIGINAL" POST FROM 12/29 & 12/20/09:
Background
About a year ago I responded to a conversation between Johnathan Brill, Josh Blake, and Richard Monson-Haefel discussing "post-WIMP" conceptualization regarding natural user interfaces and interaction, otherwise known as NUI. The focus of the discussion was on Johnathan's post, "New Multi-touch Interface Conventions". At the time, we were reading Dan Saffer's book, Designing Gestural Interfaces, and contemplating new ways that technology can support human interaction and activities in a more natural, enjoyable, and intuitive manner.
A few days later, I shared some of the concepts from the discussion on a post on this blog, "Why "new" ways of interaction?". The post includes video of Johnathan Brill discussing PATA, a post-WIMP analogue to assist with multi-touch/gesture based application development, which he describes as follows:
Places
"Lighting, focus, and depth, simplified searching and effecting hyperlinked content."
Animation "Using animation to subtly demonstrate what applications do and how to use them is a better solution than using icons. Animation makes apps easier to learn."
Things "Back in the days of floppy disks, objects helped us organize our content. This limitation was forced by arcane technology, but it did have one huge advantage. We used our spatial memory to help us navigate content. Things will help us organize content and manipulate controllers across a growing variety of devices."
Auras "Auras will help us track what we are tracking and when an interaction has been successful."
(For reference, I've copied some of my responses to the first discussion, which can be found near the end of this post)
A year later....
What has changed? Everything post-WIMP has been covered like a blanket by the NUI-word. "NUI" now functions as a generic term for anything that is not exactly WIMP. There is a sense of urgency now to figure out how best to conceptualize post-WIMP interfaces and interactions. Newer, affordable technologies enable us to interact with friends and family while we are on the go. Netbooks, e-readers, smartphones, large touch-screen displays, interactive HDTV, and new devices with multi-modal I/O abound. Our grandparents are on Facebook and Twitter from their iPhones. Our world no longer requires us to be slaves to the WIMP mentality.
So what is the problem?
The technology has moved along so fast that application designers and developers have not had a chance to catch up. (The iPhone is an exception.) The downturn in the economy has made it difficult for many to take the leap from traditional software or web development and gain new skill sets. On top of it all, most of us over the age of 15 have been brainwashed from years of working within the constraints of WIMP. It doesn't matter if we are users, consumers, students, designers, or developers.
Even the folks least likely to have difficulty expanding into the post-WIMP world have had some difficulty. If you've had training in HCI (Human-Computer Interaction), you were inadvertently brainwashed with the best. The bulk of the theory and research you contemplated was launched at a time when WIMP was king, even as the Web expanded. Many of the principles held dear by traditional HCI folks have been shattered, and no one has come up with a "theory of everything" that will cover all of the human actions and interactions that are supported or guided by new technologies.
The problem, in part, is that letting go of WIMP is hard to do, as illustrated by the following post from the Ars Technica website: Light Touch: A Design Firm Grapples with Microsoft Surface (Matthew Braga, 6/29/09) "Ditching the mouse and keyboard means a whole lot more than just doing without two common peripherals. As those who have worked with Microsoft Surface have found out, you have to jettison decades of GUI baggage and start with a whole new mindset...In actuality, few multi-touch gestures are really anything like what we experience in the physical world. There is no situation in which we pull on the corners of an image to increase its size, or swipe in a direction to reveal more content. So, in the context of real-world interaction, these types of gestures are far from natural...gestures should not only feel natural, but logical; the purpose that gestures like these serve, after all, is to replace GUI elements to the end of making interaction a more organic process." (Be sure to read the comments.)
Now that the Surface is taking root in more places, and touch-screen all-in-one PCs and tablets are starting to multiply, more people are giving "NUI" some thought. Ron George, an interaction and product designer with experience working with Microsoft's Surface team, has contributed to the post-WIMP discussion and spent some time sharing ideas with Josh Blake, a .NET, SharePoint, and Microsoft Surface consultant for InfoStrat and author of the Deconstructing the NUI blog. The outcome of this discussion was Ron George's December 28th blog post, "OCGM (pronounced Occam['s Razor]) is the replacement for WIMP", and Josh Blake's post, "WIMP is to GUI as OCGM (Occam) is to NUI". (Be sure to read the comments for both of these posts!)
OCGM (as conceptualized by Ron George)
Objects "are the core of the experience. They can have a direct correlation with something physical, or they can just be objects in the interface."
Containers "will be the “grouping” of the objects. This can manifest itself in whatever the system sees fit to better organize or instruct the user on interactions. They do not have to be, nor should they be, windows. They can be any sort of method of presentation or relationship gathering as seen fit."
Gestures "I went into detail about the differences in Gestures and Manipulations in a previous post [check it out for a refresher]. Gestures are actions performed by the user that initiate a function after its completion and recognition by the system. This is an indirect action on the system because it needs to be completed before the system will react to it."
Manipulations "are the direct influences on an object or a container by the user. These are immediate and responsive. They are generally intuitive and mimic the physical world in some manner. The results are expected and should be non-destructive. These are easily performed and accidental activations should be expected and frequent."
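The gesture/manipulation distinction maps cleanly onto code: a manipulation handler reacts to every move event immediately, while a gesture recognizer stays silent until the touch ends and the completed stroke is recognized. A schematic Python sketch (the class and method names are mine, not from any framework):

```python
class ManipulationHandler:
    """Direct and immediate: every touch-move updates the object at once."""
    def __init__(self):
        self.x = 0.0

    def on_move(self, dx):
        self.x += dx  # the object tracks the finger in real time
        return self.x

class SwipeGestureRecognizer:
    """Indirect: nothing fires until the touch ends and the completed
    stroke is recognized (or not) as a swipe."""
    def __init__(self, threshold=50):
        self.threshold = threshold
        self.total = 0.0

    def on_move(self, dx):
        self.total += dx  # accumulate silently; no action yet

    def on_touch_up(self):
        if self.total >= self.threshold:
            return "swipe-right"
        if self.total <= -self.threshold:
            return "swipe-left"
        return None  # stroke completed but not recognized
```

Note how the manipulation is non-destructive and continuous, while the gesture only produces an effect after completion and recognition, just as Ron George describes.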
To illustrate a point regarding the validity of the OCGM analogy proposed by Ron George, Josh Blake shares the following video of a presentation from REMIX 2009, in which August de los Reyes, the Principal Director of User Experience for Surface Computing at Microsoft, briefly discusses the TOCA (Touch, Objects, Containers, and Actions) concept, suggested to replace the WIMP concept:
The video wouldn't embed, so go to the following link:
Predicting the Past: A Vision for Microsoft Surface
"Natural User Interface (NUI) is here. New systems of interaction require new approaches to design. Microsoft Surface stands at the forefront of this product space. This presentation looks at one of the richest sources for inventing the future: the past. By analyzing preceding inflection points in user interface, we can derive some patterns that point to the brave NUI world."
The concepts outlined in the presentation are similar to Microsoft's Vision for 2019
Richard Monson-Haefel added his thoughts about the discussion about OCGM in his recent blog post, "What is NUI's WIMP?" Richard disagrees with the OCGM concept, as he feels it doesn't encompass some important interactions, such as speech/direct voice input. He'd probably agree that NUI is NOT WIMP 2.0.
Post-NUI, Activity Theory, and Off-the-Desktop Interaction Design:
As I was reading the recent posts and discussions regarding NUI/OCGM, I also contemplated some of what I've been reading over my holiday break, "Acting With Technology: Activity Theory and Interaction Design", written by Victor Kaptelinin and Bonnie A. Nardi. Victor Kaptelinin is the co-editor of "Beyond the Desktop Metaphor: Designing Integrated Digital Work Environments" (MIT Press, 2007), and has an interest in computer-supported cooperative work. Bonnie Nardi brings to the IT world her background in anthropology, and is the co-author of "Information Ecologies: Using Technology with the Heart" (MIT Press, 1999). The authors know what they are talking about.
It is important to note that activity theory-based interaction design is viewed as a "post-cognitivistic", and informed by some of what I studied in psychology, education, and social science years ago. Within the field of activity theory are some important differences, which I'll save for a future post.
Below are some concepts taken from the book. I am still mulling them over through the prism of NUI, post-WIMP, PATA, TOCA, OCGM, etc. That's why there will be at "Part II", with specific examples.
"Means and ends, the extent to which the technology facilitates and constrains attaining user's goals and the impact of the technology on provoking or resolving conflicts between different goals
Social and physical aspects of the environment - integration of target technology with requirements, tools, resources, and social rules of the environment
Learning, cognition, and articulation, internal vs external components of activity and support of their mutual transformations with target technology
Development -Developmental transformation of the above components as a whole"
"Taken together, these sections cover various aspects of the way the target technology supports, or is intended to support, human actions". (page 270)
I especially like the activity checklist included in the appendix of the book, as well as the concept of tool mediation. "The Activity Checklist is intended to be used at early phases of system design or for evaluating existing systems. Accordingly, there are two slightly different versions of the Checklist, the "evaluation version" and the "design version". Both versions are implemented as organized sets of items covering the contextual factors that can potentially influence the use of computer technology in real-life settings. It is assumed that the Checklist can help to identify the most important issues, for instance, potential trouble spots that designers can address". (page 269)
"The Checklist covers a large space. It is intended to be used first by examining the whole space for areas of interest, then focusing on the identified areas of interest in as much depth as possible...there is a heavy emphasis on the principle of tool mediation" (page 270).
Other Thoughts
What is missing from this picture is a Universal Design component, something that I think holds up across time and technologies. Following the principles of Universal design doesn't mean dumbing down or relying on simplicity. It is a multi-faceted approach, and relies on conctructing flexibility in use, one of the key concepts of Universal Design. I'd like to see this concept embedded in the post-WIMP conceptualization somehow.
Because of my background in education/psychology/ special education, I try to follow the principles of Universal Design for Learning (UDL) when I work on technology project. I've spent some time thinking about how the principles of UDL could be realized through new interaction/interface systems. Although this approach focuses on the educational technology domain, it is important to consider, given that a good percentage of our population - potential users, clients, consumers - has a temporary or permanent disability of one kind or another.
Components of Universal Design for Learning:
Multiple Means of Representation
Provide options for perception
Provide options for language and symbols
Provide options for comprehension
Multiple Means of Action and Expression
Provide options for physical action
Provide options for expressive skills and fluency
Provide options for executive functions
Multiple Means of Engagement
Provide options for recruiting interest
Provide options for sustaining effort and persistence
Provide options for self-regulation
-Adapted from the UDL Guidelines/Educator Checklist, which breaks down the components into more specific details.
Note: The concept of Universal Design for Learning shares historical roots with some of the work behind Activity Theory and Interaction Design. Obviously, there is still much to contemplate regarding OCGM and other permutations of post-WIMP concepts!
Here are my comments to the discussion on Johnathan Brill's blog from January 2009:
Thoughts: I'm seeing a trend to "optimize" WIMP-type applications so they can be manipulated by gesture and touch. People who program kiosks, ATM's and POS touch screens are examples of what I'm talking about. Touch and hold, two-fingered touch, and double-tap are just a slight transformation of the WIMP world-view, and in my opinion, are still WIMP (wimpy!). The mouse interaction "pretenders" are fine for using legacy productivity applications, OK in the short run.
For example, I have an HP TouchSmart, but I don't use the touch screen as often as I'd hoped. Try using using Visual Studio to code something on a touch screen. There is so much more that can be done! I know from the touch-screen prototype/demos I've worked on in various classes that applications that support collaboration and decision-making are important, and not just for work.
What do people DO, really? First of all, we are social beings, most of us. Think of what we share and discuss with others, and think about what sort of interactions on a display might best correspond with this interaction. Here are some of the things I've been DOING recently that involved some sort of technology and communication/collaboration with others:
---Travel planning - I recently went on a cruise and with various family members, selected activities I wanted to do on the ship as well plan my shore excursions (a complicated process)
---Picture sharing- I came back from the cruise with lots of pictures that I uploaded on Flickr. Related to this process: Picture annotating, tagging, choosing/comparing & editing it would be SO cool if I could use two sliders to enhance my pictures just so!
---Talking on the phone and responding to e-mail with friends and family members about the pictures, and what they wanted to see on Flickr- "you know, the ones of the dogs in Jamaica"... of which I had about 68!
---Financial planning with my husband. (I took info-viz last semester, so I know the possibilities are there.)
---Using the touch-screen to check-in at my eye-doctor's office: This was a user-unfriendly experience. Such a nice little screen. I was provided with a WIMPY PowerPoint-like interface which was confusing to use- and time consuming!
---Shopping at the new Super Wal-Mart: I asked the greeter if there was a map, and he said, "Food is over there, and the rest of the stuff is that way." Flat panel displays were all over the store, but of course,they weren't interactive. I had no idea where anything was, and the few items I was looking at had no bar-code tags. There wasn't a clerk in sight. Wal-Mart TV rolled on-and-on via the display above my head. If I could only harness the display and have the talking head answer my questions! I gave up on my shopping trip when I was in the facial lotion/potion section. Too many choices, and too much fine print to read.
---Shopping at the mall with my young-adult daughters... I'll have to hold my thoughts on that one for now!
Some suggestions:
I think the artist/designers, (even dancers,) who are interested in multi-touch and gesture interaction have some interesting things to consider. (I linked to some of my previous posts.)
Again:
I am still mulling things over through the prism of NUI, post-WIMP, PATA, TOCA, OCGM, etc. So that is why there will be at "Part II". With specific examples!
RELATED
Multimedia, Multi-touch, Gesture, and Interaction Resources
My thoughts:
2007 Letter to the Editor, Pervasive Computing
Useful Usability Studies (pdf)
2007 Blog Post
Usability/Interaction Hall of Shame (In a Hospital)
2008 Blog Posts
Emerging Interactive Technologies, Emerging Interactions, and Emerging Integrated Form Factors
Interactive Touch-Screen Technology, Participatory Design, and "Getting It"
An Example of Convergence: Interactive TV: uxTV 2008
2009 Blog Posts
Why "new" ways of interaction?
Microsoft: Are You Listening? Cool Cat Teacher (Vicki Davis) Tries out Microsoft's Multi-touch Surface Table
Haptic/Tactile Interface: Dynamically Changeable Physical Buttons
The Convergence of TV, the Internet, and Interactivity: Update
UX of ITV: The User Experience and Interactive TV (or Let's Stamp Out Bad Remote Controls)
Digital Convergence and Interactive Television; Boxee and Digital Convergence
ElderGadget Blog: Useful Tech and Tools
Other People's Thoughts
Ron George's blog, OCGM (pronounced Occam['s Razor] is the replacement for WIMP 12/28/09
Ron George: Welcome to the OCGM Generation! Part 2
Stephen, Microsoft Kitchen: OCGM, A New Windows User Experience
Richard Monson-Haefel's blog, Multi-touch and NUI: What is NUI's WIMP?
Richard Monson-Haefel: OCGM: George's Razor
Josh Blake's blog, Deconstructing the NUI: WIMP is to GUI as OCGM (Occam) is to NUI
Bill Buxton: Gesture Based Interaction (pdf) (Updated 5/2009)
Bill Buxton: "Surface and Tangible Computing, and the "Small" Matter of People and Design" (pdf) - ISSCC 2008
Dan Saffer, Designing for Gestural Interfaces: Touchscreens and Interactive Devices
Dan Saffer, Designing for Interaction
Mark Weiser, Computer for the 21st Century Scientific American, 09, 1991
Touch User Interface: Readings in Touch Screen, Multi-Touch, and Touch User Interface
Jacob O Wobbrock, Meredith Ringel Morris, Andrew D. Wilson User-Defined Gestures for Surface Computing CHI 2009, April 4–9, 2009, Boston, Massachusetts, USA.
I came up with the name of this semi-fictional club as a way to organize my thoughts (and blog posts) regarding the development of a new metaphor for post-WIMP applications and technologies, related specifically to natural user interfaces, natural user interaction design, and off-the-desktop user experience.
Update, morning of 12/31/09:
Josh Blake, author of the blog "Deconstructing the NUI", posted Metaphors and OCGM this morning. It fleshes out post-WIMP concepts, addressing metaphors & interfaces. The premise is that NUI metaphors will be less complex than GUI (WIMP) metaphors. My feeling is that on the surface, this will hold true, especially for consumers/users and people creating light-weight applications and software widgets.
Underneath the surface, where designers' and developers' brains spend more time than users' and consumers', things might be more complex. Why? The technology to support the required wizardry is more complex. With convergence, the creation of new technologies, applications, communication systems, and even electronic entertainment now depends on the work and thinking of people from a wider range of disciplines. Each discipline brings to the table a set of terms rooted in its own theory and research practices.
Update, late afternoon, 12/30/09:
Richard Monson-Haefel's response to Ron George's "Part 2". The concept of OCGM might be growing on him now... OCGM: George's Razor: "If Ron George can explain how OCGM encompasses Affordances and Feedback than I'll be convinced that OCGM works for NUI. Otherwise, I think OCGM is a great start that would benefit from an added "A" and "F"." -Richard
- OCGM relates to Occam's Razor. It is helpful to read a bit about it if you are interested in the post-WIMP conversations. (The link is to an article from "How Stuff Works", via Richard Monson-Haefel.)
START HERE FOR THE "ORIGINAL" POST FROM 12/29 & 12/20/09:
Background
About a year ago I responded to a conversation between Johnathan Brill, Josh Blake, and Richard Monson-Haefel discussing "post-WIMP" conceptualization regarding natural user interfaces and interaction, otherwise known as NUI. The focus of the discussion was on Johnathan's post, "New Multi-touch Interface Conventions". At the time, we were reading Dan Saffer's book, Designing Gestural Interfaces, and contemplating new ways that technology can support human interaction and activities in a more natural, enjoyable, and intuitive manner.
A few days later, I shared some of the concepts from the discussion on a post on this blog, "Why "new" ways of interaction?". The post includes video of Johnathan Brill discussing PATA, a post-WIMP analogue to assist with multi-touch/gesture based application development, which he describes as follows:
Places
"Lighting, focus, and depth, simplified searching and effecting hyperlinked content."
Animation "Using animation to subtly demonstrate what applications do and how to use them is a better solution than using icons. Animations makes apps easier to learn."
Things "Back in the days of floppy disks, objects helped us organize our content. This limitation was forced by arcane technology, but it did have one huge advantage. We used our spatial memory to help us navigate content. Things will help us organize content and manipulate controllers across a growing variety of devices."
Auras "Auras will help us track what we are tracking and when an interaction has been successful."
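Brill's "Auras" idea is concrete enough to sketch. The snippet below is a hypothetical illustration (all names are mine, not Brill's) of an aura as per-touch feedback that changes state once an interaction succeeds:

```python
# Hypothetical sketch of Brill's "Auras": a marker drawn under each
# tracked touch that changes state when an interaction succeeds, so the
# user always knows what the system is tracking.

class Aura:
    def __init__(self, touch_id, x, y):
        self.touch_id = touch_id
        self.x, self.y = x, y
        self.state = "tracking"   # visible feedback: the system sees this touch

    def confirm(self):
        self.state = "success"    # e.g. a brief pulse when a gesture lands

auras = {}

def on_touch_down(touch_id, x, y):
    # A new touch gets an aura immediately, before any gesture is recognized.
    auras[touch_id] = Aura(touch_id, x, y)

def on_gesture_recognized(touch_id):
    # When the gesture completes successfully, the aura signals it.
    if touch_id in auras:
        auras[touch_id].confirm()

on_touch_down(7, 40, 60)
on_gesture_recognized(7)
```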
(For reference, I've copied some of my responses to the first discussion near the end of this post.)
A year later....
What has changed? Everything post-WIMP has been covered like a blanket by the NUI-word. "NUI" now functions as a generic term for anything that is not exactly WIMP. There is a sense of urgency now to figure out how best to conceptualize post-WIMP interfaces and interactions. Newer, affordable technologies enable us to interact with friends and family while we are on the go. Netbooks, e-readers, smartphones, large touch-screen displays, interactive HDTV, and new devices with multi-modal I/O abound. Our grandparents are on Facebook and Twitter from their iPhones. Our world no longer requires us to be slaves to the WIMP mentality.
So what is the problem?
The technology has moved along so fast that application designers and developers have not had a chance to catch up. (The iPhone is an exception.) The downturn in the economy has made it difficult for many to take the leap from traditional software or web development and gain new skill sets. On top of it all, most of us over the age of 15 have been brainwashed from years of working within the constraints of WIMP. It doesn't matter if we are users, consumers, students, designers, or developers.
Even the folks least likely to have difficulty expanding into the post-WIMP world have had some difficulty. If you've had training in HCI (Human-Computer Interaction), you were inadvertently brainwashed with the best. The bulk of the theory and research you contemplated was launched at a time when WIMP was king, even as the Web expanded. Many of the principles held dear by traditional HCI folks have been shattered, and no one has come up with a "theory of everything" that covers all of the human actions and interactions supported or guided by new technologies.
The problem, in part, is that letting go of WIMP is hard to do, as illustrated by the following post from the Ars Technica website: Light Touch: A Design Firm Grapples with Microsoft Surface (Matthew Braga, 6/29/09) "Ditching the mouse and keyboard means a whole lot more than just doing without two common peripherals. As those who have worked with Microsoft Surface have found out, you have to jettison decades of GUI baggage and start with a whole new mindset...In actuality, few multi-touch gestures are really anything like what we experience in the physical world. There is no situation in which we pull on the corners of an image to increase its size, or swipe in a direction to reveal more content. So, in the context of real-world interaction, these types of gestures are far from natural...gestures should not only feel natural, but logical; the purpose that gestures like these serve, after all, is to replace GUI elements to the end of making interaction a more organic process." (Be sure to read the comments.)
Now that the Surface is taking root in more places, and touch-screen all-in-one PCs and tablets are starting to multiply, more people are giving "NUI" some thought. Ron George, an interaction and product designer with experience working with Microsoft's Surface team, has contributed to the post-WIMP discussion and spent some time sharing ideas with Josh Blake, a .NET, SharePoint, and Microsoft Surface consultant for InfoStrat and author of the Deconstructing the NUI blog. The outcome of this discussion was Ron George's December 28th blog post, "OCGM (pronounced Occam['s Razor]) is the replacement for WIMP", and Josh Blake's post, "WIMP is to GUI as OCGM (Occam) is to NUI". (Be sure to read the comments for both of these posts!)
OCGM (as conceptualized by Ron George)
Objects "are the core of the experience. They can have a direct correlation with something physical, or they can just be objects in the interface."
Containers "will be the “grouping” of the objects. This can manifest itself in whatever the system sees fit to better organize or instruct the user on interactions. They do not have to be, nor should they be, windows. They can be any sort of method of presentation or relationship gathering as seen fit."
Gestures "I went into detail about the differences in Gestures and Manipulations in a previous post [check it out for a refresher]. Gestures are actions performed by the user that initiate a function after its completion and recognition by the system. This is an indirect action on the system because it needs to be completed before the system will react to it."
Manipulations "are the direct influences on an object or a container by the user. These are immediate and responsive. They are generally intuitive and mimic the physical world in some manner. The results are expected and should be non-destructive. These are easily performed and accidental activations should be expected and frequent."
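Since OCGM is an analogy rather than an API, the following sketch is purely illustrative (all class and function names are hypothetical). It contrasts a manipulation, which updates an Object immediately on every input event, with a gesture, which triggers an action only after the completed stroke is recognized:

```python
# Hypothetical sketch of the OCGM distinction between manipulations
# (direct, immediate, non-destructive) and gestures (indirect, recognized
# only after completion).

class Photo:
    """An Object in OCGM terms: the core unit of the experience."""
    def __init__(self):
        self.scale = 1.0
        self.deleted = False

class Album:
    """A Container: a grouping of Objects that need not be a window."""
    def __init__(self):
        self.objects = []

def manipulate_pinch(photo, spread_delta):
    # Manipulation: applied on every touch-move event, so the response is
    # immediate and accidental activations are cheap to undo.
    photo.scale = max(0.1, photo.scale * (1.0 + spread_delta))

def recognize_gesture(touch_path):
    # Gesture: nothing happens until the completed stroke is classified.
    # Here, a long, mostly horizontal rightward swipe maps to "delete".
    x0, y0 = touch_path[0]
    x1, y1 = touch_path[-1]
    if x1 - x0 > 200 and abs(y1 - y0) < 40:
        return "delete"
    return None

album = Album()
p = Photo()
album.objects.append(p)

manipulate_pinch(p, 0.5)          # responds immediately to the pinch
action = recognize_gesture([(0, 10), (120, 14), (260, 18)])
if action == "delete":            # fires only after the stroke completes
    p.deleted = True
```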
To illustrate a point regarding the validity of the OCGM analogy proposed by Ron George, Josh Blake shares the following video of a presentation from REMIX 2009, in which August de los Reyes, the Principal Director of User Experience for Surface Computing at Microsoft, briefly discusses the TOCA (Touch, Objects, Containers, and Actions) concept, suggested as a replacement for the WIMP concept:
The video wouldn't embed, so go to the following link:
Predicting the Past: A Vision for Microsoft Surface
"Natural User Interface (NUI) is here. New systems of interaction require new approaches to design. Microsoft Surface stands at the forefront of this product space. This presentation looks at one of the richest sources for inventing the future: the past. By analyzing preceding inflection points in user interface, we can derive some patterns that point to the brave NUI world."
The concepts outlined in the presentation are similar to Microsoft's Vision for 2019
Richard Monson-Haefel added his thoughts on the OCGM discussion in his recent blog post, "What is NUI's WIMP?" Richard disagrees with the OCGM concept, as he feels it doesn't encompass some important interactions, such as speech/direct voice input. He'd probably agree that NUI is NOT WIMP 2.0.
Post-NUI, Activity Theory, and Off-the-Desktop Interaction Design:
As I was reading the recent posts and discussions regarding NUI/OCGM, I also contemplated some of what I've been reading over my holiday break, "Acting With Technology: Activity Theory and Interaction Design", written by Victor Kaptelinin and Bonnie A. Nardi. Victor Kaptelinin is the co-editor of "Beyond the Desktop Metaphor: Designing Integrated Digital Work Environments" (MIT Press, 2007), and has an interest in computer-supported cooperative work. Bonnie Nardi brings to the IT world her background in anthropology, and is the co-author of "Information Ecologies: Using Technology with the Heart" (MIT Press, 1999). The authors know what they are talking about.
It is important to note that activity theory-based interaction design is viewed as "post-cognitivist", and is informed by some of what I studied in psychology, education, and social science years ago. Within the field of activity theory there are some important differences, which I'll save for a future post.
Below are some concepts taken from the book. I am still mulling them over through the prism of NUI, post-WIMP, PATA, TOCA, OCGM, etc. That's why there will be a "Part II", with specific examples.
"Means and ends, the extent to which the technology facilitates and constrains attaining user's goals and the impact of the technology on provoking or resolving conflicts between different goals
Social and physical aspects of the environment - integration of target technology with requirements, tools, resources, and social rules of the environment
Learning, cognition, and articulation, internal vs external components of activity and support of their mutual transformations with target technology
Development - Developmental transformation of the above components as a whole"
"Taken together, these sections cover various aspects of the way the target technology supports, or is intended to support, human actions". (page 270)
I especially like the activity checklist included in the appendix of the book, as well as the concept of tool mediation. "The Activity Checklist is intended to be used at early phases of system design or for evaluating existing systems. Accordingly, there are two slightly different versions of the Checklist, the "evaluation version" and the "design version". Both versions are implemented as organized sets of items covering the contextual factors that can potentially influence the use of computer technology in real-life settings. It is assumed that the Checklist can help to identify the most important issues, for instance, potential trouble spots that designers can address". (page 269)
"The Checklist covers a large space. It is intended to be used first by examining the whole space for areas of interest, then focusing on the identified areas of interest in as much depth as possible...there is a heavy emphasis on the principle of tool mediation" (page 270).
Other Thoughts
What is missing from this picture is a Universal Design component, something that I think holds up across time and technologies. Following the principles of Universal Design doesn't mean dumbing down or relying on simplicity. It is a multi-faceted approach, and relies on constructing flexibility in use, one of the key concepts of Universal Design. I'd like to see this concept embedded in the post-WIMP conceptualization somehow.
Because of my background in education/psychology/special education, I try to follow the principles of Universal Design for Learning (UDL) when I work on technology projects. I've spent some time thinking about how the principles of UDL could be realized through new interaction/interface systems. Although this approach focuses on the educational technology domain, it is important to consider, given that a good percentage of our population - potential users, clients, consumers - has a temporary or permanent disability of one kind or another.
Components of Universal Design for Learning:
Multiple Means of Representation
Provide options for perception
Provide options for language and symbols
Provide options for comprehension
Multiple Means of Action and Expression
Provide options for physical action
Provide options for expressive skills and fluency
Provide options for executive functions
Multiple Means of Engagement
Provide options for recruiting interest
Provide options for sustaining effort and persistence
Provide options for self-regulation
-Adapted from the UDL Guidelines/Educator Checklist, which breaks down the components into more specific details.
Note: The concept of Universal Design for Learning shares historical roots with some of the work behind Activity Theory and Interaction Design. Obviously, there is still much to contemplate regarding OCGM and other permutations of post-WIMP concepts!
Here are my comments to the discussion on Johnathan Brill's blog from January 2009:
Thoughts: I'm seeing a trend to "optimize" WIMP-type applications so they can be manipulated by gesture and touch. People who program kiosks, ATMs, and POS touch screens are examples of what I'm talking about. Touch-and-hold, two-fingered touch, and double-tap are just a slight transformation of the WIMP world-view, and in my opinion, are still WIMP (wimpy!). The mouse interaction "pretenders" are fine for using legacy productivity applications, OK in the short run.
For example, I have an HP TouchSmart, but I don't use the touch screen as often as I'd hoped. Try using Visual Studio to code something on a touch screen. There is so much more that can be done! I know from the touch-screen prototypes/demos I've worked on in various classes that applications that support collaboration and decision-making are important, and not just for work.
What do people DO, really? First of all, we are social beings, most of us. Think of what we share and discuss with others, and think about what sort of interactions on a display might best correspond with this interaction. Here are some of the things I've been DOING recently that involved some sort of technology and communication/collaboration with others:
---Travel planning - I recently went on a cruise and, with various family members, selected activities I wanted to do on the ship as well as planned my shore excursions (a complicated process)
---Picture sharing - I came back from the cruise with lots of pictures that I uploaded to Flickr. Related to this process: picture annotating, tagging, choosing/comparing, and editing. It would be SO cool if I could use two sliders to enhance my pictures just so!
---Talking on the phone and responding to e-mail with friends and family members about the pictures, and what they wanted to see on Flickr- "you know, the ones of the dogs in Jamaica"... of which I had about 68!
---Financial planning with my husband. (I took info-viz last semester, so I know the possibilities are there.)
---Using the touch screen to check in at my eye doctor's office: This was a user-unfriendly experience. Such a nice little screen. I was provided with a WIMPY PowerPoint-like interface which was confusing to use - and time-consuming!
---Shopping at the new Super Wal-Mart: I asked the greeter if there was a map, and he said, "Food is over there, and the rest of the stuff is that way." Flat panel displays were all over the store, but of course, they weren't interactive. I had no idea where anything was, and the few items I was looking at had no bar-code tags. There wasn't a clerk in sight. Wal-Mart TV rolled on and on via the display above my head. If I could only harness the display and have the talking head answer my questions! I gave up on my shopping trip when I was in the facial lotion/potion section. Too many choices, and too much fine print to read.
---Shopping at the mall with my young-adult daughters... I'll have to hold my thoughts on that one for now!
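The "two sliders" wish in the picture-sharing item above can be made concrete. Below is a hypothetical sketch (the function name and slider mappings are my own assumptions) of one slider for brightness and one for contrast, applied to grayscale pixel values:

```python
# Hypothetical "two sliders" photo enhancement: one slider shifts
# brightness, the other stretches contrast around the midpoint gray.

def enhance(pixels, brightness, contrast):
    """brightness in [-1, 1] shifts values by up to +/-255;
    contrast in [-1, 1] scales values around the midpoint 128.
    Returns clamped integers in 0..255."""
    factor = 1.0 + contrast
    out = []
    for p in pixels:
        v = (p - 128) * factor + 128 + brightness * 255
        out.append(int(min(255, max(0, round(v)))))
    return out

# Dragging both sliders slightly up: a brighter, higher-contrast image.
adjusted = enhance([0, 128, 255], brightness=0.1, contrast=0.2)
```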
Some suggestions:
I think the artists/designers (even dancers) who are interested in multi-touch and gesture interaction have some interesting things to consider. (I linked to some of my previous posts.)
Again:
I am still mulling things over through the prism of NUI, post-WIMP, PATA, TOCA, OCGM, etc. So that is why there will be a "Part II". With specific examples!
RELATED
Multimedia, Multi-touch, Gesture, and Interaction Resources
My thoughts:
2007 Letter to the Editor, Pervasive Computing
Useful Usability Studies (pdf)
2007 Blog Post
Usability/Interaction Hall of Shame (In a Hospital)
2008 Blog Posts
Emerging Interactive Technologies, Emerging Interactions, and Emerging Integrated Form Factors
Interactive Touch-Screen Technology, Participatory Design, and "Getting It"
An Example of Convergence: Interactive TV: uxTV 2008
2009 Blog Posts
Why "new" ways of interaction?
Microsoft: Are You Listening? Cool Cat Teacher (Vicki Davis) Tries out Microsoft's Multi-touch Surface Table
Haptic/Tactile Interface: Dynamically Changeable Physical Buttons
The Convergence of TV, the Internet, and Interactivity: Update
UX of ITV: The User Experience and Interactive TV (or Let's Stamp Out Bad Remote Controls)
Digital Convergence and Interactive Television; Boxee and Digital Convergence
ElderGadget Blog: Useful Tech and Tools
Other People's Thoughts
Ron George's blog, OCGM (pronounced Occam['s Razor]) is the replacement for WIMP, 12/28/09
Ron George: Welcome to the OCGM Generation! Part 2
Stephen, Microsoft Kitchen: OCGM, A New Windows User Experience
Richard Monson-Haefel's blog, Multi-touch and NUI: What is NUI's WIMP?
Richard Monson-Haefel: OCGM: George's Razor
Josh Blake's blog, Deconstructing the NUI: WIMP is to GUI as OCGM (Occam) is to NUI
Bill Buxton: Gesture Based Interaction (pdf) (Updated 5/2009)
Bill Buxton: "Surface and Tangible Computing, and the "Small" Matter of People and Design" (pdf) - ISSCC 2008
Dan Saffer, Designing for Gestural Interfaces: Touchscreens and Interactive Devices
Dan Saffer, Designing for Interaction
Mark Weiser, The Computer for the 21st Century, Scientific American, September 1991
Touch User Interface: Readings in Touch Screen, Multi-Touch, and Touch User Interface
Jacob O. Wobbrock, Meredith Ringel Morris, and Andrew D. Wilson, User-Defined Gestures for Surface Computing, CHI 2009, April 4-9, 2009, Boston, Massachusetts, USA.
Posted by
Lynn Marentette