
Feb 1, 2009

Reflections: Need for Interactive Infoviz for the Financial Biz, Business Leaders, Government Officials, Educators and the Rest of Us...

If you follow this blog because you are interested in emerging multimedia technologies such as multi-touch and gesture-based displays and tables, you probably know that there is a huge void in terms of content-rich applications for these systems.

Most of the demos show how you can zoom, rotate, and resize photographs, sort through your "stuff", or bat things around the surface as a game.
There is so much more power behind surface technology that needs to be realized!

Here are some of my reflections...

As I write this post, leaders of the financial industry, large corporations, and governments are in Davos, Switzerland at the annual meeting of the World Economic Forum. It is interesting to note that all of these bright men and women are struggling to grasp the enormity of the world's financial crisis and come up with strategies that hopefully will work.


The graphic below depicts how much has changed in the world economy between the 2008 annual meeting of the World Economic Forum and the present. It lacks the "wow" factor that one would expect of an application running on an interactive display, but with some tweaking, it could be transformed into an application that supports two people interacting with the data at the same time.


(Click above photo to link to the interactive graphic.)
Via the Wall Street Journal

Here are more examples related to the current economic crisis:


Annus Horribilis in 3D
Financial chart by artist Andreas Nicolas Fischer
via Dan Pink





Life in the Left Tail
(Click for a larger image) via Greg Mankiw's Blog:
Random Observations for Students of Economics, via
Daily Kos

"On this chart each block represents a year and each column represents a range of return on the S&P index. Over on the right side are those lucky years where the index has soared upward from 50-60%. In the middle are the more typical years, where the market has risen less than 10%. That little box on the far left? Yeah, that's this year..And hey, how many of you knew the S&P had been around since 1825?." - Devilstower of the Daily KOS
Lately, I've been thinking about interactive information visualization and how it can support our understanding of the current economic crisis, inspired by what I learned in Dr. Robert Kosara's InfoViz class last year. In a recent post on the Eager Eyes blog, Dr. Kosara floats the idea of establishing a "National Data Agency".

http://eagereyes.org/media/2009/nda.png

"What we need is a National Data Agency (NDA). This agency would be tasked with collecting data that all other agencies collect and produce, and making it available in a central place and in electronic, machine-readable form. There could and should be a reasonable data presentation on its website, perhaps even a National Data Dashboard (showing data of interest like debt, spending, jobless rate, etc.). But the bulk of data analysis would be left to third parties: analysts, journalists, citizens (and also aliens like me). Easily available data would make for more insightful reporting, more informed decisions, and endless business opportunities." -Robert Kosara

This makes sense.

There simply is too much data to absorb, explore, analyze, understand, and act upon. It is difficult to know if you have all of the data that you need, because some of it is difficult to access. It doesn't matter if you are a banker, a stock broker, a CEO, a CFO, a government leader, an economist, a shareholder, or a student. The current state of world economic affairs is the strongest evidence that our methods simply aren't working.

The work of Hans Rosling provides a good example of how information visualization can help increase our understanding of large quantities of data over time. Hans Rosling is a Swedish professor of international health and one of the founders of Gapminder ("Unveiling the beauty of statistics for a fact-based world view").

The following video is Rosling's latest presentation, focused on debunking the myths regarding population growth:


What stops population growth? from Gapminder Foundation on Vimeo.

"Gapminder is a non-profit venture promoting sustainable global development and achievement of the United Nations Millennium Development Goals by increased use and understanding of statistics and other information about social, economic and environmental development at local, national and global levels. We are a modern “museum” that helps making the world understandable, using Internet."


The visual representation of economic data, if done well, packs a powerful punch. To me, images form a kernel in my memory related to the messages conveyed, and when recalled, also bring up a range of related conceptual details. It is sort of like what happens when I hear the first few notes of a tune from the past.

This doesn't seem to be the case for me when thinking about related text, or even thinking about "boring" charts and graphs.


The world needs effective and efficient data and information analysis and interactive visualization tools in order to solve problems of such colossal scale.

The use of collaborative gesture and multi-touch display systems for data and information visualization is something that I believe will support better methods of decision-making in a variety of fields. Now is the time for the interactive information visualization community and related disciplines such as interactive multimedia and HCI to assist in this effort.

Here are some thoughts:


  • Those who are coding gesture-based or multi-touch programs need to understand what sort of content people will explore, and make sure that applications provide flexibility in use.
  • Human-computer interaction specialists will need to continue to study a range of interfaces and interactions in order to determine what supports human cognition of larger amounts of data and information.
  • Creators of interactive multimedia content, web developers, and others will need to re-examine their work and think about ways their content can support new ways of thinking and problem-solving within the context of "surface" computing.
  • Computer Supported Cooperative Work researchers will need to figure out what needs to be in place so that information can be effectively shared and analyzed between pairs or teams of people, and how this information can best be communicated to others within a business, agency, or organization, as well as the public.
One of the challenges facing this effort is that few people have an in-depth understanding of what it will take to make it happen. It will require an interdisciplinary effort, with a much higher level of communication and collaboration between people not accustomed to working within this context.

We will also need to take a "big picture" approach.


Because of the world's economic crisis, I think that interactive information/data visualization applications should target the needs of people who are working to understand the crisis and who have the power to do something constructive about it. This cannot happen if they rely on the models and data analysis techniques of our recent past.


At the same time, these tools should be available to the rest of us, via the Internet, so that we may do our part to move us forward.

Back Story:
I started keeping up with the current economic crisis on a more serious level in October. I was becoming numb from information overload. My knowledge of the economic and financial fields was lacking, so I decided to create a blog entitled "Economic Sounds and Sights" as my personal on-line repository of searchable content.

The blog has lots of pictures, info-graphics, embedded video clips, and links to a wide range of web-based resources. In my quest for information, I came across interesting quotes, jokes about economists, and tales of greed and scandals. I even found one blogger who has responded to each unfolding event of our economic crisis by re-writing lyrics to popular tunes.

For an example of one of my posts, read
"Celestial Economic Sphere, Data Viz for the Finance Biz..." It is my hope that the content I've collected and shared on the blog will become part of an interactive information visualization/timeline designed to support two or more people on a large display or table.

11/4/09: Update: The economic crisis got a bit complicated, so I stopped posting. The blog still remains on-line.  Interactive Infoviz for the Health Care Biz will be the topic of an upcoming post.


RELATED

Three Mirrors of Interaction: A Holistic Approach to User Interfaces (Bill Buxton)
Andreas Nicolas Fischer (Berlin-based artist who works with data, sculpture, and code.)
Google Spreadsheets Data Visualization Gadgets
Google Motion Chart (like Gapminder)
Panopticon
Death and Taxes (Wallstats.Com: The Art of Information)
2009 Index of Economic Freedom (Wall Street Journal and the Heritage Foundation)
Visual Business Intelligence Stephen Few's Blog
Sunlight Foundation
Transparency Timeline - A History of Congressional Public Access Reform
"The Sunlight Foundation is committed to helping citizens, bloggers and journalists be their own best congressional watchdogs, by improving access to existing information and digitizing new information, and by creating new tools and Web sites to enable all of us to collaborate in fostering greater transparency."

MapLight.org "Money and Politics: Illuminating the Connection"

Free Our Data Blog (Guardian Technology campaign for free public access to data about the UK and its citizens)
2009 Death and Taxes Interactive Graphic (Click to explore.)



Via Stephen Few: Example of Horizon Graphs, developed by Panopticon. (Year's worth of prices of 50 stocks in 2005 and comparisons between them, click to enlarge)

Mark Lombardi
Take the time to listen to NPR's Lynn Neary's interview with Robert Hobbs, curator of an exhibit of the late Lombardi's "conspiracy" art/visualizations linking global finance and international terrorism. Lombardi's background as an archivist and reference librarian served him well in his art depicting interesting large-scale networks. Although his art was not interactive, his techniques have inspired the development of computer-based interactive information visualizations.

FYI:
To satisfy my curiosity about Mark Lombardi, I followed a link to "Obsessive-Generous": Toward a Diagram of Mark Lombardi, by Frances Richard, posted in the 2001-02 section of the WBURG website.

The examples below are of Lombardi's work connecting the relationships between George W. Bush, Harken Energy, and Jackson Stephens:



George W. Bush, Harken Energy and Jackson Stephens
c. 1979-90, 5th Version
1999

Enlarged Version


Close-up of network detail



Close up depicting a profit made by Bush, 2 weeks before Saddam Hussein invaded Kuwait
via Frances Richard

"...though he possessed the instincts of a private eye and the acumen of a systems-analyst, Lombardi was of course an artist, and from the raw material of wire-service reports and books by political correspondents, he drew not only chronicles of covert, high-stakes trade, but technically pristine and sensually compelling visual forms"-Frances Richard


Update:
Lombardi's Narrative Structures and Other Mappings of Power Relations
Fosco Lucarelli, SOCKS, 8/22/13

Learning from Lombardi
Ben Fry, 9/2009

Dec 31, 2009

The Post-WIMP Explorers' Club: Update of the Updates, Morning of 12/31/09

What is the Post WIMP Explorers Club?  
I came up with the name of this semi-fictional club as a way to organize my thoughts (and blog posts) regarding the development of a new metaphor for post-WIMP applications and technologies, related specifically to natural user interfaces, natural user interaction design, and off-the-desktop user experience.  


Update, morning of 12/31/09:
Josh Blake, author of the blog "Deconstructing the NUI", posted Metaphors and OCGM  this morning.  It fleshes out post-WIMP concepts, addressing metaphors & interfaces.  The premise is that NUI metaphors will be less complex than GUI (WIMP) metaphors.    My feeling is that on the surface, this will hold true, especially for consumers/users and people creating light-weight applications and software widgets.  


Underneath the surface, where designers' and developers' brains spend more time than users' and consumers', things might be more complex. Why? The technology to support the required wizardry is more complex. With convergence, the creation of new technologies, applications, communication systems, and even electronic entertainment is now dependent upon the work and thinking of people from a wider range of disciplines. Each discipline brings to the table a set of terms rooted in its own theory and research practices.


Update,  late afternoon, 12/30/09:
Richard Monson-Haefel's response to Ron George's "Part 2". The concept of OCGM might be growing on him now... OCGM: George's Razor: "If Ron George can explain how OCGM encompasses Affordances and Feedback than I'll be convinced that OCGM works for NUI. Otherwise, I think OCGM is a great start that would benefit from an added "A" and "F"." -Richard
  • OCGM relates to Occam's Razor. It is helpful to read a bit about it if you are interested in the post-WIMP conversations. (The link is to an article from "How Stuff Works", via Richard Monson-Haefel.)
UPDATE 12/30/09  -- This post is part of a discussion between several different bloggers, and was written before Ron George wrote his latest post, Welcome to the OCGM Generation!  Part 2, which I recommend that you read now, or within the same time frame, as this post.   Since I'm not ready to write "Part 2" of this post, I tweaked what I had and added some links to a handful of my previous posts that touch on this and related topics.  The links can be found at the bottom of this post.




START HERE FOR THE "ORIGINAL" POST FROM  12/29 & 12/20/09:


Background
About a year ago I responded to a conversation between Johnathan Brill, Josh Blake, and Richard Monson-Haefel discussing "post-WIMP" conceptualization regarding natural user interfaces and interaction, otherwise known as NUI.  The focus of the discussion was on Johnathan's post, "New Multi-touch Interface Conventions". At the time, we were reading Dan Saffer's book, Designing Gestural Interfaces, and contemplating new ways that technology can support human interaction and activities in a more natural, enjoyable, and intuitive manner.  

A few days later, I shared some of the concepts from the discussion on a post on this blog, "Why "new" ways of interaction?".  The post includes video of Johnathan Brill discussing PATA, a post-WIMP analogue to assist with multi-touch/gesture based application development, which he describes as follows:
Places
"Lighting, focus, and depth, simplified searching and effecting hyperlinked content."
Animation "Using animation to subtly demonstrate what applications do and how to use them is a better solution than using icons. Animations makes apps easier to learn."
Things "Back in the days of floppy disks, objects helped us organize our content. This limitation was forced by arcane technology, but it did have one huge advantage. We used our spatial memory to help us navigate content. Things will help us organize content and manipulate controllers across a growing variety of devices."

Auras "Auras will help us track what we are tracking and when an interaction has been successful."
(For reference, I've copied some of my responses to the first discussion, which can be found near the end of this post)


A year later....
What has changed? Everything post-WIMP has been covered like a blanket by the NUI-word. "NUI" now functions as a generic term for anything that is not exactly WIMP. There is a sense of urgency now to figure out how best to conceptualize post-WIMP interfaces and interactions. Newer, affordable technologies enable us to interact with friends and family while we are on the go. Netbooks, e-readers, smartphones, large touch-screen displays, interactive HDTV, and new devices with multi-modal I/O abound. Our grandparents are on Facebook and Twitter from their iPhones. Our world no longer requires us to be slaves to the WIMP mentality.


So what is the problem?
The technology has moved along so fast that application designers and developers have not had a chance to catch up. (The iPhone is an exception.)  The downturn in the economy has made it difficult for many to take the leap from traditional software or web development and gain new skill sets.  On top of it all, most of us over the age of 15 have been brainwashed from years of working within the constraints of WIMP. It doesn't matter if we are users, consumers, students, designers, or developers.


Even the folks least likely to have difficulty expanding into the post-WIMP world have had some difficulty. If you've had training in HCI (Human-Computer Interaction), you were inadvertently brainwashed with the best. The bulk of the theory and research you contemplated was launched at a time when WIMP was king, even as the Web expanded. Many of the principles held dear to traditional HCI folks have been shattered, and no one has come up with a "theory of everything" that will cover all of the human actions and interactions that are supported or guided by new technologies.


The problem, in part, is that letting go of WIMP is hard to do, as illustrated by the following post from the Ars Technica website:  Light Touch:  A Design Firm Grapples with Microsoft Surface  (Matthew Braga, 6/29/09) "Ditching the mouse and keyboard means a whole lot more than just doing without two common peripherals.  As those who have worked with Microsoft Surface have found out, you have to jettison decades of GUI baggage and start with a whole new mindset...In actuality, few multi-touch gestures are really anything like what we experience in the physical world. There is no situation in which we pull on the corners of an image to increase its size, or swipe in a direction to reveal more content. So, in the context of real-world interaction, these types of gestures are far from natural...gestures should not only feel natural, but logical; the purpose that gestures like these serve, after all, is to replace GUI elements to the end of making interaction a more organic process."   (Be sure to read the comments.)
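The "pull on the corners of an image to increase its size" interaction Braga mentions is, under the hood, simple arithmetic: the scale factor is the ratio between how far apart the two touch points are now and how far apart they were when the gesture began. A minimal sketch (the function names are mine, not from any particular framework):

```python
import math

def distance(p, q):
    """Euclidean distance between two (x, y) touch points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def pinch_scale(start_a, start_b, now_a, now_b):
    """Scale factor for a two-finger pinch: current spread / starting spread."""
    return distance(now_a, now_b) / distance(start_a, start_b)

# Fingers start 100 px apart and spread to 200 px: the image doubles in size.
scale = pinch_scale((0, 0), (100, 0), (0, 0), (200, 0))  # -> 2.0
```

Nothing about this is "natural" in the physical-world sense, which is Braga's point; it is a learned convention that happens to feel logical.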

Now that the Surface is taking root in more places, and touch-screen all-in-one PCs and tablets are starting to multiply, more people are giving "NUI" some thought. Ron George, an interaction and product designer with experience working with Microsoft's Surface team, has contributed to the post-WIMP discussion and spent some time sharing ideas with Josh Blake, a .NET, SharePoint, and Microsoft Surface consultant for InfoStrat and author of the Deconstructing the NUI blog. The outcome of this discussion was Ron George's December 28th blog post, "OCGM (pronounced Occam['s Razor]) is the replacement for WIMP", and Josh Blake's post, "WIMP is to GUI as OCGM (Occam) is to NUI". (Be sure to read the comments for both of these posts!)



OCGM (as conceptualized by Ron George)


Objects "are the core of the experience. They can have a direct correlation with something physical, or they can just be objects in the interface."


Containers "will be the “grouping” of the objects. This can manifest itself in whatever the system sees fit to better organize or instruct the user on interactions. They do not have to be, nor should they be, windows. They can be any sort of method of presentation or relationship gathering as seen fit."


Gestures "I went into detail about the differences in Gestures and Manipulations in a previous post [check it out for a refresher]. Gestures are actions performed by the user that initiate a function after its completion and recognition by the system. This is an indirect action on the system because it needs to be completed before the system will react to it."


Manipulations "are the direct influences on an object or a container by the user. These are immediate and responsive. They are generally intuitive and mimic the physical world in some manner. The results are expected and should be non-destructive. These are easily performed and accidental activations should be expected and frequent."

To illustrate a point regarding the validity of the OCGM analogy proposed by Ron George, Josh Blake shares the following video of a presentation from REMIX 2009, in which August de los Reyes, the Principal Director of User Experience for Surface Computing at Microsoft, briefly discusses the TOCA (Touch, Objects, Containers, and Actions) concept, suggested to replace the WIMP concept:

The video wouldn't embed, so go to the following link:


Predicting the Past: A Vision for Microsoft Surface
"Natural User Interface (NUI) is here. New systems of interaction require new approaches to design. Microsoft Surface stands at the forefront of this product space. This presentation looks at one of the richest sources for inventing the future: the past. By analyzing preceding inflection points in user interface, we can derive some patterns that point to the brave NUI world." 


The concepts outlined in the presentation are similar to Microsoft's Vision for 2019


Richard Monson-Haefel added his thoughts about the discussion about OCGM in his recent blog post, "What is NUI's WIMP?"  Richard disagrees with the OCGM concept, as he feels it doesn't encompass some important interactions, such as speech/direct voice input.   He'd probably agree that NUI is NOT WIMP 2.0.



Post-NUI, Activity Theory, and Off-the-Desktop Interaction Design:
As I was reading the recent posts and discussions regarding NUI/OCGM, I also contemplated some of what I've been reading over my holiday break, "Acting With Technology:  Activity Theory and Interaction Design", written by Victor Kaptelinin and Bonnie A. Nardi.   Victor Kaptelinin is the co-editor of "Beyond the Desktop Metaphor: Designing Integrated Digital Work Environments" (MIT Press, 2007), and has an interest in computer-supported cooperative work.  Bonnie Nardi brings to the IT world her background in anthropology, and is the co-author of "Information Ecologies:  Using Technology with the Heart" (MIT Press, 1999). The authors know what they are talking about. 


It is important to note that activity theory-based interaction design is viewed as "post-cognitivist", and is informed by some of what I studied in psychology, education, and social science years ago. Within the field of activity theory are some important differences, which I'll save for a future post.


Below are some concepts taken from the book. I am still mulling them over through the prism of NUI, post-WIMP, PATA, TOCA, OCGM, etc. That's why there will be a "Part II", with specific examples.


"Means and ends, the extent to which the technology facilitates and constrains attaining user's goals and the impact of the technology on provoking or resolving conflicts between different goals


Social and physical aspects of the environment - integration of target technology with requirements, tools, resources, and social rules of the environment
Learning, cognition, and articulation,  internal vs external components of activity and support of their mutual transformations with target technology


Development -Developmental transformation of the above components as a whole" 
"Taken together, these sections cover various aspects of the way the target technology supports, or is intended to support, human actions".  (page 270)


I especially like the activity checklist included in the appendix of the book, as well as the concept of tool mediation. "The Activity Checklist is intended to be used at early phases of system design or for evaluating existing systems.  Accordingly, there are two slightly different versions of the Checklist, the "evaluation version" and the "design version".  Both versions are implemented as organized sets of items covering the contextual factors that can potentially influence the use of computer technology in real-life settings.  It is assumed that the Checklist can help to identify the most important issues, for instance, potential trouble spots that designers can address". (page 269)


"The Checklist covers a large space.  It is intended to be used first by examining the whole space for areas of interest, then focusing on the identified areas of interest in as much depth as possible...there is a heavy emphasis on the principle of tool mediation"  (page 270).


Other Thoughts
What is missing from this picture is a Universal Design component, something that I think holds up across time and technologies. Following the principles of Universal Design doesn't mean dumbing down or relying on simplicity. It is a multi-faceted approach, and relies on constructing flexibility in use, one of the key concepts of Universal Design. I'd like to see this concept embedded in the post-WIMP conceptualization somehow.


Because of my background in education/psychology/special education, I try to follow the principles of Universal Design for Learning (UDL) when I work on a technology project. I've spent some time thinking about how the principles of UDL could be realized through new interaction/interface systems. Although this approach focuses on the educational technology domain, it is important to consider, given that a good percentage of our population - potential users, clients, consumers - has a temporary or permanent disability of one kind or another.


Components of Universal Design for Learning:
Multiple Means of Representation
Provide options for perception
Provide options for language and symbols
Provide options for comprehension
Multiple Means of Action and Expression
Provide options for physical action
Provide options for expressive skills and fluency
Provide options for executive functions
Multiple Means of Engagement
Provide options for recruiting interest
Provide options for sustaining effort and persistence
Provide options for self-regulation
-Adapted from the UDL Guidelines/Educator Checklist, which breaks down the components into more specific details.


Note:  The concept of Universal Design for Learning shares historical roots with some of the work behind Activity Theory and Interaction Design. Obviously, there is still much to contemplate regarding OCGM and other permutations of post-WIMP concepts!   


Here are my comments to the discussion on Johnathan Brill's blog from January 2009:
Thoughts: I'm seeing a trend to "optimize" WIMP-type applications so they can be manipulated by gesture and touch. People who program kiosks, ATMs, and POS touch screens are examples of what I'm talking about. Touch-and-hold, two-fingered touch, and double-tap are just a slight transformation of the WIMP world-view, and in my opinion, are still WIMP (wimpy!). The mouse interaction "pretenders" are fine for using legacy productivity applications, OK in the short run.

For example, I have an HP TouchSmart, but I don't use the touch screen as often as I'd hoped. Try using Visual Studio to code something on a touch screen. There is so much more that can be done! I know from the touch-screen prototypes/demos I've worked on in various classes that applications that support collaboration and decision-making are important, and not just for work.

What do people DO, really? First of all, we are social beings, most of us. Think of what we share and discuss with others, and think about what sort of interactions on a display might best correspond with this interaction. Here are some of the things I've been DOING recently that involved some sort of technology and communication/collaboration with others:



---Travel planning - I recently went on a cruise and, with various family members, selected activities I wanted to do on the ship as well as planned my shore excursions (a complicated process).


---Picture sharing - I came back from the cruise with lots of pictures that I uploaded to Flickr. Related to this process: picture annotating, tagging, choosing/comparing, and editing. It would be SO cool if I could use two sliders to enhance my pictures just so!


---Talking on the phone and responding to e-mail with friends and family members about the pictures, and what they wanted to see on Flickr- "you know, the ones of the dogs in Jamaica"... of which I had about 68!


---Financial planning with my husband. (I took info-viz last semester, so I know the possibilities are there.)


---Using the touch-screen to check in at my eye-doctor's office: This was a user-unfriendly experience. Such a nice little screen. I was provided with a WIMPy, PowerPoint-like interface which was confusing to use - and time consuming!


---Shopping at the new Super Wal-Mart: I asked the greeter if there was a map, and he said, "Food is over there, and the rest of the stuff is that way." Flat panel displays were all over the store, but of course, they weren't interactive. I had no idea where anything was, and the few items I was looking at had no bar-code tags. There wasn't a clerk in sight. Wal-Mart TV rolled on and on via the display above my head. If only I could harness the display and have the talking head answer my questions! I gave up on my shopping trip when I was in the facial lotion/potion section. Too many choices, and too much fine print to read.

---Shopping at the mall with my young-adult daughters... I'll have to hold my thoughts on that one for now!

Some suggestions:
I think the artist/designers, (even dancers,) who are interested in multi-touch and gesture interaction have some interesting things to consider. (I linked to some of my previous posts.)


Again:
I am still mulling things over through the prism of NUI, post-WIMP, PATA, TOCA, OCGM, etc. So that is why there will be a "Part II", with specific examples!


RELATED
Multimedia, Multi-touch, Gesture, and Interaction Resources


My thoughts:
2007 Letter to the Editor, Pervasive Computing
Useful Usability Studies (pdf)
2007 Blog Post
Usability/Interaction Hall of Shame (In a Hospital)
2008 Blog Posts

Emerging Interactive Technologies, Emerging Interactions, and Emerging Integrated Form Factors
Interactive Touch-Screen Technology, Participatory Design, and "Getting It"
An Example of Convergence: Interactive TV: uxTV 2008
2009 Blog Posts

Why "new" ways of interaction?
Microsoft: Are You Listening?  Cool Cat Teacher (Vicki Davis) Tries out Microsoft's Multi-touch Surface Table
Haptic/Tactile Interface:  Dynamically Changeable Physical Buttons
The Convergence of TV, the Internet, and Interactivity:  Update
UX of ITV:  The User Experience and Interactive TV (or Let's Stamp Out Bad Remote Controls)
Digital Convergence and Interactive Television;  Boxee and Digital Convergence 

ElderGadget Blog: Useful Tech and Tools


Other People's Thoughts
Ron George's blog, OCGM (pronounced Occam['s Razor]) is the replacement for WIMP  12/28/09
Ron George: Welcome to the OCGM Generation! Part 2 
Stephen, Microsoft Kitchen: OCGM, A New Windows User Experience
Richard Monson-Haefel's blog, Multi-touch and NUI:  What is NUI's WIMP?
Richard Monson-Haefel:  OCGM: George's Razor
Josh Blake's blog,  Deconstructing the NUI: WIMP is to GUI as OCGM (Occam) is to NUI
Bill Buxton: Gesture Based Interaction (pdf) (Updated 5/2009)
Bill Buxton: "Surface and Tangible Computing, and the "Small" Matter of People and Design" (pdf) - ISSCC 2008
Dan Saffer, Designing for Gestural Interfaces: Touchscreens and Interactive Devices
Dan Saffer, Designing for Interaction 
Mark Weiser,  Computer for the 21st Century  Scientific American, 09, 1991
Touch User Interface:  Readings in Touch Screen, Multi-Touch, and Touch User Interface
Jacob O. Wobbrock, Meredith Ringel Morris, and Andrew D. Wilson, User-Defined Gestures for Surface Computing, CHI 2009, April 4–9, 2009, Boston, Massachusetts, USA.

Feb 20, 2013

Disney Research: Touche, Touch and Gesture Sensing

The following video is a demonstration of Swept Frequency Capacitive Sensing (SFCS), which recognizes various configurations of the hands and body during interactions. This system differs from conventional capacitive touch sensing in that it senses across a range of frequencies to develop a capacitive profile, providing a significant amount of data that can be analyzed and used in an application.

At 1:23, SFCS is demonstrated on a table, sensing body posture and configuration. The system is wireless, can be used on smaller touch screens such as mobile devices, and can even recognize interactions in liquids.
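Since the related links below include an Arduino port of Touché, it is worth noting that the core idea is simple enough to sketch in a few lines. Below is a minimal Python illustration of the classification step, assuming per-frequency response vectors ("capacitive profiles") have already been captured; the frequency range, the response model, and the gesture labels are all invented for illustration and are not taken from the Touché implementation.

```python
import math

# Sketch of the SFCS idea: sweep a range of frequencies, record the
# response at each one to form a "capacitive profile" vector, then
# classify a new profile against stored examples by nearest neighbor.

FREQUENCIES = [1_000 * k for k in range(1, 201)]  # 1 kHz .. 200 kHz

def distance(profile_a, profile_b):
    """Euclidean distance between two capacitive profiles."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(profile_a, profile_b)))

def classify(profile, training_set):
    """Return the label of the closest stored profile."""
    return min(training_set, key=lambda item: distance(profile, item[1]))[0]

# Synthetic training profiles; a real system would record these from
# the sensor while the user performs each touch configuration.
def fake_profile(peak_hz):
    return [1.0 / (1.0 + abs(f - peak_hz) / 10_000) for f in FREQUENCIES]

training = [
    ("one_finger", fake_profile(40_000)),
    ("full_grasp", fake_profile(120_000)),
]

observed = fake_profile(42_000)  # a new measurement, near "one_finger"
print(classify(observed, training))
```

The per-frequency profile is what distinguishes this from a single-frequency touch sensor: two touch configurations that look identical at one frequency can still produce distinguishable profiles across the sweep.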


Touche was awarded Best Paper at ACM CHI 2012:

RELATED
Touche: Touch and Gesture Sensing for the Real World
Disney Research
Sato, M., Poupyrev, I, and Harrison, C. Touché: Enhancing Touch Interaction on Humans, Screens, Liquids, and Everyday Objects. In Proceedings of CHI’12. 2012. ACM.
Paper [PDF, 10Mb]
Touche with Arduino
Swept Frequency Capacitive Sensing (SFCS)
Audrey Cropp, Responsive Landscapes, 2/18/13

SOMEWHAT RELATED Synthetic Ecologies Course Reading List
Responsive Environments Course
Allen Sayegh, Harvard Graduate School of Design



Nov 17, 2012

Human Computer Interaction + Informal Science Education Conference (NUI News)

I recently learned of the HCI + ISE conference, funded by the National Science Foundation and organized by Ideum and Independent Exhibitions, which will provide the groundwork for the future design and development of interactive computer-based science exhibits.
Science museums have a long history of interactivity, well suited to groups of "explorers", such as families or students visiting on a field trip.  

What is really exciting is that new interactive applications and technologies have the power to transform the way people learn and understand science in a collaborative and social way. Innovations in the field of HCI (human-computer interaction), such as multi-touch and gesture interaction, are well-suited to meet the goals of science education for all, beyond the school doors and wordy textbooks.

Below is a screenshot of the conference website, a description of the conference quoted from the site, and some related resources.



About the HCI+ISE Conference
"HCI technologies, such as motion capture, multitouch, augmented reality, RFID, and voice recognition are beginning to change the way computer-based science exhibits are designed and developed. Human Computer Interaction in Informal Science Education (HCI+ISE) is a first-of-its-kind gathering to explore and disseminate effective practices in developing a new generation of digital exhibits that are more intuitive, interactive, and social than their predecessors."
"The HCI+ISE Conference, to be held in Albuquerque, New Mexico June 11-14 2013, will bring together 60 museum exhibit designers and developers, learning researchers, and technology industry professionals to share effective practices, and to explore both the enormous potential and possible pitfalls that these new technologies present for exhibit development in informal science education settings."
"HCI+ISE will focus on the practical considerations of implementing new HCI technologies in educational settings with an eye on the future. Along with a survey of how HCI is shaping the museum world, participants will be challenged to envision the museum experience a decade into future. The conference results will provide a concrete starting point for exhibit developers and informal science educators who are just beginning to investigate these emerging technologies and design challenges in creating these new types of exhibits."
Why HCI+ISE?
"Since the mid-1980s informal educational venues have increasingly incorporated computer-based exhibits into their science communication offerings in an effort to keep pace with public expectations and make use of the expanding opportunities these technologies provide. The advent and popularity of once novel HCI technologies are becoming commonplace: the Wii and Microsoft Kinect now allow for motion capture video games, tablet PCs have multitouch interaction, and smart phones and other devices come standard with voice recognition. Yet many museums are still developing single-touch and trackball-driven, single-user computer kiosks."
"Science museums have a long history of championing hands-on, physical, and inquiry-based activities and exhibits. This vast experience has only just begun to be applied to interactive computer interfaces. Along with seasoned science exhibit developers, the Conference will draw upon individuals outside of ISE who will provide fresh insight into the technologies, design issues, and audience expectations that these visitor experiences present."
Involvement and Findings
"HCI+ISE will bring together a diverse group of practitioners and other professionals to discuss (and in some cases share and prototype) new design approaches utilizing emerging HCI technology. Please see our Apply page to learn how you can participate. Conference news and findings will be distributed through a variety of ISE and museum websites, including this one."
"We welcome your questions and comments about the HCI+ISE Conference."
CONTACTS
Kathleen McLean of Independent Exhibitions
& Jim Spadaccini of Ideum
HCI+ISE Co-chairs
"Open Exhibits is a multitouch, multi-user tool kit that allows you to create custom interactive exhibits."
CML: Creative Markup Language
GML: Gesture Markup Language
GestureWorks
Ideum

May 21, 2012

Leap Motion: Low Cost Gesture Control for Your Computer Display

Jessica Vascellaro of the Wall Street Journal reports on gesture, motion, and even object control for computers, highlighting the work of Leap Motion and Flutter.




Apparently the Leap Motion sensor is less expensive than Microsoft's Kinect. It can track finger and hand movements to within 1/100 of a millimeter, and it handles interaction within 8 cubic feet of space.
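One practical consequence of sub-millimeter tracking is that raw positions jitter from frame to frame, so gesture applications commonly smooth them before use. Here is a generic exponential-smoothing sketch in Python; it is not the Leap Motion API, and the sample coordinates are invented for illustration.

```python
# Exponentially smooth a stream of (x, y, z) fingertip samples (in mm).
# alpha controls responsiveness: higher alpha follows the raw data more
# closely, lower alpha suppresses more jitter but adds lag.

def smooth(points, alpha=0.3):
    """Return a smoothed copy of a sequence of 3D fingertip samples."""
    smoothed = []
    state = None
    for p in points:
        if state is None:
            state = p  # initialize with the first sample
        else:
            state = tuple(alpha * c + (1 - alpha) * s
                          for c, s in zip(p, state))
        smoothed.append(state)
    return smoothed

raw = [(0.00, 0.0, 0.0), (0.12, 0.0, 0.0), (0.05, 0.0, 0.0), (0.20, 0.0, 0.0)]
for p in smooth(raw):
    print(tuple(round(c, 3) for c in p))
```

The same one-line filter applies equally well to Kinect skeleton joints or multi-touch contact points.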


Below is a video from the Leap Motion website:






RELATED
Leap FAQs
Leap Motion Developer Kit Application
Leap Motion: 3D hands-free motion control, unbound
Daniel Terdiman, CNET, 5/20/12
FYI:  Do a search and you'll find many more articles and posts about Leap Motion!

Dec 13, 2011

Kinect in Education! (kinectEDucation)

Although I'm currently exploring the world of interactive HTML5, interactive video, etc., I think I just might make "kinecteducation" the focus of my tech hobbies. I have some experience with game programming (one of my computer courses required a project using XNA), and I know quite a bit about gesture and multi-touch, multi-user interaction, so it wouldn't be too much of a stretch.


My motivation?

As a school psychologist, my main assignment is a school/program for students with disabilities, including about 40 or so who have autism spectrum disorders. Yesterday, the principal of the school attended a demonstration of the Kinect and requested that our school be considered for piloting it. One of my other assignments is a magnet high school for technology and the arts, and rumor has it that it will be offering a game programming curriculum.  I'd love to co-sponsor an after-school game club and encourage the students to program educational apps for the Kinect sometime in the near future! 


I'm also working as a client, in collaboration with some of my educator colleagues, with a team of university students who are creating a communication/social skills game suite geared for students with autism and related disabilities...


I'm inspired by the possibilities!


We have large SMARTboards in each classroom and in other locations around the building, and we have a Wii set up in the large therapy room adjacent to my office. The Wii has proven to be very useful in helping the students develop social and leisure skills that they can use in and outside of the school settings, but some of the students have difficulty manipulating the buttons on the controllers.
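For students who struggle with controller buttons, the appeal of the Kinect is that input can be a whole-body pose instead. As a toy illustration of the kind of check a controller-free app makes, here is a Python sketch that detects a raised hand from skeleton joint positions; the joint names and coordinates are hypothetical stand-ins, though the real Kinect SDK exposes similar per-joint 3D positions.

```python
# Detect a "hand raised" pose from (hypothetical) skeleton joint data.
# Each joint maps to an (x, y, z) position in meters; y is vertical.

def hand_raised(joints, threshold=0.05):
    """True if either hand is at least `threshold` meters above the head."""
    head_y = joints["head"][1]
    return any(joints[hand][1] > head_y + threshold
               for hand in ("hand_left", "hand_right"))

frame = {
    "head": (0.0, 1.6, 2.0),
    "hand_left": (-0.3, 1.0, 2.0),
    "hand_right": (0.3, 1.75, 2.0),  # right hand above the head
}
print(hand_raised(frame))
```

A classroom app would run a check like this on every skeleton frame and map the pose to an action, such as answering a question or taking a turn in a game.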


You can get Kinect-based apps from the Kinect Education website! Below are selected links from the website:

You can also get additional information from the Microsoft in Education "Kinect in the Classroom" website.

Below are a few videos to give you an overview of how open-source applications designed for the Kinect can be used in education: