Showing posts with label gesture. Show all posts

Feb 4, 2011

Immersive Labs' Intelligent Interactive Display/Billboard App Demo

"If a "Minority Report" future was possible, and if it is coming, how do we create a model of it that is not intrusive, that is fun to play with, that is social, that is respectful and not intrusive, that doesn't invade people's privacy?" -Jason Sosa, CEO, Immersive Labs



Future of Advertising Technology (Immersive Labs demo from 2 years ago)

Future of Advertising Technology from Immersive Labs on Vimeo.


RELATED
Immersive Labs is looking for an interactive developer!


Some of my thoughts on this topic:  Interactive Touch-Screen Technology, Participatory Design, and "Getting It" - Revisited

Jan 18, 2011

"Hi, Google. My name is Johnny" Johnny Chung Lee leaves Microsoft. (I still wish I could be Johnny Chung Lee for a day.)

"Hi, Google.  My name is Johnny"  Johnny Chung Lee announced on his Procrastineering blog that he's accepted a position at Google as a "Rapid Evaluator".   I'm not sure what he will be doing in this position, but his title is intriguing!

Here are some of my previous posts devoted to the work of Johnny Chung Lee:

I wish I could be Johnny Chung Lee for a Day!  Tracking fingers with the Wii Remote
Video Clips of Projects Inspired by Johnny Chung Lee
More about Project Natal:  Ricochet - Great Gaming for Fitness, Johnny Chung Lee's Contribution


I STILL wish I could be Johnny Chung Lee for a day!

RELATED
Microsoft Kinect Developer Johnny Chung Lee Jumps Ship and Lands at Google
Leena Rao, TechCrunch, 1/18/11
What Microsoft Kinect Defection to Google Means
Rich Tehrani, TMCnet Blog 1/18/11
Microsoft Loses a Top Kinect Researcher to Google
Tricia Duryee, Yahoo! Finance, 1/18/11

Jan 9, 2011

New Microsoft Surface 2.0 and InfoStrat's Surface 2.0 Information Visualization Controls

Microsoft Surface 2.0 was unveiled at CES 2011 a few days ago, the result of a collaboration between Microsoft and Samsung. Surface 2.0 is a step up! The 40-inch 1080p high-definition LCD display no longer requires a projection/camera system, which clears the area below the screen of bulky hardware. The best part about Surface 2.0, in my opinion, is that it doesn't have to be used as a table. It can be configured in a variety of ways, even mounted on walls. For this reason, it will be useful in a variety of settings and situations.


Below is a quote about Surface 2.0 from Steve Ballmer's recent keynote address at CES 2011 that outlines the new technology that is embedded in the Surface 2.0 display:


"But what's really amazing about this technology, what really makes it magical, is the sensor itself. So, those first-generation Surface PCs needed cameras underneath that would look up to try to see what was going on. But what we have here is called PixelSense. PixelSense is new technology we've invented where there's infrared sensors all across this screen. Every single pixel is actually acting as a camera. The PC, the Surface here, can actually see." -Steve Ballmer:  My Keynote Address at the 2011 International Consumer Electronics Show" (Huffington Post, 1/6/2011)





The good news is that developers have been hard at work creating applications for Surface 2.0. Below is a video demonstration of what the folks at InfoStrat have recently created to support collaborative information visualization activities:







Here's the information about the controls from the Infostratcville YouTube channel:

"This is a sneak preview of a suite of data visualization controls developed by InfoStrat for Microsoft Surface 2.0. The controls will be made available as open source software at no charge on CodePlex.com in the first half of 2011."


"This data visualization control suite provides multi-touch versions of the following controls:
- DeepZoom multi-resolution image control that allows high performance display of very high-resolution imagery
- PowerPoint Viewer which enables slide decks to be arranged and presented using multi-touch
- PivotViewer chart control that allows dynamic sorting and categorization of data
- PhysicsCanvas which provides an infinite, dynamic canvas for viewing and organizing content"
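Controls like these typically turn pairs of raw touch points into pan, zoom, and rotate transforms. As a rough illustration of the underlying math (a hypothetical Python sketch of my own; the actual InfoStrat controls are WPF/C# and rely on the platform's manipulation processing), here is how a two-finger manipulation can be decomposed:

```python
import math

def two_finger_transform(p1_old, p2_old, p1_new, p2_new):
    """Derive translation, scale, and rotation from two touch points
    moving from their old positions to their new positions."""
    def centroid(a, b):
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)

    def distance(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    c_old = centroid(p1_old, p2_old)
    c_new = centroid(p1_new, p2_new)
    # Pan: how far the midpoint between the fingers moved.
    translation = (c_new[0] - c_old[0], c_new[1] - c_old[1])
    # Zoom: how much the distance between the fingers changed.
    scale = distance(p1_new, p2_new) / distance(p1_old, p2_old)
    # Rotate: how much the line between the fingers turned (radians).
    rotation = angle(p1_new, p2_new) - angle(p1_old, p2_old)
    return translation, scale, rotation

# Two fingers spreading apart horizontally: pure zoom, no pan or rotation.
t, s, r = two_finger_transform((100, 100), (200, 100), (50, 100), (250, 100))
```

A control applies the resulting transform to the touched object each frame, which is what makes direct manipulation feel "physical."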




RELATED
Josh Blake's post:  "Microsoft Surface 2.0 Data Visualization Controls by InfoStrat" 
Microsoft Surface Blog: "Microsoft and Samsung Unveil the Next Generation of Surface"


PRESS RELEASE
For Immediate Release

9 a.m. PST
January 6, 2011
InfoStrat Releases Next-Generation Data Visualization Controls for Microsoft Surface 2.0
Washington DC – January 6, 2011 – InfoStrat today announced plans to support Microsoft Surface 2.0 by releasing a control suite that accelerates the development of next-generation multi-touch data visualizations. The controls will be made available as open source software at no charge on CodePlex.com in the first half of 2011.
This data visualization control suite provides multi-touch versions of the following controls:
§ Deep Zoom multi-resolution image control that allows high performance display of very high-resolution imagery
§ PowerPoint Viewer which enables slide decks to be arranged and presented using multi-touch
§ Pivot Viewer chart control that allows dynamic sorting and categorization of data
§ Physics Canvas which provides an infinite, dynamic canvas for viewing and organizing content
Other features of the controls:
§ Works on both Microsoft Surface and Microsoft Windows 7 with touch
§ A single application built with the data visualization framework can support multiple hardware form factors including: horizontal multi-touch tables, tablets, and large format vertical touch screens
§ Innovative object recognition to enable rapid data manipulations (only on Microsoft Surface)
Watch a sneak preview of the control suite on YouTube: http://www.youtube.com/watch?v=lEVtjHlrf4I
InfoStrat is a member of Microsoft’s Technology Adoption Program (TAP) for Microsoft Surface. As a Microsoft Surface 2.0 TAP member, InfoStrat receives early access to hardware and software, allowing InfoStrat to gain expertise and influence the development of the product before it is released to the public.
In 2008, InfoStrat solved the problem of using Bing Maps 3D on Microsoft Surface in a way that performed well and was WPF-friendly. InfoStrat open-sourced the solution as a reusable control for the WPF and Surface community. Since then, the control has received over 120,000 page views and over 8,200 downloads, and has also been featured in many of our own applications. This control, known as InfoStrat.VE, has become one of the most popular controls for building mapping applications on Microsoft Surface: http://bingmapswpf.codeplex.com
“We are proud to be part of the Microsoft Surface development community,” according to Jim Townsend, president of InfoStrat, “and excited about the possibilities of Microsoft’s new version of Surface.”
Microsoft Surface provides a new way to experience and use information and digital content, engaging the senses, improving collaboration and empowering people to interact. Microsoft Surface is at the forefront of developing software and hardware that uses vision-based technology to fundamentally change the way people use computing devices. More information can be found at http://www.surface.com.
Information Strategies ("InfoStrat") is an award-winning Microsoft Gold Certified Partner and a Microsoft Surface Strategic Partner and member of the Technology Adopter Program.
For more information, press only:
Josh Wall, InfoStrat, (202) 364-8822 ext. 202, joshw@infostrat.com

Dec 3, 2010

More gesture and multi-touch interaction! Windows 7 Navigation with Kinect; Product Browser by Immersive Labs

Here are a couple of new natural user interface videos.  The first video, by Evoluce, demonstrates gesture interaction/navigation in Windows 7 applications supported by Kinect. The second video, by Immersive Labs, shows multi-touch product browsing interaction on a large display.

Kinect Treatment of Windows 7, by Evoluce

Evoluce: Leading Surface Technologies


Immersive Labs - Multi-touch Product Browser

Immersive Labs

Jun 8, 2010

John Underkoffler Demonstrates g-speak: Collaborative, Multi-display Interaction (TED Talk Video by John Underkoffler, Minority Report Science Adviser)

John Underkoffler Points to the Future of UI (User Interface)


"Minority Report science adviser and inventor John Underkoffler demos g-speak -- the real-life version of the film's eye-popping, tai chi-meets-cyberspace computer interface. Is this how tomorrow's computers will be controlled?"

In this video, technologies that have been around for 10-15 years are demonstrated, along with newer user interface interaction, navigation, manipulation, and analysis techniques. Includes 3D interaction as well as collaborative, multi-display interaction.


"Media should be accessible, in fine grained form."

Feb 26, 2010

More Multi-touch: So touch Multi-touch Presentation Software for Windows 7

So touch is a creative software company that has developed So touch Presentation for creating multi-touch presentations for Windows 7. You can download a trial version from the So touch website. Minimum requirements are a 1.6 GHz processor (Core 2 Duo), 2 GB of RAM, and a 512 MB graphics card.

Here's the promotional video:


Here is the promotional information from the So touch YouTube site:

"Create your own multi-touch presentations! Discover the NEW So touch Presentation software!


Get your audience captivated and make your presentations more intuitive and entertaining than ever!


Manipulate images or screenshots of your usual documents with multi-touch gestures! Navigate multi-image formats, scroll up and down long images. Then open the original file or document in one tap on the screen, leveraging the usual associated Windows application!


Thanks to its user-friendly visual administration interface, the So touch Presentation software is easy to use and will bring your presentations to life on a day-to-day basis!"


For more information, get in touch at contact [at] so-touch [dot] com or visit http://www.so-touch.com


Music by http://www.myspace.com/wouhouh


So touch / onedtozero / Martin Senyszak by So touch


FOR THE TECH-CURIOUS
"So touch Presentation for Windows 7 is developed using Adobe AIR and our unique AS3 framework for Adobe AIR and Windows 7 that is also available on our website! It is the first professional and transparent solution to develop Windows 7 compatible Adobe AIR applications. We are proud to be the first to announce it! There is some open source existing solutions but they don't offer the transparency and efficiency of SoBridge, the TUIO to Windows 7 C# bridge, included with So touch Framework."


Contact Person: Julien Lescure
Company Name: So touch
Telephone Number: +44 20 3239 3912
Email Address: contact[at]so-touch[dot]com
Web site address: http://www.so-touch.com


Jan 29, 2010

iPad multi-touch gestures for iWork, page navigator tool, fast data entry & infographics, one-touch form creation, iPad wall. (Updated 1/30/10.)

Update 1/30/10
Know HTML & JavaScript?  Open source PhoneGap lets you create apps for the iPhone and other platforms. (Update: Including the iPad.)
Update 1/30/10
According to Brian Chen's Gadget Lab post, Apple recently made a change that enables the iPhone and iPad to function as web phones:
"ICall, a voice-over-Internet Protocol (VOIP) calling company, said the latest revisions in Apple’s iPhone developer agreement and software development kit enable the iPhone to make phone calls over 3G data networks. ICall promptly released an update to its app today, adding the 3G support...Because the iPad includes a microphone and will run iPhone apps, that means the tablet will gain internet telephony, too." Read more: http://www.wired.com/gadgetlab/2010/01/iphone-voip/

Interactions in Apple's iWork Applications for iPad


RELATED
Interesting iPad Interactions  -Craig Villamor
New Multi-touch Interactions on the Apple iPad - Craig Villamor & Luke Wroblewski
The iPad's Actually New UI and Gestures -Matt Buchanan, Gizmodo
-Multi-finger multi-touch
-Popovers
-Media Navigator
-"Long" touch and drag
-Layered UI elements
iPad.org Forum


ClarkeHopkinsClarke iPad Wall Concept for a Library

Dec 31, 2009

The Post-WIMP Explorers' Club: Update of the Updates, Morning of 12/31/09

What is the Post-WIMP Explorers' Club?
I came up with the name of this semi-fictional club as a way to organize my thoughts (and blog posts) regarding the development of a new metaphor for post-WIMP applications and technologies, related specifically to natural user interfaces, natural user interaction design, and off-the-desktop user experience.


Update, morning of 12/31/09:
Josh Blake, author of the blog "Deconstructing the NUI", posted Metaphors and OCGM this morning. It fleshes out post-WIMP concepts, addressing metaphors & interfaces. The premise is that NUI metaphors will be less complex than GUI (WIMP) metaphors. My feeling is that on the surface, this will hold true, especially for consumers/users and people creating lightweight applications and software widgets.


Underneath the surface, where designers' and developers' brains spend more time than users' & consumers', things might be more complex. Why? The technology to support the required wizardry is more complex. With convergence, the creation of new technologies, applications, communication systems, and even electronic entertainment is now dependent upon the work and thinking of people from a wider range of disciplines. Each discipline brings to the table a set of terms rooted in theory, and even research practices.


Update,  late afternoon, 12/30/09:
Richard Monson-Haefel's response to Ron George's "Part 2". The concept of OCGM might be growing on him now... OCGM: George's Razor: "If Ron George can explain how OCGM encompasses Affordances and Feedback, then I'll be convinced that OCGM works for NUI. Otherwise, I think OCGM is a great start that would benefit from an added "A" and "F"." -Richard
  • OCGM relates to Occam's Razor. It is helpful to read a bit about it if you are interested in the post-WIMP conversations. (The link is to an article from "How Stuff Works", via Richard Monson-Haefel.)
UPDATE 12/30/09 -- This post is part of a discussion between several different bloggers, and was written before Ron George wrote his latest post, Welcome to the OCGM Generation! Part 2, which I recommend that you read now, or within the same time frame as this post. Since I'm not ready to write "Part 2" of this post, I tweaked what I had and added some links to a handful of my previous posts that touch on this and related topics. The links can be found at the bottom of this post.




START HERE FOR THE "ORIGINAL" POST FROM  12/29 & 12/20/09:


Background
About a year ago I responded to a conversation between Johnathan Brill, Josh Blake, and Richard Monson-Haefel discussing "post-WIMP" conceptualization regarding natural user interfaces and interaction, otherwise known as NUI.  The focus of the discussion was on Johnathan's post, "New Multi-touch Interface Conventions". At the time, we were reading Dan Saffer's book, Designing Gestural Interfaces, and contemplating new ways that technology can support human interaction and activities in a more natural, enjoyable, and intuitive manner.  

A few days later, I shared some of the concepts from the discussion in a post on this blog, "Why "new" ways of interaction?". The post includes video of Johnathan Brill discussing PATA, a post-WIMP analogue to assist with multi-touch/gesture-based application development, which he describes as follows:
Places: "Lighting, focus, and depth, simplified searching and effecting hyperlinked content."

Animation: "Using animation to subtly demonstrate what applications do and how to use them is a better solution than using icons. Animations make apps easier to learn."

Things: "Back in the days of floppy disks, objects helped us organize our content. This limitation was forced by arcane technology, but it did have one huge advantage. We used our spatial memory to help us navigate content. Things will help us organize content and manipulate controllers across a growing variety of devices."

Auras: "Auras will help us track what we are tracking and when an interaction has been successful."

(For reference, I've copied some of my responses to the first discussion near the end of this post.)


A year later....
What has changed? Everything post-WIMP has been covered like a blanket by the NUI-word. "NUI" now functions as a generic term for anything that is not exactly WIMP. There is a sense of urgency now to figure out how best to conceptualize post-WIMP interfaces and interactions. Newer, affordable technologies enable us to interact with friends and family while we are on the go. Netbooks, e-readers, smartphones, large touch-screen displays, interactive HDTV, and new devices with multi-modal I/O abound. Our grandparents are on Facebook and Twitter from their iPhones. Our world no longer requires us to be slaves to the WIMP mentality.


So what is the problem?
The technology has moved along so fast that application designers and developers have not had a chance to catch up. (The iPhone is an exception.)  The downturn in the economy has made it difficult for many to take the leap from traditional software or web development and gain new skill sets.  On top of it all, most of us over the age of 15 have been brainwashed from years of working within the constraints of WIMP. It doesn't matter if we are users, consumers, students, designers, or developers.


Even the folks least likely to have difficulty expanding into the post-WIMP world have had some difficulty. If you've had training in HCI (Human-Computer Interaction), you were inadvertently brainwashed with the best. The bulk of the theory and research you contemplated was launched at a time when WIMP was king, even as the Web expanded. Many of the principles held dear by traditional HCI folks have been shattered, and no one has come up with a "theory of everything" that will cover all of the human actions and interactions that are supported or guided by new technologies.


The problem, in part, is that letting go of WIMP is hard to do, as illustrated by the following post from the Ars Technica website:  Light Touch:  A Design Firm Grapples with Microsoft Surface  (Matthew Braga, 6/29/09) "Ditching the mouse and keyboard means a whole lot more than just doing without two common peripherals.  As those who have worked with Microsoft Surface have found out, you have to jettison decades of GUI baggage and start with a whole new mindset...In actuality, few multi-touch gestures are really anything like what we experience in the physical world. There is no situation in which we pull on the corners of an image to increase its size, or swipe in a direction to reveal more content. So, in the context of real-world interaction, these types of gestures are far from natural...gestures should not only feel natural, but logical; the purpose that gestures like these serve, after all, is to replace GUI elements to the end of making interaction a more organic process."   (Be sure to read the comments.)

Now that the Surface is taking root in more places, and touch-screen all-in-one PCs and tablets are starting to multiply, more people are giving "NUI" some thought. Ron George, an interaction and product designer with experience working with Microsoft's Surface team, has contributed to the post-WIMP discussion and spent some time sharing ideas with Josh Blake, a .NET, SharePoint, and Microsoft Surface consultant for InfoStrat and author of the Deconstructing the NUI blog. The outcome of this discussion was Ron George's December 28th blog post, "OCGM (pronounced Occam['s Razor]) is the replacement for WIMP", and Josh Blake's post, "WIMP is to GUI as OCGM (Occam) is to NUI". (Be sure to read the comments for both of these posts!)



OCGM (as conceptualized by Ron George)


Objects "are the core of the experience. They can have a direct correlation with something physical, or they can just be objects in the interface."


Containers "will be the “grouping” of the objects. This can manifest itself in whatever the system sees fit to better organize or instruct the user on interactions. They do not have to be, nor should they be, windows. They can be any sort of method of presentation or relationship gathering as seen fit."


Gestures "I went into detail about the differences in Gestures and Manipulations in a previous post [check it out for a refresher]. Gestures are actions performed by the user that initiate a function after its completion and recognition by the system. This is an indirect action on the system because it needs to be completed before the system will react to it."


Manipulations "are the direct influences on an object or a container by the user. These are immediate and responsive. They are generally intuitive and mimic the physical world in some manner. The results are expected and should be non-destructive. These are easily performed and accidental activations should be expected and frequent."
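The gesture/manipulation distinction is easy to see in code. In the hypothetical Python sketch below (the object, events, and flick threshold are my own illustrations, not from Ron George's post), a manipulation updates the object immediately on every input event, while a gesture triggers its function only after the completed motion is recognized:

```python
class Photo:
    def __init__(self):
        self.x, self.y = 0.0, 0.0
        self.deleted = False

def manipulate_drag(photo, dx, dy):
    """Manipulation: direct and immediate - the object moves with
    every single input event, mimicking the physical world."""
    photo.x += dx
    photo.y += dy

def recognize_flick(trail, threshold=200.0):
    """Gesture: indirect - nothing happens until the completed stroke
    is recognized (here, a long rightward flick means 'delete')."""
    total_dx = trail[-1][0] - trail[0][0]
    return total_dx > threshold

photo = Photo()
# Manipulation: each drag event is applied to the photo as it arrives.
for dx, dy in [(5, 0), (5, 1), (4, -1)]:
    manipulate_drag(photo, dx, dy)

# Gesture: the full stroke is evaluated only once, on finger-up.
trail = [(0, 0), (120, 4), (260, 6)]
if recognize_flick(trail):
    photo.deleted = True
```

This also shows why George says manipulations should be non-destructive and gestures should not: an accidental drag is easy to undo, while a mis-recognized gesture fires a whole function.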

To illustrate a point regarding the validity of the OCGM analogy proposed by Ron George, Josh Blake shares the following video of a presentation from REMIX 2009, in which August de los Reyes, the Principal Director of User Experience for Surface Computing at Microsoft, briefly discusses the TOCA (Touch, Objects, Containers, and Actions) concept, suggested to replace the WIMP concept:

The video wouldn't embed, so go to the following link:


Predicting the Past: A Vision for Microsoft Surface
"Natural User Interface (NUI) is here. New systems of interaction require new approaches to design. Microsoft Surface stands at the forefront of this product space. This presentation looks at one of the richest sources for inventing the future: the past. By analyzing preceding inflection points in user interface, we can derive some patterns that point to the brave NUI world." 


The concepts outlined in the presentation are similar to Microsoft's Vision for 2019.


Richard Monson-Haefel added his thoughts about the OCGM discussion in his recent blog post, "What is NUI's WIMP?" Richard disagrees with the OCGM concept, as he feels it doesn't encompass some important interactions, such as speech/direct voice input. He'd probably agree that NUI is NOT WIMP 2.0.



Post-NUI, Activity Theory, and Off-the-Desktop Interaction Design:
As I was reading the recent posts and discussions regarding NUI/OCGM, I also contemplated some of what I've been reading over my holiday break: "Acting with Technology: Activity Theory and Interaction Design", written by Victor Kaptelinin and Bonnie A. Nardi. Victor Kaptelinin is the co-editor of "Beyond the Desktop Metaphor: Designing Integrated Digital Work Environments" (MIT Press, 2007), and has an interest in computer-supported cooperative work. Bonnie Nardi brings to the IT world her background in anthropology, and is the co-author of "Information Ecologies: Using Technology with Heart" (MIT Press, 1999). The authors know what they are talking about.


It is important to note that activity theory-based interaction design is viewed as "post-cognitivist", and is informed by some of what I studied in psychology, education, and social science years ago. Within the field of activity theory there are some important differences, which I'll save for a future post.


Below are some concepts taken from the book. I am still mulling them over through the prism of NUI, post-WIMP, PATA, TOCA, OCGM, etc. That's why there will be a "Part II", with specific examples.


"Means and ends, the extent to which the technology facilitates and constrains attaining user's goals and the impact of the technology on provoking or resolving conflicts between different goals


Social and physical aspects of the environment - integration of target technology with requirements, tools, resources, and social rules of the environment
Learning, cognition, and articulation,  internal vs external components of activity and support of their mutual transformations with target technology


Development -Developmental transformation of the above components as a whole" 
"Taken together, these sections cover various aspects of the way the target technology supports, or is intended to support, human actions".  (page 270)


I especially like the activity checklist included in the appendix of the book, as well as the concept of tool mediation. "The Activity Checklist is intended to be used at early phases of system design or for evaluating existing systems.  Accordingly, there are two slightly different versions of the Checklist, the "evaluation version" and the "design version".  Both versions are implemented as organized sets of items covering the contextual factors that can potentially influence the use of computer technology in real-life settings.  It is assumed that the Checklist can help to identify the most important issues, for instance, potential trouble spots that designers can address". (page 269)


"The Checklist covers a large space.  It is intended to be used first by examining the whole space for areas of interest, then focusing on the identified areas of interest in as much depth as possible...there is a heavy emphasis on the principle of tool mediation"  (page 270).


Other Thoughts
What is missing from this picture is a Universal Design component, something that I think holds up across time and technologies. Following the principles of Universal Design doesn't mean dumbing down or relying on simplicity. It is a multi-faceted approach, and relies on constructing flexibility in use, one of the key concepts of Universal Design. I'd like to see this concept embedded in the post-WIMP conceptualization somehow.


Because of my background in education/psychology/special education, I try to follow the principles of Universal Design for Learning (UDL) when I work on technology projects. I've spent some time thinking about how the principles of UDL could be realized through new interaction/interface systems. Although this approach focuses on the educational technology domain, it is important to consider, given that a good percentage of our population - potential users, clients, consumers - has a temporary or permanent disability of one kind or another.


Components of Universal Design for Learning:
Multiple Means of Representation
Provide options for perception
Provide options for language and symbols
Provide options for comprehension
Multiple Means of Action and Expression
Provide options for physical action
Provide options for expressive skills and fluency
Provide options for executive functions
Multiple Means of Engagement
Provide options for recruiting interest
Provide options for sustaining effort and persistence
Provide options for self-regulation
-Adapted from the UDL Guidelines/Educator Checklist, which breaks down the components into more specific details.


Note:  The concept of Universal Design for Learning shares historical roots with some of the work behind Activity Theory and Interaction Design. Obviously, there is still much to contemplate regarding OCGM and other permutations of post-WIMP concepts!   


Here are my comments to the discussion on Johnathan Brill's blog from January 2009:
Thoughts: I'm seeing a trend to "optimize" WIMP-type applications so they can be manipulated by gesture and touch. People who program kiosks, ATMs, and POS touch screens are examples of what I'm talking about. Touch and hold, two-fingered touch, and double-tap are just a slight transformation of the WIMP world-view, and in my opinion, are still WIMP (wimpy!). The mouse interaction "pretenders" are fine for using legacy productivity applications, OK in the short run.

For example, I have an HP TouchSmart, but I don't use the touch screen as often as I'd hoped. Try using Visual Studio to code something on a touch screen. There is so much more that can be done! I know from the touch-screen prototypes/demos I've worked on in various classes that applications that support collaboration and decision-making are important, and not just for work.

What do people DO, really? First of all, we are social beings, most of us. Think of what we share and discuss with others, and think about what sort of interactions on a display might best correspond with this interaction. Here are some of the things I've been DOING recently that involved some sort of technology and communication/collaboration with others:



---Travel planning - I recently went on a cruise and, with various family members, selected activities I wanted to do on the ship as well as planned my shore excursions (a complicated process)


---Picture sharing - I came back from the cruise with lots of pictures that I uploaded to Flickr. Related to this process: picture annotating, tagging, choosing/comparing & editing. It would be SO cool if I could use two sliders to enhance my pictures just so!


---Talking on the phone and responding to e-mail with friends and family members about the pictures, and what they wanted to see on Flickr- "you know, the ones of the dogs in Jamaica"... of which I had about 68!


---Financial planning with my husband. (I took info-viz last semester, so I know the possibilities are there.)


---Using the touch screen to check in at my eye doctor's office: This was a user-unfriendly experience. Such a nice little screen. I was provided with a WIMPY PowerPoint-like interface which was confusing to use - and time-consuming!


---Shopping at the new Super Wal-Mart: I asked the greeter if there was a map, and he said, "Food is over there, and the rest of the stuff is that way." Flat panel displays were all over the store, but of course, they weren't interactive. I had no idea where anything was, and the few items I was looking at had no bar-code tags. There wasn't a clerk in sight. Wal-Mart TV rolled on and on via the display above my head. If I could only harness the display and have the talking head answer my questions! I gave up on my shopping trip when I was in the facial lotion/potion section. Too many choices, and too much fine print to read.

---Shopping at the mall with my young-adult daughters... I'll have to hold my thoughts on that one for now!

Some suggestions:
I think the artists/designers (even dancers) who are interested in multi-touch and gesture interaction have some interesting things to consider. (I linked to some of my previous posts.)


Again:
I am still mulling things over through the prism of NUI, post-WIMP, PATA, TOCA, OCGM, etc. So that is why there will be a "Part II". With specific examples!


RELATED
Multimedia, Multi-touch, Gesture, and Interaction Resources


My thoughts:
2007 Letter to the Editor, Pervasive Computing
Useful Usability Studies (pdf)
2007 Blog Post
Usability/Interaction Hall of Shame (In a Hospital)
2008 Blog Posts

Emerging Interactive Technologies, Emerging Interactions, and Emerging Integrated Form Factors
Interactive Touch-Screen Technology, Participatory Design, and "Getting It"
An Example of Convergence: Interactive TV: uxTV 2008
2009 Blog Posts

Why "new" ways of interaction?
Microsoft: Are You Listening?  Cool Cat Teacher (Vicki Davis) Tries out Microsoft's Multi-touch Surface Table
Haptic/Tactile Interface:  Dynamically Changeable Physical Buttons
The Convergence of TV, the Internet, and Interactivity:  Update
UX of ITV:  The User Experience and Interactive TV (or Let's Stamp Out Bad Remote Controls)
Digital Convergence and Interactive Television;  Boxee and Digital Convergence 

ElderGadget Blog: Useful Tech and Tools


Other People's Thoughts
Ron George's blog: OCGM (pronounced Occam['s Razor]) is the replacement for WIMP, 12/28/09
Ron George: Welcome to the OCGM Generation! Part 2 
Stephen, Microsoft Kitchen: OCGM, A New Windows User Experience
Richard Monson-Haefel's blog, Multi-touch and NUI:  What is NUI's WIMP?
Richard Monson-Haefel:  OCGM: George's Razor
Josh Blake's blog,  Deconstructing the NUI: WIMP is to GUI as OCGM (Occam) is to NUI
Bill Buxton: Gesture Based Interaction (pdf) (Updated 5/2009)
Bill Buxton: "Surface and Tangible Computing, and the "Small" Matter of People and Design" (pdf) - ISSCC 2008
Dan Saffer, Designing for Gestural Interfaces: Touchscreens and Interactive Devices
Dan Saffer, Designing for Interaction 
Mark Weiser, The Computer for the 21st Century, Scientific American, September 1991
Touch User Interface:  Readings in Touch Screen, Multi-Touch, and Touch User Interface
Jacob O. Wobbrock, Meredith Ringel Morris, Andrew D. Wilson, User-Defined Gestures for Surface Computing, CHI 2009, April 4–9, 2009, Boston, Massachusetts, USA.

Oct 25, 2009

GDIF: Gesture Description Interchange Format, a tool for music-related movements, actions, and gestures.

There has been a flurry of work in the computer music technology world that relates to what has been going on in interactive display technology and multi-touch & gesture interaction. I came across a link to the GDIF website while searching for information about interactive music and the use of multi-touch technologies for a future blog post.

So what is GDIF?  Gesture description interchange format

"The Gesture Description Interchange Format (GDIF) is being developed as a tool for streaming and storing data of music-related movements, actions, and gestures.  Current general purpose formats developped within the motion capture industry and biomechanical community (e.g. C3D) focus mainly on describing low-level motion of body joints.  We are more interested in describing gesture qualities, performer-instrument relationships, and movement-sound relationships in a coherent and consistent way.  A common format will simplify working with different software, platforms and devices, and allow for sharing data between institutions."  (The Jamoma environment is used to prototype GDIF.)


Alexander Refsum Jensenius is the man who initiated the GDIF project.  He's written a variety of articles about music, gestures, movement, and emerging technologies.  


Here's Alexander's bio: "Alexander (BA, MA, MSc, PhD) is a music researcher and research musician working in the fields of embodied music cognition and new interfaces for musical expression (NIME) at the University of Oslo and at the Norwegian Academy of Music. He studied informatics, mathematics, musicology, music performance and music technology at UiO, Chalmers, UC Berkeley and McGill. Alexander is active in the international computer music community through a number of collaborative projects, and as the initiator of GDIF. He performs on keyboard instruments and live electronics in various constellations, including the Oslo Laptop Orchestra (OLO)."




Related Publications
Godoy, R. I., E. Haga, and A. R. Jensenius (2006b). Playing `air instruments': Mimicry of sound-producing gestures by novices and experts. In S. Gibet, N. Courty, and J.-F. Kamp (Eds.), Gesture in Human-Computer Interaction and Simulation, GW 2005, Volume LNAI 3881, pp. 256–267. Berlin: Springer-Verlag.
Jensenius, A. R. (2009). Motion capture studies of action-sound couplings in sonic interaction. STSM COST Action SID report. fourMs lab, University of Oslo.
Jensenius, A. R. (2007). Action - Sound: Developing Methods and Tools to Study Music-related Body Movement. PhD thesis, Department of Musicology, University of Oslo, Norway.
Jensenius, A. R., K. Nymoen, and R. I. Godoy (2008). A Multilayered GDIF-Based Setup for Studying Coarticulation in the Movements of Musicians. In Proceedings of the International Computer Music Conference, 24–29 August 2008, Belfast.
Jensenius, A. R., T. Kvifte, and R. I. Godoy (2006). Towards a gesture description interchange format. In N. Schnell, F. Bevilacqua, M. Lyons, and A. Tanaka (Eds.), NIME '06: Proceedings of the 2006 International Conference on New Interfaces for Musical Expression, pp. 176–179. Paris: IRCAM - Centre Pompidou.
Kvifte, T. and A. R. Jensenius (2006). Towards a coherent terminology and model of instrument description and design. In N. Schnell, F. Bevilacqua, M. Lyons, and A. Tanaka (Eds.), Proceedings of New Interfaces for Musical Expression, NIME 06, IRCAM - Centre Pompidou, Paris, France, June 4–8, pp. 220–225. Paris: IRCAM - Centre Pompidou. [PDF]
Marshall, M. T., N. Peters, A. R. Jensenius, J. Boissinot, M. M. Wanderley, and J. Braasch (2006). On the development of a system for gesture control of spatialization. In Proceedings of the 2006 International Computer Music Conference, 6–11 November, New Orleans. [PDF]

RELATED
"Sonic Interaction Design is the exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic/emotional qualities in interactive contexts."
SID Action has four working groups:
WG1: Perceptual, cognitive, and emotional study of sonic interactions
WG2: Product sound design
WG3: Interactive art and music
WG4: Sonification



    "SoundHack was my main thing for a long time, and I poured a lot of effort into it. It was the place I put my ideas. I did have something of a mission with SoundHack. I wanted to take some computer music techniques that were only used in academia, and get them out there so that all types of musicians could use them."-Tom Erbe  SoundHack Spectral Shapers


Csound Blog "Old School Computer Music"
"Csound is a sound and music synthesis system, providing facilities for composition and performance over a wide range of platforms. It is not restricted to any style of music, having been used for many years in the creation of classical, pop, techno, ambient, experimental, and (of course) computer music, as well as music for film and television."-Csound on Sourceforge


Quote from Dr. Richard Boulanger (Father of Csound):
"For me, music is a medium through which the inner spiritual essence of all things is revealed and shared. Compositionally, I am interested in extending the voice of the traditional performer through technological means to produce a music which connects with the past, lives in the present and speaks to the future. Educationally, I am interested in helping students see technology as the most powerful instrument for the exploration, discovery, and realization of their essential musical nature - their inner voice."


Upcoming post about innovations at Stantum:
I'll be focusing on Stantum and its music and media technologies division, JazzMutant, in my next post. It is interesting to note that the co-founders of Stantum, Guillaume Largillier and Pascal Joguet, have backgrounds in electronic music. Guillaume specializes in multi-modal user interfaces and human-machine interface technologies; Pascal has a background in physics and electronics and has worked as a sound engineer.


My music back-story:



The very first computer-related course I took was Computer Music Technology (in 2003), since I play an electronic MIDI/digital keyboard and had previously tried to teach myself a few things, long before computers and related technologies were "easy" for me to figure out. During the mid-90's, I tried my hand at Dr. Richard Boulanger's Csound and tried to acquaint myself with tools from Cycling '74, but I gave up. Not long after that, I bought the first version of MOTU's FreeStyle, which worked nicely on my Performa 600, hooked up to my Ensoniq 32, after the nice people at MOTU sent me an update that was compatible with my set-up. Later on, I came across Tom Erbe's SoundHack freeware.


A lot has changed since then!