
Nov 13, 2008

RENCI at Duke University: Multi-Touch Collaborative Wall and Table utilizing TouchLib; More about UNC-C's Viz lab...

RENCI is a multi-disciplinary collaboration between several universities in North Carolina, with centers located at the Europa Center, Duke University, N.C. State, UNC Chapel Hill, East Carolina University, UNC-Asheville, UNC-Charlotte, and the Health Sciences Library at UNC-Chapel Hill. Many of the centers focus on visualization and collaborative technologies, and have been involved in multi-touch "surface" computing.

The pictures below are from the RENCI center at Duke University:

http://vis.renci.org/multitouch/wp-content/themes/daleri-dark-10/img/front.jpg

Duke Multi-Touch Collaborative Wall

The multi-touch wall is 13 x 5 feet, utilizes six high-definition projectors for a combined resolution of 5760 x 2160, and supports multiple users. According to the RENCI website, the design is scalable and applicable to non-flat surfaces. The wall system runs on Windows and Linux.
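The combined resolution is a simple tiling calculation: six 1920 x 1080 projectors in a 3-wide by 2-high grid (the grid layout is my assumption; the post only gives the totals) yield 5760 x 2160. A quick sketch:

```python
# Combined resolution of a tiled projector wall.
# Assumes a 3-wide by 2-high grid of 1080p projectors and no
# edge-blending overlap (real walls usually overlap projectors slightly).
def tiled_resolution(proj_w, proj_h, cols, rows):
    return proj_w * cols, proj_h * rows

w, h = tiled_resolution(1920, 1080, cols=3, rows=2)
print(w, h)  # 5760 2160
```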

Duke Multitouch Wall. (Photo credit: Josh Coyle)


The Wall is positioned at the end of the primary collaboration space. (Photo credit: Josh Coyle)


Direct Illumination (DI) is used for touch detection in both the wall and the table. A separate instance of TouchLib runs for each of the eight cameras used to detect touch, and each camera is handled separately for image processing and blob tracking. A gesture engine then interprets the touch information as gesture events.
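Because each camera tracks blobs only within its own view, the system has to map camera-local positions into one wall-wide coordinate space before the gesture engine can see them as a whole. RENCI's actual calibration (done with TouchLib's utility, shown below) is more involved; this is only a minimal sketch of the idea, with the camera grid layout assumed for illustration:

```python
# Sketch: merging blobs from independently tracked cameras into one
# wall-wide coordinate space. The 4x2 camera grid and the normalized
# (0-1) coordinates are my assumptions, not RENCI's actual setup.
def to_wall_coords(cam_index, x, y, cams_across=4, cams_down=2):
    """Map a normalized blob position in one camera's view to
    normalized wall coordinates, given a grid of cameras."""
    col = cam_index % cams_across
    row = cam_index // cams_across
    wall_x = (col + x) / cams_across
    wall_y = (row + y) / cams_down
    return wall_x, wall_y

# A blob at the center of camera 5 (second row, second column):
print(to_wall_coords(5, 0.5, 0.5))  # (0.375, 0.75)
```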

Direct Illumination (DI)

Graphics from the RENCI Vis Group Multi-Touch Blog

The Duke Multi-Touch Wall System

Here is a cool picture of the "Multi-touch Calibration Device", which uses a built-in TouchLib utility.

Calibrating using the utility built into TouchLib.

Additional information can be found on the RENCI Vis Group Multi-Touch Blog.

FYI

Touchlib is a multi-touch development kit that can be found on the NUI-Group website.

"Touchlib is a library for creating multi-touch interaction surfaces. It handles tracking blobs of infrared light, and sends your programs these multi-touch events, such as 'finger down', 'finger moved', and 'finger released'. It includes a configuration app and a few demos to get you started, and will interface with most types of webcams and video capture devices. It currently works only under Windows but efforts are being made to port it to other platforms."
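The event model the quote describes is a simple listener pattern: the tracker pushes "finger down", "finger moved", and "finger released" events to your code. TouchLib itself is C++; the Python sketch below only illustrates the pattern, and the names are mine, not TouchLib's real API:

```python
# Sketch of a TouchLib-style event listener: the tracker calls these
# methods as fingers appear, move, and lift. Names are illustrative,
# not TouchLib's actual (C++) interface.
class TouchListener:
    def __init__(self):
        self.active = {}            # finger id -> last known position

    def finger_down(self, fid, x, y):
        self.active[fid] = (x, y)

    def finger_moved(self, fid, x, y):
        self.active[fid] = (x, y)

    def finger_released(self, fid):
        self.active.pop(fid, None)

listener = TouchListener()
listener.finger_down(1, 0.2, 0.3)
listener.finger_moved(1, 0.25, 0.35)
print(len(listener.active))  # 1
listener.finger_released(1)
print(len(listener.active))  # 0
```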

If you are interested in creating your own multi-touch table, the NUI-Group website and forums are a great place to start.

Related:

If you follow my blog, you probably know that I've taken several graduate courses at UNC-Charlotte. Some of my professors and a classmate or two have been involved in some exciting visualization research over the past year. (If you are serious about multi-touch and other visually-based applications, it is worth taking some time to familiarize yourself with visualization and interaction research.)

News from the UNC-Charlotte Vis Center:

At the University of North Carolina at Charlotte, RENCI is a collaboration between the UNC Charlotte Urban Institute, the Center for Applied Geographic Information Science, and the Charlotte Visualization Center.

11/06/2008
Robert Kosara’s group wins two awards at IEEE VisWeek Caroline Ziemkiewicz and Robert Kosara won Honorable Mention (the second highest award) at the IEEE InfoVis Conference for their paper, “The Shaping of Information by Visual Metaphors”. Also, Alex Godwin, Kosara’s student, won Best Poster for his submission, “Visual Data Mining of Unevenly-Spaced Event Sequences”.

The Vis Center is pretty fascinating, as you can see by the group of visitors at an open house.

If you are just as fascinated by this stuff as the guys in the picture, here are links to some recent papers by UNC-Charlotte faculty affiliated with the Vis Center:

The Shaping of Information by Visual Metaphors (Caroline Ziemkiewicz and Robert Kosara)

Evaluating the Relationship Between User Interaction and Financial Visual Analysis (Dong Hyun Jeong, Wenwen Dou, Felesia Stukes, William Ribarsky, Heather Richter Lipford, Remco Chang)

Visual Analytics for Complex Concepts Using a Human Cognition Model (Tera Marie Green, William Ribarsky, and Brian Fisher)

Oct 26, 2008

Grafiti - a multi-touch, table-top, surface computing application, from a member of the NUI group.

Are you interested in multi-touch gestures for multimedia tabletop applications, tangible or surface computing, and interaction design? Take a look at this video of Grafiti:



Alessandro De Nardi, a computer science student at the University of Pisa, Italy, worked on Grafiti during Google's Summer of Code and is still involved with the project. Alessandro, a member of the NUI Group, is also in the Music Technology Group at Barcelona's UPF. He was supervised by Martin Kaltenbrunner and Sergi Jordà of the reactable project.

According to De Nardi's Google Code website:

"Grafiti is a C# framework built on top of the Tuio client that manages multi-touch interactions in table-top interfaces. The possible use of tangible objects is particularly contemplated. It is designed to support the use of third party modules for (specialized) gesture recognition algorithms. However a set of modules for the recognition of some basic gestures is included in this project.

The development is in the alpha state, going to beta soon.

The goals I've been aiming at are: generality, versatility, speed of execution, extensibility, ease of programming (integration) of external modules (simple APIs and protocols) and effective interface design with regards to HCI aspects." -Alessandro De Nardi
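The pluggable-module design De Nardi describes means the framework gathers touch traces and hands them to interchangeable gesture-recognition modules. Grafiti itself is C# on top of a TUIO client; everything in this sketch (names, interfaces, thresholds) is purely illustrative:

```python
# Sketch of a pluggable gesture-recognizer registry, as in Grafiti's
# design. Traces are lists of (x, y) points; each module either claims
# a trace or passes. All names and thresholds here are made up.
def _span(trace):
    xs = [p[0] for p in trace]
    ys = [p[1] for p in trace]
    return max(max(xs) - min(xs), max(ys) - min(ys))

class TapRecognizer:
    def recognize(self, trace):
        # A short trace that barely moves is a tap.
        if len(trace) <= 3 and _span(trace) < 0.02:
            return "tap"
        return None

class DragRecognizer:
    def recognize(self, trace):
        if _span(trace) >= 0.02:
            return "drag"
        return None

def classify(trace, modules):
    for m in modules:              # first module to claim the trace wins
        result = m.recognize(trace)
        if result:
            return result
    return "unknown"

modules = [TapRecognizer(), DragRecognizer()]
print(classify([(0.5, 0.5), (0.5, 0.51)], modules))   # tap
print(classify([(0.1, 0.5), (0.4, 0.5)], modules))    # drag
```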

Demos in binary form are available for download for Mac OS X and Windows on the Google Code Grafiti site. There is also a Grafiti site on SourceForge. Apparently the alpha version will work on all 32-bit MS Windows, BSD platforms, and POSIX (Linux) systems.

(Note: This post was updated on 12/14/08 and includes corrections.)

Aug 9, 2008

Creative Programming: openFrameworks - AWESOME for interactive multimedia applications!

openFrameworks: Better Tools, Enhanced Creativity, Better Projects: YES. Artists can make tools at the same time they make artwork.

To learn all about this, delve into the video. It highlights interviews with creative people who are using openFrameworks, including their innovative work.


made with openFrameworks from openFrameworks on Vimeo.

If you are working with openFrameworks, or thinking about it, let me know.


This looks like a great tool to use for projects I'm creating for my new HP TouchSmart....

.....and my multi-touch thought experiments ; }

I learned about openFrameworks from Seth Sandler, aka "cerupcat", a member of NUI Group who was chosen to participate in Google's Summer of Code. He's posted about his progress on his AudioTouch blog.

Here is a screenshot of Seth's tracking application, still under development, which is the result of porting TouchLib, the main tracker used by NUI-Group members, to openFrameworks:
http://www.nuicat.com/tracker.jpg

Jun 6, 2008

Emerging Interactive Technologies, Emerging Interactions, and Emerging Integrated Form Factors

Every so often I come across a blog that has been around for a while, but is a new discovery to me. If you are interested in interactive multimedia and emerging technology, and you haven't heard of Richard Banks, you will probably be pleased with his vast array of interesting posts!

After browsing around Richard Banks's blogs, I found that he works for Microsoft Research. Like me, he maintains more than one blog, and he blogs about similar topics.

Here is a view into his world:

rb.trends: tracking future technology experience
rb.work: technology, design and research
rb.log: family, photography, architecture etc.

One of Richard's recent posts brought me back to my musings about interactive surfaces:

Via Richard Banks and Gizmodo: A Touch Screen Poker Table from Hard Gaming

If it is possible to program an application to handle the metrics that support poker, then it is possible to program a multi-user application for work or education, and have it integrated into any type of display, according to need.

The form factor above would be useful for team meetings or collaborative project planning, especially during group decision-making or policy planning activities that involve data analysis or information visualization.

Could this concept be modified to fit the form of a kidney table for use in education? Think about it. We could combine the best features of surface computing with the best features of multi-user video games.
http://www.qualitychurchfurniture.com/images/activitytablecolors/lrg_48KID72LOset.jpg
Rationale? Video-game applications can handle the metrics of multi-user interaction, which would be ideal for tracking student progress and interaction. Touch interaction would give young children an easy way to access the application, or all of the information that will be available on the multi-touch web of the future, without first learning to type.

(Touch and gesture interaction could help us move towards the implementation of the Internet of Things)

Could this concept be transformed for use by two people collaborating on an electronic drafting table?


http://www.dwvartengine.com/media_downloads/dw_draftingTable_lo.jpg

If you are a NUI Group member, think about this!


Note: I came up with the phrase "multi-touch web of the future" as I was typing this post. I guess I didn't invent it - I looked it up and found Andrew Foote's post, "The Multi-Touch Web"

I'm in.


Somewhat Related.......


More for the tech-curious:

Interactive data visualization in Second Life

Idle together "Technology blog oriented towards good design and impressive web applications"

Apple Developing Full Multi-Touch Macs - Apple Insider


More for the tech-savvy:

Multi-touch gestures in the Factor UI (Apple)- by Pestov. Includes code.

NSResponder Modifications: Swipe, Rotate, and Magnify, by Elliott Harris, a member of the NUI Group, I think.

Web Development Guidelines for iPhone
(lots of good info, with video clips and code)

Web 3D Consortium

Unity 3D kit for Google's Android


Multi-touch Web Consortium (in my dreams)

Gesture Web Consortium (in my dreams)

May 20, 2008

NUI-Group Member Bridger Maxwell Receives High School Science Fair Award for Multi-Touch Screen Project


Yet another post about a NUI Group member... Bridger Maxwell, a high school student at the Utah County Academy of Sciences, submitted his multi-touch screen to the science fair, went on to win first place in the engineering category, and will now compete in the International Science and Engineering Fair.

Bridger has created "Lumen", a puzzle game for OSX and Windows, and markets this through his business, Fiery Ferret.

Even More Multi-Touch, Delivered by NUI...



May 20, 2008
NUI announces delivery of their multi-touch solution for Cityscape, Abu Dhabi 2008

www.natural-ui.com
Natural User Interface Europe Ltd.

May 14, 2008

Multi-touch Crayon Physics

Also posted on the TSHWI blog:
Watch how you can draw simple shapes that can instantly turn into a game!


Multitouch Crayon Physics from multitouch-barcelona on Vimeo.
For a better version of this video, see http://www.vimeo.com/980528. For more information, see the RXSurface blog post. On May 18, Multi-touch Crayon Physics will be offered as open source (alpha/beta)! The people behind RXSurface are members of the Natural User Interface (NUI) Group:
"Natural User Interface, or NUI Group, is an interactive media group researching and creating open source machine sensing techniques to benefit artistic and educational applications."



Nov 29, 2013

Does Google Know All? Google's ways of encouraging my on-line participation.

For the past couple of months I have had quite a busy off-line work/life situation.  Sad to say, I've had little time to tinker with code, write blog posts, keep up with my tech journals (the short stack is now growing into a little pile), and attend to the people I follow on the web.

I'm not totally off the grid.  

While walking my dog, I check Twitter, Facebook, and Instagram, and peek at my email. To be social, I quickly retweet, "like", comment, or post a pic from time to time.

Things just aren't the same.

Google noticed.

I hadn't been checking my Analytics account, and Google let me know. I hadn't been nurturing my blog as in the past, and I guess this was a cause for concern. I was amused that Google was prepared to delve deeper with a short survey!


Yes, Google, I have been neglecting you.

Although I regularly check Facebook, I ignored the Google Plus world.  So today, I decided to take a few steps to get back on track.

I saw that a number of people had added me to their circles.  I added a few back.  

I noticed that many people shared similar interests in technology, and sighed, longing for the days when I was taking interesting graduate computer courses and happily working on innovative projects.....

I perked up when I noticed that Google Plus had a "recommended communities" feature.

I was curious. What sort of communities would be in store for me? Could Google really read my mind?! : }

Well, maybe.  Almost.   

And I am thankful.

This was the nudge I needed -  a huge rabbit hole for me to explore:

There were more communities to consider, but scanning through the recommendations, I could see that Google was pretty much on-target regarding my technology interests.

Google was off a bit regarding my educational and assistive technology interests, which I'd expect to see represented in the array.   

I'm still a school psychologist in my "day" job. Early retirement is an option for me someday. When I do retire, I know I will devote more TLC to my NUI/interactive multimedia tech-related passions.

Google, now you know!



RELATED
Google+ Communities
Join a Google+ Community
Google launches restricted Google+ communities to let businesses make conversations private or invite-only.  Emil Protalinski, The NextWeb, 11/5/13
Google Plus Communities-Complete User Guide
Martin Shervington, Are you commoogling, 3/6/13

Jun 6, 2013

Interactive Displays and "Billboards" in Public Spaces; Pervasive Displays 2013

The 2013 International Symposium on Pervasive Displays (PerDis 2013) recently convened in Mountain View, California. Since I couldn't attend this conference, I was happy to learn from Albrecht Schmidt that the conference proceedings were recently uploaded to the ACM Digital Library. There are many exciting things going on in this interdisciplinary field!

Researchers involved with the Instant Places project, described in the video below, presented their work at PerDis 2013. The Instant Places project was part of PD-Net, a series of research efforts exploring the future of pervasive display networks in Europe. (See the "Related" section for additional references and links.)


Instant Places: Tools and Practices for Situated Publication in Display Networks

Below is information from the Instant Places video and website:
"The video describes a novel screen media system that explores new practices for individual publication and identity projection in public digital displays." 

"Instant Places has been developed by the Ubicomp group of the Information Systems Department, at the University of Minho, and has been funded within the scope of pd-net: Towards Future Pervasive Display Networks, by the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 244011."

Saul Greenberg was the keynote speaker at PerDis 2013.  His keynote, "Proxemic Interactions: Displays and Devices that Respond to Social Distance", highlights how far off-the-desktop our digital/physical lives have become, and how this has influenced recent research in human-computer interaction. Saul is a professor at the University of Calgary and leads research in Human Computer Interaction, Computer Supported Cooperative Work, and Ubiquitous Computing.

Although the video of Saul Greenberg's presentation below is not from PerDis 2013, it touches on the same topics and is worth taking an hour to watch. In this video, Greenberg presents an overview of the history of human-computer interaction. He also offers a discussion of how an understanding of social theory, the perception of spatial relationships, and embodied interaction can be applied to the design of natural user interfaces and interactive systems. Useful examples of interaction design explorations, within an ecological context, are provided later in the video.

Proxemic Interactions: the New Ubicomp?




RELATED


My Backstory
Regular readers of this blog know that the subject of interactive displays in public spaces holds my interest. When I was taking computer courses during the mid-2000s, I focused some of my energy on projects designed for large interactive displays, inspired by reading articles like "Physically Large Displays Improve Performance on Spatial Tasks" (Desney S. Tan, Darren Gergle, Peter Scupelli, and Randy Pausch) and "Dynamo: public interactive surface supporting the cooperative sharing and exchange of media" (Shahram Izadi, Harry Brignull, Tom Rodden, Yvonne Rogers, Mia Underwood).

Jeff Han's 2006 TED talk was another inspiration. I remember my excitement as I watched his demonstration of an interactive multi-touch screen the size of a drafting board, before the iPhone/iPad was born. Another inspiration was Hans Rosling's TED Talk about health statistics, with his animated interactive data visualizations presented on a huge screen.

The following year, I stumbled upon the NUI Group while searching for information about multi-touch displays, and was inspired by many of the early members of the group. I also became acquainted with a world-wide network of people who share similar interests, such as Albrecht Schmidt and his team of researchers at the University of Stuttgart. This busy group recently presented at PerDis 2013 and CHI 2013 and is involved in a wide range of ongoing projects.

INTERACTIVE DISPLAYS
Alt, F., Sahami, A., Kubitza, T., Schmidt, A. Interaction Techniques for Creating and Exchanging Content with Public Displays. In: Proceedings of the 2013 ACM Annual Conference on Human Factors in Computing Systems
Hinrichs, U., Carpendale, S., Valkanova, N., Kuikkaniemi, K., Jacucci, G., Vande Moere, A. Interactive Public Displays. IEEE Computer Graphics and Applications, 33(2), 25-27
PerDis 2013 Program
Sample Papers:
Otero, N., Muller, M., Alissandrakis, A., and Milrad, M. Exploring video-based interactions around digital public displays to foster curiosity about science in the schools. PerDis 2013 (pdf)
Alt, F., Schneegass, S., Girgis, M., Schmidt, A. Cognitive Effects of Interactive Public Display Applications. Proceedings of the 2nd ACM International Symposium on Pervasive Displays. 2013
Langheinrich, M., Schmidt, A., Davies, N., and Jose, R. A practical framework for ethics: the PD-net approach to supporting ethics compliance in public display studies. Proceedings of the 2nd ACM International Symposium on Pervasive Displays. 2013, 139-143

Note: Members of ACM have access to all of the proceedings of PerDis 2013 in the ACM Digital Library. Non-members have access to the abstracts.

PD-NET
PD-Net 
PD-NET Publications - a great reference list, with links to many papers
Reading List on Pervasive Public Displays
About Instant Places
About the Living Lab for Screens Set

DOOH-DIGITAL OUT-OF-HOME
Daily Digital Out of Home post "Billboards That Look Back" : Could miniature cameras embedded in ads lead to Big Brother at the mall? The World Is My Interactive Interface, 5/28/08
J. Müller et al., "Looking Glass: A Field Study on Noticing Interactivity on a Shop Window," Proc. 2012 SIGCHI Conf. Human Factors in Computing Systems (CHI 12), ACM, 2012, pp. 297–306
Michelis, D., Meckel, M. Why Do We Want to Interact With Electronic Billboards in Public Space?  First Workshop on Pervasive Advertising, Pervasive 2009, 5/11/09
The Rage of Interactive Billboards
The Print Innovator, 11/28/12
10 Brilliant Interactive Billboards (Videos)
Amy-Mae Elliot, Mashable, 8/21/11


SOME INTERESTING EARLIER WORK
Jeff Han's 2006 TED Talk (This is worth revisiting, as it came out before the iPhone, iPad, etc.)


Tan, D.S., Gergle, D., Scupelli, P., Pausch, R. Physically large displays improve performance on spatial tasks. ACM Transactions on Computer-Human Interaction, 13(1), 2006, 71-99

Revisiting promising projects: Dynamo an application for sharing information on large interactive displays in public spaces (blog post)
Lynn Marentette, Interactive Multimedia Technology, 09/16/07

Brignull, H., Izadi, S., Fitzpatrick, G., Rogers, Y., Rodden,  T. The introduction of a shared interactive surface into a communal space. Proceedings of the 2004 ACM conference on Computer supported cooperative work (CSCW'04), Chicago, ACM Press, 2004 (pdf)


Izadi, S., Brignull, H., Rodden, T., Rogers, Y. and Underwood, M. Dynamo: public interactive surface supporting the cooperative sharing and exchange of media. In Proc. User Interface Software and Technology (UIST '03), Vancouver, ACM Press, 2003, 159-168. (pdf)

Proxemics (Wikipedia)




May 5, 2013

Leap Motion Update: Slow-going progress for me, at least for now!

Leap Motion Progress

My Leap Motion dev kit arrived in March. With excitement, I installed it on my new 27-inch iMac. I decided that this would be the time to take the "leap" into Objective-C and explore the mysteries of Xcode. I had planned to make a simple iPad app for my 2-year-old grandson, but this inspired me to change my plans.

Why not learn Objective-C to make a simple music/art/dance Leap Motion app for little ones?  

My progress so far?  Slow.

I updated Xcode. I installed the Leap Motion SDK.  I updated the Leap Motion SDK.  I played with the samples that came with the Leap Motion kit.   

When it came time for me to try something on my own, I thought I had everything set up in Xcode.  I got error messages that I did not understand. My attempt to figure things out led me to the Stack Overflow website, and by then, I had to get back to my paperwork in order to prepare for the next work day.

Today I realized that I had missed the link about installing the Leap API docs for Objective-C in Xcode. Other things needed to be updated, so at that point, I decided to write this post....


Reflection:
After writing some code and making repeated errors, I realized how much I had let Microsoft take root in my head. Until 2003, the coding part of my brain was a pristine slate. It wasn't cluttered with bits and pieces of previous coding languages.

Since I tend to be a knowledge junkie, my brain soaked up more than I needed when I was taking computer courses.  If you could peek inside,  you'd see C# code snippets for multi-touch and NUI, a few algorithms for A.I. and data visualization, trivia from MSDN,  and images of the Visual Studio workspace. There would be odds and ends from VB.Net, JavaScript, ActionScript, CSS, Java, C++, and pseudocode for a variety of computational thought experiments.    

A lot of stuff, and for most of it, no place to go, except for an occasional technology dream.


What's ahead?
In the short term, I'll be doing what I always do this time of year.  For many school psychologists, the last couple of months of the school year is sort of like tax season for accountants.  I have lots of students to see, lots of psychological evaluation reports to write, and meetings to attend.  The paperwork will crowd up many evenings and weekends, but  there is an end in sight.   

Summer.  This will be my summer of code.

I'll be in NYC for one week in June, attending the Interaction Design and Children conference (IDC 2013). Many of the workshops I'd like to attend will be held at the same time. Take a look at the program and you'll see why!

Decisions to be made... 
Although I am pretty good at keeping a lid on my desire to design and code during my day-to-day life as a school psychologist,  I'm finding that it is getting more difficult to ignore. I have some thinking to do. In the not too distant future, it is possible that I'll leap out of my K-12 cocoon.  

I don't think I'll leap too far, because I'd like to focus my work on projects that enhance the lives of children and families. I will make sure that some of my work benefits people of all ages who have disabilities or encounter barriers in their lives.



SOMEWHAT RELATED
Joy of Computing, 1985

My daughter, who was just two years old in the above picture, returned to school to take computer courses after working in the non-profit arts management field.  I'm happy about this, but I know that she'll face many hidden barriers when she starts working in a male-dominated environment. She is not alone.

I'm working on a future post about computer and technology-related careers. Things have changed rapidly over the past several years; there are many new ways to learn how to code and, over time, more opportunities for creative computational thinkers, male and female, to take the lead.

In Google's Inner Circle, a Falling Number of Women
Claire Cain Miller, NYT, 8/22/13

So You Don't Want to be a Programmer After All
Jeff Atwood, Coding Horror, 4/29/13

StackExchange  (Includes StackOverflow, helpful when I troubleshoot coding problems)

An Overview of HTML 5, PhoneGap, and Mobile Apps: Understanding how web languages are used for apps and how they work with native code
Dan Bricklin



Apr 23, 2013

Google Earth and Leap Motion - I'll experiment with this after work today!

Leap Motion + Google Earth


I have the Leap Motion dev kit and can't wait until I can use it with Google Earth. Hopefully I'll find time tonight after I get home from work! For now, here is the promotional video:


RELATED
Leap Motion
Leap Motion: My Dev Kit Arrived - Now What?!   Thoughts About "NUI" Child-Computer-Tech-Interaction -- and More

Feb 15, 2013

Designing for Touch & Gesture: Tips for Apps and the Web (Updated)

In the past, our fingers did the walking: sifting through files, papers, pamphlets, and phone books, and then pointing and clicking with a mouse to interact with images and text, in essence, electronic imitations of the paper-based world. Traditional forms, brochures, ad inserts, and posters informed much of the design.

How much have things changed? It is 2013, but you'd think it was 1997 from the PowerPoint look and feel of many apps and web sites! Touch is everywhere, but from what I can tell, not enough designers and developers have stepped up to the plate to think more deeply about ways their applications can support human endeavors through touch and gesture interactions.
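One very concrete piece of touch design that the guidelines linked below keep returning to is target size: recommendations typically fall around 7-10 mm of physical width, which translates to very different pixel counts depending on screen density. A quick sketch of the conversion (the 9 mm figure below is a typical recommendation, not a hard standard):

```python
# Convert a recommended touch-target size from millimeters to pixels.
# px = mm * ppi / 25.4, since there are exactly 25.4 mm per inch.
def target_px(mm, ppi):
    return round(mm * ppi / 25.4)

# A 9 mm target on a 160 ppi screen vs. a 326 ppi "retina" screen:
print(target_px(9, 160))  # 57
print(target_px(9, 326))  # 116
```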

For an overview of this topic, take a look at my 2011 post, written after a number of ugly encounters with user-unfriendly applications:  Why bother switching from GUI to NUI?  

For an in-depth look into the history of multi-touch, the wisdom of Bill Buxton is well-worth absorbing.  He's worked with all sorts of interfaces, and has been curating the history of multi-touch and gesture systems since 2007:


Multi-Touch Systems that I have Known and Loved
Bill Buxton, Microsoft Research, Updated 8/30/12



Even if you are not a designer or developer, I encourage you to explore some of the links below:

Touch Gestures for Application Design
Luke Wroblewski, 10/9/12

Common Misconceptions About Touch
Steven Hoober, 3/18/13

Designing With Tablets in Mind:  Six Tips to Remember
Connor Turnbull, Webdesign tuts+, 9/27/11

Finger-Friendly Design: Ideal Mobile Touchscreen Target Sizes
Anthony T, Smashing Magazine, 2/21/12

Best Practices: Designing Touch Tablet Experiences for Preschoolers (pdf)
Sesame Workshop


Are Touch Screens Accessible?
AccessIT, National Center on Accessible Information Technology in Education

iOS Human Interface Guidelines
Apple

Android User Interface Guidelines
Using Touch Gestures
Handling Multi-Touch Gestures
Android

Designing for Tablets?  We're Here to Help!
Roman Nurik, Android Developers Blog 11/26/12

Touch interaction design (Windows Store apps)
Microsoft - MSDN



Jan 10, 2013

Gesture Markup Language (GML) for Natural User Interaction and Interfaces

Quick post:
"GML is an extensible markup language used to define gestures that describe interactive object behavior and the relationships between objects in an application.  Gesture Markup Language has been designed to enhance the development of multiuser multi-touch and other HCI device driven applications." -Gesture ML Wiki
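Since GML is XML-based, a gesture definition is just a structured document an application can load and match against. The tiny sketch below shows the general idea of reading such a definition; the element and attribute names here are invented for illustration, so consult the GestureML wiki for the real schema:

```python
# Sketch: reading a gesture definition from a small GML-style XML
# document. Element/attribute names are hypothetical, NOT the actual
# GML schema maintained by Ideum.
import xml.etree.ElementTree as ET

doc = """
<GestureMarkupLanguage>
  <Gesture id="two-finger-rotate" type="rotate">
    <match min_points="2" max_points="2"/>
  </Gesture>
</GestureMarkupLanguage>
"""

root = ET.fromstring(doc)
for g in root.iter("Gesture"):
    match = g.find("match")
    print(g.get("id"), g.get("type"), match.get("min_points"))
# two-finger-rotate rotate 2
```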

GestureML was created and maintained by Ideum. 

More information to come!
The Pano

Photo credit: Ideum

RELATED
Ideum Blog

OpenExhibits Free multitouch and multiuser software initiative for museums, education, nonprofits, and students

GestureWorks  Multi-touch authoring for Windows 8 & Windows 7



Nov 17, 2012

Human Computer Interaction + Informal Science Education Conference (NUI News)

I recently learned of the HCI + ISE conference, funded by the National Science Foundation and organized by Ideum and Independent Exhibitions, which will provide groundwork for the future design and development of interactive computer-based science exhibits.
Science museums have a long history of interactivity, well suited to groups of "explorers", such as families or students visiting on a field trip.

What is really exciting is that new interactive applications and technologies have the power to transform the way people learn and understand science in a collaborative and social way. Innovations in the field of human-computer interaction (HCI), such as multi-touch and gesture interaction, are well-suited to meet the goals of science education for all, beyond the school doors and wordy textbooks.

Below is a screenshot of the conference website, a description of the conference quoted from the site, and some related resources.



About the HCI+ISE Conference
"HCI technologies, such as motion capture, multitouch, augmented reality, RFID, and voice recognition are beginning to change the way computer-based science exhibits are designed and developed. Human Computer Interaction in Informal Science Education (HCI+ISE) is a first-of-its-kind gathering to explore and disseminate effective practices in developing a new generation of digital exhibits that are more intuitive, interactive, and social than their predecessors."
"The HCI+ISE Conference, to be held in Albuquerque, New Mexico June 11-14 2013, will bring together 60 museum exhibit designers and developers, learning researchers, and technology industry professionals to share effective practices, and to explore both the enormous potential and possible pitfalls that these new technologies present for exhibit development in informal science education settings."
"HCI+ISE will focus on the practical considerations of implementing new HCI technologies in educational settings with an eye on the future. Along with a survey of how HCI is shaping the museum world, participants will be challenged to envision the museum experience a decade into future. The conference results will provide a concrete starting point for exhibit developers and informal science educators who are just beginning to investigate these emerging technologies and design challenges in creating these new types of exhibits."
Why HCI+ISE?
"Since the mid-1980s informal educational venues have increasingly incorporated computer-based exhibits into their science communication offerings in an effort to keep pace with public expectations and make use of the expanding opportunities these technologies provide. The advent and popularity of once novel HCI technologies are becoming commonplace: the Wii and Microsoft Kinect now allow for motion capture video games, tablet PCs have multitouch interaction, and smart phones and other devices come standard with voice recognition. Yet many museums are still developing single-touch and trackball-driven, single-user computer kiosks."
"Science museums have a long history of championing hands-on, physical, and inquiry-based activities and exhibits. This vast experience has only just begun to be applied to interactive computer interfaces. Along with seasoned science exhibit developers, the Conference will draw upon individuals outside of ISE who will provide fresh insight into the technologies, design issues, and audience expectations that these visitor experiences present."
Involvement and Findings
"HCI+ISE will bring together a diverse group of practitioners and other professionals to discuss (and in some cases share and prototype) new design approaches utilizing emerging HCI technology. Please see our Apply page to learn how you can participate. Conference news and findings will be distributed through a variety of ISE and museum websites, including this one."
"We welcome your questions and comments about the HCI+ISE Conference."
CONTACTS
Kathleen McLean of Independent Exhibitions
& Jim Spadaccini of Ideum
HCI+ISE Co-chairs
"Open Exhibits is a multitouch, multi-user tool kit that allows you to create custom interactive exhibits."
CML:  Creative Mark-up Language
GML: Gesture Mark-up Language
GestureWorks
Ideum