Showing posts with label HCI. Show all posts

Feb 9, 2011

"Where am I in this information space?" Bifocal Display Concept Video, 1982, via the Interaction Design.org Encylopedia

The following video about the Bifocal Display concept is an interesting look at how a set of ideas unfolded in the early 1980s. These ideas took flight and influenced many of the design concepts we view as "new" today.


The video was posted on the online Interaction-Design.org Encyclopedia, a free, open-source multimedia resource that includes a range of chapters written by leading researchers and practitioners from fields related to Human-Computer Interaction (HCI), Interaction Design (IxD), User Experience (UX), Information Architecture (IA), usability, and more.



"People want to see context...." -Robert Spence

1980s paper prototype of the carousel concept (screenshot from video)
"Let's apply this idea to my in-tray, an information space that is quite varied, and often full of surprises..." -Robert Spence


For more information, see the article by Robert Spence and Mark Apperley on the Interaction-Design.org website. Scroll down to "The Bifocal Display Explained" for great sketches of the concepts discussed in the video.


Also take a look at the recent videos from Interaction-Design.org related to this topic. What a great way to gain insight into the design and development process!


Video 1: Introduction to the Bifocal Display
Video 2: Main guidelines and future directions
Video 3: How the Bifocal Display was invented and launched


REFERENCE
Spence, Robert and Apperley, Mark (2011). Encyclopedia entry on Bifocal Display. Retrieved 9 February 2011 from Interaction-Design.org: http://www.interaction-design.org/encyclopedia/bifocal_display.html


RESOURCES
Interaction-Design.org YouTube Channel
Interaction-Design.org

Feb 8, 2011

Toyota Vision Multi-Touch Wall at the 2011 Detroit Auto Show: Good example of a collaboration between PUSH, Juxt Interactive, GPJ, and Hush (updated)

GOOD EXAMPLE!

I didn't have the chance to get up to the Detroit Auto Show and interact with the iWall, so I watched the videos.


Toyota iWall from Justin Micklish on Vimeo.


From Justin Micklish's Vimeo site:
"Named "Belle of the Ball" at the 2011 Detroit Auto Show, the Toyota Vision Wall is a large-format interactive installation leveraging a great deal of UX intelligence and bleeding edge technology."


This project was a collaboration among PUSH (technology consulting and software development), Juxt Interactive (lead agency), GPJ (experience marketing), and Hush (motion graphics).


Toyota Vision Multi-Touch Wall Case Study:  A good explanation of the "how and why" of this project:

Toyota Vision Multi-Touch Wall Case Study from JUXT Interactive on Vimeo.

From the JUXT Interactive Vimeo site:
"In January, Toyota debuted the largest digital multi-touch installation ever seen at the 2011 North American International Auto Show in Detroit. GPJ and JUXT Interactive created the digital touch wall experience to deliver a feast for the senses, allowing consumers to explore videos, photos and custom content through a larger-than-life, interactive surface."


RELATED
PUSH
"PUSH is a marketing & design research firm based in Venice, CA. We shirk labels and defy classifications. We have been called strategists, writers, technologists, marketers, typographers, entrepreneurs, and designers. Above all, we are curious. We make a living creating compelling experiences with our clients. Recent collaborations include work with Yahoo, West, and Sony."
Juxt Interactive 
"JUXT is a company that creates from the ground up, top to bottom, or even sideways. It kinda makes sense since we were started by a couple of architects....Lots of people ask us if we only do digital work. Well we like to think and play where people are, and these days that just happens to be the Internet. That suits us just fine because the online space has far less restrictions and barriers than other media. JUXT does not take kindly to limits, or being put in a box."
GPJ: George P. Johnson
"GPJ is the world's premier experience marketing agency, working with leading companies who are looking for innovative marketing solutions that motivate their audiences and activate their brands through live experiences."
Hush
"Hush is a design and direction studio crafting stories for digital, broadcast, mobile and physical installations. Whether using the latest technology or the most classic of techniques, we unifyu stories for agencies and brands alike."


Comment: This post was revised on 2/11/11 to reflect a clarification from Justin Micklish. He is a partner and producer with PUSH and worked on this project with others, providing technology consulting and software development.

Feb 3, 2011

Interactions (ACM) Cover Article- "Proxemic Interactions: The New Ubicomp?" - Plus - Close encounters with displays at the airport and JC Penney

Cover Image

Today I came home and found one of my favorite publications, Interactions, on my kitchen counter. Imagine my excitement when I saw that the cover article for the Jan-Feb issue was "Proxemic Interactions: The New Ubicomp?" (pdf). The authors of this article are Saul Greenberg, Nicolai Marquardt, Till Ballendat, Rob Diaz-Marino, and Miaosen Wang, from the University of Calgary. (A list of some of the articles from the current issue of Interactions can be found at the end of this post.)


On the topic of proxemic interactions: I welcome the research that is germinating in this area, and agree with the authors of Interactions' cover article that there is much work to be done in this space!
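The core idea behind proxemic interaction is easy to sketch in code. Below is a minimal, hypothetical illustration (not the Interactions authors' implementation) that maps a sensed viewer distance to one of Edward T. Hall's classic proxemic zones and then to a display behavior. The zone boundaries follow Hall's commonly cited values; the display responses are purely my own illustration.

```python
# Illustrative sketch: classify a sensed viewer distance into one of
# Edward T. Hall's proxemic zones, then pick a (hypothetical) display behavior.
# Zone boundaries are Hall's commonly cited values in meters.

def proxemic_zone(distance_m: float) -> str:
    """Classify a distance (in meters) into a proxemic zone."""
    if distance_m < 0.46:
        return "intimate"
    elif distance_m < 1.2:
        return "personal"
    elif distance_m < 3.7:
        return "social"
    else:
        return "public"

# The responses below are my own illustration of how a proxemic-aware
# display might adapt as a person approaches; they are not from the article.
DISPLAY_RESPONSE = {
    "intimate": "full touch interaction with personal content",
    "personal": "detailed view, invite touch",
    "social": "summary view, larger text",
    "public": "ambient attract loop",
}

def display_mode(distance_m: float) -> str:
    """Choose a display behavior based on how close the viewer stands."""
    return DISPLAY_RESPONSE[proxemic_zone(distance_m)]
```

A passerby five meters away would see the ambient attract loop; only someone within arm's reach would be offered full touch interaction.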


I've been searching for great examples of interactive displays in public spaces. Since there is much room for improvement in this field, much of what I share on this blog is a bit disheartening. Most of my user-unfriendly encounters with large displays happen when I'm in an airport, mall, health care facility, or on a cruise ship. Things are improving, but at a slow pace. The best work I've seen so far has been in museums.


Info display at the Orlando airport with a QR tag.

A few weeks ago I wrote a post about my nearly comical interaction with a large display that featured a QR tag that simply would NOT cooperate with my smartphone's tag reader. What were they thinking when they decided that the best place for a QR tag on a large display should be at knee level?
Interactive Display with QR Tag: Close Encounter at the Orlando Airport

There is more to this story! On my way home, I had a few hours to kill at the Orlando airport. I stopped by a convenience store in the airport and discovered the non-static display below, which I hadn't noticed the first time I was in the store that day.


I spotted another QR tag!

There were a few problems with this set-up. First, the display was located on a wall close to the ceiling, making it very inconvenient to scan with my phone. The page with the QR tag didn't stay up long enough for me to attempt to scan it, although I waited around until it cycled back into view three times. I gave up and took a picture instead, standing on my tip-toes with my arms extended over my head, the same position that scanning the QR tag would have required, had it stayed put!


Another problem was that the display was perched above a high-traffic area, right above the refrigerated drinks. The only place to scan the QR tag was in a narrow aisle, behind the throngs of thirsty travelers making a beeline to the cold drink section of the store. I was in the way of a number of people in this beeline.


What were they thinking when they decided that the best place for this large display featuring a QR tag should be several feet above eye level in this particular location?
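For anyone siting a QR tag on a large display, a rough feasibility check is easy to sketch. The snippet below uses the common rule of thumb that a code scans reliably from a distance of up to roughly ten times its printed width; the mounting heights, phone height, and reach values are illustrative assumptions on my part, not measurements from the store.

```python
import math

# Rough feasibility check for a wall-mounted QR tag. Rule of thumb: a QR code
# scans reliably from up to ~10x its printed width. The default phone height
# and reach below are illustrative assumptions for a standing adult.

def scan_feasible(tag_center_height_m: float,
                  tag_width_m: float,
                  phone_height_m: float = 1.4,  # phone held near chest/eye level
                  max_reach_m: float = 0.6) -> bool:
    """Can a standing adult get the phone within ~10x the tag width of the tag?"""
    max_scan_distance = 10 * tag_width_m
    # Straight-line distance from a comfortably held phone to the tag center,
    # assuming the user stands max_reach_m from the wall.
    distance = math.hypot(max_reach_m, tag_center_height_m - phone_height_m)
    return distance <= max_scan_distance

# A 10 cm tag at eye level (1.4 m) is easily in range; the same tag at knee
# height (0.5 m) or near the ceiling (2.4 m) is out of comfortable range
# without kneeling or tip-toes.
```

By this crude check, a tag roughly between waist and eye level is the only placement that doesn't force the contortions described above.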

The displays in front of the store were located on either side of the doorway, at "body" level. This is a logical place to put a display.  Potential customers can view the displays without getting in the way of other travelers.


I didn't notice QR tags on these displays.  QR tags linking to web-based reviews of books I'd like to purchase would be useful in this scenario.


Non-static displays outside of the store:





PART II: Interactive Touch Screen Display Encounter at JC Penney

Interactive displays are cropping up everywhere.  The display featured below is part of the Find More at JC Penney  campaign. The following is a quote from a JC Penney press release: 

"JC Penney has announced the launch of a new mobile initiative that connects mobile commerce with social and instore media applications that include location-based check-in offers and enhancements to its mobile apps. JCPenney's mobile initiatives are part of the company's investment in its overall digital platform, which also includes jcp.com, social media, and its in-store FindMore™ smart fixture."


"The smart fixture integrates JCPenney's online and offline stores by providing customers with a 52-inch interactive touch screen experience that allows them to view and purchase items from jcp.com while in a JCPenney store. Customers can also scan a bar code to get product details, access recommendations, or email data about an item of interest to themselves or friends."

Can you find the large interactive touch screen display?    After shopping around in the mall, I decided to go to JC Penney and check out the sales.  Below is what I passed as I entered the store.  I went up and down the main aisle a few times before I noticed a large interactive display along a side aisle.  



The stack of red holiday boxes obstructed my view:



I didn't notice the display the first few times I walked in the other direction. From the back, the display looked like a poster. 
  
When I finally discovered the display, I was impressed with the slick graphics and stylish models. 



I wasn't so impressed by the placement of the display. To interact with the screen, I had to stand in a narrow aisle, in the path of a steady stream of slow-walking shoppers, often moving in small groups, including parents with babies in strollers. 


I wasn't impressed by the confusing array of input methods on the display.  I noticed that the light blue square in the lower left-hand corner of the display is designed to provide access for people with disabilities.   I couldn't figure it out.

Accessibility:  There is a need for research in this area!

The accessibility feature was confusing to use. The virtual trackball activated a cursor on the screen, but the cursor did not accurately track the virtual trackball's movements. This section of the screen was located in the lower left-hand corner, which would be convenient for someone in a wheelchair to use. However, the placement of the "accessibility window" was too low for someone who might want to interact with the screen in an accessible manner but doesn't happen to use a wheelchair.
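To see why a cursor that doesn't track its input device feels so frustrating, consider a minimal sketch of a trackball-to-cursor mapping. This is hypothetical code, not the kiosk's implementation; it simply shows how a miscalibrated gain makes the same input overshoot.

```python
# Minimal sketch of trackball-to-cursor mapping. If the gain (scaling from
# trackball motion to cursor motion) is miscalibrated, or the cursor is clamped
# at the screen edge, the cursor stops mirroring the user's input and the
# mapping feels "inaccurate." Values are hypothetical.

def move_cursor(cursor, delta, gain=1.0, screen=(1920, 1080)):
    """Apply a trackball delta (dx, dy) to a cursor position, clamped to screen."""
    x = min(max(cursor[0] + gain * delta[0], 0), screen[0])
    y = min(max(cursor[1] + gain * delta[1], 0), screen[1])
    return (x, y)

# With gain=1.0 the cursor mirrors the trackball exactly; with a miscalibrated
# gain of 3.0 the same input travels three times as far, so small, careful
# movements overshoot their target.
```

Tuning that gain to the user, and placing the control where users of varying heights can reach it, are exactly the kinds of open questions the accessibility research should address.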


I shot video of my interaction with the screen, but found that I was in the way of shoppers, so I stopped. Not to worry. Below is a video taken by Mike Cearley, shared on his 11th Screen blog post, "Out and About: JC Penney's "Find More" Touch Screen". (Mike's impressions of this display were similar to mine and worth reading if this is an area that interests you.)




Mike uses his 11th Screen Scorecard to evaluate the technology he encounters in public spaces.  I am updating a similar rubric to use when evaluating displays in public spaces.   It is important to look at multiple dimensions, such as the physical placement of the display, weather and lighting conditions,  how the display interoperates with mobile devices or a system of displays,  how the content is presented/accessed across channels and devices, and how users share the experience with others.

Components of the 11th Screen Scorecard -Mike Cearley
Purpose: "What is the purpose of the solution? Is it to drive awareness? Acquisition?  Loyalty?  What is the brand trying to accomplish in this medium?"
Drama: "Does the solution make a big impact on the user? Does it make them stop and interact?"
Usability: "Can the user navigate through the experience with ease? Are the paths to information intuitive?  There's also an element of functions, too, but I think that is much more subjective.  Do the functions enhance the user experience?"
Interactivity:  "How does the user interact with it?  Is it gesture based? Is it touch-based? Can the user interact with it through any other enabling technology?"
Information: "How much and what kind of content is available for the user to interact with?  Generally speaking, the more information and the different formats of information, the better."
Personalization: "What level of personalization does the experience provide?"


COMMENT
I've been obsessed with large displays and the amazing potential they hold for supporting all sorts of interactions: collaboration, planning, creating, information sharing, playing, viewing, communicating, learning, shopping, way-finding, and artistic expression.

I'm also drawn to smaller screens and how we use our mobile devices (smartphones, iPhones, e-readers, digital cameras, etc.) to interact with the displays we encounter in public spaces. We are just beginning to figure out how technology can effectively and seamlessly support interaction between people who are co-located as well as people in different places.

INTERACTIONS Volume XVIII.1 
Here is a list of some articles from the current issue of Interactions:
The Cloud (Yue Pan, Eli Blevis)

Feb 2, 2011

"Undercover User Experience" by Cennydd Bowles and James Box: Charlotte UX Book Club Skype Session and Discussion

I just got back from a meeting of the Charlotte UX Book Club.  The group meets in the loft office of the Charlotte branch of Atlanta-based Macquarium,  and provides a great opportunity for like-minded folks to socialize and toss around informed ideas.


Tonight's meeting featured Cennydd Bowles and James Box, the authors of Undercover User Experience.  They joined the group via Skype, on a large screen TV.  After a minute or so, it was as if they were right in the room, even though they were in the UK.  What a delightful experience!


Cennydd and James both work at Clearleft, a team of designers and creative technologists that provides user-experience consulting. They are bright, insightful, and know their stuff. The book packs a lot of good information into an easy-to-read, concise, and thought-provoking format. The authors address tools of the trade within the context of how work gets done across disciplines in organizations.


In my opinion, this book would be useful to people working on traditional projects as well as those who focus on NUI design and development for emerging technologies.


BTW: Cennydd Bowles will be the closing plenary speaker at the IA Summit 2011 in March.




"Deliverables are a step on the journey, not the end of the line" -Undercover UX Design





Here are a couple of quotes from the Amazon.com editorial reviews:

  Product Description

"Once you catch the user experience bug, the world changes. Doors open the wrong way, websites don't work, and companies don't seem to care. And while anyone can learn the UX remedies usability testing, personas, prototyping and so on unless your organization 'gets it', putting them into practice is trickier. Undercover User Experience is a pragmatic guide from the front lines, giving frank advice on making UX work in real companies with real problems. Readers will learn how to fit research, ideation, prototyping and testing into their daily workflow, and how to design good user experiences under the all-too-common constraints of time, budget and culture."


Jan 21, 2011

MIT Media Lab's DepthJS: Now your web page can interact with the Microsoft Kinect using JavaScript (link to code, more)

MIT Media Lab's DepthJS website

Info from the DepthJS website:
"Navigating the web is only one application of the framework we built - that is, we envision all sorts of applications that run in the browser, from games to specific utilities for specific sites. The great part is that now web developers who specialize in Javascript can work with the Kinect without having to learn any special languages or code. We believe this will allow a new set of interactions beyond what we first developed."


DepthJS is open source under the AGPL license. Code: https://github.com/doug/depthjs

RELATED
Gestures that your TV Will Understand (Features information about PrimeSense)
Tom Simonite, MIT Technology Review, 1/21/11


Hackers Take the Kinect to New Levels
Timothy Carmody, MIT Technology Review 12/2/10


Microsoft Kinect: How the device can respond to your voice and gestures
Erica Naone, MIT Technology Review, January/February 2011






Comment:
I went to Best Buy today and almost bought a Kinect.  Unfortunately, the demo hadn't come in yet, so I decided to wait until I could give it a try.  I'm curious to experiment with what it can do. 

Jan 18, 2011

"Hi, Google. My name is Johnny" Johnny Chung Lee leaves Microsoft. (I still wish I could be Johnny Chung Lee for a day.)

"Hi, Google.  My name is Johnny"  Johnny Chung Lee announced on his Procrastineering blog that he's accepted a position at Google as a "Rapid Evaluator".   I'm not sure what he will be doing in this position, but his title is intriguing!

Here are some of my previous posts devoted to the work of Johnny Chung Lee:

I wish I could be Johnny Chung Lee for a Day!  Tracking fingers with the Wii Remote
Video Clips of Projects Inspired by Johnny Chung Lee
More about Project Natal: Ricochet - Great Gaming for Fitness, Johnny Chung Lee's Contribution


I STILL wish I could be Johnny Chung Lee for a day!

RELATED
Microsoft Kinect Developer Johnny Chung Lee Jumps Ship and Lands at Google
Leena Rao, TechCrunch, 1/18/11
What Microsoft Kinect Defection to Google Means
Rich Tehrani, TMCnet Blog 1/18/11
Microsoft Loses a Top Kinect Researcher to Google
Tricia Duryee, Yahoo! Finance, 1/18/11

Jan 12, 2011

Child-Computer Interaction Workshop on May 7th at CHI 2011: Deadline Extended to February 18th!

The deadline for submissions to the CHI 2011 Child Computer Interaction: 2nd Workshop on UI Technologies and their Impact on Educational Pedagogy has been extended to February 18, 2011.

The workshop will be held on May 7, 2011, in Vancouver, Canada, in conjunction with ACM's CHI 2011 conference. Interested researchers should submit a 4-page position paper in the ACM CHI adjunct proceedings style to the workshop management system. Note that the best 3 papers will be pre-accepted for an upcoming theme issue of the Springer journal Personal and Ubiquitous Computing on “Child Computer Interaction”.
IMPORTANT DATES
Submission Deadline: February 18th, 2011
Author Notification: March 15th, 2011
Final Version Deadline: April 1st, 2011 
Workshop Date: May 7th, 2011

DETAILS 
2011 Workshop Website
Call for Papers
Workshop Management System

Note:  Deadlines for several other CHI 2011 workshops have also been extended to February 18th. (Pictures and links from the CHI 2010 workshop are located at the end of this post.)


CALL FOR PAPERS
IN CONJUNCTION WITH CHI 2011
Child Computer Interaction: Workshop on UI Technologies and Educational Pedagogy
in conjunction with CHI 2011, Vancouver
May 7th, 2011
Topic: Given the emergence of Child Computer Interaction and the ubiquitous application of interactive technology as an educational tool, there is a need to explore how next-generation HCI will impact education in the future. Educators are depending on the interaction communities to deliver technologies that will improve and adapt learning to an ever-changing world. In addition to novel UI concepts, the HCI community needs to examine how these concepts can be matched to contemporary paradigms in educational pedagogy. The classroom is a challenging environment for evaluation, thus new techniques need to be established to prove the value of new HCI interactions in the educational space. This workshop provides a forum to discuss key HCI issues facing next-generation education.

We invite authors to present position papers about potential design challenges and perspectives on how the community should handle the next generation of HCI in education. Topics of interest include:
• Gestural input, multitouch, large displays, multi-display interaction, response systems
• Mobile Devices/mobile & pervasive learning
• Tangible, VR, AR & MR, Multimodal interfaces, universal design, accessibility
• Console gaming, 3D input devices, 3D displays
• Co-located interaction, presentations, tele-presence, interactive video
• Child Computer Interaction, Educational Pedagogy, learner-centric, adaptive “smart” applications
• Empirical methods, case studies, linking of HCI research with educational research methodology
• Usable systems to support learning and teaching: ecology of learning, anywhere, anytime (UX of cloud computing to support teaching and learning)

Submission: The deadline for workshop paper submissions is February 18, 2011. Interested researchers should submit a 4-page position paper in the ACM CHI adjunct proceedings style to the workshop management system. Acceptance notifications will be sent out March 15, 2011. The workshop will be held May 7, 2011 in Vancouver, Canada. Please note that at least one author of an accepted position paper must register for the workshop and for one or more days of the CHI 2011 conference.

The best 3 papers will be pre-accepted for an upcoming theme issue of the Springer journal Personal and Ubiquitous Computing on “Child Computer Interaction”. Other authors are also invited to submit or resubmit their work. We will send a separate call for papers for this special issue soon.
Contact: Edward Tse, SMART Technologies, edwardtse@smarttech.com

CHI 2010
NEXT GENERATION OF HCI AND EDUCATION
Join an interdisciplinary, international group of like-minded peers and colleagues.


Brainstorm with thought leaders.

After the workshop, relax, dine, and converse with interesting people.

Jan 9, 2011

New Microsoft Surface 2.0 and InfoStrat's Surface 2.0 Information Visualization Controls

Microsoft Surface 2.0 was unveiled at CES 2011 a few days ago, the result of a collaboration between Microsoft and Samsung. Surface 2.0 is a step up! The 40-inch 1080p high-definition LCD display no longer requires a projection/camera system, which clears the area below the screen of bulky hardware. The best part about Surface 2.0, in my opinion, is that it doesn't have to be used as a table. It can be configured in a variety of ways, even mounted on walls. For this reason, it will be useful in a variety of settings and situations.


Below is a quote about Surface 2.0 from Steve Ballmer's recent keynote address at CES 2011 that outlines the new technology that is embedded in the Surface 2.0 display:


"But what's really amazing about this technology, what really makes it magical, is the sensor itself. So, those first-generation Surface PCs needed cameras underneath that would look up to try to see what was going on. But what we have here is called PixelSense. PixelSense is new technology we've invented where there's infrared sensors all across this screen. Every single pixel is actually acting as a camera. The PC, the Surface here, can actually see." -Steve Ballmer:  My Keynote Address at the 2011 International Consumer Electronics Show" (Huffington Post, 1/6/2011)





The good news is that developers have been busy at work creating applications for Surface 2.0. Below is a video demonstration of what the folks at InfoStrat have recently created to support collaborative information visualization activities:







Here's the information about the controls from the Infostratcville YouTube channel:

"This is a sneak preview of a suite of data visualization controls developed by InfoStrat for Microsoft Surface 2.0. The controls will be made available as open source software at no charge on CodePlex.com in the first half of 2011."


"This data visualization control suite provides multi-touch versions of the following controls:
- DeepZoom multi-resolution image control that allows high performance display of very high-resolution imagery
- PowerPoint Viewer which enables slide decks to be arranged and presented using multi-touch
- PivotViewer chart control that allows dynamic sorting and categorization of data
- PhysicsCanvas which provides an infinite, dynamic canvas for viewing and organizing content"




RELATED
Josh Blake's post:  "Microsoft Surface 2.0 Data Visualization Controls by InfoStrat" 
Microsoft Surface Blog: "Microsoft and Samsung Unveil the Next Generation of Surface"


PRESS RELEASE
For Immediate Release

9 a.m. PST
January 6, 2011
InfoStrat Releases Next-Generation Data Visualization Controls for Microsoft Surface 2.0
Washington DC – January 6, 2011 – InfoStrat today announced plans to support Microsoft Surface 2.0 by releasing a control suite that accelerates the development of next-generation multi-touch data visualizations. The controls will be made available as open source software at no charge on CodePlex.com in the first half of 2011.
This data visualization control suite provides multi-touch versions of the following controls:
§ Deep Zoom multi-resolution image control that allows high performance display of very high-resolution imagery
§ PowerPoint Viewer which enables slide decks to be arranged and presented using multi-touch
§ Pivot Viewer chart control that allows dynamic sorting and categorization of data
§ Physics Canvas which provides an infinite, dynamic canvas for viewing and organizing content
Other features of the controls:
§ Works on both Microsoft Surface and Microsoft Windows 7 with touch
§ A single application built with the data visualization framework can support multiple hardware form factors including: horizontal multi-touch tables, tablets, and large format vertical touch screens
§ Innovative object recognition to enable rapid data manipulations (only on Microsoft Surface)
Watch a sneak preview of the control suite on YouTube:http://www.youtube.com/watch?v=lEVtjHlrf4I
InfoStrat is a member of Microsoft’s Technology Adoption Program (TAP) for Microsoft Surface. As a Microsoft Surface 2.0 TAP member, InfoStrat receives early access to hardware and software, allowing InfoStrat to gain expertise and influence the development of the product before it was released to the public.
In 2008, InfoStrat solved the problem of using Bing Maps 3D on Microsoft Surface in a way that performed well and was WPF-friendly. InfoStrat open-sourced the solution as a reusable control for the WPF and Surface community. Since then, the control has received over 120,000 page views and has over 8200 downloads, and has also been featured in many of our own applications. This control, known as InfoStrat.VE, has become one of the most popular controls for building mapping applications on Microsoft Surface: http://bingmapswpf.codeplex.com
“We are proud to be part of the Microsoft Surface development community,” according to Jim Townsend, president of InfoStrat, “and excited about the possibilities of Microsoft’s new version of Surface.”
Microsoft Surface provides a new way to experience and use information and digital content, engaging the senses, improving collaboration and empowering people to interact. Microsoft Surface is at the forefront of developing software and hardware that uses vision-based technology to fundamentally change the way people use computing devices. More information can be found at http://www.surface.com.
Information Strategies ("InfoStrat") is an award-winning Microsoft Gold Certified Partner and a Microsoft Surface Strategic Partner and member of the Technology Adopter Program.
For more information, press only:
Josh Wall, InfoStrat, (202) 364-8822 ext. 202, joshw@infostrat.com

Jan 4, 2011

Interactive Display with QR Tag: Close Encounter at the Orlando Airport

I always like to try out technology in public spaces when I travel.  Imagine my excitement when I was presented with my first opportunity to try out my tag app on my Incredible while I was waiting for my baggage to arrive after my flight from Charlotte to Orlando...

On first glance, the location of this display near the rest rooms and baggage claim area makes sense. Travelers can access information about things to do in North Port from their Smartphones by scanning the QR tag with a tag reader app while they wait for their baggage.

Info display at the Orlando airport with a QR tag.

Poorly situated info display at the Orlando airport

To my dismay, my interaction with this display and my tag reader app was not enjoyable. 

Because the tag was located at the bottom of the display, I had to set aside my carry-on bags, purse, etc., and kneel in front of it to center my smartphone precisely over the tag. The display was on the wall between the restrooms, a convenient gathering point for people. Little kids darted around me, and parents used this spot to re-organize their family's "stuff". As I tried to scan the tag, a dad had to ask his son to move back "so the lady could take a picture".

I was blocking the path of families and groups of travelers with all of their stuff.  I gave up trying after a while, much to the relief of my husband, who watched all of this as he waited for our baggage.

Dec 29, 2010

UPDATE: CALL FOR PAPERS: Workshop on UI Technologies and Educational Pedagogy, Child-Computer Interaction (in conjunction with CHI 2011, May)



CALL FOR PAPERS
 
Child Computer Interaction: 
in conjunction with CHI 2011, Vancouver, Canada
May 8th 2011

Topic: Given the emergence of Child Computer Interaction and the ubiquitous application of interactive technology as an educational tool, there is a need to explore how next-generation HCI will impact education in the future. Educators are depending on the interaction communities to deliver technologies that will improve and adapt learning to an ever-changing world. In addition to novel UI concepts, the HCI community needs to examine how these concepts can be matched to contemporary paradigms in educational pedagogy. The classroom is a challenging environment for evaluation, thus new techniques need to be established to prove the value of new HCI interactions in the educational space. This workshop provides a forum to discuss key HCI issues facing next-generation education.

We invite authors to present position papers about potential design challenges and perspectives on how the community should handle the next generation of HCI in education. Topics of interest include:

  1.  Gestural input, multitouch, large displays, multi-display interaction, response systems

  2.  Mobile Devices/mobile & pervasive learning

  3.  Tangible, VR, AR & MR, Multimodal interfaces, universal design, accessibility

  4.  Console gaming, 3D input devices, 3D displays

  5.  Co-located interaction, presentations, tele-presence, interactive video

  6.  Child Computer Interaction, Educational Pedagogy, learner-centric, adaptive “smart” applications

  7.  Empirical methods, case studies, linking of HCI research with educational research methodology

  8.  Usable systems to support learning and teaching: ecology of learning, anywhere, anytime (UX of cloud computing to support teaching and learning)

Submission: The deadline for workshop paper submissions is January 14, 2011. Interested researchers should submit a 4-page position paper in the ACM CHI adjunct proceedings style to the workshop management system. Acceptance notifications will be sent out February 20, 2011. The workshop will be held May 7 or May 8, 2011 in Vancouver, Canada. Please note that at least one author of an accepted position paper must register for the workshop and for one or more days of the CHI 2011 conference.


Contact: Edward Tse, SMART Technologies, edwardtse@smarttech.com


WORKSHOP ORGANIZERS
Edward Tse, SMART Technologies
Johannes Schöning, DFKI GmbH
Yvonne Rogers, Pervasive Computing Laboratory, The Open University
Jochen Huber, Technische Universität Darmstadt
Max Mühlhäuser, Technische Universität Darmstadt
Lynn Marentette, Union County Public Schools, Wolfe School
Richard Beckwith, Intel