Showing posts with label research. Show all posts

Mar 16, 2014

MindHabits' Happy Games Paired with Pharrell Williams's "Happy" Music!

We all could use more smiling people and happy music!

If you are looking for a short burst of happiness, try playing the free MindHabits demo games. I recommend the Matrix Trainer for starters. You can uncheck the "email" box if you don't want to sign up for the newsletter.

The games have upbeat music playing in the background, but you can listen to your own music instead. In the Matrix game, the objective is to tap as many happy faces as you can find among a number of frowning or sad faces. Research shows that this is an effective way of reducing stress. I have used the online version for years with students who have autism, and it is a fun and effective way of "training" them to focus on facial features and expressions.
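The core mechanic is simple enough to sketch. Below is a minimal, hypothetical Python model of the find-the-happy-face task - not MindHabits' actual code, just an illustration of the grid-of-faces idea, with invented names:

```python
import random

def make_grid(size=4, n_happy=3, seed=None):
    """Build a size x size grid of faces, mostly frowning, with a few smiles."""
    rng = random.Random(seed)
    cells = ["happy"] * n_happy + ["frown"] * (size * size - n_happy)
    rng.shuffle(cells)
    return [cells[r * size:(r + 1) * size] for r in range(size)]

def score_taps(grid, taps):
    """Count taps that landed on happy faces; the real game rewards speed too."""
    return sum(1 for r, c in taps if grid[r][c] == "happy")
```

Finding and tapping every happy face in the grid, round after round, is what "trains" attention toward positive expressions.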

The desktop version of the suite of games is just under $20.00 and is available for Windows and Mac. The desktop version tracks data and allows users to customize the games with their own photos.

I am about to explore MindHabits' new mobile apps: Psych Me Up PRO! and Happy Cat!

If you work with young people - special needs or otherwise - try playing the online demo with Pharrell Williams's "Happy" song in the background (see the music video embedded below).

 This is something that wouldn't hurt to try at home!  



Note: 
I am sharing information about MindHabits because I have been following this company since 2005, when Dr. Mark Baldwin, the lead creator of the suite of games, gave a presentation. Dr. Baldwin is a psychology professor at McGill University in Montreal, Canada. He has devoted his career to the study of social intelligence and, more recently, to how technology can help people reduce stress, build self-confidence, and maintain positive states of mind.



According to the MindHabits website, the Psych Me Up Pro! ($0.99) and PsychMeUp! (free) mobile apps were developed to help people focus attention on positive social feedback. A quiz is included with the application, as well as information about the research that supports the use of the games. The "pro" version has more options.

The children's version of PsychMeUp! is Happy Cat.  The objective is to find the happy cats and ignore the grumpy cats.  The smiling cat will meow.



HOW MINDHABITS WORKS

RELATED

MindHabits Game Tips

MindHabits FAQ

MindHabits Update
Lynn Marentette, Interactive Multimedia Technology, 2/24/08

McEwan, K., Gilbert, P., Dandeneau, S., Lipka, S., Maratos, F., Paterson, K.B., & Baldwin, M. (2014). Facial Expressions Depicting Compassionate and Critical Emotions: The Development and Validation of a New Emotional Face Stimulus Set. PLOS ONE. DOI: 10.1371/journal.pone.0088783

Dandeneau, S. D., & Baldwin, M. W. (2009). The buffering effects of rejection-inhibiting training against social and performance threats in adult students. Contemporary Educational Psychology, 34, 42-50.

Dandeneau, S. D., Baldwin, M. W., Baccus, J. R., Sakellaropoulo, M., & Pruessner, J. C. (2007). Cutting Stress Off at the Pass: Reducing Vigilance and Responsiveness to Social Threat by Manipulation of Attention (pdf). Journal of Personality and Social Psychology, 93(4), 651-666. DOI: 10.1037/0022-3514.93.4.651

Mar 15, 2014

Graphene, Nanotechnology, and Programmable Interfaces; Samsung Galaxy Demo


I've been intrigued by graphene's multiple possibilities for the future. It is a flexible, programmable material that harnesses nanotechnology to create flexible touch screens, "wearables", efficient energy storage systems, and more. The following videos provide just two examples of graphene's potential.

The details?  If you are curious, follow the links at the end of this post.  




Here is a short clip of a demo of a graphene touch screen on a Samsung Galaxy:


RELATED
Graphene nanoribbons could be the savior of Moore's Law
Ryan Whitwam, Extreme Tech, 2/17/14
High-Performance Multifunctional Graphene Yarns: Toward Wearable All-Carbon Energy Storage Textiles
ACS NANO, 2/11/14
Hydrogenation-Assisted Graphene Origami and Its Application in Programmable Molecular Mass Uptake, Storage, and Release
Shuze Zhu and Teng Li, University of Maryland, ACS Nano, 2/24/14
Teng Li Group, University of Maryland
Chemically and structurally functionalized graphene for real-world applications
Marko Spasenovic, Graphenea, 3/06/14
Nanoscale graphene origami cages set world record for densest hydrogen storage
Kurzweil Newsletter, 3/14/14
Auto-switchable graphene bio-interface with a 'zipper' nanoarchitecture
Onur Parlak, Anthony P.F. Turner, Ashutosh Tiwari, Nano Werk 10/31/13
Samsung files patent for graphene-based touch screen
Marko Spasenovic, Graphene Tracker, 3/7/14
Graphene: Wikipedia
Graphene: Flexible touch screen, made from a sheet of carbon the thickness of one atom!
Lynn Marentette, Interactive Multimedia Technology blog, 6/23/10


May 21, 2013

Xbox One and Kinect 2 for the Playground of the Future


The big news in tech today is the unveiling of the new Xbox One/Kinect 2 system. For now, the video below might be the closest you'll get to the system. Wired's senior editor, Peter Rubin, had a chance to interview Scott Evans of Microsoft as he demonstrated the fascinating technical details in a family-room type setting.

Wired's interview of Scott Evans and demo of the new Xbox One and Kinect 2, using Active IR technology.



From what I learned, the new Kinect sensor has six times the fidelity of the previous version. Paired with the new Xbox One, it can do amazing things. Engineers from around the world collaborated on this project, providing expertise in facial recognition, digital signal processing, speech recognition, machine learning, and computer vision. The Xbox One is fueled by an 8-core x86 processor, supported by 8GB of RAM, which should handle the needs of the most demanding gamers. It also includes a 500GB hard drive and a Blu-ray player.


The new system was designed to enhance the gaming/user experience. The 1080p camera provides a field of view that is 60 degrees larger than its predecessor's, and it can handle a high level of detail. It provides a better means of interpreting movement and orientation, and it processes skeleton and hand movements more precisely. The system features "muscle man", a human-based physics model that is layered over the skeleton and depth map. It senses and calculates the forces the player uses while moving in a game.

What I find interesting is that the camera can detect the player's pulse by measuring subtle changes in the skin that can't be perceived by the naked eye. It can also quickly identify each player (it handles up to six) and recognize facial expressions. The active IR (infrared) system gives it better accuracy than the original Kinect.
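Camera-based pulse detection is usually done by tracking tiny periodic brightness changes in the skin, a technique known as remote photoplethysmography. Microsoft hasn't published the Kinect's exact method, but the general idea can be sketched in a few lines of Python, assuming you already have a per-frame mean green-channel value for the face region (the function and its inputs here are illustrative):

```python
import numpy as np

def estimate_pulse_bpm(green_means, fps):
    """Estimate heart rate from the mean green-channel brightness of a face
    region over time, via the dominant frequency in the plausible pulse band."""
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()                  # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)           # roughly 42-180 bpm
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0
```

A real system would also need face tracking, motion compensation, and filtering, but the heart of it is this frequency analysis of skin-color fluctuations.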

I wasn't able to find much information regarding privacy issues with this system. This is a concern, since it can sense your physiological responses, movement patterns, and facial expressions. Over time, a good deal of very personal information would be gathered about each user. I shudder to think of the consequences if the data fell into the wrong hands.

Possibilities for Special Needs Populations

I can see that the Xbox One + Kinect 2 system has the potential for games and other interactive applications for use in physical rehabilitation and fitness.  Since it can interpret facial expressions, it could also provide a way to support social skills learning among children and teens who have autism spectrum disorders.

RELATED

Microsoft invests a good deal of attention in proof-of-concept projects that may or may not become part of a commercial product. Below is an example of IllumiRoom:


Hrvoje Benko, of Microsoft Research, discusses the IllumiRoom concept during an interview at CHI 2013.


Xbox One Website
The new Xbox One Kinect tracks your heart rate, happiness, hands and hollers
Matthew Panzarino, The Next Web, 5/22/13
Kinect 2 Full Video Walkthrough: The Xbox Sees You Like Never Before
Kyle Wagner, Gizmodo, 5/21/13
Hands-on with prototypes of the Xbox One and New Kinect Sensor
Ben Gilbert, engadget, 5/21/13
Efficient Human Pose Estimation from Single Depth Images
Shotton, J., Girshick, R., Fitzgibbon, A., Sharp, T., Cook, M., Finocchio, M., Moore, R., Kohli, P., Criminisi, A., Kipman, A., Blake, A.   Video
Consumer Depth Cameras for Computer Vision:  Research Topics and Applications
Fossati, A., Gall, J., Grabner, H., Ren, X., Konolige, K. (Eds.)
Xbox One: Microsoft's supergeeks reveal what's inside the hardware
Dean Takahashi, VentureBeat, 5/21/13
Next Xbox Will Face New Array of Rivals
Nick Wingfield, New York Times, 5/21/13

Mar 16, 2013

UPDATE: What's New for Kinect? Fusion, real-time 3D digitizing, design considerations, and more.

The Evolution of Microsoft Kinect

I've been following the evolution of Microsoft's Kinect, and recently discovered a few interesting videos that show how far the system has come. According to Josh Blake, the founder of the OpenKinect community and author of the Deconstructing the NUI blog,  the Kinect for Windows SDK v1.7 will be released on Monday, March 18th, from http://www.kinectforwindows.com.  More details about this version can be found on Josh's blog as well as the official Kinect for Windows blog.


It is possible to create applications for desktop systems that work with the Kinect in interesting ways, as you'll see in the following videos. I think there is potential here for use in education/edutainment!

Below is a video of Toby Sharp, of Microsoft Research, Cambridge, demonstrating Kinect Fusion.  The software allows you to use a regular Kinect camera to reconstruct the world in 3D.
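Microsoft hasn't released Kinect Fusion's source, but systems of this kind typically fuse each depth frame into a voxel grid holding a truncated signed distance function (TSDF), updated as a running weighted average so that sensor noise cancels out over many frames. Here is a one-dimensional Python sketch of that update step, with illustrative names, not the actual Kinect Fusion code:

```python
import numpy as np

def integrate_depth(tsdf, weights, voxel_depths, measured_depth, trunc=0.1):
    """Fold one depth measurement into a (1-D, for clarity) TSDF volume using
    the running weighted average that KinectFusion-style systems rely on."""
    # Signed distance from each voxel to the measured surface, truncated and
    # normalized: positive in front of the surface, negative behind it.
    sdf = np.clip(measured_depth - voxel_depths, -trunc, trunc) / trunc
    new_w = weights + 1.0
    tsdf[:] = (tsdf * weights + sdf) / new_w
    weights[:] = new_w
```

The reconstructed surface is wherever the fused TSDF crosses zero; a full system does this over a 3-D grid, per camera ray, with pose tracking between frames.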



KinEtre: A Novel Way to Bring Computer Animation to Life
According to information from the YouTube description, "KinÊtre is a research project from Microsoft Research Cambridge that allows novice users to scan physical objects and bring them to life in seconds by using their own bodies to animate them. This system has a multitude of potential uses for interactive storytelling, physical gaming, or more immersive communications."




The following videos are quite long, so feel free to re-visit this post when you have time to relax and take it all in!

Kinect Design Considerations
This video covers Microsoft's Human Interface Guidelines, scenarios for interaction and use, and best practices for user interactions.  It also includes a preview of the next major version of the Kinect SDK. 


Kinect for Windows Programming Deep Dive
This video discusses how to build Windows Desktop apps and experiences with the Kinect, and also previews some future work.




RELATED
Kinect for Windows Developer Downloads
Kinect for Windows Blog
Deconstructing the NUI Blog (Josh Blake)
Microsoft Kinect Learns to Read Hand Gestures, Minority Report-Style Interface Now Possible
Celia Gorman, IEEE Spectrum, 3/13/13
Kinect hand recognition due soon, supports pinch-to-zoom and mouse click gestures.
Tom Warren, The Verge, 3/6/13
Microsoft's KinEtre Animates Household Objects
Samuel K. Moore, IEEE Spectrum, 8/8/12
Kinect Fusion Lets You Build 3-D Models of Anything
Celia Gorman, IEEE Spectrum, 3/6/13
Description of Kinect sessions at Build 2012
Kinect for every developer!
Tom Kerhove, Kinecting for Windows, 2/15/13
Kinect in the Classroom
Kinect Education

Note: Although I recently received my developer kit for Leap Motion, another gesture-based interface, I haven't lost interest in following news for Kinect.

Dec 23, 2012

Interactive Tablets and Learning: One Laptop Per Child now One Tablet Per Child in Ethiopia

One Laptop Per Child (OLPC) is a philanthropic organization that focuses on learning technologies, distributing thousands of low-cost laptops to children in developing countries. In most cases, children have been provided access to OLPC laptops with teachers within traditional school settings. But what about children who live in remote areas, where there are no schools, teachers, or even access to electricity? They now have the opportunity to learn, even without teachers, through a small experiment conceived by Nicholas Negroponte of OLPC and other researchers. In this experiment, each child was provided with a Motorola Xoom tablet. No teachers were around, because the children lived in a remote village that had none.

The following video provides a brief overview of what happened over the course of a few weeks and months after the children received the tablets:





To learn more, I encourage you to follow the link to a video of Nicholas Negroponte's presentation at the October 2012 EmTech conference, held in Cambridge, Massachusetts.  He discusses learning and how it can be supported through technology, anywhere.

"Nicholas Negroponte, founder, One Laptop Per Child, on his latest experiment with the democratization of education - can children teach themselves to read?"


In his presentation, Negroponte discusses the differences between knowing and understanding, and the importance for teachers (or learning applications) of understanding the learner. He goes on to discuss the OLPC research project in Ethiopia, where children living in remote villages - with no teachers, no exposure to print, no access to technology, and communities where no one could read - learned to use tablets without instruction or guidance. The village was provided with a solar panel, and one village member was taught how to use it to supply power for the tablets.

Each tablet provided to the children had over 100 applications. Within four minutes, one child opened the box and turned the tablet on using the on-off button. Within five days, each child was using an average of 47 applications. Within five months, a child had hacked the Android tablet to turn on the camera capability. According to Negroponte, the children each used different applications, but collaborated with one another.



Maryanne Wolf, Director of the Center for Reading and Language Research at Tufts University, has collaborated with the "OTPC" project. Other collaborators include Cynthia Breazeal and her team at the MIT Media Lab, and Sugata Mitra at Newcastle University, according to Chris Ball, lead software engineer at OLPC.

The tablets include software that tracks data from all of the children's interactions. What a goldmine for education and cognitive/developmental psychology researchers! According to Negroponte, the data is free for analysis. (I will update this post with additional information about how the data can be accessed as soon as I can find the link.)

Although the OTPC concept is a noble idea, it does not appear to address the fact that the children and their families who live in remote villages do not have access to literacy support in their own language.


RELATED

OLPC Literacy Project

Given Tablets but No Teachers, Ethiopian Children Teach Themselves:  A bold experiment by the One Laptop Per Child organization has shown "encouraging" results.
David Talbot, MIT Technology Review, 10/29/12


OLPC Project Puts Tablets in the Hands of Formerly Illiterate Children with Amazing Results
John Biggs, TechCrunch, 11/1/12

Motorola Xoom hacked by Ethiopian kids who can't read; with no instructions whatsoever.
Joe Hindy, 11/4/12
 


DIG DEEPER: SOMEWHAT RELATED
Hourcade, J.P., Beitler, D., Cormenzana, F. and Flores, P. (2009). Early OLPC Experiences in a Rural Uruguayan School. In A. Druin (Ed.), Mobile Technology for Children: Designing for Interaction and Learning. Boston: Morgan Kaufmann.

Growing Up With Nell:  A Narrative Interface for Literacy (pdf)
IDC 2012, June 12–15, 2012, Bremen, Germany
Authors: C. Scott Ananian, Chris J. Ball, Michael Stone
One Laptop Per Child Foundation, 222 Third Street, Cambridge, MA 02142

ABSTRACT
"Nell is a tablet-oriented education platform for children in the developing world.  A novel modular narrative system guides learning, even for children far from educational infrastructure, and provides personalized instruction which grows with the child.  Nell's design builds on experience with the Sugar Learning Platform, used by over two million children around the world"

Quote from above article:
"To further promote collaboration, Nell is free and opensource and implemented in standard web technologies (JavaScript, HTML5, and WebGL) with offline caching. Resources are named by URL, even when disconnected from the internet, which simplifies the distribution of updates to story modules and the Nell system. URL-based identifiers also allow third parties to manage their own namespaces when extending Nell."

TinkRBook
A. Chang and C. Breazeal. TinkRBook: Shared reading interfaces for storytelling. (pdf) In Proc. of the 10th Int’l Conf. on Interaction Design and Children (IDC ’11), pages 145–148. ACM, June 2011.
NOTE:  The above article provides good references about early language and literacy development.



Wilcox, Bruce. Beyond Facade: Pattern Matching for Natural Language Applications (pdf)
Telltale Games, Feb. 2011
Note:  This paper reviews the history of Natural Language Processing (NLP) as applied to games, and includes information about AIML (Artificial Intelligence Markup Language), Facade, and ChatScript. The author explains how string matching is no longer simply a matching of words; it now focuses on matching patterns of meaning.

ChatScript 
ChatScript Website

Note:  One of my assignments for a class in AI for Games, back in 2006, was to create a mini-game that involved the use of AIML. I realized that a "smart" chat feature would be useful to incorporate into an educational game. In my opinion, it has the potential to support scaffolding of learning based on the learner's responses, correct answers as well as errors.
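An AIML-flavored chat boils down to a table of patterns with wildcards and response templates. Here is a toy Python sketch of how such scaffolding might adapt its feedback to a learner's answer - the rules and wording are invented for illustration, not taken from AIML or ChatScript:

```python
import re

# Hypothetical rule table in the spirit of AIML: a captured group plays the
# role of a wildcard, and its match is substituted into the response template.
RULES = [
    (r"I think the answer is (\d+)", "You said {0}. Walk me through how you got it."),
    (r"I don't know.*", "That's okay. Let's break the problem into smaller steps."),
]

def respond(utterance):
    """Return scaffolded feedback for the first matching pattern, if any."""
    for pattern, template in RULES:
        m = re.fullmatch(pattern, utterance.strip())
        if m:
            return template.format(*m.groups())
    return "Tell me more about your thinking."
```

Even this tiny version shows the scaffolding idea: the tutor's reply depends on what the learner actually said, whether it was a confident answer or an "I don't know."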

Dec 2, 2012

EpiCollect: A mobile app, useful for photo + data-collection "in the wild".

EpiCollect is an open-source project developed at Imperial College London, funded by the Wellcome Trust.  According to information posted on the project's website, "EpiCollect is a generic data collection tool that allows you to collect and submit geotagged data forms (along with photos) to a central project website (hosted using Google's App Engine) from suitable mobile phones (Android or iPhone). For example, questionnaires, surveys, etc.  All data synchronised (ie a copy sent from the phone) from multiple phones can then be viewed/charted/filtered at the project website using Google Maps/Earth or downloaded. Furthermore, data can be requested and viewed/filtered from the project website directly on your phone using Google Maps." -EpiCollect
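To make the workflow concrete, here is a hypothetical Python sketch of assembling one geotagged form submission as JSON, of the kind a tool like EpiCollect would sync to a central server. The field names are invented for illustration and are not EpiCollect's actual wire format:

```python
import json
from datetime import datetime, timezone

def make_observation(form_name, lat, lon, answers, photo_path=None):
    """Assemble one geotagged form submission as a JSON string."""
    record = {
        "form": form_name,
        "location": {"lat": lat, "lon": lon},
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "answers": answers,                     # the questionnaire responses
    }
    if photo_path:
        record["photo"] = photo_path            # attached photo, if any
    return json.dumps(record)
```

Because every record carries its own coordinates and timestamp, the central site can plot submissions from many phones on a shared map without any extra bookkeeping on the phone side.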

EpiCollect Overview  epicollect.net
(Credit:  EpiCollect Website)

EpiCollect makes use of web APIs such as Google Maps, Google Charts, Google Talk, and the KML Specification, and JavaScript libraries such as jQuery, script.aculo.us, ExtJS, and Mapstraction. It runs on the Google App Engine server and is available for Android and iPhone.

I think that EpiCollect would be a useful interactive tool for education, K-12 and above. It would be ideal for students working on group projects, such as an environmental study. For young children, a simple assignment might include taking pictures of, and collecting data about, birds, animals, trees, cloud formations, or even litter, as part of a class project. Since the data includes photographs, the students could create an end product in the form of an interactive multimedia presentation, available for other students - as well as parents - to view on the web, accessed from any web-enabled device.

HCI research teams could use these tools when observing people using various technologies in public spaces, such as malls, airports, special events, as well as in stores, eateries, and entertainment settings.  

I would be interested in learning more about the use of this application in HCI and K-12 education!

RELATED
EpiCollect Website
EpiCollect Instructions
EpiCollect Instructions (pdf)
The Sight of Road Kill Makes a Pretty, Data-Rich Picture (NPR All Tech Considered)
Note: Audio from the above December 2, 2012 episode can be found on the NPR Weekend Edition Sunday website after 12:00 PM ET on 12/2/12
Mobile app sees science go global  (BBC article)
App for Android Puts Laboratories on Your Phone (Tree Hugger article)
Scientific Data Collection Goes Mobile (Discovery News article)

Paper: EpiCollect: Linking Smartphones to Web Applications for Epidemiology, Ecology and Community Data Collection (PLoS ONE 4(9), 2009)

David M. Aanensen, Derek M. Huntley, Edward J. Feil, Fada'a al-Own, Brian G. Spratt
Conclusion from the above paper:
"Data collection frameworks utilising mobile phones with data submission to and from central databases are widely applicable and can give a field worker similar display and analysis tools on their mobile phone that they would have if viewing the data in their laboratory via the web. We demonstrate their utility for epidemiological data collection and display, and briefly discuss their application in ecological and community data collection. Furthermore, such frameworks offer great potential for recruiting ‘citizen scientists’ to contribute data easily to central databases through their mobile phone."

Nov 4, 2012

CFP for Special Issue of Personal and Ubiquitous Computing on Educational Interfaces, Software, and Technology (EIST) -Extended Deadline: December 9, 2012


Overview 
One of the primary goals of teaching is to prepare learners for life in the real world. In this ever-changing world of technologies such as mobile interaction, cloud computing, natural user interfaces, and gestural interfaces like the Nintendo Wii and Microsoft Kinect, people have a greater selection of tools for the task at hand. Given the potential of these new interfaces, software, and technologies as learning tools, as well as the ubiquitous application of interactive technology in formal and informal learning environments, there is a growing need to explore how next-generation technologies will impact education in the future. 

As a community of Human-Computer Interaction (HCI) and educational researchers, we need to theorize and discuss how new technologies should be integrated into the classrooms and homes of the future. In the last three years, three CHI workshops have provided a forum to discuss key issues of this sort, particularly in the context of next-generation education. The aim of this special issue of Personal and Ubiquitous Computing is to summarize the potential design challenges and perspectives on how the community should handle next-generation technologies in the education domain for both teachers and students. 


We invite authors to present position papers about potential design challenges and perspectives on how the community should handle the next generation of HCI in education. Topics of interest include but are not limited to: 

  • Gestural input, multitouch, large displays 
  • Mobile devices, response systems (clickers) 
  • Tangible, VR, AR & MR, multimodal interfaces 
  • Console gaming, 3D input devices 
  • Co-located interaction, presentations 
  • Educational pedagogy, learner-centric, child computer interaction 
  • Empirical methods, case studies 
  • Multi-display interaction 
  • Wearable educational media 
Important Dates 

  • Full papers due: December 9, 2012 
  • Initial reviews to authors: January 18, 2013 
  • Revised papers due: March 15, 2013 
  • Final reviews to authors: April 26, 2013 
  • Final papers due: June 14, 2013 


Submission Guidelines 

Submissions should be prepared according to the Word template located at the bottom of this page. All manuscripts are subject to peer review. Manuscripts must be submitted as a PDF to the easychair submission system. Submissions should be no more than 8000 words in length. 

Guest Editors and Contact Information 

  • Syed Ishtiaque Ahmed, Cornell University 
  • Quincy Brown, Bowie State University 
  • Jochen Huber, Technische Universität Darmstadt 
  • Si Jung “Jun” Kim, University of Central Florida 
  • Lynn Marentette, Union County Public Schools, Wolfe School 
  • Max Mühlhäuser, Technische Universität Darmstadt 
  • Alexander Thayer, University of Washington 
  • Edward Tse, SMART Technologies 

Contact: eistjournal2012@easychair.org 

Information about the Journal of Personal and Ubiquitous Computing 


Submission Template: PUC_EIST_article_template.docx  (59k)

Oct 8, 2012

Smartphone Use Infographic, via Pew Internet and American Life Project

The Pew Internet & American Life Project website is a treasure trove of statistics about the use of the internet and related technologies.  I especially like the following infographic which outlines how smartphone ownership has reached the "tipping point".  My hunch is that this will lead to some bigger changes in our future!

For more information, see Lee Rainie's article: Smartphone Ownership Update:  September 2012



Jul 21, 2012

Musings about NUI, Perceptive Pixel and Microsoft, Rapid Creative Prototyping (Lots of video and links) Revised

It just might be the right time for everyone to brush up on 21st-century tech skills. iPads and touchscreen phones are ubiquitous. Touch-enabled interactive whiteboards and displays are in schools and boardrooms. With Microsoft's Windows 8, and the news that the company recently acquired Jeff Han's company, Perceptive Pixel, I think that there will be good support - and more opportunities - for designers and developers interested in moving from GUI to NUI.


In the video below, from CES 2012, Jeff Han provides a good overview of where things are moving in the future.  We are in a post-WIMP world and there is a lot of catching up to do!

CES 2012  Perceptive Pixel and the Future of Multitouch (IEEE Spectrum YouTube Channel)



During the video clip, Jeff explains how far things have come during the past few years:
 "Five and 1/2 years ago I had to explain to everybody what multi-touch was and meant. And then, frankly, we've seen some great products from folks like Apple, and really have executed so brilliantly, that everyone really sees what a good implementation can be, and have come to expect it.  I also think though, that the explosion of NUI is less about just multi-touch, but an awareness that finally people have that you don't have to use a keyboard and mouse, you can demand something else beside that.  People are now willing to say, "Oh, this is something I can try, you know, touch is something I can try as my friendlier interface"."

Who wouldn't want to interact with a friendlier interface? Steve Ballmer doesn't curb his enthusiasm about Windows 8 and Perceptive Pixel. Jeff Han is happy with how designs created in Windows 8 scale for use on screens large and small. He explains how Windows 8 can support collaboration. The Story Board application (7:58) on the large touchscreen display looks interesting.
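Much of multitouch interaction reduces to simple geometry over touch points. For example, the zoom factor of a pinch gesture is just the ratio of the distance between two fingers now to the distance when the gesture started, which can be sketched in a few lines of Python (purely illustrative, not any particular framework's API):

```python
import math

def pinch_scale(p1_start, p2_start, p1_now, p2_now):
    """Zoom factor implied by a two-finger pinch: the ratio of the current
    distance between the touch points to their starting distance."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(p1_now, p2_now) / dist(p1_start, p2_start)
```

Fingers spreading apart gives a factor above 1 (zoom in); fingers moving together gives a factor below 1 (zoom out). Rotation and two-finger pan fall out of the same pair of points.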

I continue to be frustrated by the poor usability of many web-based and desktop applications. I like my iPad, but only because so many dedicated souls have given some thought to the user experience when creating their apps. I am often disappointed by the interactive displays I encounter when I'm out and about during the day. It is 2012, and it seems that there are a lot of application designers and developers who have never read Don Norman's The Design of Everyday Things!



I enjoy making working prototypes and demo apps, but my skill set is stuck in 2008, the last year I took a graduate-level computer course.  I was thinking about taking a class next semester, something hands-on, creative, and also practical, to move me forward. I can only do so much when I'm in the DIY mode alone in my "lab" at home.  I need to explore new tools, alongside like-minded others.  


There ARE many more tools available to designers and developers than there were just four years ago. Some of them are available online, free, or for a modest fee. I was inspired by a link posted by my former HCI professor, Celine Latulipe, to her updated webpage devoted to Rapid Prototyping tools. The resources on her website look like a good place to start for people who are interested in creating applications for the "NUI" era. (Celine has worked on many interesting projects that explore how technology can support new and creative interaction, such as Dance.Draw.) Below is her description of her updated HCI resources:

"New HCI resource to share: I have created a few pages on my web site devoted to Rapid Prototyping tools, books, and methods. These pages contain reviews of various digital tools, including 7 different desktop prototyping apps, and including 8 different iPad apps for wireframing/prototyping. I hope it's useful to others. Feel free to share... and please send me comments and suggestions if you find anything inaccurate, or if you think there is stuff that I should be adding. I will be continuing to update this resource." -http://www.celinelatulipe.com (click on the rapid prototyping link at the top)



IDEAS
Below are just a few of my ideas that I'd like to implement in some way. I can't claim ownership to these ideas- they are mash-ups of what comes to me in my dreams, usually after reading scholarly publications from ACM or IEEE, or attending tech conferences. 
  • An interactive timeline, (multi-dimensional, multi-modal, multimedia) for off-the-desktop interaction, collaboration, data/info analysis exploration.  It might be useful for medical researchers, historians, genealogists, or people who are into the "history of ideas".  Big Data folks would love it, too. It would handle data from a variety of sources, including sensor networks. It would be beautiful to use.
  • A web-based system of delivering seamless interactive, multi-modal, immersive experiences, across devices, displays, and surfaces. The system would support multi-user, collaborative interaction.  The system would provide an option for tangible interaction.
  • A visual/auditory display interface that presents network activity, including potential intrusions, malfunctions, or anything that needs immediate attention that would be likely to be missed under present monitoring methods. 
  • Interactive video tools for creation, collaboration, storytelling.  (No bad remote controllers needed.)
  • A "wearable" that provides new ways for people to express and communicate creatively, through art, music, dance, with wireless capability. (It can interact with wireless sensor networks.)*
  • A public health application designed to provide information useful in understanding sepsis and in supporting prevention efforts. This application would utilize the timeline concept described at the top of this list. The concept could also be useful in analyzing other medical puzzles, such as autism.
Most of these ideas could translate nicely to educational settings, and the focus on natural user interaction and multi-modal I/O aligns with the principles of Universal Design for Learning - something that is important to consider, given the number of "at-risk" learners and young people who have disabilities.

I welcome comments from readers who are working on similar projects, or who know of similar projects. I also encourage graduate students and researchers who are interested in natural user interfaces to move forward with an off-the-desktop NUI project. I hope that my efforts can play a part in helping people make the move from GUI to NUI!



Below are a few videos of some interesting projects, along with a list of a few references and links.


SMALLab (Multi-modal embodied immersive learning)


PUPPET PARADE: Interactive Kinect Puppets (CineKid 2011)



MEDIA FACADES: When Buildings Start to Twitter

HUMANAQUARIUM (CHI 2012)

 

NANOSCIENCE NRC Cambridge (Nokia's Morph project)






 
Examples: YouTube Playlists
POST WIMP EXPLORERS' CLUB
POST-WIMP EXPLORER'S CLUB II

Web Resources
Celine Latulipe's Rapid Prototyping Resources 
Creative Applications
NUI Group: Natural User Interface Group
OpenFrameworks and Interactive Multimedia: Funky Forest Installation for CineKid
SMALLab Learning
OpenExhibits: Free multi-touch + multiuser software initiative for museums, education, nonprofits, and students.
OpenSense Wiki 
CINEKID 2012 Website 
Multitouch Systems I Have Known and Loved (Bill Buxton)
Windows 8
Perceptive Pixel
Books
Natural User Interfaces in .NET: WPF 4, Surface 2, and Kinect (Josh Blake, Manning Publications)
Chapter 1 pdf (Free)
Brave NUI World: Designing Natural User Interfaces for Touch and Gesture (Daniel Wigdor and Dennis Wixon)
Designing Gestural Interfaces (Dan Saffer)
Posts
Bill Snyder, ReadWrite Web, 7/20/12

I noticed some interesting tools on the Chrome web store - I plan to devote a few more posts to NUI tools in the future.

Jul 14, 2012

Cute NAO robot performs "Evolution of Dance" and is an active participant in research with young people who have autism spectrum disorders.

I came across a cute video of a NAO robot performing the Evolution of Dance, and as I smiled, I remembered that the robot was used in some research about interventions for young people with autism. 


The technology behind the NAO robot was developed by Aldebaran Robotics, and more details can be found on the company's website, along with the video and links I've provided below. (Aldebaran Robotics is hiring, btw.)


Enjoy the dance performance!

Evolution of Dance by NAO Robot 


DEPCO NAO Robot and Notre Dame Autism Research 



NAO Next Gen: The New Robot of Aldebaran Robotics



New Robot Helps Autistic Children Interact (UConn) Research with Tim Gifford, CEO of Movia Robotics, and UConn professor Anjana Bhat 


(Above) Bruno Maisonnier of Aldebaran Robotics Highlights Therapeutic Uses of the NAO Robot 

RELATED 
Aldebaran Robotics NAO Developer Website 
Psychologist explores effective treatment options for children with autism disorders 
Susan Guibert, Notre Dame News, 4/16/10 
Robot Speaks the Language of Kids 
Beth Krane, UConn Today, 8/5/10 
Movia Robotics: Systems for Learning, Training, Education and Therapy 
Timothy Gifford and Anjana Bhat on Using Robots to Help Autistic Children 
Rachel Z. Arndt, FastCompany, 4/1/11 
Anjana N. Bhat, University of Connecticut 
Timothy Gifford 
Social story PowerPoint for children with autism who participate in research at the FUN Lab at Notre Dame (ppt)

Jul 12, 2012

CFP for Special Issue of Personal and Ubiquitous Computing on Educational Interfaces, Software, and Technology (EIST)


Overview 
One of the primary goals of teaching is to prepare learners for life in the real world. In this ever-changing world of technologies such as mobile interaction, cloud computing, natural user interfaces, and gestural interfaces like the Nintendo Wii and Microsoft Kinect, people have a greater selection of tools for the task at hand. Given the potential of these new interfaces, software, and technologies as learning tools, as well as the ubiquitous application of interactive technology in formal and informal learning environments, there is a growing need to explore how next-generation technologies will impact education in the future. 


As a community of Human-Computer Interaction (HCI) and educational researchers, we need to theorize and discuss how new technologies should be integrated into the classrooms and homes of the future. In the last three years, three CHI workshops have provided a forum to discuss key issues of this sort, particularly in the context of next-generation education. The aim of this special issue of Personal and Ubiquitous Computing is to summarize the potential design challenges and perspectives on how the community should handle next-generation technologies in the education domain for both teachers and students. 

We invite authors to present position papers about potential design challenges and perspectives on how the community should handle the next generation of HCI in education. Topics of interest include but are not limited to: 

  • Gestural input, multitouch, large displays 
  • Mobile devices, response systems (clickers) 
  • Tangible, VR, AR & MR, multimodal interfaces 
  • Console gaming, 3D input devices 
  • Co-located interaction, presentations 
  • Educational pedagogy, learner-centric, child computer interaction 
  • Empirical methods, case studies 
  • Multi-display interaction 
  • Wearable educational media
Important Dates
  • Full papers due: November 9, 2012
  • Initial reviews to authors: January 18, 2013
  • Revised papers due: March 15, 2013
  • Final reviews to authors: April 26, 2013
  • Final papers due: June 14, 2013
Submission Guidelines
Submissions should be prepared according to the Word template located at the bottom of this page. All manuscripts are subject to peer review. Manuscripts must be submitted as a PDF to the easychair submission system. Submissions should be no more than 8000 words in length.

Guest Editors and Contact Information
  • Syed Ishtiaque Ahmed, Cornell University
  • Quincy Brown, Bowie State University
  • Jochen Huber, Technische Universität Darmstadt
  • Si Jung “Jun” Kim, University of Central Florida
  • Lynn Marentette, Union County Public Schools, Wolfe School
  • Max Mühlhäuser, Technische Universität Darmstadt
  • Alexander Thayer, University of Washington 
  • Edward Tse, SMART Technologies

Information about the Journal of Personal and Ubiquitous Computing

Jun 25, 2012

Ph.D. Student Positions: Intel Collaborative Research Institute on Sustainable Connected Cities

Thanks to Johannes Schöning for sharing information about this opportunity!
This might be of interest to some of my IMT readers:


EngD/PhD Positions within the Intel Collaborative Research Institute on Sustainable Connected Cities (ISCCI) at University College London (UCL)

The Department of Computer Science at UCL is inviting applications for up to 6 Research Student Positions (1 EngD of 4 years, and up to 5 PhDs of 3 years), starting September 24th 2012 or January 7th 2013.


With 6.3 billion people expected to dwell in cities by 2050, the aim of the ISCCI is to create and realize a compelling vision of a sustainable future made possible by adaptive technologies that optimize resource efficiency, enable new services, and support the quality of life of urban inhabitants. The Institute is located within a rich external ecosystem of companies and researchers, both locally and globally, investing in this important domain. The ISCCI is led by Prof. Yvonne Rogers at UCL.


We are looking for students willing to pursue a doctoral degree in computer science around the following broad topics:
•       How technology can help recognize, leverage, and support the out-of-sight, hidden or forgotten resources of urban environments, ranging from volunteers to subterranean water systems and other underlying city infrastructures.
•       How communities can encourage sustainable behaviours over time, for example, through meaningful visualizations and feedback about resource usage to individuals and groups.
•       How technology can give us an opportunity to reinvent new ideas of place and identity, considering the diversification & proliferation of new types of communities in cities, with the aim of increasing quality of life and lowering the barriers for mobility in our future connected cities.


The applicants should possess a good honours MSc degree (1st Class or 2:1 minimum) in Computer Science, Psychology, Human-Computer Interaction, or related disciplines. Candidates will be expected to work in teams comprising computer scientists, social scientists, and psychologists, so an open attitude towards interdisciplinary research and teamwork is important. Candidates should have interest in at least two of the following research fields (as well as a good command of the English language):
•       Human-Computer-Interaction,
•       Augmented or Mixed Reality,
•       Interactive 3D Computer Graphics,
•       Interaction Design,
•       Perceptual Psychology, and/or
•       Cognitive Sciences.
•       Ethnography
•       Data Mining, Machine Learning
•       Crowd-Sourced Data
•       Data Visualization, Cartography
•       Geoinformatics
•       Big Data


Fees are fully paid. Salary for the 4-year EngD position is £18,090 tax free p.a., and £15,590 tax free p.a. for the 3-year PhD positions. The closing date for applications is 5pm on 18th July 2012. Interviews will be held on July 26th and 27th. 


The start date is September 24th 2012 (though it can be postponed to January 7th 2013). No part-time option is available. Please download the application form at http://www.ucl.ac.uk/uclic/phd_studentships/Intel_studentship_application_form/ and email the completed form to Louise Gaynor (l.gaynor@ucl.ac.uk) as a single PDF document by 5pm on Wed 18th July 2012. Please indicate in your application whether you wish to start in Sept 2012 or Jan 2013.

Attachment: Intel studentship job advert_July 2012.pdf