Showing posts sorted by relevance for query kinect. Sort by date Show all posts

Apr 1, 2013

What happens when a 2-year-old wakes up to the sound of the Google Map Lady? "I CAN'T turn left right now!"

Google Map Lady says, "Turn Left", toddler yells from the back seat, "I CAN'T..."

If you are new to this blog, you might not know that I'm the grandmother of a 2-year-old little boy.  Watching him grow in an increasingly technology-enriched world has been an eye-opener at times, from his first interaction with my iPad, fingers-and-toes at 7 months of age, to his attempts at rafting down a digital river in River Rush, a Kinect Adventures! game.

Technology is rapidly changing how we learn, interact, and navigate our world.  Designers, developers, and others who are involved in the process of creating for the near future must be mindful of the ways newer technologies might play out in the real world, where the "user" is not always the person intended for the "user experience".  Off-the-desktop technologies are rapidly advancing, and impact people of all ages, wherever they happen to be.

Today's story is just one example.

I'm fortunate to live about a 35-minute drive from my grandson, and for this reason, I sometimes take him out and about, especially when his parents have a lot of errands to run.

Toddler with replica of the Eiffel Tower, Amélie's French Bakery, NoDa, Charlotte, NC

Toddler dancing around a floor mural

After a nice lunch at Amélie's French Bakery near the NoDa neighborhood (Charlotte, NC), and exploring the floor murals in the little mall behind the bakery, I told my grandson that we were going to the "Big Park" (Freedom Park).

He was so excited, but within a few minutes, he was fast asleep.


Toddler smiling and happy in the back seat


Toddler asleep in the back car seat
I drove up towards the airport to kill time, thinking that he'd wake up and we'd watch the planes. He was still sleeping.  Now what?

I opened up Google Maps on my cell phone to get directions from the airport to the Carolina Raptor Center at Latta Plantation Park, since I wasn't sure how to get there from the airport.

About 15 minutes later, as the Google Map Lady gave directions, Levi woke up, saying "What's that sound? A lady's voice?". The Google Map Lady spoke again, and said something like, "In 1000 feet, take a left turn." 

Levi replied emphatically, "I CAN'T turn left right now!". Google Map Lady responded with the next direction, and Levi replied, "I CAN'T do that!".

The little guy was visibly upset, because he thought the lady was telling him what to do. It was obvious to him that he could not comply with her request.

What to do?   How do I explain the "Google Map Lady" to a 2-year-old?

This is how I handled the situation:

I told him that the lady's voice was to help me know where to turn so I could drive to the raptor center.  I kindly told him that the directions were just for me, not for little boys who can't turn the car because they are in car seats and can't drive. He nodded and said, with relief, "Lady's voice for Mi-Mi, NOT for little boys", and was fine after that.

Note:
Although I did not know it at the time, my grandson had somehow wriggled out of the left harness of his car seat. I discovered the problem as I went to unfasten him from the car seat, and wondered how long he had been riding unsecured.  It hadn't occurred to me that this could happen - everything was in place at the beginning of our ride, as you can see from the first picture.

As I lifted my grandson out of the car seat, it crossed my mind that it would be a good idea if car seats came with sensors to let the driver know if the car seat straps, snaps, or buckles became unsecured. (Systems like Forget Me Not provide a warning system to parents if the child is forgotten in the car.)

After conducting a quick search, I found that Sherine Elizabeth Thomas has applied for a patent that includes the use of a sensor to alert the adult that a child has unbuckled their seat belt.  I think that a system could be developed to provide an alert if the child was not safely secured, as in the case of my wiggly grandson.  
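As a thought experiment, the alert logic for a harness monitor like this could be as simple as polling each latch sensor and flagging anything unsecured. The sketch below is purely hypothetical - the sensor names, polling interval, and alert mechanism are all invented for illustration, and a real system would need automotive-grade sensors and wiring:

```python
# Hypothetical sketch of a car-seat harness monitor.
# All sensor names and the alert mechanism are illustrative assumptions.

import time

def read_sensors():
    # Simulated harness sensors: True means the latch/strap is secure.
    # In a real seat, these would be hardware switch or tension-sensor reads.
    return {"left_harness": True, "right_harness": True, "crotch_buckle": True}

def check_harness(sensors):
    """Return a list of any straps or buckles that are not secure."""
    return [name for name, secure in sensors.items() if not secure]

def monitor(poll_fn, alert_fn, interval_s=2.0, cycles=1):
    """Poll the harness sensors and alert the driver about any unsecured strap."""
    for _ in range(cycles):
        unsecured = check_harness(poll_fn())
        if unsecured:
            alert_fn("Car seat warning: " + ", ".join(unsecured) + " not secure!")
        time.sleep(interval_s)

if __name__ == "__main__":
    # One polling cycle with the simulated (all-secure) sensors: no alert fires.
    monitor(read_sensors, print, interval_s=0.0, cycles=1)
```

The key design point is the same one the patent raises: the seat has to notice a change in state after the ride begins, not just at buckle-up time, which is why the check runs on a repeating poll rather than once.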


RELATED AND SOMEWHAT RELATED
(Self-activating, self-aware digital wireless safety system)
John Polaceck, 3/24/13
Grandma Got STEM blog (More info to come on this topic!)

Feb 12, 2013

Call for Papers: Human-Computer Interaction and the Learning Sciences


Below is the call for papers for a workshop that I'd like to attend!   (The information below was copied from the Surface Learning website.)

If you are interested in the intersection of learning and interactive surfaces,  the Surface Learning website provides an interdisciplinary forum for like-minded explorers.

Human-Computer Interaction and the Learning Sciences

Full-Day Pre-Conference Workshop, in conjunction with CSCL 2013, University of Wisconsin, Madison, WI, USA

Submission deadline: 15 April 2013
Notification of acceptance: 29 April 2013
Early registration deadline: TBD
Workshop registration deadline: TBD
Workshop: 15 June 2013

Motivation

Both Human-Computer Interaction (HCI) and the Learning Sciences (LS) are active research communities with established bodies of literature. As both have an interest in using computing technologies to support people, there is a natural synergy. However, the practices and values of the two fields are substantially different, leading to tensions felt by researchers who actively participate in both fields. These tensions also make it harder for researchers in either field to move toward the other.

Recently, there has been increased interest in LS to acknowledge the importance of HCI. In his keynote at ICLS 2012, Pierre Dillenbourg made the case that many of the important problems of learning / education are not primarily addressed through innovations in learning theory (a particular emphasis in LS) but through useful, usable, perhaps innovative designs (a particular emphasis in HCI). At the "Interactive surfaces and spaces: A learning sciences agenda" symposium later that day, the relationship between HCI and LS was heavily debated. That discussion continued in email form. What became clear is that the relationship is complex, viewed differently by different groups (LS researchers interested in HCI, HCI researchers interested in LS and interdisciplinary researchers) and needs to be improved.

Intended Audience

This workshop is intended to be both interdisciplinary and multi-disciplinary:
  • For researchers at the intersection of the two fields (i.e., active participants in both communities), this workshop provides a forum for discussing interdisciplinary research with the aims of supporting the connection between the fields.
  • For HCI researchers interested in LS, this workshop provides an introduction to the learning sciences community (values, practices, literature, venues, etc.), an opportunity to receive LS feedback on your work and support for becoming part of the LS community.
  • For LS researchers interested in HCI, this workshop provides an introduction to Human-Computer Interaction (both the fundamentals taught in an introductory course and the research community), an opportunity to receive feedback on your work from HCI researchers and connections to experienced interdisciplinary researchers.

Participation

We offer two paths to participate in the workshop based on the CSCL 2013 theme: "To See the World and a Grain of Sand: Learning across Levels of Space, Time, and Scale." Send submissions in either category to submit@surfacelearning.org by 15 April 2013. Submissions are not anonymous and should include all author names and contact details.

The World
We seek position papers on the critical issues in interdisciplinary HCI / LS work or visions of how to advance the relationship between HCI and LS. Topics include, but are not limited to: 
  • What core methods and principles of HCI might be of use to LS researchers?
  • How can LS researchers piggyback on the efforts of HCI research to make the newest technology available for development?
  • What theoretical foundations can LS offer to HCI researchers interested in using technology to support learning?
  • How do we better support true interdisciplinary researchers?
  • How do we promote academic exchange between the communities?
Position papers should be 2–4 pages in CSCL proceedings format. They will be publicly posted on the workshop website and should serve as a resource or discussion point. During the workshop, the position papers will be briefly presented (<10 minutes per presentation) to the entire group at the closing panel. The panel will use these presentations to reflect on the day's work and discuss possible future directions.

A Grain of Sand
One of the core values of HCI is that design (both the product and the process) matters. A great study of a lackluster, ill-conceived system is relatively useless. The time to reflect on and improve a design is during its formative stages (i.e., before it is finished). Here, we give attendees an opportunity to discuss design work in progress. We seek papers on preliminary projects, either before a system has been built (outlining the motivation) or during active development. Design papers should include motivation for the project (why is this necessary research?), related work (what are you building upon?), and a sketch of how you will proceed. The projects can be based in either an HCI or LS tradition of research.

Design papers should be 2–4 pages in CSCL proceedings format. They will be publicly posted on the workshop website. During the workshop, the papers will be briefly presented (<10 minutes per presentation) to a small group who will have time to give concrete feedback on the design / research from both HCI and LS perspectives (e.g., suggestions for improvement, related work).

Organizers

Jochen Rick
Jochen “Jeff” Rick is a research associate / lecturer in the Department of Educational Technology (EduTech) at Saarland University, Germany. He received his PhD in the area of "Learning Sciences and Technologies" from the College of Computing, Georgia Institute of Technology in 2007. This will be his ninth ISLS conference. He has published in both JLS and ijCSCL and is on the editorial board of ijCSCL. He is also active in the HCI community, particularly the Interaction Design and Children community, serving as a full papers chair for the 2012 conference. He has experienced multiple perspectives on this interdisciplinary area: LS graduate student at an HCI powerhouse, postdoc in an HCI lab and junior faculty in an LS department. He has helped to organize four workshops, including one at CSCL 2002 and one at ICLS 2010. For two workshops, he successfully employed Open Space Technology, an organizing technique we plan to employ in this workshop.

Michael Horn
Michael Horn is an assistant professor at Northwestern University, USA, where he directs the Tangible Interaction Design and Learning (TIDAL) Lab. Michael holds a joint appointment in Computer Science and the Learning Sciences, and his research explores the role of emerging interactive technology in the design of learning experiences. His projects include the design of a tangible computer programming language for use in science museums and early elementary school classrooms; and the design of multi-touch tabletop exhibits for use in natural history museums. Michael has presented work at cross-disciplinary conferences including Interaction Design and Children (IDC), Tangible, Embedded, and Embodied Interaction (TEI), Human Factors in Computing Systems (CHI), ICLS, and AERA; he is on the editorial board for the Journal of Technology, Knowledge, and Learning; and he is the program committee co-chair for ACM Interactive Tabletops and Surfaces (2012 and 2013). Michael also co-organized a workshop on Technology for Today’s Family at CHI 2012.

Roberto Martinez-Maldonado
Roberto Martinez-Maldonado is a PhD candidate in the Computer Human Adapted Interaction Research Group at The University of Sydney, Australia. His research focuses on analysing data generated when groups of students collaborate using shared devices, to help teachers be more aware of their students' learning processes and make informed decisions. His research is grounded in principles of Human-Computer Interaction, CSCL, Educational Data Mining and Learning Analytics; he makes use of a number of technologies, including multi-touch interactive tabletops, tablets, Kinect sensors and databases. He has presented work at interdisciplinary conferences that include Intelligent Tutoring Systems (ITS), Artificial Intelligence in Education (AIED), Interactive Tabletops and Surfaces (ITS), CSCL, ICLS and Educational Data Mining (EDM). He led the organisation of the workshop held in conjunction with ICLS 2012 titled Digital Ecosystems for Collaborative Learning. He has published papers at CSCL 2011, ICLS 2012 and in other communities related to HCI and Artificial Intelligence in education.

Documents

Dec 29, 2012

KAPi Kids at Play Awards: Best in Children's Technology 2013 Winners Announced

The following information is from a PRWeb press release announcing the winners of the Fourth Annual KAPi Awards:

Living in Digital Times and Children's Technology Review Announce 2013 KAPi Award Winners  The Most Innovative in Children's Technology to be honored on Thursday, January 10, at the 2013 International CES ® in Las Vegas

"Collaboratively organized and produced by Living in Digital Times and Children’s Technology Review, the fourth annual KAPi Kids at Play Awards honor the best of the best in children’s technology."
- PRWeb, 12/18/12

The 2013 KAPi Award Winners Are… 
 1. Best Younger Children’s App: LetterSchool by Boreaal Publishers 
 2. Best Older Children’s App: IMAG-N-O-TRON by Moonbot Studios 
 3. Best Tech Leveraged Toy: Skylanders Giants by Activision 
 4. Best Video Game Software: Kinect Sesame Street TV by Microsoft Studios 
 5. Best Hardware or Peripheral: Kindle Fire HD with Kindle FreeTime Unlimited by Amazon 
 6. Best Technology Toy: littleBits by littleBits Electronics 
 7. Best Educational Technology: BrainPOP GameUp by BrainPOP LLC. 
 8. Innovation: The Cube by 3D Systems, Inc. (3D Printer)
 9. Pioneer: Dale Dougherty, Co-Creator, Maker Faire; Publisher, MAKE Magazine 
 10. Pioneering Team: Toca Boca 

The KAPi Award judges were 13 journalists and experts in children’s interactive media:
Warren Buckleitner, Children’s Technology Review
Chris Crowell,  Children’s Technology Review  
Dan Donahoo, Wired GeekDad and Project Synthesis 
Chip Donohue, Erikson Institute 
David Kleeman, American Center for Children and Media 
Ann McCormick, Co-Founder, The Learning Company 
Frank Migliorelli, Mig Idea 
Robin Raskin, Living in Digital Times 
Reyne Rice, Toy Expert 
Carly Shuler, PlayScience 
Andrea Smith, Mashable 
Aleen Stein, Organa 
Scott Traylor, 360 Kid

Nov 17, 2012

Human Computer Interaction + Informal Science Education Conference (NUI News)

I recently learned of the HCI + ISE conference, funded by the National Science Foundation and organized by Ideum and Independent Exhibitions, which will lay the groundwork for the future design and development of interactive computer-based science exhibits.
Science museums have a long history of interactivity, well suited to groups of "explorers", such as families or students visiting on a field trip.  

What is really exciting is that new interactive applications and technologies have the power to transform the way people learn and understand science in a collaborative and social way.  Innovations in the field of HCI (Human-Computer Interaction), such as multi-touch and gesture interaction, are well-suited to meet the goals of science education for all, beyond the school doors and wordy textbooks.

Below is a screen-shot of the conference website, a description about the conference, quoted from the site, and some related resources.



About the HCI+ISE Conference
"HCI technologies, such as motion capture, multitouch, augmented reality, RFID, and voice recognition are beginning to change the way computer-based science exhibits are designed and developed. Human Computer Interaction in Informal Science Education (HCI+ISE) is a first-of-its-kind gathering to explore and disseminate effective practices in developing a new generation of digital exhibits that are more intuitive, interactive, and social than their predecessors."
"The HCI+ISE Conference, to be held in Albuquerque, New Mexico June 11-14 2013, will bring together 60 museum exhibit designers and developers, learning researchers, and technology industry professionals to share effective practices, and to explore both the enormous potential and possible pitfalls that these new technologies present for exhibit development in informal science education settings."
"HCI+ISE will focus on the practical considerations of implementing new HCI technologies in educational settings with an eye on the future. Along with a survey of how HCI is shaping the museum world, participants will be challenged to envision the museum experience a decade into future. The conference results will provide a concrete starting point for exhibit developers and informal science educators who are just beginning to investigate these emerging technologies and design challenges in creating these new types of exhibits."
Why HCI+ISE?
"Since the mid-1980s informal educational venues have increasingly incorporated computer-based exhibits into their science communication offerings in an effort to keep pace with public expectations and make use of the expanding opportunities these technologies provide. The advent and popularity of once novel HCI technologies are becoming commonplace: the Wii and Microsoft Kinect now allow for motion capture video games, tablet PCs have multitouch interaction, and smart phones and other devices come standard with voice recognition. Yet many museums are still developing single-touch and trackball-driven, single-user computer kiosks."
"Science museums have a long history of championing hands-on, physical, and inquiry-based activities and exhibits. This vast experience has only just begun to be applied to interactive computer interfaces. Along with seasoned science exhibit developers, the Conference will draw upon individuals outside of ISE who will provide fresh insight into the technologies, design issues, and audience expectations that these visitor experiences present."
Involvement and Findings
"HCI+ISE will bring together a diverse group of practitioners and other professionals to discuss (and in some cases share and prototype) new design approaches utilizing emerging HCI technology. Please see our Apply page to learn how you can participate. Conference news and findings will be distributed through a variety of ISE and museum websites, including this one."
"We welcome your questions and comments about the HCI+ISE Conference."
CONTACTS
Kathleen McLean of Independent Exhibitions
& Jim Spadaccini of Ideum
HCI+ISE Co-chairs
"Open Exhibits is a multitouch, multi-user tool kit that allows you to create custom interactive exhibits."
CML:  Creative Mark-up Language
GML: Gesture Mark-up Language
GestureWorks
Ideum

Nov 4, 2012

CFP for Special Issue of Personal and Ubiquitous Computing on Educational Interfaces, Software, and Technology (EIST) -Extended Deadline: December 9, 2012

Overview 
One of the primary goals of teaching is to prepare learners for life in the real world. In this ever-changing world of technologies such as mobile interaction, cloud computing, natural user interfaces, and gestural interfaces like the Nintendo Wii and Microsoft Kinect, people have a greater selection of tools for the task at hand. Given the potential of these new interfaces, software, and technologies as learning tools, as well as the ubiquitous application of interactive technology in formal and informal learning environments, there is a growing need to explore how next-generation technologies will impact education in the future. 

As a community of Human-Computer Interaction (HCI) and educational researchers, we need to theorize and discuss how new technologies should be integrated into the classrooms and homes of the future. In the last three years, three CHI workshops have provided a forum to discuss key issues of this sort, particularly in the context of next-generation education. The aim of this special issue of Personal and Ubiquitous Computing is to summarize the potential design challenges and perspectives on how the community should handle next-generation technologies in the education domain for both teachers and students. 


We invite authors to present position papers about potential design challenges and perspectives on how the community should handle the next generation of HCI in education. Topics of interest include but are not limited to: 

  • Gestural input, multitouch, large displays 
  • Mobile devices, response systems (clickers) 
  • Tangible, VR, AR & MR, multimodal interfaces 
  • Console gaming, 3D input devices 
  • Co-located interaction, presentations 
  • Educational pedagogy, learner-centric, child computer interaction 
  • Empirical methods, case studies 
  • Multi-display interaction 
  • Wearable educational media 
Important Dates 

  • Full papers due: December 9, 2012 
  • Initial reviews to authors: January 18, 2013 
  • Revised papers due: March 15, 2013 
  • Final reviews to authors: April 26, 2013 
  • Final papers due: June 14, 2013 


Submission Guidelines 

Submissions should be prepared according to the Word template located at the bottom of this page. All manuscripts are subject to peer review. Manuscripts must be submitted as a PDF to the easychair submission system. Submissions should be no more than 8000 words in length. 

Guest Editors and Contact Information 

  • Syed Ishtiaque Ahmed, Cornell University 
  • Quincy Brown, Bowie State University 
  • Jochen Huber, Technische Universität Darmstadt 
  • Si Jung “Jun” Kim, University of Central Florida 
  • Lynn Marentette, Union County Public Schools, Wolfe School 
  • Max Mühlhäuser, Technische Universität Darmstadt 
  • Alexander Thayer, University of Washington 
  • Edward Tse, SMART Technologies 

Contact: eistjournal2012@easychair.org 

Information about the Journal of Personal and Ubiquitous Computing 


Submission Template: PUC_EIST_article_template.docx  (59k)

Aug 12, 2012

Tech and Stuff shared by my FB friends.

It seems that the weekend is ripe for sharing interesting things on Facebook, judging from what I've seen from my FB friends.  These are just a few that came my way:


This picture below is from the World is Beautiful FB page. Where?  The  Igloo Village of Hotel Kakslauttanen, in Finland.  The igloos are made of glass, and according to the description, provide views of the Aurora Borealis:



In case you missed this--- at about 1:45 the dolphins appear.  Beautiful!

The Blue from Mark Peters on Vimeo.

17 minute video from LEGO about the history of the company:


Context-Aware Computing, by Albrecht Schmidt:


iGlass, shared by Pixelonomics:


Patent application for "peripheral treatment for head-mounted displays", for the above device.

Michael Husted's post:


Shared by Barbara Bray, via Smart Apps for Kids, via Success in Learning



My comment:
"It doesn't hurt to take a few self-defense classes.  I took kickboxing for the exercise and I do not feel defenseless.  As adults, we encounter criminals who are beyond the bully stage, who don't care if they hurt (or kill) when they want to engage in illegal activities.  It makes sense to do the things that make us strong, healthy, fit, and safe.  This means having the strength to help others during a crisis, such as the shootings at the movie theater and other seemingly "random" acts of local terrorism."

I shared the following picture on Facebook:  
I set up the XBox 360 and the Kinect in the Activities of Daily Living room (it is also the music room), and when I went to take a picture of my rafting adventure, the system took a picture of me!
Photo: We got the Kinect working at school; here's a picture of me taking a picture of the screen when the in-game camera took a picture of me trying to ride the rapids...

Shared by World Sepsis Day - the German delegation's presentation at the Project Fair of the International Federation of Medical Students Association August meeting.



RELATED
Albrecht Schmidt's blog
Interaction Design Foundation:  "Free educational materials - made by the world's technology elite"
Mashable

Jul 21, 2012

Musings about NUI, Perceptive Pixel and Microsoft, Rapid Creative Prototyping (Lots of video and links) Revised

It just might be the right time for everyone to brush up on 21st century tech skills. iPads and touch-phones are ubiquitous. Touch-enabled interactive whiteboards and displays are in schools and boardrooms.  With Microsoft's Windows 8 and the news that the company recently acquired Jeff Han's company, Perceptive Pixel, I think that there will be good support - and more opportunities - for designers and developers interested in moving from GUI to NUI.


In the video below, from CES 2012, Jeff Han provides a good overview of where things are moving in the future.  We are in a post-WIMP world and there is a lot of catching up to do!

CES 2012  Perceptive Pixel and the Future of Multitouch (IEEE Spectrum YouTube Channel)



During the video clip, Jeff explains how far things have come during the past few years:
 "Five and 1/2 years ago I had to explain to everybody what multi-touch was and meant. And then, frankly, we've seen some great products from folks like Apple, and really have executed so brilliantly, that everyone really sees what a good implementation can be, and have come to expect it.  I also think though, that the explosion of NUI is less about just multi-touch, but an awareness that finally people have that you don't have to use a keyboard and mouse, you can demand something else beside that.  People are now willing to say, "Oh, this is something I can try, you know, touch is something I can try as my friendlier interface"."

Who wouldn't want to interact with a friendlier interface?  Steve Ballmer doesn't curb his enthusiasm about Windows 8 and Perceptive Pixel.  Jeff Han is happy with how designs created in Windows 8 scale for use on screens large and small. He explains how Windows 8 can support collaboration. The Story Board application (7:58) on the large touchscreen display looks interesting.

I continue to be frustrated by the poor usability of many web-based and desktop applications.  I like my iPad, but only because so many dedicated souls have given some thought to the user experience when creating their apps.  I often meet with disappointment when I encounter interactive displays when I'm out and about during the day.  It is 2012, and it seems that there are a lot of application designers and developers who have never read Don Norman's The Design of Everyday Things!



I enjoy making working prototypes and demo apps, but my skill set is stuck in 2008, the last year I took a graduate-level computer course.  I was thinking about taking a class next semester, something hands-on, creative, and also practical, to move me forward. I can only do so much when I'm in the DIY mode alone in my "lab" at home.  I need to explore new tools, alongside like-minded others.  


There ARE many more tools available to designers and developers than there were just four years ago.  Some of them are available online, free, or for a modest fee.  I was inspired by a link posted by my former HCI professor, Celine Latulipe, to her updated webpage devoted to Rapid Prototyping tools. The resources on her website look like a good place to start for people who are interested in creating applications for the "NUI" era.  (Celine has worked on many interesting projects that explore how technology can support new and creative interaction, such as Dance.Draw.) Below is her description of her updated HCI resources:

"New HCI resource to share: I have created a few pages on my web site devoted to Rapid Prototyping tools, books, and methods. These pages contain reviews of various digital tools, including 7 different desktop prototyping apps, and including 8 different iPad apps for wireframing/prototyping. I hope it's useful to others. Feel free to share... and please send me comments and suggestions if you find anything inaccurate, or if you think there is stuff that I should be adding. I will be continuing to update this resource." -http://www.celinelatulipe.com (click on the rapid prototyping link at the top)



IDEAS
Below are just a few of my ideas that I'd like to implement in some way. I can't claim ownership of these ideas - they are mash-ups of what comes to me in my dreams, usually after reading scholarly publications from ACM or IEEE, or attending tech conferences.
  • An interactive timeline, (multi-dimensional, multi-modal, multimedia) for off-the-desktop interaction, collaboration, data/info analysis exploration.  It might be useful for medical researchers, historians, genealogists, or people who are into the "history of ideas".  Big Data folks would love it, too. It would handle data from a variety of sources, including sensor networks. It would be beautiful to use.
  • A web-based system of delivering seamless interactive, multi-modal, immersive experiences, across devices, displays, and surfaces. The system would support multi-user, collaborative interaction.  The system would provide an option for tangible interaction.
  • A visual/auditory display interface that presents network activity, including potential intrusions, malfunctions, or anything that needs immediate attention that would be likely to be missed under present monitoring methods. 
  • Interactive video tools for creation, collaboration, storytelling.  (No bad remote controllers needed.)
  • A "wearable" that provides new ways for people to express and communicate creatively, through art, music, dance, with wireless capability. (It can interact with wireless sensor networks.)*
  • A public health application designed to provide information useful in understanding sepsis and supporting prevention efforts. This application would utilize the timeline concept described at the top of this list. This concept could also be useful in analyzing other medical puzzles, such as autism.
Most of these ideas could translate nicely to educational settings, and the focus on natural user interaction and multi-modal i/o aligns with the principles of Universal Design for Learning, something that is important to consider, given the number of "at-risk" learners and young people who have disabilities.

I welcome comments from readers who are working on similar projects, or who know of similar projects.  I also encourage graduate students and researchers who are interested in natural user interfaces to move forward with an off-the-desktop NUI project.  I hope that my efforts can play a part in helping people make the move from GUI to NUI!



Below are a few videos of some interesting projects, along with a list of a few references and links.


SMALLab (Multi-modal embodied immersive learning)


PUPPET PARADE: Interactive Kinect Puppets(CineKid 2011)



MEDIA FACADES: When Buildings Start to Twitter

HUMANAQUARIUM (CHI 2012)

 

NANOSCIENCE NRC Cambridge (Nokia's Morph project)






 
Examples: YouTube Playlists
POST WIMP EXPLORERS' CLUB
POST-WIMP EXPLORER'S CLUB II

Web Resources
Celine Latulipe's Rapid Prototyping Resources 
Creative Applications
NUI Group: Natural User Interface Group
OpenFrameworks and Interactive Multimedia: Funky Forest Installation for CineKid
SMALLab Learning
OpenExhibits: Free multi-touch + multiuser software initiative for museums, education, nonprofits, and students.
OpenSense Wiki 
CINEKID 2012 Website 
Multitouch Systems I Have Known and Loved (Bill Buxton)
Windows 8
Perceptive Pixel
Books
Natural User Interfaces in .NET: WPF 4, Surface 2, and Kinect (Josh Blake, Manning Publications)
Chapter 1 pdf (Free)
Brave NUI World: Designing Natural User Interfaces for Touch and Gesture (Daniel Wigdor and Dennis Wixon)
Designing Gestural Interfaces (Dan Saffer)
Posts
Bill Snyder, ReadWrite Web, 7/20/12

I noticed some interesting tools on the Chrome web store - I plan to devote a few more posts to NUI tools in the future.

Jul 15, 2012

60-Minutes Segment about iPads and Autism; James Winchester's Tech and Special Needs Blog

Tonight's episode of 60 Minutes included a repeat of a segment about the use of iPad apps with young people who have autism spectrum disorders.   I missed it, but I found it on the CBS website. 


Along with the segment, I found several related videos and transcripts. If you have a moment, take the time to look!


Apps for Autism (60 Minutes Video)

RELATED
Interview of Temple Grandin about autism
Temple Grandin's Unique Brain
SEN Classroom: Ideas and Tech in a SEN Classroom
(James Winchester's blog)
James is a special educator with a wealth of "how-to" knowledge about technology and special needs. If you are interested, take a look at his blog's archive. He writes about iPad apps, the use of the Kinect with students at his school, and more.  He specializes in a Life Skills curriculum, which focuses on the social, communication, and vocational skills that students will need as they become members of the community.  



I recently wrote a post about Po-Motion, an interactive tech start-up based in Winnipeg, Canada, and learned that the system is used as an interactive wall display in a sensory room at a school for children who have severe disabilities, including autism. More information about the use of this system, including a video, can be found on James Winchester's blog post, Po-Motion Interactive Wall in the Sensory Room.  



Comment:

In my work as a school psychologist, I use technology with students who have severe autism several days a week, along with my colleagues.  I plan to share more information on this topic from time to time in future posts. 


I am putting together a web page with resources about autism and technology. My resources include descriptions of systems and applications, videos, and presentation slides from a variety of researchers, developers, and practitioners.  Suggestions are welcome!


Jul 12, 2012

CFP for Special Issue of Personal and Ubiquitous Computing on Educational Interfaces, Software, and Technology (EIST)



Overview 
One of the primary goals of teaching is to prepare learners for life in the real world. In this ever-changing world of technologies such as mobile interaction, cloud computing, natural user interfaces, and gestural interfaces like the Nintendo Wii and Microsoft Kinect, people have a greater selection of tools for the task at hand. Given the potential of these new interfaces, software, and technologies as learning tools, as well as the ubiquitous application of interactive technology in formal and informal learning environments, there is a growing need to explore how next-generation technologies will impact education in the future. 


As a community of Human-Computer Interaction (HCI) and educational researchers, we need to theorize and discuss how new technologies should be integrated into the classrooms and homes of the future. In the last three years, three CHI workshops have provided a forum to discuss key issues of this sort, particularly in the context of next-generation education. The aim of this special issue of Personal and Ubiquitous Computing is to summarize the potential design challenges and perspectives on how the community should handle next-generation technologies in the education domain for both teachers and students. 

We invite authors to present position papers about potential design challenges and perspectives on how the community should handle the next generation of HCI in education. Topics of interest include but are not limited to: 

  • Gestural input, multitouch, large displays 
  • Mobile devices, response systems (clickers) 
  • Tangible, VR, AR & MR, multimodal interfaces 
  • Console gaming, 3D input devices 
  • Co-located interaction, presentations 
  • Educational pedagogy, learner-centric, child computer interaction 
  • Empirical methods, case studies 
  • Multi-display interaction 
  • Wearable educational media
Important Dates
  • Full papers due: November 9, 2012
  • Initial reviews to authors: January 18, 2013
  • Revised papers due: March 15, 2013
  • Final reviews to authors: April 26, 2013
  • Final papers due: June 14, 2013
Submission Guidelines
Submissions should be prepared according to the Word template located at the bottom of this page. All manuscripts are subject to peer review. Manuscripts must be submitted as a PDF to the easychair submission system. Submissions should be no more than 8000 words in length.

Guest Editors and Contact Information
  • Syed Ishtiaque Ahmed, Cornell University
  • Quincy Brown, Bowie State University
  • Jochen Huber, Technische Universität Darmstadt
  • Si Jung “Jun” Kim, University of Central Florida
  • Lynn Marentette, Union County Public Schools, Wolfe School
  • Max Mühlhäuser, Technische Universität Darmstadt
  • Alexander Thayer, University of Washington 
  • Edward Tse, SMART Technologies

Information about the Journal of Personal and Ubiquitous Computing

May 21, 2012

Leap Motion: Low Cost Gesture Control for Your Computer Display

Jessica Vascellaro, of the Wall Street Journal, reports on gesture, motion, and even object control for computers, highlighting the work of Leap Motion and Flutter.




Apparently the Leap Motion sensor is less expensive than Microsoft's Kinect. It can track hand and finger movements down to 1/100 of a millimeter, within an interaction space of about 8 cubic feet.


Below is a video from the Leap Motion website:






RELATED
Leap FAQs
Leap Motion Developer Kit Application
Leap Motion: 3D hands-free motion control, unbound
Daniel Terdiman, CNET, 5/20/12
FYI: Do a search and you'll find many more articles and posts about Leap Motion!

May 19, 2012

Johnny Chung Lee's Recent Words of Wisdom & Google's Open-Source Ceres Non-Linear Least Squares Solver


I have been a fan of Johnny Chung Lee since 2007 or 2008, before he finished his Ph.D. in Human-Computer Interaction.  Johnny went on to work at Microsoft (Kinect) and then Google, where he works as a Rapid Evaluator. 


Johnny is known for his experiments with the Wii Remote, which he introduced to the world during a TED Talk in 2008.  He continues to maintain his Procrastineering blog, and from time to time uses it to share his take on the world of technology.  The following quote, taken from his post "Technology as a Story", is a good example of his viewpoint:


"...what saddens me is when I encounter technologists with the brilliance to create new and wonderful things, but lack a sense of what is beautiful to people. Technology is most often known for being ugly and unpleasant to use, because technologists most often build technology for other technologists.
...But to touch millions of people, you have to tell a story - a story that they can believe in, a story that can inspire them. Technology is a tool by which new stories can be crafted."



Today, I came across Johnny's most recent post, which asks: what exactly is a "non-linear least squares solver", and why should you care?  Take a moment to read his post, "Ceres: solving complex problems using computing muscle".  Google just open-sourced the Ceres Non-Linear Least Squares Solver.


If Johnny Chung Lee thinks that this is "probably the most interesting code library" that he's had a chance to work with, it probably has some value. 


Even if you don't have a clue about the Ceres Non-Linear Least Squares Solver, you might appreciate Johnny's examples of how it could be useful. In today's rapidly accelerating, technology-supported world, you just might need it in your future!


Here are a few examples:
  • Making sense of sensor data from multiple locations (see video "SLAM 1: Viewed at 6X speed")
  • Figuring out the position of a camera and the objects in view (see video "Parallel Tracking and Mapping for Small AR Workspaces")
  • Combining GPS data with vehicle sensors in cars (see video "Street View Sensor Fusion with Ceres")
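If you're curious what a non-linear least squares solver actually does, here is a minimal sketch of the kind of problem it handles: fitting model parameters to noisy observations. Ceres itself is a C++ library, so this sketch uses SciPy's least_squares as a stand-in, and the exponential-decay model and data are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Invented example: recover the parameters of a decay model
# y = a * exp(-b * x) from noisy measurements.
rng = np.random.default_rng(0)
a_true, b_true = 2.5, 1.3
x = np.linspace(0, 4, 50)
y = a_true * np.exp(-b_true * x) + 0.05 * rng.standard_normal(x.size)

def residuals(params, x, y):
    # One residual per observation: model prediction minus measurement.
    a, b = params
    return a * np.exp(-b * x) - y

# The solver iteratively adjusts (a, b) to minimize the sum of
# squared residuals, starting from a rough initial guess.
fit = least_squares(residuals, x0=[1.0, 1.0], args=(x, y))
a_est, b_est = fit.x
print(a_est, b_est)  # should land close to 2.5 and 1.3
```

The SLAM and camera-pose examples above are the same idea at much larger scale: thousands of parameters (poses, landmark positions) fitted to thousands of noisy sensor measurements at once.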


RELATED
Johnny Chung Lee's Website
Excerpt from a post I wrote about Johnny Chung Lee four years ago:
I wish I could be Johnny Chung Lee for a Day! 3/2/08
I've mentioned in previous posts that I am a fan of Johnny Chung Lee, a Ph.D. student in the Human-Computer Interaction department at Carnegie Mellon University. Johnny expects to complete his Ph.D. this year. Johnny recently presented his innovative work at TED 2008. 


What impresses me about Johnny is the way that he has documented his intellectual journey in a very accessible way, by using YouTube and his well-organized, appealing website. Johnny has taken interesting ideas that most would dismiss as silly or impractical, and transformed them into useful, usable applications that hold great promise for future work. 


 In my opinion, many of Johnny's "hacks" will spark ideas related to the design and development of universally designed technologies and applications that will meet the technology needs of a wider range of people. This is important, especially now that an increasing number of "connected" interactive displays and kiosks (known by the marketing industry as interactive digital signage) are appearing in public spaces.


January 2011 post:
"Hi, Google. My name is Johnny Chung Lee": Johnny Chung Lee Leaves Microsoft. (I still wish I could be Johnny Chung Lee for a day.)