Feb 12, 2013

Call for Papers: Human-Computer Interaction and the Learning Sciences


Below is the call for papers for a workshop that I'd like to attend!   (The information below was copied from the Surface Learning website.)

If you are interested in the intersection of learning and interactive surfaces,  the Surface Learning website provides an interdisciplinary forum for like-minded explorers.

Human-Computer Interaction and the Learning Sciences

Full-Day Pre-Conference Workshop, in conjunction with CSCL 2013, University of Wisconsin, Madison, WI, USA

Submission deadline: 15 April 2013
Notification of acceptance: 29 April 2013
Early registration deadline: TBD
Workshop registration deadline: TBD
Workshop: 15 June 2013

Motivation

Both Human-Computer Interaction (HCI) and the Learning Sciences (LS) are active research communities with established bodies of literature. As both have an interest in using computing technologies to support people, there is a natural synergy. However, the practices and values of the two fields are substantially different, leading to tensions felt by researchers who actively participate in both fields. They also make it harder for researchers in either field to move towards the other.

Recently, there has been increased interest within LS in acknowledging the importance of HCI. In his keynote at ICLS 2012, Pierre Dillenbourg made the case that many of the important problems of learning / education are not primarily addressed through innovations in learning theory (a particular emphasis in LS) but through useful, usable, perhaps innovative designs (a particular emphasis in HCI). At the "Interactive surfaces and spaces: A learning sciences agenda" symposium later that day, the relationship between HCI and LS was heavily debated. That discussion continued over email. What became clear is that the relationship is complex, is viewed differently by different groups (LS researchers interested in HCI, HCI researchers interested in LS, and interdisciplinary researchers) and needs to be improved.

Intended Audience

This workshop is intended to be both interdisciplinary and multi-disciplinary:
  • For researchers at the intersection of the two fields (i.e., active participants in both communities), this workshop provides a forum for discussing interdisciplinary research, with the aim of strengthening the connection between the fields.
  • For HCI researchers interested in LS, this workshop provides an introduction to the learning sciences community (values, practices, literature, venues, etc.), an opportunity to receive LS feedback on your work and support for becoming part of the LS community.
  • For LS researchers interested in HCI, this workshop provides an introduction to Human-Computer Interaction (both the fundamentals taught in an introductory course and the research community), an opportunity to receive feedback on your work from HCI researchers and connections to experienced interdisciplinary researchers.

Participation

We offer two paths to participating in the workshop, based on the CSCL 2013 theme: "To See the World and a Grain of Sand: Learning across Levels of Space, Time, and Scale." Send submissions in either category to submit@surfacelearning.org by 15 April 2013. Submissions are not anonymous and should include all author names and contact details.

The World
We seek position papers on the critical issues in interdisciplinary HCI / LS work or visions of how to advance the relationship between HCI and LS. Topics include, but are not limited to: 
  • What core methods and principles of HCI might be of use to LS researchers?
  • How can LS researchers piggyback on the efforts of HCI research to make the newest technology available for development?
  • What theoretical foundations can LS offer to HCI researchers interested in using technology to support learning?
  • How do we better support true interdisciplinary researchers?
  • How do we promote academic exchange between the communities?
Position papers should be 2–4 pages in CSCL proceedings format. They will be publicly posted on the workshop website and should serve as a resource or discussion point. During the workshop, the position papers will be briefly presented (<10 minutes per presentation) to the entire group at the closing panel. The panel will use these presentations to reflect on the day's work and discuss possible future directions.

A Grain of Sand
One of the core values of HCI is that design (both the product and the process) matters. A great study of a lackluster, ill-conceived system is relatively useless. The time to reflect on and improve a design is during its formative stages (i.e., before it is finished). Here, we give attendees an opportunity to discuss design work in progress. We seek papers on preliminary projects, either before a system has been built (outlining the motivation) or during active development. Design papers should include motivation for the project (why is this necessary research?), related work (what are you building upon?), and a sketch of how you will proceed. The projects can be based in either an HCI or LS tradition of research.

Design papers should be 2–4 pages in CSCL proceedings format. They will be publicly posted on the workshop website. During the workshop, the papers will be briefly presented (<10 minutes per presentation) to a small group who will have time to give concrete feedback on the design / research from both HCI and LS perspectives (e.g., suggestions for improvement, related work).

Organizers

Jochen Rick
Jochen “Jeff” Rick is a research associate / lecturer in the Department of Educational Technology (EduTech) at Saarland University, Germany. He received his PhD in the area of "Learning Sciences and Technologies" from the College of Computing, Georgia Institute of Technology in 2007. This will be his ninth ISLS conference. He has published in both JLS and ijCSCL and is on the editorial board of ijCSCL. He is also active in the HCI community, particularly the Interaction Design and Children community, serving as a full papers chair for the 2012 conference. He has experienced multiple perspectives on this interdisciplinary area: LS graduate student at an HCI powerhouse, postdoc in an HCI lab and junior faculty in an LS department. He has helped to organize four workshops, including one at CSCL 2002 and one at ICLS 2010. For two workshops, he successfully employed Open Space Technology, an organizing technique we plan to employ in this workshop.

Michael Horn
Michael Horn is an assistant professor at Northwestern University, USA, where he directs the Tangible Interaction Design and Learning (TIDAL) Lab. Michael holds a joint appointment in Computer Science and the Learning Sciences, and his research explores the role of emerging interactive technology in the design of learning experiences. His projects include the design of a tangible computer programming language for use in science museums and early elementary school classrooms, and the design of multi-touch tabletop exhibits for use in natural history museums. Michael has presented work at cross-disciplinary conferences including Interaction Design and Children (IDC), Tangible, Embedded, and Embodied Interaction (TEI), Human Factors in Computing Systems (CHI), ICLS, and AERA; he is on the editorial board for the Journal of Technology, Knowledge, and Learning; and he is the program committee co-chair for ACM Interactive Tabletops and Surfaces (2012 and 2013). Michael also co-organized a workshop on Technology for Today’s Family at CHI 2012.

Roberto Martinez-Maldonado
Roberto Martinez-Maldonado is a PhD candidate in the Computer Human Adapted Interaction Research Group at The University of Sydney, Australia. His research focuses on analysing data generated when groups of students collaborate using shared devices, with the goal of helping teachers become more aware of their students' learning processes and make informed decisions. His research is grounded in principles of Human-Computer Interaction, CSCL, Educational Data Mining and Learning Analytics; he makes use of a number of technologies, including multi-touch interactive tabletops, tablets, Kinect sensors and databases. He has presented work at interdisciplinary conferences that include Intelligent Tutoring Systems (ITS), Artificial Intelligence in Education (AIED), Interactive Tabletops and Surfaces (ITS), CSCL, ICLS and Educational Data Mining (EDM). He led the organisation of the workshop held in conjunction with ICLS 2012 titled Digital Ecosystems for Collaborative Learning. He has published papers at CSCL 2011, ICLS 2012 and in other communities related to HCI and Artificial Intelligence in Education.


Jan 29, 2013

OpenPilot: A Next-Gen Open Source Autopilot Approach to Aerial Videography

I recently learned about OpenPilot, an open-source project that promotes the development of economical unmanned aerial vehicles, or UAVs.  

According to information on the website, "OpenPilot is an ideal platform for researchers and hobbyists working on aerial robotics or other mobile vehicular platforms where stabilization and guidance are required. OpenPilot brings the cost down to reasonable prices so people can focus on developing and refining applications rather than paying the extremely high prices of most commercial offerings, or having to do ‘from the ground up’ hardware development."

A number of OpenPilot community members have used their UAVs to explore interesting landscapes and, at the same time, create engaging video clips. Wouldn't it be fantastic to figure out how to mount a 3D or 360º camera on a UAV?

Below is an assortment of videos I came across while visiting the OpenPilot website. (I've also included some videos that were created using YellowBird 360 technology, which to my knowledge, has not been attempted with a UAV.)

OpenPilot Revolution Trailer from OpenPilot on Vimeo.

I think that the UAV concept would be great for an after-school technology club. It is similar to robotics, but it would also get the kids outdoors. It would provide a great experience for students who are also interested in photography and videography.

RELATED
OpenPilot Website
OpenPilot Wiki
YellowBird

Here are some examples of YellowBird 360 videos: 
Interactive YellowBird 360 Video for KIA ceed

Behind the Scenes - Mont Blanc 360º shoot from yellowBird on Vimeo.

YellowBird 360 Interaction with Kinect

yellowBird 360º video player - KINECT from yellowBird on Vimeo.

Jan 17, 2013

Xbox Kinect in the OR: Kinect supports gesture interaction with 3D imaging of the patient during surgery.

Here's an interesting use of technology for health - the Xbox Kinect in the OR!

Thanks to Harry van der Veen for the link!


RELATED
Kinect sensor poised to leap into everyday life
Niall Firth, NewScientist, 1/17/13

For the tech-curious:
PrimeSense (Company that developed the 3D depth sensor that powers the Kinect, the sensor in Ava, a healthcare robot by iRobot, and more.)

OpenNI (Framework for the development of 3D sensing middleware libraries and applications.)

NiTE: Natural Interface Technology for End User (Perception algorithms layer for 3D computer vision; supports hand locating and tracking, scene analysis, and skeleton joint tracking.)

Telemedicine in Schools: Promoting Health (and Mental Health)

Telemedicine might be coming to a school near you in the future!

A Telemedicine cart, made by Rubbermaid, will soon be piloted in one of the Union County Public Schools. In the article below, the school district's superintendent was quoted as saying that she hopes the technology can also be used to tackle the problem of mental health:

Presbyterian, UCPS partner to put Telemedicine in schools
Carolyn Steeves, The Enquirer Journal, 1/9/13 


According to information from the Rubbermaid Healthcare Telemedicine website, the cart supports high-definition video teleconferencing, a plug-and-play I/O panel, and platform computing, and is whiteboard capable. The touchscreen has annotation capabilities.

For more information, view the following video and also see the Rubbermaid Telemedicine Resources site.



Here's the promotional information from the Rubbermaid website:

HD Video, Touch-Screen Apps, & Shared Content 

"The Rubbermaid Telemedicine Cart combines full computing capabilities with HD video conferencing into one, easy-to-use, mobile point-of-care clinical platform. Its clean, slim line design and small footprint provide access into the smallest and busiest clinical settings. Its multi-touch interface, simple integration, and superior maneuverability streamlines work flow and creates high adoption rates by staff members."

 "Each Telemedicine Cart comes equipped with a 720p HD video camera, upgradable to 1080p HD video. It also provides computing access to any software or web-based application, including electronic medical records and PACS imaging systems. It can be outfitted with any number of optional medical devices (both analog and digital) that can be shared through the computer or video conferencing equipment or both. The Telemedicine Cart supports digital input through DVI and HDMI as well as legacy inputs such as VGA, S-Video, and composite video. In addition, it is a fully portable platform that runs for two hours via built-in battery power and can be quickly and easily wheeled from room to room, requiring only a standard, high speed internet connection (wired or wireless) to initiate an HD video conference."

Below is a screen shot of telemedicine images from a Google search:



RELATED
Rubbermaid Healthcare Telemedicine Resources (Videos, Whitepapers, News)
Rubbermaid Healthcare Telemedicine
FCC Gives Telehealth $400M Boost
Mary Mosquera, Healthcare IT News, 1/10/13


Jan 11, 2013

InteractiveTV Today (ITVT): Links to information and updates about CES 2013, via Tracy Swedlow

If you want a quick look at the latest news related to interactive TV, including cool stuff featured at CES 2013, a good resource is the InteractiveTV Today website.   

Below are a few quick links to posts written by Tracy Swedlow, the founder of ITVT.  You'll see that there has been a flood of news and information generated from the companies featured at the recent Consumer Electronics Show (CES 2013) held in Las Vegas:

Interactive TV Headlines Round-Up (I): Aereo, Amazon, A+E Networks, Turner, Warner Bros., Samsung, Roku, Nintendo Wii U, CBS, Microsoft Xbox

Interactive TV Headlines Round-Up (II):  ConnectTV, DDD, LG, Delivery Agent, Samsung, Magic Ruby, "Sons of Anarchy", Digitalsmiths, Time Warner Cable, i.TV

Interactive TV Headlines Round-Up (III): DirecTV, Dish Network, Ensequence, Sony, ES3, Azuki Systems, Microsoft Mediaroom

Interactive TV Headlines Round-Up (IV): Asus, Google TV, Marvell, Sony, Ubitus, Amazon, YouTube

It might take a while to catch up!

BTW, the ITVT website has a number of bloggers who share insights and news related to interactive TV and multimedia.  The ITVT Community Blogstream is a good place to start.

Jan 10, 2013

Gesture Markup Language (GML) for Natural User Interaction and Interfaces

Quick post:
"GML is an extensible markup language used to define gestures that describe interactive object behavior and the relationships between objects in an application.  Gesture Markup Language has been designed to enhance the development of multiuser multi-touch and other HCI device driven applications." -Gesture ML Wiki

GestureML was created and is maintained by Ideum.
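To give a rough sense of the idea, here is a sketch of what declaring a gesture in an XML-based markup language could look like. The element and attribute names below are invented for illustration and are not actual GML syntax; see the GestureML wiki for the real schema.

```xml
<!-- Hypothetical sketch only: illustrates the idea of declaring a
     gesture in markup. These element/attribute names are invented
     and are not actual GML syntax (see the GestureML wiki). -->
<Gesture id="two-finger-rotate" type="rotate">
  <!-- Conditions that must hold for the gesture to match -->
  <match>
    <touch count="2"/>  <!-- two simultaneous touch points -->
  </match>
  <!-- Value the gesture computes and reports to the application -->
  <analysis>
    <property name="rotation" units="degrees"/>
  </analysis>
  <!-- How the target object responds, e.g. rotate with the fingers -->
  <mapping target="object" property="rotation"/>
</Gesture>
```

The appeal of this declarative approach is that the framework matches raw touch input against the declared gestures at runtime, so application code can respond to high-level events (rotate, scale, drag) rather than processing raw touch points itself.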

More information to come!
The Pano
Photo credit: Ideum

RELATED
Ideum Blog

OpenExhibits: Free multitouch and multiuser software initiative for museums, education, nonprofits, and students

GestureWorks: Multi-touch authoring for Windows 8 & Windows 7