Jul 9, 2013

The Life and Contributions of Seymour Papert: Inspiring video of a tribute panel, Interaction Design and Children Conference

Seymour Papert Tribute at #IDC13

I recently attended the Interaction Design and Children (IDC 2013) conference in NYC.  It was like a summer tech camp for grown-ups. We were busy all day and had interesting evening events scheduled, like a field trip to the New York Hall of Science and a screening of Flying Paper, an award-winning documentary. 

One of the highlights of IDC 2013 was a panel that gave tribute to the life and contributions of Seymour Papert.  Well ahead of his time, Seymour Papert imagined a world in which children would generate their own computer programs, make awesome robots, collaborate with others, create, and learn. 

I encourage you to take some time and watch the video.


Seymour Papert Tribute Panel from IDC2013 Conference on Vimeo.



The following information is from the description of the video:

"Seymour Papert was one of the key pioneers of interaction design for children, merging the constructivist ideas of Jean Piaget and cutting-edge technological advances in computer programming and cybernetics..generating well-known designs such as the Logo programming language and the Lego Mindstorms robotics kits.  This work, which in the beginning was done in collaboration with many colleagues at the MIT Artificial Intelligence Lab, Bolt, Beranek and Newman, and Atari Research Labs, has been highly influential for decades."

"Paulo Blikstein from Stanford University hosted a panel at the Interaction Design and Children (IDC 2013) conference on the impact of Seymour Papert's research on the past, present, and future of child-computer interaction.  The purpose of this pane is to investigate current trends, designs, and theoretical advances in the IDC community in light of the groundbreaking work of Papert and his close collaborators, recapitulate the history of this early work in IDC, and imagine future scenarios for IDC research."

Panelists:
Allison Druin, University of Maryland
Edith Ackermann, MIT
Mike Eisenberg, University of Colorado
Mitch Resnick, MIT
Uri Wilensky, Northwestern University

More posts to come soon!

RELATED
IDC 2013 Website - an archive of treasures
MIT Media Lab
Human-Computer Interaction Lab: Children as Design Partners






Jun 13, 2013

Stanford's "Coding Together: Developing Apps for iPhone and iPad" Course Video Presentations on iTunesU

Now that the school year has ended, I've taken the first step to begin my "Summer of Code".  I have five weeks off each summer, and for me, it is the best time to brush up on my coding skills.   Since my school recently piloted an iPad program, I've developed an urge to learn Objective-C.  

So on the very first day of my summer break, I noticed in an email from Apple that all of the presentation videos from Coding Together: Developing Apps for iPhone and iPad were made available, for free, through iTunes U. The course was designed for people who have some programming courses/experience, and from what I can see, provides a relatively "quick" and useful path for those who'd like to create an app for the iPhone or iPad.

After viewing the first video,  I am happy to say that I'm impressed with the way the professor, Paul Hegarty, explains it all.  




Course Description
"Updated for iOS 6. Tools and APIs required to build applications for the iPhone and iPad platform using the iOS SDK. User interface designs for mobile devices and unique user interactions using multi-touch technologies. Object-oriented design using model-view-controller paradigm, memory management, Objective-C programming language. Other topics include: object-oriented database API, animation, multi-threading and performance considerations. Prerequisites: C language and programming experience at the level of 106B (Programming Abstractions) or X. Recommended: UNIX, object-oriented programming, graphical toolkits."  -iTunesU Website

RELATED
iTunes U links to all course materials, including videos
Coding Together: Developing Apps for iOS Videos and Lecture Slides (iTunesU)
Website with files for course-related code
StackOverflow CS193P tagged items (Stack Overflow is an online resource for coding Q&As)

Jun 6, 2013

Interactive Displays and "Billboards" in Public Spaces; Pervasive Displays 2013

The 2013 International Symposium on Pervasive Displays (PerDis 2013), recently convened  in Mountain View, California.  Since I couldn't attend this conference, I was happy to learn from Albrecht Schmidt that the conference proceedings were recently uploaded to the ACM Digital library.  There are many exciting things going on in this interdisciplinary field!

Researchers involved with the Instant Places project, described in the video below, presented their work at PerDis 2013. The Instant Places project was part of PD-Net, a series of research efforts exploring the future of pervasive display networks in Europe. (See the "Related" section for additional references and links.)


Instant Places: Tools and Practices for Situated Publication in Display Networks

Below is information from the Instant Places video and website:
"The video describes a novel screen media system that explores new practices for individual publication and identity projection in public digital displays." 

"Instant Places has been developed by the Ubicomp group of the Information Systems Department, at the University of Minho, and has been funded within the scope of pd-net: Towards Future Pervasive Display Networks, by the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 244011."

Saul Greenberg was the keynote speaker at PerDis 2013.  His keynote, "Proxemic Interactions: Displays and Devices that Respond to Social Distance", highlights how far off-the-desktop our digital/physical lives have become, and how this has influenced recent research in human-computer interaction. Saul is a professor at the University of Calgary and leads research in Human Computer Interaction, Computer Supported Cooperative Work, and Ubiquitous Computing.

Although the video of Saul Greenberg's presentation below is not from PerDis 2013, it touches on the same topics and is worth taking an hour to watch.  In this video, Greenberg presents an overview of the history of human-computer interaction. He also offers a discussion of how an understanding of social theory, the perception of spatial relationships, and embodied interaction can be applied to the design of natural user interfaces and interactive systems.  Useful examples of interaction design explorations, within an ecological context, are provided later in the video.

Proxemic Interactions: the New Ubicomp?




RELATED


My Backstory
Regular readers of this blog know that the subject of interactive displays in public spaces holds my interest. When I was taking computer courses during the mid 2000s, I focused some of my energy on projects designed for large interactive displays, inspired by reading articles like "Physically Large Displays Improve Performance on Spatial Tasks" (Desney S. Tan, Darren Gergle, Peter Scupelli, and Randy Pausch) and "Dynamo: public interactive surface supporting the cooperative sharing and exchange of media" (Shahram Izadi, Harry Brignull, Tom Rodden, Yvonne Rogers, Mia Underwood).

Jeff Han's 2006 TED talk was another inspiration. I remember my excitement as I watched his demonstration of an interactive multi-touch screen the size of a drafting board, before the iPhone/iPad was born.  Another inspiration was Hans Rosling's TED Talk about health statistics, with his animated interactive data visualizations presented on a huge screen.

The following year, I stumbled upon the NUI-Group while searching for information about multi-touch displays, and was inspired by many of the early members of the group.  I also became acquainted with a world-wide network of people who share similar interests, such as Albrecht Schmidt and his team of researchers at the University of Stuttgart. This busy group recently presented at PerDis 2013 and at CHI 2013 and is involved in a wide range of ongoing projects.

INTERACTIVE DISPLAYS
Alt, F., Sahami, A., Kubitza, T., Schmidt, A.  Interaction Techniques for Creating and Exchanging Content with Public Displays. In: Proceedings of the 2013 ACM Annual Conference on Human Factors in Computing Systems
Hinrichs, U., Carpendale, S., Valkanova, N., Kuikkaniemi, K., Jacucci, G., Vande Moere, A.  Interactive Public Displays. IEEE Computer Graphics and Applications, Vol. 33(2) (25-27)
PerDis 2013 Program
Sample Papers:
Otero, N., Muller, M., Alissandrakis, A., and Milrad, M. Exploring video-based interactions around digital public displays to foster curiosity about science in the schools. PerDis 2013 (pdf)
Alt, F., Schneegass, S., Girgis, M., Schmidt, A. Cognitive Effects of Interactive Public Display Applications. Proceedings of the 2nd ACM International Symposium on Pervasive Displays. 2013
Langheinrich, M., Schmidt, A., Davies, N., and Jose, R.  A practical framework for ethics: the PD-net approach to supporting ethics compliance in public display studies. Proceedings of the 2nd ACM International Symposium on Pervasive Displays. 139-143

Note:  Members of ACM have access to all of the proceedings of PerDis 2013 in the ACM Digital Library. Non-members have access to the abstracts.

PD-NET
PD-Net 
PD-NET Publications - a great reference list, with links to many papers
Reading List on Pervasive Public Displays
About Instant Places
About the Living Lab for Screens Set

DOOH-DIGITAL OUT-OF-HOME
Daily Digital Out of Home post "Billboards That Look Back" : Could miniature cameras embedded in ads lead to Big Brother at the mall? The World Is My Interactive Interface, 5/28/08
J. Müller et al., "Looking Glass: A Field Study on Noticing Interactivity on a Shop Window," Proc. 2012 SIGCHI Conf. Human Factors in Computing Systems (CHI 12), ACM, 2012, pp. 297–306
Michelis, D., Meckel, M. Why Do We Want to Interact With Electronic Billboards in Public Space?  First Workshop on Pervasive Advertising, Pervasive 2009, 5/11/09
The Rage of Interactive Billboards
The Print Innovator, 11/28/12
10 Brilliant Interactive Billboards (Videos)
Amy-Mae Elliot, Mashable, 8/21/11


SOME INTERESTING EARLIER WORK
Jeff Han's 2006 TED Talk (This is worth revisiting, as it came out before the iPhone, iPad, etc.)


Tan, D.S., Gergle, D., Scupelli, P., Pausch, R. Physically large displays improve performance on spatial tasks. ACM Transactions on Computer-Human Interaction, V13(1) 2006 (71-99)

Revisiting promising projects: Dynamo an application for sharing information on large interactive displays in public spaces (blog post)
Lynn Marentette, Interactive Multimedia Technology, 09/16/07

Brignull, H., Izadi, S., Fitzpatrick, G., Rogers, Y., Rodden,  T. The introduction of a shared interactive surface into a communal space. Proceedings of the 2004 ACM conference on Computer supported cooperative work (CSCW'04), Chicago, ACM Press, 2004 (pdf)


Izadi, S., Brignull, H., Rodden, T., Rogers, Y. and Underwood, M. Dynamo: public interactive surface supporting the cooperative sharing and exchange of media. In Proc. User Interface Software and Technology (UIST '03), Vancouver, ACM Press, 2003, 159-168. (pdf)

Proxemics (Wikipedia)




May 27, 2013

Leap Motion and Google Earth Experiment: Cute Doggie Photo-globe Mashup


I finally experimented with my Leap Motion controller and Google Earth, using a mashup I created a few years ago with pictures of cute dogs from my Flickr photo-stream.  In the video below, you can see that my gesture navigation skills still need some practice!

I should have watched the following video of Leap Motion in action with Google Earth before trying this experiment at home : )  

I am pretty sure that developers will be able to tweak Leap Motion + Google Earth interaction in the near future.  I'd like to adapt it for use with kids as well as adults who have mild motor impairments.


Cute Doggies Photo-Globe Mash-up using Google Earth and a Flickr Set (How-to)

If you'd like to make your very own photo-globe using Google Earth and Flickr photos, here are the directions, ported and updated from a previous post:


This photo is a screen shot of photos of just about every dog I know, and some that happened to cross my path. In this post, I'll share some information about how to create a photo-globe in Google Earth. 

The first step is to make sure you have lots of pictures related to your theme uploaded to a site such as Flickr.  (You can also create a photo-globe using pictures from your computer's hard drive.)

To get the pictures into Google Earth, I used the Image Overlay feature, and in the "link" textbox, I entered the image URL for each picture that I'd previously loaded as a set in Flickr.



To prepare for this, go to the "View" tab in the upper left-hand section of your screen and make sure that "Toolbar" is checked. Also make sure that "Grid" is selected, as this will make it easier to arrange and align your pictures.  You can turn off this feature later. Near the top of the screen, click on the Image Overlay icon. (I've highlighted it in the picture.)



You'll have to enter the URL of the image you'd like to add to the globe in the "Link" textbox, which I've highlighted in the above picture.  In this case, I've used a link to one of my pictures in a Flickr set I created for this project.

One thing to keep in mind is that the picture will take up a much larger space than you might prefer, so you'll have to adjust the size using the green markers:

Positioning the Overlay in the Viewer
The following directions are from the "Positioning the Imagery in the Viewer" section of Google Earth's help documentation:


  1. Use the center cross-hair marker to slide the entire overlay on the globe and position it from the center. (Tip: do this first.)
  2. Use the triangle marker to rotate the image for better placement.
  3. Use any of the corner cross-hair markers to stretch or skew the selected corner. If you press the Shift key when selecting this marker, the image is scaled from the center.
  4. Use any of the four side anchors to stretch the image in or out from the selected side. If you press the Shift key when doing this, the image is scaled from the center.

TIP:  Try positioning the center of the image as a reference point first, and then use the Shift key in combination with one of the anchors to scale the image for best positioning.
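Behind the scenes, each Image Overlay you create is stored in your Places panel as a KML "GroundOverlay". If you save your overlays to a .kml file, you can also fine-tune the positioning by hand. Here is a minimal, hand-written sketch of one overlay; the image URL and coordinates are placeholders, not ones from my photo-globe:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <GroundOverlay>
    <name>doggie-photo-1</name>
    <Icon>
      <!-- Direct image URL, e.g. copied from Flickr -->
      <href>http://example.com/photo.jpg</href>
    </Icon>
    <!-- Bounding box (decimal degrees) where the photo is draped on the globe -->
    <LatLonBox>
      <north>40.0</north>
      <south>20.0</south>
      <east>-60.0</east>
      <west>-80.0</west>
    </LatLonBox>
  </GroundOverlay>
</kml>
```

Editing the LatLonBox numbers directly can be quicker than dragging the green markers when you want neat, evenly sized tiles.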

Directions updated to reflect latest version of Flickr, as of 5/27/13:

To find the image URL for a photo in Flickr that you wish to link on your photo-globe, right-click the desired photo and select "Copy Image URL".


Put your cursor in the Link section of the "New Image Overlay" dialog box in Google Earth, and right click to select "Paste" from the drop-down menu.


Then repeat the process.  It helps to name each picture so that you can find it easily in Google Earth.

To enhance your mash-up, you can add place-marks that contain URLs that link to additional information about the subject of a picture, such as blog posts with embedded videos and/or text related to a picture, and so forth. Directions can be found in Google Earth's help section.

The process of building a photo-globe in Google Earth is a bit tedious.  If someone has a short-cut to share, please let me know!
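One possible shortcut is to generate the overlay KML with a small script, then open the resulting .kml file in Google Earth instead of pasting each URL by hand. The sketch below is my own invention (the grid layout, tile size, and function names are assumptions, and you'd still gather the direct image URLs yourself via "Copy Image URL"):

```python
def make_ground_overlay(name, image_url, north, south, east, west):
    """Return one KML GroundOverlay element for a single photo."""
    return (
        "<GroundOverlay>"
        f"<name>{name}</name>"
        f"<Icon><href>{image_url}</href></Icon>"
        "<LatLonBox>"
        f"<north>{north}</north><south>{south}</south>"
        f"<east>{east}</east><west>{west}</west>"
        "</LatLonBox>"
        "</GroundOverlay>"
    )

def make_photo_globe_kml(image_urls, tile_deg=20):
    """Tile the photos onto the globe in a simple grid of tile_deg-degree squares."""
    per_row = int(360 / tile_deg)
    overlays = []
    for i, url in enumerate(image_urls):
        row, col = divmod(i, per_row)
        west = -180 + col * tile_deg        # step eastward across each row
        north = 80 - row * tile_deg         # step southward row by row
        overlays.append(
            make_ground_overlay(f"photo-{i}", url,
                                north, north - tile_deg,
                                west + tile_deg, west)
        )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
        + "".join(overlays)
        + "</Document></kml>"
    )

# Write the result to a file, then open it with File > Open in Google Earth:
# open("photo_globe.kml", "w").write(make_photo_globe_kml(my_urls))
```

You would still want to nudge individual overlays afterward, but scripting the bulk placement takes the tedium out of entering dozens of URLs one at a time.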


RESOURCES
Google Earth
Flickr
Programmable Web (My hunch is that this site might provide some information about shortcuts for creating a photo-globe in Google Earth.)
LEAP Motion

May 24, 2013

Summer of Fun and Game Development with Unity



I just downloaded the GameAnalytics Unity Package and plan to spend some of my summer break digging deeper into Unity.  I have been experimenting with Xcode stuff and Leap Motion.  

I want to make the most of my five-week "Summer of Code", and Unity fits the bill.  It offers a wealth of wonderful online learning resources.

I've explored Unity in the past and loved it, and am very impressed with how it has evolved over the past few years.

I'll add more to this post later!


May 21, 2013

Xbox One and Kinect 2 for the Playground of the Future


The big news in tech today is the unveiling of the new Xbox One/Kinect 2 system.  For now, the video below might be the closest you'll get to the system.  Wired senior editor Peter Rubin had a chance to interview Microsoft's Scott Evans, who demonstrated the fascinating technical details in a family-room type setting.

Wired's interview of Scott Evans and demo of the new Xbox One and Kinect 2, using Active IR technology.



From what I learned, the new Kinect sensor has six times the fidelity of the previous version. Paired with the new Xbox One, it can do amazing things.  Engineers from around the world collaborated on this project, providing expertise in facial recognition, digital signal processing, speech recognition, machine learning, and computer vision.  The Xbox One is fueled by an 8-core x86 processor, supported by 8GB of RAM, which should handle the most demanding gamer's needs. It also includes a 500GB hard drive and a Blu-ray player.


The new system was designed to enhance the gaming/user experience. The 1080p camera provides a field of view that is 60 degrees larger than its predecessor's, and can handle a high level of detail.  It provides a better means of interpreting movement and orientation, and it processes skeleton and hand movements more precisely.  The system features "muscle man", a human-based physics model that is layered over the skeleton and depth map. It senses and calculates the forces the player uses while moving in a game.

What I find interesting is that the camera can detect the player's pulse by measuring subtle changes in the skin that can't be perceived by the naked eye.  It can also quickly identify each player (it handles up to six) and read facial expressions.  The active IR (infrared) system gives the new Kinect better accuracy than the original.

I wasn't able to find out much information regarding privacy issues with this system.  This is a concern, since it can sense your physiological responses, movement patterns, and facial expressions.  Over time, a good deal of very personal information would be gathered about each user. I shudder to think about the consequences if the data fell into the wrong hands.  

Possibilities for Special Needs Populations

I can see that the Xbox One + Kinect 2 system has the potential for games and other interactive applications for use in physical rehabilitation and fitness.  Since it can interpret facial expressions, it could also provide a way to support social skills learning among children and teens who have autism spectrum disorders.

RELATED

Microsoft invests a good deal of attention in proof-of-concept projects that may or may not become part of a commercial product.  Below is an example of IllumiRoom:


Hrvoje Benko, of Microsoft Research, discusses the IllumiRoom concept during an interview at CHI 2013.


Xbox One Website
The new Xbox One Kinect tracks your heart rate, happiness, hands and hollers
Matthew Panzarino, The Next Web, 5/22/13
Kinect 2 Full Video Walkthrough: The Xbox Sees You Like Never Before
Kyle Wagner, Gizmodo, 5/21/13
Hands-on with prototypes of the Xbox One and New Kinect Sensor
Ben Gilbert, engadget, 5/21/13
Efficient Human Pose Estimation from Single Depth Images
Shotton, J., Girshick, R., Fitzgibbon, A., Sharp, T., Cook, M., Finocchio, M., Moore, R., Kohli, P., Criminisi, A., Kipman, A., Blake, A.   Video
Consumer Depth Cameras for Computer Vision:  Research Topics and Applications
Fossati, A., Gall, J., Grabner, H., Ren, X., Konolige, K. (Eds.)
Xbox One: Microsoft's supergeeks reveal what's inside the hardware
Dean Takahashi, VentureBeat, 5/21/13
Next Xbox Will Face New Array of Rivals
Nick Wingfield, New York Times, 5/21/13