Showing posts with label HCI. Show all posts

Dec 14, 2010

"Design is the Solution - From Visual Clarity to Clarity in the Mind" (gem of an article by Gerd Waloszek, SAP User Experience)

Design is the Solution - From Visual Clarity to Clarity in the Mind
Gerd Waloszek, SAP User Experience, 12/7/10


In this article, Gerd Waloszek provides an overview of traditional usability principles and shares his thoughts about broadening the concept of clarity to include mental states and models. His article includes charts/concept maps as well as links to great resources.


If this topic interests you, plan to block out some time to read this article and explore the links.

Dec 12, 2010

Interactive Surveillance: Live digital art installation by Annabel Manning and Celine Latulipe

Interactive Surveillance, a live installation by artist Annabel Manning and technologist Celine Latulipe, was held at the Dialect Gallery in the NoDa arts district of Charlotte, N.C. on Friday, December 10th, 2010. I attended this event with the intention of capturing some of the interaction between the participants and the artistic content during the experience, but I came away with so much more. The themes embedded in the installation struck a chord with me on several different levels.


Friday's version of Interactive Surveillance provided participants the opportunity to use wireless gyroscopic mice to manipulate simulated lenses on a large video display. The video displayed on the screen was a live feed from a camera located in the stairway leading to the second-floor gallery.  When both lenses converged on the screen, a picture was taken of the stairway scene, and then automatically sent to Flickr. Although it was possible for one person to take a picture of the scene holding a mouse in each hand, the experience was enhanced by collaborating with a partner.
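The trigger mechanic described above - two lenses converge, a photo is taken and sent to Flickr - can be sketched in a few lines. This is purely an illustrative sketch of the idea, not the installation's actual code; the `camera` and `uploader` objects and the distance threshold are my own assumptions.

```python
import math

def lenses_converged(lens_a, lens_b, threshold=50.0):
    """Return True when the two on-screen lenses overlap closely enough.

    lens_a and lens_b are (x, y) screen positions; the threshold (in
    pixels) is an assumed value.
    """
    ax, ay = lens_a
    bx, by = lens_b
    return math.hypot(ax - bx, ay - by) < threshold

def maybe_capture(lens_a, lens_b, camera, uploader):
    """Snap and upload a frame only when the two lenses converge."""
    if lenses_converged(lens_a, lens_b):
        frame = camera.snapshot()     # hypothetical camera API
        uploader.upload(frame)        # e.g., post to the Flickr photostream
        return True
    return False
```

With one mouse in each hand (or a partner holding the second mouse), each participant steers one lens; nothing fires until the two positions fall inside the threshold.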

In another area of the gallery, guests had the opportunity to use wireless mice to interact with previously recorded surveillance video on another large display. The video depicted people crossing desert terrain at night from Mexico to the U.S. In this case, the digital lenses on the screen functioned as searchlights, illuminating, and targeting, people who would prefer not to be seen or noticed in any way. On a nearby wall was another, smaller screen displaying the same video content as the larger screen. This interaction is demonstrated in the video below:



A smaller screen was set out on the refreshment table so participants could view the Flickr photostream of the "surveillance" pictures taken of the stairway.   On a nearby wall was a smaller digital picture frame that provided a looping video montage of Manning's photo/art of people crossing the border.

The themes explored in the original Interactive Surveillance include border surveillance, shadow, and identity, delivered in a way that creates an impact beyond the usual chatter of pundits, politicians, and opinionators. The live installation added another layer to the event by allowing participants to be the target of the "stairway surveillance", as well as to play the role of someone who conducts surveillance.

Reflections:
In a way, the live component of the present installation speaks to the concerns of our present era, where the balance between freedom and security is shaky at best. It is understandable that video surveillance is used in our nation's efforts to protect our borders. But in our digital age, surveillance is pervasive. In most public spaces it is no longer possible to avoid the security camera's eye.  Our images are captured and stored without our explicit knowledge. We do not know the identities or the intentions of those who view us, or our information, remotely. 

We are numb to the ambient surveillance that surrounds us. We go about our daily activities without noticing it. We are silently tracked as we move across websites, dart in and out of supermarkets and shopping malls, and pay for our purchases with plastic. Our smartphones know where we are located and will give out our personal information if we are not vigilant, as our default settings are often "public".

It is easy to forget that the silent type of surveillance exists.  It is not so easy to ignore more invasive types of "surveillance".  We must agree to submit to a high degree of inspection in the form of metal detectors, baggage searches, and in recent weeks, uncomfortable physical pat-downs, for the privilege of traveling across state borders by plane, within our own country.  In some airports, we are subject to whole-body scans that provide strangers with views of our most private spaces. We go along with this effort and prove our innocence on-the-spot, for the greater good.   Conversely, we have multiple means of conducting our own forms of surveillance, through Internet searches, viewing pictures and videos posted to the web, and playing around with Google Streetview. 

As I wandered around the Dialect Gallery with my video camera, I realized that I was conducting my own form of surveillance, adding another layer to the mix. Unfortunately, some of the time I had my camera set to "pause" when I thought I was filming, and vice versa, so I did not capture people using the wireless mice to interact with the content on the displays. I went ahead with my mission and created a short video reflection of my impressions of Interactive Surveillance. If you look closely at the video between :40 and :47, you'll see some people across the street from the gallery that I unintentionally captured; now they are part of my surveillance.

Although the video below was hastily edited, it includes music and sounds from the iMovie library that approximated the "soundtrack" that formed in my mind as I experienced the exhibit.

To get a better understanding of Interactive Surveillance,  I recommend the following links:


Barbara Schrieber, Charlotte Viewpoint



Video Reflection of Interactive Surveillance (Lynn Marentette, 12/10/10)

Live Installation: Interactive Surveillance, by Annabel Manning and Celine Latulipe from Lynn Marentette on Vimeo.



Interactive Surveillance Website



Interactive Surveillance Flickr Photostream

Dec 11, 2010

SMALLab Update: Embodied and Engaged Learning - ASU researchers partner with GameDesk

SMALLab is an interdisciplinary collaborative project at the Arts, Media and Engineering program at Arizona State University, and includes people from fields such as education, art, theatre, computer science, engineering, and psychology.  The SMALLab provides students with a multi-sensory, multi-modal way of learning concepts in an immersive environment, and uses a motion capture system that tracks the position of the students as they move and interact within the environment.

SMALLab's project lead is David Birchfield,  a media artist, researcher, and educator who focuses on K-12 learning, media art installations, and live computer music performances.  SMALLab researchers have recently partnered with GameDesk to develop a 6th grade curriculum for a GameDesk charter school in 2012. (Information and links related to GameDesk are located in the RELATED section of this post.)

Below is a detailed excerpt from an overview of SMALLab:
"In today’s world, digital technology must play a central role in students’ learning. A convergence of trends in the learning science and human-computer interaction (HCI) research offers new theoretical and technological frameworks for learning. in particular, mixed-reality, experiential media systems can support learning in a way that is social, collaborative, multimodal, and embodied. These systems comprise a new breed of student-centered learning environments [SCLE’s]. Importantly, they must address the practicalities of today’s classrooms and informal learning environments (eg.: space, infrastructure, financial resources) while embracing the innovative forms of interactivity that are emerging from our media research communities (eg: multimodal sensing, real time interactive media, context aware computing)...
...SMALLab is an extensible platform for semi-immersive, mixed-reality learning. By semi-immersive, we mean that the mediated space of SMALLab is physically open on all sides to the larger environment. Participants can freely enter and exit the space without the need for wearing specialized display or sensing devices such as head-mounted displays (HMD) or motion capture markers. Participants seated or standing around SMALLab can see and hear the dynamic media, and they can directly communicate with their peers that are interacting in the space. As such, the semi-immersive framework establishes a porous relationship between SMALLab and the larger physical learning environment. By mixed-reality, we mean that there is an integration of physical manipulation objects, 3D physical gestures, and digitally mediated components. By extensible, we mean that researchers, teachers, and students can create new learning scenarios in SMALLab using a set of custom designed authoring tools and programming interfaces."

Below are a few videos about SMALLab, and information about GameDesk, an organization that is collaborating with SMALLab in California.


Below is a demonstration of a Smallab learning activity:

SMALLab from SMALLab on Vimeo.


RELATED
Sara Corbett, NYTimes Magazine, 9/15/10

Info about GameDesk, from the GameDesk website:
"GameDesk is a 501(c)3 nonprofit research and outreach organization that seeks to reshape models for learning through game-play and game development. The organization looks to help close the achievement gap and engage students to learn core STEM curriculum. It develops project-based learning with a strong focus on purpose, ownership, and personal value. The organization (originally developed out of research and support at the University of Southern California's IMSC) has now been in development, practice, and/or evaluation for over two years in various schools in the Los Angeles area." -Gamedesk

Gamedesk Concept Chart

Cross-posted on the Tech Psych blog.

Nov 30, 2010

Call for Papers - Child Computer Interaction: Workshop on UI Technologies and Educational Pedagogy, in conjunction with CHI 2011, Vancouver, May 7th or 8th

CALL FOR PAPERS 
Child Computer Interaction 
in conjunction with CHI 2011, Vancouver, Canada
May 7th or May 8th 2011

Topic: Given the emergence of Child Computer Interaction and the ubiquitous application of interactive technology as an educational tool, there is a need to explore how next generation HCI will impact education in the future. Educators are depending on the interaction communities to deliver technologies that will improve and adapt learning to an ever-changing world. In addition to novel UI concepts, the HCI community needs to examine how these concepts can be matched to contemporary paradigms in educational pedagogy. The classroom is a challenging environment for evaluation, thus new techniques need to be established to prove the value of new HCI interactions in the educational space. This workshop provides a forum to discuss key HCI issues facing next generation education.

We invite authors to present position papers about potential design challenges and perspectives on how the community should handle the next generation of HCI in education. Topics of interest include:

  1.  Gestural input, multitouch, large displays, multi-display interaction, response systems

  2.  Mobile Devices/mobile & pervasive learning

  3.  Tangible, VR, AR & MR, Multimodal interfaces, universal design, accessibility

  4.  Console gaming, 3D input devices, 3D displays

  5.  Co-located interaction, presentations, tele-presence, interactive video

  6.  Child Computer Interaction, Educational Pedagogy, learner-centric, adaptive “smart” applications

  7.  Empirical methods, case studies, linking of HCI research with educational research methodology

  8. Usable systems to support learning and teaching: Ecology of learning, anywhere, anytime (UX of cloud computing to support teaching and learning)

Submission: The deadline for workshop paper submissions is January 14, 2011. Interested researchers should submit a 4-page position paper in the ACM CHI adjunct proceedings style to the workshop management system. Acceptance notifications will be sent out February 20, 2011. The workshop will be held May 7 or May 8, 2011 in Vancouver, Canada. Please note that at least one author of an accepted position paper must register for the workshop and for one or more days of the CHI 2011 conference.


Contact: Edward Tse, SMART Technologies, edwardtse@smarttech.com


WORKSHOP ORGANIZERS
Edward Tse, SMART Technologies
Johannes Schöning, DFKI GmbH
Yvonne Rogers, Pervasive Computing Laboratory, The Open University
Jochen Huber, Technische Universität Darmstadt
Max Mühlhäuser, Technische Universität Darmstadt
Lynn Marentette, Union County Public Schools, Wolfe School
Richard Beckwith, Intel

Nov 15, 2010

Human-Machine-Music Interaction: KarmetiK Machine Orchestra (Video, links)

Here is an example of innovative interaction between humans, machines, and music:


KarmetiK Machine Orchestra - Live at REDCAT Walt Disney Hall - Los Angeles - Jan 27, 2010 from KarmetiK on Vimeo.


Information from the KarmetiK Machine Orchestra Vimeo page:
"On January 27th, 2010, KarmetiK and California Institute of the Arts brought together a group of interdisciplinary artists to perform in a revolutionary production. During this performance, The Machine Orchestra, a collective of musicians, engineers, dancers, and theatre designers, gave an audience at the Walt Disney Concert Hall's REDCAT performance space a glimpse of the future: one in which computers, robots, and humans join forces to make music. Featuring a cast of musicians, new musical interfaces, and musical robotics, The Machine Orchestra fused a wide array of musical styles ranging from free electronic improvisation to world dance music. This DVD features uninterrupted footage of The Machine Orchestra's debut concert, a performance exploring human interaction with KarmetiK's collection of musical robots: MahaDeviBot, GanaPatiBot, Tammy, Raina, and ReyongBot. Directed by Ajay Kapur and Michael Darling."
Music Director, Co-Creator: Ajay Kapur
Production Director, Co-Creator: Michael Darling
Guest Electronic Artists: Curtis Bahn & Perry Cook
World Music Performers: Ustad Aashish Khan, Pak Djoko Walujo, & I Nyoman Wenten
Multimedia Performer-Composers: Charlie Burgin, Dimitri Diakopoulos, Jordan Hochenbaum, Jim Murphy, Owen Vallis, Meason Wiley, and Tyler Yamin

Visual Design: Jeremiah Thies
Dance: Raakhi Sinha & Kieran Heralall
Lighting Design: Tiffany Williams
Sound Design: John Baffa
Production: Lauren Pratt
Editing: Meason Wiley
Filming: Benny Schuetze 

machineorchestra.com
Follow KarmetiK on Facebook and Twitter: 
facebook.com/karmetik
twitter.com/karmetik



Detailed information about this performance and Machine Orchestra:

Lisa Zyga, Physorg.com 

MACHINE ORCHESTRA
KarmetiK Machine Orchestra

RELATED
Building a Hybrid Man/Machine Orchestra, Pt 1
Jordan Hochenbaum, Create Digital Music 1/25/10

Jordan Hochenbaum, Create Digital Music 4/22/10




Direct links to the publications listed below, and more, on the  Publications: Refereed Journals and Conference Papers page of the Karmetik website.


Kapur, A. & M. Darling A Pedagogical Paradigm for Musical Robotics, Proceedings of the
International Conference on New Interfaces for Musical Expression, Sydney, Australia, June 2010.



Hochenbaum, J., Kapur, A., & M. Wright, Multimodal Musician Recognition, Proceedings of the International Conference on New Interfaces for Musical Expression, Sydney, Australia, June 2010


Vallis, O., Hochenbaum, J., & A. Kapur, A Shift Towards Iterative and Open-Source Design for Musical Interfaces, Proceedings of the International Conference on New Interfaces for Musical Expression, Sydney, Australia, June 2010


Hochenbaum, J., Vallis, O., Diakopoulos, D., Murphy, J. & A. Kapur, On Designing Expressive Musical Interfaces for TableTop Surfaces, Proceedings of the International Conference on New Interfaces for Musical Expression, Sydney, Australia, June 2010


Murphy, J., Kapur, A., & C. Burgin, The Helio: A Study of Membrane Potentiometers and Long Force Sensing Resistors for Musical Interfaces, Proceedings of the International Conference on New Interfaces for Musical Expression, Sydney, Australia, June 2010

Nov 2, 2010

EyeTube for YouTube! Eye-gaze interaction software, free and downloadable from GazeGroup

Gaze interaction systems provide access to computers and the rich content now available on the web for many people with disabilities.  Unfortunately, commercial gaze tracking systems are very expensive and at times, difficult to calibrate.  There is hope!


Following up on my recent post, "Open-source Eye-tracking: The ITU Gaze Tracker 2.0 Beta", I thought I'd share the GazeGroup's EyeTube for YouTube interface.  


What is great about EyeTube for YouTube is that it provides two different interfaces. The simplified version is icon-based and looks good for younger children or people with cognitive disorders. The second version is appropriate for people who can navigate more complex visual representations of content.


EyeTube requires a Windows-based system and .NET 3.5 at this time. It can be downloaded from the GazeGroup website. If you plan to download the application, make sure you also have a YouTube account. To get the application up and running, you'll need to change the settings (EyeTubeSettings.xml) to match your account. (If you don't know much about changing settings or XML, ask someone you know who works in IT.)
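For readers comfortable with a little scripting, here is one hedged way to edit an XML settings file from Python. The `<username>` element name below is a hypothetical placeholder of mine; check the actual EyeTubeSettings.xml that ships with the download for the real tag names.

```python
# Illustrative only: the <username> element is an assumed tag name, not
# necessarily what EyeTubeSettings.xml actually uses.
import xml.etree.ElementTree as ET

def set_account(path, username):
    """Overwrite the account name stored in an XML settings file."""
    tree = ET.parse(path)
    node = tree.getroot().find("username")   # hypothetical tag name
    if node is None:
        raise ValueError("no <username> element found in " + path)
    node.text = username
    tree.write(path)
```

Of course, opening the file in a plain text editor and changing the value by hand works just as well.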


Below is the icon-based version of the eye-gaze interface for YouTube:
EyeTube - Gaze Interaction for YouTube (simplified version)


Feature-rich version of the EyeTube interface for YouTube:
EyeTube - Gaze Interaction for YouTube

From the GazeGroup site:

"The EyeTube prototype offers a feature-rich eye controlled interface for the popular YouTube service. Instead of emulating a mouse pointer and interacting with a web browser, the EyeTube interface is especially designed to be driven by gaze input. It offers a wide range of features such as keyword searching, popular video feeds, favorites and social aspects such as subscriptions, friends and commenting on videos. The highly optimized interface allows for a streamlined interaction which is alleviated from the Midas Touch problem. In most previous gaze interfaces selection is made by a dwell time activator, e.g. fixate a button for a specific amount of time and it will execute the function. In the EyeTube interface a fixation on a UI element will highlight it and a second fixation on the activation button is required to execute the function. This removes the stress of having to constantly move the eyes to avoid unintentional activation."
"The EyeTube also exists in another simplified incarnation developed for users who are distracted by a larger number of options. It supports basic features such as browsing categories, optional keyword searching and favorites."
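The contrast between classic dwell-time activation and EyeTube's two-fixation scheme can be sketched as a toy model. This is my own illustration of the idea described in the quote above, not GazeGroup's code; the class names, timing, and the "activate_button" element are assumptions.

```python
class DwellSelector:
    """Classic dwell: fixating any element long enough activates it."""
    def __init__(self, dwell_time=1.0):
        self.dwell_time = dwell_time  # seconds; illustrative value

    def update(self, element, fixation_duration):
        # Midas Touch risk: every sufficiently long look fires,
        # whether the user intended a selection or was just reading.
        return element if fixation_duration >= self.dwell_time else None

class ConfirmSelector:
    """EyeTube-style: a first fixation only highlights an element; a
    second fixation on a dedicated activation button executes it."""
    def __init__(self):
        self.highlighted = None

    def update(self, element):
        if element == "activate_button":
            chosen, self.highlighted = self.highlighted, None
            return chosen              # fires only after explicit confirmation
        self.highlighted = element     # just looking never activates anything
        return None
```

With the confirm scheme, browsing the thumbnails freely costs nothing; activation always requires a deliberate glance at the activation button.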

RELATED
The GazeGroup
(The individuals mentioned below may currently be working elsewhere, but are still involved in gaze research in some way.)

GazeGroup Research Areas

COGAIN (Communication by Gaze Interaction)

ACM CHI Conference Articles
San Agustin, J., Skovsgaard, H., Hansen, J. P., and Hansen, D. W. 2009. Low-cost gaze interaction: ready to deliver the promises. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI EA '09. ACM, New York, NY, 4453-4458. DOI= http://doi.acm.org/10.1145/1520340.1520682
San Agustin, J., Hansen, J. P., Hansen, D. W., and Skovsgaard, H. 2009. Low-cost gaze pointing and EMG clicking. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI EA '09. ACM, New York, NY, 3247-3252. DOI= http://doi.acm.org/10.1145/1520340.1520466 
Tall, M., Alapetite, A., San Agustin, J., Skovsgaard, H. H., Hansen, J. P., Hansen, D. W., and Møllenbach, E. 2009. Gaze-controlled driving. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI EA '09. ACM, New York, NY, 4387-4392. DOI= http://doi.acm.org/10.1145/1520340.1520671

UPDATE

Eye-controlled games and leisure applications from the COGAIN wiki: http://www.cogain.org/wiki/Leisure_Applications
  • EyeArt - EyeArt eye-drawing program, developed by Andre Meyer and Markus Dittmar, Technical University of Dresden, Applied Cognitive Research Unit, Germany.
  • GazeTrain - Gaze-controlled action oriented puzzle game, developed by Lasse Farnung Laursen, Technical University of Denmark
  • Puzzle - Simple puzzle game that can be played with eye movements, developed by Vytautas Vysniauskas, Siauliai University, Lithuania
  • Road to Santiago - Gaze-controlled adventure game (full game), developed by Javier Hernandez Sanchiz, Universidad Publica de Navarra, Spain
  • Snap Clutch - An application that uses eye gaze data to generate key and mouse events for playing games such as World of Warcraft and Second Life.
  • ASE: Accessible Surfing Extension for Firefox - Follow this link to access ASE, an Accessible Surfing Extension for Firefox, developed by Emiliano Castellina and Fulvio Corno at Politecnico di Torino. (Note that this is a beta version.)
  • Eye Gaze Music (SAW Selection Sets) - Point and Play – eye gaze (direct pointing) musical activities, developed by DART. Please note that the SAW (Special Access to Windows) framework application is needed to play these 15 music selection sets. SAW is available for free at http://www.oatsoft.org/Software/SpecialAccessToWindows
  • EyeTube - Gaze interaction for YouTube - Follow this link to get more information and download EyeTube at ITU GazeGroup's web pages
  • Eye3D and other head eye mouse software - Eye3D for education, and a collection of links to free software that works with head or eye mouse. Includes links to downloads and original sites.
  • Gaze-controlled Breakout - Follow this link to access a modified version of the LBreakout2 game which can be operated by an SMI eye tracker, developed by Michael Dorr et al. at University of Luebeck
  • Oleg Spakov's Freeware games for MyTobii - Follow this link to access MyTobii compatible games developed by Oleg Spakov, University of Tampere, Finland
  • Free ITU Gaze Tracker and applications - Download a webcam based open-source gaze tracker and several applications that work with it, developed at IT University of Copenhagen
  • GameBase - Check out the Eye-Gaze Games category at the SpecialEffect GameBase!
  • More information about Gaze-Controlled Games - Follow this link to see a list of online information resources on using gaze for the control of games and other leisure applications

Oct 30, 2010

Philipp Geist: Blending the Physical with the Digital; Google TV/Leanback, Vimeo's new Couch Mode, oh...and ViewSonic's 3D (glasses-less) pocket camcorder...

I'm thinking about getting one of the new "internet ready" TVs. I have a serious reason to do this. I'm working on some interactive video projects, and a couple of my projects are geared for teens and young adults who have autism.* My hunch is that many of my students would like to watch, and interact with, content optimized for Google TV and Vimeo's Couch Mode. The content is designed to look good on larger high-resolution flat-screen displays, and I'm sure it would be great on my school's newer SMARTBoards. I need to learn more about developing applications for this purpose.

(Currently I use my HP 22-inch TouchSmart PC to view web-based video content, and to evaluate websites that provide "touchable" and interactive content that might work well on interactive whiteboards.)

At any rate, I've been looking for great videos that have the potential for use at work with older students who have autism. I'm also looking for effective ways for the students to interact with multimedia and video content. This is important, since the students have minimal verbal communication skills and limited reading ability, if they can read at all. They learn about their world through visual means, and are capable of learning much more, but not through traditional means.

Since our school is focusing on globalization and learning about the cultures of other countries, I've been on the lookout for some interesting videos that might appeal to our students.  

Today I came across a great find: Philipp Geist. Who is Philipp Geist? According to his bio, Philipp works internationally as a light and multi-media artist in the mediums of video, performance, photography and painting. Some of his work focuses on architecture, history, and cultural heritage. A good example of his work is the installation he created for a festival in Thailand in 2009:

"The one-hour show is the central part of the celebrations and will be seen by thousands of visitors.  It interprets artistically the king's life and his work dedicated to public welfare. The art installation combines images of the kings and his social projects in the past and present with 3D animations of Thai natural and cultural heritage and abstract painterly passages." (from the Vimeo site)

Philipp Geist's Showreel

HIGH-RES MULTIMEDIA WEB CONTENT ON LARGE PANEL HD TV!
This might boost holiday gift sales and, in turn, give a little jolt to the economy. To do my duty for my country, I will continue to research Internet TV as I narrow down my selection for my new Internet-ready TV... Below is some information about Google TV, Google Leanback, and Vimeo's Couch Mode that I've recently gathered to share with my IMT followers:

GOOGLE TV:  "The web is now a channel"

"With Google Chrome and Adobe Flash Player 10.1, Google TV lets you access everything on the web. Watch your favorite web videos, view photos, play games, check fantasy scores, chat with friends, and do everything else you're accustomed to doing online. Plus, the world's best websites are now being perfected for television -- check out our Spotlight gallery for examples."  "The worlds' favorite websites are being tweaked and perfected for the television." -Google TV


I'm not too excited about the design of the application that transforms your Android phone or iPhone into a remote control. I hate most remote controls. According to Google TV, multiple phones can control the same TV, and you can use your voice to search, which seems like it would be a good thing... I wonder if they tested this out with real families, not just the families of Google TV techies.

GOOGLE LEANBACK Video (Integrated into GoogleTV)


Google Leanback 
When I visited the Leanback website, I encountered a screen suggesting that I type in what I was looking for. I typed in "lynnvm", the name of my YouTube channel. Apparently Google provides you with a randomly generated featured video in the background that has nothing to do with what you are looking for.


In this screen shot, my YouTube channel offerings are in the foreground. "Maleficent Halloween Tutorial" is what played in the background: 


VIMEO INTRODUCES COUCH MODE
Vimeo's version of Google's Leanback is Couch Mode. It is optimized for use on Google TV, so that makes things less complicated in the world of videoviewingland. According to Ryan Hefner's article on the Vimeo staff blog, "Couch Mode is a special new section of Vimeo that allows you to watch collections of videos (such as Staff Picks, your inbox, your videos, etc.) completely uninterrupted like a TV channel."
Couch Mode works on computers, but since it relies on HTML5 and CSS3, without Flash, it only works with Chrome and Safari browsers.  For more information, see the video below:



RELATED
"A few of our favorites include Net-A-Porter, which lets you watch runway videos and shop for high fashion; Meegenius, a place where you can read and customize children’s books; TuneIn, a personal radio for your TV; and The Onion which always gives us a good laugh." - Google TV Blog
MeeGenius: If you are a teacher, parent, kid, or lover of children's books, visit this interactive website ASAP. It is optimized for Google TV and works nicely on touch-enabled screens and devices.

As I was wrapping up this post, I came across information about ViewSonic's new 3D, glasses-less pocket camcorder. I'll update information about this new gadget when I have a chance to learn more about it.



Comment: The idea of developing interactive multimedia apps in 3D intrigues me. At this point, the technology is too new for an "armchair technologist" like me to pursue with my incredibly busy work obligations.  I don't have the money to buy a 3D video camera.  But I might try this out, if it is true that it only costs $238.00!

Viewsonic introduces 3Dv5 3D pocket camcorder, no glasses required
Darren Murph, Engadget, 10/20/10
Film Videos In 3D for Under $250 With Viewsonic's 3DV5
HotHardware, 10/28/10


* About me:  
I presently work full time as a school psychologist at a high school and at a program for students with more severe disabilities, including autism.  The students I work with have made amazing gains through the use of interactive multimedia applications, and also have responded well to video presented on the large IWB screens.   


I went back to school to take computer courses, initially so I could make interactive multimedia applications and games. I continue to blog about interactive multimedia,  emerging/ new technologies, and topics related to post-WIMP HCI/UX/ID/IA.  Although my "spare time" is limited,  I try to keep up my technical skills whenever I can by working on projects that can support the students I work with.