
Apr 6, 2011

mtBAR Multi-touch Bar Video (Aaron Bitler); Physical Vision, a Kinect Alternative Video (Jonathan Brill); DARPA's 3D Glassless Urban Photonic Sandtable

Here are a few interesting videos and links:


The mtBAR Comes Alive!
Aaron Bitler, 3/28/11


Physical Vision, The Alternative to Kinect
Jonathan Brill, Physical Vision, 2/26/11




Tim Stevens, 3/28/11, Engadget
"It's a DARPA project, a fully holographic table (no glasses required) that can be scaled up to six feet diagonally and allows visual depth of up to 12-inches. The technology comes courtesy of Zebra Imaging, which earlier wowed us with some insane 3D printouts, and the data will come from LIDAR systems like this ROAMS bot." -Tim Stevens

-Photo from Vizworld


Weekly Belated Linkfest
Rouli, Games Alfresco, 4/4/11


So much to share, so little time!

Feb 20, 2011

Human-Computer Interaction (HCI) Is Changing the World: BLUR Conference, February 22-23, Omni Orlando Resort (Includes video)

Tuesday, February 22, 2011 at 8:00 AM – Wednesday, February 23, 2011 at 5:00 PM (ET)
Omni Orlando Resort, 1500 Masters Blvd., ChampionsGate, Florida 33896
Phone: (407) 390-6664
Blur Conference

ABOUT BLUR 
(from the conference website)

"It’s easy to forget that the computer mouse is over 45 years old."

"What’s not as easy to forget is that we’re now collectively getting used to interacting with computers via means and interfaces that have moved way beyond the keyboard and the mouse — the iPhone and Wii being the most prominent examples."

"The truth is that we stand on the verge of a major revolution in the models of Human Computer Interaction (HCI). A revolution that will fly right past academic and into a world of retail, medical, gaming, military, public event, sporting, personal and marketing applications."

"From multi-touch to motion capture to spatial operating environments, over the next 10 years, everything we know about HCI will change."

"Blur is the only conference that is exploring the line of interaction between computers and humans in a substantive, real-world and hands-on way."

"At Blur, vendors, strategists, buyers and visionaries assemble to not only discuss the larger issues of HCI, but also to lay their hands on the latest in HCI technology. Blur is the only forum for a focused, hands-on exploration of the varied technologies evolving in the HCI."

"Come play, investigate, learn and apply at Blur — where we’re changing how you interact with computers forever." -Blur




BLUR Conference Agenda
(Note:   I added the links to conference participants and/or their organizations. Feel free to leave a comment if you know of any corrections or better links!)
Keynotes:

Neuroergonomics: How an Understanding of the Brain is Changing the Practice of Human Factors Engineering - Dr. Kay Stanney, Design Interactive
When Computers Feel: Understanding Human Emotional Measurement  - Hans Lee, EmSense
A Quick Hit on Mobility and HCI - Juan Pons, Swype
Panel Discussion: Haptics- The Beginnings and Future of Touch  - Nimish Mehta
Why HCI will lead the biggest tech revolution yet - Andrew Tschesnok, Organic Motion
Location as a Primary Interface Input - Matt Galligan, SimpleGeo; Nick Brachet, Skyhook Wireless
Robotics, Gaming and The Future of Entertainment- Paul Berberian, Orbotix
Virtual Coaches in Healthcare: A Vision of the Future - Dan Siewiorek, Carnegie Mellon University
10 reasons to be happy about giving computers emotion sensing - Dr. Rosalind Picard, MIT
Commercializing HCI Technology - Dr. Paul Kedrosky, Ewing Marion Kauffman Foundation and Dr. Gerry Barnett


Breakout Sessions:
Human Instrumentation - James Park, FitBit; Ben Rubin, Zeo; Jason Jacobs, RunKeeper;
Steve Larsen, moderator

New Museum Experiences: Learning from Multitouch and Multiuser Installations - Jim Spadaccini, Ideum
Kinect Hacks - Jonathan C. Hall; Lonergan Harrington; Jim Spadaccini; Sean Kean, moderator
Interactive Ads and Consumer Experiences - Alessio Signorini, Immersive Labs; Jon Fox, Helios Interactive
Augmented Reality - Ready for Primetime? - Vikas Reddy, Occipital; Carlin Getliffe, Omniar; Edwin Rivera, Credelis; Dan Rua, moderator
Building an Interface for Endangered Language Learners - Finn Thye and Kelson Adams, Univ of Colorado - Boulder
Alternative Interface Inputs - Gary Clayton, Nuance; Nick Langdale-Smith, Seeing Machines; RJ Auburn, Voxeo; Steve Larsen, moderator
Ewing Marion Kauffman Foundation "Idea Hack" - Commercializing HCI Technology: A Discussion - led by Paul Kedrosky
Building Natural User Interfaces - Thomas Peterson, SoftKinetic; Ohad Shvueli, PrimeSense; David Minnen, Oblong
3D Interactive Design for the Human Body - Albert Hwang
Panel: Will the Kinect Change the HCI Industry Forever? A Group Discussion
HCI in the 21st Century:  Technologies for Extending and Amplifying the Human Experience (pdf) - Dr. Charlie Hughes, UCF; Dr. David Pratt; Dr. Joseph LaViola;  moderated by Steve Fiore, UCF


Some Videos of HCI/Tech featured at Blur 2011 
360 Panorama occipitalhq


"Illuminous" Eric Gradman



"Standard Gravity" Eric Gradman, OpenKinect (libfreenect/python)
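As an aside for readers curious about the libfreenect/Python pipeline credited above: the Kinect reports depth as raw 11-bit values, not physical units, and the OpenKinect community has published an approximate empirical fit for converting those values to meters. A minimal sketch (the coefficients are the community's fit, not official Microsoft calibration):

```python
def kinect_raw_to_meters(raw_depth):
    """Approximate conversion of a raw 11-bit Kinect depth value to meters.

    Uses the empirical linear-inverse fit from the OpenKinect wiki; raw
    values of 2047 mean "no reading" and are returned as None.
    """
    if raw_depth >= 2047:
        return None
    return 1.0 / (raw_depth * -0.0030711016 + 3.3309495161)

# Example: a raw reading of 700 corresponds to roughly 0.85 m.
print(round(kinect_raw_to_meters(700), 2))
```

In a real libfreenect/Python hack, a function like this (or a precomputed 2048-entry lookup table, which is faster per frame) would be applied to each pixel of the depth image before any physics or interaction logic.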


Organic Motion Markerless Motion Capture


Advisory:
Steve Fiore, University of Central Florida
Bob Allen, Disney R&D
Kay Stanney, Design Interactive
Capt. Dylan Schmorrow, USN





Jan 22, 2011

Close Encounter with "Best Buy On": Example of a multi-channel marketing approach using in-store digital media that includes an on-line magazine.

Over the next 9 months or so, I'll be in the market for a computer, an iPad or something similar, a Kinect, and maybe a mid-size Internet-enabled HDTV. As a consequence, I turned into a Best Buy on my way home from work Friday to see what I could find. I didn't want to spend much time at the store, since I'm the type of shopper who goes in, finds the desired item(s), and leaves.


During this trip to Best Buy, my shopping habits were broken, at least temporarily.  I found myself wandering around, looking at the numerous displays of all sizes located about the store.  The displays distracted me from my intended mission.  Below is a slideshow of the pictures I snapped as I wandered about Best Buy:





As a result of the displays, I spent more time in the store than I'd planned. I left the store empty-handed. But that's OK, since I think that this strategy, in the long run, will have the power to entice me back to the store, and to leave knowing that I've made an informed purchase.


According to Gail Chiasson in her Daily DOOH post, "Best Buy's New In-Store Network", Best Buy established an internal advertising and editorial team in 2009 and officially launched a multichannel network called Best Buy On, which extends the in-store network of displays to an online magazine, also called Best Buy On. Best Buy On focuses on a different theme each month. This month's theme featured all of the cool things that were unveiled at the recent Consumer Electronics Show.


The bottom line with in-store DOOH is that customers are provided with enhanced information about various products in each department, and this information is coordinated with the enhanced information they can access on-line. Put together, this might lead to better sales, and happier repeat customers in the long run.


As I went about Best Buy,  I thought that it would be better if the in-store network of displays was less disjointed.  I also was puzzled by the somewhat inconvenient placement of some of the displays.  People who stand in the middle of an aisle to look at a screen block the view of the screen and the other items on display, and also block the paths of other shoppers!


Although I didn't spend a lot of time scrutinizing the content on all of the screens I encountered, what I did look at gave me the impression that there is need for improvement.  Pretty multimedia content might be attractive to the eye, but if it doesn't  meet the needs of customers in specific departments, it might not be effective.  


Here is a personal scenario:
My 80-year-old mother needs a new laptop and would also like to find an eReader that is suitable for someone her age.  She needs a new printer and is thinking about getting a digital camera.  She has some money to spend, but from what I can tell, she'd be a little confused by all of the flashing screens!  


I'm not sure what my mother would make of the displays, especially those that run animated comments about products from Facebook fans!


(Note:  I plan to go with my mom to Best Buy soon, and I'll share this experience on this blog in the future.)


Although there is room for improvement, the Best Buy On approach is a start. I would like to see more touch-enabled displays as part of the in-store network, given the fact that Best Buy sells touch-screen gadgets, tablets, and all-in-one PCs. The large-screen touch-enabled displays could provide customers with an Amazon-like advisory feature. This is something that would be very helpful to my mom and other non-geeks.


Best Buy also needs to get the Kinect demo up and running!


RELATED

FAR OUT 1/9/11 (Best Buy On video about products at CES)
What is Best Buy On?  (Description of on-line magazine)
Best Buy Addresses High-Tech Obsolescence
Shirley Brady, Brand Channel, 1/18/11
Buy Back Program from Best Buy Helps "Future-Proof" Today's CE Purchases
Jeremy Baier, Best Buy, 1/10/11
"Best Buy CEO Brian Dunn describes the consumer electronics retailer's just-launched Buy Back program as "a game-changer." The goal: to help customers embrace new technology with greater confidence."

Apr 11, 2013

Interesting Videos I Almost Missed (Future/Emerging/Creative Tech)

Creative Tech Videos I Almost Missed


I admit that sometimes I just don't have the time to hang out and watch interesting or quirky tech/future tech videos on the web.  Here are a few that passed me by the first time around.  

Enjoy!


The first video for this post is of an interactive game installed permanently for children at the Royal London Hospital.  Woodland Wiggle is a work commissioned by Vital Arts, in collaboration with Nexus Interactive Arts, Chris O'Shea, Felix Massie, and Brains & Hunch.  The game was created in C++ using openFrameworks, and relies on an Xbox Kinect camera.   The installation is part of play and garden spaces designed as healing environments for young patients.  (See links in the "Related" section for more information.)




The next video is the creation of Igor Labutov, Jason Yosinski, and Hod Lipson, of the Cornell Creative Machines Lab.

AI vs. AI:  Two chatbots talking to each other


I liked this video because I once created a chatbot video game for an AI for Games class I took several years ago, and I have fond memories of the hours I spent reading the textbook shown in the display on the right, Artificial Intelligence: A Modern Approach.
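The core setup of the AI-vs-AI video, two conversational agents fed each other's output in a loop, can be sketched with a couple of toy rule-based bots. This is purely illustrative (the Cornell demo used a far more sophisticated data-driven chatbot, not canned rules):

```python
import random

# Tiny keyword-to-response rule table, in the spirit of classic ELIZA-style bots.
RULES = {
    "hello": ["Hi there!", "Hello to you too."],
    "why": ["Why not?", "That's a deep question."],
}

def reply(message):
    """Return a canned response whose keyword appears in the message."""
    for keyword, responses in RULES.items():
        if keyword in message.lower():
            return random.choice(responses)
    return "Why do you say that?"  # fallback keeps the conversation going

def converse(opener, turns=4):
    """Feed each bot's reply to the other, alternating speakers."""
    transcript, message = [], opener
    for turn in range(turns):
        speaker = "Alan" if turn % 2 == 0 else "Ada"
        message = reply(message)
        transcript.append(f"{speaker}: {message}")
    return transcript

for line in converse("Hello!"):
    print(line)
```

Even with two rules, the bots quickly fall into loops and non sequiturs, which is part of what makes the Cornell video so funny.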

Tom Jenkins and Simon Sharp, of The Theory, created the following two video shorts. Address Is Approximate is a stop-motion video about a lonely desk toy who makes a journey across the US via Google Maps Street View. Speed of Light uses a pocket projector, a video feed, and creativity to create an augmented reality-like police-chase short. According to information from the Vimeo website, Speed of Light was filmed using a Canon 5D MkII + HD MiniCam, with MicroVision projectors.

Address Is Approximate, from The Theory

Address Is Approximate from The Theory on Vimeo.


Speed of Light / aka / The World's Tiniest Police Chase from The Theory on Vimeo.

RELATED
Woodland Wiggle:  Interactive games on a giant television at the Royal London Hospital
Interactive Woodland at Royal London Hospital (Nexus Productions Website)
Giant tigers and rooftop teepees: the Royal London Hospital play space
Oliver Wainwright, The Guardian, 2/21/13
Note: I especially liked that in his article about the Royal London Hospital's play space, Oliver Wainwright shared this quote from Florence Nightingale's 1859 Notes on Nursing: "variety of form and brilliancy of colour in objects presented to patients are an actual means of recovery".
Cornell Creative Machines Lab
Robot-To-Robot Chat Yields Curious Conversation
Robert Siegel, Host, All Things Considered, 9/1/11
Introduction to Artificial Intelligence (Udacity Course)
Meet the Creators: Tom Jenkins and Simon Sharp Trade Viral Shorts for A Studio Film
Joe Berkowitz, Co.Create


Apr 1, 2013

What happens when a 2-year old wakes up to the sound of the Google Map Lady? "I CAN'T turn left right now!"

Google Map Lady says, "Turn Left", toddler yells from the back seat, "I CAN'T..."

If you are new to this blog, you might not know that I'm the grandmother of a 2-year-old little boy. Watching him grow in an increasingly technology-enriched world has been an eye-opener at times, from his first interaction with my iPad, fingers-and-toes, at 7 months of age, to his attempts at rafting down a digital river playing the Kinect Adventures! River Rush game.

Technology is rapidly changing how we learn, interact, and navigate our world.  Designers, developers, and others who are involved in the process of creating for the near future must be mindful of the ways newer technologies might play out in the real world, where the "user" is not always the person intended for the "user experience".  Off-the-desktop technologies are rapidly advancing, and impact people of all ages, wherever they happen to be.

Today's story is just one example.

I'm fortunate to live about a 35 minute drive from my grandson, and for this reason, I sometimes take him out and about, especially when his parents have a lot of errands to run.

Toddler with replica of the Eiffel Tower, Amelie's French Bakery, NoDa, Charlotte, NC

Toddler dancing around a floor mural











After a nice lunch at Amelie's French Bakery  near the NoDa neighborhood (Charlotte, NC),  and exploring the floor murals in the little mall behind the bakery, I told my grandson that we were going to the "Big Park" (Freedom Park). 

He was so excited, but within a few minutes, he was fast asleep.


Toddler smiling and happy in the back seat


Toddler asleep in the back car seat
I drove up towards the airport to kill time, thinking that he'd wake up and we'd watch the planes. He was still sleeping.  Now what?

I opened up Google Maps on my cell phone to get directions from the airport to the Carolina Raptor Center at Latta Plantation Park, since I wasn't sure how to get there from the airport.

About 15 minutes later, as the Google Map Lady gave directions, Levi woke up, saying "What's that sound? A lady's voice?". The Google Map Lady spoke again, and said something like, "In 1000 feet, take a left turn." 

Levi replied emphatically, "I CAN'T turn left right now!" The Google Map Lady responded with the next direction, and Levi replied, "I CAN'T do that!"

The little guy was visibly upset, because he thought the lady was telling him what to do. It was obvious to him that he could not comply with her request.

What to do? How do I explain the "Google Map Lady" to a 2-year-old?

This is how I handled the situation:

I told him that the lady's voice was to help me know where to turn so I could drive to the raptor center. I kindly told him that the directions were just for me, not for little boys who can't turn the car because they are in car seats and can't drive. He nodded and said, with relief, "Lady's voice for Mi-Mi, NOT for little boys", and was fine after that.

Note:
Although I did not know it at the time, my grandson had somehow wriggled out of the left harness of his car seat. I discovered the problem as I went to unfasten him from the car seat, and wondered how long he'd not been secured safely.  It hadn't occurred to me that this would happen - everything was in place at the beginning of our ride, as you can see from the first picture.  

As I lifted my grandson out of the car seat, it crossed my mind that it would be a good idea if car seats came with sensors to let the driver know if the car seat straps, snaps, or buckles became unsecured. (Systems like Forget Me Not provide a warning system to parents if the child is forgotten in the car.)

After conducting a quick search, I found that Sherine Elizabeth Thomas has applied for a patent that includes the use of a sensor to alert the adult that a child has unbuckled their seat belt.  I think that a system could be developed to provide an alert if the child was not safely secured, as in the case of my wiggly grandson.  
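The alert logic I have in mind could be sketched in a few lines: poll each restraint point on the car seat and warn the driver the moment any one reports unlatched while the vehicle is moving. This is a hypothetical illustration of the idea, not any shipping product or the patented system mentioned above:

```python
def seat_alerts(readings, vehicle_moving=True):
    """Hypothetical car-seat monitor sketch.

    `readings` maps restraint-point names to sensor state
    (True = latched). Returns warning strings for the driver.
    """
    if not vehicle_moving:
        return []  # no need to alarm while parked
    return [
        f"WARNING: {point} is unsecured"
        for point, latched in sorted(readings.items())
        if not latched
    ]

# A wriggly toddler slips out of the left harness mid-trip:
print(seat_alerts({"left harness": False, "right harness": True,
                   "crotch buckle": True}))
# → ['WARNING: left harness is unsecured']
```

A real system would of course need debouncing (toddlers squirm without unlatching anything) and an audible or dashboard alert rather than a returned string, but the decision logic is this simple.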


RELATED AND SOMEWHAT RELATED
(Self-activating, self-aware digital wireless safety system)
John Polaceck, 3/24/13
Grandma Got STEM blog (More info to come on this topic!)

Feb 12, 2013

Call for Papers: Human-Computer Interaction and the Learning Sciences


Below is the call for papers for a workshop that I'd like to attend!   (The information below was copied from the Surface Learning website.)

If you are interested in the intersection of learning and interactive surfaces,  the Surface Learning website provides an interdisciplinary forum for like-minded explorers.

Human-Computer Interaction and the Learning Sciences

Full-Day Pre-Conference Workshop, in conjunction with CSCL 2013, University of Wisconsin, Madison, WI, USA

Submission deadline: 15 April 2013
Notification of acceptance: 29 April 2013
Early registration deadline: TBD
Workshop registration deadline: TBD
Workshop: 15 June 2013

Motivation

Both Human-Computer Interaction (HCI) and the Learning Sciences (LS) are active research communities with established bodies of literature. As both have an interest in using computing technologies to support people, there is a natural synergy. However, the practices and values of the two fields are substantially different, leading to tensions felt by researchers who actively participate in both fields. They also make it harder for researchers in either field to move towards the other.

Recently, there has been increased interest in LS to acknowledge the importance of HCI. In his keynote at ICLS 2012, Pierre Dillenbourg made the case that many of the important problems of learning / education are not primarily addressed through innovations in learning theory (a particular emphasis in LS) but through useful, usable, perhaps innovative designs (a particular emphasis in HCI). At the "Interactive surfaces and spaces: A learning sciences agenda" symposium later that day, the relationship between HCI and LS was heavily debated. That discussion continued in email form. What became clear is that the relationship is complex, viewed differently by different groups (LS researchers interested in HCI, HCI researchers interested in LS and interdisciplinary researchers) and needs to be improved.

Intended Audience

This workshop is intended to be both interdisciplinary and multi-disciplinary:
  • For researchers at the intersection of the two fields (i.e., active participants in both communities), this workshop provides a forum for discussing interdisciplinary research with the aims of supporting the connection between the fields.
  • For HCI researchers interested in LS, this workshop provides an introduction to the learning sciences community (values, practices, literature, venues, etc.), an opportunity to receive LS feedback on your work and support for becoming part of the LS community.
  • For LS researchers interested in HCI, this workshop provides an introduction to Human-Computer Interaction (both the fundamentals taught in an introductory course and the research community), an opportunity to receive feedback on your work from HCI researchers and connections to experienced interdisciplinary researchers.

Participation

We offer two paths to participate in the workshop based on the CSCL 2013 theme: "To See the World and a Grain of Sand: Learning across Levels of Space, Time, and Scale." Send submissions in either category to submit@surfacelearning.org by 15 April 2013. Submissions are not anonymous and should include all author names and contact details.

The World
We seek position papers on the critical issues in interdisciplinary HCI / LS work or visions of how to advance the relationship between HCI and LS. Topics include, but are not limited to: 
  • What core methods and principles of HCI might be of use to LS researchers?
  • How can LS researchers piggyback on the efforts of HCI research to make the newest technology available for development?
  • What theoretical foundations can LS offer to HCI researchers interested in using technology to support learning?
  • How do we better support true interdisciplinary researchers?
  • How do we promote academic exchange between the communities?
Position papers should be 2–4 pages in CSCL proceedings format. They will be publicly posted on the workshop website and should serve as a resource or discussion point. During the workshop, the position papers will be briefly presented (<10 minutes per presentation) to the entire group at the closing panel. The panel will use these presentations to reflect on the day's work and discuss possible future directions.

A Grain of Sand
One of the core values of HCI is that design (both the product and the process) matters. A great study of a lackluster, ill-conceived system is relatively useless. The time to reflect on and improve a design is during its formative stages (i.e., before it is finished). Here, we give attendees an opportunity to discuss design work in progress. We seek papers on preliminary projects, either before a system has been built (outlining the motivation) or during active development. Design papers should include motivation for the project (why is this necessary research?), related work (what are you building upon?), and a sketch of how you will proceed. The projects can be based in either an HCI or LS tradition of research.

Design papers should be 2–4 pages in CSCL proceedings format. They will be publicly posted on the workshop website. During the workshop, the papers will be briefly presented (<10 minutes per presentation) to a small group who will have time to give concrete feedback on the design / research from both HCI and LS perspectives (e.g., suggestions for improvement, related work).

Organizers

Jochen Rick
Jochen “Jeff” Rick is research associate / lecturer in the Department of Educational Technology (EduTech) at Saarland University, Germany. He received his PhD in the area of "Learning Sciences and Technologies" from the College of Computing, Georgia Institute of Technology in 2007. This will be his ninth ISLS conference. He has published in both JLS and ijCSCL and is on the editorial board of ijCSCL. He is also active in the HCI community, particularly the Interaction Design and Children community, serving as a full papers chair for the 2012 conference. He has experienced multiple perspectives on this interdisciplinary area: LS graduate student at an HCI powerhouse, postdoc in an HCI lab and junior faculty in an LS department. He has helped to organize four workshops, including one at CSCL 2002 and one at ICLS 2010. For two workshops, he successfully employed Open Space Technology, an organizing technique we plan to employ in this workshop.

Michael Horn
Michael Horn is an assistant professor at Northwestern University, USA, where he directs the Tangible Interaction Design and Learning (TIDAL) Lab. Michael holds a joint appointment in Computer Science and the Learning Sciences, and his research explores the role of emerging interactive technology in the design of learning experiences. His projects include the design of a tangible computer programming language for use in science museums and early elementary school classrooms; and the design of multi-touch tabletop exhibits for use in natural history museums. Michael has presented work at cross-disciplinary conferences including Interaction Design and Children (IDC), Tangible, Embedded, and Embodied Interaction (TEI), Human Factors in Computing Systems (CHI), ICLS, and AERA; he is on the editorial board for the Journal of Technology, Knowledge, and Learning; and he is the program committee co-chair for ACM Interactive Tabletops and Surfaces (2012 and 2013). Michael also co-organized a workshop on Technology for Today’s Family at CHI 2012.

Roberto Martinez-Maldonado
Roberto Martinez-Maldonado is a PhD candidate in the Computer Human Adapted Interaction Research Group at The University of Sydney, Australia. His research focuses on analysing data generated when groups of students collaborate using shared devices, to help teachers be more aware of their students' learning processes and make informed decisions. His research is grounded in principles of Human-Computer Interaction, CSCL, Educational Data Mining and Learning Analytics; he makes use of a number of technologies, including multi-touch interactive tabletops, tablets, Kinect sensors and databases. He has presented work at interdisciplinary conferences that include Intelligent Tutoring Systems (ITS), Artificial Intelligence in Education (AIED), Interactive Tabletops and Surfaces (ITS), CSCL, ICLS and Educational Data Mining (EDM). He led the organisation of the workshop held in conjunction with ICLS 2012 titled Digital Ecosystems for Collaborative Learning. He has published papers at CSCL 2011, ICLS 2012 and in other communities related to HCI and Artificial Intelligence in education.

Documents

Dec 29, 2012

KAPi Kids at Play Awards: Best in Children's Technology 2013 Winners Announced

The following information is from a PRWeb press release announcing the winners of the Fourth Annual KAPi Awards:

Living in Digital Times and Children's Technology Review Announce 2013 KAPi Award Winners
The Most Innovative in Children's Technology to be Honored on Thursday, January 10, at the 2013 International CES® in Las Vegas

"Collaboratively organized and produced by Living in Digital Times and Children’s Technology Review, the fourth annual KAPi Kids at Play Awards honor the best of the best in children’s technology."
- PRWeb, 12/18/12

The 2013 KAPi Award Winners Are… 
 1. Best Younger Children’s App: LetterSchool by Boreaal Publishers 
 2. Best Older Children’s App: IMAG-N-O-TRON by Moonbot Studios 
 3. Best Tech Leveraged Toy: Skylanders Giants by Activision 
 4. Best Video Game Software: Kinect Sesame Street TV by Microsoft Studios 
 5. Best Hardware or Peripheral: Kindle Fire HD with Kindle FreeTime Unlimited by Amazon 
 6. Best Technology Toy: littleBits by littleBits Electronics 
 7. Best Educational Technology: BrainPOP GameUp by BrainPOP LLC. 
 8. Innovation: The Cube by 3D Systems, Inc. (3D Printer)
 9. Pioneer: Dale Dougherty, Co-Creator, Maker Faire; Publisher, MAKE Magazine 
 10. Pioneering Team: Toca Boca 

Judges of the KAPi Awards consisted of 13 journalists and/or experts in children’s interactive media. They were:
Warren Buckleitner, Children’s Technology Review
Chris Crowell,  Children’s Technology Review  
Dan Donahoo, Wired GeekDad and Project Synthesis 
Chip Donohue, Erikson Institute 
David Kleeman, American Center for Children and Media 
Ann McCormick, Co-Founder, The Learning Company 
Frank Migliorelli, Mig Idea 
Robin Raskin, Living in Digital Times 
Reyne Rice, Toy Expert 
Carly Shuler, PlayScience 
Andrea Smith, Mashable 
Aleen Stein, Organa 
Scott Traylor, 360 Kid

Nov 17, 2012

Human Computer Interaction + Informal Science Education Conference (NUI News)

I recently learned of the HCI + ISE conference, funded by the National Science Foundation and organized by Ideum and Independent Exhibitions, which will provide the groundwork for the future design and development of interactive computer-based science exhibits.
Science museums have a long history of interactivity, well suited to groups of "explorers", such as families or students visiting on a field trip.  

What is really exciting is that new interactive applications and technologies have the power to transform the way people learn and understand science in a collaborative and social way.  Innovations in the field of HCI - Human-Computer Interaction- such as multi-touch and gesture interaction, are  well-suited to meet the goals of science education for all, beyond the school doors and wordy textbooks. 

Below is a screen-shot of the conference website, a description about the conference, quoted from the site, and some related resources.



About the HCI+ISE Conference
"HCI technologies, such as motion capture, multitouch, augmented reality, RFID, and voice recognition are beginning to change the way computer-based science exhibits are designed and developed. Human Computer Interaction in Informal Science Education (HCI+ISE) is a first-of-its-kind gathering to explore and disseminate effective practices in developing a new generation of digital exhibits that are more intuitive, interactive, and social than their predecessors."
"The HCI+ISE Conference, to be held in Albuquerque, New Mexico June 11-14 2013, will bring together 60 museum exhibit designers and developers, learning researchers, and technology industry professionals to share effective practices, and to explore both the enormous potential and possible pitfalls that these new technologies present for exhibit development in informal science education settings."
"HCI+ISE will focus on the practical considerations of implementing new HCI technologies in educational settings with an eye on the future. Along with a survey of how HCI is shaping the museum world, participants will be challenged to envision the museum experience a decade into future. The conference results will provide a concrete starting point for exhibit developers and informal science educators who are just beginning to investigate these emerging technologies and design challenges in creating these new types of exhibits."
Why HCI+ISE?
"Since the mid-1980s informal educational venues have increasingly incorporated computer-based exhibits into their science communication offerings in an effort to keep pace with public expectations and make use of the expanding opportunities these technologies provide. The advent and popularity of once novel HCI technologies are becoming commonplace: the Wii and Microsoft Kinect now allow for motion capture video games, tablet PCs have multitouch interaction, and smart phones and other devices come standard with voice recognition. Yet many museums are still developing single-touch and trackball-driven, single-user computer kiosks."
"Science museums have a long history of championing hands-on, physical, and inquiry-based activities and exhibits. This vast experience has only just begun to be applied to interactive computer interfaces. Along with seasoned science exhibit developers, the Conference will draw upon individuals outside of ISE who will provide fresh insight into the technologies, design issues, and audience expectations that these visitor experiences present."
Involvement and Findings
"HCI+ISE will bring together a diverse group of practitioners and other professionals to discuss (and in some cases share and prototype) new design approaches utilizing emerging HCI technology. Please see our Apply page to learn how you can participate. Conference news and findings will be distributed through a variety of ISE and museum websites, including this one."
"We welcome your questions and comments about the HCI+ISE Conference."
CONTACTS
Kathleen McLean of Independent Exhibitions
& Jim Spadaccini of Ideum
HCI+ISE Co-chairs
"Open Exhibits is a multitouch, multi-user tool kit that allows you to create custom interactive exhibits."
CML:  Creative Mark-up Language
GML: Gesture Mark-up Language
GestureWorks
Ideum

Nov 4, 2012

CFP for Special Issue of Personal and Ubiquitous Computing on Educational Interfaces, Software, and Technology (EIST) -Extended Deadline: December 9, 2012


Overview 
One of the primary goals of teaching is to prepare learners for life in the real world. In this ever-changing world of technologies such as mobile interaction, cloud computing, natural user interfaces, and gestural interfaces like the Nintendo Wii and Microsoft Kinect, people have a greater selection of tools for the task at hand. Given the potential of these new interfaces, software, and technologies as learning tools, as well as the ubiquitous application of interactive technology in formal and informal learning environments, there is a growing need to explore how next-generation technologies will impact education in the future. 

As a community of Human-Computer Interaction (HCI) and educational researchers, we need to theorize and discuss how new technologies should be integrated into the classrooms and homes of the future. In the last three years, three CHI workshops have provided a forum to discuss key issues of this sort, particularly in the context of next-generation education. The aim of this special issue of Personal and Ubiquitous Computing is to summarize the potential design challenges and perspectives on how the community should handle next-generation technologies in the education domain for both teachers and students. 


We invite authors to present position papers about potential design challenges and perspectives on how the community should handle the next generation of HCI in education. Topics of interest include but are not limited to: 

  • Gestural input, multitouch, large displays 
  • Mobile devices, response systems (clickers) 
  • Tangible, VR, AR & MR, multimodal interfaces 
  • Console gaming, 3D input devices 
  • Co-located interaction, presentations 
  • Educational pedagogy, learner-centric, child computer interaction 
  • Empirical methods, case studies 
  • Multi-display interaction 
  • Wearable educational media 
Important Dates 

  • Full papers due: December 9, 2012 
  • Initial reviews to authors: January 18, 2013 
  • Revised papers due: March 15, 2013 
  • Final reviews to authors: April 26, 2013 
  • Final papers due: June 14, 2013 


Submission Guidelines 

Submissions should be prepared according to the Word template located at the bottom of this page. All manuscripts are subject to peer review. Manuscripts must be submitted as a PDF to the easychair submission system. Submissions should be no more than 8000 words in length. 

Guest Editors and Contact Information 

  • Syed Ishtiaque Ahmed, Cornell University 
  • Quincy Brown, Bowie State University 
  • Jochen Huber, Technische Universität Darmstadt 
  • Si Jung “Jun” Kim, University of Central Florida 
  • Lynn Marentette, Union County Public Schools, Wolfe School 
  • Max Mühlhäuser, Technische Universität Darmstadt 
  • Alexander Thayer, University of Washington 
  • Edward Tse, SMART Technologies 

Contact: eistjournal2012@easychair.org 

Information about the Journal of Personal and Ubiquitous Computing 


Submission Template: PUC_EIST_article_template.docx  (59k)