Showing posts with label emerging technologies. Show all posts

Nov 28, 2009

Fantasy HCI! Dream Lab and Dream Team for the Future

Fantasy HCI!  

My wish is to have my own lab so I can create and test out various interactive applications that run on screens of all sizes, and play with new interactive gadgets and displays. I'd also like to provide mobile lab services so I can go out and see how emerging technologies play out in real-life situations and settings during the design & development process as well as after-market.

I'd like to focus on social-collaborative & cognitive aspects of emerging technologies. Because of my background in school psychology, I'd work towards ensuring that new applications, technologies, and systems follow the guidelines of Universal Design for Learning as well as Universal Usability. I have some ideas about the transdisciplinary characteristics I'd like to see for members of the lab's Dream Team, but I'm saving that for another post. Now I just need to win the lottery so I can hire my team and run with the ball. Team Charlotte, N.C., anyone?

FYI:
The HCI link is to a blog that corresponds to the Theory and Research in Human Computer Interaction class at Rensselaer Polytechnic Institute. 

For more information about HCI, visit the Human-Computer Interaction Resources website.

Oct 28, 2009

libTISCH, a multi-touch development framework with multi-touch widgets and more!

For techies and the tech-curious who like technologies that support collaboration and multi-touch interaction, this is great news!

Florian Echtler announced the first stable release of libTISCH, a multi-touch development framework, which can be found on Sourceforge. TISCH stands for Tangible Interaction Surfaces for Collaboration between Humans. libTISCH, a C++ software framework, is included in this project. It provides a means for creating GUIs based on multi-touch and/or tangible input devices.

Here is how it works:

Architecture Layers

(Diagram: the layered libTISCH architecture.)
Here is information from the libTISCH announcement:


Highlights of this release are, among others, the following features:

- ready-to-use multitouch widgets based on OpenGL
- reconfigurable, hardware-independent gesture recognition engine
- support for widely used, pre-defined gestures (move, scale, rotate...) as well as custom-defined gestures
- hardware drivers for FTIR, DI, Wiimote, DiamondTouch...
- TUIO converters: source and sink
- cross-platform: Linux, Mac OS X, Windows (32 and 64 bit)
- cross-language: C++ with bindings for C#, Java, Python

libTISCH has a lot to offer the multitouch developer. For example, the textured widgets enable rapid development of applications for many kinds of multi-touch or tangible interfaces. The separate gesture recognition engine allows the translation of a wide range of highly configurable gestures into pre-defined or custom events, which are then acted on by the widgets. While the lower layers of libTISCH provide functionality similar to tbeta, touché, etc. (you can interface existing TUIO-based software with libTISCH in both directions), it goes far beyond them.
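The gesture-to-event flow described above can be sketched in a few lines. This is purely illustrative and NOT the actual libTISCH API; every class and method name below is my own invention, meant only to show the pattern of a separate, reconfigurable recognizer turning raw touch data into named events that widgets act on.

```python
# Illustrative sketch only -- not the real libTISCH API.
# A gesture engine maps gesture definitions (predicates over touch
# points) to named events; widgets subscribe to those events.

class GestureEngine:
    def __init__(self):
        self.recognizers = {}   # gesture name -> predicate over touches
        self.handlers = {}      # gesture name -> list of widget callbacks

    def define_gesture(self, name, predicate):
        """Register a pre-defined or custom gesture."""
        self.recognizers[name] = predicate

    def subscribe(self, name, callback):
        """A widget registers interest in a gesture event."""
        self.handlers.setdefault(name, []).append(callback)

    def process(self, touches):
        """Translate one frame of touch points into gesture events."""
        fired = []
        for name, predicate in self.recognizers.items():
            if predicate(touches):
                fired.append(name)
                for cb in self.handlers.get(name, []):
                    cb(touches)
        return fired

engine = GestureEngine()
# Two touch points -> "scale"; a single touch point -> "move".
engine.define_gesture("scale", lambda ts: len(ts) == 2)
engine.define_gesture("move", lambda ts: len(ts) == 1)

events = []
engine.subscribe("scale", lambda ts: events.append("widget scaled"))

engine.process([(0.2, 0.3), (0.7, 0.8)])   # two fingers: "scale" fires
print(events)
```

The point of the separation is that the same widget code keeps working whether the touches come from an FTIR table, a Wiimote, or a TUIO stream.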

More information about the library and underlying architecture can be
found on http://tisch.sf.net/ and in the Sourceforge wiki at
http://sourceforge.net/apps/mediawiki/tisch/


TISCH Project Wiki

RELATED
Florian is on the scientific staff at the Technische Universität München in Germany. Be sure to check out his webpage.

I especially like the concept of the MeTaTop: "A Multi-Sensory Table Top System for Medical Procedures" that is linked from Florian's website.

Oct 19, 2009

Chris O'Shea's Hand from Above Interactive Screen; Info about Interactive Architecture

A recent post on the Interactive Architecture blog featured artist Chris O'Shea's "Hand from Above" project, a co-commission between FACT (Foundation for Art & Creative Technology), Liverpool City Council for BBC Big Screen Liverpool, and the Live Sites Network. The installation premiered during the Abandon Normal Devices Festival.


Hand from Above from Chris O'Shea on Vimeo. (Written using openFrameworks & openCV. Sounds by Owen Lloyd.)

"Just imagine walking through your town or city centre, watching yourself on the Big Screen, when all of a sudden a giant finger appears and starts to play with you!...Hand From Above encourages us to question our normal routine when we often find ourselves rushing from one destination to another. Inspired by Land of the Giants and Goliath, we are reminded of mythical stories by mischievously unleashing a giant hand from the BBC Big Screen. Passers by will be playfully transformed. What if humans weren’t on top of the food chain? Unsuspecting pedestrians will be tickled, stretched, flicked or removed entirely in real-time by a giant deity"

For more information about Interactive Architecture and related topics read the following post:


Interactive Architecture and Transdisciplinary Convergence...
(The World Is My Interface blog)

Oct 15, 2009

Interactive Motion Graphics Showreel from Filmview Services - great content!

Here is a showreel from Filmview Services that shows how usability in an interactive gesture/touch world should look!



Here is a quote from the Filmview Services blog:


What Are Screen Graphics?

"...So it works out more cost effective for the films to actually have someone put the graphics on the screens for real. It also greatly enhances the performance of the actors. You only have to watch any of the Star Wars Eps 1-3 to see how wooden acting is when you don’t actually know what is in front of you. Actors love to be able push buttons and bang touch screens during their scenes. Having to actually do it in a certain order can stretch their capabilities mind you, and I am pretty gob smacked at how absolutely computer illiterate some of them are. Don’t they use email?


Anyway, due to this diminished ability to hit and bang things in any certain order, it is our job to make it impossible to mess things up. That’s why they are all genius typers. We make it so they can type any old thing and the letters still come out the way they are meant to each time. We also put little locking codes into our programming so they can’t accidentally escape the graphic mid job. It’s amazing how many of them can type the Esc button when they are meant to be spelling LOGIN."

Thanks, Tim!

SOMEWHAT RELATED
Coincidentally, when I was visiting the NUI-Group forums this morning, I came across a link to Jakob Nielsen's "Usability in the Movies -- Top 10 Bloopers", which is worth taking a look at. I've posted the list, but you'll need to go to Nielsen's web page to read the descriptions. You'll smile.

1. The Hero Can Immediately Use Any UI
2. Time Travelers Can Use Current Designs
3. The 3D UI
4. Integration is Easy, Data Interoperates
5. Access Denied/Access Granted
6. Big Fonts
7. Star Trek's Talking Computer
8. Remote Manipulators (Waldo Controls)
9. You've Got Mail is Always Good News
10. "This is Unix, It's Easy"

May 23, 2009

Razorfish's Touch Framework "Razorfashion" - A lot like my idea for an in-home FashionMirrorAdvisor...

Razorfish recently unveiled the Razorfashion application, designed to provide shoppers with an engaging retail experience within the "multi-channel shopping ecosystem". I'm not the "shop till you drop" type of gal, but I can see that this concept could be useful in other situations, after a few tweaks.




As soon as I saw this Razorfish Touch "Fashion" demo video, it touched a nerve. I've been playing around with a similar idea, but for my personal use, in the form of an RFID-enabled system. I'd call it something like "FashionMirrorAdvisor".

Instead of showing skinny fashion models like the Razorfashion application does, I'd harness the power of a built-in web-cam and mirror my own image on the screen. My mirror would dress me up in the morning when I'm way too foggy to think about matching colors and accessories.

My FashionMirrorAdvisor would be my friend. My "smart" friend, since all of my clothes would be RFID-tagged, along with my shoes, jewelry, and other accessories. My make-up, too.

It would be a no-brainer. I really could use this application - just ask my husband!

Most mornings, I find myself staring at the clothes in my closet, frozen in time, unable to formulate a fashion thought. I might set my eyes on a favorite blouse, but blank out when I try to think about the rest of the steps I need to pull my look together.


I know I can't wear my reddish-pink camisole with my dusty-orange/brown slacks, but at 5:15 A.M., who has the time to think about this little detail? My friend, the FashionMirrorAdvisor, would prevent me from making this fashion faux-pas.

No problem.

My FashionMirrorAdvisor would show me a few outfits, and dress my real-time moving image on the screen. Since she knows all things, she'd show me ONLY the articles of clothing that were clean, since my RFID system would keep up with all of that. It would be much more functional than a "virtual wardrobe" application.

I could try out different earrings without having to get them out.

If I couldn't find something, the RFID system would take care of this detail. My FashionMirrorAdvisor would know where I misplaced my clothes, accessories, and even my keys, since they would all be tagged. The mirror application would provide me with a nice little map of my house and car, and highlight the location of the item.

My FashionMirrorAdvisor would keep track of my laundry, too. This would be a great feature. So if my dirty laundry was piling up, and I wanted to wear outfit X, Y, or Z over the next few days, I'd receive a gentle reminder that I'd need to do some laundry first!
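For fun, here is how the clean-clothes filter, RFID location lookup, and laundry reminder at the heart of this idea might look in code. Everything below is hypothetical: the class names, fields, and toy logic are my own, not a real product or API.

```python
# Hypothetical sketch of the FashionMirrorAdvisor idea.
# Each RFID-tagged item carries a clean/dirty flag and a last-seen
# location, so the advisor only suggests wearable items, can say
# where something was left, and can nudge me to do laundry.

from dataclasses import dataclass

@dataclass
class Item:
    name: str
    rfid: str
    category: str      # "top", "bottom", "shoes", ...
    clean: bool
    location: str      # where the RFID tag was last seen

class FashionMirrorAdvisor:
    def __init__(self, wardrobe):
        self.wardrobe = wardrobe

    def suggest(self, category):
        """Only clean items are ever suggested."""
        return [i.name for i in self.wardrobe
                if i.category == category and i.clean]

    def locate(self, name):
        """RFID lookup: where did I leave it?"""
        for i in self.wardrobe:
            if i.name == name:
                return i.location
        return None

    def laundry_reminder(self, threshold=2):
        """A gentle nudge once enough items are dirty."""
        dirty = [i.name for i in self.wardrobe if not i.clean]
        if len(dirty) >= threshold:
            return f"Laundry time! {len(dirty)} items are dirty."
        return None

wardrobe = [
    Item("favorite blouse", "tag-001", "top", True, "bedroom closet"),
    Item("reddish-pink camisole", "tag-002", "top", False, "laundry basket"),
    Item("dusty-orange slacks", "tag-003", "bottom", False, "car"),
]
advisor = FashionMirrorAdvisor(wardrobe)
print(advisor.suggest("top"))              # only the clean blouse
print(advisor.locate("dusty-orange slacks"))
```

The real work, of course, would be in the RFID readers and the webcam overlay; the data model is the easy part.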

Another practical feature:

My FashionMirrorAdvisor would also serve as my health consultant, keeping track of my weight and BMI. This data, along with information gained from the webcam, would be combined so that my advisor would NEVER suggest an outfit that would be too...snug.

I could program the system to provide me with gentle reminders if my weight was an issue. My FashionMirrorAdvisor would show me images of myself "before" and "after", outfits included.

Information about the "after" outfits could be fed to the system from the web-catalogs of my favorite fashion retailers, and once I lost those 10 darned pounds, I'd find a nice parcel delivered to my door.

Thanks to my FashionMirrorAdvisor, I know that the outfit would be just right.


UPDATE 5/8/10:  The FashionMirrorAdvisor would be integrated with a mobile app - since I now have a smartphone, this would be quite useful in planning shopping trips centered around the purchase of new clothes, shoes, accessories, and coordinating cosmetics!  I created a little game  that I think would be ideal for this sort of thing, too.

I still want to work on this....someday.

Too many ideas, too little time!


RELATED
From the Razorfish site:
"The Razorfish Emerging Experiences team is a dedicated group of highly experienced professionals focused solely on emerging experiences and technologies. "Effective innovation" is our multifaceted approach to concepting and delivering pioneering solutions for our clients"

"Founded in 2008, Razorfish Emerging Experiences is a cross-functional team composed of strategists, artists, experience designers, and technologists. We’re part of the Razorfish Strategy & Innovation practice led by Shannon Denton. Jonathan Hull is the managing director of the team, Steve Dawson is the technology lead and Luke Hamilton is the creative lead."


Razorfish Emerging Experiences Portfolio

May 10, 2009

Michael Haller Discusses Multi-touch, Interactive Surfaces, and Emerging Technologies for Learning

I came across an excellent overview of interactive display technologies that hold promise for education. The link below is a research article written by Michael Haller for Becta, formerly known as the British Educational Communications and Technology Agency.

Emerging Technologies for Learning: Interactive Displays and Next Generation Interfaces (pdf)
Becta Research Report, Volume 3 (2008), by Michael Haller


"Multi-touch and interactive surfaces are becoming more interesting, because they allow a natural and intuitive interaction with the computer system.

These more intuitive and natural interfaces could help students to be more
actively involved in working together with content and could also help improve whole-class teaching activities. As these technologies develop, the barrier of having to learn and work with traditional computer interfaces may diminish.

It is still unclear how fast these interfaces will become part of our daily life and
how long it will take for them to be used in every classroom. However, we strongly believe that the more intuitive the interface is, the faster it will be accepted and used. There is a huge potential in these devices, because they allow us to use digital technologies in a more human way." -Michael Haller

Michael Haller works at the department of Digital Media of the Upper Austria University of Applied Sciences (Hagenberg, Austria), where he is the head of the Media Interaction Lab.

Michael co-organized the Interaction Tomorrow course at SIGGRAPH 2007, along with Chia Shen, of the Mitsubishi Electric Research Laboratories (MERL). Lecturers included Gerald Morrison, of Smart Technologies, Bruce H. Thomas, of the University of South Australia, and Andy Wilson, of Microsoft Research. The course materials from Interaction Tomorrow are available on-line, and include videos, slides, and course notes.

Below is an excerpt from the description of the Interaction Tomorrow SIGGRAPH 2007 course:

"Conventional metaphors and underlying interface infrastructure for single-user desktop systems have been traditionally geared towards single mouse and keyboard-based WIMP interface design, while people usually meet around a table, facing each other. A table/wall setting provides a large interactive visual surface for groups to interact together. It encourages collaboration, coordination, as well as simultaneous and parallel problem solving among multiple people.

In this course, we will describe particular challenges and solutions for the design of direct-touch tabletop and interactive wall environments. The participants will learn how to design a non-traditional user interface for large horizontal and vertical displays. Topics include physical setups (e.g. output displays), tracking, sensing, input devices, output displays, pen-based interfaces, direct multi-touch interactions, tangible UI, interaction techniques, application domains, current commercial systems, and future research."

It is worth taking the time to look over Haller's other publications. Here are a few that would be good to read:

M. Haller, C. Forlines, C. Koeffel, J. Leitner, and C. Shen, "Tabletop Games: Platforms, Experimental Games and Design Recommendations." Springer, 2009, in press. [bibtex]

A. D. Cheok, M. Haller, O. N. N. Fernando, and J. P. Wijesena, "Mixed Reality Entertainment and Art," International Journal of Virtual Reality, vol. X, p. X, 2009, in press. [bibtex]

J. Leitner, C. Köffel, and M. Haller, "Bridging the gap between real and virtual objects for tabletop games," International Journal of Virtual Reality, vol. X, p. X, 2009, in press. [bibtex]

M. Haller and M. Billinghurst, "Interactive Tables: Requirements, Design Recommendations, and Implementation." IGI Publishing, 2008. [bibtex]

D. Leithinger and M. Haller, "Improving Menu Interaction for Cluttered Tabletop Setups with User-Drawn Path Menus," in TABLETOP '07: Second Annual IEEE International Workshop on Horizontal Interactive Human-Computer Systems, 2007, pp. 121-128. [bibtex]

J. Leitner, J. Powell, P. Brandl, T. Seifried, M. Haller, B. Doray, and P. To, "Flux: a tilting multi-touch and pen based surface," in CHI EA '09: Proceedings of the 27th International Conference Extended Abstracts on Human Factors in Computing Systems, New York, NY, USA, 2009, pp. 3211-3216. [bibtex]

P. Brandl, J. Leitner, T. Seifried, M. Haller, B. Doray, and P. To, "Occlusion-aware menu design for digital tabletops," in CHI EA '09: Proceedings of the 27th International Conference Extended Abstracts on Human Factors in Computing Systems, New York, NY, USA, 2009, pp. 3223-3228. [bibtex]


References from the BECTA paper:

Elrod, S., Bruce, R., Gold, R., Goldberg, D., Halasz, F., Janssen, W., Lee, D., McCall, K., Pedersen, E., Pier, F., Tang, J., and Welch, B., "Liveboard: a large interactive display supporting group meetings, presentations, and remote collaboration," CHI '92 (New York, NY, USA), ACM Press, 1992, pp. 599-607.

Morrison, G., "A Camera-Based Input Device for Large Interactive Displays," IEEE Computer Graphics and Applications, vol. 25, no. 4, pp. 52-57, Jul/Aug 2005.

Albert, A. E., "The effect of graphic input devices on performance in a cursor positioning task," Proceedings of the Human Factors Society 26th Annual Meeting, Santa Monica, CA: Human Factors Society, 1982, pp. 54-58.

Dietz, P. H., Leigh, D. L., "DiamondTouch: A Multi-User Touch Technology," ACM Symposium on User Interface Software and Technology (UIST), ISBN 1-58113-438-X, pp. 219-226, November 2001.

Rekimoto, J., "SmartSkin: An Infrastructure for Freehand Manipulation on Interactive Surfaces," CHI 2002, 2002.

Kakehi, Y., Iida, M., Naemura, T., Shirai, Y., Matsushita, M., Ohguro, T., "Lumisight Table: Interactive View-Dependent Tabletop Display Surrounded by Multiple Users," IEEE Computer Graphics and Applications, vol. 25, no. 1, pp. 48-53, 2005.

Streitz, N., Prante, P., Röcker, C., van Alphen, D., Magerkurth, C., Stenzel, R., "Ambient Displays and Mobile Devices for the Creation of Social Architectural Spaces: Supporting informal communication and social awareness in organizations," in Public and Situated Displays: Social and Interactional Aspects of Shared Display Technologies, Kluwer Publishers, 2003, pp. 387-409.


Ishii, H., Underkoffler, J., Chak, D., Piper, B., Ben-Joseph, E., Yeung, L., and Zahra, K., "Augmented Urban Planning Workbench: Overlaying Drawings, Physical Models and Digital Simulation," IEEE and ACM International Symposium on Mixed and Augmented Reality, ACM Press, Darmstadt, Germany.

Han, J. Y., "Low-cost multi-touch sensing through frustrated total internal reflection," UIST '05 (New York), ACM Press, 2005, pp. 115-118.

Hull, J., Erol, B., Graham, J., Ke, Q., Kishi, H., Moraleda, J., Olst, D., "Paper-Based Augmented Reality," in Proceedings of the 17th International Conference on Artificial Reality and Telexistence (Esbjerg, Denmark, November 28-30, 2007), ICAT '07, IEEE, pp. 205-209.

Haller, M., Leithinger, D., Leitner, J., Seifried, T., Brandl, P., Zauner, J., Billinghurst, M., "The shared design space," in SIGGRAPH '06: ACM SIGGRAPH 2006 Emerging Technologies, page 29, New York, NY, USA, 2006, ACM Press.

Research email: emtech@becta.org.uk

Main email: becta@becta.org.uk
URL: www.becta.org.uk

(This was also posted on the TechPsych blog.)

May 2, 2009

Internet of Things Europe 2009 Conference - Internet Rabbits, Mirrors, Stamps, and More!

The Internet of Things Europe 2009 conference, focusing on emerging technologies for the future, will be held on May 7th and 8th in Brussels at the Sofitel Brussels Europe hotel.

Rafi Haladjian, a co-founder of Violet, will be presenting at the conference during the following session on Thursday, May 7th.

Session 2: Innovation and emerging technologies and business models
"This session will explore what emerging innovations, technologies and market trends are being seen now, and which are likely to emerge in the future. What are the research requirements and obstacles in terms of affordability, usability or accessibility that need to be addressed? How will economic, technological and application trends drive the evolution of architectures for the ‘Internet of Things’? What successful business models are already being seen today, and how can these be adapted with future technological developments?"


In a previous post, "The Internet of Things can be Cute: MIR:ROR by Violet", I discussed how RFID is being used in a variety of playful ways to trigger a link to information. The following video from the Violet website explains how MIR:ROR uses little RFID stamps to interact with the Internet and activate things through the MIR:ROR. Each stamp has an e-mail address.
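The general stamp-to-action pattern behind this kind of device might be sketched like so. To be clear, this is not Violet's actual software; the tag IDs and actions below are made up to show the idea of a unique RFID ID keyed to a linked behavior.

```python
# Hypothetical sketch: each RFID stamp's unique ID is mapped to an
# action, so touching a stamp to the reader triggers its behavior.

def make_reader(bindings):
    """bindings: tag ID -> zero-argument action callable."""
    def on_tag_scanned(tag_id):
        action = bindings.get(tag_id)
        if action is None:
            return "unknown tag"
        return action()
    return on_tag_scanned

reader = make_reader({
    "ztamp-42": lambda: "opening weather forecast",
    "ztamp-07": lambda: "sending e-mail to owner",  # each stamp has an address
})

print(reader("ztamp-42"))   # a bound stamp triggers its action
print(reader("ztamp-99"))   # an unbound stamp does nothing useful
```

The charm of the real product is that the "actions" are physical and social: lights, sounds, messages, a rabbit wiggling its ears.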



The rabbit in the picture below is called Nabaztag, from Violet, the first Internet-connected rabbit. He hears, he reads, and he speaks. He can wake you up, give the weather forecast, and update you on your friends' Facebook and Twitter status. He can also send music and e-mail messages, and read stories.

The little rabbits have been around for quite a while. Below is an opera composed by Antoine Schmitt and Jean-Jacques Birgé, following an idea by Guylaine Monnier:


90 of the rabbits were brought to the performance by their owners, and ten were supplied by Violet.

You can purchase books for 3- to 7-year-old children from the Violet website. These books feature Ztamps that are recognized by the MIR:ROR and the Nabaztag rabbit, which will read the book to the child.

On a more serious note, here are a few other sessions that I'd be interested in attending at the Internet of Things conference:

Session 5: Privacy, Security & Data Protection
"Although privacy and data protection policy has become increasingly sophisticated since the emergence of the Internet, controversies are likely to accelerate with the new applications likely to be encountered in the Internet of Things. Security issues, particularly surrounding unauthorised access to and unintended disclosure of data are becoming more prevalent. What qualitatively new challenges are presented by the Internet of Things? How can the rights of citizens or businesses in one country be safeguarded on global networks? What rights pertain to Things on the Internet of Things?"

Session 6: Service Architecture and Communication
"The range of connectivity options available is bewildering - but the challenges of scalability, interoperability and ensuring return on investment for network operators remain. How will communication needs change as a result of the Internet of Things? What new service architectures will be required to cater for the connectivity demands of emerging devices? How will spectrum rights holders participate in the Internet of Things?"

(A similar post is on the Technology Supported Human-World Interaction blog.)

Aug 18, 2008

Digital Storytelling, Multimodal Writing, Multiliteracies...

Digital storytelling, multimodal writing, and multiliteracies are overlapping concepts that weren't around during my first round as a university student. As more people of all ages create and share digital content on the web in new and imaginative ways, teachers and university scholars have taken notice. Is there a consensus that the printed word, as we've known it, is in the middle of a digital transformation?

Let's start out with digital storytelling.

By now, everyone knows about YouTube and vlogs as new means of communication. There is more to digital storytelling than uploading a few hastily put-together video clips from the family camcorder, or slapping together a PowerPoint presentation with a few bells and whistles. There are now some standards. Digital storytelling is an art.


The following definition is from the EduCause article 7 things you should know about Digital Storytelling:

  • "Digital storytelling is the practice of combining narrative with digital content, including images, sound, and video, to create a short movie, typically with a strong emotional component. Sophisticated digital stories can be interactive movies that include highly produced audio and visual effects, but a set of slides with corresponding narration or music constitutes a basic digital story. Digital stories can be instructional, persuasive, historical, or reflective. The resources available to incorporate into a digital story are virtually limitless, giving the storyteller enormous creative latitude. Some learning theorists believe that as a pedagogical technique, storytelling can be effectively applied to nearly any subject. Constructing a narrative and communicating it effectively require the storyteller to think carefully about the topic and consider the audience’s perspective."

Peter Kittle, from the Northern California Writing Project Summer Institute 2008, touches on the topic of multimodal writing in Multimodal Texts: Composing Digital Documents. Related to this is the concept of digital writing.

"Multiliteracies is an approach to literacy which focuses on variations in language use according to different social and cultural situations, and the intrinsic multimodality of communications, particularly in the context of today's new media."

  • "...it is no longer enough for literacy teaching to focus solely on the rules of standard forms of the national language. Rather, the business of communication and representation of meaning today increasingly requires that learners are able figure out differences in patterns of meaning from one context to another. These differences are the consequence of any number of factors, including culture, gender, life experience, subject matter, social or subject domain and the like. Every meaning exchange is cross-cultural to a certain degree." -from Kalantzis and Cope's Multiliteracies website
Here is a short list of resources:
The Center for Digital Storytelling
Multimedia Storytelling
What are multimodality, multisemiotics, and multiliteracies?
(Ben Williamson, Futurelab)
Reading Images: Multimodality, Representation, and New Media
(Gunther Kress)
New Learning: Elements of a Science of Education
(Mary Kalantzis & Bill Cope)
Multiliteracies
The Multiliteracy Project
Multimodal Writing
http://multimodalwriting.com/
(new website, under development)
Multimedia Blogging
(a post from 2004, worth reading for historical context)
Thinking about multimodal assessment
(Digital Writing, Digital Teaching)
Standards related to digital writing
(from Teaching Writing Using Blogs, Wikis...)

I conclude this text-based post with a promise to incorporate more multimedia experiences in my upcoming posts....stay tuned.


Sep 15, 2007

About Displays: Double Sided Touch Screens -LucidTouch



I recently discovered the Display Daily website, a news service about the electronic display industry from Insight Media.

If you are interested in learning more about displays and related hardware that supports interactive multimedia applications, take a look at their recent article about double-sided touch screens.

LucidTouch is a double-sided touch screen prototype that allows people to touch items from behind the screen. The prototype was developed by Microsoft, Mitsubishi, and the University of Toronto. It will be interesting to see how this technology unfolds.