Oct 31, 2009

Sensory-Minds' Ring Wall, an interactive multi-touch wall you don't even have to touch!

Sensory-Minds is a small company in Germany that is focused on research and design in the field of Natural User Interfaces. If you visit the SENSORY-MINDS website, you'll find that it has been designed for touch interaction.



ring°wall from SENSORY-MINDS on Vimeo.

Information about the Ring Wall from Sensory-Minds' Vimeo site:

"...The two-piece ring°wall consists of an LED display and a multitouch information wall and impresses with its size: a total surface of 425 square meters, which equals more than 6,000 computer displays, is the biggest of its kind. An interactive world emerges out of 34 million pixels generated by 15 high-definition projectors and is supported by 30 directional speakers.


Multitouch sensors based on laser technology turn use of the natural user interface into an experience. By touching directly, more than 80 users can simultaneously get information about news and activities around the ring world.


The interactive wall is not only a central information system, but also an innovative advertising tool and medium for public viewings."

Heiko Hoffman of Sensory-Minds recently joined the NUI-Group. Here is his response to a question on the NUI-Group forum about how the system's sensors work:


"The sensor works like a spinning radar gun, or like a wiper in a car; as with a radar gun, you get the distance to the object. It's not very difficult, when you know the position of the sensor and the angle of the beam, to get the X,Y position. Yes, you don't need to touch the surface, but we arranged it so that the radar beam is very close to the screen, so it seems that you have to touch the surface.

At the moment the system has limitations because each sensor gets coordinates independently, and this means that you get shadows. This is no problem because people interact very fast. We are working on a system where the data from the sensors are put together, and that would be the solution for this problem."
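The sensor geometry Heiko describes (a spinning beam that returns a distance and a sweep angle) maps to screen coordinates with basic trigonometry. Here is a minimal sketch of that conversion; the function name and the assumption that the sensor sits at one corner of the screen are mine, not Sensory-Minds' implementation:

```python
import math

def beam_to_xy(distance, angle_deg, sensor_x=0.0, sensor_y=0.0):
    """Convert a (distance, angle) reading from a spinning laser/radar
    sensor into X,Y coordinates relative to the sensor's mounting point."""
    angle = math.radians(angle_deg)
    x = sensor_x + distance * math.cos(angle)
    y = sensor_y + distance * math.sin(angle)
    return x, y

# A touch detected 2 m from a sensor mounted at the screen's lower-left
# corner, 30 degrees up from the horizontal edge:
x, y = beam_to_xy(2.0, 30.0)
```

The "shadow" limitation he mentions follows directly from this picture: a beam from a single corner cannot see a touch hidden behind another hand, which is why merging data from several sensors is the fix.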



Note:

I'll be writing a few more posts updating the work of past and present NUI-group members as well as news from the commercial multi-touch & gesture community.  Be sure to check back soon, or subscribe to this blog!

Locative & Mobile Media and Martijn de Waal's The Mobile City blog

I subscribe to Martijn de Waal's blog, The Mobile City:  Locative & Mobile Media/Urban Culture/Identity, partly because of my interest in ubiquitous/pervasive computing (specifically how technology can support interaction and collaboration between people across screens of all sizes in public spaces), and partly because I am interested in looking at the ways different disciplines adopt, adapt, and transform emerging interactive technologies.

If you are interested, take a look at my recent post about Martijn de Waal and topics related to locative/mobile media on The World Is My Interface blog.

Oct 29, 2009

UX of ITV: The User Experience and Interactive TV (or Let's Stamp Out Bad Remote Controls)

I prefer to watch our flat-panel HDTV in the dark, and usually I watch something I've DVR'd.  Because of the nature of my remote control, I often have to interrupt the immersive experience, turn on the light, poke at the buttons, and start over again.


Not long ago, I had the flu and I thought I'd try out the interactive channel from my satellite TV provider.  My interactive experience was about the same as interacting with the DVR!

What the satellite TV company offered was not really ITV. It was BIRC: Bad Interactive Remote Control. You know what I'm talking about. In the era of the WiiMote, most of us still have to interact with our TV systems as if it were 1982.


There is hope!
A good number of researchers are working hard to make a good user experience for ITV, including mobile TV, a reality. If you are reading this post, you will most likely appreciate some of the articles and links below:

Konstantinos Chorianopoulos. Research Methods in Interactive TV (pdf)
(Konstantinos Chorianopoulos is a lecturer and Marie Curie Fellow in the Department of Informatics at the Ionian University, Corfu, Greece. He founded UITV.INFO, a site with research and information about interactive television.)
Enhancing Social Sharing of Videos: Fragment, Annotate, Enrich, and Share (Proceedings of the 16th ACM International Conference on Multimedia) Pablo Cesar, Dick C.A. Bulterman, David Geerts, Jack Jansen, Hendrik Knoche, and William Seager (This research paper includes a discussion of the concept of "personal secondary screens" on mobile devices that display information that can be shared and annotated while watching content on a large display.)


Sample of presentations and workshops at EuroITV 2009:
The Connected Home Redefines the TV Experience. Jan Van Bogaert (Alcatel-Lucent)
The Internet Revolution Will Be Televised. Rich Ezekiel (Yahoo! Connected TV)
Needs, emotions, experience! Marc Hassenzahl (Folkwang University)
Marian F. Ursu, Pablo Cesar, and Doug Williams. Enhancing Social Communication and Belonging by Integrating TV Narrativity and Game Play
Rodrigo Laiola Guimaraes. Telling Stories and Commenting on Media: The Next Generation of Multimedia Authoring Tools (pdf)
Ana Vitoria Joly. Designing iTV Interfaces for Preschool Children
Claus Knudsen and Roel Puijk. Television and Presence: Experiments in interaction and mediation in a digital environment
Marianna Obrist, Hendrik Knoche, and Damien Alliez. Tutorial: User Experience in TV-centric Services: What to Consider in the Design and Evaluation? (pdf)
David Geerts. Tutorial: Designing and Evaluating the Sociability of Interactive Television (pdf)
Artur Lugmayr. Tutorial: Ambient Media - An Introduction by Case Studies (pdf)
Janez Zaletelj, Mladen Savic, and Marko Meza. Real-time Viewer Feedback in the iTV Production
Skylla Janssen. Interactive Television Format Development - Could Participatory Design Bridge the Gap?
Jan Hess and Volker Wulf. Explore Social Behaviour around Rich-Media: A Structured Diary Study
Dimitri Schuurman, Tom Evens, and Lieven De Marez. A Living Lab Research Approach for Mobile TV

RELATED 
Ana Vitoria Joly. Design and Evaluation of Interactive Cross-platform Applications for Pre-literate Children.  IDC 2007 Proceedings: Doctoral Consortium
Ana Vitoria Joly. Interactive Cross-platform Environments for Young Children (pdf)
C. Hesselman, W. Derks, J. Broekens, H. Eertink, M. Guelbahar, and R. Poortinga, "Facilitating an Open Ecosystem to Enhance Interactive TV Experiences," Workshop on Sharing Content and Experiences with Social Interactive Television, co-located with the European Interactive TV Conference (EuroITV2008), Salzburg, Austria, July 2008
R. Kernchen, P. Cesar, S. Meissner, M. Boussard, K. Moessner, C. Hesselman, and I. Vaishnavi, "Intelligent Multimedia Presentation Delivery in Ubiquitous Multi-Device Scenarios," IEEE MultiMedia (IEEE MM), 17(2), April-June 2010 [in print]
P. Cesar, D.C.A. Bulterman, and J. Jansen, "Leveraging the User Impact: An Architecture for Secondary Screens Usage in an Interactive Television Environment," Springer/ACM Multimedia Systems Journal (MSJ), 15(3): 127-142, 2009
P. Cesar, D.C.A. Bulterman, and Luiz Fernando Gomes Soares, "Introduction to Special Issue: Human-centered Television - Directions in Interactive Digital Television Research," ACM Transactions on Multimedia Computing, Communications, and Applications, October 2008

Somewhat Related
My preliminary thoughts about Adobe's Open Screen project

Oct 28, 2009

libTISCH, a multi-touch development framework with multi-touch widgets and more!

For techies and the tech-curious who like technologies that support collaboration and multi-touch interaction,  this is great news!

Florian Echtler announced the first stable release of libTISCH, a multi-touch development framework, which can be found on Sourceforge. TISCH stands for Tangible Interaction Surfaces for Collaboration between Humans. libTISCH, a C++ software framework, provides a means of creating GUIs based on multi-touch and/or tangible input devices.

Here is how it works:

Architecture Layers


Here is information from the libTISCH announcement:


Highlights of this release are, among others, the following features:

- ready-to-use multitouch widgets based on OpenGL
- reconfigurable, hardware-independent gesture recognition engine
- support for widely used (move, scale, rotate..), pre-defined gestures
 as well as custom-defined gestures

- hardware drivers for FTIR, DI, Wiimote, DiamondTouch..
- TUIO converters: source and sink

- cross-platform: Linux, MacOS X, Windows (32 and 64 bit)
- cross-language: C++ with bindings for C#, Java, Python

libTISCH has a lot to offer the multitouch developer. For example,

the textured widgets enable rapid development of applications for many
kinds of multi-touch or tangible interfaces. The separate gesture
recognition engine translates a wide range of highly configurable
gestures into pre-defined or custom events, which are then acted on
by the widgets. While the lower layers of libTISCH provide
functionality similar to tbeta, touche, etc. (you can interface existing
TUIO-based software with libTISCH in both directions), it goes far
beyond them.
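The gesture-engine idea described above, translating raw touch-point motion into pre-defined events such as move, scale, and rotate, can be illustrated with a toy recognizer. This is not libTISCH's actual API; the function, its thresholds, and the frame-to-frame comparison are invented purely for illustration:

```python
import math

def classify_two_finger_gesture(p0, p1, q0, q1,
                                move_thresh=10.0, scale_thresh=0.1,
                                rotate_thresh=0.1):
    """Given two touch points before (p0, p1) and after (q0, q1) a frame,
    report which of the classic move/scale/rotate gestures occurred.
    Threshold values are arbitrary illustration choices."""
    def centroid(a, b):
        return ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    events = []
    # Did the midpoint between the fingers travel? -> "move"
    if dist(centroid(p0, p1), centroid(q0, q1)) > move_thresh:
        events.append("move")
    # Did the finger spacing change? -> "scale"
    d_before, d_after = dist(p0, p1), dist(q0, q1)
    if d_before > 0 and abs(d_after / d_before - 1) > scale_thresh:
        events.append("scale")
    # Did the line between the fingers turn? -> "rotate"
    if abs(angle(q0, q1) - angle(p0, p1)) > rotate_thresh:
        events.append("rotate")
    return events

# Two fingers spreading apart around a fixed midpoint:
print(classify_two_finger_gesture((100, 100), (200, 100),
                                  (80, 100), (220, 100)))  # → ['scale']
```

A real engine like libTISCH's generalizes this: gestures are declared from reusable features rather than hard-coded, which is what makes the recognition reconfigurable and hardware-independent.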

More information about the library and underlying architecture can be
found on http://tisch.sf.net/ and in the Sourceforge wiki at
http://sourceforge.net/apps/mediawiki/tisch/


TISCH Project Wiki

RELATED
Florian is on the scientific staff at the Technische Universität München in Germany. Be sure to check out his webpage.

I especially like the concept of the MeTaTop: "A Multi-Sensory Table Top System for Medical Procedures" that is linked from Florian's website.

Interactive Multimedia Across Platforms and Screens: Adobe's Open Screen Project; MEX Mobile User Experience Manifesto.... (Please don't annoy the user!)

I'm not sure what I think about ubiquitous Flash 10.1 and Adobe's Open Screen project. I like the idea of anything that is seamlessly cross-platform, but I shudder to think that this might unleash a wave of unwanted or annoying "push" advertising on on-the-go screens of all sizes. I'm assuming web developers, along with TV ad producers, will jump on this train without fully thinking about how their applications and designs will play out in the off-the-desktop, digital-out-of-home world.

I decided to take a look, drill down through the hype, and share a few links related to this topic.

Adobe Pushes for a Flash-ier Mobile Web
Rob Pegoraro, Faster Forward, Washington Post (10/5/09)
"Are you anxious to bring Flash to the mobile Web, even if it means being subjected to some over-eager Web coder's song-and-dance routine? Or would you rather do without it on the go, even if that means having to switch to a "real" computer to use some Web sites' features?"

Hopefully the "over-eager web coders" will heed the MEX Manifesto:

MEX:  Mobile User Experience 2009 Manifesto (pdf)
"The Manifesto sets out our beliefs as to how user-centred design principles can enhance the experience of multi-platform digital services."

A framework for user journeys in a multi-platform world: Marek Pawlowski, founder of MEX

MEX: User experience journeys in a multi-platform environment from Marek Pawlowski on Vimeo.

"User experiences are evolving into increasingly complex sets of interactions between multiple devices.  In this video presentation, Marek Pawlowski of the MEX Mobile User Experience strategy forum, shows how a framework can be used to map user journeys through the multi-platform environment."

"Unencumbered by wires, information is flowing into every corner of our world at an ever increasing rate and through an ever increasing range of digital platforms. The single greatest challenge facing digital industries is understanding how this explosion of data will be woven into the fabric of consumers' lives." -- Marek Pawlowski, founder of MEX.

MEX Blog 

OPEN SCREEN VIDEO

Open Screen Project from Vyshak V on Vimeo.
"The Open Screen Project is an industry-wide initiative, led by Adobe and backed by other industry leaders who all share one clear vision: Enable consumers to engage with rich Internet experiences seamlessly across any device, anywhere. Partners in the Open Screen Project are working together to provide a consistent runtime environment for open web browsing and standalone applications — taking advantage of Adobe® Flash® Player and, in the future, Adobe® AIR®. This consistent runtime environment will remove barriers to publishing content and applications across desktops, mobile phones, televisions, and other consumer electronics." Learn more

Reinventing Storytelling in the Digital Age Across Platforms, Across Screens

NAB 2009 presentation by Shantanu Narayen of Adobe and A.D. Albers of Disney Interactive Media Group
Adobe and NVIDIA Deliver Rich Web Experiences on Netbooks and Mobile Devices
Reuters (10/5/09)
"At Adobe MAX, Adobe's worldwide developer conference, Adobe Systems Incorporated and NVIDIA Corporation... announced that both companies are bringing uncompromised browsing of rich Web content to netbooks, smartphones and smartbooks built with NVIDIA GPUs. The companies have been working closely together as part of the Open Screen Project to optimize and dramatically improve performance of Flash Player 10.1 by taking advantage of GPU video and graphics acceleration on a wide range of mobile Internet devices. NVIDIA customers embracing Flash Player 10.1 for their new devices include HP, Lenovo, Samsung, Acer, Asus and more..."


RIM Joins Open Screen Project, Reuters (10/4/09)


Honey, I Shrunk the Flash Player. Simon Bisson and Mary Branscombe, ZDNet (10/12/09)
Teaming up with Adobe and the Open Screen Project - Google Blog (10/5/09)


Paramount Digital Entertainment Launches Interactive Thriller on MySpace. Tracy Swedlow, InteractiveTV Today (10/28/09)
"The company says that it can deliver the show's multiple interactive elements to viewers across devices using Adobe Flash Player and Adobe AIR, 'because of efforts by the Open Screen Project, an industry-wide initiative led by Adobe and supported by PDE and close to 50 other industry leaders, to enable people to engage with rich Internet experiences across any device, anywhere.'"

Oct 26, 2009

The Designer's Accord and Sustainability

Goal of the Designers Accord:


"The goal of the Designers Accord is to bring the creative community together as a social and cultural force to create positive impact. We will do this by educating ourselves and our teams about environmental social justice issues, and talking to each and every client and customer about the consequences and opportunities provided by our work. We will create sustainable products and services, and share information about how to do that with each other."

According to an article in TIME, over the course of two years more than 170,000 design firms, corporations, and colleges around the world have adopted the Designers Accord. The accord grew out of the concerns of Valerie Casey, a designer who, while working on products such as diapers with sensors, realized that many of her creations were not in line with environmental sustainability.

Although I try to conserve energy, I'm conscious of environmental issues on a personal level, and I am fortunate to work in a "green"-certified school a couple of days a week, I hadn't thought much about what people can do to make a difference on a global scale. The design field is vast; it includes the people who design the packaging of products. Every day, tons of well-designed packaging is carted away to a landfill somewhere on the planet. Every day, large electronic components, in the form of interactive displays, computers, and so forth, are shipped around the planet in large amounts of packaging, and the components themselves are often designed in ways that needlessly waste energy and resources, whether during manufacturing or through the large amounts of electricity they draw in use. When these components are no longer needed, they do not decompose into the earth.


The Designers Accord recently sponsored the Global Summit on Design Education & Sustainability on October 23-24, 2009 in San Francisco. Some of the thought leaders at the conference included Piper Kujac, UC Berkeley Extension; Barbara Sudick, California State University, Chico; Nathan Shedroff, California College of the Arts; Ricardo Gomes, San Francisco State University; Scott Boylston, Savannah College of Art & Design; Jeremy Faludi, Worldchanging; Heidrun Mumper-Drumm, Art Center College of Design; and Cindy Gilbert, the Biomimicry Institute. (A complete list of the educational institutions that participated in the conference can be found on the Designers Accord website.)

Designers Accord members encourage people to organize Town Hall events with others in their local design communities. Information about the Town Hall process can be found in the Events section of the Designers Accord website. For more information about the Designers Accord, follow the links below.

The Designers Accord

The Designers Accord Educational and Corporate Adopters

Designers Accord Design Firms (Core77 Design Directory)
(Under the search box, select 'search for designers accord adopters only')

Oct 25, 2009

Interactive multi-touch for sound design, dj-ing, and music creation

NUI-Group member Christian Bannister, the musician, designer, and developer behind Subcycle Labs, has been experimenting with music and multi-touch technology and interaction. Here's a demonstration of what he's come up with so far:


multi-touch the storm - interactive sound visuals - subcycle labs from christian bannister on Vimeo.
"The big picture goal of this project is to bridge the gap between sound visualization and musical instrument. With multi-touch interaction it is possible to manipulate multiple characteristics of a sound—directly and visually. Right now a lot of electronic music involves staring at the back of the performers laptop. This is a shame because in many cases a lot of really interesting things are happening on the computer that the audience is completely unaware of. This project hopes to create a common visual language and experience for the electronic musician and the audience by enhancing the perception of sound and music on both sides...These sketches are built with Processing and Max/MSP networked with OSC on a single computer..." -Christian Bannister
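Bannister networks Processing and Max/MSP with OSC (Open Sound Control). On the wire, an OSC message is just an address pattern, a type-tag string, and big-endian argument values, each padded to a 4-byte boundary. Here is a minimal pure-Python encoder sketch; the address and parameter value are made up for illustration:

```python
import struct

def _osc_string(s):
    """Null-terminate a string and pad it to a 4-byte boundary (OSC rule)."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address, *args):
    """Encode an OSC message with int32 ('i') and float32 ('f') arguments."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # big-endian float32
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # big-endian int32
        else:
            raise TypeError("only int/float are shown in this sketch")
    return _osc_string(address) + _osc_string(tags) + payload

# e.g. sending a (hypothetical) filter-cutoff parameter toward a Max patch:
msg = osc_message("/filter/cutoff", 440.0)
```

In practice the resulting bytes would be sent over UDP between the two programs; the simplicity of this framing is a big part of why OSC became the lingua franca for networking music software with visual tools like Processing.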



JazzMutant Lemur
The multi-touch tech company now known as Stantum evolved from JazzMutant. JazzMutant was founded in 2002 by Guillaume Largillier and Pascal Joguet, who were joined by Julien Olivier in 2003. The original focus was to create a multi-touch controller for music applications, and the Lemur was born. It is now in version 2, with features such as a gesture object that provides three ways for people to interact with sound, extended scripting abilities, and remote control of your computer's mouse cursor or keyboard. Stantum recently developed a next-generation multi-touch screen system for use in mobile devices such as smartphones and netbooks. (I'll discuss this further in a future post.)



Mapping Ableton Live to JazzMutant's multi-touch Lemur for sound design:


Additional tutorials can be found on the JazzMutant YouTube channel.


Ableton Live, the software used in the above video, will include Max. Max/MSP is now known as Max 5:
"In use for over twenty years by performers, composers, artists, scientists, teachers, and students, Max is the way to make your computer do things that reflect your individual ideas and dreams. Version 5 is a dramatic transformation of Max that begins a new chapter in the program's history."

RELATED
Video:  Max for Live
Max is a product of Cycling '74
Cycling '74 created the Make Controller Kit, which includes fully programmable controllers. The kit is network-based, is capable of working with actuators, and can read sensor information into Max.
Jamoma, a platform for interactive art-based research and performance. Jamoma is the prototyping environment for SpatDIF, the Spatial Sound Description Interchange Format, and GDIF, the Gesture Description Interchange Format.
GDIF: Gesture Description Interchange Format, a tool for music related movements, actions, and gestures 
Stantum's Mobile Phone Multi-touch Interface:  Demonstration of precise interactions on a resistive touch screen

How Stantum's Resistive Multi-Touch Screen Works


GDIF: Gesture Description Interchange Format, a tool for music-related movements, actions, and gestures.

There has been a flurry of work in the computer music technology world that relates to what has been going on with interactive display technology and multi-touch & gesture interaction. I came across a link to the GDIF website while searching for information about interactive music and the use of multi-touch technologies for a future blog post.

So what is GDIF?

"The Gesture Description Interchange Format (GDIF) is being developed as a tool for streaming and storing data of music-related movements, actions, and gestures. Current general-purpose formats developed within the motion capture industry and biomechanical community (e.g. C3D) focus mainly on describing low-level motion of body joints. We are more interested in describing gesture qualities, performer-instrument relationships, and movement-sound relationships in a coherent and consistent way. A common format will simplify working with different software, platforms and devices, and allow for sharing data between institutions." (The Jamoma environment is used to prototype GDIF.)
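The layered idea in that description, low-level motion data alongside higher-level gesture qualities for the same moment in time, can be pictured as parallel streams keyed by hierarchical paths. A toy sketch follows; the paths, labels, and values are my own invention for illustration, not the actual GDIF specification:

```python
# Toy illustration of layered gesture description: one capture frame
# carries both raw sensor values and derived, qualitative descriptors,
# keyed by hierarchical paths (all paths invented for this sketch).
frame = {
    "/raw/accelerometer": (0.02, -0.41, 9.79),   # m/s^2, straight from sensor
    "/cooked/quantity-of-motion": 0.37,          # derived scalar, 0..1
    "/cooked/gesture/phase": "stroke",           # qualitative label
}

def layer(frame, prefix):
    """Select all entries belonging to one description layer."""
    return {k: v for k, v in frame.items() if k.startswith(prefix)}

cooked = layer(frame, "/cooked")
```

The point of such a format is that a motion-capture tool, an analysis patch, and an archive can each read the layer they care about while sharing one stream.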


Alexander Refsum Jensenius initiated the GDIF project. He has written a variety of articles about music, gestures, movement, and emerging technologies.


Here's Alexander's bio: "Alexander (BA, MA, MSc, PhD) is a music researcher and research musician working in the fields of embodied music cognition and new interfaces for musical expression (NIME) at the University of Oslo and at the Norwegian Academy of Music. He studied informatics, mathematics, musicology, music performance and music technology at UiO, Chalmers, UC Berkeley and McGill. Alexander is active in the international computer music community through a number of collaborative projects, and as the initiator of GDIF. He performs on keyboard instruments and live electronics in various constellations, including the Oslo Laptop Orchestra (OLO)."




Related Publications
Godoy, R. I., E. Haga, and A. R. Jensenius (2006b). Playing 'air instruments': Mimicry of sound-producing gestures by novices and experts. In S. Gibet, N. Courty, and J.-F. Kamp (Eds.), Gesture in Human-Computer Interaction and Simulation, GW 2005, Volume LNAI 3881, pp. 256-267. Berlin: Springer-Verlag.
Jensenius, A. R. (2009). Motion capture studies of action-sound couplings in sonic interaction. STSM COST Action SID report. fourMs lab, University of Oslo.
Jensenius, A. R. (2007). Action - Sound: Developing Methods and Tools to Study Music-related Body Movement. PhD thesis, Department of Musicology, University of Oslo, Norway.
Jensenius, A. R., K. Nymoen, and R. I. Godoy (2008). A Multilayered GDIF-Based Setup for Studying Coarticulation in the Movements of Musicians. Proceedings of the International Computer Music Conference, 24-29 August 2008, Belfast.
Jensenius, A. R., T. Kvifte, and R. I. Godoy (2006). Towards a gesture description interchange format. In N. Schnell, F. Bevilacqua, M. Lyons, and A. Tanaka (Eds.), NIME '06: Proceedings of the 2006 International Conference on New Interfaces for Musical Expression, Paris, pp. 176-179. Paris: IRCAM - Centre Pompidou.
Kvifte, T. and A. R. Jensenius (2006). Towards a coherent terminology and model of instrument description and design. In N. Schnell, F. Bevilacqua, M. Lyons, and A. Tanaka (Eds.), Proceedings of New Interfaces for Musical Expression, NIME 06, IRCAM - Centre Pompidou, Paris, France, June 4-8, pp. 220-225. Paris: IRCAM - Centre Pompidou. [PDF]
Marshall, M. T., N. Peters, A. R. Jensenius, J. Boissinot, M. M. Wanderley, and J. Braasch (2006). On the development of a system for gesture control of spatialization. In Proceedings of the 2006 International Computer Music Conference, 6-11 November, New Orleans. [PDF]

RELATED
"Sonic Interaction Design is the exploitation of sound as one of the principal channels conveying information, meaning, and aesthetic/emotional qualities in interactive contexts."
The SID (Sonic Interaction Design) COST Action has four working groups:
WG1: Perceptual, cognitive, and emotional study of sonic interactions
WG2: Product sound design
WG3: Interactive art and music
WG4: Sonification



"SoundHack was my main thing for a long time, and I poured a lot of effort into it. It was the place I put my ideas. I did have something of a mission with SoundHack. I wanted to take some computer music techniques that were only used in academia, and get them out there so that all types of musicians could use them." - Tom Erbe, SoundHack Spectral Shapers


Csound Blog "Old School Computer Music"
"Csound is a sound and music synthesis system, providing facilities for composition and performance over a wide range of platforms. It is not restricted to any style of music, having been used for many years in the creation of classical, pop, techno, ambient, experimental, and (of course) computer music, as well as music for film and television."-Csound on Sourceforge


Quote from Dr. Richard Boulanger (father of Csound):
"For me, music is a medium through which the inner spiritual essence of all things is revealed and shared. Compositionally, I am interested in extending the voice of the traditional performer through technological means to produce a music which connects with the past, lives in the present and speaks to the future. Educationally, I am interested in helping students see technology as the most powerful instrument for the exploration, discovery, and realization of their essential musical nature - their inner voice."


Upcoming post about innovations at Stantum:
I'll be focusing on Stantum and its music and media technologies division, JazzMutant, in my next post. It is interesting to note that the co-founders of Stantum, Guillaume Largillier and Pascal Joguet, have backgrounds in electronic music. Guillaume specializes in multi-modal user interfaces and human-machine interface technologies. Pascal has a background in physics and electronics and has worked as a sound engineer.


My music back-story:



The very first computer-related course I took was Computer Music Technology (in 2003), since I play an electronic MIDI/digital keyboard and had previously tried to teach myself a few things, long before computers and related technologies were "easy" for me to figure out. During the mid-'90s, I tried my hand at Dr. Richard Boulanger's Csound and tried to acquaint myself with tools from Cycling '74, but I gave up. Not long after that, I bought the first version of MOTU's Freestyle, which worked nicely on my Performa 600 hooked up to my Ensoniq 32, after the nice people at MOTU sent me an update that was compatible with my set-up. Later on, I came across Tom Erbe's SoundHack freeware.


A lot has changed since then! 




Oct 24, 2009

VisWeek 2009: Information Visualization in a 2.0 World

I've been too busy this year to attend conferences, so, sadly, I missed the IEEE VisWeek 2009 conference held in October in Atlantic City, NJ. The VisWeek program included members of three related IEEE groups: IEEE Information Visualization (InfoVis), IEEE Visual Analytics Science and Technology (VAST), and IEEE Visualization (Vis).

I was hoping to include a range of videos and graphics in this post, but I decided it would be best to devote a few future posts to information visualization when I have time to select the best digital representations of this vibrant and exciting field.

I've organized this post by presenting descriptions of the three IEEE groups that participated in the VisWeek conference, and then highlighting the panels, workshops, tutorials, and paper presentations I would have attended. Many more topics caught my attention than I have time to share in this post. I promise to dig deeper and include stories about the people behind the data and information, along with their work, in future posts.
The descriptions below were taken from the Vis 09 website:
IEEE InfoVis
"Computer-based information visualization centers around helping people explore or explain abstract data through interactive software that exploits the capabilities of the human perceptual system. A key challenge in information visualization is designing cognitively useful spatial mappings of abstract datasets that are not inherently spatial, and accompanying the mappings with interaction techniques that allow people to intuitively explore the data. Information visualization draws on the intellectual history of several traditions, including computer graphics, human-computer interaction, cognitive psychology, semiotics, graphic design, statistical graphics, cartography, and art. The synthesis of relevant ideas from these fields with new methodologies and techniques made possible by interactive computation are critical for helping people keep pace with the torrents of information confronting them."

IEEE VAST
"Visual analytics is the science of analytical reasoning supported by highly interactive visual interfaces. People use visual analytics tools and techniques to synthesize information into knowledge; derive insight from massive, dynamic, and often conflicting data; detect the expected and discover the unexpected; provide timely, defensible, and understandable assessments; and communicate assessments effectively for action."
"Visual Analytics requires interdisciplinary science, going beyond traditional scientific and information visualization to include statistics, mathematics, knowledge representation, management and discovery technologies, cognitive and perceptual sciences, decision sciences, and more. Your submission should help develop and/or apply the science of Visual Analytics, clearly showing an interdisciplinary approach."
IEEE Vis
"IEEE Vis is the premier forum for visualization advances in science and engineering for academia, government, and industry. This event brings together researchers and practitioners with a shared interest in techniques, tools, and technology. The year 2009 marks the 20th anniversary of IEEE Vis, and it will celebrate the maturation of visualization into an established scientific discipline."

Here are the workshops, panels, and paper presentations I would have attended if I had been at the VisWeek conference:
Changing the World with Visualization
Organizer: Robert Kosara
Panelists: Sarah Cohen, Jerome Cukier, Martin Wattenberg
"With large amounts of data becoming available, and being accessible more easily, visualization has to step in to provide means to explore and understand that data. How can we enable people to explore the data that is of importance to them? How can we present data in a way that is not detached, but rather prompts a reaction (yet does not distort the data)? How can visualization change the world (and what good is it if it cannot)? And how do we do all that so it still has academic value?"
(Note: Robert Kosara was my Info Visualization and Visual Communication professor at UNC-Charlotte.  He is the author of the Eager Eyes website and blog. He has great links related to his passions.)
Putting Visualization on the Web
Organizers: Robert Kosara, Nathan Yau, Andrew Vande Moere
"The World Wide Web is a primary source of information for many people, but not everything that can be found online when searching for "visualization" would be recognized as such by VisWeek attendees. There is clearly a place for data art, information graphics, mash-ups, etc. -- but what is generally considered the core of visualization research (or visual analytics, for that matter) seems underrepresented. And while it is not difficult to find images of many techniques, there is little material that explains, demonstrates, compares, and critiques them. All this is even more true for scientific visualization than for information visualization. To reach more people in the real world, we need to understand the mechanisms for disseminating knowledge outside of the conference or journal paper. A few examples exist where academic visualization research has become successfully popularized, e.g., TreeMaps, ThemeRiver (used in last.fm), StreamGraphs (a well-received New York Times chart of box office revenues). More examples like these are clearly needed, as is more open and accessible information from the people in the field. A blog or website is not just a place for dispensing wisdom; it can provide a platform for experiments and interaction. It also makes it possible to get feedback from readers about real-world problems that might be tackled in research projects. And it can even become the subject of research; Many Eyes is a wonderful example of this. The proposed workshop will give participants the opportunity to learn about experiences, get hints, and discuss issues. Such issues include academic blogging (talk about research before it is published?), finding an audience, promoting a site, etc. The goal of the workshop is to collect ideas and best practices, and to come up with useful solutions to problems posed."
Comment: The topics covered in the two workshops listed above interest me, since I get so much information from the web, and I particularly enjoy taking in information visually, especially if it conveys meaning in an efficient, elegant way. In my opinion, web-based "social" information visualization has the potential to embed the human story behind the numbers, trends, and factoids that bombard us every day. This is not simply "information overload". It is information numbness. As a consequence, it is sometimes difficult for us to tell the difference between what is important or essential, what is true, and what is not.
VisWeek Workshop: Video Analytics
Organizers:  Nancy Chinchor, William Ribarsky, Michael Christel
"The workshop will focus on tools for analyzing videos whose content ranges from persuasive videos (ads, propaganda, news) to YouTube videos. The purpose of video analysis is to gain insight into the contents of large collections of video and to focus detailed analysis on a smaller set of videos out of that collection. The capability to explore, categorize, and annotate videos is required by multiple types of analysts. User interfaces and techniques that support these activities including techniques in image processing are all central concerns of this workshop. However, the workshop goes beyond retrieval of images to interacting with a large video collection that one may have received from a query." 
(Note: Dr. Ribarsky is the director of the Charlotte Visualization Center. "The VisCenter leads the latter effort through the development of the Renaissance Situation Room, located in the VisCenter, which includes a multi-touch table, tiled display, immersive stereoscopic environment, and integrated interaction for all these tools. New interactive visualizations of the urban growth model and related models are displayed and used within this environment." -- RENCI at UNCC)
Comment
I thought this would be an interesting workshop, because as a school psychologist, I use digital video in my work with students who have special needs, including severe autism. As a consequence, I have an assortment of digital video files to analyze, organize, and retrieve. I need some tools to help me do this in an efficient and meaningful way. I work with a team of people, and making this information easy for my colleagues to access and share would be important to me.
Collaborative Visualization on Interactive Surfaces (CoVIS)
Organizers: Petra Isenberg, Michael Sedlmair, Dominikus Baur, Tobias Isenberg, Andreas Butz
"It is common for small groups of people to gather around visual displays of information to discuss or interpret the information to form decisions.  Groups can share the task load of exploring large and complex datasets and can share various interpretations of a dataset when working together.  However, tools to support synchronous collaboration between several people in their data analysis are still relatively scarce.  Traditionally,  visualization and visual analytics tools have been designed from a single-user perspective and for desktop computers.  While hardware such as multi-touch displays and network capabilities have emerged that lend themselves especially well to collaboration, software support for collaboration around visualizations is still relatively scarce.  One of the reasons is that single user systems do not necessarily translate well to collaborative scenarios or interactive surfaces and require specific re-design.  The design of digital systems, therefore, poses additional challenges:  we need to understand (a) how people collaboratively work with visual representations of data and which methods they use to solve information analysis tasks as a team, and (b) what the exact design requirements are for collaborative visual analysis scenarios.  In this workshop we would like to discuss these challenges and discuss the role of interactive surfaces as an emerging technology for supporting collaborative visualization and visual analytics settings."
Comment
If I were at VisWeek, I'd attend this workshop because I've been thinking about this topic for a while. One idea I've been toying with is the concept of sharable data widgets that people could use at the table to do real-time data analysis, as well as capture real-time data from external sources (such as information from outdoor videocams, motion sensors, etc.) and data sent from colleagues who are remotely located. This would allow people to work collaboratively as a group, but also provide a means to distribute some of the data analysis tasks during the session.
VisWeek Tutorial: Exploring Design Decisions for Effective Information Visualization
Organizers: Jo Wood, Jason Dykes, Aidan Slingsby
"This tutorial provides an opportunity for participants to design their own information visualization of some sample datasets. Using interactive software and data provided by the instructors, issues of color, layout, symbolization and animation are explored. Results from participants’ visualizations are compared along with those from the presenters allowing insights into the data and good practice in information visualization design to be gained. Participants should be equipped with their own laptop capable of running Java applications. Prior to the session participants are strongly encouraged to download the free software and data that will be used in the tutorial. The tutorial is suitable for anyone working with complex datasets who wishes to improve their data visualization design skills, in particular designing visualization solutions that match the research questions asked and the data to be analyzed."

Comment
This workshop caught my eye because it provides an opportunity to participate in hands-on visualization activities.   There are plenty of free educational, mental health, and public health data sets that I'd like to play around with, if I had the time and the tools!
VAST Capstone Panel: How Interactive Visualization Can Assist Investigative Analysis:  Views and Perspectives from Domain Experts
Organizer:  John Stasko
Panelists:  Sarah Cohen, Lawrence Hunter, Joe Parry
"Interactive visualization could become an essential tool in the work of investigative analysts.  Visualization could help analysts to explore large collections of data and documents, supporting the analysts' investigative sense-making processes.  This panel gathers recognized leaders  from three important domains,  investigative reporting, biosciences (genomics), and intelligence analysis, all that include a fundamental investigative analysis component.  The panelists will provide a glimpse into their worlds, describing and illustrating the data they examine, the goals and methods of their analysts, and the culture of their respective professions.  In particular, the panelists will explore how visualization could potentially benefit investigators from their domain and they will provide guidance for visualization researchers seeking to collaborate with their colleagues."
Comment:
I think that the topics covered by this panel are important. Much of the important work that needs to be done by computer scientists and related technologists is interdisciplinary or transdisciplinary. More disciplines rely on technology than ever before, and this is not limited to academia. In my opinion, the research and statistics courses that are mandatory for most graduate students should also include content on information and data visualization relevant to the domain. Often these research and statistics courses are taught by domain experts who also specialize in research and statistics. Why not take it to the next level?!
The IEEE Visualization & Graphics Community website has links to most of the VisWeek 2009 conference abstracts. There are some videos and slides available online. Here are a few more topics that I thought were interesting:
FinVis: Applied Visual Analytics for Personal Financial Planning (pdf)
Stephen Rudolph, Anya C. Savikhin, David S. Ebert,  Purdue University
Connecting the Dots in Visual Analysis (pdf)
Yedendra B. Shrinivasan, Eindhoven University of Technology; David Gotz, Jie Lu, IBM Research
Tarik Crnovrsanin, Chris Muelder, Carlos Correa, Kwan-Liu Ma, University of California, Davis
Visual Analysis of Graphs with Multiple Connected Components (pdf)
T. von Landesberger, M. Görner, T. Schreck, Technische Universität Darmstadt
I'm marking my calendar and hoping I'll have the opportunity to attend the 2010 conference:
IEEE VisWeek 2010
Salt Lake City, Utah
October 24-29, 2010