The people at Stantum have been working hard to improve multi-touch technology, focusing on smaller tablet-sized systems. Stantum is a company I've been following for several years, from the time it was known as Jazz Mutant. I have been impressed by Stantum's focus on the needs of people as well as the company's careful attention to important details.
I'm pleased to see that the company has an idea of how its multi-modal technology can support multi-touch in education: "Ambidexterity and multi-modality are the two pillars of Stantum's core project – making the use of touch-enabled devices more creative and productive. Amongst others, there is one field of application where we truly see a soaring need for ambidexterity and multi-modality – augmented textbooks." -Guillaume Largillier
At the Society for Information Display's Display Week exhibition this past May, Stantum introduced a new palm rejection feature for its Interpolated Voltage Sensitivity technology. This technology provides users with a more natural way to interact with the interface and application content on tablets. The technology supports Android's multi-touch framework and is also Windows 7 certified. The palm rejection feature will be a welcome improvement for future multi-touch applications designed for education settings, where it is likely that more than one hand (or person) might be interacting with content on the screen at the same time.
Below are two videos that provide a glimpse of Stantum's innovations:
Stantum's technology can enable ten simultaneous touches, is highly responsive, and supports high-resolution content. According to a May press release, "Palm rejection is available as an API (application programming interface) to Windows and Android operating systems on x86 and ARM platforms. IVSM touch modules are offered to OEMs through the company’s Qualified Manufacturers Partners, comprising tier-one touch-screen manufacturers with high-volume production capabilities. More information is available at info@stantum.com"
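Stantum hasn't published how its palm rejection works, but a common heuristic in touch systems is to discard contacts whose footprint is too large to be a fingertip or stylus. The sketch below illustrates that idea only; the class, function names, and the 20 mm threshold are all hypothetical, not Stantum's API.

```python
# A minimal palm-rejection sketch (illustrative only -- Stantum's actual
# algorithm is not public). The heuristic: contacts with a large footprint
# (a resting palm) are discarded, while small footprints (fingertips,
# stylus tips) are kept.

from dataclasses import dataclass

@dataclass
class Contact:
    x: float              # position in touch-panel coordinates
    y: float
    major_axis_mm: float  # reported size of the contact ellipse

PALM_AXIS_THRESHOLD_MM = 20.0  # fingertips are typically well under this

def reject_palms(contacts):
    """Return only the contacts small enough to be fingers or a stylus."""
    return [c for c in contacts if c.major_axis_mm < PALM_AXIS_THRESHOLD_MM]

touches = [
    Contact(x=10.0, y=12.0, major_axis_mm=7.5),   # fingertip
    Contact(x=40.0, y=55.0, major_axis_mm=35.0),  # resting palm
    Contact(x=22.0, y=30.0, major_axis_mm=2.0),   # stylus tip
]
accepted = reject_palms(touches)
print(len(accepted))  # 2 of the 3 contacts survive; the palm is rejected
```

In a classroom setting, a filter like this is what lets a student rest a wrist on the screen while writing with a stylus without triggering stray touches.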
NIME 2011 OSLO: The International Conference on New Interfaces for Musical Expression - NIME is an outgrowth of a workshop held at CHI 2001 (Human Factors in Computing Systems). "The NIME conference draws a varied group of participants, including researchers (musicology, computer science, interaction design, etc.), artists (musicians, composers, dancers, etc.) and developers (self-employed and industrial). The common denominator is the mutual interest in groundbreaking technology and music, and contributions to the conference cover everything from basic research on human cognition through experimental technological devices to multimedia performances." Just take a look at all of the presentations that were at NIME 2011! NIME 2011 Program (pdf)
Touch the Web 2011: 2nd International Workshop on Web-Enabled Objects, June 20-24, 2011, Paphos, Cyprus (in conjunction with the International Conference on Web Engineering, ICWE) "The vision of the Internet of Things builds upon the use of embedded systems to control devices, tools and appliances. With the addition of novel communications capabilities and identification means such as RFID, systems can now gather information from other sensors, devices and computers on the network, or enable user-oriented customization and operations through short-range communication. When the information gathered by different sensors is shared by means of open Web standards, new services can be defined on top of physical elements. In addition, the new generation of mobile phones enables a true mobile Internet experience. These phones are today's ubiquitous information access tool, and the physical token of our "Digital Me". These meshes of things and "Digital Me" will become the basis upon which future smart living, working and production places will be created, delivering services directly where they are needed."
Upcoming Conferences and Workshops
Ubicomp 2011, September 17-21, 2011, Beijing, China
"Ubicomp is the premier outlet for novel research contributions that advance the state of the art in the design, development, deployment, evaluation and understanding of ubiquitous computing systems. Ubicomp is an interdisciplinary field of research and development that utilizes and integrates pervasive, wireless, embedded, wearable and/or mobile technologies to bridge the gaps between the digital and physical worlds. The Ubicomp 2011 program features keynotes, technical paper sessions, specialized workshops, live demonstrations, posters, video presentations, panels, industrial exhibition and a Doctoral Colloquium."
"Given its multi-disciplinary nature, Ubicomp has developed a broad base of audience over the past 12 years. Key audience communities are: Human Computer Interaction, Pervasive Computing, Distributed and Mobile Computing, Real World Modeling, Sensors and Devices, Middleware and Systems research, Programming Models and Tools, and Human Centric Validation and Experience Characterization. More detailed information about the topical focus of Ubicomp can be found in the Call for Papers."
Eurodisplay 2011: XXXI International Display Research Conference, September 19-22, Bordeaux-Arcachon, France "Eurodisplay 2011 in Bordeaux/Arcachon provides researchers, engineers, and technical managers a unique opportunity to present their results and update their knowledge in all display-related research fields...The two keynote addresses on the first day of the Eurodisplay 2011 Symposium on September 20th will be a unique chance to hear a global overview of the future of the display market (Samsung LCD) and of display applications in the automotive industry (Daimler AG).
Eurodisplay '11 in Bordeaux-Arcachon will be the right spot to learn more about the latest results in display research and related fields such as organic electronics. Cognitive science will offer a new vision of the impact of displays on our day-to-day lives, not only from a perception standpoint but from an information standpoint, since SID deals with "Information Display" and not only technologies. The two invited talks, by Prof. Bernard Claverie and Dr. Lauren Palmateer, will focus on this aspect...A dedicated business-oriented session will address two important aspects of our business: display company creation, by Thierry Leroux, and end-user trend analysis for the TV market, by Dr. Jae Shin. This conference will provide a global vision of our display world, including not only the well-known display leaders but also the very active BRIC countries...Dr. V. Pellegrini Mammana will give a vivid illustration of this display industry dynamic in Brazil."
UIST Symposium, October 16-19, 2011, Santa Barbara, California "UIST (ACM Symposium on User Interface Software and Technology) is the premier forum for innovations in the software and technology of human-computer interfaces. Sponsored by ACM's special interest groups on computer-human interaction (SIGCHI) and computer graphics (SIGGRAPH), UIST brings together researchers and practitioners from diverse areas that include traditional graphical & web user interfaces, tangible & ubiquitous computing, virtual & augmented reality, multimedia, new input & output devices, and CSCW. The intimate size, single track, and comfortable surroundings make this symposium an ideal opportunity to exchange research results and implementation experiences."
VisWeek 2011: Vis, InfoVis, VAST, October 23-28, Providence, RI "Computer-based information visualization centers around helping people explore or explain data through interactive software that exploits the capabilities of the human perceptual system. A key challenge in information visualization is designing a cognitively useful spatial mapping of a dataset that is not inherently spatial and accompanying the mapping by interaction techniques that allow people to intuitively explore the dataset. Information visualization draws on the intellectual history of several traditions, including computer graphics, human-computer interaction, cognitive psychology, semiotics, graphic design, statistical graphics, cartography, and art. The synthesis of relevant ideas from these fields with new methodologies and techniques made possible by interactive computation are critical for helping people keep pace with the torrents of data confronting them."
6th Annual ACM Conference on Interactive Tabletops and Surfaces (ITS 2011), November 13-16, 2011, Portopia Hotel, Kobe, Japan "The Interactive Tabletops and Surfaces 2011 Conference (ITS) is a premier venue for presenting research in the design and use of new and emerging tabletop and interactive surface technologies. As a new community, we embrace the growth of the discipline in a wide variety of areas, including innovations in ITS hardware, software, design, and projects expanding our understanding of design considerations of ITS technologies and of their applications."
AFFINE: 4th International Workshop on Affective Interaction in Natural Environments (ICMI 2011), November 17, 2011, Alicante, Spain (CFP deadline is August 19, 2011) Scope: "Computer gaming has been acknowledged as one of the computing disciplines which proposes new interaction paradigms to be replicated by software engineers and developers in other fields. The abundance of high-performance, yet lightweight and mobile devices and wireless controllers has revolutionized gaming, especially when taking into account the individual affective expressivity of each player and the possibility to exploit social networking infrastructure. As a result, new gaming experiences are now possible, maximizing users' skill level while also maintaining their interest in the game's challenges, resulting in a state which psychologists call flow: "a state of concentration or complete absorption with the activity at hand and the situation". The result of this amalgamation of gaming, affective and social computing has brought increased interest in the field in terms of interdisciplinary research...Natural interaction plays an important role in this process, since it gives game players the opportunity to leave behind traditional interaction paradigms, based on keyboards and mice, and control games using the same concepts they employ in everyday human-human interaction: hand gestures, facial expressions and head nods, body stance and speech. These means of interaction are now easy to capture, thanks to low-cost visual, audio and physiological signal sensors, while models from psychology, theory of mind and ergonomics can be put to use to map features from those modalities to higher-level concepts, such as desires, intentions and goals.
In addition, non-verbal cues such as eye gaze and facial expressions can serve as valuable indicators of player satisfaction and help game designers provide the optimal experience for players: games which are not frustratingly hard, but still challenging and not boring...Another aspect which makes computer gaming an important field for multimodal interaction is the new breed of multimodal data it can generate: besides videos of people playing games in front of computer screens or consoles, which include facial, body and speech expressivity, researchers in the field of affective computing and multimodal interaction may benefit from mapping events in those videos (e.g. facial signs of frustration) to specific events in the game (a large number of enemies or obstacles close to the player) and infer additional user states such as engagement and immersion. Individual and prototypical user models can be built based on that information, helping produce affective and immersive experiences which maintain the concept of 'flow'. This workshop will cover real-time and off-line computational techniques for the recognition and interpretation of multimodal verbal and non-verbal activity and behaviour, modelling and evolution of player and interaction contexts, and synthesis of believable behaviour and task objectives for non-player characters in games and human-robot interaction. The workshop also welcomes studies that provide new insights into the use of gaming to capture multimodal, affective databases, low-cost sensors to capture user expressivity beyond the visual and speech modalities, and concepts from collective intelligence and group modelling to support multi-party interaction."
IUI: ACM International Conference on Intelligent User Interfaces "Major Topics of Interest to IUI include: Intelligent interactive interfaces, systems, and devices, Ubiquitous interfaces, Smart environments and tools, Human-centered interfaces, Mobile interfaces, Multimodal interfaces, Pen-based interfaces, Spoken and natural language interfaces, Conversational interfaces, Affective and social interfaces, Tangible interfaces, Collaborative multiuser interfaces, Adaptive interfaces, Sensor-based interfaces, User modeling and interaction with novel interfaces and devices, Interfaces for personalization and recommender systems, Interfaces for plan-based systems, Interfaces that incorporate knowledge- or agent-based approaches, Help interfaces for complex tasks, Example- and demonstration-based interfaces, Interfaces for intelligent generation and presentation of information, Intelligent authoring systems, Synthesis of multimodal virtual characters and social robots, Interfaces for games and entertainment, learning-based interactions, health informatics, Empirical studies and evaluations of IUI interfaces, New approaches to designing Intelligent User Interfaces, and related areas"
IXDA: Interaction 12, Dublin, Ireland, February 1-4, 2012 "Interaction|12 is an ideal venue to showcase the most inspiring and original stories in interaction design. We have a world of creative talent to tap into, so our conference roster will fill up fast. We’re on the lookout for thoughtful, original proposals that will inspire our community of interaction designers from all over the world. Do you have an uncommon or enlightening design story, valuable lessons learned from hands-on experience and want to be a part of the programme at Interaction|12? You do? Great!"
"Sifteo cubes are 1.5 inch computers with full-color displays that sense their motion, sense each other, and wirelessly connect to your computer. You, your friends, and your family can play an ever-growing array of interactive games that get your brain and body engaged.
Sifteo’s initial collection of titles includes challenging games for adults, fun learning puzzles for kids, and games people can play together." -Sifteo website
About two years ago, I was interviewed about my thoughts on the interactive, hands-on, programmable cubes, then called Siftables, for an article published in IEEE's Computing Now magazine: Siftables Offer New Interaction Mode (James Figueroa, Computing Now, 3/2009).
For those of you who'd like more information about tangible user interfaces (TUIs) and the development of Siftables, I've copied my 2009 post, Tangible User Interfaces, Part I: Siftables, below:
TANGIBLE USER INTERFACES, PART I: SIFTABLES (2009)
In 1997, the vision of tangible user interfaces, also known as TUIs, was outlined by Hiroshi Ishii and Brygg Ullmer of the Tangible Media Group at MIT, in their paper, "Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms" (pdf). According to this vision, "the goal of Tangible Bits is to bridge the gaps between both cyberspace and the physical environment, as well as the foreground and background of human activities." This article is a must-read for anyone interested in "new" interactive technologies.
The pictures in the article of the metaDesk, transBoard, activeLENS, and ambientRoom, along with the references, are worth a look, for those interested in this seminal work.
Another must-read is Hiroshi Ishii's 2008 article, Tangible Bits: Beyond Pixels (pdf). In this article, Ishii provides a good overview of TUI concepts as well as the contributions of his lab to the field since the first paper was written.
Related to Tangible User Interface research is the work of the Fluid Interfaces Group at MIT. The Fluid Interfaces Group was formerly known as the Ambient Intelligence Group, and many of the group's projects incorporate concepts related to TUI and ambient intelligence.
According to the Fluid Interfaces website, the goal of this research group is to "radically rethink the human-machine interactive experience. By designing interfaces that are more immersive, more intelligent, and more interactive we are changing the human-machine relationship and creating systems that are more responsive to people's needs and actions, and that become true "accessories" for expanding our minds."
The Siftables project is an example of how TUI and fluid interface (FI) interaction can be combined. Siftables is the work of David Merrill and Pattie Maes, in collaboration with Jeevan Kalanithi, and was brought to popular attention through David Merrill's recent TED talk, Siftables: Making the digital physical. "Grasp Information Physically...Siftables aims to enable people to interact with information and media in physical, natural ways that approach interactions with physical objects in our everyday lives. As an interaction platform, Siftables applies technology and methodology from wireless sensor networks to tangible user interfaces. Siftables are independent, compact devices with sensing, graphical display, and wireless communication capabilities. They can be physically manipulated as a group to interact with digital information and media. Siftables can be used to implement any number of gestural interaction languages and HCI applications...Siftables can sense their neighbors, allowing applications to utilize topological arrangement...No special sensing surface or cameras are needed."
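The neighbor sensing described above is what makes the platform interesting for developers: an application works with the adjacency graph of the cubes, not with raw sensor data. The sketch below is a hypothetical simulation of that idea, not Siftables' actual SDK; cubes are faked as grid positions, and the adjacency graph is derived in software, the way a word-spelling game might check which tiles touch.

```python
# Hypothetical sketch of the neighbor topology a Siftables-style application
# works with. Real Siftables detect neighbors in hardware; here cubes are
# simulated as grid positions, and we derive the adjacency graph an
# application could use (e.g., to check whether tiles spell a word).

def adjacency(cubes):
    """cubes: dict of cube_id -> (col, row) grid position.
    Returns dict of cube_id -> set of ids of side-by-side neighbors."""
    neighbors = {cid: set() for cid in cubes}
    for a, (ax, ay) in cubes.items():
        for b, (bx, by) in cubes.items():
            # Manhattan distance of 1 means the cubes share an edge
            if a != b and abs(ax - bx) + abs(ay - by) == 1:
                neighbors[a].add(b)
    return neighbors

# Three cubes in a row, plus one stray cube set apart:
layout = {"C": (0, 0), "A": (1, 0), "T": (2, 0), "X": (5, 3)}
graph = adjacency(layout)
print(sorted(graph["A"]))  # ['C', 'T'] -- "A" touches both row neighbors
print(graph["X"])          # set() -- the stray cube has no neighbors
```

Because the graph updates as cubes are pushed together or pulled apart, gestures like "group these" or "isolate this one" fall out of the topology for free.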
It seems that people either really like the Siftables concept or they don't see the point. I found the following humorous critique of Siftables on YouTube:
"Imagine if all the little programs you had on your iphone were little separate chicklets in your pocket.
You'd lose em.
Your cat would eat em.
You'd vacuum them up.
They'd fall down in the sofa.
They'd be all over the car floor.
You'd throw them away by mistake..."
In my opinion, it is exciting to learn that some of this technology has the potential to become mainstream.
"It’s easy to forget that the computer mouse is over 45 years old."
"What’s not as easy to forget is that we’re now collectively getting used to interacting with computers via means and interfaces that have moved way beyond the keyboard and the mouse — the iPhone and Wii being the most prominent examples."
"The truth is that we stand on the verge of a major revolution in the models of Human Computer Interaction (HCI). A revolution that will fly right past academic and into a world of retail, medical, gaming, military, public event, sporting, personal and marketing applications."
"From multi-touch to motion capture to spatial operating environments, over the next 10 years, everything we know about HCI will change."
"Blur is the only conference that is exploring the line of interaction between computers and humans in a substantive, real-world and hands-on way."
"At Blur, vendors, strategists, buyers and visionaries assemble to not only discuss the larger issues of HCI, but also to lay their hands on the latest in HCI technology. Blur is the only forum for a focused, hands-on exploration of the varied technologies evolving in the HCI."
"Come play, investigate, learn and apply at Blur — where we’re changing how you interact with computers forever." -Blur
BLUR Conference Agenda (Note: I added the links to conference participants and/or their organizations. Feel free to leave a comment if you know of any corrections or better links!) Keynotes:
Neuroergonomics: How an Understanding of the Brain is Changing the Practice of Human Factors Engineering - Dr. Kay Stanney, Design Interactive
I love to dance. I studied dance through college, and off and on as an adult. I have a DDR (Dance Dance Revolution) game-floor pad somewhere in my attic gathering dust. I'm ready for new challenges.
I'm planning on buying a couple of new dance games for the Wii and the Kinect. There is more to this story: given my interest in off-the-desktop, post-WIMP HCI (human-computer interaction), interactive multimedia and games, and my career as a school psychologist dedicated to young people with disabilities, I'm excited to see where new technologies, interfaces, and interactions will take us.
So what do the wise men of usability have to say about new ways of interacting with games and other applications?
"Kinect has many great design elements that clearly show that the team (a) knows usability, (b) did user testing, and (c) had management support to prioritize usability improvements, even when they required extra development work." -Jakob Nielsen
Jakob Nielsen, one of the godfathers of usability, shared a few words of wisdom about the Kinect in his 12/27/10 Alertbox post: Kinect Gestural UI: First Impressions. Although he did not review Dance Central, he concludes that the game he reviewed, Kinect Adventures, was fun to play, despite usability problems.
If this is a topic that interests you, I recommend reading Nielsen's post and taking a look at the usability issues outlined in it. Also take a look at a recent essay Nielsen co-authored with Don Norman, another godfather of usability: Gestural Interfaces: A Step Backwards in Usability
Why is this topic important to me?
I have been involved in the Games for Health and Game Accessibility movement for many years. Lately I've been exploring the OpenKinect project with an aim to create ways of making movement-oriented games accessible for young people with more complex disabilities. For example, there is a need to have dance and movement games modified for students (and adults!) who need wheelchairs or walkers. There are students who have milder mobility challenges who love to dance, and the current games don't address their needs. Some of my students have vision or hearing impairments, too. They deserve a chance to play things designed for the Kinect.
"OpenKinect is an open community of people interested in making use of the amazing Xbox Kinect hardware with our PCs and other devices. We are working on free, open source libraries that will enable the Kinect to be used with Windows, Linux, and Mac."
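OpenKinect's libraries expose the Kinect's depth frames to ordinary programs, which is exactly what makes accessibility adaptations possible: instead of requiring full-body dance moves, a game mod could trigger a "dance step" whenever anything enters a reachable zone in front of a seated player. The sketch below is a hedged illustration of that idea; to keep it self-contained, the depth frame is faked as a plain 2D list of millimeter distances rather than read from libfreenect, and the zone bounds and frame size are invented.

```python
# A hedged sketch of the kind of adaptation OpenKinect makes possible.
# The depth frame is simulated here as a 2D list of millimeter distances
# (a real frame would come from the libfreenect depth callback). The idea:
# trigger an in-game action when anything enters an arm's-reach zone in
# front of a seated player, e.g., a hand raised from a wheelchair.

NEAR_MM, FAR_MM = 500, 900   # a comfortable arm's-reach band (illustrative)

def zone_triggered(depth_frame, min_pixels=4):
    """True if enough pixels fall inside the near/far band --
    e.g., a hand raised into the trigger zone."""
    hits = sum(1 for row in depth_frame for d in row if NEAR_MM <= d <= FAR_MM)
    return hits >= min_pixels

empty_room = [[2000] * 4 for _ in range(4)]                  # nothing in the zone
hand_raised = [[2000] * 4 for _ in range(3)] + [[700, 650, 700, 720]]

print(zone_triggered(empty_room))   # False
print(zone_triggered(hand_raised))  # True
```

A production version would smooth over several frames to avoid false triggers, but even this simple thresholding shows why open access to the depth stream matters for players the stock games leave out.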
Note: I currently work as a school psychologist with students up to age 22. My main office is adjacent to a large OT and PT room at Wolfe, a program for students who have special needs. We just had a large interactive whiteboard installed in the room that is begging for us to connect it with the school's Wii, and soon (we hope), a Kinect. If we are going to use dance games to help promote healthy activities among our special students, the games need to be accessible for students with cognitive, motor, and other limitations.
FIRST STEPS
Although I can dance, I understand what the world is like through the eyes of many of the young people I work with, who have motor coordination and sensory integration problems that interfere with their ability to move and dance, let alone access fast-paced dance games on the Wii or Kinect.
My initial plan is to look at what the new dance games might be like from the view of someone who doesn't know how to dance and admits to having "two left feet" - and perhaps no sense of rhythm. Where would I start?
Wii's Just Dance 2 seems to offer some support for learning how to dance through the use of simple movement icons, in the form of outlined figures, that provide information about how to move with the dancer on the screen. As you can see from the video below, the gamer is provided with information about upcoming moves throughout the game.
I decided to take a look at Just Dance 2's MIKA "Big Girl" (You Are Beautiful) because some of the adolescent females I work with have weight concerns that interfere with their health. During the teen years, this can become a vicious cycle, resulting in less movement, and less participation with peers in physical activities, such as playing dance games. If a teen has depression as part of this mix, we know that exercise can help, and a fun dance game might be a life-saver, in more ways than one.
The screen shots below show how the movement icons are used in the game:
I thought it would be useful to learn more about the story behind the making of Just Dance 2. At 2:22, Alexia, the project's usability expert, makes her presence known. From what I can tell, she focused on aspects of the game that would make it more usable for non-dancers, including those with "two left feet". (I don't know if anyone was consulted about accessibility concerns for the game.)
Kinect Dance Central
Dance Central uses a different approach when it comes to "teaching" people how to dance along with the game. It would be interesting to test Dance Central and Just Dance 2 with the same set of people to get a better feel for what works and what doesn't. Below is a video that previews, in split-screen, the interaction that takes place in Dance Central:
Dance Central Full Motion Preview
In Dance Central, gamers are provided with information about the moves through icons that cycle up the right hand side of the screen. The level of dance-coordination to keep up with the moves is challenging at times, even for people who are OK at dancing. Players can select dances according to level of difficulty.
Kinect Usability with Regular People
Steve Cable (CX Partners) shared his team's look at usability issues related to the Kinect by testing several games, including Dance Central, with groups of people in his article, "Designing for Xbox Kinect - a usability study". The quote below is from Steve's article:
"We’ve loved playing with the Kinect. There’s no doubt that the game play is lots of fun. In-game menus are a barrier to that fun. Kinect should allow players to move through menus quickly and compensate for inaccuracy.
We felt the Kinect would benefit from some standardised global controls – much like a controller uses the A button to select and the B button to move backwards. We also think it needs a more responsive pause gesture – one that doesn’t interfere with the user’s game play.
Most of our participants found the Dance Central menu to be more effective, more efficient and more satisfying to use. Here are our recommendations for designing a Kinect menu interface:
Allow users to make selections through positive gestures, rather than timed positions
Place options on a single axis to make them easier and quicker to select
Allow users to control menus with the game pad if they prefer
Use large easy to read text
Don’t make users scroll through options unnecessarily – it takes too long
Users will be distracted if used in a social setting – test your menus in a social context to see if they are prone to errors
Avoid the cursor metaphor, it’s not what gamers are used to seeing in game menus, and makes it harder to implement alternative joypad controls"
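The first recommendation in the list above - select with a positive gesture rather than a timed position - is worth unpacking, because the difference is easy to express in code. The sketch below is my own illustration, not CX Partners' or Microsoft's implementation: hand positions are hypothetical depth samples (distance to the sensor in millimeters), and a deliberate "push" toward the screen is distinguished from an accidental hover, which a dwell timer would wrongly treat as a selection.

```python
# A sketch of "positive gesture" selection: confirm a menu choice with a
# deliberate push toward the screen, instead of selecting whatever the
# hand happens to hover over for N seconds. Samples and thresholds are
# illustrative assumptions, not any shipped Kinect API.

PUSH_DISTANCE_MM = 120   # hand must move this much closer to the sensor...
PUSH_WINDOW = 5          # ...within this many consecutive samples

def push_detected(z_samples):
    """True if any recent window shows a deliberate push toward the screen.
    z_samples: hand-to-sensor distances in mm, oldest first."""
    for i in range(len(z_samples) - PUSH_WINDOW + 1):
        window = z_samples[i:i + PUSH_WINDOW]
        if window[0] - window[-1] >= PUSH_DISTANCE_MM:
            return True
    return False

hovering = [900, 905, 898, 902, 899, 901, 903]   # hand held still over an option
pushing  = [900, 880, 840, 800, 770, 760]        # deliberate push to select

print(push_detected(hovering))  # False -- no accidental timed selection
print(push_detected(pushing))   # True  -- the positive gesture selects
```

The payoff for accessibility is the same as for usability: a player who moves slowly, or who needs to rest a hand mid-menu, never selects something by merely pausing.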
Below are screen shots that provide examples of how the movement icons are displayed in Dance Central:
"Our research shows that the vast majority of people polled in both developed and emerging markets see great potential for NUI applications beyond entertainment. This is especially true in China and India, where 9 out of 10 respondents indicate they are likely to use NUI technology across a range of lifestyle areas – from work, education and healthcare, to social connections, entertainment and the environment. We believe that taking technology to the next billion can be aided by NUI – making technology more accessible and more intuitive to a wider audience". - Steve Clayton, Microsoft
The people at Microsoft don't own the concept! I'm a member of the NUI Group (since May 2007) and SparkOn, both online communities where you can find people who live and breathe NUI, learn about their work, and even share designs and code. If you are intrigued by NUI - as a designer, developer, or user - please join us.
Note: I've been an evangelist and cheerleader for the NUI cause for many years. If you search this blog for "post-WIMP", "NUI", "multi-touch", "gesture", "off-the-desktop", "natural user interaction", "natural user interface", or even "DOOH", you'll be provided with an overwhelming number of posts that include videos, photographs, and links to NUI-related resources, including scholarly articles. There is a small-but-growing number of people from many disciplines, quietly working on NUI-related projects.
Nicolas Loeillot, of LM3LABS, has been ahead of the natural user interaction/interface game for many years as his company has expanded. He's done quite a bit of deep thinking about the work of his company, and has used this wisdom to create a nice concept map that describes how LM3LABS' solutions fit into the world of gesture-based control and interaction:
In my opinion, this chart would make a great template for mapping out other natural interaction applications and products!
Here is the description of the concepts outlined in the chart:
"While all of them belong to the "gesture control" world, the best segmentation is into 4 categories:
Finger tracking: precise finger tracking; it can be single-touch or multi-touch (the latter not always being a plus). Finger tracking also encompasses hand tracking, which comes, for LM3LABS products, as gestures.
Body tracking: using one's body as a pointing device. Body tracking can be associated with "passive" interactivity (users are engaged without deciding to be) or "active" interactivity, like 3D Feel, where "players" use their bodies to interact with content.
Face tracking: using the user's face as a pointing device. It can be single-user or multi-user. Face tracking is a "passive" interactivity tool for engaging users in an interactive relationship with digital content.
Image Tracking: Augmented Reality (AR) lets users use images (flyers, real products, t-shirts, faces,…) to interact with digital content. AR can be markerless or marker-based. Markerless technology has advantages but marker-based AR is easier for users to understand. (Please note here that Markerless AR is made in close collaboration with AR leader Total Immersion)." -LM3LABS
If you are interested in this subject and want to view some good examples of off-the-desktop interfaces and interactions, take a look at the LM3LABS blog, as well as Nicolas Loeillot's Vimeo channel. Also take a look at the sample of posts I've written about LM3LABS over the last few years - the links are at the end of this post.
I first discovered LM3LABS when I was taking a VR class and researching interactive, immersive large displays in 2005 or 2006. Back then, there wasn't much information about this sort of technology. A lot has changed since then!
I've learned quite a bit from watching LM3LABS (and others) grow, given my passion for postWIMP interactive technology and my commitment to blogging about this subject. Nicolas has really worked hard in this arena. As early as 2005, LM3LABS was working with Scala to provide "smart" interactive displays, and his company's applications have been supported by computer vision technologies for many years, allowing for gesture-based, or "touch-less" interaction, as demonstrated by the Catchyoo Interactive Table. This application caught my eye back in early 2007, when I was working on projects for large interactive displays for my HCI and Ubicomp classes, and was thinking about creating a table-top application.
My hunch is that LM3LABS has set the foundation for further growth in the future, given the lessons they've learned by taking risks with postWIMP technologies over the past few years!
"Founded in 2003 by a team of passionate researchers, engineers, designers, and marketers from various international backgrounds, focused on fast transformation of innovation into unique products, LM3LABS is a recognized pioneer in computer vision-based interactivity solutions. Keeping a strong customer focus, LM3LABS' team of unique people pioneers new directions, explores new concepts, new technologies and new interactions. Engaging, playful and magic, LM3LABS' products and solutions are always scalable and reliable."
info@lm3labs.com
Note to readers:
Over the past couple of years there has been an explosion of postWIMP technologies and applications, and with this pace, it has been difficult for me to keep abreast of it all. There is quite a bit I miss, given my full time job and daily life!
I welcome information about postWIMP interactive technologies and applications from my readers. Due to time constraints, not interest, I am not always able to post about a topic as soon as I'd like. That is OK, as my intention is not to be the first blogger to spread the latest tech news. I like to dig in deep when I can and make connections between innovative, interesting technologies and the people and ideas behind them.