Showing posts with label interactive media.

Aug 23, 2013

Allison Druin and the HCIL Team Win Emmy for Nick App: Outstanding Creative Achievement in Interactive Media-User Experience and Visual Design

I'd like to give a shout-out to Allison Druin and the team at the University of Maryland Human-Computer Interaction Lab (HCIL) for winning the Outstanding Creative Achievement in Interactive Media - User Experience and Visual Design award!  This is a new category of award for the Emmys.


The Nick App is free and available for the iPad, iPhone, iPod Touch, Windows 8, and Xbox Live.




Here is the scoop from the Emmys website:

"The Nick App is a branded experience that allows kids to watch and play Nick in unprecedented ways. This free App features a moveable tile layout that can be swiped in any direction, promoting discovery and exploration and offering kids instant and on-demand access to more than 1,000 pieces of Nickelodeon-themed content. It includes short-form videos of original skits, sketch and comedic bits, behind-the-scenes clips and photos from Nick stars and animated characters, full episodes, polls, new games, and surprising random hilarity. The Nick App supports the full Nickelodeon on-air line up as well as specials such as the annual Kids' Choice Awards. The App boasts new content daily and includes fun and funny interactive elements such as the "Do Not Touch" button that triggers an array of disruptive comedy and surprises. Nickelodeon's goal was to go beyond a typical app that offers free video viewing and instead offer more interactive content, games, and video not seen on television — whenever and wherever the user wants it." 

RELATED

Kid Design at the HCIL: Human Computer Interaction Lab, University of Maryland 
At the HCIL, children participate as co-designers as members of Kidsteam. The Nick App was created with their input!

Allison Druin is an iSchool Professor & Chief Futurist for the Division of Research at the University of Maryland. She previously was the director of the HCIL and has devoted much of her career to children and technology.

Release:  Immersive and Interactive Digital Media Programs to Receive Emmys

Emmys Category Descriptions:  Outstanding Creative Achievement in Interactive Media 
(Multiplatform Storytelling; Original Interactive Program; Social TV Experience; User Experience and Visual Design)

Nov 12, 2012

Video: Overview of Multimedia Learning Principles, Importance of Visual Learning, Richard Mayer

Richard Mayer has devoted his career to the study of multimedia learning. He is a professor in the Department of Psychological & Brain Sciences at UC Santa Barbara, and the author of Multimedia Learning, 2nd Edition. Although the book was published back in 2009, it remains a must-read for anyone interested in this topic.

With the popularity of interactive whiteboards and tablets/iPads in education, it is important for educators, designers, and developers to become familiar with the basic principles of multimedia learning. It is also an important subject for researchers.

Aug 19, 2011

Role of Data in Interactive Multiplatform Storytelling, via iTVT (video and links)


There is a lot going on in the field of interactive multi-platform media!

The following videos from iTVT's StoryCentric video column are worth taking the time to absorb. In the videos, Brian Seth Hurst, CEO of The Opportunity Management Company, interviews Gunther Sonnenfeld, SVP of Cultural Innovation and Applied Technology at Omnicom subsidiary RAPP. The role of data in interactive multi-platform storytelling is the main focus of their discussion.


RELATED

According to the iTVT website, "StoryCentric focuses on the business, technology and art of interactive storytelling, and highlights new technologies and other industry developments that have the potential to fundamentally change the way we create and interact with stories and narratives--in television and beyond."

iTVT (Interactive TV Today)
New Edition of StoryCentric Focuses on the Role of Data in Multiplatform Storytelling
Tracy Swedlow, iTVT 8/4/11
New Edition of StoryCentric Features Brian Seth Hurst's Interview with RAPP's Gunther Sonnenfeld
Tracy Swedlow, iTVT, 8/18/11

A Literacy of the Imagination (Gunther Sonnenfeld's Blog)

May 28, 2011

NEWSEUM, a highly interactive museum in D.C. with an online component - I want to visit!



What is a newseum?

"The Newseum -- a 250,000-square-foot museum of news -- offers visitors an experience that blends five centuries of news history with up-to-the-second technology and hands-on exhibits. Within its seven levels of galleries and theaters, the Newseum offers a unique environment that takes museum-goers behind the scenes to experience how and why news is made."

"The Newseum is one of the most technologically advanced museums in the world. The Newseum ordered 100 miles of fiber-optic cable to link up-to-the-second technologies that include electronic signage and interactive kiosks, two broadcast studios, 15 theaters and a 40-by-22-foot high-resolution media screen."


Below are some examples of what visitors can experience at the Newseum, located in Washington, D.C.:


Bloomberg Internet, TV and Radio Gallery (Newseum)
Time Warner World News Gallery (Newseum)


The New York Times - Ochs-Sulzberger Family Great Hall of News: Surrounded by the flow of information (Newseum)
"Around, above and below, visitors to the Great Hall of News are surrounded by a continuous flow of news. Instant, breaking, historic news that is uncensored, diverse and free."


NBC News Interactive Newsroom



"In this 7,000-square-foot interactive gallery, visitors can select any of 48 interactive kiosks or experiences where they can immerse themselves in the many roles -photojournalist, editor, reporter, anchor — required to bring the news to the public. The gallery features eight “Be a TV Reporter” stations that allow visitors to choose from a variety of video backdrops, take their place in front of the screen, read their report from a TelePrompter and see themselves in action." -Newseum

NEWSEUM exhibits have an online component. Here are a few:
Newseum Microsite (Great for use on an interactive whiteboard to introduce students to the museum.)

Dec 22, 2010

Multi-touch SmartBoard! (SMARTBoard 800 Series)

Take a look at the video demonstration of the new SMARTBoard (800 series) that offers multi-touch and gesture interaction support so that two students can interact with the board at the same time.

  • Students can use two-finger gestures to enlarge objects and move them around.
  • Two students can interact with the board at the same time to complete activities.
  • SMARTInk/Calligraphic Ink creates stylized print as you write. Whatever is written or drawn on the SMARTBoard becomes an object in the SMARTNotebook, allowing for things to be resized or rotated. (2:04)
  • Multi-touch gestures enabled in Windows 7 and Snow Leopard work with the SMARTBoard.
  • Software development kit (3:28): Example of a physics application developed by a 3rd-party developer. The application supports two students working at the SMARTBoard at the same time.
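The two-finger enlarge gesture boils down to simple arithmetic: scale the object by the ratio of the fingers' current separation to their starting separation. Here is a minimal Python sketch of that idea (illustrative only - this is not SMART's actual SDK, and the point format is an assumption):

```python
import math

def pinch_scale(p1_start, p2_start, p1_now, p2_now):
    """Scale factor for a two-finger pinch gesture: the ratio of the
    fingers' current distance to their starting distance. Points are
    (x, y) tuples in screen pixels."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(p1_now, p2_now) / dist(p1_start, p2_start)

# Fingers start 100 px apart and spread to 200 px: the object doubles in size.
scale = pinch_scale((0, 0), (100, 0), (0, 0), (200, 0))
```

A real whiteboard driver would apply this factor to the selected object's width and height on every touch-move event.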
This video, in my opinion, does not show viewers the full range of possibilities that the new features open up. I'd like to see a "redo" of this video using a live teacher and a group of students. For example, it would be interesting to see how the physics application could be incorporated into a broader lesson or science unit. I'd love to hear what real students have to say as they interact with the physics application, too.

Comment:
I think a multi-user interactive timeline would be a great application for the new SMARTBoard, because students could work together to create and recreate events.  This would be ideal for history, literature, and humanities activities, across a wide span of grade levels.

Dec 12, 2010

LM3LAB's Useful Map of Interactive Gesture-Based Technologies: Tracking fingers, bodies, faces, images, movement, motion, gestures - and more

Nicolas Loeillot, of LM3LABS, has been ahead of the natural user interaction/interface game for many years as his company has expanded. He's done quite a bit of deep thinking about the work of his company, and has used this wisdom to create a nice concept map that describes how LM3LABS' solutions fit into the world of gesture-based control and interaction:




In my opinion, this chart would make a great template for mapping out other natural interaction applications and products!


Here is the description of the concepts outlined in the chart:


"If all of them belong to the “gesture control” world, the best segmentation is made from 4 categories:
  • Finger tracking: precise finger tracking, it can be single touch or multi-touch (this latest not always being a plus). Finger tracking also encompasses hand tracking which comes, for LM3LABS products, as a gestures.
  • Body tracking: using one’s body as a pointing device. Body tracking can be associated to “passive” interactivity (users are engaged without their decision to be) or “active” interactivity like 3D Feel where “players” use their body to interact with content.
  • Face tracking: using user face as a pointing device. It can be mono user or multiple users. Face tracking is a “passive” interactivity tool for engaging user in an interactive relationship with digital content.
  • Image Tracking: Augmented Reality (AR) lets users use images (flyers, real products, t-shirts, faces,…) to interact with digital content. AR can be markerless or marker-based. Markerless technology has advantages but marker-based AR is easier for users to understand. (Please note here that Markerless AR is made in close collaboration with AR leader Total Immersion)."  -LM3LABS
If you are interested in this subject and want to view some good examples of off-the-desktop interfaces and interactions, take a look at the LM3LABS blog, as well as Nicolas Loeillot's Vimeo channel. Also take a look at the sample of posts I've written about LM3LABS over the last few years - the links are at the end of this post.

I love LM3LABS' Interactive Balloon:

Interactive balloons from Nicolas Loeillot on Vimeo.


Interactive Balloons v lm3 labs v2 (SlideShare)



Background
I first discovered LM3LABS when I was taking a VR class and researching interactive, immersive large displays in 2005 or 2006.  Back then, there wasn't much information about this sort of technology.  A lot has changed since then!


I've learned quite a bit from watching LM3LABS (and others) grow, given my passion for postWIMP interactive technology and my commitment to blogging about this subject.   Nicolas has really worked hard in this arena.  As early as 2005, LM3LABS was working with Scala to provide "smart" interactive displays, and his company's applications have been supported by computer vision technologies for many years, allowing for gesture-based, or "touch-less" interaction, as demonstrated by the Catchyoo Interactive Table.  This application caught my eye back in early 2007, when I was working on projects for large interactive displays for my HCI and Ubicomp classes, and was thinking about creating a table-top application.


My hunch is that LM3LABS has set the foundation for further growth in the future, given the lessons they've learned by taking risks with postWIMP technologies over the past few years!


Previous Blog Posts Related to LM3LABS:
Interactive Retail Book (Celebrating the History of Christian Dior, 1948-2010) (video)
Ubiq Motion Sensor Display at Future Ready Singapore (video)
Interactive Virtual DJ on a Transparent Pane, by LM3LABS and Brief Ad
LM3LABS' Catchyoo Interactive Koi Pond: Release of ubiq'window 2.6 Development Kit and Reader
A Few Things from LM3LABS
LM3LABS, Nicolas Loeillot, and Multi-touch
More from LM3LABS: Ubiq'window and Reactor.cmc's touch screen shopping catalog, Audi's touch-less showroom screen, and the DNP Museum Lab.


About LM3LABS
"Founded in 2003 by a team of passionate researchers, engineers, designers, and marketers from various international backgrounds, focused on fast transformation of innovation into unique products, LM3LABS is a recognized pioneer in computer vision-based interactivity solutions. Keeping a strong customer focus, LM3LABS' team of unique people pioneers new directions, explores new concepts, new technologies and new interactions.  Engaging, playful and magic, LM3LABS' products and solutions are always scalable and reliable"

info@lm3labs.com

Note to readers:
Over the past couple of years there has been an explosion of postWIMP technologies and applications, and at this pace it has been difficult for me to keep abreast of it all. There is quite a bit I miss, given my full-time job and daily life!

I welcome information about postWIMP interactive technologies and applications from my readers.  Due to time constraints, not interest, I am not always able to post about a topic as soon as I'd like.  That is OK, as my intention is not to be the first blogger to spread the latest tech news.  I like to dig in deep when I can and make connections between innovative, interesting technologies and the people and ideas behind them. 




Nov 23, 2010

First International Visual Learning Lab Conference: Background Info, Program, Abstracts, & Publication Links (Budapest University of Technology and Economics)

Background:


I first came across the work of Hungarian philosopher Kristóf Nyíri in 2003 when I was researching information for a paper I was writing, "Thinking, learning, and communicating with multimedia". I had the honor of meeting Kristóf Nyíri when I presented my paper at a conference in 2004 at the Hungarian Academy of Sciences, where he worked at the time. The conference, "The Global and the Local in Mobile Communications: Places, Images, People, Connections", was co-sponsored by T-Mobile and was part of the Communications in the 21st Century: The Mobile Information Society series of interdisciplinary conferences.


I recently learned that Dr. Nyíri was involved in putting together an upcoming international conference hosted by the Visual Learning Lab at the Budapest University of Technology and Economics. This important conference is coming up very soon, on December 1st!


Visual and interactive media technologies have come a long way since 2004.  In my opinion, these technologies have the potential to create new, efficient, engaging, and meaningful ways for people to learn, remember, communicate, and share knowledge.  I'm not alone in my thoughts regarding this matter, as you'll see from the topics that will be discussed at the VLL conference.


For your convenience, I've shared some information from the Visual Learning Lab (VLL) website in this post.  I encourage you to take the time to read the VLL mission statement, selected publications of some of the members of the VLL,  and the abstracts of the presentations for the upcoming conference.  The abstracts include short bios of the presenters.   


Be prepared to do some deep thinking when you read Kristóf Nyíri's publications!


Mission Statement of the Visual Learning Lab
"Although we naturally think in both words and images, educational theory has focused overwhelmingly on the verbal dimensions of teaching and learning. This is in part a reflection of the rise of book printing: pictures receded into the background, even in spite of efforts by Comenius and others to integrate them into texts created for educational purposes. In today's networked digital environment, however, images are easy to access, and can be handled just as smoothly as words. In response to the new challenges hereby created, the Department of Technical Education in the Budapest University of Technology and Economics has established the Visual Learning Lab (VLL), with the goal of furthering the use of visual technologies -- including film, video, and interactive digital media -- in the teaching and learning process, and of engaging in high-level research on all aspects of visual education."

VLL Publications (PDF)
Visual Learning Bibliography
A working bibliography compiled by VLL Budapest participants (as of Jan. 31, 2010)



Program for the December 1st VLL Conference


Written by Horváth Cz. János   
Monday, 08 November 2010 12:34

Visual Learning (1st VLL Budapest Conference, 2010)

Registration

09:30 –  09:50, Opening addresses

Plenary Session

10:00 – 10:20: Roger Murphy, The Visual Enrichment of Higher Education
10:20 – 10:40: Christoph Wagner, Visual Experiences in Art History
10:40 – 11:00: Petra Aczél, Enchanting Bewilderment: Concerns of Visual Rhetoric

Section A

11:10 – 11:30: Gabriella Németh, The Visual Rhetorical Figures of the Giant Billboard „ARC” (Face) Exhibition
11:30 – 11:50: Ágnes Veszelszki, Image and Self-representation
11:50 – 12:10: Anna Szlávi, The Image of Women: A Conceptual Analysis of Commercial Posters
12:10 – 12:30: Zsuzsanna Kemenesi, Selection by Personalization

Section B

11:10 – 11:30: György Molnár, Images, Charts, and the Flow of Knowledge
11:30 – 11:50: János Cz. Horváth, Pictorial Skills in the Service of Knowledge-Digging
11:50 – 12:10: Franz Dotter – Marlene Hilzensauer, "SignOnOne" – Visual Learning for the Deaf
12:10 – 12:30: Jean-Rémi Lapaire, Visuo-kinetic Explorations of Grammar
12:30 – 14:00: Lunch

14:00 – 14:20: John Mullarkey, Cinema: The Animals that Therefore We Are (On Temple Grandin's Picture Theory, in Pictures)
14:20 – 14:40: Zoltán Kövecses, Contextual Images As Metaphors

Section A

14:50 – 15:10: Kristóf Nyíri, Metaphor and Visual Thinking
15:10 – 15:30: Mikkel R. Haaheim, Metaphor is a Constellation
15:30 – 15:50: Biljana Radić-Bojanić, Mental Images as a Metaphorical Vocabulary Learning Strategy
15:50 – 16:10: Barbara Reiter, Visualizing Human Rights (movie)

Section B

14:50 – 15:10: Gábor Bencsik, The Image-Anthropological Approach to Historiography: Gypsies in 19th-Century Hungary
15:10 – 15:30: Zsuzsanna Kondor, "World Picture" and Beyond – Representation Revisited
15:30 – 15:50: Daniela G. Camhy, Visuality and the Acquisition of the Concept of Time
15:50 – 16:10: Anna Somfai, Visual Thinking and the Creation and Transmission of Knowledge in Medieval Philosophical and Scientific Manuscripts

Plenary Session

16:20 – 16:40: Dieter Mersch, On Visual Epistemology: The Logic of "Showing"


16:40 – 17:00: Concluding discussion

RELATED
Visual Learning Lab's Partner Institutions
University of Nottingham
Universität Potsdam
(GIB, Society for Interdisciplinary Image Science)
(Chair for Philosophy with Focus on Cognitive Science, Prof. Klaus Sachs-Hombach, Chemnitz, Germany)
(Chair for Art History, Prof. Dr. Christoph Wagner)
Universität Innsbruck
Center for Digital Culture Studies,
University of Pécs (Hungary), Department of Philosophy


Jun 28, 2010

Slow Media Manifesto (a link from Nat Torkington of O'Reilly Radar)

Slow Media Manifesto

"In the second decade, people will not search for new technologies allowing for even easier, faster and low-priced content production. Rather, appropriate reactions to this media revolution are to be developed and integrated politically, culturally and socially. The concept “Slow”, as in “Slow Food” and not as in “Slow Down”, is a key for this. Like “Slow Food”, Slow Media are not about fast consumption but about choosing the ingredients mindfully and preparing them in a concentrated manner. Slow Media are welcoming and hospitable. They like to share." -Slow Media Manifesto


I especially liked #5 of the Slow Media Manifesto:

"5. Slow Media advance Prosumers, i.e. people who actively define what and how they want to consume and produce. In Slow Media, the active Prosumer, inspired by his media usage to develop new ideas and take action, replaces the passive consumer. This may be shown by marginals in a book or animated discussion about a record with friends. Slow Media inspire, continuously affect the users’ thoughts and actions and are still perceptible years later. "

Slow Media Blog

RELATED
Slow Media
Beyond the Beyond: The Slow Media Manifesto
Bruce Sterling, Wired 6/28/10
Apres les slow food, les slow media?
Nouvo, 6/25/10
Le manifeste des slow media (French translation)

May 28, 2010

CNN's Interactive Map and Timeline of Iraq and Afghanistan Casualties "Home and Away"


Via Flowing Data and CNN

Nathan Yau, of Flowing Data, posted information and a link to CNN's interactive Casualties: Home and Away website. This website allows you to visually explore the casualty statistics of the wars in Afghanistan and Iraq, beginning with the first of the fallen in 2001. You can zoom into a region and see pictures and names of people.  The website provides a way for friends and family to share memories about their loved ones.

Home and Away also provides a "list view" option, shown in one of the pictures below.  Visitors to the site can sort by name or year of death.  Sliders on the map view provide a way of looking at the pattern of deaths over time.  It is sad, but this website makes us remember that war is real.  Deaths are not simply statistics.

Flowing Data
Home and Away

May 8, 2010

Revisiting Razorfish: Emerging Experiences, RockstAR application, and more...

I've written a few posts about Razorfish in the past. What is Razorfish?


"The Razorfish Emerging Experiences team is a dedicated group of highly experienced professionals focused solely on emerging experiences and technologies. "Effective innovation" is our multifaceted approach to concepting and delivering pioneering solutions for our clients."
Razorfish has forged ahead into very interesting - and fun - territory. Here is a video of the RockstAR application. It combines multi-touch technology and augmented reality, utilizing the Razorfish Vision Framework (RVT), integrated with the Razorfish Touch Framework.

RockstAR (Augmented Reality) Experience Demo from Razorfish - Emerging Experiences on Vimeo.


A recent post on the Razorfish Emerging Experiences blog, The Technology Behind RockstAR, provides a detailed account of the technology that was pulled together to make it happen. The application is integrated with Twitter and Flickr.
RockstAR
-Razorfish Emerging Experiences Blog
"For the RockstAR experience, we are analyzing each frame coming from an infrared camera to determine if faces are found in the crowd. Once a face is detected, it is assigned a unique ID and tracked. Once [we] receive a lock on the face, we can pass position and size information to the experience where we can augment animations and graphics on top of the color camera feed."
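The detect-then-track loop described in the quote can be sketched in miniature: detections arrive once per frame as bounding boxes, and a box that lands close to a previously tracked face inherits that face's ID, while anything else gets a fresh one. This is a toy nearest-match tracker of my own, not Razorfish's actual code; the box format and distance threshold are assumptions:

```python
class FaceTracker:
    """Toy per-frame face tracking: each detection is an (x, y, w, h)
    box; a detection near a previously seen face keeps that face's ID,
    otherwise it is assigned a new one."""

    def __init__(self, max_jump=50):
        self.max_jump = max_jump   # max pixel movement allowed between frames
        self.next_id = 0
        self.tracked = {}          # face ID -> last known (x, y, w, h)

    def update(self, detections):
        new_tracked = {}
        for box in detections:
            match = None
            for fid, old in self.tracked.items():
                # Manhattan distance on the box origin as a cheap proximity test
                if fid not in new_tracked and \
                        abs(box[0] - old[0]) + abs(box[1] - old[1]) <= self.max_jump:
                    match = fid
                    break
            if match is None:          # no nearby known face: new ID
                match = self.next_id
                self.next_id += 1
            new_tracked[match] = box
        self.tracked = new_tracked     # faces that vanished are dropped
        return self.tracked

tracker = FaceTracker()
tracker.update([(100, 100, 40, 40)])        # a face appears -> ID 0
ids = tracker.update([(105, 102, 40, 40)])  # small movement -> still ID 0
```

Once a face holds a stable ID, its position and size can be handed to the rendering layer to composite graphics over the camera feed, as the quote describes.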


RELATED
One of my previous posts includes a video of the Razorfashion application, which highlights the Razorfish Touch Framework:


Razorfish's Touch Framework "Razorfashion" - A lot like my idea for an in-home FashionMirrorAdvisor...


I'm still hoping to work on my FashionMirrorAdvisor - but with a twist. Now that I have a smartphone, I want to incorporate a mobile app into the concept. Guys probably just wouldn't understand.  (However, something like this would make a nice gift for a guy who is a bit lacking in the fashion department.)


Below is a remix of my previous post


RAZORFISH'S TOUCH FRAMEWORK:  RAZORFASHION - A LOT LIKE MY IDEA FOR AN IN-HOME FASHIONMIRRORADVISOR (5/23/09)


Razorfish recently unveiled the Razorfashion application designed to provide shoppers with an engaging retail experience within the "multi-channel shopping ecosystem". I'm not the "shop 'til you drop" type of gal, but I can see that this concept could be useful in other situations, after a few tweaks.



As soon as I saw this Razorfish Touch "Fashion" demo video, it touched a nerve. I've been playing around with a similar idea, but for my personal use, in the form of an RFID-enabled system. I'd call it something like "FashionMirrorAdvisor".


Instead of showing skinny fashion models like the Razorfashion application, I'd harness the power of a built-in webcam and mirror my own image on the screen. My mirror would dress me up in the morning when I'm way too foggy to think about matching colors and accessories.
     
My FashionMirrorAdvisor would be my friend. My "smart" friend, since all of my clothes would be RFID-tagged, along with my shoes, jewelry, and other accessories. My make-up, too. It would be a no-brainer. I really could use this application - just ask my husband!


More often than not, most mornings I find myself staring at the clothes in my closet, frozen in time, unable to formulate a fashion thought. I might set my eyes on a favorite blouse, but blank out when I try to think about the rest of the steps I need to pull my look together.
     
I know I can't wear my reddish-pink camisole with my dusty-orange/brown slacks, but at 5:15 A.M., who has the time to think about this little detail? My friend the FashionMirrorAdvisor would prevent me from making this fashion faux pas.
     
No problem.
     
My FashionMirrorAdvisor would show me a few outfits, and dress my real-time moving image on the screen. Since she knows all things, she'd show me ONLY the articles of clothing that were clean, since my RFID system would keep up with all of that. It would be much more functional than a "virtual wardrobe" application. I could try out different earrings without having to get them out.
     
If I couldn't find something, the RFID system would take care of this detail. My FashionMirrorAdvisor would know where I misplaced my clothes, accessories, and even my keys, since they would all be tagged. The mirror application would provide me with a nice little map of my house and car, and highlight the location of the item.
     
My FashionMirrorAdvisor would keep track of my laundry, too. This would be a great feature. So if my dirty laundry was piling up, and I wanted to wear outfit X, Y, or Z over the next few days, I'd receive a gentle reminder that I'd need to do some laundry first!
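Under the hood, this whole daydream amounts to a registry keyed by RFID tag: each tag read updates an item's last-seen location and its clean/dirty state, and the mirror only suggests from what's clean. A hypothetical Python sketch (every tag, item, and field name here is invented for illustration):

```python
# Hypothetical tag registry: each garment's RFID tag maps to what it is,
# where it was last seen, and whether it is clean.
wardrobe = {
    "tag-001": {"item": "reddish-pink camisole", "location": "closet", "clean": True},
    "tag-002": {"item": "dusty-orange slacks",   "location": "car",    "clean": True},
    "tag-003": {"item": "favorite blouse",       "location": "hamper", "clean": False},
}

def wearable_now(wardrobe):
    """Only suggest clean items: the mirror never offers something
    sitting in the laundry pile."""
    return [v["item"] for v in wardrobe.values() if v["clean"]]

def locate(wardrobe, name):
    """Find a misplaced item by name via its last-seen tag read."""
    for v in wardrobe.values():
        if v["item"] == name:
            return v["location"]
    return None
```

A real system would update `location` and `clean` from tag-reader events placed around the house, but the data model stays this simple.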


Another practical feature:
     
My FashionMirrorAdvisor would also serve as my health consultant, keeping track of my weight and BMI. This data, along with information gained from the webcam, would be combined so that my advisor would NEVER suggest an outfit that would be too...snug.


I could program the system to provide me with gentle reminders if my weight was an issue. My FashionMirrorAdvisor would show me images of myself "before" and "after", outfits included.

Information about the "after" outfits could be fed to the system from the web-catalogs of my favorite fashion retailers, and once I lost those 10 darned pounds, I'd find a nice parcel delivered to my door. Thanks to my FashionMirrorAdvisor, I know that the outfit would be just right.


UPDATE 5/8/10:  The FashionMirrorAdvisor would be integrated with a mobile app - since I now have a smartphone, this would be quite useful in planning shopping trips centered around the purchase of new clothes, shoes, accessories, and coordinating cosmetics!  I created a little game  that I think would be ideal for this sort of thing, too.   I still want to work on this....someday. Too many ideas, too little time!


ALSO RELATED
From the Razorfish site:

"Founded in 2008, Razorfish Emerging Experiences is a cross-functional team composed of strategists, artists, experience designers, and technologists. We’re part of the Razorfish Strategy & Innovation practice led by Shannon Denton. Jonathan Hull is the managing director of the team, Steve Dawson is the technology lead and Luke Hamilton is the creative lead."

Razorfish
Razorfish Emerging Experiences Portfolio
Razorfish Emerging Experiences Blog
Razorfish Emerging Experiences on Vimeo


RELATED 5/8/10
Razorfish Health (Fun music on the home page!)
Razorfish Establishes Cloud Computing Practice
Douglas Quenqua, ClickZ 4/15/10
The Razorfish 5: Five Technologies that Will Change Your Business
Razorfish Whitepapers


If you are looking for a job, you might be interested in the openings at Razorfish. Before applying, take a look at what is expected:
"You dream in digital. You're fluent in the technologies that define our world and passionate about the way they're shaping our future. You're a communicator. A creator. You understand how the Web connects us, and you want to shape the conversation. You're a restless innovator. You're not only waiting for the next big idea to happen, you're making it happen. You're a unique talent, a visionary, an experimenter, and you're looking for an environment that lets you shine. In other words, you're just our type...."


FYI
When I visited the Razorfish website, I noticed that the background appeared to be a live feed of the offices. Since today is Saturday, it makes sense that the only person busy at the office was a custodian. Below is the screenshot: