Here is a quote from Jason Silva's website: "The adjacent possible is a kind of shadow future, hovering on the edges of the present state of things, a map of all the ways in which the present can reinvent itself" - Steven Johnson
Jason Silva is a Fellow at the Hybrid Reality Institute: "A Research and Advisory Group Focused on Human-Technology Co-Evolution and Its Implications for Global Business, Society, and Politics".
SOMEWHAT RELATED My husband DVR'd the pilot of "Touch", a new offering from Fox that appears to incorporate some of the concepts in the above review. We watched it last night, before I came across Jason Silva's review of ABUNDANCE. Coincidence? Maybe not : )
(I'm an armchair futurist. I work with kids with autism spectrum disorders. This stuff probably interests me more than it should!)
"A wireless square with sensors and a simple web app to set rules, Twine tells you what your things are doing by email, text or Twitter." I want one!
This project was developed by David Carr and John Kestner, the designer-engineers behind Supermechanical. They are passionate about creating connectable objects. They honed their skills in the interdisciplinary MIT Media Lab.
More information about Twine can be found on the KICKSTARTER website. Here is a bit of info from the site for the tech-curious:
"Twine is a wireless module tightly integrated with a cloud-based service. The module has WiFi, on-board temperature and vibration sensors, and an expansion connector for other sensors. Power is supplied by the on-board mini USB or two AAA batteries (and Twine will email you when you need to change the batteries)."

"The Spool web app makes it simple to set up and monitor your Twines from a browser anywhere. You set rules to trigger messages — no programming needed. The rules are put together with a palette of available conditions and actions, and read like English: WHEN moisture sensor gets wet THEN tweet "The basement is flooding!" We'll get you started with a bunch of rule sets, and you can share rules you create with other Twine owners."

"Because the hardware and software are made for each other, setup is easy. There's nothing to install — just point Twine to your WiFi network. Sensors are immediately recognized by the web app when you plug them in, and it reflects what the sensors see in real time, which makes understanding and testing your rules easy."
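Those WHEN/THEN rules are essentially condition-action pairs. Just to illustrate the idea (Twine's actual Spool service is a closed web app, so the names and thresholds below are entirely made up), here is a minimal Python sketch of that kind of rule:

```python
# Hypothetical sketch of a Twine-style WHEN/THEN rule as a
# condition-action pair. Sensor names and thresholds are invented
# for illustration; this is not Twine's real API.

def make_rule(condition, action):
    """Return a rule that fires its action when the condition holds."""
    def rule(readings):
        if condition(readings):
            return action(readings)
        return None  # condition not met, nothing to do
    return rule

# WHEN moisture sensor gets wet THEN send an alert message.
flood_rule = make_rule(
    lambda r: r.get("moisture", 0) > 0.5,
    lambda r: "The basement is flooding!",
)

print(flood_rule({"moisture": 0.8, "temperature": 21.0}))  # rule fires
print(flood_rule({"moisture": 0.1, "temperature": 21.0}))  # rule stays quiet
```

A palette of conditions and actions like Twine's could then just be a library of small functions the user snaps together.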
"The Tomorrow Project" is an international program that explores and creates science fiction based on science fact. The project features science fiction stories, comics and short screenplays based on current research and emerging technologies, and examines their effect on our future. -Intel The Tomorrow Project-Seattle
The Reactable featured in the above video is used for DJ-ing in clubs. The one I've played with is at the science museum in my area - I love it. It is fun to improvise on the Reactable with another person. For more information, see my previous blog posts featuring the Reactable.
AUGMENTED REALITY While listening to CNBC on my satellite radio on the way home today, I heard that investing in Qualcomm might be a good idea. I wonder if this means that Wall Street analysts think that AR will become mainstream soon...
The video below shows a variety of creative AR game applications:
Qualcomm has an AR SDK that comes with tutorials, samples, an API reference, and developer forums. The SDK can be downloaded from the Qualcomm AR web-page.
I'm happy to share some information about the topics of upcoming theme issues planned for Personal and Ubiquitous Computing. The information below was taken from the PUC's Facebook page. I added links to information about the managing editors for most of the themes.
"PUC is currently working with some of the leading researchers, research groups, conferences and workshops to produce theme issues around specific topics. Here are the issues we currently have in progress:"
Measuring behavior and interaction – methods and new application domains: E.I. Barakova
"For more details, check back in our Facebook Notes or contact the editor managing the theme issue." - the editor's email addresses can be found on the PUC Facebook site.
Note: Ubiquitous Computing was one of my favorite graduate courses and I still can't get enough of it. In my dreams, I would be happy just playing around with emerging technologies and experimenting with new applications, and nothing else, for a year or two, in and out of the lab.
Gillian Hayes is an Assistant Professor in Informatics in the School of Information and Computer Science and the School of Education at UC Irvine. Some of her research has focused on the use of interactive technologies with children who have autism. She was recently interviewed about a feature article she co-authored that was published in Personal and Ubiquitous Computing: Interactive Visual Supports for Children with Autism.
The interview can be found on the following Facebook Notes page:
"The Razorfish Emerging Experiences team is a dedicated group of highly experienced professionals focused solely on emerging experiences and technologies. "Effective innovation" is our multifaceted approach to concepting and delivering pioneering solutions for our clients."
Razorfish has forged ahead into very interesting - and fun - territory. Here is a video of the RockstAR application. It combines multi-touch technology and augmented reality, utilizing the Razorfish Vision Framework (RVT), integrated with the Razorfish Touch Framework.
A recent post on the Razorfish Emerging Experiences blog, The Technology Behind RockstAR, provides a detailed account of the technology that was pulled together to make it happen. The application is integrated with Twitter and Flickr. From the post:

"For the RockstAR experience, we are analyzing each frame coming from an infrared camera to determine if faces are found in the crowd. Once a face is detected, it is assigned a unique ID and tracked. Once we receive a lock on the face, we can pass position and size information to the experience, where we can augment animations and graphics on top of the color camera feed." -Razorfish Emerging Experiences Blog
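The detect-assign-an-ID-then-track loop described in that quote is a classic pattern. The Razorfish Vision Framework itself is proprietary, so purely as a conceptual sketch (with made-up bounding boxes standing in for the infrared camera's face detections), here is a toy version in Python that matches each new detection to the nearest existing track:

```python
# Toy illustration of the detect-then-track loop: each frame's face
# detections (bounding boxes) are matched to existing tracks by how
# close their centers are; unmatched detections get a fresh unique ID.
# This is a conceptual sketch, not the Razorfish Vision Framework.

import itertools

def center(box):
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def track(frames, max_dist=50):
    ids = itertools.count(1)   # source of unique track IDs
    tracks = {}                # track ID -> last known center
    history = []
    for boxes in frames:
        frame_ids = []
        for box in boxes:
            cx, cy = center(box)
            # Find the nearest existing track within max_dist, if any.
            best = None
            for tid, (tx, ty) in tracks.items():
                d = ((cx - tx) ** 2 + (cy - ty) ** 2) ** 0.5
                if d <= max_dist and (best is None or d < best[1]):
                    best = (tid, d)
            tid = best[0] if best else next(ids)
            tracks[tid] = (cx, cy)
            frame_ids.append(tid)
        history.append(frame_ids)
    return history

# A face drifts slightly between frames, then a new face appears far away.
frames = [[(10, 10, 20, 20)], [(14, 12, 20, 20)], [(200, 200, 20, 20)]]
print(track(frames))  # the drifting face keeps its ID; the new one gets its own
```

Once a detection holds a stable ID from frame to frame, its position and size can drive the augmented graphics, just as the post describes.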
RELATED One of my previous posts includes a video of the Razorfashion application, which highlights the Razorfish Touch Framework:
I'm still hoping to work on my FashionMirrorAdvisor - but with a twist. Now that I have a smartphone, I want to incorporate a mobile app into the concept. Guys probably just wouldn't understand. (However, something like this would make a nice gift for a guy who is a bit lacking in the fashion department.)
Below is a remix of my previous post
RAZORFISH'S TOUCH FRAMEWORK: RAZORFASHION - A LOT LIKE MY IDEA FOR AN IN-HOME FASHIONMIRRORADVISOR (5/23/09)
Razorfish recently unveiled the Razorfashion application designed to provide shoppers with an engaging retail experience within the "multi-channel shopping ecosystem". I'm not the "shop till you drop" type of gal, but I can see that this concept could be useful in other situations, after a few tweaks.
As soon as I saw this Razorfish Touch "Fashion" demo video, it touched a nerve. I've been playing around with a similar idea, but for my personal use, in the form of an RFID-enabled system. I'd call it something like "FashionMirrorAdvisor".
Instead of showing skinny fashion models like the Razorfashion application, I'd harness the power of a built-in webcam and mirror my own image on the screen. My mirror would dress me up in the morning when I'm way too foggy to think about matching colors and accessories. My FashionMirrorAdvisor would be my friend. My "smart" friend, since all of my clothes would be RFID-tagged, along with my shoes, jewelry, and other accessories. My make-up, too. It would be a no-brainer. I really could use this application - just ask my husband!
Most mornings I find myself staring at the clothes in my closet, frozen in time, unable to formulate a fashion thought. I might set my eyes on a favorite blouse, but blank out when I try to think about the rest of the steps I need to pull my look together. I know I can't wear my reddish-pink camisole with my dusty-orange/brown slacks, but at 5:15 A.M., who has the time to think about this little detail? My friend, the FashionMirrorAdvisor, would prevent me from making this fashion faux pas. No problem. It would show me a few outfits and dress my real-time moving image on the screen. Since she knows all things, she'd show me ONLY the articles of clothing that were clean, since my RFID system would keep up with all of that. It would be much more functional than a "virtual wardrobe" application.

I could try out different earrings without having to get them out. If I couldn't find something, the RFID system would take care of that detail, too. My FashionMirrorAdvisor would know where I misplaced my clothes, accessories, and even my keys, since they would all be tagged. The mirror application would provide me with a nice little map of my house and car, and highlight the location of the item.

My FashionMirrorAdvisor would keep track of my laundry, too. This would be a great feature. So if my dirty laundry was piling up, and I wanted to wear outfit X, Y, or Z over the next few days, I'd receive a gentle reminder that I'd need to do some laundry first!
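Just for fun, the "only suggest clean clothes" part of the idea is easy to picture in code. In this hypothetical sketch, every RFID-tagged item carries a clean flag and a last-seen location, and the advisor filters candidate outfits down to the wearable ones (all names and data here are invented, of course):

```python
# Hypothetical sketch of the FashionMirrorAdvisor's clean-clothes
# filter. Item names, locations, and outfits are invented examples;
# in the imagined system the clean flag and location would come
# from RFID reads around the house.

from dataclasses import dataclass

@dataclass
class Item:
    name: str
    clean: bool
    location: str  # last place an RFID reader saw the tag

def wearable_outfits(outfits, wardrobe):
    """Keep only the outfits whose every item is currently clean."""
    index = {item.name: item for item in wardrobe}
    return [
        outfit for outfit in outfits
        if all(index[name].clean for name in outfit)
    ]

wardrobe = [
    Item("blue blouse", clean=True, location="closet"),
    Item("camisole", clean=False, location="laundry basket"),
    Item("brown slacks", clean=True, location="car"),
]
outfits = [["blue blouse", "brown slacks"], ["camisole", "brown slacks"]]
print(wearable_outfits(outfits, wardrobe))  # only the all-clean outfit survives
```

The laundry reminder would then just be the flip side of the same data: if an outfit I want this week fails the filter, nudge me to run a load.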
Another practical feature: My FashionMirrorAdvisor would also serve as my health consultant, keeping track of my weight and BMI. This data, along with information gained from the webcam, would be combined so that my advisor would NEVER suggest an outfit that would be too...snug.
I could program the system to provide me with gentle reminders if my weight was an issue. My FashionMirrorAdvisor would show me images of myself "before" and "after", outfits included.
Information about the "after" outfits could be fed to the system from the web-catalogs of my favorite fashion retailers, and once I lost those 10 darned pounds, I'd find a nice parcel delivered to my door. Thanks to my FashionMirrorAdvisor, I know that the outfit would be just right.
UPDATE 5/8/10: The FashionMirrorAdvisor would be integrated with a mobile app - since I now have a smartphone, this would be quite useful in planning shopping trips centered around the purchase of new clothes, shoes, accessories, and coordinating cosmetics! I created a little game that I think would be ideal for this sort of thing, too. I still want to work on this....someday. Too many ideas, too little time!
ALSO RELATED From the Razorfish site:

"The Razorfish Emerging Experiences team is a dedicated group of highly experienced professionals focused solely on emerging experiences and technologies. "Effective innovation" is our multifaceted approach to concepting and delivering pioneering solutions for our clients."

"Founded in 2008, Razorfish Emerging Experiences is a cross-functional team composed of strategists, artists, experience designers, and technologists. We're part of the Razorfish Strategy & Innovation practice led by Shannon Denton. Jonathan Hull is the managing director of the team, Steve Dawson is the technology lead and Luke Hamilton is the creative lead."

Razorfish
Razorfish Emerging Experiences Portfolio
Razorfish Emerging Experiences Blog
Razorfish Emerging Experiences on Vimeo
If you are looking for a job, you might be interested in the openings at Razorfish. Before applying, take a look at what is expected: "You dream in digital. You're fluent in the technologies that define our world and passionate about the way they're shaping our future. You're a communicator. A creator. You understand how the Web connects us, and you want to shape the conversation. You're a restless innovator. You're not only waiting for the next big idea to happen, you're making it happen. You're a unique talent, a visionary, an experimenter, and you're looking for an environment that lets you shine. In other words, you're just our type...."
FYI When I visited the Razorfish website, I noticed that the background appeared to be a live feed of the offices. Since today is Saturday, it makes sense that the only person busy at the office was a custodian. Below is the screenshot:
"Nothing's impossible, we just get smarter and smarter by the day." - Student, commenting about his experiences in the SMALLab environment.
The research team at Arizona State University, led by David Birchfield, has created embodied, multimodal, and collaborative mediated learning environments using mixed reality, and their work has been in use at Coronado High School with much success. SMALLab takes a learner-centered approach to learning, providing multi-modal, multi-sensory activities that engage learners and lead to a deeper understanding of complex concepts.
Video of high school students describing their work in SMALLab (Coronado High School) "Central to our work is the development of a new interactive mixed reality learning environment, the Situated Multimedia Art Learning Lab [SMALLab]. SMALLab is an environment developed by a collaborative team of media researchers from education, psychology, interactive media, computer science, and the arts. SMALLab is an extensible platform for semi-immersive, mixed-reality learning. By semi-immersive, we mean that the mediated space of SMALLab is physically open on all sides to the larger environment. Participants can freely enter and exit the space without the need for wearing specialized display or sensing devices such as head-mounted displays (HMD) or motion capture markers. Participants seated or standing around SMALLab can see and hear the dynamic media, and they can directly communicate with their peers that are interacting in the space. As such, the semi-immersive framework establishes a porous relationship between SMALLab and the larger physical learning environment. By mixed-reality, we mean that there is an integration of physical manipulation objects, 3D physical gestures, and digitally mediated components. By extensible, we mean that researchers, teachers, and students can create new learning scenarios in SMALLab using a set of custom designed authoring tools and programming interfaces. SMALLab supports situated and embodied learning by empowering the physical body to function as an expressive interface. Within SMALLab, students use a set of “glowballs” and peripherals to interact in real time with each other and with dynamic visual, textual, physical and sonic media through full body 3D movements and gestures. For example, working in the Spring Sling scenario, students are immersed in a complex physics simulation that involves multiple sensory inputs to engage student attention.
They can hear the sound of a spring picking up speed, see projected bodies moving across the floor, feel a physical ball in their own hands and integrate how the projected ball moves in accordance with their own body movements to construct a robust conceptual model of the entire system."
About David Birchfield: David Birchfield is "a media artist, researcher, and educator. He has created work that spans from interactive music performance to generative software to robotic installations to K-12 learning environments. In recent years, this work cuts across three areas of exploration: K-12 learning, media art installations, and live computer music performance."
Some publications:
Birchfield, D., Megowan-Romanowicz, C., Johnson-Glenberg, M., Next Gen Interfaces: Embodied Learning Using Motion, Sound, and Visuals – SMALLab. To appear in Proceedings of the American Educational Research Association Annual Conference; SIG Applied Research in Virtual Environments for Learning [ARVEL], San Diego, CA, April 2009.
Megowan-Romanowicz, C., Uysal, S., Birchfield, D., Growth in Teacher Self-Efficacy Through Participation in a High-Tech Instructional Design Community, to appear in Proceedings of the National Association for Research in Science Teaching Annual Conference, Garden Grove, CA, April 2009.
Birchfield, D., Thornburg, H., Megowan-Romanowicz, C., Hatton, S., Mechtley, B., Dolgov, I., Burleson, W., Embodiment, Multimodality, and Composition: Convergent Themes Across HCI and Education for Mixed-Reality Learning Environments, Journal of Advances in Human-Computer Interaction, Volume 2008, Article ID 874563.
Dolgov, I., Birchfield, D., McBeath, M., Thornburg, H., Todd, C., Amelioration of Axis-Aligned Motion Bias for Active versus Stationary Judgments of Bilaterally Symmetric Moving Shapes' Final Destinations, Perception and Psychophysics, in press, 2008.
D. Birchfield, B. Mechtley, S. Hatton, H. Thornburg, Mixed-Reality Learning in the Art Museum Context, Proceedings of ACM SIG Multimedia, Vancouver, BC, October 27, 2008.
S. Hatton, D. Birchfield, M.C. Megowan, Learning Metaphor through Mixed-Reality Game Design and Game Play, Proceedings of ACM Sandbox Conference, Los Angeles, CA, August 10, 2008. [pdf]
Institute of Play's SMALLab contact: Katie Salen, Executive Director, Institute of Play Associate Professor, Parsons The New School for Design
The Institute of Play, along with the Joan Ganz Cooney Center and others, has a number of publications related to technology and learning:
"The mission of The Joan Ganz Cooney Center is to catalyze and support research, innovation and investment in digital media technologies to advance children's learning. Nurturing foundational and "21st century" literacies:
The inaugural focus of the Center—given the national need—will be on determining how technology can help elementary-aged children develop the fundamental building blocks of literacy. These include the vital reading, writing, speaking and listening capabilities that all children must develop during the primary grades. A special emphasis of the Center will be on struggling readers who risk educational failure if they do not catch up to their peers by grade four...Another important focus of the Center is to leverage the potential of interactive media to promote "21st century" literacies that students will need to compete and cooperate in our connected world—competencies such as critical thinking and problem solving, second language competency, inter-cultural understanding and media literacy."
I've been so busy writing reports* that this almost passed me by!
I found out about 6rounds because they use Twitter as a promotional platform. I happened to notice that this company was following me and clicked on the link.
6rounds started out as an outgrowth of a speed dating website, and the application was initially designed for people to use while waiting for speed dating sessions. According to the FAQ on the 6rounds website, "6rounds is a live meeting point, offering users a variety of experiences that they enjoy together using a combination of webcams, real-time games, social activities and media engagements."
Since I'm a happily married middle-aged woman, I'm not sure 6rounds is up my alley. I think social singles, college students, and others who don't mind flashing their faces through a webcam would like it.
If I had time, I might like to play around with GixOO, the open-source API that underpins 6rounds. GixOO lets developers build their own games and activities. The application allows users to track each other as they move their mice, and also enables people to see the same things as their friends as they interact online.
6rounds looks like it might provide possibilities for collaborative projects in education, but I won't be sure until I give it a try.
So what is 6rounds?
FOR THE TECH-CURIOUS
The following information was quoted from the Openomics blog from Sun Microsystems' ISV Engineering:
"6rounds is the first product built on the GixOO live social platform, initially developed on the LAMP stack. As a member of the Sun Startup Essentials program, GixOO connected with Sun's ISV Engineering team to test the scalability of their platform on SAMP --the Solaris-based AMP stack, available in an integrated and optimized package from Sun, the Sun Glassfish Web Stack f.k.a. CoolStack. At the time, we ran the benchmark on a Sun SPARC Enterprise T5120 server --featuring the 64-way CoolThreads processor UltraSPARC T2-- running Solaris 10 and CoolStack 1.3. GixOO loved the DTrace kernel instrumentation of Solaris 10 --DTrace gives unique insights into how the application performs, live on a production system-- and the Containers technology a.k.a. Zones --this light-weight virtualization layer of Solaris allows multiple applications to run in isolation from each other on the same physical hardware--, and quickly adopted them for their internal use.
"At GixOO, we use Sun SPARC-based server, powered by Solaris 10 for our R&D environment. The system gives us the required flexibility and components isolation that we need. Thanks to SPARC's great SMP abilities, we achieve high performance for many development environments running on one single 1U server.
Solaris Zones are very comfortable and simple to configure, and allow the full utilization of the great power hidden in this small machine, which makes Solaris 10 an excellent choice for system administrators. We are using Sun MySQL Server which gives our application high speed data storage solution, and in the future we might migrate to the MySQL Cluster solution to get even faster results."
Dmitry Shestak, CTO, GixOO"
Somewhat Related
2/26/10: Oracle bought Sun in 2009. Here were the latest results when I did a search to get more information:
Not Really Related
*For those new to this blog, I'm a school psychologist who returned to her day job full time a year and a half ago, when the economy was taking a nosedive. Before that, I was working part-time and taking computer and technology classes, initially to learn how to create interactive multimedia applications and games.
Since some of the kids and teens I work with have a range of abilities and disabilities, including autism, I developed an interest in accessibility. How can universal design principles be applied to games and emerging interactive technologies? I'm also fascinated by interactive displays and surfaces of all sizes, especially ubiquitous systems that support cognition, collaboration and communication.
One of my pet projects:
My vision? A collaborative multimedia, multi-modal interactive time-line might help us to understand complex, interrelated factors and events more effectively. It would provide an opportunity for the inquisitive to view things from a broad perspective, and also explore things in rich detail. Ideally, the time-line would support multi-touch, multi-user interaction on larger displays and interactive whiteboards, and allow for people who are remotely located to participate in the process.
Now that one of my schools will be getting a multi-touch SMARTTable, I'd like to experiment with time-line concepts and interactions on a table surface. I'd also like to figure out how this can work seamlessly with the existing SMARTBoard that is in the classroom. Of course, this would have to take place after work hours!