Showing posts with label art. Show all posts

Apr 12, 2014

Vodafone's "Firsts" Campaign: 2 Grandmas take their first flight; Neil Harbisson's Color Conducted Concert

Vodafone, a telecommunications company, was off my radar until an endearing video crossed my path on Facebook today.  Vodafone Firsts is part of a global brand program, led by social media, that aims to inspire people to do things for the very first time. Vodafone's strategy seems to be working.

In the following video, An and Ria, two elderly women, take their very first plane trip together to Barcelona.  It is well worth taking 10 minutes to watch it!   The video also does a good job of showing how the women approach using newer technologies, such as a tablet/video phone -- and a VR flight simulator to prepare for the flight.

An & Ria's First Flight  



In the following video, Neil Harbisson uses technology to create a concert that is based on the transformation of color into music. The interesting thing is that Neil is color-blind and has a "cyborg" apparatus that interprets colors for him by converting them into music/sounds. Vodafone provided him with an amazing creative opportunity.  

For more videos and behind-the-scenes information, see the full story on the Vodafone Firsts site, as well as the TED Talk video in this post.

Neil Harbisson's First Color Conducted Concert Trailer


Neil Harbisson's TED Talk:  I listen to color



RELATED
Vodafone Firsts
Vodafone Corporate Site

Vodafone Americas Foundation:  "Mobile for Good"
Vodafone's 2013 Wireless Innovation Project's competition winners pushed the envelope to create new ways of using mobile technology and sensors for health and environmental monitoring.
ColdTrace: Wireless Vaccine Monitoring
MoboSens: Mobile Water Sensor
G-Fresnel Cellphone Spectrometer (optical spectroscopy)



Dec 2, 2013

It is in our DNA: Chromosome Painting, DNA Music

I hadn't planned on testing my DNA, but on a whim, I ordered a kit from Ancestry.com.  I downloaded the raw data file of my DNA, thinking that it would be nice to play with some day, and then learned about a free online resource, Gedmatch, that would make it easy to upload my raw DNA file and go exploring.  I'm happy that I did, since the FDA is trying to stop personal genome testing. At the time of this post, Ancestry.com and Family Tree DNA were not impacted by this effort. 23andMe was the target, because the company provides customers with information about health risks and possible genetic conditions.

The picture below is a segment of a screenshot of my chromosome painting results, created using the Ad-Mix utilities and "Davidski's" Eurogenes K13 model.  Each color represents a different group of my ancestors, and each chromosome has a different mix. 

I knew I'd get some very colorful results, given the fact that my ancestors came from all over the planet!  


Although I like the interplay of color and pattern in my DNA Ad-Mixture results, and think the section above would make a great piece of modern art, I learned this morning that there is a company, DNA11, that creates personalized DNA art suitable for displaying in homes - or offices. DNA11 was featured in my local paper this morning, which recommended DNA art as a great holiday gift for science buffs.


Credit:  DNA11  http://www.dna11.com/images-gallery

DNA MUSIC
Looking at the picture of my painted chromosomes, I was reminded of waveforms, and remembered reading something about DNA music. After a quick search, I came across the Your DNA Song website. Your DNA Song is a company that provides customers with a DNA testing kit; a saliva sample is processed by a lab and then bio-sequenced to generate music.  To hear what that sounds like, take a look at the video - a nice rendition of Chromosome #1, by Stuart Mitchell.
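I don't know exactly how the bio-sequencing-to-music step is done, but the general idea can be sketched: translate DNA codons into amino acids, then map each amino acid to a pitch. Everything in the toy sketch below - the partial codon table, the scale, and the mapping rule - is my own invention for illustration, not Your DNA Song's actual method:

```python
# Toy illustration of "DNA music": translate a DNA snippet into amino acids,
# then map each amino acid to a note. The mapping is invented for illustration.

# Partial codon table (DNA codons -> one-letter amino acid codes).
CODON_TABLE = {
    "ATG": "M", "GCT": "A", "GGA": "G", "TGC": "C",
    "GAA": "E", "TTT": "F", "CAT": "H", "AAA": "K",
}

# Hypothetical pitch mapping: each amino acid gets a note of a C-major scale.
SCALE = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]

def dna_to_notes(seq):
    """Translate codons to amino acids, then to scale notes."""
    notes = []
    for i in range(0, len(seq) - 2, 3):
        aa = CODON_TABLE.get(seq[i:i + 3])
        if aa is None:          # skip codons outside our partial table
            continue
        notes.append(SCALE[sorted(CODON_TABLE.values()).index(aa) % len(SCALE)])
    return notes

print(dna_to_notes("ATGGCTGGATGC"))   # -> ['C5', 'C4', 'G4', 'D4']
```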

Be forewarned: the Your DNA Song website suggests that the protein translation of your DNA, set to music tuned to 432 Hz, has healing powers.  Yes, it sounds very New Age, but there is history behind it.  Some composers of classical music used this tuning, but in 1939 the standard was changed to A = 440 Hz - a change that remains controversial in some circles. 
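For the curious, the difference between the two tunings is easy to compute with the standard equal-temperament formula f(n) = A × 2^((n−69)/12), where n is the MIDI note number and A is the frequency assigned to A4 (MIDI note 69):

```python
# Equal-temperament pitch: f(n) = A * 2 ** ((n - 69) / 12), where n is the
# MIDI note number and A is the reference frequency of A4 (MIDI 69).

def pitch_hz(midi_note, a4=440.0):
    return a4 * 2 ** ((midi_note - 69) / 12)

# Middle C (MIDI 60) under the two tunings discussed above:
print(round(pitch_hz(60), 2))            # A4 = 440 Hz -> 261.63
print(round(pitch_hz(60, a4=432.0), 2))  # A4 = 432 Hz -> 256.87
```

Every note shifts down by the same ratio (432/440, about 1.8 percent), so the intervals themselves are unchanged; only the absolute pitch differs.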

If you don't want to shell out money to hear your personal DNA patterns sing, you might like the DyNAMixers on the Your DNA Song website.  Created by the musician Stuart Mitchell, the mixers provide a nice way of experimenting with and remixing music based on DNA and proteins.  The screenshot below gives you an idea of what you'll find if you play with the DyNAMixers on the site.
Credit: Your DNA Song  http://yourdnasong.com/music_samples.htm

RELATED
F.D.A. Orders Genetic Testing Firm to Stop Selling DNA Analysis Service
Andrew Pollack, New York Times, 11/25/13
AncestryDNA Test
23andMe
Family Tree DNA
GEDMATCH
openSNP
Eurogenes Genetic Ancestry Project;  Eurogenes Blog
Davidski Polako
Ondrizek's Chromosome Painting...and our beautiful DNA
Tina Hoggat, 4Culture Blog, 7/2/12
Gifts for the science whiz
DNA11
DNA11 Images Gallery
Go Inside The World's First Lab That Turns DNA Into Custom Art
Dina Spector, Business Insider, 6/12/13
Gedmatch: a DNA geek's dream site
Judy G. Russell, The Legal Genealogist, 8/12/12
John Roach, National Geographic News, 10/21/05
Michael Zev Gordon, The Guardian, 6/24/10
Proportion in Musical Scales (Sacred Geometry website)
Return of the 432hz: Mozart/Cold Play/Radiohead/The Doors
AltoEgo Blog 7/16/13  (scroll down for examples of familiar music in 432 Hz)




Jul 27, 2013

Drawing with Sounds, an Interactive Musical Drawing App Created with Processing

This is my first creation using the Drawing with Sounds application. This cause-and-effect application was created with Processing 1.5.1 and produces musical sounds and patterns, along with random abstract shapes, triggered by movements of a mouse. The application has been adapted for switch users, making it accessible to those who have motor impairments.

Processing is an open-source programming language and integrated development environment, built on Java, that is easy to learn for people with limited technical backgrounds. It is used by people from a range of fields, such as art, music, journalism, and data visualization.

The fact that Processing is easy to use makes it great for people with just a little coding knowledge to take a variety of open-source applications and tweak a few of the variables according to their needs.  Using the Drawing with Sounds app as an example, I wanted to change the size of the interactive canvas.  With a few keystrokes, I changed the original size of the screen from 800 x 800 to something larger, as shown below:

(Screenshot: the line of code that sets the sketch's canvas size, changed from 800 x 800 to a larger canvas.)
I then typed in another variable for the background color, and was pleased with the results:

(Screenshot: the sketch running with the new canvas size and background color.)
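For readers who want to try the same tweaks, here is a minimal Processing sketch illustrating the two changes described above. The exact variable names in the Drawing with Sounds source may differ; this is just the general shape of the edit:

```processing
void setup() {
  // The original canvas was 800 x 800; enlarge it here.
  size(1200, 800);
  // Set the background color (RGB values) to taste.
  background(10, 40, 90);
}
```

In Processing, size() and background() are the two calls that control exactly what I changed: the canvas dimensions and the background color.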
The application was developed from Andrew R. Brown's tutorial on "SoundCipher", based on the SoundCipher library for Processing.  According to the description of the application, "The sketch is intended for use....as a sensory stimulus using sound, shape and color to create engagement". 




Below is a videoclip created by Keith Manville, demonstrating what happens when you interact with Drawing with Sounds: 


I look forward to sharing more on this topic in future blog posts.  I'll be sure to include basic "how-to" information for my "low-tech" readers and colleagues who desire to learn a bit more about using and creating basic interactive applications that appeal to young people with special needs.  


RELATED
Note:  SEN is the UK acronym for "Special Educational Needs"

James Winchester, SEN Classroom Blog, 10/31/13

Keith Manville, opensen blog, 10/27/12
"Open-source, Sensory and Interactive Technology in SEN"

(An accessible switch-enabled version of Drawing with Sounds can be downloaded from the above link.)

Free sensory applications built in Processing 1.5.1
"Download links for sensory applications built in Processing 1.5.1, designed to assist SEN students in engaging in learning through the use of ICT, interactive “Smart” boards, touchscreen or tablet technologies.  Our experience is that using applications on these devices or with natural user interfaces such as the Microsoft Kinect can increase the opportunities for engagement and social communication with many students."


I will be posting more information about emerging interactive technologies for special needs in the future. In the meantime, take a look at the following resources:

The SHAPE Project, technologies to enhance learning for young people on the Autism Spectrum
iPadSEN
KinectSEN Wiki
"Using Kinect in Special Schools for Pupils with Severe Learning Difficulties"
iGazeSEN
OPENSEN Videos



Dec 5, 2012

Augmented Human Conference '13 (ACM CHI) March 7th and 8th; CFP paper deadline Jan 8, 2013

Looks like a fascinating conference!

ACM SIGCHI 4th Augmented Human International Conference


Call for Papers
The 4th Augmented Human (AH) International Conference in cooperation with ACM SIGCHI will be held in Stuttgart, Germany, on March 7–8 2013, focusing on augmenting human capabilities through technology for increased well-being and enjoyable human experience. 

As in previous years, the conference proceedings will be published in the ACM Digital Library as a volume in its International Conference Proceedings Series with ISBN. 

Topics 
  • Wearable Computing and Ubiquitous Computing 
  • Bionics, Biomechanics, and Exoskeletons 
  • Brain-Computer Interfaces, Muscle Interfaces, Implanted Interfaces 
  • Sensors and Hardware 
  • Smart Artifacts and Smart Textiles 
  • Augmented Sport, Health, & Well-being, Training/Rehabilitation Technology 
  • Augmented and Mixed Reality, Tourism and Games and Context-Awareness 
  • Augmented Fashion and Art 
  • Trust, Privacy, and Security of Augmented Human Technology 
PROGRAM COMMITTEE

Submission Categories for Papers 

  • Full papers: 8 pages, anonymized, 30-minute presentation 
  • Short papers: 4 pages, anonymized, 15-minute presentation 
  • Demonstration papers: 2 pages, anonymized, demonstration at conference 
  • Poster papers: 2 pages, anonymized, presented at conference 
  • Art pieces: 1–2 pages, not published, exhibited at conference 

The four paper categories will be published in the ACM digital library and follow the ACM paper format. We encourage authors to submit supporting video material in addition to the PDF submission. 

Important Dates 
  • January 8, 2013      paper submission deadline 
  • February 5, 2013    author notification 
  • February 12, 2013  camera-ready and ACM copyright form due 
  • March 7–8, 2013    scientific conference in Stuttgart 

Art and Exhibition 

Augmented Human 2013 will feature contributions by art researchers and practitioners. Artists participating and exhibiting at Augmented Human 2013 will have to be self-funded to attend the conference. 

All art pieces will be included in a video to be published on the Augmented Human YouTube channel. Additionally, an exhibition catalog will be published on the Augmented Human website, including full-page pictures and the descriptions provided by the authors. 

Submission of Art Pieces 

Authors may choose the format to present their art pieces for submission. The submission should include the requirements of space, light, electricity, and equipment.

Organizing Committee

General Chair:  Albrecht Schmidt, University of Stuttgart, Germany
Program Co-Chairs: Andreas Bulling, University of Cambridge, UK; Christian Holz, Hasso Plattner Institute, Germany

Jul 12, 2012

Seventh International Conference on Tangible, Embedded and Embodied Interaction, Feb 10-13, Barcelona, Spain: Call for Papers

CALL FOR PAPERS
TEI 13:  Seventh International Conference on 
Tangible, Embedded and Embodied Interaction
February 10-13, 2013.  Barcelona, Spain

Important Dates:
August 6th, 2012, 11:59 pm (UTC+1): Paper submission deadline 
November 26th, 2012, 11:59 pm (UTC+1): Work in Progress submission deadline 
October 22nd, 2012: Paper acceptance notification 


February 10-13th, 2013: TEI 2013 Conference at Barcelona 


Description 
TEI 2013 is the seventh international conference on tangible, embedded, and embodied interaction. The work presented at TEI focuses on physical interaction with computing technology and addresses design challenges, theories, experiences, systems, and new developments. The intimate size of this single-track conference provides a unique forum for exchanging ideas and presenting innovative work through talks, interactive exhibits, demos, hands-on studios, posters, art installations and performances.


Research on tangible, embodied and embedded interaction has gained substantial visibility and activity over the past decade, under many different names and research topics and disciplines, including tangible interfaces, graspable interfaces, physical computing, tangible interaction, IT product design, appliance design and interactive spaces. It has also been associated with larger research areas, including mixed, virtual, and augmented reality and ubiquitous and pervasive computing. TEI brings together this emerging field, providing a meeting ground for the diverse communities of research and practice - from computing, hardware, and sensor technology, to HCI, interaction design and CSCW, to product and industrial design and interactive arts. We invite submissions from all of these perspectives: theoretical, conceptual, technical, applied, or artistic. The conference is designed to provide appropriate presentation forms for different types of contributions. Accepted submissions of all types will be included in the Proceedings as papers and will be integrated within the single-track conference. Interdisciplinary submissions are particularly welcome. 


This year TEI will be held in beautiful Barcelona, one of the world's leading cities for tourism, gastronomy, commerce, education, entertainment, media, fashion, science, and the arts. 


Topics for Submission 
Authors are invited to submit high-quality work detailing original research that contributes to advancing this rapidly developing field. Appropriate topics include but are not limited to:



  • Tools to rapidly develop tangible, embedded and embodied user experiences 
  • Case studies and evaluations of working deployments 
  • Analysis of key challenges and proposals of research agendas 
  • Relation of tangible and embedded interaction to other paradigms 
  • Programming paradigms and tools, toolkits, software architectures 
  • Novel interactive uses of sensors+actuators, electronics+mechatronics 
  • Design guidelines, methods, and processes 
  • Novel application areas, innovative systems, industrial applications 
  • Theoretical foundations, frameworks, and concepts 
  • Philosophical, ethical & social implications 
  • Interfaces specific in form and context to particular cultures 
  • Advantages, weaknesses, and affordances of tangible, embedded and embodied interaction 
  • Learning from the role of physicality in everyday environments 
  • Embodied interaction, movement, and choreography of interaction 
  • Organic User Interfaces: flexible, non-flat or actuated display interfaces 
  • Role of physicality in human perception, cognition and experience 
  • Teaching experiences, lessons learned, and best practices 
  • Standardization, production, and business applications 

Papers 
Papers must present original material and will be reviewed rigorously by at least three reviewers in a double-blind process. Papers in all areas will be assessed on their contribution to the field. Papers are due on the 6th August 2012. Papers may be 4 or 8 pages long in ACM SIGCHI format. All papers will undergo the same review and publication process. The length must match the contribution. Authors may propose the presentation format that they feel best suits their contribution (short or long talk, demo, interactive exhibit, or both talk and demo/exhibit). TEI 2013 remains committed to the principle followed in previous TEI conferences - that TEI values all submissions equally. 


One author of each accepted submission must register for the conference before the early registration deadline in order for the final paper version to be published in the conference proceedings. Papers will be published in the ACM digital library. 


Please contact the program chairs Martin Kaltenbrunner, Dave Kirk and Paul Marshall at
tei2013_programchairs@googlegroups.com with questions about paper submissions.

Submission Details 
All work must be submitted electronically via the TEI 2013 Precision Conference site at https://precisionconference.com/~tei You will have to follow the guidelines regarding paper formats and submission categories at http://www.tei-conf.org/13/papersubmission 


Work in Progress 
There will be a second submission opportunity for work in progress until November 26th 2012. Selected works will be presented during dedicated poster sessions at TEI 2013 but will not be published in the conference or online proceedings. 


Explorations TEI 2013 
Explorations comprise several submission formats intended to embrace and nurture contributions from a wide range of communities - such as artists, designers and graduate students - as well as support the publication of cutting edge research. TEI Explorations include the Graduate Student Consortium and Art tracks as well as the Tangible Interaction Design Challenge. Full details will be confirmed in subsequent calls for papers.

Jul 8, 2012

PO-MO, a creative group that combines digital art, interaction, movement, and play to create engaging surfaces and spaces.

I recently learned more about PO-MO, a relatively new start-up tech company based in Winnipeg, Canada. According to the company's information, PO-MO "specializes in interactive digital display solutions, including gesture and motion based interactivity, interactive display content creation and management, and large interactive display and projection services for advertisers, educators, and events."  


Po-Motion was a finalist in an elevator pitch video contest last fall. It has several advantages over potential competitors.  The system is easy to use and priced within a range that is affordable for schools, museums, and other cost-conscious groups that would like to provide technology-supported immersive interactive experiences for people of all ages.  The PO-MOtion software designed for interactive floors and walls starts at $39.99, and works on any computer, using any USB web camera and a projector. Other applications make use of Kinect sensors.
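Camera-and-projector systems like this typically detect movement through frame differencing: comparing successive webcam frames and reacting wherever pixels change. Here is a minimal sketch of that idea (my own illustration of the general technique, not PO-MO's code):

```python
# Minimal frame-differencing motion detector: compare two grayscale frames
# (lists of pixel rows) and report pixels whose brightness changed by more
# than a threshold. Webcam-based interactive floors apply the same idea per
# region to decide where to trigger visual effects.

def motion_pixels(prev, curr, threshold=30):
    """Return (row, col) positions where brightness changed noticeably."""
    changed = []
    for r, (row_a, row_b) in enumerate(zip(prev, curr)):
        for c, (a, b) in enumerate(zip(row_a, row_b)):
            if abs(a - b) > threshold:
                changed.append((r, c))
    return changed

frame1 = [[10, 10, 10],
          [10, 10, 10]]
frame2 = [[10, 200, 10],   # someone stepped into the middle of the frame
          [10, 180, 10]]

print(motion_pixels(frame1, frame2))   # -> [(0, 1), (1, 1)]
```

A real system would also blur the frames and ignore isolated noisy pixels, but the core comparison is this simple.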


I especially like one of PO-MO's recent projects, the Impossible Animals Museum Exhibit, created using Unity 3-D, for the Manitoba Children's Museum.  How does it work?  Children create a colored egg using crayons and paper, which is then scanned into the exhibit and digitally embedded into the system, which includes an interactive wall and floor.  When the egg is touched, it is activated to hatch, and then becomes a motion reactive animal.  The environment includes things like water, landscapes, and even a spaceship.  The system has a "reset world" button for museum staff to use when needed.  
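The egg's life cycle described above can be thought of as a little state machine. Here's a toy sketch (the state and event names are my own invention; the exhibit's actual Unity implementation isn't public):

```python
# Toy state machine for the egg-to-animal flow: drawn -> scanned -> embedded
# -> hatched -> roaming, with a "reset world" action for museum staff.

class Egg:
    def __init__(self):
        self.state = "drawn"

    def advance(self, event):
        transitions = {
            ("drawn", "scan"): "scanned",       # paper egg scanned into the exhibit
            ("scanned", "embed"): "embedded",   # placed into the wall/floor world
            ("embedded", "touch"): "hatched",   # touching the egg hatches it
            ("hatched", "animate"): "roaming",  # becomes a motion-reactive animal
        }
        # Unknown events leave the state unchanged.
        self.state = transitions.get((self.state, event), self.state)

    def reset(self):
        self.state = "drawn"   # the staff "reset world" button

egg = Egg()
for event in ["scan", "embed", "touch", "animate"]:
    egg.advance(event)
print(egg.state)   # -> roaming
```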

Impossible Animals Exhibit

Impossible Animals Interactive Museum Installation from PO-MO Inc. on Vimeo.



The following video explains how the PO-MO system works:


PO-MO is also involved in promotional projects, assisting retailers, ad agencies, and brand managers with creative ways to engage customers and clients:
Ragpickers Kinect-based Window Display

Ragpickers Kinect Window Display from PO-MO Inc. on Vimeo.


The following video provides a scrolling description about PO-MO's work, including promising data collected during implementation:

Other products and services provided by PO-MO include mobile app development. I especially like the augmented reality business card depicted in the following video clip:

Augmented Reality Business Card from PO-MO Inc. on Vimeo


Imagine if your local shopping centers, museums, libraries, or even schools offered this level of immersive interaction on a regular basis!

RELATED
The PO-MOtion system has a wide range of uses. It is currently used in an educational setting in a sensory room for students with special needs, something that I'd like to try out in the near future with students at Wolfe School. I plan to share more about this in another post.


PO-MO Case Studies


PO-MO Bios:
Meghan Athavale – Director/CEO, PO-MO Inc.
"Meghan has been a professional designer and animator since graduating from Red River College in 1997. After graduation, she moved to Calgary, where she spent almost two years directing projects at Aurenya Studios, a start-up animation company. In 2001, Meghan was engaged by Community Connections to support community-based IT development projects in rural Manitoba and in Winnipeg’s inner city.  In 2008, Meghan joined Manlab, developing educational interactive games and resources for Immigrate Manitoba. She also launched Meghan PO-MO Project, a sole proprietorship which provided sound reactive visuals for DJs and venues across Canada. In 2009, Meghan was contracted as the User Experience Designer at Tipping Canoe, a multinational internet marketing company.

In 2010, Meghan formed PO-MO Inc. in partnership with Curtis Wachs. She began working exclusively for the company in December, 2010. Today, Meghan is the driving force behind PO-MO Inc."


Curtis Wachs – Technical Director/COO, PO-MO Inc.
"Curtis graduated from Assiniboine Community College in 2003 where he studied object oriented programming. Directly upon graduating, Curtis was hired by Assiniboine Community College to help design and develop software for online classes. Curtis relocated to Winnipeg in 2006 to create interactive training material for sales staff at E.H. Price. During the course of his work, Curt was apprenticed in 3D modelling and animation by Liem Ngyuen, a former Frantic Films resident. In 2008, Curtis joined Manlab, where he created online educational games for Travel Manitoba, Immigrate Manitoba, and other clients. In 2010, Curtis formally joined PO-MO Project, and the company became a partnership. In June 2010, PO-MO Inc. was founded.

Curtis is currently the technical director at PO-MO Inc., overseeing the project management and workflow of contracted and R&D development projects."


Mar 11, 2012

Sonic Wire Sculptor and Rhonda, a 3D drawing tool

Amit Pitaru and James Paterson, creators of Numby, an app for kids to teach counting skills, have been involved in a number of interesting projects.  Here are two of them:

Sonic Wire Sculptor


The app creates music in 3D, so it is best to use headphones.


Sonic Wire Sculptor is available as an iPhone app.  More information can be found on the Sonic Wire Sculptor website.


Rhonda Forever is a 3D drawing tool that was born around 2003 and has a team of developers who are working to create stable releases of the application so it can be accessed through different platforms.  There is a newer beta version of Rhonda for PC and Mac that can be downloaded from the Rhonda website. The team is working on releases that will support 2-hand interaction.


I'm hoping that Rhonda Forever will be available for the iPad.




RELATED
Everybody Loves Sketch (a user-friendly 3D drawing application designed for non-artists, tested with high school students)
ILoveSketch website


Feb 15, 2012

Makego: Turn your iPhone into a vehicle, by creative digital artist Chris O'Shea

Chris O'Shea is an artist and designer who uses technology in creative and innovative ways. He's known for his "Hand from Above" outdoor installations.  His latest creation, Makego, is an app for the iPhone that bridges the digital and the real world.  Children can build a small vehicle out of Legos that serves as a place to set an iPhone running the app, and then move the vehicle about a larger space.  In the examples below, the children have created scenes using crayons, paper, and colored art foam.

"Makego turns your iPhone / iPod Touch into a toy vehicle. It encourages fun, open ended collaborative play between parent and child. Combining creativity and imagination with the virtual world on screen. Select your vehicle within Makego, then interact with the drivers and their world through animations and sound. This release has 3 vehicles to play with: a race car, ice-cream truck, and river boat. More vehicle are coming later". -Chris O'Shea


The application is now available for $1.99 on the App Store

Makego from Chris O'Shea on Vimeo.





RELATED
Chris O'Shea's Website






Nov 13, 2011

Is the answer Voronoi? Looking for possible solutions to an art+dance+music+tech idea from a recurring dream....

If you are a long-time reader of this blog, you probably know that I sometimes have some unusual dreams about technology.  I don't blog about my dreams very often, but last night, I had another technology dream, a continuation of a dream I had one night last week.  
Voronoi Diagram (Wikipedia)

I'm pretty sure that the last two dreams were sparked by playing with an online interactive demonstration of a Voronoi application before going to sleep one night, and also by reading an article about "extracting ordered patterns from a triangular mesh surface" in the November/December IEEE Potentials magazine before turning in last night. 

The dance probably was influenced by my recent viewing of the North Carolina Dance Theater's performance of Innovative Works with my mom, someone who encouraged my  study of music, art, and dance at an early age.

Some of my tech dreams are sort of...practical. For example, in one recurring dream, I find myself coding for a flexible mesh/grid application. Sometimes the mesh/grid has something to do with wireless sensor networks on curved terrain, perhaps related to something like the Smart Grid, and sometimes I find myself working on an application that analyzes streaming data from a variety of sources, for security prediction purposes.  At other times, I'm coding for something more artistic, my preference.

Last night, my dream focused on creating a flexible mesh fabric to be used in a multimedia dance/graphic arts/music performance.  I was coding for this performance using a Voronoi-like algorithm.


This is the best I can do to explain this: The fabric is carried by the dancers, and is both reactive and generative. In essence, the fabric is intertwined/embedded in the dance, the music, and the graphics. In my dream, everything looked/sounded/felt awesome and otherworldly, and the music that merged and morphed during the dance was so beautiful, not only the melodies, but the sounds.  (In a previous dream, the mesh contained a "nanotechnology" component, but I'll save that quest for the future.)
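For readers new to the term: a Voronoi diagram divides space into cells by assigning every point to its nearest "seed". That nearest-seed assignment is the core of most Voronoi demos and mesh effects, and it fits in a few lines:

```python
# Minimal Voronoi cell assignment: label each point of a grid with the index
# of its nearest seed point (squared Euclidean distance). Coloring each label
# differently produces the classic Voronoi cell picture.

def nearest_seed(x, y, seeds):
    return min(range(len(seeds)),
               key=lambda i: (seeds[i][0] - x) ** 2 + (seeds[i][1] - y) ** 2)

def voronoi_grid(width, height, seeds):
    return [[nearest_seed(x, y, seeds) for x in range(width)]
            for y in range(height)]

seeds = [(0, 0), (4, 0), (2, 4)]
for row in voronoi_grid(5, 5, seeds):
    print(row)
```

This brute-force version is O(pixels × seeds); interactive demos use faster methods (Fortune's sweep-line algorithm, or the GPU trick of drawing a cone per seed), but the resulting cells are the same.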


I thought I'd look at some of my web bookmarks and search a bit more for information related to this topic.  For now, here is the "brain dump".  I have more to add to this post, and plan to port it to a reference page for this blog in the future.  I hope that this post will be useful to some of my art/music/dance/tech readers!

RELATED AND SOMEWHAT RELATED


Update:  Right after uploading this post, I came across a link to a WebGL demo for a 3D music video of pop singer Ellie Goulding's song, 'Lights', by HelloJoy.   Visitors to the webpage can click to interact with the environment. If you keep the button pressed, you fly faster.  If you tweet the link, you'll see your name crop up as you fly around in the soundspace.  For more information about the making of 'Lights', take a look at Behind the scenes of 'Lights': the latest WebGL sensation!  (Carlos Ulloa, 11/9/11)


After I watched the 'Lights' video, I recalled Radiohead's 'House of Cards' video, which I wrote about back in 2008: 
 "We were rolling computers all day"...The Making of Radiohead's House of Cards using imaging and info visualization software.   The process behind the making of the House of Cards video was described in detail in Chapter 10 of the book, Beautiful Data.

Bradley, E., Capps, D., Luftig, J, & Stuart, J.M. Toward Stylistic Consonance in Human Movement Synthesis.(pdf)  The Open Artificial Intelligence Journal, 2010, 4, 1-19
Bradley, E., Stuart, J.  Using Chaos to Generate Variations on Movement Sequences (pdf) Chaos, 8:800-807 (1998)
Bradley, E., Stuart, J.  Learning the Grammar of Dance.(pdf)  Proceedings Fifteenth International Conference on Machine Learning, Madison, WI, 1998
E. Bradley, D. Capps, and A. Rubin, "Can computers learn to dance?," Proceedings International Dance & Technology (IDAT), Tempe AZ, Feb 1999.
Chaotic Dance: Using mathematics to generate choreographic variations
Schedl, M., Hoglinger, C., Knees, P. Large-Scale Music Exploration in Hierarchically Organized Landscapes Using Prototypicality Information (pdf)
Fournel, N. Procedural Audio for Video Games: Are we there yet? (pdf) GDC 2010


Voronoi Cells, created by Nathan Nifong.  A version of this interactive work was used in a DanceDraw performance

Patterns in the Noise (Nathan Nifong's site - FYI, Nathan worked with Celine Latulipe on the Dance.Draw project while completing his bachelor's degree in computer science at UNC-Charlotte)
Voronoi Dance (Christian Gross, using OpenFramework)
Voronoi art: Slow Trip (Oktalist/Mat)


The above video, by Mat/Oktal, was inspired by his viewing of Thomas Ruff's Substrat images. 
Scott Snibbe Studio (Interactive art, music, and animation for iPhone, iPad, iPod, and Mac)

Interactive Voronoi Diagram Generator with WebGL (Alex Beutel)
 
The above video was found in Alex Beutel's blog post, "Interactive Voronoi Diagrams with WebGL"

Posts about DanceDraw and related work at UNC-Charlotte:
News from the HCI lab at UNC-Charlotte- Creative Interactions (Videos)
Exploring the Design Space in Technology-Augmented Dance at CHI 2010:  Celine Latulipe's team from UNC-Charlotte
Interactive Surveillance:  Live digital art installation by Annabel Manning and Celine Latulipe

SIGCHI  Digital Arts and Interaction Community:  Building Bridges
The Interdisciplinary World of Dance and Interactive Technology

HTML5Voronoi  (HTML5Code website)

HTML5 Voronoi, Live Version
Update to code to compute Voronoi diagrams (Raymond Hill, 5/22/11)
William Forsythe's "Synchronous Objects-One Flat Thing, Reproduced" - Multidisciplinary online interactive project: Translating choreography into new forms.
BCS HCI 2011 Workshop: When Words Fail:  What can Music Interaction tell us about HCI?
Woven Sound (Alex McLean)
Real DJs Code Live (Robert Andrews, Wired, 7/3/06)
Visualization of Live Code (Alex McLean)
Voronoi diagrams of music (pdf)  (Alex McLean, 2006)
WebGL
Sylvester: Vector and Matrix Math for JavaScript
Generative Art Links (Mikael Hvidtfeldt Christensen)
Schacher, J.C. Motion to Gesture to Sound:  Mapping For Interactive Dance (pdf) Proceedings of the 2010 Conference on New Interfaces for Musical Expression (NIME 2010), Sydney, Australia
Code & Form:  Computational Aesthetics (Marius Watz)
Werghi, N. Extracting ordered patterns from a triangular mesh surface.  IEEE Potentials, Nov/Dec 2011
Last night I dreamt about haptic touch-screen overlays
Hyun-Seok Kim's 'Voronoi' dragonfly wing inspired superyacht 2

Hyun-Seok Kim's Voronoi Yacht


Art + Tech: The work of Aparna Rao and Soren Pors - a TED Talk video worth watching!

Aparna Rao: High-tech art (with a sense of humor)

Aparna Rao is an artist from Bangalore, India, who creates interactive installations supported by electro-mechanical systems.  She collaborates with artist Soren Pors.


RELATED
Pors & Rao website - a delightful interactive website highlighting the works of Aparna Rao and Soren Pors.  Below are pictures of some of the duo's work, along with links to the Vadehra Art Gallery, where the works can be viewed online:


Pygmies: http://www.vadehraart.com/exhibition/viewDetails/63/1285
Decoy: http://www.vadehraart.com/exhibition/viewDetails/63/1287

Drifter: http://www.vadehraart.com/exhibition/viewDetails/63/1288










Sep 4, 2011

Math Multimedia: Paul Bourke's Random Space Filling Tiling of the Plane; Lots'o Related Info and Links


Paul Bourke is a research associate professor and director of iVEC at the University of Western Australia.  I've been following Dr. Bourke's work for several years and never cease to be amazed by his mathematical and scientific visualization experiments. 


Dr. Bourke recently posted a video of one of his recent projects on Google+ that I found fascinating.  If you are a member of Google+,  you can view the video at https://plus.google.com/117410063754013499525/posts  (I wasn't sure how to embed the video in this post.)


Take the time to follow the link below to Paul Bourke's web page about his current work. The pictures and math are worth the effort!
RANDOM SPACE FILLING TILING OF THE PLANE

Paul Bourke, July 2011


Paul Bourke's "tiling of the plane" work was inspired by John Shier, a computer artist who integrates random numbers in his creations.

In my opinion, introducing students to the works of Paul Bourke, John Shier, and others might be a great way to spark curiosity and creativity in the study of mathematics, computer programming, and digital art.
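As I understand it from Paul Bourke's write-up, Shier's space-filling idea is simple to state: shape areas follow a power law, and each new shape is dropped at random positions until it doesn't overlap anything placed before it. Here's a rough sketch with circles (my own simplification; the parameters and the 25% fill fraction are illustrative):

```python
# Sketch of statistical space filling: circle areas follow a power law
# area_i proportional to 1 / i**c, and each circle is placed by rejection
# sampling until it overlaps no previously placed circle.
import math
import random

def fill_circles(width=100.0, height=100.0, n=50, c=1.3, seed=1):
    random.seed(seed)
    total = width * height * 0.25           # fraction of the area to fill
    norm = sum(1 / (i + 1) ** c for i in range(n))
    circles = []                            # list of (x, y, radius)
    for i in range(n):
        area = total * (1 / (i + 1) ** c) / norm
        r = math.sqrt(area / math.pi)
        while True:                         # rejection sampling
            x = random.uniform(r, width - r)
            y = random.uniform(r, height - r)
            if all(math.hypot(x - cx, y - cy) > r + cr
                   for cx, cy, cr in circles):
                circles.append((x, y, r))
                break
    return circles

circles = fill_circles()
print(len(circles))   # all circles were placed without overlap
```

The surprising part, which Bourke's page explores, is that for suitable exponents this placement never gets stuck, and the shapes fill the plane in the limit.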

RELATED
MRI-style 3D volumetric data set of a liver




"A fulldome (planetarium) visualisation of three microCT volumetric datasets rendered in Drishti.  They include: (i) Liver Lobe, Rabbit liver illustrating venous and arterial (orange) systems; (ii) Interior of a lizard head (common garden skink); (iii) Fossil fish eyeball (Placoderm), 410 million years old, found in Wee Jasper, NSW. "


Credits for Artforms of Nature: A new way of seeing. 
Ajay Limaye, Peter Morse, Paul Bourke. Vizlab, Australian National University. WASP, University of Western Australia. Featuring fisheye volume rendering from Drishti. Datasets Fossil fish eyeball (Placoderm). Gavin Young, ANU Common garden skink. Tim Senden, ANU Rabbit liver lobe. Tim Senden, ANU ©2008 by the artists. "-http://www.youtube.com/user/peteremorse


Drishti: Volume Exploration and Presentation Tool
"Drishti has been developed keeping in mind the end-use : visualizing tomography data, electron-microscopy data, etc. Understanding the data set is important and conveying that understanding to the research community or a lay person is equally important. Drishti is aiming for both. The central idea about Drishti is that the scientists should be able to use it for exploring volumetric datasets as well as use it in presentations."


Lynn Marentette, Interactive Multimedia Technology, 5/14/08

Interdisciplinary Workshop - OZVIZ 2011:  November 23-25, Sydney, Australia
"The workshop provides an occasion for participants to present research outcomes, share innovative ideas, publicise work and meet colleagues. It is highly multidisciplinary, with participants from fields such as mathematics, geoscience, architecture, biology, medicine and astronomy presenting alongside computer graphics and visualisation experts."