Nov 19, 2011

Camera-less Tabletop Computing with Samsung SUR40 for Microsoft® Surface® with PixelSense™

Here is the press release: 
Next Generation of Microsoft Surface Available for Pre-Order in 23 Countries
"Software developers interested in creating solutions for the Samsung SUR40 can get started immediately by visiting the Surface Developer Center. The site provides free and easy access to the Surface 2.0 software developer kit, featuring the Input Simulator, which enables developers to write Surface applications on any Windows 7 machine, as well as other helpful developer-related resources. There are already hundreds of highly skilled Surface software development partners that can be found at http://www.surface.com."


(I've listed Microsoft Surface partners at the end of this post and plan to share more about the latest applications for surface computing in the near future.)


The following product information was taken from the Samsung website:
40" Surface Experience 
"Samsung SUR40 is the new generation of Microsoft® Surface® experience featuring PixelSense™ technology, which gives LCD panels the power to see without the use of cameras. Building from the innovation of the first version of Microsoft® Surface® and Samsung’s leading display technology, it is now possible for people to share, collaborate and explore together using a large, thin display that recognizes fingers, hands and other objects placed on the screen." 


PixelSense™ 
"PixelSense™ allows an LCD display to recognize fingers, hands, and objects placed on the screen, including more than 50 simultaneous touch points. With PixelSense™, pixels in the display see what’s touching the screen and that information is immediately processed and interpreted."
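The sensor-in-pixel idea can be sketched in miniature. This is my own illustrative code, not Microsoft's actual pipeline: treat one frame of per-pixel light readings as a 2D grid, threshold it, and group adjacent "lit" pixels into distinct touch points with a flood fill.

```python
# Hypothetical sketch of how a sensing display might turn raw per-pixel
# readings into touch points (threshold values and grid are made up).

THRESHOLD = 0.5

def find_touches(frame):
    """Return one (row, col) centroid per contiguous blob above THRESHOLD."""
    rows, cols = len(frame), len(frame[0])
    seen = set()
    touches = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= THRESHOLD and (r, c) not in seen:
                # Flood fill to collect the whole blob of adjacent lit pixels
                stack, blob = [(r, c)], []
                seen.add((r, c))
                while stack:
                    y, x = stack.pop()
                    blob.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= THRESHOLD
                                and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                # Centroid of the blob = one reported touch point
                cy = sum(p[0] for p in blob) / len(blob)
                cx = sum(p[1] for p in blob) / len(blob)
                touches.append((cy, cx))
    return touches

# Two separated fingertips on a tiny 5x6 sensor grid:
frame = [
    [0.0, 0.9, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.8, 0.0, 0.0, 0.7, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.9, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0],
]
print(len(find_touches(frame)))  # two distinct touch points
```

The real hardware does this at display resolution for 50+ simultaneous contacts; the point of the sketch is just that "pixels that see" plus simple grouping gives you multi-touch without a camera.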


Resolution: 1920 x 1080 
Viewing Angle (H/V): 178° / 178° (CR ≥ 10) 
CPU: Athlon X2 Dual-Core 245e (2.9 GHz) 
Operating System: Windows 7 Professional x64 


GPU: AMD HD6750M 
Northbridge: AMD RS780E 
Southbridge: AMD SB710 


Storage: SATA2 320 GB 
Memory: DDR3 4 GB
USB: 4 x USB 2.0 
Video Out: HDMI 
Ethernet: 100 / 1000 
Audio Codec: Realtek ALC262 Azalia CODEC


Product Dimensions (With Stand): 1,095 x 728 x 707.4 mm 
Product Dimensions (Without Stand): 1,095 x 102.5 x 707.4 mm 
Shipment Dimensions: 1,214 x 299 x 832 mm 


Product Weight: 35 Kg 
Shipment Weight: 45.4 Kg


RELATED
Samsung
Microsoft Surface
Microsoft Surface "What's New"
Microsoft Surface Partners:
Aftermous.com
AKT
AM Production
Black Marble
ETT
Headcandy
IdentityMine
Information Strategies
Infusion
Inhance Digital
Interknowlogy
Intuilab
nSquared
Object Consulting
Onwijs
Razorfish
Sevensteps
Stimulant
Touchtech
T-Systems MultiMedia
UID
Vectorform
XFace

Nov 13, 2011

Is the answer Voronoi? Looking for possible solutions to an art+dance+music+tech idea from a recurring dream....

If you are a long-time reader of this blog, you probably know that I sometimes have unusual dreams about technology.  I don't blog about my dreams very often, but last night, I had another technology dream, a continuation of a dream I had one night last week.  
Voronoi Diagram (Wikipedia)

I'm pretty sure that the last two dreams were sparked by playing with an online interactive demonstration of a Voronoi application before going to sleep one night, and by reading an article about "extracting ordered patterns from a triangular mesh surface" in the November/December issue of IEEE Potentials magazine before turning in last night. 

The dance was probably influenced by my recent viewing of the North Carolina Dance Theater's performance of Innovative Works with my mom, someone who encouraged my study of music, art, and dance at an early age.

Some of my tech dreams are sort of...practical. For example, in one recurring dream, I find myself coding for a flexible mesh/grid application. Sometimes the mesh/grid has something to do with wireless sensor networks on curved terrain, perhaps related to something like the Smart Grid, and sometimes I find myself working on an application that analyzes streaming data from a variety of sources, for security prediction purposes.  At other times, I'm coding for something more artistic, my preference.

Last night, my dream focused on creating a flexible mesh fabric to be used in a multimedia dance/graphic arts/music performance.  I was coding for this performance using a Voronoi-like algorithm.
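For readers who haven't played with Voronoi diagrams: each point in the plane belongs to the cell of whichever "site" is nearest. A minimal brute-force sketch of that assignment is below; the dancer positions standing in as sites are my own made-up illustration, not anything from the dream code.

```python
import math

def voronoi_cell(point, sites):
    """Index of the site nearest to `point`, i.e. which Voronoi cell it lies in."""
    return min(range(len(sites)),
               key=lambda i: math.dist(point, sites[i]))

# Hypothetical dancer positions acting as the Voronoi sites:
sites = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]

# Classify a few mesh/fabric vertices by nearest dancer:
print(voronoi_cell((0.5, 0.2), sites))  # 0: nearest the first dancer
print(voronoi_cell((3.8, 1.0), sites))  # 1
print(voronoi_cell((2.0, 2.5), sites))  # 2
```

Coloring or sounding each mesh vertex by its cell index is enough to make the fabric "react" as the dancers move; real-time versions typically use Fortune's sweep-line algorithm or a GPU distance pass instead of this brute-force loop.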


This is the best I can do to explain this: The fabric is carried by the dancers, and is both reactive and generative. In essence, the fabric is intertwined/embedded in the dance, the music, and the graphics. In my dream, everything looked/sounded/felt awesome and otherworldly, and the music that merged and morphed during the dance was so beautiful, not only the melodies, but the sounds.  (In a previous dream, the mesh contained a "nanotechnology" component, but I'll save that quest for the future.)


I thought I'd look through some of my web bookmarks and search a bit more for information related to this topic.  For now, here is the "brain dump".  I have more to add to this post, and plan to port it to a reference page for this blog in the future.  I hope that this post will be useful to some of my art/music/dance/tech readers!

RELATED AND SOMEWHAT RELATED


Update:  Right after uploading this post, I came across a link to a WebGL demo for a 3D music video of pop singer Ellie Goulding's song, 'Lights', by HelloJoy.   Visitors to the webpage can click to interact with the environment. If you keep the button pressed, you fly faster.  If you tweet the link, you'll see your name crop up as you fly around in the soundspace.  For more information about the making of 'Lights', take a look at Behind the Scenes of 'Lights': The Latest WebGL Sensation!  (Carlos Ulloa, 11/9/11)


After I watched the 'Lights' video, I recalled Radiohead's 'House of Cards' video, which I wrote about back in 2008: 
 "We were rolling computers all day"...The Making of Radiohead's House of Cards using imaging and info visualization software.   The process behind the making of the House of Cards video was described in detail in Chapter 10 of the book Beautiful Data.

Bradley, E., Capps, D., Luftig, J., & Stuart, J.M. "Toward Stylistic Consonance in Human Movement Synthesis" (pdf). The Open Artificial Intelligence Journal, 2010, 4, 1-19.
Bradley, E., & Stuart, J. "Using Chaos to Generate Variations on Movement Sequences" (pdf). Chaos, 8:800-807, 1998.
Bradley, E., & Stuart, J. "Learning the Grammar of Dance" (pdf). Proceedings of the Fifteenth International Conference on Machine Learning, Madison, WI, 1998.
Bradley, E., Capps, D., & Rubin, A. "Can Computers Learn to Dance?" Proceedings of International Dance & Technology (IDAT), Tempe, AZ, Feb 1999.
Chaotic Dance: Using mathematics to generate choreographic variations
Schedl, M., Hoglinger, C., & Knees, P. "Large-Scale Music Exploration in Hierarchically Organized Landscapes Using Prototypicality Information" (pdf).
Fournel, N. "Procedural Audio for Video Games: Are We There Yet?" (pdf). GDC 2010.


Voronoi Cells, created by Nathan Nifong.  A version of this interactive work was used in a DanceDraw performance


Patterns in the Noise (Nathan Nifong's site - FYI, Nathan worked with Celine Latulipe on the Dance.Draw project while completing his bachelor's degree in computer science at UNC-Charlotte)
Voronoi Dance (Christian Gross, using openFrameworks)
Voronoi art: Slow Trip (Oktalist/Mat)


The above video, by Mat/Oktal, was inspired by his viewing of Thomas Ruff's Substrat images. 
Scott Snibbe Studio (Interactive art, music, and animation for iPhone, iPad, iPod, and Mac)

Interactive Voronoi Diagram Generator with WebGL (Alex Beutel)
 
The above video was found in Alex Beutel's blog post, "Interactive Voronoi Diagrams with WebGL"

Posts about DanceDraw and related work at UNC-Charlotte:
News from the HCI lab at UNC-Charlotte- Creative Interactions (Videos)
Exploring the Design Space in Technology-Augmented Dance at CHI 2010:  Celine Latulipe's team from UNC-Charlotte
Interactive Surveillance:  Live digital art installation by Annabel Manning and Celine Latulipe

SIGCHI  Digital Arts and Interaction Community:  Building Bridges
The Interdisciplinary World of Dance and Interactive Technology

HTML5Voronoi  (HTML5Code website)

HTML5 Voronoi, Live Version
Update to code to compute Voronoi diagrams (Raymond Hill, 5/22/11)
William Forsythe's "Synchronous Objects-One Flat Thing, Reproduced" - Multidisciplinary online interactive project: Translating choreography into new forms.
BCS HCI 2011 Workshop: When Words Fail:  What can Music Interaction tell us about HCI?
Woven Sound (Alex McLean)
Real DJs Code Live (Robert Andrews, Wired, 7/3/06)
Visualization of Live Code (Alex McLean)
Voronoi diagrams of music (pdf)  (Alex McLean, 2006)
WebGL
Sylvester: Vector and Matrix Math for JavaScript
Generative Art Links (Mikael Hvidtfeldt Christensen)
Schacher, J.C. Motion to Gesture to Sound:  Mapping For Interactive Dance (pdf) Proceedings of the 2010 Conference on New Interfaces for Musical Expression (NIME 2010), Sydney, Australia
Code & Form:  Computational Aesthetics (Marius Watz)
Werghi, N. Extracting ordered patterns from a triangular mesh surface.  IEEE Potentials, Nov/Dec 2011
Last night I dreamt about haptic touch-screen overlays
Hyun-Seok Kim's 'Voronoi' dragonfly wing inspired superyacht 2

Hyun-Seok Kim's Voronoi Yacht


Art + Tech: The work of Aparna Rao and Soren Pors - a Ted Talk video worth watching!

Aparna Rao: High-tech art (with a sense of humor)

Aparna Rao is an artist from Bangalore, India, who creates interactive installations supported by electro-mechanical systems.  She collaborates with artist Soren Pors.


RELATED
Pors & Rao website - a delightful interactive site highlighting the works of Aparna Rao and Soren Pors.  Below are pictures of some of the duo's work, along with links to the Vadehra Art Gallery, where the works can be viewed online:


Pygmies: http://www.vadehraart.com/exhibition/viewDetails/63/1285
Decoy: http://www.vadehraart.com/exhibition/viewDetails/63/1287

Drifter: http://www.vadehraart.com/exhibition/viewDetails/63/1288

Nov 11, 2011

Revisiting Good Blogs: IDEUM update

Jim Spadaccini is the director of Ideum and is responsible for the Ideum blog, which is well worth reading if you are interested in creative uses of multi-touch, gesture, and interactive multimedia applications.  Jim also serves as the principal investigator of the Open Exhibits project, a non-profit, open-source, multi-touch, multiuser software initiative that includes a free software development kit (SDK) for museums, students, nonprofits, and educational use.


Here are a few links to posts from the Ideum blog:
Building a High-Resolution Multi-touch Wall: Part 1, Part 2, Part 3,  Part 4 and Part 5.
Ideum Blog
Events at The Tech and the Maxwell Museum
Open Exhibits, Year Two

The Open Exhibits Research and Design blog is worth reading, as it often includes Jim Spadaccini's words of wisdom and some interesting video clips.


More about IDEUM
Ideum will be involved in Building Interfaces for the New Decade, a day-long conference at The Tech Museum in San Jose, California, on Tuesday, November 15, 2011.

Ideum is working on interconnectivity between devices, as shown below, where people are connecting their iPhones to a multi-touch table:
Heist at the British Museum

Here is a video about Ideum's Tiny Drifters exhibit, a 7-foot multi-touch wall at the Monterey Bay Aquarium:

Here is a promotional video about Ideum's rugged MT-55 multi-touch table, built to hold up to interaction from thousands of museum visitors.

Link to a great rant about interaction design: it's all about the hands; screens and surfaces, not so much!

Brief Rant on the Future of Interaction Design Bret Victor, 11/10/11 Via Putting People First

Revisiting Good Blogs - Web of Things: "Architecting the Web of Things for Tinkerers and Hackers"


One of my interests is pervasive computing, which casts a very wide "web".  That's why I like the Web of Things blog.  Below is an excerpt from the "About" section of the blog:

"Why do you want to change the Internet? What’s wrong with it? 
Not much really. It’s just a little boring. It only contains data published by other humans, but it’s not enough linked with our physical reality, not real time / real world enough for our taste! 


Why should I read this blog? 
Well depends how much you care about technology. It’s both for geeks/hackers who need a playground to develop and try ideas. But it’s also for the security expert who wants to take part to make the Web a safer place to hang out. It’s also for people who want to find new ideas." 


More about the Web of Things: 
"The Web of Things is a community of developers, researchers, and designers that explore the future of the physical Web. We explore how to leverage Web standards to interconnect all types of embedded devices (sensors, mobile phones, etc.) in order to make them easier to use and to integrate in classic Web applications. We aim to build a future Web of devices that is truly open, flexible, and scalable, and we believe Web standards are the best way to do it." 
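To make that "devices as Web resources" idea concrete, here is a minimal hypothetical sketch: an embedded sensor represented as a plain resource with a JSON body and links that a classic Web application could follow. The device name, fields, and URL layout are my own invention for illustration, not the Web of Things project's API.

```python
import json

def device_resource(device_id, temperature_c):
    """Build a JSON representation of a sensor as a RESTful Web resource.

    All field names and URL paths here are made up for illustration.
    """
    return json.dumps({
        "id": device_id,
        "kind": "temperature-sensor",
        "value": temperature_c,
        "unit": "celsius",
        # Links let ordinary Web clients discover related resources,
        # just as they would follow hyperlinks between pages.
        "links": {
            "self": f"/devices/{device_id}",
            "readings": f"/devices/{device_id}/readings",
        },
    })

# A sensor reading, ready to serve as the body of an HTTP GET response:
body = device_resource("kitchen-1", 21.5)
print(body)
```

Serving such a document over plain HTTP is the whole trick: any browser, mashup, or script that can fetch a URL and parse JSON can now talk to the device, with no device-specific protocol in the way.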


The Web of Things blog was founded by Dominique Guinard and Vlad Trifa, two "researchers/geeks" working at ETH Zurich and SAP Research Zurich.