TITLE:  this.mirror

CATEGORY:   Interactive augmented video installation

DATE:   May 2015

TOOLKITS:   Processing, Kinect V2, OpenCV

COLLABORATORS:   David Cihelna, Karthik Patanjali, Manxue Wang

this.mirror was created in fulfillment of a brief for Lauren McCarthy's Conversation and Computation class at NYU's Interactive Telecommunications Program.  It was designed as an experiment to see what effect technology can have on facilitating interaction between strangers in a physical space.  The software uses the OpenCV computer vision library to perform facial tracking and feature recognition for anyone standing in front of the screen.  A thought bubble appears above each user's head, mimicking a private thought, with the content of the bubble generated from the other users' features.  For example, if a user is smiling or wearing glasses, the software takes this into consideration when creating 'thoughts' for the other users in the environment.
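The core of that thought-generation step could be sketched roughly as follows. This is an illustrative reconstruction, not the installation's actual code: the feature names, phrases, and the `ThoughtGenerator` class are all hypothetical, and it assumes face detection (e.g. via OpenCV) has already produced per-user feature flags.

```java
import java.util.*;

// Hypothetical sketch: each user's bubble text is derived from the
// *other* users' detected features (smiling, glasses). Feature set and
// phrases are illustrative assumptions, not the installation's real data.
public class ThoughtGenerator {
    // Detected features for one tracked face.
    static class User {
        final String name;
        final boolean smiling;
        final boolean glasses;
        User(String name, boolean smiling, boolean glasses) {
            this.name = name;
            this.smiling = smiling;
            this.glasses = glasses;
        }
    }

    // Build a 'private thought' for `self` from everyone else's features.
    static String thoughtFor(User self, List<User> everyone) {
        List<String> fragments = new ArrayList<>();
        for (User other : everyone) {
            if (other == self) continue; // only react to the other users
            if (other.smiling) fragments.add("Why is that person grinning at me?");
            if (other.glasses) fragments.add("Those glasses look sharp.");
        }
        if (fragments.isEmpty()) return "It's quiet in here...";
        // Deterministic pick here; the installation could randomize instead.
        return fragments.get(0);
    }

    public static void main(String[] args) {
        List<User> users = Arrays.asList(
            new User("A", true, false),   // A is smiling
            new User("B", false, true));  // B wears glasses
        System.out.println(thoughtFor(users.get(0), users)); // A reacts to B's glasses
        System.out.println(thoughtFor(users.get(1), users)); // B reacts to A's smile
    }
}
```

In a Processing sketch this would run inside the draw loop, with the feature flags refreshed each frame from the face-tracking results before the bubbles are rendered.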

SHOWINGS:   BabyCastles (NYC, 2015), NYU ITP Spring Showcase (2015), NYC Media Lab Annual Summit (2015), UNDP We The People’s Technology Showcase during the United Nations General Assembly (2015), Microsoft Innovation Hub (2015)

this.mirror on display at The United Nations.

The installation received a great deal of positive user feedback and proved effective at sparking conversations between strangers, often through humor.  We were invited to show it at several events, including at the United Nations during the 2015 climate summit.

We found that one of the things people enjoyed most was taking pictures of themselves with the installation and sharing them on social media.  At every event where the installation was shown, we encouraged people to share their pictures using the hashtag #thismirror.