Assignment 13: Zen

Interactions with the final version of my final project are presented below:

Zen is a zen-garden simulation that invites users to slow down and relax during the hectic time of final exams and projects. It presents an outline of the user in a field of flowers (thank you Lama for making such beautiful plants!) and allows them to wade through it. If one moves too much, the color changes from green to yellow to red, which affects the growth of the plants in the garden.

In the red state, the plants do not grow, and touching them with one’s hands or feet causes them to wither. It is only when the user slows down and reaches the green state that they get to experience the reward – planting their own flowers for everyone to see!

If one stays calm for a short while, pink lotus plants spawn in the regions occupied by what the Kinect detector sees as people. These can be picked up by people’s hands, and planted in the garden.

If one stays calm for a slightly longer while, a purple plant sprouts in one of the person’s hands; the person can plant those as well! As can be seen from the video, these plants are easier to plant (because the person does not need to actively pick up the plant with their hand).

I am very happy that I was able to implement most of the changes requested during user testing! Planting was a big part of the challenge, and I had to rework the majority of the code to make it possible; after making it work for the purple plants, however, adding the lotus plants was very easy.

Additional signifiers were added to tell people what they should try doing with their plants; this helped explain the interaction better, but I am afraid people still did not have enough patience to wait and see what would happen. Also, markings were added to the floor that showed people precisely where they should stand in order to be seen by the Kinect camera (blue area), and precisely where they should plant their flowers (green area with a plant symbol). This proved fairly intuitive.

Unfortunately, I did not have enough time to implement the wind-like effects that would bend the flowers with people's movement. To make the interaction more intuitive, I added the functionality to shrink flowers when they come in contact with a user's hand. This is not exactly the interaction one would intuitively expect, but it did engage the users and showed them that the visualization can be acted upon, that it is not just a static arrangement of flowers.

I struggled with interference from people behind my detection area. The other visualization was too close, and the Kinect was mistakenly detecting its users as mine. This was a problem because the visualization beyond my detection area had a very long interaction, so people standing there, once detected, would not be un-detected unless the Kinect was forced to forget them (by me blocking their bodies with my own). Tweaks to the code that ensured only one person would be tracked at a time alleviated the problem a little (this was difficult because the library code I was using did not work as expected), but the visualization still required constant supervision on my part, which is obviously not ideal. I realized too late that I should have requested a blanket or a screen to keep people in the background from interfering with the visualization…

Nevertheless, I am pleased with the end result, I think people liked it much more than they did during user testing, and I think they appreciated the ability to leave their mark for others to see in the visualization.

The code is presented below. It is the longest I have written for this class, surpassing even the CM Visualizations project.

import kinect4WinSDK.Kinect;
import kinect4WinSDK.SkeletonData;

Kinect kinect;
ArrayList<SkeletonData> bodies;

PImage kinectDepth;
PImage kinectMask;

int backgroundR = 12;
int backgroundG = 27;
int backgroundB = 16;
color backgroundColor = color(backgroundR, backgroundG, backgroundB);

int numPixels;
int[] previousFrame;
int movementSum = 0;

int MAX_GROUND_BRANCHES = 100;
ArrayList<Branch> groundBranches;

ArrayList<Branch> leftHandBranches;
ArrayList<Branch> rightHandBranches;

ArrayList<Branch> plantedBranches;

ArrayList<Branch> bodyBranches;

boolean leftHandPlantingBodyBranch = false;
boolean rightHandPlantingBodyBranch = false; 

Float[] dummyLine = {width*2.0, height*2.0, width*2.0, height*2.0};

ArrayList<Float[][]> skeletonLines;
Float[][] dummySkeletonLines = {dummyLine, dummyLine, dummyLine, dummyLine, dummyLine};

int state = 0;
int stateLength = 0;
int desiredState = 0;
int desiredStateLength = 0;
int DESIRED_STATE_MIN_LENGTH = 5;

PFont messageFont;
PFont infoFont;

void setup() {
  fullScreen();
  background(backgroundColor);
  kinect = new Kinect(this);
  smooth();
  bodies = new ArrayList<SkeletonData>();
  
  numPixels = width * height;
  previousFrame = new int[numPixels];
  
  groundBranches = new ArrayList<Branch>();
  
  leftHandBranches = new ArrayList<Branch>();
  rightHandBranches = new ArrayList<Branch>();
  
  plantedBranches = new ArrayList<Branch>();
  
  bodyBranches = new ArrayList<Branch>();
  
  makeGroundBranches();
  
  skeletonLines = new ArrayList<Float[][]>();
  
  messageFont = createFont("GothamUltra Regular.otf", 128);
  infoFont = createFont("GothamUltra Regular.otf", 24);
}

void draw() { 
  kinectDepth = kinect.GetDepth();
  kinectMask = kinect.GetMask();
  
  kinectDepth.resize(width,height);
  kinectMask.resize(width,height);
  
  background(backgroundColor);
  
  kinectDepth.loadPixels();
  kinectMask.loadPixels();
  
  // preparation
  getMovementSum(); 
  getState();
  
  updateSkeletonLines();
  
  checkCollisionsGroundBranches(); 
  checkFrontGroundBranches();
  
  adjustGroundBranches(); // avoiding hands and body
  adjustHandBranches(); // move with hands
  adjustPlantedBranches();
  adjustBodyBranches(); // spawn if person is not moving
 
  drawBehindGroundBranches();
  drawPersonFrameDiff();
  drawPersonBones();
  drawBodyBranches();
  drawHandBranches();
  drawFrontGroundBranches();
  drawMessage();
  drawInfo();
}

void getMovementSum() {
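  // Sum the frame-to-frame depth differences over the person's silhouette;
  // this total is the movement measure that drives the state machine.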
  movementSum = 0;
  for (int y = 0; y < height; y += 1) {
    for (int x = 0; x < width; x += 1) {
      int loc = x + y*width;
      
      boolean isMask = (alpha(kinectMask.pixels[loc]) != 0);
      
      if (isMask) {
        int depth = int(brightness(kinectDepth.pixels[loc]));
        
        int diff = abs(depth - previousFrame[loc]);
        movementSum += diff;
      }
    }
  }
}

void getState() {
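  // The state only switches after the new (desired) state has been observed
  // for DESIRED_STATE_MIN_LENGTH consecutive frames; this debounces flicker
  // between movement levels.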
  if (movementSum == 0) {
    state = 0;
    if (desiredState != 0) {
      desiredState = 0;
      desiredStateLength = 0;
    }
    else {
      desiredStateLength += 1;
    }
  }
  
  else if (movementSum < 5000000) {
    if (state == 0) {
      state = 1;
    }
    if (desiredState != 1) {
      desiredState = 1;
      desiredStateLength = 0;
    }
    else {
      desiredStateLength += 1;
    }
  }
  
  else if (movementSum < 10000000) {
    if (state == 0) {
      state = 2;
    }
    if (desiredState != 2) {
      desiredState = 2;
      desiredStateLength = 0;
    }
    else {
      desiredStateLength += 1;
    }
  }
  
  else {
    if (state == 0) {
      state = 3;
    }
    if (desiredState != 3) {
      desiredState = 3;
      desiredStateLength = 0;
    }
    else {
      desiredStateLength += 1;
    }
  }
  
  if (desiredState == state) {
    stateLength += 1;
  }
  else {
    if (desiredStateLength >= DESIRED_STATE_MIN_LENGTH) {
      state = desiredState;
      stateLength = 0;
    }
    else {
      stateLength += 1;
    }
  }
}

void updateSkeletonLines() {
  synchronized (bodies) {
    synchronized (skeletonLines) { 
      // get current skeleton lines
      for (int i = 0; i < bodies.size(); i += 1) {
        skeletonLines.set(i, getSkeletonLines(bodies.get(i)));
      }
    }
  }
}

void checkCollisionsGroundBranches() {
  // collide branches with person
  for (int i = 0; i < groundBranches.size(); i += 1) {
    Branch currentBranch = groundBranches.get(i);
    
    currentBranch.isCollision = false; // reset isCollision state
    for (int j = 0; j < bodies.size(); j+= 1) {
      Float[][] currentSkeletonLines = skeletonLines.get(j);
      // check for intersection with feet
      currentBranch.checkIsCollision(currentSkeletonLines[3], currentSkeletonLines[4]);
      currentBranch.checkIsCollision(currentSkeletonLines[1], currentSkeletonLines[2]);
    }
  }
  
  for (int i = 0; i < plantedBranches.size(); i += 1) {
    Branch currentBranch = plantedBranches.get(i);
    
    currentBranch.isCollision = false; // reset isCollision state
    for (int j = 0; j < bodies.size(); j+= 1) {
      Float[][] currentSkeletonLines = skeletonLines.get(j);
      // check for intersection with feet
      currentBranch.checkIsCollision(currentSkeletonLines[3], currentSkeletonLines[4]);
      currentBranch.checkIsCollision(currentSkeletonLines[1], currentSkeletonLines[2]);
    }
  }
}

void checkFrontGroundBranches() {
  // is branch behind or before person?
  for (int i = 0; i < groundBranches.size(); i += 1) {
    Branch currentBranch = groundBranches.get(i);
    
    currentBranch.isInFront = false; // resetIsInFront state
    for (int j = 0; j < bodies.size(); j += 1) {
      Float[][] currentSkeletonLines = skeletonLines.get(j);
      
      currentBranch.checkIsInFront(currentSkeletonLines[3], currentSkeletonLines[4]);
    }
  }
  
  for (int i = 0; i < plantedBranches.size(); i += 1) {
    Branch currentBranch = plantedBranches.get(i);
    
    currentBranch.isInFront = false; // resetIsInFront state
    for (int j = 0; j < bodies.size(); j += 1) {
      Float[][] currentSkeletonLines = skeletonLines.get(j);
      
      currentBranch.checkIsInFront(currentSkeletonLines[3], currentSkeletonLines[4]);
    }
  }
}

void adjustGroundBranches() {
  adjustGroundBranchSizes();
}

void adjustGroundBranchSizes() {
  for (int i = 0; i < groundBranches.size(); i += 1) {
    Branch currentBranch = groundBranches.get(i);
    float scale = currentBranch.scale;
    boolean isCollision = currentBranch.isCollision;
    if (state == 0) {
      scale += 0.025;
      if (scale > 1.0) scale = 1.0;
    }
    else if (state == 1) {
      scale += 0.025;
      if (scale > 1.0) scale = 1.0;
    }
    else if (state == 2) {
      if (isCollision) {
        scale -= 0.01;
        if (scale < 0.0) scale = 0.0;
      }
    }
    else {
      if (isCollision) {
        scale -= 0.1;
        if (scale < 0.0) scale = 0.0;
      }
    }
    currentBranch.scale = scale;
  }
}

void adjustHandBranches() {
  // spawn new
  float leafFatness = random(1, 2);
  int[] leafColorBounds = {127, 196, 0, 0, 0, 255};
  if (state == 1 && stateLength >= 100 && stateLength%100 == 0) {
    if (random(2) >= 1) {
      if (leftHandBranches.size() == 0) leftHandBranches.add(new Branch(random(1), 0.3, leafFatness, leafColorBounds));
    }
    else {
      if (rightHandBranches.size() == 0) rightHandBranches.add(new Branch(random(1), 0.3, leafFatness, leafColorBounds));
    }
  }
  
  // adjust position and plant
  boolean leftHandBranchRemoved = false;
  boolean rightHandBranchRemoved = false;
  for (int i = skeletonLines.size()-1; i >= 0; i -= 1) {
    Float[][] currentSkeletonLines = skeletonLines.get(i);
    
    for (int j = 0; j < leftHandBranches.size(); j += 1) {
      Branch currentBranch = leftHandBranches.get(j);
      currentBranch.adjustPosition(currentSkeletonLines[1]);
      
      if (leftHandBranchRemoved) continue;
      if (state != 0) {
        if (currentBranch.modelLine[0] >= 20 && currentBranch.modelLine[0] < width-20 && currentBranch.modelLine[1] >= height-height/5 && currentBranch.modelLine[1] < height) {
          if (leftHandPlantingBodyBranch) plantedBranches.add(new Branch(new PVector(currentBranch.modelLine[0], currentBranch.modelLine[1]), currentBranch.leafFatness, currentBranch.leafColorBounds));
          else plantedBranches.add(new Branch(new PVector(currentBranch.modelLine[0], currentBranch.modelLine[1]), 0.6, currentBranch.leafFatness, currentBranch.leafColorBounds));
          leftHandBranches.remove(j);
          leftHandBranchRemoved = true;
          leftHandPlantingBodyBranch = false;
        }
      }
    }
    
    for (int j = 0; j < rightHandBranches.size(); j += 1) {
      Branch currentBranch = rightHandBranches.get(j);
      currentBranch.adjustPosition(currentSkeletonLines[2]);
      
      if (rightHandBranchRemoved) continue;
      if (state != 0) {
        if (currentBranch.modelLine[0] >= 20 && currentBranch.modelLine[0] < width-20 && currentBranch.modelLine[1] >= height-height/5 && currentBranch.modelLine[1] < height) {
          if (rightHandPlantingBodyBranch) plantedBranches.add(new Branch(new PVector(currentBranch.modelLine[0], currentBranch.modelLine[1]), currentBranch.leafFatness, currentBranch.leafColorBounds));
          else plantedBranches.add(new Branch(new PVector(currentBranch.modelLine[0], currentBranch.modelLine[1]), 0.6, currentBranch.leafFatness, currentBranch.leafColorBounds));
          rightHandBranches.remove(j);
          rightHandBranchRemoved = true;
          rightHandPlantingBodyBranch = false;
        }
      }
    }
  }
  
  // adjust size
  adjustHandBranchSizes();
}

void adjustHandBranchSizes() {
  for (int i = 0; i < leftHandBranches.size(); i += 1) {
    Branch currentBranch = leftHandBranches.get(i);
    float scale = currentBranch.scale;
    if (state == 0) {
      scale -= 1.0;
      if (scale < 0.0) scale = 0.0;
    }
    else if (state == 1) {
      scale += 0.1;
      if (scale > 1.0) scale = 1.0;
    }
    else if (state == 2) {
      scale -= 0.0;
      if (scale < 0.0) scale = 0.0;
    }
    else {
      scale -= 0.001;
      if (scale < 0.0) scale = 0.0;
    }
    currentBranch.scale = scale;
  }
  
  for (int i = 0; i < rightHandBranches.size(); i += 1) {
    Branch currentBranch = rightHandBranches.get(i);
    float scale = currentBranch.scale;
    if (state == 0) {
      scale -= 1.0;
      if (scale < 0.0) scale = 0.0;
    }
    else if (state == 1) {
      scale += 0.1;
      if (scale > 1.0) scale = 1.0;
    }
    else if (state == 2) {
      scale -= 0.0;
      if (scale < 0.0) scale = 0.0;
    }
    else {
      scale -= 0.001;
      if (scale < 0.0) scale = 0.0;
    }
    currentBranch.scale = scale;
  }
}

void adjustPlantedBranches() {
  adjustPlantedBranchSizes();
}

void adjustPlantedBranchSizes() {
  for (int i = 0; i < plantedBranches.size(); i += 1) {
    Branch currentBranch = plantedBranches.get(i);
    float scale = currentBranch.scale;
    boolean isCollision = currentBranch.isCollision;
    if (state == 0) {
      scale += 0.005;
      if (scale > 1.0) scale = 1.0;
    }
    else if (state == 1) {
      scale += 0.01;
      if (scale > 1.0) scale = 1.0;
    }
    else if (state == 2) {
      if (isCollision) {
        scale -= 0.025;
        if (scale < 0.0) scale = 0.0;
      }
    }
    else {
      if (isCollision) {
        scale -= 0.25;
        if (scale < 0.0) scale = 0.0;
      }
      
      scale -= 0.001;
      if (scale < 0.0) scale = 0.0;
    }
    currentBranch.scale = scale;
  }
}

void adjustBodyBranches() {
  // spawn new
  float leafFatness = random(1, 2);
  int[] leafColorBounds = {255, 255, 0, 191, 63, 127};
  if (state == 1 && stateLength >= 20 && stateLength%10 == 0) {
    bodyBranches.add(new Branch(findPosOnBody(), leafFatness, leafColorBounds));
  }
  
  // adjust size
  adjustBodyBranchSizes();
  
  // attach to hands
  boolean bodyBranchAttached = false;
  for (int i = skeletonLines.size()-1; i >= 0; i -= 1) {
    Float[][] currentSkeletonLines = skeletonLines.get(i);
    
    for (int j = 0; j < bodyBranches.size(); j += 1) {
      Branch currentBranch = bodyBranches.get(j);
      
      if (bodyBranchAttached) continue;
      if (state != 0) {
        if (leftHandBranches.size() == 0 && currentBranch.checkModelLineIntersection(currentSkeletonLines[1])) {
          leftHandBranches.add(new Branch(random(1), currentBranch.leafFatness, currentBranch.leafColorBounds));
          bodyBranches.remove(j);
          bodyBranchAttached = true;
          leftHandPlantingBodyBranch = true;
        }
        else if (rightHandBranches.size() == 0 && currentBranch.checkModelLineIntersection(currentSkeletonLines[2])) {
          rightHandBranches.add(new Branch(random(1), currentBranch.leafFatness, currentBranch.leafColorBounds));
          bodyBranches.remove(j);
          bodyBranchAttached = true;
          rightHandPlantingBodyBranch = true;
        }
      }
    }
  }
}

void adjustBodyBranchSizes() {
  if (state != 1) {
    // iterate backwards so removing elements does not skip the next one
    for (int i = bodyBranches.size()-1; i >= 0; i -= 1) {
      Branch currentBranch = bodyBranches.get(i);
      currentBranch.scale -= 0.25;
      if (currentBranch.scale < 0.0) bodyBranches.remove(i);
    }
  }
  else {
    for (int i = bodyBranches.size()-1; i >= 0; i -= 1) {
      Branch currentBranch = bodyBranches.get(i);
      if (checkIsPosOnBody(currentBranch.pos)) { // if branch is still on body, grow it
        currentBranch.scale += 0.05;
        if (currentBranch.scale > 1.0) currentBranch.scale = 1.0;
      }
      else { // otherwise, make it wither
        currentBranch.scale -= 0.25;
        if (currentBranch.scale < 0.0) bodyBranches.remove(i);
      }
    }
  }
}

PVector findPosOnBody() {
  PVector pos = new PVector(width, height);
  boolean isMask = false;
  while (isMask == false) {
    pos.x = int(random(width));
    pos.y = int(random(height));
    isMask = checkIsPosOnBody(pos);
  }
  return pos;
}

boolean checkIsPosOnBody(PVector pos) {
  int loc = int(pos.x) + int(pos.y) * width; // the row offset must use the integer y
  boolean isMask = (alpha(kinectMask.pixels[loc]) != 0);
  return isMask;
}

void drawBehindGroundBranches() {
  // draw branches that are behind person
  for (int i = 0; i < groundBranches.size(); i += 1) {
    Branch currentBranch = groundBranches.get(i);
    if (!currentBranch.isInFront) currentBranch.display();
  }
  
  for (int i = 0; i < plantedBranches.size(); i += 1) {
    Branch currentBranch = plantedBranches.get(i);
    if (!currentBranch.isInFront) currentBranch.display();
  }
}

void drawPersonFrameDiff() {
  loadPixels();
  if (state != 0) {
  for (int y = 0; y < height; y += 1) {
    for (int x = 0; x < width; x += 1) {
      int loc = x + y*width;
      
      boolean isMask = (alpha(kinectMask.pixels[loc]) != 0);
      
      if (isMask) {
        int depth = int(brightness(kinectDepth.pixels[loc])); 
 
        int diff = abs(depth - previousFrame[loc]);
        if (diff != 0) {
          if (state == 1) pixels[loc] = color(backgroundR, backgroundG+(diff-backgroundG)*0.5, backgroundB);
          if (state == 2) pixels[loc] = color(backgroundR+(diff-backgroundR)*0.5, backgroundG+(diff-backgroundG)*0.5, backgroundB);
          if (state == 3) pixels[loc] = color(backgroundR+(diff-backgroundR)*0.5, backgroundG, backgroundB);
        }
        else {
          pixels[loc] = backgroundColor;
        }
        
        previousFrame[loc] = depth;
      }
      else {
        previousFrame[loc] = 0;
      }
    }
  }
  updatePixels();
  }
}

void drawPersonBones() {
  if (state != 0) {
    for (int i = 0; i < skeletonLines.size(); i += 1) {
      Float[][] currentSkeletonLines = skeletonLines.get(i);
      drawBones(currentSkeletonLines);
    }
  }
}

void drawBodyBranches() {
  for (int i = 0; i < bodyBranches.size(); i += 1) {
    Branch currentBranch = bodyBranches.get(i);
    currentBranch.display();
  }
}

void drawHandBranches() {
  if (state != 0) {
    for (int i = 0; i < skeletonLines.size(); i += 1) {
      Float[][] currentSkeletonLines = skeletonLines.get(i);
      
      for (int j = 0; j < leftHandBranches.size(); j += 1) {
        Branch currentBranch = leftHandBranches.get(j);
        currentBranch.adjustPosition(currentSkeletonLines[1]);
        currentBranch.display();
      }
      
      for (int j = 0; j < rightHandBranches.size(); j += 1) {
        Branch currentBranch = rightHandBranches.get(j);
        currentBranch.adjustPosition(currentSkeletonLines[2]);
        currentBranch.display();
      }
    }
  }
}

void drawFrontGroundBranches() {
  // draw branches that are in front of person
  for (int i = 0; i < groundBranches.size(); i += 1) {
    Branch currentBranch = groundBranches.get(i);
    if (currentBranch.isInFront) currentBranch.display();
  }
  
  for (int i = 0; i < plantedBranches.size(); i += 1) {
    Branch currentBranch = plantedBranches.get(i);
    if (currentBranch.isInFront) currentBranch.display();
  }
}

void drawMessage() {
  String s = "";
  textFont(messageFont);
  textAlign(CENTER,CENTER);
  noStroke();
  if (state == 0) { // no person
    fill(255,255,255);
    s = "STEP INTO THE JUNGLE";
  }
  else if (state == 1) { // little movement
    fill(0,255,0);
    s = "BREATHE";
  }
  else if (state == 2) { // medium movement
    fill(255,255,0);
    s = "SLOW DOWN";
  }
  else { // high movement
    fill(255,0,0);
    s = "TAKE A BREAK";
  }
  text(s, width/2, height/2);
}

void drawInfo() {
  String s = "";
  textFont(infoFont);
  textAlign(CENTER,CENTER);
  noStroke();
  if (state == 0) fill(255,255,255);
  else if (state == 1) fill(0,255,0);
  else if (state == 2) fill(255,255,0);
  else if (state == 3) fill(255,0,0);
  
  if (leftHandBranches.size() > 0 || rightHandBranches.size() > 0) {
    s = "PLANT YOUR FLOWER FOR OTHERS TO SEE";
  }
  else if (bodyBranches.size() > 0) { // little movement
    s = "TRY MOVING THE LOTUS PLANTS WITH YOUR HANDS";
  }
  text(s, width/2, height/8);
}

void makeGroundBranches() { 
  float leafFatness = random(1, 2);
  int[] leafColorBounds = {0, 196, 140, 200, 0, 0};
  while (groundBranches.size() < MAX_GROUND_BRANCHES) {
    groundBranches.add(new Branch(new PVector(random(20, width-20), height - random(1)*(height/5)), 0.6, leafFatness, leafColorBounds));
  }
}

class Branch {
  PVector pos;
  float percentagePosition;
 
  int numNodes;
  float[] nodeX, nodeY;
  color[] nodeColors;
  float nodeDist; //vertical dist between nodes
  float wiggle; // wonkiness
  
  private float bottomX, bottomY, topX, topY = 0;
  public Float[] modelLine = {0.0, 0.0, 0.0, 0.0};
  
  int numLeaves;
  color[] leafColors;
  float[] leafRotations;
  float[] leafScales;
  float leafFatness;
  
  int[] stemColorBounds = {32, 32, 140, 200, 0, 0};
  int[] leafColorBounds;
  
  float maxHeight;
  float scale = 0.0;
  
  boolean isCollision = false;
  boolean isInFront = true;
  
  Branch(PVector pos, float maxHeight, float leafFatness, int[] leafColorBounds) {
    this.pos = pos;
    this.maxHeight = maxHeight;
    this.leafFatness = leafFatness;
    this.leafColorBounds = leafColorBounds;
    
    this.numNodes = (int)random(21, 35);
    this.nodeDist = (height*maxHeight - 100) / numNodes - random(numNodes/5);
    this.wiggle = random(0.03, 0.07); 
    
    init();
  }
  
  Branch(float percentagePosition, float maxHeight, float leafFatness, int[] leafColorBounds) {
    this.pos = new PVector(width*2,height*2);
    this.percentagePosition = percentagePosition;
    this.maxHeight = maxHeight;
    this.leafFatness = leafFatness;
    this.leafColorBounds = leafColorBounds;
    
    this.numNodes = (int)random(21, 35);
    this.nodeDist = (height*maxHeight - 100) / numNodes - random(numNodes/5);
    this.wiggle = random(0.03, 0.07);
    
    init();
  }
  
  Branch(PVector pos, float leafFatness, int[] leafColorBounds) {
    this.pos = pos;
    this.leafFatness = leafFatness;
    this.leafColorBounds = leafColorBounds;
    
    this.numNodes = (int)random(21, 35);
    this.nodeDist = 0.1;
    this.wiggle = 0;
    
    init();
  }
  
  Branch(float percentagePosition, float leafFatness, int[] leafColorBounds) {
    this.pos = new PVector(width*2,height*2);
    this.percentagePosition = percentagePosition;
    this.leafFatness = leafFatness;
    this.leafColorBounds = leafColorBounds;
    
    this.numNodes = (int)random(21, 35);
    this.nodeDist = 0.1;
    this.wiggle = 0;
    
    init();
  }
  
  void init() {
    nodeX = new float[numNodes];
    nodeY = new float[numNodes];
    nodeColors = new color[numNodes];
    
    nodeX[0] = 0;
    nodeY[0] = 0;
    
    for (int i = 1; i < numNodes; i++) {
      nodeX[i] = nodeX[i - 1] + i * wiggle * random(-10, 10);
      nodeY[i] = -nodeDist * i;
      nodeColors[i] = color(random(stemColorBounds[0], stemColorBounds[1]), random(stemColorBounds[2], stemColorBounds[3]), random(stemColorBounds[4], stemColorBounds[5]));
    }
    
    checkEndpoints();
    
    numLeaves = int(random(9, 21));
    
    leafColors = new color[numLeaves];
    leafRotations = new float[numLeaves];
    leafScales = new float[numLeaves];
    for (int i = 0; i < numLeaves; i++) {
      leafColors[i] = color(random(leafColorBounds[0], leafColorBounds[1]), random(leafColorBounds[2], leafColorBounds[3]), random(leafColorBounds[4], leafColorBounds[5]));
      leafRotations[i] = random(PI, TWO_PI);
      leafScales[i] = random(0.025, 0.030) * (numLeaves + i);
    }
  }
  
  void adjustPosition(Float[] line) {
    pos.x = line[0] + (line[2]-line[0])*percentagePosition;
    pos.y = line[1] + (line[3]-line[1])*percentagePosition;
    
    checkEndpoints();
  }
  
  void checkEndpoints() {
    bottomX = nodeX[0];
    bottomY = nodeY[0];
    topX = nodeX[numNodes-1];
    topY = nodeY[numNodes-1];
  }
   
  boolean checkModelLineIntersection(Float[] otherLine) {
    return checkLineIntersection(modelLine, otherLine);
  }
  
  boolean checkLineIntersection(Float[] line0, Float[] line1) {
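    // Standard parametric segment-segment intersection: each segment is
    // written as P + t * (Q - P); solve for the two parameters s and t,
    // and the segments cross when both fall within [0, 1].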
    float p0x = line0[0];
    float p0y = line0[1];
    float p1x = line0[2];
    float p1y = line0[3];
    
    float p2x = line1[0];
    float p2y = line1[1];
    float p3x = line1[2];
    float p3y = line1[3];
    
    float s1x, s1y, s3x, s3y;
    s1x = p1x - p0x;
    s1y = p1y - p0y;
    s3x = p3x - p2x;
    s3y = p3y - p2y;
    
    float s, t;
    s = (-s1y*(p0x-p2x) + s1x*(p0y-p2y)) / (-s3x*s1y + s1x*s3y);
    t = (s3x*(p0y-p2y) - s3y*(p0x-p2x)) / (-s3x*s1y + s1x*s3y);
    
    if (s >= 0 && s <= 1 && t >= 0 && t <= 1) return true;
    else return false;
  }
  
  void checkIsInFront(Float[] leftFootLine, Float[] rightFootLine) {
    // drop a line from bottom coord to beyond lower edge of screen
    Float[] checkLine = {modelLine[0], modelLine[1], modelLine[0], float(2*height)};
    // connect the feet nodes of the person
    Float[] footLine = {0.0, max(leftFootLine[1],leftFootLine[3]), float(width), max(rightFootLine[1],rightFootLine[3])};
 
    // if foot line intersects check line, that means that the person is stepping in front of the branch - the person is in front
    boolean hasCollision = checkLineIntersection(checkLine, footLine);
    
    if (hasCollision) isInFront = false;
    else isInFront = true;
  }
  
  void checkIsCollision(Float[] firstLine, Float[] secondLine) {
    isCollision = checkLineIntersection(modelLine, firstLine) || checkLineIntersection(modelLine, secondLine);
  }
  
  void display() {
    pushMatrix();
    translate(pos.x, pos.y);
    scale(scale);
    for (int i = 1; i < numNodes; i++) {
      drawStem(i);
    }
    for (int i = 0; i < numLeaves; i++) {
      drawLeaf(i);
    }
    
    modelLine[0] = screenX(bottomX, bottomY); // modelBottomX
    modelLine[1] = screenY(bottomX, bottomY); // modelBottomY
    modelLine[2] = screenX(topX, topY); // modelTopX
    modelLine[3] = screenY(topX, topY); // modelTopY
    popMatrix();
    
    //drawModelLine();
  }
  
  void drawStem(int i) {
    stroke(nodeColors[i]);
    line(nodeX[i], nodeY[i], nodeX[i - 1], nodeY[i - 1]);
  } 
  
  void drawLeaf(int i) {
    noStroke();
    fill(leafColors[i]);
    
    pushMatrix();
    translate(nodeX[numNodes - i - 1], nodeY[numNodes - i - 1]);
    rotate(leafRotations[i]);
    scale(leafScales[i]/leafFatness, leafScales[i]*leafFatness);
    curve(-50, 50, 0, 0, 100, 0, 150, 50);
    curve(-50, -50, 0, 0, 100, 0, 150, -50);
    popMatrix();
  }
  
  void drawModelLine() {
    stroke(255,255,0);
    line(modelLine[0], modelLine[1], modelLine[2], modelLine[3]);
  }
}

Float[][] getSkeletonLines(SkeletonData _s) {
  Float[][] skeletonLines = new Float[5][4];
  
  // head
  skeletonLines[0] = getBoneLine(_s, Kinect.NUI_SKELETON_POSITION_HEAD, Kinect.NUI_SKELETON_POSITION_SHOULDER_CENTER);
  // left hand
  skeletonLines[1] = getBoneLine(_s, Kinect.NUI_SKELETON_POSITION_WRIST_LEFT, Kinect.NUI_SKELETON_POSITION_HAND_LEFT);
  // right hand
  skeletonLines[2] = getBoneLine(_s, Kinect.NUI_SKELETON_POSITION_WRIST_RIGHT, Kinect.NUI_SKELETON_POSITION_HAND_RIGHT);
  // left foot
  skeletonLines[3] = getBoneLine(_s, Kinect.NUI_SKELETON_POSITION_ANKLE_LEFT, Kinect.NUI_SKELETON_POSITION_FOOT_LEFT);
  // right foot
  skeletonLines[4] = getBoneLine(_s, Kinect.NUI_SKELETON_POSITION_ANKLE_RIGHT, Kinect.NUI_SKELETON_POSITION_FOOT_RIGHT);
  
  return skeletonLines;
}

void drawBones(Float[][] skeletonLines) {
  if (state == 1) stroke(0,255,0);
  else if (state == 2) stroke(255,255,0);
  else if (state == 3) stroke(255,0,0);
  line(skeletonLines[1][0], skeletonLines[1][1], skeletonLines[1][2], skeletonLines[1][3]);
  line(skeletonLines[2][0], skeletonLines[2][1], skeletonLines[2][2], skeletonLines[2][3]);
  line(skeletonLines[3][0], skeletonLines[3][1], skeletonLines[3][2], skeletonLines[3][3]);
  line(skeletonLines[4][0], skeletonLines[4][1], skeletonLines[4][2], skeletonLines[4][3]);
}

Float[] getBoneLine(SkeletonData _s, int _j1, int _j2) {
  noFill();
  stroke(255, 255, 0);
  Float[] boneLine = {width*2.0, height*2.0, width*2.0, height*2.0};
  if ((_s.skeletonPositionTrackingState[_j1] != Kinect.NUI_SKELETON_POSITION_NOT_TRACKED) && (_s.skeletonPositionTrackingState[_j2] != Kinect.NUI_SKELETON_POSITION_NOT_TRACKED)) {
    boneLine[0] = _s.skeletonPositions[_j1].x*width;
    boneLine[1] = _s.skeletonPositions[_j1].y*height;
    boneLine[2] = _s.skeletonPositions[_j2].x*width;
    boneLine[3] = _s.skeletonPositions[_j2].y*height;
  }
  return boneLine;
}

void appearEvent(SkeletonData _s) {
  if (_s.trackingState == Kinect.NUI_SKELETON_NOT_TRACKED) return;
  synchronized(bodies) {
    synchronized(skeletonLines) {
      if (bodies.size() < 1) {
        bodies.add(_s);
        Float[][] currentSkeletonLines = getSkeletonLines(_s);
        skeletonLines.add(currentSkeletonLines);
      }
    }
  }
}

void disappearEvent(SkeletonData _s) {
  synchronized(bodies) {
    synchronized(skeletonLines) {
      for (int i = bodies.size()-1; i >= 0; i -= 1) {
        if (_s.dwTrackingID == bodies.get(i).dwTrackingID) {
          bodies.remove(i);
          skeletonLines.remove(i);
        }
        else if (bodies.get(i).dwTrackingID == 0) {
          bodies.remove(i);
          skeletonLines.remove(i);
        }
      }
    }
  }
}

void moveEvent(SkeletonData _b, SkeletonData _a) {
  if (_a.trackingState == Kinect.NUI_SKELETON_NOT_TRACKED) return;
  synchronized(bodies) {
    for (int i = bodies.size()-1; i >= 0; i -= 1) {
      if (_b.dwTrackingID == bodies.get(i).dwTrackingID) {
        bodies.get(i).copy(_a);
        break;
      }
    }
  }
}

 

Dream Box

 

This final project was inspired by La Monte Young's Dream House (hence the name). However, I did not only want to play with how sound travels through space like La Monte Young did; I also wanted whoever enters the space of my project to be able to control sound in the whole space. The only way I could think of to make this happen was to build a room (the Dream Box), sound-isolate it as much as I could, or at least block as many external sounds as possible in the given circumstances, and then find a way for the visitor of the box to move sound through space. The main purpose of sound-isolating the room was to make sure it was easy to hear how the sound played in the Dream Box moves within the space when one swipes the wall. The swiping is not quite real, though: I faked it by placing an ultrasonic rangefinder in one part of the wall, so that depending on the distance it reads, the sound travels from speaker to speaker.

Making this final come to life was a long journey that consisted of two parts:

1) Making a wooden box (a.k.a. the Dream Box)

2) Programming Processing so that it knows which speaker to send the mp3 file to, depending on the value from the ultrasonic rangefinder

While making the room (or the box) out of wood was time consuming, as was calculating the right dimensions of the foam and wood I had to cut, it did not cause me much of a problem in the sense that it all went as planned.

The initial idea for the walls was to take 25mm plywood and simply attach a couple of pieces together; however, this proved to be very unsafe and extremely heavy. Knowing that, the plan became to build the walls similarly to how theater flats are made. I took 1×4 inch stick lumber and made three frames (I used the Arts Center wall as the 4th wall), which I later simply skinned with 6mm luan plywood. The dimensions of the room, which turned out to be a cube, were 244x244x244cm. That seemed quite small on paper; however, it felt really large once I saw it in real life. At some point I even doubted whether I needed to keep working on it, or whether making something that big would be a waste of time. But, as the saying goes, go big or go home, and I'm not home yet :)

These are the flats I built.

I thought that the coding part of this project would be a lot simpler and would not cause me any problems. So first I made a working prototype with an Arduino. I soldered four speakers and had the Arduino decide which speaker to make noise with, depending on the value I was getting from the analog sensor. However, after spending hours getting this prototype to work, it was pointed out to me that it is nearly worthless to have a nice big room filled with an annoying buzz. And that was totally right: why would I want the visitor of the Dream Box to experience an annoying sound when one is supposed to enjoy their time in the room playing with the directionality of sound, rather than get tired of a random tone being played?

Unfortunately for me, I could not play anything but a tone using the Arduino alone, unless I used an MP3 shield, but I could not do that since I had 4 outputs (speakers), and an MP3 shield is only capable of playing through 2.

So I had to use Processing for this, which was fine; I never expected anything to go wrong. However, it felt like everything that could go wrong did go wrong. The first problem was that my computer did not have enough outputs for four speakers either, so I had to find an audio interface with 4 or more outputs. Once I had that, I did not know how to tell Processing to play through this exact device. And once there was a way, Processing thought that the audio interface was merely a stereo speaker. I did not know a way to tell Processing that this device actually has 14 different outputs, so I had to teach it how to see all of those separate outputs. If not for Aaron, I would never have figured this out: Aaron provided me with the code that lets Processing know which specific output of the audio interface to play the sound from. I thought the struggle would end there; however, it had just begun, because the Beads library was a little confusing. And once I figured out how to make it all work the way I wanted with one sound, I did not stop there. I thought one sound might be too boring, so I decided to add more sounds so that the visitor of the Dream Box had more freedom when choosing the direction and the type of sound. Making it work with four different sounds was really hard, but eventually the battle was won.

There was another problem along the way: I was going to use an infrared rangefinder at first, but its values were too inaccurate and unpredictable at times, so I had to change to the ultrasonic rangefinder, which is a bit harder to program.

This is the Arduino code:

#define trigPin 13
#define echoPin 12
#define led 11
#define led2 10
int buttonPressPin = 8;
int buttonPress = 0;

boolean isAvailable = true;
void setup() {
  Serial.begin(9600);
  Serial.println('0');
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(buttonPressPin, INPUT);
  pinMode(led, OUTPUT);
  pinMode(led2, OUTPUT);
}

void loop() {
  buttonPress = digitalRead(buttonPressPin);
  long duration, distance;

  digitalWrite(trigPin, LOW); // Added this line
  delayMicroseconds(2); // Added this line
  digitalWrite(trigPin, HIGH);
  // delayMicroseconds(1000); - Removed this line
  delayMicroseconds(10); // Added this line
  digitalWrite(trigPin, LOW);
  duration = pulseIn(echoPin, HIGH);
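  // sound travels at roughly 29.1 microseconds per centimetre; the echo
  // time is halved because it covers the round trip there and back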
  distance = (duration / 2) / 29.1;
  if (distance < 4) { // This is where the LED On/Off happens
    digitalWrite(led, HIGH); // When the Red condition is met, the Green LED should turn off
    digitalWrite(led2, LOW);
  }
  else {
    digitalWrite(led, LOW);
    digitalWrite(led2, HIGH);
  }
  if (distance >= 200 || distance <= 0) {
    //Serial.println("Out of range");
  }
  else {
    Serial.print(distance);
    Serial.print(",");
    Serial.println(buttonPress);
    //Serial.println(" cm");
  }
  delay(10);
  if (digitalRead(buttonPressPin) == 1) {
    if (isAvailable == true) {
      buttonPress = 1;
      isAvailable = false;
    }
  } else {
    isAvailable = true;
  }
}

Processing code

import beads.*;
import org.jaudiolibs.beads.AudioServerIO;
import java.util.Arrays;
import processing.serial.*;

AudioContext audioContext;
IOAudioFormat audioFormat;
float sampleRate = 44100;
int buffer = 512;
int bitDepth = 16;
int inputs = 2;
int outputs = 14; //set for soundflower now
float speaker1Gain, speaker2Gain, speaker3Gain, speaker4Gain;
float soundPos;
int distance, previousDistance;
int buttonPress;
String sourceFile1;
String sourceFile2;
String sourceFile3;
String sourceFile4;
boolean isPressed;
int spCounter;
SamplePlayer sp1;
SamplePlayer sp2;
SamplePlayer sp3;
SamplePlayer sp4;
Serial myPort;

WavePlayer wp;

Gain g1;
Gain g2;
Gain g3;
Gain g4;

Glide gainGlide1;
Glide gainGlide2;
Glide gainGlide3;
Glide gainGlide4;

Glide rateValue1;
Glide rateValue2;

boolean initializeSound = true;
    
void setup() {
  size(640, 640);

  buttonPress = 0;
  spCounter = 0;
  isPressed = false;

  sourceFile1 = sketchPath("") + "fire.wav";
  sourceFile2 = sketchPath("") + "waves.mp3";
  sourceFile3 = sketchPath("") + "thunder.mp3";
  sourceFile4 = sketchPath("") + "birds.wav";
  audioFormat = new IOAudioFormat(sampleRate, bitDepth, inputs, outputs);
  audioContext = new AudioContext(new AudioServerIO.JavaSound(), buffer, audioFormat);
  println("no. of inputs: " + audioContext.getAudioInput().getOuts());
  println("no of outputs: " + audioContext.out.getIns());
  try {
    // initialize our SamplePlayer, loading the file
    // indicated by the sourceFile string
    sp1 = new SamplePlayer(audioContext, new Sample(sourceFile1));
    sp2 = new SamplePlayer(audioContext, new Sample(sourceFile2));
    sp3 = new SamplePlayer(audioContext, new Sample(sourceFile3));
    sp4 = new SamplePlayer(audioContext, new Sample(sourceFile4));
  }
  catch(Exception e) {
    // If there is an error, show an error message
    // at the bottom of the processing window.
    println("Exception while attempting to load sample!");
    e.printStackTrace(); // print description of the error
    exit(); // and exit the program
  }

  rateValue1 = new Glide(audioContext, 1, 50);
  rateValue2 = new Glide(audioContext, 1, 50);

  wp = new WavePlayer(audioContext, 400, Buffer.SINE);

  gainGlide1 = new Glide(audioContext, 0.0, 50);
  gainGlide2 = new Glide(audioContext, 0.0, 50);
  gainGlide3 = new Glide(audioContext, 0.0, 50);
  gainGlide4 = new Glide(audioContext, 0.0, 50);

  g1 = new Gain(audioContext, 2, gainGlide1);
  g2 = new Gain(audioContext, 2, gainGlide2);
  g3 = new Gain(audioContext, 2, gainGlide3);
  g4 = new Gain(audioContext, 2, gainGlide4);

  g1.addInput(sp1);
  g2.addInput(sp1);
  g3.addInput(sp1);
  g4.addInput(sp1);
  g1.addInput(sp2);
  g2.addInput(sp2);
  g3.addInput(sp2);
  g4.addInput(sp2);
  g1.addInput(sp3);
  g2.addInput(sp3);
  g3.addInput(sp3);
  g4.addInput(sp3);
  g1.addInput(sp4);
  g2.addInput(sp4);
  g3.addInput(sp4);
  g4.addInput(sp4);
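  // addInput(channelIndex, source, sourceOutputIndex) routes each Gain to
  // its own output channel of the interface; this is what lets one
  // multi-channel device drive four separate speakers.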
    
  audioContext.out.addInput(0, g1, 0); // OUT 1
  audioContext.out.addInput(1, g2, 0); // OUT 2
  audioContext.out.addInput(2, g3, 0); // OUT 3
  audioContext.out.addInput(3, g4, 0); // OUT 4

  audioContext.start();

  // for the ultrasonic rangefinder
  printArray(Serial.list());
  String portname = Serial.list()[2];
  println(portname);
  myPort = new Serial(this, portname, 9600);
  myPort.clear();
  myPort.bufferUntil('\n');

  color fore = color(255);
  color back = color(0);

  // SamplePlayer can be set to be destroyed when
  // it is done playing
  // this is useful when you want to load a number of
  // different samples, but only play each one once
  // in this case, we would like to play the sample multiple
  // times, so we set KillOnEnd to false
  sp1.setKillOnEnd(false);
  sp1.setToLoopStart();
  sp2.setKillOnEnd(false);
  sp2.setToLoopStart();
  sp3.setKillOnEnd(false);
  sp3.setToLoopStart();
  sp4.setKillOnEnd(false);
  sp4.setToLoopStart();
  sp1.start(); // play the audio file
}
void draw() {

  if (buttonPress==1) {
    spCounter++;

    if (spCounter==5) {
      spCounter=1;
    }

    if (spCounter==1) {
      sp1.setToEnd();
      sp2.setToEnd();
      sp3.setToEnd();
      sp4.setToEnd();
      sp1.setToLoopStart();
      sp1.start();
      //println("--------------------------ending------------------");
    } else if (spCounter==2) {
      sp1.setToEnd();
      sp2.setToEnd();
      sp3.setToEnd();
      sp4.setToEnd();
      sp2.setToLoopStart();
      sp2.start();
    } else if (spCounter==3) {
      sp1.setToEnd();
      sp2.setToEnd();
      sp3.setToEnd();
      sp4.setToEnd();
      sp3.setToLoopStart();
      sp3.start();
    } else if (spCounter==4) {
      sp1.setToEnd();
      sp2.setToEnd();
      sp3.setToEnd();
      sp4.setToEnd();
      sp4.setToLoopStart();
      sp4.start();
    }
  }

  //if (initializeSound == true) {
  // spCounter = 1;
  // sp1.start();
  // initializeSound = false;
  //}
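  // Equal-power style crossfade: as the hand moves along the wall, the
  // sin() gain curves fade one speaker out while the next fades in, so
  // the sound seems to travel from speaker to speaker.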

  if (distance>200 || distance<=5) {
    distance = previousDistance;
  } else if (distance>6 && distance<50) {
    soundPos = map(distance, 1, 50, 0, 1);
    speaker1Gain = sin((1-soundPos) * PI/2);
    speaker2Gain = sin(soundPos * PI/2);
    speaker3Gain = 0;
    speaker4Gain = 0;
  } else if (distance>51 && distance<100) {
    soundPos = map(distance, 51, 100, 0, 1);
    speaker2Gain = sin((1-soundPos) * PI/2);
    speaker3Gain = sin(soundPos * PI/2);
    speaker4Gain = 0;
    speaker1Gain = 0;
  } else if (distance>101 && distance<150) {
    soundPos = map(distance, 101, 150, 0, 1);
    speaker3Gain = sin((1-soundPos) * PI/2);
    speaker4Gain = sin(soundPos * PI/2);
    speaker1Gain = 0;
    speaker2Gain = 0;
  } else if (distance>151 && distance<200) {
    soundPos = map(distance, 151, 200, 0, 1);
    speaker4Gain = sin((1-soundPos) * PI/2);
    speaker1Gain = sin(soundPos * PI/2);
    speaker2Gain = 0;
    speaker3Gain = 0;
  }

  println(speaker1Gain, speaker2Gain, speaker3Gain, speaker4Gain);
  println("spCounter: " + spCounter);
  //if(distance>0||distance<640){
  //soundPos = map(distance,0,640,0,1);
  // speaker4Gain = 1;
  // speaker1Gain = 1;
  // speaker2Gain = 1;
  // speaker3Gain = 1;
  //}

  //gainGlide1.setValue(distance / (float)width);
  //gainGlide2.setValue(distance / (float)width);
  //gainGlide3.setValue(distance / (float)width);
  //gainGlide4.setValue(mouseY / (float)height);

  //println(distance,mouseY);

  gainGlide1.setValue(speaker1Gain);
  gainGlide2.setValue(speaker2Gain);
  gainGlide3.setValue(speaker3Gain);
  gainGlide4.setValue(speaker4Gain);

  //println(mouseX);
  //loadPixels();
  ////set the background
  //Arrays.fill(pixels, back);
  ////scan across the pixels
  //for (int j = 0; j<4; j++) {
  // for (int i = 0; i < width; i++) {
  // //for each pixel work out where in the current audio buffer we are
  // int buffIndex = i * audioContext.getBufferSize() / width;
  // //then work out the pixel height of the audio data at that point
  // int vOffset = (int)((1 + audioContext.out.getValue(j, buffIndex)) * (height/2));
  // //draw into Processing's convenient 1-D array of pixels
  // vOffset = min(vOffset, height);
  // vOffset += (int)map(j, 0, 3, -250, 250);
  // pixels[vOffset * height + i] = fore;
  // }
  //}
  //updatePixels();

  previousDistance = distance;
}

void serialEvent(Serial myPort) {
  String s = myPort.readStringUntil('\n');
  s = trim(s);
  if (s != null) {
    int values[] = int(split(s, ','));
    if (values.length == 2) {
      distance = (int)values[0];
      buttonPress = (int)values[1];
    }
    println("dist: " + distance + " button: " + buttonPress);
    myPort.write('0');
  }
}

void mousePressed() {
  // earlier experiments with switching sounds on mouse press, kept
  // commented out for reference
  //spCounter++;
  //println("--------------------------------------------" + spCounter);
  //if (spCounter==1) {
  //// turn on 1
  //sp1.start();
  //// turn off 2
  //sp1.setToEnd();
  //sp2.setToEnd();
  //sp3.setToEnd();
  //sp4.setToEnd();
  //sp1.setToLoopStart();
  //sp1.start();

  //} else if (spCounter==2) {
  //// turn on 2
  //rateValue2.setValue(1);
  //sp2.setRate(rateValue2);
  //// turn off 1
  ////rateValue1.setValue(0);
  ////sp1.setRate(rateValue1);
  ////sp1.setToEnd();
  //}
  /*else if (spCounter==3) {
  sp3.start();
  } else if (spCounter==4) {
  sp4.start();
  } */
  //if (spCounter==3) { // was 5
  // spCounter=1;
  //}
}

Another reason I decided to do the framing this way was so that I could place the rangefinder flat on the inside of the wall and still have some space between the 6mm luan and the layer of Styrofoam I was going to place on top of it.

The last step, after I made sure everything worked the way it should, was to sound-isolate the room. I used 10cm thick Styrofoam along all the walls to provide sound isolation. The top two pieces of Styrofoam were covered with a layer of 15mm plywood to keep the whole Dream Box in place. The plywood also had 4 round holes, one at each corner of the roof, to let me put the speakers through it. To make the sound come from the ceiling once one is inside the box, I carved out slots for the speakers to go into, so that the speakers would face downwards. I must say that cutting Styrofoam with a handsaw is not as easy as it looks.

This is the top view of the way the speakers were fitted into the ceiling.

And this is a top view of half of the roof, to see how they are connected. I had to solder extensions for the speakers as well.

So that the Styrofoam would not look ugly, and also to create the feeling of a Dream Box, I covered the whole interior of the box with red fabric, which made it even more similar to the Dream House.

This is how it looks on the inside:

The only comment I got from the people who went through the experience in the Dream Box was that the button was more attractive to them than the arrow; probably the swipe indication was not that clear, or maybe it is just human nature that pressing buttons feels so pleasing. I think working a little more on the button design and the swipe signifier design would fix this issue. Other than that, I've seen and heard only positive feedback, and I ended up being really proud of what I built in a relatively short period of time, as well as very happy with how this piece was received by the people who walked into it.

Here is the exterior of the Dream Box

IM Final: Interactive Totoro

Interactive life size Totoro did happen in the end 🙂

As a follow up from my first computer vision assignment and as a way to fulfil my desire of seeing a life-size Totoro, I decided to create a projected image of him that people could interact with!

As a brief recap, this was the project that inspired my final:

Overview of Totoro’s interactions:

Through the PS3Eye and Blob Detection libraries, as well as infrared LEDs attached to the interactive objects, specific movements and interactions toggled different aspects on screen. This installation had two modes. The first consists of using an umbrella to try to protect Totoro from the rain. Through two LEDs attached to either side of the umbrella, the program tracks its location and stops the rain in those locations. As the umbrella gets closer to Totoro, he gets happier, and finally, once it is directly in front of him, he growls and smiles widely. The second mode consists of wearing a glove to pet Totoro. Totoro's eyes follow the user's glove and, if he is stroked on his belly, Totoro gets happier and growls as well. Although the interactions seem simple, linking all the components (switching between modes, accurately tracking the umbrella and the glove, toggling the rain on and off, moving Totoro's eyes, and toggling sound and animation) was a lengthy and time-consuming, although extremely enjoyable, process.

The process for this piece was divided into three sections:

  1. The design: adjusting and making the background and animation frames
  2. The code: writing the program and adjusting the processing – IR camera link
  3. The hardware: attaching IR LEDs to the umbrella and the glove

 

  1. The design

For the project’s visuals, I adjusted both the background and Totoro’s expressions.

Here is a screenshot of the original image from the movie:

There were two issues with this background image. The first was that the girls in the scene, although iconic and the main characters of the movie, were superfluous. Although their colors added a lot to the appeal of the image, leaving them in would not only take attention away from Totoro, but would also give the impression that the girls were interactive as well. The second issue was the rain in the image. Since my rain was created through Processing, the drawn rain would create a saturated image and would also give the sense that the rain was stopping rather than disappearing when someone hovered over an area with the umbrella, since the coded rain would stop but the drawn rain would still be there. Thus, I also took it upon myself to overuse the stamp tool in Photoshop and get rid of the rain. This all led to the following final background image:

Actual background (the eyes had to be left blank for the ellipses in the code to move)

 

For the animation frames, I compiled Totoro's smile from other scenes and added it to the umbrella scene, since in this whole part of the movie there are no actual shots from afar where Totoro changes his expression.

For instance, this is the original scene where I got his smile from:

 

I made the eyes and mouth transparent and then adjusted them to the scene I wanted to use for my project, while trimming everything to 7 frames:

Once the animation was done, it was just a matter of setting up the boundaries as to where the specific frames would be shown.

Here is a sample of the locations where the frames change for the glove code:

  2. The Code

The code was much more complicated than I thought it would be. In summary, I manually change the modes through my keyboard. Depending on the mode, the code checks for the number of “blobs” that are detected on screen. To make the tracking accurate, I adjusted the brightness threshold as well. When in “umbrella mode”, the code waits for there to be two blobs on screen. Once this occurs, it saves their coordinates and compares them to establish a minimum and a maximum point for the umbrella. Then, it uses these minimum and maximum values to make the rain's alpha value transparent if a drop spawns between these locations. For the glove mode, the code checks for only one blob on screen. Once detected, it saves its coordinates. Then, depending on where the coordinates are, it moves Totoro's pupils accordingly and shifts between animation frames.

Here is a link to the full code
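
Since the full code lives behind the link, here is a minimal, self-contained sketch of the umbrella-mode logic described above. It is not the project's actual code: it uses the BlobDetection library, simulates the two umbrella IR LEDs with bright dots that follow the mouse instead of the PS3Eye feed, and all names in it are my own.

import blobDetection.*;

BlobDetection detector;
float umbrellaMin = -1, umbrellaMax = -1; // x-range currently covered by the umbrella

void setup() {
  size(640, 480);
  detector = new BlobDetection(width, height);
  detector.setPosDiscrimination(true); // look for bright blobs (the IR LEDs)
  detector.setThreshold(0.8f);         // brightness threshold for reliable tracking
}

void draw() {
  background(0);
  // two stand-in "IR LEDs", one per side of the umbrella
  noStroke();
  fill(255);
  ellipse(mouseX - 60, mouseY, 12, 12);
  ellipse(mouseX + 60, mouseY, 12, 12);

  // umbrella mode waits for exactly two blobs, then saves their extremes
  PImage frame = get();
  frame.loadPixels();
  detector.computeBlobs(frame.pixels);
  if (detector.getBlobNb() == 2) {
    Blob a = detector.getBlob(0);
    Blob b = detector.getBlob(1);
    umbrellaMin = min(a.x, b.x) * width; // blob coordinates are normalized 0..1
    umbrellaMax = max(a.x, b.x) * width;
  }

  // rain: columns that spawn between the two LEDs get a fully transparent alpha
  for (int x = 0; x < width; x += 8) {
    float rainAlpha = (x > umbrellaMin && x < umbrellaMax) ? 0 : 255;
    stroke(120, 160, 255, rainAlpha);
    line(x, 0, x, height);
  }
}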

  3. The Hardware

Finally, once the logic of the code was functioning, I attached the infrared LEDs to the umbrella and the glove. I 3D printed battery holders for my two 3V batteries and made switches so I could save the battery life for the exhibition. Then, for the umbrella, I attached all the wires with tape. For the glove, my friend Nikki Joaquin sewed all the components together due to my lack of ability (thank you Nikki <3). Although seemingly quite simple, setting up all the hardware was one of the most time consuming tasks. At first, Nahil and I had not thought about 3D printing the battery holders. Instead, I had just taped everything up, which made it extremely difficult to attach the wires to the batteries and place them on the umbrella without any of the components moving out of place. I had also initially thought about using just one LED on either side of the umbrella and one on the hand. However, due to the directional aspect of the LEDs, I ended up making an extra set and adjusting their angles slightly so the blob tracking could be more accurate.

 

Sewn components. I could have covered them with a film, but the buttons were more accessible this way.

The battery holders were attached with a lot of electrical tape to ensure they would not fall off.
As seen in the image, the LEDs were slightly shifted to allow for a wider range.

Challenges and future improvements

This whole process was overall quite challenging. However, by dividing everything into the three sections described earlier and doing everything little by little, I was able to finish Totoro on time. The biggest challenge was definitely the coding. I had to familiarize myself with the way the IR camera and the IR LEDs worked, and I had to adjust the Blob Detection code to fit the interactions I wanted to create as well. Initially, I wrote the code in such a way that the program would automatically recognize the number of blobs in the camera's frame and use that to identify the mode it was in. However, this made the code extremely unreliable, which is why I chose to change modes manually through keys on my keyboard. Overall, thanks to the help of Aaron, Craig, Nahil, James and María Laura, the code is now fully functional and as bug free as possible (I hope). The visuals and the hardware were also quite time consuming, but were more mechanical, which provided good breaks once I got tired of writing the code.

Overall, the whole process of making Totoro come to life was a truly gratifying one. Although it was extremely time consuming and frustrating at times, it was all worth it once I saw how excited people got over seeing a huge Totoro and realizing they could (even in the most minimal of ways) interact with him. Some people even told me that rubbing Totoro's belly was just what they needed for finals week 😀 In the end, I am still in awe at how much all of us have been able to accomplish in this class. I would never have guessed that I would ever make a project like this one, especially at the beginning of the semester. Overall, regardless of the Sunday stress when certain projects didn't work out like I envisioned them to, this class has been one of the most rewarding I have taken; thank you so much everyone for being a part of it 😀

At the exhibition, I was so caught up helping people out with the umbrella and the glove that I totally forgot to take videos of people interacting with Totoro. Here are some photos of the exhibition (thank you Craig, James, and Aaron!)

Finally, here are samples of the final interactions: 

 

 

Across The Globe

For my final project I created an opportunity for people to jump around different places on Earth (and off Earth, for that matter) in less than a second. With the help of computer vision and a green screen behind them, people were able to see themselves in Rome, on a beach in Thailand, or on the International Space Station (ISS). In order to navigate these places, all you have to do is move a figure of a person around the map and place it in one of the three locations. Then, this location appears on the screen, and so does the person interacting with the project, because he/she is being filmed. In addition, there is a small carpet on the floor to step on. When you start walking or running on it, the background starts moving as well, depending on how fast you move.

The creation of this project was challenging from the first day. I started by connecting two pressure sensors to the Arduino and reading the time between presses of the sensors. That way it is possible to know how long a person’s step takes. I then set up serial communication to send this data to Processing. In addition to the pressure sensors, there are also 3 LEDs connected to the Arduino, and it also sends a different number to Processing depending on which LED is lit up. Each LED is responsible for a certain place on the map.

For the interactive map I got a box, cut 3 holes, added an LED next to each hole, designed the surface, and added another layer of cardboard inside so there would be a bottom for the holes. Two strips of conductive copper tape run to each of the holes; one strip is connected to power and the other to ground. Therefore, whenever something conductive is placed in the hole, it closes the circuit and the LED next to the hole lights up. A number is assigned to each LED, and this number is sent to Processing, so it knows at which location the figure is placed. A sketch of this logic is shown below the photos.

the box from the outside
the box from the inside
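Here is a minimal Arduino sketch of both mechanisms (the pin numbers, the threshold, and the message format are my assumptions, not the exact project code; here the Arduino both senses each closed contact and drives its LED):

// hole contacts: read HIGH when the figure's conductive base bridges the strips
const int holePins[3] = {2, 3, 4};   // assumed sense pins, one per hole
const int ledPins[3]  = {8, 9, 10};  // assumed LED pins, one per hole
const int stepPinA = A0;             // pressure sensors under the carpet
const int stepPinB = A1;
const int stepThreshold = 300;       // tune by watching the raw readings

unsigned long lastStep = 0;

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 3; i++) {
    pinMode(holePins[i], INPUT);
    pinMode(ledPins[i], OUTPUT);
  }
}

void loop() {
  // which hole is the figure in? light its LED and remember 1, 2 or 3 (0 = none)
  int location = 0;
  for (int i = 0; i < 3; i++) {
    bool closed = digitalRead(holePins[i]) == HIGH;
    digitalWrite(ledPins[i], closed ? HIGH : LOW);
    if (closed) location = i + 1;
  }

  // a press on either sensor counts as a footstep;
  // report the time (ms) since the previous one
  unsigned long stepInterval = 0;
  if (analogRead(stepPinA) > stepThreshold || analogRead(stepPinB) > stepThreshold) {
    unsigned long now = millis();
    stepInterval = now - lastStep;
    lastStep = now;
  }

  // send "location,interval" to Processing once per loop
  Serial.print(location);
  Serial.print(',');
  Serial.println(stepInterval);
  delay(50);
}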

To make the figure, I went to the Engineering Design Studio to use their laser cutter and cut 7mm-thick clear acrylic. The figure is a traveler with a backpack and a round bottom. In order to make the bottom conductive, I first tried taping some copper tape to it, but the figure lacked weight and didn’t properly press down on the copper tape strips when placed in the hole. So I had to be creative, and that’s how I decided to stick 3 coins on the bottom to give the figure some weight as well as make the bottom more conductive (now I know that euros are more conductive than dirhams or dollars).

a two euro coin on the bottom of the figure

When the figure is placed somewhere on the map, the appropriate LED lights up and sends a number to Processing. In Processing I then loaded a video for each of the 3 places and display the appropriate one. For example, when the figure is placed in Rome, Arduino recognizes it and sends a ‘1’ to Processing, which is then set to display a video of Rome. In order to actually play the video, the person interacting with my project needs to start moving on the carpet. Arduino then measures the time between footsteps and, again, sends these values to Processing. I map the incoming time value in Processing and play the video according to how fast the person is walking: it slows down when a person walks very slowly, plays normally at a normal pace, and speeds up when a person runs. However, if the steps are longer than the maximum value in the map function (1.2 seconds), the video just plays at the slowest mapped speed. If there is no movement for a little while, the video stops, and it starts playing again when movement is detected. Therefore, people interacting with my project get the impression that they are actually seeing the background as they would when moving at different speeds. A sketch of this mapping appears after the photos below.

the whole setup. people are walking on the carpet
pressure sensors on the back of the carpet
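Here is a rough Processing sketch of the speed mapping (the file and port names are placeholders, and I assume the step interval arrives over serial as one number per line):

import processing.video.*;
import processing.serial.*;

Movie rome;
Serial port;
float stepTime = 1200;   // ms between footsteps, updated from serial
int lastMovement = 0;    // when movement was last detected

void setup() {
  size(1280, 720);
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
  rome = new Movie(this, "rome.mp4"); // placeholder file name
  rome.loop();
}

void draw() {
  // fast steps (small interval) -> fast playback; steps longer than
  // 1.2 s are constrained, so the video just plays at the slowest speed
  float speed = map(constrain(stepTime, 200, 1200), 200, 1200, 2.0, 0.5);
  rome.speed(speed);
  // pause when there has been no movement for a little while
  if (millis() - lastMovement > 3000) {
    rome.pause();
  } else {
    rome.play();
  }
  image(rome, 0, 0, width, height);
}

void movieEvent(Movie m) {
  m.read();
}

void serialEvent(Serial p) {
  String s = trim(p.readStringUntil('\n'));
  if (s != null) {
    stepTime = float(s);
    lastMovement = millis();
  }
}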

The person interacting with my project sees himself or herself in one of the places because of the green screen behind them. The camera on the computer in front of them films them and the green screen, substituting all of the green pixels with a video from the place where the figure is located.
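The green-screen substitution itself can be sketched in Processing like this (a minimal chroma key; the threshold values and file name are assumptions):

import processing.video.*;

Capture cam;
Movie bgVideo;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  bgVideo = new Movie(this, "rome.mp4"); // placeholder file name
  bgVideo.loop();
}

void draw() {
  if (cam.available()) cam.read();
  cam.loadPixels();
  bgVideo.loadPixels();
  loadPixels();
  for (int i = 0; i < width * height && i < bgVideo.pixels.length; i++) {
    color c = cam.pixels[i];
    // "green enough": the green channel clearly dominates red and blue
    boolean isGreen = green(c) > 100 && green(c) > red(c) * 1.4 && green(c) > blue(c) * 1.4;
    pixels[i] = isGreen ? bgVideo.pixels[i] : c;
  }
  updatePixels();
}

void movieEvent(Movie m) {
  m.read();
}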

Whenever the person is not placed on any of the locations, this is the photo that shows up on the screen:

The IM show, where we displayed our projects to the public, was an incredible and positively overwhelming experience. For the show I had two screens: one was the computer in front of the person, where they could see themselves, while the other was turned toward the public. I was really happy to have the second screen because it definitely drew more attention to my project, since people could see other people interacting with it. I was surprised by people’s interest in interacting with my project, and observing their reactions was extremely rewarding. The night flew by in a second for me, but I tried to capture some moments from it.

Goffredo was really happy to be in Rome!!

Here I have a short time-lapse of people interacting with my project:


And these are some of my favorite moments filmed at the IM show. I have more footage though, and, as soon as the exam period ends, I’ll make sure to make a video about the whole project and post it on here! Overall, I have learned a lot, not only in this period of making the project but throughout the whole semester. The IM show was a memorable way to wind up the semester. Huge thanks to Aaron for the help and to the class for the feedback received along the way!

Atmanna (Wish): Final Project Documentation

Inspiration
Atmanna, or Wish, was inspired by my interest in creating an art piece that mimics a motion in nature. I really wanted to work on an art piece instead of a game or other application, because this class has given me an interest in satisfying motions that produce aesthetically pleasing visuals. The first piece I worked on that attempted to capture the compositional beauty of nature was the generative art of leaves in Processing.
At first I wanted to create an art piece that also allowed the user to make a wish, with a speech-to-text mechanism that would make their wish appear on screen. Ultimately the concept changed along the way, but read on to find out how.
Concepts of Movement
I sketched several different ideas for mimicking the movements of a dandelion in Processing. I wanted to utilize my knowledge of object-oriented programming as well as particle systems to create a beautiful effect. Here are some of the ideas that I came up with:
Concept 1 –
This was the first idea I had in mind: creating the dandelion with randomly shaped particles. It was well suited to movements with the mouse and I really liked it, but I felt it was too abstract to be immediately recognized as a dandelion, and the motion wasn’t exactly what I had in mind in terms of the real movement of dandelions.
Concept 2 –
This concept came when I was trying to play instead with lines and nodes, like in Dan Shiffman’s fractal tree videos. I played a lot with motion in this concept, but ultimately I didn’t like the look and feel of the lines and nodes for a dandelion.
Final Concept –
I finally decided to use vector graphics created in Illustrator, because I had more control over what I wanted the piece to look like. I created different frames of animation for a dandelion within Illustrator and imported them into an array of images in Processing to loop through.
The particles in my particle system were composed of an image of a dandelion seed that I also imported into Processing and used in the ‘Seed’ class, and I played with different movements. I decided to make the seeds flow upwards because it made the most sense spatially on the screen for me. Again, referencing Dan Shiffman’s Nature of Code book really helped in this phase, letting me add and play with different physical forces to create the desired effect.
I knew I wanted to use the physical action of blowing, but without a wind sensor I had to think of an alternative method. I decided to use a SparkFun Sound Detector that we had available in the lab, which was able to read different sound levels. The act of blowing on a microphone produces certain levels of sound that I was able to explore using the Serial Plotter in the Arduino IDE. I used these serial values to trigger motions for the particle system in the Processing sketch. A stripped-down sketch of this setup follows.
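Here is how the pieces fit together in miniature (not my full code; it assumes the Arduino prints one sound-level reading per line, and "seed.png" stands in for my Illustrator seed image):

import processing.serial.*;

Serial port;
ArrayList<Seed> seeds = new ArrayList<Seed>();
PImage seedImg;
float soundLevel = 0;
float blowThreshold = 300; // found by watching the serial plotter

void setup() {
  size(800, 600);
  seedImg = loadImage("seed.png"); // placeholder asset
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void draw() {
  background(20);
  if (soundLevel > blowThreshold) {
    // blowing releases a seed from the flower head
    seeds.add(new Seed(width/2, height*0.7));
  }
  for (int i = seeds.size()-1; i >= 0; i--) {
    Seed s = seeds.get(i);
    s.update();
    s.display();
    if (s.pos.y < -20) seeds.remove(i); // gone off the top of the screen
  }
}

void serialEvent(Serial p) {
  String s = trim(p.readStringUntil('\n'));
  if (s != null) soundLevel = float(s);
}

class Seed {
  PVector pos, vel;
  Seed(float x, float y) {
    pos = new PVector(x, y);
    vel = new PVector(random(-1, 1), random(-3, -1)); // drift upward
  }
  void update() {
    vel.add(new PVector(random(-0.1, 0.1), -0.02)); // jitter plus gentle lift
    pos.add(vel);
  }
  void display() {
    image(seedImg, pos.x, pos.y, 16, 24);
  }
}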
User Testing
When I did my user testing, I did not yet have a physical dandelion that people could blow on. Some people liked this because it did not take away from the on-screen experience and aesthetics; others wished that they did have it.
At the time of the user testing, the animation also was not as clear or smooth as I would have liked, and people noticed that as well. I also thought about what story was being presented to the user as they interacted with the piece, and I didn’t have a set narrative being told. I thought about what was being said and how to use that, but the ultimate idea for me was to let the user make their wish and keep the act as simple as it organically is in real life. See the user testing post for more of the notes and improvements that I wanted to work on.
I ultimately did have a physical dandelion for people to blow on, but it was difficult to decide on the medium to make it with. I used straws for the green stem, which made the wiring easier to work with, and cotton balls for the top of the dandelion. It was difficult to embed the microphone in a way that would still allow it to work but also make sense for the user to blow on and interact with.
During the Show
 
I think my project stood out as being one of the ‘calmer’ projects – there was a lot of light and big screens and sound around so it was a different experience for people to pause and reflect and take a moment to make a wish. People ultimately really enjoyed the experience, and I especially liked that I had many people stop by because it was such a simple concept that didn’t take too much time to engage with but still created the impact that I wanted it to.
One thing I wish I had done was incorporated an element of sound or background music – but it was loud in the room anyway so it wouldn’t have created the exact ambiance that I wanted to achieve. One of my favorite comments was that this was a good business model for an ‘alternative stress ball’ to keep on your desk and use to take a moment to breathe and reflect.
I realized a bit too late that the setup, in terms of exactly where the dandelion and screen were positioned, was not perfect. Sometimes, as users were blowing, they missed what was happening on the screen. I think it might have been better or more immersive if I had made the dandelion something you could pick up, and/or projected the art instead of showing it on my laptop screen. All in all, I really enjoyed presenting my work and having people play with it. Some people even came by several times to make more than one wish!
Limitations + Future Improvements
 
  • I didn’t use the right medium to create the physical dandelion – the cotton was really fragile and the straws were not particularly stable as people were blowing on the dandelion itself.
  • During the show I realized the loud room had some sound interference that the microphone detected and triggered the animation without meaning to. Even though I did user testing, I think each space is unique and I possibly need to add some calibration function – thanks to Aaron for showing me the sound smoothing function that saved me!
  • I would actually like to figure out an elegant solution for speech to text in Processing
  • I’m thinking of 3D printing or using some mesh material to create the dandelion instead of cotton balls.

Final Project: The Raga Machine

The Raga Machine:

The final project for this class was one of the most fulfilling projects that I have worked on all semester. Having undergone 10 years of training in Carnatic music and having sorely missed practicing it for the last three years, this project was a fun way to reconnect with it. Based on Carnatic ragas, my project was designed to be an eight-key keyboard that could play a variety of ragas (five, in this case). A raga, for context, is a particular combination of notes that songs are composed in. For example, if raga A contains the notes Do-Re-Mi and Mi-Do-Re, then a song composed in raga A will only contain those notes, and in that order. I think the Western equivalent of this is a key, but I’m not sure. Ragas are generally divided into two kinds: Melakartha ragas and Janya ragas. Essentially, Melakartha ragas contain all eight notes, which can be sung in any order. Janya ragas (derived from Melakartha ragas) generally have more rigid rules. I used five Melakartha ragas in my machine. If anyone is interested in learning more about how this system works, here are some resources:

– http://www.medieval.org/music/world/carnatic/cmc.html
– https://en.wikipedia.org/wiki/Melakarta

Here is a table of all the 72 Melakartha ragas:

Building this project took a lot more work than I envisioned, but I am extremely happy with the way it turned out. Here is a brief breakdown of how I made the Raga Machine over the last two weeks.

PROTOTYPE STAGES:

My first prototype of the project involved no Arduino aspect at all. It simply involved keys on the keyboard (from ‘z’ to ‘,’) that played one note each. I wanted to take it slow (since we had two weeks to work on it) and see what elements I could get up and working just in Processing.

In the second prototype, I included Arduino buttons. I had eight buttons, and each button played a single note whose pitch changed according to which raga was being played. For both of these prototypes, the raga would be changed based on the mouse position.
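Here is a minimal sketch of the idea behind these prototypes, with keyboard keys standing in for the buttons (the note tables below are placeholders, not the actual ragas I used):

import processing.sound.*;

SinOsc osc;

// each row is one raga: eight semitone offsets from the tonic (Sa)
int[][] ragas = {
  {0, 2, 4, 5, 7, 9, 11, 12},  // a major-scale-like raga (placeholder)
  {0, 1, 4, 5, 7, 8, 11, 12}   // another set of offsets (placeholder)
};
int currentRaga = 0;
float tonic = 261.63; // middle C as the tonic

void setup() {
  size(400, 200);
  osc = new SinOsc(this);
  osc.play();
  osc.amp(0);
}

void draw() {
  // the mouse position picks the raga, as in the early prototypes
  currentRaga = constrain(int(map(mouseX, 0, width, 0, ragas.length)), 0, ragas.length - 1);
  background(0);
}

void keyPressed() {
  int k = "zxcvbnm,".indexOf(key); // eight keys, one per note
  if (k >= 0) {
    // semitone offset to frequency: f = tonic * 2^(semitones/12)
    float freq = tonic * pow(2, ragas[currentRaga][k] / 12.0);
    osc.freq(freq);
    osc.amp(0.5);
  }
}

void keyReleased() {
  osc.amp(0);
}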

The visuals for each raga were something that I continually struggled to envision. For a while, I had each button draw an ellipse of a certain size, with the raga determining the color. It looked something like this:

I then changed the visuals to draw various henna/mandala patterns instead of ellipses, but still in random positions with size and color being controlled the same way as before. I was still unclear on what the best course of action was, but luckily, it was time for user testing.

USER TESTING:

Here are the notes I made after my user testing session:

“Even though my project is still missing its decoration component as well as refined visuals, having my friend (who didn’t know what my project was about) come test it out gave rise to some incredibly useful feedback. Here are some of the things I learned and plan to work on:

  1. My project is about Carnatic music, but not many people at this school are aware of what that is, and even if they are, they would probably not be able to tell that the project is in fact based on the raga system. My user felt that without context, the project was confusing and vague.
  2. To this end, she suggested making an informational page at the beginning of the project, helping people situate the project in some sort of context.
  3. My user also suggested that I label the parts of the musical instrument, but on understanding that I was planning to create a keyboard-like structure for the instrument, she thought that it might need less labelling than she originally thought.
  4. She also thought that my visuals were random and confusing, and suggested that the visuals have more to do with the physical input from the user. In line with that, I have changed my visuals to reflect the exact position of the key that the user is pressing at any given moment.”

This feedback turned out to be extremely helpful.

FINAL PRODUCT:

The final product, based on the feedback that I had received both during the user testing session and on the day before the IM show, included vastly different visuals and a pretty, compact box within which the Arduino lay. The keys were made out of popsicle sticks, which pressed down on the buttons. I thought they would be a drawback, but it turned out that a lot of people had fun with them — the idea that they were creating music by pressing down popsicle sticks was entertaining and interesting to them. I also had to add another piece of cardboard atop the structure so that the keys wouldn’t come off.

In addition to these changes, I also added an info page at the beginning, which GREATLY helped during the show to contextualize the entire project. The visuals, too, were labelled with what raga was being played at that moment, which also helped greatly to contextualize the project. Here are two videos, one documenting the info page, and the other documenting a young boy playing the instrument:

Info page:

 

The Instrument:

In hindsight, I think that one of the main components that the project was missing was a signifier. Although most people intuitively understood that the popsicle sticks were meant to function as keys, others would try to pull them out or play them as they would an actual piano. The problem with this was that the popsicle sticks would generate the best sound when pressed where the black dot was, but that wasn’t clear enough of a signifier to make people press that spot immediately. I must say, though, that the visuals helped. Because I changed the ellipses to reflect the position of the key that was being played at the moment, people were much more easily able to grasp what exactly was going on.

All in all, I received extremely good feedback from the visitors at the IM show, and an unexpected number of people were truly interested in learning more about Carnatic music. That, to me, is surely the biggest accomplishment of the project.

As always, major thanks to Nahil and James and Aaron for all the help!

Final Project: An Image That Can’t Be Vandalized

My final project was born out of two motivations. I wanted to play with the concept of cult of personality, and I wanted to do some sort of projection mapping. I thus decided to make an image that couldn’t be vandalized.

In terms of technical implementation, the project has three main components. The first is an infrared camera (a PS3Eye), which I use to track the position of an infrared LED attached to an object resembling a spray can. The second is the projection: both the equipment used to set it up as well as the things that needed to be done in order to make it work within the spatial constraints. Finally, there is a set of images that are triggered depending on the position of the infrared LED on the canvas–these are perceived by the user as an animation.

IR LED, Camera & Blob Detection

A PImage variable ‘cam’ (640×480) is created to retain whatever is captured by the PS3Eye.

A PImage ‘adjustedCam’ (width×height) is created to retain what is captured in ‘cam’, but at a larger size.

A smaller PImage ‘img’ (80×60) is created to enable the Blob Detection. It is not drawn in the Processing sketch but runs in the background. A cropped region of ‘adjustedCam’ is copied into it, which effectively restricts what the IR camera can see to the area being projected. This allows a blob to be drawn in the same place where the IR LED is turned on.

Setting the coordinates.

A circuit connected to an IR LED is built into a Pringles can adapted to resemble a spray can. I added a weight to recreate the sensation of holding a spray can, and a ping-pong ball to mimic the sound.

Spray can circuit and design.

I use Blob Detection — a form of pixel manipulation that sorts bright from non-bright pixels — to track the position of the IR LED over the canvas. The presence of a blob — which indicates that the light is on — triggers a drawing at the position of the light.

Projection Setup
This was the most time-consuming aspect of the project: setting up in the space and adjusting the projector’s elevation above the ground and its distance from the wooden canvas. I used two film-set stands to hold the wooden frame.

Projection setup in the IM lab, with the wooden frame.

Animation
There are two components to the animation: what happens when the user ‘sprays’ inside the painting, and what happens when they don’t.
When they are spraying outside the painting, the painting’s character follows the position of the spray can with his eyes. I do this by mapping the position of two ellipses drawn in the character’s eyes to the position of the blob.

When spraying happens inside the portrait, different frames get triggered depending on the general position of the blob.

Notes from user testing
My user testing pointed me toward the following things, which I implemented in the final project.

  • Add weight to the spray can and protect the circuit because people will want to shake the can — allow them to have that experience.
  • Allow the users to change the color of the spray paint.
  • Make the character in the painting duck.

IM Showcase

Here are some pictures of the IM showcase and the accumulated paintings that resulted from people interacting with my piece.


// - Super Fast Blur v1.1 by Mario Klingemann <http://incubator.quasimondo.com>
// - BlobDetection library

import processing.video.*;
import blobDetection.*;

BlobDetection theBlobDetection;
PImage img;
boolean newFrame=false;
import com.thomasdiewald.ps3eye.PS3EyeP5;

PS3EyeP5 ps3eye;
PImage cam;

PImage adjustedCam; // adjustedCam image

float posX; // for tracking the position of the blob
float posY;

float eyesXleft, eyesYleft; // positioning pupils
float eyesXright, eyesYright;

// images
PImage frame1; // the frames for animation
PImage frame2;
PImage frame3;
PImage frame4;
PImage frame5;
PImage frame6;
PImage frame7;
PImage pic_frame;
PImage noEyes; // image for static/eye-drawing
PImage rainbow; // rainbow color option square

int colPaint;
//int rainbowPaint;
boolean rainbowPaint;

boolean mode; // a boolean indicating whether
// the blob is moving inside or
// outside the frame

void setup()
{
fullScreen(P3D); // fullscreen sketch; the 640x480 IR camera frame gets scaled up to this size
//size(1280,720,P3D);
//fullScreen();
//size(640,480);
ps3eye = PS3EyeP5.getDevice(this);

if (ps3eye == null) {
//System.out.println("No PS3Eye connected. Good Bye!");
exit();
return;
}

// start capturing with 60 fps (default)
ps3eye.start();

// BlobDetection
// img which will be sent to detection (a smaller copy of the cam frame);
cam=createImage(640, 480, RGB);
img = new PImage(80, 60);
adjustedCam = createImage(width, height, RGB);
theBlobDetection = new BlobDetection(img.width, img.height);
theBlobDetection.setPosDiscrimination(true);
theBlobDetection.setBlobMaxNumber(1);
theBlobDetection.setThreshold(0.05f); // will detect bright areas whose luminosity > 0.05f

//===loading images
frame1 = loadImage("frame1.png");
frame2 = loadImage("frame2.png");
frame3 = loadImage("frame3.png");
frame4 = loadImage("frame4.png");
frame5 = loadImage("frame5.png");
frame6 = loadImage("frame6.png");
frame7 = loadImage("frame7.png");
pic_frame = loadImage("frame_picture.png");
rainbow = loadImage("rainbow.png");
noEyes = loadImage("noEyes.png");

//===setting intial color
colPaint = color(255,0,0);
rainbowPaint = false;
}

void draw()
{

if (ps3eye.isAvailable()) {
cam = ps3eye.getFrame();
}

adjustedCam.copy(cam, 0, 0, cam.width, cam.height, 0, 0, adjustedCam.width, adjustedCam.height);
int beginX=247; // for IM show
int beginY=146;
int endX=1037;
int endY=498;

//int beginX=272; // for IM show
//int beginY=169;
//int endX=1099;
//int endY=503;

//int beginX=289;
//int beginY=168;
//int endX=1116;
//int endY=510;

//int beginX=198;
//int beginY=131;
//int endX=396;
//int endY=266;

img.copy(adjustedCam, beginX, beginY, endX-beginX, endY-beginY, 0, 0, img.width, img.height);
//img.copy(adjustedCam, 0, 0, adjustedCam.width, adjustedCam.height, 0, 0, img.width, img.height);
//img.copy(cam, 225, 140, 390-225, 261-140, 0, 0, img.width, img.height);
fastblur(img, 2);
//image(cam, 0,0, width,height);
//fastblur(cam,2);

//float threshold =50;

//img.loadPixels();
//adjustedCam.loadPixels();

//for (int x = 0; x < img.width; x++) {
// for (int y = 0; y < img.height; y++ ) {
// int loc = x + y*img.width;
// // Test the brightness against the threshold
// if (brightness(img.pixels[loc]) > threshold) {
// adjustedCam.pixels[loc] = color(255); // White
// } else {
// adjustedCam.pixels[loc] = color(0); // Black
// }
// }
//}
//img.updatePixels();
//adjustedCam.updatePixels();
theBlobDetection.computeBlobs(img.pixels);
//image(img,0,0, width, height); // comment
drawBlobsAndEdges(false, false, true);
// Display the adjustedCam

image(pic_frame, 0, 0, width, height);
image(frame1, 298, 147, 685, 426);

// detecting if there is a blob or not; to trigger animations
if(theBlobDetection.getBlobNb()>=1){

eyesXleft = map(posX,0,width,604,624);
eyesYleft = map(posY,0,height,325,332);

eyesXright = map(posX,0,width,650,676);
eyesYright = map(posY,0,height,325,336);

// determining MODE. TRUE = animation, FALSE = eye tracking
if (posX>=0 && posX<=367 && posY>=0 && posY<=height || // left area
posX>=912 && posX<=width && posY>=0 && posY<=height || // right area
posX>=368 && posX<=911 && posY>=0 && posY<=227 || // upper area
posX>=368 && posX<=911 && posY>=575 && posY<=height // lower area
) {
mode = false;
} else {
mode = true;
}
if (mode == false) {
image(noEyes, 298, 147, 685, 426);
noStroke();
fill(0);
ellipse(eyesXleft, eyesYleft, 5, 5);
fill(0);
ellipse(eyesXright, eyesYright, 5, 5);

noStroke();
if (rainbowPaint==false){ // to set the color either as rainbow or as solid fill
fill(colPaint);
} else {
rainbowPaint=true;
fill(random(0,255),random(0,255),random(0,255));
}
for(int i = 0; i < 5; i++){ // this gives the graffiti-looking effect
float randX = random(0,20);
randX = randX - 10;

float randY = random(0,20);
randY = randY - 10;

ellipse(posX+randX, posY + randY, 3,3);
}
}
if (mode == true) { //changing the animation frames
if (457 <= posX && posX <= 548 && 303 <= posY && posY <= 515) {
image(frame2, 298, 147, 685, 426);
} else if (638 <= posX && posX <= 723 && 303 <= posY && posY <= 515) {
image(frame5, 298, 147, 685, 426);
} else if (725 <= posX && posX <= 815 && 303 <= posY && posY <= 515) {
image(frame4, 298, 147, 685, 426);
} else if (549 <= posX && posX <= 636 && 303 <= posY && posY <= 515) {
image(frame3, 298, 147, 685, 426);
} else if (458 <= posX && posX <= 816 && 207 <= posY && posY <= 254) {
image(frame6, 298, 147, 685, 426);
} else if (458 <= posX && posX <= 816 && 255 <= posY && posY <= 303) {
image(frame7, 298, 147, 685, 426);
}else {
image(frame1, 298, 147, 685, 426);
}
noStroke();
noFill();
ellipse(posX, posY, 10, 10);
}
} else {
}

////===Color palette

fill(random(0,255),random(0,255),random(0,255));

noStroke();

fill(255,0,0); // color 1
rect(0+15, height-50, 40, 40);

fill(0); // color 2
rect(0+65, height-50, 40, 40);

fill(0,0,255); // color 3
rect(0+115, height-50, 40, 40);

//rect(0+165, height-50, 40, 40); // rainbow
image(rainbow, 0+165, height-50, 40, 40); // rainbow

if (16 <= posX && posX <= 55 && 672 <= posY && posY <= 712){
rainbowPaint = false;
colPaint = color(255,0,0);
} else if (66 <= posX && posX <= 105 && 672 <= posY && posY <= 712){
rainbowPaint = false;
colPaint = color(0);
} else if (106 <= posX && posX <= 155 && 672 <= posY && posY <= 712){
rainbowPaint = false;
colPaint = color(0,0,255);
}else if (166 <= posX && posX <= 205 && 672 <= posY && posY <= 712){
rainbowPaint = true;
} else {
}

//fill(0,255,0,100); // for checking projection map
//rect(0,0,width,height);

}
//
// ==================================================
// get the coordinates of the projection — for mapping
// ==================================================

void mousePressed() {
println(mouseX, mouseY);
// prints the coordinates of where the mouse is
// pressed; the coords of the projection.
}

// ==================================================
// drawBlobsAndEdges()
// ==================================================
void drawBlobsAndEdges(boolean drawBlobs, boolean drawEdges, boolean getCoordinates)
{
noFill();
Blob b;
EdgeVertex eA, eB;
for (int n=0; n<theBlobDetection.getBlobNb(); n++) {
b=theBlobDetection.getBlob(n);
if (b!=null) {
//Edges

if (drawEdges) {
strokeWeight(3);
stroke(0, 255, 0);

for (int m=0; m<b.getEdgeNb(); m++) {
eA = b.getEdgeVertexA(m);
eB = b.getEdgeVertexB(m);

if (eA !=null && eB !=null) {

line(
eA.x*width, eA.y*height,
eB.x*width, eB.y*height
);
}
}
}

// Blobs
if (drawBlobs) {

fill(255, 150);
ellipse(b.x*width, b.y*height, 30, 30);

strokeWeight(1);
stroke(255, 0, 0);
rect(
b.xMin*width, b.yMin*height,
b.w*width, b.h*height
);
}

posX = b.x*width;
posY = b.y*height;
//println("posX");
//println(posX);
//println("posY");
//println(posY);
}
}
}

// ==================================================
// Super Fast Blur v1.1
// by Mario Klingemann
// <http://incubator.quasimondo.com>
// ==================================================
void fastblur(PImage img, int radius)
{
if (radius<1) {
return;
}
int w=img.width;
int h=img.height;
int wm=w-1;
int hm=h-1;
int wh=w*h;
int div=radius+radius+1;
int r[]=new int[wh];
int g[]=new int[wh];
int b[]=new int[wh];
int rsum, gsum, bsum, x, y, i, p, p1, p2, yp, yi, yw;
int vmin[] = new int[max(w, h)];
int vmax[] = new int[max(w, h)];
int[] pix=img.pixels;
int dv[]=new int[256*div];
for (i=0; i<256*div; i++) {
dv[i]=(i/div);
}

yw=yi=0;

for (y=0; y<h; y++) {
rsum=gsum=bsum=0;
for (i=-radius; i<=radius; i++) {
p=pix[yi+min(wm, max(i, 0))];
rsum+=(p & 0xff0000)>>16;
gsum+=(p & 0x00ff00)>>8;
bsum+= p & 0x0000ff;
}
for (x=0; x<w; x++) {
r[yi]=dv[rsum];
g[yi]=dv[gsum];
b[yi]=dv[bsum];
if (y==0) {
vmin[x]=min(x+radius+1, wm);
vmax[x]=max(x-radius, 0);
}
p1=pix[yw+vmin[x]];
p2=pix[yw+vmax[x]];
rsum+=((p1 & 0xff0000)-(p2 & 0xff0000))>>16;
gsum+=((p1 & 0x00ff00)-(p2 & 0x00ff00))>>8;
bsum+= (p1 & 0x0000ff)-(p2 & 0x0000ff);
yi++;
}
yw+=w;
}

for (x=0; x<w; x++) {
rsum=gsum=bsum=0;
yp=-radius*w;
for (i=-radius; i<=radius; i++) {
yi=max(0, yp)+x;
rsum+=r[yi];
gsum+=g[yi];
bsum+=b[yi];
yp+=w;
}
yi=x;
for (y=0; y<h; y++) {
pix[yi]=0xff000000 | (dv[rsum]<<16) | (dv[gsum]<<8) | dv[bsum];
if (x==0) {
vmin[y]=min(y+radius+1, hm)*w;
vmax[y]=max(y-radius, 0)*w;
}
p1=x+vmin[y];
p2=x+vmax[y];

rsum+=r[p1]-r[p2];
gsum+=g[p1]-g[p2];
bsum+=b[p1]-b[p2];

yi+=w;
}
}
}

Final Project: “Light is Like Water”, an Interactive Diorama

I must start off by admitting that time, though endless (as my high school Calculus professor used to say, “there’s more time than life”), is often insufficient. That was my experience these past couple of weeks. There is so much I wanted to do for this project that I couldn’t implement, not because of technical difficulties, but because of time.

Thus, my biggest takeaway is this: a project that one is excited about could go on forever. I was thrilled to carry out my ideas for this final assignment, because I’m fascinated by the story that inspired it. It was fruitful in the end: I’m proud of what I made. But I could have continued to work on it more, adding more features and fixing others, and refining the “craftsmanship.” I’m glad this is the case, though. It means that this project motivated and inspired me, in a way few projects throughout the semester had.

Inspiration

The story on which I based my work is titled “La luz es como el agua,” or “Light is like water” in English, written by world-famous Colombian author Gabriel García Márquez in 1978. I learned about the text from a friend who read it in high school, and I purchased the book in which this short story is featured (Doce cuentos peregrinos, or Twelve Pilgrim Stories) last summer.

Throughout the semester, I wanted to work with a track from Pirates of the Caribbean for my final project. However, having used it for one of our weekly assignments, I began to consider other possibilities, and when I remembered García Márquez’s story, it made perfect sense to use it.

This link contains two edited (abridged) versions of the story: one in English, translated by myself, and the other in Spanish. The story was shortened specifically for this project, but the complete text can easily be found online in both Spanish and English.

What I love about García Márquez’s writing is its richness in imagery. His descriptions very easily make the stories come to life for the reader, and thus (it seems to me) there’s a lot to work with if one wishes to depict his narrations.

Two aspects of this story made it particularly adequate for an interactive media project. Firstly, the text deals with electricity: it tells the story of two brothers who cause light (electricity) to “flow” like water, ending on a tragic note. I’ve been interested in working with neopixels ever since our “Stupid Pet Trick” assignment, and thought that they could be used in this project to literally show light around a house, and to make it appear as water.

Secondly, the story allows for interactivity in a fantastic way. The title of the text comes from the narrator’s confession that he once told the two brothers that light is like water.  Thus, the narrator, who tells most of the story in third person, reveals to the reader his role and direct impact on the events that unfold. I wanted the user to be directly implicated in the story’s events as well, having them “cause” said events.

Process, USER TESTING, & Improvements

I was quite lousy regarding the documentation of this project. I took no photographs of the process, the user testing stage, or of its exhibition during the Interactive Media Spring Showcase.

As the following images (taken after the project was exhibited) show, the “main” component of the project is a house I built mostly out of cardboard. The house can be divided into two sections: the top level (the fifth floor, according to the story) and the bottom level (the first floor; in my interpretation, a basement). The top level contains the setting of the story, filled with LEDs, and the roof of the building, which has two servo motors hidden inside it. The bottom level is full of wires that connect the top level’s components to power, as well as to Processing and Arduino through a RedBoard.

Left: View of the complete house. Center: Bottom level (LEDs and wires). Right: Top level (decorative elements, LEDs, and servo motors).

I now include a video of the final project, to serve as the frame of reference for the explanation of the process. This video shows the interaction in English.

I mentioned that the video’s interaction is in English because, as the starting page of the Processing sketch shows, there is also a version of the project in Spanish, which the user can opt for. To me, it was important to include a version of the experience guided by the original story, given that I could never accomplish an accurate imitation of García Marquez’s style in a translation. Because I’m such a big fan of his writing, I wanted his words to be available to Spanish-speaking users.

On a related note, I asked a fellow classmate to help me with the project because I imagined that his voice, specifically, would make the narration of the story much richer than if I had done it myself. Not only is he a great speaker, but he is also Colombian; thus, I thought, it becomes easier to imagine García Márquez himself reciting the text. Perfect casting (thank you, Sebastián!).

In terms of the structure, the bottom level was made by cutting a cardboard box and covering it with black adhesive material (I’m not sure whether to call it paper, plastic…). I cut a small rectangle on one of its sides to let the cables of the RedBoard, neopixel strips, and a small light out, so that I could connect them all to an external power source.

The aforementioned small light was used to illuminate the four wires that the user has to connect. I chose to use a breadboard and breadboard wires for the user interaction for a couple of reasons. In the first place, because the story deals with electric circuits, I wanted the user to have the experience of messing with the house’s actual circuits. I originally left the entire “basement” open, so that the user could easily see not only the four wires they had to manipulate, but also the ones that they didn’t have to use (the ones connected to the RedBoard and a second breadboard). I added the four LEDs that are associated with these wires so that they could act not only as indicators for my own code (of whether or not the wires are connected), but also as indicators for the user.

However, during user testing, my first user said it was confusing to know which wires to connect and disconnect, given that there were so many. It wasn’t clear to the user what was expected of them. Following his advice, I added a piece of transparent acrylic (which still allows some visibility) to completely separate the four wire-LED pairs from the rest of the circuit.

I also incorporated written instructions in the Processing sketch, right before the narration begins. In them, I tell the user that they must pay attention to the narration (audio) and both the screen and the house (visual). In this way, they know they must be aware of all these components throughout their experience.

The same user also suggested not having written instructions at all, making the computer screen go black after the title clip, with the instructions transmitted through audio. He thought that this would make the experience with the house more immersive; to separate the narration from the instructions, I could read the latter out loud myself. I recorded these instructions and was willing to make these changes. But… time. This is definitely an improvement that I would have liked to try, even though I did have one worry: what if the user didn’t understand the instructions right away? Another user who tested the project was slightly confused at the beginning as to where the wires should be connected, but she figured it out after reading the instructions a second time. I think the solution would have been to loop the audio instructions for as long as the required task had not been completed.

Another advantage of using a breadboard as the interface is that, by covering up most of the board with tape and leaving one of the positive rails uncovered, I ask the user to connect the wires “along the red line” and don’t have to worry about where exactly they’ll connect them, or in which order, given that all the openings in the rail act the same.

Regarding decorative elements, the bottom level has a large number 47, in reference to the story’s building number, and a set of stairs and floors on one of its sides. I did this because the story mentions that the brothers and their parents live on the fifth floor of their building. Therefore, there’s the bottom level, three floors in between, and the top level, all connected by stairs.

These decorations (as well as the ones in the top level) were very successful during user testing. People appreciated these small details, even though the information about the building isn’t provided until the very end of the narration.

Stairs and floors and stairs and floors and stairs and floors and stairs and floors…

The top level was more complex. I built the box myself, because it needed double walls. The neopixel strips that simulate the water-electricity are glued to the outer walls, and the inner walls are covered with translucent paper that lets the user see the light of the neopixels while diffusing it a bit. The same was done with the ceiling.

There are two small openings in the ceiling that go through the cardboard and the paper. I cut a straw to get two small pieces that I glued to these openings. Each of the two servo motors has a wire tied to its arm (which has openings itself, facilitating the process of tying the wire), which moves up and down through the straw when the servo rotates. This mechanism allows the up and down movement of the boat.

The decorative elements inside the top level were printed out on paper and made sturdy by gluing them to cardboard and to wooden sticks that go through the cardboard floor. For the four lamps, wires attached to yellow LEDs also go through the cardboard, so that each of the lamps turns on and off in response to the user’s actions. Additionally, there are two other pieces of furniture inside the house: a grand piano and a bar with a wine bottle. These are also referenced in the story.

I decided to make the “flooding” neopixels blue until the very end of the story, when they become yellow and “end” the metaphor of light as water. If one googles this story, the image results mostly show yellow waves and currents, and in the story, the children’s adventures are very explicitly described as occurring in the light, and not water (though water-related terms are constantly used in the text). The advantage of using actual light to depict García Márquez’s water-electricity is that no matter its color, the light is still (actual, physical) light. Thus, in my opinion, the metaphor becomes stronger with the lights in blue, like water. I also made them randomly change to different shades of blue with every loop in the Arduino code, to resemble shimmering water.

This is a compilation of Google Images results for “la luz es como el agua”; all of them show the water colored yellow, or alternatively, the light shaped as waves.

The following is the code that was used in this assignment.

Arduino:

#include <Adafruit_NeoPixel.h>
#ifdef __AVR__
 #include <avr/power.h>
#endif

#include <Servo.h>

Servo boatRight;
Servo boatLeft;

const int ledGreen = 10;
const int ledRed = 11;
const int ledBlue = 2;
const int ledYellow = 13;

const int tallLeft= 8;
const int shortLeft = 7;
const int tallRight = 4;
const int shortRight = 3;

const int PIN = 6;

const int NUMPIXELS = 180;

int stage;

int serialComm;

const int colors[] = {47, 86, 233, 45, 100, 245, 47, 141, 255, 51, 171, 249,
52, 204, 255, 82, 219, 255};
const int colorsFire[] = {255, 127, 0, 255, 143, 0, 255, 105, 0, 229, 83, 0};

Adafruit_NeoPixel pixels = Adafruit_NeoPixel(NUMPIXELS, PIN, NEO_GRB + NEO_KHZ800);

void setup() {
 pinMode(ledGreen, INPUT);
 pinMode(ledRed, INPUT);
 pinMode(ledBlue, INPUT);
 pinMode(ledYellow, INPUT);
 pinMode(tallLeft, OUTPUT);
 pinMode(shortLeft, OUTPUT);
 pinMode(tallRight, OUTPUT);
 pinMode(shortRight, OUTPUT);

boatRight.attach(9); 
 boatLeft.attach(5);

setupStuff();

Serial.begin(9600);
 Serial.println("100");
}

void setupStuff(){
 boatRight.write(180); 
 boatLeft.write(0);

pixels.begin();
 pixels.setBrightness(10);
 pixels.clear();
 pixels.show();
 
 stage = -1;

serialComm = 0;
}

void loop() {
 int green = digitalRead(ledGreen);
 int red = digitalRead(ledRed);
 int blue = digitalRead(ledBlue);
 int yellow = digitalRead(ledYellow);
 if(stage == 0){
 if(green == LOW){
 digitalWrite(tallLeft, LOW);
 }
 else if(green == HIGH){
 digitalWrite(tallLeft, HIGH);
 };
 if(red == LOW){
 digitalWrite(shortRight, LOW);
 }
 else if(red == HIGH){
 digitalWrite(shortRight, HIGH);
 };
 if(blue == LOW){
 digitalWrite(tallRight, LOW);
 }
 else if(blue == HIGH){
 digitalWrite(tallRight, HIGH);
 };
 if(yellow == LOW){
 digitalWrite(shortLeft, LOW);
 }
 else if(yellow == HIGH){
 digitalWrite(shortLeft, HIGH);
 };
 if(green == HIGH && red == HIGH && blue == HIGH && yellow == HIGH){
 serialComm = 1;
 boatRight.write(120); 
 boatLeft.write(50); 
 };
 }
 else if(stage == 1){
 if(green == LOW){
 serialComm = 2;
 digitalWrite(tallLeft, LOW);
 for(int i=0; i<9; i++){
 int index = int(random(0,6))*3;
 int R = colors[index];
 int G = colors[index + 1];
 int B = colors[index + 2];
 pixels.setPixelColor(i, R, G, B);
 };
 for(int i=76; i<88; i++){
 int index = int(random(0,6))*3;
 int R = colors[index];
 int G = colors[index + 1];
 int B = colors[index + 2];
 pixels.setPixelColor(i, R, G, B);
 };
 for(int i=115; i<124; i++){
 int index = int(random(0,6))*3;
 int R = colors[index];
 int G = colors[index + 1];
 int B = colors[index + 2];
 pixels.setPixelColor(i, R, G, B);
 };
 pixels.show();
 for (int pos = 50; pos <= 120; pos ++) {
 boatLeft.write(pos); 
 boatRight.write(120 - pos);
 delay(15);
 }
 for (int pos = 120; pos >= 50; pos --) { // sweep back down
 boatRight.write(120 - pos);
 boatLeft.write(pos);
 delay(15);
 }
 };
 }
 else if(stage == 2){
 if(red == LOW){
 serialComm = 3;
 digitalWrite(shortRight, LOW);
 for(int i=0; i<18; i++){
 int index = int(random(0,6))*3;
 int R = colors[index];
 int G = colors[index + 1];
 int B = colors[index + 2];
 pixels.setPixelColor(i, R, G, B);
 };
 for(int i=63; i<88; i++){
 int index = int(random(0,6))*3;
 int R = colors[index];
 int G = colors[index + 1];
 int B = colors[index + 2];
 pixels.setPixelColor(i, R, G, B);
 };
 for(int i=106; i<124; i++){
 int index = int(random(0,6))*3;
 int R = colors[index];
 int G = colors[index + 1];
 int B = colors[index + 2];
 pixels.setPixelColor(i, R, G, B);
 };
 pixels.show();
 for (int pos = 50; pos <= 140; pos ++) {
 boatLeft.write(pos); 
 boatRight.write(130 - pos);
 delay(15);
 }
 for (int pos = 140; pos >= 50; pos --) { // sweep back down
 boatRight.write(130 - pos);
 boatLeft.write(pos);
 delay(15);
 }
 };
 }
else if(stage == 3){
 if(blue == LOW){
 serialComm = 4;
 digitalWrite(tallRight, LOW);
 for(int i=0; i<27; i++){
 int index = int(random(0,6))*3;
 int R = colors[index];
 int G = colors[index + 1];
 int B = colors[index + 2];
 pixels.setPixelColor(i, R, G, B);
 };
 for(int i=49; i<88; i++){
 int index = int(random(0,6))*3;
 int R = colors[index];
 int G = colors[index + 1];
 int B = colors[index + 2];
 pixels.setPixelColor(i, R, G, B);
 };
 for(int i=97; i<124; i++){
 int index = int(random(0,6))*3;
 int R = colors[index];
 int G = colors[index + 1];
 int B = colors[index + 2];
 pixels.setPixelColor(i, R, G, B);
 };
 pixels.show();
 for (int pos = 50; pos <= 160; pos ++) {
 boatLeft.write(pos); 
 boatRight.write(160 - pos);
 delay(15);
 }
 for (int pos = 160; pos >= 50; pos --) { // sweep back down
 boatRight.write(160 - pos);
 boatLeft.write(pos);
 delay(15);
 }
 };
 }
else if(stage == 4){
 if(yellow == LOW){
 serialComm = 5;
 digitalWrite(shortLeft, LOW);
 for(int i=0; i<36; i++){
 int index = int(random(0,6))*3;
 int R = colors[index];
 int G = colors[index + 1];
 int B = colors[index + 2];
 pixels.setPixelColor(i, R, G, B);
 };
 for(int i=38; i<88; i++){
 int index = int(random(0,6))*3;
 int R = colors[index];
 int G = colors[index + 1];
 int B = colors[index + 2];
 pixels.setPixelColor(i, R, G, B);
 };
 for(int i=88; i<181; i++){
 int index = int(random(0,6))*3;
 int R = colors[index];
 int G = colors[index + 1];
 int B = colors[index + 2];
 pixels.setPixelColor(i, R, G, B);
 };
 pixels.show();
 for (int pos = 50; pos <= 180; pos ++) {
 boatLeft.write(pos); 
 boatRight.write(180 - pos);
 delay(15);
 }
 for (int pos = 180; pos >= 50; pos --) { // sweep back down
 boatRight.write(180 - pos);
 boatLeft.write(pos);
 delay(15);
 }
 };
 }
 else if(stage == 5){
 serialComm = 6;
 for(int i=0; i<181; i++){
 int index = int(random(0,4))*3;
 int R = colorsFire[index];
 int G = colorsFire[index + 1];
 int B = colorsFire[index + 2];
 pixels.setPixelColor(i, R, G, B);
 };
 pixels.show();
 boatRight.write(180); 
 boatLeft.write(0);
 }
 else if(stage == 6){
 setupStuff();
 };
 if(Serial.available() > 0){
 
 stage = Serial.read();
 Serial.println(serialComm);
 };
}

Processing:

import ddf.minim.*;
import ddf.minim.analysis.*;
import ddf.minim.effects.*;
import ddf.minim.signals.*;
import ddf.minim.spi.*;
import ddf.minim.ugens.*;

Minim minim;
import processing.sound.*;

import processing.serial.*;
Serial myPort;

SoundFile beginSound;
AudioPlayer iESP, iiESP, iiiESP, ivESP, vESP, viESP, viiESP;
AudioPlayer iENG, iiENG, iiiENG, ivENG, vENG, viENG, viiENG;

PFont font1, font2;
boolean startESP, startENG;
int colorChange;
int colorChangerESP, colorChangerESP1, colorChangerESP2, colorChangerESP3;
int colorChangerENG, colorChangerENG1, colorChangerENG2, colorChangerENG3;
String intro1, intro2, intro3, intro4, intro5, intro6, intro7;
int alphaCounter1, alphaCounter2;
boolean alpha;
boolean begin, narration, story, end, restart;
int track;
int bgColor;
String text1, subtext1, text2, text3, text4, text5;
String og, tr, na;
int serialComm;
int stage;

void menu(){
 textAlign(LEFT);
 if(mouseX >= width/10*2.5 && mouseX <= width/10*4.5 
 && mouseY >= height/9*4 && mouseY <= height/9*5){
 if(mousePressed){
 startESP = true;
 }
 else{
 if(colorChangerESP < colorChange){
 colorChangerESP = colorChange;
 }
 if(colorChangerESP1 < colorChange - 30){
 colorChangerESP1 = colorChange - 30;
 }
 if(colorChangerESP2 < colorChange - 60){
 colorChangerESP2 = colorChange - 60;
 }
 if(colorChangerESP3 < colorChange - 90){
 colorChangerESP3 = colorChange - 90;
 }
 if(colorChangerESP3 < 255){
 colorChangerESP+=10;
 colorChangerESP1+=10;
 colorChangerESP2+=10;
 colorChangerESP3+=10;
 }
 };
 }
 else{
 colorChangerESP-=10;
 colorChangerESP1-=10; 
 colorChangerESP2-=10;
 colorChangerESP3-=10;
 };
 if(mouseX >= width/10*5.5 && mouseX <= width/10*7.5 
 && mouseY >= height/9*4 && mouseY <= height/9*5){
 if(mousePressed){
 startENG = true;
 }
 else{
 if(colorChangerENG < colorChange){
 colorChangerENG = colorChange;
 }
 if(colorChangerENG1 < colorChange - 30){
 colorChangerENG1 = colorChange - 30;
 }
 if(colorChangerENG2 < colorChange - 60){
 colorChangerENG2 = colorChange - 60;
 }
 if(colorChangerENG3 < colorChange - 90){
 colorChangerENG3 = colorChange - 90;
 }
 if(colorChangerENG3 < 255){
 colorChangerENG+=10;
 colorChangerENG1+=10; 
 colorChangerENG2+=10;
 colorChangerENG3+=10;
 }
 };
 }
 else{
 colorChangerENG-=10;
 colorChangerENG1-=10; 
 colorChangerENG2-=10;
 colorChangerENG3-=10;
 }
 colorMode(HSB);
 noFill();
 strokeWeight(4);
 stroke(35, 100, colorChangerESP1);
 rect(width/10*2.5, height/9*4, width/10*2, height/9);
 stroke(140, 100, colorChangerENG1);
 rect(width/10*5.5, height/9*4, width/10*2, height/9);
 strokeWeight(3);
 stroke(35, 100, colorChangerESP2);
 rect(width/10*2.5 - 10, height/9*4 - 10, width/10*2 + 20, height/9 + 20);
 stroke(140, 100, colorChangerENG2);
 rect(width/10*5.5 - 10, height/9*4 - 10, width/10*2 + 20, height/9 + 20);
 strokeWeight(2); 
 stroke(35, 100, colorChangerESP3);
 rect(width/10*2.5 - 17.5, height/9*4 - 17.5, width/10*2 + 35, height/9 + 35);
 stroke(140, 100, colorChangerENG3);
 rect(width/10*5.5 - 17.5, height/9*4 - 17.5, width/10*2 + 35, height/9 + 35);
 fill(30, 100, colorChangerESP);
 text("español", width/10*3, height/9*4.65);
 fill(140, 100, colorChangerENG);
 text("english", width/10*6, height/9*4.65);
}

void begin(){
 if(startESP){
 intro1 = "La luz";
 intro2 = " es como el agua:";
 intro3 = "uno abre el grifo,";
 intro4 = " y sale.";
 intro5 = "Esta es una experiencia interactiva audiovisual.";
 intro6 = "La historia será transmitida por audio, las instrucciones aparecerán en la pantalla, y la casa cobrará vida."; 
 intro7 = "Haz click para continuar.";
 bgColor = 140;
 }
 else if(startENG){
 intro1 = "Light";
 intro2 = "is like water:";
 intro3 = "one turns the tap,";
 intro4 = "and out it comes.";
 intro5 = "This is an audiovisual interactive experience.";
 intro6 = "The story will be transmitted by audio, the instructions will appear on screen, and the house will come to life.";
 intro7 = "Click to continue.";
 bgColor = 22;
 };
 if(begin == false && alphaCounter1 >= 20){
 beginSound.play();
 begin = true;
 };
 if(alpha == false){
 alphaCounter1++;
 if(alphaCounter1 > 65){
 alphaCounter2++;
 }
 };
 if(alphaCounter2 >= 240){
 fill(bgColor, 100, 255, alphaCounter2 - 240);
 rect(0, 0, width, height);
 if(mousePressed){
 beginSound.stop();
 narration = true;
 };
 };
 textSize(250);
 fill(0, alphaCounter1);
 text(intro1, width/10*2, height/9*3.5);
 textSize(100);
 fill(0, alphaCounter2);
 text(intro2, width/10*5, height/9*4);
 fill(0, alphaCounter1 - 150);
 text(intro3, width/10*2, height/9*5);
 textSize(170);
 fill(0, alphaCounter2 - 145);
 text(intro4, width/10*3, height/9*6.5);
 textSize(50);
 fill(0, alphaCounter2 - 200);
 text("- Gabriel García Márquez", width/10*6, height/9*8);
 fill(255);
 textFont(font1);
 text(intro5, width/50, height/15);
 text(intro6, width/50, height/15 + 30);
 text(intro7, width/50, height/15 + 60);
}

void story(){
 textAlign(CENTER);
 textFont(font1);
 fill(0);
 if(story == false){
 alphaCounter1 = -10;
 if(track == 1){
 if(startENG == true){
 iENG.play();
 }
 else if(startESP == true){
 iESP.play();
 };
 }
 else if(track == 2){
 if(startENG == true){
 iiENG.play();
 }
 else if(startESP == true){
 iiESP.play();
 };
 }
 else if(track == 3){
 if(startENG == true){
 iiiENG.play();
 }
 else if(startESP == true){
 iiiESP.play();
 };
 }
 else if(track == 4){
 if(startENG == true){
 ivENG.play();
 }
 else if(startESP == true){
 ivESP.play();
 };
 }
 else if(track == 5){
 if(startENG == true){
 vENG.play();
 }
 else if(startESP == true){
 vESP.play();
 };
 }
 else if(track == 6){
 if(startENG == true){
 viENG.play();
 }
 else if(startESP == true){
 viESP.play();
 };
 }
 else if(track == 7){
 if(startENG == true){
 viiENG.play();
 }
 else if(startESP == true){
 viiESP.play();
 };
 };
 story = true;
 }
 else{
 if(track == 1){
 if(startENG == true){
 text1 = "Connect all the cables to the positive rail";
 subtext1 = "(along the red line)";
 }
 else if(startESP == true){
 text1 = "Conecta todos los cables al bus positivo";
 subtext1 = "(a lo largo de la línea roja)";
 };
 if(! iENG.isPlaying() && ! iESP.isPlaying()){
 stage = 0;
 alphaCounter1++;
 fill(0, alphaCounter1);
 text(text1, width/2, height/2);
 text(subtext1, width/2, height/2 + 35);
 if(serialComm == 1){
 story = false;
 track++;
 };
 };
 }
 else if(track == 2){
 if(startENG == true){
 text2 = "Disconnect the green cable";
 }
 else if(startESP == true){
 text2 = "Desconecta el cable verde";
 };
 if(! iiENG.isPlaying() && ! iiESP.isPlaying()){
 stage = 1;
 alphaCounter1++;
 fill(0, alphaCounter1);
 text(text2, width/2, height/2);
 if(serialComm == 2){
 story = false;
 track++;
 };
 };
 }
 else if(track == 3){
 if(startENG == true){
 text3 = "Disconnect the red cable";
 }
 else if(startESP == true){
 text3 = "Desconecta el cable rojo";
 };
 if(! iiiENG.isPlaying() && ! iiiESP.isPlaying()){
 stage = 2;
 alphaCounter1++;
 fill(0, alphaCounter1);
 text(text3, width/2, height/2);
 if(serialComm == 3){
 story = false;
 track++;
 };
 };
 }
 else if(track == 4){
 if(startENG == true){
 text4 = "Disconnect the blue cable";
 }
 else if(startESP == true){
 text4 = "Desconecta el cable azul";
 };
 if(! ivENG.isPlaying() && ! ivESP.isPlaying()){
 stage = 3;
 alphaCounter1++;
 fill(0, alphaCounter1);
 text(text4, width/2, height/2);
 if(serialComm == 4){
 story = false;
 track++;
 };
 };
 }
 else if(track == 5){
 if(startENG == true){
 text5 = "Disconnect the yellow cable";
 }
 else if(startESP == true){
 text5 = "Desconecta el cable amarillo";
 };
 if(! vENG.isPlaying() && ! vESP.isPlaying()){
 stage = 4;
 alphaCounter1++;
 fill(0, alphaCounter1);
 text(text5, width/2, height/2);
 if(serialComm == 5){
 story = false;
 track++;
 };
 };
 }
 else if(track == 6){
 if(! viENG.isPlaying() && ! viESP.isPlaying()){
 stage = 5;
 if(serialComm == 6){
 story = false;
 track++;
 };
 };
 }
 else if(track == 7){
 if(! viiENG.isPlaying() && ! viiESP.isPlaying()){
 stage = 6;
 end = true;
 alphaCounter2 = -10;
 };
 };
 };
}

void credits(){
 fill(bgColor, 100, 255, alphaCounter2);
 alphaCounter2++;
 rect(0, 0, width, height);
 textAlign(CENTER);
 textFont(font2);
 fill(0, alphaCounter2);
 if(startENG == true){
 og = "Original story";
 tr = "Translation and editing";
 na = "Narration";
 }
 else if(startESP == true){
 og = "Cuento original";
 tr = "Edición";
 na = "Narración";
 };
 textSize(70);
 text(og, width/2, height*0.25);
 textSize(50);
 text("Gabriel García Márquez", width/2, height*0.32);
 textSize(70);
 text(tr, width/2, height*0.5);
 textSize(50);
 text("María Laura Mirabelli", width/2, height*0.57);
 textSize(70);
 text(na, width/2, height*0.75);
 textSize(50);
 text("Sebastián Rojas Cabal", width/2, height*0.82);
 if(startESP == true){
 intro5 = "Haz click para finalizar";
 }
 else if(startENG == true){
 intro5 = "Click to end";
 };
 textAlign(LEFT);
 fill(255);
 textFont(font1);
 text(intro5, width/50, height/15);
 if(mousePressed){
 restart = true;
 };
};

void setup(){
  String portName = Serial.list()[2];
  myPort = new Serial(this, portName, 9600);
  myPort.clear();
  myPort.bufferUntil('\n');

  fullScreen();
  font1 = createFont("Arvo-Bold.ttf", 30);
  font2 = createFont("Handycheera.otf", 70);

  setupStuff();
}

void setupStuff(){
  // Reset all state so the sketch can be restarted from the credits screen
  startESP = startENG = false;
  colorChange = 200;
  colorChangerESP = colorChangerESP1 = colorChangerESP2 = colorChangerESP3 = 0;
  colorChangerENG = colorChangerENG1 = colorChangerENG2 = colorChangerENG3 = 0;
  intro1 = intro2 = intro3 = intro4 = intro5 = intro6 = intro7 = "";
  alphaCounter1 = alphaCounter2 = -10;
  alpha = false;
  begin = narration = story = end = restart = false;
  bgColor = 0;
  track = 1;
  serialComm = 0;
  stage = -1;

  // Load the seven narration clips in both languages
  beginSound = new SoundFile(this, "beginSound.wav");
  minim = new Minim(this);
  iESP = minim.loadFile("I.wav");
  iiESP = minim.loadFile("II.wav");
  iiiESP = minim.loadFile("III.wav");
  ivESP = minim.loadFile("IV.wav");
  vESP = minim.loadFile("V.wav");
  viESP = minim.loadFile("VI.wav");
  viiESP = minim.loadFile("VII.wav");
  iENG = minim.loadFile("1.wav");
  iiENG = minim.loadFile("2.wav");
  iiiENG = minim.loadFile("3.wav");
  ivENG = minim.loadFile("4.wav");
  vENG = minim.loadFile("5.wav");
  viENG = minim.loadFile("6.wav");
  viiENG = minim.loadFile("7.wav");
}

void draw(){
  background(255);
  textFont(font2);
  // Simple state machine: language menu -> intro -> narrated story -> credits
  if(!startESP && !startENG){
    narration = false;
    menu();
  }
  else if(!narration){
    begin();
  }
  else if(!end){
    story();
  }
  else{
    if(restart){
      setupStuff();  // reset everything once the user clicks on the credits
    }
    credits();
  }
}

void serialEvent(Serial myPort){
  // The Arduino reports which step the user has completed; reply with the
  // current stage so the Arduino knows what to check for next.
  String s = myPort.readStringUntil('\n');
  if(s != null){
    serialComm = int(trim(s));
  }
  myPort.write(stage);
}

Birdy: Full Documentation

There was one thing I knew I wanted my final project to be from the moment I started thinking about it: cute. I also knew that a) I wanted the project to be Processing-heavy but light on physical computing, and b) I wanted it to be a game. I really wanted to practice coding more, and I had particularly enjoyed the week in class when we made a game (I made “Blobby”).

After more in-depth brainstorming and class suggestions, I came up with the following idea for a game: the user would be a bird and, by flapping his/her wings, would fly the bird around various environments. Games would be hidden or placed around the environments, and the bird could go around playing them. Later, I fleshed out the idea further: the bird had to play the games in order to earn “seeds” (points) to feed her babies, which were to hatch soon. Once the bird collected a certain number of seeds, the game would end, the user would win, and the babies would hatch and fly happily around the screen with the mother bird.

The process of making the game looked like this:

a) Planning: I had to plan all the various games, decide what types of visuals they would need, and think about how to convey all of the game’s instructions to the user. Doing this was critical to avoid wasting time in the later steps.

b) Design via Illustrator: I spent many, many hours before even starting the code creating all the visual elements of the games. I created the bird(s) myself in Illustrator (drawing multiple frames so the bird would look like it was flapping its wings), found various free Illustrator elements online that I then had to adapt considerably to fit my vision, and had to make multiple versions of everything to create the blinking and movement effects I wanted. Overall, this was extremely time-consuming (especially since I had to redo quite a lot of it later to make sure all the file sizes were consistent and appropriate for Processing), but it was a crucial step, since the visual aspect of the game was important to its success.
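To give a sense of how such frames end up animating on screen, here is a minimal Processing sketch of the frame-cycling technique behind the flapping effect; the file names, frame count, and timing are hypothetical placeholders, not the actual Birdy assets.

// Hypothetical sketch of frame-based animation (not the actual Birdy code)
PImage[] birdFrames = new PImage[3];
int currentFrame = 0;

void setup(){
  size(800, 600);
  // Assumes bird0.png, bird1.png, bird2.png exist in the sketch's data folder
  for(int i = 0; i < birdFrames.length; i++){
    birdFrames[i] = loadImage("bird" + i + ".png");
  }
  imageMode(CENTER);
}

void draw(){
  background(255);
  // Advance to the next wing position every 5 frames, looping back to the start
  if(frameCount % 5 == 0){
    currentFrame = (currentFrame + 1) % birdFrames.length;
  }
  image(birdFrames[currentFrame], width/2, height/2);
}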

[Below are just a few of the Illustrator elements I created or edited.]

c) Coding: The coding was undoubtedly the hardest part of the project. Not only did I have to write the code for five different minigames, but I also had to write the code for the overarching, big-picture aspects of the game: how, and for how long, to display instructions; where to start and end the game; how to let the user find the different games; whether the user can play each game more than once; how to let the user move from one environment to the next; whether the user should be allowed to move to another game without finishing the first; and so on. I didn’t realize how complicated this process was going to be until I was rather far along in the project, when I started to appreciate how these seemingly unimportant questions actually make or break a game completely. However, one thing I am very proud of and would like to note is the following: I did know that, at the very least, the code was going to be rather lengthy, so I spoke with a computer science friend who helped me map out a plan for it. It was through this that I got the following idea: instead of writing completely separate code for each minigame, I could use classes to reuse the same code over and over, adapting it for each game. This made perfect sense for my game, since each minigame has an “other” (whether it is a coconut, a fish, etc.) and in each one some sort of overlap needs to be detected. By using classes this way, the code ended up far simpler, cleaner, and shorter than it would have been otherwise, and the experience demonstrated to me that planning before coding is crucial to avoid headaches and wasted time.
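To illustrate the idea, here is a rough sketch in the spirit of that approach (the class name, fields, and radius-based collision test are placeholders of mine, not the actual Birdy code): a single class represents the “other” in every minigame, and only the image and the meaning of an overlap change from game to game.

// Hypothetical sketch of a reusable "other" class (not the actual Birdy code)
class Other {
  PImage img;   // what this object looks like: a coconut, a fish, etc.
  float x, y;   // position on screen
  float r;      // collision radius

  Other(PImage img, float x, float y, float r){
    this.img = img;
    this.x = x;
    this.y = y;
    this.r = r;
  }

  void display(){
    image(img, x, y);
  }

  // Simple circle-based overlap test against the bird
  boolean overlaps(float birdX, float birdY, float birdR){
    return dist(x, y, birdX, birdY) < r + birdR;
  }
}

Each minigame can then create its own instances and decide what an overlap means (catching a fish versus being hit by a coconut) without duplicating the collision code.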

[The Processing code for the game can be found here, and the Arduino code can be found here.]

d) Physical computing: The physical aspect of the game actually turned out to be rather straightforward and reliable, which is something I cannot say about most of my experiences with sensors in this class. I used accelerometers, which measure (as the name suggests) acceleration, i.e. how quickly one’s motion is changing. I wanted to use them to measure the user’s flapping, and they were perfect for it, because the user truly had to flap to make the bird on the screen move; moving really slowly would not work. All I really needed to do was read the x, y, and z values from the accelerometer and measure the difference between each reading and the previous one (to see if the user was flapping). I found an equation for this online: sqrt[(x2 – x1)^2 + (y2 – y1)^2 + (z2 – z1)^2], i.e. the distance between two consecutive readings in 3D space. As for the physical build: I soldered each of the two accelerometers to six-foot-long cables, which plugged into the Arduino/breadboard. (Each accelerometer needed six cables, so I soldered twelve six-foot cables in total.) Then I hid the breadboard and Arduino in a pretty box, covered the cables in pretty, flowery tape, zip-tied the ends of the accelerometers to a pair of black gloves, and hot-glued tons of pink feathers onto the gloves. Overall, I was very happy with how it all looked: the green/pink theme of the box, cables, and gloves fit perfectly with the green/pink theme on the screen.
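As a concrete sketch of that equation in code: Processing’s six-argument dist() computes exactly that 3D difference between consecutive readings. The serial-port index, message format, and threshold below are assumptions for illustration, not the values from the finished project.

import processing.serial.*;

// Hypothetical flap-detection sketch; assumes the Arduino sends one
// comma-separated "x,y,z" accelerometer reading per line.
Serial port;
float px, py, pz;        // previous reading
boolean flapping = false;

void setup(){
  size(400, 400);
  port = new Serial(this, Serial.list()[0], 9600);  // port index is a guess
  port.bufferUntil('\n');
}

void draw(){
  // Crude feedback: dark screen while a flap is being detected
  background(flapping ? 0 : 255);
}

void serialEvent(Serial p){
  String line = p.readStringUntil('\n');
  if(line == null) return;
  float[] a = float(split(trim(line), ','));
  if(a.length < 3) return;
  // sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2): the distance between
  // consecutive readings in 3D, via Processing's dist()
  float movement = dist(px, py, pz, a[0], a[1], a[2]);
  flapping = movement > 1.5;  // hypothetical threshold: a fast change = a flap
  px = a[0];
  py = a[1];
  pz = a[2];
}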

[Below are pictures of the end product. Note how in the picture on the left, Craig’s pink wig makes an appearance.]

The IM Show:

I was really happy with how the game was received at the IM show, especially when a girl came to play, lost the first time, asked to play again, then won, jumped up and down with excitement, took a picture of the winning screen, and gave me a hug. 😛 While not everybody was as excited as she was, a lot of people found the game really cute, rather fun (particularly because of the winged gloves), and overall a nice idea. I do, however, think that it might not have been the interaction best suited to the show: the game is rather long if played all the way through, and most people wanted to spend only a short time at each interaction so that they could get through all of them, which meant a lot of people stopped playing partway through. Still, most people seemed to really like it in the end, and I was incredibly happy to share the game with others.

[Here are two extra photos: one of Luize posing after playing the game and setting the high score, and another of the feathers that were sacrificed during the IM show.]

Demystifying Technology

As ubiquitous as technology is nowadays, it remains a fairly opaque part of my life. I often feel I do not have the tools to understand how different systems work, and I end up relying on others to tell me what things do, or what I can do with them. My interest in Interactive Media stemmed from a desire to understand technology better, both in technical terms and in terms of its sociopolitical dimension. I am still not able to put forth a satisfactory definition of computing; I can only say, for example, that it largely entails the use of recurring mathematical operations to perform a host of different tasks. Even so, I think this has been enough to have a profound impact on my life. I have been able to demystify a lot of the technology around me and grow more aware of how it operates and how much it can actually do. I do not see technology as the solution to all the world’s problems, but neither do I think that technology by itself is what harms us. Instead, engaging with interactive media and art through software has reaffirmed my belief that we should all strive to understand technology in order to make it work for (most of) us instead of against us, and that we must fight the urge to leave the technicalities to others, because those technicalities (and the biases built into them) can affect our lives in profound ways.