Assignment 12: Crystallic

The Crystallic visualization transforms live video frames into a grid of interconnected areas of distinct colors.

Input frames are sampled at every nth pixel in both the x and y dimensions, where n is a preset constant (7 in the final sketch). Each sampled pixel’s color is compared against a palette of 26 colors and the closest palette color is identified. The algorithm then considers the neighbors of the sampled pixel (pixels n away in each dimension). If a neighbor is assigned the same palette color, a line is drawn between the two pixels; if not, no line is drawn. This leaves white-space boundaries between the distinct color bands identified in the frame.
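
For example (a worked calculation with an arbitrary sampled value): a sampled pixel of RGB (200, 90, 40) lies closest, by the Euclidean RGB distance used in the code below, to the palette color (255, 127, 0), at a distance of about 77, so that is the color the pixel is treated as having when its neighbors are examined.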

It is possible to change the look of the visualization by choosing different values for the clockwiseExtraX and clockwiseExtraY arrays. The entries of these two arrays define the neighbor offsets that are considered (the commented-out variants in the code below correspond to the patterns named here). Removing some entries makes the visualization consider fewer neighbors.

The visualization can thus be modified to produce a square pattern, a slanted-squares pattern, or a drawing-like pattern of diagonal lines.

Furthermore, changing the sampling distance changes the granularity of the visualization, which can produce a more modern-art look.

However, reducing the sampling distance slows down the visualization; to keep it responding in near real time, it was determined that the sampling distance should not be reduced below 7.
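
To put rough numbers on this (simple arithmetic for the 640 × 480 capture used below): with a sampling distance of 7 there are about 92 × 69 ≈ 6,300 sampled pixels per frame, and each sample triggers up to five nearest-color searches over the palette (one for the pixel itself plus one per neighbor offset in the diamonds variant). Because the number of samples grows with the square of 1/n, halving the sampling distance roughly quadruples the per-frame work.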

An additional problem concerned the choice of colors in the palette. Originally, the visualization used only eight colors: all combinations of 0 and 255 across the three RGB channels. This led to visualizations with too many flat surfaces; the banding effect was too extreme. To increase the variety of colors, the set of HTML/CSS named colors was considered instead. However, that palette mixes “extreme” colors (RGB components of only 0 and 255) with just two non-extreme colors, orange and rebeccaPurple, and those two proved closest to too many of the sampled colors. The result was an over-abundance of purple in the output.

A solution was to return to the constructed palette, increasing the number of combinations to 27 by adding a third RGB level (127). The palette colors are thus again spread evenly across the sampled color space. This was still not optimal, however:

There was an overabundance of gray in the output visualization in bad light conditions (which means, basically, all the time), causing the person’s face to blend with the background. Removing the gray color from the palette proved to be an appropriate solution to the problem; thus, the final number of colors in the palette was reduced to 26.
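
As an aside, a palette like this could also be generated programmatically rather than written out by hand. The sketch below is a hypothetical alternative (the buildPalette() helper does not appear in the final code, which lists the 26 colors explicitly):

// Hypothetical helper (not part of the final sketch): build the 26-color palette
// from three nested loops over the levels {0, 127, 255}, skipping gray (127,127,127).
color[] buildPalette() {
  int[] levels = {0, 127, 255};
  color[] palette = new color[26];
  int count = 0;
  for (int r : levels) {
    for (int g : levels) {
      for (int b : levels) {
        if (r == 127 && g == 127 && b == 127) continue;  // drop the gray combination
        palette[count++] = color(r, g, b);
      }
    }
  }
  return palette;
}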

The code is presented below.

import processing.video.*;

Capture video;

int shapeSize = 7;  // sampling distance n; values below 7 are too slow for near-real-time use

// 26-color palette: every combination of {0, 127, 255} per RGB channel,
// minus gray (127,127,127)
color [] colors = {
  color(0,0,0),color(0,0,127),color(0,0,255),
  color(0,127,0),color(0,127,127),color(0,127,255),
  color(0,255,0),color(0,255,127),color(0,255,255),
  color(127,0,0),color(127,0,127),color(127,0,255),
  color(127,127,0)/*,color(127,127,127)*/,color(127,127,255),
  color(127,255,0),color(127,255,127),color(127,255,255),
  color(255,0,0),color(255,0,127),color(255,0,255),
  color(255,127,0),color(255,127,127),color(255,127,255),
  color(255,255,0),color(255,255,127),color(255,255,255)
};

// Neighbor offsets, listed clockwise; each index pairs an x offset with a y offset.
// diamonds:
int [] clockwiseExtraX = {0, shapeSize, shapeSize, shapeSize};
int [] clockwiseExtraY = {-1*shapeSize, -1*shapeSize, 0, shapeSize};

// squares:
//int [] clockwiseExtraX = {0,shapeSize};
//int [] clockwiseExtraY = {-1*shapeSize,0};

// slanted squares:
//int [] clockwiseExtraX = {shapeSize, shapeSize};
//int [] clockwiseExtraY = {-1*shapeSize, shapeSize};

// diagonal lines:
//int [] clockwiseExtraX = {shapeSize};
//int [] clockwiseExtraY = {-1*shapeSize};

void setup() {
  size(1280,960);  // the output window is twice the capture resolution (drawing is scaled 2x)
  video = new Capture(this,640,480,30);
  video.start();
}

void draw() {
  if (video.available()) {
    video.read();
    video.loadPixels();  // make sure video.pixels[] reflects the new frame before sampling
    
    background(255);
    
    for (int y = 0; y < video.height; y += shapeSize) {
      for (int x = 0; x < video.width; x += shapeSize) {
        int closestColor = getClosestColor(x, y);
        
        for (int i = 0; i < clockwiseExtraX.length; i += 1) {
          int otherX = x+clockwiseExtraX[i];
          int otherY = y+clockwiseExtraY[i];
          if (otherX >= 0 && otherX < video.width && otherY >= 0 && otherY < video.height) {
            int otherColor = getClosestColor(otherX, otherY);
            if (closestColor == otherColor) {
              stroke(colors[closestColor]);
              line(x*2, y*2, otherX*2, otherY*2);  // scale by 2 to fill the 1280x960 window
            }
          }
        }
        
        /*noStroke();
        fill(colors[closestColor]);
        ellipse(x*2, y*2, 10, 10);*/
      }
    }
    
    //image(video,0,0);
  }
}

// Returns the index (into colors[]) of the palette color nearest, in RGB space,
// to the video pixel sampled at (x, y).
int getClosestColor(int x, int y) {
  // The x coordinate is flipped so that the output behaves like a mirror.
  int loc = video.width-1-x + y*video.width;
  
  float r = red(video.pixels[loc]);
  float g = green(video.pixels[loc]);
  float b = blue(video.pixels[loc]);
  
  double smallestD = Double.POSITIVE_INFINITY;
  int smallestI = 0;
  for (int i = 0; i < colors.length; i += 1) {
    double d = dist(red(colors[i]),green(colors[i]),blue(colors[i]),r,g,b);
    if (d < smallestD) {
      smallestD = d;
      smallestI = i;
    }
  }
  
  return smallestI;
}