Final Project: An Image That Can’t Be Vandalized

My final project was born out of two motivations. I wanted to play with the concept of cult of personality, and I wanted to do some sort of projection mapping. I thus decided to make an image that couldn’t be vandalized.

In terms of technical implementation, the project has three main components. The first is an infrared camera (a PS3Eye), which I use to track the position of an infrared LED attached to an object resembling a spray can. The second is the projection: both the equipment used to set it up and the adjustments needed to make it work within the spatial constraints. Finally, there is a set of images that are triggered depending on the position of the infrared LED on the canvas; the user perceives these as an animation.

IR LED, Camera & Blob Detection

A PImage variable ‘cam’ (640×480) retains whatever is captured by the PS3Eye.

A PImage ‘adjustedCam’ (width×height) retains what is captured in ‘cam’, but scaled up to the size of the sketch window.

A smaller PImage ‘img’ (80×60) is used for the blob detection. It is not drawn in the Processing sketch but runs in the background. It crops ‘adjustedCam’ so that the IR camera effectively only ‘sees’ the area being projected, which allows a blob to be drawn in the same place where the IR LED is turned on.

Setting the coordinates.

A circuit connected to an IR LED is built into a Pringles can adapted to resemble a spray can. I added a weight to mimic the sensation of holding a spray can, and a ping pong ball to mimic the sound.

Spray can circuit and design.

I use blob detection, a form of pixel manipulation that sorts bright from non-bright pixels, to track the position of the IR LED over the canvas. The presence of a blob, which indicates that the light is on, triggers a drawing at the position of the light.
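
Stripped down to the tracking alone, the pipeline looks roughly like the sketch below. This is a minimal outline, not the full sketch: it assumes the PS3EyeP5 and BlobDetection libraries are installed, and the crop coordinates are the calibration values from my setup (they have to be re-measured whenever the projector or the frame moves).

import com.thomasdiewald.ps3eye.PS3EyeP5;
import blobDetection.*;

PS3EyeP5 ps3eye;
BlobDetection detector;
PImage cam, adjustedCam, img;
// crop of the projected area inside the camera image (calibration values)
int beginX = 247, beginY = 146, endX = 1037, endY = 498;

void setup() {
  fullScreen(P3D);
  ps3eye = PS3EyeP5.getDevice(this);
  ps3eye.start();
  cam = createImage(640, 480, RGB);              // raw PS3Eye frame
  adjustedCam = createImage(width, height, RGB); // camera frame scaled to the window
  img = new PImage(80, 60);                      // small copy fed to BlobDetection
  detector = new BlobDetection(img.width, img.height);
  detector.setPosDiscrimination(true);           // detect bright blobs
  detector.setBlobMaxNumber(1);                  // only the LED
  detector.setThreshold(0.05f);
}

void draw() {
  background(0);
  if (ps3eye.isAvailable()) cam = ps3eye.getFrame();
  // scale the camera frame to the window, then crop out the projected area
  adjustedCam.copy(cam, 0, 0, cam.width, cam.height,
                   0, 0, adjustedCam.width, adjustedCam.height);
  img.copy(adjustedCam, beginX, beginY, endX - beginX, endY - beginY,
           0, 0, img.width, img.height);
  detector.computeBlobs(img.pixels);
  if (detector.getBlobNb() > 0) {
    Blob b = detector.getBlob(0);
    // blob coordinates are normalized, so scale them back to the window
    float posX = b.x * width;
    float posY = b.y * height;
    ellipse(posX, posY, 10, 10);   // the 'spray' lands where the LED points
  }
}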

Projection Setup
This was the most time-consuming aspect of the project: setting up in the space and adjusting the projector’s elevation above the ground and its distance from the wooden canvas. I used two film-set stands to hold the wooden frame.

Projection setup in the IM lab, with the wooden frame.

Animation
There are two components to the animation: what happens when the user ‘sprays’ inside the painting and what happens when they spray outside it.
When they are spraying outside the painting, the painting’s character follows the position of the spray can with his eyes. I do this by mapping the position of two ellipses drawn over the character’s eyes to the position of the blob.
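
The idea in isolation looks something like the toy sketch below, where the mouse stands in for the blob position and the pupil ranges are illustrative rather than the calibrated values from the project.

void setup() {
  size(640, 480);
}

void draw() {
  background(255);
  // eye "sockets"
  noFill();
  stroke(0);
  ellipse(280, 240, 60, 40);
  ellipse(360, 240, 60, 40);
  // map the tracked position (mouse here, blob in the project)
  // into a narrow range inside each eye
  float eyesXleft  = map(mouseX, 0, width, 265, 295);
  float eyesYleft  = map(mouseY, 0, height, 232, 248);
  float eyesXright = map(mouseX, 0, width, 345, 375);
  float eyesYright = map(mouseY, 0, height, 232, 248);
  // pupils
  noStroke();
  fill(0);
  ellipse(eyesXleft, eyesYleft, 8, 8);
  ellipse(eyesXright, eyesYright, 8, 8);
}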

When spraying happens inside the portrait, different frames get triggered depending on the general position of the blob.

Notes from user testing
My user testing pointed me toward the following things, which I implemented in the final project:

  • Add weight to the spray can and protect the circuit because people will want to shake the can — allow them to have that experience.
  • Allow the users to change the color of the spray paint.
  • Make the character in the painting duck.

IM Showcase

Here are some pictures of the IM showcase and the accumulated paintings that resulted from people interacting with my piece.


// - Super Fast Blur v1.1 by Mario Klingemann <http://incubator.quasimondo.com>
// - BlobDetection library

import processing.video.*;
import blobDetection.*;

BlobDetection theBlobDetection;
PImage img;
boolean newFrame=false;
import com.thomasdiewald.ps3eye.PS3EyeP5;

PS3EyeP5 ps3eye;
PImage cam;

PImage adjustedCam; // adjustedCam image

float posX; // for tracking the position of the blob
float posY;

float eyesXleft, eyesYleft; // positioning pupils
float eyesXright, eyesYright;

// images
PImage frame1; // the frames for animation
PImage frame2;
PImage frame3;
PImage frame4;
PImage frame5;
PImage frame6;
PImage frame7;
PImage pic_frame;
PImage noEyes; // image for static/eye-drawing
PImage rainbow; // rainbow color option square

int colPaint;
//int rainbowPaint;
boolean rainbowPaint;

boolean mode; // a boolean indicating whether
// the blob is moving inside or
// outside the frame

void setup()
{
fullScreen(P3D); // fullscreen output for the projector (the commented sizes below were used while testing)
//size(1280,720,P3D);
//fullScreen();
//size(640,480);
ps3eye = PS3EyeP5.getDevice(this);

if (ps3eye == null) {
//System.out.println("No PS3Eye connected. Good Bye!");
exit();
return;
}

// start capturing with 60 fps (default)
ps3eye.start();

// BlobDetection
// img which will be sent to detection (a smaller copy of the cam frame);
cam=createImage(640, 480, RGB);
img = new PImage(80, 60);
adjustedCam = createImage(width, height, RGB);
theBlobDetection = new BlobDetection(img.width, img.height);
theBlobDetection.setPosDiscrimination(true);
theBlobDetection.setBlobMaxNumber(1);
theBlobDetection.setThreshold(0.05f); // will detect bright areas whose luminosity > 0.05f

//===loading images
frame1 = loadImage("frame1.png");
frame2 = loadImage("frame2.png");
frame3 = loadImage("frame3.png");
frame4 = loadImage("frame4.png");
frame5 = loadImage("frame5.png");
frame6 = loadImage("frame6.png");
frame7 = loadImage("frame7.png");
pic_frame = loadImage("frame_picture.png");
rainbow = loadImage("rainbow.png");
noEyes = loadImage("noEyes.png");

//===setting initial color
colPaint = color(255,0,0);
rainbowPaint = false;
}

void draw()
{

if (ps3eye.isAvailable()) {
cam = ps3eye.getFrame();
}

adjustedCam.copy(cam, 0, 0, cam.width, cam.height, 0, 0, adjustedCam.width, adjustedCam.height);
int beginX=247; // for IM show
int beginY=146;
int endX=1037;
int endY=498;

//int beginX=272; // for IM show
//int beginY=169;
//int endX=1099;
//int endY=503;

//int beginX=289;
//int beginY=168;
//int endX=1116;
//int endY=510;

//int beginX=198;
//int beginY=131;
//int endX=396;
//int endY=266;

img.copy(adjustedCam, beginX, beginY, endX-beginX, endY-beginY, 0, 0, img.width, img.height);
//img.copy(adjustedCam, 0, 0, adjustedCam.width, adjustedCam.height, 0, 0, img.width, img.height);
//img.copy(cam, 225, 140, 390-225, 261-140, 0, 0, img.width, img.height);
fastblur(img, 2);
//image(cam, 0,0, width,height);
//fastblur(cam,2);

//float threshold =50;

//img.loadPixels();
//adjustedCam.loadPixels();

//for (int x = 0; x < img.width; x++) {
// for (int y = 0; y < img.height; y++ ) {
// int loc = x + y*img.width;
// // Test the brightness against the threshold
// if (brightness(img.pixels[loc]) > threshold) {
// adjustedCam.pixels[loc] = color(255); // White
// } else {
// adjustedCam.pixels[loc] = color(0); // Black
// }
// }
//}
//img.updatePixels();
//adjustedCam.updatePixels();
theBlobDetection.computeBlobs(img.pixels);
//image(img,0,0, width, height); // comment
drawBlobsAndEdges(false, false, true);
// Display the adjustedCam

image(pic_frame, 0, 0, width, height);
image(frame1, 298, 147, 685, 426);

// detecting if there is a blob or not; to trigger animations
if(theBlobDetection.getBlobNb()>=1){

eyesXleft = map(posX,0,width,604,624);
eyesYleft = map(posY,0,height,325,332);

eyesXright = map(posX,0,width,650,676);
eyesYright = map(posY,0,height,325,336);

// determining MODE. TRUE = animation, FALSE = eye tracking
if (posX>=0 && posX<=367 && posY>=0 && posY<=height || // left area
posX>=912 && posX<=width && posY>=0 && posY<=height || // right area
posX>=368 && posX<=911 && posY>=0 && posY<=227 || // upper area
posX>=368 && posX<=911 && posY>=575 && posY<=height // lower area
) {
mode = false;
} else {
mode = true;
}
if (mode == false) {
image(noEyes, 298, 147, 685, 426);
noStroke();
fill(0);
ellipse(eyesXleft, eyesYleft, 5, 5);
fill(0);
ellipse(eyesXright, eyesYright, 5, 5);

noStroke();
if (rainbowPaint==false){ // to set the color either as rainbow or as solid fill
fill(colPaint);
} else {
rainbowPaint=true;
fill(random(0,255),random(0,255),random(0,255));
}
for(int i = 0; i < 5; i++){ // this gives the graffiti-looking effect
float randX = random(0,20);
randX = randX - 10;

float randY = random(0,20);
randY = randY - 10;

ellipse(posX+randX, posY + randY, 3,3);
}
}
if (mode == true) { //changing the animation frames
if (457 <= posX && posX <= 548 && 303 <= posY && posY <= 515) {
image(frame2, 298, 147, 685, 426);
} else if (638 <= posX && posX <= 723 && 303 <= posY && posY <= 515) {
image(frame5, 298, 147, 685, 426);
} else if (725 <= posX && posX <= 815 && 303 <= posY && posY <= 515) {
image(frame4, 298, 147, 685, 426);
} else if (549 <= posX && posX <= 636 && 303 <= posY && posY <= 515) {
image(frame3, 298, 147, 685, 426);
} else if (458 <= posX && posX <= 816 && 207 <= posY && posY <= 254) {
image(frame6, 298, 147, 685, 426);
} else if (458 <= posX && posX <= 816 && 255 <= posY && posY <= 303) {
image(frame7, 298, 147, 685, 426);
}else {
image(frame1, 298, 147, 685, 426);
}
noStroke();
noFill();
ellipse(posX, posY, 10, 10);
}
} else {
}

////===Color palette

fill(random(0,255),random(0,255),random(0,255));

noStroke();

fill(255,0,0); // color 1
rect(0+15, height-50, 40, 40);

fill(0); // color 2
rect(0+65, height-50, 40, 40);

fill(0,0,255); // color 3
rect(0+115, height-50, 40, 40);

//rect(0+165, height-50, 40, 40); // rainbow
image(rainbow, 0+165, height-50, 40, 40); // rainbow

if (16 <= posX && posX <= 55 && 672 <= posY && posY <= 712){
rainbowPaint = false;
colPaint = color(255,0,0);
} else if (66 <= posX && posX <= 105 && 672 <= posY && posY <= 712){
rainbowPaint = false;
colPaint = color(0);
} else if (106 <= posX && posX <= 155 && 672 <= posY && posY <= 712){
rainbowPaint = false;
colPaint = color(0,0,255);
}else if (166 <= posX && posX <= 205 && 672 <= posY && posY <= 712){
rainbowPaint = true;
} else {
}

//fill(0,255,0,100); // for checking projection map
//rect(0,0,width,height);

}
//
// ==================================================
// get the coordinates of the projection — for mapping
// ==================================================

void mousePressed() {
println(mouseX, mouseY);
// prints the coordinates of where the mouse is
// pressed; the coords of the projection.
}

// ==================================================
// drawBlobsAndEdges()
// ==================================================
void drawBlobsAndEdges(boolean drawBlobs, boolean drawEdges, boolean getCoordinates)
{
noFill();
Blob b;
EdgeVertex eA, eB;
for (int n=0; n<theBlobDetection.getBlobNb(); n++) {
b=theBlobDetection.getBlob(n);
if (b!=null) {
//Edges

if (drawEdges) {
strokeWeight(3);
stroke(0, 255, 0);

for (int m=0; m<b.getEdgeNb(); m++) {
eA = b.getEdgeVertexA(m);
eB = b.getEdgeVertexB(m);

if (eA !=null && eB !=null) {

line(
eA.x*width, eA.y*height,
eB.x*width, eB.y*height
);
}
}
}

// Blobs
if (drawBlobs) {

fill(255, 150);
ellipse(b.x*width, b.y*height, 30, 30);

strokeWeight(1);
stroke(255, 0, 0);
rect(
b.xMin*width, b.yMin*height,
b.w*width, b.h*height
);
}

posX = b.x*width;
posY = b.y*height;
//println("posX");
//println(posX);
//println("posY");
//println(posY);
}
}
}

// ==================================================
// Super Fast Blur v1.1
// by Mario Klingemann
// <http://incubator.quasimondo.com>
// ==================================================
void fastblur(PImage img, int radius)
{
if (radius<1) {
return;
}
int w=img.width;
int h=img.height;
int wm=w-1;
int hm=h-1;
int wh=w*h;
int div=radius+radius+1;
int r[]=new int[wh];
int g[]=new int[wh];
int b[]=new int[wh];
int rsum, gsum, bsum, x, y, i, p, p1, p2, yp, yi, yw;
int vmin[] = new int[max(w, h)];
int vmax[] = new int[max(w, h)];
int[] pix=img.pixels;
int dv[]=new int[256*div];
for (i=0; i<256*div; i++) {
dv[i]=(i/div);
}

yw=yi=0;

for (y=0; y<h; y++) {
rsum=gsum=bsum=0;
for (i=-radius; i<=radius; i++) {
p=pix[yi+min(wm, max(i, 0))];
rsum+=(p & 0xff0000)>>16;
gsum+=(p & 0x00ff00)>>8;
bsum+= p & 0x0000ff;
}
for (x=0; x<w; x++) {
r[yi]=dv[rsum];
g[yi]=dv[gsum];
b[yi]=dv[bsum];
if (y==0) {
vmin[x]=min(x+radius+1, wm);
vmax[x]=max(x-radius, 0);
}
p1=pix[yw+vmin[x]];
p2=pix[yw+vmax[x]];
rsum+=((p1 & 0xff0000)-(p2 & 0xff0000))>>16;
gsum+=((p1 & 0x00ff00)-(p2 & 0x00ff00))>>8;
bsum+= (p1 & 0x0000ff)-(p2 & 0x0000ff);
yi++;
}
yw+=w;
}

for (x=0; x<w; x++) {
rsum=gsum=bsum=0;
yp=-radius*w;
for (i=-radius; i<=radius; i++) {
yi=max(0, yp)+x;
rsum+=r[yi];
gsum+=g[yi];
bsum+=b[yi];
yp+=w;
}
yi=x;
for (y=0; y<h; y++) {
pix[yi]=0xff000000 | (dv[rsum]<<16) | (dv[gsum]<<8) | dv[bsum];
if (x==0) {
vmin[y]=min(y+radius+1, hm)*w;
vmax[y]=max(y-radius, 0)*w;
}
p1=x+vmin[y];
p2=x+vmax[y];

rsum+=r[p1]-r[p2];
gsum+=g[p1]-g[p2];
bsum+=b[p1]-b[p2];

yi+=w;
}
}
}

Demystifying Technology

As ubiquitous as technology is nowadays, it remains a fairly opaque aspect of my life. I often feel I do not have the tools to understand how different systems work, and end up relying on others to tell me what things do, or what I can do with them. My interest in Interactive Media stemmed from a desire to further understand technology both in technical terms and in terms of its sociopolitical dimension. I am not able to put forth a satisfactory definition of computing; I can only say, for example, that it largely entails the use of recurring mathematical operations to perform a host of different tasks. However, I think this has been enough to have a profound impact on my life. I have been able to demystify a lot of the technology around me, and grow more aware of how it operates and how much it can actually do. I do not see technology as the solution to all the world’s problems, nor do I think that technology, by itself, has the ability to impact our lives negatively. Instead, engaging with interactive media and art through software has reaffirmed my belief that we should all strive to understand technology in order to make it work for (most of) us instead of against us. We must fight the urge to leave it to others to deal with the technicalities, because those technicalities (and the biases that are built into them) can affect our lives in profound ways.

Computer Vision in Art: Seeing the Invisible

Levin outlines the different elements of computer vision that artists and designers must be aware of in order to implement this technology as part of their projects. He also provides a ‘short history’ of the early stages of computer vision in interactive arts pieces, and identifies the major themes that artists have addressed through their work.

I was particularly intrigued by The Suicide Box (Bureau of Inverse Technology, 1996) and Cheese (2003). Natalie Jeremijenko of the Bureau of Inverse Technology reacted to some of the criticism of the project by pointing out that it stemmed from “the inherent suspicion of artists working with material evidence.” Her words are extremely thought-provoking in a context of growing digitisation inasmuch as they force the question: who gets to mobilise digital or digitised data as legitimate evidence? How we answer this question will have consequences for how open and democratic the digital realm ends up being. If we endow everybody with the ability to use and mobilise digital data, then digital platforms can prove themselves to be truly disruptive. If we limit this ability, then we will just be reproducing old structures for producing knowledge.

Cheese successfully objectifies the pressures that different forms of surveillance exert on (female) bodies. In doing so, it highlights one of the most productive areas of computer vision for artists. Computer vision technologies—as well as a host of other data-gathering technologies in the devices we use—are often concealed. By creating environments in which participants can see and react to how they are being perceived and processed as data, and the consequences this has for them and for the information being produced, interactive art relying on computer vision can help people become more aware of the technosocial system we are all embedded in.

Understanding the Second Machine Age Beyond Prediction

In The Digitisation of Just About Everything, the authors explain how the rise of digitisation is changing the nature of techno-social systems. They recount the economic properties of information identified by Varian and Shapiro (the zero marginal cost of reproduction and the fact that information is non-rival) and add that, in what they regard as a ‘second machine age,’ some information is no longer even costly to produce. All of this is augmented by increasingly better, cheaper and more sophisticated technologies.

At the core of the authors’ appraisal of the benefits of digitisation lies the notion that digitisation will help us better understand and predict different behaviours. There is a strong element of truth in this. Statistically speaking, our models do get better as we gather more data, and more data is primarily what digitisation has given us. However, I do not think the process will be as straightforward as the authors depict it. More data does not necessarily mean better data, and digitisation is only partially equipped to provide the latter. Digitisation allows us to record new kinds of information about more people, but that information is limited to the digital realm. As much as technologists want to believe it, there is no one-to-one correspondence between the digital and physical worlds.

Digitisation is an enhancement to old statistical techniques, not a panacea. Our challenge is to understand the consequences of the constant, increasingly complex interactions between the digital and physical realms. This requires more creativity and audacity than mere statistical prediction, because the encounter between these two worlds is yielding a context that is different from the sum of its parts. Once we understand this, we can begin to comprehend new behaviours instead of just extrapolating old ones.

Precision and Interpellation in ‘The Language of New Media’

In The Language of New Media, Lev Manovich defines ‘new media’ and traces its evolution as a concept and as a form of cultural production. Through a brief overview of the parallel development of computers and of physical media, Manovich identifies the historical context in which “media becomes new media.” In the late nineteenth and early twentieth century, he argues, cultural forms undergo computerization. After this, Manovich moves on to identify some principles of new media: numerical representation, modularity, automation, variability, and transcoding.

There are three things I want to highlight about Manovich’s piece. First, his exercise provides us with a precise vocabulary with which to distinguish ‘new media’ from ‘media.’ Whether we agree or disagree with him is another issue, but his effort points to the importance of moving toward precision in language rather than relying on platitudes to describe our changing media landscape. The second aspect I want to highlight is a consequence of the first. Manovich’s concept of ‘new media,’ and the principles he identifies, are conceptually or theoretically productive because they point at important questions about the nature of the form. In the piece, Manovich himself tries to settle some of these debates, for example when he debunks the fallacy that all ‘new media’ is digitized analog media, or that ‘new media’ is more interactive than ‘old media.’

Finally, I think Manovich’s piece eloquently reveals an area where processes of isomorphism are understudied in spite of the large consequences they can have for our everyday lives. When discussing the principle of Transcoding, he points at the cultural and cognitive feedback loop between computers and cultural production: we make computers as much as computers make us. This idea is of crucial importance as we think about what we lose when we rely on computers for creating and organising different systems of meaning.

Divine Intervention Switch v2

For our computer vision/image manipulation assignment, I chose to go back to basics and produce another iteration of my handless switch, in which an LED is turned on whenever God’s and Adam’s hands touch.

Adam’s hand is linked to a color tracker coded in Processing, such that whenever the color being tracked comes near God’s hand, Processing communicates with the Arduino to turn on the LED. In order to do this, I rely on serial communication. Moreover, whenever Adam’s hand touches God’s, ‘Hallelujah’ from Handel’s ‘Messiah’ plays in the background.

In order to do this project, I relied on Dan Shiffman’s tutorials, Aaron’s color-tracking example code, and the MINIM library for playing sound files in Processing.


####### ARDUINO CODE

const int ledPin1 = 3;

void setup() {
// put your setup code here, to run once:
Serial.begin(9600);
Serial.println("0,0");

pinMode(ledPin1, OUTPUT);

}

void loop() {
while(Serial.available()){
int switchState=Serial.read();
if(switchState==1){
digitalWrite(ledPin1,HIGH);
delay(100);
} else{
digitalWrite(ledPin1,LOW);
}
}
}

####### PROCESSING CODE

import processing.video.*;
import processing.serial.*;
import ddf.minim.*;
Capture video;
Serial myPort;
PImage adam;
PImage hand;
color trackColor;
int locX, locY;
boolean touch=false;
Minim minim;
AudioPlayer hallelujah;

void setup() {
size(640, 480);
video = new Capture(this, 640, 480, 30);
adam = loadImage("creation_adam1.png");
hand = loadImage("adam_hand.png");
video.start();

minim = new Minim(this);
hallelujah = minim.loadFile( "hallelujah_short.mp3");

printArray(Serial.list());
String portname=Serial.list()[5];
println(portname);
myPort = new Serial(this,portname,9600);
myPort.clear();
myPort.bufferUntil('\n');

}

void draw() {
if (video.available()) {
video.read();
}
video.loadPixels();
float dist=500;
for (int y=0; y<height; y++) {
for (int x=0; x<width; x++) {
int loc = (video.width-x-1)+(y*width);
color pix=video.pixels[loc];
float r1=red(pix);
float g1=green(pix);
float b1=blue(pix);
float r2=red(trackColor);
float g2=green(trackColor);
float b2=blue(trackColor);
float diff=dist(r1,g1,b1,r2,g2,b2);

if (diff<dist){
dist=diff;
locX=x;
locY=y;
}
}
}
video.updatePixels();
pushMatrix();
translate(width,0);
scale(-1,1);
image(video,0,0);
popMatrix();
fill(trackColor);
ellipse(locX,locY,30,30);
image(hand, 30,100, locX, locY);
image(adam, 100, 50, 600, 480);

stroke(255,255,255,0);
fill(255,255,255,0);
ellipse(325,270,20,20);

if(dist(locX,locY,325,270)<20){
touch=true;
println("1");
if (hallelujah.isPlaying())
{
println("audio is playing");
} else {
hallelujah.play();
hallelujah.rewind();
}
}
else {
touch=false;
println("0");
}

myPort.write(int(touch));

}

void mousePressed(){
int loc=(video.width-mouseX-1)+(mouseY*width);
trackColor=video.pixels[loc];
}
//void serialEvent(Serial myPort){
// String s = myPort.readStringUntil('\n');
// s =trim(s);
// println(s);
// if(s!=null){
// int value[]=int(split(s,','));
// if(value.length==2){
// }
// }
// myPort.write(int(touch));
//}

Orienting Objects

My serial communication project involves manipulating the speed of the moving objects in my Objects Oriented arts piece. Using serial communication, the audience can now increase or reduce the speed of movement along the X and Y axes independently.

On the Arduino side of things, I read the values of two potentiometers. These are then mapped in Processing to a range of 0 to 10.
The biggest challenge in this project was making it so the moving objects would not get ‘stuck’ when the speed was re-set every frame.
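
A minimal sketch of the Processing side of that link, assuming the Arduino prints the two raw potentiometer readings as a comma-separated line ("pot1,pot2") at 9600 baud; the bouncing ellipse here stands in for the shapes of the actual piece, and the serial port index is a placeholder.

import processing.serial.*;

Serial myPort;
float speedX = 1, speedY = 1;        // mapped speeds, 0..10
float x, y, dirX = 1, dirY = 1;

void setup() {
  size(640, 480);
  String portname = Serial.list()[0]; // pick the right index for your machine
  myPort = new Serial(this, portname, 9600);
  myPort.bufferUntil('\n');
  x = width/2;
  y = height/2;
}

void draw() {
  background(0);
  // move a stand-in object; the real sketch moves the Objects Oriented shapes
  x += speedX * dirX;
  y += speedY * dirY;
  // constrain keeps the object from getting 'stuck' outside the frame
  // when the speed is re-set every frame
  if (x < 0 || x > width)  { dirX *= -1; x = constrain(x, 0, width); }
  if (y < 0 || y > height) { dirY *= -1; y = constrain(y, 0, height); }
  ellipse(x, y, 30, 30);
}

void serialEvent(Serial port) {
  String s = port.readStringUntil('\n');
  if (s == null) return;
  int[] values = int(split(trim(s), ','));
  if (values.length == 2) {
    // map the raw potentiometer readings (0..1023) to a 0..10 speed range
    speedX = map(values[0], 0, 1023, 0, 10);
    speedY = map(values[1], 0, 1023, 0, 10);
  }
}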

Visualizing Population Density

This week, Arame and I used Processing to visualize data from the World Bank. We were concerned with visualizing population density: the number of people who live in a given area relative to the size of that area. In our visualization, each ellipse is a country, where the size of the ellipse indicates the country’s area and the number of points within the ellipse represents the country’s population, one point for every 100,000 inhabitants.

Our code has two main components. One retrieves and processes the data from the World Bank’s CSV file: it tells Processing to make one ellipse per country (one row in the dataset) and assigns its size based on the area of the country. It also assigns the number of inner ellipses based on the population.

We rely on a single class, ‘Circle’, for our ellipses. This class is largely built using vectors. Movement around the screen and colors are randomly generated every time you run the program. (A rough sketch of this structure appears below.)
Things to improve: allow the user to ‘zoom in’ and ‘out’ of the frame in order to visualize all countries (we dropped Russia from the sample because it was just too big), and add other features such as animations to be able to visualize change over time and the relationship of population with other economic performance indicators.
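
Putting the two components together, the structure might look like the sketch below. The file name, column names and scaling factors are placeholders (the real World Bank export and our actual values differ), and the movement and color logic is simplified.

Table data;
ArrayList<Circle> circles = new ArrayList<Circle>();

void setup() {
  size(800, 800);
  // placeholder file/column names; the real World Bank export differs
  data = loadTable("population.csv", "header");
  for (TableRow row : data.rows()) {
    float area = row.getFloat("area");
    float population = row.getFloat("population");
    float diameter = sqrt(area) * 0.05;       // scale area to a drawable size
    int dots = int(population / 100000);      // one dot per 100,000 people
    circles.add(new Circle(random(width), random(height), diameter, dots));
  }
}

void draw() {
  background(255);
  for (Circle c : circles) {
    c.move();
    c.display();
  }
}

class Circle {
  PVector pos, vel;
  float diameter;
  PVector[] points;   // pre-computed offsets, one per 100,000 people
  color col;

  Circle(float x, float y, float d, int n) {
    pos = new PVector(x, y);
    vel = PVector.random2D();   // random drift direction
    diameter = d;
    col = color(random(255), random(255), random(255));
    points = new PVector[n];
    for (int i = 0; i < n; i++) {
      float a = random(TWO_PI);
      float r = random(d / 2);
      points[i] = new PVector(cos(a) * r, sin(a) * r);
    }
  }

  void move() {
    pos.add(vel);
    if (pos.x < 0 || pos.x > width)  vel.x *= -1;
    if (pos.y < 0 || pos.y > height) vel.y *= -1;
  }

  void display() {
    noFill();
    stroke(col);
    ellipse(pos.x, pos.y, diameter, diameter);   // country area
    noStroke();
    fill(col);
    for (PVector p : points) {                   // population dots
      ellipse(pos.x + p.x, pos.y + p.y, 2, 2);
    }
  }
}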

Objects Oriented

Our assignment this week was to use object-oriented programming to create either a game or an arts piece. This piece uses a single object, but with different modes for both the display and move functions embedded in the object. It was my first time using Processing to create something from scratch.
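
A minimal sketch of that structure, with made-up shapes and movements standing in for the ones in the actual piece: a single class whose display() and move() behaviour switches on a mode variable, cycled here with a mouse click.

MovingShape shape;

void setup() {
  size(640, 480);
  shape = new MovingShape(width/2, height/2);
}

void draw() {
  background(255);
  shape.move();
  shape.display();
}

void mousePressed() {
  shape.nextMode();   // cycle through the modes on click
}

class MovingShape {
  float x, y, angle;
  int mode = 0;

  MovingShape(float x, float y) {
    this.x = x;
    this.y = y;
  }

  void nextMode() {
    mode = (mode + 1) % 3;
  }

  void move() {
    if (mode == 0) {
      x = width/2 + cos(angle) * 150;   // circular path
      y = height/2 + sin(angle) * 150;
      angle += 0.02;
    } else if (mode == 1) {
      x += random(-3, 3);               // jittery drift
      y += random(-3, 3);
    } else {
      x = mouseX;                       // follow the mouse
      y = mouseY;
    }
  }

  void display() {
    noStroke();
    if (mode == 0) {
      fill(200, 0, 0);
      ellipse(x, y, 40, 40);
    } else if (mode == 1) {
      fill(0, 0, 200);
      rect(x - 20, y - 20, 40, 40);
    } else {
      fill(0, 150, 0);
      triangle(x, y - 25, x - 22, y + 18, x + 22, y + 18);
    }
  }
}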

In the future, I am looking for ways to make my code more efficient and flexible by, for example, implementing loops, which I still don’t have a good grasp on. Moreover, I want to move beyond regular polygons, start experimenting with more irregular shapes, and add features of interactivity.

Order in Software and Chance in Art

Casey Reas’ talk and the story of Margaret Hamilton are related in the sense that they both make me think about the place of randomness and order in art and in software.

Her Code Got Humans on the Moon tells the story of how Margaret Hamilton’s work at MIT was crucial in developing the software that allowed the Apollo mission to succeed. At a glance, the most surprising thing about the story is that, in an industry dominated by men, it was a woman who spearheaded an important part of the project. Given that unequal participation in STEM fields persists to this day, having strong female role models plays a significant role in paving the way toward more representation. For me, however, the major takeaway from the reading is the need to prototype tirelessly and to build forgiving systems for users.

I extract this lesson from the fact that the NASA engineers were arrogant enough to claim that their astronauts had been “trained for perfection” and would never commit a mistake such as launching a program midflight. Hamilton had to content herself with including a cautionary note in the documentation, instead of following her original plan of building an error-correcting mechanism into the machine. But the astronauts did make the mistake, and Hamilton and her team had to rush to save the day. From this, it becomes clear that no matter how much we have simulated something, our designs must incorporate error correction, especially when we design products to be used in previously unknown situations. Being humble and forgiving in our design is not disrespectful to our users; it is an acknowledgement that mistakes happen even when we are most prepared. In other words, given the chaos that exists outside the system, software must account for as much order and structure as possible.

Casey Reas’ talk on Chance Operations relays a different message: randomness, even a little of it, can be extremely generative. Reas prefaces his examples by highlighting how, historically, the role of art has been to incorporate order into a chaotic environment, i.e. nature, and then shows how, with software-based art, that trend is changing abruptly. Through his examples, Reas demonstrates how, by adjusting how much we leave to chance in an artwork, we can embark on a journey of discovery about the behavior of form and the visual properties that emerge from it. I must emphasize, however, that leaving room for randomness in code is possible only because of how rigid code is. Code is a language of commands; it leaves no room for interpretation, and therefore we must deliberately build randomness in whenever we want it to be part of our work.