Jam Box – Live DJ Music Making Tool (Full Documentation)

Jam Box is a self-contained music-making device that sends MIDI signals to Ableton Live, a professional music production software. The device holds an array of music samples from three different genres mapped to its buttons, as well as knobs that control volume, tempo and other parameters. The idea is to let users create their own music and experience what it is like to be a DJ.

Context

While taking a DJ class this semester, I became very passionate about mixing different audio samples and layering sounds to create a whole new piece. I came across MIDI controllers, which are devices (keyboards, pads, etc.) that trigger notes on a digital instrument through Ableton Live and other music-making software.

 

MIDI Keyboard Controller
MIDI Controller Pad

Although the devices above require prior knowledge of music software and a basic understanding of DJ jargon (BPM, MIDI mapping, filters, etc.), I wanted to create a tool that lets everyday people experience the joy of composing music without worrying too much about the technicalities. Experiencing that joy myself by DJ-ing live during a school gig was a unique opportunity that I really wanted to share with everyone. This device is therefore a way to invite people to walk in the shoes of a DJ and have a great time.

Concept 

The Jam Box holds music samples from three different genres — Arabic music, electronic music and hip-hop. Each genre is mapped to 11 selected music samples arranged in Ableton Live. Whenever a genre is selected (its button on the Jam Box is pushed), Ableton Live activates that genre’s “group track” (see video below) in Solo mode. This means that all 11 buttons (there should be 12, but one is not working) on the audio sample keypad trigger music samples from that particular genre. Hence, when the user presses any button on the audio sample keypad with the Arabic genre selected, for example, the corresponding sample is triggered within the “Arabic” group.

The user can also select multiple genres at the same time, which triggers samples from each selected genre to create a mix of tracks. The Jam Box also has knobs that control master volume, tempo, filter (removes or adds bass) and pan. The user can modify these parameters at any time during their session.

The Jam Box is also designed so that each button press toggles the LED underneath it: the first press turns the LED on to signal activity, and the next press turns it off. This makes the Jam Box more intuitive by letting users see which buttons (samples or genres) are active and which are not.

Overview

A 2-minute summary of the project. Excuse my video editing skills 🙂

Materials

Parts 

  • 1 4×4 Adafruit Trellis Monochrome Driver
  • 1 Silicone Elastomer 4×4 button keypad
  • 1 Arduino Redboard
  • 4 10k Potentiometers
  • 4 potentiometer covers (knobs)
  • 16 LEDs (size: 3mm)
  • 10 screws
  • 16 male jumper wires
  • Peel-and-stick paper (for labels)

Tools

  • 3D printer
  • Soldering wire and iron
  • Screwdriver
  • Flush Diagonal Cutter

Software

  • Arduino IDE (arduino.cc)
  • Processing 3
  • Ableton Live 9

Building Process

Step 1

3D print each part of the Jam Box’s enclosure. The STL files can be found at http://www.thingiverse.com/thing:409733.

Step 2

  • Solder the 3 mm LEDs onto the Trellis PCB. The longer leg (anode) of each LED goes into the positive ‘+’ hole of the Trellis. Cut the excess legs using a flush diagonal cutter.

 

  • Solder 4 wires to the SDA, SCL, GND and 5V pads on the Trellis PCB; these will connect to the SDA, SCL, GND and 5V pins on the Arduino RedBoard.

Test LEDs to make sure each is working before proceeding with the rest. Arduino Code can be found in the “Code” section below.

Step 3

Wire up the potentiometers and install them in the Jam Box’s enclosure cover. The potentiometers connect to pins A0–A3 on the Arduino RedBoard and share the 5V pin on the RedBoard with the Trellis PCB (solder both 5V wires together).

 

Step 4

Assemble all parts of the enclosure as well as the Trellis PCB, the keypad and potentiometers. Use screws to tighten everything together.

 

Code

Arduino

I use Arduino to set up serial communication between the physical interactions and the Processing sketch. Whenever a button is pressed, the Arduino prints the number of the pressed button over serial and toggles the corresponding LED on or off, sending that information to Processing. Likewise, the Arduino maps each potentiometer’s value to the range 0–127 (following Ableton Live’s MIDI mapping conventions) and sends it to the Processing sketch.

Some of the libraries used are the Adafruit Trellis Library and the Wire library.
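
To make this concrete, here is a minimal sketch of that button-and-knob loop (an illustration only, not the full Jam Box firmware linked below). It assumes the Trellis sits at its default I2C address 0x70 and the four potentiometers are on pins A0–A3; in the full code, knob readings are tagged so Processing can tell them apart from button numbers.

#include <Wire.h>
#include "Adafruit_Trellis.h"

Adafruit_Trellis trellis = Adafruit_Trellis();

void setup() {
  Serial.begin(9600);
  trellis.begin(0x70);  // default I2C address of the Trellis driver
}

void loop() {
  delay(30);  // the Trellis needs roughly 30 ms between reads
  if (trellis.readSwitches()) {
    for (uint8_t i = 0; i < 16; i++) {
      if (trellis.justPressed(i)) {
        Serial.println(i);  // tell Processing which button was pressed
        if (trellis.isLED(i)) trellis.clrLED(i);  // toggle the LED underneath
        else trellis.setLED(i);
        trellis.writeDisplay();
      }
    }
  }
  // Knobs on A0-A3, scaled from 0-1023 to the 0-127 MIDI range
  for (uint8_t p = 0; p < 4; p++) {
    Serial.println(map(analogRead(A0 + p), 0, 1023, 0, 127));
  }
}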

Full Arduino Code can be found here: Jam Box Arduino Code

Processing

Processing receives the information from Arduino and parses it to trigger a set of actions. For instance, when Button ‘0’ is pressed, Arduino sends a ‘0’ to Processing, and through a series of ‘if’ statements, Processing sends a MIDI signal to Ableton Live to trigger the music sample corresponding to Button 0.

I used the Serial and The MidiBus libraries. The latter converts actions into MIDI signals sent on a specific channel to Ableton Live.
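
As an illustration of that bridge, here is a stripped-down sketch (not the full code linked below). The serial port index, the output name “Bus 1” (the macOS IAC driver bus) and the note number 60 for Button 0 are assumptions; Ableton’s MIDI mapping then learns whichever note each button sends.

import processing.serial.*;
import themidibus.*;

Serial myPort;
MidiBus myBus;

void setup() {
  myPort = new Serial(this, Serial.list()[0], 9600);  // index of the RedBoard's port
  myPort.bufferUntil('\n');
  myBus = new MidiBus(this, -1, "Bus 1");  // no MIDI input, output to the IAC bus
}

void draw() {
  // nothing to draw; the work happens in serialEvent()
}

void serialEvent(Serial p) {
  String s = trim(p.readStringUntil('\n'));
  if (s == null) return;
  int button = int(s);
  if (button == 0) {               // Button 0 -> send the note mapped to its sample
    myBus.sendNoteOn(0, 60, 127);  // channel, pitch, velocity
    myBus.sendNoteOff(0, 60, 0);
  }
  // ...one 'if' per button (and per knob) in the full sketch
}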

Full Processing Sketch can be found here: Jam Box Processing Code

Ableton Live

Ableton Live is a music-making software designed to receive MIDI signals. To receive MIDI from Processing, we set Ableton’s input to “IAC Driver (Bus 1)”, the virtual MIDI bus created in macOS Audio MIDI Setup, and turn on the Remote option in Ableton’s Preferences.

Following that, we use Ableton’s MIDI mapping tool to assign each audio sample to a particular button on the Jam Box. With MIDI Map mode on, selecting an audio sample and pressing any button on the keypad automatically assigns that button’s note to the sample, creating the link between our box and Ableton’s sample. Likewise, the potentiometers can be mapped to the volume of Ableton’s master track, the tempo (BPM), the master pan and a filter using the same method (MIDI mapping).

NB: Audio samples have to be manually selected and placed in different scenes in Ableton. For simplicity, it is best to group the audio samples by genre to avoid confusion.

Challenges and Improvements

One of the biggest challenges in realizing this project was on the software side. Creating the right communication between three different programs (Arduino, Processing and Ableton) turned out to be quite complicated. The process required creativity at different levels: first in setting up Arduino to print information for all buttons and potentiometers, and second in programming Processing to decipher the grouped information sent by Arduino and convert it into individual messages for Ableton. The most difficult part was navigating Ableton Live.

As this was my first time encountering MIDI mapping and using Ableton Live from a different perspective, I took this part as a personal challenge to really develop my skills. After getting a good grasp of MIDI mapping fundamentals thanks to Omar Shoukri, the next challenge was to confront the concept with the software. Questions I had to ask myself were: “What would make sense for users to do?” “How can I improve their experience by making a comprehensive Jam Box with intuitive MIDI mapping?” “How do what I know now and the limitations of Ableton affect my initial concept?” These questions were very useful in helping me put things in perspective and code my device accordingly. For instance, I had to map the “genre” buttons to trigger Solo mode in Ableton to avoid confusion and unintentional mixes of samples: because each button is mapped to three different samples from three different genres, pressing a button would otherwise trigger all three samples, following Ableton’s logic. Such solutions only became apparent while navigating Ableton Live.

Overall, realizing this project was an amazing learning opportunity that still blows my mind to this very moment. I enjoyed putting together the pieces of the device and using tools that I had never encountered before, such as a Trellis PCB and 3 mm LEDs. Seeing my concept evolve was also very enriching in terms of evaluating how far I can push myself to incorporate features that I did not plan on adding and discovering the power of Ableton Live. It was particularly refreshing to see people impressed and excited about my project during the Interactive Media Showcase. A couple of people even asked me if I was selling my product!!!! *mindblown*. Kid Koala himself really enjoyed playing with the device and posted a picture of it on his social media account! He also said that there is a huge demand for portable MIDI controllers by DJs around the world and that this would sell pretty quickly!

DJ Kid Koala posted my project on his Instagram!!!

In terms of improvements, adding a record button to the pad so people can take their own mixes home would be a great way to create a lasting memory. Adding more flexibility to the range of available sounds would also let people pick the genres or sounds they like most and go from there. In terms of physical components, using an Arduino Leonardo or another board that supports native USB communication would cut the chain from three programs down to two: in theory, the device could then work with only Arduino and Ableton Live. Future projects could explore that path.

Jam Box – User Testing

I invited two people to test my device. I did not have any labels on the box and left it to the interpretation of the user. The first candidate found it pretty straightforward and played with each knob and button to discover the individual beats and effects mapped to each. He was looking at Ableton Live at the same time and was able to detect the changes in genres, perhaps due to his background in music.

The second candidate was pretty confused and did not know what each button did. She suggested I label the “genre” buttons and each of the knobs for more clarity. Another frustration was that one of my buttons was not working, which made her press it multiple times before I told her it was broken.

Since the second candidate reflects the typical user I would encounter, I decided to label the buttons and knobs to improve the user experience. Another observation from both testers was the absence of a stop button to silence all the music in case they want to start from scratch; they thought it would be a great way to begin another mix if the user decides to.

Overall, the feedback was really helpful in matching the needs of both music-experienced and non-music-experienced users and in seeing how people interpret the box in general. However, I found that some explanation is necessary either way before the user starts, to give context and details about how the device works.

 

Let’s Face It!

For this week’s image and video processing project, I decided to explore face recognition in a live video capture. Conceptually, I wanted to write a sketch that could detect a moving face and substitute it with a random image. I came across a library called OpenCV that makes face detection straightforward. To mark the area where the person’s face is, I drew a rectangle approximately around the face and substituted it with an image file drawn from an array of many.

Trial

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;

void setup() {
 size(640, 480);
 video = new Capture(this, 640/2, 480/2);
 opencv = new OpenCV(this, 640/2, 480/2);
 opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE); 
 video.start();
}
void draw() {
 
 scale(2);
 opencv.loadImage(video);

 image(video, 0, 0 );

 noFill();
 stroke(255, 0, 0);
 strokeWeight(1);
 Rectangle[] faces = opencv.detect();
 println(faces.length);
 
 for (int i = 0; i < faces.length; i++) {
 println(faces[i].x + "," + faces[i].y);
 rect(faces[i].x, faces[i].y, faces[i].width+20, faces[i].height+20);
 }
}

void captureEvent(Capture c) {
 c.read();
}

Below is the final result.

Improvements

One of my biggest challenges was scaling the images to the size of the face captured in the live video to generate a smoother, more accurate final output. I scaled the images manually, but I am wondering whether the code could adapt the overlay to the size of the detected face.
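
One possible approach, sketched below as a suggestion (this helper is not part of the code that follows, and drawOverlay is a hypothetical name): derive the overlay’s size from the detected face rectangle while preserving the image’s own aspect ratio, instead of stretching it to the rectangle.

// Hypothetical helper: scale an overlay image to the detected face,
// preserving the overlay's aspect ratio and centering it on the face.
// Rectangle comes from the java.awt import already used in the sketch.
void drawOverlay(PImage overlay, Rectangle face) {
  float w = face.width + 20;  // same 20 px margin as in the code below
  float h = w * overlay.height / (float) overlay.width;
  image(overlay, face.x + (face.width - w) / 2, face.y + (face.height - h) / 2, w, h);
}

Inside the detection loop, the direct image(img[index], …) call would then be replaced by drawOverlay(img[index], faces[i]);.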

Below is the code for the final output

import gab.opencv.*;
import processing.video.*;
import java.awt.*;
int index;
boolean pressed;

Capture video;
OpenCV opencv;
PImage[] img = new PImage[6]; // six overlay images are loaded in setup()
PImage face, obama;
int image_index;

void setup() {
 size(640, 480);
 video = new Capture(this, 640/2, 480/2);
 opencv = new OpenCV(this, 640/2, 480/2);
 opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE); 
 video.start();
 img[0]= loadImage("obama.png");
 img[1]= loadImage("trump.png");
 img[2]= loadImage("aaron.png");
 img[3]= loadImage("zbynek.png");
 img[4]= loadImage("lama1.png");
 img[5]= loadImage("Daniil.png");
 
}

void draw() {
 
 scale(2);
 opencv.loadImage(video);

 image(video, 0, 0 );

 fill(0);
 stroke(255, 0, 0);
 strokeWeight(1);
 Rectangle[] faces = opencv.detect();
 println(faces.length);
 
 for (int i = 0; i < faces.length; i++) {
 println(faces[i].x + "," + faces[i].y);
 image(img[index],faces[i].x, faces[i].y, faces[i].width+20, faces[i].height+20);

 if (mousePressed == true) {
 index = (index + 1) % img.length; // cycle through the loaded images without overrunning the array
 image(img[index], faces[i].x, faces[i].y, faces[i].width+20, faces[i].height+20);
 }
}
}


void captureEvent(Capture c) {
 c.read();
}


A Whole New Perspective – Computing Reflection

A whole new perspective – that’s what computing has brought me, hence the title of this post. I had taken a basic Python class before this course, and although it was useful in giving me the basic skills to approach a problem from a computing point of view, it lacked the practicality and physical aspect that this class provided from minute one of being in the classroom. I remember being very excited at being able to make an LED blink just by pressing a button.

From then on, my understanding of computing, interaction, design and universality has expanded in ways that allow me to see beyond the simple phenomena, art pieces and touch-sensitive tools of everyday life. Computing, to me, does not stop at writing code; it goes beyond – to creating interaction (a reaction and a response), to building a set of signals understood universally by users, to enabling the creation of new digital art, and so on.

It has also allowed me to visualize a range of possibilities where none existed for me before: going to a restaurant and imagining how a simple art piece made out of electronics and code could complement the decor, or going to a festival and visualizing how its interactive maps and information decks work. Here’s a video of an interactive installation that I took a couple of weeks ago at the Mother of the Nation festival on the Corniche.

The “me” before understanding computing would simply appreciate the effects and move on but I actually stood there and tried to explain to my friends how fascinating it is!!!

Building on that, computing has definitely made me a more attentive and imaginative person. I am gaining real skills and conceptual planning capabilities that I believe are very useful, especially when one has a great idea and tries to bring it to life, no matter the field or specialty. I mentioned at the beginning of class that I have always wanted to take such a course, and that is because I was genuinely curious about the mechanisms behind generating such applicable, interactive and scalable technologies from the comfort of a classroom setting. I am glad to say that this class has fulfilled my graduate-to-be curiosity.

Floating Cars Around the World

For this week’s project, I recycled my “Floating Cars” project and added a little twist to it. The idea is to change the background the cars are floating over by pressing a button on the SparkFun Arduino board.

This is achieved by having Arduino communicate with Processing: Arduino signals any button press, which Processing translates into a switch of the background image. I chose images of three cities (Paris, London and New York) because they are large economic hubs with generally dense traffic. Below is the final product.

The most challenging part was establishing an easy and efficient connection between Arduino and Processing. After multiple tries, I simplified the code so that the Arduino just sends a value of 0 or 1 to Processing, which interprets it as a cue to change the background image.

Below is my code.

Processing Code

PImage []img=new PImage[3];
int image_index;
import processing.serial.*;
Serial myPort;
boolean buttonState=false;

int num = 500; // number of cars
Car[] myCars; // array for cars
boolean buttonPress;
int drive=0;

void setup() {
  size(500, 500);
  smooth();

  img[0] = loadImage("london.jpg");
  img[1] = loadImage("pariiis.jpg");
  img[2] = loadImage("ny.jpg");

  String portname = Serial.list()[3]; // index of the Arduino's serial port on my machine
  myPort = new Serial(this, portname, 9600);
  myPort.clear();
  myPort.bufferUntil('\n'); // buffer until the end of the line

  myCars = new Car[num];
  for (int i = 0; i < num; i++) {
    myCars[i] = new Car();
  }
}

void draw() {
//background(255);
image(img[image_index], 0, 0);
for (int i =0;i<num;i++) {
myCars[i].display();
myCars[i].go();
}
}

class Car {
color col;
int x;
int y;
int speed;

//constructor
Car () {
col = color(random(256), random(256), random(256), random(256));
//speed= s;
speed = int(random(1, 10));
if (random(1)>0.5) {
speed = -speed;
}
x = int(random(width));
y = int(random(height));
}

// methods
void display() {
// draw car as a box
noStroke();
fill(col);
rect(x, y, 20, 10);
ellipse(x+5, y+10, 5, 5);
ellipse(x+15, y+10, 5, 5);
}


void go() {
//speed=s;
x+= speed;
if (x>width && speed>0) {
x = -20;
}
if (x<0 && speed < 0) {
x = width+20;
}
}
}

void serialEvent(Serial myPort){ //no need to call in the function in the draw loop
String s = myPort.readStringUntil('\n');
s=trim(s); //remove any space
println(s);
if (s!=null){
int value[]=int(split(s,','));
if (value.length==1){
if (value[0]==1 && buttonState==false){
buttonState=true;
image_index+=1;
image_index %=3;
}
if (value[0]==0){
buttonState=false;
}
}
}
 
}

Arduino Code

const int Switch=8;
const int ledPin=2;
boolean buttonPress=false;

void setup() {
 // put your setup code here, to run once:
 Serial.begin(9600);
 pinMode(Switch, INPUT);
 pinMode(ledPin, OUTPUT);
 Serial.println("0");
}

void loop() {
 // put your main code here, to run repeatedly:

if (digitalRead(Switch)==HIGH){
 buttonPress=true;
 Serial.println(1);
 }
 else{
 buttonPress=false;
 Serial.println(0);
 }

}

Response: The Digitization of Just About Everything

I found myself nodding in agreement a lot while reading this chapter. The author offers many insights on what I think is a very important phenomenon, one bigger than we ever imagined: digitization and the power of data analytics. There is so much information generated every second by millions of humans, and this information can be used to build interdependent, real-time tools that can be powerful. Waze is just one example among many, but the key attribute here is the reproduction of dynamic systems. Our decisions are better informed when they are based on real-time data, or data that accurately represents a given situation at a given time.

For that to be possible, however, each member of the network must contribute by sending in information, which at times might mean forfeiting something very precious: privacy. While this problem does not apply to Waze, for instance, since any identification is removed from the collected location data, the issue is becoming more alarming in the age of digitization. With more systems that collect AND display your location with little to no awareness from the user, and other tools that can easily extract personal information from, say, Facebook, our privacy is increasingly compromised. While digitization and data analytics can help us understand more of our past, present and future, they also threaten our luxury of being unwatched and untracked. But because the benefits of digitization outweigh the luxury of privacy, technology continues to evolve into increasingly dynamic and interdependent systems.

 

Highway Madness

For this week’s object-oriented programming project, I decided to create a game. I was inspired by a game on my phone that features a person attempting to cross roads with moving cars and obstacles.

I first played with the idea of cars moving at different speeds and in different directions, and created this canvas.

int num = 500; // number of cars
Car[] myCars; 

void setup() {
 size(500, 500);
 smooth();
 myCars = new Car[num]; 
 for (int i =0;i<num;i++) {
 myCars[i] = new Car();
 }
}

void draw() {
 background(255);
 for (int i =0;i<num;i++) {
 myCars[i].draw();
 myCars[i].go();
 }
}

class Car {
 color col;
 int x;
 int y;
 int speed;

 Car () {
 col = color(random(256), random(256), random(256), random(256));
 speed = int(random(1, 10));
 if (random(1)>0.5) {
 speed = -speed;
 }
 x = int(random(width));
 y = int(random(height));
 }

 void draw() {
 noStroke();
 fill(col);
 rect(x, y, 20, 10);
 ellipse(x+5, y+10, 5, 5);
 ellipse(x+15, y+10, 5, 5);
 }


 void go() {
 x+= speed;
 if (x>width && speed>0) {
 x = -20;
 }
 if (x<0 && speed < 0) {
 x = width+20;
 }
 }
}

For the game, I created road lanes and a character that can move up, left and right. The goal is for the character to cross the lanes without crashing into a car. However, because I was not able to account for collisions in my code, I instead had the character cross the roads while avoiding cars and try to reach the other side within a given time limit (10 seconds).

The number of cars on each lane is chosen randomly, as are the color and speed of each car. The time is displayed in the upper left corner of the screen, and the player’s goal is to reach the “Home” area. I also added a start and end screen to simulate a real game.

The main challenge for this game was detecting collisions, since I was not yet familiar with the concept of PVector. My aim is to develop this further now that I have the essential tools to do so (a first sketch of a collision check is below) and to add more features such as obstacles, levels, and perhaps a scrolling screen with more roads generated as it moves.
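
As a starting point, here is a minimal bounding-box overlap check (my own follow-up sketch, not part of the game code above). The 20×10 car body matches the rectangles drawn in Car.draw(); the player’s 20×20 size is an assumption.

// Hypothetical helper: axis-aligned bounding-box test between the player
// (assumed to be 20x20 at px, py) and one car's 20x10 body.
boolean hitsCar(float px, float py, Car c) {
  float pw = 20, ph = 20; // assumed player size
  float cw = 20, ch = 10; // car body size used in Car.draw()
  return px < c.x + cw && px + pw > c.x &&
         py < c.y + ch && py + ph > c.y;
}

Calling it once per frame for every car would end the run as soon as any overlap is found.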

 

Hypnotic Squares – A Reinvention of “Random Squares” by Bill Kolomyjec

For this week’s project, I decided to reproduce the following graphic, titled “Random Squares”, by Bill Kolomyjec. The graphic appeared in the 1977 edition of Computer Graphics and Art and depicts a canvas of random squares of different depths that give the impression of hollow pyramids. Each square contains a random number of inner squares, decided by the program.

I attempted to reproduce the canvas, and here is the result. Every time the program runs, it produces a different pattern because the number of squares inside each square is random: the minimum is 0 and the maximum is 20.

Version 1 – Reproduced by Arame Dieng – March 25, 2017
Version 2 – Reproduced by Arame Dieng – March 25, 2017

Method

The pattern was reproduced by building the first square and translating the square in different locations of the canvas using a for loop.

To build the first square, I created a function called drawTarget() with arguments for the starting location of the square, its size and the number of squares inside it. Using a mathematical formula, I drew the square so that it produces an angled effect on one side (if you look at the canvas carefully, the inner squares seem denser towards the upper left corner of each square).

The translation was done using another function called drawPattern() where I use a nested for-loop to translate the first square along and across the canvas with the number of inner squares randomly chosen.

void setup() {
 size(860, 620);
 background(0);
 noLoop();
 
}

void draw(){
 drawTarget(10,10,120,20); //calling function to draw first shape
 drawPattern(20); //calling function to translate shape across canvas
}


//defining function to draw shape with inner squares
void drawTarget(float xloc, float yloc, float size, float num){
 float steps = size/num;//space between each inner square
 float corner= steps/3; //creating a denser pattern on the top left corner of each square
 rectMode(CORNER);
 for (int i=0; i<num; i++){
     rect(xloc+i*corner,yloc+i*corner,size-i*steps, size-i*steps);
 }
}

//defining function to repeat pattern across window
void drawPattern(int rand){
 float start_x=120;
 float start_y=120;
 for (int i=0; i<7; i++){
    for (int j=0; j<5; j++){
       pushMatrix();
       translate(start_x*i, start_y*j);
       drawTarget(10,10,120,random(rand)); //draw shape while randomly assigning the number of squares each shape has
       popMatrix();
     }
 }
}

To spice things up a little, I decided to go further with my representation by animating the squares and adding some color.

Pattern 1

This pattern animates the inner squares continuously, creating an additional subliminal layer that mimics checkered squares. It is obtained by removing the noLoop() call from the code above.

Pattern 2

This pattern animates both the outer and inner squares, rotating each shape and making them jitter every couple of seconds. The squares are also colored with an ombre effect from left to right.

Pattern 3

Pattern 3 is a variation of Pattern 2 with the color variation happening inside each shape rather than across shapes.

Without the jitter.

You can create different effects with the squares and choose whether or not to make them rotate. I really had fun doing this project and creating new patterns based on the original image published in the journal. I also learned to experiment with functions and use mathematical logic to draw patterns. The most challenging part was drawing the first shape with its inner squares and figuring out how to make the translation work. But in the end, it worked out, and I discovered new things from having bugs in my code.

Code for pattern 2

float angle;
float jitter;

void setup() {
 size(840, 600);
 background(255);
 drawTarget(0,0,120,15);
 background(255);
}

void draw(){
 drawPattern(15);
}


void drawTarget(float xloc, float yloc, float size, float num){
 float steps = size/num;
 float corner= steps/3;
 rectMode(CORNER);
 for (int i=0; i<num; i++){
    rect(xloc+i*corner,yloc+i*corner,size-i*steps, size-i*steps);
 }
}

void drawPattern(int rand){
 if (second() % 2 == 0) {  //creates the jitter movement
   jitter = random(-0.2, 0.2);
 }
 angle = angle + jitter;
 float c = cos(angle);

 float start_x=120;
 float start_y=120;
 float grayvalues = 255/rand; //adding color ombre 
 for (int i=0; i<rand; i++){
   for (int j=0; j<rand; j++){
      pushMatrix();
      strokeWeight(1.5);
      fill(i*grayvalues); //fill shapes
      translate(start_x*i, start_y*j); //translate then rotate
      rotate(c);
      drawTarget(0,0,120,random(rand));
      popMatrix();
   }
 }
}

Code for Pattern 3

float angle;
float jitter;
void setup() {
 size(860, 620);
 drawTarget(0,0,120,15);
 //noLoop();
}

void draw(){
 
 drawPattern(15);
 
}


void drawTarget(float xloc, float yloc, float size, float num){
 float steps = size/num;
 float corner= steps/3;
 float grayvalues = 255/num;
 rectMode(CORNER);
 for (int i=0; i<num; i++){
    fill(i*grayvalues,0,0); //fill shape before translating 
    rect(xloc+i*corner,yloc+i*corner,size-i*steps, size-i*steps);
 }
}

void drawPattern(int rand){
 if (second() % 2 == 0) { 
    jitter = random(-0.2, 0.2);
 }
 angle = angle + jitter;
 float c = cos(angle);
 float start_x=120;
 float start_y=120;
 float grayvalues = 255/rand;
 for (int i=0; i<rand; i++){
   for (int j=0; j<rand; j++){
     pushMatrix();
     noStroke();
     translate(start_x*i, start_y*j);
     rotate(c);
     drawTarget(0,0,120,random(rand));
     popMatrix();
   }
 }
}

Portrait of an Orange Juice Addict!

Yes, that’s me. My name is Arame and I am addicted to orange juice. Normal people drink 200ml of orange juice a day but I can drink up to 1.5 L. Not any kind. The 100% natural one made from REAL oranges. Proof.

          Original Picture

Here’s a picture that I sent to my friend a couple weeks ago — a symbol of true happiness.

For my portrait project, I decided to recreate this picture using Processing. Here’s the result.

                       Processing Output

CODE HERE: PORTRAIT – ARAME CODE

Methods used

  • ellipse(), line(), rect(), triangle() and quad() to draw the juice box, table, hair, face, body and background pillars
  • arc() to draw the eyes (eyelids, iris, pupil, etc.)
  • vertex() and bezierVertex() to draw the eyebrows and lips

Challenges

I have learned SO MUCH over the past few days while doing this project. It took a lot of research, graphing, trial and error, and finding ways to creatively build unconventional shapes to obtain this result. The hardest part was understanding the vertex() and bezierVertex() coordinates to draw the eyebrows and lips (I still do not fully understand the mechanism) AND duplicating the shapes (two eyebrows, two lips). I did not get frustrated at any point (*claps*), mostly because I enjoyed the process and was genuinely curious to learn how to draw certain shapes.

I also mildly struggled with the layering and had to make compromises here and there, but it all turned out fine. I came across the function curve() but did not understand how to use it or control the curve I was drawing; it definitely would have made this task easier, for instance in drawing the hair.
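
For future reference, here is a generic example of how bezierVertex() works (a standalone sketch, not taken from my portrait code): after vertex() sets the first anchor point, each bezierVertex() call takes two control points that pull the curve, followed by the next anchor point.

void setup() {
  size(200, 200);
  background(255);
  noFill();
  stroke(0);
  beginShape();
  vertex(30, 150);                         // first anchor point
  bezierVertex(60, 50, 140, 50, 170, 150); // control point 1, control point 2, next anchor
  endShape();
}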

Response to “Her Code Got Humans To The Moon — And Invented Software Itself”

Great article — I love it when women who have impacted the world in one way or another are talked about and given credit for the tremendous work they have done. It reminded me of the movie “Hidden Figures”, released a month ago, which tells the story of three African-American women whose knowledge and brilliance as engineers, mathematicians and computer scientists at NASA helped launch astronaut John Glenn into orbit during the Space Race in 1962. Watching the movie and reading this article helped me visualize the tasks that Margaret Hamilton was performing – “punching holes in stacks of punch cards” which were then processed overnight “on a giant Honeywell mainframe computer”. Computers back then occupied a whole room and required an array of complex techniques just to get started.

It was interesting to read that Hamilton created an add-on program to debug the code used to land astronauts back on Earth, but her superiors chose to discard her idea and claimed that “that would never happen”. The same dynamics presented in this article were portrayed in “Hidden Figures” (you should all watch it, by the way – a truly amazing movie) and stem from a belief at that time that machines were so much quicker and superior to human brains that they would never make errors. This conviction resulted in a blind trust in machines, which led to comments like the ones highlighted in the article. Yet machines did generate errors, something that always came as a surprise to many, and it took people like Hamilton and Katherine Johnson to avert space tragedies and support what was to be one of the greatest achievements of mankind.