IM Final: Interactive Totoro

Interactive life-size Totoro did happen in the end 🙂

As a follow-up to my first computer vision assignment, and as a way to fulfil my desire to see a life-size Totoro, I decided to create a projected image of him that people could interact with!

As a brief recap, this was the project that inspired my final:

Overview of Totoro’s interactions:

Using the PS3Eye and Blob Detection libraries, along with infrared LEDs attached to the interactive objects, specific movements and interactions toggled different elements on screen. This installation had two modes. The first consists of using an umbrella to try to protect Totoro from the rain. Through two LEDs attached to either side of the umbrella, the program tracks its location and stops the rain in those areas. As the umbrella gets closer to Totoro, he gets happier, and finally, once it is directly in front of him, he growls and smiles widely. The second mode consists of wearing a glove to pet Totoro. Totoro’s eyes follow the user’s glove and, if stroked on his belly, Totoro gets happier and growls as well. Although the interactions seem simple, linking all the components (switching between modes, accurately tracking the umbrella and the glove, toggling the rain on and off, moving Totoro’s eyes, and triggering sound and animation) was a lengthy and time-consuming, although extremely enjoyable, process.

The process for this piece was divided into three sections:

  1. The design: adjusting and making the background and animation frames
  2. The code: writing the program and adjusting the Processing – IR camera link
  3. The hardware: attaching IR LEDs to the umbrella and the glove


  1. The design

For the project’s visuals, I adjusted both the background and Totoro’s expressions.

Here is a screenshot of the original image from the movie:

There were two issues with this background image. The first was that the girls in the scene, although iconic and the main characters of the movie, were superfluous. Although their colors added a lot to the appeal of the image, leaving them in would not only draw attention away from Totoro, but would also give the impression that the girls were interactive as well. The second issue was the rain in the image. Since my rain was created through Processing, the drawn rain would make the image look saturated and would also give the sense that the rain was stopping rather than disappearing when someone hovered over an area with the umbrella, since the coded rain would stop but the drawn rain would still be there. Thus, I set out to overuse the stamp tool in Photoshop and get rid of the rain. This all led to the following final background image:

Actual background (the eyes had to be left blank for the ellipses in the code to move)


For the animation frames, I compiled Totoro’s smile from other scenes and added it to the umbrella scene, since in this whole part of the movie there are no wide shots where Totoro changes his expression.

For instance, this is the original scene where I got his smile from:


I made the eyes and mouth transparent and then adjusted them to the scene I wanted to use for my project, trimming everything down to seven frames:

Once the animation was done, it was just a matter of setting up the boundaries for where each frame would be shown.

Here is a sample of the locations where the frames change for the glove code:
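The boundary idea can be sketched in plain Java. This is a hedged stand-in: the actual sketch uses eight hand-tuned rectangular regions, while this version approximates the same effect with distance bands from the belly; the class name, centre point, and thresholds are all illustrative values, not the project's.

```java
// Illustrative stand-in for the frame-boundary logic: the tracked glove
// position selects an animation frame, getting "happier" the closer it is
// to Totoro's belly. The real sketch uses hand-tuned rectangles; the
// distance bands here are placeholder values.
public class FrameSelector {
    static int frameFor(float x, float y, float bellyX, float bellyY) {
        float d = (float) Math.hypot(x - bellyX, y - bellyY);
        if (d > 600) return 1; // far away: neutral face
        if (d > 450) return 3; // approaching: slight smile
        if (d > 300) return 5; // close: wide smile
        return 7;              // on the belly: happiest frame
    }
}
```

The benefit of either approach is the same: frame selection is a pure function of the tracked coordinates, so it can be tuned without touching the tracking code.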

  2. The Code

The code was much more complicated than I thought it would be. In summary, I manually change the modes through my keyboard. Depending on the mode, the code checks for the number of “blobs” detected on screen. To make the tracking accurate, I adjusted the brightness threshold as well. When in “umbrella mode”, the code waits for two blobs to appear on screen. Once this occurs, it saves their coordinates and compares them to establish a minimum and a maximum point for the umbrella. Then, it uses these minimum and maximum values to make a raindrop’s alpha value transparent if it spawns between those locations. For the glove mode, the code checks for only one blob on screen. Once detected, it saves its coordinates. Then, depending on where the coordinates are, it moves Totoro’s pupils accordingly and shifts between animation frames.
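The umbrella-mode rain masking can be sketched as a small pure function. This is a minimal illustration, assuming the two blob x-coordinates have already been read from the Blob Detection library; the class and method names are mine, not the project's.

```java
// Umbrella mode: the two LED blobs bound the umbrella on the x-axis, and any
// raindrop spawning between them is made fully transparent (alpha 0).
public class UmbrellaMode {
    static int rainAlpha(float blob1X, float blob2X, float dropX) {
        float minX = Math.min(blob1X, blob2X); // blobs can arrive in either order
        float maxX = Math.max(blob1X, blob2X);
        // transparent under the umbrella, fully opaque elsewhere
        return (dropX >= minX && dropX <= maxX) ? 0 : 255;
    }
}
```

Taking the min and max (rather than assuming a left/right ordering) keeps the logic correct even when the umbrella is rotated and the blobs swap sides.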

Here is a link to the full code

  3. The Hardware

Finally, once the logic of the code was functioning, I attached the infrared LEDs to the umbrella and the glove. I 3D printed battery holders for my two 3V batteries and made switches so I could save battery life for the exhibition. Then, for the umbrella, I attached all the wires with tape. For the glove, my friend Nikki Joaquin sewed all the components together due to my lack of ability. (thank you Nikki <3) Although seemingly quite simple, setting up all the hardware was one of the most time-consuming tasks. At first, Nahil and I had not thought about 3D printing the battery holders. Instead, I had just taped everything up, which made it extremely difficult to attach the wires to the batteries and place them on the umbrella without any of the components moving out of place. I had also only planned to use one LED on either side of the umbrella and one on the glove. However, due to the directional nature of the LEDs, I ended up making an extra set and adjusting their angles slightly so the blob tracking could be more accurate.


Sewn components. I could have covered them with a film, but the buttons were more accessible this way.

The battery holders were attached with a lot of electrical tape to ensure they would not fall off.
As seen in the image, the LEDs were slightly angled apart to allow for a wider tracking range.

Challenges and future improvements

This whole process was overall quite challenging. However, by dividing everything into the three sections described earlier and doing everything little by little, I was able to finish Totoro on time. The biggest challenge was definitely the coding. I had to familiarize myself with the way the IR camera and the IR LEDs worked, and I had to adjust the Blob Detection code to fit the interactions I wanted to create as well. Initially, I wrote the code so that the program would automatically recognize the number of blobs in the camera’s frame and use that to identify which mode it was in. However, this made the code extremely unreliable, which is why I chose to change modes manually through keys on my keyboard. Overall, thanks to the help of Aaron, Craig, Nahil, James and María Laura, the code is now fully functional and as bug-free as possible (I hope). The visuals and the hardware were also quite time-consuming, but they were more mechanical, which made for good breaks once I got tired of writing the code.

Overall, the whole process of making Totoro come to life was a truly gratifying one. Although it was extremely time-consuming and frustrating at times, it was all worth it once I saw how excited people got over seeing a huge Totoro and realizing they could interact with him in some way, even in the most minimal of ways. Some people even told me that rubbing Totoro’s belly was just what they needed for finals week 😀 In the end, I am still in awe of how much all of us have been able to accomplish in this class. At the beginning of the semester, I would never have guessed that I would ever make a project like this one. Overall, regardless of the times of Sunday stress when certain projects didn’t work out as I envisioned them, this class has been one of the most rewarding I have taken; thank you so much everyone for being a part of it 😀

During the exhibition, I was so caught up helping people with the umbrella and the glove that I totally forgot to take videos of people interacting with Totoro. Here are some photos of the exhibition (thank you Craig, James, and Aaron!)

Finally, here are samples of the final interactions: 



Life-Size Totoro: User Testing

Initially, I was going to make one large documentation post that included both my user testing notes and the whole process for my final project, since I had not finished my Totoro early enough to do proper user testing. However, out of respect for the rest of the class, who wrote separate blog posts, I shall do the same.

Throughout this whole process, I had two sessions of user testing. The first one took place when the code was ready and the logic functioned, but the project was not yet mounted in the installation. Rather, I projected it onto a wall in the IM lab and placed the projector to the side so that people’s shadows would not interfere with the projection. The purpose of this initial user testing was to identify whether people actually knew what to do with the objects or whether more signifiers were needed. One trend I identified from observing the four people who tested my project was a general confusion as to how the umbrella worked. Without any signifiers, they would rotate the umbrella, changing the LEDs’ position and breaking the tracking. Also, since the umbrella only had two LEDs, a slight rotation would cause it to malfunction, which could be fixed by adding more LEDs on both sides. Furthermore, upon seeing the two girls in the image, users assumed they were interactive as well and expected them to do something when hovered over. In other cases, such as when “Glove Mode” was on, most users did not know where their hand was and thought the camera was in front of them rather than behind, which made them block the IR camera and made the code glitchy as well. Based on these observations, I made the following list of improvements:

  • Make more LEDs for the umbrella
  • Fix Totoro’s eyes –> tone down the opacity of the cheeks, since people mistake them for his eyes
  • Fix the visuals for the umbrella (the rain gets glitchy and does not update accurately)
  • Change the X values for the animation frames –> make Totoro react more the closer one gets to his belly
  • Remove the girls from the background
  • Add a signifier of where people are instead of relying only on the IR lights. Perhaps Totoro’s eye movement could be a solution to this.

After fixing these issues and making the code more reliable, I tested it again in the actual exhibition space. Most of the issues stated beforehand had been addressed and were no longer troublesome. After this user testing session, I decided to add a hand-silhouette signifier that would show users where their hand was in relation to Totoro, which would also make the interaction more immersive.

Here is a sample video of my user testing:

Totoro and Friend

This has definitely been the assignment I have enjoyed the most this semester. As I wrote in an earlier blog post, I see computer vision as a form of “magic” (excuse my cheesiness), as it allows for an endless array of possibilities of what could happen on screen. In this same line of thought, for this project I decided to create something I had always wanted to do.

While brainstorming on the approach I wanted to take for this project, I decided to distract myself by watching a video called “Hayao Miyazaki – The Essence of Humanity” for Communications Lab. As a big fan of Miyazaki’s movies and his iconic, adorable characters, the video inspired me to make a computer vision project that would allow users to interact with the character of Totoro. As some of my classmates may have noticed from my laptop cover, I am a big fan of Totoro, and making this project would serve as a means of extending this love onto the screen. For those who are not familiar with Totoro, here is a glimpse of his adorableness in gifs, which is something I wanted to capture through this project:

Here is a link to the video of the final project:

And some gifs as a glimpse: 

The overall point of this project is to reunite both Totoros by placing mini-Totoro on top of big Totoro’s head. The farther away mini-Totoro is from the larger one’s head, the sadder the big Totoro gets. Also, depending on mini-Totoro’s location, the larger one’s eyes follow him as well. However, if mini-Totoro is placed on top of Totoro’s body, Totoro won’t be able to see him and will get sad again. This is rather simple logic, but with the incorporation of animations and Totoro’s adorableness, this project has really made me (and many of the friends with whom I have shared this code) happy.

The movement of Totoro’s pupils was done by mapping two ellipses into Totoro’s eyes according to the X and Y location of mini-Totoro.
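For reference, Processing's map() is just a linear re-range. A plain-Java equivalent shows how the full tracking range gets squeezed into the small pixel box of one eye; the left-eye bounds below match the ones used in the sketch (x 470 to 542, y 369 to 414), while the class and method names are illustrative.

```java
// map() re-ranges a value linearly from one interval to another; the pupil
// ellipse is then drawn at the mapped position inside the eye.
public class EyeMapper {
    static float map(float v, float inMin, float inMax, float outMin, float outMax) {
        return outMin + (v - inMin) * (outMax - outMin) / (inMax - inMin);
    }

    // squeeze the full 1280x800 tracking range into the left eye's pixel box
    static float[] leftPupil(float locX, float locY) {
        return new float[] { map(locX, 0, 1280, 470, 542),
                             map(locY, 0, 800, 369, 414) };
    }
}
```

Because the mapping is linear, the pupil mirrors every movement of mini-Totoro at a much smaller scale, which is what makes the eyes appear to "follow" him.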

To make the animations, I obtained a transparent .gif file of Totoro changing from a neutral face to a smile. I separated each of the frames into different .png files and adjusted them to fit a larger image of Totoro.

Possible Improvements and Extension:

As I stated beforehand, this is one of the projects I have enjoyed doing the most. Therefore, I am seriously considering making an extension of this project for my final Intro to IM assignment. Other aspects I could add to this project include using different objects with different colors that allow for various interactions with Totoro. By projecting Totoro onto a wall and using a webcam, I could let people fully interact with a life-size image of Totoro. One example of a further interaction could be using a different-colored glove to rub Totoro’s belly and make him produce different growling noises. This is only one of the many ways I could approach this computer vision project, which is why I am considering an extension of this assignment for my IM final.

Here is the code:


import processing.video.*;

PImage photo;
PImage frame1;
PImage frame2;
PImage frame3;
PImage frame4;
PImage frame5;
PImage frame6;
PImage frame7;
PImage frame8;

Capture video;
color trackColor;
float locX, locY;
float totoroXleft, totoroYleft;
float totoroXright, totoroYright;

void setup() {
  size(1280, 800);
  video = new Capture(this, 1280, 800, 30);
  video.start();
  photo = loadImage("totoroneutral.png");
  frame1 = loadImage("frame1.png");
  frame2 = loadImage("frame2.png");
  frame3 = loadImage("frame3.png");
  frame4 = loadImage("frame4.png");
  frame5 = loadImage("frame5.png");
  frame6 = loadImage("frame6.png");
  frame7 = loadImage("frame7.png");
  frame8 = loadImage("frame8.png");
  trackColor = color(255); // default: track the brightest (IR) spot until a click picks a color
}

void draw() {
  if (video.available()) {
    video.read();
  }
  video.loadPixels();
  image(photo, 70, -10); // neutral Totoro as the base image
  float dist = 500;
  // find the pixel closest to the target color
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      int loc = (video.width - x - 1) + (y * width); // mirror the image horizontally
      color pix = video.pixels[loc];
      float r1 = red(pix);
      float g1 = green(pix);
      float b1 = blue(pix);
      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);
      float diff = dist(r1, g1, b1, r2, g2, b2);
      // wherever the target color is:
      if (diff < dist) {
        dist = diff;
        locX = x;
        locY = y;
      }
    }
  }
  // map the tracked location into the pixel range of each eye
  totoroXleft = map(locX, 0, 1280, 470, 542);
  totoroYleft = map(locY, 0, 800, 369, 414);
  totoroXright = map(locX, 0, 1280, 815, 880);
  totoroYright = map(locY, 0, 800, 366, 430);
  // changing the animation frames
  if ((0 <= locX && locX <= 203 && 650 <= locY && locY <= 800) || (1104 <= locX && locX <= 1280 && 652 <= locY && locY <= 800)) {
    image(frame1, 70, -10);
  } else if ((0 <= locX && locX <= 290 && 561 <= locY && locY <= 649) || (1036 <= locX && locX <= 1280 && 562 <= locY && locY <= 651)) {
    image(frame2, 70, -10);
  } else if ((0 <= locX && locX <= 349 && 429 <= locY && locY <= 560) || (964 <= locX && locX <= 1280 && 431 <= locY && locY <= 561)) {
    image(frame3, 70, -10);
  } else if ((0 <= locX && locX <= 418 && 324 <= locY && locY <= 428) || (904 <= locX && locX <= 1280 && 328 <= locY && locY <= 430)) {
    image(frame4, 70, -10);
  } else if ((0 <= locX && locX <= 475 && 198 <= locY && locY <= 323) || (856 <= locX && locX <= 1280 && 204 <= locY && locY <= 429)) {
    image(frame5, 70, -10);
  } else if ((201 <= locX && locX <= 312 && 0 <= locY && locY <= 197) || (1025 <= locX && locX <= 1140 && 0 <= locY && locY <= 203)) {
    image(frame6, 70, -10);
  } else if ((312 <= locX && locX <= 579 && 0 <= locY && locY <= 197) || (871 <= locX && locX <= 1024 && 0 <= locY && locY <= 203)) {
    image(frame7, 70, -10);
  } else if (580 <= locX && locX <= 889 && 0 <= locY && locY <= 239) {
    image(frame8, 70, -10);
  } else {
    image(frame1, 70, -10);
  }
  // Totoro's eyes
  fill(0);
  ellipse(totoroXleft, totoroYleft, 30, 40);
  ellipse(totoroXright, totoroYright, 30, 40);
}

// getting a tracking value from the clicked pixel
void mousePressed() {
  int loc = (video.width - mouseX - 1) + (mouseY * width);
  trackColor = video.pixels[loc];
}

Response: Computer Vision for Artists and Designers

My one mini regret for today was not having read this article before doing my IM assignment for this week, as it would have been useful for brainstorming purposes. Levin’s “Computer Vision for Artists and Designers” covers the basics of computer vision through techniques like background subtraction, motion and presence detection, brightness thresholding, and simple object tracking. Through these algorithms, aided by open-source programs such as Processing, it has become incredibly easy for anyone, novice or experienced programmer alike, to create projects with computer vision.

An aspect I still find fascinating about this area is its accessibility and ease. With correct lighting and background correction, it becomes extremely easy to track an object by its color or simulate a green screen with one’s webcam. Moreover, with enough creativity and input, even the simplest of tracking setups can be transformed into larger, more complex projects with greater implications. As in the case of the Bureau of Inverse Technology’s “Suicide Box”, object tracking can become a means of bringing to light ethical issues that might be overlooked under normal circumstances.

A final note I would like to make regarding the appeal of computer vision is the way it feels like “magic”. Through the direct representation of a user’s input in a computer, there are no limits to what someone can do. Whether it is creating an invisible limbo line between two people or making visual representations of one’s words, the possibilities are endless. I believe this “augmented reality” quality is what makes computer vision truly appealing, as it serves as a means of escape into a world with no restrictions.


For this assignment, we were asked to establish a serial communication between Arduino and Processing either by making a physical controller or a physical output, all based on one of the projects we had previously done in class. Initially, I really wanted to do a physical controller, since it would be a mini simulation of an actual game controller in real life, and what better thing to brag about than this? 😀

While brainstorming for this assignment and looking through my previous projects in Processing, I noticed that, since most of them were artworks, the only project that actually included some sort of interactivity was my self-portrait. As a recap, I used the mouseX and mouseY variables to let users draw on screen with their mouse and then visualize their artworks through different layers. When thinking about how to incorporate this into a controller, the Etch-A-Sketch came to mind. This was an incredibly nostalgic toy most of my classmates and I would play with during recess. Not only was this a perfect way of adapting my previous project, but it was also a great opportunity to enhance the toy by enabling it to actually save one’s masterpieces.

The famous Etch-A-Sketch

Initially, I thought this idea was perfect, since it was appropriate for this task yet (supposedly) simple enough to ensure that I would not spend whole days working on it (oh, how naïve I was).

These were the functionalities the physical controller was meant to have:

  1. 2 knobs: one controlling movement along the X axis, the other controlling movement along the Y axis.
  2. 1 “save” button: a means for the user to save their drawing and continue with their second layer.
  3. 3 “visualize/layer” buttons: once the drawing was done, the user would be able to view their different drawings by toggling these three layer buttons on or off.
  4. Tilt function (unfortunately not implemented): similar to the classic Etch-A-Sketch, I planned to add a tilt sensor so that every time a person shook the controller, the drawing would be cleared.
  5. Lights (also not implemented): 2 LEDs that would light up depending on the mode you were in (drawing vs. display).

The Struggles

Surprisingly enough, the struggles in carrying out this project were much greater than I envisioned. What I initially thought of as a simple extension of one of my previous projects ended up being another hassle altogether. Perhaps the biggest struggle was successfully connecting Arduino and Processing and making sure that the correct data was being sent and processed between the two programs.

The other problem I encountered concerned the logic of the code. To switch between the display and draw functions, I used the “save” pushbutton. However, I had to implement a state machine in Arduino to ensure the button press was only registered once, when its current state differed from its previous state.
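That state machine boils down to edge detection. Here is a hedged plain-Java version of the idea (the Arduino original does the same thing with a global flag; the class and method names are illustrative):

```java
// Edge-detecting button: fires only on the released-to-pressed transition,
// so holding the save button down counts as a single press.
public class SaveButton {
    private boolean previousState = false;

    boolean justPressed(boolean currentState) {
        boolean fired = currentState && !previousState; // rising edge only
        previousState = currentState;                   // remember for next loop
        return fired;
    }
}
```

Without this, the save action would fire on every iteration of loop() for as long as the button is held, skipping straight past layer 2 into display mode.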

Possible Improvements

One possible improvement would be saving each drawing to a .csv file that updates every time the project is run. This could be further enhanced by enabling users to add more than two layers of drawing through the “save” button. For instance, when users wanted to add a layer, they could press this button, which would add another array to save the coordinates of the next drawing.
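The proposed .csv improvement might look like this: a sketch under assumptions, since the `layer,x,y` row format and the class name are mine, not from the original project.

```java
// Serialize one layer's saved coordinates as "layer,x,y" rows, ready to be
// appended to a .csv file each time the sketch runs. The row format is an
// assumption, not part of the original project.
public class CsvSaver {
    static String toCsv(int layer, int[] xs, int[] ys) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < xs.length; i++) {
            sb.append(layer).append(',').append(xs[i]).append(',').append(ys[i]).append('\n');
        }
        return sb.toString();
    }
}
```

Tagging every row with its layer number means a single file can hold an arbitrary number of layers, which is exactly what the extra-arrays idea above needs.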

Here is the code I used for the Arduino:

const int ledPin = 3;
const int buttonModePin = 8;
const int button1Pin = 13;
const int button2Pin = 12;
const int button3Pin = 11;
const int restartPin = 7;

bool isAvailable = true;

void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);
  pinMode(ledPin, OUTPUT);
  pinMode(buttonModePin, INPUT);
  pinMode(button1Pin, INPUT);
  pinMode(button2Pin, INPUT);
  pinMode(button3Pin, INPUT);
  pinMode(restartPin, INPUT);
}

void loop() {
  bool sendPressed = false;
  int buttonModeState = digitalRead(buttonModePin);
  int button1State = digitalRead(button1Pin);
  int button2State = digitalRead(button2Pin);
  int button3State = digitalRead(button3Pin);
  int restartState = digitalRead(restartPin);

  // state machine: register the save button only on the press edge,
  // not on every loop iteration while it is held down
  if (restartState == HIGH) {
    if (isAvailable == true) {
      isAvailable = false;
      sendPressed = true;
    }
  } else {
    isAvailable = true;
  }

  while (Serial.available()) { // only sending data when Processing asks for it
    int input = Serial.read(); // read the handshake byte so the buffer doesn't fill up
    int xPos = analogRead(A0); // X knob (0-1023, mapped to pixels in Processing)
    int yPos = analogRead(A1); // Y knob
    // send the 7 comma-separated values the Processing sketch expects
    Serial.print(xPos);
    Serial.print(",");
    Serial.print(yPos);
    Serial.print(",");
    Serial.print(buttonModeState);
    Serial.print(",");
    Serial.print(button1State);
    Serial.print(",");
    Serial.print(button2State);
    Serial.print(",");
    Serial.print(button3State);
    Serial.print(",");
    Serial.println(sendPressed); // println terminates the frame with '\n'
  }
}


Here is the code that I used for Processing:

import processing.serial.*;
Serial myPort;
PImage img;

// drawing / layer variables
boolean layer1 = true;
boolean layer2 = false;
boolean layer3 = false;

// mode variables
boolean restartDrawing = false;
boolean drawing = true;
boolean saveButton = false;
boolean clearLayer2 = false;

// display variables
boolean display1 = false;
boolean display2 = false;
boolean display3 = false;

// one pair of coordinate arrays per layer
int[] xlayer1 = {};
int[] ylayer1 = {};
int[] xlayer2 = {};
int[] ylayer2 = {};
int[] xlayer3 = {};
int[] ylayer3 = {};

int led;
int led2;

int xPos = 0;
int yPos = 0;

void setup() {
  size(690, 550);
  printArray(Serial.list()); // choosing the serial port from the list
  String portname = Serial.list()[2];
  myPort = new Serial(this, portname, 9600);
  myPort.clear(); // clears out the buffer just in case
  myPort.bufferUntil('\n'); // don't fire serialEvent until a full line arrives
  img = loadImage("blanksketch2.png");
  image(img, 0, 0);
}

void draw() {
  if (restartDrawing) {
    image(img, 0, 0); // wipe back to the blank sketch
    restartDrawing = false;
  }
  if (drawing) {
    if (layer1) {
      // record the knob position and draw it if it falls inside the frame
      xlayer1 = append(xlayer1, xPos);
      ylayer1 = append(ylayer1, yPos);
      if (100 < xPos && xPos < 590 && 100 < yPos && yPos < 450) {
        ellipse(xPos, yPos, 3, 3);
      }
    }
    if (layer2) {
      if (clearLayer2) {
        image(img, 0, 0);
        clearLayer2 = false;
      }
      xlayer2 = append(xlayer2, xPos);
      ylayer2 = append(ylayer2, yPos);
      if (100 < xPos && xPos < 590 && 100 < yPos && yPos < 450) {
        ellipse(xPos, yPos, 3, 3);
      }
    }
  } else {
    image(img, 0, 0);
    if (display1) {
      // replay the first layer's saved coordinates
      for (int i = 0; i < xlayer1.length - 1; i++) {
        line(xlayer1[i], ylayer1[i], xlayer1[i + 1], ylayer1[i + 1]);
      }
    } else if (display2) {
      for (int i = 0; i < xlayer2.length - 1; i++) {
        line(xlayer2[i], ylayer2[i], xlayer2[i + 1], ylayer2[i + 1]);
      }
    }
  }

  // checking for a save button press
  if (saveButton) {
    if (layer1) {
      println("CHANGE TO LAYER 2");
      drawing = true;
      layer1 = false;
      layer2 = true;
      clearLayer2 = true;
    } else if (layer2) {
      layer1 = false;
      layer2 = false;
      drawing = false; // switch to display mode
    }
  }
}

// no need to call this function in draw(); it fires on each full line
void serialEvent(Serial myPort) {
  String s = myPort.readStringUntil('\n');
  s = trim(s); // take out extra whitespace to avoid errors
  if (s != null) { // make sure there's something in s
    int[] value = int(split(s, ',')); // take the string and split it at each ','
    if (value.length == 7) {
      xPos = (int)map(value[0], 0, 1023, 0, width);
      yPos = (int)map(value[1], 0, 1023, 0, height);
      // checking which layer to display
      display1 = (value[3] == 1 && value[4] == 0);
      display2 = (value[4] == 1 && value[3] == 0);
      // checking for the saved state
      saveButton = (value[6] == 1);
    }
  }
  myPort.write(led + "," + led2 + "\n"); // reply so the Arduino sends the next frame
}

What Computing Means to Me: Reflections Through Exploding Kittens

There was one quite memorable moment during spring break when I perceived a direct connection between my everyday life and computing. Surprisingly enough, this moment occurred right when we were about to play a game of Exploding Kittens (an incredible card game). The cards, apart from having extremely interesting drawings, came in a box that produced a meow sound every time it was opened. The first thing I noticed, apart from the sound, was the small photocell hidden in one of the illustrated cats’ eyes. With this, I realized the meow sound was triggered by the photocell, toggling on whenever light hit it. Although small, this discovery filled me with immense joy, as it was the first time I saw a connection between the circuits and sensors we used in class and an actual widespread game.

Exploding Kittens box; the photocell is hidden inside the Taco Cat’s left eye

This small incident is an example of the way computing has shifted my perception of certain objects, even in such a small way as discovering how an Exploding Kittens box works. Now, whenever I see circuits or wires, I don’t feel the apprehension or confusion I would usually feel before taking this class. Instead, I marvel at how the circuit works, at the inner workings of an object that seems to use the same mechanisms we use for our weekly projects. In the same light, it is also amazing to think that, with everything we know about computing right now, there are so many real-life objects we can reproduce. As with the games all of us have recreated using Processing, or the Etch-A-Sketch version I made with serial communication, it is truly incredible how much can be created through the basics of Arduino and Processing.

Floating IM Words – Josie & Mari

For this project, we both thought it would be interesting to identify the most common words found in our class blog and make a visual representation of that data. Initially, we were going to make our program analyze a series of text pages from the blog and identify by itself the words that were most commonly found. However, after copy-pasting pages of text into a word count program, we noticed that this approach would not yield the best results, since the most common words ended up being function words like “of”, “and”, “or”, etc. Therefore, to make the outcome more interesting, we decided to run three pages of text from the blog through the program and manually select the words we thought were most appropriate.
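The stopword problem described above can also be handled in code rather than by hand-picking words. A hedged sketch of that alternative (the stopword list here is a tiny illustrative sample, and the class name is mine):

```java
import java.util.*;

// Raw frequency counts are dominated by words like "of" and "and", so common
// function words are filtered out before picking the top words.
public class WordCounter {
    static Map<String, Integer> count(String text, Set<String> stopwords) {
        Map<String, Integer> freq = new HashMap<>();
        for (String w : text.toLowerCase().split("\\W+")) { // split on non-word characters
            if (w.isEmpty() || stopwords.contains(w)) continue;
            freq.merge(w, 1, Integer::sum); // increment the count for this word
        }
        return freq;
    }
}
```

With a reasonably complete stopword list, sorting the resulting map by value would give roughly the word set the manual selection produced.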

Once the idea was established, we divided our roles to make the collaboration easier. Mari started with the code – building the main logic, reading and using the .csv file, and making the floating movement – and Josie polished and added the final interactive and visual elements – setting the visuals for the texts, their actions (varying opacities), and the on-hover and on-click logic.

We experienced a variety of difficulties in creating the project. Some were about concept and look: we originally wanted to add an on-click effect to each specific word, but found that it was not easy for the user to chase down a moving word with their mouse and click on it. Additionally, we thought that making the words collide with each other would be interesting and look great, but when we actually implemented it, the project just looked messy and dysfunctional. Other difficulties were in writing the actual code: it was very difficult to determine the exact area each word takes up on the screen in order to detect when a) the words overlap, b) the words hit the sides of the screen, and c) the mouse is on a word. However, we eventually figured out a way in Processing to determine a text’s height, and once we had that, we used the text’s height and width to determine its area and location on the screen. Despite these difficulties, it was a really fun project to do, because we got to work with data, with a partner, and with material from the class blog. We loved seeing which words people continuously use in their posts – they are all about innovation, technology, people, etc. If we expanded the project, we would probably want to analyze more than the two most recent pages of the blog in order to get better, more representative data.
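Once a word's width and height are known (in Processing, via textWidth() and textAscent() + textDescent()), the hit-testing problems above reduce to axis-aligned rectangle checks. A plain-Java stand-in, with names of my own choosing:

```java
// A word's on-screen footprint as an axis-aligned rectangle: top-left corner
// plus the text's measured width and height.
public class WordBox {
    float x, y, w, h;

    WordBox(float x, float y, float w, float h) {
        this.x = x; this.y = y; this.w = w; this.h = h;
    }

    // is the mouse over this word?
    boolean contains(float px, float py) {
        return px >= x && px <= x + w && py >= y && py <= y + h;
    }

    // do two words overlap? (each rectangle's left edge is past the other's right edge iff no overlap)
    boolean overlaps(WordBox o) {
        return x < o.x + o.w && o.x < x + w && y < o.y + o.h && o.y < y + h;
    }
}
```

The same rectangle also handles edge bouncing: the word hits a side of the screen when `x < 0` or `x + w > width` (and likewise vertically).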

Here is a sample gif of the result:

The original is in full screen (1280 x 720) resolution

Here are screenshots demonstrating the different colors (which the user chooses by clicking on the screen):






Final code here

Response: Digitize Everything

Spreading news in real time about natural disasters, calculating the optimal path to a destination… these and many more applications are examples of the endless capabilities created by the phenomenon of digitization.

In “The Digitization of Just About Anything”, Erik Brynjolfsson and Andrew McAfee discuss digitization as the third fundamental force that made the second machine age possible. They describe digitization as, essentially, the process of transforming all kinds of media and information into “the ones and zeroes that are the native language of computers”. The “magic” of digitization lies in the way it is non-rival (one person’s use does not diminish anyone else’s) and also extremely cheap and easy to replicate and distribute. Both of these properties allow for the endless possibilities held by digitization, as they enable the easy and rapid spread of information.

There were two (rather obvious on my part) observations in this reading that really caught my attention. The first was the distinction between Waze and the GPS programs developed in previous years. Both types of applications have ultimately the same goal: to lead a person to their destination through the most optimal route. However, how they compute that route is the prime distinction between the two. A traditional GPS simply considers the shortest route available. Waze, on the other hand, uses digitization to obtain real-life data contributed by its users. Following from the idea that digital information is effortlessly reproduced and distributed, Waze uses its users’ reports of weather conditions, police cars, car accidents, and so on to build its information and establish the optimal routes. This, once again, highlights the sheer capability of digitization and user-created information databases.

The second observation was the authors’ discussion of free and easily distributable material, and their counterargument to the notion that “time is money”. In current times, when most content is open source and available for anyone to use, it has become easier than ever to learn and create new products. Whether as sources of inspiration, as references, or even as the means to create new tools, the different products created and distributed through digitization have ultimately led to an era that fosters and cultivates knowledge through seemingly effortless means. This (as I said before, rather obvious) observation is one I had not actually thought much about before this reading, and it really highlights the possibility of creating new and original content thanks to the wonders of digitization.

Circle Art

For this assignment, we had the option of using object-oriented programming to either create an artwork or a game. At first, I wanted to challenge myself by creating a game. The first thing that came to mind was the simple game that appears in Google Chrome whenever there is no Wi-Fi. Here is a sample of the game:

Chrome game

At first, I thought the game mechanics would be simple enough for this assignment. However, as I started writing the code, I noticed how complex the creation of even a simple game can be. I planned on creating two classes: one for the user’s object (a ball, for the sake of simplicity) and another for the “obstacles” (rectangles, to keep up with the simple graphics of the game). I planned to create the illusion of a moving ball by moving the rectangles horizontally (towards the user) rather than moving the ball itself. This would not only simplify the game mechanics but also make the game easier to play. Then, in order to create rectangles of varying size, I would make an array that held a certain number of rectangles. Each time a rectangle left the screen, the code would reset that object’s position to the start of the screen and change its height with a random number generator. I had also planned the rest of the game mechanics and the collision detection for the losing conditions beforehand. However, when it was time to implement the classes, I encountered great difficulty in adapting my pseudocode into Processing, along with problems regarding the time it would take to build and debug the whole game. Therefore, although unfortunately a bit too late, I decided to change my idea and make a piece of artwork instead. This was the first trial of my second idea:
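The obstacle-recycling mechanic described above (slide the rectangles toward the player, then respawn each one at the start of the screen with a new random height) can be sketched in plain Java. The names (Obstacle, SCREEN_W, OBSTACLE_W) and all the specific numbers here are illustrative assumptions, not taken from the abandoned game code.

```java
import java.util.Random;

// Hedged sketch of the planned obstacle-recycling mechanic.
class Obstacle {
    static final int SCREEN_W = 640;   // assumed screen width
    static final int OBSTACLE_W = 50;  // assumed obstacle width
    static final Random RNG = new Random();

    float x;      // left edge; slides toward the player each frame
    float height; // re-randomized on every respawn
    float speed = 4;

    Obstacle(float x, float height) {
        this.x = x;
        this.height = height;
    }

    // Each frame: slide left; once fully off-screen, reset the position
    // to the start of the screen and pick a new random height,
    // as described in the write-up.
    void update() {
        x -= speed;
        if (x < -OBSTACLE_W) {
            x = SCREEN_W;
            height = 20 + RNG.nextInt(60); // random height in [20, 80)
        }
    }
}
```

With an array of a few such obstacles updated every frame, the player's ball can stay at a fixed x position while the world appears to scroll past it.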

First implementation (after much struggle with the collision code)

I then decided it would be interesting to make new circles appear for every time the user clicked, which led to my final pieces:

First: collision


Second: no collision


With this piece, my focus was to allow users to “create their own artwork” through randomly generated circles. As seen in the gifs above, wherever the user clicked on the screen, a new circle would be created, with a randomly generated size, color, speed, and direction.

For the first example, the most difficult aspect was creating the collisions between the balls. This code required a nested for loop that iterates over every circle in an array and checks each pair individually for collisions. To add a more interesting effect, I also gave each object some transparency. Especially in the version without the collision effect, this paved the way for different visualisations thanks to the varying opacity levels. An additional feature I added was a counter that makes the circles disappear after a certain number of frames, to ensure the Processing window does not become saturated after a while.

Overall, although it was not what I originally intended, I learned a lot while making this piece. By writing the collision code, I learned a lot about arrays and how they can be used to keep track of specific objects. I now feel more comfortable working with arrays and implementing them in future projects.

Here is the final code:

float distance;
Ball pinkBall;
Ball greenBall;
Ball brightGreenBall;
Ball yellowBall;
int numBalls = 6;
int counter = 0;
int generalDirection = -1;
int generalDirection1 = 1;
int frames = 0;

Ball[] balls = new Ball[counter];

void setup(){
  size(600, 600); // reconstructed: the original window size was not preserved
  //pinkBall = new Ball(40,60,150,-1,1,2,2,237,132,134);
  //greenBall = new Ball(20,300,400,1,-1,4,4,167,195,178);
  //brightGreenBall = new Ball(30,400,200,1,1,3,3,188,204,147);
  //yellowBall = new Ball(38,300,40,-1,-1,2.4,2.4,244,233,168);
  //balls[0] = new Ball(40,60,150,-1,1,2,2,237,132,134);
  //balls[1] = new Ball(20,300,400,1,-1,4,4,167,195,178);
  //balls[2] = new Ball(30,400,200,1,1,3,3,188,204,147);
  //balls[3] = new Ball(30,200,200,1,-1,3,3,244,233,168);
  //balls[4] = new Ball(50,500,50,1,-1,1,1,250,193,175);
  //balls[5] = new Ball(70,100,80,1,-1,1,1,121,94,133);
}

void draw(){
  background(255); // reconstructed: clear the frame before redrawing
  frames++;
  // nested loop: check every pair of circles for a collision
  for (int i = 0; i < counter; i++){
    for (int j = 0; j < counter; j++){
      if (i != j){
        balls[i].collide(balls[j]);
      }
    }
  }
  for (Ball ball : balls){
    ball.move();
    ball.display();
  }
}

class Ball{
  float rad;
  float x;
  float y;
  int xdir;
  int ydir;
  float xspeed;
  float yspeed;
  int red, green, blue;
  int transp;

  Ball(int Rad, float X, float Y, int XDIR, int YDIR, float XSPEED, float YSPEED, int RED, int GREEN, int BLUE, int TRANSP){
    rad = Rad;
    x = X;
    y = Y;
    xdir = XDIR;
    ydir = YDIR;
    xspeed = XSPEED;
    yspeed = YSPEED;
    red = RED;
    green = GREEN;
    blue = BLUE;
    transp = TRANSP;
  }

  void move(){
    x = x + (xspeed * xdir);
    y = y + (yspeed * ydir);
    if (x > width-rad || x < rad){
      xdir *= -1; // bounce off the left/right walls
    }
    if (y > height-rad || y < rad){
      ydir *= -1; // bounce off the top/bottom walls
    }
  }

  void collide(Ball otherball){
    float dx = x - otherball.x;
    float dy = y - otherball.y;
    float distance = sqrt(dx*dx + dy*dy);
    float miniDist = rad + otherball.rad;
    if (distance < miniDist){
      xdir *= -1;
      ydir *= -1;
      otherball.xdir *= -1;
      otherball.ydir *= -1;
    }
  }

  void display(){
    // reconstructed: the exact drawing calls were not preserved
    noStroke();
    fill(red, green, blue, transp);
    ellipse(x, y, rad*2, rad*2);
    if (frames%15 == 0){
      transp = max(0, transp - 10); // fade the circle out over time
    }
  }
}

void mousePressed(){
  counter += 1;
  int randomValue = int(random(10,50));
  int randomValue2 = int(random(100,255));
  int randomValue3 = int(random(100,255));
  int randomValue4 = int(random(100,255));
  int velocity = int(random(1,5));
  Ball b = new Ball(randomValue,mouseX,mouseY,generalDirection,generalDirection1,velocity,velocity,randomValue2,randomValue3,randomValue4,100);
  balls = (Ball[]) append(balls,b);
}

Screensaver graphics

For this week’s assignment, we had the task of recreating an old computer graphic from a collection of volumes titled “Computer Graphics and Art”. While I was first browsing through the examples in each issue, I was amazed by the beauty and complexity of the 3D artworks (and their resemblance to old screensaver graphics!). Below are some of the artworks that caught my attention the most:

Initially, with still no idea of how it was possible to create such effects simply by coding, I resorted to the internet and discovered the magic of using parametric equations and the sin() and cos() functions to create different line movements and combinations. After a lot of trial and error, and a period when I reconsidered whether recreating this type of graphic was even possible, I managed to create an effective parametric equation. This was the effect that emerged from this exploration:

First attempt: solely graphing the dots

In order to create this set path, I created a float t that serves as the basis for everything, as its increasing values feed into the equations I established. I then created four methods that return values determining the movement of the x1, y1, x2, y2 coordinates, since I wanted two points that moved at the same time. With this in mind, after more browsing through the internet, I came upon an amazing tutorial video that gives an overview of computer graphics and the usefulness of the sin() and cos() functions. Once I implemented these equations, the circular, fluid motion shown in the previous image emerged. Then, to make the lines seem as if they were moving on a 3D plane, I took the coordinates I had previously established and fed them into the line() function.

Once I had figured out how to approach the task of recreating any of the 3D graphics I had seen, I thought it would simply be a matter of altering the equations so the movement approximated the other pieces. However, establishing parametric equations for each of the four coordinates that would recreate the appearance of my chosen image turned out to be an extremely challenging task that, frustratingly, involved much more math. These are some of the approximations I tried to make:

Overall, it was an extremely big challenge to recreate the image I had in mind, due to (at least my) perceived difficulty of altering the equations precisely enough to imitate the original movement. Amongst all the attempts, I was able to create many interesting products that, although not as similar to the ones I originally wanted to make, still presented the same essence of moving along a single path and creating a 3D appearance. In the end, I was not able to completely recreate any of these images because of the difficulty of figuring out the precise equations, but I still found the graphics I created extremely interesting and worth posting on this blog.

This whole process was incredibly useful, as I learned a lot about the sin(), cos(), and line() functions and the many applications they can have in computer graphic art. Between the pieces I was able to create and the struggle of the whole process, I certainly felt satisfied with the graphics I produced, since they managed to capture a movement similar to the original artists’ pieces. These are, personally, my favorites of the pieces I made:


Another attempt at recreating the previous figures
Black and white version of the final product
Alternate colors
Final product with the addition of color

Attached is the code:

float t = 0;
float s;

void setup(){
  size(600, 600); // reconstructed: the original window size was not preserved
  background(0);
}

void draw(){
  //point(x1(t),y1(t)); //calling the methods, using returned value
  //s = map(t,0,400,50,150);
  s = map(t,0,400,70,140);
  //drawing lines according to the x1, x2, y1, y2 values (using changing values of t)
  if (t <= 100){
    stroke(s); // reconstructed: stroke shade taken from the mapped value
    line(x1(t) + width/2, y1(t) + height/2, x2(t) + width/2, y2(t) + height/2);
  }
  t += 0.5; // reconstructed step size
}

//methods that create parametric equations using values of t
float x1(float t){
  return sin(t/10)*100 + sin(t/15)*100;
  //better to use lower amplitude if frequency is higher
}

float y1(float t){
  //coefficient inside: changes frequency, smaller coefficient = larger curve
  //bigger coefficient (multiplying) = smaller curves (more frequency)
  //coefficient outside: amplitude changes
  return cos(t/10)*100;
}

float x2(float t){
  return sin(t/20)*100 + sin(t/20)*100;
}

float y2(float t){
  return cos(t/40)*100 + cos(t/40)*55;
}