Face Replace (ft. LamaLisa and LamaGaga)

For this project I decided to combine still images and the webcam by creating a face swap.
I found a great Processing library with many built-in computer vision functions that can be used for different applications. Find it here.
I first used OpenCV to get face detection working. Here it is – the green circle shows approximately where your face is.
After that, I created a mask image that can be resized to the width and height of the detected face, as seen below.
Face detection is also run on the image that the user feeds into the code. (It works on any image with a clear face shape!)
The face in that image is cropped according to the mask created earlier, and finally the ‘live face’ is overlaid on top using Processing’s blend function.
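
To illustrate the pipeline, here is a minimal sketch of the detect–resize–blend idea. It assumes the OpenCV for Processing library (gab.opencv), which is my guess at the library linked above, plus a placeholder, pre-cropped portrait face; it is not the actual source code.

import gab.opencv.*;
import processing.video.*;
import java.awt.Rectangle;

Capture cam;
OpenCV opencv;
PImage portraitFace; // hypothetical pre-cropped face from the input image

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
  opencv = new OpenCV(this, 640, 480);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  portraitFace = loadImage("monalisa_face.png"); // placeholder file name
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);

  opencv.loadImage(cam);
  Rectangle[] faces = opencv.detect();
  if (faces.length > 0) {
    Rectangle f = faces[0];

    // show roughly where the detected face is
    noFill();
    stroke(0, 255, 0);
    ellipse(f.x + f.width / 2, f.y + f.height / 2, f.width, f.height);

    // resize the portrait face to the detected face and blend it on top
    PImage mask = portraitFace.copy();
    mask.resize(f.width, f.height);
    blend(mask, 0, 0, mask.width, mask.height, f.x, f.y, f.width, f.height, BLEND);
  }
}
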
View the source code.

Bee Here, Bee Gone

(I know. Cheesy, cheesy title.)

My original idea for this assignment had to do with brightness and ghosts. I wanted to use my laptop’s camera to obtain the video’s brightness and set a brightness threshold. If the room wasn’t dark (i.e., brightness was above the threshold), text would appear telling the user to turn off the lights. Once the brightness was sufficiently low, the background would change (using background subtraction) to an image of a ghost. Hopefully, I thought, it would cause jump scares but not heart attacks.
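
For reference, here is a rough sketch of the brightness-threshold check I had in mind (not the code I ended up writing); the threshold value is an arbitrary assumption that would need tuning.

import processing.video.*;

Capture cam;
float threshold = 60; // assumed cutoff; depends on the actual room

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);

  // average the brightness of the whole frame
  cam.loadPixels();
  float total = 0;
  for (int i = 0; i < cam.pixels.length; i++) {
    total += brightness(cam.pixels[i]);
  }
  float avg = total / cam.pixels.length;

  if (avg > threshold) {
    fill(255);
    text("Turn off the lights...", 20, 40); // room is still too bright
  }
  else {
    // dark enough: this is where the ghost background would have appeared
  }
}
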

I started coding this but realized that I didn’t like how noisy the background subtraction turned out when the video was dark, since the user’s face was also affected. So I changed the idea to a little ghost that would appear and follow the user.

Somewhere along the way, it occurred to me that the ghost should be chasing something specific, and my plan changed completely. I imagined the user taking sweets from a candy bowl and a fly appearing on screen to chase the candy from the user’s hand. I googled “fly gif” and found a nice bee. Classic Google. So I decided to make a bee that chases flower pollen.

I made sure that the following happen in the program:

– The bee follows a green LED covered in pollen. The LED can only be covered in pollen after it’s “dipped” into a flower garden. To do this, I obtained the area in the video that corresponds to being inside the garden; if the LED (which the code tracks by color and brightness) is positioned within this area, an image of pollen (a teeny tiny mountain of pollen) appears over the LED and an animation of a bee flutters above it.

– The bee moves closer to the pollen if the LED remains steady (meaning, if the x-coordinate doesn’t vary too much between video frames). Otherwise, it hovers a bit higher. I think this makes the bee’s movements a bit more realistic.

– If the code’s global boolean “attraction” is set to false, the pollen disappears. This is useful if the video captures a bigger green object (or a green background), because it changes the interaction. Instead of having the bee follow the little green light, the user can shoo the bee away while it hovers over the green backdrop.

The following video shows how each version of the program can be used (“Bee Here” followed by “Bee Gone”):

(Music: “Seven Days A Week,” Austin Roberts)

And this is the Processing code. I took Aaron’s color tracking example and made some tweaks:

import processing.video.*;
Capture video;
PImage flower, pollen;
PImage bee0, bee1, bee2, bee3, bee4, bee5, bee6, bee7;
color trackColor;
int locX, locY, prevX;
boolean start;
int counter;
boolean attraction = true; // can be changed to false for the "Bee Gone" mode

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480, 30);
  flower = loadImage("flower.png");
  pollen = loadImage("pollen.png");
  bee0 = loadImage("bee0.png");
  bee1 = loadImage("bee1.png");
  bee2 = loadImage("bee2.png");
  bee3 = loadImage("bee3.png");
  bee4 = loadImage("bee4.png");
  bee5 = loadImage("bee5.png");
  bee6 = loadImage("bee6.png");
  bee7 = loadImage("bee7.png");
  video.start();
  start = false;
  trackColor = color(0, 255, 0);
  counter = 0;
}

void draw() {
  if (video.available()) {
    video.read();
  }
  video.loadPixels();

  // find the pixel closest in color to trackColor (x is mirrored)
  float dist = 500;
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      int loc = (video.width - x - 1) + (y * width);
      color pix = video.pixels[loc];
      float r1 = red(pix);
      float g1 = green(pix);
      float b1 = blue(pix);
      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);
      float diff = dist(r1, g1, b1, r2, g2, b2);

      if (diff < dist) {
        dist = diff;
        prevX = locX;
        locX = x;
        locY = y;
      }
    }
  }
  video.updatePixels();

  // draw the mirrored video
  pushMatrix();
  translate(width, 0);
  scale(-1, 1);
  image(video, 0, 0);
  popMatrix();
  fill(trackColor);

  // if the LED is inside the flower garden area, it gets "covered in pollen"
  if ((locX <= 25 && locY >= 370) ||
      (locX >= 25 && locX <= 135 && locY >= 320) ||
      (locX >= 135 && locX <= 195 && locY >= 370) ||
      (locX >= 195 && locX <= 220 && locY >= 400) ||
      (locX >= 220 && locX <= 290 && locY >= 370) ||
      (locX >= 290 && locX <= 320 && locY >= 400) ||
      (locX >= 320 && locX <= 430 && locY >= 370) ||
      (locX >= 430 && locX <= 530 && locY >= 400) ||
      (locX >= 530 && locY >= 320)) {
    start = true;
  }

  imageMode(CENTER);
  if (attraction == true) {
    if (start == true) {
      counter++;
      image(pollen, locX, locY);
    }
  }
  else {
    counter++;
  }

  // after a short delay, animate the bee near the tracked point
  if (counter > 20) {
    int randomX = int(random(-10, 10));
    int randomY;
    if (prevX + 5 >= locX && prevX - 5 <= locX) {
      randomY = int(random(30, 50));   // LED is steady: hover close to the pollen
    }
    else {
      randomY = int(random(80, 100));  // LED is moving: hover a bit higher
    }
    if (counter % 8 == 0) {
      image(bee7, locX - randomX, locY - randomY);
    }
    else if (counter % 7 == 0) {
      image(bee6, locX - randomX, locY - randomY);
    }
    else if (counter % 6 == 0) {
      image(bee5, locX - randomX, locY - randomY);
    }
    else if (counter % 5 == 0) {
      image(bee4, locX - randomX, locY - randomY);
    }
    else if (counter % 4 == 0) {
      image(bee3, locX - randomX, locY - randomY);
    }
    else if (counter % 3 == 0) {
      image(bee2, locX - randomX, locY - randomY);
    }
    else if (counter % 2 == 0) {
      image(bee1, locX - randomX, locY - randomY);
    }
    else {
      image(bee0, locX - randomX, locY - randomY);
    }
  }

  imageMode(CORNER);
  scale(0.5);
  image(flower, -100, 600);
}

void mousePressed() {
  // pick a new tracking color from the clicked pixel (only if it is bright enough)
  int loc = (video.width - mouseX - 1) + (mouseY * width);
  color tracked = video.pixels[loc];
  float bright = brightness(tracked);
  if (bright > 200) {
    trackColor = tracked;
  }
}

Totoro and Friend

This has definitely been the assignment I have enjoyed the most this semester. As I wrote in an earlier blog post, I see computer vision as a form of “magic” (excuse my cheesiness), since it allows for an endless array of possibilities for what can happen on screen. Along the same lines, for this project I decided to create something I had always wanted to make.

While brainstorming on the approach I wanted to take for this project, I decided to distract myself by watching a video called “Hayao Miyazaki – The Essence of Humanity” for Communications Lab. As a big fan of Miyazaki’s movies and his iconic and adorable characters, I was inspired by the video to make a computer vision project that would let users interact with the character of Totoro. As some of my classmates may have noticed from my laptop cover, I am a big fan of Totoro, and making this project was a way of extending this love onto the screen. For those who are not familiar with Totoro, here is a glimpse of his adorableness in gifs, which is something I wanted to capture through this project:

Here is a link to a video of the final project:

https://drive.google.com/a/nyu.edu/file/d/0Bwr6pFsy04OIR3R4azFYVFRYMjQ/view?usp=sharing

And some gifs as a glimpse: 

The overall point of this project is to reunite the two Totoros by placing mini-Totoro on top of the big Totoro’s head. The farther away mini-Totoro is from the larger one’s head, the sadder the big Totoro gets. Depending on mini-Totoro’s location, the larger one’s eyes follow him as well. However, if mini-Totoro is placed on top of the big Totoro’s body, Totoro can’t see it and gets sad again. The logic is rather simple, but with the incorporation of animations and Totoro’s adorableness, this project has really made me (and a lot of the friends I have shared this code with) happy.

The movement of Totoro’s pupils was done by mapping two ellipses into Totoro’s eyes according to the X and Y location of mini-Totoro.

In order to make the animations, I obtained a transparent .gif file of Totoro changing from a neutral face to a smile. I separated each of the frames into different .png files and scaled them to match a larger image of Totoro.

Possible Improvements and Extension:

As I stated before, this is one of the projects I have enjoyed doing the most. Therefore, I am seriously considering making an extension of this project for my final Intro to IM assignment. Other aspects I could add to this project are different objects with different colors that allow for various interactions with Totoro. By projecting Totoro onto a wall and using a webcam, I could let people fully interact with a life-size image of Totoro. One example of such an interaction could be using a differently colored glove to rub Totoro’s belly and have him make different growling noises. This is only one of the many ways I could approach this computer vision project, which is why I want to consider an extension of this assignment for my IM final.

Here is the code:

import processing.video.*;

PImage photo;
PImage frame1;
PImage frame2;
PImage frame3;
PImage frame4;
PImage frame5;
PImage frame6;
PImage frame7;
PImage frame8;

Capture video;
color trackColor;
float locX, locY;
float totoroXleft, totoroYleft;
float totoroXright, totoroYright;

void setup() {
  size(1280, 800);
  video = new Capture(this, 1280, 800, 30);
  video.start();
  photo = loadImage("totoroneutral.png");
  frame1 = loadImage("frame1.png");
  frame2 = loadImage("frame2.png");
  frame3 = loadImage("frame3.png");
  frame4 = loadImage("frame4.png");
  frame5 = loadImage("frame5.png");
  frame6 = loadImage("frame6.png");
  frame7 = loadImage("frame7.png");
  frame8 = loadImage("frame8.png");
}

void draw() {
  if (video.available()) {
    video.read();
  }
  video.loadPixels();

  // find the pixel closest in color to trackColor (x is mirrored)
  float dist = 500;
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      int loc = (video.width - x - 1) + (y * width);
      color pix = video.pixels[loc];
      float r1 = red(pix);
      float g1 = green(pix);
      float b1 = blue(pix);

      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);
      float diff = dist(r1, g1, b1, r2, g2, b2);

      // wherever the target color is:
      if (diff < dist) {
        dist = diff;
        locX = x;
        locY = y;
      }

      // map mini-Totoro's position into the big Totoro's eye areas
      totoroXleft = map(locX, 0, 1280, 470, 542);
      totoroYleft = map(locY, 0, 800, 369, 414);

      totoroXright = map(locX, 0, 1280, 815, 880);
      totoroYright = map(locY, 0, 800, 366, 430);
    }
  }
  video.updatePixels();

  // draw the mirrored video
  pushMatrix();
  translate(width, 0);
  scale(-1, 1);
  image(video, 0, 0);
  popMatrix();

  // changing the animation frames depending on where mini-Totoro is
  if (0 <= locX && locX <= 203 && 650 <= locY && locY <= 1280 || 1104 <= locX && locX <= 1280 && 652 <= locY && locY <= 800) {
    image(frame1, 70, -10);
  }
  else if (0 <= locX && locX <= 290 && 561 <= locY && locY <= 649 || 1036 <= locX && locX <= 1280 && 562 <= locY && locY <= 651) {
    image(frame2, 70, -10);
  }
  else if (0 <= locX && locX <= 349 && 429 <= locY && locY <= 560 || 964 <= locX && locX <= 1280 && 431 <= locY && locY <= 561) {
    image(frame3, 70, -10);
  }
  else if (0 <= locX && locX <= 418 && 324 <= locY && locY <= 428 || 904 <= locX && locX <= 1280 && 328 <= locY && locY <= 430) {
    image(frame4, 70, -10);
  }
  else if (0 <= locX && locX <= 475 && 198 <= locY && locY <= 323 || 856 <= locX && locX <= 1280 && 204 <= locY && locY <= 429) {
    image(frame5, 70, -10);
  }
  else if (201 <= locX && locX <= 312 && 0 <= locY && locY <= 197 || 1025 <= locX && locX <= 1140 && 0 <= locY && locY <= 203) {
    image(frame6, 70, -10);
  }
  else if (312 <= locX && locX <= 579 && 0 <= locY && locY <= 197 || 871 <= locX && locX <= 1024 && 0 <= locY && locY <= 203) {
    image(frame7, 70, -10);
  }
  else if (580 <= locX && locX <= 889 && 0 <= locY && locY <= 239) {
    image(frame8, 70, -10);
  }
  else {
    image(frame1, 70, -10);
  }

  // Totoro's pupils follow mini-Totoro
  fill(0);
  ellipse(totoroXleft, totoroYleft, 30, 30);
  fill(0);
  ellipse(totoroXright, totoroYright, 30, 30);
}

// Getting a tracking value
void mousePressed() {
  int loc = (video.width - mouseX - 1) + (mouseY * width);
  trackColor = video.pixels[loc];
}

 

Assignment 11: Photobooth

This week, I tried to make a Photobooth-style program in which you can see yourself through a variety of colored filters and save an image with the filter you like best. When the program runs, there are a few colored boxes at the top of the screen — hovering the mouse over each box produces a differently colored filter. Clicking anywhere on the screen saves the image to a folder on the computer.

These are the different filters in the program:

(Note: the screen is not lined this heavily when the program actually runs; I think these lines are an artifact of the screen recording.)
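
As a rough illustration of how such a filter/save loop can work in Processing, here is a hypothetical sketch (not my actual code). It assumes tint() for the color filters and saveFrame() for saving; the box positions and colors are made up.

import processing.video.*;

Capture cam;
color[] filters = { color(255, 0, 0), color(0, 255, 0), color(0, 0, 255), color(255, 255, 0) };

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();

  // apply the filter for whichever box the mouse is hovering over
  if (mouseY < 40) {
    int i = constrain(mouseX / (width / filters.length), 0, filters.length - 1);
    tint(filters[i]);
  } else {
    noTint();
  }
  image(cam, 0, 0);

  // draw the filter-selection boxes along the top of the screen
  noTint();
  for (int i = 0; i < filters.length; i++) {
    fill(filters[i]);
    rect(i * (width / filters.length), 0, width / filters.length, 40);
  }
}

void mousePressed() {
  saveFrame("photobooth-####.png"); // saves the current frame to the sketch folder
}
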


Response: Computer Vision for Artists and Designers

My one mini regret for today was not having read this article before doing my IM assignment this week, as it would have been useful for brainstorming. Levin’s “Computer Vision for Artists and Designers” covers the basics of computer vision through methods like background subtraction, motion and presence detection, brightness thresholding, and simple object tracking. Through these algorithms, aided by open-source environments such as Processing, it has become incredibly easy for anyone, from novice to experienced programmers, to create projects with computer vision.

An aspect I still find fascinating about this area is its accessibility and ease. With correct lighting and background correction, it becomes extremely easy to track an object by its color or to simulate a green screen with one’s webcam. Still, with enough creativity and input, even the simplest of tracking techniques can be transformed into larger, more complex projects with greater implications. As in the case of the Bureau of Inverse Technology’s “Suicide Box”, object tracking can become a means of bringing to light ethical issues that might be overlooked under normal circumstances.

A final note I would like to make regarding the appeal of computer vision is the way it resembles “magic”. Through the direct representation of a user’s input in the computer, there are no limits to what someone can do. Whether it is creating an invisible limbo line between two people or making visual representations of one’s words, the possibilities are endless. I believe this “augmented reality” quality is what makes computer vision truly appealing, as it serves as a means of escape into a world with no restrictions.

Silly Filters :)

For this assignment I decided to just do something silly. Inspired by Snapchat filters, I embarked on creating my own. However, since I do not know how to detect faces using Processing, I decided to use brightly colored physical stickers as detection points. The user puts on two stickers of different colors, one on their forehead and one in the middle of their neck. Then, when they use my filters, the program first has the user select the two stickers (so it knows where to place certain objects later). The whole process works like this:

  1. The user is first prompted to select (by using their mouse) the “hat sticker” (the sticker on his/her forehead).
  2. The user is then prompted to select (by using their mouse) the “shirt sticker” (the sticker on his/her neck).
  3. Then, the user sees the first (of three) filters. It is a wizard cloak and wizard hat.
  4. Extra features: the user can press the up or down arrows to make the hat bigger or smaller; the user can press the left or right arrows to make the shirt bigger or smaller.
  5. The user, once satisfied with the first filter, can switch to the second filter by pressing the “Option” key. It is a Cubs baseball hat and jersey. (Again, the user can scale both the hat and shirt to the desired size.)
  6. The user, once satisfied with the second filter, can switch to the third (and final) filter by again pressing the “Option” key. It is a pair of sunglasses and a lei (flower necklace). (As usual, the user should scale the accessories accordingly.)
  7. Once the user is satisfied with the third filter, they have the option of continuing through the filters (in order) by pressing the “Option” key.
  8. Last note: if, at any point, the user would like to re-select the stickers (for example, if they accidentally clicked something besides the sticker, which messes up the filters), they need only press the “Control” key.

Please excuse how funny and strange I look in these photos:

In summary, the controls are as follows (a stripped-down sketch of the idea appears after this list):

  • Mouse press: select the hat sticker and the shirt sticker
  • Up/down arrows: scale the hat
  • Left/right arrows: scale the shirt
  • Option key: switch to next filter
  • Control key: re-select hat sticker and shirt sticker
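
Since the actual code is only linked below, here is a stripped-down, hypothetical sketch of the core idea for a single accessory: click to pick the “hat sticker” color, draw the hat at the tracked point, and scale it with the arrow keys. The image name, offsets, and sizes are placeholders, not the real project values.

import processing.video.*;

Capture cam;
PImage hat;
color hatColor = color(255, 0, 255); // updated when the user clicks the sticker
float hatScale = 1.0;
int hatX, hatY;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480);
  cam.start();
  hat = loadImage("wizardhat.png"); // placeholder file
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);

  // find the pixel closest in color to the sticker color
  cam.loadPixels();
  float best = 500;
  for (int y = 0; y < cam.height; y++) {
    for (int x = 0; x < cam.width; x++) {
      color pix = cam.pixels[x + y * cam.width];
      float d = dist(red(pix), green(pix), blue(pix), red(hatColor), green(hatColor), blue(hatColor));
      if (d < best) {
        best = d;
        hatX = x;
        hatY = y;
      }
    }
  }

  // draw the hat just above the tracked sticker, scaled by the arrow keys
  imageMode(CENTER);
  image(hat, hatX, hatY - 60 * hatScale, hat.width * hatScale, hat.height * hatScale);
  imageMode(CORNER);
}

void mousePressed() {
  if (cam.pixels.length > 0) {
    hatColor = cam.pixels[mouseX + mouseY * cam.width]; // "select the hat sticker"
  }
}

void keyPressed() {
  if (keyCode == UP)   hatScale += 0.1;                    // make the hat bigger
  if (keyCode == DOWN) hatScale = max(0.1, hatScale - 0.1); // make it smaller
}
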

Overall, I am moderately pleased with the results. The filters do not look that great or realistic, but they still create a funny effect and get the point across (the third filter, the sunglasses and lei, is definitely the best, though). It was really difficult to maintain the accuracy of the color selection while trying to stabilize the hats and shirts; I did my best, although the jitter of the objects still bothers me a bit. Additionally, it took a long time in Photoshop to get the objects right; I used a clip art man* and a standard canvas size to scale all the objects, so that their initial size and placement would be the same.

*The clip art man:

The code for the project can be found here.

Levin’s “Computer Vision for Artists”: Thinking Big with Tech

Apart from the interesting informational aspects of Golan Levin’s “Computer Vision for Artists,” two things really stuck with me upon reflection:

  1. The first was something discussed at the beginning of the reading: Levin explains that the artist behind Videoplace, Myron Krueger, believed that the “entire human body ought to have a role in our interactions with computers.” I liked this idea first and foremost because whenever I think of visual interaction with computers I only think of the face/neck and shoulders area/maybe hands — I rarely consider that the whole body should be involved. More importantly, however, it reminded me of the article “A Brief Rant on the Future of Interaction Design” by Bret Victor. As I recall, he was frustrated that nobody was being more innovative with interaction design, and specifically that nobody was developing (or even really thinking about) interactions that involve human senses and capabilities beyond the ability to swipe a screen with a pointer finger. In the same way that Victor resents the lack of more humane physical interactions with technology, I am sure that Krueger would be frustrated that very few technologies use computer vision in a way that incorporates the whole human body. This makes me wonder if programs like Skype, Snapchat, etc., which typically limit interactions to faces, are missing out on a similarly more humane approach to technology.
  2. The second was the Suicide Box technology at the Golden Gate Bridge. Not only is the whole topic extremely sad, but it is quite interesting that the program was able to capture more suicides than were officially reported (which makes me wonder how many suicides have actually occurred there over the past 70 years). More importantly, I liked how the piece was controversial and called attention to an important social issue. What one might think would be somewhat uncontroversial — recording a public place to keep a record of certain incidents (albeit to be used as a sort of statement/art piece) — was actually extremely controversial, and as Levin points out by quoting Jeremijenko, the public is wary of artists (or others) who use real, material “evidence” gathered through surveillance technology.

Computer Vision: A Control Freak’s Nightmare (But Also Their Dream)

(Response to “Computer Vision for Artists and Designers: Pedagogic Tools and Techniques for Novice Programmers” by Golan Levin)

Levin’s article about computer vision succeeds in describing the field through its many dimensions: he discusses its history, main obstacles, common (and useful) techniques, and popular uses (among other aspects). The text is thus informative, instructive, and a reflection of what computer vision is capable of now and its potential for the future.

Two of Levin’s points struck me the most. Neither of them was “news” (I was familiar with these facts beforehand), but it was their very obviousness (which, by the way, is a word… I had to check) that made me reflect on them. The first is that we are fascinated by the idea of surveillance. The second is that computer vision is, as Levin puts it, “computationally ‘opaque’… [it] contains no intrinsic semantic or symbolic information.”

I’ll begin with the latter. In developing my own computer vision project for this class, I couldn’t stop thinking about how complicated it is to control how the program works and how users experience it. As Levin explains, computer vision can only analyze pixel by pixel and then interpret the information in these individual units. In this sense, control is particularly challenging. However, because so little can be controlled by the basic algorithms we’re using (in comparison to coding only with text, as Levin states), the “outside” (what’s on screen) often has to be a very controlled environment – light levels, colors, motion, etc. There are numerous conditions that can greatly affect how the program functions, which the computer can’t deal with on its own.

In terms of surveillance, I wonder why the camera inspires so many works about this topic. It’s certainly related to our awareness that our degree of privacy is often an illusion (in this age, technology can track almost anyone). The camera has the ability to serve as a second pair of eyes and “see” what we couldn’t see without it. We can act as those who carry out the surveillance, which puts us in a position of power, but we also realize that others can do the same to us, which is worrisome.

Artists Working with Material Evidence

In his paper on computer vision for artists and designers, Golan Levin discusses techniques such as frame differencing, background subtraction, and brightness thresholding, which are used to track people or other objects and their movement in the real world and transfer it to the screen, mainly for artistic purposes. He describes all of these as simple yet very effective concepts; the key point is that they are all suitable for novice programmers. He also mentions several examples made either in the 20th century or the early 2000s, which makes it very interesting to see how much computer vision has developed in recent years.

One example that particularly spoke to me was the Suicide Box and the problem it raised: “the inherent suspicion of artists working with material evidence.” In this particular case there was an inconsistency between the number of people who committed suicide according to the art piece and the data from the Port Authority. According to the Suicide Box there were four more, which raised suspicion about whether the data represented in art pieces can be trusted. It reminded me of the French artist who visited us a while ago and shared his projects, where he had also done some data visualization, for example of death and birth rates and train departure times. I then realized that in those cases people didn’t seem to have a problem relying on the data, perhaps because it is of less significance than suicide attempts. Therefore, with so many more possibilities nowadays, it is important to keep a critical mind and check where the information is coming from (in the Suicide Box case, the detections could well have been other objects falling vertically and not necessarily people), although I also think it’s great how many opportunities there are now to make things look great and carry people away.

Etch-A-Sketch

For this assignment, we were asked to establish serial communication between Arduino and Processing by making either a physical controller or a physical output, based on one of the projects we had previously done in class. Initially, I really wanted to make a physical controller, since it would be a mini simulation of an actual game controller, and what better thing to brag about than that? 😀

While brainstorming for this assignment and looking through my previous Processing projects, I noticed that, since most of them were artworks, the only project that actually included some sort of interactivity was my self-portrait. As a recap, I used the mouseX and mouseY variables to let users draw on screen with their mouse and then visualize their artworks through different layers. When thinking about how to pair this with a controller, the Etch-A-Sketch came to mind: an incredibly nostalgic toy most of my classmates and I played with during recess. Not only was this a perfect way of adapting my previous project, but it was also a great opportunity to enhance the toy by enabling it to actually save one’s masterpieces.

The famous Etch-A-Sketch

Initially, I thought this idea was perfect: appropriate enough for the task yet (supposedly) simple enough that I would not spend whole days working on it (oh, how naïve I was).

These were the functionalities the physical controller was meant to have:

  1. 2 knobs: one for controlling the movement along the X axis, the other for controlling movement along the Y axis.
  2. 1 “Save” button: a means for the user to save their drawing and continue to a second layer.
  3. 3 “visualize/layer” buttons: after the drawing functionalities were done, the user would be able to see their different drawings by turning on or off these three layer buttons.
  4. Tilt function (unfortunately not implemented): similar to the classic Etch-A-Sketch, I planned to put a tilt sensor in the controller so that shaking it would clear the drawing.
  5. Lights (also not implemented): two LEDs would light up depending on the mode (drawing vs. display).

The Struggles

Surprisingly enough, the struggles in carrying out this project were much greater than I had envisioned. What I initially thought would be a simple extension of one of my previous projects turned out to be quite a hassle. Perhaps the biggest struggle was successfully connecting Arduino and Processing and making sure that the correct data was being sent and processed on both sides.

The other problem I encountered concerned the logic of the code. To switch between the display and draw modes, I used the “save” pushbutton. However, I had to implement a small state machine in Arduino so that the button’s state is only acted on when its current state differs from its previous state (i.e., on the press itself, not while the button is held).
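
The core of that state machine is a previous-state comparison. Here is a minimal sketch of the pattern; the pin number is only an example, and this is not the exact code I used (my sketch below does the same thing with an "isAvailable" flag).

const int savePin = 7;
int previousState = LOW;

void setup() {
  Serial.begin(9600);
  pinMode(savePin, INPUT);
}

void loop() {
  int currentState = digitalRead(savePin);
  // only act when the button goes from not-pressed to pressed
  if (currentState == HIGH && previousState == LOW) {
    Serial.println("save pressed");
  }
  previousState = currentState;
}
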

Possible Improvements

One possible improvement would be saving each drawing to a .csv file that updates every time the project is run. This could be further enhanced by letting users add more than two drawing layers through the “save” button: each press could append another array to hold the coordinates of the next drawing.
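
As a rough sketch of the .csv idea using Processing’s Table class (the column names, file name, and helper functions are hypothetical, not part of the current project):

Table drawingTable = new Table();

void setupTable() {
  // one row per recorded point, tagged with its layer number
  drawingTable.addColumn("layer");
  drawingTable.addColumn("x");
  drawingTable.addColumn("y");
}

void saveLayer(int layerNumber, int[] xs, int[] ys) {
  for (int i = 0; i < xs.length; i++) {
    TableRow row = drawingTable.addRow();
    row.setInt("layer", layerNumber);
    row.setInt("x", xs[i]);
    row.setInt("y", ys[i]);
  }
  saveTable(drawingTable, "data/drawings.csv"); // rewrites the file each time
}

setupTable() would be called once in setup(), and something like saveLayer(1, xlayer1, ylayer1) could run whenever the save button fires.
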

Here is the code I used for the Arduino:

const int ledPin = 3;
const int buttonModePin = 8;
const int button1Pin = 13;
const int button2Pin = 12;
const int button3Pin = 11;
const int restartPin = 7;

bool isAvailable = true;

int counter = 0;

void setup() {
  // put your setup code here, to run once:
  Serial.begin(9600);
  Serial.println("0,0");
  pinMode(buttonModePin, INPUT);
  pinMode(button1Pin, INPUT);
  pinMode(button2Pin, INPUT);
  pinMode(button3Pin, INPUT);
  pinMode(restartPin, INPUT);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  int buttonModeState;
  int button1State;
  int button2State;
  int button3State;
  int restartState;

  bool sendPressed = false;

  buttonModeState = digitalRead(buttonModePin);
  button1State = digitalRead(button1Pin);
  button2State = digitalRead(button2Pin);
  button3State = digitalRead(button3Pin);
  restartState = digitalRead(restartPin);

  // edge detection: report the restart button only once per press
  if (restartState == HIGH) {
    if (isAvailable == true) {
      isAvailable = false;
      sendPressed = true;
    }
  }
  else {
    isAvailable = true;
  }

  // while (Serial.available()) { // only sending a byte when we need a byte
  int input = Serial.read();      // read the buffer so it doesn't fill up
  int xPos = analogRead(A0);
  // xPos = xPos;                 // to assure we are in the right range (mapping)
  int yPos = analogRead(A1);
  Serial.print(xPos);             // print WITHOUT ln until the last value
  Serial.print(',');
  Serial.print(yPos);
  Serial.print(',');
  Serial.print(buttonModeState);
  Serial.print(',');
  Serial.print(button1State);
  Serial.print(',');
  Serial.print(button2State);
  Serial.print(',');
  Serial.print(button3State);
  Serial.print(',');
  Serial.println(sendPressed);
  // }
}

Here is the code that I used for Processing:

import processing.serial.*;
Serial myPort;
PImage img;

// creating layers + arrays
// drawing variables
boolean layer1 = true;
boolean layer2 = false;
boolean layer3 = false;

// mode variables
boolean restartDrawing = false;
boolean drawing = true;
boolean currentData = false;
boolean saveButton = false;
boolean previousData = false;
boolean clearLayer2 = false;

// display variables
boolean display1 = false;
boolean display2 = false;
boolean display3 = false;

int[] xlayer1 = {};
int[] ylayer1 = {};

int[] xlayer2 = {};
int[] ylayer2 = {};

int[] xlayer3 = {};
int[] ylayer3 = {};

int led;
int led2;

int xPos = 0;
int yPos = 0;

void setup() {
  printArray(Serial.list());                 // choosing the serial port from the list
  String portname = Serial.list()[2];
  println(portname);
  myPort = new Serial(this, portname, 9600); // opening the port
  myPort.clear();                            // clears out the buffer just in case
  myPort.bufferUntil('\n');                  // don't fire serialEvent until a full line arrives
  size(700, 573);
  background(255, 255, 255);
  img = loadImage("blanksketch2.png");
}

void draw() {
  //background(255);
  // CHECKS FOR RESTARTING OF SCREEN
  image(img, 0, 0);
  ellipse(xPos, yPos, 1, 1);

  if (restartDrawing == true) {
    background(255, 255, 255);
    image(img, 0, 0);
    println("RESTARTED DRAWING");
    restartDrawing = false;
  }

  // CHECKS IF PERSON IS IN DRAWING MODE (ON OR OFF), STARTS DRAWING
  if (drawing == true) {
    println("ENTERING DRAWING LOOP");
    if (layer1 == true) {
      println("DRAWING LAYER 1");
      int Xcoordinate = xPos;
      int Ycoordinate = yPos;
      xlayer1 = append(xlayer1, Xcoordinate);
      ylayer1 = append(ylayer1, Ycoordinate);
      // DRAWS THE ELLIPSE
      if ((100 < xPos && xPos < 590) && (100 < yPos && yPos < 450)) {
        ellipse(xPos, yPos, 1, 1);
      }
    }

    if (layer2 == true) {
      if (clearLayer2 == true) {
        background(255);
        image(img, 0, 0);
        clearLayer2 = false;
      }
      println("DRAWING LAYER 2");
      int Xcoordinate = xPos;
      int Ycoordinate = yPos;
      xlayer2 = append(xlayer2, Xcoordinate);
      ylayer2 = append(ylayer2, Ycoordinate);
      // DRAWS THE ELLIPSE
      if ((100 < xPos && xPos < 590) && (100 < yPos && yPos < 450)) {
        ellipse(xPos, yPos, 1, 1);
      }
    }
  }

  // DISPLAYS DRAWING DEPENDING ON BUTTONS THAT ARE ON
  if (drawing == false) {
    background(255);

    if (display1 == true) {
      println("first layer visible");
      for (int i = 0; i < (xlayer1.length - 1); i++) {
        line(xlayer1[i], ylayer1[i], xlayer1[i+1], ylayer1[i+1]);
      }
      //restartDrawing = false;
    }
    else if (display2 == true) {
      //background(255);
      println("second layer visible");
      for (int i = 0; i < (xlayer2.length - 1); i++) {
        line(xlayer2[i], ylayer2[i], xlayer2[i+1], ylayer2[i+1]);
      }
      //restartDrawing = false;
    }
    println("showing image in display");
    image(img, 0, 0);
  }

  // CHECKING FOR SAVE/button press
  if (saveButton == true) {
    if (layer1 == true) {
      drawing = true;
      layer1 = false;
      layer2 = true;
      clearLayer2 = true;
      println("CHANGE TO LAYER 2");
    }
    else if (layer2 == true) {
      layer1 = false;
      layer2 = false;
      drawing = false;
      println("CHANGE TO DRAWING PHASE");
    }
  }
}

// no need to call this function from draw()
void serialEvent(Serial myPort) {
  String s = myPort.readStringUntil('\n');
  s = trim(s); // take out extra whitespace in the string to avoid errors
  println(s);

  if (s != null) {
    // making sure there's something in s before parsing
    int value[] = int(split(s, ',')); // taking the string and splitting it at ','
    println("s!null");
    if (value.length == 7) {
      xPos = (int)map(value[0], 0, 1023, 0, width);
      yPos = (int)map(value[1], 0, 1023, 0, height);

      // checking drawing mode
      //if (value[2] == 1) {
      //  drawing = true;
      //}
      //else if (value[2] == 0) {
      //  drawing = false;
      //  //display1 = true;
      //}

      // checking display of first drawing
      if (value[3] == 1 && value[4] == 0) {
        println("VALUE 1 TRUE, GOING TO DISPLAY 1");
        //drawing = false; // this goes at the end, later
        display1 = true;
        //drawing = false; // change this once the button works
      }
      else {
        println("VALUE 1 FALSE");
        display1 = false;
        //drawing = true;
      }

      // checking display of second drawing
      if (value[4] == 1 && value[3] == 0) {
        println("VALUE 2 TRUE, GOING TO DISPLAY 2");
        //layer2 = true;
        display2 = true;
        //drawing = false; // change this once the button works
      }
      else {
        println("VALUE 2 FALSE");
        //layer2 = false;
        display2 = false;
        //drawing = true;
      }

      // checking for saved state
      if (value[6] == 1) {
        saveButton = true;
        println("SAVE DATA TRUE");
      }
      else if (value[6] == 0) {
        saveButton = false;
        println("SAVE DATA FALSE");
      }
    }
    myPort.write(led + "," + led2 + "\n"); // send a byte back once one is received
  }
}