Let’s Face It!

For this week’s image and video processing project, I decided to explore face detection in live video capture. Conceptually, I wanted to write a program that could detect a moving face and substitute it with a random image. I came across a library called OpenCV that makes face detection easy. To mark the area where the person’s face is, I first drew a rectangle approximately around the face, and then substituted it with an image file drawn from an array of several.

Trial

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

Capture video;
OpenCV opencv;

void setup() {
  size(640, 480);
  // capture and detect at half resolution for speed
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE); // frontal-face cascade
  video.start();
}

void draw() {
  scale(2); // draw the half-resolution video at full window size
  opencv.loadImage(video);
  image(video, 0, 0);

  noFill();
  stroke(255, 0, 0);
  strokeWeight(1);
  Rectangle[] faces = opencv.detect();
  println(faces.length);

  // outline each detected face, slightly oversized
  for (int i = 0; i < faces.length; i++) {
    println(faces[i].x + "," + faces[i].y);
    rect(faces[i].x, faces[i].y, faces[i].width + 20, faces[i].height + 20);
  }
}

void captureEvent(Capture c) {
  c.read();
}

Below is the final result.

Improvements

One of my biggest challenges was scaling the images to the size of the face captured in the live video, to produce a smoother, more accurate final output. I scaled the images manually, but I am wondering whether there are ways for the code to adapt the image size to the size of the captured face.
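One possible approach (a minimal sketch of my own, not code from the project) is to let the detection rectangle drive the drawing size, so the overlay keeps its own aspect ratio while matching the width of the detected face. The helper name drawScaledOverlay and its logic are assumptions:

// Hypothetical helper: scale an overlay image to the width of a detected
// face while preserving the overlay's aspect ratio. 'r' is one of the
// java.awt.Rectangle objects returned by opencv.detect().
void drawScaledOverlay(PImage overlay, Rectangle r) {
  float s = (float) r.width / overlay.width; // fit the overlay to the face width
  image(overlay, r.x, r.y, overlay.width * s, overlay.height * s);
}

Called once per detected face inside draw(), this would make the overlay grow and shrink as the person moves toward or away from the camera.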

Below is the code for the final output:

import gab.opencv.*;
import processing.video.*;
import java.awt.*;

int index; // which image is currently drawn over the face

Capture video;
OpenCV opencv;
PImage[] img = new PImage[6];

void setup() {
  size(640, 480);
  // capture and detect at half resolution for speed, then scale up in draw()
  video = new Capture(this, 640/2, 480/2);
  opencv = new OpenCV(this, 640/2, 480/2);
  opencv.loadCascade(OpenCV.CASCADE_FRONTALFACE);
  video.start();
  img[0] = loadImage("obama.png");
  img[1] = loadImage("trump.png");
  img[2] = loadImage("aaron.png");
  img[3] = loadImage("zbynek.png");
  img[4] = loadImage("lama1.png");
  img[5] = loadImage("Daniil.png");
}

void draw() {
  scale(2); // draw the half-resolution video at full window size
  opencv.loadImage(video);
  image(video, 0, 0);

  Rectangle[] faces = opencv.detect();
  println(faces.length);

  // cover each detected face with the current image, slightly oversized
  for (int i = 0; i < faces.length; i++) {
    println(faces[i].x + "," + faces[i].y);
    image(img[index], faces[i].x, faces[i].y, faces[i].width + 20, faces[i].height + 20);
  }
}

void mousePressed() {
  // switch to the next image, wrapping around at the end of the array
  index = (index + 1) % img.length;
}

void captureEvent(Capture c) {
  c.read();
}


Response to Golan Levin’s “Computer Vision for Artists and Designers”

Golan Levin’s piece on computer vision was an extremely useful read. It is divided into various sections, offering instructions, tips, examples, and ideas for beginners dabbling in computer vision. The examples were a startling look into what computer vision was like before today’s technology existed. It was mind-blowing, for example, to see the interactive artwork Videoplace and realize that it existed at a time when the computer mouse was not yet a staple. The thought is incredible: looking at Videoplace is like watching history come alive.

The paper also details specific aspects to consider while creating a computer vision project, which are helpful guidelines for beginners: detecting motion, detecting presence, detection through brightness thresholding, simple object tracking, and basic interactions. It also opens a discussion of computer vision in the physical world (the Suicide Box being one example) and of how objects and events in the physical world can affect how we design the algorithms in our computer vision projects.

I must also mention the large collection of resources this article presents, which I will continue to peruse and use in my future ventures into computer vision. Overall, an interesting and super informative read!

Assignment 12: Crystallic

The Crystallic visualization transforms live video frames into a grid of interconnected areas of distinct colors.

Input frames are sampled at every nth pixel in both the x and y dimensions (where n is a preset constant, for example 7). The sampled pixel’s color is compared to a list of 26 colors, and the closest color among the options is identified. Then the algorithm considers the neighbors of the sampled pixel (where neighbors are n pixels away from the sampled pixel in each dimension). If a neighbor has the same identified color, a line is drawn between the two pixels. This produces white-space boundaries between the distinct color bands identified in the frame.
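A minimal sketch of the nearest-color lookup, assuming a global palette array holding the 26 colors (the helper name closestColorIndex is mine, not necessarily the project’s):

// Hypothetical sketch of the per-sample color matching, assuming a global
// color[] palette with the 26 options. Returns the index of the palette
// color nearest to c in RGB space.
int closestColorIndex(color c) {
  int best = 0;
  float bestDist = Float.MAX_VALUE;
  for (int i = 0; i < palette.length; i++) {
    float d = dist(red(c), green(c), blue(c),
                   red(palette[i]), green(palette[i]), blue(palette[i]));
    if (d < bestDist) {
      bestDist = d;
      best = i;
    }
  }
  return best;
}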

It is possible to change the look of the visualization by selecting a different set of clockwiseExtraX and clockwiseExtraY values. The entries in these two arrays represent the different neighbors to consider; by removing some entries, the visualization considers fewer neighbors.
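For illustration, the two arrays might hold offsets like these; the values below are my guess at their shape, not the project’s actual contents:

// Hypothetical neighbor offsets, in units of the sampling distance n.
// The full set visits all eight neighbors clockwise; keeping only a
// subset (e.g. {1, 0} and {0, 1}) yields the square pattern below.
int[] clockwiseExtraX = {  0,  1, 1, 1, 0, -1, -1, -1 };
int[] clockwiseExtraY = { -1, -1, 0, 1, 1,  1,  0, -1 };

// For a sampled pixel at (x, y), neighbor k sits n pixels away:
//   int nx = x + clockwiseExtraX[k] * n;
//   int ny = y + clockwiseExtraY[k] * n;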

The visualization can thus be modified to have a square pattern

a slanted-squares pattern

or a drawing-like, diagonal-line pattern

Furthermore, changing the sampling distance changes the granularity of the visualization. This produces a more modern-art look:

However, reducing the sampling distance slows down the visualization; if near-real-time responsiveness is desired, it was determined that the value should not be reduced below 7.

An additional problem concerned the choice of colors in the palette. Originally, the visualization used only 9 colors – all the combinations of 0 vs. 255 across the RGB channels. This led to visualizations that featured too many flat surfaces; the banding effect was too extreme. To increase the variety of colors, the set of HTML/CSS named colors was considered instead. However, since this palette contrasted the “extreme” colors (using only 0 and 255 in RGB) with two non-extreme colors (orange and rebeccaPurple), the two non-extreme colors proved closest to too many sampled colors. The result was an over-abundance of purple in the output.

A solution was to return to the constructed palette, increasing the number of combinations to 27 by adding a third RGB level per channel. Thus, again, each palette color gets an equal slice of the sampled color space. This was still not optimal, however:

There was an overabundance of gray in the output visualization in bad light conditions (which means, basically, all the time), causing the person’s face to blend with the background. Removing the gray color from the palette proved to be an appropriate solution to the problem; thus, the final number of colors in the palette was reduced to 26.
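A sketch of how such a palette can be constructed (my reconstruction, assuming the three channel levels are 0, 128, and 255, with the middle gray skipped):

// Hypothetical palette construction: 3 levels per RGB channel gives
// 3^3 = 27 combinations; dropping the middle gray leaves 26 colors.
color[] buildPalette() {
  int[] levels = { 0, 128, 255 };
  color[] palette = new color[26];
  int idx = 0;
  for (int r : levels) {
    for (int g : levels) {
      for (int b : levels) {
        if (r == 128 && g == 128 && b == 128) continue; // skip gray
        palette[idx++] = color(r, g, b);
      }
    }
  }
  return palette;
}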


Response 12: Computer Vision for Artists

I liked Golan Levin’s overview of computer vision techniques. His exposition allowed me to look at a complex problem with new eyes, and made me realize that simple algorithms may be used to complex effect: frame differencing, background subtraction, color tracking, and thresholding, all of which we have mentioned in class. At the same time, I liked Levin’s mention of the state-of-the-art techniques, and everything in between. That provided perspective on the field and showed me that, despite its accessibility, computer vision can also answer some complicated questions. (Consider gaze-direction detection: not only does it require tracking one’s pupils; the orientation of the face in 3D space is also required, as is some notion of depth in the field of view.)

I learned the most from Levin’s emphasis on the importance of physical conditions when using computer vision. His insistence that the assumptions of the different algorithms be taken into account when designing an interactive art piece made me realize how prevalent these problems are. At the same time, it illustrated how impossible-to-solve software questions (e.g. how can I know whether this dark spot in the frame is a person’s hair, or a black area on the background wall that just happens to be next to the person’s head?) can be solved by preparing the scene (e.g. putting a green screen behind the person, or illuminating the person with sharp light in front of a black wall).

I have one complaint about the article: despite all of its talk about bringing a fresh, artistic set of perspectives to computer vision, four of the six examples revolve around surveillance. Although it is an important topic – and perhaps a natural one, given that computer vision systems must necessarily use a video-recording device – I would have appreciated more variety, to get my creativity going in more directions than just surveillance.

Virtual Pets

For this week’s assignment I wanted to work with color tracking in Processing. I wanted to alternate objects that a person is supposedly holding in his or her hands, and initially I thought of substituting an object of a certain color with a picture of another object. However, it turned out I didn’t even need to substitute the colored object; I could just display the image at a certain distance from the object instead. I wanted to create the illusion of being able to hold and move around different animals, so in my project I have 5 animals, each appearing on the screen when a certain color is present (the colors I used are pink, blue, green, red and yellow). These are the steps of creating my project:

  1. I found pictures of 5 different animals and resized them in Photoshop to approximately 200×200 pixels to make the animals smaller.
  2. I cut thin slips of paper in 5 different colors – one for each animal. I then printed out the RGB values of these colors as seen by the webcam, so I could hard-code one for each animal. Once the sketch runs, it displays the image from the computer’s webcam. When one or several of the five colors are present in the range of the webcam, the respective animal shows up on the screen, following the object of its assigned color. If the color is not present, the animal doesn’t show up either. A person holding a colorful slip of paper can then move it around the screen to make the animal follow it, controlling its motion by hand.
  3. One of the challenges was determining the right threshold value, i.e. the maximum allowed difference between a camera pixel and the hard-coded color. In my case this difference has to be very small for the animal to show up; otherwise the sketch can get confused and start showing the animal where it is not supposed to be. However, that also means that if the lighting changes significantly, the RGB values of the paper slips as seen by the webcam might also change, and the animal might not appear.

Here are pictures of the animals that show up on the screen depending on the color:

Here is a video of just one animal moving around the screen:

cat video

Here is a video of all of the animals that can appear:

all animals

Here is the code:

import processing.video.*;

Capture video;
color trackColor; // the puppy's color; can be re-picked with a mouse press
int locXpuppy, locYpuppy, locXtiger, locYtiger, locXara, locYara, locXbunny, locYbunny, locXcat, locYcat;
PImage puppy, tiger, ara, bunny, cat;

// the five hard-coded colors, one per animal
color doggy = color(255, 98, 203);  // pink
color tiger2 = color(118, 188, 91); // green
color ara2 = color(116, 196, 249);  // blue
color bunny2 = color(232, 86, 103); // red
color cat2 = color(255, 253, 112);  // yellow

boolean drawPuppy = false;
boolean drawTiger = false;
boolean drawAra = false;
boolean drawBunny = false;
boolean drawCat = false;

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480, 30);
  video.start();
  puppy = loadImage("puppyx.png");
  tiger = loadImage("tigerx.png");
  ara = loadImage("araax.gif");
  bunny = loadImage("bunnyx.png");
  cat = loadImage("catttx.png");
  trackColor = doggy;
}

void draw() {
  if (video.available()) {
    video.read();
  }
  video.loadPixels();

  // 'dist' is both the detection threshold (20) and the best-match
  // distance found so far, shared across all five colors
  float dist = 20;
  drawPuppy = false;
  drawTiger = false;
  drawAra = false;
  drawBunny = false;
  drawCat = false;

  // channel values of the five reference colors (constant within a frame)
  float r2 = red(trackColor), g2 = green(trackColor), b2 = blue(trackColor);
  float r3 = red(tiger2), g3 = green(tiger2), b3 = blue(tiger2);
  float r4 = red(ara2), g4 = green(ara2), b4 = blue(ara2);
  float r5 = red(bunny2), g5 = green(bunny2), b5 = blue(bunny2);
  float r6 = red(cat2), g6 = green(cat2), b6 = blue(cat2);

  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      // read the horizontally mirrored pixel, since the video is displayed mirrored
      int loc = (video.width - x - 1) + y * width;
      color pix = video.pixels[loc];
      float r1 = red(pix), g1 = green(pix), b1 = blue(pix);

      // distance in RGB space from this pixel to each reference color
      float diff = dist(r1, g1, b1, r2, g2, b2);
      float diff2 = dist(r1, g1, b1, r3, g3, b3);
      float diff3 = dist(r1, g1, b1, r4, g4, b4);
      float diff4 = dist(r1, g1, b1, r5, g5, b5);
      float diff5 = dist(r1, g1, b1, r6, g6, b6);

      if (diff < dist) {
        drawPuppy = true;
        dist = diff;
        locXpuppy = x;
        locYpuppy = y;
      }
      if (diff2 < dist) {
        drawTiger = true;
        dist = diff2;
        locXtiger = x;
        locYtiger = y;
      }
      if (diff3 < dist) {
        drawAra = true;
        dist = diff3;
        locXara = x;
        locYara = y;
      }
      if (diff4 < dist) {
        drawBunny = true;
        dist = diff4;
        locXbunny = x;
        locYbunny = y;
      }
      if (diff5 < dist) {
        drawCat = true;
        dist = diff5;
        locXcat = x;
        locYcat = y;
      }
    }
  }
  video.updatePixels();

  // display the mirrored camera image
  pushMatrix();
  translate(width, 0);
  scale(-1, 1);
  image(video, 0, 0);
  popMatrix();

  // draw each detected animal offset from its tracked color
  if (drawPuppy) image(puppy, locXpuppy - 100, locYpuppy - 200);
  if (drawTiger) image(tiger, locXtiger - 100, locYtiger - 200);
  if (drawAra) image(ara, locXara - 100, locYara - 200);
  if (drawBunny) image(bunny, locXbunny - 100, locYbunny - 200);
  if (drawCat) image(cat, locXcat - 100, locYcat - 200);
}

void mousePressed() {
  // re-pick the puppy's tracked color from the clicked (mirrored) pixel
  int loc = (video.width - mouseX - 1) + mouseY * width;
  trackColor = video.pixels[loc];
  println(red(trackColor) + " " + green(trackColor) + " " + blue(trackColor));
}

P.S. Because the code for what I just described didn’t work properly at first, I started working on a slightly different idea. Even though Aaron helped me fix the code above (thanks for that!!), I decided to also include the other code. The idea behind it is that there are again 5 animals loaded into the sketch, but instead of following pre-coded colors, the color of interest can be set with a mouse press. Once you press the mouse on, for example, a pink object, the animal will then follow the pink object. Also, there can only be one animal present at a time, but it can be changed by pressing the key “c”: a random function then displays one of the images of the 5 animals.

This is the code for the other example:

import processing.video.*;

Capture video;
color trackColor;  // picked with a mouse press; starts as black
int locX, locY;    // best-match location of the tracked color
PImage puppy, tiger, ara, bunny, cat;
int randomNumber;  // which animal to display (0-4)

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480, 30);
  video.start();
  puppy = loadImage("puppyx.png");
  tiger = loadImage("tigerx.png");
  ara = loadImage("araax.gif");
  bunny = loadImage("bunnyx.png");
  cat = loadImage("catttx.png");
}

void draw() {
  if (video.available()) {
    video.read();
  }
  video.loadPixels();

  // find the pixel closest to trackColor, within a generous starting threshold
  float dist = 500;
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      int loc = (video.width - x - 1) + y * width; // mirrored pixel
      color pix = video.pixels[loc];
      float diff = dist(red(pix), green(pix), blue(pix),
                        red(trackColor), green(trackColor), blue(trackColor));
      if (diff < dist) {
        dist = diff;
        locX = x;
        locY = y;
      }
    }
  }
  video.updatePixels();

  // display the mirrored camera image
  pushMatrix();
  translate(width, 0);
  scale(-1, 1);
  image(video, 0, 0);
  popMatrix();

  // draw the currently selected animal near the tracked color
  if (randomNumber == 0) image(puppy, locX - 100, locY - 200);
  else if (randomNumber == 1) image(tiger, locX - 100, locY - 200);
  else if (randomNumber == 2) image(ara, locX - 100, locY - 200);
  else if (randomNumber == 3) image(bunny, locX - 100, locY - 200);
  else if (randomNumber == 4) image(cat, locX - 100, locY - 200);
}

void mousePressed() {
  // pick a new color to track from the clicked (mirrored) pixel
  int loc = (video.width - mouseX - 1) + mouseY * width;
  trackColor = video.pixels[loc];
}

void keyPressed() {
  if (key == 'c') {
    randomNumber = int(random(0, 5)); // switch to a random animal
  }
}

Here is a video of an animal following the color I selected with a mouse press in the second version:

animals second version

Webcam Live Drawing project

I decided to work with live video for this project, partly because I was inspired by the Computer Vision article and partly because I felt I could do more with live video than with a still image.

My initial idea was to track a certain color, say the color of the lips, and then, once those pixels were detected, change their color in the live image. So instead of having pinkish lips, I would be able to make them orange, green, or whatever color I chose, right in the live video.

Doing this, I faced one problem with the pixel color detection. Because the lips are pink and somewhat similar in tone to the surrounding skin, it is very tricky to select just the pixels of the lip color.

For example, if I pick one particular color from the lips and keep the threshold for matching similar colors very small, let’s say 5, then only a tiny bit of the lips gets selected:

And if I increase the threshold to 25, a lot more than just the lips gets selected:

So I gave up on this idea, because I realized I would not be able to reach the level of accuracy I was looking for. Almost instantly, another idea came to mind, one that also involved tracking color, but used the tracked color for a different purpose.

The idea is that the program tracks a light green color, which in my case is a pen cap, and draws a point on every pixel it finds within a threshold of 20 of that green.

Then, on a key press, the program saves the coordinates of those circles, which allows the person in the video to draw shapes or whatever (s)he wants while the live video is running. You can change the colors as well!

This is one of the drawings I’ve made. It was fun, but I forgot I had to record a video, so I tried replicating it once again on camera and just had a little fun with it.

I would say that the biggest challenge I faced was structuring the color arrays so that when the color is changed, it changes only for what is about to be drawn, rather than recoloring everything that has already been drawn.

Here is the code:

import processing.video.*;

Capture video;
color trackColor;    // the green pen-cap color to track
float threshold;
int[] xvalues = {};  // saved drawing coordinates
int[] yvalues = {};

// channel values of the drawing color currently selected with the keys below
int currentColorRed, currentColorGreen, currentColorBlue;
// one saved color per saved point, so old points keep their color
int[] colorsArrayRed = {};
int[] colorsArrayGreen = {};
int[] colorsArrayBlue = {};

boolean saveNow; // while true (toggled with ENTER), detected points are saved

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480, 30);
  video.start();
  trackColor = color(30, 175, 94); // light green pen cap
  currentColorRed = 255;
  currentColorGreen = 255;
  currentColorBlue = 255;
}

void draw() {
  if (video.available()) {
    video.read();
  }
  video.loadPixels();

  threshold = 20;
  float avgX = 0;
  float avgY = 0;
  int count = 0;

  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      int loc = (width - 1 - x) + y * width; // mirrored pixel

      // distance in RGB space between this pixel and the tracked green
      color currentColor = video.pixels[loc];
      float d = dist(red(currentColor), green(currentColor), blue(currentColor),
                     red(trackColor), green(trackColor), blue(trackColor));

      if (d < threshold) {
        // mark every pixel close enough to the tracked color
        noStroke();
        strokeWeight(1);
        ellipse(x, y, 10, 10);
        avgX += x;
        avgY += y;

        if (saveNow == true) {
          // remember the point and the drawing color chosen at this moment
          xvalues = append(xvalues, x);
          yvalues = append(yvalues, y);
          colorsArrayRed = append(colorsArrayRed, currentColorRed);
          colorsArrayGreen = append(colorsArrayGreen, currentColorGreen);
          colorsArrayBlue = append(colorsArrayBlue, currentColorBlue);
        }
        count++;
      }
    }
  }
  println(xvalues.length);

  // redraw everything saved so far, each point in its own color
  if (xvalues.length > 1) {
    for (int i = 0; i < xvalues.length - 1; i++) {
      fill(colorsArrayRed[i], colorsArrayGreen[i], colorsArrayBlue[i]);
      ellipse(xvalues[i], yvalues[i], 10, 10);
    }
  }
  video.updatePixels();

  // faint mirrored camera image on top
  pushMatrix();
  translate(width, 0);
  scale(-1, 1);
  tint(255, 50);
  image(video, 0, 0);
  tint(255, 255);
  popMatrix();

  // average position of the tracked color (kept for debugging)
  if (count > 0) {
    avgX = avgX / count;
    avgY = avgY / count;
    //fill(trackColor);
    //strokeWeight(4.0);
    //stroke(0);
    //ellipse(avgX, avgY, 8, 8);
  }
}

void keyPressed() {
  if (keyCode == ENTER) {
    saveNow = !saveNow; // toggle drawing on/off
  }
  // color keys: r, g, b, y(ellow), l(ight blue), p(ink/magenta), B(lack)
  if (key == 'r') { currentColorRed = 255; currentColorGreen = 0;   currentColorBlue = 0; }
  if (key == 'g') { currentColorRed = 0;   currentColorGreen = 255; currentColorBlue = 0; }
  if (key == 'b') { currentColorRed = 0;   currentColorGreen = 0;   currentColorBlue = 255; }
  if (key == 'y') { currentColorRed = 255; currentColorGreen = 255; currentColorBlue = 0; }
  if (key == 'l') { currentColorRed = 0;   currentColorGreen = 255; currentColorBlue = 255; }
  if (key == 'p') { currentColorRed = 255; currentColorGreen = 0;   currentColorBlue = 255; }
  if (key == 'B') { currentColorRed = 0;   currentColorGreen = 0;   currentColorBlue = 0; }
}

On Computer Vision article

This reading was very inspirational, as it gave me a lot of ideas to choose from when deciding what I wanted to make for this project. After reading the Computer Vision in Interactive Art chapter, I knew I wanted to work with live video and tracking, be it tracking color, brightness, movement, or whatever else I could possibly imagine tracking.

The elementary computer vision techniques mentioned in the article – detecting motion, detecting presence, detection through brightness thresholding, simple object tracking, and basic interactions – gave me a better idea of what I could use in my project. All of these techniques were well explained, which gave me enough understanding of what I should be aiming for in my work.

The one project I was impressed by the most was the Suicide Box by the Bureau of Inverse Technology, installed in 1996. I did not think that a computer vision project, built on machine-vision-based surveillance, could have such a big social impact and cause such ethical controversy. At the same time, this project was proof (if the recorded suicide data was real) that machines can record data with an accuracy that humans cannot match.

When working on my project and its color detection, it was suggested that I watch a YouTube video of Daniel Shiffman explaining color-tracking algorithms. He had this Computer Vision article open through most of the video and even referenced it a couple of times, so it definitely helped me understand what Daniel was talking about!

Processing meets Arduino project

For the Processing meets Arduino assignment I decided to expand on my jumping ball game project and add a joystick to it. Thus, instead of being controlled by the arrow keys, the ball is now controlled by the joystick. This was fairly simple, since the ball’s movement is only controlled on the x axis; however, it took me a little while to figure out the adjustments I had to make in terms of screen width versus canvas width so that the ball moves just the way I want it to.

Another thing that was an easy fix in the end, but took a while to figure out, was how to restart the game on a button press rather than restarting the whole Processing sketch. My first idea was to restart the port connection on the button press, but an error would pop up and freeze the whole computer, because re-establishing the port connection would not work. The second thing I tried was re-running setup() once the game is over, but that would crash Processing and freeze the computer as well. Then I got lucky: Pierre walked into the room and suggested something very simple that I had not thought of. His idea was to add a gameRunning boolean that gates initialize() and draw(). Thus, rather than re-running setup(), the button press just sets gameRunning, and together with the gameOver boolean it decides whether initialize() and draw() do their work; if the game is over and not running, the button press restarts them and the game begins again.

Compared to the last time I presented the game, I have added a couple of improvements. First, the platforms now fade in rather than suddenly appearing out of nowhere. I have also added a “Welcome” screen (I forgot to change the ‘lol’ placeholder from when I was testing whether it works, and now I think it is just a part of the project), and a “Game Over” screen.

Here is the new code:

Game Sketch

Player p;
ArrayList platforms;
boolean upPressed, leftPressed, downPressed, rightPressed;
int score, fallCount;
boolean gameOver;
int Ypos;
int Xpos;
int Sel;          // joystick button state (0 = pressed)
boolean gameRunning;
int led = 0;      // "game running" LED, sent to the Arduino
int led2 = 0;     // "game stopped / game over" LED, sent to the Arduino

// to control the ball with a joystick
import processing.serial.*;
Serial myPort;

void setup()
{
  Xpos = 0;
  Ypos = 0;
  Sel = 1;
  size(480, 640);
  frameRate(60);
  ellipseMode(CORNER);

  // joystick serial connection
  printArray(Serial.list());
  String portname = Serial.list()[2];
  println(portname);
  myPort = new Serial(this, portname, 9600);
  myPort.clear();
  myPort.bufferUntil('\n');
}

void initialize()
{
  if (Sel == 0) { // joystick button pressed: (re)start the game
    gameRunning = true;
    score = 0;
    fallCount = 0;
    gameOver = false;
    p = new Player(width/2, height/2);
    platforms = new ArrayList();
    platforms.add(new MovingPlatform(20, 80, 70, 8, false));
    platforms.add(new Platform(width/2, height/2, 100, 8, false));
    platforms.add(new Platform((int)random(40, 500), 320, (int)random(50, 120), 8, false));
    platforms.add(new Platform((int)random(40, 500), 220, (int)random(50, 120), 8, false));
    platforms.add(new Platform((int)random(40, 500), 120, (int)random(50, 120), 8, false));
    platforms.add(new Platform((int)random(40, 500), 20, (int)random(50, 120), 8, false));
  } else { // otherwise show the welcome or game-over screen
    background(255);
    fill(0);
    textSize(50);
    textAlign(CENTER);
    if (gameOver == false) {
      text("lol", width/2, height/2);
      textSize(25);
      text("press to start", width/2, 500);
    } else {
      text("You Lost", width/2, height/2);
      textSize(25);
      text("press to restart", width/2, 500);
    }
  }
}

void draw() {
  // mirror the game state to the Arduino LEDs
  if (gameRunning == true && gameOver == false) {
    led = 1;
    led2 = 0;
  } else {
    led = 0;
    led2 = 1;
  }

  if (gameRunning == true) {
    background(255);
    fill(0, 10, 153, 204);
    textSize(12);
    text("score", 15, 15);
    text(score, 60, 15);

    for (int i = 0; i < platforms.size(); i++) {
      p.collide((Platform)platforms.get(i));
      ((Platform)platforms.get(i)).display();
      ((Platform)platforms.get(i)).move();
    }
    p.display();
    p.move();

    adjustViewport();
    cleanUp();
    seedNewPlatforms();
    if (platformsBelow() == 0) gameOver = true;
    if (gameOver) fallCount++;
    if (fallCount > 3) initialize(); // after falling for a few frames, show the game-over screen
  } else {
    initialize();
  }
}

int platformsBelow()
{
  // how many platforms are still at or below the player
  // (0 means the player has fallen past everything)
  int count = 0;
  for (int i = 0; i < platforms.size(); i++) {
    if (((Platform)platforms.get(i)).y >= p.y) count++;
  }
  return count;
}

void adjustViewport()
{
  // player above the midpoint: scroll everything down and add to the score
  float overHeight = height * 0.5 - p.y;
  if (overHeight > 0) {
    p.y += overHeight;
    for (int i = 0; i < platforms.size(); i++) {
      ((Platform)platforms.get(i)).y += overHeight;
    }
    score += overHeight;
  }
  // player below the bottom edge: scroll everything up
  float underHeight = p.y - (height - p.h - 4);
  if (underHeight > 0) {
    p.y -= underHeight;
    for (int i = 0; i < platforms.size(); i++) {
      ((Platform)platforms.get(i)).y -= underHeight;
    }
  }
}

void cleanUp()
{
  // remove platforms that scrolled off the bottom
  for (int i = platforms.size() - 1; i >= 0; i--) {
    if (((Platform)platforms.get(i)).y > height) {
      platforms.remove(i);
    }
  }
}

void seedNewPlatforms()
{
  if (platforms.size() < 9)
  {
    float randomizer = random(0, 10);

    if (score < 1250) {
      // normal phase: occasional moving platform plus a static one
      if (randomizer < 3) {
        platforms.add(new MovingPlatform((int)random(10, width-80), -10, 70, 8, false));
      }
      platforms.add(new Platform((int)random(20, 400), -10, (int)random(50, 120), 8, false));
    } else if (score < 500) {
      // note: unreachable as written, since score < 500 also satisfies score < 1250
      if (randomizer < 3) {
        platforms.add(new MovingPlatform((int)random(10, width-80), 300, 70, 8, false));
      } else {
        platforms.add(new Platform((int)random(20, 400), 300, (int)random(50, 120), 8, false));
      }
    } else {
      // hard phase: mostly small moving "danger" platforms
      if (randomizer < 9) {
        platforms.add(new MovingPlatform((int)random(20, 400), -10, 30, 30, true));
      } else {
        platforms.add(new MovingPlatform((int)random(10, width-80), 300, 70, 8, false));
      }
    }
  }
}

void keyPressed()
{
  if (keyCode == UP) upPressed = true;
  if (keyCode == LEFT) leftPressed = true;
  if (keyCode == DOWN) downPressed = true;
  if (keyCode == RIGHT) rightPressed = true;
}

void keyReleased()
{
  if (keyCode == UP) upPressed = false;
  if (keyCode == DOWN) downPressed = false;
  if (keyCode == LEFT) leftPressed = false;
  if (keyCode == RIGHT) rightPressed = false;
}

void serialEvent(Serial myPort) {
  // expects "x,y,sel\n" from the Arduino and replies with "led,led2\n"
  String s = myPort.readStringUntil('\n');
  s = trim(s);
  if (s != null) {
    int values[] = int(split(s, ','));
    if (values.length == 3) {
      Xpos = values[0]; // raw joystick reading, 0-1023
      Ypos = (int)map(values[1], 0, 1023, 0, height);
      Sel = values[2];  // joystick button (0 = pressed)
      myPort.write(led + "," + led2 + "\n");
    }
  }
}

Platform

class Platform {
  float x, y, w, h;
  float xvel;     // static platforms have xvel = 0
  boolean danger; // danger platforms end the game on contact
  float alpha;    // platforms fade in from transparent

  Platform(int x_, int y_, int w_, int h_, boolean d)
  {
    x = x_;
    y = y_;
    w = w_;
    h = h_;
    danger = d;
    alpha = 0;
  }

  void display()
  {
    if (danger == false) {
      fill(0, 0, 0, alpha);
    } else {
      fill(255, 0, 0, alpha);
    }
    noStroke();
    rect(x, y, w, h);

    // fade in
    if (alpha < 255) {
      alpha += 4;
    }
  }

  void move()
  {
    x += xvel;
  }
}

class MovingPlatform extends Platform
{
  static final float speed = 0.9;

  MovingPlatform(int x, int y, int w, int h, boolean d)
  {
    super(x, y, w, h, d);
    xvel = speed;
  }

  void move()
  {
    super.move();
    // bounce off the side margins
    if ((x + w > width - 10) || (x < 10))
    {
      xvel *= -1;
    }
  }
}

Player

class Player
{
  float gravity = 0.14;
  float bounceVel = 9;  // upward speed after bouncing on a platform
  float maxYVel = 13;
  float maxXVel = 3;

  float x, y, xVel, yVel;
  int w, h;

  Player(int x, int y)
  {
    w = h = 20;
    this.x = x;
    this.y = y;
  }

  void display()
  {
    fill(0, 0, 240);
    ellipse(x, y, w, h);
  }

  void move()
  {
    x += xVel;
    y += yVel;

    // wrap around the screen edges horizontally
    if (x > width - w) x = 0;
    if (x < 0) x = width - w;

    // horizontal control from the joystick (this replaced the old arrow-key
    // control); Xpos is the raw 0-1023 reading, ~501-503 is the resting position
    if (!gameOver) {
      if (Xpos < 500) xVel -= 0.05;
      else if (Xpos > 503) xVel += 0.05;
      else
      {
        // stick centered: let the ball's speed drift back toward zero
        if (xVel > 0) xVel -= 0.03;
        else xVel += 0.03;
      }
    }
    if (abs(xVel) < 0.01) xVel = 0;
    xVel = min(maxXVel, xVel);
    xVel = max(-maxXVel, xVel);

    // vertical: constant gravity with a speed cap
    yVel += gravity;
    yVel = min(maxYVel, yVel);
    yVel = max(-maxYVel, yVel);
  }

  void collide(Platform plat) {
    // axis-aligned bounding-box overlap test
    if (x < plat.x + plat.w &&
        x + w > plat.x &&
        y < plat.y + plat.h &&
        y + h > plat.y)
    {
      if (plat.danger == false) {
        if (yVel > 0) { // only bounce when falling
          yVel = -bounceVel;
        }
      } else {
        // touching a danger platform ends the game
        gameOver = true;
        gameRunning = false;
        Sel = 1;
      }
    }
  }
}

Here is how it looks 🙂

What computing means to me

Before coming to this class I was not exposed to any kind of programming or coding whatsoever, so at first it definitely was challenging for me to understand the concepts of coding and how things work in general. However, as I got more and more into it, I started liking what I was doing, because I started seeing connections between the things we do in class and the things that surround me in my everyday life. I began looking at the world with a different perspective: things as simple as light switches and as complicated as some computer games would make me think, “hm, I sort of know how to make that,” which was an amazing feeling. I am not sure it has necessarily “made me a better person,” but I think the exposure to this new world of programming and building things in this class has changed the way I look at everyday objects and made me appreciate them more.

Response: Computer Vision

Computer vision is a dynamic field of computer science that is actively improving our lives and the technologies we use. Accessibility is becoming more and more important to the people working on projects that utilize it; while working on my project this week, I found many libraries that let people do things like facial recognition with just a few lines of code.
This paper showed the various ways creative coding and computer vision techniques intersect to create endless projects. Computer vision really helps create interactive and immersive experiences. This reminds me of the readings and conversations we’ve had in class about making interactivity about more than just touch. Now we can use anything from blinking, smiling, or other facial expressions and movements, to the colors around us in day-to-day life, to create an experience that is interactive in non-conventional ways.
One popular application of computer vision is Snapchat filters, which have become wildly popular. For many teenagers, computer vision has become a part of their daily lives without them even realizing it.