Across The Globe

For my final project I created an opportunity for people to jump between different places on Earth (and off Earth, for that matter) in less than a second. With the help of computer vision and a green screen behind them, people were able to see themselves in Rome, on a beach in Thailand, or on the International Space Station (ISS). To navigate these places, all you have to do is move a figure of a person around a map and place it in one of the three locations. That location then appears on the screen, and so does the person interacting with the project, because they are being filmed. In addition, there is a small carpet on the floor to step on. When you start walking or running on it, the background starts moving as well, at a speed that depends on how fast you move.

The creation of this project was challenging from the very first day. I started by connecting two pressure sensors to an Arduino and reading the time between presses of the sensors; that way it is possible to know how long a person's step takes. I then used serial communication to send this data to Processing. In addition to the pressure sensors, there are also 3 LEDs connected to the Arduino, and it also sends a different number to Processing depending on which LED is lit up. Each LED is responsible for a certain place on the map.
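As a rough illustration, the step-timing side could look something like this minimal Arduino sketch (the pin numbers and the threshold are placeholders, not my exact values):

// Minimal sketch of the step-timing idea: two pressure sensors on analog
// pins; whenever a new footstep lands, report the interval since the
// previous one over serial. THRESHOLD and the pins are placeholders.
const int sensorLeft = A0;
const int sensorRight = A1;
const int THRESHOLD = 500;   // analog reading that counts as a step
unsigned long lastStep = 0;
boolean wasPressed = false;

void setup() {
  Serial.begin(9600);
}

void loop() {
  boolean pressed = analogRead(sensorLeft) > THRESHOLD || analogRead(sensorRight) > THRESHOLD;
  if (pressed && !wasPressed) {      // a new footstep just landed
    unsigned long now = millis();
    Serial.println(now - lastStep);  // time since the previous step, in ms
    lastStep = now;
  }
  wasPressed = pressed;
}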

For the interactive map I got a box, cut 3 holes, added an LED next to each hole, designed the surface, and added another layer of cardboard inside so there would be a bottom for the holes. Two strips of conductive copper tape run to each of the holes; one strip is connected to power and the other to ground. Therefore, whenever something conductive is placed in the hole, it closes the circuit and the LED next to the hole lights up. A number is assigned to each LED, and this number is sent to Processing, so it knows at which location the figure is placed.
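In code, the map-box logic could be sketched roughly like the following. Note this is an illustration of the logic, not my exact circuit: the pins are placeholders, and this variant senses each contact with the Arduino's internal pull-up instead of the power and ground strips I wired.

// Sketch of the map box: three contact pairs, one per location. When the
// figure bridges a pair, the matching LED lights up and the location
// number (1-3) is sent to Processing whenever it changes.
const int contactPins[3] = {2, 3, 4};  // copper-tape contacts (placeholder pins)
const int ledPins[3] = {8, 9, 10};     // LEDs next to the holes
int lastLocation = 0;

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 3; i++) {
    pinMode(contactPins[i], INPUT_PULLUP);
    pinMode(ledPins[i], OUTPUT);
  }
}

void loop() {
  int location = 0;
  for (int i = 0; i < 3; i++) {
    boolean closed = digitalRead(contactPins[i]) == LOW;  // figure in this hole
    digitalWrite(ledPins[i], closed ? HIGH : LOW);
    if (closed) location = i + 1;
  }
  if (location != lastLocation) {
    Serial.println(location);  // tell Processing where the figure is
    lastLocation = location;
  }
}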

the box from the outside
the box from the inside

For making the figure I went to the Engineering Design Studio to use their laser cutter and cut 7mm thick clear acrylic. The figure is a traveler with a backpack and a round bottom. To make the bottom conductive, I first tried taping some copper tape to it, but the figure lacked weight and didn't press down properly on the copper tape strips when placed in a hole. So I had to get creative, and that's how I decided to stick 3 coins to the bottom to give the figure some weight as well as make the bottom more conductive (now I know that euros are more conductive than dirhams or dollars).

a two euro coin on the bottom of the figure

When the figure is placed somewhere on the map, the appropriate LED lights up and sends a number to Processing. In Processing I then loaded 3 videos, one from each of the 3 places, and display the appropriate video for each place. For example, when the figure is placed in Rome, Arduino recognizes it and sends a '1' to Processing, which is then set to display a video of Rome. To actually play the video, the person interacting with my project needs to start moving on the carpet. Arduino then measures the time between the footsteps and, again, sends these values to Processing. I map the incoming time value in Processing and play the video at a speed corresponding to how fast a person is walking: it slows down when a person walks very slowly, plays normally at a normal pace, and speeds up when a person runs. However, if the steps are longer than the maximum value in the map function (1.2 seconds), the video just plays at the slowest mapped speed. If there is no movement for a little while, the video stops, and it restarts when movement is detected again. As a result, the people interacting with my project get the impression that they are actually seeing the background move as they would when traveling at different speeds.
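The speed mapping itself is only a few lines. Here is a simplified sketch of the idea (the file name is a placeholder, stepTime stands in for the value arriving over serial, and the mapping endpoints are example numbers, with 1.2 seconds as the maximum like in my map function):

import processing.video.*;

Movie bg;
float stepTime = 1200;  // placeholder: ms between footsteps, normally read from serial

void setup() {
  size(640, 360);
  bg = new Movie(this, "rome.mp4");  // placeholder file name
  bg.loop();
}

void draw() {
  // map the step interval (fast run ~200 ms .. slow walk 1200 ms)
  // to a playback rate between 2x and 0.5x
  float rate = map(constrain(stepTime, 200, 1200), 200, 1200, 2.0, 0.5);
  bg.speed(rate);
  image(bg, 0, 0);
}

void movieEvent(Movie m) {
  m.read();
}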

the whole setup. people are walking on the carpet
pressure sensors on the back of the carpet

The person interacting with my project sees himself or herself in one of the places because of the green screen behind them. The camera on the computer in front of them films both the person and the green screen, and the sketch substitutes all of the green pixels with the video from the place where the figure is located.
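A simplified sketch of this green-pixel substitution could look like the following (the threshold values and file name are placeholders, not my exact numbers):

import processing.video.*;

Capture cam;
Movie bg;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
  bg = new Movie(this, "rome.mp4");  // placeholder file name
  bg.loop();
}

void draw() {
  if (cam.available()) {
    cam.read();
  }
  cam.loadPixels();
  bg.loadPixels();
  if (bg.pixels == null || bg.pixels.length == 0) {
    image(cam, 0, 0);  // no video frame yet, just show the camera
    return;
  }
  loadPixels();
  for (int i = 0; i < width*height; i++) {
    color c = cam.pixels[i];
    // treat sufficiently green pixels as "screen" and swap in the video
    boolean isGreen = green(c) > 100 && green(c) > 1.4*red(c) && green(c) > 1.4*blue(c);
    if (isGreen && i < bg.pixels.length) {
      pixels[i] = bg.pixels[i];
    } else {
      pixels[i] = c;
    }
  }
  updatePixels();
}

void movieEvent(Movie m) {
  m.read();
}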

Whenever the figure is not placed on any of the locations, this is the photo that shows up on the screen:

The IM show, where we displayed our projects to the public, was an incredible and positively overwhelming experience. For the show I had two screens: one was the computer in front of the person, where they could see themselves, and the other was turned to the public. I was really happy to have the second screen because it definitely drew more attention to my project, as people could see other people interacting with it. I was surprised by people's interest in interacting with my project, and observing their reactions was extremely rewarding. The night flew by in a second for me, but I tried to capture some moments from it.

Goffredo was really happy to be in Rome!!

Here I have a short time-lapse of people interacting with my project:


And these are some of my favorite moments filmed at the IM show. I have more footage though, and, as soon as the exam period ends, I'll make sure to make a video about the whole project and post it here! Overall, I have learned a lot, not only in the period of making this project but throughout the whole semester. The IM show was a memorable way to wind up the semester. Huge thanks to Aaron for the help and to the class for the feedback along the way!

User Testing

Before the IM show I asked two people who weren't familiar with the concept of my final project to test it, to get feedback about what could be improved. Unfortunately my project was not quite in fully working order yet because one of its components didn't work, but I got feedback about the rest.

The idea of my project is that a person stands in front of a screen and sees himself/herself on the screen. In front of the person there is a map and a figure of a person, which can be moved around different places on the map. Once the figure is placed in a certain location, that place appears as the background behind the person on the screen. Then there is a little carpet on the floor with the shape of two feet drawn on it; when stepping on this carpet, the person starts playing a video, which is essentially their background. The speed of the video depends on how fast the person is walking on the carpet. Once the figure is moved to a different location, the journey continues in a different place.

The component that didn't yet work was the green screen, which substitutes the background with a video of a certain place; therefore the person only saw a video of the place instead of himself/herself being in that place.

The first person to try out my project was Isabella. After seeing her interact with it I decided to change or add several things:

1) She asked whether to take her shoes off. I realized I hadn't completely made up my mind about it yet but, after Isabella tried it both ways, I decided to let people keep their shoes on, because then the pressure might spread more evenly over the force-sensitive resistors hidden below the carpet.

2) I will decrease the time after which the image on the screen stops when no more movement is sensed. What seemed fine before the user testing turned out to be too long, because Isabella sometimes got confused about why the image was still moving even though she had stopped walking a good while before. (A sketch of this idle timeout follows after this list.)

3) I will adjust how the speed of the video increases from walking pace to running pace, because it was sometimes either too slow or too fast.
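The timeout from point 2 is a simple millis() check. Here is a minimal Processing sketch of the idea, assuming lastStepTime gets updated whenever a footstep arrives over serial (IDLE_MS and the file name are placeholders):

import processing.video.*;

Movie bg;
int lastStepTime = 0;      // in the real sketch this is set when a footstep arrives
final int IDLE_MS = 1500;  // placeholder: idle time before pausing

void setup() {
  size(640, 360);
  bg = new Movie(this, "rome.mp4");  // placeholder file name
  bg.loop();
}

void draw() {
  // pause the background when no step has been sensed for a while
  if (millis() - lastStepTime > IDLE_MS) {
    bg.pause();
  } else {
    bg.loop();  // resume looping playback
  }
  image(bg, 0, 0);
}

void keyPressed() {
  lastStepTime = millis();  // simulate a footstep for testing
}

void movieEvent(Movie m) {
  m.read();
}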

Isabella

The second person to try out my project was Erika. After her interaction I concluded the following:

1) I will place the sensors further back, below the shape of the foot on the carpet, because Erika tended to move away a lot from the front of the carpet, where the sensors were located during the user testing, especially when she started running. When you are running and paying attention to the screen rather than your feet, it is very easy to move out of the sensors' range, so I'll try to center them as much as possible.

Overall, it was helpful to have some users actually try out the project and see what they do when they first see and use it. Even though it is never possible to predict all of the potential ways an interaction might go, it is still useful to see at least some, because other people will inevitably use the project differently than you do: you most likely use it the way it is supposed to be used (according to you). Because you intended it to work a certain way, it's hard to put yourself in another person's shoes and imagine what you would do without knowing all of the logic behind the project.

Virtual Pets

For this week's assignment I wanted to work with color tracking in Processing. I wanted to swap out objects that a person is supposedly holding in his or her hands, and initially I thought of substituting an object of a certain color with a picture of another object. However, I didn't even need to substitute the colored object; I could just display the image at a certain distance from the object instead. I wanted to create the illusion of being able to hold and move around different animals, so in my project I have 5 animals, each appearing on the screen when a certain color is present (the colors I used are pink, blue, green, red and yellow). These are the steps of creating my project:

  1. I found pictures of 5 different animals and resized them in Photoshop to approximately 200×200 pixels to make the animals smaller.
  2. I cut thin slips of paper in 5 different colors, one for each animal. I then printed out the RGB values of these colors as seen by the web cam, so I could hard-code them for each of the animals. Once the sketch runs, it displays the image from the computer's web cam. When one or several of the five colors are present in the web cam's view, the respective animal shows up on the screen, following the object of the color it is assigned to. If the color is not present, the animal doesn't show up either. A person holding a colorful slip of paper can then move it around the screen to make the animal follow it, and therefore control its motion by hand.
  3. One of the challenges was determining the right threshold value, that is, the maximum allowed difference between the pixels from the camera and the coded color. In my case this difference has to be very small for the animal to show up; otherwise the sketch can get confused and start showing the animal where it is not supposed to appear. However, that also means that if the lighting changes significantly, the RGB values of the slips of paper as seen by the web cam might also change, and the animal might not appear.

Here are pictures of the animals that show up on the screen depending on the color:

Here is a video of just one animal moving around the screen:

cat video

Here is a video of all of the animals that can appear:

all animals

Here is the code:

import processing.video.*;

Capture video;
color trackColor;
int locXpuppy, locYpuppy, locXtiger, locYtiger, locXara, locYara, locXbunny, locYbunny, locXcat, locYcat;
PImage puppy;
PImage tiger;
PImage ara;
PImage bunny;
PImage cat;

// the RGB values of the five colored slips, printed from the web cam
color doggy = color(255.0, 98.0, 203.0);
color tiger2 = color(118.0, 188.0, 91.0);
color ara2 = color(116.0, 196.0, 249.0);
color bunny2 = color(232.0, 86.0, 103.0);
color cat2 = color(255.0, 253.0, 112.0);

boolean drawPuppy = false;
boolean drawTiger = false;
boolean drawAra = false;
boolean drawBunny = false;
boolean drawCat = false;

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480, 30);
  video.start();
  puppy = loadImage("puppyx.png");
  tiger = loadImage("tigerx.png");
  ara = loadImage("araax.gif");
  bunny = loadImage("bunnyx.png");
  cat = loadImage("catttx.png");
  trackColor = doggy;
}

void draw() {
  if (video.available()) {
    video.read();
  }
  video.loadPixels();
  float dist = 20;  // maximum color difference that still counts as a match
  drawPuppy = false;
  drawTiger = false;
  drawAra = false;
  drawBunny = false;
  drawCat = false;
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      int loc = (video.width-x-1) + (y*width);  // mirrored pixel index
      color pix = video.pixels[loc];
      float r1 = red(pix);
      float g1 = green(pix);
      float b1 = blue(pix);

      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);

      float r3 = red(tiger2);
      float g3 = green(tiger2);
      float b3 = blue(tiger2);

      float r4 = red(ara2);
      float g4 = green(ara2);
      float b4 = blue(ara2);

      float r5 = red(bunny2);
      float g5 = green(bunny2);
      float b5 = blue(bunny2);

      float r6 = red(cat2);
      float g6 = green(cat2);
      float b6 = blue(cat2);

      // distance in RGB space between this pixel and each tracked color
      float diff = dist(r1, g1, b1, r2, g2, b2);
      float diff2 = dist(r1, g1, b1, r3, g3, b3);
      float diff3 = dist(r1, g1, b1, r4, g4, b4);
      float diff4 = dist(r1, g1, b1, r5, g5, b5);
      float diff5 = dist(r1, g1, b1, r6, g6, b6);

      if (diff < dist) {
        drawPuppy = true;
        dist = diff;
        locXpuppy = x;
        locYpuppy = y;
      }

      if (diff2 < dist) {
        drawTiger = true;
        dist = diff2;
        locXtiger = x;
        locYtiger = y;
      }

      if (diff3 < dist) {
        drawAra = true;
        dist = diff3;
        locXara = x;
        locYara = y;
      }

      if (diff4 < dist) {
        drawBunny = true;
        dist = diff4;
        locXbunny = x;
        locYbunny = y;
      }

      if (diff5 < dist) {
        drawCat = true;
        dist = diff5;
        locXcat = x;
        locYcat = y;
      }
    }
  }
  video.updatePixels();

  // draw the mirrored camera image
  pushMatrix();
  translate(width, 0);
  scale(-1, 1);
  image(video, 0, 0);
  popMatrix();

  // draw each animal slightly above its tracked color
  if (drawPuppy) {
    image(puppy, locXpuppy-100, locYpuppy-200);
  }
  if (drawTiger) {
    image(tiger, locXtiger-100, locYtiger-200);
  }
  if (drawAra) {
    image(ara, locXara-100, locYara-200);
  }
  if (drawBunny) {
    image(bunny, locXbunny-100, locYbunny-200);
  }
  if (drawCat) {
    image(cat, locXcat-100, locYcat-200);
  }
}

void mousePressed() {
  // sample a new color for the puppy and print its RGB values
  int loc = (video.width-mouseX-1) + (mouseY*width);
  trackColor = video.pixels[loc];
  println(red(trackColor)+" "+green(trackColor)+" "+blue(trackColor));
}

P.S. Because at first the code for what I just described didn't work properly, I started working on a slightly different idea. Even though Aaron helped me fix the code above (thanks for that!!), I decided to also include the other code. The idea behind it is that there are again 5 animals loaded into the sketch, but instead of following precoded colors, the color of interest can be picked with a mouse press. Once you press the mouse on, for example, a pink object, the animal will then follow the pink object. Also, there can only be one animal present at a time, but they can be changed by pressing the key "c": a random function then displays one of the 5 animal images.

This is the code for the other example:

import processing.video.*;

Capture video;
color trackColor;
int locX, locY;
PImage puppy;
PImage tiger;
PImage ara;
PImage bunny;
PImage cat;
int randomNumber;

void setup() {
  size(640, 480);
  video = new Capture(this, 640, 480, 30);
  video.start();
  puppy = loadImage("puppyx.png");
  tiger = loadImage("tigerx.png");
  ara = loadImage("araax.gif");
  bunny = loadImage("bunnyx.png");
  cat = loadImage("catttx.png");
}

void draw() {
  if (video.available()) {
    video.read();
  }
  video.loadPixels();
  float dist = 500;
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      int loc = (video.width-x-1) + (y*width);
      color pix = video.pixels[loc];
      float r1 = red(pix);
      float g1 = green(pix);
      float b1 = blue(pix);

      float r2 = red(trackColor);
      float g2 = green(trackColor);
      float b2 = blue(trackColor);

      float diff = dist(r1, g1, b1, r2, g2, b2);

      if (diff < dist) {
        dist = diff;
        locX = x;
        locY = y;
      }
    }
  }
  video.updatePixels();
  pushMatrix();
  translate(width, 0);
  scale(-1, 1);
  image(video, 0, 0);
  popMatrix();
  if (randomNumber == 0) {
    image(puppy, locX-100, locY-200);
  } else if (randomNumber == 1) {
    image(tiger, locX-100, locY-200);
  } else if (randomNumber == 2) {
    image(ara, locX-100, locY-200);
  } else if (randomNumber == 3) {
    image(bunny, locX-100, locY-200);
  } else if (randomNumber == 4) {
    image(cat, locX-100, locY-200);
  }
}

void mousePressed() {
  int loc = (video.width-mouseX-1) + (mouseY*width);
  trackColor = video.pixels[loc];
}

void keyPressed() {
  if (key == 'c') {
    randomNumber = int(random(0, 5));
  }
}

Here is a video of an animal following a color selected with a mouse press in the second version:

animals second version

Artists Working with Material Evidence

In his paper about computer vision for artists and designers, Golan Levin talks about techniques like frame differencing, background subtraction and brightness thresholding, which are used, for example, to track people or other objects and their movement in real life and transfer it to a screen, mainly for the purpose of artwork. He describes all of these as simple yet very effective concepts, the key being that they are all suitable for novice programmers. He also mentions several examples made either in the 20th century or the early 2000s, which makes it very interesting to compare how much computer vision has developed in recent years.

One example that particularly spoke to me was the Suicide Box and the problem raised by it, namely "the inherent suspicion of artists working with material evidence". In this particular case there was an inconsistency between the number of people who committed suicide according to the interactive art piece and the data from the Port Authority. According to the Suicide Box, 4 more people had committed suicide, and that raised suspicion about whether the data represented in art pieces can be trusted. It reminded me of the French artist who visited us a while ago and shared his projects, where he had also done some sort of data visualization, for example, of death and birth rates and of train departure times. I realized that in those cases people didn't seem to have a problem relying on the data, partly because it is of less significance than suicide attempts. Therefore, with so many more possibilities nowadays, it is important to keep a critical mind and check where the information is coming from (essentially, in the Suicide Box case, the events counted could have been random objects falling vertically and not necessarily people), although I also think it's great how many opportunities there now are to make things look great and carry people away.

The Meaning of Computing to Me Now

I came to this class not having much knowledge about the things that soon became a part of my every week. I had seen a breadboard, but I hadn't seen an Arduino, and I had no idea how you can actually CONTROL things the way you want them to work. That included writing code, which also became a normal part of the process, even though it seemed so intimidating at first. Every time we started something new, it was almost like trying to read a language I didn't know. And when I finally at least somewhat got the hang of it, we had already moved on to something new and the loop started again. However, I have learned so much in such a short period of time. First, I have challenged my creativity a lot just to come up with ideas for projects. I have then learned to struggle, ask for help, keep trying and figure things out. Some of the most satisfying moments while creating projects have definitely been the ones when I finally manage to figure something out, especially on my own. Each moment like this makes me believe a little more in my abilities, even though I feel like I'm just at the very beginning of this whole new world. And there is always something to learn in it, because it just keeps expanding and advancing.

People around me have been wondering a lot this semester what it is that I am doing and why I sometimes disappear into the IM lab for hours. I then have to reassure them that it is not some kind of Narnia and, by explaining what we do in class, I become more assured myself that I like being a part of the Interactive Media world, and I feel it's a great way to express myself and my ideas.

Virtual Planets Meet Actual Planets

For the assignment of connecting Processing to Arduino I decided to use the Solar System I created in Processing two weeks ago and to build a physical model of it using LEDs as outputs. I stopped the rotating motion of the planets, aligned them in one line in Processing, and created a similar model in real space. Once you click on a planet in Processing, the corresponding planet lights up in my model. This is what I did to create the project:

  1. I made some alterations to my Processing sketch so that the initial view of the planets would be different than before, and I also took away their speed. Because the planets are now on the same line, I determined the X-coordinates for each of them, so I could divide the screen into parts and make the appropriate LED in my model light up.
  2. In Arduino I set 8 LEDs as outputs, each representing one of the Solar System's planets. The logic behind my project is that when the mouse is pressed on one of the planets, the respective LED lights up. When the mouse is pressed again, the LED turns off, so you can control which planets you want to light up and which ones you don't.
  3. For the actual model I created a cardboard base with a starry night sky and glued on the 8 planets and the Sun. One LED is pierced through each planet. There is also the name of each planet next to it, as well as information about its relative size in the Solar System. The whole construction is standing on two awesome tapes that I found in the IM lab!

The code in Processing:

import processing.serial.*;
import peasy.*;

Serial myPort;
PeasyCam cam;

int led2 = 0;
int led3 = 0;
int led4 = 0;
int led5 = 0;
int led6 = 0;
int led7 = 0;
int led8 = 0;
int led9 = 0;

Planet earth;
Planet mercury;
Planet venus;
Planet mars;
Planet jupiter;
Planet saturn;
Planet uranus;
Planet neptune;
Planet sun;
Planet moon;

boolean orbitSwitch = true;
boolean moonSwitch = true;

void setup() {
  String portname = Serial.list()[2];
  myPort = new Serial(this, portname, 9600);
  myPort.clear();
  myPort.bufferUntil('\n');
  size(1280, 720, P3D);
  float fov = PI/3.0;
  float aspect = float(width)/float(height);
  perspective(fov, aspect, height, 0);
  cam = new PeasyCam(this, 200);
  cam.setMinimumDistance(50);
  cam.setMaximumDistance(5000);

  earth = new Planet(70, "earth.jpg", 6, 0);
  mercury = new Planet(30, "pl_mercury.jpg", 5, 0);
  venus = new Planet(50, "ven0aaa2.jpg", 6, 0);
  mars = new Planet(85, "2k_mars.jpg", 6, 0);
  jupiter = new Planet(110, "Jupiter.jpg", 12, 0);
  saturn = new Planet(140, "2k_saturn.jpg", 10, 0);
  uranus = new Planet(165, "2k_uranus.jpg", 8, 0);
  neptune = new Planet(190, "preview_neptune.jpg", 8, 0);
  sun = new Planet(0, "texture_sun.jpg", 11, 0);
  moon = new Planet(60, "moon.jpg", 2, 0);
}

void draw() {
  background(0);

  // toggle each planet's LED state when the mouse is pressed over it
  if (mousePressed) {
    if (mouseX > 700 && mouseX <= 750) {
      if (led2 == 0) led2 = 1;
      else led2 = 0;
    }
    if (mouseX > 750 && mouseX <= 850) {
      if (led3 == 0) led3 = 1;
      else led3 = 0;
    }
    if (mouseX > 850 && mouseX <= 900) {
      if (led4 == 0) led4 = 1;
      else led4 = 0;
    }
    if (mouseX > 900 && mouseX <= 950) {
      if (led5 == 0) led5 = 1;
      else led5 = 0;
    }
    if (mouseX > 950 && mouseX <= 1050) {
      if (led6 == 0) led6 = 1;
      else led6 = 0;
    }
    if (mouseX > 1050 && mouseX <= 1100) {
      if (led7 == 0) led7 = 1;
      else led7 = 0;
    }
    if (mouseX > 1100 && mouseX <= 1150) {
      if (led8 == 0) led8 = 1;
      else led8 = 0;
    }
    if (mouseX > 1200 && mouseX <= 1280) {
      if (led9 == 0) led9 = 1;
      else led9 = 0;
    }
  }

  // planets
  earth.planet();
  mercury.planet();
  venus.planet();
  mars.planet();
  jupiter.planet();
  saturn.planet();
  uranus.planet();
  neptune.planet();
  sun.planet();

  // orbit circles
  if (orbitSwitch == true) {
    earth.orbit();
    mercury.orbit();
    venus.orbit();
    mars.orbit();
    jupiter.orbit();
    saturn.orbit();
    uranus.orbit();
    neptune.orbit();
  }

  // moon
  if (moonSwitch == true) {
    moon.planet();
  }
}

void serialEvent(Serial myPort) {
  // answer every line from Arduino with the current LED states
  myPort.write(led2+","+led3+","+led4+","+led5+","+led6+","+led7+","+led8+","+led9+"\n");
}

void keyPressed() {
  if (key == 'o') {
    orbitSwitch = !orbitSwitch;
  }
  if (key == 'm') {
    moonSwitch = !moonSwitch;
  }
}

class Planet {
  int orbit;
  float x, z;
  PImage img;
  int planetSize;
  float angle;
  float speed;
  int numPointsW;
  int numPointsH_2pi;
  int numPointsH;

  float[] coorX;
  float[] coorY;
  float[] coorZ;
  float[] multXZ;

  Planet(int _orbit, String _imageName, int _planetSize, float _speed) {
    orbit = _orbit;
    x = 0;
    z = 0;
    img = loadImage(_imageName);
    planetSize = _planetSize;
    angle = 0.;
    speed = _speed;
    initializeSphere(30, 30);
  }

  void planet() {
    // draw the textured sphere at its position on the orbit
    pushMatrix();
    x = cos(angle)*orbit;
    z = sin(angle)*(orbit+7);
    translate(x, 0, z);
    noStroke();
    textureSphere(planetSize, planetSize, planetSize, img);
    popMatrix();
    angle += speed;
  }

  void orbit() {
    stroke(255, 120);
    noFill();
    pushMatrix();
    rotateX(radians(90));
    ellipseMode(CENTER);
    ellipse(width/2, height/2, orbit*2, orbit*2+14);
    popMatrix();
  }

  void initializeSphere(int numPtsW, int numPtsH_2pi) {
    // The number of points around the width and height
    numPointsW = numPtsW+1;
    numPointsH_2pi = numPtsH_2pi; // How many actual pts around the sphere (not just from top to bottom)
    numPointsH = ceil((float)numPointsH_2pi/2)+1; // How many pts from top to bottom (abs(....) b/c of the possibility of an odd numPointsH_2pi)

    coorX = new float[numPointsW];  // All the x-coor in a horizontal circle radius 1
    coorY = new float[numPointsH];  // All the y-coor in a vertical circle radius 1
    coorZ = new float[numPointsW];  // All the z-coor in a horizontal circle radius 1
    multXZ = new float[numPointsH]; // The radius of each horizontal circle (that you will multiply with coorX and coorZ)

    for (int i = 0; i < numPointsW; i++) { // For all the points around the width
      float thetaW = i*2*PI/(numPointsW-1);
      coorX[i] = sin(thetaW);
      coorZ[i] = cos(thetaW);
    }

    for (int i = 0; i < numPointsH; i++) { // For all points from top to bottom
      if (int(numPointsH_2pi/2) != (float)numPointsH_2pi/2 && i == numPointsH-1) { // If the numPointsH_2pi is odd and it is at the last pt
        float thetaH = (i-1)*2*PI/(numPointsH_2pi);
        coorY[i] = cos(PI+thetaH);
        multXZ[i] = 0;
      } else {
        // The numPointsH_2pi and 2 below allows there to be a flat bottom if the numPointsH is odd
        float thetaH = i*2*PI/(numPointsH_2pi);
        // PI+ below makes the top always the point instead of the bottom.
        coorY[i] = cos(PI+thetaH);
        multXZ[i] = sin(thetaH);
      }
    }
  }

  void textureSphere(float rx, float ry, float rz, PImage t) {
    // These are so we can map certain parts of the image on to the shape
    float changeU = t.width/(float)(numPointsW-1);
    float changeV = t.height/(float)(numPointsH-1);
    float u = 0; // Width variable for the texture
    float v = 0; // Height variable for the texture

    beginShape(TRIANGLE_STRIP);
    texture(t);
    for (int i = 0; i < (numPointsH-1); i++) { // For all the rings but top and bottom
      // Goes into the array here instead of loop to save time
      float coory = coorY[i];
      float cooryPlus = coorY[i+1];

      float multxz = multXZ[i];
      float multxzPlus = multXZ[i+1];

      for (int j = 0; j < numPointsW; j++) { // For all the pts in the ring
        normal(-coorX[j]*multxz, -coory, -coorZ[j]*multxz);
        vertex(coorX[j]*multxz*rx, coory*ry, coorZ[j]*multxz*rz, u, v);
        normal(-coorX[j]*multxzPlus, -cooryPlus, -coorZ[j]*multxzPlus);
        vertex(coorX[j]*multxzPlus*rx, cooryPlus*ry, coorZ[j]*multxzPlus*rz, u, v+changeV);
        u += changeU;
      }
      v += changeV;
      u = 0;
    }
    endShape();
  }
}

The code in Arduino:

int ledPin2 = 3;
int ledPin3 = 4;
int ledPin4 = 5;
int ledPin5 = 6;
int ledPin6 = 7;
int ledPin7 = 8;
int ledPin8 = 9;
int ledPin9 = 10;

void setup() {
  Serial.begin(9600);
  Serial.println("0,0");  // kick off the handshake with Processing
  pinMode(ledPin2, OUTPUT);
  pinMode(ledPin3, OUTPUT);
  pinMode(ledPin4, OUTPUT);
  pinMode(ledPin5, OUTPUT);
  pinMode(ledPin6, OUTPUT);
  pinMode(ledPin7, OUTPUT);
  pinMode(ledPin8, OUTPUT);
  pinMode(ledPin9, OUTPUT);
}

void loop() {
  while (Serial.available() > 0) {
    // read the eight comma-separated LED states sent by Processing
    int input2 = Serial.parseInt();
    int input3 = Serial.parseInt();
    int input4 = Serial.parseInt();
    int input5 = Serial.parseInt();
    int input6 = Serial.parseInt();
    int input7 = Serial.parseInt();
    int input8 = Serial.parseInt();
    int input9 = Serial.parseInt();
    if (Serial.read() == '\n') {
      if (input2 == 1) {
        digitalWrite(ledPin2, HIGH);
      }
      if (input2 == 0) {
        digitalWrite(ledPin2, LOW);
      }
      if (input3 == 1) {
        digitalWrite(ledPin3, HIGH);
      }
      if (input3 == 0) {
        digitalWrite(ledPin3, LOW);
      }
      if (input4 == 1) {
        digitalWrite(ledPin4, HIGH);
      }
      if (input4 == 0) {
        digitalWrite(ledPin4, LOW);
      }
      if (input5 == 1) {
        digitalWrite(ledPin5, HIGH);
      }
      if (input5 == 0) {
        digitalWrite(ledPin5, LOW);
      }
      if (input6 == 1) {
        digitalWrite(ledPin6, HIGH);
      }
      if (input6 == 0) {
        digitalWrite(ledPin6, LOW);
      }
      if (input7 == 1) {
        digitalWrite(ledPin7, HIGH);
      }
      if (input7 == 0) {
        digitalWrite(ledPin7, LOW);
      }
      if (input8 == 1) {
        digitalWrite(ledPin8, HIGH);
      }
      if (input8 == 0) {
        digitalWrite(ledPin8, LOW);
      }
      if (input9 == 1) {
        digitalWrite(ledPin9, HIGH);
      }
      if (input9 == 0) {
        digitalWrite(ledPin9, LOW);
      }
    }
  }
}

Some pictures:

all of the planets are on
the virtual planets meet the actual planets (and the tapes!!)

the labels

Here are 2 videos of me turning the LEDs in the planets on and off:

The Challenges of Digitization

In The Digitization of Just About Everything, digitization is described as “the work of turning all kinds of information and media—text, sounds, photos, video, data from instruments and sensors, and so on—into the ones and zeroes that are the native language of computers and their kin” (79). I understand it as, simply, what makes all the many computers, gadgets, apps, etc. that now basically define our daily lives work. It made me realize how large a flow of information is going on around us all the time. As a driver, I use the extremely famous app Waze whenever I go somewhere by car, and it has certainly become a habit. However, I had never thought about how it actually works, what is behind it, and how much information my phone is actually providing to the servers, keeping the infinite flow of information going.

It is also mentioned in the text that apps, movies or digitized versions of, for example, books have “zero marginal cost of reproduction” (81), meaning that it might cost a lot of money to create the first copy, but creating the next ones costs almost nothing. They are also extremely easy to reproduce, digitized copies don't lose their quality, etc. However, this also raises a problem that is not mentioned in the text: violation of copyright becomes much easier as everything is digitized. So, even though digitization is easing our lives incredibly, there are also certain issues that need to be properly dealt with as information spreads rapidly in this era of digitization.

Where are We From?

For this week's assignment Lama and I decided to go for data visualization in Processing, as this is a field with a lot of potential for the future. There are certainly all kinds of data that one can visualize and, as we were looking for ideas and inspiration, we came across a full spectrum of data that people have visualized, for example, the crime rate in Northern Italy, the average number of unhealthy teeth in 12-year-old girls in different countries of the world, etc. Our idea eventually connected to NYUAD, because we are all part of this university, and we decided to use Processing to show the countries represented in the Classes of 2019 and 2020. There are currently info sheets available about these classes, called “By the Numbers”, but the only information they give is a map where the countries students come from are shown in a different color. There are no names of the countries and no interactivity with the map. We wanted to add both of these features in our version of this data visualization:

  1. To get started we had to collect the data from the two world maps available for the Class of 2019 and the Class of 2020. Because we did not have a list of the names of the countries represented, the only way to go about this was to identify all of the countries using our geography and map-reading skills. After that we wrote them out in an Excel spreadsheet.
the available info sheet about the Class of 2020
the list of countries in an Excel spreadsheet for the Class of 2020

  2. To create the world map we used a library called GiCentre, which draws all the countries of the world in the Processing sketch. We added two tables to our Processing sketch, each containing the list of countries and their representative country codes, which we acquired from the GiCentre table of all the world's countries and their codes.
  3. After including the data about the countries represented at NYUAD in the Processing file, we created several modes for visualizing the data.
  • The first mode is called the Explore mode: you start with a blank world map and, by hovering around the canvas, the countries represented at NYUAD get filled in with a dark purple color whenever the mouse is inside the country's border. The name of the country also appears on the left side of the canvas. These are pictures of a few countries, which get filled in when the mouse hovers over them:
the Explore mode when the mouse is not hovered over any country represented at NYUAD Class of 2020
when the mouse is over The United States
when the mouse is over Costa Rica
when the mouse is over The Czech Republic
when the mouse is over Russia
when the mouse is over Lebanon
when the mouse is over Latvia

  • The second mode is called the Simple mode, because it simply displays all of the countries at once, again using the dark purple fill color. This is a picture of the Simple mode for the Class of 2020:
the Simple mode when the mouse is hovered over the small square that changes the modes

To switch between the Simple mode and the Explore mode there is a button at the bottom of the screen: a small white square. Hovering over the button changes the current mode.

The code:

import org.gicentre.geomap.*;

// Simple interactive world map that queries the attributes
// and highlights selected countries.

GeoMap geoMap;
Table tab2019;
Table tab2020;

void setup()
{
  size(1000, 600);

  // reads map data
  geoMap = new GeoMap(100, 70, 800, 400, this);
  geoMap.readFile("world");

  // read Class of 2019 data
  tab2019 = loadTable("co2019.csv");

  // read Class of 2020 data
  tab2020 = loadTable("co2020.csv");

  // Set up text appearance.
  textAlign(LEFT, BOTTOM);
  textSize(18);

  // viewing the table in the console
  //geoMap.writeAttributesAsTable(300);
}

int buttonColor;
int togglePosX = 100;
int togglePosY = 500;
int toggleSize = 25;

void draw()
{
  background(202, 226, 245); // Ocean colour
  stroke(0, 40);             // Boundary colour

  fill(75, 0, 130);
  text("Where is the NYUAD Class of 2020 From?", 300, 50);

  fill(buttonColor);
  rect(togglePosX, togglePosY, toggleSize, toggleSize);
  fill(75, 0, 130);
  text("Hover to view all countries", 135, 520);

  // button that toggles between the two modes
  if (mouseX > togglePosX && mouseX < togglePosX+toggleSize && mouseY > togglePosY) {
    buttonColor = 0;
    originalMode();
  } else {
    buttonColor = 255;
    exploreMode();
  }
}

void originalMode() {
  // Draw countries
  for (int id : geoMap.getFeatures().keySet())
  {
    String countryCode = geoMap.getAttributeTable().findRow(str(id), 0).getString("ISO_A3");
    TableRow dataRow = tab2020.findRow(countryCode, 1);

    if (dataRow != null)
    {
      fill(75, 0, 130);  // country represented at NYUAD
    }
    else
    {
      fill(250);  // No data found in table.
    }

    geoMap.draw(id);  // Draw country
  }
}

void exploreMode() {
  // Draw countries
  for (int id : geoMap.getFeatures().keySet())
  {
    String countryCode = geoMap.getAttributeTable().findRow(str(id), 0).getString("ISO_A3");
    TableRow dataRow = tab2020.findRow(countryCode, 1);

    int theid = -1;
    String hoveredCountry = "";
    String name = "";

    // rolling over each country
    if (dataRow != null)
    {
      theid = geoMap.getID(mouseX, mouseY);
      if (theid != -1) {
        hoveredCountry = geoMap.getAttributeTable().findRow(str(theid), 0).getString("ISO_A3");
      }
    }

    // compare the codes with equals() rather than ==, since == only
    // compares String references
    if (countryCode.equals(hoveredCountry))  // hovered country matches this one
    {
      geoMap.draw(theid);
      fill(75, 0, 130);
      name = geoMap.getAttributeTable().findRow(str(theid), 0).getString("NAME");
      text(name, 100, 300);
    }
    else  // No data found in table.
    {
      fill(250);
    }
    geoMap.draw(id);  // Draw country
  }
}

Lastly, a massive thanks to Aaron for helping us figure out the code for the Explore mode!! We were struggling for the longest time with making just one country show up at a time…

Solar System

Ever since the midterm project I've been thinking about ways to continue with the universe theme that was going on in that project. This week's assignment finally seemed like a good opportunity, so I decided to recreate the Solar System in Processing. There are 8 planets (Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune) revolving around the Sun. Since Pluto was demoted from planet status in 2006, it is not included in my project either.

For creating the Solar System I only had to use one class: it is possible to create both the planets and their orbit circles with the same class.

1. The function planet creates all the planets as well as the Sun in the middle of the sketch. It is defined by 4 variables: orbit size, the texture image wrapped around the planet, planet size and the planet's speed. The planet itself is a sphere. After creating this class it was easy to quickly add new planets by just changing the variables mentioned above. One of the most interesting parts of this project was setting a texture for the spheres, which is done in the functions initializeSphere and textureSphere. I found pictures of the textures of these planets, added them to the sketch and assigned them to the corresponding planets. It gives the whole Solar System a more realistic look.

2. The function orbit creates the orbit circles for the planets. The orbit is an ellipse with a diameter twice the orbit size of the planet. There is one orbit circle for each planet rotating around the Sun. Because orbit circles are obviously not visible in nature, I added a feature so that pressing the key “o” turns the orbit circles on and off.

orbit circles are on
orbit circles are off

3. There is also a Moon that I added to the Solar System. Because the Moon is not always visible, it is also possible to make it appear or disappear by pressing the key “m”. The Moon is located between the Earth and Venus and does not rotate around the Sun.

4. Lastly, I am using the PeasyCam camera library to navigate around the Solar System by changing the perspective, zooming in and out, etc. That way it is possible to look at the Solar System from different angles.

from above
from the side

Here is the code for my Solar System:

import peasy.*;

PeasyCam cam;

Planet earth;
Planet mercury;
Planet venus;
Planet mars;
Planet jupiter;
Planet saturn;
Planet uranus;
Planet neptune;
Planet sun;
Planet moon;

boolean orbitSwitch = true;
boolean moonSwitch = true;

void setup() {
  size(1280, 720, P3D);
  float fov = PI/3.0;
  float aspect = float(width)/float(height);
  perspective(fov, aspect, height, 0);
  cam = new PeasyCam(this, 100);
  cam.setMinimumDistance(50);
  cam.setMaximumDistance(5000);

  earth = new Planet(70, "earth.jpg", 6, .012);
  mercury = new Planet(30, "pl_mercury.jpg", 5, .016);
  venus = new Planet(50, "ven0aaa2.jpg", 6, .015);
  mars = new Planet(90, "2k_mars.jpg", 6, .01);
  jupiter = new Planet(110, "Jupiter.jpg", 12, .009);
  saturn = new Planet(130, "2k_saturn.jpg", 10, .008);
  uranus = new Planet(150, "2k_uranus.jpg", 8, .007);
  neptune = new Planet(170, "preview_neptune.jpg", 8, .006);
  sun = new Planet(0, "texture_sun.jpg", 11, 0);
  moon = new Planet(60, "moon.jpg", 2, 0);
}

void draw() {
  background(0);

  // planets
  earth.planet();
  mercury.planet();
  venus.planet();
  mars.planet();
  jupiter.planet();
  saturn.planet();
  uranus.planet();
  neptune.planet();
  sun.planet();

  // orbit circles
  if (orbitSwitch == true) {
    earth.orbit();
    mercury.orbit();
    venus.orbit();
    mars.orbit();
    jupiter.orbit();
    saturn.orbit();
    uranus.orbit();
    neptune.orbit();
  }

  // moon
  if (moonSwitch == true) {
    moon.planet();
  }
}

void keyPressed() {
  if (key == 'o') {
    orbitSwitch = !orbitSwitch;
  }
  if (key == 'm') {
    moonSwitch = !moonSwitch;
  }
}

class Planet {
  int orbit;
  float x, z;
  PImage img;
  int planetSize;
  float angle;
  float speed;
  int numPointsW;
  int numPointsH_2pi;
  int numPointsH;

  float[] coorX;
  float[] coorY;
  float[] coorZ;
  float[] multXZ;

  Planet(int _orbit, String _imageName, int _planetSize, float _speed) {
    orbit = _orbit;
    x = 0;
    z = 0;
    img = loadImage(_imageName);
    planetSize = _planetSize;
    angle = 0.;
    speed = _speed;
    initializeSphere(30, 30);
  }

  void planet() {
    // draw the textured sphere at its position on the orbit
    pushMatrix();
    x = cos(angle)*orbit;
    z = sin(angle)*(orbit+7);
    translate(x, 0, z);
    noStroke();
    textureSphere(planetSize, planetSize, planetSize, img);
    popMatrix();
    angle += speed;
  }

  void orbit() {
    stroke(255, 120);
    noFill();
    pushMatrix();
    rotateX(radians(90));
    ellipse(0, 0, orbit*2, orbit*2+14);
    popMatrix();
  }

  void initializeSphere(int numPtsW, int numPtsH_2pi) {
    // The number of points around the width and height
    numPointsW = numPtsW+1;
    numPointsH_2pi = numPtsH_2pi; // How many actual pts around the sphere (not just from top to bottom)
    numPointsH = ceil((float)numPointsH_2pi/2)+1; // How many pts from top to bottom (abs(....) b/c of the possibility of an odd numPointsH_2pi)

    coorX = new float[numPointsW];  // All the x-coor in a horizontal circle radius 1
    coorY = new float[numPointsH];  // All the y-coor in a vertical circle radius 1
    coorZ = new float[numPointsW];  // All the z-coor in a horizontal circle radius 1
    multXZ = new float[numPointsH]; // The radius of each horizontal circle (that you will multiply with coorX and coorZ)

    for (int i = 0; i < numPointsW; i++) { // For all the points around the width
      float thetaW = i*2*PI/(numPointsW-1);
      coorX[i] = sin(thetaW);
      coorZ[i] = cos(thetaW);
    }

    for (int i = 0; i < numPointsH; i++) { // For all points from top to bottom
      if (int(numPointsH_2pi/2) != (float)numPointsH_2pi/2 && i == numPointsH-1) { // If the numPointsH_2pi is odd and it is at the last pt
        float thetaH = (i-1)*2*PI/(numPointsH_2pi);
        coorY[i] = cos(PI+thetaH);
        multXZ[i] = 0;
      } else {
        // The numPointsH_2pi and 2 below allows there to be a flat bottom if the numPointsH is odd
        float thetaH = i*2*PI/(numPointsH_2pi);
        // PI+ below makes the top always the point instead of the bottom.
        coorY[i] = cos(PI+thetaH);
        multXZ[i] = sin(thetaH);
      }
    }
  }

  void textureSphere(float rx, float ry, float rz, PImage t) {
    // These are so we can map certain parts of the image on to the shape
    float changeU = t.width/(float)(numPointsW-1);
    float changeV = t.height/(float)(numPointsH-1);
    float u = 0; // Width variable for the texture
    float v = 0; // Height variable for the texture

    beginShape(TRIANGLE_STRIP);
    texture(t);
    for (int i = 0; i < (numPointsH-1); i++) { // For all the rings but top and bottom
      // Goes into the array here instead of loop to save time
      float coory = coorY[i];
      float cooryPlus = coorY[i+1];

      float multxz = multXZ[i];
      float multxzPlus = multXZ[i+1];

      for (int j = 0; j < numPointsW; j++) { // For all the pts in the ring
        normal(-coorX[j]*multxz, -coory, -coorZ[j]*multxz);
        vertex(coorX[j]*multxz*rx, coory*ry, coorZ[j]*multxz*rz, u, v);
        normal(-coorX[j]*multxzPlus, -cooryPlus, -coorZ[j]*multxzPlus);
        vertex(coorX[j]*multxzPlus*rx, cooryPlus*ry, coorZ[j]*multxzPlus*rz, u, v+changeV);
        u += changeU;
      }
      v += changeV;
      u = 0;
    }
    endShape();
  }
}

Here are some pictures I took throughout the progress of creating the Solar System. One of the most challenging parts was determining the distances between the planets so they wouldn't overlap when rotating; however, depending on the angle you look from, overlaps are not completely avoidable.

Finally here are some videos of the rotation of the planets from different angles with or without the orbits:

solar system

solar system 2

from above

without orbits


Variability in New Media

In “The Language of New Media” the author Lev Manovich talks about the evolution of new media and how this new concept and form of media took over everything that was previously known and used. He discusses the principles of new media, and the one I find especially intriguing is variability. “New media is characterized by variability!” What is interesting is how much variability there is in new media compared to old media. With the advancement of computers it is now possible to create an almost infinite number of variations, generated automatically by a computer. For example, even in Processing one can alter their creations in so many ways that would be impossible in old media. Variability also closely connects to interactivity, as the user can create new variations of the elements used in the object. I also liked the metaphor of a map that the author uses when describing scalability (a principle of variability): with automatic generation it is possible to include as much or as little detail about an object as wanted, just like maps of different scales provide more or less detailed information about an area. I believe that all this variability is going to continue defining new media in the future and distinguishing it from other forms of media.