Across The Globe

For my final project I created an opportunity for people to jump between different places on Earth (and off Earth, for that matter) in less than a second. With the help of computer vision and a green screen behind them, people were able to see themselves in Rome, on a beach in Thailand, or on the International Space Station (ISS). To navigate between these places, all you have to do is move a small figure of a person around the map and place it in one of the three locations. That location then appears on the screen, and so does the person interacting with the project, because they are being filmed. In addition, there is a small carpet on the floor to step on. When you start walking or running on it, the background starts moving as well, depending on how fast you move.

The creation of this project was challenging from the very first day. I started by connecting two pressure sensors to an Arduino and reading the time between presses of the sensors. That way it is possible to know how long a person's step takes. Then I set up serial communication to send this data to Processing. In addition to the pressure sensors, there are also 3 LEDs connected to the Arduino, which sends a different number to Processing depending on which LED is lit up. Each LED is responsible for a certain place on the map.
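I'm not posting the full code here, but the step-timing part of the Arduino sketch works roughly like the snippet below. The pin numbers, the analog threshold and the debounce delay are assumptions for the sake of the example, not my exact values, and the interval is sent as a "step" message so Processing can tell it apart from the location numbers:

```
// Rough sketch of the step-timing logic (pins, threshold and debounce are assumptions).
const int SENSOR_A = A0;        // first pressure sensor under the carpet
const int SENSOR_B = A1;        // second pressure sensor
const int THRESHOLD = 300;      // analog reading that counts as a press

unsigned long lastStepTime = 0; // when the previous step was detected

void setup() {
  Serial.begin(9600);
}

void loop() {
  // a press on either sensor counts as a footstep
  bool stepped = analogRead(SENSOR_A) > THRESHOLD || analogRead(SENSOR_B) > THRESHOLD;
  if (stepped) {
    unsigned long now = millis();
    if (lastStepTime > 0) {
      Serial.print("step,");
      Serial.println(now - lastStepTime);  // time between footsteps, in milliseconds
    }
    lastStepTime = now;
    delay(150);                            // crude debounce so one step isn't counted twice
  }
}
```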

For the interactive map I got a box, cut 3 holes in it, added an LED next to each hole, designed the surface and added another layer of cardboard inside, so there would be a bottom for the holes. Two strips of conductive copper tape run into each of the holes: one strip is connected to power and the other to ground. Therefore, whenever something conductive is placed in a hole, it closes the circuit and the LED next to that hole lights up. A number is assigned to each LED, and this number is sent to Processing, so Processing knows which location the figure has been placed in.
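The detection side can be sketched like the snippet below. Note that this version reads each pair of strips with the Arduino's internal pull-up instead of a separate power strip, which is just one common way to wire it; the pin numbers and the order of the places are assumptions as well:

```
// Rough sketch of the location pads (pins and place order are assumptions).
const int PAD_PINS[3] = {5, 6, 7};   // copper-tape contacts, one per hole
const int LED_PINS[3] = {2, 3, 4};   // LED next to each hole
int lastLocation = 0;                // 0 means the figure is not placed anywhere

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < 3; i++) {
    pinMode(PAD_PINS[i], INPUT_PULLUP);
    pinMode(LED_PINS[i], OUTPUT);
  }
}

void loop() {
  int location = 0;
  for (int i = 0; i < 3; i++) {
    bool placed = digitalRead(PAD_PINS[i]) == LOW;  // coins bridge the strips and close the circuit
    digitalWrite(LED_PINS[i], placed ? HIGH : LOW);
    if (placed) location = i + 1;                   // 1 = Rome, 2 = Thailand, 3 = ISS (assumed order)
  }
  if (location != lastLocation) {
    Serial.print("loc,");
    Serial.println(location);                       // only report when the figure moves
    lastLocation = location;
  }
  delay(100);
}
```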

the box from the outside
the box from the inside

To make the figure, I went to the Engineering Design Studio to use their laser cutter and cut it out of 7mm thick clear acrylic. The figure is a traveler with a backpack and a round bottom. To make the bottom conductive, I first tried taping some copper tape onto it, but the figure lacked weight and didn't press down properly on the copper tape strips when placed in a hole. So I had to be creative, and that's how I decided to stick 3 coins on the bottom to give the figure some weight as well as make the bottom more conductive (now I know that euros are more conductive than dirhams or dollars).

a two euro coin on the bottom of the figure

When the figure is placed somewhere on the map, the appropriate LED lights up and a number is sent to Processing. In Processing I then loaded 3 videos, one from each of the 3 places, and display the appropriate video for each place. For example, when the figure is placed in Rome, the Arduino recognizes it and sends a ‘1’ to Processing, which is then set to display a video of Rome. In order to actually play the video, the person interacting with my project needs to start moving on the carpet. The Arduino measures the time between footsteps and, again, sends these values to Processing. In Processing I map the incoming time value and play the video according to how fast the person is walking: it slows down when the person walks very slowly, plays normally at a normal pace, and speeds up when the person runs. However, if the steps are longer than the maximum value in the map function (1.2 seconds), the video just plays at the slowest mapped speed. If there is no movement for a little while, the video stops and starts playing again when movement is detected. Therefore, the people interacting with my project get the impression that they are actually seeing the background move as it would when walking or running at different speeds.
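A simplified version of the Processing side is sketched below. The "loc"/"step" serial format matches the Arduino snippets above; the file names, the idle photo, the 2-second pause timeout and the 0.3–2.0 speed range are assumptions, and only the 1.2-second cap comes from my actual map function:

```
// Simplified Processing sketch of the playback logic (file names, speed range
// and pause timeout are assumptions; the 1.2 s cap is the real one).
import processing.video.*;
import processing.serial.*;

Movie[] places = new Movie[3];   // 0 = Rome, 1 = Thailand, 2 = ISS
PImage idlePhoto;                // shown when the figure is not placed anywhere
Serial arduino;
int location = 0;
int lastStepMillis = 0;

void setup() {
  size(1280, 720);
  places[0] = new Movie(this, "rome.mp4");
  places[1] = new Movie(this, "thailand.mp4");
  places[2] = new Movie(this, "iss.mp4");
  idlePhoto = loadImage("idle.jpg");
  arduino = new Serial(this, Serial.list()[0], 9600);
  arduino.bufferUntil('\n');
}

void draw() {
  if (location == 0) {
    image(idlePhoto, 0, 0, width, height);
    return;
  }
  Movie current = places[location - 1];
  if (millis() - lastStepMillis > 2000) {
    current.pause();             // no footsteps for a while: stop the video
  }
  image(current, 0, 0, width, height);
}

void movieEvent(Movie m) {
  m.read();
}

void serialEvent(Serial s) {
  String line = s.readStringUntil('\n');
  if (line == null) return;
  String[] parts = split(trim(line), ',');
  if (parts.length < 2) return;
  if (parts[0].equals("loc")) {
    location = int(parts[1]);
    if (location > 0) places[location - 1].loop();
  } else if (parts[0].equals("step")) {
    float ms = constrain(float(parts[1]), 0, 1200);  // steps longer than 1.2 s are capped
    float speed = map(ms, 0, 1200, 2.0, 0.3);        // quick steps -> faster playback
    if (location > 0) {
      places[location - 1].loop();                   // resume if it was paused
      places[location - 1].speed(speed);
    }
    lastStepMillis = millis();
  }
}
```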

the whole setup. people are walking on the carpet
pressure sensors on the back of the carpet

The person interacting with my project sees themselves in one of the places because of the green screen behind them. The camera of the computer in front of them films them against the green screen, and all of the green pixels are substituted with the video of the place where the figure is located.
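The substitution itself is a standard chroma-key comparison: for every pixel of the camera frame, if green clearly dominates the red and blue channels, the pixel from the place video is shown instead. A rough Processing sketch of that step (the helper name and the 1.4 threshold are my own for the example):

```
// Rough chroma-key step: cam is the live camera frame (e.g. a Capture from
// processing.video), bg is the current place video; both assumed the same size.
PImage keyGreenScreen(PImage cam, PImage bg) {
  PImage out = createImage(cam.width, cam.height, RGB);
  cam.loadPixels();
  bg.loadPixels();
  out.loadPixels();
  for (int i = 0; i < cam.pixels.length; i++) {
    color c = cam.pixels[i];
    float r = red(c), g = green(c), b = blue(c);
    boolean isGreen = g > 1.4 * r && g > 1.4 * b;  // does green dominate this pixel?
    out.pixels[i] = isGreen ? bg.pixels[i] : c;    // show the place video behind the person
  }
  out.updatePixels();
  return out;
}
```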

Whenever the figure is not placed in any of the locations, this is the photo that shows up on the screen:

The IM show, where we displayed our projects to the public, was an incredible and positively overwhelming experience. For the show I had two screens: one was the computer in front of the person, where they saw themselves, and the other was turned toward the public. I was really happy to have that second screen, because it drew more attention to my project when people could see others interacting with it. I was surprised by people's interest in interacting with my project, and observing their reactions was extremely rewarding. The night flew by in a second for me, but I tried to capture some moments from it.

Goffredo was really happy to be in Rome!!

Here I have a short time-lapse of people interacting with my project:


And these are some of my favorite moments filmed at the IM show. I have more footage though, and as soon as the exam period ends I'll make sure to make a video about the whole project and post it here as well! Overall, I have learned a lot, not only while making this project but throughout the whole semester. The IM show was a memorable way to wrap up the semester. Huge thanks to Aaron for the help and to the class for the feedback along the way!