IM Final: Interactive Totoro

Interactive life-size Totoro did happen in the end 🙂

As a follow-up to my first computer vision assignment, and as a way to fulfil my desire of seeing a life-size Totoro, I decided to create a projected image of him that people could interact with!

As a brief recap, this was the project that inspired my final:

Overview of Totoro’s interactions:

Through the PS3Eye and Blob Detection libraries, as well as the infrared LEDs attached to the interactive objects, specific movements and interactions toggle different elements on screen. The installation has two modes. The first consists of using an umbrella to try to protect Totoro from the rain: through two LEDs attached to either side of the umbrella, the program tracks its location and stops the rain between those points. As the umbrella gets closer to Totoro, he gets happier, and once it is directly in front of him, he growls and smiles widely. The second mode consists of wearing a glove to pet Totoro. Totoro’s eyes follow the user’s glove and, if stroked on his belly, Totoro gets happier and growls as well. Although the interactions seem simple, linking all the components together (switching between modes, accurately tracking the umbrella and the glove, toggling the rain on and off, moving Totoro’s eyes, and triggering sound and animation) was a lengthy and time-consuming, though extremely enjoyable, process.

The process for this piece was divided into three sections:

  1. The design: adjusting and making the background and animation frames
  2. The code: writing the program and adjusting the Processing–IR camera link
  3. The hardware: attaching IR LEDs to the umbrella and the glove

 

  1. The design

For the project’s visuals, I adjusted both the background and Totoro’s expressions.

Here is a screenshot of the original image from the movie:

There were two issues with this background image. The first was that the girls in the scene, although iconic and the main characters of the movie, were superfluous here. Their colors added a lot to the appeal of the image, but leaving them in would not only draw attention away from Totoro, it would also give the impression that the girls were interactive as well. The second issue was the rain drawn into the image. Since my rain was created through Processing, the drawn rain would saturate the image and would also make the rain seem to stop rather than disappear when someone hovered over an area with the umbrella, since the coded rain would stop but the drawn rain would still be there. Thus, I took it upon myself to overuse the stamp tool in Photoshop and get rid of the rain. This all led to the following final background image:

Actual background (the eyes had to be left blank for the ellipses in the code to move)

 

For the animation frames, I compiled Totoro’s smile from other scenes and added it to the umbrella scene, since in this whole part of the movie there are no shots from afar where Totoro changes his expression.

For instance, this is the original scene where I got his smile from:

 

I made the eyes and mouth transparent and then adjusted them to fit the scene I wanted to use for my project, trimming everything down to 7 frames:

Once the animation was done, it was just a matter of setting up the boundaries for where each specific frame would be shown.
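To give a sense of what those boundary checks look like, here is a hypothetical Processing snippet; the belly position, distance cut-offs, and frame indices are placeholder values for illustration, not the ones from the actual project.

```java
// Hypothetical boundary check for glove mode. None of these numbers come from
// the real project: bellyX, bellyY, and the distance cut-offs are placeholders.
int frameForGlove(float gx, float gy) {
  float bellyX = width * 0.5f;   // assumed center of Totoro's belly on screen
  float bellyY = height * 0.6f;
  float d = dist(gx, gy, bellyX, bellyY);
  // The closer the glove gets to the belly, the happier the frame.
  if (d < 80)  return 6;  // widest smile (last of the 7 frames)
  if (d < 160) return 4;
  if (d < 240) return 2;
  return 0;               // neutral expression
}
```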

Here is a sample of the locations where the frames change for the glove code:

  2. The code

The code was much more complicated than I thought it would be. In summary, I change the modes manually through my keyboard. Depending on the mode, the code checks for the number of “blobs” detected on screen; to keep the tracking accurate, I also adjusted the brightness threshold. In “umbrella mode”, the code waits for two blobs to appear on screen. Once this occurs, it saves their coordinates and compares them to establish a minimum and a maximum point for the umbrella. It then uses these minimum and maximum values to make a raindrop’s alpha value transparent if it spawns between those locations. In “glove mode”, the code checks for only one blob on screen. Once detected, it saves its coordinates and, depending on where they are, moves Totoro’s pupils accordingly and shifts between animation frames.
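To make that concrete, here is a stripped-down sketch of the logic using the BlobDetection library. Processing’s standard video Capture stands in for the PS3Eye feed here, and names like umbrellaMode are my own placeholders rather than the project’s actual code.

```java
import processing.video.*;
import blobDetection.*;

Capture cam;                  // stand-in for the PS3Eye feed
BlobDetection blobs;
boolean umbrellaMode = true;  // switched manually from the keyboard

void setup() {
  size(1280, 720);
  cam = new Capture(this, 640, 480);
  cam.start();
  blobs = new BlobDetection(cam.width, cam.height);
  blobs.setPosDiscrimination(true);  // look for bright regions (the IR LEDs)
  blobs.setThreshold(0.8f);          // brightness threshold, tuned by hand
}

void draw() {
  if (cam.available()) cam.read();
  cam.loadPixels();
  blobs.computeBlobs(cam.pixels);

  if (umbrellaMode && blobs.getBlobNb() >= 2) {
    // Two LEDs on the umbrella: compare the blob centers to get a span.
    Blob a = blobs.getBlob(0);
    Blob b = blobs.getBlob(1);
    float xMin = min(a.x, b.x) * width;  // blob coordinates are normalized 0..1
    float xMax = max(a.x, b.x) * width;
    // Raindrops spawning between xMin and xMax get a transparent alpha here.
  } else if (!umbrellaMode && blobs.getBlobNb() >= 1) {
    Blob g = blobs.getBlob(0);
    float gx = g.x * width;
    float gy = g.y * height;
    // Move Totoro's pupils toward (gx, gy) and pick the matching frame.
  }
}

void keyPressed() {
  if (key == 'u') umbrellaMode = true;   // umbrella mode
  if (key == 'g') umbrellaMode = false;  // glove mode
}
```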

Here is a link to the full code

  3. The hardware

Finally, once the logic of the code was functioning, I attached the infrared LEDs to the umbrella and the glove. I 3D printed battery holders for my two 3V batteries and added switches so I could save battery life for the exhibition. For the umbrella, I attached all the wires with tape. For the glove, my friend Nikki Joaquin sewed all the components together due to my lack of ability (thank you Nikki <3). Although it seems quite simple, setting up all the hardware was one of the most time-consuming tasks. At first, Nahil and I had not thought about 3D printing the battery holders; I had just taped everything up, which made it extremely difficult to attach the wires to the batteries and place them on the umbrella without any of the components moving out of place. I had also planned to use only one LED on either side of the umbrella and one on the hand, but due to the directional nature of the LEDs, I ended up making an extra set and adjusting their angles slightly so the blob tracking could be more accurate.

 

Sewn components. I could have covered them with a film, but the buttons were more accessible this way.

The battery holders were attached with a lot of electrical tape to ensure they would not fall off.
As seen in the image, the LEDs were angled slightly apart to allow for a wider tracking range.

Challenges and future improvements

This whole process was quite challenging overall. However, by dividing everything into the three sections described earlier and working little by little, I was able to finish Totoro on time. The biggest challenge was definitely the coding. I had to familiarize myself with the way the IR camera and the IR LEDs worked, and I also had to adapt the Blob Detection code to fit the interactions I wanted to create. Initially, I wrote the code so that the program would automatically recognize the number of blobs in the camera’s frame and use that to identify which mode it was in. However, this made the code extremely unreliable, which is why I chose to change modes manually through keys on my keyboard. Overall, thanks to the help of Aaron, Craig, Nahil, James and María Laura, the code is now fully functional and as bug-free as possible (I hope). The visuals and the hardware were also quite time-consuming, but they were more mechanical, which made for good breaks once I got tired of writing the code.
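For reference, the abandoned automatic detection looked roughly like the following reconstruction (using the same variables as the sketch above, not the original code). One LED dipping below the brightness threshold for even a single frame changes the blob count and flips the mode, which is exactly the unreliability described.

```java
// Rough reconstruction of the abandoned automatic mode detection (not the
// original code). Inferring the mode from the blob count meant that one LED
// dropping out of view for a single frame flipped the mode.
void inferMode() {                        // hypothetical helper name
  int n = blobs.getBlobNb();
  if (n >= 2)      umbrellaMode = true;   // two LEDs visible: umbrella
  else if (n == 1) umbrellaMode = false;  // one LED visible: glove
  // n == 0: keep the previous mode; flicker still caused wrong switches
}
```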

Overall, the whole process of making Totoro come to life was a truly gratifying one. Although it was extremely time-consuming and frustrating at times, it was all worth it once I saw how excited people got over seeing a huge Totoro and realizing they could interact with him in some way, even in the most minimal of ways. Some people even told me that rubbing Totoro’s belly was just what they needed during finals week 😀 In the end, I am still in awe of how much all of us have been able to accomplish in this class. At the beginning of the semester, I would never have guessed that I would ever make a project like this one. Despite the Sunday stress when certain projects didn’t work out like I envisioned them to, this class has been one of the most rewarding I have taken; thank you so much everyone for being a part of it 😀

At the exhibition, I was so caught up helping people with the umbrella and the glove that I totally forgot to take videos of people interacting with Totoro. Here are some photos from the exhibition (thank you Craig, James, and Aaron!)

Finally, here are samples of the final interactions: