Atmanna (Wish): Final Project Documentation

Atmanna (Wish) was inspired by my interest in creating an art piece that mimics a motion found in nature. I wanted to work on an art piece rather than a game or other application because this class sparked my interest in satisfying motions that produce aesthetically pleasing visuals. The first piece in which I attempted to capture the compositional beauty of nature was my generative art of leaves in Processing.
At first I wanted an art piece that also let the user make a wish aloud, with a speech-to-text mechanism that made their wish appear on screen. The concept changed along the way, but read on to find out how.
Concepts of Movement
I sketched several different ideas while thinking about how to mimic the movements of a dandelion in Processing. I wanted to use my knowledge of object-oriented programming, as well as particle systems, to create a beautiful effect. Here are some of the ideas that I came up with:
Concept 1 –
This was the first idea I had in mind: creating the dandelion from randomly shaped particles. This idea responded well to mouse movement and I really liked it, but I felt it was too abstract to be immediately recognized as a dandelion, and the motion wasn’t exactly what I had in mind in terms of how dandelions really move.
Concept 2 –
This concept came about when I tried playing instead with lines and nodes, like Dan Shiffman’s fractal tree videos. I played a lot with motion in this concept, but ultimately I didn’t like the look and feel of lines and nodes for a dandelion.
Final Concept –
I finally decided to use vector graphics created in Illustrator, because they gave me more control over how I wanted the piece to look. I drew different frames of dandelion animation in Illustrator and imported the images into an array in Processing to loop through them.
The particles in my particle system were built around an image of a dandelion seed that I also imported into Processing inside a ‘Seed’ class, and I experimented with different movements. I decided to make the seeds flow upwards because that made the most sense spatially on the screen for me. Referencing Dan Shiffman’s Nature of Code book really helped in this phase, letting me add and tune different physical forces to create the desired effect.
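The force-based motion can be sketched roughly as follows. This is a minimal, hypothetical Java version of the Seed logic (field names and force values are illustrative, not the project’s actual code), following the accumulate-forces-then-integrate pattern from Nature of Code:

```java
// A minimal sketch of the Seed particle logic. Each seed stores
// position, velocity, and acceleration; forces accumulate into the
// acceleration each frame, then update() integrates the motion.
public class Seed {
    double x, y;        // position (the seed image would be drawn here)
    double vx, vy;      // velocity
    double ax, ay;      // accumulated acceleration for this frame

    Seed(double x, double y) {
        this.x = x;
        this.y = y;
    }

    // Add a force (mass is treated as 1, so force == acceleration).
    void applyForce(double fx, double fy) {
        ax += fx;
        ay += fy;
    }

    // Integrate motion for one frame, then clear the acceleration.
    void update() {
        vx += ax;
        vy += ay;
        x += vx;
        y += vy;
        ax = 0;
        ay = 0;
    }

    public static void main(String[] args) {
        Seed seed = new Seed(200, 400);
        // An upward "breath" force plus a slight sideways drift
        // (negative y is up in screen coordinates).
        for (int frame = 0; frame < 60; frame++) {
            seed.applyForce(0.02, -0.1);
            seed.update();
        }
        System.out.println(seed.y < 400);  // the seed has risen
    }
}
```

In the real sketch each Seed would draw its dandelion-seed image at (x, y) every frame instead of printing anything.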
I knew I wanted to use the physical action of blowing, but without a wind sensor I had to find an alternative. I decided to use a SparkFun Sound Detector that we had available in the lab, which let me read different sound levels. Blowing on a microphone produces characteristic sound levels, which I explored using the Serial Plotter in the Arduino IDE. I used these serial values to trigger motion in the particle system in the Processing sketch.
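The triggering idea can be sketched like this; the threshold value and names here are assumptions rather than the project’s actual code, but the shape is the same: a serial reading above a tuned threshold counts as a blow:

```java
// A minimal sketch of the triggering logic. Readings arrive over
// serial in the 0-1023 range from the sound detector's output, and
// any reading above the threshold releases the seeds.
public class BlowTrigger {
    static final int THRESHOLD = 300;  // assumed value; tuned per room

    static boolean isBlow(int reading) {
        return reading > THRESHOLD;
    }

    public static void main(String[] args) {
        int[] readings = {12, 40, 35, 620, 540, 80};  // fake serial data
        for (int r : readings) {
            if (isBlow(r)) {
                System.out.println("blow detected at level " + r);
            }
        }
    }
}
```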
User Testing
When I did my user testing, I did not yet have a physical dandelion that people could blow on. Some people liked this because it did not take away from the on-screen experience and aesthetics; others wished that they did have it.
At the time of user testing, the animation also was not as clear or smooth as I would have liked, and people noticed that as well. I also thought about what story was being presented to the user as they interacted with the piece, and I didn’t have a set narrative. I considered different directions, but ultimately I decided to let the user simply make their wish, keeping the act as simple as it is organically in real life. See the user testing post for more of the notes and improvements that I wanted to work on.
I ultimately did have a physical dandelion for people to blow on, but it was difficult to decide on the medium it should be made from. I used straws for the green stem, which made the wiring easier to work with, and cotton balls for the top of the dandelion. It was difficult to embed the microphone in a way that still allowed it to work but also made sense for the user to blow on and interact with.
During the Show
I think my project stood out as one of the ‘calmer’ projects: there was a lot of light, big screens, and sound all around, so it was a different experience for people to pause, reflect, and take a moment to make a wish. People really enjoyed the experience, and I especially liked that many people stopped by because it was such a simple concept that didn’t take too much time to engage with, but still created the impact that I wanted.
One thing I wish I had done was incorporate an element of sound or background music, though it was loud in the room anyway, so it wouldn’t have created the exact ambiance that I wanted to achieve. One of my favorite comments was that this would be a good business model for an ‘alternative stress ball’ to keep on your desk and use to take a moment to breathe and reflect.
I realized a bit too late that the setup, exactly where the dandelion and screen were positioned, was not perfect. Sometimes, as users were blowing, they missed what was happening on the screen. It might have been better, or more immersive, if users could pick up the dandelion, and/or if I had projected the art instead of having it on my laptop screen. All in all, I really enjoyed presenting my work and having people play with it. Some people even came by several times to make more than one wish!
Limitations + Future Improvements
  • I didn’t use the right medium to create the physical dandelion: the cotton was really fragile, and the straws were not particularly stable as people blew on the dandelion itself.
  • During the show I realized the loud room produced sound interference that the microphone detected, triggering the animation unintentionally. Even though I did user testing, each space is unique, and I probably need to add a calibration function. Thanks to Aaron for showing me the sound-smoothing function that saved me!
  • I would still like to figure out an elegant solution for speech-to-text in Processing.
  • I’m thinking of 3D printing or using some mesh material to create the dandelion instead of cotton balls.
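The smoothing and calibration ideas above could look roughly like this. The exponential moving average here is one common smoothing approach and may differ from the function Aaron shared, and the margin value is purely illustrative:

```java
// A sketch of sound smoothing (an exponential moving average) plus a
// simple calibration step that samples the ambient room level, so the
// trigger threshold can sit above the background noise of each space.
public class SoundSmoother {
    double smoothed = 0;
    final double alpha;  // 0..1; smaller = smoother, slower to react

    SoundSmoother(double alpha) {
        this.alpha = alpha;
    }

    // Blend each new reading into the running average.
    double smooth(int reading) {
        smoothed += alpha * (reading - smoothed);
        return smoothed;
    }

    // Calibration: average some ambient samples taken before the show.
    static double ambientLevel(int[] samples) {
        double sum = 0;
        for (int s : samples) sum += s;
        return sum / samples.length;
    }

    public static void main(String[] args) {
        SoundSmoother sm = new SoundSmoother(0.2);
        int[] ambient = {40, 55, 38, 47};                // fake room noise
        double threshold = ambientLevel(ambient) + 200;  // margin is illustrative
        int[] readings = {50, 45, 900, 850, 60};
        for (int r : readings) {
            double level = sm.smooth(r);
            if (level > threshold) System.out.println("trigger at " + level);
        }
    }
}
```

Because the threshold is derived from each room’s own ambient level, the same sketch would behave consistently in a quiet lab and a loud show floor.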

User Testing Notes: Atmanna (Wish)

– Tester 1:
  1. Make the animation smoother
  2. Tell a story with it
  3. What do you think of ‘make a wish’
    1.  get a random wish?
  4. It’s pretty clear what needs to be done but it might get confusing if you have a physical object
– Tester 2:
  1. Smoother animation
  2. Talk about why people wish on dandelions
  3. Background music
  4. Think about how to place microphone

Overall, I learned that people really enjoy the visual effect and the feeling of satisfaction you get from blowing on a dandelion. I received feedback on making a physical dandelion and on what kind of story I’m telling with the piece.

Response: Computer Vision

Computer vision is a dynamic field of computer science that is actively improving our lives and the technologies we use. The accessibility of computer vision is becoming more and more important to people who work on projects that utilize it. While working on my project this week, I found many libraries that let people do things like facial recognition with just a few lines of code.
This paper showed the various ways creative coding and computer vision techniques intersect to create endless kinds of projects. Computer vision really helps create interactive and immersive experiences. This reminds me of the readings and conversations we’ve had in class about making interactivity about more than just touch. Now we can use anything from blinking, smiling, or other facial expressions, to movements and the colors around us in day-to-day life, to create experiences that are interactive in non-conventional ways.
One popular application of computer vision is Snapchat filters, which have become wildly popular. For many teenagers, computer vision has become part of their daily lives without them even realizing it.

Face Replace (ft. LamaLisa and LamaGaga)

For this project I decided I wanted to combine image and webcam use by creating a face swap project.
I found a great Processing library with many built-in computer vision functions that can be used for different applications. Find it here.
I first used OpenCV to get face detection working. Here it is: the green circle shows approximately where your face is.
After that, I created a mask image that could be resized based on the width and height of the detected face, as seen below.
Face detection is also run on the image that the user inputs into the code. (It works on any image with a clear face shape!)
The face in the input image is cropped according to the mask created earlier, and finally the ‘live’ face is overlaid on top using the blend() function in Processing.
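The overlay step can be illustrated with a simplified, hypothetical per-pixel version. Processing’s blend() operates on whole PImage regions, but the underlying idea, masked mixing of the live face into the input image, looks like this on grayscale pixel arrays:

```java
// A simplified, hypothetical sketch of the overlay step: the "live"
// face pixels replace the image's face pixels wherever the mask is
// opaque, and the two mix in between.
public class FaceOverlay {
    // mask[i] in 0..255: 255 = take the live face, 0 = keep the image.
    static int[] overlay(int[] image, int[] liveFace, int[] mask) {
        int[] out = new int[image.length];
        for (int i = 0; i < image.length; i++) {
            int a = mask[i];
            out[i] = (liveFace[i] * a + image[i] * (255 - a)) / 255;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] image    = {10, 10, 10};
        int[] liveFace = {200, 200, 200};
        int[] mask     = {0, 255, 128};  // keep, replace, half-blend
        int[] out = overlay(image, liveFace, mask);
        System.out.println(out[0] + " " + out[1] + " " + out[2]);
    }
}
```

A soft-edged mask (values ramping from 255 down to 0 at the border) is what keeps the swapped face from having a hard visible seam.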
View the source code.

Brick Breaker meets Potentiometer!

It was interesting to put together the concepts of physical computing and creating something on screen with Processing. It’s definitely a complicated process to understand the communication between the two channels and to make them do exactly what you want, as I realized when I tried to control the paddle with a potentiometer in the brick breaker game I created in Processing.

At first I tried to use a sliding potentiometer, which ultimately didn’t seem to work correctly, so I switched it out for the regular rotary potentiometer you see in the video below.

My other mistake was attempting to handle the serial event inside the Paddle class. It should have been outside the class, using myPaddle.xPos (the object name, dot, the variable) to feed the serial values into the variable I wanted to manipulate.
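The corrected structure can be sketched in plain Java (names like onSerialValue are illustrative, not the game’s actual code): the handler lives outside the Paddle class and maps the potentiometer’s 0-1023 range onto the screen width, the way Processing’s map() does:

```java
// A sketch of the fix: the serial handler lives outside the Paddle
// class and writes into the paddle object from the outside.
public class PaddleSerial {
    static class Paddle {
        float xPos;
    }

    // Equivalent of Processing's map(value, inLo, inHi, outLo, outHi).
    static float map(float v, float inLo, float inHi, float outLo, float outHi) {
        return outLo + (outHi - outLo) * (v - inLo) / (inHi - inLo);
    }

    // What serialEvent() would do each time a reading arrives:
    // scale the 10-bit pot value onto the sketch's width.
    static void onSerialValue(Paddle myPaddle, int potValue, int screenWidth) {
        myPaddle.xPos = map(potValue, 0, 1023, 0, screenWidth);
    }

    public static void main(String[] args) {
        Paddle myPaddle = new Paddle();
        onSerialValue(myPaddle, 512, 800);  // mid-turn pot, 800px sketch
        System.out.println(myPaddle.xPos);  // roughly the middle of the screen
    }
}
```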

All in all, the mistakes helped me to better understand how Arduino and Processing communicate with each other.

My Life // Computing

Computing for me has always been an outlet of creativity and a way to think critically about problems around me. I am always astounded at the infinite possibilities of what I can do with simply the laptop that I carry in my bag every day, and an internet connection. My thirst for knowledge grows by building skills that let me create new things by ‘talking to the computer’. Natural inquisitiveness has colored my everyday life. I often pause to consider how the tools I use day to day are created, the implications of computing on my life as an individual, and the societal implications of who creates technology and for what purpose. The effects and products of computing are vast and far-reaching; something that I create today can instantly reach others with the click of a button. Consequently, computing is like an accessible super-power that should be used to inspire creativity, communicate important ideas, and effect change in the world around us.

Leaves and Branches: Object-Oriented Generative Art

Leaves and Branches

By Lama Ahmad

I created a Branch class to generate varying numbers, positions, and colors of branches and leaves. I really liked the different shades of green, and I added a parameter called ‘wiggle’ that contributes to the curves in the stems. Each time you click on the canvas, a new image is generated.
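A hypothetical sketch of the Branch idea in plain Java (none of these names or values are the actual project code): each branch climbs upward while a ‘wiggle’ parameter adds random horizontal curve to the stem.

```java
import java.util.Random;

// Each branch is a series of points climbing upward; 'wiggle' scales
// the random sideways offset that bends the stem. Reseeding the
// Random on each mouse click is what regenerates the composition.
public class Branch {
    final double[] xs;
    final double[] ys;

    Branch(double baseX, double baseY, int segments, double wiggle, Random rng) {
        xs = new double[segments];
        ys = new double[segments];
        double x = baseX;
        for (int i = 0; i < segments; i++) {
            x += (rng.nextDouble() - 0.5) * wiggle;  // sideways wiggle
            xs[i] = x;
            ys[i] = baseY - i * 10;                  // climb upward
        }
    }

    public static void main(String[] args) {
        Branch b = new Branch(200, 400, 12, 8.0, new Random(42));
        System.out.println("top of branch at y=" + b.ys[11]);
    }
}
```

In the real sketch the points would be drawn as a curved stem with leaves, and a wiggle of 0 would give a perfectly straight branch.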

I wanted to create an art piece as opposed to a game because I have already done a project like that in Processing (bonus, you can see it here).

To view and interact with the project you can click here.



Truchet Tiling inspired by Hex Variation

This week my work was inspired by the Hex Variation computer graphic art by William Kolomyjec.

I found it interesting how the work used curves and lines to create a puzzle like piece. I thought it would also be nice to play around with a bit of color in it.

My work ultimately didn’t look exactly like the original Hex Variation, but I really liked the outcome, and the fact that I used a pretty simple technique to generate a cool pattern.

Here’s a screenshot of what I came up with:

To interact with the piece, click here

Overall, I really enjoyed making this and playing around with different ideas. It looks a bit like an optical illusion if you watch it for a while. There’s a link to the source code on my page; you’ll find that it’s quite short! Can you tell how I did it? The technique is called Truchet tiling. You might notice some familiar patterns from the computer graphics magazine!
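For the curious, here is a minimal, hedged sketch of the Truchet idea (grid size and tile encoding are illustrative, not my actual sketch): each grid cell randomly picks one of two mirror-image tiles, and the pattern emerges because the curves of neighboring tiles meet at the cell edges.

```java
import java.util.Random;

// The canvas is a grid; each cell draws one of two mirror-image arc
// tiles chosen at random. That single coin flip per cell is all the
// randomness the pattern needs.
public class Truchet {
    // 0 = arcs in the top-left/bottom-right corners, 1 = the mirrored tile.
    static int[][] randomGrid(int cols, int rows, Random rng) {
        int[][] grid = new int[rows][cols];
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                grid[r][c] = rng.nextInt(2);
        return grid;
    }

    public static void main(String[] args) {
        int[][] grid = randomGrid(8, 6, new Random());
        for (int[] row : grid) {
            StringBuilder sb = new StringBuilder();
            for (int tile : row) sb.append(tile == 0 ? '/' : '\\');
            System.out.println(sb);  // each character stands in for one tile
        }
    }
}
```

In the actual piece, each cell would draw two quarter-circle arcs (or their mirror image) instead of printing a character.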

Me, Myself, and I

My self portrait was based on this image that my friend took of me. I liked the side profile and thought it might look good as a black and white line drawing.

Here it is after I used Adobe Illustrator to trace the lines into paths. As you can see, I tried to get in the details of the folds of the hijab.

Here it is in my attempt in Processing. When the image is first displayed, it appears without a face. When you click on the image, the facial features appear. This is meant to convey a message, and I hope to continue playing with this piece and adding text to make that message explicit. It was definitely difficult to work with curved lines, so the details in the scarf are not fully drawn in yet.

Response: Her Code Got Humans on the Moon — And Invented Software Itself

Margaret Hamilton has always been a personal hero of mine. Hearing her story always astounds me, particularly given the climate of inequality that exists in tech today. I’ve had a personal attachment to the cause of creating a diverse environment in technology and engineering.

I recently saw the movie Hidden Figures, which depicts the story of several women who worked hard at NASA on its first space missions and whose names are rarely credited. I’m really happy that awareness of the need to recognize these women, and women in STEM in general, is finally coming to light. There’s still a long way to go, though, to get rid of the constructs we hold about the roles that men and women occupy, particularly when it comes to careers.

I recently took an Implicit Association Test conducted by Harvard University on gender-career associations. Even though I am the President of weSTEM and pride myself on being an advocate for gender equality in all careers, I still showed a moderate implicit bias toward associating men with careers and women with family. This was really eye-opening, and it goes to show how deeply embedded our biases are, shaped by the influences we encounter in society.

It’s striking that NASA belittled Hamilton’s idea of error checking, which then became foundational to software as we know it today. I hope that people start to understand the importance of diversity and different viewpoints in technology; from both an economic and a practical standpoint, it is important that women are part of the picture.

Here’s a relevant video I made to illustrate this cause last summer: