Birdy: Full Documentation

There was one thing I knew I wanted my final project to be the moment I started thinking about it: I wanted it to be cute. The other things I knew were that a) I wanted the project to be heavy on the Processing side and light on the physical computing side, and b) I wanted it to be a game. I really wanted to practice coding more, and I particularly enjoyed the week in class when we made a game (I made “Blobby”).

After more in-depth brainstorming and class suggestions, I came up with the following idea for a game: the user would be a bird, and by flapping his/her wings would fly the bird around various environments. Games would be hidden/placed around the environments, and the bird could go around playing them. Later, I fleshed out the idea further: the bird had to play the games in order to earn “seeds” (points) to feed her babies, which were due to hatch soon. Once the bird reached a certain number of seeds, the game would end, the user would win, and the babies would hatch and fly happily around with the mother bird on the screen.

The process of making the game looked like this:

a) Planning: I had to plan all the various games, decide what types of visuals they would need, and think about how to convey all the instructions of the game to the user. It was critical that I do this to avoid wasting time in the next steps.

b) Design via Illustrator: I spent many, many hours before even starting the code creating all the visual elements of the games. I created the bird(s) myself in Illustrator (making various frames so the bird would look like it was flapping its wings), found various free Illustrator elements online that I then had to adapt quite a lot to fit my vision, and had to make multiple versions of everything to create the blinking or movement effects that I wanted. Overall, this was extremely time-consuming (especially since I later had to redo quite a lot of it to make sure all the file sizes were consistent and appropriate for Processing), but it was a crucial step, since the visual aspect of the game was essential to its success.

[Below are just a few of the Illustrator elements I created or edited.]


c) Coding: The coding was undoubtedly the hardest part of the project. Not only did I have to create the code for five different minigames, but I also had to create the code for the overarching/big-picture aspects of the game. These include things like: how, and for how long, to display instructions; where to start and end the game; how to let the user find the different games; whether the user can play each game more than once; how to let the user move from one environment to the next; whether the user should be allowed to move to another game without finishing the first one; and on and on. I didn’t realize how complicated this process was going to be until I was rather far along in the project, when I started to appreciate the way that the seemingly unimportant questions in a game (like the issues I just listed) actually make or break it completely.

However, one thing I am very proud of and would like to note is the following: I did know that, at the very least, the code was going to be rather lengthy, and so I spoke with a computer science friend who helped me map out a plan for the code. It was through this that I got the following idea: instead of writing completely separate code for each minigame, I could use classes to reuse the same code over and over, adapting it for each game. This made perfect sense for my game, since in each minigame there is an “other” (whether it is a coconut, a fish, etc.) and in each one some sort of overlap needs to be detected. Thus, I used classes to my advantage, and I believe that as a result the code is far simpler, cleaner, and shorter than it would have been otherwise. This experience showed me that planning before coding is crucial to avoid headaches and wasted time.
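
To give a sense of what I mean by that reusable class, here is a minimal sketch of the shared piece (with made-up names like “Item”; this is only an illustration, not the actual project code, which is linked below):

// A minimal sketch of the reusable-class idea (illustrative names only).
// "Item" stands in for the coconut, fish, etc. of each minigame.
class Item {
  float x, y, r;   // position and radius of the "other" object
  PImage img;      // the Illustrator graphic used by this minigame

  Item(float x, float y, float r, PImage img) {
    this.x = x;
    this.y = y;
    this.r = r;
    this.img = img;
  }

  void display() {
    imageMode(CENTER);
    image(img, x, y, r * 2, r * 2);
  }

  // The check every minigame needs: is the bird overlapping this item?
  boolean overlaps(float birdX, float birdY, float birdR) {
    return dist(x, y, birdX, birdY) < r + birdR;
  }
}

Each minigame can then create its own items (a coconut, a fish, and so on) and reuse the same overlaps() check, rather than rewriting the collision logic five times.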

[The Processing code for the game can be found here, and the Arduino code can be found here.]

d) Physical computing: The physical aspect of the game actually turned out to be rather straightforward and reliable — which is something I cannot say about most of my experiences with sensors in the class. I used an accelerometer, which (as the name suggests) measures acceleration, i.e., how quickly the movement is changing. I wanted to use it to measure the user’s flapping, so it was perfect: the user truly had to flap to make the bird on the screen move, and moving really slowly would not work. All I really needed to do was read the x, y, and z values from the accelerometer and measure the difference between that reading and the previous one (to see if the user was flapping). I found an equation to calculate this online: sqrt[(x2 – x1)^2 + (y2 – y1)^2 + (z2 – z1)^2]. In regard to how I used the accelerometer exactly, what I did was this: I soldered each of the two accelerometers to six-foot-long cables, which were plugged into the Arduino/bus board. (Each accelerometer needed six cables, so that means I soldered twelve six-foot cables in total.) Then, I hid the bus board/Arduino in a pretty box, covered the cables in pretty, flowery tape, zip-tied the ends of the accelerometers to a pair of black gloves, and then hot-glued tons of pink feathers onto the gloves. Overall, I was very happy with how it all looked: the green/pink theme of the box/cables/gloves fit perfectly with the green/pink theme on the screen.
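
As a rough sketch of that flap check (assuming the x/y/z readings have already arrived in Processing as floats; the variable names and the threshold value are made up for illustration):

// Hypothetical sketch of the flap check: compare the current accelerometer
// reading to the previous one and treat a large change as a flap.
float prevX, prevY, prevZ;     // previous accelerometer reading
float flapThreshold = 30;      // made-up threshold; would need tuning

boolean detectFlap(float x, float y, float z) {
  // Magnitude of the change since the last reading:
  // sqrt((x2 - x1)^2 + (y2 - y1)^2 + (z2 - z1)^2)
  float change = sqrt(sq(x - prevX) + sq(y - prevY) + sq(z - prevZ));
  prevX = x;
  prevY = y;
  prevZ = z;
  return change > flapThreshold;   // slow movement stays under the threshold
}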

[Below are pictures of the end product. Note how in the picture on the left, Craig’s pink wig makes an appearance.]

The IM Show:

I was really happy with how the game was received at the IM show — especially when a girl came to play the game, lost the first time, asked to play again, then won, and then jumped up and down out of excitement, took a picture of the winning screen, and then gave me a hug. 😛 While not everybody was as excited as she was, a lot of people found the game really cute, rather fun (particularly because of the wings/gloves), and overall a nice idea. I do, however, think that it might not have been the best-suited interaction for the show, since the game is rather long if it is played all the way through, and most people want to spend only a short time at each interaction so that they can get through all of them. This meant that a lot of people stopped playing partway through the game. In the end, still, most people seemed to really like it, and I was incredibly happy to share the game with others.

[Here are two extra photos: one of Luize posing after playing the game and winning the high score, and another of the feathers that were sacrificed during the IM show.]


Birdy: User Testing

Even though I ended up user-testing people who had at least a slight idea of my project/game (e.g., there is a bird, you fly around, etc.), I still learned so much from having users try it out. While the game is very simple and cutesy, and even provides some instructions, it was still far less obvious and clear than I thought it was.

While watching the users try the game was interesting and helpful (and sort of funny), the most useful part was hearing their specific feedback after finishing the game. Feedback included the following:

  • the bird moved too slowly
  • the instructions at the top were not that helpful, because it was difficult to look there/read them while the game was still going on
  • it was not clear when the game ended (because even though the user is taken to the main page after they reach 100 points, there is no “you won” message, etc.)
  • the blinking arrows on the sides add too much visual stimulation/confusion, and they interfere with noticing the other blinking elements that signify games
  • there should be something that signifies how many pages/environments there are, so the user knows when they have been to them all
  • the wing flapping directions are (possibly) counterintuitive, in that when you flap the left wing, the bird goes to the right, and vice versa

To fix these issues, I plan on doing the following before Wednesday:

  • making the bird move faster
  • pausing/freezing the game for several seconds whenever there are instructions, giving the user time to read them (a sketch of what I mean appears after this list)
  • making the arrows be still rather than blinking
  • naming the environments, and including those names at the top of the screen always, so the user knows which environment they are in, and how many there are total
  • including a message at the beginning telling the user to get to 100 points to win
  • editing the last message to say something more clear, like actually saying the game is finished, etc.
  • reconsidering the flapping directions
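
For that instruction pause in particular, here is a minimal sketch of the approach I have in mind (the text, timing, and variable names are placeholders, not the real game code):

// Hypothetical sketch of pausing the game while instructions are on screen.
boolean showingInstructions = false;
int instructionStart;              // millis() when the pause began
int instructionDuration = 4000;    // 4 seconds; a made-up value

void setup() {
  size(600, 600);
  showInstructions();
}

void showInstructions() {
  showingInstructions = true;
  instructionStart = millis();
}

void draw() {
  if (showingInstructions) {
    // Freeze the game: draw only the instructions, skip all other updates.
    background(255);
    textAlign(CENTER, CENTER);
    fill(0);
    text("Flap your wings to fly to the blinking game!", width/2, height/2);
    if (millis() - instructionStart > instructionDuration) {
      showingInstructions = false;   // resume the game afterwards
    }
    return;
  }
  // ...normal game updates and drawing would go here...
  background(200);
}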

A few videos of user testing:

I also plan on having several users who are *completely* unfamiliar with the game test it out between now and Wednesday.

Overall, user testing was a great idea, and it has helped me figure out what changes I need to make in order to make the game intuitive and enjoyable.

Silly Filters :)

For this assignment I decided to just do something silly. Inspired by snapchat filters, I embarked on creating my own. However, since I do not know how to detect faces using Processing, I decided to use physical stickers of bright colors as detection points. What happens is this: the user puts on two stickers of different colors, one on their forehead, and one in the middle of their neck. Then, when they use my filters, the computer first has the user select the two stickers (so the computer knows where to place certain objects later). The whole process works like this:

  1. The user is first prompted to select (by using their mouse) the “hat sticker” (the sticker on his/her forehead).
  2. The user is then prompted to select (by using their mouse) the “shirt sticker” (the sticker on his/her neck).
  3. Then, the user sees the first (of three) filters. It is a wizard cloak and wizard hat.
  4. Extra features: the user can press the up or down arrows to make the hat bigger or smaller; the user can press the left or right arrows to make the shirt bigger or smaller.
  5. The user, once satisfied with the first filter, can switch to the second filter by pressing the “Option” key. It is a Cubs baseball hat and jersey. (Again, the user can scale both the hat and shirt to the desired size.)
  6. The user, once satisfied with the second filter, can switch to the third (and final) filter by again pressing the “Option” key. It is a pair of sunglasses and a lei (flower necklace). (As usual, the user should scale the accessories accordingly.)
  7. Once the user is satisfied with the third filter, they have the option of continuing through the filters (in order) by pressing the “Option” key.
  8. Last note: if, at any point, the user would like to re-select the stickers (for example, if the user accidentally selected something besides the sticker, which messes up the filters), the user need only press the “Control” key.

Please excuse how funny and strange I look in these photos:

In summary, the controls are:

  • Mouse press: select the hat sticker and the shirt sticker
  • Up/down arrows: scale the hat
  • Left/right arrows: scale the shirt
  • Option key: switch to next filter
  • Control key: re-select hat sticker and shirt sticker

Overall, I am moderately pleased with the results. The filters do not look that great/realistic, but they still create a funny effect and get the point across (the third filter, the sunglasses and lei, is definitely the best, though). It was really difficult to maintain the accuracy of the color selection while trying to stabilize the hats/shirts; I did my best, but the jitter of the objects still bothers me a bit. Additionally, it took a long time in Photoshop to get the objects right: I used a clip art man* and a standard canvas size to scale all the objects, so that the initial size/placement of the objects would be the same.
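
To show the kind of color tracking and smoothing I was wrestling with, here is a hedged sketch (not my actual code; it only tracks one sticker, and the smoothing factor is made up):

// Hypothetical sketch of sticker tracking: find the webcam pixel closest in
// color to the selected sticker color, then smooth the result so the hat
// doesn't jitter as much.
import processing.video.*;

Capture cam;
color hatSticker;              // set when the user clicks the forehead sticker
float hatX, hatY;              // smoothed sticker position

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(cam, 0, 0);

  // Find the pixel whose color is closest to the selected sticker color.
  cam.loadPixels();
  float bestDist = Float.MAX_VALUE;
  int bestX = 0, bestY = 0;
  for (int y = 0; y < cam.height; y++) {
    for (int x = 0; x < cam.width; x++) {
      color c = cam.pixels[y * cam.width + x];
      float d = dist(red(c), green(c), blue(c),
                     red(hatSticker), green(hatSticker), blue(hatSticker));
      if (d < bestDist) {
        bestDist = d;
        bestX = x;
        bestY = y;
      }
    }
  }

  // Smooth toward the new position instead of jumping straight to it.
  hatX = lerp(hatX, bestX, 0.2);
  hatY = lerp(hatY, bestY, 0.2);
  // image(hatImage, hatX, hatY);  // the hat would be drawn here
}

void mousePressed() {
  hatSticker = get(mouseX, mouseY);   // select the sticker color by clicking
}

In the real filters, the hat image is drawn at the smoothed forehead-sticker position and the shirt at the smoothed neck-sticker position, scaled by the arrow keys.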

*The clip art man:

The code for the project can be found here.

Levin’s “Computer Vision for Artists”: Thinking Big with Tech

Apart from the interesting informational aspects of Golan Levin’s “Computer Vision for Artists,” upon reflecting there were two things that really stuck with me:

  1. The first was something that was discussed at the beginning of the reading: Levin explained that the artist of Videoplace, Myron Krueger, believed that the “entire human body ought to have a role in our interactions with computers.” I liked this idea first and foremost because whenever I think of visual interaction with computers I only think of the face/neck and shoulders area/maybe hands — I rarely consider that the whole body should be involved. More importantly, however, it reminded me of the article “A Brief Rant on the Future of Interaction Design” by Bret Victor. As I recall, he was frustrated that nobody was being more innovative with interaction design, and specifically that nobody was developing (or even really thinking about) interactions that involve human senses/capabilities other than the ability to swipe a screen with a pointer finger. In the same way that Victor resents the lack of more humane physical interactions with technology, I am sure that Krueger would be frustrated that there are very few technologies that use computer vision in a way that incorporates the whole human body. This makes me wonder if programs like Skype, Snapchat, etc., that typically limit interactions to faces, are missing out on a similarly more humane approach to technology.
  2. The second was the Suicide Box technology at the Golden Gate Bridge. Not only is the whole topic extremely sad, but it is quite interesting that the program was able to capture more suicides than were officially reported (which makes me wonder how many suicides have actually occurred there over the past 70 years). More importantly, however, I liked how the technology was controversial and called attention to an important social issue. What one might think would be somewhat uncontroversial — recording a public place to keep a record of certain incidents (albeit to be used as a sort of statement/art piece) — actually was extremely controversial, and as Levin points out through quoting Jeremijenko, the public is wary of artists (or others) who use real, material “evidence” gathered through surveillance technology.

Coding is Like Doing a Puzzle

To me, coding is breaking things up and dividing the logic of a behavior into tiny little bits (pun unintended). That is, everything you do in code is breaking up actions that humans, in daily life, think of as simple, straightforward, and complete (e.g., pressing a button, turning a light on, telling the computer to do something) and figuring out the smallest possible pieces of logic that constitute those actions. You cannot just tell a computer to turn a light on, for example; you have to tell it when to do so, when not to do so, how to do so, where the light is, how long to do it, when to stop doing it, and so much more. Coding has taught me that everything I thought was simple, even non-technological things, is actually extremely complicated.
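
Just to make that concrete with a toy example (entirely made up, and only a sketch), even “turn a light on for a bit” turns into a pile of small decisions in Processing:

// A toy sketch of "turn the light on": even this has to say when to turn it
// on, where the light is, how long it stays on, and when to stop.
boolean lightOn = false;
int lightOnSince;            // when the light was switched on
int lightDuration = 2000;    // how long to keep it on (2 seconds)

void setup() {
  size(200, 200);
}

void draw() {
  background(0);
  if (lightOn && millis() - lightOnSince > lightDuration) {
    lightOn = false;         // when to stop doing it
  }
  if (lightOn) {
    fill(255, 255, 0);       // how to do it: a yellow "light"
  } else {
    fill(60);
  }
  ellipse(width/2, height/2, 80, 80);   // where the light is
}

void mousePressed() {
  lightOn = true;            // when to do it: on a mouse press
  lightOnSince = millis();
}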

At a more tangible level, coding is very worthwhile: there is little that can match the feeling of achievement when one (finally) figures something out while programming. Seeing the circle that one programmed appear on the screen seems like a miracle. This is probably the result of many things: the fact that making it was so difficult, the fact that one created it oneself, the fact that something tangible appeared from some intangible gibberish (sorry, programmers), and more.

In sum, then, coding has added to my life in (1) a broader, conceptual sense, for I cannot help but think that everything we see, everything we do, and everything we understand can be broken down into countless tinier pieces of logic, and (2) a tangible sense, for I love the feeling of creating something by tying little pieces of logic together. In this tangible sense, coding something successfully is like finishing a really difficult puzzle and taking a step back to see that every little piece that was so difficult and obscure while one worked with it was actually essential to the final product, and makes it beautiful and complete.

And just for fun… a visual representation of 80% of my time programming:

Blobby 2.0

For this week’s project, I decided to upgrade my previous Blobby game (where the user’s pink blob tries to eat the mean blobs) by adding various features to it. I really wanted to practice both a) having Processing receive information from the Arduino, and b) having the Arduino receive information from Processing. So, I added two features of each type of interaction to the game. For the first type, I used 1) a potentiometer and 2) a button, and for the second type, I used 1) a Piezo buzzer and 2) an RGB LED. How these interactions work is the following (a rough sketch of the Processing-side serial code appears after this list):

  • The user can use the potentiometer on the Arduino to speed up or slow down the movement of the mean blobs; this increases or decreases the difficulty of the game.
  • The user can press the button to change the background color of the game. The color changes randomly each time the button is pressed. This feature is both fun/somewhat pretty, as well as another way for the user to increase the difficulty of the game: certain background colors make it harder to see the mean blobs and to avoid them.
  • In regards to the Piezo buzzer, the following happens: when the user (the pink blob) eats a smaller mean blob, a happy two-note sound plays from the buzzer. Additionally, when the user wins, a happy five-note tune plays; when the user loses, a low, “angry” sound plays for a few seconds. These sounds add a really interactive, fun element to the game.
  • The RGB LED serves as a signal of winning/losing. When the user wins, a pink color (like the user’s blob) turns on for a few seconds after the happy five-note winning tune plays. Similarly, when the user loses, a green color (like the mean blobs) turns on for several seconds following the losing buzzer sound. The lights add an extra element to the game because they reinforce the results of the game, and when they turn off after a few seconds the user knows they can start the game over again. (I originally wanted these lights to turn on at the same time as the end buzzing sounds, but I discovered that the buzzer interferes with the LED and alters its color. For this reason, I have the LED turn on only after the buzzer sounds are over.)
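
As a rough sketch of the two directions of communication (this is not the actual project code; the message format and names are invented for illustration), the Processing side might look something like this:

// Hypothetical sketch of the two-way serial communication in Blobby 2.0.
// Arduino -> Processing: potentiometer value and button state.
// Processing -> Arduino: a single character telling it to play the win/lose
// sounds and light the LED.
import processing.serial.*;

Serial port;
float blobSpeed = 2;

void setup() {
  size(600, 600);
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');
}

void draw() {
  background(255);
  // ...normal Blobby game loop would go here, using blobSpeed...
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line == null) return;
  line = trim(line);
  String[] parts = split(line, ',');   // e.g. "512,1" = pot value, button state
  if (parts.length == 2) {
    blobSpeed = map(float(parts[0]), 0, 1023, 0.5, 8);  // pot controls speed
    if (int(parts[1]) == 1) {
      // button pressed: pick a new random background color here
    }
  }
}

void playerWins() {
  port.write('W');   // Arduino plays the happy tune, then lights the LED pink
}

void playerLoses() {
  port.write('L');   // Arduino plays the angry buzz, then lights the LED green
}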

While it was nice to get to work with previous code, I find it very difficult to wrap my head around the communication between Processing and Arduino. It was for this reason that I wanted to practice implementing so many different types of communications between the two (and because I needed to remind myself again how to use those Arduino sensors/etc.). Overall, the project was a great learning experience.

Here is a picture of the bus board and circuit board:


It was difficult to get a good video of all the different features; below I will provide only two that can give an idea about the updated game.

(1) This first video shows a winning game. As you can hear, every time the user eats a mean blob, there is a positive sound. Additionally, when the user wins in the end, there is a happy tune. (Also, even though you cannot see my hand on the bus board, you will notice that the blobs increase/decrease in speed — that is me moving the potentiometer. Similarly, when the background color changes, that is me pressing the button.) Lastly, I forgot to include it in the video, but the LED lights up pink at the end upon winning.

(2) The second video is just a few seconds long and shows the ending of a losing game: the losing screen comes up, the angry buzzer sounds, and the LED lights up green.

The code for this project can be found here.

 

Why Digitization is Wonderful


In “The Digitization of Just About Everything,” Erik Brynjolfsson and Andrew McAfee rant and rave – in the best way possible – about the interesting/cool/great things that digitization makes possible. They compare the way that digitization has led to far better means of determining travel routes (through apps such as Waze) to the traditional, solely satellite-based GPS method; they explain that digital information makes certain types of real-world predictions (diseases, housing markets, etc.) very possible.

Because of the way that digital information is so easily shared, accessed, created, reproduced, etc., there has been an exponential increase in the amount of information and data collected/stored/shared in the past decade (or more). The authors note two key economic properties of digital information: 1) such information is non-rival, meaning that one person accessing/using such information does not preclude another from using it (in the way that a person who bought a movie ticket for seat 2F precludes another from purchasing a ticket with that same seat), and 2) the marginal cost of production of information is basically zero.

The discussion I found the most interesting was the next one: that while the initial production of information is sometimes costly (even though subsequent reproductions remain cheap, of course), in general there has actually been a consistent increase in the amount of “free” information produced on the Internet. Just think of Wikipedia, creative blogs, various free research journals, etc., and it is easy to see that this is true in the real world. What is so interesting about this is that, having grown up in the digital age, I never thought of certain resources on the web as people giving their “services” for “free.” These resources seemed natural to me, and seemed less like the result of someone providing a service and more like the simple result of what people should do: share their knowledge/skills/expertise, etc. I am glad that I think this way, because it suggests that other people do as well, and that the next generation will even more so; as long as we all believe we are somewhat obligated (even in an abstract, ideal way) to share our information with the world, everybody will be able to benefit from access to knowledge.

Blobby

I was really excited this week to make a game with Processing (although I cannot say I was as excited to use object oriented programming). After a lot of work, I was able to create a simple (but super cute) one-player game that, for obvious reasons, I dubbed “Blobby.”

The game works like this:

  • You, the player, are the cute, smiley, pink blob (let’s call her Blobby). You control Blobby by moving your mouse.
  • All around you, moving randomly, are mean green blobs of various sizes. The goal of the game is to eat the mean blobs that are smaller than you (by running your Blobby into them) and avoid the mean blobs that are bigger than you.
  • If you eat a smaller mean blob, you get bigger; if a bigger mean blob hits you, then you lose and the game is over.
  • When you (Blobby) get as big as the height of the screen, you win the game. 🙂

Here is a video of me playing a few rounds of the game, and below is a screenshot of what the game looks like:

Creating the Game

First, I simply created the cute blobs in Illustrator. I made one cute pink one and three mean green ones (in different shades of green). Although the pictures below are different sizes (making the blobs seem like they are different sizes), I actually ensured that all of the Illustrator blobs were exactly the same size.

The hard part, obviously, was the code, and using classes (which I still do not feel that confident about). You can view it here. I used classes to create 20 different blobs (one is Blobby/the player; the other 19 are mean). Each blob has a random starting location and direction, and the direction only changes when the blob hits a wall. Whenever a blob hits another blob (ANY blob, even if it is just two random mean blobs), I find out which one is bigger, remove/delete the smaller one, and then make the bigger blob even bigger.
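
Here is a minimal, simplified sketch of that collision rule (the names, numbers, and growth factor are made up; it is not the full game code, which is linked above):

// Hypothetical, simplified version of the blob-vs-blob rule described above.
ArrayList<Blob> blobs = new ArrayList<Blob>();

class Blob {
  float x, y, r;     // position and radius
  float dx, dy;      // direction of movement

  Blob() {
    x = random(width);
    y = random(height);
    r = random(15, 60);
    dx = random(-2, 2);
    dy = random(-2, 2);
  }

  void move() {
    x += dx;
    y += dy;
    if (x < r || x > width - r) dx = -dx;    // bounce off the side walls
    if (y < r || y > height - r) dy = -dy;   // bounce off the top/bottom
  }
}

// Whenever two blobs touch, remove the smaller one and grow the bigger one.
void handleCollisions() {
  for (int i = blobs.size() - 1; i >= 0; i--) {
    for (int j = i - 1; j >= 0; j--) {
      Blob a = blobs.get(i);
      Blob b = blobs.get(j);
      if (dist(a.x, a.y, b.x, b.y) < a.r + b.r) {
        Blob bigger = (a.r >= b.r) ? a : b;
        Blob smaller = (a.r >= b.r) ? b : a;
        bigger.r += smaller.r * 0.3;    // made-up growth factor
        blobs.remove(smaller);
        break;                          // indices shifted; recheck next frame
      }
    }
  }
}

void setup() {
  size(600, 600);
  for (int i = 0; i < 20; i++) blobs.add(new Blob());
}

void draw() {
  background(255);
  for (Blob blob : blobs) {
    blob.move();
    ellipse(blob.x, blob.y, blob.r * 2, blob.r * 2);
  }
  handleCollisions();
}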

The hardest part of writing the code for the game was, clearly, learning and trying to understand how classes work. While I understand the basics, and was able to (with lots of googling and help from others) make this game, I can’t wait to learn more about classes and become more comfortable with them.

 

The Holobubble: My Own (Rainbow) “Raumband mit Drei Knoten”

I really loved this week’s assignment, because I thought the computer-generated artworks in the “Computer Graphics and Art” publication were all so interesting and beautiful. Although it took me a long time to decide which piece I wanted to imitate with Processing, I settled on the Raumband mit Drei Knoten (“spatial band with three knots”) by Dr. Herbert W. Franke (pg. 29) because I loved its strange, twisty, soft shape and thought it would be a good challenge for me (and boy was I right). Directly below on the left is the original piece by Dr. Franke, and to its right is my attempt at imitating it (which I call the “Holobubble”).

 

Below is a video that more clearly demonstrates how the shape is composed of ellipses:

Recreating this art piece was incredibly difficult and time-consuming; each curve of the shape was a mystery that I had to discover through trial and error. Despite this (or maybe because of this), I am rather pleased with the result of my attempt, and I believe I get across, albeit imperfectly, the softness and mysteriousness of the original work.

I did not want to stop at only imitating the piece, however; rather, I decided to make a more fun, colorful version with a little bit of movement. I thus changed the coloring of the work from white to a multitude of colors that make up the rainbow. Below I include a video of the rainbow version; the ellipses are continuously drawn over in rainbow colors, which, besides being beautiful, adds an element of movement to the piece.

I recreated the artwork by drawing ellipses with various x and y coordinates, as well as various heights and widths. By drawing these consecutive ellipses, I was able to make the appearance of a connected figure. I used sine curves to try to imitate the curves of the figure, and used a frame count to divide the shape into sections.

The most challenging part of creating the figure was determining the right equations according to which I should change the x, y, width, and height parameters. In particular, connecting the various sections of curves proved especially difficult, for I used changes in the heights of the ellipses to create the appearance of twists/connections. I include the full code below:

// "HOLOBUBBLE"

float x, y, w, h, c, frame;
int currentColorIndex, colorStep;

int[] rainbowColors = {#f80c12, #ee1100, #ff3311, #ff4422, #ff6644, #ff9933, #feae2d, #ccbb33, #d0c310, #aacc22, #69d025, #22ccaa, #12bdb9, #11aabb, #4444dd, #3311bb, #3b0cbd, #442299};

int[] rainbowColors2 = {#FF0000, #FF7F00, #FFFF00, #00FF00, #0000FF, #4B0082, #9400D3};

void setup() {
  frameRate(60);
  size(600, 700);
  background(#333333);
  ellipseMode(CENTER);
  strokeWeight(3);
  noFill();
  repeatedSetup();
}

// Reset the ellipse to its starting position/size and restart the color cycle.
void repeatedSetup() {
  x = 330 + random(-1, 1);
  y = 590 + random(-1, 1);
  w = 250;
  h = 40;
  frame = 0;
  currentColorIndex = 0;
  colorStep = 1;
}

void drawWhite() {
  stroke(#FFFFFF);
  ellipse(x, y, w, h);
}

// Cycle through the long palette based on the frame count.
void drawRainbow1() {
  stroke(rainbowColors[int(frame) % rainbowColors.length]);
  ellipse(x, y, w, h);
}

// Sweep back and forth through the long palette.
void drawRainbow2() {
  stroke(rainbowColors[currentColorIndex]);
  ellipse(x, y, w, h);

  if (currentColorIndex == rainbowColors.length - 1) {
    colorStep = -1;
  }
  if (currentColorIndex == 0) {
    colorStep = 1;
  }
  currentColorIndex += colorStep;
}

// Sweep back and forth through the short (classic rainbow) palette.
void drawRainbow3() {
  stroke(rainbowColors2[currentColorIndex]);
  ellipse(x, y, w, h);

  if (currentColorIndex == rainbowColors2.length - 1) {
    colorStep = -1;
  }
  if (currentColorIndex == 0) {
    colorStep = 1;
  }
  currentColorIndex += colorStep;
}

// Each frame draws one ellipse; the frame count decides which section of the
// figure we are in, and the sine terms bend each section into its curve.
void draw() {

  drawWhite();
  //drawRainbow1();
  //drawRainbow2();
  //drawRainbow3();

  frame += 1;

  // PARTS 1 AND 2: LOWER BUBBLE
  if (frame <= 20) {
    x += 5 * Math.sin(frame / 13);
    y -= 6;
    w -= 7 * Math.sin(frame / 26);
    h -= 0.25;
  }
  else if (frame <= 40) {
    x += 5 * Math.sin(frame / 13);
    y -= 6;
    w -= 7 * Math.sin(frame / 26);
    h -= 0.25;
  }

  // PARTS 3 AND 4: CONNECTION
  else if (frame <= 60) {
    x += 0.5 * Math.sin(frame / 13);
    y -= 1;
    w -= 1.5 * Math.sin(frame / 26);
    h -= 0.25;
  }
  else if (frame <= 80) {
    x += 0.5 * Math.sin(frame / 13);
    y -= 1;
    w += 1.5 * Math.sin(frame / 26);
    h += 0.25;
  }

  // PARTS 5 AND 6: UPPER BUBBLE
  else if (frame <= 100) {
    x -= 3 * Math.sin(frame / 13);
    y -= 6;
    w -= 24 * Math.sin((frame - 100) / 26);
    h += 0.25;
  }
  else if (frame <= 120) {
    x -= 3 * Math.sin(frame / 13);
    y -= 6;
    w -= 12 * Math.sin((frame - 100) / 26);
    h -= 0.5;
  }

  // PARTS 7 AND 8: TURN
  else if (frame <= 140) {
    x += 7 * Math.sin(frame / 13);
    y -= 1;
    w -= 6 * Math.sin((frame - 100) / 26);
    h += 1.5;
  }
  else if (frame <= 160) {
    x += 8 * Math.sin(frame / 13);
    y += 6;
    w += 5 * Math.sin((frame - 100) / 26);
    h -= 1;
  }

  // PARTS 9 AND 10: UPPER HALF
  else if (frame <= 180) {
    x += Math.sin((frame - 80) / 13);
    y += 6;
    w -= 16 * Math.sin((frame - 180) / 26);
    h -= 1;
  }
  else if (frame <= 200) {
    x += 2 * Math.sin((frame - 80) / 13);
    y += 6;
    w -= 16 * Math.sin((frame - 180) / 26);
    h -= 0.5;
  }

  // PARTS 11 AND 12: LOWER HALF
  else if (frame <= 220) {
    x += Math.sin((frame - 180) / 13);
    y += 5;
    w -= 4 * Math.sin((frame - 180) / 26);
    h += 0.5;
  }
  else if (frame <= 240) {
    x += Math.sin((frame - 200) / 13);
    y += 4;
    w += 2 * Math.sin((frame - 180) / 26);
    h -= 0.5;
  }

  // PARTS 13 AND 14: TWIST
  else if (frame <= 260) {
    x -= 4 * Math.sin((frame - 200) / 13);
    y += 3;
    w += 3 * Math.sin((frame - 180) / 26);
    h -= 3;
  }
  else if (frame <= 280) {
    x -= 3 * Math.sin((frame - 200) / 13);
    y -= 3;
    w -= 24 * Math.sin((frame - 180) / 26);
    h += 6;
  }

  // After the last section, start the figure over.
  else repeatedSetup();
}

What is New Media?

In this excerpt from Lev Manovich’s “The Language of New Media” (2001), we learn from a historical and technical perspective what, exactly, is this notion of “new media” that classes like our own Introduction to Interactive Media are concerned with.

Manovich first asserts that new media is the result of “the convergence of two separate historical trajectories: computing and media technologies” (44). Interestingly, both histories begin in the same decade (the 1830s) with two famous inventions: Babbage’s Analytical Engine (a computing technology) and Daguerre’s daguerreotype (a media technology). Throughout the reading, Manovich continues to use these two technological categories to show that they were (and are) both crucial for a modern and informed understanding of new media.

In regards to the issue of defining/describing new media, Manovich specifies five key characteristics (principles) of new media, which I detail below briefly:

(1) Numerical Representation

New media objects can be represented as numbers, for they are made up of digital code; consequently, they can be a) “described formally (mathematically)” and b) “subject[s] to algorithmic manipulation” (49).

(2) Modularity

Manovich notes that this characteristic can be considered the “fractal structure of new media”: in the same way that a fractal (like the computer-generated example below) is constituted by the same type of structure throughout, new media objects are consistently composed of the same smaller structures, such as pixels, characters, etc. How I think of it is this: humans are composed of cells in the same way a new media object (like an image) is composed of media elements (like pixels). An example of a fractal is below.

(3) Automation

New media include some level of automation, such as the way that programs like Photoshop can make certain automatic corrections to pictures. Other examples include animation programs that automatically generate certain items/objects as well as word processing programs (as well as others) which can automatically create certain features (such as a document’s layout). (pg. 53)

(4) Variability

New media objects should be able to “exist in different, potentially infinite, versions” instead of being entities that are stable/unchanging. One simple example of variability in new media is the way in which websites utilize user information to customize what the user sees, perpetually creating variants of the website.

(5) Transcoding

This principle of new media was a bit more difficult for me to understand, but from what I gather it refers to the way that new media is composed of two layers: a cultural layer and a computer layer. The computer layer is the workings of the computer (the computer language, the data structures, etc.), while the cultural layer is how the user interprets what they receive from the computer (a short story, a point of view, etc.). Naturally, the two layers constantly affect each other, and in particular the computer layer shapes the cultural layer by influencing how users interpret cultural data. (pgs. 63-64)

Lastly, I will quickly mention one thing I found interesting in Manovich’s discussion of what new media is not: in regards to the notion of interactivity, he believes that the term is too broad. He explains that lots of “old media” was interactive — it was just not physically interactive, meaning that users did not actually press a button, touch something, etc. Rather, there is such a thing as psychological interaction, in that users must “fill in” things with their minds, and thus for a long time art (and other things) have been interactive. I really like this notion of psychological interactivity and wonder how I might incorporate it into later projects of mine in this class.