## Assignment 7: Selfie

Are you familiar with the concept of the Uncanny Valley? In case you are not, it is what happens when your product looks enough like a human for people to perceive it as human – but not human enough for them to accept it as a normal human. For example, it is what happens when you take pictures like these:

And turn them into a picture like the following:

Oh God… It looks enough like me for a viewer to think it should be me; and yet it looks different enough for the viewer to question what the hell happened to my face.

Well, it turns out that making a self-portrait in Processing is really hard. Initially, I was trying to use only Processing’s rect(), ellipse(), and line() functions, but this turned out to be too much work. It was only when I found out about the arc() function – which can draw an arc of an ellipse – that I was able to streamline the process by writing a helper function, faceShape(), that draws two arcs with a quadrilateral in between to create a lightbulb shape. Since many features of the face can be represented with this general structure, I ended up using it quite a lot. Nevertheless, I still needed to do a lot of tedious shape layering – there is no escape from that in vector graphics, I guess.

In addition, I had trouble finding a good RGB color for the skin and the lips; turns out that it is hard to determine what skin tone one has!

I considered adding triangular protrusions to my hair and beard, to better simulate their fine structure and make the selfie’s hair look less like a helmet, but I ran out of time. This is definitely one of those projects one can sink hours into improving, but at some point one should acknowledge that dragging one’s face out of the Uncanny Valley might be impossible. There is a good reason why 3D modelling is difficult – and why 3D-animated movies take years to make!

The code to produce the selfie is attached:

```
void setup() {
size(1080,720);
rectMode(CENTER);
background(255);

noStroke();
fill(51,42,34);
faceShape(540, 320, 310, 290, 310, 70, 50); //hair

fill(51,42,34);
faceShape(540, 360, 260, 235, 10, 150, 320); //beard

fill(255,224,189);
faceShape(540, 360, 270, 230, 240, 150, 185); //face

fill(255,224,189, 50);
faceShape(540, 360, 270, 230, 240, 150, 250); //beard face

fill(51,42,34);
faceShape(549, 170, 150, 200, 50, 10, 40); //forehead hair

fill(51,42,34, 230);
rect(470, 335, 70, 20); //left eyebrow

fill(51,42,34, 230);
rect(610, 335, 70, 20); //right eyebrow

fill(51,42,34);
faceShape(470, 375, 100, 95, 5, 50, 5); //left glasses
fill(255,224,189);
faceShape(470, 377, 95, 97, 5, 50, 5);

fill(51,42,34);
faceShape(610, 375, 100, 95, 5, 50, 5); //right glasses
fill(255,224,189);
faceShape(610, 377, 95, 97, 5, 50, 5);

fill(51,42,34);
faceShape(540, 360, 45, 45, 5, 2, 0); //glasses bridge

stroke(0);
fill(255);
faceShape(475, 375, 50, 50, 30, 0, 10); //left eye
fill(51,42,34);
faceShape(475, 370, 20, 20, 20, 0, 20);
fill(0);
faceShape(475, 370, 8, 8, 8, 0, 8);

fill(255);
faceShape(605, 375, 50, 50, 30, 0, 10); //right eye
fill(51,42,34);
faceShape(605, 370, 20, 20, 20, 0, 20);
fill(0);
faceShape(605, 370, 8, 8, 8, 0, 8);

noStroke();
fill(51,42,34);
faceShape(418, 360, 10, 10, 0, 15, 0); //right glasses leg

fill(51,42,34);
faceShape(662, 360, 10, 10, 0, 15, 0); //left glasses leg

stroke(0);
fill(255,224,189);
faceShape(540, 400, 30, 60, 0, 70, 20); //nose

noStroke();
fill(0);
faceShape(525, 440, 15, 15, 10, 0, 10); //right nostril

fill(0);
faceShape(555, 440, 15, 15, 10, 0, 10); //left nostril

fill(51,42,34);
faceShape(540, 495, 140, 140, 60, 25, 0); //moustache

fill(255,224,189);
faceShape(540, 510, 140, 140, 40, 25, 70); //mouth face

fill(51,42,34, 230);
rect(540, 540, 25, 40); //soul patch

fill(227,93,106);
faceShape(540, 495, 115, 115, 20, 0, 40); //lips

fill(0);
faceShape(540, 495, 112, 112, 1, 0, 1); //mouth line

fill(255,224,189);
faceShape(540, 485, 20, 20, 0, 0, 8); //philtrum
}

void draw() {

}

void faceShape(int xCenter, int yCenter, int topWidth, int bottomWidth, int topArcHeight, int quadHeight, int bottomArcHeight) {
arc(xCenter, yCenter-(quadHeight/2), topWidth, topArcHeight, PI, TWO_PI, OPEN);
quad(xCenter-(topWidth/2), yCenter-(quadHeight/2), xCenter+(topWidth/2), yCenter-(quadHeight/2), xCenter+(bottomWidth/2), yCenter+(quadHeight/2), xCenter-(bottomWidth/2), yCenter+(quadHeight/2)); // the quadrilateral between the two arcs
arc(xCenter, yCenter+(quadHeight/2), bottomWidth, bottomArcHeight, 0, PI, OPEN);
}
```

## Response 8: Chance Operations & Her Code Got Humans to the Moon

Chance Operations:

Casey Reas’s talk was interesting because it showcased so many great visualizations and served as an example of what can be done – that will probably come in handy for future Processing projects! At the same time, I liked that he showed the historical context (Mondrian, Manovich, BASIC terminal drawing) of the visuals he was producing, and showed how we can generate them ourselves.

What I did not like was his dissing of what he called Order in Art. He seemed to embrace the critics’ view that there is nothing more boring and fruitless than grid paintings; that their hyper-embrace of order is at the expense of humanity and requires divorcing art from any function of society. I disagree. It is precisely because grid paintings (and “orderly art” in general) are divorced from humanity that we have to explore them. After all, we are constantly required by society to treat ourselves as machines – to improve ourselves, to work on our human capital, to live in orderly apartment buildings in orderly gentrified cities, to work with machines and think as machines – so perhaps we should explore our instinctive dislike of orderly abstract art as a sign of our insecurity. And perhaps question our society? It is worth remembering that abstract art as a whole – the kind Reas himself represents – was similarly derided as divorced from society.

Maybe, just maybe, we should all have a more open mind? I am disappointed to see that an abstract artist would be so ready to dismiss an integral part of his own medium.

Her Code Got Humans on the Moon – And Invented Software Itself:

It was great to read about Margaret Hamilton in the context of the ongoing wrangling about diversity in Computer Science, and to realize how many things have stayed the same since the 1960s (“How can you leave your daughter to work?” as well as the “guys’” culture). I have respect for Hamilton for being able to stand up for herself, embrace the work she found interesting, and still take care of her daughter by bringing her to work. (I wonder why her husband would not take a bigger role, though – as a graduate student, should he not have been freer than Hamilton to take care of their daughter? Maybe societal expectations hit home for Hamilton more than the article mentions?)

Additionally, I thought of this article in the context of the Jump to Universality. We do not really think of Computer Science as anything new anymore, so it is great to be reminded that there was a time – not so long ago – when the term did not even exist (just like the concept of “software”). Hamilton was a mathematician, not a computer scientist – and she and her staff had to invent it all from scratch. That meant attractive novelty and the feeling of doing work of national importance, but also great responsibility (“when it all would go wrong, they would trace the events back, and they would trace them to me”) and great pushback (“astronauts will not make mistakes”). It must have felt amazingly thrilling and supremely stressful at the same time to be at the bleeding edge of a new science! Although maybe at the time, it all felt mundane? Lack of funding, narrow focus, dealing with narrow-minded people – that does not sound all that glamorous. Maybe we can only realize the monumentality of our actions in hindsight? Maybe that is what jumps to universality feel like to their inventors – and Hamilton was one of the lucky ones, getting to see the rise of the science she helped invent while she was still alive.

That being said, I would have appreciated hearing more about how she feels about it all now (and why she looks so unhappy/disappointed in the title photo).

## Assignment 6: A Very Angry Owl

This owl is really angry. When you trigger it (pun not intended), it will stomp around in circles and make very angry noises. Sometimes, it will even run straight at you!

(If you do not believe that the owl is moving “like a human”, I will demonstrate during my presentation that I definitely can move like the owl!)

I was really impressed with last week’s self-contained cookie jar by Arame, and I wanted to do something similar this week. That is why I borrowed this owl pouch from a friend, and hid most of the electronics inside.

Unfortunately, I could not make it entirely self-contained; the two motors have to be outside to make the owl move, and there was just no space left inside for a battery.

Also, to make the owl stand upright, I had to attach two feet to the bottom part of the pouch. To make them stick, I used lots of sticky tack… There is an aesthetic dimension to the feet too, though! Look, the tack makes cute fingers:

It was the motors, however, that posed the most problems.

First of all, the motion itself. My initial idea of attaching crossbars to the spinning bit of the motors did not work out, because the motors were just not strong enough to lift the heavy device in order to move. The motors just stalled. I tried solving this problem by attaching wheels to the spinning head with lots of tack, but the wheels were too big and the tack too elastic. When the motors were off, the wheels would become misaligned, and when turned on, the wheels would cause excessive vibration. Also, the motors would still stall when the misaligned wheels tried to lift the device! Finally, I realized that I could turn the vibrations into a feature – by attaching lumps of tack just slightly off-center to the motor heads, the owl would start to vibrate, and the vibration would move it around! Ta-dah, problem solved. To make sure the tack does not become deformed and detached due to centrifugal force, I wrapped it with tape:

Second of all, the vibrations caused the motors to detach from the construction holding the owl together and upright. The double-sided tape was not strong enough to hold the two weird surfaces (wood and metal) together! That is why I had to add rubber bands to pull the motors inward. These are secured out of sight behind the owl’s wings. A side effect of this solution was that the owl was bending backwards. The bending was resolved by inserting the Arduino and using its board as a structural support, but now the whole pouch acted as a lever that was powerful enough to detach from the feet. This problem was solved by applying a lot of tack, in the form of those stylish fingers.

This is what the owl looks like inside:

After inserting the Arduino, there really is not much space inside – just enough to fit the longer wires that connect to the motors and the ones connecting the button. That is why I am very proud of the design of the circuit board:

The tiny breadboard fits perfectly between the pins of the Arduino! To minimize the need for overhead space, flat short wires were used instead of the longer, curved wires that I usually prefer. Additionally, the electronic components (transistor, resistor, and diode) are flattened and held in place by tiny wires. (Special care had to be taken to ensure that the flattened legs of the diode do not touch the legs of the transistor – that is why the diode is held by four wires. The resistor just needed one, as there is not much it can come into unwanted contact with.) The wires connecting the motors (on the left) and the button (on the right) are flattened as much as possible, and held by small wires just before exiting the breadboard. (The yellow wire connecting to the top left pin of the Arduino is there just to help hold the breadboard in place.)

Overall, a good challenge, and a cute outcome! The code for controlling the owl was not complicated at all:

```
void setup() {
pinMode(3, OUTPUT); // motor control, via the transistor
pinMode(8, INPUT); // trigger button
}

bool isOn = false;

void loop() {
// toggle the motors whenever the button is pressed
if (digitalRead(8) == HIGH) {
isOn = !isOn;
while (digitalRead(8) == HIGH); // wait for release (crude debounce)
delay(50);
}

if (isOn) analogWrite(3, 255);
else analogWrite(3, 0);
}
```

## Assignment 5: D.O.G.E.

For my Stupid Pet Project, I chose to make an actual pet!

(Thanks to Arame for participating in the demonstration!)

The Dog-Owner Googly-eyed Experience (DOGE, for short), is a model dog that offers several interactions to its owner. Each one of the interactions has an effect on DOGE’s emotions, communicated with a front face panel with 64 LEDs.

(The DOGE’s face is my own work.)

The DOGE can experience three emotions – it can be Happy, Neutral, and Angry. They are communicated by lighting up different arrangements of LEDs.

In addition, when the DOGE is happy, it wags its tail happily, as it is attached to a servo that sways between 45 and 135 degrees. If the DOGE’s emotion changes to Neutral or Angry, the tail rests pointing downward, at what the servo understands as 90 degrees.

(Credit to Nikki Joaquin for drawing the dog’s tail for me!)

To signal to the owner that they did something to make the DOGE happy, the happy LEDs flash briefly and a higher-pitched tone plays. Conversely, when the owner does something that makes the DOGE angry, the angry LEDs flash alongside a lower-pitched tone.

The user can interact with the DOGE in three different ways: An FSR on the back of its head, a tilt sensor, and a series of switches on top of the DOGE’s back.

The owner has a choice of pressing the FSR gently or harshly. Pressing the sensor lightly will make the DOGE slightly happy (+1), while pressing it harshly will make the DOGE slightly angry (-1). In terms of the analog range of the sensor, there is a buffer zone between the two possibilities, so that the owner is less likely to make the DOGE angry when they intend to make it happy.

The tilt sensor actually acts as more of a vibration sensor – it contains a small ball that breaks a connection between two pins when the sensor is disturbed. This functionality is used by the DOGE to detect mean behavior by its owner. If the owner slaps the DOGE or kicks it, the tilt sensor will detect it and significantly reduce the DOGE’s happiness (-5). You will have to do a lot of nice things to your DOGE to make it happy again after one of these!

Finally, the DOGE’s hair switches. These are groups of wires on the DOGE’s back that detect petting motions by the owner.

When the wires are made to touch the strip of conductive fabric, the DOGE becomes happier. But it is not so easy – the owner has to pet the dog in the right direction (from head to tail) and has to start with the group of wires that is closest to the head. The owner has four seconds to complete the petting motion to achieve maximum happiness. As the owner moves the hand along the back of the DOGE, the happiness reward increases progressively – +1 for the first group, +2 for the second… +4 for the last. This means +10 happiness for each completed petting motion! It pays to learn how to pet your DOGE!

This was easily the most complex project I made for the class. For starters, the LED panel has 64 LEDs in total, easily qualifying this project for the “LED Fetishism” category of IM projects. 64 resistors and more than 320 wires were required to make the whole thing work; by my count, I used about half of all the available wires in the lab.

Second of all, this project is HUGE.

Working with it required hours of me leaning into a box that was big enough for a whole IKEA lamp to fit, and was heavy enough that it wanted to topple unless weighed down by a heavy wooden stool.

Third of all, the veritable jungle of wires required a lot of tidying up in order for me to even gain access to the insides of the box, and meticulous color-coding of wires to maintain my sanity when debugging hardware problems.

Fourth, my original wiring arrangement – which used a group of parallel resistors in series with a group of parallel LEDs – proved suboptimal. Current would flow through different LEDs differently based on minuscule differences in their internal resistance, threatening to burn out the less-resistant LEDs while leaving the others too dim (see this illustration for a visualization of the situation). To give each LED its own resistor, I had to dive into the jungle of wires and replace 64 of them with a resistor.

This was relatively easy to do because of a system that holds the DOGE’s head upright and its back sloped while allowing the box to be opened and its contents inspected.

This system also helps with routine fixes of non-functional LEDs, caused by wire connections becoming loose:

The code for the DOGE was more complicated than the violin’s, mainly because each of the sensors and output devices required its own finite state machine and its own timer.

```
#include <Servo.h>

Servo myServo;

int const greenMouthPin = 3;
int const greenTonguePin = 4;
int const redMouthPin = 2;

int const servoPin = 5;

int const piezoPin = 6;

int const tiltPin = 7; // 1 normally, 0 when hit

int const firstHairPin = 8; // closest to head
int const secondHairPin = 9;
int const thirdHairPin = 10;
int const fourthHairPin = 11;

int const fsrPin = A0;

int currentHappiness = 0; // initial value
int previousHappiness = 0; // initial value

int const happyThreshold = 5; // the minimum happiness for dog to be happy
int const angryThreshold = -5; // the maximum happiness for dog to be angry

bool isHappy = false;
bool isNeutral = false;
bool isAngry = false;

long servoTimerCurrentMillis = 0;
long servoTimerPreviousMillis = 0;

int const servoSpeed = 10; // the speed of happy tail wagging
bool servoReversed = false;
int servoPos = 90;

long hairTimerCurrentMillis = 0;
long hairTimerPreviousMillis = 0;

int const hairLimit = 4000; // the time the user has to finish petting the dog
int currentHair = 0;
int const hairPositiveHappinessMultiplier = 1; // happiness from petting the dog

long fsrTimerCurrentMillis = 0;
long fsrTimerPreviousMillis = 0;

int const fsrThreshold = 200; // the time the FSR waits after touch before starting to sense (to give the user chance to choose pressure)
int const fsrLimit = 1000; // the time between two FSR readings after a reading has been made
int const fsrMinPositiveTouch = 200; // FSR threshold for gentle touch
int const fsrMaxPositiveTouch = 600; // FSR limit for gentle touch
int const fsrMinNegativeTouch = 700; // FSR threshold for mean touch
bool fsrWaiting = true;
bool fsrEnabled = false;
int const fsrPositiveHappiness = 1; // happiness from gentle touch
int const fsrNegativeHappiness = -1; // unhappiness from mean touch

long tiltTimerCurrentMillis = 0;
long tiltTimerPreviousMillis = 0;

int const tiltLimit = 1000; // the time between two tilt sensor readings after a reading has been made
int const tiltNegativeHappiness = -5; // unhappiness from kicking or slapping

int const soundTime = 200; // the time sounds are played for, also determines emotion LED flashing
int const positiveSound = 1047; // the positive sound's frequency
int const negativeSound = 262; // the negative sound's frequency

void setup() {
pinMode(greenMouthPin, OUTPUT);
pinMode(greenTonguePin, OUTPUT);
pinMode(redMouthPin, OUTPUT);

myServo.attach(servoPin);

pinMode(piezoPin, OUTPUT);

pinMode(tiltPin, INPUT);

pinMode(firstHairPin, INPUT);
pinMode(secondHairPin, INPUT);
pinMode(thirdHairPin, INPUT);
pinMode(fourthHairPin, INPUT);

pinMode(fsrPin, INPUT);

//Serial.begin(9600);
}

void loop() {
// setup
servoTimerCurrentMillis = millis();
hairTimerCurrentMillis = millis();
fsrTimerCurrentMillis = millis();
tiltTimerCurrentMillis = millis();

// setting state
isHappy = false;
isNeutral = false;
isAngry = false;

if (currentHappiness >= happyThreshold) isHappy = true;
if (currentHappiness > angryThreshold && currentHappiness < happyThreshold) isNeutral = true;
if (currentHappiness <= angryThreshold) isAngry = true;

// setting face
digitalWrite(greenMouthPin, LOW);
digitalWrite(greenTonguePin, LOW);
digitalWrite(redMouthPin, LOW);

if (isNeutral == true || isHappy == true) digitalWrite(greenMouthPin, HIGH);
if (isHappy == true) digitalWrite(greenTonguePin, HIGH);
if (isAngry == true) digitalWrite(redMouthPin, HIGH);

// hair
if (currentHair == 0 && digitalRead(firstHairPin) == HIGH) {
// if the first hair was detected
hairTimerPreviousMillis = hairTimerCurrentMillis;
currentHair += 1;
currentHappiness += currentHair*hairPositiveHappinessMultiplier;
}
if (currentHair > 0 && hairTimerCurrentMillis-hairTimerPreviousMillis > hairLimit) {
// if the hair timer runs out
currentHair = 0;
}
if (currentHair == 1 && digitalRead(secondHairPin) == HIGH) {
currentHair += 1;
currentHappiness += currentHair*hairPositiveHappinessMultiplier;
}
if (currentHair == 2 && digitalRead(thirdHairPin) == HIGH) {
currentHair += 1;
currentHappiness += currentHair*hairPositiveHappinessMultiplier;
}
if (currentHair == 3 && digitalRead(fourthHairPin) == HIGH) {
// the petting motion is complete
currentHair += 1;
currentHappiness += currentHair*hairPositiveHappinessMultiplier;
currentHair = 0;
}

// servo
if ((servoTimerCurrentMillis-servoTimerPreviousMillis) >= servoSpeed) {
servoTimerPreviousMillis = servoTimerCurrentMillis;
}
if (isHappy == true) {
if ((servoTimerCurrentMillis-servoTimerPreviousMillis) == 0) {
if (servoReversed == false && servoPos < 135) servoPos += 1;
if (servoPos >= 135) servoReversed = true;
if (servoReversed == true && servoPos > 45) servoPos -= 1;
if (servoPos <= 45) servoReversed = false;
}
}
if (isHappy == false) {
servoPos = 90;
}
myServo.write(servoPos);

// fsr
if (fsrWaiting == true && analogRead(fsrPin) > fsrMinPositiveTouch) {
// touch detected
fsrWaiting = false;
fsrEnabled = true;
fsrTimerPreviousMillis = fsrTimerCurrentMillis;
}
if (fsrTimerCurrentMillis-fsrTimerPreviousMillis > fsrLimit) {
// the fsr timer runs out
fsrEnabled = false;
fsrWaiting = true;
}
if (fsrEnabled == true && fsrTimerCurrentMillis-fsrTimerPreviousMillis > fsrThreshold && fsrTimerCurrentMillis-fsrTimerPreviousMillis < fsrLimit && analogRead(fsrPin) > fsrMinPositiveTouch && analogRead(fsrPin) < fsrMaxPositiveTouch) {
fsrEnabled = false;
currentHappiness += fsrPositiveHappiness;
}
if (fsrEnabled == true && fsrTimerCurrentMillis-fsrTimerPreviousMillis > fsrThreshold && fsrTimerCurrentMillis-fsrTimerPreviousMillis < fsrLimit && analogRead(fsrPin) > fsrMinNegativeTouch) {
fsrEnabled = false;
currentHappiness += fsrNegativeHappiness;
}

// tilt
if (digitalRead(tiltPin) == LOW && tiltTimerCurrentMillis-tiltTimerPreviousMillis > tiltLimit) {
// only if the time is higher than the time limit
tiltTimerPreviousMillis = tiltTimerCurrentMillis;
currentHappiness += tiltNegativeHappiness;
}

// sound
if (currentHappiness != previousHappiness) {
digitalWrite(greenMouthPin, LOW);
digitalWrite(greenTonguePin, LOW);
digitalWrite(redMouthPin, LOW);
delay(soundTime/4);

if (currentHappiness > previousHappiness) {
tone(piezoPin, positiveSound, soundTime);
digitalWrite(greenMouthPin, HIGH);
digitalWrite(greenTonguePin, HIGH);
delay((3*soundTime)/4);
}
if (currentHappiness < previousHappiness) {
tone(piezoPin, negativeSound, soundTime);
digitalWrite(redMouthPin, HIGH);
delay((3*soundTime)/4);
}
}

previousHappiness = currentHappiness;

//Serial.println(fsrWaiting);

delay(1);
}
```

## Response 7: The Norm of Design

Roberto Casati’s lecture was interesting because he advocated against the general desire of trying to define ‘design’. According to him, definitions of design usually fall short of accurately describing all aspects of the field. (He illustrated this with the following example: Try to come up with a definition for a chair. ‘An object for sitting’, for example, includes sofas and seats and stools. ‘An object for sitting with four legs reaching halfway up its height’ excludes office roller chairs. And so on, ad infinitum. Any definition we come up with is either too broad, including other classes of objects, or too narrow, excluding some types of chairs! If we cannot come up with a consistent definition for a physical object, how can we hope to define an abstract idea such as ‘design’?)

What is worse, definitions of design are actually dangerous, because they tend to describe design as an intersection of art and engineering. Why could the converse not be true, he asks – that art and engineering are extreme versions of design?

That is why Mr. Casati argues that definitions do not and should not actually matter. What matters, instead, are perceptions. If you asked a thousand people, why would this chair

generally be considered ‘design’, while this one

would not?

What is it about the first chair that screams “Hey people, I’m a design chair, look at me, I’m so cool”? Mr. Casati suggests that there is a specific ‘design look’ that people use when they judge ‘design’ things. They tend to be metallic, sleek, sometimes aerodynamic, like this kettle

and/or transparent, sharp, angled, like this chair:

Both are visually striking, that is for sure, but my objection during Mr. Casati’s lecture was that on first glance, I would not even know that the design kettle was a kettle! Where are the signifiers of kettle-specific affordances? How do I know that I can pour water inside for it to be heated up?

If the general perception of a design item includes the user not being able to tell whether the object is intended to be used or merely admired as a sculpture, is that not troubling? Does that not, in fact, position design outside both realms of art and engineering?

I am not questioning Mr. Casati’s theory – unlike a lot of people at the lecture, who seemed to try to force him to articulate a definition of design. I am fine with the unorthodox definition of design as ‘anything that feels design-y enough’, but I am questioning what this implies about the necessary and sufficient elements of the ‘design look.’

And, if we are saying that design items have to have a design look – which disqualifies this kettle, for example

then I do not see how his DIY anti-head-banging-signal leaf could, by any stretch of imagination, be considered design. It is the babies’ equivalent of this sign:

That certainly does not have the ‘design look’! Therefore, I am questioning whether Mr. Casati’s non-definition of design is truly all-encompassing. It seems to me that this definition, too, is vulnerable to counter-examples that undermine it!

## Assignment 4: Violin

This musical instrument was designed to look somewhat like a violin; it features two plastic components with powered strips. One of the components is held in the left hand and functions as the neck of the violin, while the other is held in the right hand and functions as the bow.

The bow component has eight perpendicular strips; it is pictured on the left. Each strip is connected to an input pin on the Arduino; when one comes into contact with a strip on the neck component, a note is produced. These cover one octave (from C to C inclusive), with the lower C closer to the handle, and the higher C at the tip.

On the right is the neck component. The farther end is held in the player’s hand, who draws the bow over one of three strips attached lengthwise. Each strip represents one octave, from C4-C5 on the right side (not visible), through C5-C6 on the top, to C6-C7 on the left side.

The whole device is photographed below.

To make cable management easier, the crocodile wires connect to wires while attached to a strip of plastic in an orderly line.

The biggest challenge turned out to be the question of how to detect which neck-component strip was in contact with the bow. Of course, I could have just connected all three strips to power and be satisfied with just one octave, but I wanted to increase the range of possible sounds the violin could produce.

At first, I wanted to detect whether a current was passing through one of the three wires – somehow split off current from the strip that was part of a closed circuit and send that signal to an input pin. I could not figure out a way to do that with the electronic components we have in the lab; nothing I tried worked, because a parallel circuit would form between the two detecting paths.

I was on the verge of giving up when I had a 3am-piphany. Instead of trying to detect a switch being closed, I could use my Arduino to turn on each of the neck strips in turn and wait for a signal at one of the input pins. The combination of the powered neck strip and the detecting bow strip then determines the combination of octave and note, which is exactly what I wanted!

How do I turn on each power strip in turn? A for-loop in my code sends digital signals to three transistors that alternately allow current to pass to only one power strip at a time. The setup is detailed below; the three green wires connect to Arduino output pins, and the three white wires connect to the power strips on the neck of the violin.

Despite this added functionality, the code is still very simple. Adding the octave distinction required only one extra for-loop (octaveCounter).

```
#define NOTE_C4 262
#define NOTE_D4 294
#define NOTE_E4 330
#define NOTE_F4 349
#define NOTE_G4 392
#define NOTE_A4 440
#define NOTE_B4 494
#define NOTE_C5 523
#define NOTE_D5 587
#define NOTE_E5 659
#define NOTE_F5 698
#define NOTE_G5 784
#define NOTE_A5 880
#define NOTE_B5 988
#define NOTE_C6 1047
#define NOTE_D6 1175
#define NOTE_E6 1319
#define NOTE_F6 1397
#define NOTE_G6 1568
#define NOTE_A6 1760
#define NOTE_B6 1976
#define NOTE_C7 2093

int notes [3][8] = {
{NOTE_C4, NOTE_D4, NOTE_E4, NOTE_F4, NOTE_G4, NOTE_A4, NOTE_B4, NOTE_C5},
{NOTE_C5, NOTE_D5, NOTE_E5, NOTE_F5, NOTE_G5, NOTE_A5, NOTE_B5, NOTE_C6},
{NOTE_C6, NOTE_D6, NOTE_E6, NOTE_F6, NOTE_G6, NOTE_A6, NOTE_B6, NOTE_C7}
};

int const soundPin = 10;

void setup() {
pinMode(2, INPUT);
pinMode(3, INPUT);
pinMode(4, INPUT);
pinMode(5, INPUT);
pinMode(6, INPUT);
pinMode(7, INPUT);
pinMode(8, INPUT);
pinMode(9, INPUT);

pinMode(soundPin, OUTPUT);

pinMode(11, OUTPUT);
pinMode(12, OUTPUT);
pinMode(13, OUTPUT);
}

int note = 0;

void loop() {
note = 0; // default; no sound

for (int octaveCounter = 0; octaveCounter < 3; octaveCounter += 1) {
// turn on each power strip in turn
digitalWrite(11, LOW);
digitalWrite(12, LOW);
digitalWrite(13, LOW);

digitalWrite(octaveCounter+11, HIGH);

for (int noteCounter = 0; noteCounter < 8; noteCounter += 1) {
// check each note strip in turn (the bow strips are on pins 2-9)
// the combination of power strip and note strip determines the octave and note
if (digitalRead(noteCounter+2) == HIGH) note = notes[octaveCounter][noteCounter];
}
}

if (note > 0) tone(soundPin, note, 100);
}
```

## Response 6: Design Meets Disability

The author argues that bad design for the disabled is too often excused because of the intended market and challenges the notion that good general design will eventually find its way into the specialist design by the “trickle-down effect.” As a counterexample, he offers the Navy plywood leg splint that enabled a whole era in furniture design by providing the tools to shape wood in ways and volumes previously thought impossible. The author believes that the area of disability design is too engineering-focused and needs to be more creative in the ways it solves its problems.

Eyeglasses are one of the few areas in which disability-reducing devices are conspicuous and fashionable; there is no stigma attached to wearing them – in fact, I did not consider my own glasses when I was reading about the leg splints, but the connection is undeniable! If glasses are fine to showcase as eyewear (even though there are contact lenses, which are indeed invisible), why not other medical devices? Why do they so often have to be in skin tones, as if to deny they are even there? Doesn’t that lead to the uncanny valley effect – when something looks human enough to be taken for part of a person, but not human enough to avoid being creepy? Perhaps what is needed is explicitness and transparency. I was shocked to learn that glasses only became stylish in the 1990s. (Although looking back at photos of my parents from the 1980s, the designs really were horrible…)

Material design is used as a term to explain the middle ground between failed invisibility and extravagant visibility. It is what designs for disability must attain in order to be acceptable. There is a need for a large choice of materials, designs, and styles to abate the risk of “a design not suiting a particular individual,” just like fashion in general. There is a big difference between a consumer and a patient – or a user, which is something technology should think about! The argument about hearing aids’ miniaturization is poignant here – if only they did not have to be so constrained in size, the quality of life of their wearers could improve dramatically by allowing them to hear better! Perhaps they too should be rebranded as a -wear, to imply fashion (hearwear, in this case). The “enormous” Hulger hearing aid is a great example of in-your-face design and exudes self-confidence despite being a medical aid!

When it comes to legwear, the author uses Aimee Mullins’s prosthetic legs as an example of legwear that is not constrained by trying to copy the form of human legs (one could similarly point to the now-disgraced Oscar Pistorius). They project such a futuristic vibe! Maybe futurism should not be a universal goal (my grandfather might be reluctant to wear a Bluetooth-like hearing aid), but it should be one among a limitless array of options. Perhaps we need to be allowed to be shallow! I love the golden/leather armwear example – transforming disability into something brilliant and “marvelous and exotic” instead of something to be pitied. Sometimes it is good to hide one’s disability behind a discreet prosthesis, sometimes it is not – what is important is that there is a choice!

The second half of the article then suggests how to design for disability by adopting principles from mainstream design – universality and simplicity. The iPod is an example of an appliance, a product with purposely limited function, “but doing it very well,” while the iPhone is an example of a platform – a multipurpose device. Funnily enough, it seems that the iPhone has won over the iPod; why would I carry two devices if I can listen to music on my phone? I do not understand the dismissal of platforms as useless when compared to products, however. Certainly, the success of smartphones (which are single-handedly killing the MP3 player, the dumb phone AND the digital camera) shows that there is a desire on the part of users to carry fewer things!? When are we ever getting pushed around? It seems to me that we are entirely consenting participants in this process!

However, I can see that a design that aims to be universal at the expense of simplicity may be counterproductive, especially when it comes to disability – we need both. Where is the Apple of design for disability?

I like the point about designing iconic radios for people with dementia. What is the most recognizable feature of a radio? How can we make the interaction less complicated, not more? Perhaps tuning should not be necessary (maybe the radio could find the most appropriate station by itself?).

## Response 5: Physical Computing’s Greatest Hits & Making Interactive Art

As I was reading the list of patterns in IM projects, I was surprised to see that we have already made a lot of them as a class! We have done floor pads, we have done gloves, we have done LED fetishism (in fact, my project for tomorrow is a glorified changing LED), we have done cute hearts to transfer emotions. Those that we have not made we have seen on video and appreciated as super cool; anyway, I suspect that most of those went unmade simply because we did not yet know how to work with software!

That makes the question of how to be original in IM all the more pertinent. If everything has been done, what else is there to do? Especially since the second article says that our projects have to speak for themselves (we should “shut up” and not explain!). I believe this is where the art part of this class comes in… If we cannot innovate on the function, and we cannot contextualize our work through text, we have to make sure that our object’s art and design appropriately signify the affordances inherent in the work.

That is why we should all, as a class, perhaps pay less attention to the interaction we are designing (not much innovation can be done there; moreover, nobody really cares how we solved some problem our bad design choices brought upon us), and pay more attention to the choices we often make subconsciously. Why did I use blue LEDs? Why put LEDs in groups of three? Why use a dolphin as opposed to a flamingo (loved that one, by the way)? These are the questions that really matter, and we should probably start treating them that way.

## Assignment 3: Wagon Wheel Light

The wagon-wheel effect is an interesting optical illusion that makes spoked wheels appear to rotate backward at certain ranges of frequencies. We have all probably noticed that car rims sometimes seem to stop in place and then slowly rotate backward even though the car is moving! Here is an illustration of the effect, from YouTube:

I decided to use this effect to give an analog control an unexpected influence over a circle of LEDs. When the Wagon Wheel Light is turned on, the light rotates slowly around the circle; as the user turns the knob of a potentiometer, the speed of rotation increases. (Technically, the “time to switch on the following LED” decreases, while the “time to keep an LED on” stays the same.) At the midway point, the interval of rotation is exactly equal to the “time to keep on”; the LEDs are never turned off. As the user continues turning the knob, the interval of rotation starts increasing again, but now the rotation propagates in the opposite direction! As we approach the maximum, the rotation slows down, as if due to the wagon-wheel effect. (This is not a true wagon-wheel effect, however; the slowing rotation is simulated, not the result of the system actually speeding up.)
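The knob-to-interval mapping at the heart of this behavior can be sketched in plain C++. This is a simplified stand-alone model, not the full sketch: `remap` is a stand-in for Arduino’s built-in `map()`, and the constants mirror the ones used in the full listing below (`maxInterval` of 2048, twelve LEDs).

```cpp
// Stand-in for Arduino's map(): linearly rescales x from [inLo, inHi] to [outLo, outHi].
long remap(long x, long inLo, long inHi, long outLo, long outHi) {
    return (x - inLo) * (outHi - outLo) / (inHi - inLo) + outLo;
}

const long maxInterval = 2048;              // full knob range, also the slowest rotation period (ms)
const long lightTime   = maxInterval / 12;  // how long each LED stays lit (ms)

// Given a knob reading scaled to [0, maxInterval), return the rotation interval
// in ms and report the apparent direction of propagation via `reversed`.
long rotationInterval(long knob, bool &reversed) {
    if (knob >= maxInterval / 2) {
        // Upper half of the range: normal direction, fastest at the midpoint.
        reversed = false;
        return remap(knob, maxInterval / 2, maxInterval - 1, lightTime, maxInterval - 1);
    }
    // Lower half: apparent reversal, slowing back down toward the other extreme.
    reversed = true;
    return remap(knob, 0, maxInterval / 2 - 1, maxInterval - 1, lightTime);
}
```

At the midpoint the interval collapses to exactly `lightTime`, which is why no LED ever switches off there; at either extreme it stretches to the full `maxInterval`, giving the slow rotation in opposite directions.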

There are two parts to this object – the box with 12 LED lights which hides the electronics, and the breadboard with control components. The device comes with its own battery; power can be switched on with the miniature toggle switch on the right side of the breadboard. Once switched on, the speed of rotation can be adjusted with the potentiometer on the left. (Blue LEDs were chosen because they seem to be the brightest available in the IM lab.)

Each LED is controlled independently by the Arduino output pins. To make cable management easier, a repeating pattern of blue-red-green wires runs to the four triples of LEDs. (In case you are wondering, the LED at pin 2 is the one closest to the control breadboard; the pin numbers increase anticlockwise from there.)

To increase the reliability of the system (it needed 24 crocodile wires and 12 resistors!), a breadboard was placed in the middle of the circle of LEDs. The crocodile wires go there first, and the Arduino wires connect to them through the breadboard. The ground cables have a black-white-yellow color scheme; only one wire is required to connect to the ground pin on the Arduino.

Apart from managing an excessive number of wires, I faced two challenges when assembling the project. When I first turned the device on, the LEDs were unexpectedly dim, despite everything being wired correctly. After some hair-pulling, I realized I had not initialized the Arduino pins as output pins! After fixing that, another problem became apparent: LEDs 5-10 would not turn on. This turned out to be because the central breadboard has a break in its ground and power rails, unlike the control breadboard.

The code for the Wagon Wheel Light is more straightforward than the one for Aeolus. It is presented below:

```int const knobPin = A0;

long currentMillis = 0;
long previousMillis = 0;

int ledStart [] = {0,0,0,0,0,0,0,0,0,0,0,0};
int ledEnd [] = {0,0,0,0,0,0,0,0,0,0,0,0};

int multiplier = 2;
int maxInterval = 1024*multiplier;
int knobValue;
int interval;
int subInterval;
int lightTime;

bool reversed = false;

void setup() {
  // the 12 LEDs are wired to pins 2-13
  for (int pin = 2; pin <= 13; pin += 1) {
    pinMode(pin, OUTPUT);
  }

  Serial.begin(9600);
}

void loop() {
  currentMillis = millis();

  // read the potentiometer, scaling the 0-1023 reading to cover 0..maxInterval
  knobValue = analogRead(knobPin) * multiplier;
  Serial.println(knobValue);
  delay(1);

  lightTime = maxInterval/12; // how long a light stays on

  if (knobValue >= (maxInterval/2)) {
    // upper half of the knob range: normal rotation, fastest at the midpoint
    reversed = false;
    interval = map(knobValue, maxInterval/2, maxInterval-1, lightTime, maxInterval-1);
  }
  else {
    // lower half: apparent reversal, slowing down toward the extreme
    reversed = true;
    interval = map(knobValue, 0, (maxInterval/2)-1, maxInterval-1, lightTime);
  }

  subInterval = interval/12; // how long we wait before turning the next light on

  if ((currentMillis-previousMillis) > interval) {
    previousMillis = currentMillis; // start a new rotation cycle
  }

  // compute each LED's on/off times within the current cycle
  for (int index = 0; index <= 11; index += 1) {
    if (reversed == false) {
      ledStart[index] = index*subInterval;
      ledEnd[index] = ((index*subInterval) + lightTime) % interval;
    }
    else {
      ledStart[index] = ((index*subInterval) + lightTime) % interval;
      ledEnd[index] = index*subInterval;
    }
  }

  if (reversed == false) {
    for (int index = 0; index <= 11; index += 1) {
      if (ledStart[index] < ledEnd[index]) {
        // the on-window does not wrap around the end of the cycle
        if (currentMillis-previousMillis >= ledStart[index] && currentMillis-previousMillis < ledEnd[index]) digitalWrite(index+2,HIGH);
        else digitalWrite(index+2,LOW);
      }
      else {
        // the on-window wraps around the end of the cycle
        if ((currentMillis-previousMillis >= 0 && currentMillis-previousMillis < ledEnd[index]) || (currentMillis-previousMillis >= ledStart[index] && currentMillis-previousMillis <= interval)) digitalWrite(index+2,HIGH);
        else digitalWrite(index+2,LOW);
      }
    }
  }
  else {
    // reversed: mirror the pin order (11-index+2) and invert the on/off windows
    for (int index = 0; index <= 11; index += 1) {
      if (ledStart[index] < ledEnd[index]) {
        if (currentMillis-previousMillis >= ledStart[index] && currentMillis-previousMillis < ledEnd[index]) digitalWrite(11-index+2,LOW);
        else digitalWrite(11-index+2,HIGH);
      }
      else {
        if ((currentMillis-previousMillis >= 0 && currentMillis-previousMillis < ledEnd[index]) || (currentMillis-previousMillis >= ledStart[index] && currentMillis-previousMillis <= interval)) digitalWrite(11-index+2,LOW);
        else digitalWrite(11-index+2,HIGH);
      }
    }
  }
}```

The only problem I faced when it came to software was the question of how to reverse the direction of rotation for the second half of the potentiometer range. This turned out to require mirroring the LED indices in the reversed for loop – writing to pin 11-index+2 instead of index+2.
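That index mirroring can be checked with a tiny stand-alone helper (the function names here are just for illustration; the pin numbers match the wiring described above, with the twelve LEDs on pins 2–13):

```cpp
// Forward rotation: LED index 0..11 sits on pin index+2.
int forwardPin(int index)  { return index + 2; }

// Reversed rotation: mirror the index (11-index), so pin 11-index+2, i.e. 13-index.
// The first LED in the cycle then becomes the last one around the circle.
int reversedPin(int index) { return 11 - index + 2; }
```

The mirror maps index 0 to pin 13 and index 11 to pin 2, so the lit window walks around the circle in the opposite direction without changing any of the timing code.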

## Response 4: A Brief Rant on the Future of Interaction Design

The Rant:

“Visions give people a direction and inspire people to act.” I agree, the linked video hit all the right chords – it was futuristic, heartwarming, and hopeful, all in 6 minutes… but as it turns out, it was boring from an interaction perspective. What the video emphasized was how technology can reconnect us in a world of travel – what technology can do; what nobody paid attention to was how we interact with that technology – what we can do!

The author pleads for more thought to be given to how we can use the abilities of our hands to understand dynamic physical information, and to look beyond the pictures-under-glass paradigm. Apple has gone part of the way already with the haptic interface on newer iPhones, but it does not go far enough to approximate textures or a depth continuum (I believe there are just two “depths” one can push into) – not enough to let the piano app, for example, feel like a real piano. On the other hand (pun not intended), this limited capability restricts what app developers can do, limiting the reach of the technology – it is not universal!

I was thinking about something else though, related to the weight-distribution detection capability of our hands – we could imagine a network of containers that would liquefy a gas in a small volume (and make that part of the device heavier by concentrating the weight of the gas) in specific, program-determined locations. What would be the application? The phone would not become heavier overall (unless we could draw a gas from the outside environment!?), but it would be able to use weight analogies (e.g. this graph skews to the left, so the left side of the device becomes heavier)! This article definitely opened my mind to new ideas!

The Responses:

I feel like offering an example solution would only restrict the universe of interaction solutions people would think about. It is already true that the example problems restrict people’s thinking – or at least, they restricted mine to focus on coming up with a solution to approximate water swishing around in a glass.

I was surprised to see that smartphones would be part of the future vision – but it made me think “good, perhaps we solved the problem of handheld interaction,” not “gee, is that it?” That is why I thought the article was inspiring – it showed me that I had too narrow a view of the problem!

I like the author’s rebuttals of styluses/keyboards, voice, waving arms in the air (without haptics), saying that they are not dynamic enough and do not use much of our body’s capability. The rant about the fact that “physical exercise” had to be invented as a concept certainly rings true, and also shows why maybe we should not rush to brain-computer interfaces.

The concern about finger-blindness is new to me, and makes sense to a degree. Although is it not true that even a child using their iPad is using their fingers? Perhaps this argument should be more like “looking at a computer screen for long periods causes short-sightedness in children” (although it apparently DOES NOT); so “touching a touchscreen for long periods causes impairment in the use of one’s fingers”? I am not convinced by this argument, though, as it is directly contradicted by what the author himself claims in his article – if our dexterity/touch/proprioception is an innate skill (after all, “the only reason your mind doesn’t explode every morning from the sheer awesomeness of your balletic achievement is that everyone else in the world can do this as well”), how exactly does using it wrong (whatever that means) not count as merely using it? (Our eyes did not evolve to look at text all day every day, and yet nobody claims that reading makes us less able to discern non-text objects in the real world. Our ears did not evolve to listen to speech, and yet nobody claims that listening to people speak makes us less able to hear music or the sounds of nature.)

That being said, I do like the Shakespeare/Dr. Seuss analogy as applied to touch – we should not restrict our vocabulary when it comes to touch gestures.