Alva Emerging Fellowship Application

I recently applied to the Alva Emerging Fellowship. I submitted a discovery that came out of my thesis, which I've been calling Open Startup Failures. I pull from arguments in my thesis to discuss the implications, the importance of entrepreneurial learning, and the need for Open Startup Failures. The winners of the Alva Fellowship had some exciting projects, which can be found here. The following is what I submitted:

Describe the project you are currently trying to realize, and why it will make an impact on the world.

As I've been working on my graduate thesis, I've discovered a need for a searchable and open database of startup failures – a tool that I've called Open Startup Failures.

During my research into the Philadelphia tech startup ecosystem, I became aware of entrepreneurs repeating past mistakes. The purpose of Open Startup Failures is to make failures and past attempts at solving problems publicly accessible in order to shorten the learning curve for tech entrepreneurs and increase their chances of success. I understand this is not a silver bullet, but it tackles one aspect of entrepreneurial learning.

Entrepreneurial Learning in Perspective:

In the following paragraphs, I'm arguing that the unemployment problem is connected to a startup problem, which is connected to a learning problem. Designing and developing Open Startup Failures to tackle one part of the learning problem will have an impact on the unemployment problem.

To put this into the context of the current economic climate, my generation (those graduating post-crash) is having difficulty finding its way into established companies. According to the Bureau of Labor Statistics, unemployment rates in the fourth quarter of 2011 were 23.6% for those aged 16 to 19, 14.2% for those aged 20 to 24, and 9.4% for those aged 25 to 34. According to Gallup poll tracking from January 2nd to September 30th, 2011, among those aged 18 to 29, 30% are underemployed and 14% are unemployed. In short, it has become much more difficult for those aged 16 to 29 to find employment in existing organizations.

Also, in recent years, there's been a growing and revived interest in entrepreneurship. The connection between the crash and the current rise of entrepreneurship in American society seems to be more than correlation. I think the crash is one of the causes of, as one of my interviewees put it, “start-up companies popping up like Rock bands in the 80's.” A community leader in the Philadelphia startup ecosystem told me in an interview, “every startup that comes in creates ancillary jobs that generate taxes that gives back to the city, that gives back to build better things to make things better.” This isn't simply anecdotal; it's supported by a job growth study performed by the Kauffman Foundation. The report, which studied startups and existing firms from 1977 to 2005 and defines startups as firms younger than one year old, concludes that “new firms add an average of 3 million jobs in their first year, while older companies lose 1 million jobs annually.” The report goes on to say that policymakers do not correctly focus their attention on supporting startup growth. The same community leader, who has been an advocate for startups in Philadelphia for the past decade, echoed this opinion: “the older institutions that were fostered in the 60s, like Ben Franklin tech partners, like the science center, like the chamber of commerce, these bureaucrats and older generation, are trying to sustain their positions and their jobs and companies and they're not fostering innovation.” Putting his opinion side-by-side with the findings from the Kauffman report suggests that people have already realized the potential behind joining or starting a new firm.

However, ninety percent of startups fail. The Startup Genome discovered that seventy percent of them scale prematurely along at least one of five dimensions, which they think may partly explain the ninety percent failure rate amongst technology startups. The Startup Genome has published two reports, one with a data set of 650+ internet technology startups and another with 3200+. They set out to test three hypotheses, one of which is of interest to this project: “learning is a fundamental unit of progress for startups. More learning should increase chances of success.” They went on to discover,
“Founder's that learn are more successful. Start-ups that have helpful mentors, track performance metrics effectively, and learn from start-up thought leaders raise 7x more money and have 3.5x better user growth.”
They point to three factors being relevant for funding and user growth: helpful mentors, tracking metrics, and learning from thought leaders. All three provide the foundation to learn. Being able to raise funds and maintain user growth is key to a healthy startup, which means that learning is vital for a healthy startup.

Open Startup Failures will be a significant learning tool for startups that may result in greater successes and thus greater employment opportunities.

Please describe your plan for executing the project. (If the product or service is the subject of a patent application or issued patent, please include that information.)

As a human centered designer, I've learned to co-create with my target audience. This means I have one part of the solution and the entrepreneurs I've been studying have the other part. Together, the final designed service may better meet their needs. Co-creation is part of my plan for executing this project.
The integral part is making knowledge accessible. For knowledge of startup failures to be accessible, it needs to take a form that is useful to entrepreneurs. I currently meet with them, receive feedback on the form the information needs to take, and let them inform the most important aspects of the design. This phase should take up the next month of thesis work, and will lay the foundation for data gathering, frontend web design, and backend server development.

I expect data gathering to happen continuously once a useful form has been structured. Frontend web design for Open Startup Failures will come next and take a month and a half. I'm visually sensitive, know how to architect information, and have experience user-testing products. Backend server development will come after that and take another month; I'm proficient in and have experience developing server-side code in Java. The entire data input and retrieval process will be automated and is informed by the co-created work between the entrepreneurs and myself.

At this pace, we should be in mid-August and launching at Philly Tech Meet-up. I am going to leverage my entrepreneurial networks in Philadelphia through Philly Startup Leaders and the local accelerators, incubators, and coworking spaces. These spaces include Venturef0rth, Novotorium, SeedPhilly, IndyHall, Good Company Ventures, the Corzo Center for the Creative Economy, and the Science Center where DreamIt Ventures is currently housed. The goal is to inform people about the tool and have them contribute – in a fashion similar to Wikipedia.

After Philadelphia, the next logical steps include New York and DC. Again, I believe it's best to use local venues to spread the word in person and by word of mouth. If there's anything I've learned from observing and interacting with entrepreneurs, it's that they value meeting face-to-face. This means face-to-face is the best way to reach out to them.

Tell us about past initiatives, actions, or projects you've undertaken that qualify you to execute on the project you described above.

I'm interested in pushing the boundaries of human-computer interaction, and I've recently had the opportunity to explore and study entrepreneurship with my thesis. You'll see examples of this on my website, but there are a few I'd like to point out.

I'd like to tell you about the work I did at the Franklin Institute, a science museum in Philadelphia. While there, I was in charge of two projects, both of which leveraged the Xbox Kinect to build full-body interfaces that teach kids and adults about science through experiential learning. The first project taught kids and adults about change blindness, a person's inability to notice changes in their visual surroundings. Videos and images are available at the following link: The second project is about piquing museum patrons' interest in learning about the human nervous system. This project has the distinction of being made into a permanent interactive when the new “Your Brain” Exhibit opens in the Franklin Institute in 2014. Recently, this second project made it onto the local ABC news; videos and images may be found at this link: and a demonstration of the interactive is at the following link:

Another example is the work I did with Slavko Milekic, a professor of mine. The project is called Electrofolksonogram (EFG) and was demonstrated at the Museums and the Web conference in 2011. EFG is the fusion of electroencephalogram (EEG, a device that measures brain waves) and folksonomy (a collaborative method to categorize content). The EFG adds a new layer of information by recording the user's engagement, excitement, and opinion about a particular piece of art. This is a proof of concept project displaying the potential applications of the Emotiv EEG. The data gathered from the EFG can be turned into a database in order to find correlations amongst large populations of people and to better cater to museum goers. Videos and images may be found at this link:

Over the summer, two other industrial designers and I designed an environment to encourage collaboration amongst museum patrons. We called it Collabritique. The first iteration of this project was demonstrated at the Museums and the Web conference in 2011. Collabritique brings people together in a museum space. It fosters collaboration by facilitating a conversation between three people about one or more pieces of art. In this way, Collabritique not only promotes interactions, new discussions, and critiques about art, but also highlights the inherent connections created when artworks are juxtaposed. Collabritique provides a richer experience for the museum patron and a more satisfying museum visit. Videos and images may be found at the following link: Our current website for the product is here:

Describe what you're focusing your energy on right now. For example: Are you a student? Are you running a startup? Are you working on a “passion project” alongside a full-time job?

I'm a graduate student studying for my Master of Industrial Design. I'll be graduating mid-May and am currently working on my thesis. As a part of my design thesis, I've been studying the information technology startup ecosystem in Philadelphia. Within this ecosystem, I've focused on enabling entrepreneurial learning. The problem being addressed is the disconnect between novice and experienced entrepreneurs and the gap in knowledge and experience transfer. I've spent the last seven months researching, interviewing, observing, learning, prototyping, networking, and meeting passionate people that want to change the world.

What character traits do you think have contributed to your ability to execute on your ideas thus far?

Curiosity, creativity, persistence, skepticism, and empathy. I've also been influenced by three distinct cultures: American, German, and Iranian. My Iranian parents and I moved to Southern California from Germany when I was a young child. I believe this has nurtured a mindset that draws connections between disparate ideas and a curiosity about people and where they come from. I know my degrees from UCLA in Aerospace Engineering and Philosophy are a direct result of my cultural experiences growing up and the traits I pointed out.

As much as I enjoy building new technologies, my desire to understand people influenced my decision two years ago to pursue a Master of Industrial Design. As an industrial designer trained in human centered design, I pull from fields such as cognitive science, anthropology, psychology, and sociology to quantify the human experience. This means I design and develop interactive technologies to teach kids and adults about science, to foster collaboration between people, and to make sure technology doesn't get in the way of being human. Design is my lens to focus engineering to solve human problems.

If you are selected for the Alva Emerging Fellowship, how would you use the funds to get your project off the ground?

I will use the money to cover the cost of web hosting, server and database space for two years. I'm adept at developing the required software, so what I most need is to cover the overhead cost of online space. As the project develops, I also expect to use a small amount of the funds to cover transportation expenses between New York, Philadelphia, and DC.

What else should we know about you?

I'd like to share about the art I make, how I currently experience art, and why I think it reflects where I am in my life.

Over the past two years, art exhibits and galleries have become therapeutic for me. I enter into these spaces and see objects that spark an emotional reaction. I reflect on these emotions as a way to learn more about myself. In essence, the art I see becomes an external embodiment of an internal emotion. I feel like I transcend the space, the art, and myself.

I've recently tried to imbue the art I make with this same property: an opportunity for the observer to self-reflect and transcend. I'd like to share Chris's story as he experienced a digital interactive art piece I made, titled myFace.

Chris is a bricklayer. He walks into Little Berlin's gallery space and is confronted with a wall of one hundred faces he doesn't recognize. He stands perplexed by the expressions of the hundred faces and the way they loop over and over like online gif images. As he's observing the wall, he notices his face has appeared on it. He doesn't know why or how, but one of the videos of his face is mixed in with a brick wall. I was standing and watching him; he walked over to me and told me his art is laying bricks. I could feel the chills going down his spine as he watched the brick wall behind him transpose on top of his face. The paradox present in this experience is that myFace reflected Chris's art (brick laying), not mine. myFace simply facilitated an environment for Chris to transcend the moment. Videos and images of myFace at Little Berlin's gallery can be found at the following link:

I have a passion for leveraging the digital to create spaces that empower people – whether it's to enable entrepreneurial learning or for trans-existential environments. With the case of myFace, my motivation was to create a space that evolves and grows with each passing observer. I had also hoped to reveal people's emotional states when they do not have control over how they are perceived – an intrusive concept in this day of editable text messages and emails. myFace has had a polarizing effect on its observers. One group refuses to step into the space where interaction occurs and the other group can't seem to leave that space. At the same time, this piece imbues the space in front of the screen with a digital memory, allowing observers to know who has passed through the space.

I hope you've learned a little bit about me, and you should know I'm exuberant about the Alva Emerging Fellowship enabling Open Startup Failures to get off the ground. If there are any more questions feel free to email or call.

Robotic Telepresence via Skype and Face Tracking with OpenCV

A few months ago I hooked up the robotic head I built to Skype. I called up my brother in Los Angeles and had him control it with his head: based upon where he was looking and how he turned his head, the robotic head would match his motions. By letting the user control the robotic head with their own head, the goal has been to create a natural user interface. Furthermore, the user sees through the robot's perspective, which makes the interaction feel natural. By connecting these two technologies, Skype and robotic telepresence, people may converse while also fully engaging with the remote environment.

Tutorial: How to build a Robotic Head that Follows People's Faces

Edit: See the working prototype here.

I've gotten a few requests to put up a tutorial for building a robotic head.

I'm assuming that if you want to build a robotic head, you'll have some experience using an arduino, servos, and a language that can interface between a webcam and the arduino. I used processing as my interface between the arduino and my webcam, along with the OpenCV library for processing.

I'll describe the overarching structure behind the code to give the illusion that a robot is following a person's face. At a high level, there are only a few steps to have a webcam follow a person's face:
  1. Detect face. Take a look at some OpenCV examples on processing's website.
  2. Grab X-Y pixel coordinates of the face.
  3. Calculate the pixel distance between the center of the face and center of the webcam's view. In other words, you're going to take the images from the webcam, and calculate the distance between the center of that image and the center of the face.
  4. Write an algorithm that minimizes the distance between the webcam's center and the face's center.
    1. This algorithm will also control the servos.
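The steps above can be sketched in plain Java (the same family as processing code). This is a minimal, hypothetical sketch of the math in steps 2–4 only; the frame size, gain constant, and class/field names are my assumptions, and the real sketch would read face coordinates from OpenCV each frame and send the resulting angles to the arduino over serial.

```java
// Hypothetical sketch of steps 2-4: proportional servo tracking.
// Frame size and GAIN are assumed values you would tune by hand.
class FaceTracker {
    static final int FRAME_W = 640, FRAME_H = 480; // assumed webcam resolution
    static final double GAIN = 0.05;               // degrees of correction per pixel of error

    double panAngle = 90, tiltAngle = 90;          // servos start centered (0-180 range)

    // Step 3: pixel distance between the face center and the frame center.
    // Step 4: nudge each servo by a fraction of that error, clamped to 0-180,
    // so the error shrinks toward zero over successive frames.
    void update(int faceX, int faceY) {
        double errX = faceX - FRAME_W / 2.0;
        double errY = faceY - FRAME_H / 2.0;
        panAngle  = clamp(panAngle  + GAIN * errX);
        tiltAngle = clamp(tiltAngle - GAIN * errY); // y grows downward in image coordinates
    }

    static double clamp(double a) { return Math.max(0, Math.min(180, a)); }

    public static void main(String[] args) {
        FaceTracker t = new FaceTracker();
        t.update(400, 200); // face detected right of center and above center
        System.out.println(t.panAngle + " " + t.tiltAngle); // prints: 94.0 92.0
    }
}
```

With each frame the loop nudges the head a little toward the face rather than jumping all the way, which is what makes the motion look smooth; a small gain means slower but steadier tracking.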
You can download my processing code here and my arduino code is based on the code found here from letsmakerobots.

Let me know if you guys have any questions. I'll try to answer them as soon as I get a chance.

The following pictures are the first iteration:

Bill Moggridge Lecture

Bill Moggridge spoke at UPenn's design department. He broke his life into three segments that paralleled the changes in design, from traditional design to design thinking. He described the first design firm he started, in the upper floor of his apartment, and eventually discussed life and work at Ideo. Finally, he talked about his current work: designing a museum that engages with the museum patron.

Arduino Show & Tell at the Hacktory

Johnny Four and I made it out to the Arduino show and tell at The Hacktory this past Saturday. The images and videos tell the rest of the story from the event.

And maybe Johnny Four may finally have a body?

Infographic: Experienced and Novice Entrepreneurs

I've been quite busy making sense of the interviews I've conducted. I've summarized the main points from interviews with experienced and first-time entrepreneurs and how the two may interact in a mentor relationship. More than anything, I made it to help me understand everything I've been learning; if you have any thoughts or suggestions, please contact me with feedback.

As always, you're welcome to download it.

To Table of Contents about my thesis.

Rational Intuition by Professor Todd Landman

A year and a half ago I wrote a philosophical essay on intuition and thinking, which I just learned is also an area of study that interests Professor Todd Landman. As he talks about decision making, I'm reminded of collaborative environments I've worked in, and the role a space may play in creating a collective subconscious that unifies the individuals working in it. He presents a few examples of opponents that occupy the same space: as the opponents fight over and over again, they end up cooperating and the fighting stops. I'm curious whether comparing the collaborative group space and the inherent paradoxes in group life may present a model for the fighting that occurs between enemies who occupy the same space. This line of thinking brings me to question the nature of space and what it means for one or more groups of people to occupy a certain space. I'll gather more of my thoughts and write them out once I finish my thesis...

Until then, enjoy the following video about rational intuition. More videos can be found on The RSA's archives of talks.

MiD meets IPD

MiD traveled to IPD a few weeks ago. I loved seeing what our neighbors are working on, the things they're interested in, and how they go about making the future. I look forward to getting to know these designers.


Persuasive Technologies and the Spectrum of Responsibility: A Metaphysical Exploration of the Ethical Capacity of Computers

The following essay is in response to a dialogue I had with a professor of mine, Slavko Milekic, about software that influences people's behavior - known as persuasive technology. It explores a special case: software that continues to exist beyond the lifespan of its creators, who will therefore not be around to take responsibility for any negative repercussions the software causes. The question this essay explores is whether it's possible for this type of long-lived software to be ethically responsible.

Essay Structure:
My discussion with Slavko began around BJ Fogg's argument, so that's where the essay begins. I've split the essay into three parts. Part A explicates Fogg's argument, Part B begins to introduce the spectrum of responsibility and continues the explication, and Part C explores the concept of the spectrum of responsibility, which forms the basis of my objection to Fogg. For lack of time, I have not done an adequate job of explaining the spectrum of responsibility, because it relies heavily on ideas from Bertrand Russell, Paul Tillich, Baruch Spinoza, and John Locke. I've attempted to combine the ideas of sense-data, knowledge by acquaintance, knowledge by description, being, non-being, and labor and property into a tool that can be used to argue that a special case of stray persuasive technologies should be considered ethical agents. I've tried to use a mathematical metaphor to make better sense of the concept, but I am still uncertain whether it holds. The spectrum of responsibility is a tool that I will have to revisit once I have more time, in order to fill in the argumentative jumps I've noticed while editing. Furthermore, I think it is possible to use it to prove the ethical agency of certain persuasive technologies independent of Luciano Floridi's information ethics and his idea of the infosphere. Finally, I will also need to allow some time to pass between the writing of this essay and my revisiting it, in order to gather my thoughts about possible implications of the spectrum of responsibility for the ethical landscape.

Part A:
In Persuasive Technology, Fogg argues that computers can't take responsibility for an error, and therefore are not moral agents. In the paragraphs to come I will explore his two main assumptions that lead to his conclusion. I'll begin by covering a few foundational concepts and definitions that are key to understanding his argument. Then, I'll explain why he believes that “to be an ethical agent of persuasion,” the agent “must be able to take responsibility for [their] actions and at least partial responsibility for what happens to those whom [they] persuade” (Fogg, 218). Second, I will explain his assumption about a computer's lack of capacity to take responsibility in either the form of punishment or restitution. I will also present the logical steps one must take in order to come to his conclusion. Finding his second assumption objectionable for a special case of stray persuasive technologies, in part B I will explain how these persuasive technologies fit into the ethical picture.

Fogg considers computers as interactive technologies, but more specifically, their use as persuasive technology. An interactive technology is an actor that can engage in a turn-based exchange of information with other agents, including humans, animals, and other interactive technologies. Fogg is concerned about interactive technologies that are designed to persuade. He defines persuasion as “an attempt to change attitudes or behaviors or both (without using coercion or deception)” (Fogg, 15). Since Fogg's definition of persuasion includes “attempt,” the act of persuading does not have to succeed or fail. This implies that the agent being persuaded changes their attitude or behavior voluntarily. Bringing the concept of persuasion together with interactive technology creates the realm of persuasive technology: interactive technologies that attempt to change the attitudes, behaviors, or both of other agents.

Fogg uses a few terms that need to be clearly defined and discussed in relation to each other in order to understand his argument. These terms are ethical (moral) agent, ability, error, responsibility, persuasive entity, and restitution. Fogg uses ethical agent and moral agent interchangeably. Though it is possible to define an ethical agent in many different ways, I will be sticking to Fogg's definition. An ethical agent, Fogg defines, is an agent that has the ability to take responsibility for their actions. His definition brings together two important concepts about the agent: her ability and responsibility. Fogg is saying that the agent's ability to perform a certain action should be governed by her capacity to be responsible for that action. This is a normative claim that Fogg supports with an appeal to the moral codes of major civilizations. A moral code is a set of agreed upon rules by participants within a certain group. Since “making restitution for wrongdoings (or at least being appropriately punished) has been part of the moral code of all major civilizations,” then the wrongdoing of an ethical agent requires restitution or punishment (Fogg, 218).

Restitution is the process an ethical agent follows in order to return just value for a wrongdoing. A restitutive process is followed in order to restore a situation post-wrongdoing to a situation that will be equivalent to the situation pre-wrongdoing. Coming back to the capacity to be responsible for an action in Fogg's normative use, there are two ways that an ethical agent can be responsible. They may either provide restitution or be punished for a wrongdoing. If the ethical agent chooses to be punished or does not meet the requirements to provide restitution, then by an appropriate punishment the wrongdoing will be considered restored. An example in our current society is a criminal with no wealth. They have only their time and life that can be used as a kind of payment for a wrongdoing. Since the criminal has no wealth to provide restitution, appropriate prison time is put in place for this criminal.

Fogg mentions responsibility for errors and seems to use it synonymously with restitution or punishment for wrongdoings. In the context of this argument, error and wrongdoing are also used similarly; both refer to actions that carry ethical consequences. The last term that needs to be defined is persuasive entity. Simply, it is an entity that can “advise, motivate, and badger” another entity. In this case, entity is the same as agent, but it does not connote ethical agent. A persuasive entity, Fogg makes clear, does not take responsibility. This means that if an agent is an ethical agent of persuasion, then they are a persuasive entity. However, if they are a persuasive entity, they are not necessarily an ethical agent of persuasion.

Now that I've thoroughly beaten you over the head with these definitions, let me discuss some interesting argumentation on Fogg's behalf. Fogg states that an ethical agent of persuasion requires the agent to take responsibility for their actions and at least partial responsibility for what happens to those whom they persuade. Fogg is particularly concerned with persuasive actions and taking responsibility for when these actions detrimentally affect other agents. Furthermore, he stresses the case of computers working separately from humans. This is a case that will become more prominent as the internet continues to exist for an extended period and outlasts the companies that created interactive technologies in cyberspace. These stray interactive technologies that are also persuasive entities have the capacity to change an agent's attitudes, behaviors, or both.

If these stray persuasive technologies injure an individual in some form or another, Fogg believes they cannot take responsibility for this damage in the same way as a human being or a persuasive technology backed by designers and/or companies. This part of his argument relies on the belief that “computers themselves can't be punished or follow any paths to make restitution” the same way as a human being (Fogg, 218). Computers as persuasive entities, Fogg assumes, lack the capacity to take responsibility for an error or wrongdoing either by punishment or restitution. In order to make clear the capacity to take responsibility for an error, I think one needs to understand the source of that capacity. In other words, what provides a persuasive entity (computer or human) with the capacity to be responsible for a wrongdoing?

Part B:
It seems that there are two options for the source of the capacity to be responsible. One option is that the source is innate to the persuasive entity. An example of something innate to a persuasive entity is its being, its existence (for an in-depth philosophic and historic explanation of being and non-being, please see The Courage to Be by Paul Tillich). The other option is something that is added onto the persuasive entity. Examples of something added on are the world it organizes around itself, the rules the persuasive entity follows, or even the knowledge it acquires. I think these two options lie opposite each other at the ends of a spectrum. An agent will fall somewhere along this spectrum of the source for the capacity of responsibility. An agent's source will be a mixture of these two ends; it is unique to that agent but can change over time. Let's come back to the example of the criminal with no wealth. Imagine that this criminal has now acquired some amount of wealth that can be used as restitution, but not enough to cover the full extent of the wrongdoing. This criminal has shifted along the spectrum of responsibility.

It's of interest to note two parallels. One parallel is between 'the source of capacity being innate to the persuasive entity' and 'punishment'. The other parallel is between 'that which is added to the persuasive entity' and 'restitution'. Understanding the spectrum of responsibility through the lens of punishment and restitution means that an agent will fall somewhere on this spectrum between punishment and restitution. Again, note the example of the criminal with a varied amount of wealth. If the criminal has no wealth, he will fall to one side of the spectrum, pure punishment. This means that punishment in its purest form will affect the criminal's innate source for the capacity of responsibility. If the criminal has enough wealth to restore the full extent of the wrongdoing, then he will fall to the other side of the spectrum, pure restitution. Restitution in its purest form will affect the criminal's world that has been constructed mutually with the rest of society (i.e., the criminal's wealth).

Before we can understand the kinds of agents that can be quantified by the spectrum of responsibility, I think it is necessary to develop this tool further; doing so will help clarify why Fogg's argument superficially discarded persuasive entities as ethical agents. The spectrum of responsibility is being developed specifically for the case of persuasive technology that exists beyond the lifespan of the humans and companies that made it. My objection is specific to this case and will be known as the special case of persuasive technology. A general objection, which would include persuasive technologies that exist within the lifespan of the humans and companies that made them, will not be argued in this paper.

Now let's begin the objection.

Part C:
The spectrum of responsibility can be visualized as an x-and-y coordinate system existing on the substrate of sense-data (See Figure 1). On the x-axis there is pure punishment reaching towards negative infinity and there is pure restitution reaching towards positive infinity. On the y-axis there is knowledge by acquaintance reaching towards negative infinity and knowledge by description reaching towards positive infinity. The following few paragraphs will explain sense-data, pure punishment, pure restitution, knowledge by acquaintance, knowledge by description, and how they all relate to each other in the spectrum of responsibility.

Sense-data, used in Bertrand Russell's sense, includes the “things that are immediately known in sensation: such things as colours, sounds, smells, hardnesses, roughnesses, and so on” (Russell, 12). For the special case of persuasive technology, sense-data includes the data gathering that can be done by the persuasive technology. Senses need not be limited to the physical; they include the digital. This means the list of example sensations can include 0's, 1's, and any combination thereof. However, this digital information requires non-biological senses to be perceivable, for example the laser lens in a DVD player. Another example of a sense for technology is the push button, because it provides the technology sense-data of a sensation occurring at the location of the button.

The y-axis of the spectrum of responsibility is marked by knowledge by acquaintance reaching towards negative infinity and knowledge by description reaching towards positive infinity. Knowledge by acquaintance is the direct awareness of anything, “without the intermediary of any process of inference or any knowledge of truths” (Russell, 33). This is the most basic form of the awareness of knowledge because it does not require inference from a set of premises or truths. Knowledge by description is more complex: it takes truths or premises as its starting point and then requires some rule or formula to be followed in order to become aware of the knowledge. Knowledge by acquaintance is set on the negative y-axis because it is the less complex form of knowledge awareness, while knowledge by description is more complex and is signified as such by its location on the positive y-axis. For example, consider a pedometer that motivates the user to continue walking or running. The pedometer is aware via an accelerometer that it is moving; this is knowledge by acquaintance, which requires no inference from other truths. However, the pedometer could have knowledge by description of the calories the user is burning, assuming it also knows the runner's weight and certain biological markers; the calories burnt would be known via an inference. Say pedometer one can only detect motion, while pedometer two can also make inferences about calories burnt. Pedometer two would then have a greater y-axis value than pedometer one and, in turn, a greater awareness of knowledge.
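The pedometer contrast can be sketched in code. This is only an illustration of the distinction, not anything from the paper: the function names and the calorie formula (including its constant) are my own assumptions.

```python
# Knowledge by acquaintance: the raw sensor reading is itself the
# knowledge -- no inference, no appeal to other truths.
def knowledge_by_acquaintance(accelerometer_sample: float) -> float:
    """Direct awareness: return the sample as-is, without inference."""
    return accelerometer_sample


# Knowledge by description: apply a rule to premises (step count plus
# the runner's weight) to infer something not directly sensed.
def knowledge_by_description(steps: int, weight_kg: float) -> float:
    """Inferred awareness: derive calories from steps and weight."""
    CALORIES_PER_STEP_PER_KG = 0.0005  # illustrative constant, not a real figure
    return steps * weight_kg * CALORIES_PER_STEP_PER_KG


# Pedometer one only detects motion; pedometer two also infers calories,
# so it sits higher on the y-axis (a greater awareness of knowledge).
pedometer_one_y = 0  # acquaintance only
pedometer_two_y = 1  # acquaintance plus description

print(knowledge_by_description(10_000, 70.0))  # → 350.0
```

The point of the sketch is that the second function cannot produce its answer from the sensor alone; it needs further premises and a rule, which is exactly what pushes an agent up the y-axis.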

The x-axis of the spectrum of responsibility is marked by the two extremes of pure punishment and pure restitution. On the positive side of the x-axis lies pure restitution, whose extent is determined by Locke's notion of labor and property. I am referring to the State of Nature and the ability of the agent to remove things out of the State of Nature (Locke, 287, 288). Something is taken out of the State of Nature by the work that is done to it. Locke provides the acorn example, whereby an individual gathers acorns that are freely available; by laboring and picking up the acorns, she has made them her own and has a right to them. For the special case of persuasive technology, I see Locke's notion of labor and property applying in the sense that unused cyberspace (i.e., the portion of hard drives that is unused) is, by analogy, in a State of Nature. The history logs accumulated by certain persuasive technologies are one possible example, such as a pedometer that motivates but also keeps track of all distances covered by the runner.

On the negative side of the x-axis lies pure punishment, whose extent is determined by the notion of being and non-being. Being is everything that is the agent. If the agent were to lose part of its being, this loss is known as a privation of being, or the privation of a part of its being. For example, a man who was once able to see with his eyes but no longer can suffers the privation of sight. Non-being is the privation of being. Spinoza writes to Blyenbergh in Letter 21,

“privation is nothing else than denying of a thing something, which we think belongs to its nature; negation is denying of a thing something, which we do not think belongs to its nature.” (Spinoza, 277)

This means that being contains non-being (Tillich, 34). To explain, death is inherent to all beings, so if life is denied, non-existence is the privation of everything that is the being. As it pertains to the special case of persuasive technology, it is sufficient to know that the existence of a persuasive technology can be denied.

Bringing these two axes together on top of the capacity for sense-data forms the spectrum of responsibility. If an agent does not have the capacity for sense-data, then it cannot be quantified for a wrongdoing by the spectrum of responsibility. The y-axis determines the awareness of the agent, and the x-axis determines the form of responsibility that can be taken upon the agent, dependent on that awareness. Awareness here refers to the agent's ability to be aware of knowledge either by acquaintance or by description, where knowledge by description is more complex than knowledge by acquaintance.
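The combined model can be expressed as a small data structure. This is a sketch under my own naming assumptions (the class, fields, and example coordinates are illustrative, not from the paper): an agent is a point on the x-y plane, gated by the sense-data substrate.

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class Agent:
    """An agent as positioned on the spectrum of responsibility."""
    has_sense_data: bool  # the substrate: can the agent sense at all?
    x: float  # negative = toward pure punishment, positive = toward pure restitution
    y: float  # negative = knowledge by acquaintance, positive = knowledge by description


def quantify_for_wrongdoing(agent: Agent) -> Optional[Tuple[float, float]]:
    """Return the agent's position on the spectrum, or None if the agent
    lacks the capacity for sense-data and so cannot be quantified."""
    if not agent.has_sense_data:
        return None
    return (agent.x, agent.y)


# A pedometer that senses motion and accumulates history logs (property
# labored out of the State of Nature) can be quantified; a rock cannot.
pedometer = Agent(has_sense_data=True, x=2.0, y=-1.0)
rock = Agent(has_sense_data=False, x=0.0, y=0.0)
```

The gating check mirrors the claim above: without sense-data there is no position on either axis, and so no form of responsibility can be taken upon the agent.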

Revisiting Fogg's argument in light of the spectrum of responsibility, we can now object to Fogg's assumption that “computers themselves can't be punished” for the special case of stray persuasive technologies (Fogg, 218). Fogg uses the word 'computers' to mean persuasive technology. As long as the persuasive technology has the ability, at minimum, to acquire sense-data in order to be aware of knowledge by acquaintance, then the persuasive technology can take responsibility for a wrongdoing: its being can be appropriately punished, or the information it has organized out of the State of Nature can be taken away or deleted. Finally, incorporating Fogg's normative claim about the moral codes of major civilizations, if an agent can make restitution for wrongdoings (or at least be appropriately punished), then that agent should be considered an ethical agent. Since, according to the spectrum of responsibility, certain persuasive technologies (those that meet the requirements for sense-data and awareness of knowledge) can be punished, these persuasive technologies should be considered ethical agents.


In The Local News!

I just found out that the work I did at the Franklin Institute made it onto the local ABC news. The piece is about interactively visualizing the human nervous system. You can see the prototype towards the end of the 30-second clip in the following video.

Newsworks, a local news organization powered by WHYY, also covered it in this article.

When the exhibit finally opens in 2014, you'll see this piece at the entrance of the exhibit.

The Anthropology of Human Robot Interaction

A few months ago I dismantled my mind-controlled crane and reused the Arduino and servos to build a robotic head. The inspiration to build a robotic head that would follow people's gaze came after I finished reading Sherry Turkle's Alone Together. In the book, she describes and analyzes people's reactions to, and interactions with, the robots being built at the MIT Media Lab. She describes how people attribute emotions to external creatures that appear to pay attention to them. She goes on to argue that these robots are seductive because they exploit the human need and want to connect and to feel as if another entity is listening and paying attention. Scroll to the bottom to see a recent TED Talk by her about some of the concepts in the book.

Check out the following videos to watch people interacting and attributing feelings and perceptions to Johnny Four.

A closeup of Johnny Four:

Sherry Turkle giving a TED Talk:

PLObject: Playful Living Object - User Testing

A PLObject is an object that is aware of the child’s frustrations, engagements, excitements, and movements. Two semesters ago I prototyped and developed applications for the Emotiv headset, and one such application was a toy and game for children. The goal was to provide an external representation of the child's frustration in the form of towers that rebuild themselves: when the child became too frustrated, the towers rebuilt. A demonstration explaining the inner workings is here. Check out some of the videos of users testing the PLObject.
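The frustration-to-rebuild mapping can be sketched as a simple threshold rule. To be clear, this is not the actual PLObject implementation; the threshold value and function names are hypothetical, standing in for whatever the Emotiv headset's frustration reading drives.

```python
# Hypothetical sketch of the PLObject loop: when the headset's
# frustration reading crosses a threshold, the towers rebuild themselves.
FRUSTRATION_THRESHOLD = 0.7  # assumed normalized 0..1 reading


def should_rebuild(frustration_level: float) -> bool:
    """Decide whether the towers should rebuild for this reading."""
    return frustration_level >= FRUSTRATION_THRESHOLD


# A calm reading leaves the towers alone; a frustrated one rebuilds them.
for reading in (0.2, 0.5, 0.9):
    action = "rebuild towers" if should_rebuild(reading) else "no change"
    print(f"frustration={reading}: {action}")
```

The design point is that the child never operates the towers directly; the object reacts to the child's internal state, externalizing it.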