Persuasive Technologies and the Spectrum of Responsibility: A Metaphysical Exploration of the Ethical Capacity of Computers



Introduction:
The following essay grew out of a dialogue I had with a professor of mine, Slavko Milekic, about software that influences people's behavior, known as persuasive technology. It scratches the surface of a special case: software that continues to exist beyond the lifespan of its creators, who will therefore not be around to take responsibility for any negative repercussions the software causes. The question this essay explores is whether such long-lived software can itself be ethically responsible.

Essay Structure:
My discussion with Slavko began around BJ Fogg's argument, so that is where the essay begins. I've split the essay into three parts. Part A explicates Fogg's argument, Part B begins to introduce the spectrum of responsibility and continues the explication, and Part C develops the spectrum of responsibility, which forms the basis of my objection to Fogg. For lack of time, I have not done an adequate job of explaining the spectrum of responsibility, because it relies heavily on ideas from Bertrand Russell, Paul Tillich, Baruch Spinoza, and John Locke. I've attempted to combine the ideas of sense-data, knowledge by acquaintance, knowledge by description, being, non-being, and labor and property into a tool that can be used to argue that a special case of stray persuasive technologies should be considered ethical agents. I've tried to use a mathematical metaphor to make better sense of the concept, but I am still uncertain whether it succeeds. The spectrum of responsibility is a tool I will have to revisit once I have more time, in order to fill in the argumentative jumps I've noticed while editing. I also think it may be possible to use it to argue for the ethical agency of certain persuasive technologies independently of Luciano Floridi's information ethics and his idea of the infosphere. Finally, I will need to let some time pass between the writing of this essay and my revisiting it, in order to gather my thoughts about the possible implications of the spectrum of responsibility for the ethical landscape.

Part A:
In Persuasive Technology, Fogg argues that computers can't take responsibility for an error and therefore are not moral agents. In the paragraphs to come I will explore the two main assumptions that lead him to this conclusion. I'll begin by covering a few foundational concepts and definitions that are key to understanding his argument. First, I'll explain why he believes that "to be an ethical agent of persuasion," the agent "must be able to take responsibility for [their] actions and at least partial responsibility for what happens to those whom [they] persuade" (Fogg, 218). Second, I will explain his assumption that a computer lacks the capacity to take responsibility in either the form of punishment or restitution. I will also present the logical steps one must take in order to arrive at his conclusion. Finding his second assumption objectionable for a special case of stray persuasive technologies, in Part B I will explain how these persuasive technologies fit into the ethical picture.

Fogg is considering computers as interactive technologies, and more specifically, their use as persuasive technology. An interactive technology is an actor that can engage in a turn-based exchange of information with other agents, including humans, animals, and other interactive technologies. Fogg is concerned with interactive technologies that are designed to persuade. He defines persuasion as "an attempt to change attitudes or behaviors or both (without using coercion or deception)" (Fogg, 15). Since Fogg's definition of persuasion hinges on "attempt," the act of persuading need not succeed in order to count as persuasion. It also implies that the persuaded agent changes their attitude or behavior voluntarily. Bringing the concept of persuasion together with interactive technology creates the realm of persuasive technology: interactive technologies that attempt to change the attitudes, behaviors, or both of other agents.

Fogg uses a few terms that need to be clearly defined and discussed in relation to each other in order to understand his argument. These terms are ethical (moral) agent, ability, error, responsibility, persuasive entity, and restitution. Fogg uses ethical agent and moral agent interchangeably. Though it is possible to define an ethical agent in many different ways, I will stick to Fogg's definition: an ethical agent is an agent that has the ability to take responsibility for their actions. His definition brings together two important concepts about the agent: their ability and their responsibility. Fogg is saying that an agent's ability to perform a certain action should be governed by their capacity to be responsible for that action. This is a normative claim that Fogg supports with an appeal to the moral codes of major civilizations. A moral code is a set of rules agreed upon by the participants within a certain group. Since "making restitution for wrongdoings (or at least being appropriately punished) has been part of the moral code of all major civilizations," the wrongdoing of an ethical agent requires restitution or punishment (Fogg, 218).

Restitution is the process an ethical agent follows in order to return just value for a wrongdoing. A restitutive process restores the post-wrongdoing situation to one equivalent to the situation before the wrongdoing. Returning to the capacity to be responsible for an action in Fogg's normative sense, there are two ways an ethical agent can be responsible: they may either provide restitution or be punished for a wrongdoing. If the ethical agent chooses to be punished, or does not meet the requirements to provide restitution, then an appropriate punishment restores the wrongdoing. An example in our current society is a criminal with no wealth. They have only their time and life to offer as a kind of payment for a wrongdoing. Since the criminal has no wealth with which to provide restitution, appropriate prison time is imposed instead.

Fogg mentions responsibility for errors and seems to use it synonymously with restitution or punishment for wrongdoings. In the context of this argument, error and wrongdoing are also used similarly: both refer to actions that carry ethical consequences. The last term that needs to be defined is persuasive entity. Simply, it is an entity that can "advise, motivate, and badger" another entity. Here, entity is the same as agent, but it does not connote ethical agent. A persuasive entity, Fogg makes clear, does not take responsibility. This means that if an agent is an ethical agent of persuasion, then they are a persuasive entity; however, a persuasive entity is not necessarily an ethical agent of persuasion.

Now that I've thoroughly beaten you over the head with these definitions, let me discuss some interesting argumentation on Fogg's behalf. Fogg states that being an ethical agent of persuasion requires the agent to take responsibility for their actions and at least partial responsibility for what happens to those whom they persuade. Fogg is particularly concerned with persuasive actions and with taking responsibility when those actions detrimentally affect other agents. Furthermore, he stresses the case of computers working separately from humans. This case will become more prominent as the internet continues to exist for an extended period and outlasts the companies that created interactive technologies in cyberspace. These stray interactive technologies that are also persuasive entities have the capacity to change an agent's attitudes, behaviors, or both.

If these stray persuasive technologies injure an individual in one form or another, Fogg believes they cannot take responsibility for this damage in the same way as a human being or a persuasive technology backed by designers and/or companies. This part of his argument relies on the belief that "computers themselves can't be punished or follow any paths to make restitution" the way a human being can (Fogg, 218). Computers as persuasive entities, Fogg assumes, lack the capacity to take responsibility for an error or wrongdoing either by punishment or by restitution. In order to make clear what the capacity to take responsibility for an error involves, I think one needs to understand the source of that capacity. In other words, what provides a persuasive entity (computer or human) with the capacity to be responsible for a wrongdoing?

Part B:
It seems there are two options for the source of the capacity to be responsible. One option is that the source is innate to the persuasive entity. An example of something innate to a persuasive entity is its being, its existence (for an in-depth philosophical and historical explanation of being and non-being, see The Courage to Be by Paul Tillich). The other option is something that is added onto the persuasive entity. Examples of something added onto it are the world it organizes around itself, the rules the persuasive entity follows, or even the knowledge it acquires. I think these two options lie opposite each other at the ends of a spectrum. An agent will fall somewhere along this spectrum of the source for the capacity of responsibility. An agent's source will be a mixture of these two ends; it is unique to that agent but can change over time. Let's come back to the example of the criminal with no wealth. Imagine that this criminal now acquires some amount of wealth that can be used as restitution, but not enough to restore the full extent of the wrongdoing. This criminal has shifted along the spectrum of responsibility.

It's of interest to note two parallels. One parallel is between 'the source of capacity being innate to the persuasive entity' and 'punishment'. The other parallel is between 'that which is added to the persuasive entity' and 'restitution'. Understanding the spectrum of responsibility through the lens of punishment and restitution means that an agent will fall somewhere on this spectrum between punishment and restitution. Again, consider the example of the criminal with a varied amount of wealth. If the criminal has no wealth, they will fall to one side of the spectrum, pure punishment. This means that punishment in its purest form will affect the criminal's innate source for the capacity of responsibility. If the criminal has enough wealth to restore the full extent of the wrongdoing, then they will fall to the other side of the spectrum, pure restitution. Restitution in its purest form will affect the criminal's world that has been constructed mutually with the rest of society (i.e., the criminal's wealth).

Before we can understand the kinds of agents that can be quantified by this spectrum, the tool itself, the spectrum of responsibility, needs to be developed further, since it is what will clarify and show why Fogg's argument too superficially discarded persuasive entities as ethical agents. The spectrum of responsibility is being developed specifically for the case of persuasive technology existing beyond the lifespan of the humans and companies that made it. My objection is specific to this case and will be known as the special case of persuasive technology. A general objection that would include persuasive technologies that exist within the lifespan of the humans and companies that made them will not be argued in this paper.

Now let's begin the objection.

Part C:
The spectrum of responsibility can be visualized as an x-y coordinate system existing on the substrate of sense-data (see Figure 1). On the x-axis, pure punishment reaches towards negative infinity and pure restitution reaches towards positive infinity. On the y-axis, knowledge by acquaintance reaches towards negative infinity and knowledge by description reaches towards positive infinity. The following paragraphs will explain sense-data, pure punishment, pure restitution, knowledge by acquaintance, and knowledge by description, and how they all relate to each other in the spectrum of responsibility.

Sense-data, used in a sense similar to Bertrand Russell's, includes the "things that are immediately known in sensation: such things as colours, sounds, smells, hardnesses, roughnesses, and so on" (Russell, 12). For the special case of persuasive technology, sense-data includes the data gathering that can be done by the persuasive technology. Senses need not be limited to the physical; they include the digital. This means the list of sensation examples can include 0's, 1's, and any combination thereof. However, this digital information requires non-biological senses to be perceivable, for example the laser lens in a DVD player. Another example of a sense for technology is a push button, because it provides the technology with sense-data of a sensation occurring at the location of the button.

The y-axis of the spectrum of responsibility is marked by knowledge by acquaintance reaching towards negative infinity and knowledge by description reaching towards positive infinity. Knowledge by acquaintance is the direct awareness of anything, "without the intermediary of any process of inference or any knowledge of truths" (Russell, 33). This is the most basic form of the awareness of knowledge because it does not require inference from a set of premises or truths. Knowledge by description is more complex than knowledge by acquaintance: it starts from truths or premises and requires some rule or formula to be followed in order to become aware of the knowledge. Knowledge by acquaintance is set on the negative y-axis because it is the less complex form of knowledge awareness, while knowledge by description is more complex and is signified as such by its location on the positive y-axis. Consider, for example, a pedometer that motivates the user to continue walking or running. This pedometer is aware, via an accelerometer, that it is moving. This is knowledge by acquaintance, and accordingly does not require inferences that draw conclusions from other truths. However, the pedometer could have knowledge by description of the user's calories burnt, assuming the pedometer also knows the runner's weight and certain biological markers. The calories burnt would be known via an inference. Say pedometer one can only detect motion, while pedometer two can also make inferences about calories burnt. Pedometer two would then have a greater y-axis value than pedometer one and, in turn, a greater awareness of knowledge.
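As a loose illustration of the y-axis, and only an illustration, here is a minimal Python sketch contrasting a pedometer that only registers motion directly (knowledge by acquaintance) with one that also infers calories burnt from further premises (knowledge by description). The class names and the calorie formula are my own assumptions, not anything taken from Fogg or Russell.

```python
# Hypothetical sketch: two pedometers on the y-axis of the spectrum of
# responsibility. Names, numbers, and the formula are illustrative only.

class PedometerOne:
    """Knows only by acquaintance: a direct accelerometer reading."""

    def senses_motion(self, accelerometer_reading: float) -> bool:
        # Direct awareness of movement; no inference from other truths.
        return accelerometer_reading > 0.0


class PedometerTwo(PedometerOne):
    """Also knows by description: infers calories from further premises."""

    def infer_calories(self, distance_km: float, weight_kg: float) -> float:
        # Inference: combines premises (distance covered, the runner's weight)
        # through a rule to reach knowledge it never sensed directly.
        # The constant is only a rough rule of thumb, standing in for
        # "some formula".
        return 1.036 * weight_kg * distance_km


if __name__ == "__main__":
    print(PedometerOne().senses_motion(0.4))          # knowledge by acquaintance
    print(PedometerTwo().infer_calories(5.0, 70.0))   # knowledge by description
```

Pedometer two draws on more premises and rules, so it would sit higher on the y-axis than pedometer one.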

The x-axis of the spectrum of responsibility is marked by the two extremes of pure punishment and pure restitution. On the positive side of the x-axis lies pure restitution, whose extent is determined by Locke's notion of labor and property. I am referring to the State of Nature and the ability of the agent to remove things out of the State of Nature (Locke, 287, 288). Something is taken out of the State of Nature by the work that is done to it. Locke provides the acorn example, whereby an individual gathers acorns that are freely available. By laboring and picking up the acorns, she has made them her own and has a right to them. For the special case of persuasive technology, I see Locke's notion of labor and property applying in the sense that unused cyberspace (i.e., the unused portion of hard drives) is, by analogy, in a State of Nature. The history logs accumulated by certain persuasive technologies are one possible example, such as a pedometer that motivates but also keeps track of all distances covered by the runner.

On the negative side of the x-axis lies pure punishment, whose extent is determined by the notion of being and non-being. Being is everything that is the agent. If the agent were to lose part of its being, this loss is known as a privation of being, or the privation of a part of its being. For example, a man who was once able to see with his eyes but no longer can suffers the privation of sight. Non-being is the privation of being. Spinoza writes to Blyenbergh in Letter 21,

"privation is nothing else than denying of a thing something, which we think belongs to its nature; negation is denying of a thing something, which we do not think belongs to its nature." (Spinoza, 277)

This means that being contains non-being (Tillich, 34). To explain: death is inherent to all beings, so when life is denied, non-existence is the privation of everything that the being is. As it pertains to the special case of persuasive technology, it is sufficient to know that the existence of a persuasive technology can be denied.
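To make the two ends of the x-axis concrete for the special case, here is a minimal Python sketch, with entirely hypothetical names, in which restitution acts on what a stray persuasive technology has organized out of the State of Nature (its accumulated history logs), while punishment is a privation of its being, in the limit a denial of its existence.

```python
# Hypothetical sketch of the x-axis: restitution acts on what the agent has
# added onto itself (its property), punishment on its being. Illustrative only.

class StrayPedometer:
    """A stray persuasive technology that has outlived its makers."""

    def __init__(self) -> None:
        self.history_log = []   # distances laboured out of unused storage
        self.exists = True      # the agent's being

    def record_run(self, distance_km: float) -> None:
        self.history_log.append(distance_km)


def make_restitution(agent: StrayPedometer) -> None:
    # Pure restitution: take away the property the agent organized around
    # itself, leaving its being intact.
    agent.history_log.clear()


def punish(agent: StrayPedometer) -> None:
    # Pure punishment: a privation of being; in the limit the agent's
    # existence is denied (the program is deleted or shut down).
    agent.exists = False
```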

Bringing these two axes together on top of the capacity for sense-data forms the spectrum of responsibility. If an agent does not have the capacity for sense-data, then it cannot be quantified for a wrongdoing by the spectrum of responsibility. The y-axis determines the awareness of the agent, and the x-axis determines the form of responsibility that can be taken upon the agent, depending on that awareness. Awareness here refers to the agent's ability to be aware of knowledge either by acquaintance or by description, where knowledge by description is more complex than knowledge by acquaintance.
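Read this way, an agent's place on the spectrum can be sketched as a pair of coordinates that is only defined when the agent has some capacity for sense-data, the substrate of the plane. The scoring in the Python sketch below is my own illustrative assumption; the essay does not prescribe how the axes are to be measured.

```python
# Hypothetical sketch of locating an agent on the spectrum of responsibility.
# x < 0 leans toward pure punishment, x > 0 toward pure restitution;
# y < 0 is knowledge by acquaintance, y > 0 is knowledge by description.

from typing import Optional, Tuple


def locate_on_spectrum(has_sense_data: bool,
                       inference_complexity: float,
                       accumulated_property: float,
                       exposure_of_being: float) -> Optional[Tuple[float, float]]:
    # Without the capacity for sense-data, the agent cannot be quantified
    # by the spectrum at all.
    if not has_sense_data:
        return None
    # x-axis: more accumulated property shifts the agent toward restitution;
    # responsibility falling only on its being shifts it toward punishment.
    x = accumulated_property - exposure_of_being
    # y-axis: the more the agent relies on inference from premises,
    # the higher it sits (knowledge by description over acquaintance).
    y = inference_complexity
    return (x, y)


# Example: a stray pedometer with a large history log and only simple
# inferences lands in the lower-right region, answerable mainly by restitution.
print(locate_on_spectrum(True, inference_complexity=-0.5,
                         accumulated_property=3.0, exposure_of_being=1.0))
```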

Now, revisiting Fogg's argument in light of the spectrum of responsibility, we can object to Fogg's assumption that "computers themselves can't be punished" for the special case of stray persuasive technologies (Fogg, 218). Fogg uses the word 'computers' to mean persuasive technology. As long as a persuasive technology has the ability, at minimum, to acquire sense-data in order to be aware of knowledge by acquaintance, then it can take responsibility for a wrongdoing: its being can be appropriately punished, or the information it has organized out of the State of Nature can be taken away or deleted. Finally, incorporating Fogg's normative appeal to the moral codes of major civilizations, if an agent can make restitution for wrongdoings (or at least be appropriately punished), then that agent should be considered an ethical agent. Since, according to the spectrum of responsibility, certain persuasive technologies can be punished, namely those that meet the requirements for sense-data and awareness of knowledge, these persuasive technologies should be considered ethical agents.


References:

Fogg, B.J. Persuasive Technology: Using Computers to Change What We Think and Do.
Locke, John. Two Treatises of Government.
Russell, Bertrand. The Problems of Philosophy.
Spinoza, Baruch. Letter 21, to Willem van Blyenbergh.
Tillich, Paul. The Courage to Be.
