The homunculi-headed robot, a thought experiment in the philosophy of mind, constitutes a system that is in fact a mind capable of experiencing qualitative states, or “qualia”. (1) I will discuss the argument put forward by Ned Block (1978), viz. the Absent Qualia Argument against functionalism. (2) Playing the role of the functionalist, I will argue that the causal roles the homunculi occupy in a system and those occupied by the firing of neurons in a brain offer no substantial difference on which to accept Block’s prima facie counter-example. Providing a counter-example to Block’s counter-example, I argue that the Absent Qualia Argument rests on unsubstantiated appeals to complexity. (3) Finally, I posit a gradational machine functionalist view to avoid arbitrarily assigning mind to one system and not another. Drawing parallels between the operating networks of the homunculi and those of neurons compels us to accept that the neuronal system cannot have qualitative states without the homunculi system also having them: either humans do not possess qualitative states, or both systems do.
1. Block distinguishes between analytic functionalism (Lewis) and Psychofunctionalism (Putnam). The former is characterized by the quantified Ramsey sentence of a theory drawn from folk psychology, the latter by scientific psychology Ramsified in the same manner. However, because both varieties of functionalism deal with some specification of inputs and outputs, he makes no significant distinction between the two in condemning functionalism as too liberal. To substantiate his claim that functionalism is too liberal, that is, that it ascribes mentality to systems where there likely is no mentality, he provides a thought experiment. He invites you to imagine a robot that is identical to you in external appearance. Just like you, this robot would step on a thumbtack and exhibit pain behavior, such as wincing and yelling, “ouch!” However, if you were to open the head of the robot, instead of peering down at a biological brain with its network of firing neurons, you would observe a room of some sort with a legion of tiny minions carrying out various specific tasks.
Preserving the conditions constitutive of Putnam’s machine functionalism, Block describes the homunculi-headed robot as an appropriately complex (i.e. complex enough to pass as human) Turing machine. The physical realizers of the Turing machine’s table are the tiny minions, who satisfy the causal relationships among inputs, outputs, and other mental states. Here is the general idea: (1) a given input causes a light I432 to light up; (2) the homunculi refer to a bulletin board where a posted card indicates the machine table’s current state, G; (3) a tiny minion, a member of the group responsible for all input and output related to state G, is uniquely tasked with responding to input I432 when the system is in state G, and pushes the specified output button O235; and (4) the minion then changes the state card to state M.
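The four steps above amount to a simple state-transition table. A minimal sketch, using the state and signal names from Block’s example (G, M, I432, O235) but an otherwise purely illustrative Python representation, might look like this:

```python
# The "bulletin board": maps (current state, input) -> (output, next state).
# Only the single transition from Block's example is filled in; a system
# complex enough to pass as human would have a vastly larger table.
MACHINE_TABLE = {
    ("G", "I432"): ("O235", "M"),
}

def step(state, input_signal, table=MACHINE_TABLE):
    """One cycle: the minion responsible for this (state, input) pair
    pushes the output button and posts the new state card."""
    output, next_state = table[(state, input_signal)]
    return output, next_state

# Input light I432 lights up while the posted card reads G:
output, new_state = step("G", "I432")
```

On this model, the minions collectively realize nothing more than the lookup performed by `step`; the functionalist question is whether being the realizer of such a table, at sufficient scale, suffices for mentality.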
Block argues that his homunculi-headed robot example is a prima facie counter-example to machine functionalism in that there is doubt as to whether such a system has mentality or experiences qualitative states at all. In other words, there is nothing it is like to be the homunculi-headed robot. The robot merely consists of a network of tiny minions satisfying the descriptions of a given machine table with respect to input, output, and other internal states. If qualitative state Q is identical to machine table state MQ, as machine functionalism holds, but there is nothing it is like to be the robot, then the robot does not experience Q even when it is in MQ.
2. Why is there nothing it is like to be the robot? If Rick Moranis were to shrink to the size of a neuron firing in a brain, he might reach a similar “absent qualia” conclusion about humans. He might observe, “This particular group of neurons responds to this particular stimulus when the whole neuronal network is in this configuration, produces this particular neuronal pattern that activates this external behavior, and shifts the network into this new pattern. But being in that state of particular neuronal patterns does not equate with experiencing the qualitative states that I experience every day. Therefore,” he concludes, “there is nothing it is like to be this strange network of neuronal patterns.”
One can imagine a proponent of the Absent Qualia Argument maintaining that the neuronal pattern state is something over and above the posted G card state in the homunculi-headed robot. But one reaches such an opposed conclusion by focusing too heavily on the complexity of the tasks carried out by individual neurons and the relatively uncomplicated tasks of the tiny minions. This person assigns a mind to a system at some arbitrary level of complexity while withholding it from another with a lesser degree of complexity.
Indeed, we can imagine a possible world in which a more complex version of the homunculi-headed robot example takes place. In this world, instead of a G card indicating the system’s state, any given configuration of the homunculi themselves determines the state. When input light I432 lights up, the homunculi look to a Jumbotron displaying an aerial view of their total configuration. There are enough tiny minions that each one is tasked with responding to a particular holistic configuration of the homunculi, which is determined by minute details of the individuals’ locations. Tiny minion #543345….n is alerted that the homunculi population is configured in such a way that it is its task to push output button O63367…n. Once it pushes the button, the homunculi shift into a new configuration.
Stretching the homunculi-headed robot example to its limits, we may even conceive of a system in which the functional roles of the homunculi resemble, in mechanistically minute detail, the functional roles of neurons. Where does one draw the complexity line such that a given system’s level of complexity suffices for that system to attain qualitative states? I argue for a gradational theory of machine functionalism. Such a view is far guiltier of liberalism in Block’s terms, but it avoids arbitrarily assigning qualia to one functional system rather than another.
3. A gradational machine functionalist holds that any realized machine table, that is, any system in which one or more realizers occupy a causal role with respect to inputs, outputs, and a change of state of some kind, attains some degree of mentality. This is to argue, in the extreme case, that there is something it is like to be a mousetrap. A gradational machine functionalist does not disregard the complexity of the parts in relation to the complexity of the whole, but refuses to draw a definitive line such that one system “feels” and another does not. Intuitively, this is a difficult position to support. But we may attribute this difficulty to an implicit bias arising from our own position at one end of the complexity scale. That is, we cannot fathom what it is like to be a mousetrap because of the disparity in complexity between our minds as a functional system and the mousetrap as a functional system. Despite this difficulty, assigning any degree of mentality to one functional system and not another is an arbitrary position guilty of the same bias.
If a network of homunculi acts within the boundaries of a head (for the sake of this argument) and carries out the characteristic causal roles of a network of neurons in a given biological brain, then there is no reason to believe the homunculi-headed robot does not experience the “raw feels” that Absent Qualia Argument proponents chauvinistically attribute to biological systems alone. The question is not whether the homunculi-headed robot feels, but how much it feels.
Block, N. (1978). “Troubles with Functionalism”. Excerpted in D. J. Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings, pp. 94–98.