
Title: Dreaming of AI Lovers

Author contact: Andrew Oberg, Faculty of Humanities, The University of Kochi, [email protected] [email protected]

Author bio: Andrew Oberg is an Assistant Professor in the Faculty of Humanities at the University of Kochi, Japan. His academic interests include definitions of the self, ethics and social morality, consciousness, being, and postcapitalist alternatives.

www.andrewoberg.blogspot.jp

Andrew Oberg


Abstract: The vision of building machines that are or can be self-aware has long gripped humankind and now seems closer than ever to being realized. Yet behind this idea lie deep problems associated with the self, with consciousness, and with what it is to be a being capable of experience. It is the aim of this paper to first explore these important background concepts and seek clarity in each one before then turning to the question of artificial intelligence and whether or not such is really possible in the manner in which we are approaching it.

Keywords: artificial intelligence, consciousness, ontology, qualia, the self


1. Methodology: Aims and scope

In this piece I want to say something about the nature of the self, something about consciousness and qualia, and something about artificial intelligence; specifically, I wish to say something about what I take to be deep confusions in these areas. Each of these topics has a rich and deep philosophical canon (as well as a psychological, cognitive, etc., one) behind it, and I cannot possibly hope to do justice to all of them. I do, however, hope to show how each of them is related, particularly regarding what I take to be a grossly erroneous vision of the (potential) future presented by the last, and how a clarification of the concepts involved and the alternatives on offer can help us think about each in a better way, whether we think about them in tandem or separately.

2. Hardware and software

If one wishes to take up a study of the self one will quickly discover that nowadays self studies largely consist, in one manner or another, of finding ways to deny that there is a self despite what our everyday experience and common sense seem to be telling us. Very recent examples are Barry Dainton’s so-called “phenomenal self” (Dainton 2008), which states that the self is nothing but our stream(s) of consciousness, and Galen Strawson’s “minimal self” (Strawson 2009, 2011), which situates the self as, to put it somewhat clumsily, the phenomenology of the now.
Dainton, and perhaps to a lesser degree Strawson, clearly owes a debt to Derek Parfit, whose similar account gave us the self as “relation R” (Parfit 1984), that is, psychological continuance and/or connectedness, and all of them can (and do) tip their hats to John Locke’s self as the continuity of consciousness and the conscious access to memory (or at least the possibility for such; Locke 1689/90);[1] it will be seen that all of these accounts can be lumped together under the heading of “anti-realist”.[2] By this I mean that they each, in their own way, deny that the self is anything lasting at all; the self is rather an illusion in one way or another, a side-effect of how we function as we go through life. (The Buddha of course got this whole line of thinking going, and Parfit for one is happy to point that out in an appendix to his book; Parfit 1984, 502-503.)[3]

[1] See especially Chapter 27, sections 11-12, “Of Identity and Diversity.”
[2] Although Strawson’s inclusion here does require something of a caveat to accommodate his subtle and interesting quasi-realist aspects; still I think that, based on my reading of the works cited here, along with others of his not included, he himself would not object to having the concept labeled in this way.
[3] Titled, “Appendix J: Buddha’s View.”

Let us make our approach into this topic from that angle, from the notion of function, the idea that the self is a result (whether illusory or real) of how our bodies operate. We will remain on this because, in the midst of the contemporary hoopla for anti-realist self accounts, an analogy has made its way to the fore that neatly ties in with our other concerns of consciousness and artificial intelligence. That analogy is with our best friends and future overlords (more on that below): the personal computer. A human being, we are told, is like a PC; we have bodies, which are our hardware, and selves, which are our software. The analogy is thought to work since software is considered to be virtual; it functions without really existing, just as the self is ever with us without really being there at all. What does it mean to say that software (and possibly the self) is virtual? It is first of all to claim that nothing physical exists which can be pointed to as the object in question; we are only able to point to evidence of the object and thereby infer its presence (of sorts). On the face of it this seems like a rather nonsensical claim; software can after all be bought from the store, taken home, and inserted into a computer in order to install it. Software can also be downloaded and installed that way, though; in which case what is the physical object? When people say that software is virtual they do not mean that it does not exist in the world; they mean rather that it is not something one can reach out one’s hand and hold onto. Note that the plastic disk used to put the latest version of Microsoft Blah into a machine is not actually the software in question; it is merely a delivery device for the software. What is the actual software then? It is code that allows a computer to run itself in a way that it theretofore could not; at its most basic level it is a string of ones and zeroes.
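The claim that software is, at bottom, a string of ones and zeroes distinct from any disk or download that delivers it can be made concrete with a small sketch. Python is used here purely for illustration, and the tiny “program” is an invented example:

```python
# A tiny "program" as human-readable text: software at the level we write it.
source = "print('hello')"

# The same program as raw bytes, and as the bit string those bytes stand
# for: the level at which any delivery medium (disk, download, memory
# chip) actually carries it.
raw = source.encode("utf-8")
bits = "".join(f"{byte:08b}" for byte in raw)

print(raw)        # the program as bytes
print(bits[:16])  # its first two characters, as ones and zeroes
print(len(bits))  # 8 bits per byte: 14 characters -> 112 bits
```

Whichever medium carries it, the same bit string is the software; this is the sense in which the disk is “merely a delivery device.”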
We confuse ourselves if we think only of the human-readable code that is the higher level at which software is written by programmers, for engineering and maintenance (recognizable words such as “run,” “frame,” “printf,” “main,” etc.); this level of code is translated into machine code to obtain the instructions for the central processing unit, and the letters and numbers of that code ultimately stem from the binary sources (bit strings) which represent them. All of this is of course built into the machine, built into the machine’s hardware. The software is therefore code that shifts how the hardware functions, and although we can pick up and move about the various pieces of hardware that are put together into the physical objects on our desks and in our pockets, we cannot so pick up the software (although, again, we can pick up the medium by which the software is transferred from one location to another). It is in this way that software is virtual, that it is considered to be physically nonexistent. Pause at those last two words. Here is where everyone gets confused about the self, about consciousness, and about a great many things. We mistakenly think that the nonphysical is not there at all, that an absence of physical presence means an absence, period: empty nothingness. This may be the result of a (perhaps reflexive) uncritical examination, or it may be just how we happen to work as the biological creatures we are. If there is nothing material we say there is nothing at all; yet we do not say software is nothing at all, nor do we say that software is a mere illusion that stems from the natural way the hardware functions. Things have now become very interesting. Software depends on hardware for its realization, and software functions (software even exists) in order to alter how the hardware that is its foundation and object operates. Is this an accidental allusion to dualism? It is and it is not; software is not something that could function separately on its own, although we might be tempted to say that it could exist separately on its own (after all, uninstalled software surely seems to exist in some manner). Granted, no software would exist anywhere without the requisite hardware already being in place, but this is not a chicken-or-the-egg query. Software that has been written but has not yet found a home, as it were, is something that appears to exist in the way that we might think a disembodied soul exists. Is that what those who compare the body to hardware and the self to software are really trying to say? Pulled out in this way we find ourselves arriving not at the non-realist destination we supposedly set sail for but rather at a very Cartesian-looking port. By trying to argue that the virtual is real without being real, causal without being material, proponents of this view (and here I include non-realists about the self) have painted themselves into a corner.
They must either admit that the immaterial can exist in a very real way or assign some version of materiality to software, at which point the non-realist self comparison that was sought after fails rather egregiously. I do not think the thought behind this analogy has gotten that far, however. It has instead stopped at the point where it is assumed that a suitable means of understanding how the self is actually nothing has been achieved. I think the argument tucked away in this analogy is meant to go no further than to state that although we may think the self really exists based on our personal experiences, it does not, and moreover does not need to. I do not think that those who make this claim wish to further argue that software does not exist, although it does seem that by this analogy/argument we must draw that conclusion anyway. After all, if software can exist virtually then the self can exist virtually too; is this a claim for a realist self? It might be, or it might become the basis for one, but let us table that thought and linger a moment on this issue of software existing only “virtually”, for I think there is still an error here that we have not rooted out, and it is this: software does not exist virtually, it functions virtually. Software itself is after all code, and, readable to us or not, code is a form of text; it would take quite the hardy solipsist to argue that text does not exist at all (but, being a solipsist, that the self nevertheless does?). This code need not even be printed out or written down to be said to exist, for surely most of us would agree that language exists in some sense even if it is only, and only ever has been, spoken. Software, it seems, must be said to really be there amongst the hardware. But how? And in what way? I think in dwelling on this subject we are approaching something quite remarkable. What these thoughts appear to be leading towards is the conclusion that there is a middle road between the physical-material and the ideal-immaterial, and software gives us a clue as to what that might be. This middle road is the functionally existent, which we might note is a type of emergence. It manifests itself in its operation, but ontologically we cannot really say all that much about it; at least not with the current categories we have at our disposal. Is this due to a failure of imagination? To a failure in the ability to imagine? Perhaps so, but we can see this middle way working, and that is instructive. Drawing now on this notion of text we can also ask a further question: Does a computer read software in the way that we read a language? That is, even at the root binary level, does a computer approach a string of zeroes and ones the way we approach a string of vowels and consonants? This is the type of question that we will return to in the section on artificial intelligence below, and so at present I only wish to point out that for a computer, when it is “reading” code, there is absolutely nothing happening at the metalevel, whereas for us, when we are engaged in the reading of anything, all sorts of things are taking place at the metalevel, and necessarily so.
Additionally, pushing on the software analogy even more, we can say that all of our metalevel happenings too function virtually, and in a way that is existentially deeper than the way in which software functions virtually, by reason of our having metalevels at all. We are not blindly and unthinkingly chugging through instructions; we are engaging, considering, weighing, reacting, emoting, analyzing, each and every time and indeed all of the time. An analogy such as the above is thus not only inaccurate, it is misleading about both of its component parts, software and the self. Whatever the self may be, it is a part of us in a way beyond that in which software is a part of hardware, and if software can be said to exist on its own and in a real (if immaterial) way then it stands to reason that similar, and even more profound, claims could be made for the self as well.
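The contrast drawn in the last two paragraphs, between a reader for whom the metalevel is always in play and a machine that merely steps through instructions, can be sketched with a toy machine. Everything below, the opcodes included, is invented for illustration:

```python
# A toy machine that "reads" a program with nothing happening at the
# metalevel: each opcode triggers a state change and nothing more.
# Invented opcodes: 0 = push the next value, 1 = add the top two, 2 = halt.

def run(program):
    stack, pc = [], 0
    while True:
        op = program[pc]
        if op == 0:                   # push a literal onto the stack
            stack.append(program[pc + 1])
            pc += 2
        elif op == 1:                 # replace the top two values by their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
            pc += 1
        elif op == 2:                 # halt; the "answer" is the top of stack
            return stack[-1]

# push 2, push 3, add, halt: the machine sums the numbers without in any
# sense knowing that it is doing arithmetic.
print(run([0, 2, 0, 3, 1, 2]))  # 5
```

The machine’s “reading” is exhausted by these state changes; nothing in it corresponds to the engaging, weighing, or emoting described above.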


This is not the place to argue for a full realist account of the self,[4] but it can be pointed out that such an account could exist in a framework that is not strictly Cartesian, nor even remotely Cartesian: a framework that might be built around the concept of the functionally existent. Much more could be said about the self, particularly regarding the question of whether or not our selves should be seen as natural outgrowths of our biology, but having achieved, I hope, our main aim in this section I will turn instead to the related issues of consciousness and qualia.

[4] That is an argument I do wish to make but so far can only point the reader to the bare beginnings of such; much work remains to be done on the self-account I want to build. For the moment, see Oberg 2015. I would like to add the rather large caution that since first submitting that piece for consideration much has changed in my thinking on the self, although the overall thrust presented in that paper is still one I would make at present.

3. Consciousness and qualia

The so-called “hard problem” of consciousness (explaining the phenomenal side of consciousness as compared to the “easy” cognitive side) seems to reach into areas where we might not expect it, and indeed when considering anything related to the self, experience, and/or intelligence (as this paper is attempting to do) the issue seems to demand our attention to the exclusion of all else. The trouble with this, of course, is that the problem itself is so intractable as to be disheartening, depressing even. We seem to be going nowhere very quickly with each new book that claims to tell us what consciousness really “is”, and as such I wish to offer only two very limited pieces of content here before moving on to discuss the notion of qualia, or how it feels to have experience (the phenomenological question): 1) a brief outline of what cognitive scientists are currently telling us about how the brain works, and 2) an alteration to when we view consciousness to be occurring.

Michael Gazzaniga has helpfully summarized contemporary findings in the brain sciences which together yield an overall picture of the organ as more like a network than a command center (Gazzaniga 2011; cf. Damásio 2012). Our brains are not thought to contain an area or component that is the “headquarters” or “director” of the simultaneous multifarious tasks that the brain engages in, but rather to have a “constellation” of such “headquarters” which are locally focused and in constant communication with each other. That is, there are a vast number of regions in the brain that give rise to local consciousnesses, and these are then coordinated. It will be understood that consciousness on this model is an emergent property of how the brain works: mind somehow arises out of the brain’s normal functioning, and that mind (or “minds”) thereafter achieves a sense of unity, possibly via an interpreting module (which is another function of the brain). It will also be understood that this still does not really explain what consciousness is, since applying the term “emergent property” is shorthand for saying that we have no clue what is really going on but that something certainly seems to be happening. There are two typical responses to descriptions like the preceding; one is to try to work with them in a physicalist and/or reductive manner (in which category I would put an account like Daniel Dennett’s; Dennett 1991), and the other is to either add something else onto the account such that it is no longer purely physicalist or to inhere within the physicalist “building blocks” a further substantive nature of some kind (in which category I would put accounts like David Chalmers’ and Thomas Nagel’s; Chalmers 1996, Nagel 2012). On the latter approach we can see that both sides of the coin find the physicalist explanation inadequate and therefore require a nonphysical element to be present in addition to or within the atoms, waves, subatomic particles, etc. that make up our material universe. This might mean that there is something like a “consciousness particle”, or it might mean that consciousness itself is a part of absolutely every particle (here is where panpsychism comes in). Whatever the approach and whatever the case may be, I would like to suggest that instead of the standard and intuitive stance that takes consciousness to be our “waking mind” or “moments of awareness” we rather consider ourselves to always be conscious as long as we are alive. This thought stems from the model above, wherein unified consciousness is a communicative network of all the separate consciousness-producing areas firing away as they go about their local and specialized business.
As long as one of them is operating (and given the brain’s nonstop monitoring it would seem likely that at least one of them is working in some manner all of the time) we have at least some degree of consciousness occurring, and therefore need not try to explain where consciousness “goes” when we sleep or black out or the like. It does not “go” anywhere; it is simply there to a different (lesser) degree. Considering the when of consciousness in this way might not be much, but it may help us going forward as we try to work out the phenomenal side. If we are alive we are conscious; what, then, does it mean to have conscious experience? The idea behind the term “qualia” comes from a very famous early piece by Nagel (his “What is it like to be a bat?” paper; Nagel 1979a) which, having the foundational position that it does, we will consider centrally below. The remainder of this section will therefore consist of some thoughts on Nagel’s piece, some representative objections that have been made to it, and some comments on how those objections may have missed the point. What will be attempted is a greater degree of clarity regarding the phenomenology of consciousness, of what it is like to experience consciousness (if it is like anything at all). Having, hopefully, achieved that clarity we will then be able to tie in our considerations here with our considerations in the section on the self above and apply them to the final section on artificial intelligence. We begin.

“Qualia” is an abbreviation of “phenomenal qualities” and refers to the sense that is associated internally with a mental state; as Chalmers puts it, “a qualitative feel – an associated quality of experience…The problem of explaining these phenomenal qualities is just the problem of explaining consciousness” (Chalmers 1996, 4). This line of thought goes back to Nagel’s “what it is like” concept, which he describes as follows: “But fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism – something it is like for the organism” (Nagel 1979a, 166). Intuitively it seems to us that our mental states do carry certain “feels” to them, that there is something it is like for us to be happy, to suddenly remember something, to notice a dropped pen, etc. Nagel, and others, associate the subjective (and indeed specially subjective, only subjective, unshareable) nature of such experiences with consciousness itself; consciousness just is “what it is like” (that is, in the “hard problem” sense). Whether or not this definitional move is warranted is a point we will consider in the following objections to Nagel’s position; prior to that we will first need to see just what Nagel may have meant, for I think that he, and this piece of his in particular, is often misread.
The idea of qualia has been taken to mean, or has been asserted to mean, that in addition to the obvious experiential qualities of a phenomenon like feeling happy or being surprised, specific thoughts have their own associated, and again specific, content. Going back to Chalmers we find, “When I think of a lion, for instance, there seems to be a whiff of leonine quality to my phenomenology: what it is like to think of a lion is subtly different from what it is like to think of the Eiffel tower” (Chalmers 1996, 10). Is this the kind of thing that Nagel has in mind in his “bat” piece? I do not think so; there Nagel, in the context of writing about a bat’s perception, pain, fear, hunger, and lust, states that each has a certain internally felt thusness to it (in his words: “we believe that these experiences also have in each case a specific subjective character”), and he follows this with a comparison of a hearing and seeing person with “the experience of a person deaf and blind from birth” and the inaccessibility of each other’s experience to each, although each knows that the other has subjective experiences (Nagel 1979a, 169-170). This latter comparison may at first blush appear to confuse the issue and to make it seem as if Nagel is making the same sort of claim that Chalmers does, but I think that a reminder of the context will help here. Prior to this comparison Nagel’s examples have all been at a much higher and more generalized level than that of a thought of a lion compared with a thought of the Eiffel tower; we are dealing, in Nagel’s piece at least, with how a bat perceives via its sonar, how a seeing person experiences the world through vision, how a non-seeing person experiences the world through sound and touch. Does that mean that a seeing person and a non-seeing person cannot know what chocolate tastes like to the other? Of course not, for both have tongues located within human bodies and both know that the other’s acknowledgedly subjective experience will therefore be similar to their own, as the same functioning organ is in play for each. Perhaps there will be slight variations between the two, but that has nothing to do with the perceptual realm where their primary differences lie (or anyway very little to do with it; the sight and/or smell of chocolate may generate a certain preparatory excitement in one or the other). This is unlike a person and a bat, where the differences extend far beyond any perceptual issues alone. We must note that Nagel does not say that each experience of pain, fear, etc. has a “specific subjective character”; the “each case” in his usage refers rather to type (pain, fear, etc. generally; and in that, surely, pain for us as humans, pain for bats as bats). This is not a splitting of this pain versus that pain; it is instead of pain, of fear, of hunger, and on and on.
This “what it is like” is a statement of a broad phenomenology, experienced subjectively amongst individuals but set at, and focused on, the species level, as Nagel’s examples of intelligent bats and Martians go on to show (discussed shortly). I clearly remember the moment when I had this insight, and the experience of it was not a quale of realizing something about Nagel while reading Nagel, nor even of realizing something about a philosophical point while reading a philosophical text; it was, rather, the same phenomenon that I have had before (sadly far too infrequently) when my thoughts have clicked over afresh, opened an unnoticed door, seen through a hitherto opaque wall: a phenomenon associated with a certain sense of awe and a feeling of time coming to a standstill (more on that very important feeling below). To reduce the notion of qualia beyond the idea of a type is to go further with the sense of “what it is like” than I believe Nagel would and, as will hopefully become apparent, than is warranted given what is actually going on in our phenomenal experiences. This reading of type, rather than the more standard “whiff of leonine” exegesis, is supported, I believe, by what immediately follows in Nagel’s original text. There, within the same context and indeed in the very next paragraph, Nagel comments on the “certain general types of mental state [that] could be ascribed to us” by intelligent bats or Martians comparing themselves to us and granting us only the types they saw or could see in both; e.g. “perhaps perception and appetite” (Nagel 1979a, 170). Again, the argumentative focus is on the level of type, not minutiae. Now, it is admittedly possible that in this Nagel meant that each particular mental state has its own, and very peculiar, feel to it but that such would not be noticed by beings as different from us as intelligent bats or Martians; given the examples of experience used, however (cited above: perception, pain, fear, hunger, lust, appetite), that are said to have “specific subjective character”, together with his thoughts on what might be ascribed to us by very different beings engaged in observing us, I believe that, although it is not entirely clear, the case for my understanding of Nagel might well be the stronger one. If so then qualia ought to remain on the level of type and not be extended down to the microscopic degree that they have been. I mentioned a certain sense of awe and a feeling of time coming to a standstill that was associated with my alleged insight into Nagel’s text; these concurrent affective experiences are paramount in what they teach us about qualia and the implications they have for the concept, as the very notion of qualia has been challenged on these same phenomenal grounds.
Prior to our considering some representative objections to Nagel’s paper and what has been taken from it, though, I would like to pose a question by way of recap of the above: If the quale of a thought differs depending on the subject of the thought, is that difference not due to the associated emotional content rather than the thought content? As I see it, if it is like anything to think then the act of thinking itself is like something, not thinking of this as opposed to thinking of that (hence qualia as type). It might be that there are special times when a thought has a more profound impact than is typical (such as the experience I described above), or that I have a certain emotional juxtaposition with a thought about, say, an object that you do not have, but on the whole thinking has its feel, perceiving its, pain its, fear its, etc. On my view there should be no more detailed reduction of the notion of qualia beyond that of type. There are some who go further than this in their challenge to the concept, though, and so let us take a look at some criticisms and attempt to answer them.

P. M. S. Hacker disagrees broadly with the idea of qualia, arguing that the difference between, say, seeing a table and seeing a chair does not consist in a different sort of associated feeling, as the bare perception typically fails to bring about any sort of emotional or attitudinal reaction in us (Hacker 2002). Perception itself does not equal having a sensation. Moreover, different experiences may have the same feel, being alike “enjoyable or disagreeable, interesting or boring” (Hacker 2002, 164), so that we suspect it is not the feel itself, when there is such a feel, that sets them apart either. These are good points and should be taken seriously. Hacker goes on to argue that when speaking of “what it is like” we cannot say, “(1) ‘There is something which it is like to V’” and “(2) ‘There is something it is like for A to V’”, because in the case of (1) such statements are fit only for comparisons and in the case of (2) such statements wrongly mix “the form of a judgment of similarity with the form of a request for an affective attitudinal characterization of an experience” (Hacker 2002, 166). What I take Hacker to mean here is that in the first instance such a description can do little more than try to connect doing V with doing W, which is more of an allusion based on a (common) knowledge of what W is like. Hence, such statements are really more about W than V, and in that about comparing the latter to the former.[5] In the second instance, Hacker seems to be arguing that this kind of description conflates asking for A’s personal take on her doing V (via confirmation, perhaps?) with a further move that determines the appropriateness of a comparison. “Oh, A said V felt like X, which we know is like W.” Again, the case Hacker makes is a strong one. If we try to say that it is “like V” (or W or X) we are implicitly comparing V to something else. Something that we ourselves have done? Something that we have felt? Our specific and subjective qualia with another’s?
If we also say that it is like something for A to do V then we are even more clearly off track, as when A answers (“It was wonderful/terrible”) the V in question is no longer like anything at all, it simply is (wonderful/terrible). These objections cease to carry weight, however, when we understand qualia as types; Hacker’s arguments are against the “whiff of leonine” version of qualia, which I have also argued against. If qualia are understood as types then case (1) statements simply indicate the type referred to, and as types are at least species-common (that is, typically experienced similarly by members of the same species, even if with slight variations within, as in the case of perception between a seeing and a non-seeing person) a comparison between them seems entirely apposite. Case (2) would also refer to the type as it is experienced by us, by all of us, and when A answers we know what she means by comparison with our own experiences of that type, a comparison which is once more perfectly valid and which gets its credibility from the commonness of types. When A says that her thought was wonderful we too know what it is like for a creature such as ourselves to have a wonderful thought, and from there to go on and try to distinguish A’s version of a wonderful thought from mine seems both unnecessary and beside the point. We do need to get A’s input (that is, make our request for an “affective attitudinal characterization of an experience”) in order to make our “judgment of similarity”, but there is nothing in Hacker’s arguments to prohibit that judgment when qualia are taken as types.

[5] It is also interesting to note the emphasis on action lent by the preposition “to”; I am not sure if Hacker intended to limit the focus here or not.

The other manner in which Nagel’s piece has been criticized is to attack its purported argument against physicalism (the view that everything we observe and know, aside from logic and mathematics, has a foundation in the physical); here too I think that Nagel has been misread. This is less an argument against qualia and more an argument regarding what might be underpinning consciousness, and so, with our lens drawn slightly out from the phenomenological level we have been at, let us continue. Largely repeating an earlier argument made by Frank Jackson (Jackson 1982),[6] Yujin Nagasawa reads Nagel’s “bat” piece as claiming that physicalism, if it be true, must be able to “provide complete explanation[s] of not only physical, chemical and biological but also phenomenal features of the world” (Nagasawa 2003, 381), but that, as the bat argument shows, it cannot, and it is therefore not true. Nagasawa puts it this way:

(10) If physicalism is true then x, who knows everything physical about bats, knows everything about bats.

An addition of the following innocuous statement enables Nagel to derive the falsity of physicalism:

(11) If x knows everything about bats then x knows what it is like to be a bat (Nagasawa 2003, 381).

Since Nagel's argument arrives at the conclusion that we cannot know what it is like to be a bat without being a bat-type creature ourselves, Nagasawa states, even being physically omniscient does not yield the necessary knowledge, and hence physicalism must be false (Nagasawa 2003). I find this analysis wide of the mark. Nagel's argument against physicalism is that it is incomplete, not that it is false tout court. In his "bat"

[6] In his piece Jackson interestingly criticizes Nagel for not having made an objection to physicalism despite his meaning to, as Jackson judges it. For Nagel, Jackson says, has mistakenly applied extrapolation or imagination to knowledge, something that his earlier considered "Knowledge argument" (Section I in his paper) makes no assumptions on; see Section III, "The 'What is it like to be' Argument," in Jackson 1982.


piece Nagel writes, "For there is no reason to suppose that a reduction which seems plausible when no attempt is made to account for consciousness can be extended to include consciousness. Without some idea, therefore, of what the subjective character of experience is, we cannot know what is required of physicalist theory" (Nagel 1979a, 167). This is clearly a thought directed against simply and directly extending an explanation which works on one level (absent accounting for consciousness) to another level (including accounting for consciousness) without adjusting that explanation in any way whatsoever. Understanding how a bat's sonar works by studying the pitch of her cries and the biological apparatuses in her ears does not also tell us "what it is like" subjectively to hear in the way a bat does. For physicalism to be complete it would need to tell us that further datum as well. Yet to account for that, physicalism would also need to be able to account for consciousness, which, aside from the assertions of the reductive accounts, it does not at present – in its present form – appear able to do.

Does that mean that we need to reject physicalism? Not necessarily, for it does explain much and may only need to be fine-tuned a bit; a careful reading of Nagel indicates this is his position. That Nagel's objections to physicalism in his "bat" piece center on its current incompleteness and not its simple falsity is further reinforced by other work he did contemporaneously with the "bat" piece and more recently (Nagel 1979b; 2012). To read Nagel as making a case against physicalism wholesale is both to badly misread him and to fail to grasp the importance of the concept of qualia in the quest to understand consciousness.
I have argued that qualia should be considered as types and that the case made against physicalism is not that it is false but that it is currently incomplete; I have also suggested that we ought to consider ourselves as always being conscious as long as we are alive. To have consciousness is to have subjective experiences, and there are a great many creatures who have consciousness, granting them their own experiential types (qualia) that lie beyond our imaginative grasp even though we are able to understand the types of experiences that members of our own species have. Consciousness, moreover, seems through qualia to be tied into the sense of self, and if consciousness can only be (inadequately) explained as an emergent property of how the brain operates then similar claims may perhaps be made about the nature of the self, with software being a clue as to how such a self might be ontologically categorized (functionally existent). Might it be, then, that we can tie all of these strands together and build a computer that works enough like a brain to be generative of its own emergent consciousness and hence also a self? The simple answer to that last question is "No." The


fuller answer makes up the final section of our investigation.

4. Singular(ity) delusion

The main problem with such artificial intelligence musings as the Singularity represents,[7] as I see it, is once more the problem of consciousness. That is, once more the problem of our not really knowing anything about consciousness: where it comes from while we are alive, where it goes when we die, and what it might be like for creatures different from ourselves. Consciousness is not intelligence, of course, and it is quite possible to have intelligence without consciousness (think of a calculator), but when those who vaunt the coming explosion of artificial intelligence tell us about it they are clearly not speaking of this limited sort of intelligence; they are speaking instead of intelligence that is self-aware and that, on the basis of where our analysis has brought us and as far as we can currently understand, must include some form of consciousness. To explore this we will continue to use the methodology we have been employing and start with some thoughts on intelligence proper before making our way back roundabout to consciousness.

First of all there is the issue of brain complexity and measured/measurable levels of intelligence. Until very recently we thought – were convinced – that a large and complex brain including a neocortex or a neocortex-like structure (a part of the higher brain that regulates sensory perception and language) was necessary to distinguish between human faces. As all of us humans have the same facial features (two eyes above a nose and mouth, all more or less centrally featured within an oblong-shaped head) it had been considered that only primates were up to the complexity of the task (AFP-JIJI 2016).
It has however recently been established that birds can do this too, and that they indeed have neocortex-like structures, but also that the archerfish, a tropical species with – like all fish – a simple brain wholly lacking in anything like a neocortex, can do so as well (AFP-JIJI 2016).[8] If even a fish is capable of what had heretofore been considered the domain of the most advanced mammalian group on the planet then either we do not really understand intelligence or we do not really understand how the brain works; or perhaps both. If we do not in fact have the grasp on the biological brain

[7] Shorthand for a period of unchecked machine intelligence growth resulting in computer superintelligence that is as far beyond our levels of intelligence as our own is above other primates'. See the Wikipedia page "Technological singularity."

[8] This type of fish naturally sprays water at insects, and so the individuals used in the study were trained to instead spray water at a photograph of a specific human face and were then shown a number of different facial photographs to test whether or not recognition was achieved (via only spraying water at the "correct" face's photograph).


that we thought we had, what hope do we have of building an artificial intelligence that could successfully mimic or even outdo the brain? We have great hope, it might be objected, for not only have we been doing so already, the archerfish example shows how little intricacy is in fact needed for higher abilities to emerge (yes, emerge, but hold that thought). Computers running artificial intelligence programs have sometimes beaten humans at chess, and recently even famously did so at go, the world's oldest and most strategically complex board game. We are teaching our machines to be able to do all sorts of things that were once considered unimaginable; surely the sky must be the limit as our computing technology advances further and further.

I think we must pause in our euphoria here because, as we found with the self and the physically nonexistent, all sorts of conceptual confusions are creeping in which require us to take a large step back and reassess our point of view. To begin with, we can point to the language problems present in the way we speak of this issue. We are not "teaching" computers anything; we are writing programs for them which they then run. When our computers run those programs nothing at all is occurring on any kind of metalevel, as we saw above with our example on reading. We have a tremendous tendency to anthropomorphize the machines we build; in another instance it has recently been reported that robots are being taught to "feel pain" (BBC 2016). In such cases the robot does not "feel pain" in anything like the way that you or I or even an archerfish feels pain; what the robot feels is absolutely nothing at all, it is simply alerted via installed sensors to the fact that it is in a situation that might damage it and so it ought to move. Yet is that not exactly what is happening when we feel pain?
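The point about the robot's "pain" can be made concrete with a toy sketch. This is a hypothetical few lines of my own, not the code of any actual robot reported on: what executes is nothing but a numeric threshold check issuing a motor command.

```python
# A toy "pain response": a hypothetical illustration, not any real robot's code.
# The machine's entire "experience" of damage is this conditional -- a number
# crossing a threshold triggers a command. Nothing is felt at any level.

DAMAGE_THRESHOLD = 0.8  # arbitrary illustrative value


def pain_response(sensor_reading: float) -> str:
    """Return a motor command when the damage sensor exceeds the threshold."""
    if sensor_reading > DAMAGE_THRESHOLD:
        return "withdraw"  # the robot "avoids pain"
    return "continue"      # the robot is "comfortable"


print(pain_response(0.95))  # -> withdraw
print(pain_response(0.10))  # -> continue
```

However sophisticated the sensing and control become, the structure remains the same: a mapping from inputs to outputs, with no accompanying "what it is like."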
No, not at all, for what is missing in the robot's case are the qualia associated with pain, the "what it is like." Artificial intelligence programs (whether housed in robots or not), for all of their impressiveness in problem solving and even creative problem solving, have nothing happening on any metalevel and experience no qualia; therefore if we can say anything about their consciousness at all (given how little we understand consciousness), what seems clearest is to say that they have no consciousness and stop there. It now seems appropriate to equate the having of qualia – understood as type – with consciousness, which was the definitional move we saw Nagel and others making above. The temptation is always there to say that if consciousness can emerge from a structure as simple as some of the brains we see in living organisms (far simpler even than a fish's) then surely a sufficiently wired "electronic brain" could give rise to


consciousness too. (Panpsychists in particular would have trouble denying this, as their position holds that consciousness is universal.) We have yet to see anything like that happen, however, and until it does perhaps the best that we can claim is that what is really not understood is emergence itself; whatever process is involved in the arising of consciousness out of a functioning organic brain could well be beyond our ability to grasp and/or imagine. There are, after all, limits to what we human beings can conceptually conceive and understand.

On a final note we return to our opening queries into the self. It is not the purpose of this paper to definitively answer the many questions surrounding the self, yet it did seem that in looking at software we glimpsed a clue as to how the self might be realist within the framework of being functionally existent (and possibly emergent as well). For any type of self to exist, however, consciousness must surely be a prerequisite, and if our artificial intelligence programs are never able to demonstrate consciousness then they will similarly not be able to have the self-awareness that the Singularitarians and other technologists dream of.

It is my view that the goal of building an artificial intelligence that is self-aware is a fundamentally flawed one, and it is so flawed because there is never anything that it is like to be a machine; to be a computer is simply to act out internal programming, and even if we adopt a fully determinist view of the universe there is still this difference between us: although our computers and we alike are powerless and without any free will (this being the hardest determinism available), we are still feeling (qualia) our way through our preprogrammed steps while the computer is not. Without that feeling all bets for something more (something emergent) are firmly off.

Acknowledgements: I would like to thank Jack James and Anthony Lynch of the University of New England (Australia) for their reading of an earlier version of this paper, the anonymous reviewers of the journal for their very helpful and kind comments, and José-Antonio Orosco and Taine Duncan, co-editors of the journal, for their attention and support. I am much indebted to you all.


Works Cited

AFP-JIJI. June 26, 2016. "'Smart' tropical fish can recognize human faces." The Japan Times: On Sunday, 21.

Chalmers, David J. 1996. The Conscious Mind: In Search of a Fundamental Theory. Oxford: Oxford University Press.

Dainton, Barry. 2008. The Phenomenal Self. Oxford: Oxford University Press.

Damásio, António. 2012. Self Comes to Mind: Constructing the Conscious Brain. New York: Vintage Books.

Dennett, Daniel C. 1991. Consciousness Explained. New York: Little, Brown and Co.

Gazzaniga, Michael S. 2011. Who's In Charge?: Free Will and the Science of the Brain. New York: Ecco Press.

Hacker, P. M. S. 2002. "Is there anything it is like to be a bat?," Philosophy 77: 300, 157–174.

Jackson, Frank. 1982. "Epiphenomenal Qualia," The Philosophical Quarterly 32: 127, 127–136.

Locke, John. 1689/1690. An Essay Concerning Human Understanding, 2nd edn. Available on Project Gutenberg. Accessed July 13, 2016. http://www.gutenberg.org/cache/epub/10615/pg10615.html.

Nagasawa, Yujin. 2003. "Thomas vs. Thomas: A New Approach to Nagel's Bat Argument," Inquiry: An Interdisciplinary Journal of Philosophy 46: 3, 377–394.

Nagel, Thomas. 2012. Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False. New York: Oxford University Press.

Nagel, Thomas. 1979a. "What is it like to be a bat?" in Mortal Questions. New York: Cambridge University Press.


Nagel, Thomas. 1979b. "Panpsychism" in Mortal Questions. New York: Cambridge University Press.

Oberg, Andrew. 2015. "A realist self?," Journal of Applied Ethics and Philosophy 7: 24–33.

Parfit, Derek. 1984. Reasons and Persons. Oxford: Oxford University Press.

Strawson, Galen. 2011. "The Minimal Subject" in Shaun Gallagher, ed. The Oxford Handbook of the Self. Oxford: Oxford University Press.

Strawson, Galen. 2009. Selves: An Essay in Revisionary Metaphysics. Oxford: Oxford University Press.

BBC News: Technology. May 26, 2016. "Researchers teach robots to 'feel pain'". Accessed July 13, 2016. http://www.bbc.com/news/technology-36387563.

Wikipedia: The Free Encyclopedia. "Technological singularity." Accessed July 13, 2016. https://en.wikipedia.org/wiki/Technological_singularity.
