IJRIT International Journal of Research in Information Technology, Volume 2, Issue 4, April 2014, Pg: 559-563

International Journal of Research in Information Technology (IJRIT)

www.ijrit.com

ISSN 2001-5569

Ethical Decision of Moral Machines Is a Professional Responsibility in Computing

Mr. Athar Hussain¹, Prof. Anjali B. Raut²

¹ M.E. Student, Department of Computer Science and Information Technology, H.V.P.M’s College of Engineering & Technology, Amravati (M.S.), [email protected]

² Head, Department of Computer Science and Engineering, H.V.P.M’s College of Engineering & Technology, Amravati (M.S.), [email protected]

Abstract

The developing academic field of machine ethics seeks to make artificial agents safer as they become more pervasive throughout society. Motivated by planned next-generation robotic systems, machine ethics typically explores solutions for agents with autonomous capacities intermediate between those of current artificial agents and humans, with designs developed incrementally by, and embedded in, a society of human agents. These assumptions substantially simplify the problem of designing a desirable agent and reflect the near-term future well, but there are also cases in which they do not hold. In particular, they need not apply to artificial agents with human-level or greater capabilities. The potentially very large impacts of such agents suggest that advance analysis and research are valuable. We describe some of the additional challenges such scenarios pose for machine ethics.

Keywords: how the term “moral” might come about; machine ethics; machine ethics challenges of superintelligence; limitations from the nature of ethics; limitations arising from bounded agents and complex environments.

1. Introduction

Computing professionals perform a variety of tasks: they write specifications for new computer systems, design instruction pipelines for superscalar processors, diagnose timing anomalies in embedded systems, test and validate software systems, restructure the back-end databases of inventory systems, analyze packet traffic in local area networks, and recommend security policies for medical information systems. Computing professionals are obligated to perform these tasks conscientiously because their decisions affect the performance and functionality of computer systems, which in turn affect the welfare of the systems’ users directly and that of other people less directly. For example, the software that controls the automatic transmission of an automobile should minimize gasoline consumption and, more importantly, ensure the safety of the driver, any passengers, other drivers, and pedestrians.

The ethical obligations of computing professionals go beyond complying with laws or regulations; laws often lag behind advances in technology. For example, before the passage of the Electronic Communications Privacy Act of 1986 in the United States, government officials did not require a search warrant to collect personal information transmitted over computer communication networks. Nevertheless, even in the absence of a privacy law before 1986, computing professionals should have been aware of the obligation to protect the privacy of personal information.


Automated vehicle technologies are computer systems that assist human drivers by automating aspects of vehicle control. These technologies have a range of capabilities, from anti-lock brakes and forward collision warning, to adaptive cruise control and lane keeping, to fully automated driving.

Professionals tend to have clients, not customers. Whereas a sales clerk should try to satisfy the customer’s desires, the professional should try to meet the client’s needs (consistent with the welfare of the client and the public). For example, a physician should not give a patient a prescription for barbiturates just because the patient wants the drugs, but only if the patient’s medical condition warrants the prescription.

Because professionals have specialized knowledge, clients cannot fully evaluate the quality of services provided by professionals. Only other members of a profession, the professional’s peers, can sufficiently determine the quality of professional work. The principle of peer review underlies accreditation and licensing activities: members of a profession evaluate the quality of an educational program for accreditation, and they set the requirements for the licensing of individuals. For example, in the United States, a lawyer must pass a state’s bar examination to be licensed to practice in that state. (Most states have reciprocity arrangements: a professional license granted by one state is recognized by other states.) The license gives professionals legal authority and privileges that are not available to unlicensed individuals. For example, a licensed physician may legitimately prescribe medications and perform surgery, activities that should not be performed by people who are not medical professionals.

Ethical Motives and Social Incentives

Understanding why people vote is fundamental to the theory and practice of democracy. Analyses rooted in rational choice face difficulty in explaining why so many people incur the cost of voting even when it is improbable that any one of them is pivotal. An obvious shortcoming of pivotal-voter models is that they restrict voter motivations to being purely instrumental in terms of the electoral outcome, to the exclusion of motives rooted in civic duty, ethics, the desire to have a voice, social norms, and social pressures. Although the ethical-voter framework is useful for understanding turnout, it is predicated on behavior being intrinsically motivated and is divorced from the social mechanisms and pressures that are widely believed to drive voting. A growing empirical literature has instead emphasized the importance of extrinsic motives for turnout and pro-social behavior.

2. How the Term “Moral” Might Come About

Before proceeding, it is important to note, first, that because terms are situated within the broader evolution of language, their meanings are constantly in flux; thus, the following comments must be understood generally. Second, what follows is one way in which a redescription of the term “moral” might come about, although in places I will note that this is already happening to some extent. Not all machine ethicists can be plotted on this trajectory.

That said, the project of designing moral machines is complicated by the fact that even after more than two millennia of moral inquiry, there is still no consensus on how to determine moral right and wrong. Even though most mainstream moral theories agree, from a big-picture perspective, on which behaviors are morally permissible and which are not, there is little agreement on why they are so, that is, what it is precisely about a moral behavior that makes it moral. For simplicity’s sake, this question will here be designated the hard problem of ethics. That it is a difficult problem is seen not only in the fact that it has been debated since philosophy’s inception without any satisfactory resolution, but also in the fact that the candidate answers offered over the centuries are still on the table today. Does moral action flow from a virtuous character operating according to right reason? Is it based on sentiment, or on application of the right rules? Perhaps it is mere conformance to some tried and tested principles embedded in our social codes, or based in self-interest, species-instinct, religiosity, and so forth.

The reason machine ethics cannot simply move forward in the wake of unsettled questions such as these is that engineering solutions are needed. Fuzzy intuitions about the nature of ethics do not lend themselves to implementation where automated decision procedures and behaviors are concerned. Progress in this area therefore requires working the details out in advance and testing them empirically. Such a task amounts to coping with the hard problem of ethics, though largely, perhaps, by rearranging the moral landscape so that an implementable solution becomes tenable.


3. Machine Ethics

The research area of machine ethics (also called roboethics) has recently emerged as a subfield of Artificial Intelligence focused on the task of ensuring ethical behavior of artificial agents (commonly called AMAs, artificial moral agents [Wallach, Allen, and Smit 2008]), drawing contributors from both computer science and philosophy. By focusing on the behavior of artificial agents, the field is distinguished from earlier work in ethics as applied to technology, which concerned itself with the use of technology by humans and, on rare occasions, with the treatment of machines by humans (Anderson and Anderson 2007a).

Machine ethics researchers agree that any AMAs would be implicit ethical agents, capable of carrying out their intended purpose in a safe and responsible manner but not necessarily able to extend moral reasoning to novel situations (Weng, Chen, and Sun 2009). Opinions within the field diverge on whether it is desirable, or even possible, to construct AMAs that are full ethical agents, which, like ethical human decision-makers, would be capable of making explicit moral judgments and justifying them (Anderson and Anderson 2007a). While Isaac Asimov’s “Three Laws of Robotics” are widely recognized to be an insufficient basis for machine ethics (Anderson and Anderson 2007a; Weng, Chen, and Sun 2009), there is little agreement on what moral structure AMAs should possess instead. Suggestions range from applying evolutionary algorithms to populations of artificial agents to achieve the “survival of the most moral” (Wallach, Allen, and Smit 2008), neural network models of cognition (Guarini 2005), and various hybrid approaches (Anderson and Anderson 2007b), to value systems inspired by the Golden Rule (Wallach, Allen, and Smit 2008), virtue ethics (Wallach, Allen, and Smit 2008), Kant’s Categorical Imperative (Wallach, Allen, and Smit 2008; Anderson and Anderson 2007a), utilitarianism (Anderson and Anderson 2007a), and many others.
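
One of these proposals lends itself to a compact illustration: applying evolutionary algorithms to a population of artificial agents so that the “most moral” survive. The Python sketch below is a minimal, hypothetical reading of that idea; the dilemmas, actions, reference judgments, and parameters are all invented for illustration and do not reproduce any system from the cited literature.

```python
import random

# Toy "survival of the most moral" evolutionary loop (illustrative only).
# The dilemmas, actions, and reference judgments below are invented placeholders.

DILEMMAS = ["lie_to_protect", "share_resource", "report_wrongdoing"]
ACTIONS = ["do_it", "refuse"]
REFERENCE = {"lie_to_protect": "refuse",      # stands in for agreed-upon judgments
             "share_resource": "do_it",
             "report_wrongdoing": "do_it"}

def random_policy():
    """An agent is just a mapping from dilemma to action."""
    return {d: random.choice(ACTIONS) for d in DILEMMAS}

def moral_fitness(policy):
    """Count the dilemmas on which the policy matches the reference judgment."""
    return sum(policy[d] == REFERENCE[d] for d in DILEMMAS)

def mutate(policy, rate=0.2):
    """Randomly perturb a fraction of an agent's responses."""
    return {d: (random.choice(ACTIONS) if random.random() < rate else a)
            for d, a in policy.items()}

def evolve(generations=30, population_size=20, survivors=5):
    population = [random_policy() for _ in range(population_size)]
    for _ in range(generations):
        population.sort(key=moral_fitness, reverse=True)
        elite = population[:survivors]                  # the "most moral" survive
        population = elite + [mutate(random.choice(elite))
                              for _ in range(population_size - survivors)]
    return max(population, key=moral_fitness)

if __name__ == "__main__":
    best = evolve()
    print(best, "fitness:", moral_fitness(best))
```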

4. Machine Ethics Challenges of Superintelligence

How would an artificial moral agent ideally express human values? Gary Drescher describes two classes of agents: situation-action machines, with rules specifying actions to perform in response to particular stimuli, and choice machines, which possess utility functions over outcomes and can select actions that maximize expected utility. Situation-action machines can produce sophisticated behavior, but because they possess only implicit goals they are rigid in behavior and cannot easily handle novel environments or situations. In contrast, a choice machine can readily select appropriate actions in unexpected circumstances based on explicit values and goals (Drescher 2006). Intermediate between these extremes, agents may have goals that are partially explicit and partially implicit.

Much discussion in machine ethics implicitly assumes that certain key features of the situations of artificial moral agents will be fixed and can be relied on to help constitute implicit goals for agents towards the situation-action end of the spectrum. However, these assumptions are much less likely to apply to superintelligent agents, and the removal of each assumption introduces new design challenges.
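
To make Drescher’s distinction concrete, the hypothetical Python sketch below contrasts a minimal situation-action agent (a fixed stimulus-to-action lookup) with a choice agent that scores each action by its expected utility over outcomes. The situations, probabilities, and utility values are invented placeholders rather than anything drawn from Drescher’s account; the point is only that the rule-based agent has no principled response to an unanticipated situation, while the choice agent can still rank actions by its explicit values.

```python
# Illustrative contrast between Drescher's two agent classes.
# The situations, actions, probabilities, and utilities are invented placeholders.

def situation_action_agent(situation):
    """Situation-action machine: fixed rules mapping stimuli to actions (implicit goals)."""
    rules = {
        "pedestrian_ahead": "brake",
        "clear_road": "cruise",
    }
    # Unrecognized situations expose the rigidity of purely implicit goals.
    return rules.get(situation, "stop_and_wait")

def outcome_model(situation, action):
    """Hypothetical world model: outcome probabilities for each (situation, action) pair."""
    if situation == "obstacle_partially_seen":
        return ({"collision": 0.3, "safe_pass": 0.7} if action == "cruise"
                else {"collision": 0.01, "safe_pass": 0.99})
    return {"safe_pass": 1.0}

def utility(outcome):
    """Explicit values over outcomes."""
    return {"collision": -100.0, "safe_pass": 1.0}[outcome]

def choice_agent(situation, actions):
    """Choice machine: select the action that maximizes expected utility over outcomes."""
    def expected_utility(action):
        return sum(p * utility(outcome)
                   for outcome, p in outcome_model(situation, action).items())
    return max(actions, key=expected_utility)

if __name__ == "__main__":
    # The rule-based agent falls back to a default; the choice agent reasons from its values.
    print(situation_action_agent("obstacle_partially_seen"))             # -> stop_and_wait
    print(choice_agent("obstacle_partially_seen", ["cruise", "brake"]))  # -> brake
```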

5. Limitations from the Nature of Ethics

This section surveys some of the literature in normative ethics and moral psychology which suggests that morality does not lend itself to an algorithmic solution. This is not to say that humans and machines cannot improve their moral behavior in many situations by following the prescriptions of something akin to an algorithm; indeed, there is widespread agreement on at least some moral rules (Gert, 2007). However, these rules are often ambiguous and should sometimes be broken, and there is persistent disagreement about the conditions in which such exceptions should be made, as well as broad agreement that some specific ethical domains remain problematic despite the best efforts of philosophers. Importantly for the present discussion, this “unsolved” nature of ethics may not be a transient condition owing to insufficient rational analysis, but rather a reflection of the fact that the intuitions on which our ethical theories are based are unsystematic at their core, which creates difficulties for the feasibility of machine ethics.


6. Limitations Arising from Bounded Agents and Complex Environments

Given an arbitrary ethical theory or utility function, a machine (like a human) will be limited in its ability to act successfully on it in complex environments. This is in part because ethical decision-making requires an agent to estimate the wider consequences (or logical implications) of its actions and to possess relevant knowledge of the situation at hand. Yet humans and machines are limited in their perceptual and computational abilities and often have only some of the potentially relevant information for a given decision, and these limitations create possible failure modes for machine ethics.

Computational and knowledge limitations apply to ethics in different ways depending on the ethical theory involved. Consequentialist theories, for example, are quite explicitly dependent on the ability to know how one’s actions will affect others, though there is still much heterogeneity here, such as the distinction between objective utilitarianism (which prescribes acting in a way that, in fact, maximizes good outcomes) and subjective utilitarianism (which emphasizes the expected outcomes of one’s actions). Deontological theories, in contrast, are not explicitly about foreseeing outcomes, but computational limitations are related to deontology in at least two ways. First, even knowing that a given action is, for example, consistent with a given deontological duty may require some knowledge and analysis of the situation and actors involved, not all of which will necessarily be apparent to the actor. Second, some deontological theories and rule-consequentialist theories (which prescribe acting on the set of rules that, if universally accepted or adopted, would lead to the best outcomes) require judgments about the logical implications of a given decision if it were to be performed by many or all actors, i.e., the universalizability of a moral judgment. As such, the knowledge and information-processing limitations of an artificial agent will be relevant (though perhaps to varying degrees) regardless of the specific ethical theory invoked.
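
The contrast between objective and subjective utilitarianism can be stated compactly: an objective evaluation scores the outcomes that actually follow from an action, while a subjective evaluation scores an action by its expectation under the agent’s (possibly incomplete) beliefs. The hedged Python sketch below, with invented actions and numbers, illustrates how a bounded agent’s flawed world model can make the subjectively best action differ from the objectively best one.

```python
# Toy illustration of bounded ethical decision-making (all numbers invented).
# The agent's beliefs are deliberately inaccurate, so the subjectively best action
# (highest expected utility under its beliefs) differs from the objectively best one.

TRUE_OUTCOMES = {                     # what would actually happen (unknown to the agent)
    "act_A": {"harm": 0.6, "benefit": 0.4},
    "act_B": {"harm": 0.2, "benefit": 0.8},
}

AGENT_BELIEFS = {                     # the agent's incomplete model of the same actions
    "act_A": {"harm": 0.1, "benefit": 0.9},
    "act_B": {"harm": 0.3, "benefit": 0.7},
}

UTILITY = {"harm": -10.0, "benefit": 5.0}

def expected_utility(distribution):
    """Expected utility of an action given a probability distribution over outcomes."""
    return sum(p * UTILITY[outcome] for outcome, p in distribution.items())

def best_action(model):
    """Pick the action with the highest expected utility under the given model."""
    return max(model, key=lambda action: expected_utility(model[action]))

if __name__ == "__main__":
    print("Subjectively best:", best_action(AGENT_BELIEFS))   # act_A, given flawed beliefs
    print("Objectively best: ", best_action(TRUE_OUTCOMES))   # act_B, given the true model
```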

7. Conclusions

As the sophistication of artificial moral agents improves, it will become increasingly important to construct fully general decision procedures that do not rely on assumptions about special types of agents and situations to generate moral behavior. Since such development may require extensive research, and it is not currently known when such procedures will be needed to guide the construction of very powerful agents, the field of machine ethics should begin to investigate the topic in greater depth.

It should be fairly obvious from the preceding analysis that machine ethics is not an adequate technological fix for the potential risks from AI according to these criteria. Machine ethics does not embody the cause-effect relationship associated with AI risks, because humans are involved in, and responsible for, the social outcomes of the technology in various ways, and there is more to successful ethical behavior than having a good algorithm. Additionally, ethics is hardly unambiguous and uncontroversial: not only are there disagreements about the appropriate ethical framework to implement, but there are specific topics in ethical theory (such as population ethics and the other topics identified by Crouch) that appear to elude any definitive resolution regardless of the framework chosen. Finally, given the diversity of AI systems today and for the foreseeable future, and the deep dependence of ethical behavior on context, there appears to be no hope of machine ethics building on an existing technical core.

All of this suggests that machine ethics research may have some social value, but it should be analyzed through the broader lens of the inherent difficulty of intelligent action in general and the complex social context in which humans and computational agents will find themselves in the future. Thus we conclude that the ethical decision-making of moral machines is a professional responsibility in computing, and that it is of significant importance to artificial intelligence.

Acknowledgments

The author would like to thank Prof. Anjali B. Raut for providing valuable literature that aided the development of this paper.


References

[1] Allen, C. et al. (2000). Prolegomena to any future artificial moral agent. Journal of Experimental and Theoretical Artificial Intelligence, 12, 251-261.
[2] Allenby, B. (2009). The ethics of emerging technologies: Real time macroethical assessment. IEEE International Symposium on Sustainable Systems and Technology, ISSST '09.
[3] Bringsjord, S. et al. (2011). Piagetian roboethics via category theory: Moving beyond mere formal operations to engineer robots whose decisions are guaranteed to be ethically correct. In Anderson, M., and Anderson, S. L. (Eds.), Machine Ethics. New York: Cambridge University Press.
[4] Bringsjord, S., and Bello, P. (2012). On how to build a moral machine. Topoi, ISSN 0167-7411.
[5] Bringsjord, S., and Johnson, J. (2012). Rage against the machine. The Philosophers' Magazine, 57, 90-95.
[6] Cloos, C. (2005). The Utilibot Project: An autonomous mobile robot based on utilitarianism. American Association for Artificial Intelligence.
[7] Helbing, D. (2010). Systemic Risks in Society and Economics. Lausanne: International Risk Governance Council.
[8] Horgan, T., and Timmons, M. (2009). What does the frame problem tell us about moral normativity? Ethical Theory and Moral Practice, 12, 25-51.
[9] Human Rights Watch and International Human Rights Clinic. (2012). Losing Humanity: The Case Against Killer Robots. New York: Human Rights Watch.
[10] Klein, C. (2011). The dual track theory of moral decision-making: A critique of the neuroimaging evidence. Neuroethics, 4(2), 143-162.
[11] Hanson, R. (1994). If uploads come first: The crack of a future dawn. Extropy, 6(2). http://hanson.gmu.edu/uploads.html.
[12] Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. New York: Viking.
[13] Moravec, H. P. (1999). Robot: Mere Machine to Transcendent Mind. New York: Oxford University Press.
[14] Omohundro, S. M. (2007). The nature of self-improving artificial intelligence. Paper presented at Singularity Summit 2007, San Francisco, CA, September 8-9. http://intelligence.org/summit2007/overview/abstracts/#omohundro.
