Ethical concerns around AI and self-driving cars
The self-driving electric cars have been tested using simulations and extensive road tests; they have performed better than expected and are due for release on the road. However, the AI (artificial intelligence) consultant has raised issues concerning machine learning and possible limitations in the cars’ AI systems’ ability to adapt to real-life situations involving life-and-death decisions. While the cars have performed well from the point of view of AI tests, there are situations in which humans can make better decisions than machines. This report highlights the ethical perspectives relating to AI and explains why the concerns raised by the AI consultant, on the need to simulate accident and life-and-death situations the self-driving electric cars may encounter, matter despite the commercial interest in releasing the cars ahead of the competition. The report discusses ethical principles in IT and makes recommendations for resolving the ethical dilemma.
It can be likened to the ‘rise of the machines’: from robots on factory floors to automated traffic control systems and self-driving cars, machines that use AI have made many things easy, even in hazardous situations ranging from handling hazardous waste to the battlefield (drones and self-guided missile systems) (Bajpai, 2016). However, as the AI world becomes boundless, ethical concerns confront AI practitioners and consultants. There are ethical issues around machines and AI relating to their behavior and performance, such as machines under-performing or ‘making mistakes’, but this report will not delve into them; they have really to do with design and production. The ethical issues faced by AI practitioners and consultants have more to do with dilemmas than with poor design. Situations that pertain to rules of action are fairly easy to deal with; for instance, if it is wrong to commit murder, a robotic machine can easily ‘learn’ this using AI. Saving lives is more important than saving costs. But beneath this relationship lies the dilemma faced by the AI consultant: what happens when the machine and human beings both fail to adhere to a rule, or when all the available choices violate the same rule (Vanian, 2017)?

Three Laws of Robotics

Asimov put forth the ‘Three Laws of Robotics’, namely:
1. A robot may not cause harm to a human being or, through inaction, allow a human being to come to harm.
2. A robot has to obey the orders human beings give it, except where such orders conflict with the first law.
3. A robot is obliged to protect its own existence as long as such protection does not conflict with the first or second law (Hildebrandt & Gaakeer, 2013).
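Rules of this kind are straightforward to encode. As a purely hypothetical illustration (none of these names comes from a real driving system, and the predicates are assumptions for the sketch), the three laws can be expressed as prioritized checks on a proposed action:

```python
# Hypothetical illustration of Asimov's laws as prioritized checks on a
# proposed driving action. All names here are assumptions, not a real system.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool           # would this action injure a person?
    inaction_harms_human: bool  # would *not* acting injure a person?
    ordered_by_human: bool      # was this action requested by a human?
    destroys_vehicle: bool      # would this action sacrifice the car itself?

def permitted(action: Action) -> bool:
    """Apply the three laws in strict priority order."""
    # First law: never harm a human, whether by action or by inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True  # acting becomes obligatory when inaction would harm
    # Second law: obey human orders unless they conflict with the first law.
    if action.ordered_by_human:
        return True
    # Third law: self-preservation, subordinate to the first two laws.
    return not action.destroys_vehicle

# Example: an evasive swerve that harms no one and prevents harm.
swerve = Action("swerve_left", harms_human=False, inaction_harms_human=True,
                ordered_by_human=False, destroys_vehicle=False)
print(permitted(swerve))  # -> True
```

The dilemma the consultant faces begins exactly where such a checker offers no way out: when every candidate action, including inaction, violates the same first law.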
The Dilemma of Self-driving Electric Cars

The ethical dilemma faced by the AI consultant runs far deeper than making choices and trade-offs; it entails assessing ‘value’ in AI and its use in machines and robots such as the self-driving car. There are situations where moral instincts and ethics are skewed by circumstances, because determining what is wrong becomes more complex when it is mixed with emotions related to friends, family, the details of actions taken, or some ‘bigger goal’ such as commercial interests. One of the biggest ethical issues AI consultants face is how to guard against mistakes. Intelligence is a process of learning: AI systems undergo a training phase in which they have to ‘learn’ so they can detect the right patterns and make decisions according to the input (Lin, Abney & Bekey, 2014). When a system has been fully ‘trained’, it goes into a testing phase, where its performance is evaluated. The training and testing phases cannot cover all possible scenarios and examples; so in the case of the tests for the self-driving electric cars, all the testing, ‘learning’, and simulations (examples) are not enough for the situations the machine would have to deal with in the real world. Machines can also be fooled in ways that are not possible with human beings; for instance, patterns of random dots can make a machine ‘see’ an object that does not exist, so there is a need to make sure the machine performs as expected (Gunkel, 2017).
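To make the training/testing distinction concrete, the following minimal sketch illustrates why a finite test phase cannot certify behavior in situations it never contains; the scenario names and the decide() stub are assumptions for illustration, not a real driving policy:

```python
# Hypothetical sketch of the train-then-test cycle described above.
# Scenario names and the decide() stub are illustrative assumptions only.

TRAINED_RESPONSES = {
    # behaviors 'learned' during the training phase
    "clear_road": "maintain_speed",
    "pedestrian_crossing": "stop",
    "hard_braking_ahead": "brake",
}

TEST_SCENARIOS = ["clear_road", "pedestrian_crossing", "hard_braking_ahead"]

def decide(scenario: str) -> str:
    """Stand-in for the trained driving policy."""
    # A real system generalizes rather than looks up, but any scenario
    # outside its experience still maps to behavior no one has evaluated.
    return TRAINED_RESPONSES.get(scenario, "UNDEFINED")

# The test phase only confirms performance on the scenarios it contains:
assert all(decide(s) != "UNDEFINED" for s in TEST_SCENARIOS)

# A real-world case absent from both phases goes unevaluated:
print(decide("light_reflection_mistaken_for_car"))  # -> UNDEFINED
```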
Therefore, human beings must not have the ability to overpower the machine/system and use it for their own ends; in this case, getting to market ahead of the competition. The consequences of unintended events pose another dilemma: even if designed to operate in a certain way, the cars can still malfunction and cause unintended consequences, such as performing an evasive maneuver triggered by a light reflection rather than a real object; the car ‘sees’ the light as an oncoming car and evades, only to cause an accident (Greenwald, 2015, May 10). The owners would bear liability, while the victim may suffer injury or death; this violates Asimov’s first rule. Based on the rules proposed by Asimov, the electric cars should not cause harm to humans, or result in harm to humans because of an act of omission (inaction); therefore, the cars must not cause an accident. The purpose of AI is to augment human intelligence, not to replace it (or humans); the purposes for which AI is deployed must be clear and transparent; in this case, to offer fast, cost-effective, accurate, and safe transport in a zero-emission car, and to overcome some human failings such as tiredness or drowsiness while driving, and possibly bad driving habits. AI must take into consideration the human side of the facts: safe and efficient transport that has commercial value added to it; the cars must be released ahead of competitors to make business sense and take advantage of commercial opportunities, because that was one of the driving reasons they were developed in the first place (Charisi et al., 2017, March 20). Based on Asimov’s rules of robotics, the AI consultant is faced with the dilemma of how to retain control of a complex, intelligent self-driving electric car system.
Having to choose between putting the self-driving electric cars on the road and first letting them ‘learn’ situations where decisions mean life and death, the human moralist will make a decision based on emotion. Emotions related to the sanctity of life, for example, versus commercial interest, mean that a caring person will let concern for life trump the commercial interest, in order to address the potential risk the cars pose in situations where accidents and life-and-death decisions have to be made. The caring person will insist that the accident situations be simulated and the machine allowed time to learn, even if time runs out and competitors release their product before the company does.
According to the ACS, all members of professional societies, including the computer professional community, are required to abide by a specific code of ethics; as a consequence, the ACS Code of Professional Conduct sets out six ethical values to abide by. For this ethical case, the applicable values include:
Value 1.2.1: the primacy of the public interest
Value 1.2.2: the enhancement of quality of life, and
Value 1.2.3: honesty (‘ACS,’ 2014).
According to the first value (1.2.1), the AI consultant must place public interests above business, personal, or sectional interests. In this case, the cars must not endanger the public by going untested and unsimulated for accidents, life-and-death situations, and the decisions to take in those situations. Based on value 1.2.2, the AI consultant must strive to enhance the quality of life of those affected by their work; those people must not be placed at increased risk of harm, danger, or damage by self-driving cars that have not been tested properly. Value 1.2.3, honesty, implies that when presenting skills, services, knowledge, and products, the AI consultant will remain honest; they must be clear that the electric cars still pose a risk and have not been tested for life-and-death or accident situations.
As discussed before, when emotions and special interest groups are involved, the perception of right and wrong becomes skewed and complex. Still, the AI consultant could pass the AI components, knowing very well that further modeling can prevent consequences in the future in the event of accidents; accidents will happen, those that are known can be mitigated, and the unknown ones must be experienced so that mitigation measures can be put in place. The ethical and moral sense for systems and machines can be built on the basis of utilitarianism, but as AI professionals, we must make the choice. Being AI systems, the self-driving electric cars must also provide a rationale for the actions they take, based on their ‘learning’ during tests (Hars, 2016).
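A rationale requirement of this kind can be met with auditable decision records. The sketch below is a hypothetical illustration of what such a record might contain; the field names, the JSON-lines log, and the example values are all assumptions, not any vendor’s actual format:

```python
# Hypothetical sketch of action logging so the car can 'provide a rationale'.
# Field names and the decision record are illustrative assumptions only.
import json
import time

def log_decision(action: str, sensor_evidence: dict, rule_applied: str,
                 logfile: str = "decisions.jsonl") -> None:
    """Append an auditable record explaining why an action was taken."""
    record = {
        "timestamp": time.time(),
        "action": action,
        "evidence": sensor_evidence,   # what the sensors reported
        "rule_applied": rule_applied,  # which learned/encoded rule fired
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record an emergency stop together with its justification.
log_decision(
    action="emergency_stop",
    sensor_evidence={"lidar_object_ahead_m": 4.2, "camera_class": "pedestrian"},
    rule_applied="first_law_no_harm_to_humans",
)
```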
To solve the ethical dilemma, the AI consultant can apply the principle of a ‘fail safe’ system, in which, while the cars are self-driving, human beings can intervene in dangerous cases and make more informed choices remotely; it is like an aircraft on autopilot, where the pilot can still take control and fly manually. As the cars drive, they must have a remote control and access mechanism that allows human-assisted decision making (Russell, Hauert, Altman, & Veloso, 2015). Another approach to the ethical dilemma is continuous development and testing, so that accident situations can be simulated while the cars are on the road, where engineering and real-life situations interact. With human-assisted intervention, the people involved get a better view of possible accidents from a practical point of view and can therefore develop better mitigation measures, made into algorithms that the cars ‘learn’ continuously. The solution lies at the intersection of man and machine learning.
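As a minimal sketch of this fail-safe hand-over, assuming a hypothetical confidence score produced by the driving system (none of these names belongs to a real autonomous-driving API):

```python
# Minimal sketch of the 'fail safe' principle described above: the car drives
# itself, but control is handed to a remote human below a confidence threshold.
# The threshold, scores, and operator interface are all hypothetical.

CONFIDENCE_THRESHOLD = 0.85  # assumed cut-off below which a human takes over

def control_step(situation_confidence: float, autonomous_action: str,
                 request_human_decision) -> str:
    """Choose between the car's own action and a remote human decision."""
    if situation_confidence >= CONFIDENCE_THRESHOLD:
        return autonomous_action  # normal self-driving operation
    # Dangerous or ambiguous case: like an autopilot, yield to the human.
    return request_human_decision()

# Example: a simulated remote operator overriding an uncertain evasive swerve.
decision = control_step(
    situation_confidence=0.40,
    autonomous_action="swerve_left",
    request_human_decision=lambda: "brake_and_hold_lane",
)
print(decision)  # -> brake_and_hold_lane
```

Each hand-over of this kind also yields a labeled real-world case that can be fed back into the continuous training described above.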
References
‘ACS’ (2014). ACS Code of Professional Conduct. Retrieved April 03, 2017, from https://www.acs.org.au/content/dam/acs/acs-documents/ACS%20Code-of-Professional-Conduct_v2.1.pdf
Bajpai, P. (2016). Artificial Intelligence: Growth, Opportunities and Threats. NASDAQ.com. Retrieved 3 May 2017, from https://www.nasdaq.com/article/artificial-intelligence-growth-opportunities-and-threats-cm688646
Charisi, V., Dennis, C., Fisher, M., Lieck, R., Matthias, A., Slavkovik, M., . . . Yampolskiy, R. (2017, March 20). Towards Moral Autonomous Systems. Retrieved April 03, 2017, from https://arxiv.org/pdf/1703.04741.pdf
Greenwald, T. (2015, May 10). Does Artificial Intelligence Pose a Threat? Retrieved April 03, 2017, from https://www.wsj.com/articles/does-artificial-intelligence-pose-a-threat-1431109025
Gunkel, D. J. (2017). The Machine Question: Critical Perspectives on AI, Robots, and Ethics. Cambridge, MA: MIT Press.
Hars, A. (2016). Top misconceptions of autonomous cars and self-driving vehicles. Retrieved May 03, 2017, from https://www.driverless-future.com/?page_id=774
Hildebrandt, M., & Gaakeer, J. (2013). Human Law and Computer Law: Comparative Perspectives. Dordrecht: Springer Netherlands.
Lin, P., Abney, K., & Bekey, G. A. (2014). Robot ethics: the ethical and social implications of robotics. London, England: The MIT Press.
Russell, S., Hauert, S., Altman, R., & Veloso, M. (2015, May 27). Robotics: Ethics of artificial intelligence. Retrieved May 03, 2017, from https://www.nature.com/news/robotics-ethics-of-artificial-intelligence-1.17611
Vanian, J. (2017, February 06). How Powerful AI Technology Can Lead to Unforeseen Disasters. Retrieved May 03, 2017, from https://fortune.com/2017/02/06/artificial-intelligence-ethics-disasters/