Will the Machine Take Over? Ethics in Artificial Intelligence (Book Review)

Artificial Intelligence (AI)—its power fueled by its portrayal in science fiction and the like—has captured our imaginations. We are excited about its possibilities, yet our excitement is tinged with disbelief and, sometimes, fear of what would happen if the machine took over altogether. Mark Coeckelbergh presents us with a well-explained introduction to AI, its far-reaching applications in society, and the importance of ethical considerations.

Artificial Intelligence is defined by the author as “Intelligence displayed or simulated by technological means.” AI has many benefits in improving public and commercial services, he explains, but it also raises some particularly important questions, such as “How many decisions and how much of these decisions do we want to delegate to AI?” and “Who is responsible when something goes wrong?” (p. 17).

Dr. Coeckelbergh, an academic philosopher with experience as an advisor for policymaking, states, “AI Ethics is about technological change and its impact on individual lives but also about transformations in society and the economy” (p. 21).

The author carefully poses various questions that allow you to consider and reconsider the popular notions of AI—its uses and its power for good, but also situations in which AI can be misused or can go beyond what it was designed for.

The book caters to a wide audience, especially those who are curious about how AI could change society and its ethical foundations. Coeckelbergh clearly explains complex concepts with relatable and current scenarios that help contextualize both the arguments for and against AI and the need for human “intervention” or ethics.

For UX professionals and anyone in the technology industry who is new to AI, this book is a must-read, as it emphasizes the need for responsible and participatory design.

The book begins with an introduction to AI and how it is perceived both in the present and for the future. Science fiction, he observes, tends to present the message that AI is in competition with humans. He examines assumptions about AI and the roles that humans and non-humans play in various scenarios, present and future.

He presents AI and its various applications as it is today, and he discusses the ethical and societal problems that can arise. He then goes on to discuss AI policy for the near future.

With power comes responsibility. In the case of AI ethics, there are challenges to overcome—for example, in the very nature of policymaking. He argues that policymakers should adopt a proactive vision, working to anticipate and avoid the societal challenges that AI may cause or may already have caused.

In Chapter 11, where the author addresses the concept of “Proactive ethics,” he presents ideas of responsible innovation and embedding values in design—something that is, and should be, second nature in user experience design. Responsible innovation, in our parlance, would mean participatory design, where stakeholders (users included) are involved.

The author goes on to suggest that being practice oriented and following a bottom-up approach can help translate these concepts into practice. The importance of being inclusive when making decisions about the future of a society, he points out, also needs to extend to the developers and end users of technology who may have to deal with its negative consequences. However, being truly democratic is difficult because intellectual power is concentrated in a few big corporations, which will need to be regulated in order to safeguard the public interest.

The author suggests that in order to avoid an AI “Winter” (a slowing down of AI development and investment), AI ethics needs “interdisciplinarity and transdisciplinarity”—that is, the joint resources and brain power of people from the humanities and the sciences working together in order to be effective. This means people in the humanities need to be able to think about new technologies, and engineers need to be more sensitive to the effects of new technology on society.

In Chapter 12, the author argues that AI has the potential to help solve some of the big problems in the world, but that it is important to get priorities right. AI, powerful as it is, has limitations when it comes to understanding and solving human and environmental problems such as poverty and climate change. Thus, AI in conjunction with human thinking, intuition, practical wisdom, and virtue is needed to solve the big problems of the world.

This book was a pleasure to read and is a great addition to any UX book club’s reading list. It makes you think!

Revathi Nathaniel

Revathi Nathaniel has 16 years of experience as a usability expert and user researcher. She has experience working with companies that build enterprise software, SQL products, and non-profit scientific organizations. She enjoys working with multidisciplinary teams that work collaboratively to solve complex problems and continuously improve the user’s experience of software applications.