Ethics in Human-Computer Interaction

Yash Lara
18 min read · Jan 4, 2021
Image Courtesy: Conditio Humana

Human-Computer Interaction (HCI), as the name suggests, is a field that studies the design and development of technologies involving interaction between computers and humans. Wherever humans come into the picture, the question of ethics comes along with them. When HCI researchers like me design and develop technology, Michael Sandel’s ‘What’s the Right Thing to Do?’ echoes in our minds. Technologies have immense power to improve human lives. But as the popular proverb goes, ‘With great power comes great responsibility’. The same potential that technology holds for improving and elevating human lives can, if used in unethical and nefarious ways, also harm and hurt them. In this article, I take a deep dive into the role of ethics in Human-Computer Interaction. I evaluate HCI from the perspective of different schools of ethics, and dissect the roles we as designers and developers have in shaping HCI towards a more ethical future. So buckle up, grab your favourite beverage, and get ready to dive into the world of ethics in HCI! It might be a bit long (I apologize in advance), but I assure you it’s worth the read!

Users are humans. And humans come with the inherent challenges of ethics. The field of HCI is heavily centred around users and is hence, unsurprisingly, deeply intertwined with the field of ethics. HCI itself is a vast field. In the most basic sense, HCI asks a simple question: “How can computers, in all their forms, help users have a better experience?”. But this simple question is precisely what makes HCI a highly collaborative and expansive field. HCI as a research field involves mathematicians, computer scientists, designers, and psychologists, among others. As a practical field, HCI involves industrialists and politicians. Beyond this, the field is also heavily influenced by the market, technological developments, and consumer needs. Each level of involvement brings its own ethical challenges.

Many fields have the privilege of prioritizing one aspect over another. HCI, however, shares no such privilege. One cannot prioritize user needs and experience over ethics, or vice versa. It is no surprise, then, that ethics is often a source of friction and tension within the field. Ethics has come to play an even greater role in HCI with the emergence of new concepts such as dark UX patterns and subliminal influencing. To further understand the role and importance of ethics in HCI, let us first distinguish the different kinds of ethical principles that can be applied to the field.

That HCI invites ethical interpretation is not surprising. HCI is a normative science that aims to improve usability. Conventionally, the normative sciences are divided into three: aesthetics, which deals with the look of things and the feelings they can evoke in users; logic, which deals with what is true; and finally ethics, which deals with what is good or bad, right or wrong. Hence there is a strong relation between HCI and ethics. In the coming sections, I will evaluate HCI through the lenses of different schools of ethics and justice.

Christian Ethics

Christian ethics refers to ethics historically drawn from Christianity (Cairns and Thimbleby, 2003). Ethics in general has deep religious roots, which have shaped what is considered right or wrong. In HCI itself, this branch of ethics has considerable influence. Why should we empower users? Why not exploit them and maximize corporate profit? Why should we maximize usability? Why not encourage users to buy upgrades and encourage market churn? Why should we allow users to correct their mistakes when using an interface? Why not penalize them for their mistakes? The traditional form of HCI is based on certain principles borrowed from Christian ethics. Where Christian ethics has principles for humans “made in the image of God”, HCI interprets these for its users. The basic assumptions of HCI are that users are diverse and valuable, and that it is worth developing new systems that improve the general human condition. Good HCI practitioners continue to improve their products for their users. These basic assumptions of HCI borrow from biblical ethics.

Many modern ethical dilemmas, such as euthanasia, cloning, genetic modification, and industrial pollution, arise from the inherent tension between raw, unchecked development in science and technology and implicit Christian ethical habits of thought.

Rawlsian Ethics

The second view of ethics that influences HCI is the Rawlsian school of thought (Cairns and Thimbleby, 2003). A standard question in ethics is “how and under what conditions can we do good?”. Rawls provides a thought experiment here and brings forth an idea of justice. Rawls asks in what sense the world can be just and, if so, how its rules can be defined. This line of thought is quite analogous to what HCI strives to do. HCI also aims to create ‘worlds’ for the user that are just and perfect. A device, such as a mobile phone, imposes a set of rules of interaction on the people who use it. In some sense, an HCI designer also wishes to develop a world in which justice is increased. Although Rawls is more concerned with the political nature of things, his principles of justice can also be applied to the way HCI functions.

The Rawlsian veil of ignorance. Image courtesy: The University of Texas at Austin

One of the most popular ethical theories ever proposed is Rawls’s ‘veil of ignorance’. The veil of ignorance is a moral reasoning device designed to promote impartial decision making by denying decision-makers access to potentially biasing information about who will benefit most or least from the available options. Rawls suggests that you imagine yourself in an original position behind a veil of ignorance. Behind this veil, you know nothing of yourself, your natural abilities, or your position in society; all individuals are simply specified as rational, free, and morally equal beings. From behind this veil, Rawls argues, humans can make impartial and fair ethical decisions. The same Rawlsian principle can be extrapolated to the field of HCI. In HCI too, we are behind a veil of ignorance, or at least try to be: we do not know what sort of users our systems will have. While designing systems and interfaces, designers and developers should avoid being biased by assumptions about who their users will be. While conducting user research, HCI researchers should take care not to base the features of their products on protected characteristics of users, such as race, sex, or gender. Unless designers consciously work under a Rawlsian veil of ignorance, they are susceptible to building systems they like, rather than systems that benefit their users.
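To make this concrete, here is a minimal Python sketch of what working ‘behind the veil’ might look like during user research: protected characteristics are stripped from a hypothetical study dataset before analysis, so design decisions cannot be conditioned on who the participants are. The dataset, column names, and choice of protected attributes are all assumptions for illustration.

```python
import pandas as pd

# Hypothetical user-research data; all column names and values are
# invented for illustration.
responses = pd.DataFrame({
    "participant_id": [1, 2, 3],
    "task_time_sec": [42.0, 55.5, 38.2],
    "satisfaction_1to5": [4, 3, 5],
    "race": ["A", "B", "C"],
    "gender": ["F", "M", "nonbinary"],
    "age": [23, 54, 31],
})

PROTECTED_CHARACTERISTICS = {"race", "gender", "age"}

def behind_the_veil(df: pd.DataFrame) -> pd.DataFrame:
    """Drop protected characteristics before analysis, so that design
    decisions cannot be conditioned on who the participants are."""
    return df.drop(columns=[c for c in df.columns if c in PROTECTED_CHARACTERISTICS])

# Analysts only ever see the de-identified frame.
analysis_ready = behind_the_veil(responses)
print(analysis_ready.columns.tolist())  # ['participant_id', 'task_time_sec', 'satisfaction_1to5']
```

Of course, the veil of ignorance is a moral stance, not just a data-handling step; the point of the sketch is that the stance can be built into the research pipeline itself.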

If a designer considered that, in the world where their system is imposed, they might be working on a user support line, or writing the system’s user manuals, or they might be actual users, then they would design with more consideration. In fact, for any successful system, designers are more likely to be ordinary users than any other category of user. A simple example shows how Rawls’s principle applies here. On Rawls’s account, developing user interfaces to make weapons of mass destruction easier to use would be unjust, since no designer would wish to live in the target population. Similarly, designing a social media network that can lead to severe addiction and damage to mental health is again unjust, since no designer would want to be subjected to such a platform themselves. This shows that many things that cannot be addressed through conventional Christian ethics can be addressed instead with Rawlsian ethics.

Medical Ethics

The third view of ethics in HCI is medical ethics, as suggested by Cairns and Thimbleby (2003). I recently read a very interesting book by Peter-Paul Verbeek titled ‘Moralizing Technology: Understanding and Designing the Morality of Things’. The author opens the book by making us think about the impact of ultrasound technology that lets us see a foetus in the womb. As innocuous and helpful as it is, it has had an immense impact on human lives and has, in a way, changed society. Because ultrasound and related technologies reveal the foetus early on, parents can learn the sex of the child. This has been a huge problem in countries such as India, where it has contributed to more than a thousand cases of female foeticide in the last decade. Parents can also decide whether they want to keep the baby after seeing more detailed tests done on the foetus. The article ‘The Last Children of Down Syndrome’ discusses how prenatal testing has changed the lives of parents and children alike. This example shows the inherent values and ethical dilemmas that devices carry with them.

Similarly, self-driving cars come with their own ethical dilemmas, which can be classified under medical ethics. Self-driving cars present a form of the trolley problem. Consider a hypothetical collision that would be lethal to a pedestrian if the car decides to save the passengers, and lethal to the passengers if the car decides to save the pedestrian. The self-driving car manifests a major ethical dilemma: Whose life is more valuable? Whose life should the car save? Who gets to decide this? This might sound like a far-fetched scenario, but rather scarily, this is how self-driving cars are being programmed now. For example, Mercedes’s self-driving cars are reportedly programmed to always save the passengers. Often, though not always, HCI practitioners are posed problems that concern life or death.
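To make the dilemma concrete, here is a hypothetical Python sketch contrasting a fixed ‘always save the passengers’ rule with a count-based utilitarian rule. This illustrates the two policies as decision rules only; it is not how any real vehicle is actually programmed, and the scenario model is deliberately oversimplified.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    PROTECT_PASSENGERS = "protect_passengers"
    PROTECT_PEDESTRIANS = "protect_pedestrians"

@dataclass
class CollisionScenario:
    passengers_at_risk: int
    pedestrians_at_risk: int

def fixed_rule_policy(scenario: CollisionScenario) -> Action:
    # A fixed rule in the spirit of the reported Mercedes stance:
    # always protect the vehicle's occupants, regardless of the numbers.
    return Action.PROTECT_PASSENGERS

def utilitarian_policy(scenario: CollisionScenario) -> Action:
    # A contrasting rule: protect whichever group is larger,
    # minimizing the total number of lives at risk.
    if scenario.pedestrians_at_risk > scenario.passengers_at_risk:
        return Action.PROTECT_PEDESTRIANS
    return Action.PROTECT_PASSENGERS

crash = CollisionScenario(passengers_at_risk=1, pedestrians_at_risk=4)
print(fixed_rule_policy(crash))   # Action.PROTECT_PASSENGERS
print(utilitarian_policy(crash))  # Action.PROTECT_PEDESTRIANS
```

The unsettling point is that whichever function ships, someone has answered the question “whose life should the car save?” in advance, on behalf of everyone involved.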

Medicine is very similar to HCI in one way: both are trying to improve people’s lives, and both have the knowledge to do so. Yet HCI has a lot to learn from medicine. In medicine, a new drug is released only after a long series of trials and testing. Only once a drug has been proven to be harmless, or its potential for good has been shown to far outweigh its potential for harm, is it allowed on the market for human use. The same is not done for many big-tech technologies, which are allowed to reach the market and be consumed by users without much thought for the effect or impact they will have on society. Facial recognition technology and self-driving cars are two examples. Another place where HCI can learn from medical ethics is in how research is done. If we are to improve HCI more widely, one place to start is our procedures for publishing and disseminating best practice. In medicine, any reliable knowledge is worth building on; in HCI practice, we are more often concerned with excitement and the business side of things than with the quality of the science and the ethics of doing it.

Utilitarian Ethics in HCI

Utilitarianism argues for maximizing happiness for the greatest number of people. Philosophers and technologists who hold a utilitarian view of HCI argue that technology exists to provide utility, and hence a rule-based utilitarian approach is ideal for it. For example, in the context of social media, many people argue that despite its many problems, social media allows people to connect with each other and hence maximizes happiness; from a utilitarian perspective, therefore, social media poses no ethical impediment. Big Data is also defended from a utilitarian perspective. Although Big Data has several issues, such as privacy, bias, and fairness, in the broader sense it enables services that increase utility for humans. Big Data also creates jobs and hence increases employment. The utilitarian argument is also used to protect users online from offensive or hateful content. When companies block or take down hateful posts, tweets, and comments on social media platforms, they are using the broad utilitarian argument that the maximum number of people should be protected, even if a small number of people feel their freedom of speech has been affected.
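As a toy illustration of this calculus, the sketch below decides whether to take down a post by weighing aggregate audience harm against a fixed cost assigned to restricting speech. The function, weights, and threshold are invented for illustration and are not drawn from any real platform’s policy.

```python
def should_remove(reach: int, harm_per_viewer: float, speech_cost: float = 100.0) -> bool:
    """Rule-utilitarian sketch: remove a post when the aggregate expected
    harm to its audience outweighs a fixed cost assigned to restricting
    the speaker's expression. All numbers are illustrative assumptions."""
    aggregate_harm = reach * harm_per_viewer
    return aggregate_harm > speech_cost

# A widely shared hateful post: large reach, non-trivial harm per viewer.
print(should_remove(reach=50_000, harm_per_viewer=0.01))  # True -> take it down
# A post only a handful of people see: the speech interest prevails.
print(should_remove(reach=200, harm_per_viewer=0.01))     # False -> leave it up
```

The hard ethical work, of course, hides in the numbers: who estimates harm per viewer, and who sets the cost of silencing a speaker.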

Libertarianism in HCI

In technology, and in HCI as well, the philosophy of libertarianism (distinct from political libertarianism) has been contentious. Technology has given rise to the political movement of ‘technolibertarianism’ in Silicon Valley, which argues for unregulated and uncensored use of the internet, free from government interference. Similarly, in HCI, libertarian philosophy applies when designers discuss giving complete control and power to the user of a system. Giving users complete control over a digital system comes with advantages and disadvantages. Designers often observe that users do not know what they want; in such cases, giving complete control of a system to the user can lead to several problems. On the other hand, allowing unrestricted, free navigation of a digital system can also increase its usability.

Many techno-libertarian philosophers also argue that restrictions and barriers in technology can hamper progress. Government restrictions in the field of AI, for example, can slow progress in the field. Libertarian advocates also argue that the lack of libertarian designs impedes the free will and rights of the people; the censorship of hate speech on social media platforms, for example, is often cited as restricting freedom of speech online. Arguments against giving users unlimited freedom within a system hold that such freedom can be used for the wrong purposes. For example, platforms such as Gab, which allow unrestricted and uncensored speech on the internet, have become platforms for extreme hate speech.

Justice from Iris Marion Young in HCI

The Five Faces of Oppression, by Iris Marion Young

In her book ‘Justice and the Politics of Difference’, Iris Marion Young introduces the five faces of oppression. Larger or more dominant social groups can control minority groups in five major ways: exploitation, marginalization, powerlessness, cultural domination, or violence. Nearly every group identified as oppressed in the modern era can relate to these five aspects of oppression. The ideas of exploitation, powerlessness, and cultural domination can be extended to the field of HCI as well. I discuss them below.

Exploitation

What stops HCI professionals from exploiting users through their designs? Why not exploit user needs to generate profit and increase market churn? An underlying assumption that guides HCI is that users should not be harmed in any manner by the products developed for them, and that HCI professionals strive to make users’ lives better and easier. But HCI professionals do not always stick to these principles. Ideas in the emerging area of dark patterns in UX are geared towards exploiting users’ weaknesses to benefit the companies providing these services. Websites that deceive users into performing actions they did not originally intend are one example. Even the constant advertisements on YouTube can be argued to be exploitation: for many, YouTube is the only source of education through online courses and classes, and the platform can be said to exploit this need by showing advertisements that interrupt the continuity of watching a video. Social media platforms exploit the basic human need to maintain social connections in order to gather data, which is later sold as a product to third parties.
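One of the simplest dark patterns lives in default values. The hypothetical sketch below contrasts a deceptive signup configuration, where data sharing is pre-enabled, with an honest one; the field names and structure are assumptions for illustration, not any real service’s code.

```python
from dataclasses import dataclass

@dataclass
class DarkSignupDefaults:
    email: str
    # Dark pattern: pre-checked opt-ins exploit users who skim past defaults.
    newsletter_opt_in: bool = True
    share_data_with_partners: bool = True

@dataclass
class HonestSignupDefaults:
    email: str
    # Honest design: consent requires an active, deliberate choice.
    newsletter_opt_in: bool = False
    share_data_with_partners: bool = False

# A hurried user who accepts the defaults ends up in very different places:
print(DarkSignupDefaults(email="user@example.com").share_data_with_partners)    # True
print(HonestSignupDefaults(email="user@example.com").share_data_with_partners)  # False
```

The two classes differ by two boolean literals, yet one quietly harvests consent the user never gave; that asymmetry is the essence of the pattern.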

Powerlessness

Powerlessness ties into the issue of exploitation mentioned above. HCI aims at empowering users to take control of the system; empowering users is one of its guiding principles. The design of a system should not make users feel helpless or powerless in any way. HCI also plays a very important role in public policy development around technology. Here, it is important to ensure that users of all communities, groups, and demographics are equally empowered. Bias and fairness issues in AI can also leave certain sections of the population feeling powerless.

Cultural Dominance

HCI as a field has largely developed in the West, so it is no surprise that much of the innovation and development in HCI happens there. The inability of designers to make their designs inclusive of users of all demographics and abilities can be a form of cultural dominance. For example, when a user is forced to use a popular app in English because the app is not offered in any other language, that is a form of linguistic and cultural dominance in HCI. When designing and developing digital systems, HCI professionals must make sure that they are designing for their users and not for themselves. Universally accessible and usable systems help designs become more inclusive and avoid injustice through cultural dominance.

Values

No discussion of ethics in HCI is complete without a discussion of values. Human values are the driving force of ethics. HCI strives to affirm existing human values and to meet human needs. But values also bring up some difficult questions. Which values count? Who gets to decide? Which values are more important than others? Whose values matter most? These questions become increasingly difficult to answer as we think about issues such as privacy, self-driving cars, and AI assistants, and distinguishing values from ethics gets harder and harder. Values play a crucial role in informing a designer’s decisions, whether the designer is cognizant of it or not. Friedman and Kahn offer three positions on how values get implicated in technology design: the embodied, exogenous, and interactional positions.

The embodied position holds that designers inscribe their intentions and values into technology, and once this technology is released into the world, it can influence human behaviour. This is very similar to Latour’s idea of designers determining human behaviour through the design of artefacts, like his famous lock, and also to Langdon Winner’s idea of artefacts having, in other words embodying, politics. The way Georgia Tech students and faculty must pass through two-factor authentication via the Duo app to access any resource is also a way of enforcing a particular kind of behaviour, one that ensures data remains secure. This, again, is embodiment at work.
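The behaviour-shaping mechanism is easy to see in code. Below is a minimal sketch of the time-based one-time password (TOTP) scheme, standardized in RFC 6238, that authenticator apps such as Duo build on; it is a generic illustration of the mechanism, not Duo’s actual implementation.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Generate a time-based one-time password (RFC 6238).

    Both the server and the user's device derive the same short-lived
    code from a shared secret and the current time; demanding that code
    is what forces the 'prove you hold the second factor' behaviour."""
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval          # changes every `interval` seconds
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# "JBSWY3DPEHPK3PXP" is a commonly used demo secret, not a real credential.
print(totp("JBSWY3DPEHPK3PXP"))
```

The value judgment, that data security outweighs the user’s convenience, is inscribed directly into the protocol: there is simply no path through the system that skips the second factor.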

The exogenous position holds that societal forces, such as economics, politics, race, class, gender, and religion, significantly shape how a technological artefact will be used. Facial recognition technology is a great example: although originally developed to aid security, it has been put to nefarious uses by political and economic forces. This position is very similar to Actor-Network Theory, in which each actor exists with other actors in constantly shifting networks. How a given technology will be used is thus highly subject to the external forces that dictate its use.

The last position is the interactional position. It holds that whereas the features or properties that people design into technologies more readily support certain values and hinder others, a technology’s actual use depends on the goals of the people interacting with it. The interactional position in HCI focuses on the iterative process through which products are refined to meet user needs. It helps us understand why certain technologies get accepted by users while others get rejected, and it helps us better study the socio-cultural interactions around a product. Two sides of the interactional position have been emphasized in the literature. One focuses on the properties of the product: as system designers in HCI, we can choose to construct a technological infrastructure that disabled people can access, and if we do not make this choice, we can single-handedly undermine the human value of universal access. The other focuses on the positioning of technology within socio-organizational infrastructures. Regardless of the side we choose, the interactional position emphasizes the importance of the social and cultural contexts within which technology is used and deployed.

In a similar line of thought, in ‘Do Artifacts Have Politics?’, Langdon Winner discusses how artefacts themselves embody political and authoritative notions. While some artefacts are widely believed to be adaptable to a range of social structures, others are thought to work well only in conjunction with specific systems of power and authority (Winner, 1980). The author further states that certain technologies are inherently autocratic and require particular social structures for their implementation; the atom bomb is one such example. Winner’s arguments are important to both creators and consumers of new technology, and he points out that the political nature of certain technologies has been invoked by both ends of the political spectrum.

A frequent challenge is connecting these human values to the ethical imports discussed earlier, while at the same time maintaining good usability standards and striving for better user experience. Friedman and Kahn suggest four pairwise relationships between usability and human values:

  1. A design is good for all three: usability, human values, and ethical import.
  2. A design is good for usability, but at the cost of human values with ethical import.
  3. A design is good for human values with ethical import, but at the expense of usability.
  4. Good usability in a design is required to support human values and ethical import.

In ‘Values as Hypotheses: Design, Inquiry, and the Service of Values’, Nassim Parvin and colleagues suggest that values serve as hypotheses by which a situation can be examined, revealing possible courses of action and their effects. Values are not applied to situations, but rather serve situations as hypotheses. The authors also bring up the idea of the pluralism of values: the same values are at times helpful and productive, but in other situations and at other times can be problematic. Pluralism holds both an appreciation of and a scepticism towards values, recognizing that no single correct interpretation of values can serve all situations.

The relationship between ethical imports and values has an important role to play in HCI. Together they form the basic underlying guidelines for how HCI professionals should act, and they provide different perspectives from which to design and develop digital products. The current trend in HCI is a promising and hopeful one: more and more people are engaging with ethics in HCI, which can lead to more favourable, equitable, just, and fair products and technologies in the future. As technology and computers become ever more pervasive and ubiquitous, we need to make sure that these technologies are developed and deployed ethically, and that no user or group of users comes to any form of harm as a result. An ethical turn in HCI is not new, HCI being a field rooted in accessibility and usability, but only recently have we started to understand the importance of assessing the ethics of artefacts. As the relationship between humans and computers deepens, ethics and justice have a crucial role to play in HCI innovation and research. Determining ‘what’s the right thing to do’ is not always easy, but it is essential for a more just, transparent, and equitable future of HCI.

Image courtesy: Mozilla, ‘Ethics in the tech industry matter’

References

I highly suggest these papers for anyone interested in Ethics and HCI.

Cairns, Paul, and Harold Thimbleby. “The diversity and ethics of HCI.” Computer and Information Science 1.2003 (2003): 1–19.

https://www.wired.com/insights/2013/01/the-utilitarian-side-of-big-data/

Friedman, Batya, and Peter H. Kahn Jr. “Human values, ethics, and design.” The human-computer interaction handbook (2003): 1177–1201.

Egger, Florian N. “Deceptive technologies: cash, ethics, & HCI.” ACM SIGCHI Bulletin-a supplement to interactions 2003 (2003): 11–11.

JafariNaimi, Nassim, Lisa Nathan, and Ian Hargraves. “Values as hypotheses: design, inquiry, and the service of values.” Design issues 31.4 (2015): 91–104.

Kisselburgh, Lorraine, et al. “HCI Ethics, Privacy, Accessibility, and the Environment: A Town Hall Forum on Global Policy Issues.” Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. 2020.

Knouf, Nicholas A. “HCI for the real world.” CHI’09 Extended Abstracts on Human Factors in Computing Systems. 2009. 2555–2564.

Knight, John. “Ethics and HCI.” Information Security and Ethics: Concepts, Methodologies, Tools, and Applications. IGI Global, 2008. 231–237.

McMillan, Donald, Alistair Morrison, and Matthew Chalmers. “Categorised ethical guidelines for large scale mobile HCI.” Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 2013.

Parvin, Nassim, and Anne Pollock. “Unintended by Design: On the Political Uses of ‘Unintended Consequences’.” Engaging Science, Technology, and Society 6 (2020): 320–327.

Shilton, Katie. “Values and ethics in human-computer interaction.” Foundations and Trends in Human-Computer Interaction 12.2 (2018).

Waycott, Jenny, et al. “Ethical encounters in HCI: Research in sensitive settings.” Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems. 2015.

Winner, Langdon. “Do Artifacts Have Politics?” Daedalus 109.1 (1980): 121–136. Available at: http://innovate.ucsb.edu/wp-content/uploads/2010/02/Winner-Do-Artifacts-Have-Politics-1980.pdf

Book Suggestion

Moralizing Technology by Peter-Paul Verbeek is an excellent book for diving deep into ethics and morals of Technology. I highly recommend it.

