Oscar Peace

Does machine agency affect trust and responsibility and what can we do about it?

First year literature review



The following was written for a piece of coursework in which we were tasked with writing a literature review on one of several themes, answering a question of our own that fitted the theme. All of the themes covered different aspects of artificial intelligence, mostly relating to its impact on society. The question I chose is the title of this post.


Chosen theme: The Challenge of Machine "Agency"

For as long as machines have existed, they have amplified human influence on the world. The rise of artificial intelligence through the late 20th and early 21st centuries has disrupted this long-standing relationship between humankind and machine. This review examines whether the integration of artificial intelligence into machines affects trust and responsibility, and what we can do to control it.

There are areas where artificial intelligence and automation have already been, or could be, widely adopted. Industries such as mass manufacturing already use automation extensively with few adverse consequences. Moreover, Graeber (2018) proposes that many occupations do not need to exist, or could simply be replaced by automation. Graeber also argues that many people in these roles become nihilistic because they feel that their labour is pointless. If these jobs are pointless and performing their tasks is futile, then replacing them with machines should - in theory - have no effect on responsibility or trust. Furthermore, Frey and Osborne (2017) predict that 47% of jobs are at risk of being automated within a few decades. According to their study, roles which deal extensively with human heuristics are, in general, at the lowest risk of being computerised. Crucially, however, a significant number of roles with a computerisation probability $\gt0.9$ still involve making what are arguably consequential decisions, and thus have an effect on trust and responsibility.

The technology acceptance model proposes that people are more willing to use technology if they perceive it as both useful and easy to use (Davis, 1989), and the model has been supported empirically (Szajna, 1996). However, it can be argued that this model no longer serves its purpose for intelligent systems, as opposed to systems whose decision making is more deterministic. An intelligent system inherently increases the opacity of the process behind any decision, and hence undermines the perceived usefulness of the system to the user; Vorm and Combs (2022) therefore suggest a revised model, in which trust and transparency are added as mediators of a system's perceived usefulness and efficacy. One model which encompasses ability (and hence agency) as one of its mediators - although not originally developed for technology - is Mayer, Davis and Schoorman's (1995) model of organisational trust. In Mayer's model, the more ability a person believes another person (or, in our case, a machine) has, the more willing they are to trust that person or machine.

Artificial intelligence (AI) is already used widely in clinical settings, for example to diagnose diseases early and to aid the development of new drugs. One system used to identify cases of breast cancer achieved a 9.4% reduction in false negatives in the United States (McKinney et al., 2020). AI has also been used to facilitate the development of genomic medicines - medicines that are better targeted, sometimes tailored to an individual's DNA (Roth, 2019). This is not to say that the use of AI in healthcare comes without drawbacks, with concerns about the data privacy and security of such systems perhaps being the most important (Alowais et al., 2023). Research by McKinsey found that 87% of consumers would stop engaging with a company if they had concerns about its security practices. While security is not directly related to agency, it is important to consider what might happen if a malicious model were to gain access to a person's data.

Another area in which the trust and agency of both machines and humans is pressing is warfare. AI is already used in conflict to make decisions on behalf of humans (Davies et al., 2023); however, humans still ultimately carry out those decisions, leading to a legal grey area of shared accountability. This in turn raises a question of responsibility, which Popa (2021) offers one way of resolving, arguing that for a machine to have agency it must first have human goals. On the other hand, Swanepoel and Corks (2024) argue that current AI systems cannot have their own agency, as they cannot overcome what the authors call “tie-breaking” without resorting to existing logical processes. Whilst Swanepoel and Corks's argument holds for now, there is no reason to expect that models will stop advancing; one researcher even goes as far as to say that an “AI Fukushima is inevitable” (Sample, 2024). The possibility of an “AI Fukushima” raises important questions about what we must do to control and govern artificial intelligence.

Loi and Spielkamp (2021) suggest that a pathway towards greater control over AI could be enabled by public transparency, with audits of AI organisations enabling democratic scrutiny; they also argue that audits combined with full transparency incentivise ethical behaviour. De Fine Licht and De Fine Licht (2020) instead argue that full AI transparency may not always be the best approach. They favour a “fire alarm” over a “police patrol”: providing an explanation when one is required is better than full transparency, which may allow malicious actors to “game the system” and also inhibits the development of artificial intelligence. As for legislation enforcing this approach, Selbst and Powles (2017) argue that the European Union's GDPR provides a legislative foundation for a “right to an explanation” - providing “meaningful information” about the logic involved in a decision - which in turn facilitates De Fine Licht and De Fine Licht's “fire alarm” proposition. As for existing legal precedent, in the “SyRI case” (ECLI:NL:RBDHA:2020:1878, Rechtbank Den Haag, 2020) - SyRI being a system used by the Dutch government to detect welfare fraud - the court found that the system not only violated individuals' right to privacy but was also insufficiently transparent. Crucially, however, the case did not base its findings on the GDPR but instead relied on Article 8 of the ECHR, an individual's right to privacy. Large systems such as SyRI are invasive and thus serve to undermine trust.

In conclusion, both the agency and the use of machines combined with AI have effects on trust and responsibility. Decisions made by AI can have positive or negative consequences; however, in a significant proportion of cases the effects on trust and responsibility are undesirable. To control the effects of machine agency, it is important to develop new frameworks and legislation for the governance of artificial intelligence.



Bibliography


Alowais, S. A., Alghamdi, S. S., Alsuhebany, N., Alqahtani, T., Alshaya, A. I., Almohareb, S. N., Aldairem, A., Alrashed, M., Bin Saleh, K., Badreldin, H. A., Al Yami, M. S., Al Harbi, S., & Albekairy, A. M. (2023). Revolutionizing healthcare: The role of artificial intelligence in clinical practice. BMC Medical Education, 23(1), 689. https://doi.org/10.1186/s12909-023-04698-z

McKinsey & Company. (n.d.). The consumer-data opportunity and the privacy imperative. Retrieved 19 January 2025, from https://www.mckinsey.com/capabilities/risk-and-resilience/our-insights/the-consumer-data-opportunity-and-the-privacy-imperative

Davies, H., McKernan, B., & Sabbagh, D. (2023, December 1). ‘The Gospel’: How Israel uses AI to select bombing targets in Gaza. The Guardian. https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets

Davis, F. D. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly, 13(3), 319. https://doi.org/10.2307/249008

De Fine Licht, K., & De Fine Licht, J. (2020). Artificial intelligence, transparency, and public decision-making: Why explanations are key when trying to produce perceived legitimacy. AI & SOCIETY, 35(4), 917–926. https://doi.org/10.1007/s00146-020-00960-w

ECLI:NL:RBDHA:2020:1878, Rechtbank Den Haag, C-09-550982-HA ZA 18-388 (English), No. ECLI:NL:RBDHA:2020:1878 (Rb. Den Haag 5 February 2020). https://deeplink.rechtspraak.nl/uitspraak?id=ECLI:NL:RBDHA:2020:1878

Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280. https://doi.org/10.1016/j.techfore.2016.08.019

Graeber, D. (2018). Bullshit Jobs: A theory. Allen Lane, an imprint of Penguin Books.

Loi, M., & Spielkamp, M. (2021). Towards Accountability in the Use of Artificial Intelligence for Public Administrations. Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, 757–766. https://doi.org/10.1145/3461702.3462631

Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An Integrative Model of Organizational Trust. The Academy of Management Review, 20(3), 709. https://doi.org/10.2307/258792

McKinney, S. M., Sieniek, M., Godbole, V., Godwin, J., Antropova, N., Ashrafian, H., Back, T., Chesus, M., Corrado, G. S., Darzi, A., Etemadi, M., Garcia-Vicente, F., Gilbert, F. J., Halling-Brown, M., Hassabis, D., Jansen, S., Karthikesalingam, A., Kelly, C. J., King, D., … Shetty, S. (2020). International evaluation of an AI system for breast cancer screening. Nature, 577(7788), 89–94. https://doi.org/10.1038/s41586-019-1799-6

Popa, E. (2021). Human Goals Are Constitutive of Agency in Artificial Intelligence (AI). Philosophy & Technology, 34(4), 1731–1750. https://doi.org/10.1007/s13347-021-00483-2

Roth, S. C. (2019). What is genomic medicine? Journal of the Medical Library Association, 107(3). https://doi.org/10.5195/jmla.2019.604

Sample, I. (2024, November 22). ‘An AI Fukushima is inevitable’: Scientists discuss technology’s immense potential and dangers. The Guardian. https://www.theguardian.com/science/2024/nov/22/an-ai-fukushima-is-inevitable-scientists-discuss-technologys-immense-potential-and-dangers

Selbst, A. D., & Powles, J. (2017). Meaningful information and the right to explanation. International Data Privacy Law, 7(4), 233–242. https://doi.org/10.1093/idpl/ipx022

Swanepoel, D., & Corks, D. (2024). Artificial Intelligence and Agency: Tie-breaking in AI Decision-Making. Science and Engineering Ethics, 30(2), 11. https://doi.org/10.1007/s11948-024-00476-2

Szajna, B. (1996). Empirical Evaluation of the Revised Technology Acceptance Model. Management Science, 42(1), 85–92. https://doi.org/10.1287/mnsc.42.1.85

Vorm, E. S., & Combs, D. J. Y. (2022). Integrating Transparency, Trust, and Acceptance: The Intelligent Systems Technology Acceptance Model (ISTAM). International Journal of Human–Computer Interaction, 38(18–20), 1828–1845. https://doi.org/10.1080/10447318.2022.2070107
