A discussion of the ethical framework around AI
A3 Communications’ IT Question Time is a half-day event loosely modelled on the TV debate programme ‘Question Time’, in which up to four panellists (vendors, resellers, service providers, etc.) discuss the issues surrounding a newsworthy subject. It’s in a round-table format with independent speakers, a moderator and a selected group of highly influential journalists, bloggers and analysts. I recently attended one such event, addressing the ethical issues around AI.
First, the bad news: we couldn’t really agree on what AI was, let alone what ethics was, which might be thought a real barrier to establishing ethics for AI. Nevertheless, while I do think that people have to define their terms before discussing AI and ethics, I don’t think that problems with definitions will matter much in practice – as long as the definitions used for any discussion are both accessible and clear.
For the purpose of this blog I’ll define AI as “making a machine behave in ways that would be called intelligent if a human were so behaving” – this is based on McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (2006). A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine, 27(4), 12. We mostly talked about the large language models used in tools such as ChatGPT.
Defining ethics is left as an exercise for the reader (attempts have been going on for two thousand years or more already). I’ll use “doing things right” as an acceptable working definition.
We all agreed that AI can look clever, but it is a long way from being sentient – it has no agency. Someone has to switch it on and tell it what to do, but it returns answers in a good simulation of a human actor, which may cause people to over-estimate the value of what it is doing. Think of things like ChatGPT today as a human-like natural language interface to advanced computational analytics. When it writes essays for students, it generates the essay word by word, without comprehension, using its training on vast amounts of data to decide which word is statistically most likely to come next. If a citation is called for, it will put one in, in the correct citation format, but possibly pointing to a work that has never existed.
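To make that “statistically most likely next word” idea concrete, here is a deliberately toy sketch in Python, using a made-up three-sentence corpus – nothing like the neural networks and sub-word tokens a real large language model uses – that generates text purely by picking whichever word most often followed the previous one in its training text:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the "vast amounts of data" an LLM is trained on.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . the cat chased the dog ."
).split()

# Count which word follows each word in the training text.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def generate(start, length=8):
    """Repeatedly pick the statistically most likely next word."""
    words = [start]
    for _ in range(length):
        counts = next_word_counts.get(words[-1])
        if not counts:
            break
        words.append(counts.most_common(1)[0][0])
    return " ".join(words)

print(generate("the"))  # fluent-looking output with no comprehension behind it
```

The output reads fluently enough, but nothing in the code understands what a cat or a mat is – which is exactly the point.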
The group spent some time discussing the limitations of this approach, starting with the issue that most of the information in the world will eventually have been produced by an AI, and AIs will start feeding on themselves. It is not clear how they will behave then, but I suspect that they might double down on the status quo and reject innovation.
Bias is another obvious problem – if we have only ever promoted men to the top jobs, the AI promotion bot will learn this and continue the sex discrimination. If we try to tell it that we now want to promote women, it may still “decide”, on the basis of the data it was trained on, that we really prefer men in top jobs, and just promote men anyway. It is pretty hard for humans to work out exactly how large language models produce any particular result, and there are plenty of examples of AIs that emulated the actual behaviour of an organisation in real life, not the normative behaviours that its corporate image required.
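To see how that happens mechanically, here is a minimal, entirely hypothetical sketch – assuming Python with NumPy and scikit-learn, and fabricated records rather than any real promotion system – in which a simple model trained on a history that only ever promoted men learns to key on gender and ignore performance:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Fabricated historical promotion records: gender (1 = man, 0 = woman)
# and a performance score for 1,000 past candidates.
n = 1000
gender = rng.integers(0, 2, size=n)
performance = rng.normal(size=n)

# A deliberately extreme history: only men were ever promoted.
promoted = (gender == 1).astype(int)

X = np.column_stack([gender, performance])
promotion_bot = LogisticRegression().fit(X, promoted)

# The model reproduces the historical bias: it keys on gender and
# effectively ignores performance.
print(promotion_bot.predict([[0, 2.0]]))  # [0] - strong female candidate rejected
print(promotion_bot.predict([[1, 0.0]]))  # [1] - average male candidate promoted
```

Swap the simple regression for a far more opaque large language model and the mechanism is the same – only much harder to inspect.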
Then there is the intellectual property issue. Does the data being used to train the AI belong to the company running the AI or must it be licensed from somewhere else? Have people given meaningful consent for the use of their data to train AIs? Does training AIs compromise data security and privacy – and if the more litigious or privacy-obsessed subjects remove their data from AI training datasets, will that bias the training?
It seemed clear to us that some sort of ethical guidelines – guardrails – for the use of AI will be needed and that these will be culturally or geographically specific. One size will not fit all, and what is ethically acceptable in Beijing will not necessarily be acceptable in New York. And, of course, assuming that Judaeo-Christian norms have universal application just isn’t going to work.
The fact that the AI is a black box may not matter – you can always audit around a black box. You don’t need to know how or why an AI has become sexist to recognise that only men are being promoted and that this isn’t ethically acceptable in your social environment. You can then switch the AI off until it is retrained – and/or until you’ve made your organisation less sexist. This does assume that people will question and validate AI-based outcomes; the risk is that they become lazy and just accept what the computer tells them.
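As a sketch of what auditing around the black box might look like, the following hypothetical Python snippet ignores the model entirely and just compares selection rates across groups in its logged decisions. The 80% threshold is the “four-fifths rule” heuristic often cited in US employment-discrimination practice; your own ethical or legal framework may demand something different.

```python
# Audit "around" the black box: ignore the model's internals and simply
# compare outcomes across groups in its logged decisions.

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs, selected True/False."""
    totals, chosen = {}, {}
    for group, selected in decisions:
        totals[group] = totals.get(group, 0) + 1
        if selected:
            chosen[group] = chosen.get(group, 0) + 1
    return {g: chosen.get(g, 0) / totals[g] for g in totals}

def audit(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold` times
    the best group's rate (the commonly cited four-fifths heuristic)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    flagged = {g: r for g, r in rates.items() if best and r / best < threshold}
    return rates, flagged

# Hypothetical decision log from the (black-box) promotion AI.
log = ([("men", True)] * 40 + [("men", False)] * 60 +
       [("women", True)] * 10 + [("women", False)] * 90)
rates, flagged = audit(log)
print(rates)    # {'men': 0.4, 'women': 0.1}
print(flagged)  # {'women': 0.1} - time to switch the AI off and retrain
```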
The law around AI may not be as big a problem as one might expect, in the UK anyway. Our judge-made law, based on precedent, may be able to adapt more easily to rapidly evolving technology than more codified legal schemes. It was even pointed out that ancient Roman law might give us pointers to the law covering AIs. Two thousand years ago, Roman slaves could do business in the forum on behalf of their owners, but the owner remained responsible – accountable – for their actions (you can delegate action but not responsibility). Something similar might apply to AIs – if an AI produces an unethical outcome, there is no point in blaming the AI (or its designer); the human organisation choosing to use the AI in its systems must be accountable.
So, where does all this get us? I don’t think it changed my views much, but the first thing I’d suggest for an ethical framework for AI is to encourage this sort of open, no-blame discussion of the relevant issues in your organisation, with decision-makers (managers) taking part. AI ethics will reflect the culture of your organisation – so the ethical issues around AI must be discussed in the context of the organisation’s ethics generally, and in the context of societal ethics as a whole. Then, perhaps, look for some externally validated ethical framework that accords with your culture. Obviously, get it from a defensible source, such as a government organisation or industry body. Examples might be:
- Choosing something with a wider application than just AI [link]
- Making an AI impact assessment [link]
- Adopting an Industry Code of Conduct [link]
- Looking at Codes of Conduct specific to particular industry segments, such as the “New code of conduct for artificial intelligence (AI) systems used by the NHS” on GOV.UK
- Looking at training and certification as a basis for establishing AI Ethics [link]
Well-chosen ethical frameworks and codes of conduct are, in my opinion, necessary for AI governance, but they are not sufficient. You will need to document their practicable application to your organisation, preferably without using an AI to write this for you. This will mean establishing formal policies, training courses, enforcement policies – and no-blame feedback mechanisms, so that emerging issues can be identified and addressed without risk to those raising the issues.
I don’t have room here to go into detail about what this might mean in practice, but – thanks to Andy Hayler at Bloor for finding this – here is a “set of fictional case studies that are designed to prompt reflection and discussion about issues at the intersection of AI and Ethics. These case studies were developed out of an interdisciplinary workshop series at Princeton University that began in 2017-18. They are the product of a research collaboration between the University Center for Human Values (UCHV) and the Center for Information Technology Policy (CITP) at Princeton”. I think that reviewing these fictional case studies could effectively kick-start discussion of AI and ethics in any organisation.
Finally, don’t overlook any legal obligations you may have. These may well differ markedly in different jurisdictions – and following the most restrictive regulations everywhere may be safe and straightforward, but it may also limit your ability to compete in less regulated jurisdictions.
Basically, my takeaway from this roundtable was that if ethics is already fundamental to your organisation, extending this to cover AI won’t be too difficult. If not, you’ll have some extra work to do, and just publishing an AI ethics checklist that you found on the Web probably won’t work. What you should do is provide a discussion-based feedback path for ethical issues as they arise; what you mustn’t do is set rules for an emerging technology too early. And, of course, think about AI ethics before you start developing and using AI, not only after an embarrassing AI-related ethical failure has hit the press. AI ethics must be built in, not bolted on.