The Risks of Artificial Intelligence

Written by Rabbi Elliott Karstadt — 11 November 2024

There is much I could talk about this week – quite a lot has happened in the world. Whether it is a Presidential election in the US, appalling and concerning attacks on Jewish football fans in Amsterdam, or the situation in the Middle East that continues to take so many lives and upend so many people’s reality – there is much in the present moment that occupies our minds and gives us cause to worry about the future. But today I wanted to speak about something a bit more long-term, and that is the question of artificial intelligence – what it might offer us and the risk that it poses. I think the Jewish response to AI is actually one that might also help us in the here and now, with many of the challenges we face.

We will all be aware of the huge advances that have been made in artificial intelligence over the last few years, particularly with the release of the most recent versions of ChatGPT. About 18 months ago I asked ChatGPT to write a one-thousand-word sermon on Parashat Bo in the book of Exodus in the voice of a progressive rabbi from the UK, including a quotation from the Talmud and a quotation from a more contemporary source. (I should say, I did this as an academic exercise, not because I was stuck for a sermon!) The thousand words I got back were a passable version of a sermon I might give, based on the prompt I had given to ChatGPT, along with the AI’s reading of all the progressive sermons that are stored online. It used all of those to create something that mirrored, in a very generic way, the kind of sermon that I or one of my colleagues might give, though obviously not in my voice, and without any up-to-date information about the vibe in the room when I would give it.

And this last consideration – the information about the people in the room – became clear when I asked ChatGPT to re-write the sermon, this time with a surprise ending. The result was basically the same, except that the bot now ended the sermon with a paragraph beginning: ‘and now I want to surprise you.’ (Not always the best way to surprise someone, telling them you are going to do it, but anyway…) The thing that ChatGPT thought would surprise you was that not everyone is born Jewish – some people convert to Judaism. Not at all something that members (or, I would venture, guests of the synagogue) would necessarily be surprised by.

Things got worse when I asked ChatGPT to include a joke. Rather than weaving a joke into the sermon, the bot simply broke off in the middle of a paragraph and told one – a funny joke taken on its own, but not one that worked in the context of the sermon it was writing. All of this was incredibly comforting at the time. I thought: phew, AI is not supplanting me in my role quite yet.

That was 18 months ago. AI is designed to learn over time, so when I came back to ChatGPT this week and put in the same prompts, it was a little disconcerting to discover that the sermons it produced for me were much more subtle, much more tuned in, than the ones it had produced before. They were still in a generic voice, not my voice. They still did not know anything about the vibe, or the people in the congregation. But it had got much better at predicting those things. It still said, ‘now I want to surprise you’, but this time the surprise ending was much more about the very idea that there is no single Jewish identity, and I at least felt that it would have been much more fitting as a sermon to give at Alyth than the one ChatGPT had written 18 months ago. When it came to the jokes – even better. The jokes were now properly integrated into the sermon, much more relevant to its overall topic, and clearly served a purpose in the message it was trying to convey.

Machine learning really is effective. And it will continue to learn and continue to develop exponentially. Eventually, AI will be able to tell me how best to surprise you. The idea is that the more data machines are able to gather about us, the more information they have about every aspect of our being, the better they will be able to predict our thoughts and our feelings. They will know us (so the theory goes) better than we know ourselves.

AI promises a huge amount, and it does have the potential to be an incredible tool for humanity. Whether it is solving some of the big issues facing our planet, like climate change, or curing diseases, the computing and learning power of artificial intelligence has the ability to transform life on this planet.

Many of those who want to market artificial intelligence have begun to talk about it as almost God-like in its ability to know us and prompt us to action in the world.

But there is also a risk that comes with that. And I’m not talking about the risk of a machine takeover in which we are somehow enslaved to the machines, or wiped out by them (though these are of course risks that we should be considering). The risk is instead that, in the belief that these robots understand us better than we understand ourselves, we delegate to them the task of answering what it means to be human. And part of what it means to be human is precisely that we are all individuals, from specific places, with specific personalities and traits.

In 2017, Amazon had to scrap its internal AI recruitment tool because its learning model, which was trained on historical data, had developed (of its own accord) a bias against women applicants. Not because it saw a label ‘woman’ in a box on the file, but because it could identify and penalise subtle proxies for gender – like seeing on a CV a university predominantly attended by women, or extra-curricular societies often joined by women.

The history of racism in the western world, and the way in which it is recorded in the digitised history of ourselves, means that we run real risks. In the US, for example, an algorithm developed to help hospitals predict which patients needed the most urgent care taught itself to divert care away from high-risk black patients and towards low-risk white patients. Again, this was not because someone had intentionally gamed the system by setting these parameters – it is because AI is nothing but a mirror. It is a mirror of all the prejudices that have been present in western society and which have been recorded, explicitly and implicitly, in the history that we have digitised and made available to these AI tools.

As the Edinburgh philosopher Shannon Vallor argues, ‘This is precisely what machine learning models are built to do – find old patterns, especially those too subtle to be obvious to us, and then regurgitate them in new judgments and predictions. When those patterns are human patterns, the trained model output can not only mirror but amplify our existing flaws.’[1] She goes on to say: ‘AI mirrors thus don’t just show us the status quo. They are not just regrettable but faithful reflections of social imperfection. They can ensure through runaway feedback loops that we become ever more imperfect.’[2] In other words, all AI can be in its current iteration is a reflection of what humanity has been. It cannot tell us what we might become – unless it is simply as a projection into the future of where we have already been.

Our Torah reading this morning began with God instructing Abram: lech lecha. There is much disagreement amongst commentators as to how we should translate that phrase. Lech is the imperative ‘go’, but with lecha attached it could become ‘go for yourself’, ‘go with yourself’, ‘go into yourself’ – there are so many interpretations and so many implications as to what that command really means. But regardless of how you translate the phrase, for most people the way to understand the command is that Abram is being told to strike out, to set forth, away from the land of his birth and his upbringing, away from his parents, and away from everything he knows. In the words of the late Rabbi Jonathan Sacks: ‘Lech Lecha means: Leave behind you all that makes human beings predictable, unfree, able to blame others and evade responsibility.’[3] We human beings have to take responsibility for our own fate.

In order to tell us what it means to be human, artificial intelligence needs us to be predictable – to remain in the patterns in which we have always found ourselves. It does not have the ability to shape our ambitions to be something other than what we are now. In the Torah, for Abram, this appears to be God’s role – to say to him lech lecha. But it can also be a role that we take upon ourselves – to strike out, to experiment, to use our imagination to think of things and to do things that will not be predicted by an algorithm whose job is merely to predict the most likely thing to happen next based on what we have already done. The most likely thing is unlikely to help us very much with any of the challenges I have named today.

There is so much more to be said about artificial intelligence and its consequences for humanity – much more than can be said in one sermon. And I invite us as a community to be in conversation about it, to be curious about it and to be critical when needed.

Whether it is AI, Trump or Israel, we seem to be stuck in old ways and we need that imperative to lech lecha – to go forth and try something different.

We need to become the pilots of our own lives, to break free of the old ways that have led us to where we are today. This is just as significant for our personal lives as for the political. We have just stepped away from the High Holy Days, when we were encouraged to think about such things, but may this Shabbat Lech Lecha encourage us to reflect anew, and make us think again – and think differently.

Shabbat Shalom

 

[1] Shannon Vallor, The AI Mirror, pp. 43–44.

[2] Shannon Vallor, The AI Mirror, p. 45.

[3] Jonathan Sacks, ‘The Heroism of Ordinary Life’, https://rabbisacks.org/covenant-conversation/lech-lecha/the-heroism-of-ordinary-life/