Last week I was invited to speak with Amy Follett from TWR-UK about Rishi Sunak’s reflections on AI. An article had been released that claimed Sunak would be more than happy to let AI teach his kids or look after the elderly. Unsurprisingly, this was not in fact the case, but it raised questions all the same as to the use and boundaries of AI—including in the care and education sectors.
In addition to being a writer, I also work for an organisation called Faith in Later Life, which supports and encourages local churches in their discipleship of and service to older people. As such, I was asked whether I personally would be comfortable with “a robot” caring for the elderly, to which I obviously responded, “No.”
You can watch the entire interview here:
Though I went into more detail than that, time didn’t permit me to cover everything I would have liked to during the interview, and watching it back after its release, I had some further thoughts I wanted to share.
In addition, having felt strongly that Sunak ought to have been represented fairly—not a reflection on TWR but on the original article—I wondered whether the same might also be true of ChatGPT. I have my tongue firmly in my cheek here, but all the same I thought it might be interesting to ask “Mr GPT” how he would have responded, followed by my own thoughts:
Are you comfortable with the idea of a robot caring for the elderly?
ChatGPT:
“As an AI, I don't have feelings, but I can provide a perspective based on data and existing AI capabilities. Today's robots and AI systems can effectively perform specific, routine tasks and provide reminders for medication, thereby assisting in elderly care. They could monitor health parameters, alert healthcare professionals in case of an emergency, or provide companionship to some extent. However, caring for the elderly isn't only about physical needs but also emotional needs, which involve empathy, compassion, and understanding - aspects robots are currently incapable of fully replicating. Human caregivers also make nuanced judgments based on non-verbal cues, context, and personal relationships, which is challenging for current AI. So, while AI and robots can support human caregivers, they aren't likely to replace them completely in the near future.”
Interestingly, GPT and I gave quite similar answers here: in short, “there are potential benefits behind the scenes, but emotional care is off the table.” I’m not sure I would call this self-awareness, but GPT does seem to have been built with prior knowledge of its own shortcomings and boundaries. In the time since the interview, I’ve pondered the possible uses of technology, including robots, more broadly, and how they might actually be of use here.
My wife has been saving up for a Roomba for the past few months, ahead of us hopefully moving to a new flat in the near future. Whilst a Roomba can’t do everything R2-D2 might be capable of, it is still a “robot” with a degree of artificial intelligence: it can map out rooms, detect barriers and obstacles, and wake itself up when its owner is out of the house.
In addition to physical appliances, many people already use “intelligent” software like Grammarly, a spell-checker that can also read your writing and suggest alternative words and phrases. Whilst I am wary of AI and do believe it needs to be regulated—which I will address shortly—I do think there’s a question here worth asking: “Are we optimising our lives wisely as technology progresses?”
I’m not convinced that we are.
Whilst ChatGPT and smartphones might pose a threat to our mental health, our productivity, or even our jobs, other technology might do the reverse and give us more time back, or replace mundane household—or care home—tasks altogether.
AI offers some positives. Sunak mentioned its ability to discover new drugs to treat people. Does that prospect excite you?
ChatGPT:
“While I, as an AI, don't experience emotions such as excitement, AI's potential to transform healthcare is considerable. AI can analyze massive data sets rapidly, which can help identify potential new drugs faster than traditional methods. Additionally, AI can predict how different drugs might interact with various proteins in the body, identify potential side effects, and even personalize treatments based on a patient's unique genetic makeup. Such applications could dramatically speed up drug discovery and reduce costs, leading to more accessible treatments and better patient outcomes. However, it's essential to remember that AI's predictions and findings still need to be verified through rigorous testing and trials.”
Again, ChatGPT gave a very similar answer to the one I gave—I’m starting to feel a tad redundant here—in that whilst AI might posit alternative treatments or come up with ideas we’ve not considered before, these will all need to be tested and verified in the same way any other drug ought to be.
This reminds me of a scene from Friends in which Joey discovers the thesaurus feature on his new laptop. Endeavouring to sound more intelligent in a letter of recommendation to an adoption agency on Monica and Chandler’s behalf, he proceeds to run every word of his letter through the thesaurus. “They are warm, nice people with big hearts” turns into “they’re humid, prepossessing Homo Sapiens with full-sized aortic pumps.” The letter ended up sounding artificial precisely because Joey lacked the knowledge to judge the words he was using. There have been various news stories since early this year of students turning in assignments written by ChatGPT containing factual errors they were unable to detect, because they didn’t know enough about the subject to know better. One of the first tests I ran to gauge ChatGPT’s capabilities was whether it could give me accurate information about public figures; it believed Rishi Sunak was still Chancellor of the Exchequer and that I was a Nobel Prize winner.
As much as I might have liked that second point to be true, it obviously isn’t, yet…
Simply put, we should be careful not to underestimate AI, but we also shouldn’t overestimate it.
Some AI experts have warned that our human existence is under threat because of AI. What are your main concerns with its development?
ChatGPT:
“…[T]here are valid concerns associated with AI development. These include ethical considerations like the potential for bias in AI decision-making, privacy issues related to data usage, and the risk of AI-enabled technologies falling into the wrong hands. The increasing sophistication of AI also raises questions about job displacement and economic inequality. Moreover, the possibility of autonomous weapons and the concept of superintelligent AI, which could potentially surpass human intelligence, have been cited as threats to humanity. Therefore, it's essential to have robust regulations and ethical guidelines in place to guide AI development and usage.”
I found this answer fascinating. Behind the scenes, ChatGPT must be programmed with this view, yet we’re still debating how, or even whether, to regulate AI properly. A number of high-profile AI experts have abandoned the field altogether and called for AI to be shut down, and now, it seems, the creators of ChatGPT agree that limits need to exist. I don’t have a lot to add to my comments on this, but I will highlight that it would be prudent to pray for Rishi Sunak’s success in promoting such regulations, and that these would be emulated across the world.
As Christians, is it important to remember that God is in control of the universe when we hear of threats like this?
ChatGPT:
“As an AI, I don't have personal beliefs or feelings. However, it's clear from human behavior and cultural studies that religious beliefs can often provide comfort and guidance in times of uncertainty. For people of faith, the belief in a higher power controlling the universe could bring perspective and solace when considering potential threats, including those related to technological advancements. In Christianity, as in many religions, there is often a focus on moral and ethical behaviour, which could guide the responsible use and development of AI.”
Here is where the limits of AI show themselves most clearly. ChatGPT, echoing its prior comments about “empathy, compassion, and understanding”, which it states it is “currently incapable of fully replicating”, simply cannot “understand” faith. In the same way that some will point to prayer and faith as a “potential good” because of their statistical health and emotional benefits, ChatGPT sees Christianity and God as nothing more than a comfort and a potential guide.
During the course of the interview I mention a book from a previous article I’ve written, which I paraphrase as “using the weapons of prayer and scripture” whenever physical and even existential threats arise. Schilder, writing during the occupation of the Netherlands in WWII, says:
“In this occupied land, [we] will not place the matter in [our] own hands, but in God’s hands. To the extent [we] pray, [we] will want to see weapons used in this time, but only those weapons that we know from Revelation 11, namely, the weapons of prayer. We realize that others think such weapons blunt and laugh at them. At any rate, people who think so will deem these weapons completely harmless.”
Klaas Schilder, The Klaas Schilder Reader: The Essential Theological Writings, eds. George Harinck, Marinus De Jong, and Richard Mouw (Bellingham, WA: Lexham Academic, 2022), 515.
As I remind listeners during the interview, the threat present during WWII has long since been thwarted and yet, as Schilder implies, the weapons of prayer still endure. AI might seem like the greatest threat man has ever known, or it might already be in its final hours. We don’t know what the future holds, but we do know that God is good, that prayer and the sword of truth are our greatest weapons, and that they will endure.
I hope you’ve enjoyed this article. If you’d like to read more, please consider subscribing.
Grace and Peace,
Adsum Try Ravenhill