In a 1999 episode of the church comedy The Vicar of Dibley, the vestry meets to discuss how to lift Vicar Geraldine’s spirits after her breakup with the womanizer Simon. Alice, the daft though creative verger, has a solution. “You know the series Walking with Dinosaurs?” she ventures. “Well, they recreated the dinosaurs digitally, just using a computer. I thought maybe we could do the same with Uncle Simon.” Mr. Horton, the churchwarden, repeats dryly: “Recreate him digitally?” “That’s right,” says Alice, “then send the digital Simon round to the vicarage.” A beat passes, and Mr. Horton clarifies: “So we get a holographic, two-dimensional human to marry the vicar?” Alice nods, and Mr. Horton looks around for help responding to her technically impossible and morally absurd suggestion. “Does anyone spot the defect in this plan?” he asks. No one does, and the vestry votes through Alice’s motion.
Working in artificial intelligence research since ChatGPT launched often makes me feel like hapless Mr. Horton. We researchers are surrounded by onlookers and their suggestions, some of which have neither desirable goals nor methods grounded in reality. For example, some suggest outsourcing middle and high school teaching to a chatbot. Its inaccuracies “can be easily improved,” claims an academic dean at the Rochester Institute of Technology. “You just need to train the ChatGPT.” Or, as the tech entrepreneur Greg Isenberg suggested last year, we could task a language model (LM) with writing and marketing the next Great American Novel; all we have to do is code up a program and “start selling.” Each time a public figure urges this kind of unfettered and unrealistic application of LM technology to tasks far too human to morally bear automation, I hear the harried churchwarden’s voice: Recreate him digitally? Does anyone spot the defect in this plan?
Yet in many cases the vestry has voted the dubious motions through. Modern LMs have launched a thousand bullish startups and a thousand uneasy think pieces. Many doubt the wisdom of hastily applying LM technology to areas classically sitting at the core of human creative activity – writing, teaching, interpreting – especially as the general public discovers what machine learning researchers already know: LMs are not omniscient and can, in fact, generate garbage. Moreover, some worry, if LMs are by nature garbage-generating machines, does using them relegate us to mediocrity? Will we let our human creativity atrophy? And, seriously, what real need – not simply the desire for the last scrap of profit at the expense of human pursuits – do LMs give an answer to?
Why have LMs suddenly and dramatically seized our attention? Where should the church say, “Thus far and no further”? Few are truly equipped to assess the deadlock, but as a speech and language AI researcher and a churchwoman, I will attempt the task.
The public reaction to ChatGPT’s launch in November 2022 exceeded industry expectations. OpenAI, its creator company, had called the launch a “low-key research preview,” and many in the wider research community were poised to see it as another incremental though impressive improvement in the long line of LM research. LMs are a workhorse class of statistical model that, as the public now perhaps knows, have one job: to return the next most probable item in a sequence. An LM will predict “mat” given “the cat sat on the” because, roughly, it has tallied how many times “mat” followed that context in its training text, and that tally makes “mat” the likeliest continuation to output. LMs have powered common AI applications for decades, including the predictive text suggestions in your next tweet and your last email, so why the explosion of public interest now?
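The counting idea behind this prediction can be sketched in a few lines of code. The toy corpus, context length, and `predict` helper below are illustrative assumptions, not any production system; modern LMs replace raw counts with neural networks, but the next-item objective is the same.

```python
# A minimal sketch of classic count-based language modeling: tally how
# often each word follows a given context, then output the most
# frequent continuation. Toy corpus invented for illustration only.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the cat sat on the mat . "
    "the cat sat on the chair ."
).split()

# Count continuations for every five-word context.
n = 5
counts = defaultdict(Counter)
for i in range(len(corpus) - n):
    context = tuple(corpus[i : i + n])
    counts[context][corpus[i + n]] += 1

def predict(context_words):
    """Return the most frequent next word seen after this context."""
    return counts[tuple(context_words)].most_common(1)[0][0]

print(predict("the cat sat on the".split()))  # -> mat (seen 2 of 3 times)
```

In this tiny corpus, “mat” follows “the cat sat on the” twice and “chair” once, so the model returns “mat.” That is the whole trick, scaled up by many orders of magnitude.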
Today’s chatbots offer the glittering peril of instant gratification. Products like ChatGPT are strictly speaking not just LMs, but LMs plus extra statistical steering to facilitate a question-and-answer format, plus a slick web interface enabling any person on earth to type in a query and receive a fast response. This chatbot framing is technically incidental. But it is rhetorically and psychologically powerful, a rapid feedback loop so easy to enter that it has implicitly taught the public that the purpose of an LM is to generate content on demand.
And to generate answers. Our tendency to conflate quick responses with correct responses when talking to humans also transfers onto the chatbot, which we can’t help but personify. A fast and confident-sounding chatbot may mimic the authoritative voice of a reference work, and it may draw from and contribute to our habits of laziness when seeking out truth and our impatience when engaging each other.
Researchers know – but seldom effectively communicate – that indispensable LM applications sit just one level deeper than the “type query, get content” chatbot paradigm. That’s because LMs are only accidentally content machines; they are substantially a dense statistical representation of relationships between words. Eliciting those relationships for a downstream task can be valuable. Borrowing heavily from John Firth’s distributional semantics maxim, “You shall know a word by the company it keeps,” LMs compress and store information about the whole distribution of words in all the text they see. Their insides and outputs are a mathematical study of language as it is actually used, and researchers can and do leverage that information not to drown us with spam, but to promote our good.
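Firth’s maxim can be made concrete with a toy co-occurrence model: represent each word by counts of its neighbors, then compare words by the angle between their count vectors. Everything here, the corpus, the window size, and the cosine helper, is an illustrative sketch of my own; real LMs learn dense vectors from enormous corpora rather than counting neighbors by hand.

```python
# Words are represented by the company they keep: counts of the words
# appearing within a small window around them. Words used in similar
# contexts end up with similar vectors.
import math
from collections import Counter, defaultdict

corpus = ("the cat chased the mouse . the dog chased the cat . "
          "the king ruled the land . the queen ruled the land .").split()

# Build co-occurrence vectors within a +/-2 word window.
window = 2
vectors = defaultdict(Counter)
for i, word in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            vectors[word][corpus[j]] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u)
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm

# "king" and "queen" keep identical company in this corpus, so they
# score higher with each other than either does with "mouse".
print(cosine(vectors["king"], vectors["queen"]))
print(cosine(vectors["king"], vectors["mouse"]))
```

Downstream tasks tap exactly this kind of stored relationship rather than asking the model to spew content, which is the point of the paragraph above.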
One can use a printing press to print libel. But that is not what it is for. What, then, ought LMs to be for?
They are, already, part of innumerable systems that humanize, not alienate – speech recognizers whose generated transcripts are important accessibility tools for deaf people; translators that allow immigrants in tight spots to communicate; record systems that alleviate medical professionals’ documentation burdens and allow them to spend more time at the bedside. Their value can even return to the fields that provided their data. LMs are helping to decipher dead languages, restore lost ancient inscriptions, and predict protein structures. They are tools that, in the hands of intrepid researchers and, yes, entrepreneurs, are well suited to facilitate our exploration of the world and our rapprochement with one another. But none of these applications uses LMs as cheap content machines; they are harder to understand (and get less press) than the instant feedback an impressive or appalling chatbot provides. Our attention spans are short; our demand for content is high. The high-strung discourse around LMs is, in a sense, what we deserve.
And what of the church? Some clerics have voiced concerns about AI writ large, ranging from the pastoral – how should a priest help a parishioner following an automation layoff? – to the theological – can an LM be possessed by a demon? (And, if so, is it the same demon that always pops up in the printer when it’s time to produce bulletins during Holy Week?) Others are more optimistic about the possible elimination of drudgery. This divide mirrors the one in the public’s discussion of AI, with one faction wishing to seize the low-hanging fruit, while the other asserts that from the beginning in the Garden, knowledge of when to appropriately seize fruit has hardly been humanity’s strong point. The fact is, if the church implements suggestions for how to use LMs that are as shallow and dehumanizing as the suggestions that have lately come out of secular society, we will live to regret it.
The first truly interesting suggestion for LM use in the church is to leverage them for organization of language-based materials. Many helpful assistive technologies flow from this: subtitling, transcription, translation of services, searching past sermons or resource documents, and so on. Insofar as these ideas increase access to the life of the church, they can be healthily pursued. But two dangers lurk around this corner.
One temptation is to jump from using LMs to organize text to using LMs to interpret text – including the ultimate text, the Bible, which ought to be wrestled with and interpreted in the human community knit together by the power of the Holy Ghost. Some enthusiasts have advised pastors to save time by using LMs to generate devotionals and Bible discussion questions, but one cannot excise the humanity from these and still obey the call to personal and communal wrestling that Holy Scripture demands.
Technically, the returns will diminish because of the nature of LMs: they will return shallow text probabilistically biased toward any religious text in their training corpus. Spiritually, while LMs may marshal text effectively, they can neither “read, mark, learn,” nor “inwardly digest” it. Meditating on divine words is what human beings do in their inner being. This technically cannot and morally should not be automated. Mary could not have outsourced her pondering of the angel’s words to an LM, not only because an LM’s next-item-prediction objective is not pondering, but also because it would have denied those words’ ability to form her. A pastor might provide his congregation with a Bible study written by another pastor or a church father – but its author is still a person in relationship with the universal church, an inward digester, who, though having died, is alive in Christ and truly helps form the congregation. Rejecting this in favor of artificially generated text is an affront to the reality of the communion of saints.
Some have encouraged training LM-based chatbots on the Bible; others, while warm to the idea, have exhorted machine-learning practitioners to erect guardrails to ensure these LMs return text congruent with both the Bible and their users’ theological stances. As one such practitioner, let me say clearly: there is no way to guarantee this. Because LMs are not composed of retrievable data and hand-coded interpretable rules, but rather abstracted statistical reflections of their training data, perfectly imposing such guardrails is an unsolved problem to which there may be no final answer. LMs do not look up information. That is not how they work. This makes LMs a fundamentally inappropriate tool for handling the Bible, where information retrieval accuracy and interpretive fidelity are nonnegotiable. Engineers know that building a bridge with the wrong material will cause it to fall down, and good engineers refuse to build bad bridges; let the reader understand.
Another temptation is to slip from using LMs to organize church content to using them to commodify that content. In our postpandemic era of broadcasting everything online, the tendency to turn acts of worship into acts of marketing is hard to resist, and sermons are a facile target for this trap. Outsourcing follows commodification, which could here result in an outright denial of the duty to preach. Some leaders in my own Episcopal Church are already situated at this dangerous pass, placing sermons in the same category as parish announcements – items whose automation will free up overladen clergy for, presumably, real pastoral work.
Karl Barth’s notion of preaching as an exposition of the Word of God is a helpful counterweight to these plans for LM-generated sermons. When a sermon is planned by a minister and proclaimed to the people through the mouth of the church, the Holy Ghost assists the delivery and makes it the very Word of God to the hearer. Not all denominations will agree on this semi-sacramental view of preaching, but all should agree that sampling from a next-word predictor is an inappropriate and unethical replacement for it. A pastor is responsible for the congregation’s spiritual formation, to which preaching is central; who could delegate this to a synthetic-text machine? Accidentally generated heresy is a technical failure; a pastor refusing to speak from the heart and preferring to generate the most probable word sequences for a sermon to the congregation in his care is a moral failure.
The final stop on this dubious trajectory is saddling LMs with the task of liturgical composition. Congregants at a Bavarian church that attempted this found the service trite and unsettling, some even refusing to join in saying the Lord’s Prayer. Their discomfort was well founded: this type of LM use encroaches on the unique vocation of humans within the whole creation’s worship of God and creates a liturgical absurdity that we feel in our gut.
All of creation expresses a cacophony of praise to its Creator. “One day telleth another” of God’s glory, says the psalmist, where “there is neither speech nor language; but their voices are heard among them” (Ps. 19:2–3). In Isaiah, we hear that “the mountains and the hills shall break forth before you into singing, and all the trees of the field shall clap their hands” (Isa. 55:12). Yet God appoints one creature to collect all the noisy voices of creation and consolidate them into ordered expression: the human being, whom God endowed with the richest linguistic faculties. Language powers are integral to our being made in the image of God. Through them, we are able to rationally organize and sit in dominion over creation and cultivate it, enacting God’s goodness to it and bringing forth the harvest of its praises to offer them to God. “O all ye Works of the Lord, bless ye the Lord,” the Prayer Book canticle has us cry before enumerating these Works, from the lightning and clouds to the whales and all that move in the waters. “Praise him, and magnify him forever!”
Of all creation, the human is the priest, mediating between it and God, in part by the ordering power of language. This priesthood is of all believers, as language faculties are universally inherent in us in potentia (and in actuality, far beyond what we might think; indeed, deaf babies babble in a structured manner with their hands, and signed languages possess full phonological and syntactical systems).
When we shirk our duty to use our language faculties to worship God, the infraction is multiple: we not only fail to offer our own sacrifice of praise and reject God who would respond in goodness, but we also deprive all created things from joining their natural expressions of praise to ours. “He that to praise and laud thee doth refrain, / Doth not refrain unto himself alone,” warns George Herbert, “But robs a thousand who would praise thee fain, / And doth commit a world of sinne in one.” The poet has the obligation to sing; the poet also needs to sing for his own sake because his song, reaching out to God, changes him. Given this, the idea of liturgists handing their jobs to LMs is farcical, as laughable as a man whose daughter needs surgery sending a calculator to be operated on instead. We present a dumb machine in our place, hiding from God who hears us when we call and transforms us when we ask, and doing collateral damage to other creatures in our care.
Christians must instead look to Jesus. The Word of God, coming from the mouth of the Most High, became flesh. He took on materiality and carried it to the right hand of the Father at his ascension. He is the great high priest who mediates between God and creation, through whom all creation will be redeemed on the last day. He shares our humanity, collecting the noisy and often incredibly sideways praises of our human life in himself and ordering them according to the divinity of the Logos – and he calls Christians to follow him in this. We, therefore, cannot abandon our role in orchestrating, via our own language, worship in which all creation participates. Outsourcing this to a text generator is absurd in the extreme, a near-literal abdication of the throne God set up for human beings, who, while made a little lower than the angels, have everything in subjection under their feet through Jesus, the eternal Word, true man yet very God.
Where does this leave the church? The fixation on LMs as content generators, tools that circumvent the necessity of thinking together, is symptomatic of a deeper disease, developing out of our failure to integrate our unprecedented technological interconnectedness with the bodily realities that true Christian – true human – interdependence demands. The church uncritically glomming onto the latest LM for its liturgical, educational, or pastoral work will compound the harm in this area already inflicted by the long, lonely slog of the pandemic.
There is no world where deferring preaching and pastoral care to a text generator does not end with deterioration – first of formation, then of the clergy, and finally of the people in their care. As more seminaries move online or shutter altogether, and more clerics are forced to work full-time jobs at part-time pay, what else can the replacement of their functions by LMs spell for those in pastoral need?
There is also no world where increased comfort with liturgical automation does not end with attempts to obviate the sacraments. Our Lord peskily attached himself to material things that force Christians to keep one foot in reality. But this tether is threatened, not by him but by us, and not to his detriment but to ours, if we go down the path of thinking that a machine can compose or recite a prayer to almighty God.
Meanwhile, captive to the view that sees LMs as content machines only, people who rightly object to this direction in the church will retrench, potentially causing a churchwide neglect of the opportunities to use machine learning well – opportunities less flashy but more helpful. Great promise exists for LMs in service of better research, the offloading of true drudgery, and increased access to various aspects of public and personal life for the linguistically barred – and indeed LMs have been fueling all these things without public fanfare or objection for many years. Opposition will be created in areas where none need exist.
A renewed belief in the communion of saints is a necessary part of the treatment. Saint Paul says that every member of this body is needed. All contribute something irreplaceable by anything else animate or inanimate, carbon or silicon. The incorporation of Christians as human persons into one body, that of the divine Word himself, is a profound mystery that cannot in fact be menaced or usurped by a text generator, try though we might through active promotion or doomsaying alike.
We should take heart in that, and then take up and read – and write, communicate, and contemplate, first enjoying and maintaining those gifts from God without fear of their replacement. Then, having received freely, we should freely give. Using language technology for the right purposes will facilitate the exercise of those gifts by those who would usually be restricted from their use by physical condition or temporal station. Neither slick demos nor technical party tricks can get us there. What is required is nothing less than a true love of God and neighbor, which no machine can generate.