Norel Iacob shares tips on how to include AI in your work while avoiding its pitfalls.
Seventh-day Adventist communicators who decide to use artificial intelligence (AI) as a complement to their work should stay truthful and ethical as they do so, editor Norel Iacob said in his presentation at the 2024 Global Adventist Internet Network (GAiN) conference.
Iacob, editor-in-chief of Signs of the Times Romania since 2009, spoke at the event in Budva, Montenegro, on November 17.
In his presentation titled “Critical Thinking and AI,” Iacob called on communicators to be aware of how large language models (LLMs) work and to hold to high ethical standards as they adopt new AI technologies as tools for higher-quality work.
Limitations of AI
Iacob listed several of the issues that have arisen with the seemingly ubiquitous presence of AI. He explained that, among other things, AI can misinform us through fabricated information (called “hallucinations”) and oversimplifications; disinform by reflecting biases; manipulate us by mimicking human emotions; and tempt us to act unethically through plagiarism and a lack of transparency.
“AI can cause a loss of agency through a subtle framing of the answers or our overreliance on its output,” Iacob said as he shared some of the challenges in dealing with AI technologies. In an e-commerce context, for instance, “AI can promote a specific product, subtly influencing you to purchase [it], even without you realizing what you are doing,” he said.
At the same time, in your interactions with AI, you might feel that “you have somebody to talk to,” Iacob explained. “You feel you are having a real dialogue with an entity, and because it is very polite, very patient, [and] never swears at how you do things, you are becoming vulnerable and more susceptible to accepting its ideas.”
A Smart and Critical Use
Against that background, it is essential to remember that “AI is a tool, but it’s not human,” Iacob emphasized. Thus, “it lacks awareness or reasoning as it generates responses based on probabilities, not conscious intent,” he said. It is one of the reasons why the human touch — human oversight — is critical for ensuring accuracy and contextual appropriateness, he added.
Iacob urged Adventist communicators to adopt a “smart and critical use” of AI, which includes cross-verification, using AI as an ideation tool (brainstorming, idea generation), and gathering audience insights. “Use AI as an efficiency booster — to streamline tasks like summarizing and extracting themes — and for introductory knowledge,” he said, “but rely on deeper research for comprehensive understanding.”
A Sermon-Writing Tool?
AI has lately become a tool for preachers who need to work on a sermon, Iacob acknowledged. In fact, AI can suggest themes, structures, and related ideas, and can certainly help to refine a sermon. But “do not let AI write the entire sermon; rely on personal study and prayer,” he emphasized. “Never forget the role of the Holy Spirit in impressing the person writing the sermon,” something AI cannot vouch for, he said.
Iacob also called on preachers to disclose AI’s role if it significantly shapes the message, and to avoid mimicking the styles of others. “Focus on your own voice and integrity,” he advised.
Ethical Disclosure
In the last part of his presentation, Iacob shared examples of instances when a communicator using AI does not need to disclose it and when they should do so. According to him, a person does not need to disclose AI’s use when it is employed for phrasing or clarity, without altering the substance, and when they treat the tool as a “reader” to improve flow and coherence. “But you need full disclosure if AI significantly shaped the content and structure, when you use AI-generated content, and when you use analogies, examples, or AI summaries … that impact the work significantly,” he said.
On the sidelines of Iacob’s presentation, Marcos Paseggi, senior news correspondent for Adventist Review and Adventist World, said he agreed, explaining that AI should be used as a tool to streamline what you already know how to do, not as a stand-in journalist or author doing what you can’t do yourself.
“AI can certainly help you to work faster and more efficiently, but any time an AI tool is used to make substantial changes to a piece of writing, the author should acknowledge it,” Paseggi said. “On the other hand, if an author uses AI to create projects that they could never accomplish on their own, then no matter how many news stories, articles, or books they byline, they are not an author but a fraud.”
For authors working for Adventist organizations, this adds another layer to the ethical conundrum. “It goes to the core of your integrity as a Bible-believing Seventh-day Adventist Christian,” Paseggi emphasized, acknowledging that so far, the system relies only on each person’s honesty. “Having an accountability process of ethical disclosure is, as far as I know, still pending and an ongoing issue, even with some Adventist authors who almost overnight have begun to promote themselves as accomplished writers,” he said.
In Search of the Human Touch
In closing his November 17 presentation, Iacob called on Adventist communicators and others not to forget the human touch when using AI. He challenged communicators to “pause and reflect” after producing a text with AI, and posed several questions to them. “Did you really choose those words? Did you really dive into the text, reflect, and wrestle with its meaning? Or did AI shape your message without you even noticing?” Iacob asked. “It is very wise to use LLMs — I use them all the time,” he said. But “we need to be discerning and … in control all the time. We must verify everything and make sure that our unique voice is still there at the end. And this is the inescapable, inevitable human touch.”