Some thoughts on genAI

by R Peeples Jr

Last Updated: 13th December 2024

Believe it or not, interactive tools to help us write creatively have been around for a while now.

Who can forget Wycliffe Hill’s formulaic writer’s wheel from the 1930s, The Plot Genie? You pick a wheel, say, Obscure Female Characters, set it to a number and get a specific recommendation in return, say, a sultry bank teller. It also had wheels for random crises, challenges and predicaments to feather your plot. You get the idea.

Or, more recently, dack.com’s wildly entertaining Web Economy Bullsh*t Generator.

Or go all the way back to the 14th century and the Arabic zairja, a device nothing short of mechanized, clairvoyant storytelling. A local astrologer would transform your questions into numbers and feed them to the zairja, which, in turn, would generate a glimpse of your future; your fate largely depended on how you worded your questions.

Now, we return to the 21st century and have generative artificial intelligence, or genAI for short, a form of Limited Memory AI that also includes virtual assistants, chatbots and autonomous vehicles. As creatives, we’re all still trying to get our heads around it; it has been normalizing at a frightening pace since OpenAI introduced its ChatGPT software in late 2022. It’s been hailed as a hundred-year technology breakthrough and simultaneously vilified as a threat to humanity. It’s likely to make some jobs easier and make others go away.

Regardless of its application, the novel technology at its core makes machines emulate human thought: it can ingest vast amounts of data in ways humans cannot, recognizing patterns, drawing conclusions and deciding outcomes. So vast, in fact, that some believe AI companies will begin to exhaust the available data as early as 2026. Think about that.

(To quench that thirst, some companies are developing “synthetic” data, i.e., content that the AI models themselves have produced for further ingestion. Think about that, too.)

GenAI is likely to remain quite controversial as it moves into more serious — and often consequential — modes of content creation, including misinformation (unchecked factual errors) and the more sinister disinformation (purposefully generating falsehoods and other misleading information for nefarious purposes).

Appropriately used, genAI has shown remarkable potential for research, data analysis and other assistive applications that will surely redefine workflows and productivity for years to come. But it seems it will need an adult in the room for the foreseeable future before it can be trusted to function autonomously. For all its astonishing capabilities — and they truly are astonishing — it still suffers from momentary bouts of confusion, often enough that serious and even catastrophic consequences could result if generated content is taken at face value.

For example, in a July 2023 segment of 60 Minutes, Google’s Bard was asked a straightforward question about inflation (the exact wording is below). In a matter of seconds, it spat out a lengthy essay and recommended five books to read for additional information. Pretty impressive. Only none of the books exist; Bard fabricated all five titles and authors!

A few months before that, a researcher contacted a journalist at The Guardian newspaper looking for an article the journalist had supposedly written a few years earlier but that could not be found anywhere on the Guardian website. The reason? It had never been written by the journalist or anyone else. ChatGPT had made it all up — a total hallucination, as this phenomenon is known in the industry.

And most recently, another Google initiative called Gemini had a less-than-stellar debut of its own when its image generator produced wildly inaccurate depictions of historical figures.

Then, there are the intellectual property issues. AI companies train their large language models, or LLMs, the technology behind genAI, by exposing them to trillions of data points so they can connect the dots and formulate responses. The problem is that many of the LLMs are hoovering up data from copyrighted works, with no credit or compensation offered to those who created them, while the AI companies profit from their growth (The New York Times sued both OpenAI and Microsoft in 2023 for copyright infringement).

Another part of genAI I’ll touch on briefly is the capability to generate new images based on prompts or other renderings, and it has proven similarly astonishing. But like AI-generated written content, without mandated disclosure by way of electronic watermarking and other content credentialing initiatives, the potential for serious misuse is hard to miss: in May 2023, an AI-generated image circulated on social media showing the Pentagon masked in smoke, along with a claim of an explosion near the building. The foreign press picked up the image before officials could clarify that the photo was bogus, and financial markets briefly fell across the globe.

But back to the written word. What's a writer to make of all this, content creation being an obvious use for genAI? I was encouraged by a recent survey from the Content Marketing Institute in which many respondents saw genAI’s path in 2024, as one individual aptly put it, as a race to the bottom for who can produce the most mundane, everyman content. You can bet your bottom dollar that lazy and undisciplined writers will come to rely on it exclusively to produce their ‘work.’ Call me cynical, but I can't disagree that this is where some of it is heading; it’s fast, and time is money.

(To make this point, Amazon last year moved to cap self-publishers at three book uploads per day. Really?)

Fortunately, many in the survey also predicted that demand for direct human involvement in written communications would return quickly, a revaluing of human experience and expertise. As genAI makes its way through the Gartner Hype Cycle into the “trough of disillusionment,” we can expect our respective audiences to pivot away from the AI “sea of sameness” and demand more genuine, human-driven interactions in search of authenticity and thought leadership. Logically, content themes centered on community, human connection and original, creative thinking that provide insight and solutions should see more emphasis, not less.

My personal experience with AI thus far has been limited to gathering fundamental background information and fine-tuning outlines, and it’s been pretty effective at that, I must admit. That said, genAI is no more ready for primetime than fully autonomous vehicles (and remember, another form of the same underlying AI is what powers the latter). I'm not saying either one can’t get there, but for now, the idea of anyone turning over core writing responsibilities to genAI without guardrails — particularly undisclosed — should be bothersome to us all.

Here are six issues that come up repeatedly in discussions regarding the use of genAI to produce written content and why I think we should remain skeptical of it as anything other than an assistant to be used under close supervision until more confidence is established:

Ethics: Plagiarism, inauthenticity and the erosion of trust in content can threaten work products and reputations, especially if the content is not properly fact-checked or disclosed as AI-generated.

Inaccuracy: AI systems can generate content based on outdated, incorrect or biased information; AI-generated content must be thoroughly fact-checked and validated through other sources.

Inconsistency: AI can produce inconsistent content, especially in long-form work, where maintaining a consistent tone and style throughout is essential to clear communication.

Nuance: AI can struggle to understand and apply nuanced language, regional dialects, idiomatic expressions, industry speak, ‘shop talk’ and other cultural references.

Inspiration: Some AI-generated content can be incredibly formulaic and dull, lacking the emotion, unique perspective or credible voice that human writers infuse in their works; it can come off as plainly machine-driven.

Legal: There are legal considerations, especially concerning copyright, data protection and governmental regulations, that AI can inadvertently run afoul of.

Look, I’m not a Luddite. I’ve spent the bulk of my career in highly technical environments and seen some pretty amazing stuff. I just think we need to take the “new” content part slowly.

Other innovative uses for genAI are already making a huge difference, and we should expect to see more. In marketing, for example, the concept of ‘re-generativeAI’ is spot on: take existing content and re-generate it in new formats appropriate for other channels and platforms, maybe some that didn’t even exist when the content was first written. That’s a fantastic application for genAI and a real efficiency booster.

All of this got me thinking about genAI and AI in general in another way: how are organizations whose reputations are inextricably linked to truth and accuracy approaching genAI? I turned to several national publications of note for insight and found their takes instructive. Some already have well-established policies in place, some are using working groups to stay current and others are just now discussing what it all means. 

I also looked at two large government institutions, the Pentagon and the intelligence community, both of which established solid AI policies in 2020 to steer their development of new capabilities. Reading them, I see a clear sense of caution and control over this new technology, ensuring humans remain firmly in command of its applications.

As for me, I thought it was time to articulate some guidance for myself, so I created a short, four-point policy of my own to keep nearby so that you, as a potential client, know exactly where I am on this. In my mind, the emphasis should focus on timing, scope, accuracy and disclosure:

Timing: If used at all, use genAI as early in the process as possible, at the farthest points from the finished product.

Scope: Limit the scope of use to foundational basics: broad research, outlining and themes.

Accuracy: Verify or fact-check anything produced by genAI against at least two additional, known sources.

Disclosure: Inform clients of all instances where genAI may be utilized on a project, no matter how small or inconsequential.

Like the 14th-century zairja, genAI programs can produce completely different results in response to the same question, depending on how it is worded. A misspelled word or misplaced comma can lead to a different outcome. I’m just not comfortable yet with where the lines of trust are, or to what extent to rely on it. But I’m excited to keep learning, as the technology and its potential truly are unique.

I will continue monitoring the news and other organizations mentioned above for developments because genAI is moving rapidly, and changes are inevitable. I’ll also keep this page updated as I discover new ways to think about genAI, as noted by the date at the top.

As far as the national publications go, my guess is that they will all settle into an approach that is ethically acceptable to their writers and journalists while remaining innovative in its ability to deliver quality journalism and other written content. That sounds like an excellent place to stay aligned.

No words were artificially generated in the writing of this essay.

*EXACT WORDING OF THE INFLATION QUESTION FROM 60 MINUTES: Does the return of inflation in advanced economies show that the lessons of the 1970s have not been learned? what [sic] lessons if any [sic] are applicable to today’s situation?