Artificial intelligence (AI) is everywhere these days. Open a document and an assistant will offer to summarize it for you. Start writing an email and another one will make suggestions on how to word your message. Launch an online meeting and still another will offer to take notes. And then of course there’s generative AI, ready to write you just about anything you want.
This is all meant to save you time and effort, but the reality is that the technologies aren’t all that good at doing those things. Document summaries can misinterpret the text or miss vital points. Email suggestions can be a bit dubious, and they won’t capture your voice. Meeting notes might remind you what was said, but most won’t synthesize the discussion effectively. They can omit critical action items and are easily misled by sidebar discussions, assigning importance to them because of the time they took up, even if they were irrelevant to the main issue at hand. They also won’t understand sarcasm or quirky turns of phrase — and we’ve seen plenty of occasions when these technologies have inserted statements no one even said.
That last point gets at one of AI’s biggest flaws: it’s just not trustworthy. The BBC recently conducted a study and found that not one of the major chatbots could accurately answer questions about the articles it was asked to summarize. About half the answers included what the BBC described as “significant issues,” almost 20% introduced straight-up factual errors, and 13% of the quotes the chatbots cited were either altered or entirely made up. And the problem seems to be getting worse. OpenAI’s own tests have found that its newer models are more prone to hallucinating than their predecessors, and the company can’t explain why that’s happening, which makes the problem effectively impossible to fix. None of this surprised us, because it lines up with our own anecdotal observations: in our experiments, chatbots have frequently attributed statistics to reports that, when we go looking for them, turn out not to exist. With results like that, it’s easy to see why we’re not comfortable trusting AI to get things right when it matters.
At Ascribe, we’ve long held that great writing starts with great thinking. That’s a problem for generative AI models, which don’t do any “thinking” at all. Instead, these models draw on enormous datasets to determine the most likely response to any given prompt. It’s why observers such as Noam Chomsky have called ChatGPT and the like a “kind of super-autocomplete.” The results often sound convincing, even human. But ultimately, what’s produced is a remix of text from the model’s training data.
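If you want a concrete sense of what “super-autocomplete” means, here is a deliberately oversimplified sketch in Python. The frequency table (and therefore the output) is invented purely for illustration; a real model learns billions of such statistics rather than a handful, but the basic move is the same: at every step, append whatever the data says is most likely to come next.

```python
# A toy "super-autocomplete": at each step, append the word the data says
# is most likely to follow. The frequency table is invented for illustration;
# a real model learns billions of such statistics from its training text.

next_word_counts = {
    "the": {"dynamic": 40, "best": 25, "future": 10},
    "dynamic": {"world": 60, "market": 15},
    "world": {"of": 80, "is": 20},
    "of": {"marketing": 50, "business": 30},
}

def most_likely_next(word):
    """Return the statistically most common follower of `word`, if any."""
    candidates = next_word_counts.get(word)
    return max(candidates, key=candidates.get) if candidates else None

def autocomplete(prompt, max_words=6):
    """Keep appending the most likely next word. No meaning, just statistics."""
    words = prompt.split()
    for _ in range(max_words):
        nxt = most_likely_next(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(autocomplete("the"))   # -> "the dynamic world of marketing"
```

Nothing in that loop ever checks whether the sentence it is building is true, original or even sensible, which is exactly the problem.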
That lack of real thinking is a problem for any business hoping to shift the burden of copywriting and marketing content development to AI. Many of our clients want to be thought leaders, giving their target audiences fresh ideas to guide their strategies and decisions. But generative AI can’t produce anything new. The content it generates is often superficial, lacking the depth required to be truly actionable. And when complete, detailed answers to a prompt are absent from the training data, the model fills the gap with plausible-sounding but incorrect information, or with obviously false statements.
The reliance on information that already exists in its database also makes AI poorly suited for a lot of marketing copy, which is often related to a brand-new product. Newer generative AI tools partially solve that challenge by letting you upload files, meaning unpublished internal documents like sales presentations, agency briefs and product specifications can be used to inform the chatbot’s output. Our experiments have shown that, if prompted carefully, generative AI can use those materials to produce a usable first draft for something simple like a two-page brochure.
But on top of the rewriting required to turn the chatbot’s initial draft into a client-ready deliverable, there’s the matter of data privacy and protection. What happens to the reference content after it has been uploaded? Does it become part of the chatbot’s general database and therefore accessible to anyone else who uses it? Some AI technologies give you the option to exclude anything you upload from their general databases, but it’s still a risk we’re not prepared to take with our clients’ intellectual property.
There’s also the issue of voice and tone, which are important elements of branding. It’s true that you can instruct chatbots to generate text with a certain tone, but in our experience the particulars of such instructions are often ignored or applied to excess. The result is that your content sounds like everyone else’s AI-generated copy. You can already see this in action: just Google “in the dynamic world of” in quotes.
For businesses with a distinctive brand voice, any AI-generated text will need to be rewritten substantially, if not completely. Combine this with the need to fact-check everything a chatbot produces, and any productivity gains start shrinking fast, so much so that we’ve often found it quicker to start from scratch than to reshape an AI’s output.
None of this is to say there are no worthwhile use cases for AI in our industry. As we discussed in our recent white paper on AI and marketing copy, it can provide some useful support to make your writing tasks easier. For example, generative AI can:
In the right circumstances, there’s even a case for using it for actual writing. If your content needs involve a high volume of short, simple pieces with a lot of overlap and your budget is limited, it’s hard to argue against the power of AI to churn that out.
The theory behind machine learning is that the more tasks an AI system completes, the more it learns and the better it gets. But that theory depends on the system learning from high-quality data. The sheer size of the database required to make generative AI work makes it impossible to vet its contents for quality — and, unfortunately, the AI can’t tell what’s good and what’s not. In fact, as more content gets produced by AI and absorbed into the training dataset, the results are likely to get worse, not better.
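As a rough, hypothetical illustration of that feedback loop (our own toy simulation, not a published result), here is what happens when a simple word-frequency “model” is repeatedly retrained on samples of its own output. The starting vocabulary and counts are invented for the example.

```python
import random
from collections import Counter

# Toy simulation of retraining a "model" on its own output.
# The starting vocabulary and counts are invented purely for illustration.

random.seed(0)

def generate(corpus, n_words):
    """Sample n_words from the corpus, with a mild bias toward its most frequent
    words (standing in for a model's preference for its most probable outputs)."""
    words = list(corpus)
    weights = [corpus[w] ** 1.5 for w in words]
    return random.choices(words, weights=weights, k=n_words)

corpus = Counter({"insightful": 30, "dynamic": 25, "nuanced": 20,
                  "quirky": 15, "contrarian": 10})

for generation in range(1, 9):
    corpus = Counter(generate(corpus, 100))   # "retrain" on self-generated text
    top_word, top_count = corpus.most_common(1)[0]
    print(f"generation {generation}: {len(corpus)} distinct words, "
          f"'{top_word}' makes up {top_count}% of the data")
```

Run it and the most common word’s share of the data tends to climb generation after generation, while the rarer (and often more interesting) choices fade away; the same narrowing is what you’d expect as AI-written text feeds back into AI training data.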
Eventually, someone will come up with a new approach that goes beyond the large language models in use today, and who knows what that will look like? If a true AI is developed, its effects will be extremely broad, putting far more than creative labour at risk.
But until then, human input will continue to be required for both originality and quality assurance. So for now, when quality counts and your reputation is on the line, we just don’t think it’s worth the risk.
Whether we’re writing from scratch or providing copyediting services, we never use AI to do the work. That means you get real thought leadership, real facts and real nuance, all conveyed in your brand voice.