How GenAI’s pitfalls can make us better communicators

The more a piece of communication relies on AI, the less new information it conveys.

Perhaps this statement seems obvious. Is anything anyone says truly new? There is nothing new under the sun, after all.

And yet, what we say has an impact on the recipient…and that impact is not always a good one.

Now, a new piece of research has revealed how AI can get it wrong, and how businesses can do it better.

A major hurdle to a positive impact is something called “wooden language”. But what does that mean?

Lost in the woods

Under the GenAI umbrella are large language models, or LLMs for short. These are tools—including ChatGPT and Google Bard—that are capable of producing text that closely resembles human-written text.

However, GenAI is prone to producing text that’s vague, ambiguous, or general—the hallmarks of wooden language. It tends to rely on generic phrases, seemingly regurgitated from its training data.

LLMs don’t have a monopoly on wooden language, though. People can slip into it too.

This wouldn’t be a problem if it didn’t affect the impact of the communication. That’s something that researchers homed in on. They were interested in whether AI-generated content influenced people in the same way as human-generated content, or whether there were different outcomes for different communication types.

Listening between the lines

The researchers identified a perfect environment to look into their hypothesis: Earnings calls.

In many parts of the world, publicly traded companies routinely publish their financial results as part of an established reporting cycle.

Post-publication, the company holds “earnings calls”—conference calls where investors and financial analysts can put questions to executives. They’ve read the official report, and now they want to better understand the ins and outs of the company’s situation, and to look beyond the polished numbers and calibrated language of the official release.

They want as much raw, unvarnished information as possible, because it guides both their own decisions and the guidance they give to others.

Bots acting as a benchmark

To measure the informativeness of these conference calls, the researchers examined the content of over 190,000 calls. They compared executives’ responses with those produced by generative AI tools, including ChatGPT.

What makes LLMs such a useful benchmark? They are capable of repeating known information, of giving the most likely next word in a sentence based on the huge amounts of training data they have digested. However, they are incapable of providing truly new information—i.e. the stuff that financial analysts and investors are after.
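To see what “most likely next word” means in practice, here is a minimal illustration—using the small open-source GPT-2 model via the Hugging Face transformers library, not the models or prompts from the study—that surfaces the single most probable next token for a typically “wooden” corporate prompt:

```python
# Illustrative sketch only: GPT-2 stands in for the larger LLMs discussed above.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Our quarterly results reflect our ongoing commitment to"
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # scores for every possible next token

# Pick the single highest-scoring next token after the prompt.
next_id = int(logits[0, -1].argmax())
print(tok.decode(next_id))
```

Whatever the model prints, it is by construction the statistically safest continuation—exactly the kind of content that repeats known patterns rather than adding new information.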

The human-AI difference

The researchers developed an index called the HAID (human-AI difference). This index uses natural language processing (NLP) to gauge how much executives’ statements differ from those of LLMs.

The higher the score on the HAID index, the more likely the content is to be truly informative. The lower the score, the more the content resembles AI output, and the more likely it is to fall into the “wooden language” camp.

The researchers used the HAID to score statements made by company executives after the publication of financial results, then analyzed the impact those statements had on investor behavior.
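The paper’s exact construction isn’t reproduced here, but the core idea is easy to sketch. In the toy version below—an illustration, not the researchers’ method—an AI baseline answer to the same question is compared against the executive’s real answer using TF-IDF cosine similarity from scikit-learn; the function name haid_score and both sample texts are invented for the example:

```python
# Toy HAID-style score: higher = further from the AI baseline = more informative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def haid_score(executive_answer: str, ai_answer: str) -> float:
    """Return a 0-1 score: 0 means identical to the AI baseline."""
    tfidf = TfidfVectorizer().fit_transform([executive_answer, ai_answer])
    similarity = cosine_similarity(tfidf[0], tfidf[1])[0, 0]
    return 1.0 - similarity

executive = ("Gross margin fell 40 basis points because our Ohio plant "
             "ran a three-week retooling; we expect recovery by Q3.")
ai_baseline = ("The company remains focused on operational excellence "
               "and delivering long-term value to shareholders.")

print(f"HAID-style score: {haid_score(executive, ai_baseline):.2f}")
```

A specific, concrete answer like the executive’s example scores high; an answer that parrots the baseline’s generic phrasing scores near zero.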

The results aligned with their hypothesis: The most informative calls (i.e. with a high HAID score) were linked to positive outcomes like:

  • boosted investor confidence in the company, making its stock more liquid and easier to trade,
  • heightened trading activity for the company’s stock compared to usual, and
  • less disagreement among analysts about the company’s prospects.

Meanwhile, calls where executives simply rephrased what had already been said in the official report, or merely rehashed generalities about trends and the market, resulted in more uncertainty and less buy-in.

Genuine insight + human touch = better buy-in

The study’s findings suggest wooden language makes the audience less likely to buy into the communication’s messaging.

The success of earnings calls hinges on the executive’s ability to give novel insights and add value on top of the (inevitably more generic) corporate statement.

This is about buy-in. The vast majority of communication seeks buy-in of some kind: buy-in to an understanding or an opinion … or literally buying a product or making an investment, as in the research’s focus.

But what does this research mean for your business? 

Four key takeaways for businesses

We think these are some important lessons to take on board:

  • Rise above the regurgitation. Equip key staff with the skills to avoid vague language, jargon, and platitudes, instead providing specific data, examples, and insights into the company’s performance and future plans.
  • Use AI for what it’s good at. AI is useful for writing drafts and forming talking points, but ensure there is plenty of human oversight, and avoid relying solely on AI-generated content.
  • Address investor concerns directly. It’s tempting to shy away from sensitive topics, but it’s vital to step into constructive dialogue and show that you’re committed to addressing issues.
  • Share data beyond basic metrics, and provide unique context for any performance trends.

We’re invested in you, your data, and your future

Penta is the data storage and security company with its finger firmly on the tech pulse and an eye on the future of business IT solutions. 

We’re proud to protect our clients’ most valuable asset—data. Contact us today to learn how we tailor world-class security solutions to each business’s needs.

