A little knowledge is a dangerous thing!

EVERY SINGLE DAY we’re talking about AI. We’re looking at technologies and new platforms, discussing which courses to attend and reading about AI platforms being sued for copyright. Just yesterday, 10th June, the news broke that Apple is to partner with OpenAI to bring ChatGPT to Siri. Unsure how we feel about this one!

AI is top of the agenda right now.

It’s actually pretty terrifying how much data and information we’re already feeding these AIs. But given that we can’t stop the biggest digital transformation of our lifetimes, we need to understand it: its functionality, the watchouts, how it can lead to efficiencies and so on.

I’d say we were fairly early adopters and now use AI platforms every day… but AI, specifically ChatGPT, is now in the hands of many… agggghhhh.

For us, ChatGPT is impressive - the speed is incredible. We use it to generate ideas, help structure presentations, draft topline job descriptions and so on. It’s a huge help but it’s never, as in NEVER, the final product. It’s a starting point only…

So you’ve heard the saying, a little knowledge is a dangerous thing!

OK, let’s start from there. 

We’re a PR company: we operate in communications, balance language, write for multiple platforms every day, are super aware of crafting and adhering to tone of voice and, most importantly, we operate with context. ChatGPT DOES NOT DO THIS! There is no context - it’s AI.

It’s a computer.

It’s not your assistant with emotions, feelings and nuances. 

Recently we’ve been presented with copy from third parties that has clearly been generated by ChatGPT. How do we know this?

  • It’s full of superlatives
  • The language is generic with no detail
  • No first person - e.g. ‘I’m a person who…’
  • It doesn’t sound like the person or the organisation i.e. there’s no personality
  • There are far-reaching claims with no backup / evidence / qualifying i.e. context
  • Use of American spelling 
  • It’s not authentic - specifically, the copy doesn’t align with previous communications from the same organisation – it stands out as being ‘generated by ChatGPT’
  • Importantly: no source or credits

THE ISSUE:

  • ChatGPT enables people who are not versed in comms, grammar or writing to become “experts”, generating “impressive copy” using big words, i.e. superlatives, some of which I can’t even spell. This “impressive copy” means nothing to the reader, often contains unsubstantiated claims and often just makes stuff up.
  • And I haven’t even got started on the ethical concerns – who owns the original copy? Has it been plagiarised? Is the information biased or misleading?

Who is Riki? Let’s ask ChatGPT!

As a little experiment, last night I asked ChatGPT to write a favourable biog for me…

See below. At first glance, this looks smart. However, there are plenty of phrases that I’m not comfortable with. The phrases and words underlined don’t sit right with me and are not authentically Riki. They are not how I speak or words that I would use to describe myself.

  • My career in comms began with ‘a passion for storytelling’ - did it really? Who says? THIS HAS BEEN MADE UP
  • I’m a ‘visionary leader’. I’m not bad. Actually, I’m pretty good but I’m no Sheryl Sandberg, Steve Jobs or Richard Branson! To be described as visionary is a stretch.
  • I have an ‘innovative mindset’ - compared to who? How is it innovative?
  • I’m at the ‘forefront of the industry’ – correct but needs to be clarified / context added e.g. ‘in Ireland and Northern Ireland’ 
  • RNN Comms has grown ‘exponentially’  - I wish! This is an untruth. We’ve experienced solid growth but ‘exponentially’ would be a huge exaggeration and a lie

And these are just some of the inaccuracies in a 200-word piece. Tone and context matter; otherwise, this just reads as a lazy piece of AI copy, and plenty of readers will recognise it as such.

Your copy should be true to what you’re trying to communicate. ChatGPT is an extremely powerful tool but without context, tone of voice and a critical eye, it will fall flat. Do we want to be part of something that churns out bland, unsubstantiated copy?

And that’s not to mention the emergence of AI content-checker tools, and how tech giants such as Google will start to consider and rank AI-generated copy. Google has already warned against the overuse of AI. It doesn’t penalise blog posts generated by ChatGPT… YET!

It’s a minefield, which brings us back to ‘a little knowledge is a dangerous thing’!

To this end, and aligned with our values, we’re about to launch our AI Charter. Due in July, this will be a public charter setting out how we use AI at RNN, how we will communicate when AI has been used in the generation of copy, and how we expect our clients to behave with regard to AI disclosures.

And to that end, I’ll let ChatGPT have the final word…

