Leading B2B SEO Agency Discusses the Use of AI-Generated Content
The SEO Unfiltered podcast, produced by Geeky Tech, a leading B2B SEO agency for tech companies, recently discussed the controversial topic of AI-generated content. Genny Methot, the company’s content manager and podcast host, was joined by Jo Priest, the head of online authority at the agency and an expert on SEO tools, algorithms, and, of course, generative AI.
Together, they tried to answer the hot questions that many digital marketers are asking about ChatGPT and generative AI tools. Here are some of the key points discussed in the episode:
Whilst Jo agrees that using AI as a virtual assistant (for generating ideas, summarising articles, and writing meta descriptions, for example) is feasible, he states that it is not quite there yet for long-form content.
The argument provided is that large language models (LLMs) aren’t capable of independent thought or unique ideas. They simply “study” information already available. So, ChatGPT can generate an article on a topic, but it will not have anything new or interesting to say; it will most likely be a rehash of existing material. Whilst AI-generated long-form content will not necessarily be penalised by Google, the fact is that the search giant prioritises useful content.
As a result, Jo’s recommendation is for writers to use their discretion, as most want their content to be high quality. An AI-generated draft might contain errors and inaccuracies, so it is up to humans to make sure the content they are putting out there is accurate and high quality.
According to Jo, relying on AI content detection tools might lead to false positives. As he explains, “[AI detection tools] look at the prediction of the next token…The way ChatGPT works is that it kind of splits words, or sentences, into little tokens–about three to four characters. Then, it tries to predict what comes next.”
He goes on to explain that AI detectors look for how predictable the next token is. If the next token is a predictable one, the detectors will flag it. Then, depending on how many predictable tokens they find, they will “pass judgement”.
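To make that idea concrete, here is a rough Python sketch of a predictability check in the spirit Jo describes, using GPT-2 from the Hugging Face transformers library as a stand-in language model. The model choice, top-k cut-off, and scoring are illustrative assumptions, not the workings of any particular detection tool.

```python
# A minimal sketch of the token-predictability idea: flag each token that the
# model itself would have ranked among its most likely next-token guesses.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def predictability_score(text: str, top_k: int = 10) -> float:
    """Return the fraction of tokens the model ranked in its top-k predictions."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits            # shape: (1, seq_len, vocab_size)
    flagged = 0
    total = ids.shape[1] - 1                  # each position predicts the *next* token
    for pos in range(total):
        top = torch.topk(logits[0, pos], top_k).indices
        if ids[0, pos + 1] in top:            # the actual next token was highly predictable
            flagged += 1
    return flagged / total if total else 0.0

print(predictability_score("The cat sat on the mat because it was warm."))
```

A detector built on this principle would then compare the score against some threshold before “passing judgement” on a piece of text.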
The more flags, the higher the likelihood that the content was written by a computer. The problem with this type of testing, Jo says, is that if a writer has a predictable writing style, their content will get flagged, even if it is original, well-researched, and useful. Even though AI tools aren’t ideal for long-form content, they could be very useful for short-form copy, such as headlines or titles, meta descriptions, product descriptions, etc.
Many large e-commerce websites have individual product pages that need their own original copy. But if a site has hundreds or thousands of products to sell, writing individual copy for each product description isn’t always possible; as such, businesses generally use a template in which they can edit sections to create descriptions for new products. With AI tools like ChatGPT, one can generate hundreds of descriptions that are varied enough for search engines not to flag them as duplicate content.
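As a rough illustration of what that looks like in practice, here is a minimal sketch using the OpenAI Python client (v1+). The product list, prompt wording, and model choice are assumptions for the example, not recommendations from the episode.

```python
# Generate a short, varied description for each product in a catalogue.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

products = [
    {"name": "Wireless Ergonomic Mouse", "features": "2.4 GHz, 6 buttons, USB-C charging"},
    {"name": "Mechanical Keyboard", "features": "hot-swappable switches, RGB backlight"},
]

descriptions = {}
for product in products:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                f"Write a 40-word product description for '{product['name']}' "
                f"with these features: {product['features']}. Vary the phrasing."
            ),
        }],
    )
    descriptions[product["name"]] = response.choices[0].message.content

for name, copy in descriptions.items():
    print(f"{name}:\n{copy}\n")
```

As Jo cautions, drafts produced this way still need a human review for accuracy before they are published.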
Similarly, writing meta descriptions for hundreds of pages can be a tedious task for digital marketers. With AI, one can generate descriptions for multiple pages in a matter of minutes.
Whilst these were two of the main talking points of the episode, there were other issues and opportunities around AI content that were discussed on the show.
Overall, though, the conclusion was that, whilst AI can be very useful for planning and for producing large volumes of short-form content, it’s still not ideal for longer, informative content… yet.
The episode, “Is AI-Generated Content Ready For the World?”, along with the rest of the podcast, is available on the Geeky Tech website and on popular podcast-streaming platforms, including Spotify and Apple Podcasts.
Geeky News
Parallel House, 32 London Road
United Kingdom