News outlet CNET generated around 75 articles with an unspecified AI tool.
That fact was highlighted by Futurism last week, sparking debate across the web.

Despite a disclaimer stating that the articles were reviewed by editorial staff, errors still made it through the system.
At least four of CNET's AI-generated pieces contained identified factual mistakes.
In a follow-up article, Futurism pointed out mistakes in multiple CNET pieces.

As Futurism pointed out, the financial figures in CNET's original piece were incorrect.
The example given would earn a person $300 in interest in the first year, not $10,300.
That's a significant difference, and one a human writer or editor would likely have caught.
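The $300-versus-$10,300 mix-up is a confusion between interest earned and the total ending balance. A minimal sketch of the arithmetic, assuming the hypothetical figures implied by those numbers (a $10,000 deposit at 3% annual interest):

```python
# Hypothetical figures implied by the $300 vs. $10,300 discrepancy.
principal = 10_000  # initial deposit
rate = 0.03         # 3% annual interest

interest_first_year = principal * rate               # interest earned: $300
balance_after_year = principal + interest_first_year # total balance: $10,300

print(f"Interest earned: ${interest_first_year:,.2f}")
print(f"Ending balance:  ${balance_after_year:,.2f}")
```

Reporting the ending balance as the amount "earned" is exactly the kind of slip a human editor tends to catch on sight.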

The article has since been updated, but an archived version shows the error.
Futurism's piece illustrates many of the weaknesses of current AI writing tools, chief among them that AI-assisted articles can still contain factual errors.

In fact, Futurism highlighted that the Associated Press has used AI to create articles since 2015.
But in those cases, AI was used to fill structured data into templates, not to generate entirely new text.
As someone who has written thousands of news articles, I readily admit that some posts are formulaic.
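Template-driven generation of the kind the AP uses is far more constrained than free-form text generation. A minimal sketch of the idea, with a hypothetical template and made-up data:

```python
# A minimal sketch of template-driven article generation:
# structured data is slotted into a fixed sentence template,
# so the model can't invent facts outside the supplied fields.
# The template wording and data below are hypothetical.
TEMPLATE = (
    "{company} reported {metric} of ${value} million for {quarter}, "
    "{direction} {percent}% from a year earlier."
)

def fill_template(data: dict) -> str:
    """Fill the fixed template with structured data."""
    return TEMPLATE.format(**data)

report = fill_template({
    "company": "Acme Corp",
    "metric": "revenue",
    "value": 120,
    "quarter": "Q4",
    "direction": "up",
    "percent": 8,
})
print(report)
```

Because every number in the output comes straight from the input data, this approach sidesteps the factual-accuracy problems that free-form AI writing runs into.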

But content that does not fit within tight parameters requires a human touch, at least for now.
I also want to emphasize that no writer is perfect, human or otherwise.
I’ve certainly made mistakes and had to update articles accordingly.

But I feel like there’s a sense of accountability when a human creates an error.
I’ve had bosses message me when I shared a draft with an obvious mistake.
I’ve also had several editors work with me over the years to improve my writing.

Unfortunately, in the case of some of CNET's articles, that was not what occurred.