News companies struggle to trust AI when they don’t trust the companies behind it

By Richard Fairbairn

Glide Publishing Platform

London, United Kingdom

It’s hardly a revolutionary statement, but we all know trust in media is important. It underpins the relationship between news publishers and audiences.

From a simple business perspective, someone is unlikely to become a subscriber or follower if they don’t trust you. Trust is what catalyses the relationship and allows it to attain and grow value, and you risk it at your peril.

Trust, to media boardrooms, isn’t only about the relationship between publishers and audiences. Trust equally has to exist between media companies and the various technologies holding their future in a vice.

They must trust that it works — and trust that it is being offered fairly — to protect that vital audience trust and to be able to sleep at night knowing firms that like to move and break things have such a hold over their businesses.

This is increasingly hard to do. Media has as much right to demand trust in its tech as its audiences do of the media.

AI has brought the topic of trust into the spotlight for tech platforms, media companies, and consumer audiences.

It’s those collective beliefs that make so many of us increasingly dumbfounded by the recent actions of some of the AI platforms desperately trying to convince us all to become even more reliant on their services and products.

Distrust built in

In August, OpenAI showed that, for all its supposed genius, it is a young company that is still either naive or disingenuous when dealing with real people and enterprise customers.

It already has a poor reputation in media and publishing circles, and the company did little to help itself when the launch of its latest GPT-5 model saw the abrupt removal of several older models that were in constant use by enterprises or being used to build numerous services and products.

Uproar, outrage, backpedalling, and a lesson learned. For a company increasingly targeting enterprise users, it didn’t paint a picture of one that knows how to deal with them.

Elsewhere, the actions of other AI and tech platforms are just as puzzling if building trust is the goal.

AI Q&A start-up Perplexity, vying to be the new Google and a vocal proponent of fair compensation for publishers, was embarrassed after Cloudflare, the Web’s leading anti-bot and security firm, alleged that Perplexity ignores site robots.txt files and no-scrape rules to secretly take content and data against owners’ wishes.
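For context, honouring those rules is not complicated. The sketch below, using nothing but Python’s standard library, shows what a compliant crawler does before fetching a page; the site URL and user-agent string are hypothetical placeholders, not values used by Perplexity or Cloudflare. The allegation, in effect, is that checks like this are skipped or sidestepped by disguising the crawler’s identity.

# Minimal sketch of a crawler honouring robots.txt, using Python's standard library.
# The site and user-agent below are hypothetical examples for illustration only.
from urllib.robotparser import RobotFileParser

SITE = "https://example-news-site.com"   # hypothetical publisher site
USER_AGENT = "ExampleAIBot"              # hypothetical, openly declared crawler name

def may_fetch(url: str) -> bool:
    """Return True only if the site's robots.txt permits this user agent to fetch the URL."""
    rules = RobotFileParser()
    rules.set_url(f"{SITE}/robots.txt")
    rules.read()                          # download and parse the site's crawl rules
    return rules.can_fetch(USER_AGENT, url)

article = f"{SITE}/news/some-article"
if may_fetch(article):
    print("robots.txt allows fetching", article)
else:
    print("robots.txt disallows it; a compliant crawler stops here")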

Perplexity, already tainted in the eyes of many publishers, retaliated by saying Cloudflare is basically staffed by incompetents who don’t understand how the Web works, a response strikingly similar to what it told the BBC after a demand to stop stealing content.

Which side of the “debate” do you think more CEOs trust?

Luckily, Perplexity and Cloudflare are both hosting events at the forthcoming INMA Media Tech & AI Week in San Francisco, where getting the pair on stage to talk it over would surely be the must-see session of the week for anyone in tech and publishing. Finally, a fireside chat that might need a fire extinguisher!

Needless to say, the dissipation of trust continues.

Meta wants more businesses to use its AI tools and products, but internal documents revealed horrifying information about its rulebooks for chatbots, which, frankly, sound like an abuser’s handbook. How can you now trust that system as a backbone for your product?

At the same time, Meta was telling business users it is clamping down on scam content, yet it didn’t take long to see that claim is difficult to trust, too.

Meanwhile, Microsoft, usually a steady ship in enterprise tech, rocked businesses and developers earlier this month when it switched off a key part of Bing that was in heavy use as a core component of business and enterprise AIs running on Microsoft Azure.

I can’t say xAI is a trust leader yet, but massive U.S. government contracts would no doubt have burnished its shaky reputation. And it seems it came within days of just that scenario, right up until its Grok AI started acting like a horror-movie baddie and prompted a rethink of any “Grok.gov” future. It’s probably difficult to trust that one to run your helpline any time soon.

Google ups and downs

Some trust issues are more subtly pernicious, as many publishers who trusted Google algorithms to deliver success know. Unsignalled search algorithm changes still upend traffic, and publishers know it’s risky to rely on major public-facing services, particularly Search, Discover, and AI Mode, to predictably drive traffic their way.

UK media consultant David Buttle and Press Gazette recently revealed just how much traffic now goes to news sites from Google Discover, based on figures released by Web analytics firm Chartbeat. This data showed Discover is now the largest single source of traffic for many publishers — and how exposed it makes them to Google’s whims.

You can read Buttle’s piece here, but as he says: “The growing reliance on Discover creates a new platform dependency that’s subject to all the risks inherent in that. Particularly when [Google] has proven itself willing to disregard the interests of publishers and act in a brazenly anticompetitive manner in order to secure an advantage elsewhere.”

Not for the first time, the message is: don’t trust Google to prop up your site. Or perhaps don’t trust Google’s word at all, according to the many analysts and organisations saying Google is lying when it claims its AI Mode does not reduce traffic to Web sites.

Ironically, where Google offers a B2B service, its forward signalling can be very good. As an example, the slow departure of Universal Analytics in favour of GA4 was managed over nearly four years, plenty of time to plan and for admins and developers to get tired of the reminders.

There was a very good reason for this, of course: Discover is of no financial significance to Google, which is what makes it so risky to rely on, while having Google Analytics on eight out of 10 sites collecting data is of untold value to the firm.

Golden goose or golden noose?

We all love new features and innovations, but in enterprise and business technology, it’s better to get bored of reminders than surprised by a shutdown. In full transparency, the company I work with is an AWS Partner. One reason we chose AWS is that it is by far the best at letting our chief technology officer and development operations people sleep soundly at night, and thus our customers too. It’s why we offer 20+ AI models to customers rather than a single vendor or service, and none of them take your intellectual property.
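To make that design choice concrete, here is a minimal, purely illustrative sketch of vendor-agnostic model access: the product talks to one small interface, and each provider sits behind a swappable adapter, so a retired model or a misbehaving vendor becomes a configuration change rather than a rebuild. The provider names and adapter bodies are placeholders, not our actual implementation or any vendor’s real SDK.

# Hypothetical sketch: product code depends on one small interface, while
# individual model vendors are swappable adapters behind it.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Completion:
    text: str
    provider: str

# Each adapter wraps one vendor's SDK behind the same plain-text signature.
Adapter = Callable[[str], str]

def provider_a(prompt: str) -> str:
    return f"[provider-a reply to: {prompt}]"   # placeholder for a real SDK call

def provider_b(prompt: str) -> str:
    return f"[provider-b reply to: {prompt}]"   # placeholder for a real SDK call

ADAPTERS: Dict[str, Adapter] = {"provider-a": provider_a, "provider-b": provider_b}

def complete(prompt: str, preferred: str, fallback: str) -> Completion:
    """Try the preferred provider; fall back if it is retired, down, or misbehaving."""
    for name in (preferred, fallback):
        try:
            return Completion(text=ADAPTERS[name](prompt), provider=name)
        except Exception:
            continue                            # swap vendors without touching product code
    raise RuntimeError("no configured model provider is available")

print(complete("Summarise today's front page.", preferred="provider-a", fallback="provider-b"))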

Perhaps it’s no surprise that AWS was the first to roll out a trust and safety toolkit for enterprise AI users and builders, of which we promptly made use.

There is a major trust issue with AI across the board. While regulations outlawing the ludicrous claims of some AI firms are growing teeth, public trust in AI is not great, and it remains especially poor in news and media, as fresh research from the Online News Association reiterates.

The truth is, while trust in AI is low, AI itself can be fixed. It is just a tool.

It is the actions of the companies behind many of the AI platforms that may do more damage to our trust in the long run.
