How to measure, communicate the impact of AI at news companies
Generative AI Initiative Blog | 01 September 2025
“How can we effectively measure AI maturity and track progress across business, people, and tech/product dimensions as we continue building our AI capabilities?” one AI leader at a media company asked me recently.
Further, this impact often needs to be communicated to top management and the board. How do you manage their expectations and do this well?
Here are some suggestions from other AI innovators in the industry for addressing this challenge:
If you are introducing AI tools, measure the adoption of each tool rather than looking at an aggregate. The sum total will hide variations and nuances that are important to measure, an AI leader at a European media company pointed out.
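A minimal sketch of the point above, with hypothetical tool names and illustrative adoption numbers: a healthy-looking aggregate can mask a tool that almost nobody uses.

```python
# Hypothetical monthly active users per AI tool (illustrative numbers only).
adoption = {
    "transcription": 180,
    "headline_suggestions": 150,
    "article_summaries": 12,  # barely used -- invisible in the aggregate
}

total_staff = 200

# Aggregate adoption rate across all tools vs. a per-tool breakdown.
aggregate_rate = sum(adoption.values()) / (len(adoption) * total_staff)
per_tool_rate = {tool: users / total_staff for tool, users in adoption.items()}

print(f"aggregate adoption: {aggregate_rate:.0%}")  # looks fine on its own
for tool, rate in sorted(per_tool_rate.items(), key=lambda kv: -kv[1]):
    print(f"  {tool}: {rate:.0%}")  # reveals the struggling tool
```

With these made-up numbers, the aggregate reads as a respectable 57% while the per-tool view shows one tool at 6% adoption.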
If you frame using an AI tool or application as an experiment, then you go in with a hypothesis and with a clear idea of the metrics that measure its success. If it is a new tool, start with a pilot, with a pilot metric, so you can run a clean test that has clear results.
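The experiment framing above can be sketched in a few lines. Everything here is hypothetical: the metric (time-to-publish), the threshold, and the pilot data are stand-ins; the point is that the success criterion is fixed before the pilot runs.

```python
# Hypothesis (fixed before the pilot): the AI summary tool brings average
# time-to-publish under a pre-registered threshold. All numbers illustrative.
THRESHOLD_MINUTES = 30

# Minutes per story observed during the pilot.
pilot_times = [26, 31, 24, 28, 29, 27, 33, 25]

mean_time = sum(pilot_times) / len(pilot_times)
success = mean_time < THRESHOLD_MINUTES  # clean pass/fail against the pre-set bar

print(f"mean time-to-publish: {mean_time:.1f} min "
      f"-> {'pass' if success else 'fail'}")
```

Because the metric and threshold were decided up front, the pilot yields a clear result rather than a post-hoc justification.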
Consider running an employee satisfaction survey. If using a tool makes them feel less overwhelmed or more productive, it is a win.
Keep a running list of successes. One AI leader classifies all experiments as “filler and killer.” Newsroom staff often come forward with “filler” ideas — an automated Facebook post nobody engages with, for example. She finds that using AI to take on filler tasks is useful: “The more filler activities I can use AI to deal with, the better” because it frees the newsroom up for higher value journalism. But more importantly, she highlights “killer” experiments that deliver significant value and demonstrate AI’s impact. “Develop a list of killer experiments,” she said, and communicate that upwards, while making it clear that you are growing the list at both ends.
“Have a single key metric that is consistent” and present it regularly, said another AI leader. “Higher ups are happy to glom onto it. Find a repeatable format that is easy for them to wrap their heads around.” No need to make it fancy. It might be something as straightforward as usage statistics for a particular tool, month to month.
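As a sketch of "usage statistics for a particular tool, month to month," one simple, repeatable metric is unique monthly active users. The event log below is hypothetical:

```python
from collections import Counter
from datetime import date

# Hypothetical usage log for one AI tool: (user, date of use).
events = [
    ("ana", date(2025, 6, 3)), ("ben", date(2025, 6, 10)),
    ("ana", date(2025, 7, 1)), ("ben", date(2025, 7, 8)),
    ("cho", date(2025, 7, 15)), ("ana", date(2025, 8, 2)),
]

# One consistent key metric: unique monthly active users, month to month.
mau = Counter()
seen = set()
for user, day in events:
    key = (day.strftime("%Y-%m"), user)
    if key not in seen:  # count each user at most once per month
        seen.add(key)
        mau[key[0]] += 1

for month in sorted(mau):
    print(month, mau[month])
```

The same three-line table, presented every month in the same format, is exactly the kind of repeatable artifact leadership can wrap their heads around.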
It is more important to be interesting than complete when communicating, said Tina Nunno, managing vice president of AI business value at Gartner. Don’t inundate your leaders with the details (particularly if you have a technical background and know them all). What do they really want to know? They are concerned about value for shareholders and stakeholders. What is the value you are pursuing? Focus on that.
Set expectations by quantifying risks and readiness, Nunno said. What are the obstacles to overcome and what will it take for the company to overcome them? For example: Will your organisation be able to hire scarce AI talent or is outsourcing it a better option?
Always present AI through the lens of shareholder and stakeholder value. Key questions to answer: How will AI impact them and employees and the brand? Will it create more volatility for you or help manage it? Where and when will it be visible on the financials?
Remember: Measuring maturity is different for traditional AI and for GenAI. We are used to old-fashioned AI, where maturity is mapped with benchmarks like model accuracy, recall, precision, cost savings, and operational lift over time. But GenAI maturity is harder to measure, in part because it is democratised and cross-functional. It is not only skilled data scientists and engineers interacting with the tools any more. Anyone can prompt and experiment, making the journey to enterprise-wide maturity more complex and nonlinear. Metrics here can revolve around user satisfaction, employee retention, creative output quality, and time savings.
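For contrast with the softer GenAI metrics, the classic traditional-AI benchmarks named above (accuracy, precision, recall) are mechanical to compute. A toy illustration with made-up binary predictions:

```python
# Toy binary classification results (illustrative only):
# 1 = positive class, 0 = negative class.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

pairs = list(zip(y_true, y_pred))
tp = sum(t == 1 and p == 1 for t, p in pairs)  # true positives
fp = sum(t == 0 and p == 1 for t, p in pairs)  # false positives
fn = sum(t == 1 and p == 0 for t, p in pairs)  # false negatives

accuracy = sum(t == p for t, p in pairs) / len(pairs)
precision = tp / (tp + fp)  # of predicted positives, how many were right
recall = tp / (tp + fn)     # of actual positives, how many were found

print(f"accuracy={accuracy:.2f} precision={precision:.2f} recall={recall:.2f}")
```

GenAI maturity offers no equivalent single-number scorecard, which is precisely why the article suggests falling back on satisfaction, retention, output quality, and time-saved measures.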
If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.