AI integration at Star Media Group centres on data, frameworks, humans

By Xiao Yang

University of Amsterdam

Amsterdam, The Netherlands

Behind every news story, there’s a process: interviews to transcribe, data to crunch, headlines to write.

At Star Media Group Malaysia, many of these behind-the-scenes steps are now being accelerated by artificial intelligence. 

At the recent INMA Asia/Pacific News Media Summit, Kevin Seng, head of technology, described how AI is woven into the newsroom’s daily operations, from video production to social media publishing.

Star Media Group has been a cornerstone of Malaysian media since 1971. Over the decades, it has expanded from its flagship national newspaper into a diverse operation that now spans English and Bahasa-language titles, digital verticals covering everything from cars to property, radio stations, and large-scale events. 

But the challenges facing legacy publishers are universal: Audiences are fragmented, resources are thinner, and the demands for content across multiple platforms have never been greater.

AI experimentation

Like many others in the industry, Star Media’s first contact with AI came when OpenAI released ChatGPT in late 2022. 

“We started with individually testing … especially those in tech, so we started trying the tools to see whether they could actually help create content, improve workflows, or handle other tasks like image creation and video creation,” Seng recalled.

The early experiments generated excitement but also raised immediate concerns. Issues of data privacy, embargoes on sensitive content, and a lack of oversight quickly became clear. 

“Can we actually protect our data? How can we actually not let the AI train our data?” Seng asked.

For a newsroom that frequently deals with embargoed political or business news, these were not abstract questions.

This realisation prompted a shift from experimentation to structure. “The editorial and IT teams worked together. We actually came up with an SOP and policy on how we should use AI,” Seng said.

AI policies

These guidelines set out key principles. Accountability remained with journalists and editors even when AI assisted in the process. Transparency with readers and colleagues was essential. Data confidentiality had to be safeguarded. And perhaps most importantly, “a human must be the last check before the content gets published to the internet.”

On the technology side, the group standardised its use of enterprise-grade tools where data would not be harvested for external training. This ensured a higher degree of security and also made it easier to manage adoption consistently across departments.

AI integration

Once governance was in place, the newsroom’s focus shifted to integration. At first, staff relied heavily on cutting and pasting from external tools into the CMS.

But as Seng put it, “Once we started using the tools, people got used to them and began wanting better automation. Because if you’re relying heavily on an external tool, you’re still essentially just copying and pasting.”

To solve this, Star Media worked with vendors to integrate AI capabilities directly into its content management system and into workflow processes. That shift not only saved time but also encouraged wider adoption because tools were embedded in the platforms people already used every day.

The applications of AI quickly expanded across the newsroom. 

In audio-visual production, AI scripting tools broke stories down into scenes and suggested captions. Voice models trained on internal voices generated professional-quality narration. Automated translation allowed videos to be repurposed across Malaysia’s four major languages: English, Bahasa, Chinese, and Tamil. Subtitling and background music selection, once time-consuming tasks, were also streamlined.

“Voiceover is, I would say, a very big time saver for us,” Seng emphasised.

In graphics, AI tools were deployed to upscale low-quality photos submitted from the field, restore decades-old analog archive images for photobooks, and enhance visuals for both print and digital presentation.


Generative AI also helped navigate sensitive editorial challenges. 

In stories dealing with issues like workplace bullying, using models or stock imagery risked misrepresentation or even accidental exposure of victims. Instead, AI-generated imagery allowed the newsroom to illustrate such stories responsibly.


For reporters and editors, AI became a powerful assistant in content development. Journalists began using it for ideation and brainstorming, conducting topic research, and transcribing lengthy interviews or legislative proceedings. Fact-checking became faster, and data visualization more accessible.

Within the CMS itself, AI suggested headlines in different tones for different platforms, summarized stories, and applied Star Media’s unique house style rules. 

“AI will be able to do a house rule checking in terms of does that actually meet our standard in terms of how we actually use our internal rules,” Seng explained, noting all newsrooms have their own stylistic standards, and Star Media was no exception.

Meanwhile, in the background, AI improved data workflows by generating accurate metadata and keyword tags, categorising stories under frameworks such as the UN’s 17 Sustainable Development Goals, and powering more sophisticated content recommendations on the website.

This not only improved discoverability but also opened up potential new revenue streams by segmenting content in ways valuable to advertisers.

Lessons learned

After nearly two years of integrating AI into operations, Seng was frank about what works and what doesn’t. First and foremost, clarity is essential. “A clear framework, a clear policy, how you should actually use it,” he said, is non-negotiable for both editorial and non-editorial teams. 

Transparency with readers about when and how AI is used also matters; rather than shying away from disclosing it, Seng said, openness builds trust. Above all, human oversight remains critical. “Double-check, triple check … humans must check before the content gets published.”

He also stressed the importance of identifying “power users” within teams. These power users adopt new tools quickly, experiment with them creatively, and often propose new solutions that spread adoption more widely. 

Training sessions also played a pivotal role, both in equipping staff with foundational knowledge about prompts and limitations and in generating enthusiasm. 

“When they saw the wow effect, they would say, ‘Yeah, I should start using it,’” Seng said. Through these workshops, the tech team also discovered new use cases proposed by the journalists themselves.

Some of the lessons were more surprising, Seng said. When Star Media tried to train an AI model on 10 years of archived advice columns, the results were unexpectedly poor: “Although we have 10 years of data, the quality that's coming out is actually worse than with just one year of data.”

The takeaway: Quality of training data matters far more than quantity. Similarly, early pilots that lacked clear goals fizzled out. 

“We just thought that AI is very cool and can do this, but in the end, it didn’t really solve a problem,” he reflected. Today, every pilot project at Star Media must be tied to measurable outcomes, whether saving time or creating new value.

From automating repetitive tasks to enabling multilingual publishing, the technology’s real power lies in supporting human creativity and judgment, Seng said: “The creativity, or the person who’s using it, is actually very important, because humans still play a vital role.” 
