AI Workforce: Why governance is the cornerstone of trust

This article first appeared in Digital Edge, The Edge Malaysia Weekly, on July 14-20, 2025
Artificial intelligence (AI) is changing the way organisations work, from streamlining operations to speeding up decisions and transforming how they engage with customers. In fact, a 2024 survey by International Data Corporation (IDC) found that around 70% of Asia-Pacific organisations expect a new breed of AI, known as agentic AI, to disrupt their business models within the next 18 months.
AI agents mark the next step in the evolution of generative AI. Unlike most traditional tools that rely on human input at every step, these autonomous virtual assistants make decisions in real time, learn continuously and operate with increasing independence.
Agentic AI offers organisations a powerful advantage: it optimises resources, reduces costs and frees teams to focus on higher-value work.
However, despite these clear advantages, many organisations still find that their AI initiatives fall short of delivering the anticipated strategic outcomes, often because of challenges that emerge during implementation, integration and scaling.
Why AI projects run into trouble
In the two years since ChatGPT’s explosive debut, organisations have tried to capitalise on the benefits that AI potentially offers, but many have met with limited success.
According to A Playbook for Crafting AI Strategy, a report Boomi developed with MIT Technology Review Insights, half of business leaders cite data quality as the biggest hurdle to successful AI deployment. Models that learn from incomplete or low-quality information produce inaccurate output.
One reason organisations struggle to reap the full benefit of AI is that AI agents are only as reliable as the systems that train and govern them. In some instances, AI systems may generate responses that sound credible but are entirely fabricated, a phenomenon commonly known as “hallucinations”. In the case of AI agents, such errors can scale quickly.
The consequences can be significant, ranging from privacy breaches to reputational harm. Hence the need for clear guidance and robust governance.
AI investments in Asia-Pacific are expected to reach US$110 billion (RM465 billion) by 2028. To turn that investment into value, organisations must build on a foundation of high-quality data and support it with strong principles of trust, governance and compliance.
Scaling AI safely
Malaysia is among the countries leading the way on responsible AI. In September 2024, the country launched its Artificial Intelligence Governance and Ethics Guidelines (AIGE).
This framework is designed to ensure the safe use of AI technology through compliance with the principles of responsible AI. Backed by the National AI Office and dedicated working groups on policy and ethics, the framework thoughtfully balances innovation with integrity.
However, even with strong guidelines like Malaysia’s, policies often remain aspirational without robust governance and enforcement.
For instance, Boomi’s research indicates that 45% of organisations cite security, privacy and compliance as major obstacles to scaling their AI efforts. These are not merely technical hurdles but direct consequences of an inadequate governance framework that fails to define clear responsibilities, establish proper controls or ensure adherence to regulations.
This governance gap becomes even more critical with the rise of agentic AI as most existing governance models are designed for static systems, not for autonomous and self-evolving agents.
That’s why forward-looking businesses are turning to purpose-built platforms that offer centralised oversight of AI agents.
These advanced governance solutions go beyond basic oversight. They provide actionable insights by helping to define, enforce and evolve policies. They ensure that AI use remains ethical, accountable and aligned with enterprise values.
With tools like application programming interface (API) management and AI agent directories, they give full visibility into how AI is used, from the data it processes to the decisions it makes and the patterns of its use.
More importantly, these systems allow organisations to step in early when something goes wrong. In complex environments where many AI agents are working together, spotting and stopping potential issues quickly is critical. With the right level of oversight and control, businesses can move forward with confidence, knowing their AI tools are behaving responsibly and securely.
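To make that kind of oversight concrete, the short Python sketch below shows one way a policy gate might sit between an AI agent and the systems it touches, checking each requested action against defined rules and logging every decision for later review. It is purely illustrative: the names (PolicyGate, AgentAction and the example policies) are invented for this article and do not describe Boomi's platform or any specific product.

```python
# Illustrative sketch of a governance "policy gate" for AI agent actions.
# All class and policy names here are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable


@dataclass
class AgentAction:
    agent_id: str   # which agent requested the action
    operation: str  # e.g. "read", "write", "delete"
    resource: str   # the data or API the agent wants to touch


@dataclass
class Policy:
    name: str
    allows: Callable[[AgentAction], bool]  # True if the action is permitted


class PolicyGate:
    """Checks every agent action against registered policies and keeps an audit trail."""

    def __init__(self, policies: list[Policy]):
        self.policies = policies
        self.audit_log: list[dict] = []

    def authorise(self, action: AgentAction) -> bool:
        violated = [p.name for p in self.policies if not p.allows(action)]
        # Record every decision, allowed or blocked, so humans can review
        # agent behaviour and step in early when something looks wrong.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "agent": action.agent_id,
            "operation": action.operation,
            "resource": action.resource,
            "violations": violated,
        })
        return not violated


# Example policies: block destructive operations and keep agents out of HR data.
gate = PolicyGate([
    Policy("no-deletes", lambda a: a.operation != "delete"),
    Policy("no-hr-data", lambda a: not a.resource.startswith("hr/")),
])

print(gate.authorise(AgentAction("invoice-bot", "read", "finance/invoices")))  # True
print(gate.authorise(AgentAction("invoice-bot", "delete", "hr/records")))      # False
for entry in gate.audit_log:
    print(entry)
```

The design point this sketch tries to capture is that the gate, not the agent, owns the rules: policies can be added, tightened or retired centrally, without retraining or modifying the agents themselves.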
Not just a tech issue, it’s a cultural one
AI governance is deeply rooted in people and culture. While AI agents are not human, they must be guided by human values and judgement. And although many organisations have adopted governance frameworks, they often struggle with internal resistance, vague responsibilities and an inconsistent understanding of governance.
The goal should be to establish a shared understanding of governance that becomes second nature across the business.
Deloitte’s research shows that companies with mature governance structures see 28% higher staff usage of AI tools and almost 5% more revenue growth. Yet across the Asia-Pacific, 91% of organisations still have only the basics in place. That’s a clear signal that more work is needed to ensure investments yield returns.
As AI adoption expands, businesses must navigate the learning curve of AI integration, managing the associated risks and, most importantly, building systems that foster trust with users, customers and society as a whole.
David Irecki is the chief technology officer for Asia-Pacific and Japan at Boomi, which specialises in integration-platform-as-a-service, application programming interface management, master data management and data preparation