In every part of life, we follow rules that keep things balanced, meaningful, and sustainable. The same goes for the digital world, where every click, connection, and innovation can be an advantage or a liability. As technology continues to evolve, strong governance is not just important but essential. It keeps systems secure, processes efficient, and people protected.

Recently, I had the privilege of sharing the journey of my firm, P&A Grant Thornton, in championing AI governance at the Cyberconference hosted by Grant Thornton International. During my talk, I highlighted the key lessons we’ve learned along the way, insights shaped by real challenges, thoughtful strategies, and a commitment to responsible innovation. Let’s dive into what our journey looked like.

Lesson learned #1: Know the AI tool, read the fine print

Not all AI tools are created equal, which is why it’s critical to approach their adoption with care and diligence. It is essential to conduct thorough assessments with AI tool creators to understand how data is processed, secured, and moved through their systems. At my firm, we also engage focus groups from our Audit, Tax, Advisory, and Support units to ensure that everyone understands how the AI tool works. Most importantly, we document tool-specific risks and controls to maintain accountability and uphold governance standards.

We determined that the following steps are vital in evaluating AI tools:

- Understand how models are trained, where data resides, and how long it is retained

- Ensure that encryption and access controls are in place

- Ensure vendors are transparent and their systems are auditable

Lesson learned #2: Governance is a cross-functional effort

When talking about improving technological processes, we often assume that our IT teams alone should take the lead. However, effective governance goes beyond IT; it requires collaboration across key functions. Risk management teams play a vital role in defining the organisation’s risk appetite and creating mitigation strategies. Ethics and independence teams, on the other hand, ensure that professional standards are upheld. Legal counsel must also be involved, providing guidance on regulatory compliance and contractual obligations. Moreover, clients must be engaged to ensure transparency and acceptability in how AI is integrated into service delivery. With continuous collaboration among these teams, organisations can build a responsible and resilient AI ecosystem.

Building a strong AI governance framework requires a multi-disciplinary approach. This is how we did it:

- Form a steering committee composed of experts from various functions to ensure well-rounded oversight.

- Establish a shared accountability model where responsibilities are clearly defined and distributed across teams.

- Conduct regular stakeholder meetings to keep everyone aligned.

Lesson learned #3: Don’t reinvent the (AI governance) wheel

In an article, the Council on Foreign Relations, an independent, nonpartisan think tank and membership organisation based in New York City, argued that the U.S. government should leverage existing regulatory frameworks to govern AI rather than create entirely new agencies or legal structures, because sector-specific governance is more effective than a one-size-fits-all approach.

At P&A Grant Thornton, we’ve learned that having continuous conversations with other member firms about their use of AI strengthens governance across our network. These discussions help benchmark models and tools, allowing us to identify gaps and points for improvement. We also rely on resources such as guidelines on the use of generative AI issued by the National Institute of Standards and Technology (NIST) and by Grant Thornton International, host internal webinars and working groups, and maintain risk registers and policy templates. This allows each firm to adopt the practices best suited to its local context, ensuring that AI is not only effective but also aligned with different regulatory environments.

Lesson learned #4: Respect the humanity in the use of AI

While AI has transformed the way we work and how we connect with others, it is important to keep in mind that it lacks the ability to understand context, values, and ethical nuances the way we humans do. That’s why a human-centred approach to AI is vital: it ensures that technology serves people, not the other way around. At my firm, we’ve established human-in-the-loop principles across all AI-assisted processes, reinforcing the importance of human oversight and judgment. We also emphasise transparency in how AI is applied in client engagements to maintain trust and clarity. Additionally, we invest in training and upskilling our professionals, not just in how to use AI tools but also in how to critically assess AI-generated outputs.

Through these efforts, we have cultivated values in our workforce that are essential when using AI:

- Bias awareness

- Transparency

- Empathy in automation

Now that we know, what comes next?

Summing up all the insights we’ve gained, we now understand that training our people in these principles and embedding them into our systems are essential to ensuring that AI is used responsibly, securely, and in alignment with our organisation’s values and regulatory obligations.

1. Deep familiarity with AI tools is non-negotiable; it enables informed decisions and reduces blind spots.

2. Governance must be inclusive and dialogue-driven, which results in stronger alignment and broader buy-in.

3. Collaboration accelerates maturity, resulting in faster implementation and fewer missteps.

4. Respecting the humanity in AI means ensuring that decisions remain accountable, that biases are actively mitigated, and that people remain at the centre of our professional judgment.

As we continue to explore the possibilities of AI, one thing remains clear: governance is more than just an IT policy; it is a human responsibility. Our journey at P&A Grant Thornton has shown us that responsible AI isn’t built overnight by a single team. It takes collaboration, curiosity, and deep respect for the people behind the data and decisions. And as we move forward, we remain committed to using AI not just smartly, but wisely.

 

As published in The Manila Times, dated 21 October 2025