Tuesday, 21 October 2025 05:15
From left: Meta South and Central Asia Public Policy Director Sarim Aziz; Rajah & Tann Cybersecurity CEO Wong Onn Chee; Nestlé Lanka Assistant Director – Legal and Regulatory Affairs and Company Secretary Keerthi Pathiraja; General Counsel, Data Protection Officer and PDPA drafting committee member Trinesh Fernando; and moderator, CICRA Holdings Group Director and CEO Boshan Dayaratne
By Hiyal Biyagamage
The final session of the 11th Annual Daily FT–CICRA Cyber Security Summit 2025 concluded on a forward-looking note, exploring the intersection of AI governance, open innovation, and cybersecurity resilience. Delivering the keynote under the session theme “AI Governance Models and Frameworks: What is Best for Asia?”, Meta Director of Public Policy for South and Central Asia Sarim Aziz urged governments, regulators, and enterprises across the region to embrace open-source collaboration and transparent governance as the foundation for responsible AI adoption.
The state of AI evolution
Aziz began by tempering global hype around artificial intelligence, noting that current systems still lack true reasoning ability. Models like LLaMA, GPT, and Claude, he said, remain pattern predictors rather than cognitive thinkers, a technological checkpoint rather than a fundamental limitation. He attributed the rapid acceleration of AI progress to hardware breakthroughs, explaining that GPU performance is now evolving fifteen times faster than predicted, dramatically reducing the cost of training and deploying models. This, he argued, is creating a new window of opportunity for smaller nations like Sri Lanka, which can now access advanced compute capabilities once exclusive to global tech giants.
Setting out Meta’s open-source philosophy, Aziz described how innovation scales faster when knowledge is shared. He traced this ethos to Meta’s Fundamental AI Research Lab (FAIR), which produced open frameworks like React, PyTorch, and the LLaMA series of models. “When AI is open, innovation doesn’t depend on where you live. It depends on your imagination and your computer,” he remarked.
He highlighted that open-source AI allows countries to retain data sovereignty and develop localised systems in their own languages, free from reliance on foreign cloud vendors. For Sri Lanka, he said, this presents a direct opportunity to develop secure, language-specific models that address national priorities, from financial compliance to e-governance.
Global alliances and regional inclusion
Aziz outlined Meta’s participation in several key alliances that are actively shaping global AI standards: the AI Alliance, the Partnership on AI, and MLCommons. These groups, he explained, are not policy clubs but working laboratories that define the technical rules of safety, interoperability, and provenance. Provenance standards, he noted, make it possible to trace whether a piece of content was generated by AI.
He invited Sri Lankan universities, regulators, and technology institutions to join these networks, emphasising that Asia must have a seat at the table. “These are open tables. Sri Lanka has every right to sit at them. You can shape how AI safety looks in Asia,” said Aziz.
Aziz went on to unveil details of LLaMA 4, Meta’s latest multimodal large language model. Designed to understand text, images, and audio, it is lightweight enough to run on a single NVIDIA H100 GPU while maintaining enterprise-grade efficiency through a Mixture of Experts (MoE) architecture. “LLaMA 4 can be hosted privately; your data, your infrastructure, your control,” he emphasised, underlining Meta’s commitment to data sovereignty and on-premise deployment.
In alignment with the summit’s cybersecurity theme, Aziz also introduced Meta’s open-source AI safety toolkit, including CyberSecEval 4 and CodeShield, frameworks that benchmark and analyse AI systems for vulnerabilities such as prompt injection and data leakage. These tools, once developed for Meta’s internal use, have been open-sourced to strengthen collective security. “The goal is to make AI both smarter and safer,” he said.
Lessons from Asia
Citing regional success stories, Aziz showcased open AI applications from Sri Lanka, Vietnam, Singapore, Indonesia, and Australia, demonstrating how open ecosystems democratise access to innovation. He praised Sri Lankan developers for initiatives like Watchdog’s “Dissect” fact-checking platform and the “SIN-LLaMA” Sinhala-tuned language model, noting that these projects embody Asia’s growing AI confidence. “Open systems allow nations to build for themselves, not wait for someone else to build for them,” he observed.
Returning to the summit’s central focus, Aziz argued that open AI systems enhance security rather than weaken it. Transparency allows more experts to inspect vulnerabilities, interoperability enables seamless integration with local threat detection tools, and local developers gain direct access to learn, modify, and fortify systems. “Security thrives in sunlight,” he said. “When systems are open, more eyes can test, fix, and improve them.”
He added that Meta itself relies heavily on open-source components, demonstrating that openness leads to faster patch cycles and stronger collective defence.
Aziz concluded by outlining Meta’s research into multi-agent AI systems, collaborative models that act as “AI colleagues,” capable of coordinating across business and governance functions. While these systems promise immense productivity gains, he cautioned that they introduce new coordination risks, underscoring the need for robust governance.
Quoting Meta CEO Mark Zuckerberg, Aziz ended with a clear call to action. “AI shouldn’t be controlled by just three companies.” He urged Sri Lankan policymakers and developers to contribute to the global AI commons by experimenting, adapting, and creating local frameworks that prioritise safety, transparency, and inclusion.
Singapore’s innovation-friendly governance model
Delivering the guest address at the closing session of the Daily FT–CICRA Cyber Security Summit 2025, Rajah & Tann Cybersecurity Pte Ltd CEO Wong Onn Chee shared insights on Singapore’s approach to AI governance, one that balances innovation with accountability. His presentation offered a deep dive into the city-state’s regulatory model and its broader implications for Asia’s AI landscape.
Wong described Singapore’s AI governance framework as voluntary but comprehensive, promoting responsible innovation through collaboration between industry and regulators. Central to this framework is AI Verify, an open-source AI governance testing toolkit developed by the Infocomm Media Development Authority (IMDA) and now stewarded by the AI Verify Foundation. It allows companies to evaluate their AI systems for bias, explainability, robustness, and privacy, ensuring transparency across a model’s entire lifecycle.
“AI Verify represents Singapore’s belief that trust and transparency are the enablers to innovation,” Wong explained. The initiative’s open-source nature, he added, allows organisations across Asia to adapt and customise governance models to local contexts, reinforcing regional alignment while encouraging innovation.
The Cyber Security Agency (CSA) of Singapore complements this with guidelines for AI security, risk management, and critical infrastructure protection. Wong highlighted that the CSA’s framework integrates continuous threat monitoring, incident response, and real-time risk evaluation, ensuring that AI systems deployed in sensitive sectors remain secure and resilient.
Learning from regional contrast
Wong contrasted Singapore’s “light-touch” model with Malaysia’s compliance-heavy regulatory approach, noting that the two reflect different national priorities. While Malaysia focuses on data localisation, content moderation, and mandatory licensing, Singapore adopts a “governance-through-enablement” strategy, leveraging existing laws such as the Online Criminal Harms Act (OCHA), the Protection from Online Falsehoods and Manipulation Act (POFMA), and the Cybersecurity Act as enforcement backstops.
This framework, he said, has allowed Singapore to remain agile without stifling technological progress. “Regulation should guide, not restrain. The goal is to create an ecosystem where ethical AI can thrive,” he noted.
Wong also cited recent incidents involving Meta and Salesforce, where weaknesses in AI oversight led to data exposure and misinformation risks. These cases, he argued, underscore the urgent need for clear accountability structures and proactive compliance mechanisms within AI governance frameworks.
Strategic roadmap for Asia
Concluding his address, Wong outlined a multi-layered roadmap for Asian nations, combining self-regulation, legal safeguards, and international cooperation. He called for regional governments and enterprises to establish transparent governance systems, engage stakeholders, and adopt continuous feedback mechanisms to evolve policies alongside technology.
“Asia’s diversity is its advantage,” Wong emphasised. “If we can align around shared principles of fairness, safety, and transparency, we can define a governance model that reflects our region’s values while competing globally.”
He urged policymakers to adopt proactive compliance, openness, and collaboration as guiding principles. “AI governance is about confidence. The more transparent we are, the more resilient our future becomes.”
Panel discussion on building trust through law, ethics, and transparency in AI
The summit’s final panel discussion, moderated by CICRA Holdings Group Director and CEO Boshan Dayaratne, brought together Nestlé Lanka Assistant Director – Legal and Regulatory Affairs and Company Secretary Keerthi Pathiraja, and General Counsel, Data Protection Officer and Personal Data Protection Act (PDPA) drafting committee member Trinesh Fernando, to explore how Asia can translate emerging AI governance frameworks into actionable legal and ethical safeguards.
Boshan Dayaratne emphasised that AI governance is no longer a theoretical conversation but a strategic business imperative. “AI is now embedded in how we work, trade, and communicate. The question is not whether to regulate, but how to do so responsibly without slowing innovation,” he observed.
Keerthi Pathiraja offered a corporate perspective, noting that organisations like Nestlé view AI ethics and data governance as extensions of brand trust. He highlighted that global companies must navigate a patchwork of regulations, ranging from Europe’s AI Act to local frameworks, while maintaining consistency in compliance and consumer transparency. “In an era where algorithms influence purchasing and perception, ethics becomes a competitive advantage,” he said.
From a legal and policymaking standpoint, Trinesh Fernando reflected on Sri Lanka’s ongoing digital policy journey. As a member of the PDPA drafting committee, he underscored that future legislation must align with international interoperability standards while ensuring local accountability. He called for a “living governance model” that evolves with technology and includes periodic review mechanisms to ensure relevance.
Strategic partners of the 11th annual Cyber Security Summit were Visa and Sysco LABS; the platinum partner was South Asia Technologies; the community impact partner was Meta; and the payment network partner was LankaPay. Other supporters included platform partner #HashX, podcast partner Techtalk, hospitality partner Cinnamon Grand Colombo, creative partner MullenLowe Sri Lanka, and electronic media partners Yes101, TV1 and News1st.
(Pix by Upul Abeysekara and Ruwan Walpola)