
H2O.ai Partners AI Verify Foundation To Ensure Safe AI Deployment

KUALA LUMPUR, Oct 9 (Bernama) -- H2O.ai, an open-source leader in generative artificial intelligence (AI) and machine learning (ML), has announced a collaboration with the AI Verify Foundation to ensure the safe deployment of AI.

As part of the collaboration, H2O has launched an initiative alongside the foundation to give clients the ability to test and govern their AI systems using H2O’s platform, and to further support the global open-source community.

In addition, H2O has agreed to contribute benchmarks and code to AI Verify’s open-source Project Moonshot toolkit for large language model (LLM) application testing, and to support tests recommended by AI Verify on its ML and LLM Ops platform.

“H2O has been committed to the open-source community since our founding and we believe every organisation should have a strategy to safely test AI.

“Working with AI Verify clearly aligns with our values and we look forward to continuing to lead the charge for responsible AI adoption,” said H2O.ai chief executive officer and co-founder Sri Ambati in a statement.

Meanwhile, AI Verify Foundation executive director Shameek Kundu said: “We believe that appropriate tools and approaches to AI testing are critical to enable the adoption of AI for society, business and citizens. We are very pleased to have H2O, an active member of the foundation, as a partner in this journey.”

H2O’s contribution to AI Verify’s Project Moonshot provides one of the world’s first LLM Evaluation Toolkits, designed to integrate benchmarking, red teaming, and testing baselines.

The toolkit helps developers, compliance teams, and AI system owners manage LLM deployment risks by providing a seamless way to evaluate their applications’ performance, both pre- and post-deployment.

This announcement comes on the heels of H2O’s AI 100 List, which recognises the top 100 individuals driving innovation and impact in AI across industries and sectors globally.

-- BERNAMA