The Hong Kong Monetary Authority (HKMA), in collaboration with the Hong Kong Cyberport Management Company Limited (Cyberport), announced the launch of the second cohort of the Generative Artificial Intelligence (GenA.I.) Sandbox initiative on 28 April, 2025. Following the experience sharing from participating banks of first cohort at the launch, AIFT was joined by prominent speakers from HKUST, AWS and Alibaba at the panel themed “Expert Sharing: Risk Management of AI” to discuss on the opportunities and risks of Generative AI (GenAI) in the banking sector, highlighting the do’s and don’ts for an effective risk management strategy.
Many of the GenAI proof-of-concepts (POCs) today in Hong Kong banks evolve around enhancing internal productivity, such as summarizing market research & generating reports, and external-facing smart assistants for customer support and sales. Lettie Sin, Director of Strategy & Client Successat AIFT Enterprise, began by stating that where standard security controls and GenAI-specific development techniques including Retrieval Augmented Generation (RAG) and guardrails are in place, such measures may not fully address the vulnerabilities in Large Language Models (LLM) or GenAI systems. Attackers are consistently introducing new techniques to bypass guardrails through prompt injection, data poisoning and model manipulation. Insufficient understanding of GenAI-specific risks and inability to circumvent such vulnerabilities will lead to serious consequences for regulated institutions, including severe data leakage, reputational harm, or operational disruption.

GenAI demands specialized adversarial testing to uncover novel prompt, data, and model-specific exploits that classic methods never probe. Aligning with HKMA’s AI Risk Management framework, AIFT’s Vulcan provides enterprises a holistic and sustainable solution for adopting GenAI with security, safety and compliance with four focuses:
- Dynamic Cybersecurity Testing: A combination of proprietary vulnerability scanning tools and penetration test expertise are used to simulate adversarial attacks on the GenAI application through the frontend and identify vulnerabilities lying within the infrastructure & network
- A.I. vs A.I.: Multiple LLM agents are deployed to perform Red Teaming on the GenAI prompt layer: a Test Agent for automated test cases generation, a Simulation agent for Red Teaming, and an Evaluation agent to determine the prompt outputs received as Pass/Fail (LLM-as-a-Judge)
- Automated Monitoring: Vulcan continuously monitors all user inputs and LLM-generated outputs flowing through your GenAI app, and visualizes incidents for real-time detection and response
- Adaptive Guardrails: Vulcan comes with a fully customizable policy framework allowing clients to enable or disable guardrails and classifiers based on their needs. The defence draws on Vulcan’s adaptive attack approach to provide dynamic protection that changes over time
Lettie added, “As banks transition GenAI from proofs of concept to full deployment, embracing agile cybersecurity measures designed for GenAI’s unique challenges is critical. Engaging AI Red Teaming early prevents incidents that could undermine confidence in these new AI capabilities.”

AIFT’s committed research in the field of AI security and in-depth localization efforts will continue to provide unparallel values to Hong Kong enterprises as they navigate through the evolving AI landscape and begin to validate GenAI investments.


















