Forum establishes rising AI security risks in Canadian finance
On July 2, the Canadian Office of the Superintendent of Financial Institutions (OSFI) released a report outlining the risks of AI in the financial sector. Developed through a series of expert-led workshops, the report addresses urgent concerns and best practices related to AI-driven security and cybersecurity threats.
The report, a collaboration among OSFI, the Department of Finance Canada, and the Global Risk Institute, was generated through the first of four planned workshops, which brought together government officials, financial industry experts, research partners, and academics. Subsequent reports will be released following each workshop.
Suzy McDonald, Associate Deputy Minister, Department of Finance Canada, stated, "Today's forum is a great step toward a better understanding of AI, its role in the financial industry, and how to think about security and cybersecurity risks. A better understanding can dispel unfounded fears and enable us to focus on real problems and to identify tailored solutions."
The report describes AI in the financial sector as "a double-edged sword," outlining four security vulnerabilities and recommending next steps to prevent potential threats.
Forum experts underlined the need for AI-related risks to be integrated into existing risk management structures.
According to the report, 71 per cent of participants identified AI-enhanced social engineering as the primary challenge facing the financial sector, with deepfakes ranking second at 40 per cent. The committee said strengthening two-factor authentication is a key way to counter this threat.
Additionally, cybersecurity attacks have become more sophisticated through AI-assisted techniques such as adaptive malware. Because financial institutions hold large amounts of personal data, these threats are heightened. The report recommends "zero trust" security standards, under which every request for data is verified.
As more institutions rely on third-party AI models, this infrastructure can grow into a complex web of fourth- and fifth-party providers. The disruption of a single provider could then have widespread impacts, and the lack of data transparency poses a shared risk. The committee recommends standardised contractual obligations, security standards, and disclosure agreements, along with enhanced oversight of third parties and related data networks.
Finally, the report notes that AI systems "turn high-value sensitive institutional and client data into increasingly vulnerable targets for threat actors," adding that "need to know" information has been extended across many AI infrastructures as they are integrated. The report recommends prioritising "security hygiene," defining "need to know" information, and establishing new standards for data export, among other measures.