The new generative AI technology has spread rapidly and extensively throughout the Swedish financial sector. At the same time, the work to manage the risks accompanying the new technology is lagging behind. These are FI’s findings following a survey of firms’ use of AI.
"We place high demands on the financial sector to know what risks could arise as the use of AI begins to increase. Financial firms need to ensure that they have the competence to understand and manage the risks," says Marie Jesperson, the head of FI's Innovation Center.
To better understand how the financial sector is using AI and the opportunities and risks this could entail, FI's Innovation Center conducted a survey in the autumn of 2024 on AI use at firms under FI's supervision.
The survey shows that 84 per cent of the 234 firms that responded say that their employees use public generative AI tools such as ChatGPT, Copilot or Gemini.
AI use within the firms' own IT environments is less prevalent. Twenty-two per cent of the firms say that they currently have AI systems in production or development, while 46 per cent are conducting experimental projects or pilot studies. AI is used widely across several different parts of the financial sector, and the primary reported benefits are streamlining the business, improving internal processes, and analysing large datasets.
While AI use is assessed to be growing rapidly, primarily driven by the new generative AI technology, only 41 per cent of the firms that have AI in production say that they have a formally approved policy for their development and use of AI. When it comes to guidelines for employees' use of public generative AI tools, firms lag even further behind.
The EU Regulation on artificial intelligence (the AI Regulation) entered into force in August 2024. It covers all sectors, including the financial sector. The majority of the provisions in the regulation go into effect in 2026.
The Regulation applies a risk-based approach, where AI systems are classified into different risk categories. Systems associated with unacceptable risk are forbidden, and systems classified as high risk are allowed if they meet certain requirements set forth in the regulation.
Of the firms using AI today, 91 per cent say that they have begun to prepare for the application of the AI Regulation or plan to do so in the next few months.
"AI introduces considerable opportunities for the industry to streamline its work. In conjunction with this, firms now need to establish policies and procedures to ensure that their systems do not become vulnerable," says Marie Jesperson.