Artificial intelligence: a game changer for cybercriminals
Frederic Dupeux, Chief Information Security Officer at Banque Havilland
The launch of ChatGPT several months ago generated much excitement, but also considerable anxiety, and continues to be widely discussed. However, artificial intelligence (AI) did not arrive with ChatGPT: many tools already use this technology to offer various services, and many cybercriminals have already grasped its potential.
Simulating human intelligence: that is the objective of AI. Beyond the purely philosophical and ethical debate over whether tools, algorithms, or programs can be considered "intelligent", a profound change is already under way through the use of techniques that simulate human intelligence and interactions. Two questions arise: first, how do we control an intelligence that seeks to simulate that of a human being? And second, how do we distinguish AI from human intelligence? The AI revolution is upending our habits and raising questions about the future of this technology, which will no doubt require a legal framework to regulate its behaviour and development.
Accessible to all, fast and intuitive, using natural language and able to remember: ChatGPT is an example of a universal tool capable of building a line of reasoning while keeping a record of past conversations. AI tools such as ChatGPT, one of the most advanced of these models, can analyse very large quantities of data. With the help of this data, the model can generate texts closely resembling those written by a human being, answer questions, generate code, and so on. This is where the problems start.
Impact on cybersecurity
As is always the case with technological progress, there is scope for diversion and misuse. AI was created to imitate human beings and to specialise in human interactions. This new tool is a game changer for cybercriminals, as they can use it to improve their business models: writing malicious code, generating persuasive phishing campaigns, setting up AI bots, infiltrating systems with false data, creating fake portfolios… These are known scenarios, but their implementation has so far been very time-consuming. As AI gains access to an ever wider spectrum of knowledge, attacks become faster, more sophisticated and cheaper to launch, and both the number and the variety of attack attempts increase.
The human being is often the weakest link where cybersecurity is concerned. Hacking attempts based on human interaction, such as phone-based attacks, require the cybercriminal to have detailed knowledge of the target. Through data models, AI provides access to quantities of information well beyond human capacity. Simply imagine an AI tool capable of tying together information from Facebook, Instagram, Twitter, LinkedIn… AI would then have access to highly precise data on a person's habits and behaviour. Not only can AI models write text, they can also produce a voice or a video image, or understand a conversation and react accordingly in real time. So-called vishing (voice phishing) and all other types of social engineering tricks will inevitably be on the rise as AI use grows in the years to come.
Fake news is another form of cybercrime developing on the back of AI, and one with a significant impact on banking. Recently, a fake image of an explosion near the Pentagon spread on social media and triggered a brief drop in the US stock market. Immediately after the image was published on Twitter, the Dow Jones lost around 80 points before recovering to its previous level a few minutes later.
We are facing a rise in new, more sophisticated and structured AI-driven hacking tools, and an arms race between hackers seeking to further industrialise their attacks and companies needing to respond with increasingly complex and specialised tools and cybersecurity measures.
Artificial intelligence is also an opportunity in the financial world
To provide "ultra-personalised" services, the financial world needs tools that adapt to clients' needs, such as more efficient virtual assistants capable of understanding a client's context or making appropriate recommendations. AI could analyse vast quantities of data to assess, for example, a client's creditworthiness, and automate client onboarding processes. A system for streamlining loan approval processes, reducing manual work and increasing the accuracy of credit risk assessments could also be envisaged. By analysing market data, news and trends, AI could assist in investment decision-making, helping to identify investment opportunities or optimise portfolio allocation. AI is therefore a major step forward for companies, and especially for banks. The aim will be to maximise the benefits of AI while reducing its risks, especially cybersecurity risks. This is where the role of the Chief Information Security Officer (CISO) gains particular significance.
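To make the credit risk example concrete, here is a minimal sketch of what AI-assisted credit scoring might look like: a model trained on historical loan outcomes that scores a new applicant. The data is synthetic and the features (income, debt ratio, client tenure) are hypothetical assumptions for illustration, not a description of any bank's actual system.

```python
# Minimal sketch of AI-assisted credit scoring on synthetic data.
# Feature names and the 180-line model are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for historical loan data: income, debt ratio, client tenure.
X = rng.normal(size=(1_000, 3))
# Synthetic repayment outcome, loosely tied to the features (1 = repaid).
y = (X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=1_000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardise features, then fit a simple logistic regression classifier.
scaler = StandardScaler().fit(X_train)
model = LogisticRegression().fit(scaler.transform(X_train), y_train)

# Score a new applicant: the repayment probability feeds the approval workflow.
applicant = scaler.transform([[1.2, -0.3, 0.8]])
print(f"Estimated repayment probability: {model.predict_proba(applicant)[0, 1]:.2f}")
```

In practice, such a score would be one input among several in the loan approval process, with the final decision, and accountability for it, remaining with a human reviewer.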
Understanding the risks while supporting companies
Technologies bring opportunities as well as challenges. For the time being there is no regulation governing the use of AI, so CISOs must adapt. It will be essential for regulation to be adopted at European level to ensure auditability and compliance with security standards. AI will need to be defined and its risks reduced. In the meantime, it is essential to understand the technology and support companies. What can the CISO do to ensure the security of these technologies?
Firstly, the CISO can help the company understand its AI needs and possible uses, for example by offering training sessions and discussion forums from which new service opportunities and uses might emerge. This means providing support by analysing usage risks and the sensitivity of the data exchanged.
It is also important to continue raising employees' awareness of cybercrime and to keep them informed of new threats and attacks. The appropriateness of existing controls against the different types of potential AI-driven attacks must also be defined or reviewed.
Finally, as is done for many ICT systems, an inventory of AI tools and their uses must be established in the interest of good governance. AI will also need to be defined for internal purposes through user manuals and best-practice guidance. Usage must be subject to strict review through defined AI usage review workflows.
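As a purely illustrative sketch of that governance point, the following shows one way such an AI tool inventory and review workflow might be structured. The fields, review statuses and six-month review period are assumptions for the example, not an established standard.

```python
# Illustrative sketch of an AI tool inventory for governance purposes.
# All fields, statuses and the review period are hypothetical assumptions.
from dataclasses import dataclass
from datetime import date
from enum import Enum

class ReviewStatus(Enum):
    PENDING = "pending review"
    APPROVED = "approved"
    RESTRICTED = "restricted use"
    BANNED = "banned"

@dataclass
class AITool:
    name: str
    vendor: str
    use_cases: list[str]
    data_sensitivity: str                 # e.g. "public", "internal", "confidential"
    status: ReviewStatus = ReviewStatus.PENDING
    last_reviewed: date | None = None

# Example entries a CISO team might record.
inventory = [
    AITool("ChatGPT", "OpenAI", ["drafting", "code assistance"],
           data_sensitivity="public", status=ReviewStatus.RESTRICTED,
           last_reviewed=date(2023, 6, 1)),
]

# A periodic review workflow could then flag tools overdue for reassessment.
overdue = [t for t in inventory
           if t.last_reviewed is None or (date.today() - t.last_reviewed).days > 180]
for tool in overdue:
    print(f"Review needed: {tool.name} ({tool.status.value})")
```

The value of such an inventory lies less in the code than in the discipline: every AI tool in use is recorded, classified by data sensitivity, and periodically re-reviewed as the threat landscape evolves.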
The human being always has a say!
AI is already being used fraudulently, but legitimate use of AI is equally key to company growth. As with any new technology, one must understand it and move with it. AI also has a fundamental role to play in the evolution of our societies. However, the output and the value of AI-generated information are best treated with a critical eye. By the way, who is to say that this article was not written by artificial intelligence?
(source: AGEFI)