Cybersecurity challenges of AI adoption: Identifying risks from open-source tools





Cybersecurity challenges of AI. Image taken from thebalancecareers.com

CONTRARY to popular belief, artificial intelligence (AI) has not yet surpassed human cognitive capabilities and likely will not do so for another few years.

Yet there are still grave challenges in the sphere of cybersecurity in the early stages of AI adoption in workplaces.

Currently, open AI tools like ChatGPT, Google Gemini, and many others can perform specific tasks, like producing minutes from a meeting or responding to an email. These are tasks that regular employees at some of the world’s biggest banks and financial firms frequently ask them to perform, at great risk.

The Central Bank recently hosted the Bank for International Settlements (BIS), a bank serving about 70 central banks and other institutions globally, to speak about the risks associated with the increasing adoption of AI in sensitive environments, like central banks.

Sukhvin Notra, senior security specialist at BIS, delivered a presentation outlining the stages of AI development, the risks AI tools currently pose, and their misuse.

AI development begins with Artificial Narrow Intelligence (ANI), a model or program designed to perform a specific set of functions within a limited range, and the only form of AI currently in operation.

“The second stage, Artificial General Intelligence (AGI), is just as smart as human beings across a broad spectrum of topics,” Notra explained. In concept, AGI would be able to think on the same level as a human.

“Few of us here would argue (they are) as smart as human beings… We’re not quite there yet,” Notra said. Some experts at Google predict its advent in three to five years.

Artificial Super Intelligence, the third and most powerful form of AI, will be far more intelligent than human beings in all ways. Notra describes it as a hypothetical model, not expected to be seen in the foreseeable future.

Though still in its early stages, AI and open-source tools are becoming significantly more powerful as new models are released. Between November 2022, when ChatGPT launched with a model trained on 175 billion parameters, and March 2023, when its GPT-4 model was introduced, that figure reportedly grew to about 1.76 trillion parameters.

ChatGPT is commonly used in large corporations and financial institutions in TT and around the world, usually by employees seeking to improve efficiency.

BIS, Notra said, conducted a survey in which it found that about 70 per cent of its member central banks allow employees to use open-source AI tools. Although BIS is not a regulatory body, it engages in exercises with its members and outside institutions to help them combat threats.

Notra identified five major risks posed by the adoption of commercial AI tools, beginning with the threat to data and confidentiality, which he described as one of the biggest risks facing central banks.

Employees use the tools to save time on tasks ranging from generating minutes of meetings to summarizing exhaustive documents. “ChatGPT will do an incredible job (but there’s) a huge amount of risk here because every time you copy-paste a document into ChatGPT, you’re potentially engaging in unauthorized disclosure of sensitive data.”

Notra said he is skeptical about assurances from Google and OpenAI that customer-supplied data will not be used to train their future models. “Even if taken at face value, it is still an IT company, vulnerable to cyber attacks like any other company, and for would-be attackers, (this is) a treasure trove of data and people are willingly putting their sensitive corporate information on these sites.”

Fraud and impersonation fall into the second category of risks identified by Notra. There are thousands of open-source AI models which serve specific functions like image and voice generation, leading to an expected surge in “deepfakes,” with intended and unintended consequences.

There have been instances of employees of medium and large organizations duped by an AI model, which had been fed enough data to impersonate company directors, CEOs, and financial managers.

Thirdly, software developers risk introducing threats by using AI to generate code. Supply chain vulnerabilities, the fourth risk, existed before AI and are a “very, very hard problem to solve,” Notra said.

“You can do everything right; you can spend a lot of resources to secure your organization, but no organization works by itself anymore.” Therefore, if a supplier is vulnerable to a cyber attack, you are now inheriting the risks because they connect to your systems, he said. This is amplified by AI use. “The data that these models train on is another supply chain risk because if that data is poisoned then the AI that trains on the data is going to be poisoned.”

Prompt injection, or engineering, the fifth risk, he said, is the ability to manipulate AI tools into providing responses to questions or commands they are designed not to answer.

Open AI tools, even ChatGPT, have proven flaws, including bias, the generation of misinformation, and other ethical concerns, as well as lingering questions about copyright and ownership of content generated by the models.

AI tools tailored to an organization are particularly useful but just as vulnerable to attackers seeking an organization’s private data, Notra said.

There is scarce legislation to deal with most concerns surrounding AI adoption and use in corporate settings, and when it comes to open AI tools, “We don’t know what we don’t know.”

It is imperative, he advised, for organizations to identify the use of AI tools and ensure safeguards are in place to regulate their use.

Central Bank governor Alvin Hilaire delivered welcome remarks, partly generated by AI, to drive home the point about using discretion with tools like ChatGPT.

“Who is to say Alvin didn’t write this? AI can mask a whole set of things, so we have to be very careful about identity theft, misrepresentation, and mimicking of our data and systems.

“Efficiency is possible with AI. But, of course, this comes with some flip sides,” Hilaire said.


