Artificial Intelligence: Boom or Bust? Part 1

[Image: Two gunslingers face each other, one wearing a white hat and the other a black hat.]

By Guest Blogger, Chris Bonatti, Cybersecurity Consultant, IECA of Casper

You’d have to have been asleep for the last couple of years to miss the growing significance of Artificial Intelligence (AI) in the cybersecurity landscape. On the one hand, we have the cadre that says AI will empower defenders. On the other, we have mounting hard evidence that AI fuels more potent and novel threats than we’ve ever faced before. Of course, both of these visions are likely to be realized. Since OpenAI’s November 2022 introduction of ChatGPT, a chatbot built on its Generative Pre-trained Transformer (GPT) family of Large Language Models (LLMs), the pace of both trends seems to have accelerated immensely. In March 2023, Anthropic introduced its own LLM, called Claude.

AI as Black-Hat Enabler

Before ChatGPT was even a month old, the industry was stunned by the number of creative ways people found to exploit the engine for mischievous purposes. Somebody creatively asked the chatbot to “emulate” a Linux root shell… which it obligingly did, role-playing a convincingly functional shell session (the model predicts plausible terminal output rather than executing anything, but the illusion was striking). Several people were also able to get it to write malware code. This prospect is especially troubling because it means that a lot of low-skilled potential threat actors (i.e., script kiddies) could have access to unique, polymorphic, high-caliber malware code. All of this is in spite of supposed “safeguards” in the implementation. There has also been speculation (quite reasonable, in our opinion) that AIs like ChatGPT and Claude could be harnessed to launch more formidable, dynamic, and rapidly adapting cyberattacks. Examples include better phishing, unique malicious implants, faster vulnerability profiling and exploitation, and… potentially… discovery and leverage of unknown zero-day vulnerabilities. All but the last seem almost a foregone conclusion… the last depends only on the target containing zero-days to find. Since humans don’t yet seem to have evolved the ability to write secure code, this too seems to be a good bet.
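To illustrate how little effort the “root shell” trick takes, here is a minimal sketch in Python. The original demonstrations used the ChatGPT web interface; the API call below is an assumed equivalent using OpenAI’s 2023-era openai package, and the prompt wording is a paraphrase of the widely circulated example, not a verbatim copy.

```python
# Minimal sketch of the "act as a Linux terminal" role-play prompt.
# Assumptions: the pre-1.0 openai package (pip install openai==0.28),
# a placeholder API key, and a paraphrased prompt.
import openai

openai.api_key = "sk-..."  # placeholder; not a real key

prompt = (
    "I want you to act as a Linux terminal. I will type commands and you "
    "will reply only with what the terminal would display, inside a single "
    "code block, with no explanations. My first command is: whoami"
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed model choice
    messages=[{"role": "user", "content": prompt}],
)

# The model is not running a real shell; it predicts plausible terminal
# output token by token, which is what made the demonstration so striking.
print(response.choices[0].message.content)
```

The point is not the dozen lines of code. It is that no exploit was needed at all… the “attack” is just persuasive natural language.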

AI Aggregating Corporate Secrets

Another, more esoteric problem that has emerged is that AIs are ingesting what could only be described as corporate secrets across a wide swath of industry. A couple of months ago, an attorney at Amazon cautioned employees on an internal company Slack channel not to share confidential information or code with ChatGPT. This followed instances in which ChatGPT’s output “closely matched” confidential Amazon source code. The conclusion was that some Amazon employees had used ChatGPT as a coding assistant, asking it to improve internal lines of code. The AI was apparently able to internalize that information and disclose it in answer to unrelated queries.
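For concreteness, the risky pattern the Amazon incident illustrates looks something like the hypothetical sketch below. The file name, the prompt, and the use of OpenAI’s 2023-era openai package are all assumptions for illustration; exactly how ChatGPT retained and resurfaced the code has not been publicly documented.

```python
# Hypothetical sketch of the leakage pattern: an employee pastes
# proprietary source code into a third-party LLM to get it improved.
# File name and prompt are invented for illustration.
import openai

openai.api_key = "sk-..."  # placeholder; not a real key

with open("internal/billing_rates.py") as f:  # proprietary code
    internal_code = f.read()

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "Please clean up and optimize this function:\n\n"
                   + internal_code,
    }],
)

# Depending on the provider's retention and training policies at the
# time, the submitted code may now persist outside the company's
# control, where fragments of it could later surface in answers to
# other users' queries.
print(response.choices[0].message.content)
```

The lesson for organizations is simple: anything pasted into a public AI chatbot should be treated as having left the building.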

To be continued in part 2 …
