Artificial Intelligence: Boom or Bust? Part 3

[Image: A robot takes the hand of a human climbing a cliff]

By Guest Blogger, Chris Bonatti, Cybersecurity Consultant, IECA of Casper

Missed part 1 or part 2? Here are the links:

Part 1 https://cyberwyoming.org/artificial-intelligence-boom-or-bust-part-1/

Part 2 https://cyberwyoming.org/artificial-intelligence-boom-or-bust-part-2/

Are AIs Trustworthy?

To be sure, experience with ChatGPT has demonstrated that AI is far from foolproof. This type of LLM is (unsurprisingly) language-centric. In some cases, ChatGPT can be persuasive but nonetheless utterly wrong. Such tools have the potential to be quite dangerous… especially if they gain a reputation for trustworthiness but then occasionally get up to mischief. This, coupled with the potential to use AIs to deliberately mislead, makes for a very dangerous situation.

To help forestall such problems, OpenAI has released a tool to detect AI-written text. So far, it hasn’t proven fully effective, and at best it would probably devolve into a kind of arms race with AI technology. Anthropic has trained its Claude model using a technique it calls constitutional AI, which it claims makes the model much less likely to produce harmful outputs and yields a high degree of reliability and predictability. So far, this claim remains unproven.

Several industry luminaries, including OpenAI CEO Sam Altman, whose company created ChatGPT, and industrialist Elon Musk, have sounded the alarm. Altman says we may not be far from a potentially scary AI. They argue that AI needs to be regulated. However, we have to question the ultimate effect of regulation, as we cannot assume that other players, like China or Russia, would adhere to it.

To make matters worse, researchers at the Weizmann Institute of Science in Israel are exploring how it is possible to manipulate weights in deep neural networks of facial recognition systems to induce specific failures. So not only do we have to worry about AIs going wrong… we have to worry about them being deliberately hacked.
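The actual research technique is far more sophisticated, but the core idea can be illustrated with a deliberately simplified sketch. Below is a toy "matcher" (all values invented for demonstration; this is not the Weizmann attack itself) showing how tampering with even a single stored weight can silently flip a model's decision for a legitimate input:

```python
import numpy as np

# Toy illustration: a tiny one-layer "matcher" that scores how well an
# input embedding matches a stored template of weights. All numbers
# here are made up purely for demonstration.

template = np.array([1.0, -0.5, 0.25])   # honest stored weights
threshold = 0.5                          # accept if the score clears this

def matches(embedding, weights):
    """Return True if the weighted score exceeds the threshold."""
    return float(embedding @ weights) > threshold

probe = np.array([1.0, -1.0, 1.0])       # a legitimate user's embedding

print(matches(probe, template))          # honest weights: True (score 1.75)

# An attacker who can write to the model flips the sign of one weight...
tampered = template.copy()
tampered[0] = -tampered[0]

print(matches(probe, tampered))          # tampered weights: False (score -0.25)
```

The unsettling part is that nothing about the tampered model looks broken from the outside: it still loads, still runs, and still produces confident answers — they are just the answers the attacker wanted.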

So What’s to Be Done?

There is no question that AI is a powerful technology that is now becoming a significant part of our technological and cultural landscape. As with all such developments, whether it’s firearms, nuclear weapons, or genetic engineering, it will be impossible to suppress or un-invent. Regulatory schemes also seem doomed to fail. In the realm of cybersecurity, where things always seem to be teetering on the brink of disaster, AI will certainly mean a sharp escalation in the overall threat level. In our opinion, there is little or nothing that can be done to prevent this. It’s likely inevitable that AI, like any other new technology, will be leveraged by some to deceive, defraud, and otherwise exploit an unwitting population. However, the majority of applications of AI technology are likely to be positive. The best thing we can do in cybersecurity is focus on using AI to bolster defense and brace for the coming threats. Historically, those who master such new technologies fare better against the new threats than those who do not.
