The rapid rise of powerful, breakthrough AI technologies like ChatGPT has generated a lot of excitement in the IT world, but also concerns about threats to security and privacy. Some tech leaders have even publicly voiced concern that AI poses a “risk of human extinction” and called for a pause on AI development. How does one separate Siri from Skynet? Let’s start by defining AI.



What do they mean by AI?

Historically, artificial intelligence refers to algorithms capable of performing tasks that previously required human intelligence. Today it’s mostly a marketing term to refer to machine learning, a branch of AI where software is “trained” to learn to produce output data that’s in some way correlated to input data. Machine learning has been around for decades, but recent increases in processing power have enabled companies to leverage machine learning in applications from prosumer grade cameras to — of course — products like ChatGPT.
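The idea of software being "trained" to produce output correlated with input can be sketched in a few lines of Python. This toy example is purely illustrative (it is not any real product's code): a single weight is nudged by gradient descent until it captures the pattern hidden in the examples.

```python
# A minimal sketch of "training": instead of being programmed with a
# rule, the software learns a rule that maps inputs to outputs.
# Here a single weight w is nudged until w * x approximates y.

def train(examples, steps=1000, lr=0.01):
    """Fit y ~ w * x by gradient descent on squared error."""
    w = 0.0
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y      # how wrong the current guess is
            w -= lr * error * x    # nudge w to shrink the error
    return w

# The "training data": outputs are correlated with inputs (y = 2x).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(data)
print(round(w, 2))  # the learned weight approaches 2.0
```

Real machine learning systems use millions or billions of such weights, but the principle is the same: adjust the weights until the output lines up with the training data.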


Why is AI taking off now?

The current AI boom is driven by several technologies, but the one perceived as most revolutionary (and, by some, most dangerous) is the large language model, exemplified by ChatGPT. The GPT in ChatGPT stands for Generative Pre-trained Transformer. Transformers are a refinement of a machine learning technique called the artificial neural network (ANN), introduced in the 2017 paper "Attention Is All You Need." ANNs are inspired by biology (after all, nature has already solved the problem of pattern recognition and intelligence) and use software to emulate the intricately interconnected neurons in animal brain tissue. This idea isn't new: a mathematical model of biological neural networks was proposed as early as 1943.
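The basic building block an ANN emulates can be sketched as a single artificial "neuron": a weighted sum of inputs passed through an activation function that determines how strongly the neuron "fires." This toy Python version is illustrative only; real networks wire millions of these together in layers.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': sum the weighted inputs, then pass
    the total through a sigmoid activation, loosely mimicking the
    firing behavior of a biological neuron."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # squashes output into (0, 1)

# Two inputs, two learned weights, one bias.
out = neuron([0.5, 0.8], [1.2, -0.4], bias=0.1)
print(0.0 < out < 1.0)  # sigmoid output always lies between 0 and 1
```

A network is just many such neurons connected together, with training (as sketched earlier) adjusting the weights.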


The downside of ANNs is that they require complex and expensive processing. The innovation behind transformers is that they strip away the sequential, recurrent connections earlier networks used to process language, leaving a simpler architecture built around a mechanism called attention. While this might seem like a downgrade, the change to the network's architecture allows it to be processed in a massively parallel way. This lets AI developers build much larger networks for the same cost in processing power and, more importantly, lets the network run on the extremely powerful parallel hardware provided by graphics processing units (GPUs). Combined with the fact that a single modern high-end GPU has more processing power than the most powerful supercomputer in the world had 20 years ago, this has given rise to products such as ChatGPT.
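The difference parallelism makes can be sketched with two toy Python functions (illustrative only, not real model code): one that, like a recurrent network, must process tokens one at a time because each step depends on the previous one, and one whose per-token work is independent and could therefore be spread across thousands of GPU cores at once.

```python
def recurrent(tokens):
    """Sequential processing: step i depends on step i-1, so the
    steps cannot run in parallel."""
    state = 0.0
    out = []
    for t in tokens:
        state = 0.5 * state + t  # carries a running state forward
        out.append(state)
    return out

def parallelizable(tokens):
    """Transformer-style processing: each position's work is
    independent, so all positions could be computed simultaneously."""
    return [t * t for t in tokens]  # hypothetical per-token computation

print(recurrent([1.0, 1.0]))       # [1.0, 1.5] - order matters
print(parallelizable([1.0, 2.0]))  # [1.0, 4.0] - order doesn't
```

Removing the sequential dependency is what lets GPUs, which excel at doing many independent calculations at once, train and run much larger networks for the same cost.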


What not to worry about

Transformers are specialized for natural language processing. They can be adapted to similar applications, but their simplified architecture limits how many things a single model can accomplish. Existing implementations are pre-trained, meaning they do not continue learning once deployed. These factors make transformers almost certainly incapable of true cognition: they cannot actually think, plan, or pursue goals of their own.


The human brain, by comparison, accepts input, models the world, checks that model against past experience, simulates potential plans and hypothetical scenarios, dynamically modifies the local and global behavior of its own network, and dynamically changes the strength of the connections within itself. It does all this while seamlessly translating intention into the commands needed for things like physical motion, often while conscious thought is focused on a different task. The brain manages this with a count of neurons and synapses roughly on the order of GPT-4's parameter count, yet it dramatically outperforms GPT-4 in almost every way. A human being can navigate a busy highway on autopilot while daydreaming about complex business, social, or game strategies, but GPT-4 cannot reliably play a game of chess without making illegal moves unless external code supports it. That is a task specialized hardware and software performed at a superhuman level as early as 1997!


We shouldn't fear that AI will try to replace humanity any time soon. Instead, we should keep in mind the dangers posed by how humans use these tools.


What you should be worrying about

The strength of transformers, the thing that convinces us they're so capable, is their ability to produce human-like output, like the illustrations used in this blog post. This feat impresses us, speaks to us, and makes us believe that the system is smart, that it knows things, and that it has judgment. But this is simply not the case.


Transformers on their own are not dangerous, but bad actors can use them to augment their abilities, and we as users can be lulled into relying on them for things transformers cannot do well.


Some of the ways that bad actors can use transformers to do harm include:

  • Generating convincing phishing emails and scam messages at scale
  • Producing realistic fake text, voice, and audio to make fraud and social engineering more believable


Some of the ways that transformers can lead users to make bad decisions include:

  • Presenting confident-sounding output that is factually wrong
  • Tempting users to skip the subject-matter-expert review their output requires
  • Encouraging reliance on them for tasks they cannot do well


Unfortunately, a tool that can generate realistic text, voice, and audio without any awareness or understanding of what it's doing is perfect for applications where the output doesn't need to be correct, just believable. There are legitimate uses, but for most of them the output must be reviewed for accuracy by a subject-matter expert before it can be trusted. Worse, for illegitimate uses this limitation doesn't matter: a phish just needs to fool the recipient long enough to click a link or open an attachment that compromises their computer!


Tangible can help

For over 25 years, security-minded organizations have trusted Tangible Security to protect their sensitive assets. We offer a full range of services, from penetration testing and risk assessments to staff training, compliance assessments, and staff augmentation such as fractional CISOs, to ensure that security in your organization becomes tangible.

Our services to prevent threats like AI-generated phishes, scams, and fraud include:

  • Security awareness training provides organizations with targeted educational programs to raise awareness among employees about cybersecurity risks and best practices. Our training sessions cover a wide range of topics, including phishing attacks, social engineering, password hygiene, data protection, and incident reporting. By engaging in security awareness training, organizations can empower their employees to become the first line of defense against cyber threats, cultivate a security-conscious culture, and mitigate the risks associated with human error and negligence.
  • Penetration and security testing includes vulnerability assessments, penetration testing, reverse engineering, source code reviews, physical security and social engineering testing, threat emulation, cloud security and ICS/OT security assessments, and red and purple team deployments.

Tangible Security is ready to provide expert, tailored, and personable cybersecurity consultation. For more information on how we can help your business, contact us today.