AI tools are being deployed by IT departments around the world, and many organizations are reaping immediate benefits. However, many myths and misconceptions surround AI, and believing them can expose your organization to risk. Here are four common AI myths and the potential security consequences of believing each one.
1. Using AI is completely safe and won’t leak my data.
If you read the Terms of Service when you adopt an AI tool, you will usually find that the owners of the service give themselves wide latitude to use your data in almost any way they see fit, including to train the next version of their AI model. That means the next version may be able to reveal information about your data to other users. Indeed, OpenAI’s policy FAQ specifically says “we may use your content to train our models,” and researchers have demonstrated that ChatGPT will regurgitate training data verbatim, including personally identifying information, if prompted in the right way. Human error by the teams behind AI can also cause data leaks; Microsoft’s AI team, for example, leaked 38 terabytes of internal data last year. Either way, the result can be unauthorized exfiltration of your organization’s data, compromising privacy and security. Before adopting any service, review its terms and conditions so you understand exactly what it does with your data.
2. AI products learn from me as I use them.
While new versions of AI models may be trained on user data, most existing AI products like ChatGPT or Stable Diffusion are static pre-trained models (the GPT in ChatGPT stands for Generative Pre-trained Transformer), meaning the training is done ahead of time and the model’s knowledge remains fixed while in use. (You can read about how GPTs work here.) ChatGPT “retains” a history of its conversation with you because the interface sends the model a condensed version of the conversation history, plus any other stored “memories,” along with your latest prompt. So, while it looks impressive, the AI “remembering” the context of a conversation is really smoke and mirrors. It’s important for your organization to have realistic expectations of tools marketed as containing AI, such as SIEMs and other monitoring/alerting tools. They can be a force-multiplier for experienced personnel, but they are not a replacement for them, and they shouldn’t lull you into overconfidence.
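To make this concrete, here is a minimal sketch of that pattern using the official `openai` Python client (the model name is a placeholder): the application, not the model, accumulates the conversation and re-sends it in full on every request.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The model keeps no state between calls; this list is the only "memory."
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # Every turn re-sends the ENTIRE accumulated conversation.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute any chat model
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})  # next turn's "memory"
    return reply

print(chat("My name is Alice."))
print(chat("What is my name?"))  # answerable only because the first turn was re-sent
```

Drop the `history` list and the model has no idea who Alice is. In practice, interfaces also condense or truncate this history to fit the model’s context window, which is why long conversations can “forget” early details.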
3. AI is inherently harmless (or dangerous, or good, or evil).
AI tools are just that: tools, used for things like increasing productivity and finding information. But like any tool, AI can also be used to harm others. The technological innovations of recent decades have provided numerous benefits, but they have also had negative impacts on our way of life, and it’s up to us as a society to use them ethically. The rise in maliciously generated content, such as realistic disinformation, phishing emails, and other attacks, shows that many people are failing in that responsibility. The rest of us must be all the more vigilant about the safety and security of ourselves and our organizations as a result.
4. AI will self-improve, grow out of control, and destroy the world.
I can’t tell the future, but this seems unlikely, and multiple experts agree: most of the serious conversation about AI risk is not that AI will grow out of control, but that it will do unexpected and extreme things with the control humans grant it. AI in its current form (as discussed in my previous post) is still not very good at many of the things a single human being can do and isn’t truly intelligent. Even if it were, there’s no reason to think AI would have a natural advantage over the experts and researchers who created it when it comes to programming AI. Just because an AI is made of code doesn’t mean it will automatically be good at coding; after all, human intelligence comes from the brain, and it took us thousands of years to begin figuring out how the brain works!
As for growing out of control: AI models are large, and the most capable ones require large datacenters with specialized hardware. An AI with such dependencies couldn’t “escape” to less capable hardware, nor could it conjure new datacenters for itself out of thin air, since humans would need to be involved at every step of that process. This may change (25 years ago, the most capable supercomputer in the world had less processing power than a $2,000 graphics card has today), but for now there’s no credible risk of escaped AIs spreading through the world’s computers; the ultimate risk is us and the actions we take with this new technology.
The fear around AI should not prevent us from using tools marketed as containing AI. Machine learning algorithms are simply another way to process data; so long as we understand a product’s value and are realistic about its costs and risks, AI-powered tools can be a force-multiplier for organizations that leverage them correctly.
Cutting through the hype
It’s well established that we are terrible at predicting the future and that we tolerate risk poorly when we’re afraid. We also have a deep-seated instinct to be disturbed by any non-human thing with too many human traits. This makes us prone to getting caught up in hypotheticals about near-future AI technology. We should be neither paralyzed by this new technology nor scrambling to avoid being left behind. Instead, we should assess AI the same way we’d approach any potential addition to our organizations’ technology stacks: with rigorous vetting, risk assessment, and cost/benefit analysis.
Tangible can help
For over 25 years, security-minded organizations have trusted Tangible Security to protect their sensitive assets. We offer a full range of services, from penetration testing and risk assessments to staff training, compliance assessments, and staff augmentation (such as fractional CISOs), to ensure that security in your organization becomes tangible.
Our services reduce the risk of threats associated with the use of AI tools, platforms, and applications, as well as threats from AI-generated phishing, scams, and fraud. They include:
- Penetration and security testing. Our AI application vulnerability assessment and penetration testing services provide a comprehensive evaluation of your AI-powered tools against the OWASP Top 10 for Large Language Model Applications and other critical areas. Our team of AI and cybersecurity experts, backed by a published AI researcher with extensive software engineering experience, delivers detailed reports on vulnerabilities along with actionable recommendations, ensuring the security of your AI tools and safeguarding internal and confidential data.
- AI-Focused Security Program Assessment. This service offers a comprehensive gap assessment of your security program against emerging AI frameworks and standards such as NIST AI 100-1 (AI Risk Management Framework), ISO/IEC 42001 (Artificial Intelligence – Management System), ISO/IEC 23894 (Guidance on AI Risk Management), and more. Our expert team combines an attacker’s perspective with real-world breach data to identify potential threats that could cause significant harm to your business. We then determine your program’s current security maturity rating and provide an actionable roadmap for systematically reducing risk to the organization.
- Human Cyber Risk Services. These encompass a comprehensive suite of solutions to address the dynamic landscape of human cyber risk, including: a human cyber risk program evaluation of your organization’s culture and effectiveness against real-world social engineering attacks, human cyber risk managed services, social engineering awareness services, and live social engineering awareness training with hands-on exercises. We also provide customized reporting with tailored insights into your organization’s human cyber risk.
Tangible Security is ready to provide expert, tailored, and personable cybersecurity consultation. For more information on how we can help your business, contact us today.