How to Leverage AI to Maximize Your Cybersecurity

Pete Hoff
  • 7 min read

AI has been used in security applications for years, in particular for monitoring SIEMs and orchestrating responses to secure systems during an attack. However, with continuing innovations such as User Behavior Analytics (UBA), the use of AI in cybersecurity will go much further.

The potential applications of AI in security are as far-reaching as those of cloud computing for business in general. In this blog, we’ll take a quick look at some of the latest ways you can use AI to maximize cybersecurity.

AI for Research 

While some of us might be reluctant to admit it, security pros don’t know everything. (Well… not yet.) Research is one way AI can help maintain security. You may begin working with a new technology that you’re not familiar with, such as:

  • A new way developers are building a database
  • A new API integration
  • Token management

In such cases, AI can be a tremendously helpful research tool, depending on how you ask the questions. AI will deliver a wealth of responses, from which you can begin to build an understanding. Security pros need to approach new technology from a threat-response perspective, and as you dig deeper, AI can help fill the gaps.
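
For instance, here’s a minimal sketch of how a security engineer might query an LLM while researching token management from a threat-response angle. The OpenAI Python SDK, model name, and prompt are my own assumptions for illustration, not anything prescribed in this post; any chat-capable model would work similarly.

```python
# Minimal research sketch: ask an LLM a threat-response question about an
# unfamiliar technology. Assumes the OpenAI Python SDK (v1.x) and that
# OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "We are adding OAuth 2.0 bearer tokens to an internal API. "
    "From a threat-response perspective, what are the most common ways "
    "tokens are stolen or misused, and what controls help detect that?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; swap in whatever you use
    messages=[
        {"role": "system", "content": "You are a security research assistant."},
        {"role": "user", "content": prompt},
    ],
)

print(response.choices[0].message.content)
```

The value is less in any single answer than in iterating: follow-up questions about detection, logging, and remediation help fill the gaps as your understanding grows.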

Facilitate Knowledge Sharing

The term “hacker” is typically associated with a person using technology to commit crime. More precisely, this is known as a “black hat” hacker, whereas “white hat hackers use their capabilities to uncover security failings to help safeguard organizations from dangerous hackers,” as described by Kaspersky. Thwarting criminals involves white hat hacking, and to be good at that you have to think like a black hat.

Although security pros may take actions similar to those of criminals, we adhere to ethical standards. We stand by those standards and trust each other when discussing threats. The security community shares knowledge, such as what attacks we’ve seen, and generative AI is making that easier. Large Language Models (LLMs) can synthesize summaries of documents to help us understand the content. AI can also help us improve responses and decisions when models are trained on large data sets of past actions.
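
To illustrate that last point, here’s a minimal sketch of training a model on historical alert data so it can help prioritize new alerts. The features, labels, and scikit-learn classifier are placeholder assumptions of mine; a real pipeline would be built on your own SIEM exports and analyst dispositions.

```python
# Sketch: train a classifier on labeled historical alerts to predict which
# new alerts are likely to be escalated. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features per alert: failed logins, data volume sent, distinct
# destination IPs, and whether the asset is internet-facing (all scaled 0-1).
X = rng.random((1000, 4))
# Hypothetical label: 1 = analyst escalated the alert, 0 = closed as benign.
y = (X[:, 0] + 2 * X[:, 1] > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("holdout accuracy:", model.score(X_test, y_test))
```

The same pattern applies to shared threat intelligence: the more labeled history the community pools, the better these models can support triage and response decisions.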

Google’s Secure AI Framework Echoes IT Best Practices

On June 8, 2023, Google introduced the Secure AI Framework (SAIF), a conceptual framework for secure AI systems with the following six core elements:

  1. Expand strong security foundations to the AI ecosystem
  2. Extend detection and response to bring AI into an organization’s threat universe
  3. Automate defenses to keep pace with existing and new threats
  4. Harmonize platform level controls to ensure consistent security across the organization
  5. Adapt controls to adjust mitigations and create faster feedback loops for AI deployment
  6. Contextualize AI system risks in surrounding business processes

I’m struck by how these principles are consistent with general IT best practices. Organizations need a comprehensive view of their entire infrastructure to understand how changes in one area impact others. Prompt feedback is necessary to meet users’ needs. Automations are required to keep pace, and it all requires a solid foundation to start with.
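
To make the automation principle a bit more concrete, here’s a minimal sketch of the kind of defensive automation SAIF points toward: scan recent authentication events and flag source IPs with repeated failures for follow-up or automated blocking. The log format, threshold, and response action are hypothetical, not part of Google’s framework.

```python
# Sketch: count failed logins per source IP and escalate past a threshold.
# In practice, events would come from your SIEM and the response might open
# a ticket, push a firewall rule, or page on-call.
from collections import Counter

events = [
    {"src_ip": "203.0.113.7", "result": "fail"},
    {"src_ip": "203.0.113.7", "result": "fail"},
    {"src_ip": "203.0.113.7", "result": "fail"},
    {"src_ip": "198.51.100.2", "result": "success"},
    {"src_ip": "203.0.113.7", "result": "fail"},
]

THRESHOLD = 3  # failed attempts before we escalate

failures = Counter(e["src_ip"] for e in events if e["result"] == "fail")

for ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {ip} had {count} failed logins; escalating for response")
```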

Safeguarding AI Technology & Using AI to Safeguard Technology

SAIF is designed to help mitigate risks specific to AI systems. According to Google, the framework will help you safeguard the technology that supports AI advancements, so that when AI models are implemented, they’re secure by default.

AI tools can also be used to enhance security in other areas. The Cloud Security Alliance (CSA) specifically examined ChatGPT in “Security Implications of ChatGPT,” a new whitepaper that “seeks to discuss the security implications within four dimensions:

  • How malicious actors can use it to create new and improved cyberattacks
  • How defenders can use it to improve cybersecurity programs
  • How it can be directly attacked to produce incorrect or otherwise bad results
  • How to enable the business to use it securely and responsibly”

CSA states that ChatGPT’s potential raises critical questions about the fine line between ethical and malicious use of these technologies. I personally never felt that line was “fine,” but the fact remains that ChatGPT, like other AI tools, has astonishing capabilities that are accessible to malicious actors as well as cybersecurity pros. For example, identifying vulnerabilities to remediate is essentially equivalent to identifying attack vectors to exploit, so I can see the reason for describing it as a “fine line.” They are two sides of the same coin.

CSA delves into how AI-driven systems can be exploited in different aspects of cyberattacks, including:

  • Enumeration
  • Foothold assistance
  • Reconnaissance
  • Phishing
  • Generation of polymorphic code

For example, “ChatGPT can be effectively employed to swiftly identify the most prevalent applications associated with specific technologies or platforms. This information can aid in understanding potential attack surfaces and vulnerabilities within a given network environment.”

CSA rates foothold assistance as medium for risk, impact, and likelihood alike. “When requesting ChatGPT to examine vulnerabilities within a code sample of over 100 lines, it accurately pinpointed a File Inclusion vulnerability.” The AI successfully detected a variety of issues in additional inquiries.
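
For readers who haven’t run into one, here’s a minimal sketch of the kind of file inclusion (path traversal) flaw an LLM code review can surface, alongside a safer version. This is my own illustrative example, not the code sample CSA tested.

```python
# Sketch: a file inclusion / path traversal flaw and a constrained fix.
# Requires Python 3.9+ for Path.is_relative_to.
from pathlib import Path

BASE_DIR = Path("/srv/app/templates")

def load_template_unsafe(name: str) -> str:
    # Vulnerable: "name" is attacker-controlled, so a value like
    # "../../etc/passwd" escapes the intended directory.
    return (BASE_DIR / name).read_text()

def load_template_safe(name: str) -> str:
    # Resolve the path and verify it still lives under BASE_DIR.
    candidate = (BASE_DIR / name).resolve()
    if not candidate.is_relative_to(BASE_DIR.resolve()):
        raise ValueError("path escapes template directory")
    return candidate.read_text()
```

The same review that helps a defender spot and fix this flaw would help an attacker find it, which is exactly the dual-use tension CSA highlights.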

Similarities Across Generative AI Offerings

While ChatGPT may be dominating headlines for now, we’ve previously seen Google take a measured, structured approach to developing new technologies and end up offering the best product. For example, the original “G Suite” productivity apps had unique approaches to security that didn’t require a ton of overhead. 

Google’s primary consumer generative AI offering, Bard, is based on Google’s new PaLM 2 language model. Some of Bard’s myriad uses include writing suggestions, summaries of long texts, code debugging, and generating spreadsheet formulas and content for Google Sheets. Although Bard hasn’t received the press coverage of ChatGPT, the tools have similar capabilities, such as writing code that could be used to execute attacks. I expect that much of CSA’s analysis of ChatGPT would apply to Bard as well.

Support to Leverage AI & Stay Ahead of Persistent Threats 

Even with new AI innovations, the fact remains that there’s no one service or solution that will make any organization hack-proof. Wursta offers a variety of services, such as Virtual CISO and Security & Cloud Risk Assessment, to help you take the steps needed to stay ahead of persistent threats. Additionally, Wursta will soon be launching workshops to help our clients explore the possibilities of AI and maximize its benefits for their organizations. Contact us to learn more.