Looking for malicious activity in the cloud is like looking for a needle in a haystack. Security professionals must sift through hundreds of false positives every day to identify real security incidents that need investigation.
According to a study by the cybersecurity provider Orca Security, 59% of IT security professionals receive more than 500 public cloud security alerts every day. An analyst must then decide whether to follow up on each alert or ignore it.
All too often, this high volume of alerts leads to a scenario where employees are so busy managing trivial or irrelevant reports that they are unable to identify and respond to actual data breaches.
For example, 55% of cybersecurity leaders said they miss critical alerts on a weekly or even daily basis.
Faced with these challenges, more and more cybersecurity solution providers are turning to generative AI to help security teams analyze activity in the cloud.
One of them is Skyhawk Security, a $180 million cloud security company that earlier this year announced the use of ChatGPT to detect cyber threats.
Smart threat hunting with ChatGPT
Transparency and context are critical for security analysts when determining whether an alert or threat signal is a sign of a cyberattack or a harmless false alarm.
Yet analysts often have too much or too little data to make a decision without further investigation.
Skyhawk Security's answer to this dilemma is to integrate generative AI, in the form of the ChatGPT API, into its Cloud Detection and Response (CDR) solution as part of two products: Threat Detector and Security Advisor.
- Threat Detector works with the ChatGPT API, trained on millions of security signals from across the web, to analyze cloud events and generate alerts faster.
- Security Advisor provides a natural language summary of live alerts, along with recommendations for responding to and resolving issues (a pattern sketched below).
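Skyhawk has not published how these products call the model, but the Security Advisor pattern, turning a raw alert into a plain-language summary with suggested response steps, can be sketched with the OpenAI Python SDK. Everything below (the alert fields, prompt wording, and model choice) is an illustrative assumption, not Skyhawk's actual implementation:

```python
# Hypothetical sketch of a Security Advisor-style summary call.
# The alert fields, prompt, and model are invented for illustration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

alert = {
    "event": "s3:GetObject burst from unfamiliar IP",
    "identity": "ci-deploy-role",
    "region": "eu-west-1",
    "threat_score": 87,
}

prompt = (
    "You are a cloud security assistant. Summarize this alert in plain "
    "language and suggest two concrete response steps:\n"
    f"{alert}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works for this pattern
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

The value of this pattern is that the analyst receives the triage context as one readable paragraph instead of a wall of raw event fields.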
Thanks to generative AI, users can react to alerts much earlier and receive more guidance on how best to remediate data breaches in the shortest possible time.
It’s an automated approach to alert management that Skyhawk says has proven incredibly effective.
Tests showed that the CDR platform produced alerts earlier in 78% of cases when the ChatGPT API was used as part of its threat analysis process.
Chen Burshan, the CEO of Skyhawk Security, told Techopedia:
“Generative AI was a natural progression for Skyhawk as we always strive to improve our threat detection, and we see generative AI as a great opportunity to streamline our detection and response for cloud engineers and SOC incident responders.”
Burshan added, “We’re using them as a force multiplier for the SOC to help fill the shortage of cloud-savvy employees.”
ChatGPT for cloud detection and response
As part of its solution, Skyhawk uses an existing foundation of machine learning (ML) algorithms to monitor data in the cloud.
The system was trained to distinguish harmless activity and normal usage from malicious behavior, immediately flagging malicious behavior indicators (MBIs), such as unauthorized storage access, and assigning each MBI a threat score.
As soon as the score exceeds a certain threshold, an alert is triggered. Skyhawk's ML solution then constructs an attack sequence and presents the user with a graphical representation of the event that summarizes what happened.
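The article describes this scoring pipeline only at a high level. As a rough illustration of the pattern, accumulating MBI scores across a sequence of events and alerting once a threshold is crossed, consider this minimal Python sketch; the indicator names, weights, and threshold are all assumptions, since Skyhawk's actual model is not public:

```python
# Minimal sketch of threshold-based alerting over malicious behavior
# indicators (MBIs). All names, weights, and the threshold are invented
# for illustration.
MBI_WEIGHTS = {
    "unauthorized_storage_access": 40,
    "unusual_api_call_volume": 25,
    "login_from_new_geo": 15,
}

ALERT_THRESHOLD = 60  # assumed cutoff: scores above this trigger an alert


def score_sequence(events: list[str]) -> int:
    """Sum the weights of every recognized MBI in an event sequence."""
    return sum(MBI_WEIGHTS.get(event, 0) for event in events)


def evaluate(events: list[str]) -> None:
    score = score_sequence(events)
    if score > ALERT_THRESHOLD:
        # A real CDR pipeline would open an alert here and attach the
        # attack sequence for graphical display.
        print(f"ALERT (score={score}): {' -> '.join(events)}")
    else:
        print(f"No alert (score={score})")


evaluate(["login_from_new_geo", "unauthorized_storage_access",
          "unusual_api_call_volume"])
```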
Skyhawk then relies on its ChatGPT-trained Threat Detector to supplement the data provided by its existing ML-driven threat assessment mechanism with additional parameters. This allows users to verify the risk scores assigned to a specific activity.
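How the LLM-derived assessment feeds back into prioritization isn't publicly documented either. One plausible, purely hypothetical blending rule, combining the existing ML score with a second opinion from the language model, might look like this:

```python
# Hypothetical sketch: blending the ML threat score with an LLM second
# opinion to set alert priority. The rule below is an assumption for
# illustration, not Skyhawk's published method.
def prioritize(ml_score: int, llm_verdict: str) -> str:
    """Map an ML score (0-100) plus an LLM verdict to an alert priority."""
    escalate = llm_verdict == "likely_malicious"
    if ml_score > 80 or (ml_score > 50 and escalate):
        return "high"
    if ml_score > 50 or escalate:
        return "medium"
    return "low"


print(prioritize(87, "likely_malicious"))  # -> high
print(prioritize(40, "benign"))            # -> low
```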
This means IT admins can determine with greater certainty which alerts they need to act on and which deserve higher priority.
The Limits of Generative AI in Cybersecurity
Generative AI can be helpful for security professionals, but companies must account for the technology's limitations to achieve the best results.
Burshan said:
“While generative AI is extremely powerful, it needs to be used judiciously to ensure it doesn’t introduce bugs, doesn’t raise privacy issues, and addresses many other issues that need attention.”
In this sense, generative AI for SOC teams is a tool that can complement and streamline human investigation of security events. It is not a solution that allows full automation of threat remediation and response.
Currently, generative AI is best used when it explains opaque alerts and data in natural language, and gives users guidance on how to effectively respond to them.
Sunil Potti, Vice President and General Manager of Google Cloud Security, explained in a blog post following the launch of Google’s security LLM in April 2023: “Recent advances in artificial intelligence (AI), particularly large language models (LLMs), are accelerating our ability to help employees ensure the security of their organizations.”
Potti added:
“Not only do these new models give people a more natural and creative way to understand and manage security, but they also give them access to AI-powered expertise to go beyond what they could do on their own.”
Knowledge is power
With the growing number of cyber threats, knowledge is a critical asset. The more context security teams have when deciding how to respond to security incidents, the better they can protect on-premises and cloud environments from attackers.
By using generative AI, companies can help their analysts make decisions about which alerts to investigate and what actions to take, rather than relying on them to correctly assess hundreds of cases every day.