The Anatomy of a SaaS Attack: Catching and Investigating Threats with AI

Whether planned and executed over time or forced overnight by the global pandemic, the world’s digital transformation has prompted a surge in the use of Software-as-a-Service (SaaS) solutions in organizations across the globe. The annual growth rate of the SaaS market is currently 18%, and as the global workforce becomes increasingly remote throughout 2020, this figure is only set to skyrocket.

SaaS solutions have been an entry point for cyber-attackers for some time – but little attention is given to how the Tactics, Techniques & Procedures (TTPs) in SaaS attacks differ significantly from traditional TTPs seen in network and endpoint attacks.

This raises a number of questions for security experts: how do you create meaningful detections in SaaS environments that don’t have endpoint or network data? How can you investigate threats in a SaaS environment? What does a ‘good’ SaaS environment look like as opposed to one that’s threatening? A global shortage in cyber skills already creates problems for finding security analysts able to work in traditional IT environments – hiring security experts with SaaS domain knowledge is all the more challenging.

Meanwhile, SaaS consumers are left with limited options: use the native security controls provided in each SaaS solution – and risk a lack of security maturity – or go with a third-party SaaS security solution, often in the form of a Cloud Access Security Broker (CASB). Neither option is without its security risks.

Here are two examples of attacks recently detected by AI in SaaS environments that are representative of the broader SaaS threat landscape, and illuminate the sharp distinction between a traditional network attack and a SaaS compromise.

Microsoft 365 business email compromise

In what amounted to a classic business email compromise (BEC), an attacker infiltrated an employee’s Microsoft 365 account to access sensitive financial documents hosted in SharePoint, including pay slip and banking details. Having gained initial entry, the attacker proceeded to make configuration changes to the inbox, deleting items and making updates that would enable them to cover their tracks.

The employee’s account login was first observed from an unusual IP range. The account in question had never logged in from Bulgaria before, and peer accounts in the same department had not exhibited similar login behavior. This in itself was a low-level anomaly and not necessarily indicative of malicious activity – after all, in the context of an increasingly distributed workforce, employees might change locations frequently.

Yet the unusual login location was accompanied by an unusual login time and a new User-Agent. Taken together, these anomalies called for a deeper analysis. It was then identified that the account had started to access highly sensitive information, including payroll data hosted in SharePoint.
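The investigation above rested on correlating several weak signals – a new login country, an unusual login time, a new User-Agent. A minimal sketch of how such signals might be combined into a single anomaly score (names, weights, and example values are purely illustrative, not any vendor’s actual model):

```python
from dataclasses import dataclass, field

@dataclass
class LoginProfile:
    """Rolling behavioral baseline for a single SaaS account."""
    countries: set = field(default_factory=set)
    user_agents: set = field(default_factory=set)
    login_hours: set = field(default_factory=set)  # hours of day (0-23) seen so far

    def observe(self, country: str, user_agent: str, hour: int) -> None:
        """Record a known-good login, extending the baseline."""
        self.countries.add(country)
        self.user_agents.add(user_agent)
        self.login_hours.add(hour)

    def score(self, country: str, user_agent: str, hour: int) -> int:
        """Each weak signal alone is low-severity; together they add up."""
        points = 0
        if country not in self.countries:
            points += 2  # first-ever login from this country
        if user_agent not in self.user_agents:
            points += 1  # new User-Agent string
        if hour not in self.login_hours:
            points += 1  # login at an unusual time of day
        return points

# Train the baseline on the account's normal logins.
profile = LoginProfile()
for country, ua, hour in [("US", "Outlook/16.0", 9), ("US", "Outlook/16.0", 14)]:
    profile.observe(country, ua, hour)

# A familiar login scores 0; the anomalous one trips every signal.
print(profile.score("US", "Outlook/16.0", 9))          # 0
print(profile.score("BG", "python-requests/2.31", 3))  # 4
```

In practice, each signal would be probabilistic rather than binary and the baseline would decay over time, but the principle is the same: no single signal is damning on its own – it is the combination that reveals the compromise.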

The attacker tried to gain insights about payment information and credit card details, with the likely intention of changing the payroll details to an attacker-controlled bank account.

AI-powered security technology was able to piece together these weak signals and illuminate the likely account compromise. The company’s security team was then able to lock the account and alert the user, who subsequently changed their credentials.

SaaS account compromise at a global supply company

At a global supply company, unauthorized access to an employee’s file storage account was detected. The login took place in the US – where the company does operate – but from an unusual IP space and ASN. AI began to investigate the user’s activity.

The actor behind the account logged in successfully and proceeded to download expense reports, invoices, and other financial documents – files that were highly unusual for this account to access.

Cyber AI also found that the activity occurred at a highly unusual time for the legitimate user, and the location of the actor’s IP address was anomalous compared to the employee’s previous access locations for this particular SaaS service.

An understanding of user behavior and granular visibility within the application allowed the company to spot the subtle signs of account compromise. Moreover, AI-powered investigation outlined the narrative in its entirety, showing how each unauthorized file exposure was part of a connected incident and a key concern for the security team.

A new era in SaaS domain defense

Ultimately, traditional detection approaches with hard and fast rules for how SaaS domains should operate are not enough to ensure that SaaS applications remain secure. Keeping threat intelligence lists up to date is even more difficult, as most SaaS attacks don’t involve any Command & Control infrastructure – just logins from unfamiliar remote devices. When it comes to points of entry for SaaS attacks, the possibilities are endless: VPN, Tor, other compromised devices, dynamic DNS – or even virtual private servers for attackers to cover their tracks.

A more intricate and effective approach to SaaS security requires an understanding of the dynamic individual behind the account. SaaS applications are fundamentally platforms for humans to communicate – allowing them to exchange and store ideas and information.

Abnormal, threatening behavior is therefore impossible to detect without a nuanced understanding of those unique individuals: where and when do they typically access a SaaS account, which files are they likely to access, who do they typically connect with? As the attacks outlined above demonstrate, these are questions for an AI brain to contend with.