The threat is coming from inside the house: how to spot and prevent insider threat

Have you ever seen a horror movie where the protagonist, who’s home alone on a dark and stormy night, starts receiving menacing calls from a stranger? Then checks all the doors and windows and calls the police – only to find out that the calls were coming from inside the house all along? A nightmarish scenario for sure, but as we often tell ourselves: “It’s just a movie.”

However, it can quickly become somewhat of a reality for CIOs who ignore the notoriously hard-to-detect security risks that might originate from within their organizations. In this article, we’ll explore who risky insiders are and why they’re one of businesses’ biggest cyber liabilities, as well as how to detect insider threats and safeguard company data assets against associated data breaches.

What is an insider threat – and who is an insider attacker?

According to the Cybersecurity and Infrastructure Security Agency (CISA), the definition of insider threat is “the potential for an insider to use their authorized access or understanding of an organization to harm that organization.” This harm might be caused through malicious, complacent, or unintentional acts, but in any case it ultimately damages the integrity, confidentiality, and availability of the company and its assets.

Who counts as an insider, you wonder? Anyone who has or used to have authorized access to or knowledge of a business’s resources, whether it’s personnel, facilities, data, equipment, networks, or systems. They might be people who are trusted by an organization, CISA explains, such as employees, and are granted access to sensitive information. Other examples include individuals who:

  • Have a badge or access device identifying them as someone with regular or continuous access, like a contractor or a vendor;
  • Develop the organization’s products and services and know the secrets of the products that provide value to the organization;
  • Are privy to the organization’s pricing and cost structure, strengths, and weaknesses as well as business strategy and goals;
  • Work in government roles with access to information which, if compromised, can jeopardize national security and public safety.

The anatomy of an insider attack: the hows and whys

Insider threats come in all shapes and sizes, but most of them fall into two major categories: intentional and unintentional threats.

Unintentional threats are the result of either negligent or accidental insider behavior. The former usually manifests itself in security incidents caused by individuals who are well aware of an organization’s cybersecurity protocols but deliberately ignore them. Think: failing to install software updates, including security patches, or recklessly handling devices containing sensitive company data. Accidental threats occur when insiders unwittingly expose the organization to cybersecurity risk by, say, sharing a confidential file with the wrong person, falling prey to phishing or malware, or carelessly disposing of confidential documents.

A good example of the havoc such oversights can wreak on an organization and its stakeholders is The Great Twitter Hack of 2020. In July, 130 high-profile accounts from US politics, entertainment and tech – Elon Musk, Barack Obama, Joe Biden, Bill Gates, Jeff Bezos, Michael Bloomberg, Warren Buffett, Kim Kardashian, Kanye West, Apple, Uber, and the like – were hijacked to promote a bitcoin scam to hundreds of millions of followers. The idea was simple: send bitcoin to celebrities’ wallets and they’ll send double the amount back. According to a prosecutor, CNBC reported, the hackers raked in more than $100,000 worth of bitcoin.

Twitter confirmed that the ordeal had started out as a phone spear phishing attack against a handful of its employees. The hackers claimed to be calling from the company’s IT department to sort out a VPN issue. Such problems were an almost everyday occurrence at the height of the pandemic, with a growing number of remote workers connecting from their home offices, the New York State Department of Financial Services points out. The employees were directed to a spoofed Twitter VPN site and asked to enter their credentials. It’s unknown how many employees fell for the scam, but at least one did – and that turned out to be one too many.

Intentional threats can be just as vicious, if not more so. Malicious insiders are usually motivated by greed or vengeance, looking to harm an organization for financial gain or to “get even” for a real or perceived mistreatment. Typically, they do so by leaking sensitive information or trade secrets to sabotage a company’s productivity or reputation, or by stealing proprietary data or intellectual property for profit, as a form of espionage, or to advance their careers at a new employer. A subset of malicious actors is often referred to as moles, collaborators, or collusive threats: insiders who have been talked, tricked, or coerced by cyber criminals into doing their bidding.

“He thought he was the smartest guy in the room,” FBI Albany Special Agent Vin Manglavil said of Jean Patrice Delia, an ex-GE engineer who pleaded guilty to conspiring to steal trade secrets from his former employer in December 2019. Clearly, he was no match for federal investigators, who spent years uncovering how Delia and his business partner, Miguel Sernas, had stolen some 8,000 files containing trade secrets, pricing models, and proposals used to bid against GE around the world for new contracts, as confirmed by the US Attorney’s Office for the Northern District of New York.

Corporate Cluedo: what are some potential insider threat indicators?

In its annual Cost of Insider Threats Global Report, the Ponemon Institute studies the financial aftermath of three types of insider threats: careless or negligent employees and contractors, criminal or malicious insiders, and credential thieves. According to the 2022 edition, insider threats have increased across all three categories, but those caused by careless or negligent employees dominate the risk landscape. Fifty-six percent of detected incidents were due to reckless behavior, leaving organizations with a bill of $484,931 on average. Malicious actors were to blame for 26% of insider attacks, at an average cost of $648,062 per incident.

Recognizing the telltale signs of ongoing or impending insider activity, often called potential risk indicators (PRIs), is crucial to mitigating insider threats. According to the Center for Development of Security Excellence, PRIs cover a variety of individual predispositions, stressors, choices, actions, and behaviors, such as access attributes, professional lifecycle and performance factors, security and compliance violations, and unauthorized use or disclosure. They also include inappropriate attempts to view or obtain protected information outside one’s need to know, suspicious technical activity, financial considerations, and criminal or questionable personal conduct.

A case study cited by CISA perfectly illustrates how many potential insider threat indicators can combine into a perfect storm and turn a trusted employee into a cyber liability.

The insider in question was working as an engineer for a supplier of aerospace parts for high-profile federal agencies such as NASA, the US Air Force, and the US Navy. The company’s insider threat unit uncovered concerning behaviors on the employee’s part, including copying entire folders with mechanical drawings and other valuable design information for a satellite program on a USB stick. Plus, he showed signs of poor judgment (spent thousands of dollars on a romantic interest he hadn’t even met), frustration for not getting promoted at work, and worry over mounting medical bills in the wake of his wife’s worsening health conditions.

The employee then contacted the Russian embassy about selling the stolen proprietary software technology and satellite information and met several times with an undercover FBI agent he believed was a Russian intelligence officer. Thanks to the aerospace manufacturer’s vigilant insider threat specialists and their collaboration with law enforcement agencies, the malicious operation was nipped in the bud. The perpetrator was eventually convicted of attempting to sell proprietary trade secrets to a foreign government’s intelligence service and sentenced to five years in prison.
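To make the idea of potential risk indicators a bit more concrete, here is a minimal sketch of how a security team might surface one of the technical signals from the case above – bulk copies to removable media – out of a file-activity log. Everything here is an illustrative assumption, not a reference to any specific product: the CSV schema, field names, and the 500 MB-per-day threshold are hypothetical.

```python
# Minimal sketch: flag bulk copies to removable media from a hypothetical
# file-activity log (CSV columns: user, timestamp, action, target, bytes).
# The log schema and the 500 MB/day threshold are illustrative assumptions.
import csv
from collections import defaultdict
from datetime import datetime

THRESHOLD_BYTES = 500 * 1024 * 1024  # flag more than ~500 MB per user per day


def flag_bulk_usb_copies(log_path: str) -> list[tuple[str, str, int]]:
    per_user_day: dict[tuple[str, str], int] = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["action"] == "copy" and row["target"] == "removable_media":
                day = datetime.fromisoformat(row["timestamp"]).date().isoformat()
                per_user_day[(row["user"], day)] += int(row["bytes"])
    # Return (user, day, total_bytes) for every user-day over the threshold
    return [(u, d, b) for (u, d), b in per_user_day.items() if b > THRESHOLD_BYTES]


if __name__ == "__main__":
    for user, day, total in flag_bulk_usb_copies("file_activity.csv"):
        print(f"PRI: {user} copied {total / 1e6:.0f} MB to removable media on {day}")
```

A single flag like this proves nothing on its own; as the case study shows, it’s the combination of technical activity with stressors such as financial pressure or workplace frustration that should prompt a closer look.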

From vulnerability to strength: 4 insider threat prevention best practices

1. Insider threat mitigation should start at recruitment

“Insider threats can be fought on multiple fronts, including early in the recruitment and hiring process. Hiring leaders should look beyond the standard criminal background checks, and dig into a prospect’s history to look for anything that might render them susceptible to blackmail or bribery,” advises security and borderless networks expert Pete Burke. This might be excessive debt, bankruptcy, loan defaults, tax arrears, or any indicator of financial hardship that can be used as leverage against a future employee.

2. Boost cyber awareness about insider threats

A joint study by Stanford University Professor Jeff Hancock and security firm Tessian found that a whopping 88% of data breach incidents are caused by human error. Roughly half of the employees surveyed said they were “very” or “pretty” sure they had made a mistake at work that could have turned into a security risk for their company. This clearly underlines the need for employers to build a culture of vigilance within their organization by training their workforce to spot insider threat indicators before they turn into data leaks.

3. Tailor security protocols to your organization

Designing an effective insider threat prevention program should start with asking yourself: “What critical assets does my organization have that need to be protected?” Physical or intellectual, technology or process, software or equipment – account for everything whose loss, compromise, theft, or damage could put business continuity in peril. As the next step, identify who has access to those assets and set up or re-evaluate permissions to limit access to a strict need-to-know basis.
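As a rough illustration of that last step, the sketch below compares the access users actually hold against a need-to-know matrix and flags anything in excess. The asset names, roles, and grant structure are hypothetical stand-ins for whatever inventory and identity data your organization keeps.

```python
# Minimal sketch: flag access grants that fall outside a need-to-know matrix.
# The assets, roles, and current grants below are hypothetical examples.
NEED_TO_KNOW = {
    "pricing_models": {"finance", "sales_leadership"},
    "design_files": {"engineering"},
    "hr_records": {"hr"},
}

CURRENT_GRANTS = {
    "alice": {"role": "engineering", "assets": {"design_files"}},
    "bob": {"role": "marketing", "assets": {"pricing_models", "design_files"}},
}


def excess_access(grants: dict, matrix: dict) -> list[tuple[str, str]]:
    """Return (user, asset) pairs where the user's role is not authorized."""
    findings = []
    for user, info in grants.items():
        for asset in info["assets"]:
            if info["role"] not in matrix.get(asset, set()):
                findings.append((user, asset))
    return findings


if __name__ == "__main__":
    for user, asset in excess_access(CURRENT_GRANTS, NEED_TO_KNOW):
        print(f"Review needed: {user} holds access to {asset} outside need-to-know")
```

Running a review like this on a regular schedule – rather than once, at onboarding – is what keeps permissions from quietly drifting beyond need-to-know.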

4. Keep things transparent

Without visibility into users’ network activity, insider threat prevention will be an uphill battle. Set up controls to monitor and manage shadow IT risks, establish secure and easy-to-observe file sharing and access practices to track user behavior and file movement, and make sure your bring-your-own-device policy is clear and comprehensive enough not to translate into a “bring-your-own-risk” reality. It’s also a good idea to invest in an end-to-end encrypted, zero-knowledge file-sharing and collaboration tool for maximum protection.
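One simple way to turn that visibility into alerts is to compare each user’s file-sharing activity against their own recent baseline. The sketch below does this with a plain z-score; the event counts, the seven-day baseline, and the cutoff are all illustrative assumptions rather than recommendations from any particular monitoring product.

```python
# Minimal sketch: flag users whose daily file-sharing volume deviates sharply
# from their own recent baseline. Counts and the z-score cutoff are illustrative.
from statistics import mean, pstdev


def flag_anomalous_users(daily_shares: dict[str, list[int]], cutoff: float = 3.0) -> list[str]:
    """daily_shares maps a user to their file-share counts per day, most recent last."""
    flagged = []
    for user, counts in daily_shares.items():
        history, today = counts[:-1], counts[-1]
        if len(history) < 7:
            continue  # not enough baseline data to judge this user yet
        mu, sigma = mean(history), pstdev(history)
        if sigma and (today - mu) / sigma > cutoff:
            flagged.append(user)
    return flagged


if __name__ == "__main__":
    sample = {"carol": [4, 6, 5, 7, 5, 6, 4, 38]}  # sudden spike on the last day
    print(flag_anomalous_users(sample))  # -> ['carol']
```

A spike is only a prompt for a conversation, not proof of wrongdoing – but without this kind of baseline, even the most obvious exfiltration can hide in the noise.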