InfoSec: Threat Hunting Opsec

(This material was presented as a talk during the 2019 BSides Vancouver)

Operational security (or “OpSec”) is not a novel concept in the field of cybersecurity. Like “DMZ,” it is another term infosec has borrowed from the military. OpSec refers to the practice of conducting operations in a manner that does not make the adversary’s job easier. Offensive operators tend to pay close attention to their OpSec when conducting campaigns. Historically, poor OpSec has been one of the most common ways for attackers to get caught or have their identities unmasked. However, practicing good OpSec is just as important for defenders. Security Operations Centers, Incident Responders and Cyber Threat Hunting teams should always consider whether their conduct may help the attacker further their objective.

Experienced Incident Responders could likely recite several OpSec mantras if you woke them up in the middle of the night. "Don’t use a domain admin account for responsive actions," "don’t upload payloads to VirusTotal," and so on. Threat Hunting is often referenced as "Incident Response without an incident." As such, its OpSec goals are the same. However, the challenges in achieving them are rather unique.

If we are performing Incident Response, we already know that an incident has occurred. We are even likely to have at least a general idea of where the attacker is in the environment. Something happened to alert us to the activity: an alert was triggered, a report was received from an end-user, or a degradation of system performance was observed, pointing us to the suspected location of the attacker. From there, we can start to pull on the thread and follow the breadcrumbs. We can profile, scope out and, eventually, contain the intrusion. Knowing where the attacker may be inherently helps us reduce the risk of tipping them off that they have been discovered.

Random Acts of Incident Response

When Threat Hunting, we do not have that luxury. We do not know where the attacker is. In fact, we do not even know if the attacker is in our environment at all. We formulate a hypothesis that if an attacker were in our environment, we would be able to observe indicators of certain techniques on certain systems. We then go out and check every system in scope of the hunt for the presence of said indicators. Now, what happens if the attacker is on one of those systems? What happens if they notice that we have just run a check intended to identify them? That in a matter of minutes, as we review the results, we will be onto them? If the attacker realizes that we are about to disrupt their campaign, they may do something terrible.

Best-case scenario, if they are not yet close enough to their action on objective, they may decide to go covert. They will take a scorched-earth approach and clean their artifacts from the environment. They will set up a low-and-slow C2 channel, or a scheduled reverse shell. And a few months out, when we are no longer looking for them, they will be back. That is the best-case scenario and, yet, we have failed our objective of eradicating them from our environment. The worst-case scenario is the attacker deciding to expedite their action on objective. This can range from exfiltrating our crown jewels to pushing a ransomware payload to every system in our network.

I think we can all agree that, depending on the severity of the resulting incident, this may lead to what we often refer to as a “resume generating event.” We will go from being the “good guys” - threat hunters trying to help our company reduce the attacker dwell time - to being what the industry refers to as a “root cause”. And a "root cause" is not a good thing to be. A "root cause" gets written up in incident reports. A "root cause" gets brought up in "nameless and rankless" post-mortems. A "root cause" is talked about in attorney-client communications. We do not want to be the “root cause.”

Enter Passive Data Sources

How can we minimize the chance of an OpSec failure turning us into a “root cause” while we are threat hunting? One of the most effective measures is the use of passive data sources. This is likely something that most of us are already doing out of convenience anyway. I am referring to tools that abstract the logs, metadata, and telemetry information away from the endpoints where the attackers might be present. There, we can analyze the data without the risk of attackers realizing we are looking at it. Data sources like SIEMs and EDRs fall into the passive category. They enable us to stack, pivot and analyze indicators without any chance of attackers being tipped off to us doing it*. I have added an asterisk to the last statement so that we can get back to it later.
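As a toy illustration of the kind of offline analysis this enables, here is a minimal Python sketch of least-frequency-of-occurrence stacking over an exported telemetry table. The event format and field names are assumptions for the example, not any particular EDR's schema:

```python
def stack_rare_values(events, field, max_hosts=3):
    """Least-frequency-of-occurrence stacking: surface values of `field`
    (e.g. a process name) observed on only a handful of distinct hosts."""
    hosts_by_value = {}
    for event in events:
        hosts_by_value.setdefault(event[field], set()).add(event["host"])
    # Values seen on few hosts are outliers worth a closer look.
    return sorted(
        (value, len(hosts))
        for value, hosts in hosts_by_value.items()
        if len(hosts) <= max_hosts
    )

# Toy telemetry export: a common binary on many hosts, an oddity on one.
events = [
    {"host": "ws01", "process": "svchost.exe"},
    {"host": "ws02", "process": "svchost.exe"},
    {"host": "ws03", "process": "svchost.exe"},
    {"host": "ws04", "process": "svchost.exe"},
    {"host": "ws02", "process": "definitely_not_mimikatz.exe"},
]
print(stack_rare_values(events, "process"))  # → [('definitely_not_mimikatz.exe', 1)]
```

The point is that all of this happens against exported data, well away from the endpoints themselves.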

Here are some examples of the passive data sources that we can use for Threat Hunting:

  • EDR: Tanium, Carbon Black, CrowdStrike, SentinelOne, Sysmon, PowerShell (Oriana)
  • SIEM: Splunk, ArcSight, QRadar, LogRhythm, ELK, Graylog
  • NTA: NetWitness, StealthWatch, Vectra, Awake, Snort, ExtraHop, Zeek, Moloch, Security Onion, RITA
  • OSINT: VirusTotal, Hybrid Analysis, URLscan, Mnemonic PDNS, PassiveTotal

One could argue that the “OSINT” sources listed above are not in the same category. They tend to not make our environment telemetry available for offline hunting. However, they can be very helpful for indicator and finding validation, so I will still cover them in this section.

Are They Passive?

Before hunting within passive data sources, it is important to understand how they work and whether they are indeed passive. For example, a well-known EDR solution, Tanium, does not abstract the data away from the endpoints in its base implementation. It queries the data on demand, whenever a question is issued. This can be observed on the endpoint, alerting the adversary to the hunting activity. Another common example of a data source that can turn from passive to active is VirusTotal. Searching for a file hash in VirusTotal is perfectly safe from an OpSec standpoint. Uploading that file to VirusTotal for analysis will cause an OpSec failure. If the uploaded payload is unique to your environment, external parties will see a new scan result for that hash the moment it is uploaded. They can then conclude that they have been discovered. This happened to RSA Security in 2011 - someone uploaded the initial phishing email containing the zero-day payload to VirusTotal, allowing the public to perform attribution on the attack. Similar considerations apply to performing public (vs. private) page scans in URLscan, and to other OSINT data sources.
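To stay on the safe side of that line, hash lookups can be kept strictly read-only. The sketch below uses VirusTotal's v3 REST API (`GET /files/{hash}` is a real endpoint); the API-key handling is illustrative, and the request is never a file upload:

```python
import os
import urllib.request

VT_API = "https://www.virustotal.com/api/v3"

def hash_lookup_request(sha256: str) -> urllib.request.Request:
    """Build a read-only hash lookup. Searching by hash is OpSec-safe;
    never POST the file itself -- an upload publishes a new scan result
    that the adversary can watch for."""
    return urllib.request.Request(
        f"{VT_API}/files/{sha256}",
        headers={"x-apikey": os.environ.get("VT_API_KEY", "")},
    )

req = hash_lookup_request(
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
)
print(req.get_method(), req.full_url)  # GET only; the sample never leaves us
# with urllib.request.urlopen(req) as resp: ...  # uncomment with a real key
```

If the hash is unknown to VirusTotal, resist the urge to "just upload it and see" - that is exactly the failure mode described above.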

Securing the Data Sources

Another important aspect of using passive data sources is making sure that they cannot be accessed by the adversary. *This is where that asterisk from above comes in. This is where we get to “drink our own Kool-Aid” and practice all those controls that we have been preaching to our users for decades: password complexity, role-based access control, multifactor authentication, the principle of least privilege, etc. It does not matter if our data sources are passive if an attacker can gain access to them. With access to our SIEM or EDR, attackers will be able to observe our hunting activities and adapt their TTPs to avoid detection. They may also use them to their advantage: identifying our crown jewels and high-value targets, modifying configurations and purging logs to avoid detection, leveraging remote execution capabilities to deploy their payloads. In the wrong hands our data sources become a powerful weapon. And if we end up arming the adversary with our own tools... well, you get it - "root cause."

Beyond the Passive Data Sources

Fast forward some months. We have figured out and secured our passive data sources. We can threat hunt in our environment in an OpSec-conscious manner. This being the real world, we eventually come across a system where our SIEM or EDR just does not give us the visibility we need. We find that we must take a measured OpSec risk and connect to the actual endpoint to get the information we need. Here are some things we can do to improve our OpSec when forced to hunt on actual endpoints.

Attribution is a Two-Way Street

The funny thing about attribution is that it goes both ways. When an adversary enters our environment, they do not create new accounts with identifiable names (e.g., APT37). Instead, they “live off the land,” using existing accounts that they have managed to take over. Or, if they must create one, they will observe and follow our account naming conventions. They do this to blend in and avoid detection by masquerading as common activity in our environment. We can borrow a page from their playbook and use accounts and systems that cannot be attributed back to us when we do Threat Hunting. We already know which accounts' activities most closely resemble Threat Hunting activities in our environment. In most environments these are vulnerability scanning or asset management discovery accounts. We can work with the corresponding teams to create accounts that, for all intents and purposes, appear to be owned and operated by them. Create some source systems that appear to be owned by them while you are at it. This way, should the attackers observe our hunting activity, they will most likely assume that it was just another recurring asset inventory scan. Of course, they will still try to steal those credentials for their own purposes (see the beginning of this paragraph), so let us address that next.

Accounts are Free, Compromises are Not

When an adversary observes accounts connecting to systems under their control, they will absolutely attempt to steal those credentials. As the adage goes - “two is one, one is none.” An attacker with only one privileged account is one password change away from being locked out of the environment. They will attempt to gather additional credentials, including those unattributable accounts that we created for hunting. And that presents another opportunity for becoming the “root cause.” We should always limit the level of privilege that we grant to the accounts used for hunting. If all we need is to pull running processes from workstations, we should set the account permissions in our environment accordingly. If all we need is the “badPwdCount” attribute in Active Directory (for statistical password spray detections) - we should have an account just for that. Accounts are free, compromises are not. Another option for limiting privileged credential exposure is to use unique per-system local admin accounts. While slightly more complicated to implement, solutions such as Microsoft LAPS, CyberArk or Hitachi ID make it possible. Unique local accounts can be used for hunting with no risk of giving the attacker access to additional systems in our environment. And we should never, ever hunt using a full-fledged Domain Admin account!
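As a sketch of that single-purpose account idea, the query can be scoped just as narrowly as the account. The filter below is standard LDAP syntax; the commented-out connection (via the third-party ldap3 package) uses placeholder server, base DN and account names:

```python
# Sketch: build the LDAP query for a single-purpose hunting account.
# The connection itself is commented out because the server name,
# base DN and account below are placeholders, not real infrastructure.

def bad_pwd_count_filter(min_count: int = 3) -> str:
    """LDAP filter for accounts with repeated bad password attempts --
    the raw material for a statistical password-spray detection."""
    return f"(&(objectClass=user)(badPwdCount>={min_count}))"

ATTRIBUTES = ["sAMAccountName", "badPwdCount"]  # request nothing more

print(bad_pwd_count_filter())
# With the ldap3 package (hypothetical host and base DN):
#   conn = Connection(Server("dc01.corp.example"),
#                     user="CORP\\svc-hunt-ldap", password="...",
#                     auto_bind=True)
#   conn.search("DC=corp,DC=example", bad_pwd_count_filter(),
#               attributes=ATTRIBUTES)
```

An account that can only read this one attribute is a very dull weapon if it is ever stolen.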

Another way to reduce the likelihood of attackers stealing our Threat Hunting credentials is to choose carefully how we execute commands on remote systems. For example, PsExec uses local authentication and will most likely expose our credentials to an attacker running Mimikatz or Flamingo on the target endpoint. PowerShell Remoting (without the CredSSP option) uses network authentication and makes it nearly impossible to obtain the credential locally. Some EDR vendors also offer functionality to execute scripts on remote endpoints. Because this is done in the context of the EDR agent, it reduces the risk of shared credential exposure during remote command execution. But we need to make sure that we understand how our EDR does this before choosing this option. SentinelOne on Windows and OS X, for example, creates a new local admin account when its remote shell functionality is used. While not giving the adversary access to shared credentials, it will most definitely alert them to the fact that someone is snooping around on a system within their control. Cue the "root cause" conversation…
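A small guard in our hunting tooling can make the safe choice the default. The sketch below assumes the third-party pywinrm package for the (commented-out) WinRM session; the hostname and account name are illustrative only:

```python
SAFE_TRANSPORTS = {"kerberos", "ntlm"}  # network authentication only

def check_transport(transport: str) -> str:
    """Refuse transports that leave reusable credentials on the target.
    CredSSP delegates the full credential to the remote host, where a
    tool like Mimikatz can recover it."""
    if transport.lower() not in SAFE_TRANSPORTS:
        raise ValueError(f"transport '{transport}' may expose credentials")
    return transport.lower()

# With the pywinrm package (placeholder host and account):
#   import winrm
#   session = winrm.Session("ws042.corp.example",
#                           auth=("CORP\\svc-hunt", "..."),
#                           transport=check_transport("kerberos"))
#   result = session.run_ps("Get-Process | Select-Object Name, Id")
print(check_transport("kerberos"))  # → kerberos
```

Baking the policy into the tooling means a tired analyst at 3 a.m. cannot accidentally pick the transport that hands their password to the adversary.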


Maintaining good operational security is an important aspect of a mature Threat Hunting capability. Use passive data sources as much as possible. EDRs, SIEMs, NetFlow and packet capture are your friends. Make sure you understand how your data sources function and how to secure them. Make it difficult for attackers to attribute your activity to the Threat Hunting team. Hide in the noise, masquerade as non-hunting teams, name your accounts and your source systems accordingly. Control your credentials. Never use a domain admin account for hunting. Consider which services and protocols you use to connect to target systems. Avoid becoming the root cause.
