Public disclosure: the pros and cons of naming and shaming cyber threat groups

Publishing information about cyber threat groups can have unexpected consequences, says BAE Systems’ Saher Naumaan

As the digital world has grown rapidly around us, so have the volume and variety of threats to this delicate ecosystem. Critical national infrastructure (CNI), such as hospitals, nuclear power plants and financial markets, was once effectively air-gapped from the internet; today that is frequently no longer the case.

In fact, the organisations that make up these and other sectors are overwhelmingly dependent on internet-based services and applications for day-to-day operations.

Cyber attacks from nation-state and organised crime groups are now the new normal for these companies. But until now, no one has investigated the wider impact that public disclosure of such incidents might have. As BAE Systems reveals in a new report, the truth is more nuanced than you might think.

Who does it and why?

Governments, rights groups and private sector cyber security vendors have invested countless sums over the past few years into better understanding their online adversaries. Increasingly, these activities result in public disclosure.

The motivation for doing so is usually positive: to raise awareness of a particular type of threat activity, both within the security community and among the wider public. This, in turn, could help researchers collectively improve cyber defences for their clients and customers.

Public disclosure and attribution are also often viewed as an effective deterrent: a timely reminder to attackers that anonymity and immunity can be precarious things in cyberspace. By ‘doxing’ or unmasking suspected hackers and/or burning their operations, researchers can make life difficult for their opponents.

Going underground

Yet, as has been documented in the past, threat groups almost certainly take a close interest in what is being said about them in public. And they may react in ways that make things more difficult for the threat intelligence community.

Sometimes, groups go quiet when their activities are unmasked, as happened when a company publicised a Middle Eastern APT campaign, known as Operation Cleaver, in 2014. More often, though, they don't stay silent at all, and instead we see a change in tactics, techniques and procedures (TTPs) designed to avoid further detection. When BAE Systems and PwC revealed a major Far Eastern state-backed operation against managed service providers and their customers, dubbed Cloud Hopper*, the group went deeper underground, making it harder to map its operations.

Sometimes, threat groups even attempt retaliatory responses against the organisations hunting them. The Middle Eastern Charming Kitten group, for example, designed a phishing page to harvest logins from a vendor that had published intelligence on it. Researchers and even journalists investigating threat groups must be prepared for attempts to compromise their own IT systems.

False flags and copycats

We can also see occasions when public disclosure of threat activity has played into the hands of other threat actors. Publish too much information about a particular group and its tradecraft could be copied, as we believe happened when a novel SMB watering hole technique used by the Russian group Dragonfly was subsequently adopted by a Middle Eastern threat group known as Leafminer.

Similarly, TTPs from publicised operations can be deployed as false flags, in a bid to throw researchers off the scent of new attacks. The destructive Olympic Destroyer malware used to target the Pyeongchang Olympic Games featured artefacts pointing to three separate Asian state-backed groups. However, it was eventually linked to a major Eastern European nation state.

Balance is everything

There are, of course, several caveats to be considered. There is no 100 per cent foolproof way of knowing whether a threat group made a specific change to its tactics in response to a particular public disclosure. The only way to know for certain would be to ask the attackers themselves and to assume that the reply you get is truthful.

What we can say is that many of the public disclosures listed above have had unforeseen and negative consequences. Yet they may also have had a positive impact, by disrupting the adversary's operations and imposing extra costs on them, not to mention the benefit to the research community of sharing useful threat intelligence [Google Docs spreadsheet]. It's also worth remembering that threat actors could respond in a similar manner if they spotted not a public disclosure but merely a live investigation into their attacks.

Ultimately, it's all about getting the balance right in terms of attribution and the level of technical detail shared publicly. Frustratingly, there's no one-size-fits-all rule. But investigating the impact of disclosures will become an increasingly important area, as the stakes for threat detection and response climb ever higher.

Saher Naumaan is a Threat Intelligence Analyst at BAE Systems Applied Intelligence. She currently researches state-sponsored cyber espionage with a focus on threat groups and activity in the Middle East.

* For more on Cloud Hopper, see the PwC and BAE Systems research paper [PDF]