Cyber attribution poses a dilemma for many security teams, which are often already time-strapped and short-staffed. Is it really time well spent trying to identify the attackers? Or is it a distraction from getting on with the improvements needed to reduce the likelihood of another breach?
The process behind cyber attribution is usually complex, lengthy, resource-hungry, and fraught with inaccuracy pitfalls. It involves security analysts gathering evidence, constructing timelines, piecing together the events that led to a breach, and painstakingly reviewing tactics, techniques, and procedures used by the adversary in an attempt to uncover the organization or individuals behind it.
The question for many cybersecurity teams is whether they can afford the effort involved, and whether it will really make any lasting difference to their overall security posture. So, if the end result doesn’t justify the magnitude of the task, then why do we attempt to do attribution in the first place?
Someone to blame?
I’ve had many a discussion with members of the cybersecurity community on this topic - some say attribution is vital to their organizations, but when pushed they can’t always explain the exact reason behind its usefulness. Maybe it’s because we are only human and part of our nature is the need to blame someone else for our problems, and organizations tend to behave no differently. They want to apportion blame externally to deflect criticism and therefore prove they weren’t at fault.
As with any forensic process, getting to the root of attribution is time-consuming and often involves a significant amount of educated guesswork. The temptation is for security teams to go down the path that suits their own narrative. It’s much easier to come up with hypotheses and speculate about ‘who’ and ‘why’ instead of the more immediate concerns of ‘what’, ‘when’, ‘where’, and ‘how’ it happened. I’ve worked on many an incident where a well-meaning executive has burst into the room proclaiming at the top of their voice “it’s them, isn’t it?” when we’re still trying to figure out what *is* happening, let alone who might be behind the activity.
Assumption or indeed misattribution can be dangerous too, leading analysts down the wrong investigatory path, wasting the most precious resource afforded to the security team: time.
Interestingly, attribution is not just a cyber problem. There are instances worldwide where experts base their theories on evidence that might not be genuine, or misinterpret the information in front of them. The art world, for example, has plenty of cases of forged paintings selling for millions of dollars to highly reputable galleries that were convinced they were genuine. Museums aren’t immune to misattribution either. In 2002, the J. Paul Getty Museum paid somewhere between $3 million and $5 million for an unsigned sculpture believed to be by Paul Gauguin, which turned out only to have been photographed by the artist, and is now thought to have been made at a time and place he would have needed to teleport to…
Bear in mind that governments struggle with attribution too. Governments and their agencies can take years to attribute a cyberattack. For example, the US Department of Justice took more than four years to charge two Chinese nationals over the breach of health insurance giant Anthem Inc., and the motives behind it are still unknown.
Questioning the innate value of attribution
When a cyberattack happens in an organization, security teams are commonly seen to be at fault, at least to the uninitiated outside viewer. However, the fact is that most security teams are running almost on fumes, with insufficient budgets and limited tools available to them. Couple this with the purported global shortfall of cybersecurity professionals (reportedly around 3.4 million workers worldwide), and it’s not surprising that focusing on the right security priorities is an unending challenge for businesses of all sizes. Whether you believe the skills gap numbers or not, the bucket of cybersecurity folks sitting around with nothing to do with their time could hardly be described as overflowing.
Therefore, it makes sense for security teams to ask themselves whether they should be investing their time in other security initiatives instead of attribution. These things certainly don’t have to be a zero-sum game, but rather a decision point as to how best to utilize time when and where it will have the most impact, considering also when outside help is required. And in the middle of an incident, attribution tends to be considerably more of a distraction than a solution. Of course, there are always edge cases, so I’m not saying attribution is utterly pointless, but taking the right actions at the right time makes a whole lot of difference to how long an incident takes to resolve.
When attribution matters
Ultimately, the importance of attribution comes down to the individual organization and whether it can truly help pursue an investigation to a meaningful conclusion. Threat actors often go to great lengths to cover their tracks, so the information gathered through detailed analysis post-incident can absolutely bring organizations closer to reducing damage caused by adversaries by adding countermeasures and program updates to help prevent repeat attacks.
However, with investigations eating up significant amounts of time and resources, attribution shouldn’t be an organization’s priority in the event of a breach, when responders need to focus on detection, analysis, containment, and eradication. With security teams already over-stretched, it’s better to run a root cause analysis (RCA) process after the breach to understand the attack, identify the issues and problems that led to it, and develop more effective programs to reduce the likelihood of future episodes. My personal experience in this realm is that many organizations don’t do a solid job of the RCA process, for a number of reasons (most of which boil down to money), and so leave themselves wide open to a repeat performance.
Understanding your own risk-based threat profile
Another area that is commonly left somewhat to chance is risk-based threat profiling. Organizations need to understand their own specific risk factors, identify their gaps, and baseline their controls. Utilizing the FedRAMP methodology (which aligns with the NIST framework) really isn’t overkill, albeit not something that can be achieved during a lunch break. Additionally, an effective Cyber Threat Intelligence (CTI) program can go a long way towards driving a more, dare I say, predictive approach to the types of threats most likely to impact an individual organization.
When embarking on the latter, business-level requirements are the best way to approach an effective (I’m saying this twice for impact) CTI program in collaboration with senior leadership, including setting out goals, measurements, feedback loops, and reporting. As ever, communication is key: security teams must be able to tell stories around threat intelligence capabilities and engage with business leaders using non-technical language to gain understanding and support for their security program.
Overcoming management complacency
Organizations and boards that don’t take cyber threat initiatives seriously are putting their businesses at risk. Complacency was cited as the “biggest cyber risk” by the UK’s Information Commissioner’s Office (ICO) when it issued a £4.4m fine to a construction company for failing to keep the personal information of its staff secure.
If security leaders can get their threat-based risk profiles and CTI programs on track with the buy-in of their senior business peers, then for most organizations the question of the value of attribution will become less of a dilemma. Security teams can stay focused on minimizing their organizations' attack surface and guarding against current and emerging threats that might otherwise derail their business. Bottom line – when faced with an incident or breach, the what, where, and when generally matter a whole lot more than the who or indeed the why.