Faith-based Attribution
Every network attack against a company like Sony Entertainment, an organization like the DNC, or a government agency like OPM comes with a series of questions to be answered, including the obvious ones: When did it begin? What was taken? Who was responsible? Are the attackers out of my network?
Attribution, simply put, purports to answer the question of who is responsible. For example, CrowdStrike investigated the DNC network breach and determined that the Russian government was responsible. FireEye investigated the Sony Entertainment network attack and determined that the North Korean government was responsible.
It’s important to know that the process by which a cybersecurity company attributes an attack has nothing to do with the scientific method. Claims of attribution aren’t testable or repeatable because the hypothesis is never checked against ground truth and proven right or wrong.
Nor are claims of attribution admissible in any criminal case, so those who make them don’t have to abide by any rules of evidence (e.g., hearsay, relevance, admissibility).
The closest analogy for a cybersecurity company’s assignment of attribution is an intelligence estimate; however, the intelligence analysts who write those estimates are held accountable for their hits and misses. If the miss is big enough (no WMDs in Iraq, India’s five nuclear tests in 1998, Iraq’s invasion of Kuwait in 1990), there are consequences, and perhaps a Congressional investigation.
When it comes to cybersecurity estimates of attribution, no one holds the company that makes the claim accountable, because there’s no way to prove whether the assignment of attribution is true or false unless (1) there is a criminal conviction, (2) the hacker is caught in the act, or (3) a government insider leaks the evidence.
In fact, among professions that use an investigative process to arrive at a true and accurate answer, the closest counterpart to a cyber intelligence analyst issuing an attribution estimate is a religious officiant like a priest or a minister, who simply asks the congregation to believe what they say on faith. The likelihood that a nation state will acknowledge that a cybersecurity company has correctly identified one of its operations is probably slightly lower than the likelihood of God making an appearance at a theological debate about whether God exists.
Unstructured or Structured Analysis?
Many of the cyber intelligence analysts who work at companies like CrowdStrike, FireEye, and Mandiant have come out of the military or the Intelligence Community with prior analytic training.
So the quickest way to get to the heart of how these companies assign attribution is to look at how intelligence analysis was done during that era. Fortunately for us, Maj. Robert D. Folker, Jr. (USAF) did precisely that in his January 2000 paper “Intelligence Analysis in Theater Joint Intelligence Centers: An Experiment in Applying Structured Methods,” published by the Joint Military Intelligence College.
Folker believed that adding structure to the analytic process would yield better results than the vastly more popular but frequently flawed intuitive approach. He gathered 26 active-duty volunteers from Joint Intelligence Centers, who were then divided into a Control group and an Experimental group. The Experimental group was given one hour of training in hypothesis testing, a structured methodology. The Control group wasn’t.
Notice what Folker observed in the Control group:
After reading the scenarios members of the control group formed a conclusion, then went back to the scenario to find evidence that supported their conclusion and ignored contradictory evidence. When asked to justify their answers, analysts in the control group often cited some “key” information that gave them a flash of insight.
And the Experimental group:
Members of the experimental group examined all evidence provided in the scenario prior to making their decision. They felt confident that they were making the best decision they could with the amount of information available. They acknowledged that their decision may not be the right one and added that if more evidence became available they would reevaluate their conclusion taking into account this new information.
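To make the contrast concrete, here is a minimal sketch of the kind of structured method Folker tested, in the spirit of Heuer’s Analysis of Competing Hypotheses (one common form of hypothesis testing): every piece of evidence is rated against every hypothesis, and hypotheses are ranked by how much evidence contradicts them. The hypotheses, evidence items, and ratings below are invented for illustration and are not from Folker’s experiment.

```python
# A toy Analysis of Competing Hypotheses (ACH) matrix. Every piece of
# evidence is rated against every hypothesis: "C" = consistent,
# "I" = inconsistent, "N" = neutral. All entries here are invented.
ratings = {
    "malware compile times match Moscow business hours": {
        "State actor": "C", "Criminal group": "N", "False flag": "C",
    },
    "infrastructure reused from a known criminal campaign": {
        "State actor": "I", "Criminal group": "C", "False flag": "C",
    },
    "stolen data was never monetized": {
        "State actor": "C", "Criminal group": "I", "False flag": "C",
    },
}

def inconsistency(hypothesis: str) -> int:
    # ACH ranks hypotheses by how much evidence contradicts them,
    # not by how much appears to support them.
    return sum(1 for row in ratings.values() if row[hypothesis] == "I")

hypotheses = sorted({h for row in ratings.values() for h in row}, key=inconsistency)
for h in hypotheses:
    print(f"{h}: {inconsistency(h)} inconsistent item(s)")
```

Notice what even this toy matrix surfaces: the seemingly damning evidence (compile times) is consistent with more than one hypothesis, and the “false flag” hypothesis is never contradicted at all, which is exactly why analysts trained in structured methods hedge their conclusions instead of declaring certainty.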
Keep in mind that this study was done in 1999, when many of today’s cybersecurity professionals were serving in the military as intelligence analysts or investigators, so it isn’t surprising that the same approach is frequently applied by cyber intelligence analysts today.
Unfortunately, cyber analysts who apply 20-year-old habits to their attribution efforts should pay more attention to what modern science has taught the IC about how the brain processes information; i.e., the impact of cognitive bias. IARPA, for example, has funded research into mitigating biases with gameplay. Or you could just read “Thinking, Fast and Slow” by Daniel Kahneman.
Even if cyber intelligence managers and analysts were trained to apply the latest techniques to counter things like fundamental attribution error, confirmation bias, and the bias blind spot, they would still have a huge deficit to overcome: the inability to measure the accuracy of their assessments.
Imagine taking an SAT test, turning it in at the end, and then being told that you have to assess your own grade based upon how well you think you did. And you never receive an official score. Would you hire any professional who couldn’t produce independently verifiable results of his proficiency? Of course not.
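If attribution claims were ever scored against ground truth, that kind of accountability would look something like forecaster calibration. Here is a minimal sketch using the Brier score, a standard metric in forecasting tournaments; the probability calls and outcomes are invented for illustration.

```python
# Brier score: mean squared error between a stated probability and the
# eventual outcome. 0.0 is perfect; always answering "fifty-fifty"
# earns 0.25. The attribution calls and outcomes below are invented.
calls = [
    (0.90, True),   # "high confidence" claim that turned out correct
    (0.85, False),  # "high confidence" claim that turned out wrong
    (0.60, True),   # hedged claim that turned out correct
]

def brier(forecasts) -> float:
    return sum((p - float(hit)) ** 2 for p, hit in forecasts) / len(forecasts)

print(f"Brier score: {brier(calls):.3f}")  # roughly 0.30, worse than "fifty-fifty"
```

The catch, of course, is that for attribution claims the outcome column almost never gets filled in, so the score can never be computed. That is the accountability gap.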
The solution to this problem is a simple one. If you can prove attribution, do it.
If you can’t, say so.
Just don’t claim the equivalent of a 1600 SAT score and expect us to take it on faith.
Read Also: “The DNC Breach and the Hijacking of Common Sense”