In 2005, a very distraught Washington, D.C. administrative law judge sued his local dry cleaner for misplacing his pants for a few days. The $67 million claim for damages (that’s not a typo) won national attention not just because it was beyond ridiculous but because it actually went to trial – making a mockery of the judge and the judicial process. Without the outpouring of support – emotional and pecuniary – that helped them survive the three years of litigation, the Korean-immigrant dry cleaners might well have packed up and moved back to Korea to pursue their American dream beyond the reach of an American legal system run amok. That legal system, much like our regulatory system, is supposed to empower and protect our citizens. Occasionally it goes haywire and fosters the very kind of harm it was meant to protect against. Eventually Judge Fancy Pants lost his frivolous case, and his job – but only after producing considerable and needless human misery while further straining an already overburdened judiciary.
Similarly, the Sarbanes-Oxley Act (or Sarbox) was supposed to protect the investing public from misinformation by public companies. Not only has the quality of information not improved as a result of Sarbox, but the law of unintended consequences is in play – one of the things Sarbox has effectively helped shut down is the IPO market for venture-backed companies. Likewise, the Spitzer Rule was supposed to help the investing public by eliminating conflicts of interest in investment banking, but it too cued the law of unintended consequences, making it impossible for emerging public companies to get the analyst coverage necessary to survive and thrive in the public markets. The National Venture Capital Association wrote a great statement summarizing the impact of these regulations on capital markets. The list continues – the credit rating agencies were supposed to properly rate risk in mortgages but apparently couldn’t keep up with the massive structural change in the mortgage industry that led to radical and unprecedented risk-taking by underwriters. In all these instances, government regulators not only failed to stop the problem they were trying to address, but actually caused harm. “Trying is the first step toward failure,” as Homer Simpson famously declared.
Cybersecurity weaknesses rank among the most critical threats to our economy to have emerged in generations. The rate and magnitude of breaches have rapidly increased. From script kiddies and cyber criminals to hacktivists, economic espionage, and cyber warfare, malicious activity is out of control. So of course the question of government regulation has surfaced. In the wake of major cybersecurity breaches at public companies like Sony, Google, and RSA, there is a clear strategic desire to impose policy regulating and protecting against the threat. The problem, again, is the law of unintended consequences. The SEC last week issued a new set of disclosure guidelines for cybersecurity issues, including the following statement: “Registrants should disclose the risk of cyber incidents if these issues are among the most significant factors that make an investment in the company speculative or risky.” Seriously?
I can identify at least three serious problems with the SEC’s move:
- Chief Fortune-Telling Officer: Imagine a public company CISO – we’ll call her Jane Doe – trying to determine the magnitude of the risk of a cyber incident. This will require a crystal ball, some tea leaves and maybe an I Ching to determine whether LulzSec, Anonymous, the Legion of Doom or any of the countless other potential bad actors are going to target her company and, if so, what the impact of such a hack could be on her organization. The risk can be anything from reputational assassination, as with HBGary, to loss of intellectual property, as with Night Dragon, to countless other dire ramifications. The perpetrator is just as likely to be a nation state as a criminal enterprise. One thing we can be sure of is that unless every public company makes a meaningless disclosure that it is at high risk and the impact could be devastating, the only thing this disclosure requirement will produce is added costs – both from a regulatory filing perspective and from a shareholder litigation perspective. You might say that a security audit industry could grow up around this, but that leads me to the second problem.
- Standards: What standard? An audit requires a standard against which one is auditing. Financial disclosures are made against the backdrop of accounting standards so that everyone is speaking the same language and risk is uniformly assessed from one company to the next (let’s leave the GAAP vs. IFRS debate for another day). How is CISO Jane Doe supposed to assess the risk of cyber incidents if she doesn’t have a standard against which to measure that risk?
- Hack me!: And finally, talk about giving a roadmap to the adversary. It makes no sense to list out your cybersecurity vulnerabilities. I can’t imagine that the SEC would intend for companies to enumerate their online weaknesses, so my guess is that companies will report the following in their SEC filings: “Item 1A: Risk Factors. The risks described below could materially and adversely affect our business, financial condition, and results of operations and make an investment in our Company speculative and risky. We face strong competition… General economic factors may adversely affect our financial performance… Our growth strategy requires we figure out how to expand our business… The risk of cyber incidents is high and creates operating, financial and reputational risk that is impossible to predict or quantify, because it could come from Russia, China, or Las Vegas, with intent to embarrass or steal.” The company will probably spend about $500K to generate that statement (or a more acceptable version of it), and it will have met its reporting requirement. Hordes of litigants will disagree and file claims in court, and, beyond the huge increase in legal and court costs, nothing will have changed about the company’s security posture or the general state of cybersecurity. It’s an invitation to every future Judge Fancy Pants to buy one share in public companies just to sue them for not making proper disclosure related to their cyber risk posture.
I hate to whine without offering some constructive thoughts on actually moving the needle in a positive direction. I would approach this problem by:
- Requiring disclosure. This is probably the single most important tool the government has to force industry to care about cybersecurity. When Google was hacked in 2010, everyone heard about it. What we didn’t hear about were the three dozen other companies that were similarly hacked. Disclosure of hacks, whether there is a threshold or not, is a great forcing mechanism for the industry at large to figure out how to care enough to protect their information and their networks. And unlike disclosing risk, which is a nebulous concept, disclosing actual hacks is purely a factual matter – and actually increases transparency for shareholders. Disclosure thresholds may become necessary down the road, but let’s cross that bridge when we get to it.
- Creating a Safe Harbor: Ensuring there is no disincentive to disclose is critical as we embark on a new regulatory scheme for cyber security. There must be a safe harbor if companies disclose within a reasonable number of days of discovery of a breach. Tort reform sure would have helped the Chungs and their dry cleaning business out a lot. As we embark down this process, we need to make sure that companies, both those that are hacked and those whose products are deployed to detect, stop or repair the hack, are protected from frivolous litigation as they work hand-in-hand with the government to minimize the damage of breaches. This point requires understanding and accepting the following: you cannot prevent breaches; you cannot prevent hacks; there is no such thing as a trusted network environment. We just need to figure out how to cope and recover quickly, minimizing damage, while eliminating the possibility of unnecessarily punishing companies for the inevitable truth that they are under attack.
- Developing standards. Yes, the rate of change is fast – in the technology world, and even more so in the hacking world. Hacking is polymorphic and our networks are static. Developing standards will be hard. They will not be perfect. But let’s start small and build on them wisely rather than letting the wild horses of good intentions rush us into a tangled thicket of unintended consequences. Let’s find one or two standards that everyone can agree on. Perhaps we can even target the pain points – the ISPs and the industries best positioned to monitor and stop bad things from happening – and start there.
With all that said, even as we make sure our regulatory scheme does not enable a Judge Fancy Pants-style frivolous lawsuit, but rather encourages deployment of policies and technologies aimed at detecting and remediating cyber attacks, we need to be aware that without real deterrence we will not stem the tide of attacks. Law enforcement plays a critical role here, and how we address issues of privacy will either enable or disable its efforts to bring cyber criminals to justice. So too does foreign policy, especially when it comes to cyber espionage, which lies beyond the reach of our laws. Dmitri Alperovitch, former VP of Threat Research for McAfee, stated in a Brookings Institution discussion that the only way to really stem the tide of cyber espionage attacks is through a “declaratory policy of retaliation to strategic cyber attacks that clearly and credibly define red lines that will trigger a response.” In other words, foreign policy, together with law enforcement and regulation, must be in our arsenal of weapons against cyber attacks.