No solvency --- US demand doesn’t drive global zero-day use
Bellovin et al. 14 [Steven M., professor of computer science at Columbia University, Matt Blaze, associate professor of computer science at the University of Pennsylvania, Sandy Clark, Ph.D. student in computer science at the University of Pennsylvania, Susan Landau, 2012 Guggenheim Fellow; she is now at Google, Inc., April, 2014, “Lawful Hacking: Using Existing Vulnerabilities for Wiretapping on the Internet,” Northwestern Journal of Technology and Intellectual Property, 12 Nw. J. Tech. & Intell. Prop. 1] //khirn
P165 It is interesting to ponder whether the policy of immediately reporting vulnerabilities could disrupt the zero-day industry. Some members of the industry, such as HP DVLabs, "will responsibly and promptly notify the appropriate product vendor of a security flaw with their product(s) or service(s)." n245 Others, such as VUPEN, which "reports all discovered vulnerabilities to the affected vendors under contract with VUPEN," n246 do not. Although it would be a great benefit to security if the inability to sell to law enforcement caused the sellers to actually change their course of action, U.S. law enforcement is unlikely to have a major impact on the zero-day market since it is an international market dominated by national security organizations.
Can’t solve lack of trust within the private sector --- regulatory and competitive barriers
Jaffer 15 [Jamil N., Adjunct Professor of Law and Director, Security Law Program, George Mason University Law School, Occasional Papers Series, published by the Dean Rusk Center for International Law and Policy, 4-1-2015, “Cybersecurity and National Defense: Building a Public-Private Partnership,” http://digitalcommons.law.uga.edu/cgi/viewcontent.cgi?article=1008&context=rusk_oc] //khirn
But, second, and perhaps even more important, is the lack of trust within the private sector — the inability of private industry actors to communicate with one another the threats they’re seeing. And there are a lot of reasons for that. There are regulatory reasons, there are competitive reasons, and there’s just an inherent sense of, “It’s hard for me to tell the guy next door what I’m doing.” Now, the truth is that at the systems administrator level this happens all the time. Systems administrators of major corporations all the time will call each other up and say, “Hey, I’m seeing this on my network. Are you seeing it?” And the reason that relationship works is because they trust each other. They know that the other sys admin is not going to, you know, screw them over competitively. They do worry at the corporate level, however. If general counsel were to know about this kind of conversation going on, they’d probably be tamping it down and saying, “Look, you can’t be talking to, you know, the sys admin over at our competitor because who knows if he tells his CEO what’s going to happen to us competitively.”
Vulnerabilities inevitable --- orphans
Bellovin et al 14 (Steven M. Bellovin (computer science prof at Columbia), Matt Blaze (associate prof at UPenn, Sandy Clark (Ph.D student at UPenn), & Susan Landau (Guggenheim fellow), April 2014, Lawful Hacking: Using Existing Vulnerabilities for Wiretapping on the Internet, Northwestern Journal of Technology and Intellectual Property, April, 2014, 12 Nw. J. Tech. & Intell. Prop. 1, lexis) /AMarb
To whom should a vulnerability report be made? In many cases, there is an obvious point of contact: a software vendor that sells and maintains the product in question, or, in the case of open-source software, the community team maintaining it. In other cases, however, the answer is less clear. Not all software is actively maintained; there may be “orphan” software without an active vendor or owner to report to.253 Also, not all vulnerabilities result from bugs in specific software products. For example, standard communications protocols are occasionally found to have vulnerabilities,254 and a given protocol may be used in many different products and systems. In this situation, the vulnerability would need to be reported not to a particular vendor, but to the standards body responsible for the protocol. Many standards bodies operate entirely in the open,255 however, which can make quietly reporting a vulnerability—or hiding the fact that it has been reported by a law enforcement agency—problematic. In this situation, the choice is simple: report it openly.
Can’t solve info sharing --- legal barriers
Bucci, Ph.D., Rosenzweig and Inserra 13 (Steven P., Paul, and David, April 1, 2013, A Congressional Guide: Seven Steps to U.S. Security, Prosperity, and Freedom in Cyberspace, Heritage Foundation, http://www.heritage.org/research/reports/2013/04/a-congressional-guide-seven-steps-to-us-security-prosperity-and-freedom-in-cyberspace) /AMarb
There are four steps that can be taken to enable and encourage the needed cyber information sharing. First, Congress should remove barriers to voluntary private-sector sharing. Currently, legal ambiguities impede greater collaboration and sharing of information. As a result, nearly every cybersecurity proposal in the last Congress contained provisions for clarifying these ambiguities to allow sharing. The 2011 Cyber Intelligence Sharing and Protection Act (CISPA), the Strengthening and Enhancing Cybersecurity by Using Research, Education, Information, and Technology (SECURE IT) Act of 2012, and the Cyber Security Act (CSA) of 2012 all authorized sharing by stating that “[n]otwithstanding any other provision of law” a private-sector entity can “share” or “disclose” cybersecurity threat information with others in the private sector and with the government. While sharing information is important, all of it should be voluntary, in order to encourage true cooperation. After all, any arrangement that forces a business to share information is, by definition, not cooperation but coercion. Voluntary sharing will also allow organizations with manifest privacy concerns to simply avoid sharing their information, while still receiving helpful information from the government and other organizations. Second, those entities that share information about cyber threats, vulnerabilities, and breaches should have legal protection. The fact that they shared data about an attack, or even a complete breach, with the authorities should never open them up to legal action. This is one of the biggest hindrances to sharing today, as it seems easier and safer to withhold information than to share it, even if it will benefit others. The Information Technology Industry Council (ITIC) provides several examples of how liability concerns block effective information sharing. 
Under current law, “Company A [could] voluntarily report what may be a cybersecurity incident in an information-sharing environment, such as in an ISAC (Information Sharing and Analysis Centers), or directly to the government, such as to the FBI.” The result of such sharing could be that government prosecutors, law enforcement agencies, or civil attorneys use this information as the basis for establishing a violation of civil or criminal law against Company A, or a customer, partner, or unaffiliated entity harmed by the incident sues Company A for not informing them of the incident as soon as they were aware of it. Company A’s disclosure can be seen as a “smoking gun” or “paper trail” of when Company A knew about a risk event though Company A did not yet have a legal duty to report the incident. Such allegation could lead to costly litigation or settlement regardless of its validity. With the threat of legal action, businesses have determined that they are better off not sharing information. Strong liability protection is critical to expanding information sharing. Third, the information that is shared must be exempted from FOIA requests and use by regulators. Without such protection, a competitor can get its hands on potentially proprietary information through a FOIA action. Alternatively, if information is shared with a regulator, it will dampen voluntary sharing, since organizations will fear a backlash from regulators, who could use shared information to penalize a regulated party or tighten rules. Once again, the ITIC provides a valuable example.
If a company shares information on a potential cybersecurity incident and “later finds that a database was compromised that included Individually Identifiable Health Information as defined under the Health Insurance Portability and Accountability Act (HIPAA),” then the Federal Trade Commission could use the shared information “as evidence in a case against [that company] for violating the security provisions of HIPAA.” If shared information is exempted from FOIA and regulatory use, a company can share important data without fear that its competitive advantages will be lost to other firms or used by regulators to impose more rules or costs.
NSA won’t listen to the plan --- circumvention inevitable
Gellman 13 (Barton Gellman writes for the national staff. He has contributed to three Pulitzer Prizes for The Washington Post, most recently the 2014 Pulitzer Prize for Public Service. The Washington Post: “NSA broke privacy rules thousands of times per year, audit finds.” Published August 15th, 2013. Accessed June 29th, 2015. http://www.washingtonpost.com/world/national-security/nsa-broke-privacy-rules-thousands-of-times-per-year-audit-finds/2013/08/15/3310e554-05ca-11e3-a07f-49ddc7417125_story.html) KalM
The National Security Agency has broken privacy rules or overstepped its legal authority thousands of times each year since Congress granted the agency broad new powers in 2008, according to an internal audit and other top-secret documents. Most of the infractions involve unauthorized surveillance of Americans or foreign intelligence targets in the United States, both of which are restricted by statute and executive order. They range from significant violations of law to typographical errors that resulted in unintended interception of U.S. e-mails and telephone calls. The documents, provided earlier this summer to The Washington Post by former NSA contractor Edward Snowden, include a level of detail and analysis that is not routinely shared with Congress or the special court that oversees surveillance. In one of the documents, agency personnel are instructed to remove details and substitute more generic language in reports to the Justice Department and the Office of the Director of National Intelligence. In one instance, the NSA decided that it need not report the unintended surveillance of Americans. A notable example in 2008 was the interception of a “large number” of calls placed from Washington when a programming error confused the U.S. area code 202 for 20, the international dialing code for Egypt, according to a “quality assurance” review that was not distributed to the NSA’s oversight staff. In another case, the Foreign Intelligence Surveillance Court, which has authority over some NSA operations, did not learn about a new collection method until it had been in operation for many months. The court ruled it unconstitutional.
The Obama administration has provided almost no public information about the NSA’s compliance record. In June, after promising to explain the NSA’s record in “as transparent a way as we possibly can,” Deputy Attorney General James Cole described extensive safeguards and oversight that keep the agency in check. “Every now and then, there may be a mistake,” Cole said in congressional testimony. The NSA audit obtained by The Post, dated May 2012, counted 2,776 incidents in the preceding 12 months of unauthorized collection, storage, access to or distribution of legally protected communications. Most were unintended. Many involved failures of due diligence or violations of standard operating procedure. The most serious incidents included a violation of a court order and unauthorized use of data about more than 3,000 Americans and green-card holders. In a statement in response to questions for this article, the NSA said it attempts to identify problems “at the earliest possible moment, implement mitigation measures wherever possible, and drive the numbers down.” The government was made aware of The Post’s intention to publish the documents that accompany this article online. “We’re a human-run agency operating in a complex environment with a number of different regulatory regimes, so at times we find ourselves on the wrong side of the line,” a senior NSA official said in an interview, speaking with White House permission on the condition of anonymity. “You can look at it as a percentage of our total activity that occurs each day,” he said.
“You look at a number in absolute terms that looks big, and when you look at it in relative terms, it looks a little different.” There is no reliable way to calculate from the number of recorded compliance issues how many Americans have had their communications improperly collected, stored or distributed by the NSA. The causes and severity of NSA infractions vary widely. One in 10 incidents is attributed to a typographical error in which an analyst enters an incorrect query and retrieves data about U.S. phone calls or e-mails. But the more serious lapses include unauthorized access to intercepted communications, the distribution of protected content and the use of automated systems without built-in safeguards to prevent unlawful surveillance. The May 2012 audit, intended for the agency’s top leaders, counts only incidents at the NSA’s Fort Meade headquarters and other facilities in the Washington area. Three government officials, speaking on the condition of anonymity to discuss classified matters, said the number would be substantially higher if it included other NSA operating units and regional collection centers. Senate Intelligence Committee Chairman Dianne Feinstein (D-Calif.), who did not receive a copy of the 2012 audit until The Post asked her staff about it, said in a statement late Thursday that the committee “can and should do more to independently verify that NSA’s operations are appropriate, and its reports of compliance incidents are accurate.” Despite the quadrupling of the NSA’s oversight staff after a series of significant violations in 2009, the rate of infractions increased throughout 2011 and early 2012. An NSA spokesman declined to disclose whether the trend has continued since last year. One major problem is largely unpreventable, the audit says, because current operations rely on technology that cannot quickly determine whether a foreign mobile phone has entered the United States.
In what appears to be one of the most serious violations, the NSA diverted large volumes of international data passing through fiber-optic cables in the United States into a repository where the material could be stored temporarily for processing and selection. The operation to obtain what the agency called “multiple communications transactions” collected and commingled U.S. and foreign e-mails, according to an article in SSO News, a top-secret internal newsletter of the NSA’s Special Source Operations unit. NSA lawyers told the court that the agency could not practicably filter out the communications of Americans. In October 2011, months after the program got underway, the Foreign Intelligence Surveillance Court ruled that the collection effort was unconstitutional. The court said that the methods used were “deficient on statutory and constitutional grounds,” according to a top-secret summary of the opinion, and it ordered the NSA to comply with standard privacy protections or stop the program.
The plan doesn’t solve basic NSA surveillance --- that makes corporate trust impossible
Kehl 14 (July 2014, Danielle Kehl is a senior policy analyst at New America's Open Technology Institute, where she researches and writes about technology policy, “Surveillance Costs: The NSA’s Impact on the Economy, Internet Freedom & Cybersecurity,” https://www.newamerica.org/downloads/Surveilance_Costs_Final.pdf)
Certainly, the actions of the NSA have created a serious trust and credibility problem for the United States and its Internet industry. “All of this denying and lying results in us not trusting anything the NSA says, anything the president says about the NSA, or anything companies say about their involvement with the NSA,” wrote security expert Bruce Schneier in September 2013.225 However, beyond undermining faith in American government and business, a variety of the NSA’s efforts have undermined trust in the security of the Internet itself. When Internet users transmit or store their information using the Internet, they believe—at least to a certain degree—that the information will be protected from unwanted third-party access. Indeed, the continued growth of the Internet as both an economic engine and as an avenue for private communication and free expression relies on that trust. Yet, as the scope of the NSA’s surveillance dragnet and its negative impact on cybersecurity comes into greater focus, that trust in the Internet is eroding.226 Trust is essential for a healthy functioning society. As economist Joseph Stiglitz explains, “Trust is what makes contracts, plans and everyday transactions possible; it facilitates the democratic process, from voting to law creation, and is necessary for social stability.”227 Individuals rely on online systems and services for a growing number of sensitive activities, including online banking and social services, and they must be able to trust that the data they are transmitting is safe. In particular, trust and authentication are essential components of the protocols and standards engineers develop to create a safer and more secure Internet, including encryption.228 The NSA’s work to undermine the tools and standards that help ensure cybersecurity—especially its work to thwart encryption—also undermines trust in the safety of the overall network.
Moreover, it reduces trust in the United States itself, which many now perceive as a nation that exploits vulnerabilities in the interest of its own security.220 This loss of trust can have a chilling effect on the behavior of Internet users worldwide.230 Unfortunately, as we detail below, the growing loss of trust in the security of Internet as a result of the latest disclosures is largely warranted. Based on the news stories of the past year, it appears that the Internet is far less secure than people thought—a direct result of the NSA’s actions. These actions can be traced to a core contradiction in NSA’s two key missions: information assurance—protecting America’s and Americans’ sensitive data—and signals intelligence—spying on telephone and electronic communications for foreign intelligence purposes.
2nc alt causes
Can’t solve corporate trust – NSA does a lot of pretty evil things
Sasso 14 [Brendan, technology correspondent for National Journal, previously covered technology policy issues for The Hill and was a researcher and contributing writer for the 2012 edition of the Almanac of American Politics, “The NSA Isn't Just Spying on Us, It's Also Undermining Internet Security,” National Journal, April 29, 2014, http://www.nationaljournal.com/daily/the-nsa-isn-t-just-spying-on-us-it-s-also-undermining-internet-security-20140429] //khirn
He said that company officials have historically discussed cybersecurity issues with the NSA, but that he wouldn’t be surprised if those relationships are now strained. He pointed to news that the NSA posed as Facebook to infect computers with malware. “That does a lot of harm to companies’ brands,” Soltani said. The NSA’s actions have also made it difficult for the U.S. to set international norms for cyberconflict. For several years, the U.S. has tried to pressure China to scale back its cyberspying operations, which allegedly steal trade secrets from U.S. businesses. Jason Healey, the director of the Cyber Statecraft Initiative at the Atlantic Council, said the U.S. has “militarized cyber policy.” “The United States has been saying that the world needs to operate according to certain norms,” he said. “It is difficult to get the norms that we want because it appears to the rest of the world that we only want to follow the norms that we think are important.” Vines, the NSA spokeswoman, emphasized that the NSA would never hack into foreign networks to give domestic companies a competitive edge (as China is accused of doing). “We do not use foreign intelligence capabilities to steal the trade secrets of foreign companies on behalf of—or give intelligence we collect to—U.S. companies to enhance their international competitiveness or increase their bottom line,” she said. Jim Lewis, a senior fellow with the Center for Strategic and International Studies, agreed that NSA spying to stop terrorist attacks is fundamentally different from China stealing business secrets to boost its own economy. He also said there is widespread misunderstanding of how the NSA works, but he acknowledged that there is a “trust problem—justified or not.” He predicted that rebuilding trust with the tech community will be one of the top challenges for Mike Rogers, who was sworn in as the new NSA director earlier this month. 
“All the tech companies are in varying degrees unhappy and not eager to have a close relationship with NSA,” Lewis said.
Villasenor 14 (John Villasenor; Professor, UCLA; Nonresident senior fellow at the Brookings Institution; National Fellow at the Hoover Institution. manuscript of an article to be published in the American Intellectual Property Law Association Quarterly Journal, 2015: “Corporate Cybersecurity Realism: Managing Trade Secrets in a World Where Breaches Occur” published August 28, 2014. Accessed June 24, 2015. http://poseidon01.ssrn.com/delivery.php?ID=347005106102011003080125018116007000009034067081071060081068017000117077089066011073126035037037025005058020000072094121097017060073073001035007006103107126028000127081002001029090093119117091094066083082080081069023080104113079101072079088008078064&EXT=pdf&TYPE=2) KalM
It would be an understatement to call trade secret cybersecurity a complex challenge. Trade secrets stored on company networks are ripe targets for cyberintruders who have continuing access to new vulnerabilities, including via a robust global market for zero day exploits. When a company can have hundreds or thousands of laptop computers, servers, tablets, and smartphones; all of the associated software; and employees with varying degrees of security awareness, how can security of economically valuable confidential information be assured? The answer, unsurprisingly, is that it can’t. As a result, the “every company has been hacked” theme has become a popular refrain in discussions about cybersecurity. In 2011 Dimitri Alperovitch, who was then with McAfee and went on to found cybersecurity company CrowdStrike, wrote, “I am convinced that every company in every conceivable industry with significant size and valuable intellectual property and trade secrets has been compromised (or will be shortly), with the great majority of the victims rarely discovering the intrusion or its impact.”2 In a speech at the 2012 RSA conference, then-FBI Director Robert S. Mueller, III said “I am convinced that there are only two types of companies: those that have been hacked and those that will be. And even they are converging into one category: companies that have been hacked and will be hacked again.”3 So what should companies do? First and most obviously companies need to take all reasonable steps to minimize the ability of cyber-intruders to get into their systems and make off with their trade secrets. There is a multibillion-dollar industry of products and services available to help plug security holes, and many companies have made cybersecurity a top priority. But there is no such thing as perfect cybersecurity. Sometimes, despite all efforts to the contrary, skilled attackers intent on obtaining trade secrets will find their way into company systems.
This inevitability leads to a second aspect of the corporate cybersecurity challenge that is not generally appreciated: Companies need to manage their intellectual property in light of the affirmative knowledge that their computer systems will sometimes be breached.
Bugs will always occur and be hard to find – no aff solvency
Bellovin et al 14 (Steven M. Bellovin (computer science prof at Columbia), Matt Blaze (associate prof at UPenn, Sandy Clark (Ph.D student at UPenn), & Susan Landau (Guggenheim fellow), April 2014, Lawful Hacking: Using Existing Vulnerabilities for Wiretapping on the Internet, Northwestern Journal of Technology and Intellectual Property, April, 2014, 12 Nw. J. Tech. & Intell. Prop. 1, lexis) /AMarb**We don’t endorse ableist language
P67 We are suggesting use of pre-existing vulnerabilities for lawful access to communications. To understand why this is plausible, it is important to know a fundamental tenet of software engineering: bugs happen. In his classic The Mythical Man-Month, Frederick Brooks explained why: First, one must perform perfectly. The computer resembles the magic of legend in this respect, too. If one character, one pause, of the incantation is not strictly in proper form, the magic doesn't work. Human beings are not accustomed to being perfect, and few areas of human activity demand it. Adjusting to the requirement for perfection is, I think, the most difficult part of learning to program. n114 P68 Because computers, of course, are dumb--they do exactly what they are told to do-- programming has to be absolutely precise and correct. If a computer is told to do something stupid, it does it, while a human being would notice there is a problem. A person told to walk 50 meters then turn left would realize that there was an obstacle present, and prefer the path 52 meters down rather than walking into a tree trunk. A computer would not, unless it had been specifically programmed to check for an impediment in its path. If it has not been programmed that way--if there is virtually any imperfection in code--a bug will result. The circumstances which might cause that bug to become apparent may be rare, but it would nonetheless be a bug. n115 If this bug should happen to be in a security-critical section of code, the result may be a vulnerability. P69 A National Research Council study described the situation this way: [*28] [A]n overwhelming majority of security vulnerabilities are caused by "buggy" code. At least a third of the Computer Emergency Response Team (CERT) advisories since 1997, for example, concern inadequately checked input leading to character string overflows (a problem peculiar to C programming language handling of character strings). 
Moreover, less than 15 percent of all CERT advisories described problems that could have been fixed or avoided by proper use of cryptography. n116 P70 It would seem that bugs should be easy to eliminate: test the program and fix any problems that show up. Alas, bugs can be fiendishly hard to find, and complex programs simply have too many possible branches or execution paths to be able to test them all. n117
Cybersecurity preparation impossible --- threats develop too rapidly
OpenDNS, no date (“Rethinking Cyber Security,” OpenDNS is a security company operating out of San Francisco, http://www.gridcybersec.com/cybersecurity-research/prevention-is-no-match-for-persistence)
Today, most IT security is based on prevention – an attempt to create counter measures against previously identified tactics and threats. In theory, understanding how hackers attack us helps us prepare our best defenses against them. But in practice, we can never build our virtual walls high or strong enough to serve as sufficient barricades. For starters, old tactics evolve and new tactics emerge at a rate impossible for security professionals to match. Spear phishing targets our most vulnerable employees and watering holes attract the unwary. Our best “sandbox” malware analyses can miss some of the latest suspect behaviors. It’s impossible to predict when and where the technologies we rely upon, such as Flash or Java, will suffer the exploitation of a previously undetected (a.k.a. zero-day) vulnerability. Worse, practice makes perfect. The key part of any advanced persistent threat (APT) is the persistence; even relatively basic, “off the shelf” malware can become powerful when it is applied repeatedly across a wide attack surface. As our digital borders, via private and public cloud services and mobile users and devices, expand they become more porous and our digital line in the sand becomes too big to defend. For enterprises or organizations at any scale, prevention alone can never be a sufficient defense: our security professionals must be right and fast all the time, but cyberattackers just need to be effective once, over any time period.
Cyber security won’t happen – the internet is too large a beast to conquer
Zimmer 04 (Michael T. Zimmer is a doctoral student in Media Ecology in the Department of Culture and Communication at New York University, 1 March 2004, “The tensions of securing cyberspace: the Internet, state power & the National Strategy to Secure Cyberspace,” http://firstmonday.org/ojs/index.php/fm/article/view/1125/1045)
The rise of information technologies, including the Internet, impacts the way governance is organized and power is exercised in our society. As Castells notes, "Networks constitute the new social morphology of our societies, and the diffusion of networking logic substantially modifies the operation and outcomes in processes of production, experience, power and culture." This poses immense constraints on any government’s attempt to secure cyberspace. While the structural tensions noted above seem clear, more abstract constraints to State power lurk just below the surface, exposing deep substantive tensions. These include challenges to the hierarchical structures of the nation–state, the blurring of territorial boundaries, and general resistance to power in a society increasingly focused on control. Information technology networks contribute to the departure from traditional hierarchical authoritative contexts privileging nation–states. As Arquilla and Ronfeldt explain, the rise of global information networks sets in motion forces that challenge the hierarchical design of many institutions: "It disrupts and erodes the hierarchies around which institutions are normally designed. It diffuses and redistributes power, often to the benefit of what may be considered weaker, smaller actors. It crosses borders, and redraws the boundaries of offices and responsibilities. It expands the spatial and temporal horizons that actors should take into account. And thus, it generally compels closed systems to open up." As a consequence of the Internet’s capacity for anarchic global communication, new global institutions are being formed that are preponderantly sustained by network rather than hierarchical structures — examples include peer–based networks such as Slashdot.org, or even the IETF itself.
Such global, interconnected networks help to flatten hierarchies, often transforming them altogether, into new types of spaces where traditional sovereign territoriality itself faces extinction.
2nc status quo solves
Project Zero solves the aff – companies are eliminating bugs
Sanger and Perlroth 15 – New York Times Reporters (David and Nicole, Feb 12, 2015, New York Times, Obama Heads to Tech Security Talks Amid Tensions, http://www.nytimes.com/2015/02/13/business/obama-heads-to-security-talks-amid-tensions.html?_r=0) /AMarb
PALO ALTO, Calif. — President Obama will meet here on Friday with the nation’s top technologists on a host of cybersecurity issues and the threats posed by increasingly sophisticated hackers. But nowhere on the agenda is the real issue for the chief executives and tech company officials who will gather on the Stanford campus: the deepening estrangement between Silicon Valley and the government. The long history of quiet cooperation between Washington and America’s top technology companies — first to win the Cold War, then to combat terrorism — was founded on the assumption of mutual interest. Edward J. Snowden’s revelations shattered that. Now, the Obama administration’s efforts to prevent companies from greatly strengthening encryption in commercial products like Apple’s iPhone and Google’s Android phones has set off a new battle, as the companies resist government efforts to make sure police and intelligence agencies can crack the systems. And there is continuing tension over the government’s desire to stockpile flaws in software — known as zero days — to develop weapons that the United States can reserve for future use against adversaries. “What has struck me is the enormous degree of hostility between Silicon Valley and the government,” said Herb Lin, who spent 20 years working on cyberissues at the National Academy of Sciences before moving to Stanford several months ago. “The relationship has been poisoned, and it’s not going to recover anytime soon.” Mr. Obama’s cybersecurity coordinator, Michael Daniel, concedes there are tensions. American firms, he says, are increasingly concerned about international competitiveness, and that means making a very public show of their efforts to defeat American intelligence-gathering by installing newer, harder-to-break encryption systems and demonstrating their distance from the United States government.
The F.B.I., the intelligence agencies and David Cameron, the British prime minister, have all tried to stop Google, Apple and other companies from using encryption technology that the firms themselves cannot break into — meaning they cannot turn over emails or pictures, even if served with a court order. The firms have vociferously opposed government requests for such information as an intrusion on the privacy of their customers and a risk to their businesses. “In some cases that is driving them to resistance to Washington,” Mr. Daniel said in an interview. “But it’s not that simple. In other cases, with what’s going on in China,” where Beijing is insisting that companies turn over the software that is their lifeblood, “they are very interested in getting Washington’s help.” Mr. Daniel’s reference was to Silicon Valley’s argument that keeping a key to unlocking terrorists’ secret communications, as the government wants them to do, may sound reasonable in theory, but in fact would create an opening for others. It would also create a precedent that the Chinese, among others, could adopt to ensure they can get into American communications, especially as companies like Alibaba, the Chinese Internet giant, become a larger force in the American market. “A stupid approach,” is the assessment of one technology executive who will be seeing Mr. Obama on Friday, and who asked to speak anonymously. That tension — between companies’ insistence that they cannot install “back doors” or provide “keys” giving access to law enforcement or intelligence agencies and their desire for Washington’s protection from foreign nations seeking to exploit those same products — will be the subtext of the meeting. That is hardly the only point of contention. A year after Mr. 
Obama announced that the government would get out of the business of maintaining a huge database of every call made inside the United States, but would instead ask the nation’s telecommunications companies to store that data in case the government needs it, the companies are slow-walking the effort. They will not take on the job of “bulk collection” of the nation’s communications, they say, unless Congress forces them to. And some executives whisper it will be at a price that may make the National Security Agency’s once-secret program look like a bargain. The stated purpose of Friday’s meeting is trying to prevent the kinds of hackings that have struck millions of credit card holders at Home Depot and Target. A similar breach revealed the names, Social Security numbers and other information of about 80 million people insured by Anthem, the nation’s second-largest health insurer. Mr. Obama has made online security a major theme, making the case in his State of the Union address that the huge increase in attacks during his presidency called for far greater protection. Lisa Monaco, Mr. Obama’s homeland security adviser, said this week that attacks have increased fivefold since the president came to office; some, like the Sony Pictures attack, had a clear political agenda. The image of Kim Jong-un, the North Korean leader, shown in the Sony Pictures comedy “The Interview” has been emblazoned in the minds of those who downloaded the film. But the one fixed in the minds of many Silicon Valley executives is the image revealed in photographs and documents released from the Snowden trove of N.S.A. employees slicing open a box containing a Cisco Systems server and placing “beacons” in it that could tap into a foreign computer network. Or the reports of how the N.S.A. intercepted email traffic moving between Google and Yahoo servers.
“The government is realizing they can’t just blow into town and let bygones be bygones,” Eric Grosse, Google’s vice president of security and privacy, said in an interview. “Our business depends on trust. If you lose it, it takes years to regain.” When it comes to matters of security, Mr. Grosse said, “Their mission is clearly different than ours. It’s a source of continuing tension. It’s not like if they just wait, it will go away.” And while Silicon Valley executives have made a very public argument over encryption, they have been fuming quietly over the government’s use of zero-day flaws. Intelligence agencies are intent on finding or buying information about those flaws in widely used hardware and software, and information about the flaws often sells for hundreds of thousands of dollars on the black market. N.S.A. keeps a potent stockpile, without revealing the flaws to manufacturers. Companies like Google, Facebook, Microsoft and Twitter are fighting back by paying “bug bounties” to friendly hackers who alert them to serious bugs in their systems so they can be fixed. And last July, Google took the effort to another level. That month, Mr. Grosse began recruiting some of the world’s best bug hunters to track down and neuter the very bugs that intelligence agencies and military contractors have been paying top dollar for to add to their arsenals. They called the effort “Project Zero,” Mr. Grosse says, because the ultimate goal is to bring the number of bugs down to zero. He said that “Project Zero” would never get the number of bugs down to zero “but we’re going to get close.” The White House is expected to make a series of decisions on encryption in the coming weeks. Silicon Valley executives say encrypting their products has long been a priority, even before the revelations by Mr. Snowden, the former N.S.A. analyst, about N.S.A.’s surveillance, and they have no plans to slow down. In an interview last month, Timothy D.
Cook, Apple’s chief executive, said the N.S.A. “would have to cart us out in a box” before the company would provide the government a back door to its products. Apple recently began encrypting phones and tablets using a scheme that would force the government to go directly to the user for their information. And intelligence agencies are bracing for another wave of encryption.