“Privacy in a Networked World,” John DeLong, Director of the Commercial Solutions Center, NSA
Other than the length and volume of applause, it’s difficult to measure an audience’s attitude toward a speaker. I’ll venture, though, that the audience of Privacy in a Networked World was generally pro-Snowden; the attitude toward John DeLong can perhaps be characterized as guarded open-mindedness laced with a healthy dose of skepticism.
DeLong’s talk was both forceful and defensive; he wanted to set the record straight about certain things, but he also knew that public opinion (in that room, at least) probably wasn’t in his favor. (He said repeatedly that he did not want to have an “Oxford-style debate,” though his talk wasn’t set up as a debate in the first place.) “Let’s not confuse the recipe with the cooking,” he said, in a somewhat belabored analogy where the NSA’s work was the cooking and the law was the recipe. (I cook a lot at home, and I’ll just say, I can usually tell when I’m looking at a bad recipe, and opt to adapt it or not make it at all.)
DeLong quoted at length from Geoffrey R. Stone’s “What I Told the NSA.” (Stone was a member of the President’s Review Group in fall 2013, after the Snowden revelations.) Stone’s conclusions were not altogether positive; he found that while the NSA “did its job,” many of its programs were “highly problematic and much in need of reform.” But it was the Executive Branch, Congress, and the FISA Court that authorized those programs, and it is they who are responsible for reforming them. Stone added, “Of course, ‘I was only following orders’ is not always an excuse…. To be clear, I am not saying that citizens should trust the NSA. They should not. Distrust is essential to effective democratic governance.”
DeLong said, “The idea that the NSA’s activities were unauthorized is wrong, wrong in a magnificent way.” He emphasized that the NSA is not a law enforcement agency but an intelligence agency. He spoke in favor of people with different backgrounds and expertise – lawyers, engineers, mathematicians, privacy experts, etc. – coming together to work out solutions to problems, with respect for each other’s abilities. “Technology,” he said, “always comes back to how we as humans use it.” At present, “We do not have technologies that identify privacy risks…. Privacy engineering could be one of the most important engineering feats of our time.”
DeLong talked about rebuilding the nation’s confidence in the NSA. “Confidence is the residue of promises kept,” he said. “More information does not necessarily equal more confidence.” (Someone on Twitter pointed out that much depends on the content of the information.) The talk was a good reminder not to villainize the entire NSA; part of DeLong’s forcefulness was undoubtedly on behalf of his co-workers and staff, who he felt were unfairly maligned. And technology that could identify privacy risks, built by people with different perspectives and backgrounds, would be excellent. But do we need technology that identifies privacy risks, or do we need stronger oversight and better common sense? Mass surveillance erodes trust in government and hasn’t been terribly effective; what more do we need to know to put a stop to it?
“Privacy and Irony in Digital Health Data,” John Wilbanks, Chief Commons Officer, Sage Bionetworks
John Wilbanks gave a fast-paced, interesting talk about health data. The “irony” in the title of his talk soon became clear when he compared Facebook’s mood manipulation experiment to a study of Parkinson’s disease. Facebook’s sample size was many times larger, with a constant flow of information from “participants,” while the Parkinson’s study drew on a much smaller population who filled out a survey and answered questions by phone. “What does our society value?” Wilbanks asked. That question can be answered with another: “What do we surveil?”
Wilbanks showed a graph representing cardiovascular disease and terrorism: there is 1 death every 33 seconds from cardiovascular disease – “That’s like 9/11 every day” – and yet there’s not nearly the same kind of “surveillance” for health that there is for terrorism. Participating in a research study, Wilbanks said, is like “volunteering for surveillance,” and usually the mechanisms for tracking aren’t as comprehensive as, say, Facebook’s. Of course, privacy laws affect health research, and informed consent protects people by siloing their data; once the study is concluded, other researchers can’t use that data, and there’s no “network effect.”
Informed consent, while a good idea in theory, often produces incomprehensible documents (much like Terms of Service): written by doctors, reviewed by lawyers, and edited by committee. Furthermore, said Wilbanks, people in health care don’t usually understand issues of power and data. So, he asked, how do we run studies at internet scale and make them recombinant? How do we scale privacy alongside the ability to do research? Wilbanks demonstrated some ideas for improving on traditional informed consent, which could also allow research subjects to get a copy of their own data and see which researchers are using data from the studies in which they participated.
Obviously there are risks to individuals who share their personal health data, but there can be advantages too: more scientists having access to more data and doing more research can lead to more breakthroughs and improvements in the field of medicine.
Last year, Wilbanks talked about privacy and health data on NPR; you can listen to the segment here.
Still to come: Microsoft, Google, Pew, and a panel on “What Privacy Does Society Demand Now and How Much is New?”