Privacy in a Networked World, II

This is the second post about Privacy in a Networked World. The first post, about the conversation between Bruce Schneier and Edward Snowden, is here.

“Privacy in a Networked World,” John DeLong, Director of the Commercial Solutions Center, NSA

Other than the length and volume of applause, it’s difficult to measure an audience’s attitude toward a speaker. I’ll venture, though, that the audience of Privacy in a Networked World was generally pro-Snowden; the attitude toward John DeLong can perhaps be characterized as guarded open-mindedness laced with a healthy dose of skepticism.

DeLong’s talk was both forceful and defensive; he wanted to set the record straight about certain things, but he also knew that public opinion (in that room, at least) probably wasn’t in his favor. (He said repeatedly that he did not want to have an “Oxford-style debate,” though his talk wasn’t set up as a debate in the first place.) “Let’s not confuse the recipe with the cooking,” he said, in a somewhat belabored analogy where the NSA’s work was the cooking and the law was the recipe. (I cook a lot at home, and I’ll just say, I can usually tell when I’m looking at a bad recipe, and opt to adapt it or not make it at all.)

DeLong quoted at length from Geoffrey R. Stone’s “What I Told the NSA.” (Stone was a member of the President’s Review Group in fall 2013, after the Snowden revelations.) Stone’s conclusions were not altogether positive; he found that while the NSA “did its job,” many of its programs were “highly problematic and much in need of reform.” But it’s the Executive Branch, Congress, and the FISA Court that authorized those programs and are responsible for reforming them. Stone added, “Of course, ‘I was only following orders’ is not always an excuse….To be clear, I am not saying that citizens should trust the NSA. They should not. Distrust is essential to effective democratic governance.”

DeLong said, “The idea that the NSA’s activities were unauthorized is wrong, wrong in a magnificent way.” He emphasized that the NSA is not a law enforcement agency, it’s an intelligence agency. He spoke in favor of people with different backgrounds and expertise – lawyers, engineers, mathematicians, privacy experts, etc. – coming together to work out solutions to problems, with respect for each other’s abilities. “Technology,” he said, “always comes back to how we as humans use it.” At present, “We do not have technologies that identify privacy risks….Privacy engineering could be one of the most important engineering feats of our time.”

DeLong talked about rebuilding the nation’s confidence in the NSA. “Confidence is the residue of promises kept,” he said. “More information does not necessarily equal more confidence.” (Someone on Twitter pointed out that much depends on the content of the information.) The talk was a good reminder not to villainize the entire NSA; part of DeLong’s forcefulness was undoubtedly on behalf of his co-workers and staff whom he felt were unfairly maligned. And technology that could identify privacy risks, built by people who have different perspectives and backgrounds, would be excellent. But do we need technology that identifies privacy risks, or do we need stronger oversight and better common sense? Mass surveillance erodes trust in government and hasn’t been terribly effective; what more do we need to know to put a stop to it?

“Privacy and Irony in Digital Health Data,” John Wilbanks, Chief Commons Officer, Sage Bionetworks

John Wilbanks gave a fast-paced, interesting talk about health data. The “irony” in the title of his talk soon became clear when he gave the example of Facebook’s mood manipulation experiment compared to a study of Parkinson’s disease. The sample size for Facebook was many times larger, with a constant flow of information from “participants,” as opposed to a much smaller sample population who filled out a survey and answered questions by phone. “What does our society value?” Wilbanks asked. This question can be answered by another question: “What do we surveil?”

Wilbanks showed a graph representing cardiovascular disease and terrorism: there is 1 death every 33 seconds from cardiovascular disease – “That’s like 9/11 every day” – and yet there’s not nearly the same kind of “surveillance” for health that there is for terrorism. Participating in a research study, Wilbanks said, is like “volunteering for surveillance,” and usually the mechanisms for tracking aren’t as comprehensive as, say, Facebook’s. Of course, privacy laws affect health research, and informed consent protects people by siloing their data; once the study is concluded, other researchers can’t use that data, and there’s no “network effect.”

Informed consent, while a good idea in theory, often leads to incomprehensible documents (much like Terms of Service). These documents are written by doctors, reviewed by lawyers, and edited by committee. Furthermore, said Wilbanks, people in health care don’t usually understand issues of power and data. So, he asked, how do we run studies at internet scale and make them recombinant? How do we scale privacy alongside the ability to do research? Wilbanks demonstrated some ideas to improve on traditional informed consent, which could also allow research subjects to get a copy of their own data and see which researchers are using data from the research in which they participated.

Obviously there are risks to individuals who share their personal health data, but there can be advantages too: more scientists having access to more data and doing more research can lead to more breakthroughs and improvements in the field of medicine.

Last year, Wilbanks talked about privacy and health data on NPR; you can listen to the segment here.

Still to come: Microsoft, Google, Pew, and a panel on “What Privacy Does Society Demand Now and How Much is New?” 

Privacy in a Networked World

This is the first post about Privacy in a Networked World, the Fourth Annual Symposium on the Future of Computation in Science and Engineering, at Harvard on Friday, January 23.

A Conversation between Bruce Schneier and Edward Snowden (video chat)

Bruce Schneier is a fellow at the Berkman Center for Internet & Society, and the author of Data and Goliath. Edward Snowden was a sysadmin at the NSA who revealed the extent of the government’s mass surveillance. The conversation was recorded (no joke) and is available on YouTube.

I have to say it was an incredibly cool feeling when Snowden popped up on the giant screen and was there in the room with us. There was sustained applause when he first appeared and also at the end of the conversation, when he was waving goodbye. Schneier started by asking Snowden about cryptography: What can and can’t be done? Snowden replied, “Encryption…is one of the few things that we can rely on.” When implemented properly, “encryption does work.” Poor cryptography, either through bad implementation or a weak algorithm, means weak security. End points are also weak spots, even if the data in transit is protected; it’s easier for an attacker to get around crypto than to break it.
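To make Snowden’s distinction concrete: “implemented properly” usually means a vetted library and an authenticated mode, not a home-rolled cipher. Here is a minimal sketch (my illustration, not anything shown in the conversation) using Python’s third-party cryptography package:

```python
# Authenticated symmetric encryption with a vetted library (the third-party
# "cryptography" package) rather than a home-rolled cipher.
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # fresh random key, urlsafe base64-encoded
f = Fernet(key)

token = f.encrypt(b"meet at the usual place")   # AES-CBC plus HMAC-SHA256
print(f.decrypt(token))                         # b'meet at the usual place'

# A tampered token raises InvalidToken rather than decrypting to garbage;
# integrity checking is part of what "implemented properly" means.
```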

Snowden pointed out a shift in the NSA’s focus over the last ten years from defense to offense. He encouraged us to ask Why? Is this proper? Appropriate? Does it benefit or serve the public?

The explosion in “passive” mass surveillance (collecting everything in case it’s needed later) is partly because it’s easy, cheap, and simple. If more data is encrypted, it becomes harder to sweep up, and hackers (including the NSA) who use more “active” techniques run a higher risk of exposure. This “hunger for risk has greatly increased” during the War on Terror era. Their targets are “crazy, unjustified….If they were truly risk averse they wouldn’t be doing this…it’s unlawful.”

Snowden said that the NSA “is completely free from any meaningful judicial oversight…in this environment, a culture of impunity develops.” Schneier said there were two kinds of oversight: tactical oversight within the organization (“did we follow the rules?”) and oversight from outside of the organization (“are these the right rules?”). He asked, “What is moral in our society?”

Snowden asked whether the potential intelligence gained is worth the potential cost. He stated that reducing trust in the American infrastructure is a costly move; the information sector is crucial to our economy. The decrease in trust, he said, has already cost us more than the NSA’s budget. “They are not representing our interests.”

Schneier, using his NSA voice, said, “Corporations are spying on the whole internet, let’s get ourselves a copy!” (This was much re-tweeted.) “Personal information,” he said, “is the currency by which we buy our internet.” (Remember, if you can’t tell what the product is, you’re the product.) It’s “always amusing,” he said, when Google complains about the government spying on their users, because “it’s our job to spy on our users!” However, Schneier thinks that the attitudes of tech companies and standards bodies are changing.

These silos of information were too rich and interesting for governments to ignore, said Snowden, and there was no cost to scooping up the data because until 2013, “people didn’t realize how badly they were being sold up the river.” Schneier said that research into privacy-preserving technologies might increase now that there is more interest. Can we build a more privacy-preserving network, with less metadata?

“We’ve seen that the arguments for mass surveillance” haven’t really held up; there is little evidence that it has stopped many terrorist attacks. Schneier cited an article from the January 26, 2015 edition of The New Yorker, “The Whole Haystack,” in which author Mattathias Schwartz lists several recent terrorist attacks, and concludes, “In each of these cases, the authorities were not wanting for data. What they failed to do was appreciate the significance of the data they already had.”

Unlike during the Cold War, now “we all use the same stuff”: we can’t attack their networks and defend our networks, because it’s all the same thing. Schneier said, “Every time we hoard a zero-day opportunity [knowing about a security flaw], we’re leaving ourselves vulnerable to attack.”


Snowden was a tough act to follow, especially for John DeLong, Director of the Commercial Solutions Center for the NSA, but that’s exactly who spoke next. Stay tuned.


Introduction to Cyber Security

This fall, I enrolled in, and completed, my first MOOC (massive open online course), Introduction to Cyber Security, offered by the Open University (UK) through its FutureLearn program. I found out about the course almost simultaneously through Cory Doctorow at BoingBoing and the Radical Reference listserv (thanks, Kevin).

Screen shot from course "trailer," featuring Cory Doctorow

Screen shot from course “trailer,” featuring Cory Doctorow

The free eight-week course started on October 15 and ended on December 5. Each week started with a short video, featuring course guide Cory Doctorow, and the rest of the week’s course materials included short articles and videos. Transcripts of the videos were made available, and other materials were available to download in PDF. Each step of each week included a discussion area, but only some of the steps included specific prompts or assignments to research and comment; facilitators from OU moderated the discussions and occasionally answered questions. Each week ended with a quiz; students had three tries to get each answer, earning successively fewer points for each try.

Week 1: Threat Landscape: Learn basic techniques for protecting your computers and your online information.
Week 2: Authentication and passwords
Week 3: Malware basics
Week 4: Networking and Communications: How does the Internet work?
Week 5: Cryptography basics
Week 6: Network security and firewalls
Week 7: “When your defenses fail”: What to do when things go wrong
Week 8: Managing and analyzing security risks

The FutureLearn website was incredibly easy to use, with a clean and intuitive design, and each week of the course was broken down into little bite-size chunks so it was easy to do a little bit at a time, or plow through a whole week in one or two sessions. I tended to do most of the work on Thursdays and Fridays, so there were plenty of comments in the discussions by the time I got there.

Anyone can still take the course, so I won’t go too in-depth here, but the following are some tips, facts, and resources I found valuable or noteworthy during the course:

  • Identify your information assets: these include school, work, and personal documents; photos; social media account information and content; e-mail; and more, basically anything you store locally on your computer or in the cloud. What is the value (high/medium/low) of this information to you? What are the relevant threats?
  • Passwords are how we identify ourselves (authentication). Passwords should be memorable, long, and unique (don’t use the same password for different sites or accounts). Password managers such as LastPass or KeePass can help, though using one means placing a lot of trust in it. A password manager should require a master password, lock itself when inactive, be encrypted, and use 2-factor authentication. (For a sense of the encryption piece, see the key-derivation sketch after this list.)
  • Use 2-factor authentication whenever it is available.
  • 85% of all e-mail sent in 2011 was spam.
  • Anti-virus software uses two techniques: signatures (distinctive patterns of data) and heuristics (rules based on previous knowledge about known viruses).
  • The Sophos “Threatsaurus” provides an “A-Z of computer and data security threats” in plain English.
  • The Internet is “a network of networks.” Protocols (e.g. TCP/IP) are conventions for communication between computers. All computers understand the same protocols, even in different networks.
  • Wireless networks are exposed to risks to Confidentiality, Integrity, and Availability (CIA); thus, encryption is necessary. The best option currently is Wi-Fi Protected Access II (WPA2).
  • The Domain Name System (DNS) translates domain names to IP addresses.
  • Any data that can be represented in binary format can be encrypted by a computer.
  • Symmetric encryption: 2 copies of 1 shared key. But how to transmit the shared key safely? Asymmetric encryption (a.k.a. public key cryptography) uses a key pair, and the Diffie-Hellman key exchange lets two parties agree on a shared key over an insecure channel. (The video explaining this was very helpful; there is also a toy example after this list.)
  • Pretty Good Privacy (PGP) is a collection of crypto techniques. In the course, we sent and received encrypted e-mail with Mailvelope.
  • Transport Layer Security (TLS) has replaced Secure Sockets Layer (SSL) as the standard crypto protocol to provide communication security over the Internet.
  • Firewalls block dangerous information/communications from spreading across networks. A personal firewall protects the computer it’s installed on.
  • Virtual Private Networks (VPNs) allow a secure connection across an untrusted network. VPNs use hashes, digital signatures, and message authentication codes (MACs).
  • Data loss is often due to “insider attacks”; these make up 36-37% of information security breaches.
  • Data is the representation of information (meaning).
  • The eight principles of the Data Protection Act (UK). Much of the information about legislation in Week 7 was specific to the UK, including the Computer Misuse Act (1990), the Regulation of Investigatory Powers Act (2000), and the Fraud Act (2006).
  • File permissions may be set to read (allows copying), write (allows editing), and execute (allows running a program).
  • Use a likelihood-impact matrix to analyze risk: protect high-impact, high-likelihood data like e-mail, passwords, and online banking data first. (A small worked example follows this list.)
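
On the password-manager point above: the vault is typically protected by a key derived from the master password with a deliberately slow key-derivation function. Here is a minimal sketch using only Python’s standard library; the function choice, iteration count, and salt handling are my illustrative assumptions, not any particular product’s design.

```python
# Sketch: deriving a vault encryption key from a master password, roughly as
# a password manager might. Parameters are illustrative assumptions only.
import hashlib
import hmac
import os

def derive_vault_key(master_password: str, salt: bytes,
                     iterations: int = 600_000) -> bytes:
    # PBKDF2 is deliberately slow, making brute-force guessing expensive.
    return hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt,
                               iterations)

salt = os.urandom(16)          # random per-vault salt, stored with the vault
key = derive_vault_key("correct horse battery staple", salt)

# Unlocking later means re-deriving and comparing in constant time:
candidate = derive_vault_key("correct horse battery staple", salt)
print(hmac.compare_digest(key, candidate))  # True
```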
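And on the Diffie-Hellman bullet: the exchange can be demonstrated with toy numbers. The prime below is far too small for real security (deployed systems use 2048-bit-plus primes or elliptic curves), but the arithmetic is the same.

```python
# Toy Diffie-Hellman key exchange. The prime is tiny so the math is visible;
# real systems use much larger primes or elliptic curves.
import secrets

p = 4294967291   # a small prime (the largest below 2**32); illustrative only
g = 5            # public base

a = secrets.randbelow(p - 2) + 2   # Alice's private value
b = secrets.randbelow(p - 2) + 2   # Bob's private value

A = pow(g, a, p)   # Alice sends A over the insecure channel
B = pow(g, b, p)   # Bob sends B over the insecure channel

# Each side combines its own secret with the other's public value;
# an eavesdropper sees only p, g, A, and B.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob   # the same key, never transmitted
```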
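Finally, the likelihood-impact matrix from the last bullet is easy to sketch in code. The assets and ratings below are made up for illustration.

```python
# Sketch of a likelihood-impact risk matrix. Assets and ratings are made up.
LEVELS = {"low": 1, "medium": 2, "high": 3}

assets = {
    # asset: (likelihood of compromise, impact if compromised)
    "e-mail account":   ("high", "high"),
    "online banking":   ("medium", "high"),
    "blog password":    ("medium", "low"),
    "music collection": ("low", "low"),
}

def risk(ratings):
    likelihood, impact = ratings
    return LEVELS[likelihood] * LEVELS[impact]

# Highest-risk assets first: protect these before anything else.
for name, ratings in sorted(assets.items(), key=lambda kv: risk(kv[1]),
                            reverse=True):
    likelihood, impact = ratings
    print(f"{name:16} likelihood={likelihood:7} impact={impact:7} "
          f"risk={risk(ratings)}")
```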

Now that I’ve gained an increased awareness of cyber security, what’s changed? Partly due to this course and partly thanks to earlier articles, conference sessions, and workshops, here are the tools I use now:

See also this excellent list of privacy tools from the Watertown Free Library. Privacy/security is one of those topics you can’t just learn about once and be done; it’s a constant effort to keep up. But as more and more of our data becomes electronic, it’s essential that we keep tabs on threats and do our best to protect our online privacy.

Why Your Library’s Privacy Policy Matters

Today’s ALA/Booklist webinar, Why Your Library’s Privacy Policy Matters, was led by Cherie L. Givens, author of Information Privacy Fundamentals for Librarians and Information Professionals. The webinar seemed almost like a commercial for the book, because Givens spoke only generally, pointing listeners to the book for further detail. In fairness, it would be difficult to cover the topic of library privacy policies in depth in an hour, but I was still hoping for something slightly more concrete and practical. Nevertheless, here are the points she covered:

  • When drawing up a library privacy policy, make sure you are aware of relevant federal* and state legislation. State legislation (e.g. California) may be stricter than federal legislation.

*Particularly the Children’s Online Privacy Protection Act (COPPA), the Family Educational Rights and Privacy Act (FERPA), the Protection of Pupil Rights Amendment (PPRA), No Child Left Behind (NCLB), the PATRIOT Act, the Foreign Intelligence Surveillance Act (FISA), and National Security Letters (NSLs). (If your library does receive an NSL, the lawyers at the ACLU would love to hear about it.)

  • The Federal Trade Commission (FTC) is a good resource for consumer protection (“We collect complaints about hundreds of issues from data security and deceptive advertising to identity theft and Do Not Call violations”).
  • People should have control over their Personally Identifiable Information (PII), including sensitive personal data such as Social Security Numbers. People should know when, how, and what PII is being communicated to others. It’s always best to collect as little information as possible, only what is necessary; minimize data collection and retention.
  • Every library needs a privacy policy, but the policy is just step one. The next step is to make sure your procedures match the policy, and that you contract for privacy with third parties (vendors) to ensure that they handle patron data according to the same standards.*
  • Perform a privacy audit/assessment: what information do you collect and how do you use it?
  • Look at other libraries’ privacy policies, and the privacy policies of small/medium-sized businesses.
  • The library privacy policy should be visible to users: hand it out with new library cards, post it near computers, keep a copy at the reference desk. (And on the library website?)
  • Privacy is important not just for intellectual freedom, but also for intellectual curiosity.

*I haven’t seen the contract language, but I would imagine this is much more difficult than it sounds, especially if a library is working with OverDrive, which allows patrons to check out Kindle books through Amazon. Amazon is a data-hungry beast.

These fair information practice principles I copied directly from slide 10 of Givens’ presentation:

  • Notice/Awareness: Provide notice of information collection practices before information is collected.
  • Choice/Consent: Give the subjects of data collection options about whether and how their personal information may be used.
  • Access/Participation: Provide access to an individual’s personal information so that the individual can review and correct it.
  • Integrity/Security: The data collector must take reasonable steps to make sure the data is accurate and secure.
  • Accountability or Enforcement/Redress: There must be a mechanism for addressing and resolving complaints for failing to abide by the above four principles.

Lastly, this great article was cited by one of the webinar participants. I remember reading it before (it was a Library Link of the Day on 10/4/14): “Librarians won’t stay quiet about government surveillance,” Washington Post, Andrea Peterson, 10/3/14.

This webinar will be archived with the rest of Booklist’s webinars, probably within the next week.


Nothing to hide: Readers’ rights to privacy and confidentiality

One of the first arguments that comes up in the privacy debate – whether the issue at hand is a police search of your vehicle or Amazon keeping a record of every Kindle book you read – is that only people who have “something to hide” care about privacy.

That argument is disingenuous; if the people who make it thought for even five minutes, I bet they could come up with a few things about their own lives that aren’t illegal, or even morally or ethically wrong, but that they’d like to keep private anyway. Let’s consider the issue of library books, and what the books you check out may reveal about you. (Notice The Anarchist Cookbook is not on the following list. I don’t know the statistics about where terrorists get their bomb-making instructions, but I doubt most of it comes from the public library. There’s this thing called the Internet, you see.)

  • What to Expect When You’re Expecting, or other books that might indicate you’re trying to start a family before you’ve told anyone else.
  • Cracking the New GRE, or other test-prep books for grad school or a planned career change you aren’t ready to tell your current boss about.
  • Managing Your Depression, The Lahey Clinic Guide to Cooking Through Cancer, or other books about medical conditions you or someone close to you may be experiencing.
  • Bankruptcy for Small Business Owners might prove worrisome to your clients or your bank.
  • The Guide to Getting It On, or any books on the topics of sexuality, sexual health, safe sex, etc. (In many libraries, kids can get their own library cards at a young age, and parents aren’t allowed to monitor their accounts.) See also: It Gets Better: Coming Out, Overcoming Bullying, Creating a Life Worth Living, or Transgender Lives, etc.
  • God Is Not Great or other anti-religious texts would likely be poorly received if you’re part of a religious family or community.
  • A Gentle Path Through the Twelve Steps, or other books about personal struggle and recovery.
  • How to Buy a House; How to Sell A House, or other real estate books when you haven’t told anyone you’re thinking of moving.

These are just a few examples of information that people might justifiably want to keep personal and private, but not because of any wrongdoing. And this is why librarians strive to protect patron privacy.

“We protect each library user’s right to privacy and confidentiality with respect to information sought or received and resources consulted, borrowed, acquired or transmitted.” -ALA Code of Ethics

11/1/14 Edited to add: This short graphic novel about privacy and technology from Al Jazeera America expands this idea, looking not just at people’s reading history but about all the information they share, voluntarily or not. Thanks to Library Link of the Day for the link.

"Even if you have nothing bad to hide, giving up privacy can mean giving up power over your life story and competing with others for control."

“Even if you have nothing bad to hide, giving up privacy can mean giving up power over your life story and competing with others for control.”



“Maybe we’ve been given a false choice between opting in and giving up control over how that information is used–” “–between sharing and being left out.”

11/3/14 Edited to add: Kevin O’Kelly from the Somerville Public Library reminded me of Glenn Greenwald’s excellent TED Talk, “Why Privacy Matters.” In it, Greenwald says, “People who say that…privacy isn’t really important, they don’t actually believe it, and the way you know that they don’t actually believe it is that while they say with their words that privacy doesn’t matter, with their actions, they take all kinds of steps to safeguard their privacy. They put passwords on their email and their social media accounts, they put locks on their bedroom and bathroom doors, all steps designed to prevent other people from entering what they consider their private realm and knowing what it is that they don’t want other people to know.”

And also: “We as human beings, even those of us who in words disclaim the importance of our own privacy, instinctively understand the profound importance of it. It is true that as human beings, we’re social animals, which means we have a need for other people to know what we’re doing and saying and thinking, which is why we voluntarily publish information about ourselves online. But equally essential to what it means to be a free and fulfilled human being is to have a place that we can go and be free of the judgmental eyes of other people.”

Greenwald is the author of No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State (2014). His TED talk is well worth 20 minutes of your time.


NELA 2014: Consent of the Networked

Cross-posted on the NELA conference blog.

Intellectual Freedom Committee (IFC) Keynote: Consent of the Networked: The Worldwide Struggle for Internet Freedom, Rebecca MacKinnon (Monday, 8:30am)

MacKinnon pointed to many excellent resources during her presentation (see links below), but I’ll try to summarize a few of her key points. MacKinnon observed that “technology doesn’t obey borders.” Google and Facebook are the two most popular sites in the world, not just in the U.S., and technology companies affect citizens’ relationships with their governments. While technology may be a liberating force (as envisioned in Apple’s 1984 Super Bowl commercial), companies also can and do censor content, and governments around the world are abusing their access to data.

“There are a lot of questions that people need to know to ask and they don’t automatically know to ask.”

MacKinnon noted that we tend to assume a steady trend toward democracy, but in fact, some democracies may be sliding back toward authoritarianism: “If we’re not careful, our freedom can be eroded.” We need a global movement for digital rights, the way we need a global movement to act on climate change. If change is going to happen, it must come through an alliance of civil society (citizens, activists), companies, and politicians and policymakers. Why should companies care about digital rights? “They are afraid of becoming the next Friendster.” The work of a generation, MacKinnon said, is this: legislation, accountability, transparency, and building technology that is compatible with human rights.

It sounds overwhelming, but “everybody can start where they are.” To increase your awareness, check out a few of these links:



(Failing to) Protect Patron Privacy


On October 6, Nate Hoffelder wrote a post on The Digital Reader: “Adobe is Spying on Users, Collecting Data on Their eBook Libraries.” (He has updated the post over the past couple of days.) Why does this privacy-violating spying story deserve more attention than the multitude of others? For librarians and library users, it’s important because Adobe Digital Editions is the software used by readers who borrow e-books from the library through OverDrive (as well as other platforms). This software “authenticates” users, which is necessary because publishers require DRM (Digital Rights Management) to ensure that the one copy/one user model is in effect. (Essentially, DRM allows publishers to mimic the physical restrictions of print books – i.e. one person can read a book at a time – on e-books, which could technically be read simultaneously by any number of people. To learn more about DRM and e-books, see Cory Doctorow’s article “A Whip to Beat Us With” in Publishers Weekly; though now more than two years old, it is still accurate and relevant.)

So how did authentication become spying? Well, it turns out Adobe was collecting more information than was strictly necessary, and was sending this information back to its servers in clear text – that is, unencrypted. Sean Gallagher has been following this issue and documenting it in Ars Technica (“Adobe’s e-book reader sends your reading logs back to Adobe – in plain text“). According to that piece, the information Adobe says it collects includes the following: user ID, device ID, certified app ID, device IP address, duration for which the book was read, and percentage of the book that was read. Even if this is all they collect, it’s still plenty of information, and transmitted in plain text, it’s vulnerable to any other spying group that might be interested.
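To make the “clear text” problem concrete, here is a hedged sketch. The endpoint URL is a hypothetical stand-in, and the payload field names are loosely based on the fields reported in the Ars Technica piece, not on Adobe’s actual API.

```python
# Illustration only: "sending logs in clear text" versus "over TLS". The URL
# and field names are hypothetical stand-ins, not Adobe's actual API.
import json
import urllib.request

payload = json.dumps({
    "userID": "u123", "deviceID": "d456", "appID": "ADE-4.0",
    "duration_read_sec": 5400, "percent_read": 42,
}).encode()

# Over plain HTTP, anyone on the network path (coffee-shop Wi-Fi, an ISP, an
# intelligence agency) can read this payload verbatim:
plain = urllib.request.Request("http://telemetry.example.com/log",
                               data=payload,
                               headers={"Content-Type": "application/json"})

# Switching the scheme to https wraps the same request in TLS, so only the
# two endpoints can read it. It does not fix the deeper question of whether
# the data should be collected at all.
secure = urllib.request.Request("https://telemetry.example.com/log",
                                data=payload,
                                headers={"Content-Type": "application/json"})
# urllib.request.urlopen(secure)   # not actually sent in this sketch
```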

The plain text is really just the icing on this horrible, horrible cake. The core issue goes back much further and much deeper; as Andromeda Yelton wrote in an eloquent post on the matter, it is “about how we default to choosing access over privacy.” She points out that the ALA Code of Ethics states, “We protect each library user’s right to privacy and confidentiality with respect to information sought or received and resources consulted, borrowed, acquired or transmitted,” and yet we have compromised this principle so that we are no longer technically able to uphold it.

Jason Griffey responded to Yelton’s piece, and part of his response is worth quoting in full:

“We need to decide whether we are angry at Adobe for failing technically (for not encrypting the information or otherwise anonymizing the data) or for failing ethically (for the collection of data about what someone is reading)….

…We need to insist that the providers of our digital information act in a way that upholds the ethical beliefs of our profession. It is possible, technically, to provide these services (digital downloads to multiple devices with reading position syncing) without sacrificing the privacy of the reader.”

Griffey linked to Galen Charlton’s post (“Verifying our tools; a role for ALA?“), which suggested several steps to take to tackle these issues in the short term and the long term. “We need to stop blindly trusting our tools,” he wrote, and start testing them. “Librarians…have a professional responsibility to protect our user’s reading history,” and the American Library Association could take the lead by testing library software, and providing institutional and legal support to others who do so.

Charlton, too, pointed back to DRM as the root of these troubles, and highlighted the tension between access and privacy that Yelton mentioned. “Accepting DRM has been a terrible dilemma for libraries – enabling and supporting, no matter how passively, tools for limiting access to information flies against our professional values. On the other hand, without some degree of acquiescence to it, libraries would be even more limited in their ability to offer current books to their patrons.”

It’s a lousy situation. We shouldn’t have to trade privacy for access; people do too much of that already, giving personal information to private companies (remember, “if you’re not paying for a product, you are the product“), which in turn give or sell it to other companies, or turn it over to the government (or the government just scoops it up). In libraries, we still believe in privacy, and we should, as Griffey put it, “insist that the providers of our digital information act in a way that upholds the ethical beliefs of our profession.” It is possible.

10/12/14: The Swiss Army Librarian linked to another piece on this topic from Agnostic, Maybe, which is worth a read: “Say Yes No Maybe So to Privacy.”

10/14/14: The Waltham Public Library (MA) posted an excellent, clear Q&A about the implications for patrons, “Privacy Concerns About E-book Borrowing.” The Librarian in Black (a.k.a. Sarah Houghton, Director of the San Rafael Public Library in California), also wrote a piece: “Adobe Spies on eBook Readers, including Library Users.” The ALA response (and Adobe’s response to the ALA) can be found here: “Adobe Responds to ALA on egregious data breach,” and that links to LITA’s post “ADE in the Library Ebook Data Lifecycle.”

10/16/14: “Adobe Responds to ALA Concerns Over E-Book Privacy” in Publishers Weekly; OverDrive’s statement about Adobe Digital Editions privacy concerns. On a semi-related note, Glenn Greenwald’s TED talk, “Why Privacy Matters,” is worth 20 minutes of your time.