Privacy in a Networked World, II

This is the second post about Privacy in a Networked World. The first post, about the conversation between Bruce Schneier and Edward Snowden, is here.

“Privacy in a Networked World,” John DeLong, Director of the Commercial Solutions Center, NSA

Other than the length and volume of applause, it’s difficult to measure an audience’s attitude toward a speaker. I’ll venture, though, that the audience of Privacy in a Networked World was generally pro-Snowden; the attitude toward John DeLong can perhaps be characterized as guarded open-mindedness laced with a healthy dose of skepticism.

DeLong’s talk was both forceful and defensive; he wanted to set the record straight about certain things, but he also knew that public opinion (in that room, at least) probably wasn’t in his favor. (He said repeatedly that he did not want to have an “Oxford-style debate,” though his talk wasn’t set up as a debate in the first place.) “Let’s not confuse the recipe with the cooking,” he said, in a somewhat belabored analogy where the NSA’s work was the cooking and the law was the recipe. (I cook a lot at home, and I’ll just say, I can usually tell when I’m looking at a bad recipe, and opt to adapt it or not make it at all.)

DeLong quoted at length from Geoffrey R. Stone’s “What I Told the NSA.” (Stone was a member of the President’s Review Group in fall 2013, after the Snowden revelations.) Stone’s conclusions were not altogether positive; he found that while the NSA “did its job,” many of its programs were “highly problematic and much in need of reform.” But it was the Executive Branch, Congress, and the FISA Court that authorized those programs and that are responsible for reforming them. Stone added, “Of course, ‘I was only following orders’ is not always an excuse….To be clear, I am not saying that citizens should trust the NSA. They should not. Distrust is essential to effective democratic governance.”

DeLong said, “The idea that the NSA’s activities were unauthorized is wrong, wrong in a magnificent way.” He emphasized that the NSA is not a law enforcement agency, it’s an intelligence agency. He spoke in favor of people with different backgrounds and expertise – lawyers, engineers, mathematicians, privacy experts, etc. – coming together to work out solutions to problems, with respect for each other’s abilities. “Technology,” he said, “always comes back to how we as humans use it.” At present, “We do not have technologies that identify privacy risks….Privacy engineering could be one of the most important engineering feats of our time.”

DeLong talked about rebuilding the nation’s confidence in the NSA. “Confidence is the residue of promises kept,” he said. “More information does not necessarily equal more confidence.” (Someone on Twitter pointed out that much depends on the content of the information.) The talk was a good reminder not to villainize the entire NSA; part of DeLong’s forcefulness was undoubtedly on behalf of his co-workers and staff whom he felt were unfairly maligned. And technology that could identify privacy risks, built by people who have different perspectives and backgrounds, would be excellent. But do we need technology that identifies privacy risks, or do we need stronger oversight and better common sense? Mass surveillance erodes trust in government and hasn’t been terribly effective; what more do we need to know to put a stop to it?

“Privacy and Irony in Digital Health Data,” John Wilbanks, Chief Commons Officer, Sage Bionetworks

John Wilbanks gave a fast-paced, interesting talk about health data. The “irony” in the title of his talk soon became clear when he gave the example of Facebook’s mood manipulation experiment compared to a study of Parkinson’s disease. The sample size for Facebook was many times larger, with a constant flow of information from “participants,” as opposed to a much smaller sample population who filled out a survey and answered questions by phone. “What does our society value?” Wilbanks asked. This question can be answered by another question: “What do we surveil?”

Wilbanks showed a graph representing cardiovascular disease and terrorism: there is 1 death every 33 seconds from cardiovascular disease – “That’s like 9/11 every day” – and yet there’s not nearly the same kind of “surveillance” for health that there is for terrorism. Participating in a research study, Wilbanks said, is like “volunteering for surveillance,” and usually the mechanisms for tracking aren’t as comprehensive as, say, Facebook’s. Of course, privacy laws affect health research, and informed consent protects people by siloing their data; once the study is concluded, other researchers can’t use that data, and there’s no “network effect.”
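The arithmetic behind that comparison is easy to check (using roughly 2,977 as the commonly cited 9/11 death toll):

```python
# Rough check of "one cardiovascular death every 33 seconds"
# against the 9/11 death toll (~2,977, a commonly cited figure).
seconds_per_day = 24 * 60 * 60             # 86,400
cvd_deaths_per_day = seconds_per_day / 33  # ~2,618
print(round(cvd_deaths_per_day))           # 2618 -- the same order of magnitude
```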

Informed consent, while a good idea in theory, often leads to incomprehensible documents (much like Terms of Service). These documents are written by doctors, reviewed by lawyers, and edited by committee. Furthermore, said Wilbanks, people in health care don’t usually understand issues of power and data. So, he asked, how do we run studies at internet scale and make them recombinant? How do we scale privacy alongside the ability to do research? Wilbanks demonstrated some ideas to improve on traditional informed consent, which could also allow research subjects to get a copy of their own data and see which researchers are using data from the research in which they participated.

Obviously there are risks to individuals who share their personal health data, but there can be advantages too: more scientists having access to more data and doing more research can lead to more breakthroughs and improvements in the field of medicine.

Last year, Wilbanks talked about privacy and health data on NPR; you can listen to the segment here.

Still to come: Microsoft, Google, Pew, and a panel on “What Privacy Does Society Demand Now and How Much is New?” 

Privacy in a Networked World

This is the first post about Privacy in a Networked World, the Fourth Annual Symposium on the Future of Computation in Science and Engineering, at Harvard on Friday, January 23.

A Conversation between Bruce Schneier and Edward Snowden (video chat)

Bruce Schneier is a fellow at the Berkman Center for Internet & Society, and the author of Data and Goliath. Edward Snowden was a sysadmin at the NSA who revealed the extent of the government’s mass surveillance. The conversation was recorded (no joke) and is available on YouTube.

I have to say it was an incredibly cool feeling when Snowden popped up on the giant screen and was there in the room with us. There was sustained applause when he first appeared and also at the end of the conversation, when he was waving goodbye. Schneier started by asking Snowden about cryptography: What can and can’t be done? Snowden replied, “Encryption…is one of the few things that we can rely on.” When implemented properly, “encryption does work.” Poor cryptography, either through bad implementation or a weak algorithm, means weak security. End points are also weak spots, even if the data in transit is protected; it’s easier for an attacker to get around crypto than to break it.

Snowden pointed out a shift in the NSA’s focus over the last ten years from defense to offense. He encouraged us to ask Why? Is this proper? Appropriate? Does it benefit or serve the public?

The explosion in “passive” mass surveillance (collecting everything in case it’s needed later) is partly because it’s easy, cheap, and simple. If more data is encrypted, it becomes harder to sweep up, and hackers (including the NSA) who use more “active” techniques run a higher risk of exposure. This “hunger for risk has greatly increased” during the War on Terror era. Their targets are “crazy, unjustified….If they were truly risk averse they wouldn’t be doing this…it’s unlawful.”

Snowden said that the NSA “is completely free from any meaningful judicial oversight…in this environment, a culture of impunity develops.” Schneier said there were two kinds of oversight: tactical oversight within the organization (“did we follow the rules?”) and oversight from outside of the organization (“are these the right rules?”). He asked, “What is moral in our society?”

Snowden asked if the potential intelligence that we gain was worth the potential cost. He stated that reducing trust in the American infrastructure is a costly move; the information sector is crucial to our economy. The decrease in trust, he said, has already cost us more than the NSA’s budget. “They are not representing our interests.”

Schneier, using his NSA voice, said, “Corporations are spying on the whole internet, let’s get ourselves a copy!” (This was much re-tweeted.) “Personal information,” he said, “is the currency by which we buy our internet.” (Remember, if you can’t tell what the product is, you’re the product.) It’s “always amusing,” he said, when Google complains about the government spying on their users, because “it’s our job to spy on our users!” However, Schneier thinks that the attitudes of tech companies and standards bodies are changing.

These silos of information were too rich and interesting for governments to ignore, said Snowden, and there was no cost to scooping up the data because until 2013, “people didn’t realize how badly they were being sold up the river.” Schneier said that research into privacy-preserving technologies might increase now that there is more interest. Can we build a more privacy-preserving network, with less metadata?

“We’ve seen that the arguments for mass surveillance” haven’t really held up; there is little evidence that it has stopped many terrorist attacks. Schneier cited an article from the January 26, 2015 edition of The New Yorker, “The Whole Haystack,” in which author Mattathias Schwartz lists several recent terrorist attacks, and concludes, “In each of these cases, the authorities were not wanting for data. What they failed to do was appreciate the significance of the data they already had.”

Unlike during the Cold War, now “we all use the same stuff”: we can’t attack their networks and defend our networks, because it’s all the same thing. Schneier said, “Every time we hoard a zero-day opportunity [knowing about a security flaw], we’re leaving ourselves vulnerable to attack.”


Snowden was a tough act to follow, especially for John DeLong, Director of the Commercial Solutions Center for the NSA, but that’s exactly who spoke next. Stay tuned.

 

Housecleaning discovery: the Extinction Timeline

Made to Break by Giles Slade

Back in March 2013, I was trying every avenue to find a timeline of obsolescence I’d seen once during grad school. Even with the help of the Swiss Army Librarian, I came up empty-handed (though we did find a lot of other cool stuff, like the book Made to Break: Technology and Obsolescence in America by Giles Slade).

In the end – nearly two years later, as it happens – it was another book that led me to find the original piece paper I’d had in mind. That book was The Life-Changing Magic of Tidying Up by Marie Kondo (bet you didn’t see that coming, did you?). I’ve spent a good chunk of the past two weeks going through all the things in my apartment – clothes, books, technology, media, and lots and lots of papers – and at last, I found the timeline of obsolescence that I was looking for almost two years ago.

The Life-Changing Magic of Tidying Up by Marie Kondo

Only it isn’t a timeline of obsolescence, exactly; it’s an “extinction* timeline 1950-2050,” and it’s located – I think – in the 2010 book Future Files: A Brief History of the Next 50 Years by Richard Watson. It was created jointly by What’s Next and the Future Exploration Network; Ross Dawson, founding chairman of the latter, wrote a blog post which includes a PDF of the timeline, “Extinction Timeline: what will disappear from our lives before 2050.”

*Existence insignificant beyond this date

Repair shops – the reason I was looking for this timeline in the first place – apparently went out of fashion (or “significance”) just before 2010, as did mending things, generally. Fortunately, the “predicted death date” for the things on the timeline is “not to be taken too seriously,” and since “a good night’s sleep” is coming under the axe just before 2040, I just have to hope that they’re wrong about that one.

Future Files by Richard Watson

Check out the extinction timeline yourself. Anything strike your interest? Do you agree or disagree with the predictions for the next 35 years? Discuss.

Extinction timeline 1950-2050 (PDF)

Introduction to Cyber Security

This fall, I enrolled in, and completed, my first MOOC (massive open online course), Introduction to Cyber Security at the Open University (UK) through their FutureLearn program. I found out about the course almost simultaneously through Cory Doctorow at BoingBoing and the Radical Reference listserv (thanks, Kevin).

Screen shot from course “trailer,” featuring Cory Doctorow

The free eight-week course started on October 15 and ended on December 5. Each week started with a short video, featuring course guide Cory Doctorow, and the rest of the week’s course materials included short articles and videos. Transcripts of the videos were made available, and other materials were available to download in PDF. Each step of each week included a discussion area, but only some of the steps included specific prompts or assignments to research and comment; facilitators from OU moderated the discussions and occasionally answered questions. Each week ended with a quiz; students had three tries to get each answer, earning successively fewer points for each try.

Week 1: [Security] Threat Landscape: Learn basic techniques for protecting your computers and your online information.
Week 2: Authentication and passwords
Week 3: Malware basics
Week 4: Networking and Communications: How does the Internet work?
Week 5: Cryptography basics
Week 6: Network security and firewalls
Week 7: “When your defenses fail”: What to do when things go wrong
Week 8: Managing and analyzing security risks

The FutureLearn website was incredibly easy to use, with a clean and intuitive design, and each week of the course was broken down into little bite-size chunks so it was easy to do a little bit at a time, or plow through a whole week in one or two sessions. I tended to do most of the work on Thursdays and Fridays, so there were plenty of comments in the discussions by the time I got there.

Anyone can still take the course, so I won’t go too in-depth here, but the following are some tips, facts, and resources I found valuable or noteworthy during the course:

  • Identify your information assets: these include school, work, and personal documents; photos; social media account information and content; e-mail; and more, basically anything you store locally on your computer or in the cloud. What is the value (high/medium/low) of this information to you? What are the relevant threats?
  • Passwords are how we identify ourselves (authentication). Passwords should be memorable, long, and unique (don’t use the same password for different sites or accounts). Password managers such as LastPass or KeePass can help, though using one means placing a lot of trust in it. A password manager should: require a master password, lock when inactive, be encrypted, and support 2-factor authentication.
  • Use 2-factor authentication whenever it is available.
  • 85% of all e-mail sent in 2011 was spam.
  • Anti-virus software uses two techniques: signatures (distinctive patterns of data) and heuristics (rules based on previous knowledge about known viruses).
  • The Sophos “Threatsaurus” provides an “A-Z of computer and data security threats” in plain English.
  • The Internet is “a network of networks.” Protocols (e.g. TCP/IP) are conventions for communication between computers. All computers understand the same protocols, even in different networks.
  • Wireless networks are exposed to risks to Confidentiality, Integrity, and Availability (CIA); thus, encryption is necessary. The best option currently is Wi-Fi Protected Access II (WPA2).
  • The Domain Name System (DNS) translates human-readable domain names to IP addresses.
  • Any data that can be represented in binary format can be encrypted by a computer.
  • Symmetric encryption uses two copies of one shared key – but how do you transmit the shared key safely? Asymmetric encryption (a.k.a. public-key cryptography) uses a key pair, and the Diffie-Hellman key exchange lets two parties establish a shared key over an insecure channel. (The video explaining this was very helpful.)
  • Pretty Good Privacy (PGP) is a collection of crypto techniques. In the course, we sent and received encrypted e-mail with Mailvelope.
  • Transport Layer Security (TLS) has replaced Secure Sockets Layer (SSL) as the standard crypto protocol to provide communication security over the Internet.
  • Firewalls block dangerous information/communications from spreading across networks. A personal firewall protects the computer it’s installed on.
  • Virtual Private Networks (VPNs) allow a secure connection across an untrusted network. VPNs use hashes, digital signatures, and message authentication codes (MACs).
  • Data loss is often due to “insider attacks”; these make up 36-37% of information security breaches.
  • Data is the representation of information (meaning).
  • The eight principles of the Data Protection Act (UK). Much of the information about legislation in Week 7 was specific to the UK, including the Computer Misuse Act (1990), the Regulation of Investigatory Powers Act (2000), and the Fraud Act (2006).
  • File permissions may be set to read (allows viewing and copying), write (allows editing), and execute (allows running a program).
  • Use a likelihood-impact matrix to analyze risk: protect high-impact, high-likelihood data like e-mail, passwords, and online banking data.
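The Diffie-Hellman exchange covered in Week 5 can be demonstrated with toy numbers. This is a sketch only: the tiny values here are trivially breakable, and real implementations use moduli of 2048 bits or more.

```python
# Toy Diffie-Hellman key exchange (illustrative only -- the tiny
# numbers here are trivially breakable; real systems use 2048+ bits).
p = 23   # public prime modulus
g = 5    # public generator

a = 6    # Alice's private key (kept secret)
b = 15   # Bob's private key (kept secret)

A = pow(g, a, p)   # Alice publishes g^a mod p
B = pow(g, b, p)   # Bob publishes g^b mod p

# Each side combines its own secret with the other's public value;
# both arrive at g^(ab) mod p without ever transmitting it.
alice_shared = pow(B, a, p)
bob_shared = pow(A, b, p)

assert alice_shared == bob_shared
print(alice_shared)  # the shared secret key
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is the discrete logarithm problem, which is computationally infeasible at real-world key sizes.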

Now that I’ve gained an increased awareness of cyber security, what’s changed? Partly due to this course and partly thanks to earlier articles, conference sessions, and workshops, here are the tools I use now:

See also this excellent list of privacy tools from the Watertown Free Library. Privacy/security is one of those topics you can’t just learn about once and be done; it’s a constant effort to keep up. But as more and more of our data becomes electronic, it’s essential that we keep tabs on threats and do our best to protect our online privacy.

NELA 2014: Consent of the Networked

Cross-posted on the NELA conference blog.

Intellectual Freedom Committee (IFC) Keynote: Consent of the Networked: The Worldwide Struggle for Internet Freedom, Rebecca MacKinnon (Monday, 8:30am)

MacKinnon pointed to many excellent resources during her presentation (see links below), but I’ll try to summarize a few of her key points. MacKinnon observed that “technology doesn’t obey borders.” Google and Facebook are the two most popular sites in the world, not just in the U.S., and technology companies affect citizen relationships with their governments. While technology may be a liberating force (as envisioned in Apple’s 1984 Superbowl commercial), companies also can and do censor content, and governments around the world are abusing their access to data.

“There are a lot of questions that people need to know to ask and they don’t automatically know to ask.”

MacKinnon noted that our assumption is that of a trend toward democracy, but in fact, some democracies may be sliding back toward authoritarianism: “If we’re not careful, our freedom can be eroded.” We need a global movement for digital rights, the way we need a global movement to act on climate change. If change is going to happen, it must be through an alliance of civil society (citizens, activists), companies, and politicians and policymakers. Why should companies care about digital rights? “They are afraid of becoming the next Friendster.” The work of a generation, MacKinnon said, is this: legislation, accountability, transparency, and building technology that is compatible with human rights.

It sounds overwhelming, but “everybody can start where they are.” To increase your awareness, check out a few of these links:

 

 

(Failing to) Protect Patron Privacy


On October 6, Nate Hoffelder wrote a post on The Digital Reader: “Adobe is Spying on Users, Collecting Data on Their eBook Libraries.” (He has updated the post over the past couple days.) Why is this privacy-violating spying story any more deserving of attention than the multitude of others? For librarians and library users, it’s important because Adobe Digital Editions is the software that readers who borrow e-books from the library through Overdrive (as well as other platforms) are using. This software “authenticates” users, and this is necessary because the publishers require DRM (Digital Rights Management) to ensure that the one copy/one user model is in effect. (Essentially, DRM allows publishers to mimic the physical restrictions of print books – i.e. one person can read a book at a time – on e-books, which could technically be read simultaneously by any number of people. To learn more about DRM and e-books, see Cory Doctorow’s article “A Whip to Beat Us With” in Publishers Weekly; though now more than two years old, it is still accurate and relevant.)

So how did authentication become spying? Well, it turns out Adobe was collecting more information than was strictly necessary, and was sending this information back to its servers in clear text – that is, unencrypted. Sean Gallagher has been following this issue and documenting it in Ars Technica (“Adobe’s e-book reader sends your reading logs back to Adobe – in plain text“). According to that piece, the information Adobe says it collects includes the following: user ID, device ID, certified app ID, device IP address, duration for which the book was read, and percentage of the book that was read. Even if this is all they collect, it’s still plenty of information, and transmitted in plain text, it’s vulnerable to any other spying group that might be interested.
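To see why plain-text transmission matters, here is a toy sketch of the kind of payload the Ars Technica piece described. The field names and values are illustrative, not Adobe’s actual schema; the point is that unencrypted bytes on the wire are readable by anyone on the network path.

```python
import json

# Hypothetical logging payload, loosely modeled on the fields the
# Ars Technica piece reported (names and values are made up here).
payload = {
    "userID": "anon-12345",
    "deviceID": "device-67890",
    "appID": "ADE-4.0",
    "deviceIP": "203.0.113.7",
    "duration_read_secs": 1820,
    "percent_read": 42.5,
}

# Sent over plain HTTP, these are the literal bytes on the wire --
# a passive eavesdropper reads them as-is. Over TLS (HTTPS), the
# same bytes would be encrypted in transit.
wire_bytes = json.dumps(payload).encode("utf-8")
print(wire_bytes)
```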

The plain text is really just the icing on this horrible, horrible cake. The core issue goes back much further and much deeper: as Andromeda Yelton wrote in an eloquent post on the matter, “about how we default to choosing access over privacy.” She points out that the ALA Code of Ethics states, “We protect each library user’s right to privacy and confidentiality with respect to information sought or received and resources consulted, borrowed, acquired or transmitted,” and yet we have compromised this principle so that we are no longer technically able to uphold it.

Jason Griffey responded to Yelton’s piece, and part of his response is worth quoting in full:

“We need to decide whether we are angry at Adobe for failing technically (for not encrypting the information or otherwise anonymizing the data) or for failing ethically (for the collection of data about what someone is reading)….

…We need to insist that the providers of our digital information act in a way that upholds the ethical beliefs of our profession. It is possible, technically, to provide these services (digital downloads to multiple devices with reading position syncing) without sacrificing the privacy of the reader.”

Griffey linked to Galen Charlton’s post (“Verifying our tools; a role for ALA?“), which suggested several steps to take to tackle these issues in the short term and the long term. “We need to stop blindly trusting our tools,” he wrote, and start testing them. “Librarians…have a professional responsibility to protect our user’s reading history,” and the American Library Association could take the lead by testing library software, and providing institutional and legal support to others who do so.

Charlton, too, pointed back to DRM as the root of these troubles, and highlighted the tension between access and privacy that Yelton mentioned. “Accepting DRM has been a terrible dilemma for libraries – enabling and supporting, no matter how passively, tools for limiting access to information flies against our professional values.  On the other hand, without some degree of acquiescence to it, libraries would be even more limited in their ability to offer current books to their patrons.”

It’s a lousy situation. We shouldn’t have to trade privacy for access; people do too much of that already, giving personal information to private companies (remember, “if you’re not paying for a product, you are the product“), which in turn give or sell it to other companies, or turn it over to the government (or the government just scoops it up). In libraries, we still believe in privacy, and we should, as Griffey put it, “insist that the providers of our digital information act in a way that upholds the ethical beliefs of our profession.” It is possible.

10/12/14: The Swiss Army Librarian linked to another piece on this topic from Agnostic, Maybe, which is worth a read: “Say Yes No Maybe So to Privacy.”

10/14/14: The Waltham Public Library (MA) posted an excellent, clear Q&A about the implications for patrons, “Privacy Concerns About E-book Borrowing.” The Librarian in Black (a.k.a. Sarah Houghton, Director of the San Rafael Public Library in California), also wrote a piece: “Adobe Spies on eBook Readers, including Library Users.” The ALA response (and Adobe’s response to the ALA) can be found here: “Adobe Responds to ALA on egregious data breach,” and that links to LITA’s post “ADE in the Library Ebook Data Lifecycle.”

10/16/14: “Adobe Responds to ALA Concerns Over E-Book Privacy” in Publishers Weekly; Overdrive’s statement about Adobe Digital Editions privacy concerns. On a semi-related note, Glenn Greenwald’s TED talk, “Why Privacy Matters,” is worth 20 minutes of your time.

 

 

Usability and Visibility

Last fall I wrote about Google’s redesign (which actually increased the number of clicks it took to get something done). Sure, it’s a “cleaner, simpler” look, but how did it get cleaner and simpler? To put it plainly: they hid stuff.

For those who are continually riding the breaking wave of technology, these little redesigns cause a few moments of confusion or annoyance at worst, but for those who are rather more at sea to begin with, they’re a tremendous stumbling block.

Today in the library, I helped an 80-year-old woman access her brand-new Gmail account. She signed on to one of the library computers with her library card – no problem there. Then she stared at the desktop for a while, so I explained that she could use one of three browsers – Chrome, Firefox, or Internet Explorer – to access the Internet. “Don’t confuse me with choices, just tell me what to do. Which one do you like?” she asked.

I suggested Firefox, and she opened the browser. The home screen is set to the familiar Google logo and search bar, surrounded by white space. I pointed up to the corner and told her to click on Gmail:

[Screenshot: Gmail link in the corner of the Google homepage]

Then came the sign-in screen, asking for email and password; at least the “sign in” button is obvious.

[Screenshot: Google sign-in screen]

Next, we encountered a step that asked her if she wanted to confirm her account by getting a mobile alert. I explained that she could skip this step, but she clicked on it anyway, then got frustrated when her inbox didn’t appear.

Now, here’s something that anyone who has ever put up any kind of signage probably knows: People don’t read signs. They don’t read instructions. Good design takes this into account; as Don Norman (The Design of Everyday Things) says, “Design is really an act of communication.” Good design communicates with a minimum of words and instructions.

In this case, I canceled the prompt for her and we got to her inbox. I showed her that she had three e-mails – informational, “welcome” e-mails from Gmail itself – and upon seeing she had no mail, she wanted to sign out. “Do I just click the X?” she asked, moving the mouse up to the upper right hand corner of the program. I explained that clicking the red X would close the browser, but that she should sign out of Gmail first (even though the library computers wipe out any saved information between patrons).

But is there a nice big button that says “Sign out”? No, there is not. Instead, there’s this:

[Screenshot: Gmail account menu]

How on earth would a new user know to click on that to sign out? She wouldn’t. And the thing about new users (very young ones excepted, usually) is that they don’t want to go around clicking on random things, because they’re afraid they will break something, or make a mistake they can’t correct or backtrack from.

I think the above scenario will be familiar to anyone who works in a public library, not to mention anyone who has tried to help a parent or a grandparent with a computer question. It’s easy to get frustrated with the user, but more often than not the blame really rests with the designer – and yet it’s not the designers who are made to feel stupid for “not getting it” or making mistakes.

And it isn’t just beginning users who run into these problems. Sometimes it seems as though designers are changing things around just for the sake of change, without making any real improvements. Examples spring to mind:

Think of the latest “upgrade” to Google Maps. If there are checkboxes for all the things you already know are problems, why push the new version?

[Screenshot: Google Maps “new version” feedback checkboxes]

Even Twitter, which is usually pretty good about these things (and which got stars across the board in the EFF’s most recent privacy report, “Who Has Your Back?: Protecting Your Data From Government Requests”), is not immune to the making-changes-for-no-reason trend:

[Screenshot: Twitter redesign notice]

But perhaps the most notorious offender of all is iTunes:

[Screenshot: iTunes]

[Screenshot: iTunes]

To quote Don Norman (again), “Once a satisfactory product has been achieved, further change may be counterproductive, especially if the product is successful. You have to know when to stop.”

To this end, I would suggest to all designers and front-end developers: please, run some user testing before you make changes, or as you’re creating a new design. Get just five people to do a few tasks. See where they get confused and frustrated, see where they make mistakes. Remember (Norman again), “Designers are not typical users. Designers often think of themselves as typical users…[but] the individual is in no position to discover all the relevant factors. There is no substitute for interaction with and study of actual users of a proposed design.”

Edited to add: WordPress isn’t immune, either.

[Screenshot: WordPress editor prompt]

Is it “easier”? Is it “improved”? How so? I’m OK with the way it is now, thanks…but soon I’m sure I won’t have a choice about switching over to the new, “easier,” “improved” way.

“Netflix for books” already exists: it’s called the library

Even in a profession where we interact with the general public daily, it can be tricky for librarians to assess how much other people know about what we do, and what libraries offer – which is why it is so delightful to see an article by a non-librarian raising awareness of a service libraries offer. In “Why the Public Library Beats Amazon – For Now” in the Wall Street Journal, Geoffrey A. Fowler praises public libraries across the country, more than 90% of which offer e-books (according to the Digital Inclusion Study funded by the Institute of Museum and Library Services).

Noting the rise of Netflix-style subscription platforms like Oyster and Scribd, Fowler observes that libraries still have a few key advantages: they’re free, and they offer more books that people want to read.

Graphic designer Aaron Tung’s idea for the Penguin – Random House logo

Librarians have been working with publishers for several years, negotiating various deals and trying out different models (sometimes it seems like two steps forward, one step back), but finally all of the Big Five have come on board and agreed to “sell” (license) e-books and digital audiobooks to libraries under some model. (The Big Five were formerly the Big Six, but Random House and Penguin merged and became Penguin Random House, missing a tremendous opportunity to call themselves Random Penguin House, with accompanying awesome logo.)

Thus, while Amazon’s Kindle Unlimited (KU for short – has the University of Kansas made a fuss about this yet? They should) touts its 600,000 titles, the question readers should be asking is, which 600,000 titles? All books are not created equal. The library is more likely to have the books you want to read, as Fowler points out in his article. It may be true that Amazon, Oyster, and Scribd have prettier user interfaces, and it may take fewer clicks to download the book you want (if it’s there), but library platforms – including OverDrive, 3M Cloud Library, and others – have made huge strides in this area. If you haven’t downloaded an e-book from your library recently, or at all, give it a try now – it’s leaps and bounds smoother than it used to be. You may have to wait for it – most publishers still insist on the “one copy/one user” model, rather than a simultaneous use model – but it is free. (Or if you’re impatient and solvent, you can go ahead and buy it.)

Readers’ advisory desk at the Portland (ME) Public Library.

Another way in which the library differs from for-profit book-rental platforms is that, to put it bluntly, the library isn’t spying on you. If you’re reading a Kindle book, Amazon knows how fast you read, where you stop, what you highlight. Libraries, on the other hand, have always valued privacy. The next time you’re looking for an e-book, try your local library – all you need is your library card number and PIN.

MLA Conference 2014, Day Two (Thursday)

Harvard Library Innovation Lab: Pop-Ups, Prototypes, and Awesome Boxes

Annie Cain, Matt Phillips, and Jeff Goldenson from the Harvard Library Innovation Lab presented some of their recent projects. Cain started off by introducing Awesome Box: the Awesome Box gives library users the opportunity to declare a library item (book, audiobook, movie, TV show, magazine, etc.) “awesome” by returning it to an Awesome Box instead of putting it into the book drop. Library staff can then scan the “awesome” items and send them to a custom website (e.g. arlington.awesomebox.io), where anyone can see the “recently awesome” and “most awesome” items. Instead of librarian-to-patron readers’ advisory, it’s patron-to-patron/librarian. Cool, fun, and easy to use! “Awesome” books can also be put on display in the library.

Phillips talked about the idea of “hovermarks,” bringing favicon-style images to the stacks by placing special bookmarks in books. Patrons or librarians could place a hovermark in a book to draw attention to local authors, Dewey Decimal areas, beach reads, favorites, Awesome Box picks, or anything else. It’s a “no-tech” way to “annotate the stacks.”

Goldenson floated the idea of a Library Community Catalog, inspired by the Whole Earth Catalog. The Library Community Catalog could contain real things, ideas, speculations, interviews, or other articles. It could be “hyper-local,” in print and/or online.

Of the three ideas presented, Awesome Box is definitely the most developed, and Harvard, which “isn’t necessarily known for sharing,” is eager to get it into public libraries. Contact them if you’re interested in setting it up at your library!

Libraries are Keeping Readers First: An Update on the National Initiative and How You Can Participate

Readers First is “a movement to improve e-book access and services for public library users.” Kelvin Watson from Queens Library and Michael Santangelo from BookOps presented an update on this initiative, explaining the work that’s been done thus far and how far we have to go. The more people (and libraries) sign on, the stronger the team and the greater its ability to effect change. Already, said Santangelo, Readers First represents over 20 million readers.


It’s worth going to the Readers First site (link in the previous paragraph) to read their principles. The two main challenges regarding e-books in libraries are availability and discoverability/access. Availability is an issue with the publishers; the issues of discoverability and access can be taken up with the vendors. Because libraries are only indirectly connected to publishers, but directly connected to vendors, Readers First decided to focus its efforts on the discoverability/access challenge.

Santangelo said that Ranganathan’s Five Laws of Library Science applied to e-books also (save the time of the reader, (e)books are for use, etc.) and that libraries have a responsibility to ensure open, easy, and free access to e-books the same as we do for print books. However, the e-book experience now is fragmented, disjointed, and cumbersome, creating a poor user experience. This is where the four Readers First principles come in: readers should be able to discover content in one comprehensive catalog; access a variety of content from multiple sources; interact with the library in the library’s own context; and read e-books compatible with all e-reading devices.

A Readers First Working Group sent a survey to vendors in order to create a guide to library e-book vendors. This guide will help librarians who are choosing an e-book vendor for the first time, or moving from one to another; it will also help vendors design their systems and decide what to prioritize.

Watson said that libraries should see vendors as partners, and challenge them to “do the right thing.” Librarians should hold all vendors accountable to the Readers First principles, with the end goal of a seamless experience for the user. The long-term objective, said Michael Colford of the Boston Public Library, is to “have the discovery layer be the platform.” Until then, we’re relying on APIs. “We can make things less complicated, but we can’t make it easier,” said Santangelo.

Readers First is working with the National Information Standards Organization (NISO) to develop standards for e-books, but according to Watson, the perfect format hasn’t been invented yet. (Other than PDFs, most e-book files are proprietary formats, wrapped in DRM and not usable across devices.)

MA E-Book Project

Deb Hoadley presented an update on the Massachusetts E-Book Project on behalf of the Massachusetts Library System. I was already familiar with the project because Robbins is one of the pilot libraries, but it was good to review the history, see where the project had hit snags, and hear from other librarians at pilot libraries (Jason Homer from Wellesley and Jackie Mushinsky from WPI) about how they had introduced the project to patrons.

You can read about the project’s history and the RFP, and see updates, on the website, so I want to use this space to draw a parallel between the MA E-Book Project and Readers First. Although the pilot consists of three different vendors (BiblioBoard, Baker & Taylor (Axis 360), and EBL) with three different models, the end goal is a single e-book platform that offers integrated and seamless discovery. Any Massachusetts resident would have access through this user-friendly platform to e-content that is owned – not licensed – by Massachusetts libraries; local content would also be hosted and discoverable.

Although we are far from this goal right now, “Our vendors are listening to us,” said Homer. He said that participating in the pilot project has enabled him to start conversations with patrons about how much we spend on e-books now and why we need a new model. Mushinsky, who added local content through BiblioBoard, said that we need to ask, “Will this resource be of value to us? Can we add value to it?”

I came away from these two sessions (Readers First and the MA E-Book Project) convinced that we have the right goals, and dedicated people working toward them, but a little depressed at how far we have to go. Slowly but surely…


Teaching the Tools: Technology Education in Public Libraries

Clayton Cheever live-blogged this session; his notes are posted on the Teaching the Tools site.

Anna Litten from Wellesley did an excellent job moderating this informative panel. Litten and the other panelists (Michael Wick, Theresa Maturevich, Jason Homer, and Sharani Robins) built a website called Teaching the Tools: Libraries and Technology Education, which they hope will serve as a resource going forward. To borrow from the site: “All reference librarians are technology trainers, educators and instructors these days. But what does it really mean to teach technology topics in public libraries? What can and should we teach? How does technology instruction fit into our broader mission and core responsibilities? What resources are available to use and to our clients? How do we become better presenters and instructors?”

The panelists addressed these questions during the session. They all teach in their libraries, but the teaching takes different forms. “I teach to whatever question comes to the door, in whatever way the learner can understand it,” said Wick. Maturevich talked about printed brochures, online resources, and videos; Robins talked about beginner classes, one-on-one sessions, and “Wired Wednesday,” when patrons can drop in for tech help. Robins has also had reps from Barnes & Noble and Best Buy come in to help people with e-reading devices, and she often uses the resources at GCFLearnFree.org. Homer teaches intermediate classes in the Wellesley computer lab, and other Wellesley staff teach beginner classes. Clearly, there are many approaches, and flexibility is key.

Litten suggested taking the time to read instructional design blogs; most librarians don’t have a background in instructional design, but the field does exist and there’s a lot we can learn. “We have to focus on what’s going to work,” she said. “If it’s not working, abandon! Abandon!”

What to do when you offer a class and no one shows up? Wick and Litten talked about forming partnerships in the community. “We can be really useful to you in ways you didn’t even realize,” said Litten. “Listen,” Wick encouraged. Ask people, “What do you want? We’ll give it to you.” As for whether teaching technology is part of the library’s mission, Wick said, why wouldn’t it be? “We help everybody with everything else. Why aren’t we helping them as much as we can, more than they’re asking?” Find your audience first, said Wick, then design your classes.

Some library staff are reluctant to teach classes, but that isn’t the only kind of teaching. Nor do tech teachers have to be experts; in fact, said Wick, good teachers can be just one step ahead of their students. Knowing the librarian/teacher is not an expert but a fellow learner can put patrons/students at ease. Confronted with a question she doesn’t know the answer to, Maturevich often uses the line, “I don’t know either, but this is how we find out.”

“Good instruction depends on having good goals,” said Litten. “We’re already doing these things, we just need to do them a little bit better.”


That’s all, folks! If you missed it, you can read about Wednesday’s sessions here (part 1) and here (part 2).

See the whole MLA conference program here [PDF]


We interrupt this broadcast…

Another post or three about MLA still to come, but first: May 6 was International Day Against DRM. Please go read what Sarah (a.k.a. the Librarian In Black) has to say about this, and follow all her links (especially check out Defective By Design).

“Consumers, and libraries by extension, should have the right to access eBooks on any technological platform, including the hardware and software we choose.” – Sarah Houghton

And now, back to our regularly scheduled programming…