The elusive element of delight

Once a year or so, my husband and I order several one-ounce packets of various kinds of tea from a site called Culinary Teas. This year, when our supplies dwindled and we went to place a new order, we were completely blown away by the site’s new design. We could not stop marveling at it.

Thanks to the Internet Archive’s Wayback Machine, it’s still possible to see what the Culinary Teas site looked like before. Here’s a screenshot from February 10, 2011:

The Culinary Teas site, as captured by the Wayback Machine in 2011.

Again, thanks to the Wayback Machine, I could see that the site design really hadn’t changed much from its beginnings in the early 2000s. Here’s what it looked like in 2003:

The Culinary Teas site in 2003, courtesy of the Wayback Machine.

They changed some colors around – the orange disappeared between 2003 and 2008 – but the site ID is still the same, and you can see they’ve got a seasonal theme going on behind it (snowflakes, leaves). There is a lot of text, plus left and right sidebars, and the overall effect is busy. The login option and shopping cart are roughly where you’d expect them to be, near the top right, and the left sidebar provides an easy way to browse different kinds of tea. The search box is buried in the lower part of the right sidebar, which is not where most users are accustomed to looking for it, but if you’re just here to buy tea, the left sidebar organization is pretty clear. Overall, the design is functional; it works, it’s just not necessarily a pleasure to use.

But then, in 2014…WHAM.

The new Culinary Teas site, December 2014.

HOLY SHIT IT’S BEAUTIFUL. Partly our reaction was due to the difference between our expectation (we remembered the old site) and the reality (of the new site). First of all, there’s a ton of white space, and much less text, so the first impression is much cleaner. The teapot logo is transformed and the site ID font is updated; they’re arranged together in the top left, according to standard web conventions, instead of top center. And there in the top right is the search bar, again in line with conventions; My Account is there too.

As for navigation, the left sidebar menu has remained more or less unchanged in terms of content and organization, and the organization makes sense to me (both as a tea-drinker and as an organization-minded librarian). The horizontal nav across the top, under the search box and My Account link, offers a reasonable six choices. (There are additional footer links, but we never needed to use them; even the About and Contact links, which are often buried in the footer of commercial sites, are header links here.)

When you click on one of the left nav options – Organic Teas, say – the resulting display is jaw-droppingly gorgeous:

Visual image of tea, name of tea, starting price for smallest amount. Beautiful.

Talk about “what you see is what you get.”

Culinary Teas has had a good product for a long time, and now they have a great website to showcase it as well. As Steve Krug writes in Don’t Make Me Think! (more on this soon), “Delight is a bit hard to pin down; it’s more one of those ‘I’ll know it when I feel it’ kind of things.” Other attributes of usability are more important: is it useful, learnable, memorable, effective, efficient, desirable? The previous version of this site had most of these usability attributes – I don’t ever remember being frustrated while using it – but it wasn’t delightful. The new version is. Three cheers for good design!

Finnish Lessons: What Can the World Learn from Educational Change in Finland?

I first heard about Finnish Lessons: What Can the World Learn from Educational Change in Finland? by Pasi Sahlberg from the review “Schools We Can Envy” by Diane Ravitch in the New York Review of Books (3/8/12), but I didn’t pick it up until December 2014. Reading Finnish Lessons was an enlightening experience, and a frustrating one. Enlightening, because Sahlberg shows how Finland developed a shared philosophy, set a goal, and achieved that goal using evidence-based research; frustrating, because the U.S. and many other countries are taking the opposite approach – competition between schools instead of cooperation, an increase in standardized testing – despite evidence that it does not work.

Underpinning Finland’s steady educational improvement since the 1970s is a set of shared philosophies:

  • All pupils can learn if they are given proper opportunities and support.
  • Understanding of and learning through human diversity is an important educational goal.
  • Schools should function as small-scale democracies.
  • The role of public education must be to educate critical and independent-thinking citizens.

The basis of Finland’s education policy is that instruction is the key element that makes a difference in what students learn in school – not standards, assessment, or alternative instructional programs. To that end, teacher education was overhauled, so that now all teachers in Finland have master’s degrees, and all principals are or have been teachers. Teachers are trusted in society, and have autonomy within their classrooms. This approach has been successful; Sahlberg writes, “What PISA surveys, in general, have revealed is that education policies that are based on the idea of equal educational opportunities and that have brought teachers to the core of educational change have positively impacted the quality of learning outcomes.”

The Finns value equity in education, “a principle that aims at guaranteeing high quality education for all in different places and circumstances.” In practice, this means that Finnish students, no matter where in the country they live, receive an equally high level of instruction and support. And within schools, “ability grouping” (also called tracking or streaming) was stopped in 1985. Instead, teachers pay attention to students who have special educational needs; Sahlberg writes, “The basic idea is that with early recognition of learning difficulties and social and behavioral problems, appropriate professional support can be provided to individuals as early as possible.” So many students receive help at one point or another during their time in school that special education is not stigmatized the way it sometimes is in the U.S.  And, as in medicine, an ounce of prevention is worth a pound of cure.

As I was writing this post, the Library Link of the Day featured the Washington Post article “Requiring kindergarteners to read – as Common Core does – may harm some” by Valerie Strauss (1/13/15). Strauss quotes from the report “Reading in Kindergarten: Little to Gain and Much to Lose” by Nancy Carlsson-Paige, Geralyn Bywater McLaughlin, and Joan Wolfsheimer Almon: “Many children are not developmentally ready to read in kindergarten. In addition, the pressure of implementing the standards leads many kindergarten teachers to resort to inappropriate didactic methods combined with frequent testing. Teacher-led instruction in kindergartens has almost entirely replaced the active, play-based, experiential learning that we know children need from decades of research in cognitive and developmental psychology and neuroscience” (emphasis mine). Why on earth are we developing new standards in the U.S. that aren’t research-based? Why are we, in fact, doing the opposite of what the research indicates we should do? Incidentally, in Finland, school doesn’t start till age 7 – but of course, there are free, high-quality preschools that most children attend before then. (Universal preschool, let alone daycare, being another thing we don’t have here.)

It’s true that Finland is a very different country from the U.S.: it has a smaller, more homogeneous population and a better social safety net for its citizens (only 4% of children in Finland live below the poverty line, compared to 20% in the U.S.). Sahlberg addresses those differences in his book, but they don’t change his main message, which he states in the introduction: “There is another way to improve education systems. This includes improving the teaching force, limiting student testing to a necessary minimum, placing responsibility and trust before accountability, and handing over school- and district-level leadership to education professionals.” In this country, we’re moving in the exact opposite direction, in spite of the fact that these strategies – teaching a prescribed curriculum, increasing standardized testing, relying on tests to measure accountability – haven’t worked in the past.

Thinking back to my own education, I know Sahlberg is right when he says “instruction is the key element.” What I remember best are my teachers: their enthusiasm, creativity, and dedication. The projects they came up with, the inventive paper topics they assigned, all of the resources they included beyond textbooks: novels and paintings and primary source documents. I remember their handwritten feedback on papers and tests, and the learning that occurred because of those comments. When you take a standardized test, no one goes over it with you afterward; you don’t know what you got right or where you made a mistake, so you can’t learn from it, you can only be anxious, or forget the experience. Students must be able to learn from their mistakes and failures; if failure only brings punishment instead of a learning opportunity, the fear of failure will become so great that students will stop trying anything creative or challenging, and their learning will become a smaller, more circumscribed thing. Are those the kind of citizens we want to produce? I don’t think so. I hope not.

Nick Hornby reads from Funny Girl in Cambridge

Despite being wildly excited for a new Hornby novel, and having read Funny Girl in galleys, I would have missed this event completely if it weren’t for my husband, who (a) discovered it was happening, and (b) called to get us tickets about three hours before the event started (after a brief, “oh no, Nick Hornby is in town tonight but it’s sold out!” panic). Usually I detest surprises of any kind, but it turns out that “Nick Hornby’s doing a reading and book signing tonight and we have tickets” is in the good surprises category.

Harvard Book Store event: Ethan Gilsdorf and Nick Hornby at the First Parish Church in Cambridge

Hornby appeared in conversation with Ethan Gilsdorf at the First Parish Church in Cambridge in an event organized by the Harvard Book Store. Serena, the book store employee who introduced him, said that his new novel Funny Girl achieved the “Nick Hornby trifecta”: smart, funny, and a little bit sad. This was the first stop on the U.S. tour, so there was the obligatory joke about snow, and didn’t he wish he’d started in California and worked in the other direction? (Snow didn’t stop fans from attending; it was a pretty full house.)

The U.S. cover of Funny Girl.

Gilsdorf asked Hornby what intrigued him about the time period and setting (London in the 1960s), and Hornby answered that his interest was born out of his work on An Education, which ended in 1963, and a conversation he’d had with actress Rosamund Pike. “Beautiful women are not often allowed to be comediennes,” Hornby realized, and thus the fictional British Lucille Ball – the main character of Funny Girl – was born. He also wanted to write about “the joy of collaborative work…the joy of conversation about lines in movies.” Hornby said, “All this fierce intelligence goes into popular entertainment,” and that statement more than any other might best sum up Funny Girl, whose characters believe at their cores in the value of comedy.

Did Hornby think of Funny Girl as historical fiction, a sort of alternate reality? He worked backward from the ending, a coda in which all the characters are in their 70s, looking back on their careers. If that was in the present, then their early careers would have been around 1964, ’65 – where An Education ended. The 1960s “was a really fantastic time for television…if you had a hit show then, everybody watched it.” Funny Girl, Hornby said, is more about the birth, life, and death of a hit show than about any one character; having read the book, I have to agree. Sophie herself admits, “It was always about the work. She’d never been in love with Clive, but she’d been in love with the show since the very first day.”

Funny Girl includes images of ephemera along with the text: real photos from the time, mock scripts from the TV show in the book – called Barbara (and Jim) – and other bits and pieces. “Why don’t novels have photographs? Why shouldn’t they?” Hornby asked. He even included a cover for one of his character’s books, which a designer at Penguin created.

The U.K. cover of Funny Girl.

Though the book is mostly about Sophie – the eponymous funny girl – Hornby chose to read a section about her co-star, Clive. He read from the U.K. edition, and bent the cover back in a way that was almost physically painful to see (my husband saw the expression on my face and wondered if Hornby was bleeding. No, I said, the book is hurting). After reading, he talked again about Lucille Ball. “We didn’t have anything like that” in England…”Monty Python was so male.” He enjoyed creating an alternate history, giving England a comedienne. “I’ve done my bit,” he said, getting a laugh from the audience.

Next, Hornby and Gilsdorf talked about Hornby’s screen adaptation of Cheryl Strayed’s memoir Wild. Some people had expressed doubts about a man writing the screenplay, and Hornby admitted he was neither a hiker nor a woman. (Never mind that men have been writing female characters, and vice versa, for centuries.) But “it’s a book about being unable to hike…written with a liberal arts sensibility.” Furthermore, “it was a memoir, there [on the page] was the woman’s head!” In the book, Strayed reveals much in the first chapter, but Hornby decided to make it “an emotional mystery” so viewers “see the damage this grief has caused.”

Gilsdorf asked about the romanticized life of the writer, referencing a section of Hornby’s website, “An Average Day.” Hornby’s advice to writers is, first, “if you can stop [writing], stop,” and second, “if you can write 500 words a day…do the maths…we should all be capable of writing a book a year.” (I’m not a “maths” person, but this is the approach I take in my fiction writing as well; for my first manuscript, I set the bar even lower, at 250 words a day, then bumped it to 500. For my second manuscript, I aimed for 750 words a day. It really does add up.) Hornby mentioned the Freedom app to block the internet, though “it’s ridiculous you have to pay somebody money to stop you doing something you’re paying for…”
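Hornby’s maths do add up. As a sketch (the 80,000-word target for a “typical novel” draft is my assumption, not a figure from the talk):

```python
def days_to_draft(words_per_day: int, target_words: int = 80_000) -> int:
    """Days of steady output needed to reach a draft of target_words."""
    return -(-target_words // words_per_day)  # ceiling division

print(days_to_draft(500))  # 160 days -- well under a year
print(days_to_draft(250))  # 320 days
```

At 500 words a day you finish a draft with months to spare for revision, which is presumably what Hornby meant by “a book a year.”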

The audience questions were a varied lot, including far more questions about football (soccer) than I’ve ever heard at an author event. But the first question concerned not Arsenal but the Spree, Hornby’s name for the editors at The Believer, where he writes his “Stuff I’ve Been Reading” column. Vendela Vida first asked if he would write a music column, but he’d just finished writing about music for The New Yorker and wanted to write about reading instead, “about how one thing leads to another.” (Incidentally, the running joke about the Spree is one of my favorite running jokes in print. Though I’m not sure how many others there are.)

Another audience member asked about Hornby’s collaboration with Ben Folds, about which I knew nothing, and which I will now track down. Someone else asked if he had thought about Barbra Streisand when choosing the title – the working title was Miss Blackpool – and Hornby replied that Funny Girl is “one adjective and one of the most common nouns in existence…I don’t think [Streisand] can copyright it.” Another person asked if Hornby had researched band websites when he was inventing the Tucker Crowe sites in Juliet, Naked; Hornby replied that he’d done no research for the book, but had looked at about 900 band websites for “my own purposes.” With that kind of “intense personal engagement,” he said, “you can only ever disappoint creatively.”

One interesting question was about stumbling blocks in publishing for writers whose first language wasn’t English. “The bar is raised higher,” Hornby admitted, but “so many amazing writers [e.g. Aleksandar Hemon] have written novels not in their first language…The biggest obstacle to being a writer is the job itself, not the language.” Someone else asked what book or author had had the most influence on him; the answer was Anne Tyler, particularly Dinner at the Homesick Restaurant, for her warmth, accessibility, and emotional intelligence. “I can’t be Salman Rushdie,” Hornby said (and thank goodness for that), but Anne Tyler showed him there was another kind of book, one he could try to write. The last question was about tone, and whether the sense of humor in Hornby’s books was a conscious addition. “It is self-expression,” Hornby said. “It’s how it comes out.” The difficult part is compressing the parts that are easy to write – dialogue, in his case – so that every scene moves the story forward.

My signed 1995 paperback of High Fidelity.

In my excitement about this event, I only grabbed novels before leaving the house (High Fidelity and Juliet, Naked), forgetting the essays entirely (I’ve got The Polysyllabic Spree and Housekeeping vs. the Dirt, and my friend Josh has Shakespeare Wrote for Money; we bought all three together when McSweeney’s had a deal several years ago). When I reached the front of the signing line, I remembered to compliment the author on the track list for Juliet, Naked, which I’ve admired since I first read it for being able to tell so much story in so few words. (I also said the “top five” thing. Come on, wouldn’t you?)

Five Librarian Bloggers to Follow

I’m honored to be included in this Information Today feature by Brandi Scardilli, “Five Librarian Bloggers to Follow,” along with David Lee King, Justin Hoenke (a.k.a. Justin the Librarian), Rita Meade (a.k.a. Screwy Decimal), and Joe Hardenbrook (a.k.a. Mr. Library Dude). Please read Brandi’s article and check out some of my fellow librarian bloggers’ sites!

I’ll just be over here, wishing I’d come up with a cleverer name for this blog when I started it six years ago.

Privacy in a Networked World, III

This is the third and last post about Privacy in a Networked World. See the first post (Snowden and Schneier) here and the second post (John DeLong and John Wilbanks) here.

“The Mete and Measure of Privacy,” Cynthia Dwork, Microsoft Research

This was probably the presentation I was least able to follow well, so I’ll do my best to recap in English-major language; please feel free to suggest corrections in the comments. Dwork talked about the importance of being able to draw conclusions about a whole population from a representative data set while maintaining the confidentiality of the individuals in the data set. “Differential privacy” means the outcome of data analysis is equally likely independent of whether any individual does or doesn’t join the data set; this “equally likely” can be measured/represented by epsilon, with a smaller value being better (i.e. less privacy loss). An epsilon registry could then be created to help us better understand cumulative privacy loss.
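A concrete (toy) illustration of the idea – not Dwork’s actual formalism, just the classic Laplace mechanism applied to a simple counting query:

```python
import random

def private_count(values, predicate, epsilon: float) -> float:
    """Answer a counting query with the Laplace mechanism.

    A count has sensitivity 1: one individual joining or leaving the
    data set changes the true answer by at most 1. Adding Laplace noise
    with scale 1/epsilon therefore gives epsilon-differential privacy;
    a smaller epsilon means more noise and less privacy loss per query.
    """
    true_count = sum(1 for v in values if predicate(v))
    # The difference of two iid Exponential(epsilon) draws is
    # Laplace-distributed with scale 1/epsilon.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise
```

With a small epsilon, the noisy answer barely depends on whether any one individual opted in – which is exactly the “equally likely” guarantee described above, and the per-query epsilon is the quantity an epsilon registry would accumulate.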

Dwork also talked about targeted advertising. Companies who say “Your privacy is very important to us” have “no idea what they’re talking about” – they don’t know how to (and may have little interest in) keeping your data private. And when you hear “Don’t worry, we just want to advertise to you,” remember – your advertiser is not your friend. Advertisers want to create demand where none exists for their own benefit, not for yours. If an advertiser can pinpoint your mood, they may want to manipulate it (keeping you sad, or cheering you up when you are sad for a good reason). During this presentation, someone posted a link on Twitter to this article from The Atlantic, “The Internet’s Original Sin,” which is well worth a read.

Dwork quoted Latanya Sweeney, who asked, “Computer science got us into this mess. Can computer science get us out of it?” Dwork’s differential privacy is one attempt to simplify and solve the problem of privacy loss. Slides from a different but similar presentation are available through SlideServe.

“Protecting Privacy in an Uncertain World,” Betsy Masiello, Senior Manager, Global Public Policy, Google

Masiello’s talk focused on what Google is doing to protect users’ privacy. “It’s hard to imagine that Larry and Sergey had any idea what they were building,” she began. Today, “Everything is mobile…everything is signed in.” Services like Google Now send you a flow of relevant information, from calendar reminders to traffic to weather. In addition to Google, “the average user has 100 accounts online.” It’s impossible to remember that many passwords, especially if they’re good passwords; and even if they’re good passwords, your accounts still aren’t really safe (see Mat Honan’s 2012 article for Wired, “Kill the Password: Why a String of Characters Can’t Protect Us Anymore”).

To increase security, Google offers two-factor authentication. (You can find out what other sites offer 2FA by checking out twofactorauth.org. Dropbox, Skype, many – but not all – banks, Facebook, LinkedIn, Tumblr, and Twitter all support 2FA.) Masiello said that after news of hacks, they see more people sign up for 2FA. “It’s an awareness problem,” she said. In addition to 2FA, Google is encrypting its services, including Gmail (note that the URLs start with https). “E-mail is still the most common way people send private information,” she said, and as such deserves protection.
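As an aside for the curious: the rotating six-digit codes that most 2FA apps display are generated client-side with the TOTP algorithm of RFC 6238, layered on RFC 4226’s HOTP. A minimal sketch, not any particular provider’s implementation:

```python
import base64
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over an 8-byte big-endian counter."""
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, period: int = 30) -> str:
    """RFC 6238 TOTP: HOTP over the current 30-second time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    return hotp(key, int(time.time()) // period)
```

The shared secret is the QR code you scan when enrolling; after that, both your phone and the server can derive the same short-lived code without the code itself ever crossing the network.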

“Encryption is the 21st century way of protecting our personal information,” said Masiello. Governments have protested companies who have started using encryption, but “governments have all the tools they need to obtain information legally.” As Cory Doctorow has pointed out many times before, it’s impossible to build a back door that only the “good guys” can walk through. Masiello said, “Governments getting information to protect us doesn’t require mass surveillance or undermining security designed to keep us safe.” The PRISM revelations “sparked a very important debate about privacy and security online.” Masiello believes that we can protect civil liberties and national security, without back doors or mass surveillance.

“Getting security right takes expertise and commitment,” Masiello said. She mentioned the paper “Breaking the Web” by Anupam Chander and Uyen P. Le, and said that we already have a good set of guidelines: the OECD Privacy Principles, which include collection limitation, data quality, purpose specification, use limitation, security safeguards, openness, individual participation, and accountability. As for Google, Masiello said, “We don’t sell user data; we don’t share with third parties.” All of the advertising revenue is based on user searches, and it’s possible to opt out of interest-based ads. (Those creepy right-sidebar ads that used to show up in Gmail, having mined your e-mail to produce relevant ads, appear to be gone. And good riddance.)

Finally, Masiello talked about developing industry standards for privacy and security that would facilitate innovation and competition. But privacy isn’t even the main concern in the future: it’s identity – what it means, and how we construct it.

“Sur-veillance, Sous-veillance and Co-veillance,” Lee Rainie, Director of Internet, Science and Technology Research, Pew Research Center

Lee Rainie definitely walked away with the “Most Neutral, Fact-Based Presentation” Award. Rainie has spoken at library conferences in New England before, but I – perhaps unwisely – chose to go to other sessions, so this was the first time I saw him speak, and he was great. Furthermore, all of his slides are available on SlideShare. He started off with a few findings:

1. Privacy is not binary / context matters

2. Personal control / agency matters

3. Trade-offs are part of the bargain

4. The young are more focused on network privacy than their elders (this is only surprising if you haven’t read danah boyd’s excellent It’s Complicated: The Social Lives of Networked Teens, and in fact Rainie quoted boyd a few slides later: “The new reality is that people are ‘public by default and private by effort.'”)

5. Many know that they don’t know what’s going on

6. People are growing hopeless and their trust is failing

The Pew Research Center has found that consumers feel they have lost control over how companies collect and use their data; they have adopted a transactional frame of mind (e.g. giving up control of personal data in exchange for the use of a platform or service). In general, trust in most institutions has gone down, with the exceptions of the military, firefighters, and librarians(!). But there is a pervasive sense of vulnerability, and users want anonymity – mostly from advertisers, hackers, and social connections, rather than the government (see slide below).

Lee Rainie, slide 30, "Who users try to avoid: % of adult users who say they have used the internet in ways to avoid being observed or seen by..."

This slide supports the argument for privacy, especially against the “nothing to hide” argument: people desire – and deserve – privacy for many reasons, the least of which is to avoid the government or law enforcement. (Mostly, someone on Twitter pointed out, we want to avoid “that guy.”)

As for the future of privacy, people are feeling “hopeless.” Rainie remembered saying, in the early 2000s, “There’s going to be an Exxon-Valdez of data spills…” and there have been many since then, but little has been done to protect consumer privacy. “How do we convince people to have hope?” he asked.

Panel: “What Privacy Does Society Demand Now and How Much is New?” Danny Weitzner (moderator), Kobbi Nissim, Nick Sinai, Latanya Sweeney

Fortunately, the moderator and panelists have different initials. The questions and responses below are paraphrased from the notes I took during the panel session.

DW: What sort of privacy does society demand now? Is privacy different now?

NS: Access to your own data has always been a part of privacy; also the right to correct, erase, and transfer. Your data should be usable and portable.

KN: The ability to collect a lot of data all the time is new. There is a different balance of power (companies have too much).

LS: Privacy and security are just the beginning. Every American value is being changed by technology. Computer scientists aren’t trained to think of social science effects and the power of technology design.

DW: Cryptography and math are a foundation we can trust if implemented properly, as Snowden said this morning.

LS: I dislike choosing between two things. We need a cross-disciplinary approach, a blended approach.

NS: Any great company should constantly be trying to improve user experience. How does privacy/security get integrated into design?

KN: Aim for mathematical solutions/foundations. We need to re-architect economic incentives, regulations, how all the components work together.

DW: Where will the leadership and initiative come from? Government?

KN: Academia, research. We need to find ways to incentivize.

LS: Economic [incentives] or regulations are necessary for privacy by design. They’re all collapsing…every single one of them [Facebook, the IRS] is heading for a major disaster.

DW: People care about control of their data, yet the information environment is increasingly complicated.

LS: Society benefits from technology with certain protections.

KN: Regulations we have today were designed in a completely different era. We may be in compliance, and still we have damaged privacy severely.

LS mentioned HIPAA, NS mentioned the Consumer Bill of Rights, DW mentioned “Privacy on the Books and on the Ground.”

DW: Privacy practices and discussion are/is evolving in the U.S.

LS: A huge dose of transparency would go a long way. This is the new 1776. It’s a whole new world. Technology is redefining society. The Federal Trade Commission could become the Federal Technology Commission.

DW: Are you optimistic? Are we heading toward a positive sense of privacy?

NS: Yes, by nature I’m optimistic, but complexity and user experience (user accounts, passwords) frustrates me. Entrepreneurs do help change the world.

KN: The genie is out of the bottle. This forces us to rethink privacy. Nineteen-fifties privacy has changed and isn’t the privacy we have today, but that doesn’t mean that privacy is dead. Privacy is a sword and a shield.

DW: We’re at the beginning of a long cycle. It’s only been a year [and a half] since Snowden. What do we expect from our government and our companies? How powerful should government and private organizations be? Marketing/advertising issues are trivial compared to bigger issues.

LS: The cost of collecting data is almost zero, so organizations (public and private) collect it and then figure out how to use it later. They should be more selective about collection. If we can expose the harm, it will lead to change.

Question/comment from audience: A lot of people are not aware they’re giving away their privacy (when browsing the internet, etc.).

LS: We need transparency.

NS: We need regulation and consumer protection.

Privacy in a Networked World, II

This is the second post about Privacy in a Networked World. The first post, about the conversation between Bruce Schneier and Edward Snowden, is here.

“Privacy in a Networked World,” John DeLong, Director of the Commercial Solutions Center, NSA

Other than the length and volume of applause, it’s difficult to measure an audience’s attitude toward a speaker. I’ll venture, though, that the audience of Privacy in a Networked World was generally pro-Snowden; the attitude toward John DeLong can perhaps be characterized as guarded open-mindedness laced with a healthy dose of skepticism.

DeLong’s talk was both forceful and defensive; he wanted to set the record straight about certain things, but he also knew that public opinion (in that room, at least) probably wasn’t in his favor. (He said repeatedly that he did not want to have an “Oxford-style debate,” though his talk wasn’t set up as a debate in the first place.) “Let’s not confuse the recipe with the cooking,” he said, in a somewhat belabored analogy where the NSA’s work was the cooking and the law was the recipe. (I cook a lot at home, and I’ll just say, I can usually tell when I’m looking at a bad recipe, and opt to adapt it or not make it at all.)

DeLong quoted at length from Geoffrey R. Stone’s “What I Told the NSA.” (Stone was a member of the President’s Review Group in fall 2013, after the Snowden revelations.) Stone’s conclusions were not altogether positive; he found that while the NSA “did its job,” many of its programs were “highly problematic and much in need of reform.” But it’s the Executive Branch, Congress, and FISA who authorized those programs and are responsible for reforming them. Stone added, “Of course, ‘I was only following orders’ is not always an excuse….To be clear, I am not saying that citizens should trust the NSA. They should not. Distrust is essential to effective democratic governance.”

DeLong said, “The idea that the NSA’s activities were unauthorized is wrong, wrong in a magnificent way.” He emphasized that the NSA is not a law enforcement agency, it’s an intelligence agency. He spoke in favor of people with different backgrounds and expertise – lawyers, engineers, mathematicians, privacy experts, etc. – coming together to work out solutions to problems, with respect for each others’ abilities. “Technology,” he said, “always comes back to how we as humans use it.” At present, “We do not have technologies that identify privacy risks….Privacy engineering could be one of the most important engineering feats of our time.”

DeLong talked about rebuilding the nation’s confidence in the NSA. “Confidence is the residue of promises kept,” he said. “More information does not necessarily equal more confidence.” (Someone on Twitter pointed out that much depends on the content of the information.) The talk was a good reminder not to villainize the entire NSA; part of DeLong’s forcefulness was undoubtedly on behalf of his co-workers and staff, who he felt were unfairly maligned. And technology that could identify privacy risks, built by people with different perspectives and backgrounds, would be excellent. But do we need technology that identifies privacy risks, or do we need stronger oversight and better common sense? Mass surveillance erodes trust in government and hasn’t been terribly effective; what more do we need to know to put a stop to it?

“Privacy and Irony in Digital Health Data,” John Wilbanks, Chief Commons Officer, Sage Bionetworks

John Wilbanks gave a fast-paced, interesting talk about health data. The “irony” in the title of his talk soon became clear when he gave the example of Facebook’s mood manipulation experiment compared to a study of Parkinson’s disease. The sample size for Facebook was many times larger, with a constant flow of information from “participants,” as opposed to a much smaller sample population who filled out a survey and answered questions by phone. “What does our society value?” Wilbanks asked. This question can be answered by another question: “What do we surveil?”

Wilbanks showed a graph representing cardiovascular disease and terrorism: there is 1 death every 33 seconds from cardiovascular disease – “That’s like 9/11 every day” – and yet there’s not nearly the same kind of “surveillance” for health that there is for terrorism. Participating in a research study, Wilbanks said, is like “volunteering for surveillance,” and usually the mechanisms for tracking aren’t as comprehensive as, say, Facebook’s. Of course, privacy laws affect health research, and informed consent protects people by siloing their data; once the study is concluded, other researchers can’t use that data, and there’s no “network effect.”

Informed consent, while a good idea in theory, often leads to incomprehensible documents (much like Terms of Service). These documents are written by doctors, reviewed by lawyers, and edited by committee. Furthermore, said Wilbanks, people in health care don’t usually understand issues of power and data. So, he asked, how do we run studies at internet scale and make them recombinant? How do we scale privacy alongside the ability to do research? Wilbanks demonstrated some ideas to improve on traditional informed consent, which could also allow research subjects to get a copy of their own data and see which researchers are using data from the research in which they participated.

Obviously there are risks to individuals who share their personal health data, but there can be advantages too: more scientists having access to more data and doing more research can lead to more breakthroughs and improvements in the field of medicine.

Last year, Wilbanks talked about privacy and health data on NPR; you can listen to the segment here.

Still to come: Microsoft, Google, Pew, and a panel on “What Privacy Does Society Demand Now and How Much is New?” 

Privacy in a Networked World

This is the first post about Privacy in a Networked World, the Fourth Annual Symposium on the Future of Computation in Science and Engineering, at Harvard on Friday, January 23.

A Conversation between Bruce Schneier and Edward Snowden (video chat)

Bruce Schneier is a fellow at the Berkman Center for Internet & Society, and the author of Data and Goliath. Edward Snowden was a sysadmin at the NSA who revealed the extent of the government’s mass surveillance. The conversation was recorded (no joke) and is available on YouTube.

I have to say it was an incredibly cool feeling when Snowden popped up on the giant screen and was there in the room with us. There was sustained applause when he first appeared and also at the end of the conversation, when he was waving goodbye. Schneier started by asking Snowden about cryptography: What can and can’t be done? Snowden replied, “Encryption…is one of the few things that we can rely on.” When implemented properly, “encryption does work.” Poor cryptography, either through bad implementation or a weak algorithm, means weak security. End points are also weak spots, even if the data in transit is protected; it’s easier for an attacker to get around crypto than to break it.

Snowden pointed out a shift in the NSA’s focus over the last ten years from defense to offense. He encouraged us to ask Why? Is this proper? Appropriate? Does it benefit or serve the public?

The explosion in “passive” mass surveillance (collecting everything in case it’s needed later) is partly because it’s easy, cheap, and simple. If more data is encrypted, it becomes harder to sweep up, and hackers (including the NSA) who use more “active” techniques run a higher risk of exposure. The NSA’s “hunger for risk has greatly increased” during the War on Terror era, Snowden said, and its choices of targets are “crazy, unjustified….If they were truly risk averse they wouldn’t be doing this…it’s unlawful.”

Snowden said that the NSA “is completely free from any meaningful judicial oversight…in this environment, a culture of impunity develops.” Schneier said there were two kinds of oversight: tactical oversight within the organization (“did we follow the rules?”) and oversight from outside of the organization (“are these the right rules?”). He asked, “What is moral in our society?”

Snowden asked if the potential intelligence that we gain was worth the potential cost. He stated that reducing trust in the American infrastructure is a costly move; the information sector is crucial to our economy. The decrease in trust, he said, has already cost us more than the NSA’s budget. “They are not representing our interests.”

Schneier, using his NSA voice, said, “Corporations are spying on the whole internet, let’s get ourselves a copy!” (This was much re-tweeted.) “Personal information,” he said, “is the currency by which we buy our internet.” (Remember, if you can’t tell what the product is, you’re the product.) It’s “always amusing,” he said, when Google complains about the government spying on their users, because “it’s our job to spy on our users!” However, Schneier thinks that the attitudes of tech companies and standards bodies are changing.

These silos of information were too rich and interesting for governments to ignore, said Snowden, and there was no cost to scooping up the data because until 2013, “people didn’t realize how badly they were being sold up the river.” Schneier said that research into privacy-preserving technologies might increase now that there is more interest. Can we build a more privacy-preserving network, with less metadata?

“We’ve seen that the arguments for mass surveillance” haven’t really held up; there is little evidence that it has stopped many terrorist attacks. Schneier cited an article from the January 26, 2015 edition of The New Yorker, “The Whole Haystack,” in which author Mattathias Schwartz lists several recent terrorist attacks, and concludes, “In each of these cases, the authorities were not wanting for data. What they failed to do was appreciate the significance of the data they already had.”

Unlike during the Cold War, now “we all use the same stuff”: we can’t attack their networks and defend our networks, because it’s all the same thing. Schneier said, “Every time we hoard a zero-day opportunity [knowing about a security flaw], we’re leaving ourselves vulnerable to attack.”


Snowden was a tough act to follow, especially for John DeLong, Director of the Commercial Solutions Center for the NSA, but that’s exactly who spoke next. Stay tuned.