Don’t Make Me Think! by Steve Krug

Cover of Don’t Make Me Think by Steve Krug, 2014 edition

I first heard about Don’t Make Me Think! by Steve Krug in grad school, but as William Goldman wrote in The Princess Bride, “What with one thing and another, three years passed.” (Actually, it may even have been four years; long enough, anyway, for a new edition to be published, so you see, every now and then procrastination pays off.)

That said, I highly recommend you make this book the next one you read. Don’t Make Me Think! is about usability, and specifically about usability as it pertains to websites (and now mobile sites and apps as well). While usability has many attributes – a website may be useful, learnable, memorable, effective, efficient, desirable, delightful – Krug’s definition of usability is as follows:

“A person of average (or even below average) ability and experience can figure out how to use the thing to accomplish something without it being more trouble than it’s worth.”

Krug’s writing is accessible, clear, funny, and peppered with relevant examples and illustrations; he cites many sources, including Jakob Nielsen, Don Norman (author of the excellent The Design of Everyday Things), and Ginny Redish (author of Letting Go of the Words). He explodes the myth of “the average user” (“All web users are unique and all web use is basically idiosyncratic”) and shows the value of usability testing as a way forward when designers and developers don’t agree. Krug writes, “Usability testing tends to defuse most arguments and break impasses by moving the discussion away from the realm of what’s right or wrong and what people like or dislike and into the realm of what works or doesn’t work. And by opening our eyes to just how varied users’ motivations, perceptions, and responses are, testing makes it hard to keep thinking that all users are like us.”

In addition to explaining why usability is important, Krug suggests some specific guidelines. For example, format text on your site to support scanning by:

  • using plenty of headings
  • keeping paragraphs short
  • using bulleted lists
  • highlighting key terms

Krug highlights the importance of site navigation, which, as he sees it, has three important functions:

  • It tells us what’s here (“Navigation reveals content!”)
  • It tells us how to use the site
  • It gives us (the user) confidence in the people who built [the site]

Krug also advises using clear language – no specialized jargon or cutesy labels – and making the information you know people will be looking for, like contact information, available in a logical place. Ultimately, “Usability is about serving people better by building better products.”

Privacy in a Networked World, III

This is the third and last post about Privacy in a Networked World. See the first post (Snowden and Schneier) here and the second post (John DeLong and John Wilbanks) here.

“The Mete and Measure of Privacy,” Cynthia Dwork, Microsoft Research

This was probably the presentation I was least able to follow well, so I’ll do my best to recap in English-major language; please feel free to suggest corrections in the comments. Dwork talked about the importance of being able to draw conclusions about a whole population from a representative data set while maintaining the confidentiality of the individuals in the data set. “Differential privacy” means the outcome of data analysis is equally likely independent of whether any individual does or doesn’t join the data set; this “equally likely” can be measured/represented by epsilon, with a smaller value being better (i.e. less privacy loss). An epsilon registry could then be created to help us better understand cumulative privacy loss.
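To make the epsilon idea a bit more concrete, here is a minimal sketch of the most common differential-privacy mechanism: answering a counting query with Laplace noise added. The data, function names, and epsilon value are my own illustration, not from Dwork’s talk.

```python
import random

def laplace_noise(scale):
    """One sample from a Laplace(0, scale) distribution
    (the difference of two independent exponentials)."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon):
    """Answer a counting query with epsilon-differential privacy.

    A count has sensitivity 1: adding or removing any one person changes
    the true answer by at most 1, so Laplace noise with scale 1/epsilon
    hides each individual's presence. Smaller epsilon = more noise =
    less privacy loss per query.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical data set: ages of ten survey participants.
ages = [34, 29, 41, 62, 55, 38, 47, 30, 71, 26]
print(private_count(ages, lambda age: age > 40, epsilon=0.5))
```

Running the same query again roughly doubles the cumulative privacy loss – the epsilons add up – which is exactly the kind of bookkeeping an epsilon registry would make visible.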

Dwork also talked about targeted advertising. Companies that say “Your privacy is very important to us” have “no idea what they’re talking about” – they don’t know how to keep your data private (and may have little interest in doing so). And when you hear “Don’t worry, we just want to advertise to you,” remember – your advertiser is not your friend. Advertisers want to create demand where none exists for their own benefit, not for yours. If an advertiser can pinpoint your mood, they may want to manipulate it (keeping you sad, or cheering you up when you are sad for a good reason). During this presentation, someone posted a link on Twitter to this article from The Atlantic, “The Internet’s Original Sin,” which is well worth a read.

Dwork quoted Latanya Sweeney, who asked, “Computer science got us into this mess. Can computer science get us out of it?” Dwork’s differential privacy is one attempt to simplify and solve the problem of privacy loss. Slides from a different but similar presentation are available through SlideServe.

“Protecting Privacy in an Uncertain World,” Betsy Masiello, Senior Manager, Global Public Policy, Google

Masiello’s talk focused on what Google is doing to protect users’ privacy. “It’s hard to imagine that Larry and Sergey had any idea what they were building,” she began. Today, “Everything is mobile…everything is signed in.” Services like Google Now send you a flow of relevant information, from calendar reminders to traffic to weather. In addition to Google, “the average user has 100 accounts online.” It’s impossible to remember that many passwords, especially if they’re good passwords; and even if they’re good passwords, your accounts still aren’t really safe (see Mat Honan’s 2012 article for Wired, “Kill the Password: Why a String of Characters Can’t Protect Us Anymore”).

To increase security, Google offers two-factor authentication. (You can find out what other sites offer 2FA by checking out twofactorauth.org. Dropbox, Skype, many – but not all – banks, Facebook, LinkedIn, Tumblr, and Twitter all support 2FA.) Masiello said that after news of hacks, they see more people sign up for 2FA. “It’s an awareness problem,” she said. In addition to 2FA, Google is encrypting its services, including Gmail (note that the URLs start with https). “E-mail is still the most common way people send private information,” she said, and as such deserves protection.
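For the curious: the six-digit codes most 2FA apps generate are time-based one-time passwords (TOTP, RFC 6238) – an HMAC of a shared secret and the current 30-second time window. Here is a minimal sketch using only the Python standard library; the secret below is a made-up example, since a real one is provisioned when you enable 2FA.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, digits=6, period=30):
    """Time-based one-time password (RFC 6238) using HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period           # current 30-second window
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Hypothetical shared secret (base32); a real one comes from the QR code
# a service shows you when you turn on two-factor authentication.
print(totp("JBSWY3DPEHPK3PXP"))
```

Because the code changes every 30 seconds and is derived from a secret that never travels with your password, a stolen password alone is no longer enough to log in.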

“Encryption is the 21st century way of protecting our personal information,” said Masiello. Governments have protested companies who have started using encryption, but “governments have all the tools they need to obtain information legally.” As Cory Doctorow has pointed out many times before, it’s impossible to build a back door that only the “good guys” can walk through. Masiello said, “Governments getting information to protect us doesn’t require mass surveillance or undermining security designed to keep us safe.” The PRISM revelations “sparked a very important debate about privacy and security online.” Masiello believes that we can protect civil liberties and national security, without back doors or mass surveillance.

“Getting security right takes expertise and commitment,” Masiello said. She mentioned the paper “Breaking the Web” by Anupam Chander and Uyen P. Le, and said that we already have a good set of guidelines: the OECD Privacy Principles, which include collection limitation, data quality, purpose specification, use limitation, security safeguards, openness, individual participation, and accountability. As for Google, Masiello said, “We don’t sell user data; we don’t share with third parties.” All of the advertising revenue is based on user searches, and it’s possible to opt out of interest-based ads. (Those creepy right-sidebar ads that used to show up in Gmail, having mined your e-mail to produce relevant ads, appear to be gone. And good riddance.)

Finally, Masiello talked about developing and setting industry standards for privacy and security that would facilitate innovation and competition. But privacy isn’t even the main concern in the future: it’s identity – what it means, and how we construct it.

“Sur-veillance, Sous-veillance and Co-veillance,” Lee Rainie, Director of Internet, Science and Technology Research, Pew Research Center

Lee Rainie definitely walked away with the “Most Neutral, Fact-Based Presentation” Award. Rainie has spoken at library conferences in New England before, but I – perhaps unwisely – chose to go to other sessions, so this was the first time I saw him speak, and he was great. Furthermore, all of his slides are available on SlideShare. He started off with a few findings:

1. Privacy is not binary / context matters

2. Personal control / agency matters

3. Trade-offs are part of the bargain

4. The young are more focused on network privacy than their elders (this is only surprising if you haven’t read danah boyd’s excellent It’s Complicated: The Social Lives of Networked Teens, and in fact Rainie quoted boyd a few slides later: “The new reality is that people are ‘public by default and private by effort.'”)

5. Many know that they don’t know what’s going on

6. People are growing hopeless and their trust is failing

The Pew Research Center has found that consumers have lost control over how companies collect and use their data; in response, they have adopted a transactional frame of mind (e.g. giving up control of personal data in exchange for the use of a platform or service). In general, trust in most institutions has gone down, with the exceptions of the military, firefighters, and librarians(!). But there is a pervasive sense of vulnerability, and users want anonymity – mostly from advertisers, hackers, and social connections, rather than the government (see slide below).

Lee Rainie, slide 30, “Who users try to avoid: % of adult users who say they have used the internet in ways to avoid being observed or seen by…”

This slide supports the case for privacy, especially against the “nothing to hide” argument: people desire – and deserve – privacy for many reasons, and avoiding the government or law enforcement is the least of them. (Mostly, someone on Twitter pointed out, we want to avoid “that guy.”)

As for the future of privacy, people are feeling “hopeless.” Rainie remembered saying, in the early 2000s, “There’s going to be an Exxon-Valdez of data spills…” and there have been many since then, but little has been done to protect consumer privacy. “How do we convince people to have hope?” he asked.

Panel: “What Privacy Does Society Demand Now and How Much is New?” Danny Weitzner (moderator), Kobbi Nissim, Nick Sinai, Latanya Sweeney

Fortunately, the moderator and panelists have different initials. The questions and responses below are paraphrased from the notes I took during the panel session.

DW: What sort of privacy does society demand now? Is privacy different now?

NS: Access to your own data has always been a part of privacy; also the right to correct, erase, and transfer. Your data should be usable and portable.

KN: The ability to collect a lot of data all the time is new. There is a different balance of power (companies have too much).

LS: Privacy and security are just the beginning. Every American value is being changed by technology. Computer scientists aren’t trained to think of social science effects and the power of technology design.

DW: Cryptography and math are a foundation we can trust if implemented properly, as Snowden said this morning.

LS: I dislike choosing between two things. We need a cross-disciplinary approach, a blended approach.

NS: Any great company should constantly be trying to improve user experience. How does privacy/security get integrated into design?

KN: Aim for mathematical solutions/foundations. We need to re-architect economic incentives, regulations, how all the components work together.

DW: Where will the leadership and initiative come from? Government?

KN: Academia, research. We need to find ways to incentivize.

LS: Economic [incentives] or regulations are necessary for privacy by design. They’re all collapsing…every single one of them [Facebook, the IRS] is heading for a major disaster.

DW: People care about control of their data, yet the information environment is increasingly complicated.

LS: Society benefits from technology with certain protections.

KN: Regulations we have today were designed in a completely different era. We may be in compliance, and still we have damaged privacy severely.

LS mentioned HIPAA, NS mentioned the Consumer Bill of Rights, DW mentioned “Privacy on the Books and on the Ground.”

DW: Privacy practices and discussion are/is evolving in the U.S.

LS: A huge dose of transparency would go a long way. This is the new 1776. It’s a whole new world. Technology is redefining society. The Federal Trade Commission could become the Federal Technology Commission.

DW: Are you optimistic? Are we heading toward a positive sense of privacy?

NS: Yes, by nature I’m optimistic, but complexity and user experience (user accounts, passwords) frustrates me. Entrepreneurs do help change the world.

KN: The genie is out of the bottle. This forces us to rethink privacy. Nineteen-fifties privacy has changed and isn’t the privacy we have today, but that doesn’t mean that privacy is dead. Privacy is a sword and a shield.

DW: We’re at the beginning of a long cycle. It’s only been a year [and a half] since Snowden. What do we expect from our government and our companies? How powerful should government and private organizations be? Marketing/advertising issues are trivial compared to bigger issues.

LS: The cost of collecting data is almost zero, so organizations (public and private) collect it and then figure out how to use it later. They should be more selective about collection. If we can expose the harm, it will lead to change.

Question/comment from audience: A lot of people are not aware they’re giving away their privacy (when browsing the internet, etc.).

LS: We need transparency.

NS: We need regulation and consumer protection.


Privacy in a Networked World, II

This is the second post about Privacy in a Networked World. The first post, about the conversation between Bruce Schneier and Edward Snowden, is here.

“Privacy in a Networked World,” John DeLong, Director of the Commercial Solutions Center, NSA

Other than the length and volume of applause, it’s difficult to measure an audience’s attitude toward a speaker. I’ll venture, though, that the audience of Privacy in a Networked World was generally pro-Snowden; the attitude toward John DeLong can perhaps be characterized as guarded open-mindedness laced with a healthy dose of skepticism.

DeLong’s talk was both forceful and defensive; he wanted to set the record straight about certain things, but he also knew that public opinion (in that room, at least) probably wasn’t in his favor. (He said repeatedly that he did not want to have an “Oxford-style debate,” though his talk wasn’t set up as a debate in the first place.) “Let’s not confuse the recipe with the cooking,” he said, in a somewhat belabored analogy where the NSA’s work was the cooking and the law was the recipe. (I cook a lot at home, and I’ll just say, I can usually tell when I’m looking at a bad recipe, and opt to adapt it or not make it at all.)

DeLong quoted at length from Geoffrey R. Stone’s “What I Told the NSA.” (Stone was a member of the President’s Review Group in fall 2013, after the Snowden revelations.) Stone’s conclusions were not altogether positive; he found that while the NSA “did its job,” many of its programs were “highly problematic and much in need of reform.” But it’s the Executive Branch, Congress, and FISA who authorized those programs and are responsible for reforming them. Stone added, “Of course, ‘I was only following orders’ is not always an excuse….To be clear, I am not saying that citizens should trust the NSA. They should not. Distrust is essential to effective democratic governance.”

DeLong said, “The idea that the NSA’s activities were unauthorized is wrong, wrong in a magnificent way.” He emphasized that the NSA is not a law enforcement agency, it’s an intelligence agency. He spoke in favor of people with different backgrounds and expertise – lawyers, engineers, mathematicians, privacy experts, etc. – coming together to work out solutions to problems, with respect for each others’ abilities. “Technology,” he said, “always comes back to how we as humans use it.” At present, “We do not have technologies that identify privacy risks….Privacy engineering could be one of the most important engineering feats of our time.”

DeLong talked about rebuilding the nation’s confidence in the NSA. “Confidence is the residue of promises kept,” he said. “More information does not necessarily equal more confidence.” (Someone on Twitter pointed out that much depends on the content of the information.) The talk was a good reminder not to villainize the entire NSA; part of DeLong’s forcefulness was undoubtedly on behalf of his co-workers and staff, who he felt were unfairly maligned. And technology that could identify privacy risks, built by people who have different perspectives and backgrounds, would be excellent. But do we need technology that identifies privacy risks, or do we need stronger oversight and better common sense? Mass surveillance erodes trust in government and hasn’t been terribly effective; what more do we need to know to put a stop to it?

“Privacy and Irony in Digital Health Data,” John Wilbanks, Chief Commons Officer, Sage Bionetworks

John Wilbanks gave a fast-paced, interesting talk about health data. The “irony” in the title of his talk soon became clear when he gave the example of Facebook’s mood manipulation experiment compared to a study of Parkinson’s disease. The sample size for Facebook was many times larger, with a constant flow of information from “participants,” as opposed to a much smaller sample population who filled out a survey and answered questions by phone. “What does our society value?” Wilbanks asked. This question can be answered by another question: “What do we surveil?”

Wilbanks showed a graph representing cardiovascular disease and terrorism: there is 1 death every 33 seconds from cardiovascular disease – “That’s like 9/11 every day” – and yet there’s not nearly the same kind of “surveillance” for health that there is for terrorism. Participating in a research study, Wilbanks said, is like “volunteering for surveillance,” and usually the mechanisms for tracking aren’t as comprehensive as, say, Facebook’s. Of course, privacy laws affect health research, and informed consent protects people by siloing their data; once the study is concluded, other researchers can’t use that data, and there’s no “network effect.”

Informed consent, while a good idea in theory, often leads to incomprehensible documents (much like Terms of Service). These documents are written by doctors, reviewed by lawyers, and edited by committee. Furthermore, said Wilbanks, people in health care don’t usually understand issues of power and data. So, he asked, how do we run studies at internet scale and make them recombinant? How do we scale privacy alongside the ability to do research? Wilbanks demonstrated some ideas to improve on traditional informed consent, which could also allow research subjects to get a copy of their own data and see which researchers are using data from the research in which they participated.

Obviously there are risks to individuals who share their personal health data, but there can be advantages too: more scientists having access to more data and doing more research can lead to more breakthroughs and improvements in the field of medicine.

Last year, Wilbanks talked about privacy and health data on NPR; you can listen to the segment here.

Still to come: Microsoft, Google, Pew, and a panel on “What Privacy Does Society Demand Now and How Much is New?” 

Privacy in a Networked World

This is the first post about Privacy in a Networked World, the Fourth Annual Symposium on the Future of Computation in Science and Engineering, at Harvard on Friday, January 23.

A Conversation between Bruce Schneier and Edward Snowden (video chat)

Bruce Schneier is a fellow at the Berkman Center for Internet & Society, and the author of Data and Goliath. Edward Snowden was a sysadmin at the NSA who revealed the extent of the government’s mass surveillance. The conversation was recorded (no joke) and is available on YouTube.

I have to say it was an incredibly cool feeling when Snowden popped up on the giant screen and was there in the room with us. There was sustained applause when he first appeared and also at the end of the conversation, when he was waving goodbye. Schneier started by asking Snowden about cryptography: What can and can’t be done? Snowden replied, “Encryption…is one of the few things that we can rely on.” When implemented properly, “encryption does work.” Poor cryptography, either through bad implementation or a weak algorithm, means weak security. End points are also weak spots, even if the data in transit is protected; it’s easier for an attacker to get around crypto than to break it.

Snowden pointed out a shift in the NSA’s focus over the last ten years from defense to offense. He encouraged us to ask Why? Is this proper? Appropriate? Does it benefit or serve the public?

The explosion in “passive” mass surveillance (collecting everything in case it’s needed later) is partly because it’s easy, cheap, and simple. If more data is encrypted, it becomes harder to sweep up, and hackers (including the NSA) who use more “active” techniques run a higher risk of exposure. This “hunger for risk has greatly increased” during the War on Terror era. Their targets are “crazy, unjustified….If they were truly risk averse they wouldn’t be doing this…it’s unlawful.”

Snowden said that the NSA “is completely free from any meaningful judicial oversight…in this environment, a culture of impunity develops.” Schneier said there were two kinds of oversight: tactical oversight within the organization (“did we follow the rules?”) and oversight from outside of the organization (“are these the right rules?”). He asked, “What is moral in our society?”

Snowden asked if the potential intelligence that we gain was worth the potential cost. He stated that reducing trust in the American infrastructure is a costly move; the information sector is crucial to our economy. The decrease in trust, he said, has already cost us more than the NSA’s budget. “They are not representing our interests.”

Schneier, using his NSA voice, said, “Corporations are spying on the whole internet, let’s get ourselves a copy!” (This was much re-tweeted.) “Personal information,” he said, “is the currency by which we buy our internet.” (Remember, if you can’t tell what the product is, you’re the product.) It’s “always amusing,” he said, when Google complains about the government spying on their users, because “it’s our job to spy on our users!” However, Schneier thinks that the attitudes of tech companies and standards bodies are changing.

These silos of information were too rich and interesting for governments to ignore, said Snowden, and there was no cost to scooping up the data because until 2013, “people didn’t realize how badly they were being sold up the river.” Schneier said that research into privacy-preserving technologies might increase now that there is more interest. Can we build a more privacy-preserving network, with less metadata?

“We’ve seen that the arguments for mass surveillance” haven’t really held up; there is little evidence that it has stopped many terrorist attacks. Schneier cited an article from the January 26, 2015 edition of The New Yorker, “The Whole Haystack,” in which author Mattathias Schwartz lists several recent terrorist attacks, and concludes, “In each of these cases, the authorities were not wanting for data. What they failed to do was appreciate the significance of the data they already had.”

Unlike during the Cold War, now “we all use the same stuff”: we can’t attack their networks and defend our networks, because it’s all the same thing. Schneier said, “Every time we hoard a zero-day opportunity [knowing about a security flaw], we’re leaving ourselves vulnerable to attack.”


Snowden was a tough act to follow, especially for John DeLong, Director of the Commercial Solutions Center for the NSA, but that’s exactly who spoke next. Stay tuned.


Housecleaning discovery: the Extinction Timeline


Made to Break by Giles Slade

Back in March 2013, I was trying every avenue to find a timeline of obsolescence I’d seen once during grad school. Even with the help of the Swiss Army Librarian, I came up empty-handed (though we did find a lot of other cool stuff, like the book Made to Break: Technology and Obsolescence in America by Giles Slade).

In the end – nearly two years later, as it happens – it was another book that led me to find the original piece of paper I’d had in mind. That book was The Life-Changing Magic of Tidying Up by Marie Kondo (bet you didn’t see that coming, did you?). I’ve spent a good chunk of the past two weeks going through all the things in my apartment – clothes, books, technology, media, and lots and lots of papers – and at last, I found the timeline of obsolescence that I was looking for almost two years ago.

The Life-Changing Magic of Tidying Up by Marie Kondo

Only it isn’t a timeline of obsolescence, exactly; it’s an “extinction* timeline 1950-2050,” and it’s located – I think – in the 2010 book Future Files: A Brief History of the Next 50 Years by Richard Watson. It was created in partnership between What’s Next and the Future Exploration Network; Ross Dawson, founding chairman of the latter, wrote a blog post which includes a PDF of the timeline, “Extinction Timeline: what will disappear from our lives before 2050.”

*Existence insignificant beyond this date

Repair shops – the reason I was looking for this timeline in the first place – apparently went out of fashion (or “significance”) just before 2010, as did mending things, generally. Fortunately, the “predicted death date” for the things on the timeline is “not to be taken too seriously,” and since “a good night’s sleep” is coming under the axe just before 2040, I just have to hope that they’re wrong about that one.


Future Files by Richard Watson

Check out the extinction timeline yourself. Anything strike your interest? Do you agree or disagree with the predictions for the next 35 years? Discuss.

Extinction timeline 1950-2050 (PDF)

Introduction to Cyber Security

This fall, I enrolled in, and completed, my first MOOC (massive open online course), Introduction to Cyber Security at the Open University (UK) through their FutureLearn program. I found out about the course almost simultaneously through Cory Doctorow at BoingBoing and the Radical Reference listserv (thanks, Kevin).

Screen shot from course “trailer,” featuring Cory Doctorow

The free eight-week course started on October 15 and ended on December 5. Each week started with a short video, featuring course guide Cory Doctorow, and the rest of the week’s course materials included short articles and videos. Transcripts of the videos were made available, and other materials were available to download in PDF. Each step of each week included a discussion area, but only some of the steps included specific prompts or assignments to research and comment; facilitators from OU moderated the discussions and occasionally answered questions. Each week ended with a quiz; students had three tries to get each answer, earning successively fewer points for each try.

Week 1: [Security] Threat Landscape: Learn basic techniques for protecting your computers and your online information.
Week 2: Authentication and passwords
Week 3: Malware basics
Week 4: Networking and Communications: How does the Internet work?
Week 5: Cryptography basics
Week 6: Network security and firewalls
Week 7: “When your defenses fail”: What to do when things go wrong
Week 8: Managing and analyzing security risks

The FutureLearn website was incredibly easy to use, with a clean and intuitive design, and each week of the course was broken down into little bite-size chunks so it was easy to do a little bit at a time, or plow through a whole week in one or two sessions. I tended to do most of the work on Thursdays and Fridays, so there were plenty of comments in the discussions by the time I got there.

Anyone can still take the course, so I won’t go too in-depth here, but the following are some tips, facts, and resources I found valuable or noteworthy during the course:

  • Identify your information assets: these include school, work, and personal documents; photos; social media account information and content; e-mail; and more, basically anything you store locally on your computer or in the cloud. What is the value (high/medium/low) of this information to you? What are the relevant threats?
  • Passwords are how we identify ourselves (authentication). Passwords should be memorable, long, and unique (don’t use the same password for different sites or accounts). Password managers such as LastPass or KeePass can help, though that means placing a lot of trust in them. Password managers should: require a password, lock up if inactive, be encrypted, and use 2-factor authentication.
  • Use 2-factor authentication whenever it is available.
  • 85% of all e-mail sent in 2011 was spam.
  • Anti-virus software uses two techniques: signatures (distinctive patterns of data) and heuristics (rules based on previous knowledge about known viruses).
  • The Sophos “Threatsaurus” provides an “A-Z of computer and data security threats” in plain English.
  • The Internet is “a network of networks.” Protocols (e.g. TCP/IP) are conventions for communication between computers. All computers understand the same protocols, even in different networks.
  • Wireless networks are exposed to risks to Confidentiality, Integrity, and Availability (CIA); thus, encryption is necessary. The best option currently is Wi-Fi Protected Access II (WPA2).
  • The Domain Name System (DNS) translates domain names to IP addresses.
  • Any data that can be represented in binary format can be encrypted by a computer.
  • Symmetric encryption uses two copies of one shared key. But how do you transmit the shared key safely? Asymmetric encryption (a.k.a. public-key cryptography) uses a key pair, and the Diffie-Hellman key exchange lets two parties agree on a shared key over an insecure channel. (The video explaining this was very helpful; see the toy sketch after this list.)
  • Pretty Good Privacy (PGP) is a collection of crypto techniques. In the course, we sent and received encrypted e-mail with Mailvelope.
  • Transport Layer Security (TLS) has replaced Secure Sockets Layer (SSL) as the standard crypto protocol to provide communication security over the Internet.
  • Firewalls block dangerous information/communications from spreading across networks. A personal firewall protects the computer it’s installed on.
  • Virtual Private Networks (VPNs) allow a secure connection across an untrusted network. VPNs use hashes, digital signatures, and message authentication codes (MACs).
  • Data loss is often due to “insider attacks”; these make up 36-37% of information security breaches.
  • Data is the representation of information (meaning).
  • The eight principles of the Data Protection Act (UK). Much of the information about legislation in Week 7 was specific to the UK, including the Computer Misuse Act (1990), the Regulation of Investigatory Powers Act (2000), and the Fraud Act (2006).
  • File permissions may be set to write (allows editing), read (allows copying), and execute (run program).
  • Use a likelihood-impact matrix to analyze risk: protect high-impact, high-likelihood data like e-mail, passwords, and online banking data.
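As promised above, here is a toy sketch of the Diffie-Hellman exchange from the cryptography week. The numbers are deliberately tiny so the arithmetic is visible – real implementations use primes of 2048 bits or more – and the alice/bob variable names are just my own illustration.

```python
import hashlib
import secrets

# Toy Diffie-Hellman key exchange with textbook-sized numbers.
p = 23   # public prime modulus (toy size; real systems use 2048+ bits)
g = 5    # public generator

# Each party picks a private exponent and publishes g^secret mod p.
alice_secret = secrets.randbelow(p - 2) + 1
bob_secret = secrets.randbelow(p - 2) + 1
alice_public = pow(g, alice_secret, p)
bob_public = pow(g, bob_secret, p)

# Each side combines its own secret with the other's public value...
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)

# ...and both arrive at the same shared value without ever sending it.
assert alice_shared == bob_shared
key = hashlib.sha256(str(alice_shared).encode()).hexdigest()
print("shared key material:", key)
```

An eavesdropper sees p, g, and both public values, but recovering the shared secret from them requires solving the discrete logarithm problem, which is what makes the exchange safe at realistic key sizes.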

Now that I’ve gained an increased awareness of cyber security, what’s changed? Partly due to this course and partly thanks to earlier articles, conference sessions, and workshops, here are the tools I use now:

See also this excellent list of privacy tools from the Watertown Free Library. Privacy/security is one of those topics you can’t just learn about once and be done; it’s a constant effort to keep up. But as more and more of our data becomes electronic, it’s essential that we keep tabs on threats and do our best to protect our online privacy.

NELA 2014: Consent of the Networked

Cross-posted on the NELA conference blog.

Intellectual Freedom Committee (IFC) Keynote: Consent of the Networked: The Worldwide Struggle for Internet Freedom, Rebecca MacKinnon (Monday, 8:30am)

MacKinnon pointed to many excellent resources during her presentation (see links below), but I’ll try to summarize a few of her key points. MacKinnon observed that “technology doesn’t obey borders.” Google and Facebook are the two most popular sites in the world, not just in the U.S., and technology companies affect citizen relationships with their governments. While technology may be a liberating force (as envisioned in Apple’s 1984 Superbowl commercial), companies also can and do censor content, and governments around the world are abusing their access to data.

“There are a lot of questions that people need to know to ask and they don’t automatically know to ask.”

MacKinnon noted that our assumption is that of a trend toward democracy, but in fact, some democracies may be sliding back toward authoritarianism: “If we’re not careful, our freedom can be eroded.” We need a global movement for digital rights, the way we need a global movement to act on climate change. If change is going to happen, it must be through an alliance of civil society (citizens, activists), companies, and politicians and policymakers. Why should companies care about digital rights? “They are afraid of becoming the next Friendster.” The work of a generation, MacKinnon said, is this: legislation, accountability, transparency, and building technology that is compatible with human rights.

It sounds overwhelming, but “everybody can start where they are.” To increase your awareness, check out a few of these links: