“Beyond a reasonable doubt” is the standard upheld in the United States criminal justice system by which a defendant can be found guilty. Although we will not be conducting a murder trial in the classroom, as the “jury” for our debate I will still be evaluating the arguments against certain standards. Strong cases can be made for both sides of the question of electronic surveillance in the interest of national security, as we have seen over the course of this semester. Opinions on this matter depend on people’s differing political affiliations, prior experiences and moral values. Because attitudes are so divergent and so personal, debaters often become emotional as they attempt to defend their positions. Although passion is easy to detect and makes a speaker emphatic, it does not necessarily make the argument stronger. A sound argument, much like in the courtroom, requires hard evidence. There are many unknowns and misconceptions with regard to electronic surveillance because of the secretive nature of the practice. Many arguments against surveillance make assumptions and suggest hypotheticals about its unnecessary and invasive nature. Likewise, arguments for surveillance make assumptions and suggest hypotheticals about its necessity and efficacy. As the jury, the only way to make a fair evaluation is to remain objective. A legitimate argument is backed by statistics, facts and hard evidence that can be verified, and that is what I will be listening for in the class debate.
With the popularization of social media, the 21st century has redefined the ways people interact and share with one another. Today’s teenagers are notorious for posting everything online, from embarrassing pictures to political opinions. Parents consistently accuse teens of “oversharing” and often believe they are entitled to monitor their kids’ online activities. They assume their children have no regard for privacy because they share every bit of their lives online. Teenagers, however, argue differently. In her book It’s Complicated, Danah Boyd offers various teenagers’ perspectives on privacy in a public setting:
“In a mediated world, assumptions and norms about the visibility and spread of expressions must be questioned. Many of the most popular genres of social media are designed to encourage participants to spread information. On a site like Facebook, it is far easier to share with all friends than to manipulate the privacy settings to limit the visibility of a particular piece of content to a narrower audience. As a result, many participants make a different calculation than the one they would make in an unmediated situation. Rather than asking themselves if the information to be shared is significant enough to be broadly publicized, they question whether it is intimate enough to require special protection. In other words, when participating in networked publics, many participants embrace a widespread public-by-default, private-through-effort mentality.” (Boyd 62)
Parents mistake posting on social media for a disregard for privacy. Traditionally, the notion of privacy pertains to keeping personal information out of the public eye. As our culture has shifted to interacting in online public domains, however, this conventional understanding no longer applies. We [including myself in the teenage population] share things online to socialize with friends; to gather information on peers we know little about; to attract potential roommates and significant others. Interactions that traditionally occurred in person, where there is little chance of documentation, now take place on the internet, where they are far more accessible for viewing. But the fact that the medium of communication has changed does not nullify the desire for privacy. When it comes to managing the flow of information we want available online, perhaps a better word than “privacy” is “control”. It is not that we don’t want people to know things about us or what is going on in our lives; rather, we want to retain power over the narrative that exists about us online. Posting content and commenting on what other people share typically creates a link back to your personal profile. Every move we make online is a conscious decision. By selectively participating on social media sites, I believe we retain control over the digital personas that are available for others to view.
In chapter 7, The Code Book joins the likes of George Orwell’s 1984 and the Hollywood Back to the Future series in making predictions about the years to come. The concept of the narrative’s future being our “past” is a little mind-boggling, but it is incredibly interesting to read the forecasts of a book published almost 20 years ago. The internet was only just becoming publicly accessible when The Code Book was written, but as a scholar who is obviously well versed in the subject matter, Singh makes some very good predictions about the future of the internet heading into the 21st century. He recognized that the internet would gain popularity and be used for a multitude of purposes: online shopping, banking, taxes and data records are all conveniences that we have and regularly use today. He was right, therefore, that the need for encryption would progress beyond military and government use. Hackers attempt to steal identities via credit card and Social Security numbers. Secure encryption is no longer a privilege but a necessity when all of our most personal information is stored on computers. It is difficult to fathom things that have yet to come; I don’t know whether even Singh could have predicted the integral role the internet now plays in our lives, during an Information Age in which so many people rely almost entirely on electronic devices to manage their lives.
Ideas and inventions are not concocted inside a vacuum. They grow from a wide array of preexisting knowledge and ideas already present in the scientific community through public contributions. However, there exists a break in this flow of information; as Singh points out in chapter 6 of The Code Book, government findings are often kept under lock and key. This was certainly the case for the work being done at England’s GCHQ in the 20th century. Researchers Ellis, Cocks and Williamson were working diligently on the problem of exchanging keys in the cryptographic world, and their work culminated in a successful solution in 1973. But because of the highly classified status of their work at the time, their discovery remained unknown to the world until it was finally declassified more than two decades later.
Meanwhile, without knowledge of the classified British work, the same problem was being attacked by academics on the American front. On the West Coast, researchers Diffie, Hellman and Merkle theorized a solution to the key exchange problem by proposing asymmetric ciphers. MIT scholars Rivest, Shamir and Adleman successfully implemented the idea in a working system that we still use today: the RSA encryption algorithm, for which they filed a patent in 1977.
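To give a concrete feel for what an asymmetric cipher looks like, here is a minimal RSA-style sketch in Python. The primes and message are tiny, illustrative values of my own choosing, not anything from Singh’s text; real RSA keys use primes hundreds of digits long, which is what makes the public modulus infeasible to factor.

```python
# A toy sketch of the asymmetric idea behind RSA, using tiny illustrative
# primes; real RSA keys use primes hundreds of digits long. Anyone can
# encrypt with the public key (n, e), but only the holder of the private
# exponent d, derived from the secret primes, can decrypt.

p, q = 61, 53                # two secret primes (toy-sized for illustration)
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)      # Euler's totient of n
e = 17                       # public exponent, chosen coprime to phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e mod phi (Python 3.8+)

message = 42                          # a message encoded as a number smaller than n
ciphertext = pow(message, e, n)       # encrypt with the public key
recovered = pow(ciphertext, d, n)     # decrypt with the private key

print(f"public key: (n={n}, e={e})")
print(f"ciphertext: {ciphertext}, recovered plaintext: {recovered}")
assert recovered == message
```

The asymmetry is the whole point: publishing (n, e) lets anyone send you a secret, while recovering d requires knowing the primes, so no prior key exchange is needed.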
So who are the true inventors of public key cryptography? Although the credit goes mainly to the men abbreviated by the letters R, S and A, I would argue that all of the parties deserve recognition. The efforts of both the British and the American groups resulted in successful solutions to the problem at hand. Their discoveries took place several years apart, but the later success came without any knowledge of the earlier one. Rivest, Shamir and Adleman secured a patent enabling them to claim the invention of public key cryptography, but patent law is simply a social construct, and we cannot ignore the often classified contributions of those working in government.
World War II was a well-choreographed ballet of air raids, land advances and U-boat attacks that required coordination across nations. The element of surprise was vital for successful attacks; maintaining secrecy in communications was absolutely crucial to winning the war. Because so many operations across the globe were involved on one side or the other, each nation developed its own coded system for transmitting information. After traditional encryption techniques were cracked in World War I, World War II demanded a whole new set of cryptographic methods that had not yet been broken. Both the Germans and the Japanese turned to machines to meet these new standards. The Enigma machine used by the Nazis and the Purple machine employed by the Japanese both mechanized the encryption process for the first time. While the British worked on Enigma at Bletchley Park, the American codebreakers focused their efforts on the Pacific powers, working to crack the Japanese code that came to be known as Purple. The disadvantage they faced, unlike the British, who had a version of the Enigma machine, was that no one knew how the encryption machine had been constructed. The ciphertext it produced was considered unbreakable by many nations, but the American cryptographers worked for months and ultimately succeeded in cracking the code. By wrongly assuming their communications were secure, the enemy provided America (and Britain) with crucial intelligence, allowing us to evade attacks and plan successful missions of our own, ultimately leading to our victory on the beaches of Normandy.
Just as every artist, designer and engineer has their own unique sense of style, podcast authors individualize their productions. And as with all things intended for public consumption, some are more successful than others. I listened to a professionally produced episode of The Memory Palace as well as two student-produced podcasts by Kelsey Brown and Xinyi Zhang. All three episodes offered a historical example of cryptography. The first two told stories, and I actually found the amateur podcast more successful than the professional one. The Memory Palace had a single speaker narrating the entire episode; Brown did most of the talking as well, but she also integrated a number of audio clips from real news coverage that really enhanced her story. I also found Brown’s podcast to be more relevant. In the professional production, primarily about a Confederate spy, almost half of the episode was spent talking about her daughter. And although I cannot criticize the producers for their choice of content, the topic was addressed through a series of questions that admittedly don’t have answers, leaving me as the listener with no concrete takeaway. Brown was far more successful in staying on topic and providing her audience with concrete, interesting information that enhanced her story. The third production varied in format from the other two. While Zhang did tell a story about cryptography, she focused more on the mechanics of the code than on the history of the cipher. Including an interactive segment in which the listener could follow along and create a cipher of their own was an excellent stylistic choice. Her episode, despite some minor faults, was easy to follow and comprehend. Overall, I was very impressed by the work put out by the students, especially in comparison with the professional podcasts we have encountered thus far.
Cracking codes seems like it should be a relatively straightforward task. Codes are not designed to be indecipherable; people encrypt text with the intention that someone else, somewhere, will be able to turn it back into a meaningful message. In order to form an intelligible message, the code maker employs an agreed-upon pattern so that the intended recipient can later translate the text easily. But for someone without knowledge of the key, cracking a code proves to be a much harder task. In chapter 3 of The Code Book, which covers more advanced ciphers, Singh provides examples of how a cryptanalyst might begin deciphering a message. Using the example of a piece of text encrypted with a keyword and the Vigenère polyalphabetic system, he shows how, by testing common words at various points in the ciphertext, one can begin to uncover some words of the plaintext and, ultimately, the entire message.
This method, while theoretically plausible, poses an incredibly tedious task for a codebreaker, even for relatively short messages. It is, at bottom, a mathematical problem. In the English language alone, 26 letters form over 170,000 words (Oxford Dictionaries). These words can be arranged in a number of ways that grows toward the practically infinite as the length of the text increases, and it is computationally infeasible for any human, or any computer that currently exists, to test every possible combination. Although there are some typical things to look for, these patterns may not always be obvious, or present at all. Singh uses short examples that he designed for the purpose of demonstrating such tactics; real codes are not designed to work so nicely. In reality, sifting through a ciphertext is like looking for a needle in a haystack: although the pile of letters lies right in front of you, finding what you are actually looking for may prove nearly impossible.
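To make the crib-testing idea concrete, here is a minimal Python sketch. The message, the keyword “LIGHT,” and the crib “THE” are hypothetical values of my own, not Singh’s example; the point is only to show the mechanics of sliding a guessed word along the ciphertext and reading off the key letters it would imply.

```python
# A sketch of the crib-testing tactic: slide a guessed plaintext word along a
# Vigenère ciphertext and, at each position, compute the key letters that the
# guess would imply. Positions where the implied letters look like part of a
# sensible keyword are worth pursuing further.
# The message and keyword below are made up for illustration; in a real attack
# neither would be known.

def vigenere_encrypt(plaintext: str, keyword: str) -> str:
    """Encrypt an A-Z plaintext with a repeating keyword (Vigenère cipher)."""
    out = []
    for i, ch in enumerate(plaintext):
        shift = ord(keyword[i % len(keyword)]) - ord("A")
        out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
    return "".join(out)

def implied_key(ciphertext: str, crib: str, position: int) -> str:
    """Key letters that must hold if `crib` really sits at `position`."""
    segment = ciphertext[position:position + len(crib)]
    return "".join(chr((ord(c) - ord(p)) % 26 + ord("A")) for c, p in zip(segment, crib))

if __name__ == "__main__":
    ciphertext = vigenere_encrypt("THESUPPLIESARRIVEATTHEHARBORATDAWN", "LIGHT")
    crib = "THE"  # a common English word to test at every position
    for pos in range(len(ciphertext) - len(crib) + 1):
        print(f"position {pos:2d}: crib {crib} implies key letters {implied_key(ciphertext, crib, pos)}")
```

Even in this toy case the output is a long list of candidate key fragments, most of them nonsense, which is exactly the tedium described above: the method works, but only with a great deal of patient sifting.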
Philosopher Jeremy Bentham introduced a design he called the panopticon (“all seeing”), intended for prisons and similar institutions, in which all inmates can be watched by a single guard. Although few structures of this model were ever built, the concept can be viewed as a symbol for modern government surveillance. Benjamin Walker argues that this metaphor is weak, but I would counter that the panopticon, although not the most effective model, actually offers an accurate representation of our current system of surveillance.
The key feature of the panopticon is that no participant can know whether he or she is being watched at any given moment. The assumption, therefore, is that each inmate is inclined to behave as if they were in fact being monitored all the time. However, a single guard cannot watch a large number of people individually at the same time. Any inmate who understands how the model works knows that it is impossible for everyone to actually be watched all the time, and realizes they can get away with misbehavior at least some of the time. For this reason, the panopticon is conceptually flawed.
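For a rough sense of the numbers behind this objection, here is a back-of-the-envelope sketch; the figures (one guard, 500 inmates, one glance per minute) are my own illustrative assumptions, not anything from Bentham or Walker.

```python
# A back-of-the-envelope illustration of why one watcher cannot produce
# constant scrutiny. The numbers (one guard, 500 inmates, one one-minute
# glance at a time) are purely hypothetical assumptions for illustration.

guards = 1
inmates = 500
minutes_per_day = 24 * 60

# If the guard inspects one cell per minute, each inmate is observed for
# roughly this many minutes per day on average:
minutes_watched = guards * minutes_per_day / inmates
fraction_watched = minutes_watched / minutes_per_day

print(f"average minutes observed per day: {minutes_watched:.1f}")        # about 2.9
print(f"fraction of the day under observation: {fraction_watched:.2%}")  # about 0.20%
```

Under those assumptions an inmate is actually observed well under one percent of the time, which is why the deterrent depends entirely on the inmate not doing this arithmetic.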
Although the panopticon may not be the most efficient model, I think it actually offers a fairly accurate description of what we understand about the current system of surveillance. It is impossible for a single individual or organization to monitor all the online activity of everyone. If participants understand the system, they know they can’t possibly be monitored all the time. People believe they can get away with shady online activity, and many still do.
“Governments are instituted among men, deriving their just powers from the consent of the governed, that whenever any form of government becomes destructive of these ends, it is the right of the people to alter or to abolish it, and to institute new government, laying its foundation on such principles and organizing its powers in such form, as to them shall seem most likely to effect their safety and happiness.”
This passage shows up multiple times over the course of the novel, but these are not the words of Cory Doctorow. Marcus first recites the quote during a social studies class debate on civil rights amid the War on Terror, and then again later during an online press conference he gives to publicize the actions of the XNetters. These words, written into the Declaration of Independence by the founders of our country, are as relevant in Little Brother as they were in 1776. In the wake of the terrorist attack on their city, and with the technology that exists in the book, the citizens of San Francisco are under extreme scrutiny by the Department of Homeland Security. The government decided that, in the dire circumstances, its people no longer retained the right to privacy. But as the nation’s founding document points out, the role of the government should be to secure both the safety and the happiness of its people. The DHS has an obligation to protect Americans, which cannot be done without some level of public surveillance, but interrupting thousands of people’s daily lives to question their every move is both invasive and, as Marcus later points out with the paradox of the false positive, ineffective. That is why I believe he and his undeclared “followers” are justified in their actions to dismantle the DHS’s efforts. As citizens, it is both their duty and their right to take these measures in order to “establish Justice, insure domestic Tranquility…and secure the Blessings of Liberty” for themselves and the rest of the nation when the central government has failed to do so.
The imminent threat of school shooters is a sad reality of today’s world. In “Mining Student Data Could Save Lives,” Michael Morris contends that universities possess a crystal ball of sorts. By allowing students to access the university’s private network with personal email accounts and wireless internet access, schools gain the ability to monitor students’ online activity. Morris offers a hypothetical scenario in which monitoring online activity would be efficacious:
“If university officials were to learn that a student had conducted extensive online research about the personal life and daily activities of a particular faculty member, posted angry and threatening comments on his Facebook wall about that professor, shopped online for high-powered firearms and ammunition, and saved a draft version of a suicide note on his personal network drive, would those officials want to have a conversation with that student, even though he hadn’t engaged in any significant outward behavior? Certainly.”
In this particular scenario, it is indisputable that the student is a threat to both himself and others, and that mining his data could save numerous lives. But this is an extreme, worst-case-scenario example. Morris discusses campus threat-assessment teams, which look to identify such behavior. Given knowledge of a troubled student’s intentions, a university certainly has the right to intervene. However, in the modern world the internet is no longer a luxury but an integrated part of the education system. Schools maintain learning management systems so that all classes have online components. Having a computer is no longer an option but a requirement of being a college student. Shouldn’t students have at least some right to privacy? Anything posted on social media, such as the threatening Facebook comments, is out for public view and can absolutely be tracked. Even flagging students for visiting suspicious websites or making suspicious searches while on the university’s private network is acceptable. But mining data from personal emails and documents directly from a student’s computer, without a warrant, seems unethical and invasive to me. In an age when we keep everything, business, personal and otherwise, stored on our phones, I believe students have some right to maintain anonymity.