Cryptography

The History and Mathematics of Codes and Code Breaking

Tag: security

Privacy Is a Right, Not a Privilege 

I take issue with the way this debate is often framed: privacy versus security. It’s misleading to suggest that privacy is directly opposed to security; a more apt framing would be privacy versus surveillance. The point of this distinction is that, even given wide latitude to monitor the people of this country, government surveillance doesn’t necessarily make us any safer. The American people have been under what some would consider heavy surveillance for years now, and it has not demonstrably improved our security. What makes us think that expanding the reach of that surveillance would suddenly be more effective? The likelihood of some bad actor within the government abusing their power to invade the privacy of American citizens, as has already happened within the NSA, is too great to justify whatever security may or may not be gained by giving that bad actor more tools to work with.

Secondly, regardless of the effectiveness of surveillance, privacy is a right, plain and simple. By surveilling the American citizenry, the government violates that right on a national scale. I think the right to privacy should be stated more clearly in the Constitution, but it is alluded to in the Fourth Amendment, and it is clearly a principle on which this country was founded, even if the founding fathers didn’t frame it in terms of privacy, since the technology of their era made this debate look very different. The intent behind “no warrants shall issue, but upon probable cause” is hard to mistake: being subjected to involuntary surveillance by our government is a breach of privacy, and therefore a breach of our rights as citizens of this country.

Why we need more surveillance

Now there’s the obvious reason that more surveillance can catch more criminals, achieve more justice, deter further crime, and thus lead to a safer society. To those who suggest that citizens’ privacy will be violated in our quest for more safety, I say this: our privacy will be destroyed anyway. Historical trends, coupled with the current political climate, suggest that the government will only continue to add more cameras, initiate more surveillance programs, and expand its reach. The only recourse, then, to check the power of an overreaching government is to add more cameras that document every action: those undertaken by the public and by the government alike. It is therefore meaningless to debate national security under the assumption that we still have privacy, because that assumption isn’t based in reality.

Finally, this third reason may be a bit… strange. Individuals are selfish creatures who often act in their own self-interest to achieve a goal, even if that means harming others along the way. There are countless examples of corporations, small groups, and individuals harming others to achieve a personal or professional goal. It’s far better, then, for the government, which generally consists of a more educated subsection of the populace, to make decisions on behalf of the public, even if the public has no say in those decisions or the government obtains its data through privacy-violating means. It is for the above reasons that I am pro-surveillance.

Notes from a Notetaker

To start off, I’ll be taking notes on every argument that is made. Good or bad, sensible or not, I’ll write it down. It will be up to the jurors to pick through this information, deciding which arguments are the strongest, most factual, and most convincing.

That being said, there are some aspects of this debate that it’s crucial we touch upon. First, how effective is the surveillance that those favoring the “security” side argue for? An argument must not be based on hypotheticals; that side should include concrete examples of instances in which surveillance has increased security if they hope to convince the jury that security is more important. However, the privacy side must argue more than just “citizens have a right to privacy.” It’s widely accepted that 100% privacy isn’t possible in our country. But what amount of privacy sacrificed is reasonable? Where is “the line” that determines when privacy is violated? Additionally, both sides should address the concerns of the other. Each person has different values, and everyone is comfortable giving up a different amount of privacy. Moreover, what makes one person feel “secure” may not make another feel the same. Thus, it’s difficult to craft one policy that pleases the greatest number of people. How do we reconcile the opinions of so many people when finding a solution that affects all of them?

It is my thought that the debate will center more on the morals of the statement than on the legislation. I hope that we discuss what “should” be done, as opposed to what the law may say. However, it will also be important to explore how effective the law has been in preventing privacy violations and promoting security. I’m looking forward to hearing both sides, and to copying their arguments into a Google Doc as fast as I possibly can!

Back to the Future

In chapter 7, The Code Book joins the likes of George Orwell’s 1984 and the Hollywood hit Back to the Future series in making predictions about the years to come. The concept of the narrative’s future being our “past” is a little mind-boggling, but it is incredibly interesting to read the forecasts of a book published almost 20 years ago. The internet was just becoming publicly accessible when The Code Book was written, but as a scholar obviously well versed in the subject matter, Singh makes some very good predictions about the future of the internet heading into the 21st century. He recognized that the internet would certainly gain popularity and be put to a multitude of uses: online shopping, banking, taxes, and data records are all amenities that we have and regularly use today. He is correct, therefore, about the need for encryption that extends beyond military and government use. Hackers attempt to steal identities via credit card and Social Security numbers; secure encryption is no longer a privilege but a necessity when all of our most personal information is stored on computers. It is difficult to fathom what has yet to come. I don’t know if even Singh could have predicted the integral role the internet now plays in our lives, during an Information Age in which so many people rely almost entirely on electronic devices to manage their lives.

The Perils of Perfect Security

The idea of perfect security is a tantalizing one on the surface. It guarantees anonymity and protection from unwanted attention; it facilitates and protects a bedrock of democracy, that being freedom of speech. Altogether, it’s no surprise that, in the interest of preserving the core values of democracy, people would want perfect security implemented for their digital communications. However, with perfect digital security comes a price, one that society may not be willing to pay.

As Simon Singh argues in The Code Book, once PGP became a widespread method of encrypting civilian communications, it became clear to the American government that such a tool could be employed by malicious entities to mask their activities. In this vein, Singh provides two extremely compelling arguments for why perfect security may not be in the best interest of the people. First, he considers evidence gathering in a court of law. Singh notes that, during the 1920s, police forces actively used phone wiretaps to listen in on communications and gather incriminating evidence. These practices were upheld by the Supreme Court and widely accepted, and thus helped the police do their job more effectively. With the advent of digital communications and perfect security, the police would lose this avenue, stunting their ability to collect evidence in a discreet and non-invasive fashion. They would instead be forced to gather evidence physically, which may put lives at risk that need not be in the first place.

Secondly, on a national security level, Singh also shows how international and domestic terrorist groups have used and will continue to use modern encryption technology to keep their plans and communications private and untraceable. Using examples like the Tokyo subway gas attack and even the computer of a World Trade Center bomber, Singh paints a dark picture in which terror attacks can be planned and executed with little in the way of countermeasures, ultimately putting innocent lives at risk.

As such, it’s clear that while perfect security is attractive on the surface, the inability of the proper authorities to covertly access information when the need arises puts innocent lives on the line. Altogether, it’s a steep price to pay for not wanting anyone to read your emails.

How Much Should We Hide?

In the least controversial way possible, I believe this can be related to arguments for and against the Second Amendment. In a sense, cryptography, like guns, can easily be weaponized. If a person encrypts a message, it is because it contains something extreme that they do not want to get out to the public. The key is the word ‘extreme’. For instance, I wouldn’t want the world to know if I had cheated on my S.O.; however, I would not encrypt an email to my friend discussing the incident, considering my everyday acquaintances would not take the time to decipher it, and the people who could decipher it would find no use in the information. On the contrary, if I were planning an event that threatens national security, I would most likely encrypt it, considering the U.S. government would probably take special interest in its content. In this case, I understand why the everyday person should not be able to encrypt their messages.

Encryption could also, however, be used to save us in the future. For instance, if for some reason the government turned against the people, we should be able to use cryptography to fight back. If the NSA has full knowledge of our lives they could easily control us or keep us contained in the extreme case of a large uprising. 


Environmental change in Cryptological Perception

Mary Queen of Scots fully believed that her cipher was unbreakable, so she laid bare her plan to seize the English throne. Thus, when her cipher was broken, there lay a written confession on the table, ready to take her to the gallows. This historical example led to the development of an environment of secrecy and mistrust, where cryptanalysts held power over cryptographers. Even if one made a seemingly “unbreakable” code, one never knew whether an expert codebreaker was waiting to crack it. This never-ending cat-and-mouse game has continued through the centuries, always adapting and evolving. The knowledge that one’s code could be broken fostered more caution on the part of the cryptographer, who began sending messages that were cryptic in nature even in plaintext, knowing that an expert codebreaker might crack the code.

This strategy was a direct consequence of the knowledge that someone more experienced might crack your code – after all, if that was the case, why not make your plaintext message more difficult to understand as well? This would add an additional layer of security and ensure more protection. The shift was a significant one in the history of cryptography, representing a transition to a more secretive, hard-to-decipher language in which nothing was taken for granted.

Is Communication Ever Secure?

Before the telegraph was invented and introduced to society, the only way of sending messages was through written means. If you wanted to send a message to a receiver who lived far away, you needed a middleman – someone to transport the message. The telegraph effectively removed the worry that your message would be intercepted or stolen along the way. But although you knew the message was being sent to the correct machine, you didn’t know that it was reaching its intended receiver in its correct form. You had to trust that the telegraph operator would be honest and discreet in translating and delivering the message, and there was essentially no way of confirming that the correct person was at the other end. Additionally, some people wanted to send messages that they were uncomfortable with anyone else reading at all, which led them to encrypt messages even before handing them to the operator. The desire to keep messages secret from the operator, and to protect them in case they didn’t reach the intended receiver, motivated the use of a more secure cipher, the Vigenère cipher. This cipher, more complicated than the monoalphabetic cipher, remained the standard for many years.
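What makes the Vigenère cipher stronger than a monoalphabetic one can be sketched in a few lines: each letter is shifted by a different amount, determined by a repeating keyword, so the same plaintext letter need not encrypt to the same ciphertext letter twice. A minimal sketch (uppercase letters only, no error handling):

```python
# Minimal Vigenere cipher sketch: each plaintext letter is shifted by
# the corresponding letter of a repeating keyword. Unlike a
# monoalphabetic cipher, identical plaintext letters can encrypt
# differently at different positions, defeating simple frequency analysis.

def vigenere(text: str, key: str, decrypt: bool = False) -> str:
    sign = -1 if decrypt else 1
    out = []
    for i, ch in enumerate(text.upper()):
        # Shift amount comes from the key letter at this position,
        # cycling back to the start of the key as needed.
        shift = ord(key.upper()[i % len(key)]) - ord("A")
        out.append(chr((ord(ch) - ord("A") + sign * shift) % 26 + ord("A")))
    return "".join(out)

print(vigenere("ATTACKATDAWN", "LEMON"))        # LXFOPVEFRNHR
print(vigenere("LXFOPVEFRNHR", "LEMON", True))  # ATTACKATDAWN
```

Note how the three A’s in the plaintext encrypt to three different ciphertext letters, which is exactly what made the cipher resistant to the frequency analysis that breaks monoalphabetic ciphers.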

After the telegraph came the telephone. At first, though, the telephone still didn’t allow for direct, secure communication: telephone operators connected the calls and could potentially listen in without either party knowing. Calls became more secure with the invention of the rotary dial, which removed the operator from the connection. And now we communicate through talk, text, or email, typically on our smartphones; most people communicate primarily through text or email rather than over the phone. Our communications are surely much more secure now than they were years, even decades, ago, but I worry that our messages are never truly secure. There are always ways that companies, hackers, or the government can access anything that travels via the web. The only form of truly secure communication is face-to-face.

Power of a Test

After the terrorist attack on San Francisco, the Department of Homeland Security ramps up security and surveillance in hopes of catching the people responsible, but instead only manages to inconvenience, detain, and even seriously harm innocent civilians. Marcus explains that the problem with the DHS system is that it’s looking for something too rare in too large a population, resulting in a very large number of false positives.

What Marcus is describing is referred to in statistics as a Type I error – that is, rejecting the null hypothesis (the assumption that nothing is abnormal) when the null hypothesis is actually true. In this case, the null hypothesis is “not a terrorist,” and if there’s enough suspicious data, the null hypothesis is rejected in favor of flagging the person for investigation. Marcus claims that in order to look for rare things, you need a test that rejects the null hypothesis only at the same rate at which the thing we’re testing for – in this case, terrorists – actually occurs. The problem is, there are also Type II errors. While Type I errors are caused by being too cautious, Type II errors occur when our test “misses” the thing we are actually looking for. When determining how “tough” a test should be, we need to decide how to balance these two risks.

Marcus is advocating for making the system less broad, thereby reducing false positives. However, this increases the risk of false negatives. So, which is worse: a false positive or a false negative? That’s a question of expected value, which is based on the probability of a result and its consequences. At one end of the spectrum, the terrorists are caught because of this system, but many innocent people are subjected to surveillance and searches. At the other end, no one is caught because they slip through a timid test, and more people are hurt as a result. Clearly, this can easily turn into a much more complicated debate about the values of time, trust, privacy, and life, so I won’t try to determine the correct balance myself. Although it’s easy to describe some aspects of this conflict with numbers, as Marcus did, it just isn’t that simple.

Mining Mystery: Should We Mine Student Data for More Protection?

Morris’s central argument revolves around the incorporation of student data mining in order to counter possible future threats. He calls this “the next natural step” in using private information to prevent external threats. Morris goes on to detail how administrators could track social media usage, shopping patterns, and further online activity in order to make assessments on whether a credible threat exists. 


The central issue in this debate lies between privacy and security. Are students’ rights to privacy outweighed by administrators’ need to provide safety and security for their students? This question isn’t limited to college campuses; it can be applied to society as a whole. Discussing the role of authority, particularly government, in our daily lives is of the utmost importance and a daily ideological struggle. I both agree and disagree with Morris’s argument. It’s important for administrators to do what is necessary to protect their students, but violating students’ privacy is not the path to take. Aside from the obvious moral dilemma, such an act could give more power to authority and reduce self-accountability. Allowing the administration to monitor what students do online would lead to mistrust; dangerous, secretive behaviors; and a need for students to “hide” what they are doing online. A common-sense solution would combine certain aspects of Morris’s argument with the other side’s. Allowing the student population to decide which aspects of their online lives they want monitored would lend more credibility to the administration’s efforts to increase safety, as well as provide increased trust in, and accountability of, authority.


How much power we are willing to give authority is a central question of modern society, and no clear-cut answer exists. The best possible solution takes both sides’ arguments into account, helping administrators provide better security while also protecting student privacy.


