Cryptography

The History and Mathematics of Codes and Code Breaking

Tag: security

Back to the Future

In chapter 7, The Code Book joins the likes of George Orwell's 1984 and the Hollywood hit Back to the Future series in making predictions about the years to come. The concept of the narrative's future being our "past" is a little mind-boggling, but it is incredibly interesting to read the forecasts of a book published almost 20 years ago. The internet was just becoming publicly accessible when The Code Book was written, but as a scholar who is obviously well versed in the subject matter, Singh makes some very good predictions about the future of the internet heading into the 21st century. He recognized that the internet would certainly gain popularity and be used for a multitude of purposes: online shopping, banking, taxes, and data records are all amenities that we have and regularly use today. He is also correct that the need for encryption has progressed beyond military and government use. Hackers attempt to steal identities via credit card and Social Security numbers, so secure encryption is no longer a privilege but a necessity when all of our most personal information is stored on computers. It is difficult to fathom what has yet to come. I don't know if even Singh could have predicted the integral role the internet now plays in our lives, during an Information Age in which so many people rely almost entirely on electronic devices to manage and live their lives.

The Perils of Perfect Security

The idea of perfect security is a tantalizing one on the surface. It guarantees anonymity and protection from unwanted attention, and it facilitates and protects a bedrock of democracy: freedom of speech. Altogether, it's no surprise that, in the interest of preserving the core values of democracy, people would want perfect security for their digital communications. However, perfect digital security comes at a price, one that society may not be willing to pay.

As Simon Singh argues in The Code Book, once PGP became a widespread method of encrypting civilian communications, it became clear to the American government that such a tool could be employed by malicious entities to mask their activities. In this vein, Singh provides two extremely compelling arguments for why perfect security may not be in the best interest of the people. First, he considers evidence gathering in a court of law. Singh notes that, starting in the 1920s, police forces actively used phone wiretaps to listen in on communications and gather incriminating evidence. These practices were upheld by the Supreme Court and widely accepted, and they helped the police do their job more effectively. With the advent of digital communications and perfect security, the police would lose this avenue of gathering evidence discreetly and non-invasively. They would instead be forced to gather evidence physically, which may put lives at risk that need not be at risk in the first place.

Secondly, on a national security level, Singh shows how international and domestic terrorist groups have used and will continue to use modern encryption technology to keep their plans and communications private and untraceable. Citing events like the Tokyo subway gas attack and even the computer of a World Trade Center bomber, Singh paints a dark picture in which terror attacks can be planned and executed with little in the way of countermeasures, ultimately putting innocent lives at risk.

As such, it's clear that while perfect security is attractive on the surface, the inability of the proper authorities to covertly access information when the need arises puts innocent lives on the line. Altogether, it's a steep price to pay for not wanting anyone to read your emails.

How Much Should We Hide?

In the least controversial way possible, I believe this can be related to arguments for and against the Second Amendment. In a sense, cryptography, like guns, can be easily weaponized. If a person encrypts a message, it is because it contains something extreme that they do not want to get out to the public. The key is the word 'extreme'. For instance, I wouldn't want the world to know if I had cheated on my S.O., but I would not encrypt an email to my friend discussing the incident, since my everyday acquaintances would not take the time to decipher it, and the people who could decipher it would find no use in the information. By contrast, if I were planning something that threatens national security, I would most likely encrypt it, considering the U.S. government would probably take special interest in its content. In this case, I understand why the everyday person should not be able to encrypt their messages.

Encryption could also, however, be used to save us in the future. For instance, if for some reason the government turned against the people, we should be able to use cryptography to fight back. If the NSA had full knowledge of our lives, it could easily control us or keep us contained in the extreme case of a large uprising.

 

Environmental change in Cryptological Perception

Mary Queen of Scots fully believed that her cipher was unbreakable, so she laid bare her endorsement of the plot to assassinate Queen Elizabeth and take the English throne. Thus, when her cipher was broken, there lay a written confession on the table, ready to take her to the gallows. This historical example led to the development of an environment of secrecy and mistrust, where cryptanalysts held power over cryptographers. Even if one made a seemingly "unbreakable" code, one did not know whether an expert codebreaker was waiting to crack it. This never-ending cat-and-mouse game of codes has continued through the centuries, always adapting and evolving. The knowledge that one's code could be broken fostered more caution on the part of the cryptographer, who began phrasing even plaintext messages more cryptically, knowing that an expert codebreaker might crack the cipher.

This strategy was a direct consequence of the knowledge that someone more experienced might crack your code - after all, if that was the case, why not make your plaintext message more difficult to understand as well? This would add an additional layer of security and ensure more protection. This shift was a significant one in the history of cryptography, representing a transition to a more secretive, hard-to-decipher style of language in which nothing was taken for granted.

Is Communication Ever Secure?

Before the telegraph was invented and introduced to society, the only way of sending messages was through written means. If you wanted to send a message to a receiver who lived far away, you needed a middleman – someone to transport the message. The telegraph effectively removed the worry that your message would be intercepted or stolen along the way. But although you knew that the message was being sent to the correct machine, you didn't know that it was reaching its intended receiver in its correct form. You had to trust that the telegraph operator would be honest and discreet in translating and delivering the message, and there was essentially no way of confirming that the correct person was at the other end of the line. Additionally, some people wanted to send messages that they were uncomfortable with anyone else reading at all. This led to messages being encrypted even before they were handed to the operator. The desire to keep messages secret from the operator, and to protect them in case they never reached the intended receiver, motivated the use of a more secure cipher, the Vigenère cipher. This cipher, more complicated than the monoalphabetic cipher, remained the standard for many years.
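To make the contrast with a monoalphabetic cipher concrete, here is a minimal sketch of the Vigenère cipher in Python; the keyword and message below are made-up examples, not anything from the post.

# Minimal Vigenère cipher sketch (keyword and message are made up for illustration).
# Each letter is shifted by an amount taken from the repeating keyword, so the same
# plaintext letter can map to different ciphertext letters.
def vigenere(text, key, decrypt=False):
    key = key.upper()
    out = []
    i = 0
    for ch in text.upper():
        if ch.isalpha():
            shift = ord(key[i % len(key)]) - ord('A')
            if decrypt:
                shift = -shift
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
            i += 1
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return ''.join(out)

ciphertext = vigenere("ATTACK AT DAWN", "LEMON")
print(ciphertext)                                    # LXFOPV EF RNHR
print(vigenere(ciphertext, "LEMON", decrypt=True))   # ATTACK AT DAWN

Because the keyword shifts repeat, a given plaintext letter does not always encrypt to the same ciphertext letter, which is what made the Vigenère cipher so much harder to break by simple frequency analysis than a monoalphabetic cipher.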

After the telegraph came the telephone. At first, though, the telephone still didn't allow for direct, secure communication: operators connected the calls and could potentially listen in without either party knowing. Calls became more secure with the invention of the rotary dial. Now we communicate through talk, text, or email, typically on our smartphones, and most people communicate primarily through text or email rather than over the phone. Our communications are surely much more secure now than they were years, even decades, ago, but I worry that our messages are never truly secure. There are always ways that companies, hackers, or the government can access anything that travels via the web. The only form of truly secure communication is face-to-face.

Power of a Test

After the terrorist attack on San Francisco, the Department of Homeland Security ramps up security and surveillance in hopes of catching the people responsible, but instead only manages to inconvenience, detain, and even seriously harm innocent civilians. Marcus explains that the problem with the DHS system is that it is looking for something too rare in too large a population, resulting in a very large number of false positives.

What Marcus is describing is referred to in statistics as a Type I error - that is, we reject the null hypothesis (the assumption that nothing is abnormal) when the null hypothesis is actually true. In this case, the null hypothesis is "not a terrorist," and if there's enough suspicious data, the null hypothesis is rejected in favor of flagging the person for investigation. Marcus claims that in order to look for rare things, you need a test that rejects the null hypothesis at roughly the same rate at which the thing you're testing for - in this case, terrorists - actually occurs. The problem is, there are also Type II errors. While Type I errors come from a test that flags too readily, Type II errors occur when the test "misses" the thing we are actually looking for. When determining how "tough" a test should be, we need to decide how to balance these two risks.
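A quick back-of-the-envelope sketch in Python, using entirely made-up numbers, shows why a test for something rare in a huge population produces mostly false positives, which is the heart of Marcus's complaint.

# Illustrative base-rate calculation; all numbers are assumptions, not figures from the book.
population = 1_000_000          # people being screened
actual_positives = 10           # actual terrorists among them
sensitivity = 0.99              # P(flagged | actual positive)
false_positive_rate = 0.01      # P(flagged | innocent), i.e. a "99% accurate" test

true_positives = actual_positives * sensitivity
false_positives = (population - actual_positives) * false_positive_rate
flagged = true_positives + false_positives

print(f"people flagged: {flagged:,.0f}")
print(f"chance a flagged person is a real positive: {true_positives / flagged:.2%}")
# Roughly 10,000 people get flagged, and only about 0.1% of them are actual positives.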

Marcus is advocating for making the system less broad, thereby reducing false positives. However, this increases the risk of false negatives as well. So, which is worse: a false positive or a false negative? That's a question of expected value, which is based on the probability of a result and its consequences. In this case, the outcome at one end of the spectrum is that terrorists are caught because of the system, but many innocent people are subject to surveillance and searching. At the other end, no one is caught because they slip through a timid test, and more people are hurt as a result. Clearly, this can easily turn into a much more complicated debate about the values of time, trust, privacy, and life, so I won't try to determine the correct balance myself. Although it's easy to describe some aspects of this conflict with numbers, as Marcus did, it just isn't that simple.

Mining Mystery: Should we mine Student Data for more Protection?

Morris's central argument revolves around using student data mining to counter possible future threats. He calls this "the next natural step" in using private information to prevent external threats. Morris goes on to detail how administrators could track social media usage, shopping patterns, and other online activity in order to assess whether a credible threat exists.

 

The central issue in this debate is the tension between privacy and security. Are students' rights to privacy outweighed by administrators' need to provide safety and security for their students? This question isn't limited to college campuses; it can be applied to society as a whole. Discussing the role of authority, particularly governments, in our daily lives is of the utmost importance and a constant ideological struggle. I both agree and disagree with Morris's argument. It's important for administrators to do whatever is necessary to protect their students, but violating students' privacy is not the way to go. Aside from the obvious moral dilemma, such an act could give more power to authority and reduce self-accountability. Allowing the administration to monitor what students do online would lead to mistrust; dangerous, secretive behaviors; and a need for students to "hide" what they are doing online. A common-sense solution would combine certain aspects of Morris's argument with the other side's. Allowing the student population to decide which aspects of their online lives they want monitored would lend more credibility to the administration's efforts to increase safety, as well as provide increased trust in and accountability of authority.

 

How much power we are willing to give authority is a central question of modern society, and no definitive answer exists. The best possible solution takes into account both sides' arguments and will help administrators provide better security while also protecting student privacy.

 

Data Mining: It's Already Happening, So Why Not Push It Further

In the essay "Mining Student Data Could Save Lives," by Michael Morris, the central argument is essentially that a variety of online platforms already use data mining to decide what to advertise to users; since this is the case, why not allow colleges and universities to use the same technology to identify when a student is showing unhealthy, worrying, and potentially dangerous behavior through their internet usage?

At first, when I began reading the essay, I already had it set in my mind that colleges and universities being able to see what students were doing was an invasion of their privacy, simply because it is so easy to abuse that power. But as I continued reading, Morris's points about how shopping sites and social media platforms already mine data quickly changed my viewpoint.

Just as I can Google dresses and later have dresses advertised to me on Facebook, students can shop for guns or stalk faculty (as Morris said) and have that information available for their university to see. And even though this is not one hundred percent foolproof or guaranteed to prevent tragic events from happening on campuses, it is still a good step toward ensuring a little more safety and security on campus.

Careful Campus

After recently watching Citizenfour, I find myself being much more cautious about what I search on the web. I do this not because I have anything to hide, but because people do not act the same when they believe, or in this case know, they are being surveilled. These podcast episodes did not exactly put my mind at ease either. With problems such as ransomware and botnets, it seems a lack of knowledge could cost the average citizen a lot more than a few lost files. Therefore, the question remains: how do we protect ourselves from these cyber attacks?

College students around the world use their devices primarily for social media. Some of that content is private in the sense that you only want a select few people to be able to view it. So, how do we protect our accounts? The best way is also the simplest: long, complicated passwords. The more random and lengthy the password, the harder it is for an attacker to gain access. Another caution brings me back to the video we watched about the reporter who visited "hacker-con" in Russia. To show the ease and speed with which an attacker can infiltrate a device, the interviewees set up a fake wifi network under the hotel's name. The reporter logged on to the wifi, and the attackers were then able to work their way through the rest of her passwords and locks with ease. If I could offer two pieces of substantial advice to fellow college students, they would be these: use strong passwords, and always be vigilant about what your device connects to.
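As a rough illustration of why long, random passwords help, the sketch below estimates the brute-force search space for a few password styles in Python; the character-set sizes and the attacker's guess rate are assumptions for the sake of the example, not measurements.

import math

# Rough brute-force estimate; the guess rate is an assumed figure for illustration.
def crack_time_years(length, charset_size, guesses_per_second=1e10):
    combinations = charset_size ** length
    seconds = combinations / guesses_per_second
    return seconds / (60 * 60 * 24 * 365)

for length, charset, label in [(8, 26, "8 lowercase letters"),
                               (12, 62, "12 letters and digits"),
                               (16, 94, "16 mixed printable characters")]:
    bits = length * math.log2(charset)
    print(f"{label}: ~{bits:.0f} bits of entropy, "
          f"~{crack_time_years(length, charset):.1e} years to exhaust")

The exact numbers matter less than the trend: each added character multiplies the search space, which is why length and randomness beat cleverness.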

 

Judging criteria for the debate

As a judge of the debate, I would like to lay out several issues as the judging criteria for Monday's debate.

First of all, the basic points for the pro team and the con team must be explicit and reasonable. In the first round, each team must build at least one solid point of view, which should be prepared well before the debate. What I expect to hear is a genuine case for citizens about which matters more, privacy or security, and why. The most effective form of speaking combines points with examples in order to make each point more convincing.

Secondly, after hearing the opposing team's argument, each team should identify the opponent's core statement and build an effective counterpoint to it. For example, if the pro team states that electronic surveillance could help track criminals, I expect the con team to respond that the system does not always work and that false positives might lead investigators in the wrong direction and harm innocent people.

Thirdly, each team should also understand the possible weaknesses of its own points. If they can point out those weaknesses themselves and concede them, they effectively eliminate a possible line of attack for their opponents. Both teams should prepare these ideas well before the debate so that they can react quickly in class.

People can rarely convince others completely, but they can use their ideas to influence others' thinking, at the very least getting others to agree with part of their points and to consider the issue from new angles. If a team manages this, it will likely have done better than the opposing side and won the debate.

