Cryptography

The History and Mathematics of Codes and Code Breaking

Author: Maria Sellers

Mutual Trust is Key

In her book It’s Complicated, Danah Boyd essentially sums up the problem of privacy on social media in a single sentence: “What’s at stake is not whether someone can listen in but whether one should” (Boyd, 58). Some will claim that teenagers who overshare on social media forfeit their privacy by posting everything to the world. But public expression does not necessarily equate to a rejection of privacy. For many teenagers, social media is a platform for self-expression and growth. Should adults, particularly those in positions of authority such as parents and teachers, invade these spaces, teenagers will no longer be able to express themselves in the same way.

When I got my first social media account, my mom and I had a deal. I would give her my password and in return she would not use it unless I gave her a reason to. It was all about trust. I trusted my mom not to regularly spy on my Instagram account, and she trusted me not to post anything inappropriate. If I had found her logged into my Instagram on a random day for no apparent reason, I would have been offended. To me, that would signify that she didn’t trust me. Going back to Boyd’s statement, it wouldn’t have been a matter of whether my mom could access my Instagram, but whether she should. Unless she had a solid reason to suspect that my posts were inappropriate, logging on to my Instagram would be a violation of trust.

Parents will argue that they have to monitor their children’s online activity in order to protect them. Our society often reinforces this argument, going so far as to imply that parents who don’t monitor their children’s online activity are “bad parents.” Often, however, children aren’t actually doing anything that should be a cause for concern, and by spying anyway, parents effectively dismantle whatever mutual trust existed between them and their children.

Old Information in a New Way

The podcast “Cipher, or Greenhow Girls” from The Memory Palace was particularly intriguing and cleverly done. The producer incorporates music, details, and descriptions that draw the listener in. Instead of making you feel as though you are listening to a textbook, he weaves a narrative that is both relatable and interesting. Short segments of music underscore the key points of the story and break up the monotony of his voice. His detailed descriptions of images and places also help the listener picture the locations he describes, even without ever seeing them.

There are a few times in the podcast when the producer does not strictly adhere to historical fact. In his analysis of “Little Rose,” he raises questions and speculates about what she may have thought or felt later in her life. Since there is so little factual data on Little Rose, this is a good way to get the listener thinking about the story that isn’t there, the one that no one knows.

The topic, in and of itself, is fascinating. I had heard of Rose Greenhow before, but I had never realized the extent to which her daughter, Little Rose, was affected by her espionage. The producer not only presented new information, but presented it from an angle I had not heard before. That is the key to a good podcast: it does not necessarily have to be brand-new information, so long as it is presented in a novel way. In my own podcast, I hope to replicate that detail, description, and novel point of view for whatever topic I choose.

Too Much to Lose

Although German overconfidence played a major role in the success of Allied cryptanalysts, there were many other factors at play. One of the most significant reasons for Allied success was that the Allies had much more to lose. Marian Rejewski first cracked Enigma because the threat of a German invasion of Poland was extremely high. Whereas other countries, such as France, had given up on breaking Enigma, the Poles had too much to lose should they fail. Rejewski and his team spent a full year compiling a catalog of Enigma’s possible key settings. When it became clear that a German invasion of Poland was inevitable, they handed their work over to the British in the hope that it could be put to use.

As the Germans added features to strengthen Enigma’s encryption, such as additional plugboard cables, the Allies had to step up their game. Once again, they had too much to lose not to invest time and resources in cryptanalysis. For each message the British failed to decipher in time, thousands of lives could be lost. A message might reveal the location of the next air raid or where German troops planned to move; had the Allies known such information in advance, they could have evacuated areas or adjusted their strategies. It was therefore incredibly important that they be able to break Enigma. As a result, despite some reluctance from their commanding officers, the cryptanalysts at Bletchley Park were eventually given enough resources for Alan Turing to build his bombe, a machine that could reliably find the daily Enigma settings.
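To get a sense of the scale those added plugboard cables created, here is a quick back-of-the-envelope sketch in Python. It is my own illustration, but it reproduces the figure Singh gives for ten cables: roughly 150 trillion plugboard wirings alone.

```python
from math import comb, factorial

# Ways to wire 10 plugboard cables among 26 letters:
# choose the 20 letters that get swapped, pair them up, and ignore
# the order of the pairs and of the two letters within each pair.
cables = 10
swapped = 2 * cables
plugboard_settings = (
    comb(26, swapped) * factorial(swapped) // (factorial(cables) * 2**cables)
)
print(f"{plugboard_settings:,}")  # 150,738,274,937,250
```

A keyspace that size is exactly why hand methods stopped being enough and the work had to be mechanized.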

When the stakes are higher, people work harder. German overconfidence certainly helped the Allied cryptanalysts succeed; however, without the imminent German threat, it is unlikely that people like Marian Rejewski and Alan Turing would have had the dedication or the resources, respectively, to break Enigma. Without cracking Enigma, the war could have turned out very differently.

Is There an Answer?

Looking at this display from the Newseum, what stood out most to me on the board was the person who wrote that they would sacrifice “some privacy.” What part of privacy was this person referring to? Texts, phone calls, emails, their location, or something else? Where is the line drawn? When does safety overrule privacy, and when does privacy once again become the priority?

It seems to me that there is a very fine line between what we are and are not willing to sacrifice for safety. For example, one person wrote that they wouldn’t want the government to have access to their location. Another wrote that they would sacrifice “as much as necessary to feel safe.” It is difficult, if not impossible, to define an amount of privacy that everyone is willing to give up for safety’s sake. Something that makes one person feel safer, such as mass surveillance of internet search histories, may make someone else feel uncomfortable and less safe. In cases like those, whom do we choose? Either choice leaves someone feeling unsafe. Whose safety is of higher value?

Perhaps the answer is that there is no answer. Perhaps it is impossible for everyone to feel safe at the same time. Some will claim that mass surveillance and legislation like the Patriot Act make people safer by helping the government catch terrorists and others who intend to inflict harm. However, these methods make some people feel less safe. If the government or anyone else were to abuse this power or misuse this data, there could be serious repercussions. Is there a line that can be clearly defined: this is an acceptable invasion of privacy, but this is going too far? Until we can answer that, the security vs. privacy debate will continue.

Necessity is the Mother of Public Use

Necessity is the mother of invention. It wasn’t until the mid-19th century that cryptography began to amass a large public following. According to Singh, it was the invention of the telegraph that made the use of ciphers common among the general public. Since telegraph operators had to read a message in order to send it, those who wished to send sensitive or private messages had to find ways to maintain their privacy. As Singh puts it, “The telegraph operators had access to every message, and hence the risk that one might bribe an operator in order to gain access to a rival’s communications” (Singh, 61). To protect their messages, many people began using simple monoalphabetic ciphers to encrypt them before sending. This was more expensive and more time-consuming, but the messages were unintelligible to the average nosy telegraph operator.
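To show just how simple such a scheme is, here is a minimal sketch in Python of a generic monoalphabetic substitution cipher — my own illustration, not any particular historical cipher:

```python
import random
import string

def make_key(seed=None):
    """Build a random monoalphabetic key: each letter maps to a unique substitute."""
    rng = random.Random(seed)
    shuffled = list(string.ascii_uppercase)
    rng.shuffle(shuffled)
    return dict(zip(string.ascii_uppercase, shuffled))

def encrypt(plaintext, key):
    """Substitute each letter; pass spaces and punctuation through unchanged."""
    return "".join(key.get(ch, ch) for ch in plaintext.upper())

def decrypt(ciphertext, key):
    """Invert the key to recover the original message."""
    inverse = {v: k for k, v in key.items()}
    return "".join(inverse.get(ch, ch) for ch in ciphertext.upper())

key = make_key(seed=42)
secret = encrypt("MEET AT THE TELEGRAPH OFFICE", key)
print(secret)                 # gibberish to a nosy operator
print(decrypt(secret, key))   # MEET AT THE TELEGRAPH OFFICE
```

With 26! possible keys, trying them all is hopeless, which is why these ciphers felt safe to the Victorian public; their real weakness, frequency analysis, was what professional cryptanalysts exploited.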

The public only became interested in ciphers once they had a reason to be: they needed to keep their information private. It is much easier to trust that a letter in a sealed envelope will reach its intended recipient unread than a message sent through another person, although, as seen with Mary Queen of Scots, even that is not always the case. Once ciphers became known to the general public, however, they quickly gained popularity. They were not only useful but also a fun diversion. Victorian lovers used ciphers to send each other notes in the newspaper, The Times was tricked into printing an unflattering encrypted comment about itself, and Edgar Allan Poe wrote a short story centered on cryptography, “The Gold-Bug.” Ciphers, albeit fairly simplistic ones, were suddenly everywhere. This is why even schoolchildren today come up with monoalphabetic ciphers like those that once stumped the cryptanalysts of the world. Ciphers have become a deeply ingrained part of our culture.

That being said, there is less interest in ciphers among the general public today. While we still romanticize ciphers and codes in movies, books, and other media, we don’t have the practical cryptographic skills that we once did. Phones and email have removed the middleman, the operator, from the equation; it appears there is no need to encrypt our messages anymore. While there is still some interest in cryptography, few people ever go beyond the simple monoalphabetic or shift ciphers of their schoolyard days.

Getting Our Priorities in Order

“The role of government is to secure for citizens the rights of life, liberty and the pursuit of happiness. In that order. It’s like a filter. If the government wants to do something that makes us a little unhappy, or takes away some of our liberty, it’s okay, providing they’re doing it to save our lives” (Doctorow, 209). This is an excerpt from Little Brother, a novel by Cory Doctorow. At this point in the story, a DHS-approved social studies teacher, Mrs. Andersen, has replaced Ms. Galvez, the regular teacher, and is explaining the Bill of Rights to the students from the DHS perspective.

Protecting life, liberty, and the pursuit of happiness “in that order” is not only useless but dangerous. If your first concern is always to protect your life above all else, then you shouldn’t get out of bed in the morning. Think of all the terrible accidents that could happen just by getting out of bed. Not that staying in bed is much safer: what if a tree falls on your house? When you sacrifice liberty and the pursuit of happiness out of fear, you are no longer living at all. If the government has the right to take away your freedoms at will, so long as it can claim to be “protecting life,” then there is no government action that is unjustifiable.

This is an obvious example of the slippery slope between privacy and security. If the Bill of Rights is treated as a set of guidelines, then what is to prevent ideas like these? The framers of our Constitution did not intend for the Bill of Rights to be flexible; these rights are absolute. When they become less than that, we open ourselves up to the very tyranny those rights are meant to protect us from. It’s time we recognized this, time we truly got our priorities in order, before it’s too late. If the government has unrestricted access to all personal data, then we have failed to live up to the ideals that make this country what it is. Even in the name of staying safe, of preventing terror, we would be causing it. “Innocent until proven guilty” could quickly become “guilty until proven innocent.” If we prioritize security over privacy, it may not be long before we are living in a world like that of Little Brother.

Mining Student Data Poses More Threats Than It Resolves

In the article “Mining Student Data Could Save Lives,” Michael Morris makes an interesting point about data mining on college campuses. According to Morris, since college students are already using accounts and internet access provided by the school, there is no reason that colleges should not be able to monitor student data for early warning signs of mental instability. Morris writes, “…the truth is that society has been systematically forfeiting its rights to online privacy over the past several years through the continued and increased use of services on the Internet” (Morris). That much is true. Between social media, Google searches, and smartphones, most of our lives are now digital. That does not mean, however, that I agree with Morris’s sentiments about colleges mining their students’ data.

It all comes down to a basic question of security vs. privacy. How much of our privacy are we willing to give up in the interest of staying safe? The better question might be: how much of our privacy can we give up while still staying safe? Who is to say that the school officials monitoring the data would be completely aboveboard? I realize that college staff are usually very trustworthy, but there are always exceptions to the rule. Imagine what one corrupt school official could do with access to all of that data. Additionally, once those back channels are established, what is to prevent an accomplished hacker from abusing them? Data mining may be intended to keep us “safe,” but it actually opens the door to a whole new set of problems that colleges may not be equipped to deal with.

It is also important to consider the consequences of false alarms. If a school decides that a student’s activity is suspicious, it will intervene. But what if the school is wrong? For example, I have had some strange Google search histories in the past. I have always wanted to write a murder mystery, and I have researched various poisons to see whether they would work in my plot. Should my college be monitoring my activity, those searches would likely be flagged as dangerous. Even if my search history were an exception to the rule, how would schools avoid adopting a “guilty until proven innocent” mentality in the interest of keeping everyone “safe”? Morris’s idea has good intentions, but it ultimately creates more problems and potential security threats than it solves.
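To see how easily false alarms arise, consider a deliberately naive Python sketch of the kind of keyword flagging a monitoring system might run. Everything here is hypothetical and invented for illustration; Morris does not describe any specific mechanism:

```python
# A deliberately naive keyword flagger, loosely like what a campus
# monitoring system might use. All terms and searches are invented.
SUSPICIOUS_TERMS = {"poison", "explosive", "weapon"}

def flag(search_history):
    """Return every search containing a 'suspicious' term."""
    return [q for q in search_history
            if any(term in q.lower() for term in SUSPICIOUS_TERMS)]

history = [
    "untraceable poisons for a murder mystery plot",  # a novelist's research
    "best pizza near campus",
]
print(flag(history))  # the harmless research query is flagged anyway
```

A system this crude drowns in false positives, and even a smarter classifier only shifts the problem: someone still has to treat the flagged student as a suspect long enough to investigate.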

Morris, Michael. “Mining Student Data Could Save Lives.” The Chronicle of Higher Education, 2 Oct. 2011, www.chronicle.com/article/Mining-Student-Data-Could-Save/129231/.

Finding the Balance of Confidence and Cryptography

The bigger they are, the harder they fall. In chapter one of The Code Book, Simon Singh states that “…a weak encryption can be worse than no encryption at all” (Singh, 41). When it comes to cryptography, this could not be more true.

A successfully encrypted message should be decipherable only by its intended recipient; otherwise it fails to accomplish its purpose. As a result, those responsible for encrypting the message must be certain that, without the proper key, it is indecipherable. This, however, is a dangerous assumption. Misplaced confidence can lull cryptographers and their intended recipients into a false sense of security, causing them to let their guard down. In the case of the Babington Plot, for example, both Mary Queen of Scots and Anthony Babington assumed that their cipher was unbreakable and spoke quite openly about their plans in their correspondence. As a result, when Thomas Phelippes managed to crack their cipher, he effectively signed their death warrants. Had Mary and Babington been less assured of the strength of their code, they would never have written out their plans as plainly as they did.
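Singh recounts that Phelippes broke the cipher with frequency analysis. Here is a minimal Python sketch of that first step — my own illustration, not Phelippes’s actual procedure; a real attack refines this initial guess using common words and context:

```python
from collections import Counter

# English letters from most to least common, per standard frequency tables.
ENGLISH_BY_FREQ = "ETAOINSHRDLCUMWFGYPBVKJXQZ"

def cipher_letters_by_freq(ciphertext):
    """Rank the ciphertext's letters from most to least common."""
    counts = Counter(ch for ch in ciphertext.upper() if ch.isalpha())
    return "".join(letter for letter, _ in counts.most_common())

def first_guess(ciphertext):
    """Map the cipher's frequency ranking onto English's as a starting point."""
    mapping = dict(zip(cipher_letters_by_freq(ciphertext), ENGLISH_BY_FREQ))
    return "".join(mapping.get(ch, ch) for ch in ciphertext.upper())
```

On a long enough message, the most common cipher symbol almost always stands for E or T, and much of the rest of the mapping falls out from there, which is exactly why “unbreakable” monoalphabetic ciphers kept falling.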

Much also depends on the abilities of the cryptanalysts of the time. For example, the Spanish cryptographers whom Singh discusses on pages 28 and 29 of his book believed their code to be indecipherable. When they discovered that their codes were, in fact, quite transparent to the French cryptanalyst François Viète, they could not accept it. They had been so confident in their ciphers that they went so far as to suggest that Viète was in league with the devil. Such overconfidence is a constant danger to cryptographers.

Confidence is one of the most basic conundrums of cryptography. On the one hand, if cryptographers are overly confident in their ciphers, they risk exposure should those ciphers be broken. On the other, if a cryptographer is not confident enough in a cipher, there is no sensible reason to risk using it for secret correspondence. The answer must lie somewhere in the middle: cryptographers must have enough faith in their own work to use their ciphers, yet remain wary enough to watch what they say.

Singh, Simon. The Code Book: The Science of Secrecy from Ancient Egypt to Quantum Cryptography. Anchor Books, 2000.
