Cryptography

The History and Mathematics of Codes and Code Breaking

Author: Julius Tabery

Privacy Is a Right, Not a Privilege 

I take issue with the way this debate is often framed: privacy versus security. It’s misleading to suggest that privacy is directly opposed to security; a more apt framing would be privacy versus surveillance. The point of this distinction is that, even given wide latitude to monitor the people of this country, government surveillance doesn’t necessarily make us any safer. The American people have been under what some would consider heavy surveillance for years now, and it has not demonstrably improved our security. What makes us think that expanding the reach of that surveillance would suddenly be more effective? The likelihood of some bad actor within the government abusing their power to invade the privacy of American citizens, as has already happened with the NSA, is too great to justify whatever security may or may not be gained by giving that bad actor more tools to work with.

Secondly, regardless of the effectiveness of surveillance, privacy is a right, plain and simple. By surveilling the American citizenry, the government violates that right on a national scale. I think the right to privacy should be stated more explicitly in the Constitution, but it is alluded to in the Fourth Amendment, and it is clearly a principle on which this country was founded, even if the founding fathers didn’t think of it in terms of privacy, since the technology of their time made this debate look very different. The intent behind “no warrants shall issue, but upon probable cause” is hard to mistake: being subjected to involuntary surveillance by our government is a breach of privacy, and therefore a breach of our rights as citizens of this country.

Social Media Is Basically Spy Training

“Rather than finding privacy by controlling access to content, many teens are instead controlling access to meaning.” (Boyd, 76)

Discussing this quote leads to some of the key differences between cryptography and steganography. While teens are openly publishing messages, only those with the context required to decipher those messages will be able to take any meaning from them. It’s as if they’re sending encrypted messages where the encryption method is not based on mathematics or the systematic rearrangement and swapping of letters, but instead on context and inside jokes. It’s a kind of “social cipher” whose key is a history of social interactions with the sender, rather than some series of letters or numbers.

Thinking about cryptic social media posts this way leads to another thought about the difference between cryptography and steganography. In steganography, it is not the contents of a message that are being hidden, but the existence of the message itself. Cryptography, conversely, makes the message unintelligible to a receiver unless that receiver has the key to decrypt it, but the message itself is never hidden. Posting something cryptic on social media can actually have characteristics of both. If a teen posts the lyrics to a song, the casual observer would just think that the teen likes that song, but that song may have some special meaning to someone else, who takes away a completely different message. The message being delivered here was in plain view and was only correctly interpreted by its intended recipient, which is a characteristic of cryptography. However, the fact that there even was a message other than “I like this song” was unknown to everyone except its recipient, which is a characteristic of steganography.
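To make the contrast concrete, here is a toy Python sketch (purely illustrative, with made-up example messages, not anything from Boyd): a Caesar shift stands in for cryptography, where anyone can see there is a coded message but can’t read it without the key, and a simple acrostic stands in for steganography, where the text reads as an innocent sentence and only someone who knows to look finds the hidden meaning.

```python
# Toy contrast, not a secure scheme: cryptography scrambles the content,
# while steganography hides the fact that a message exists at all.

def caesar_encrypt(plaintext, shift=3):
    """Cryptography: everyone can see there's a coded message, but not what it says."""
    out = []
    for ch in plaintext.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        else:
            out.append(ch)
    return "".join(out)

def acrostic_extract(cover_text):
    """Steganography: the cover reads as an innocent sentence; the secret is the
    first letter of each word, noticed only by someone who knows to look."""
    return "".join(word[0] for word in cover_text.split()).upper()

print(caesar_encrypt("MEET AT NOON"))                 # PHHW DW QRRQ (clearly a cipher)
print(acrostic_extract("Hope everyone loves pizza"))  # HELP (hidden in plain sight)
```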

Unintentional Facilitation Is Not Complicity

When Phil Zimmermann made PGP available to the world, he gave everyone with a computer access to secure and private communication with anyone else who had one. His goal was to give the public a way to communicate with the assurance that the contents of their messages were private, an assurance that had not been available since advancements in surveillance technology such as hidden microphones and wiretapping had been introduced. His goal was not to facilitate the dealings of criminals and terrorists; his interest was in the privacy of normal people who just wanted secure and private communication.

Of course, whether it was his intention or not, there’s no denying that PGP was used by criminals, terrorists, and whoever else had nefarious intentions they wanted to hide from authorities. Just because facilitating these people wasn’t Zimmermann’s intention doesn’t mean that it didn’t happen, but it seems unfair to place the blame for their actions on him. Just as we can’t blame the hardware store that sold the crowbar to the burglar who used it to break into someone’s house, or the winter apparel store that sold him the gloves and ski mask he used to hide his identity, we can’t blame the maker of a technology when that technology is used for harm. If the burglar from our metaphor also used a silenced pistol he bought on the black market in his heist, that’s different. The black market arms dealer who sold him the weapon had no illusions as to its intended purpose; he knew it would be used for a crime and sold it nonetheless. That arms dealer deserves to be charged with aiding and abetting the crime. In this analogy, PGP more closely resembles the crowbar, gloves, and ski mask than the gun. Zimmermann didn’t put PGP onto the internet to aid criminals; he did it to protect people’s privacy. The hardware store owner knows that crowbars can be used for breaking and entering, but that’s not why she sells crowbars, and she shouldn’t be charged with assisting the burglar. Zimmermann probably knew that PGP could be used by criminals, but that’s not why he published it, and he shouldn’t be charged with assisting those criminals.

How Darwin’s Theory of Evolution Applies to Cryptography

Public-key cryptography was invented by the academic researchers Diffie, Hellman, Merkle, Rivest, Shamir, and Adleman. They’re the ones who came up with the idea, and they’re the ones who created the functions that made it work. Here’s the issue: British GCHQ researchers Ellis, Cocks, and Williamson did all of those things too. The only difference between the two groups is that the GCHQ researchers couldn’t publish their work because it was classified.

The phenomenon that occurred here happens in another science: biology. There, it’s known as convergent evolution, the independent evolution of some biological feature by two different species. For example, echolocation evolved in dolphins and whales, but also independently in bats. Similarly, birds, bats, pterosaurs, and insects are not closely related to each other, but they all have wings. They don’t all share some great winged ancestor; they each evolved to fly because that’s a useful thing to be able to do. The inability to fly was a common problem for all of these animals, and independently, they solved it with the development of wings.

Similarly, the American academic researchers and the GCHQ researchers were each facing the problem of key distribution. Cryptography had advanced to the point where making a secure cipher was less challenging than arranging to share the key with the recipient of the cipher. Leading-edge cryptographers had arrived at the same obstacle at around the same time, and they each found the same (or a similar) solution to it. That solution came to be associated with the American researchers because the Brits were under oath; they couldn’t even share their findings with their families, much less file a patent. The fact that one group came up with public-key cryptography doesn’t mean that the other didn’t. The two groups independently arrived at convergent solutions.
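For a sense of what that convergent solution actually accomplishes, here is a minimal sketch of a Diffie-Hellman-style key exchange with deliberately tiny, insecure numbers (a toy illustration, not the researchers’ actual systems): two parties agree on a shared secret key without ever transmitting it, which is exactly the key distribution problem both groups were wrestling with.

```python
# Toy Diffie-Hellman key exchange with tiny, insecure numbers (real systems use
# primes hundreds of digits long). Nothing secret ever travels in the open.

p, g = 23, 5                   # public modulus and base, known to everyone

alice_secret = 6               # chosen privately, never transmitted
bob_secret = 15                # chosen privately, never transmitted

alice_public = pow(g, alice_secret, p)   # 5**6  % 23 == 8, sent openly
bob_public = pow(g, bob_secret, p)       # 5**15 % 23 == 19, sent openly

# Each side combines the other's public value with its own private number...
alice_shared = pow(bob_public, alice_secret, p)
bob_shared = pow(alice_public, bob_secret, p)

# ...and both arrive at the same shared key without it ever being exchanged.
assert alice_shared == bob_shared == 2
print("shared key:", alice_shared)
```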

Codebreaking Wins Wars

Hiroshi Oshima, the Japanese ambassador to Germany, played a big part in losing World War Two for the Axis. He had sent a series of messages home to Tokyo describing seemingly every military secret that Hitler could possibly have wanted to keep. Detailing the strengths and weaknesses of the German defenses along the northwestern coast of Europe must not have seemed like a big deal to him because his messages were encrypted. Unfortunately for him, those messages were deciphered. Later in the war, Oshima again unknowingly revealed to the Allies more crucial information that helped them win the war.

Oshima made several mistakes. His first was trusting in the security of his code. Being so sure that his communications were secure led him to reveal information that he might otherwise have guarded more closely. His second was that he didn’t try to make life any harder for the American code-breakers working on his messages. His messages were a “wordy, effusive, somewhat emotional, meticulous description of German fortifications along the northwestern coast of France, from Brittany to Belgium and everything in between” (Mundy, p. 297). As anyone who’s ever tried to decipher a chunk of ciphertext knows, the more ciphertext you have to work with, the easier your job is. By being both wordy and specific, Oshima gave code-breakers a gift: they could figure out what he was saying because he said so much, and what he said was immensely helpful to the Allies in the war effort. The information gained from Oshima and from other Axis communications in Europe gave the Allies a leg up in the war and contributed to the success of their D-Day invasion of Normandy.

Mundy, L. (2017). Code Girls. New York, NY: Hachette.

For Lack of a Better Title

Listening to these podcasts, I was intrigued by the 99% Invisible episode “Vox Ex Machina.”

I think Roman Mars does an excellent job of holding his listener’s attention. This is not to say that the subject matter would be boring or uninteresting but for its presentation; rather, his ability to establish an idea, get the listener invested in it, and follow through on it keeps the story at the forefront of the listener’s attention. At the very beginning of the episode, he introduces the story of the Voder, a machine introduced in 1939 that could synthesize the human voice. After introducing it and talking about it for a bit, Mars moves on to something that at first seems completely unrelated, and then makes the connection. Rather than introducing the main idea and then discussing related information, he starts with the related information and then explains how it’s connected to the main idea. The podcast is also broken up well. As lovely as his voice is, it would be harder to pay attention if the podcast were just Roman Mars talking at me for 25 minutes. The fact that he uses sound bites from his interviews and introduces different voices helps him break up the show and also creates the feeling that multiple perspectives are being brought to it, instead of one guy just talking about what he thinks. This is clearly something Anna Butrico thought about when she used so many clips from other podcasts and had a friend voice Aristotle. 99% Invisible has been going for a while, so it’s no surprise that it’s a well-produced show.

Not as Easy as It Looks

When reading Singh’s The Code Book, it can be easy to lose track of how difficult it can be to break a piece of ciphertext. Remember that a simple monoalphabetic substitution cipher took us a fair amount of time and careful consideration to break, even with all of the advantages we had as cryptanalysts: we knew, or at least were fairly sure of, the method used to encrypt the plaintext, we used methods that had already been invented and documented to break the ciphertext, and the plaintext was chosen specifically to be broken because its purpose was to teach us cryptography, not to communicate military secrets.

Contrast our situation with real-world cryptography and it makes a little more sense why cryptanalysis is so difficult. Firstly, when a new cipher is invented, cryptanalysts have no starting point, no angle from which to approach the problem, and no way to tell if the piece of ciphertext they’re working with is indeed ciphertext or if it’s just gibberish sent out to throw them off the scent. Secondly, even when the method of encryption is discovered, a way to crack it doesn’t just materialize out of thin air. Remember that it took more than a few centuries for the Arabs to invent frequency analysis. Thirdly, a good cryptographer will keep encrypted messages short or confusing or both in order to minimize the amount or the helpfulness of the reference material that cryptanalysts have to work with.
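As a rough illustration of that last point, here is a minimal Python sketch of the first step of frequency analysis, run on a short, made-up Caesar-shifted message (my own example, not one from Singh): tallying letter counts is easy, but with only a few dozen letters of ciphertext the counts are too noisy to be trusted, which is exactly why short messages are harder to break.

```python
# First step of frequency analysis: tally the letters in a ciphertext and line
# them up against typical English letter order. The example text is a made-up
# message encrypted with a Caesar shift of 3, purely for illustration.

from collections import Counter

ENGLISH_ORDER = "ETAOINSHRDLU"   # roughly the most common English letters, in order

def letter_counts(ciphertext):
    return Counter(ch for ch in ciphertext.upper() if ch.isalpha())

ciphertext = "PHHW PH DW WKH EULGJH DW PLGQLJKW"   # "MEET ME AT THE BRIDGE AT MIDNIGHT"
for cipher_letter, count in letter_counts(ciphertext).most_common():
    print(cipher_letter, count)
print("English order for comparison:", ENGLISH_ORDER)

# The most frequent ciphertext letters are the first candidates for E, T, A, ...
# With only a few dozen letters these guesses are shaky; with pages of wordy
# ciphertext they become very reliable.
```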

With the benefit of hindsight, any code can seem simple to crack, but we should remember that it often takes the best cryptanalysts in the world years or even centuries to defeat a good code.

The Walls Are Very Porous

Jeremy Bentham’s famous idea was the Panopticon: a hypothetical prison design in which all inmates could be seen and observed by those in charge, but the inmates themselves could not see the observers, nor could they see any other inmates. It’s an interesting concept to think about in theory, but it is not useful as a metaphor in our conversations about surveillance, and, as time goes on, its usefulness will only diminish.

There are two key features to the Panopticon that make it unique: the observer sees all, but is not observed, and those being observed are isolated from one another. The first feature fits fairly well as a metaphor into our conversations about surveillance. The observer (in this case, probably the government) takes information from the internet, from travel history, from any official record of our existence in the world, without our knowledge. We are observed, but we never see it happen.

Where the Panopticon metaphor breaks down is in the second feature: those being observed are isolated from each other. In the conversation of surveillance, it’s unclear exactly what this part would stand as a metaphor for. People are more connected now than at any point in human history, and that is made possible by the same technology that makes modern surveillance possible. Instead of building metaphorical walls between us, the internet gives us access to each other like nothing ever has. It’s called the information superhighway for a reason: it instantaneously connects us from across the world.

For the Panopticon to be a more useful metaphor, I would suggest a tweak to the design: make the walls between inmates out of glass. Better yet, remove them entirely.

Nothing to Fear but Fear Itself

In Little Brother, Marcus, the main character, frequently argues with his father over the matter of whether we should give up some of our personal freedoms and privacies in order to grant more power to those seeking to prevent harm from threats like terrorism. It’s a difficult debate that I have occasionally had with myself, and I’ve never quite come to a conclusion, but in one of those arguments, Marcus raises a great point: are we really hurting the terrorists by adding security?

The main goal of terrorism is there in the name: terror. Terrorists want to scare people, to make them feel unsafe. That’s why their attacks always come in such violent and public forms. One terrorist organization cannot possibly hope to kill each and every citizen of the United States of America, but they could quite possibly make us all fear for our lives.

Marcus’s point is this: by adding more checkpoints, more data mining, more tracking, more security, less privacy, are we really acting against the terrorists? Would you really feel safer if the police considered you a potential terrorist and had eyes, ears, and possibly guns pointed in your direction at all times? If they consider you and everyone you know a suspect, then you yourself might begin to suspect those around you.

Suddenly, everyone you see on the street is a potential murderer.

Suddenly, you aren’t sure if you should eat at a particular restaurant because there aren’t any open seats near the door. What if someone inside started shooting?

Suddenly, you have to think long and hard about accepting a job offer because you would have to take the subway on your commute. Sure, the pay is better, but what if a bomb went off while you were underground?

In an effort to prevent terrorist attacks, law enforcement can inadvertently carry out the end goal of those attacks: terror.

 

The Crystal Ball Is Cloudy

Michael Morris argues that, by mining student data and examining the digital footprints students leave in their day-to-day lives, universities could prevent violence from occurring on campus. This belief is founded on the idea that students intending to commit violence might leave some evidence of their bad intentions in their online actions. Morris rightly suggests that, if a student has expressed strong negative opinions about a particular professor, shopped online for weaponry, and drafted a suicide note, there is cause for concern.

Morris provides many examples of how data mining could protect against student violence, but he neglects to address in detail the privacy concerns raised by opening this information to university authorities. While many of the arguments Morris makes are valid, the article generally seems overly optimistic about student data mining. It glosses over concerns about privacy and about false accusations. Morris uses the example of credit card companies tracking spending behavior to detect fraud. This is a practice I support, as it can frequently prevent the owner of the card from having money fraudulently taken, but credit card companies are not one hundred percent accurate. Sometimes the owner of the card has a purchase declined because the company misidentifies legitimate spending as suspicious. In the case of credit card companies, this is fine: the owner can simply inform the company that there was no fraudulent spending, and the matter is resolved. However, in the case of predicting student violence, the stakes are much higher. If a student’s actions are falsely identified as those of a future murderer, that student’s life can be permanently altered by the accusation.

While I have my criticisms of the viewpoints expressed in this article, I do not necessarily disagree with it completely. The issue is a complex one, and I don’t believe there is a single correct answer that can address all of the different concerns and competing priorities involved in deciding whether or not to go forward with mining student data. It’s a complicated question that would take much more than 400 words to even begin to answer.

