The History and Mathematics of Codes and Code Breaking

Author: mohitesj

A Matter of Faith

It’s easy to subscribe to the idea that a government that remains aloof from the business of its people best safeguards the individual’s right to privacy; modernity is full of examples of what too much government oversight can lead to, from China to North Korea. Yet, though seemingly analogous, the cases of China and North Korea offer no pertinent information about how an increase in the US government’s latitude to watch its citizenry would actually play out. Given the current sociopolitical climate of the nation and the state of advancing technology, and independent of the examples of other nations, it is clear that the US would actually benefit from increased government surveillance.

One area where the US population would benefit is law enforcement. One needn’t look further than the case of the Golden State Killer to see how law enforcement can leverage advancing surveillance and tracking technologies to catch notorious killers and hold criminals accountable, even decades after the fact. Further, with criminals able to leverage advancing technologies to further their own malicious goals, the ability of police to track and identify threats must increase accordingly, or else law enforcement will be powerless to stop, or at the very least mitigate, potential damages. Altogether, it’s not hard to see the potential benefit of allowing the government to leverage advancing technologies in a manner that increases its ability to watch the populace.

Yet, there still exists resistance to the idea, borne of an inherent distrust of government and its actions. A principal argument against granting increased power to the government is that it will inevitably lead down a slippery slope into fascism; as the adage says, “power corrupts, and absolute power corrupts absolutely.” However, who’s to say that, under that framework, the government of the United States of America is not already too powerful? With the largest defense budget in the world as well as an army, navy, and air force that can be mobilized in an instant, no right bestowed upon the individual could possibly stop the U.S. from becoming a fascist state if it so chooses; the events in Hong Kong could as easily play out in New York, San Francisco, or Los Angeles should the government as a whole see fit. So what’s stopping it? Our democratic institutions, from freedom of the press to democratic elections, and many more beyond. So long as we don’t allow increased surveillance to erode the fundamental principles that uphold our democracy, it will not be the first domino in a long chain of events that turns the US into yet another totalitarian regime. Rather, it will be a tool to augment national security and keep innocent people safe in an age that seems to grow more dangerous by the minute.

The Price We (Force Others to) Pay

In an episode of Leading Lines, Professor Gilliard raised the point that privacy infringements in the United States can have consequences that transcend national borders. The example he provided: the oppression of the Uyghurs in China.

At one point in the episode, Professor Gilliard mentions FaceApp, an app available to American consumers through the app store that let users make AI-powered edits to photos of their faces, and which stored users’ facial data and transmitted it to foreign servers in China. For us, this kind of privacy infringement doesn’t necessarily have any immediate consequences, but there is a group currently paying a steep price for our negligence: the Uyghurs of China. Gilliard mentions how the data collected by FaceApp was leveraged by the Chinese government to train its facial recognition algorithms and ultimately augment its ability to locate Uyghurs and send them to “reeducation camps.” For the West, such a grim reality is a drastic departure from what we consider the true threat of growing government infringement on personal privacy; most Western discussions of privacy infringement eventually turn into hyperbolic debates about an inevitable slide into a 1984-esque police state, debates that focus primarily on conjecture about how the future may or may not turn out given actions taken in the status quo. For China, however, these repercussions are unfolding today, and complacency about the privacy of our data in the West is contributing to misery and suffering on the order of millions of people.

Consequently, China must serve as a wake-up call for what a government given too much power over privacy can and will do. No longer should the debate surrounding government infringement of privacy revolve around what-ifs and conjecture; rather, it should cite China as an inevitable terminus for a government given too much power and too much information.

What Privacy Means for the Modern World

Public discourse around privacy often centers on hiding or opting out of public environments, whereas scholars and engineers often focus more on controlling the flow of information. These can both be helpful ways of thinking about privacy, but as philosopher Helen Nissenbaum astutely notes, privacy is always rooted in context (Boyd 60).

In this quote from It’s Complicated, Danah Boyd points out an important disconnect between two definitions of privacy: that of the lay public and that of the scholars and engineers tasked with determining the minutiae of the definition itself. Identifying this disconnect is critical to the discussion of privacy, as the disconnect itself precludes meaningful discourse on how to implement privacy measures that satisfy all involved parties. While philosopher Helen Nissenbaum offers a more philosophical view, the triviality of her statement once again fails to advance any useful discourse on what privacy truly is; saying “privacy is always rooted in context” is a generality that does nothing to establish a set of axioms from which we could derive a general sense of what privacy is.

So then, what is privacy? Or rather, what are some common features of this ethereal concept we refer to as “privacy”? For this, we can return to Boyd’s distinction between two different views: that of the public and that of scholars and engineers. For the public, privacy is the ability to hide certain personal details from the public eye or scrutiny. This sounds simple enough, but the definition falls apart with regard to private third parties. Suppose, for example, that a teenager doesn’t want their parents snooping on their private social media feeds, accounts understood to be available only to a select group of people chosen by the teen themselves. The parents in this situation act as a private third party and, under the aforementioned definition of privacy, should be allowed access to these accounts. Ask any teen whether they would grant their parents access to their social media, however, and you’ll be met with a zealous “No.”

So then, if this definition fails to address certain cases, we must turn to the scholarly definition, wherein the actor has control over the flow of their personal information. This definition, however, also has its faults, faults that have grown more apparent with the advancement of the digital age. We’ll examine them in the context of a teen’s social media feed once more. Consider the case where a teen posts information to a select number of carefully curated followers: close friends and acquaintances, among others. Suppose one of those friends then wishes to share the post with their friends, and so on and so forth. Here we see the scholarly definition of privacy fall apart at the outset: as soon as the teen posts the information, they relinquish all control over its flow.

As such, we see that both definitions of privacy fail in an increasingly connected world, but they do provide us with a general sense of what privacy means in practice: privacy can be loosely defined as the ultimate freedom to choose exactly who can view one’s personal details. While such perfect privacy may never be achievable, defining it this way can ultimately lead to constructive discourse on how to approach that ideal, despite the increasingly abundant pitfalls created by a digitizing world.

The Perils of Perfect Security

The idea of perfect security is a tantalizing one on the surface. It guarantees anonymity and protection from unwanted attention; it facilitates and protects a bedrock of democracy: freedom of speech. Altogether, it’s no surprise that, in the interest of preserving the core values of democracy, people would want perfect security for their digital communications. However, perfect digital security comes at a price, one that society may not be willing to pay.

As Simon Singh argues in The Code Book, once PGP became a widespread method of encrypting civilian communications, it became clear to the American government that such a tool could be employed by malicious entities to mask their activities. In this vein, Singh provides two compelling arguments for why perfect security may not be in the best interest of the people. First, he raises the problem of evidence gathering in a court of law. Singh notes that, during the 1920s, police forces actively used phone wiretaps to listen in on communications and gather incriminating evidence. These practices were upheld by the Supreme Court and widely accepted, and they helped the police do their job more effectively. With the advent of digital communications and perfect security, the police would lose this avenue of investigation, stunting their ability to collect evidence in a discreet and non-invasive fashion. Police would instead be forced to gather evidence physically, which may even put lives on the line that need not be at risk in the first place.

Second, on a national security level, Singh shows how international and domestic terrorist groups have used and will continue to use modern encryption technology to keep their plans and communications private and untraceable. Using examples like the Tokyo subway gas attack and even the computer of a World Trade Center bomber, Singh paints a dark picture in which terror attacks can be planned and executed with little in the way of countermeasures, ultimately putting innocent lives at risk.

As such, it’s clear that while perfect security is attractive on the surface, the inability of the proper authorities to covertly access information when the need arises puts innocent lives on the line. Altogether, it’s a steep price to pay for not wanting anyone to read your emails.

For The Greater Good

The National Security Agency has been criticized for decades due to the very nature of its purpose; no one likes the idea that someone can read their emails, listen to their phone calls, or act as an observant third-party on any private two-way communication. But, at the end of the day, so long as the government in and of itself is not a bad actor, the NSA’s sole purpose is to facilitate the protection of the citizenry.

Enter the Data Encryption Standard (DES), a cipher for the computer age designed as a joint venture between IBM and the NSA, which ran each block of text through sixteen rounds of enciphering. While simple enough on the surface, the technique created billions upon billions of possible keys, so many that even the most state-of-the-art computers of the time would have trouble cracking it. So what’s the problem? Wouldn’t it be a good thing that, after so many years, civilians finally had access to perfect privacy? Well, not if it’s the height of the Cold War; not if Soviet agents could use that very same ultra-secure network to plot attacks or demonstrations to undermine Western democracy.

The NSA, vigilant as ever, took notice of this inherent risk and handicapped DES, leaving it susceptible to brute-force attack from their machines but relatively impervious to commercially available computers. This way, the NSA could still intercept messages sent over private networks and monitor their content while still allowing a degree of security from unwanted prying eyes. In this sense, the NSA’s decision to handicap DES was justified, as its reasoning was in line with its cardinal purpose: facilitating the safety and security of the citizenry. In leaving DES too complicated for commercial computers to crack, the NSA even enhanced civilian privacy without contradicting its inherent purpose. To this end, the NSA was justified in its actions, as building in a weakness was meant not to destroy the concept of digital privacy entirely, but rather to better enable its ability to intercept and act on potentially malicious communications; the decision was ultimately for the greater good.
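The asymmetry described above, a keyspace too large for commercial machines but within reach of specialized government hardware, can be made concrete with a rough back-of-the-envelope sketch. The 56-bit key length is DES’s actual effective key size; the key-test rates below are hypothetical round numbers chosen purely for illustration, not figures from any source.

```python
# Back-of-the-envelope comparison of brute-force search times against DES.
# The key-test rates are hypothetical and chosen only to show the asymmetry.

DES_KEY_BITS = 56
keyspace = 2 ** DES_KEY_BITS  # ~7.2e16 possible keys

SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_search(keys_per_second: float) -> float:
    """Years needed to try every key in the space at the given rate."""
    return keyspace / keys_per_second / SECONDS_PER_YEAR

# A commercial machine testing a million keys per second would need
# over two thousand years to exhaust the keyspace...
commercial_years = years_to_search(1e6)

# ...while an agency with specialized hardware testing a trillion keys
# per second could finish in under a day.
agency_years = years_to_search(1e12)

print(f"keyspace:           {keyspace:.2e} keys")
print(f"commercial machine: {commercial_years:,.0f} years")
print(f"state hardware:     {agency_years * 365:.1f} days")
```

A factor-of-a-million gap in hardware turns an impossible search into a routine one, which is exactly the gap the NSA relied on when it fixed the key length where it did.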

Mean Girls (WWII Edition)

Arlington Hall, the epicenter of the American codebreaking effort, was a densely populated pseudo-tenement housing some of the brightest and most flexible minds the country had to offer. Of course, with such a high population of men and women living together in close quarters, gender played a significant role in both the codebreaking efforts and the daily lives of the residents of Arlington Hall.

The most prominent aspect of the gender dynamic at Arlington Hall was complaining. Most notable was the case of William Seaman, who consistently complained of being the target of a cliquish group of female coders who bullied and harassed him. Many other men in the facility voiced similar complaints, especially about the college-educated women, a group that seemed to turn their noses up at a good portion of the civilians and other employees who populated the hall.

Further, gender also played a significant role in the jobs that men and women carried out on campus. Women were placed into every level of codebreaking on account of their skill in reading and interpreting languages and their general understanding of mathematics. At the same time, however, women also filled many of the mundane and rudimentary jobs, such as sanitation or security. The men staffed at Arlington Hall, on the other hand, held many of the higher-level positions, as any with the physical capacity to go to war had been sent away to fight. This left behind many men who, though deemed unfit for combat, could contribute more than their fair share to the codebreaking effort. Despite these differences, it would take the harmonious cooperation of men and women to thwart the Axis’s cryptographic efforts and ultimately win the Second World War.

Ted Cr- I Mean, The Zodiac Killer

The podcast on the Zodiac Killer was well made and well produced. From a technical standpoint, the intonation, projection, and fluency of the speaker made it very enjoyable to listen to. Oftentimes, even when simply reading from a script, people falter, trip over words and phrases, and stutter, all of which detract from the listening experience. These kinds of mistakes also decrease the ethos of the speaker, making them seem unprepared or nervous. None of these issues was present in the Zodiac Killer podcast, as the speaker delivered her message with clear intonation and projection and few, if any, stutters or mistakes. In addition to her speaking style, the incorporation of background music made the podcast that much more enjoyable: rather than one person droning on about a subject, the background music supplements the voice of the speaker. Altogether, I hope to incorporate these two aspects of this particular podcast into my own, as they seemed to enhance its efficacy and make it more engaging overall.

Beyond merely technical details, the content of this podcast also piqued my interest. Having seen many documentaries and movies about serial killers – even ones about the titular killer – it was interesting to learn still more about one of the most notorious killers in American history. It was even more fascinating to learn about the killer’s extensive use of cryptography in hiding his killings and communicating with the police and those who would have liked to stop him. Altogether, the communication of relevant information in an easy-to-listen-to manner made this podcast interesting and enjoyable, and gave me ideas on how to create my own.

Hall’s Choice

Admiral Hall of Room 40 – Britain’s analog to the American Black Chamber – was faced with an impossible choice during World War I: immediately release the Zimmermann Note to the Americans and risk the Germans developing new, more secure ciphers, or hold on to the note until the perfect moment, potentially risking thousands of innocent American lives. While Hall’s final decision was morally duplicitous at best, it was certainly the more ethical of the two options, and one that, on net, saved more lives and brought about the end of World War I.

The choice to release the Zimmermann Note to the Americans was one of the most pivotal decisions made during the Great War. To fully comprehend the implications of Hall’s decision, we must analyze the logical end of both options he was presented with. First, we’ll consider the case where Hall releases the Note immediately. As history has proven, upon receiving the Zimmermann Note, American politicians almost unanimously moved to go to war, with Woodrow Wilson even reneging on his campaign promises and urging Congress to approve an official declaration of war. Of course, such a momentous decision would be heard the world over, documented by every major news source on the planet, especially in Germany; these stories would likely also detail why the United States changed its mind: the Zimmermann Note. It would not take the German government very long to deduce that its encryption techniques had been broken, forcing it to engineer new ways to encipher its confidential information. With German intelligence forced to upgrade in such a way, Great Britain would have been swamped with a sudden influx of ciphertexts it could no longer read, each demanding resources to decipher. The British would be stuck playing intelligence catch-up, as the Germans would be able to move their troops around freely without the Allies knowing. This would clearly lead to a colossal loss of life on the Allied side.

On the other hand, history has shown that Hall’s strategy ultimately paid off, and that the civilian ships sunk by the Germans’ aggressive U-boat campaign were few and far between. Therefore, in true utilitarian fashion, history will and must regard Admiral Hall’s choice as ethical insofar as it mitigated an excessive loss of life and expedited the end of the war.

We Don’t Care That We’re Being Watched

The principal problem of the Panopticon metaphor is rooted in Bentham’s original purpose for the structure: behavioral modification. As Walker puts it, Bentham believed that the mere act of being watched constantly would alter a person’s behavior, adding a layer of accountability and therefore pushing the person in question toward a more moral or socially acceptable course of action.

As Walker points out, however, modern surveillance is completely incompatible with this idea. He uses the example of digital watchers overstepping their boundaries, but it is apparent that even in everyday, mundane examples of surveillance, people simply don’t change their behavior. Consider Facebook. It’s no secret that Facebook tracks and stores almost every bit of information its users will provide it (how else will Zuckerberg learn what it means to be human?). Following the Cambridge Analytica scandal, that knowledge became headline news; everyone knew Facebook was effectively spying on them. Since then, Facebook has gained almost 100 million users.

If people know they’re being watched, why do they opt into the system?

Simply put, it’s because it’s impossible to live without the system. The Panopticon may have been a prison, but technology is so integral to modern life that opting out simply isn’t an option. Beyond just Facebook, social media provides a fast and efficient communication system, and Google is the premier tool for finding information in the blink of an eye. These systems are unlike a prison in that we want and need to be a part of them to survive the modern world. They’ve made life easy and convenient enough that the expectation is that we use them to augment our abilities to both work and play. For that reason, the Panopticon is a defunct metaphor that cannot encapsulate the complexity of modern surveillance. It’s not just that too many actors watch us from the watchtower, but that we have to remain in the prison to maintain the standard of living we’re used to; we’ve collectively decided that the opportunity cost of opting out of the system is too great, even if opting out would preserve some semblance of privacy. Nor do we use these apps begrudgingly. People still love to browse using Google, wish their friends ‘happy birthday’ on Facebook, and post their latest fire selfie on Instagram.

Altogether, we just really don’t care that we’re being watched.

The Everest of Cryptanalysis

As Singh indicates in The Code Book, the Beale Ciphers have gone unbroken for over a hundred years, with the best and brightest minds of recent decades pouring hours upon hours into deciphering them. Unfortunately, their work has, as of yet, borne no fruit. This raises the question: why do people continue to attempt what has eluded the brightest minds of this generation and those long since passed?

I believe the answer is twofold. Of course, money is a key motivating factor. $20 million by today’s standards is quite a bit of cash, and would enable an individual to live quite comfortably for the rest of their days. In fact, as Singh points out, entire societies have formed around the goal of solving the Beale Ciphers, their membership contingent on how the treasure, should it be discovered, would be allocated among the members of said society; often, the people who crack the cipher believe they should have the right to keep all of it. For that reason, it is impossible not to acknowledge money and, by extension, greed as one of the key motivators driving people to crack the Beale Ciphers.

Beyond that, however, lie the intellectuals, those who see the Beale Ciphers as the ultimate challenge, akin to winning a Nobel Prize or Fields Medal. For them, the money is irrelevant; the Beale Ciphers serve as the perfect opportunity to affirm their skills as cryptanalysts and codebreakers. These people are likely driven by pure intellectual curiosity, much like Babbage and Poe, wanting to test their abilities against the hardest cryptographic problem the world has to offer. For that reason, their motivation for solving the Beale Ciphers is akin to George Mallory’s for climbing Mount Everest: because they’re there, they must be solved.

