Cryptography

The History and Mathematics of Codes and Code Breaking

Author: konacr

Whether we can or we should: an exploration of privacy in the digital age

“What’s at stake is not whether someone can listen in but whether one should.”

This quote from It's Complicated by Danah Boyd perfectly illustrates the complex role of privacy in an increasingly digital age. Where locked doors and hushed conversations once limited parents' intrusions into their children's privacy, the rise of public chat rooms, profiles, and pages on social media platforms has given parents far greater access to their children's online lives. One common argument parents make for monitoring their kids' social media is that it's publicly accessible, and therefore they can look at it. But that argument fails to account for whether or not they should look at it. I have the ability to run through Commons and make a scene while getting my breakfast; that doesn't mean I should, because doing so causes a public disturbance that violates social etiquette. It's this sense of social etiquette that drives our sense of morality, and it's what should prevent parents from excessively scrutinizing their children's online profiles without cause. This argument should be extended into the information age, evolving into a sort of digital etiquette. Even if online accessibility has increased, boundaries remain very real and should be respected no matter the medium of information exchange. It's well known that government agencies such as the NSA possess the tools to decipher our encryptions and monitor our messages; but doing so knowingly violates citizens' right to privacy without just cause and can become a slippery slope toward a surveillance state in which all communication is monitored. However dystopian that may sound, its effects are being observed in real time: increased violation of boundaries often leads to more secrecy and unexpected consequences.

Just because an action can be taken isn't reason enough to take it. Those who use this justification often have ulterior motives, and it's necessary that parents, authorities, and everyone in between recognize that boundaries exist and respect them. The "can" vs. "should" debate will no doubt persist, but I hope this blog post has helped clarify it with respect to privacy.

Was Zimmermann Guilty?

In an attempt to bring RSA-level security to the masses, Phil Zimmermann released Pretty Good Privacy (PGP). In doing so, Zimmermann ran into one large problem: the FBI had taken notice of his activities and was frightened. The bureau feared that Zimmermann's attempt to bring NSA-proof security to the masses would leave it unable to wiretap criminals and bring them to justice. Zimmermann eventually published PGP on the internet through a friend, which the FBI deemed "exporting munitions," since a foreign government or hostile power could easily have accessed it. This remains problematic for a number of reasons, but ultimately Zimmermann was wrong to publish the software, because he did so with the intent to deceive the US government and to provide top-grade security for all, law-abiding citizens and criminals alike.
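
To see what frightened the FBI, it helps to look at the design PGP popularized: hybrid encryption, in which the message is sealed with a fast symmetric cipher and only the one-time session key is encrypted with the recipient's RSA public key. Below is a minimal sketch of that general scheme in Python using the third-party cryptography package; it is an illustration of the idea, not Zimmermann's actual implementation.

```python
# A minimal sketch of the hybrid scheme PGP popularized: encrypt the message
# with a fast symmetric cipher, then encrypt that one-time session key with
# the recipient's RSA public key. Illustrative only -- not PGP's real code.
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Recipient generates an RSA key pair; the public half can be shared openly.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: encrypt the message with a fresh symmetric session key...
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"meet at the usual place")

# ...then lock the session key itself with the recipient's public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# Recipient: unwrap the session key with the private key, then decrypt.
recovered_key = private_key.decrypt(wrapped_key, oaep)
print(Fernet(recovered_key).decrypt(ciphertext))  # b'meet at the usual place'
```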

When anyone publishes anything on the internet, they should be prepared to face the consequences of that action. We've seen in the present how past videos or texts can come back to derail an established politician's career. Anything posted on the web never truly disappears, and people need to be aware of this fact. Critics argue that because Zimmermann hadn't actually sent the software to a foreign government, he shouldn't have been pursued by the FBI; but the fact remains that Zimmermann published his work in an attempt to deceive the US government. And another, more compelling argument remains: if country A sells weapons to country B, and country B is currently engaged in a genocide that A is aware of, then country A is at least partially to blame for providing the tools with which that genocide occurs. A key component of this argument is that those who provide the tools must know that their tools can and will be used to enact harm, and this was certainly true of Zimmermann.

In all, this question is a difficult one to answer; but if cryptographers publish software that circumvents the government's wishes and that they know will be used for harm, as Zimmermann did, then they are at least partially responsible for the consequences that ensue.

With advances in technology, the use of computers for encryption was no longer limited to the military and government. Increasingly, civilian businesses began using encryption and cryptography to encode their messages. In an attempt to standardize encryption across the United States, the National Bureau of Standards looked to Lucifer, an encryption system developed at IBM that was so strong it offered the possibility of cryptography that couldn't be broken even by the NSA. The NSA didn't want civilians to use encryption that it couldn't break, so it successfully lobbied to weaken Lucifer by reducing the number of possible keys. The adoption of this weakened Lucifer as the Data Encryption Standard meant that the civilian world had access to strong but not optimal security, and that the NSA could still break civilian encryption if it needed to.
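
The scale of that weakening is easy to see with a little arithmetic: Lucifer variants supported keys of up to 128 bits, while the standardized DES key was effectively 56 bits, and a brute-force attacker's work halves with every lost bit. The sketch below assumes a hypothetical attacker testing ten billion keys per second; the rate is an illustrative figure, not a real machine's benchmark.

```python
# Back-of-the-envelope keyspace comparison between Lucifer-strength (128-bit)
# and DES-strength (56-bit) keys. TRIALS_PER_SECOND is an assumed figure
# chosen only to make the two keyspaces comparable.
TRIALS_PER_SECOND = 10 ** 10
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_break(key_bits: int) -> float:
    """Average years to brute-force a key (half the keyspace on average)."""
    keyspace = 2 ** key_bits
    return (keyspace / 2) / TRIALS_PER_SECOND / SECONDS_PER_YEAR

print(f"DES, 56-bit key:      {years_to_break(56):,.3f} years")   # ~0.1 years
print(f"Lucifer, 128-bit key: {years_to_break(128):.2e} years")   # ~5e20 years
```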

The NSA was justified in pushing for the adoption of a mechanism that it could break, even if that meant less security for the civilian world. Allowing civilians and businesses access to encryption that no one but they could decipher would have meant an increase in criminal activity that governments couldn't even begin to monitor, reducing the safety of the populace as a whole. Living in a society, we often give up some rights for the greater good, and no right is absolute - my right to free speech doesn't allow me to yell "fire" in a crowded theater, for example. Thus, the mere knowledge that the NSA can still decrypt the messages that businesses send can act as a deterrent to secretive or illegal activity.

Critics like to point out that giving the NSA the ability to decrypt any message it would like would hand the government far too much power. But even though the NSA has the means to decipher an encryption, that doesn't necessarily mean it will. Billions of texts, emails, and calls are exchanged each day; the NSA has neither the means nor the resources to monitor every single message. It must instead prioritize possible criminal activity: activity it cannot detect and stop without the use of decryption. Thus, it is not only important but essential that the NSA be able to decrypt the messages of the business world in order to deter criminal activity and better protect our society.

The Rise of WAVES

Gender was extremely indicative of the role Americans played in the war. Men were given officer positions and extra privileges, and could be shipped overseas to fight on the front lines and on the islands. Women, meanwhile, were relegated to domestic jobs, and a select few were sent overseas to serve as nurses or in other support positions. By 1942, however, a domestic push had brought women into the war effort as more than passive observers. The women initially served at the rank of seaman, with fewer privileges than their male counterparts despite holding the same positions. But eventually, as more men were shipped overseas, the female codebreakers (who had set up shop in Washington, D.C.) outnumbered the male codebreakers, served in officer positions, and became more integral to the war effort as they deciphered a greater number of crucial Japanese messages.

Perhaps the most famous example of this rise of the WAVES unit (the female naval codebreakers) was the decryption of Admiral Yamamoto's itinerary. As more Japanese messages were intercepted, a group of women managed to decrypt parts of the travel itinerary of Yamamoto, the top Japanese commander who had orchestrated the attack on Pearl Harbor. As the days passed, the codebreakers pieced together the exact itinerary of the admiral's flight between Japanese-held islands, and Nimitz and other navy officials approved a daring American plan, dubbed Operation Vengeance, to intercept Yamamoto's flight and kill him. On April 18, 1943, American fighter planes caught the Japanese by surprise and, in a turning point of the war, shot down the bomber carrying Yamamoto.

The WAVES unit managed to keep quiet about their section of the war effort, telling outsiders that they merely worked in naval communications. Their persistence and effort eroded traditional gender stereotypes by proving that women could be capable in the military, and won women greater control and more freedom to participate in the war effort. Codebreaking was integral to the war, and female codebreakers played an especially crucial role in the Allied victory.

Analyzing Numbers Stations

What I found most interesting about the numbers stations episode of 99% Invisible was the eerie fact that many of these stations can still be widely heard today. Despite knowing that their messages can be intercepted by a wide array of individuals and agencies, the operators of the stations are confident enough in the strength of their encryption to broadcast on an open channel. The producers kept the audience interested in the content in a variety of ways: they brought in other speakers to help analyze these seemingly random messages and, most importantly, employed unsettling, off-kilter music and other sound effects when talking about the stations. This music kept the listener alert and interested in the material, a strategy that particularly paid off when the producers began discussing its more technical aspects.
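
That confidence is usually attributed to the one-time pad, the cipher numbers stations are widely believed to use: with a truly random key as long as the message and never reused, the broadcast itself reveals nothing. A toy sketch of the idea in Python (illustrative only, not any station's actual system):

```python
# One-time pad sketch: XOR the message with a truly random pad of equal
# length. If the pad is random and used only once, the ciphertext is
# information-theoretically secure -- safe to broadcast on an open channel.
import secrets

def otp(text: bytes, key: bytes) -> bytes:
    """XOR encryption/decryption; applying it twice recovers the message."""
    assert len(key) >= len(text), "pad must be at least message-length"
    return bytes(t ^ k for t, k in zip(text, key))

message = b"AGENT 7 PROCEED"
pad = secrets.token_bytes(len(message))   # truly random, used exactly once

ciphertext = otp(message, pad)            # what the station would broadcast
recovered = otp(ciphertext, pad)          # XOR with the same pad decrypts
print(recovered)                          # b'AGENT 7 PROCEED'
```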

When discussing the history of numbers stations or why they exist, the producers would have multiple people put forth their ideas and debate them, and then intersperse bursts of odd music or a numbers station recording. This gave the listener a sense of variety, kept them alert with the odd music and seemingly random messages, and provided more technical information on numbers stations. Based on this episode, I am interested in making a podcast that deals with some mysterious facet of cryptography, whether that be a specific example of cryptography gone wrong or the secrecy behind encryption more broadly. I would like to use multiple media, viewpoints, and stories to narrate my podcast, keeping the content "fresh" with variety while offering my audience more useful information and informative debates.

Ethical Implications of Wartime Actions

When Arthur Zimmermann was sworn in as Germany's foreign minister, America rejoiced at what it thought would be a new era of German diplomacy and a greater likelihood of peace in Europe. But little did Americans realize that the new foreign minister was intent on escalating Germany's aggression. Two years into the war, Zimmermann successfully lobbied to lift the ban on unrestricted submarine warfare. He believed that a new fleet of U-boats could force Britain's surrender within six months; the only issue was America's neutrality. The move would almost certainly push America toward the Allies, so Zimmermann devised a cunning plan: he would persuade Mexico to declare war on America, buying Germany time to win in Europe and prepare for an American campaign. But thanks to a clever move by British ships, Germany's undersea cables had been severed at the start of the war, so Zimmermann's encrypted telegram to Mexico was intercepted by the UK.

Admiral Hall's cryptanalysts deciphered parts of the telegram and correctly deduced Zimmermann's plan. But Hall decided not to tell America, for two reasons: he did not want to hand America an incomplete message and thereby miss vital information, and he did not want the Germans to figure out that Britain had broken their encrypted messages. Admiral Hall was justified in his decision not to give the message to America immediately.

The decision to allow unrestricted U-boat warfare would have gone through in either scenario, and such a drastic move on Germany's part might have been enough on its own to push America to fight for the Allies. But the more important reason Hall's decision was justified is that he was sacrificing short-term gains for long-term ones. If the Germans had known that Britain could crack their codes, that would have been impetus enough for them to develop stronger encryption; the British would then have lost a major source of intelligence, which could have proven disastrous, possibly fatal, later in the war. The long-term ability to know your enemy's plans, locations, and modes of attack is invaluable. In the end, when America chose to remain neutral even after the resumption of unrestricted U-boat warfare, Hall exploited the Zimmermann telegram to pull America into the war.

Although Hall's decision may seem unethical on the surface, its long-term benefits significantly outweighed its short-term costs.

The Panopticon

I both agree and disagree with Benjamin Walker's assertion that the panopticon is a faulty metaphor. The panopticon is a theoretical structure: a circular building surrounding a central watchtower. The watchtower shines bright light outward so that those inside it can observe the people in the building below, while the observed can never tell when they are being watched. Thus they must always assume that they are being watched. Originally conceived as a prison, the panopticon can be applied to a wide variety of situations.

In today's surveillance era, we are constantly tracked by cameras wherever we go; the cameras, as Walker argued of the watchtower, serve as a means of deterrence. The argument goes that if there is visual evidence of your actions, you will be discouraged from criminal acts. But in the digital realm, there are no visible "eyes" tracking us as we move from news apps to games to video-sharing websites. Instead, giant corporations and governments silently collect our usage data to build algorithms, some of which are meant to protect us from bad actors. Without visible digital eyes, we are more likely to engage in harmful behaviors that we believe are anonymous. This, I believe, is the biggest strength of the panopticon as a concept: the deterrence that comes from a constant state of being observed. Yet even though we know we are being watched today, we still act as if we were invisible. The watchtower itself is particularly interesting: it has migrated from a physical building to countless data-surveillance tools deployed by a variety of actors. The panopticon is very much real today in its surveillance sense; whether its presence (known to us or not) is normalizing or correcting our behavior is another issue.

Environmental Change in Cryptological Perception

Mary Queen of Scots fully believed that her cipher was unbreakable, so she laid bare, in writing, her plan to assassinate Queen Elizabeth and take the English throne. Thus, when her cipher was broken, there lay a written confession on the table, ready to send her to the executioner. This historical example fostered an environment of secrecy and mistrust in which cryptanalysts held power over cryptographers. Even someone who made a seemingly "unbreakable" code could not know whether an expert codebreaker was waiting to crack it. This never-ending cat-and-mouse game has continued through the centuries, always adapting and evolving. The knowledge that one's code could be broken fostered more caution on the part of cryptographers, who began writing messages that were cryptic even in plaintext, knowing that an expert codebreaker might crack the cipher.
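
Mary's cipher fell to frequency analysis: English letter frequencies are uneven (E, T, and A dominate), and a simple substitution cipher carries that fingerprint straight into the ciphertext. Below is a minimal sketch of the counting step, assuming an English plaintext; the sample message is my own toy example, not Mary's actual cipher.

```python
# Frequency analysis sketch: count how often each ciphertext symbol appears.
# In a substitution cipher, the most common symbol is a strong candidate for
# plaintext E, the next for T, and so on; cribs then confirm the guesses.
from collections import Counter

def letter_frequencies(ciphertext: str) -> list[tuple[str, float]]:
    """Return symbols sorted by relative frequency, most common first."""
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return [(sym, n / total) for sym, n in counts.most_common()]

sample = "WKLV LV D VLPSOH VXEVWLWXWLRQ FLSKHU"  # Caesar shift of 3, for show
for symbol, freq in letter_frequencies(sample)[:5]:
    print(symbol, round(freq, 3))
```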

This strategy was a direct consequence of the knowledge that someone more experienced might crack your code - after all, if that were the case, why not make the plaintext message itself harder to understand? Doing so added a layer of security and ensured more protection. This shift was a significant one in the history of cryptography, representing a transition to a more secretive, hard-to-decipher style of communication in which nothing was taken for granted.

Riemann Sum (feat. Technology)

Technology has quite literally transformed our lives. We live in an age of undeniable prosperity and freedom, where even our poorest live better lives than ancient kings. But in recent years, the very technologies that we use for pleasure have been turned against us by governments and bad-faith actors. Of course, we don't live in an era of absolute freedom; we agree to cede some of our rights for safety and security. For example, we as a society accept the use of surveillance cameras as a means of deterrence and protection, but are we ready to make the leap to facial recognition? We agree that police should use DNA testing to solve crimes, but what about a flawed artificial-intelligence reconstruction of a criminal?

One of the most striking passages in Little Brother comes on page 42, where Cory Doctorow discusses how, despite advances that allowed gait-recognition software to identify individuals by their movements, the software's success rate was reduced by any number of external factors, including floor material, ankle angle, and energy level. This variability can lead to errors in the system, which can have devastating consequences when people's lives and security hang in the balance. The title, I believe, accurately reflects our society's desire to perfect its creations: we input more data points, update more software, and create new tools in a never-ending journey toward the perfect AI. But at what point does the ethical harm of such a tool grow large enough that an objective cost-benefit analysis would overturn its progress? No matter how many data points we feed in, a piece of technology will never perfectly emulate the human mind. Every error caused by inaccurate technology threatens our stability, and the threat is only magnified as the scope of the instrument expands. One particular example lies with the NSA: what would be the fallout of an inaccurate terror watch list compiled from the latest data points? The question is an enormous one, and it is important that we examine it with the utmost scrutiny.
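
Doctorow's point about fragility is easy to demonstrate with a toy model. Gait recognition reduces a walk to a numeric feature vector and compares it against an enrolled profile; everyday variation (footwear, floor surface, fatigue) perturbs that vector and drags the match score below a fixed threshold. All numbers below are made up for illustration; this is not any real system's algorithm.

```python
# Toy illustration of gait-recognition fragility: as measurement noise
# (floor material, ankle angle, energy level) grows, the similarity between
# the enrolled profile and the observed walk drops below the match cutoff.
import math
import random

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

enrolled = [0.9, 1.2, 0.7, 1.1]    # stored gait profile (made-up values)
THRESHOLD = 0.99                   # assumed match cutoff

for noise in (0.01, 0.1, 0.3):     # small -> large everyday variation
    observed = [x + random.gauss(0, noise) for x in enrolled]
    score = cosine_similarity(enrolled, observed)
    print(f"noise={noise}: score={score:.3f}, match={score >= THRESHOLD}")
```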

Mining Mystery: Should we mine Student Data for more Protection?

Morris's central argument revolves around mining student data in order to counter possible future threats. He calls this "the next natural step" in using private information to prevent external threats, and goes on to detail how administrators could track social media usage, shopping patterns, and other online activity in order to assess whether a credible threat exists.

The central issue in this debate lies between privacy and security. Is students' right to privacy outweighed by administrators' need to keep their students safe? This question isn't limited to college campuses; it applies to society as a whole, and discussing the role of authority, particularly government, in our daily lives is of the utmost importance. I both agree and disagree with Morris's argument. It's important for administrators to do what is necessary to protect their students, but violating students' privacy is not the way to do it. Aside from the obvious moral problem, such a policy would concentrate more power in authority and reduce self-accountability. Allowing the administration to monitor what students do online would breed mistrust; dangerous, secretive behavior; and a need for students to "hide" what they do online. A common-sense solution would combine aspects of Morris's argument with those of the other side: letting the student population decide which aspects of their online lives they want monitored would lend more credibility to the administration's efforts to increase safety, while building trust and keeping authority accountable.

How much power we are willing to give authority is a central question of modern society, and no simple answer exists. The best solution will take both sides' arguments into account, helping administrators provide better security while also protecting student privacy.

 
