The History and Mathematics of Codes and Code Breaking

Author: Sandra

Worry is a misuse of your imagination

“Teens often grow frustrated with adult assumptions that suggest that they are part of a generation that has eschewed privacy in order to participate in social media.”

I would agree that there is definitely social pressure to “post nice pictures” on Instagram, Facebook, or VSCO, because there’s a recurring saying among users: if you didn’t take a picture and post it on Instagram, were you really there?

But I also agree that our parents’ generation frets too much over us being peer pressured into doing things we don’t want to do. That just isn’t always the case, because it’s up to each teenager to decide whether other people will dictate her feelings and the social pressure she puts on herself, or whether she will disregard the impact of social media entirely and be carefree about it. People nowadays have more respect for exactly that—not caring how many “likes” a picture gets, but posting a picture plainly for the memory, or to share what’s going on in their lives. Sure, the pressure of being validated by the number of likes you get will always be present, because I would argue that validation of existence and purpose is one of the basic needs of human life in the 21st century, but I think we can all agree that we can control ourselves mentally and not let that define our existence. If we can’t, that says a lot about how easily people can be manipulated today.

I think the reality is that teenagers are briefed so many times on the whole “keep your private information PRIVATE, and everything posted online stays there forever” shebang that there’s already a common sentiment that teenagers actually do want privacy. The cautionary tales about a teenager whose indecent post drew so much disdain and dislike that he or she was socially shunned are probably exaggerated and skewed to sound worse than they actually were. I’m sure we’ve all posted something (not even risqué) that we look back on two years later and go “yikes.” But the whole mental game plays with your mind to make it seem like that post attracted a lot more negative attention to your social standing than it actually did. And that’s the trap of social media, and why parents so misunderstand how teenagers really approach the use and sharing of private information on social media.

You don’t realize how often secret messages are still being transmitted

In the “Number Stations” podcast, I enjoyed learning that actual secret messages, whose reason and meaning nobody knows, are constantly being transmitted on certain radio frequencies, generally accessible to pretty much anyone with a radio and a knack for tuning to every possible station. The message itself is kept secret, but the transmission of the encrypted version is totally open to the public. What’s more interesting is that this isn’t used by only one country: many countries, such as Germany, Spain, Russia, and China, exploit this means of communication. I’m guessing that as long as the coders are confident in the strength of their system, it doesn’t matter who hears it. What does matter to them is an easy way of getting messages out and heard by the receiving end (radios are pretty common).

What bolstered this podcast’s grip on the audience was the insertion of actual radio clips that show us what these “number stations” usually sound like, in addition to quotes from relevant sources and experts, such as Bruce Schneier.

Though I was very captivated about 10 minutes into the podcast, it initially felt a little awkward and unorganized; without reading anything else on the website, the podcast seems to start out of nowhere and is confusing. We’re not entirely sure what the main point is until 3 minutes in. Thus, in this case, reading the text article beforehand is essential, which many people may not immediately realize (they might go straight to the podcast, sit back, and enjoy). Unlike papers organized with an introduction, body, and conclusion, the podcast seemed to start with a point and go wherever with it on a whim, like speaking in a stream of consciousness.

This gives me the idea to make sure that right from the start of my own podcast, I will introduce the main points covered so the audience has a very clear idea of what they’re about to listen to.

I did like how this podcast offers supplemental information as you scroll down the page. More visual components, such as graphs that show data or directly depict the situation, presented in the order they are discussed, would be extremely helpful to people listening to the podcast at the same time.

-Insert Title Here-

One overlooked but important contribution to Allied success in cracking Enigma was that the Allies had knowledge of the structure of Enigma machines. Had the Germans kept Scherbius’s invention as classified as possible, including exactly how many scramblers were used and the existence of the plugboard, Allied efforts would have been far more stymied by pure bafflement at how the ciphertext could even have been produced. Without knowing that there were scramblers, how many were employed, and how they worked, progress would have been set back considerably. The codebreakers wouldn’t even have known where to start, since all previous methods of decrypting messages would have been rendered useless. To guess out of the blue that there were 3 scramblers in effect, in addition to a plugboard, would itself have required some great intellectual leaps.
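For a sense of the scale the Allies still faced even knowing the machine’s layout, the day-key possibilities can be counted directly. This is a rough sketch using the standard figures for a 3-scrambler Enigma with 6 plugboard cables (the configuration Singh describes); the exact cable count varied over the war, so treat the setup as illustrative.

```python
from math import factorial

def plugboard_settings(cables=6, alphabet=26):
    # Ways to choose which letters are swapped and how they pair up:
    # 26! / (14! * 6! * 2^6) for 6 cables on a 26-letter alphabet.
    return factorial(alphabet) // (
        factorial(alphabet - 2 * cables) * factorial(cables) * 2 ** cables
    )

scrambler_orders = factorial(3)   # 3 scramblers can be arranged 3! = 6 ways
scrambler_starts = 26 ** 3        # each scrambler starts in one of 26 positions

total = scrambler_orders * scrambler_starts * plugboard_settings()
print(plugboard_settings())  # 100391791500
print(total)                 # 10586916764424000, on the order of 10^16
```

Notice that almost all of the keyspace comes from the plugboard; the scramblers alone contribute only about 100,000 settings, which is why knowing the structure turned an impossible problem into a merely enormous one.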

Another flaw lies in the inherently structure-oriented nature of militaries. They always fall back on regimented schedules, customs, and the chain of command; in a way, they are formulaic. In ROTC, we often write memorandums, the main way formal messages are communicated through the detachment, and they always follow a certain template: the date is always in a specific location, a set distance from the edge of the paper; the first line is always “MEMORANDUM FOR…”; the line after that is always “FROM”; and the line after that is always “SUBJECT.” The intensely structured military environment allows no room for subjectivity, which reduces individual creativity and expression. It always follows an objective that has been laid out for someone to fill in the blanks. The repercussions for Enigma can be seen in two areas: the location and repetition of the scrambler settings for the next message, and the weather report that usually came at 6am every day.

Had the weather report not contained the German word for “weather” as the second word of each message, and instead incorporated it into differently structured sentences (today’s weather is…, the weather for today…, it appears that it will rain at 1600 today…, etc.), the number of cribs available to the codebreakers would have been drastically reduced. Unfortunately, under normal circumstances it is unfavorable to deviate from the norm, because that’s not how militaries work. The military relies on unison and synchronicity. If a soldier decided to put a potentially life-saving device in a pocket that was not designated for that device and then ended up in a life-threatening situation, he might not make it, because other soldiers count on that device always being in a specific pocket. Deviation wastes time and reduces efficiency. These are some of the other important factors that led to Allied success in cracking Enigma.
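One concrete way the codebreakers exploited cribs like the weather word follows from a quirk of Enigma’s design: the machine never encrypts a letter to itself. Sliding a suspected plaintext along the intercept instantly rules out every alignment where the crib and the ciphertext share a letter. A minimal sketch of that elimination step (the strings here are made up for illustration):

```python
def possible_crib_positions(ciphertext, crib):
    """Slide a suspected plaintext (crib) along the ciphertext.

    Enigma never enciphers a letter to itself, so any alignment where
    the crib and the ciphertext agree at some position is impossible
    and can be discarded.
    """
    positions = []
    for i in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[i:i + len(crib)]
        if all(c != p for c, p in zip(window, crib)):
            positions.append(i)
    return positions

# Toy example: position 1 is ruled out because B would map to itself.
print(possible_crib_positions("ABCDE", "BC"))  # → [0, 2, 3]
```

With a long crib like “WETTERVORHERSAGE,” this simple test often left only a handful of candidate placements, which is exactly why predictable message structure was so costly.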



Privacy vs Security–does it even have to be something versus something?

On the Newseum board, there are a lot of pro-privacy arguments. At the same time, there is another compelling argument: give up as much privacy as necessary in order to make people feel safe.

I feel like people come from many different sides when voicing their opinions; their personal experiences have shaped their beliefs and compelled them to draw toward one particular side of this argument. One interesting point to notice: why does there have to be a fine line separating pro-privacy from pro-security? I believe we can have a healthy mixture of both. It’s when people divide crucial and sensitive topics like this into two distinct sides that conflicts arise. Security and privacy can go hand-in-hand in some cases, but immediately framing the issue as a rivalry in which one must win out over the other forces people to choose sides even when they hold beliefs belonging to both. Some people are willing to give up a bit of their privacy because they value their safety above all (maybe they haven’t experienced an invasion of their privacy and don’t know the frustration of it). Others are very protective of their privacy and believe it should be something inherent, like freedom of speech (however, they might not have directly experienced a terrorist attack or a situation where they feared for their lives and knew that government intervention could save lives and prevent terrorism).

In general, there are a lot of positive vibes on the board, such as “love not hate” and “good not evil.” I also found the quote “living life is an honor, don’t take our freedom away” interesting. Here we see someone who values life, and probably also safety, I’m assuming. Beyond that, they also don’t want their freedom taken away. Perhaps that refers to the freedom of having a private life? Freedom looks like it comes in two forms: freedom in safety, and freedom to have a private life. Which one do you prefer? Maybe both?

The Greatness of the Great Cipher

I see the Great Cipher as essentially the simple monoalphabetic substitution cipher on steroids. The concept is the same: one cipher symbol, or a group of cipher numbers, stands in for some unit of plaintext. What makes the two so different in how hard they are to crack lies in the sheer number of combinations each cipher can produce.

The cipher key was not limited to one letter replacing another; instead, groups of numbers represented syllables. This opened up many more possibilities to stump cryptanalysts.

Before, it was clear in monoalphabetic substitution that one cipher letter represented one plaintext letter, so cryptanalysts faced only a certain number of possible cipher keys. Even though a completely random monoalphabetic cipher yields an enormous number of possible keys, frequency analysis could easily help decipher it. But with a cipher whose characteristics cannot be determined (does “1” represent a letter, or does “123” represent one letter? Or a syllable? I’m guessing they did not know how many numbers represented how many letters), the patterns that lead to cracking the Great Cipher become far less obvious. A multitude of syllables exists in the French language, making the number of combinations all the greater. This increases the difficulty, because although we might see a string of numbers or other patterns, the specific plaintext it refers to, whether one letter or two or three, hides many more holes and traps.
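To see why frequency analysis breaks a simple monoalphabetic cipher but stalls against syllable codes, here is a minimal sketch of the counting step. The sample ciphertext is my own made-up Caesar-shifted sentence, not from Singh; with the Great Cipher’s numbered syllables, there is no single symbol-per-letter mapping for these counts to expose.

```python
from collections import Counter

def letter_frequencies(text):
    # Count how often each letter appears -- the core of frequency analysis.
    letters = [c for c in text.upper() if c.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return {c: counts[c] / total for c in counts}

# "THIS IS A SECRET MESSAGE" under a Caesar shift of 4.
sample = "XLMW MW E WIGVIX QIWWEKI"
freqs = letter_frequencies(sample)

# The most common cipher letter very likely stands for a common
# plaintext letter (here W, which indeed stands for S).
print(max(freqs, key=freqs.get))  # → W
```

Against the Great Cipher, the analogous count over number-groups gives hundreds of roughly even frequencies, since French has far more common syllables than common letters, which is exactly the “holes and traps” problem described above.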

In addition, many people were likely still familiar only with monoalphabetic substitution (since cryptology was still developing), so they might not yet have thought in a “numbers now represent syllables” way. One reason for this unfamiliarity is that the Great Cipher was made by two people (the Rossignols) who already knew how to crack extremely hard ciphers; their knowledge of the weaknesses of strong ciphers helped them build something new that didn’t fall into the traps of the simple monoalphabetic substitution cipher. They thought five steps ahead of everyone else. After their deaths, the Great Cipher remained unsolved for 200 years, because the only people smart enough to crack hard ciphers, and to use those ciphers’ weaknesses to create a new, extremely hard one, had died. In short, their knowledge of the Great Cipher died along with them until it was unearthed 200 years later.

How can we make security more “secure”?

On page 99 of Little Brother by Cory Doctorow, Marcus delineates the flaws of cryptology and how cracking the Enigma ultimately led to victory against the Nazis in WWII. One of the flaws was secrecy; after Alan Turing cracked the Enigma, any Nazi message could be deciphered because “Turing was smarter than the guy who thought up Enigma” (99). As a result, it sparked the thought that any security system is “vulnerable to someone smarter than you coming up with a way of breaking it” (99). Bruce Schneier also refers to the flaws of security systems in his afterword, explaining that it is useless to come up with a system entirely by yourself because there is no way for you to detect the flaws in your own creation. You are limited to your own knowledge. Outsiders with different ways of thinking help by suggesting different angles from which the system might be broken.

I think this concept is interesting: you are limited by what you know, and everyone around us knows something that we don’t. Recently I read a piece in Harvard Business Review on how companies and organizations should welcome people from different fields to evaluate an idea, because they won’t think the same way the people in a particular company do; a mathematician thinks differently than a historian does, and the distance between their thinking has the potential to bolster ideas, limit flaws, and suggest new ideas that haven’t been thought of yet. Could this be the way to strengthen our current security systems? What kinds of people do we need to evaluate them? How many people do we need (before we pass the point where the security measure is too widely known and therefore, ironically, more vulnerable)?

I believe this is one of the fundamental questions of cryptology and all security measures: how do we know a system is safe to use? The truth is, we really don’t know, but we can always come closer through cross-referencing and past experience, allowing security to get better and better every step of the way.


Crossing the Line?

In the article “Mining Student Data Could Save Lives,” Michael Morris argues that in order to mitigate, or even eliminate, the threat of student violence, one option lies in “data mining”: collecting massive amounts of data from students through their emails, their computer use at the university, and their other activities on the internet (Morris). At first I thought about the gains: safety for everyone. Then I thought about the repercussions a little more. While I agree that data mining could potentially save thousands of lives, at present it comes at too high a cost in privacy for every single person, and is thus too intrusive into all of our personal lives.

Data mining is already happening to everyone: Google analyzes which shopping sites you visit and places ads to remind you of that dress you really wanted from Nordstrom. Banks conclude that you can’t be in Indiana and Maine at the same time using the same credit card, so they decline further transactions. In these cases, one could argue there is a point to data mining: Google tracks shopping sites to enable better marketing and sales, and you could say it is one of the bank’s jobs to ensure the safety of your cards. But what about data mining the text messages between you and your friends or siblings or significant other? Mining your social media to see who your friends are, what your personal interests are, and what your plans for the weekend are?

This method of analyzing a student to see whether he or she would become a danger to campus safety might appear to be a solution, but what if an individual is nothing like the simple case Morris describes: a student with very obvious intentions to attack another person, as seen from his online browsing and social media activity? We must also consider that a person who was smart enough would try to cover his or her tracks as well as possible. Maybe this person doesn’t use social media platforms to vent their rage. Maybe this person doesn’t need a firearm to cause harm.

In addition, think about the consequences if this information were compromised. That is the danger of piling mountains of personal information into the internet or a database: it could cause more harm than good. It becomes easier for people to stalk you, to analyze your daily or weekly routine, and to follow you around without your knowledge.

We must ask where the line is drawn between safety and privacy, and at what cost safety comes. In the plot of Little Brother by Cory Doctorow, people with *just enough* evidence are subjected to intense and humiliating questioning, even though nothing they did threatened the safety of their country. In the eyes of the DHS, however, whatever they decided was remotely suspicious was enough to constitute a potential threat. The character Marcus just happened to have an affinity for data encryption and hacking (maybe this itself was caused by the increase in ridiculously superfluous school security, and the push to make security tighter only exacerbated Marcus’s need to up his defenses).

People say that if you’re an honest and good person, you should have nothing to hide. But I believe that isn’t the problem in most cases. The problem is feeling vulnerable, boxed in, with prying eyes watching you like a hawk; feeling controlled, like you can’t have individual freedom anymore because your every step is monitored. For these reasons, I believe we are still a little too far from finding the perfect solution for keeping everyone safe through data mining.


[Response to question #1] When Singh says that “a weak encryption can be worse than no encryption at all,” I want to equate it to the devastating mistake of leaving an unmistakable trace: because both sides used the same cipher, all of the evidence of enciphered plotting to kill Queen Elizabeth fell squarely on Mary Queen of Scots. Had there been no encryption, there could have been a little more room for the argument that the letters between Mary and Babington were simply in the wrong place at the wrong time (though that argument looks very unlikely to prevail in the Mary–Elizabeth case). For example, if Mary and Babington had not been so ignorant of, or overconfident in, the security of their enciphered messages, they could have agreed on the word “She,” with a capital S, in place of “the Queen” (or any other word or phrase that makes the plot to assassinate Elizabeth obvious), because then the plot could have been against anyone. But when both Mary and Babington use the exact same cipher and spell out the exact content of their intentions, it becomes very difficult to convince anyone of her innocence. My interpretation of no encryption as opposed to weak encryption is that an encryption, albeit furtive, can do more damage than good when it falls into the wrong hands, especially when the stakes are high, because it implies that the information is so valuable it must be hidden from others’ eyes. This heightens curiosity and makes people, whether their intentions are good or bad, feel the need to pry into the message and learn its meaning. In Mary’s case, it also presented itself as undeniable evidence that she was taking part in the Babington conspiracy, and it was ultimately the cause of her death.
