Cryptography

The History and Mathematics of Codes and Code Breaking

Author: schrokr1

Takeaways From an Engaging Podcast

One of the podcast episodes I chose to listen to was "Numbers Stations" from 99% Invisible.  The episode is hosted by Roman Mars, who discusses mysterious shortwave radio frequencies used to broadcast endless strings of numbers, also known as numbers stations.  Something I found very interesting about this topic was the degree of mystery and obscurity behind these broadcasts.  It is assumed that the numbers represent coded messages, but nobody knows who is meant to receive them.  The most popular theory is that these shortwave frequencies are used by government agencies such as the CIA to communicate with spies around the world, but there's no way to be certain.

The producer does an excellent job at keeping the podcast interesting and engaging through the use of various sound clips.  He sprinkles in recordings of numbers station broadcasts throughout the episode, allowing listeners to feel like they are directly tuning in to them.  Additionally, there is a lot of creepy background music which serves to reinforce the sense of mystery behind numbers stations and make the listener want to know more about them.  Finally, the content is explained in a way that is relatively easy to understand.  The producer avoids using heavy jargon in order to keep his audience as broad as possible.

After listening to this episode, I realized how important it is to have good background music and other appropriate sounds.  It adds a whole new dimension to the experience.  Depending on the topic I choose, I plan to implement strong auditory elements into my own podcast to hopefully make it more engaging.

Why Some Intel Should Remain Secret

Prior to the publication of Winston Churchill's The World Crisis and the British Royal Navy's official history of the First World War in 1923, the Germans were completely oblivious to the fact that their encryption system had been compromised.  Since Admiral Hall managed to make it seem as though the unencrypted version of the Zimmermann Telegram had been intercepted in Mexico, the Germans didn't know that it had actually been deciphered by British cryptanalysts.  As we discussed in class, cryptographers tend to be overly confident in the security of their codes; most will not assume their codes have been broken unless there is clear evidence of it.  Because of this, the Germans had no reason to believe that their messages weren't secure, so they initially displayed no interest in investing in the Enigma machine after the war.

However, when the British publicly announced that their knowledge of German codes had given them a major advantage in the war, the Germans realized they needed a stronger encryption system.  This realization led them to adopt the Enigma machine to encrypt military communications during the Second World War.  The formidable strength of Enigma, which appeared to be unbreakable, posed a major challenge to the Allied cryptanalysts.  Although it was eventually cracked, Enigma allowed the Nazis to communicate in secrecy for a large portion of the war, giving them a significant advantage.

There are a few reasons that could explain why the British announced their knowledge of Germany's codes after World War I.  For one, they were likely motivated by pride.  They wanted to show what their cryptanalysts were capable of, possibly with the intention of intimidating other countries.  Furthermore, they probably figured that since the war was over, there was no harm in revealing the strategies they used.  However, after seeing the consequences that arose later on, it is clear that the British should have stayed quiet.  Had they kept their knowledge a secret, the Nazis might have continued to use the same encryption methods into the Second World War.  If so, the Allies would have known German plans in advance, resulting in a much shorter and less bloody World War II.

The Fine Line Between Surveillance and Privacy Invasion

The Newseum display encourages people to consider the issue of privacy versus security and asks us what we would be willing to give up to feel safe.  There are many interesting responses on the whiteboard underneath the display, but the one that stood out to me the most was the Ben Franklin quote, which reads, "Those who would give up essential liberty, to purchase a little temporary safety, deserve neither liberty nor safety." While we can assume Franklin did not say this with modern technology and its possible implications for government surveillance in mind, the core message can still be applied.  Essentially, this statement suggests that personal liberty is a fundamental necessity that should not be sacrificed under any circumstance, which can be interpreted in support of the privacy argument.  If people knew, or even just thought, that they were under constant surveillance, they would likely behave differently, even if they weren't doing anything wrong.  They might begin to feel like they don't have ownership over their own lives.  Like Marcus says in Little Brother, being subject to surveillance is like pooping in public.  You're not doing anything illegal or immoral, but it's still unsettling.

Personally, I agree with Franklin's point that liberty should not be sacrificed for safety, but I don't think that means government surveillance is completely unacceptable.  To an extent, the government can collect information on the general population to look for potential safety risks in a way that doesn't make us feel like we no longer own our lives.  For example, I wouldn't mind if the government had access to information such as my purchase history or even my location history, since I wouldn't feel the need to worry about keeping them private as long as my behavior is legal.  However, I would have a problem if they snooped through my personal ideas in the form of text messages or private note files.  I consider those worthy of being kept private.  If a stranger had access to my personal conversations and thoughts, I would behave differently and feel less in control, even if I'm not doing anything wrong.  Basically, I draw the line where the information collected stops being rationally useful for promoting public safety and begins to threaten my personal liberty for no apparent benefit.

The Cipher That Survived for 200 Years

The Great Cipher of King Louis XIV was an enhanced monoalphabetic substitution cipher that managed to remain unsolved for over two centuries.  It was developed by the father-and-son team of Antoine and Bonaventure Rossignol, two of the best cryptanalysts in France.  King Louis XIV used it to securely encrypt sensitive information regarding his political plans.  The first characteristic of the Great Cipher that made it so strong was that it used 587 different numbers to encode messages rather than just 26 symbols, like a standard monoalphabetic substitution cipher.  This meant that each number could signify any of several things.  Cryptanalysts initially thought that each number corresponded to a single letter, with several ways to represent each letter.  A cipher like this would be quite effective in that it would resist frequency analysis, but the Great Cipher was actually even more complicated.  Rather than a single letter, each number represented a full syllable in the French language.  Since there are so many possible syllables, this method is far more secure, requiring a cryptanalyst to match up many more than just 26 pairs of meanings.  In addition, the Rossignols made the cipher even more deceptive to potential codebreakers by making some of the numbers delete the previous syllable instead of signifying a unique one.  All of these techniques contributed to the longevity of the Great Cipher, which remained unsolved until the expert cryptanalyst Commandant Étienne Bazeries finally broke it some 200 years later.
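
The mechanics described above can be sketched in a few lines of Python.  The number-to-syllable table and the "delete" number below are invented for illustration (the real cipher used 587 numbers with its own secret assignments), but the sketch captures both ideas: each number stands for a syllable, and certain trap numbers erase the previous syllable instead of adding one.

```python
# Toy model of the Great Cipher: numbers stand for syllables, and some
# numbers are traps that delete the syllable before them.
# This mapping is hypothetical; the real table had 587 entries.
SYLLABLE_TO_NUM = {"les": 124, "en": 22, "ne": 125, "mi": 345, "s": 36}
NUM_TO_SYLLABLE = {n: s for s, n in SYLLABLE_TO_NUM.items()}
DELETE_NULLS = {999}  # hypothetical trap numbers

def encipher(syllables):
    """Turn a list of syllables into a list of code numbers."""
    return [SYLLABLE_TO_NUM[s] for s in syllables]

def decipher(numbers):
    """Rebuild the plaintext, honoring the delete-previous-syllable traps."""
    out = []
    for n in numbers:
        if n in DELETE_NULLS:
            if out:
                out.pop()  # a trap number erases the previous syllable
        else:
            out.append(NUM_TO_SYLLABLE[n])
    return "".join(out)

# "les en ne mi s" -> "les ennemis", the phrase Bazeries used as his way in
msg = encipher(["les", "en", "ne", "mi", "s"])
assert decipher(msg) == "lesennemis"
# A decoy syllable followed by a trap leaves the plaintext unchanged
assert decipher([124, 345, 999, 22, 125, 345, 36]) == "lesennemis"
```

The trap numbers are what made a solution attempt so frustrating: a codebreaker who correctly guesses a number's syllable can still be misled, because the same ciphertext can contain numbers that contribute nothing to the plaintext.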


The Paradox of the False Positive

One passage from Little Brother that particularly caught my attention was the part from chapter 8 in which Marcus discusses the paradox of the false positive.  It begins with Marcus explaining his plan to fight back against the Department of Homeland Security's ramped-up surveillance and "safety protocols" that he believes to be violating the personal privacy of the citizens of San Francisco.  He talks about a critical flaw in the DHS terrorist detection system, which is that the accuracy of the terrorism tests isn't nearly good enough to effectively identify actual terrorists without incorrectly accusing hundreds or even thousands of innocent people in the process.  Due to the extreme rarity of true terrorists, the tests meant to increase safety end up generating far too many false positives that result in people feeling even less safe.  As Marcus says, it's like trying to point out an individual atom with the tip of a pencil.

This passage made me reconsider just how effective automatic detection algorithms really are.  It's logical to believe that a 99% accurate test is reliable, but when what you're looking for is very rare in a very large population, a 1% error can cause major problems.  Thinking back to the article that discussed universities' use of data-mining to identify possible school shooters or other at-risk individuals, it's clear that the paradox of the false positive could cause similar issues in real-world situations.  The number of would-be school shooters is so small compared to the total student population that it would be extremely difficult for any test to accurately identify them.  Overall, Little Brother's discussion of the paradox of the false positive demonstrates the importance of using identification tests with accuracy high enough to overcome the rarity of whatever they are meant to find.  Otherwise, you might just end up working against yourself.

Data Mining: A Lifesaver if Done Right

In the essay, "Mining Student Data Could Save Lives," author Michael Morris claims that universities should use data mining to monitor the online activity of students as a safety precaution.  Access to information about students' online behavior could theoretically be used to identify individuals at risk of committing acts of violence and allow university officials to intervene before anyone gets hurt.

Personally, I agree with the idea that university officials should have access to information that could end up saving lives.  While a certain degree of individual privacy would inevitably be sacrificed, I believe the overall benefits outweigh the costs.  It is unlikely that a typical student would even be affected by the presence of increased surveillance.  However, data mining is a controversial concept, and any implementation of such practices would require clearly outlined procedures and restrictions.  First of all, it would be necessary to ensure that the algorithm used to identify red-flag behavior is reliable.  You wouldn't want it to constantly raise alarms at behavior that turns out to be completely harmless, but at the same time, it's important that when there is a real threat, even a subtle one, university officials are able to catch it and determine the right course of action.  Additionally, university protocol would have to be designed so that personal student information is only disclosed to appropriate parties, in accordance with FERPA regulations.

One of the most important considerations with online surveillance is the response protocol used when at-risk students are discovered.  In order for university data mining to be successful, potential threats must be handled cautiously.  No accusations should be made based solely on analysis of online activity; intervention would have to be non-hostile and carried out with the intent to understand the student's behavior without jumping to conclusions.  Officials must approach at-risk students with the mindset of helping them, not attacking them.

In conclusion, universities should be allowed to monitor student activity via data mining, since it can potentially identify risks of violent behavior.  If implemented correctly, universities could prevent tragedies without interfering with students' daily lives.  As Morris mentioned, we are all already subject to data mining from other sources, and many people are still unaware of its existence.  To me, the fact that data mining could save lives makes it well worth the sacrifice of a small degree of privacy.

The Problem with Weak Encryption

In Chapter 1 of The Code Book, author Simon Singh states, "The cipher of Mary Queen of Scots clearly demonstrates that a weak encryption can be worse than no encryption at all."  What this essentially means is that overconfidence with a cipher, especially a relatively weak one, can be dangerous in that it creates an illusion of privacy that may lead to careless communication.  This was problematic for Mary and continues to be problematic today.

The encryption method used by Mary and Babington was a nomenclator, a system in which both letters and common words are replaced with corresponding symbols in the ciphertext.  In their minds, that system was more than adequate, but they were unaware of the advancements in cryptanalysis being made at the time, which allowed Walsingham and Phelippes to decipher it.  As a result, Mary and Babington had the false impression that they could say anything to each other without their messages being understood if intercepted.  This ended up proving worse for them than if they had no encryption method at all.  Had that been the case, they would have consciously made efforts to be vague and discreet when discussing sensitive information because there would be an obvious threat of self-incrimination.  However, their blind confidence in the encryption masked that threat and led them to speak directly and openly about their plans to assassinate Queen Elizabeth.  When it turned out that Walsingham was able to decipher their messages, they were caught completely off guard.
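
A nomenclator's two-layer structure can be illustrated with a short Python sketch.  The codeword table and the use of Atbash as the letter substitution are my own invented choices for demonstration, not the actual cipher Mary and Babington used.

```python
# Toy nomenclator: whole common words map to single symbols, and the rest
# of the text is enciphered letter by letter.  The codewords and the use
# of Atbash (a -> z, b -> y, ...) as the letter layer are hypothetical.
WORD_CODES = {"queen": "#", "kill": "@", "pray": "&"}
LETTER_SUB = {c: chr(ord("z") - (ord(c) - ord("a")))
              for c in "abcdefghijklmnopqrstuvwxyz"}

def encipher(plaintext):
    out = []
    for word in plaintext.lower().split():
        if word in WORD_CODES:
            out.append(WORD_CODES[word])  # whole word becomes one symbol
        else:
            out.append("".join(LETTER_SUB[c] for c in word if c.isalpha()))
    return " ".join(out)

print(encipher("kill the queen"))  # -> "@ gsv #"
```

The codeword layer is what makes a nomenclator stronger than a plain substitution cipher: the most incriminating words never appear letter by letter, so frequency analysis alone cannot recover them.  Even so, as Phelippes showed, the letter layer still leaks enough statistics to break the whole system.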

The issue of reliance on weak encryption methods is arguably even more prevalent today in the digital age.  The internet makes more information accessible to more people than ever before, so weak encryption can pose serious privacy and security risks.  That is why it is important to be careful about what information you put online, even if it is protected by a password.  There is always a possibility that hackers could gain access to your personal information.  For that reason, it is important to use the strongest available encryption methods, and even then, to avoid putting out sensitive information when possible.
