Cryptography

The History and Mathematics of Codes and Code Breaking

Author: schrokr1

Getting Our Priorities Straight

Almost everyone agrees that safety and privacy are two things people have a fundamental right to enjoy.  Rarely do we hear anyone argue that either should be deliberately disregarded.  In a perfect world, everyone could feel protected from physical harm as well as from invasions of privacy.  Unfortunately, we do not live in a perfect world.  We live in a society where priorities must be weighed and sacrifices must be made in order to promote the greater good.

Today, the growing prevalence of terrorism and violent crime poses a threat to national security.  It is important that our government be given the freedom to use electronic surveillance, because doing so would allow it to collect information that could prevent these horrible incidents from ever taking place.  If federal agencies such as the NSA or the FBI could monitor people’s online behavior, they could identify red flags and potentially intervene before tragedy strikes.  Even if the chances are slim, it’s still worth a try.

Some believe that the government would be overstepping its bounds with surveillance like this, saying it has no right to collect personal data.  However, if surveillance has a chance to save lives, one could argue that it is acceptable to use it at the expense of some degree of personal privacy.  As long as you aren’t doing anything wrong, you have nothing to be afraid of.  The primary purpose of any government is to protect its citizens.  It has no interest in snooping around an ordinary person’s data, and would not go out of its way to bother anyone who doesn’t pose a threat.  Overall, it’s important that we have a little bit more faith in the intentions of our government.  We are currently in the midst of an informational arms race.  The enemy is using every resource at their disposal to try to come out on top – shouldn’t we do the same?


The Thoughtful Production of the RadioLab Podcast

The producers of the RadioLab podcast episodes “Darkode” and “Ceremony” implemented several elements to make the material more interesting and engaging.  First, the introductions grabbed the audience’s attention with unique sound-editing techniques, and the producers continued to add immersive sound effects throughout each episode.  In “Ceremony,” I really liked how they added amplified computer-processor noises to imitate what it would be like to listen through the high-tech microphones they worried hackers might be using in the next room.  This made it clear what these microphones were capable of and the lengths hackers sometimes go to.  I never knew such technology existed, and I would have thought it was ridiculous to worry about someone listening to the sounds your computer makes from another room.  Hearing how it is possible made me realize that sometimes paranoia is justified.  A variety of other sound effects also made the audience feel like part of the experience.

Another aspect that made the podcasts more interesting was their use of storytelling.  In the “Darkode” episode, a victim of the CryptoWall ransomware gives a firsthand account of what happened to her.  Her story made it easier to understand how botnets work and how hackers can use them to infect millions of computers, encrypt their data, and demand a ransom to get it back.  In the second half of the episode, one of the original creators of Darkode explains its backstory and how it worked.  His account gave an interesting perspective on its original intended use and how people twisted it to serve other purposes.  Personally, I found this content fascinating, and the way it was presented made it even more engaging.

Communicating in Plain Sight

One passage from It’s Complicated by danah boyd that caught my attention was, “Many teens are happy to publicly perform their social dramas for their classmates and acquaintances, provided that only those in the know will actually understand what’s really going on and those who shouldn’t be involved are socially isolated from knowing what’s unfolding. These teens know that adults might be present, but they also feel that, if asked, they could create a convincing alternate interpretation of what was being discussed.”

This passage illustrates the concept of social steganography, a strategy that teens often use to privately communicate.  What I find interesting is that I have always been aware of the existence of this technique, even used it myself, but I had never realized that it was a form of steganography.  Now, it seems quite obvious.  When someone posts an “inside joke” or uses vague or special language that means something to a particular group but appears meaningless to everyone else, they are basically hiding a message in plain sight.  Anyone with access to their online profile could see what they are putting out there, but only a specific target audience would understand what they are really communicating.  Clearly, steganography has a much larger presence in everyday life than I previously thought.

As boyd explains, adults often criticize teens for posting information publicly while claiming to care so much about their privacy.  They see these behaviors as contradictory, but what they don’t realize is that teens are very careful in deciding what they expose to the public.  By using strategies such as social steganography, it is possible to have an easily accessible online presence while simultaneously maintaining control over who you share sensitive information with.

The Question of Accountability

On page 315, Singh writes that Zimmermann, through a friend, “simply installed [PGP] on an American computer, which happened to be connected to the Internet. After that, a hostile regime may or may not have downloaded it.”  Although Zimmermann’s actions possibly enabled criminals to gain access to better encryption, he should not be held accountable for what they do with it.  For one, his intention in releasing PGP to the public was simply to give average citizens the ability to exercise their right to privacy.  He did not upload it with the goal of helping criminals and terrorists, so there is no reason he should be held accountable if such groups choose to abuse the software.

Singh brings up an important point in this debate when he compares the release of PGP to the sale of gloves.  The purpose of gloves is to protect your hands from hazardous environments, and that is what most people use them for.  However, they can also be used by criminals to cover up their fingerprints.  Therefore, gloves can hinder a police investigation of a case when they are abused by a criminal, yet you don’t hear people saying that the inventor of gloves should be held accountable for this.  The same concept applies to PGP.  The creator of the program is not to blame for its misuse by a select few.  The only person who should be held accountable for a crime is the person who committed it.

Takeaways From an Engaging Podcast

One of the podcast episodes I chose to listen to was “Numbers Stations” from 99% Invisible.  The episode is hosted by Roman Mars, who discusses mysterious shortwave radio frequencies used to broadcast endless strings of numbers, also known as numbers stations.  Something I found very interesting about this topic was the degree of mystery and obscurity behind these broadcasts.  It is assumed that the numbers represent coded messages, but nobody knows who is meant to receive them.  The most popular theory is that these shortwave frequencies are used by government agencies such as the CIA to communicate with spies around the world, but there’s no way to be certain.

The producer does an excellent job of keeping the podcast interesting and engaging through the use of various sound clips.  He sprinkles in recordings of numbers-station broadcasts throughout the episode, allowing listeners to feel like they are tuning in directly.  Additionally, there is plenty of eerie background music, which reinforces the sense of mystery behind numbers stations and makes the listener want to know more.  Finally, the content is explained in a way that is relatively easy to understand.  The producer avoids heavy jargon in order to keep his audience as broad as possible.

After listening to this episode, I realized how important it is to have good background music and other appropriate sounds.  It adds a whole new dimension to the experience.  Depending on the topic I choose, I plan to implement strong auditory elements into my own podcast to hopefully make it more engaging.

Why Some Intel Should Remain Secret

Prior to the publication of Winston Churchill’s The World Crisis and the British Royal Navy’s official history of the First World War in 1923, the Germans were completely oblivious to the fact that their encryption system had been compromised.  Since Admiral Hall managed to make it seem as though the unencrypted version of the Zimmermann Telegram had been intercepted in Mexico, they didn’t know that it had actually been deciphered by British cryptanalysts.  As we discussed in class, cryptographers tend to be overly confident in the security of their codes. Most will not assume they have been broken unless there is clear evidence that they have.  Because of this, the Germans had no reason to believe that their messages weren’t secure, so they initially displayed no interest in investing in the Enigma machine after the war.

However, when the British publicly announced that their knowledge of German codes had given them a major advantage in the war, the Germans realized they needed a stronger encryption system.  This realization is what led them to adopt the Enigma machine for use in military communication encryption during the Second World War.  The formidable strength of Enigma posed a major challenge to the Allies’ cryptanalysts, appearing to be unbreakable.  Although it was eventually cracked, Enigma allowed the Nazis to communicate in secrecy for a large portion of the war, giving them a significant advantage.

There are a few reasons that could explain why the British announced their knowledge of Germany’s codes after World War I.  For one, they were likely motivated by pride.  They wanted to show what their cryptanalysts were capable of, possibly with the intention of intimidating other countries.  Furthermore, they probably figured that since the war was over, there was no harm in revealing the strategies they had used.  However, given the consequences that arose later, it is clear that the British should have stayed quiet.  Had they kept their knowledge a secret, the Nazis might have continued to use the same methods of encryption into the Second World War.  If so, the Allies would have known their plans ahead of time, resulting in a much shorter and less bloody World War II.

The Fine Line Between Surveillance and Privacy Invasion

The Newseum display encourages people to consider the issue of privacy versus security and asks us what we would be willing to give up to feel safe.  There are many interesting responses on the whiteboard underneath the display, but the one that stood out to me the most was the Ben Franklin quote, which reads, “Those who would give up essential liberty, to purchase a little temporary safety, deserve neither liberty nor safety.” While we can assume Franklin did not say this with modern technology and its possible implications for government surveillance in mind, the core message can still be applied.  Essentially, this statement suggests that personal liberty is a fundamental necessity that should not be sacrificed under any circumstance, which can be interpreted in support of the privacy argument.  If people knew, or even just thought, that they were under constant surveillance, they would likely behave differently, even if they weren’t doing anything wrong.  They might begin to feel like they don’t have ownership over their own lives.  As Marcus says in Little Brother, being subject to surveillance is like pooping in public.  You’re not doing anything illegal or immoral, but it’s still unsettling.

Personally, I agree with Franklin’s point that liberty should not be sacrificed for safety, but I don’t think that means government surveillance is completely unacceptable.  To an extent, the government can collect information on the general population to look for potential safety risks in a way that doesn’t make us feel like we no longer own our lives.  For example, I wouldn’t mind if the government had access to information such as my purchase history or even my location history, since I wouldn’t feel the need to worry about keeping them private as long as my behavior is legal.  However, I would have a problem if they snooped through my personal ideas in the form of text messages or private note files.  I consider those worthy of being kept private.  If a stranger had access to my personal conversations and thoughts, I would behave differently and feel less in control, even if I’m not doing anything wrong.  Basically, I draw the line where the information collected stops being rationally useful for promoting public safety and begins to threaten my personal liberty for no apparent benefit.

The Cipher That Survived for 200 Years

The Great Cipher of King Louis XIV was an enhanced monoalphabetic substitution cipher that remained unsolved for over two centuries.  It was developed by the father-and-son team of Antoine and Bonaventure Rossignol, two of the best cryptanalysts in France, and King Louis XIV used it to encrypt sensitive information regarding his political plans.

The first characteristic that made the Great Cipher so strong was that it used 587 different numbers to encode messages, rather than just 26 symbols like a standard monoalphabetic substitution cipher.  This meant there were multiple possibilities for the significance of each number.  Cryptanalysts initially thought that each number corresponded to a single letter, with several ways to represent each letter.  A cipher like this would already be quite effective, since it would be resistant to frequency analysis, but the Great Cipher was even more complicated.  Rather than a single letter, each number represented a full syllable of the French language.  Since there are so many possible syllables, this method is far more secure, requiring a cryptanalyst to match up far more than just 26 pairs of meanings.  In addition, the Rossignols made the cipher especially deceptive by having some of the numbers delete the previous syllable instead of signifying a unique one.  All of these techniques contributed to the longevity of the Great Cipher, which remained unsolved until the expert cryptanalyst Commandant Étienne Bazeries finally broke it 200 years later.
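To make those mechanics concrete, here is a minimal Python sketch of a syllable-based homophonic cipher in the spirit of the Great Cipher.  Every syllable, number, and “trap” value below is invented for illustration; the real Rossignol table of 587 numbers was far larger and is not reproduced here.

```python
import random

# Toy illustration of the Great Cipher's three key ideas:
#   1. each number stands for a whole syllable, not a single letter
#   2. a syllable can map to several numbers (homophones), which
#      flattens the frequency statistics a codebreaker relies on
#   3. some "trap" numbers delete the previous syllable instead of
#      adding one, misleading anyone testing guessed meanings
SYLLABLE_TO_NUMBERS = {  # hypothetical mappings
    "les": [22, 124, 345],
    "en": [51, 260],
    "ne": [73, 402],
    "mis": [88],
}
TRAP_NUMBERS = {999}  # hypothetical "delete previous syllable" number

NUMBER_TO_SYLLABLE = {
    n: syl for syl, nums in SYLLABLE_TO_NUMBERS.items() for n in nums
}

def encipher(syllables, rng=random):
    """Replace each syllable with a randomly chosen homophone."""
    return [rng.choice(SYLLABLE_TO_NUMBERS[s]) for s in syllables]

def decipher(numbers):
    """Recover syllables; a trap number deletes the previous syllable."""
    out = []
    for n in numbers:
        if n in TRAP_NUMBERS:
            if out:
                out.pop()
        else:
            out.append(NUMBER_TO_SYLLABLE[n])
    return out

plaintext = ["les", "en", "ne", "mis"]  # "les ennemis", split into syllables
assert decipher(encipher(plaintext)) == plaintext
# A trap number silently removes the syllable enciphered before it:
assert decipher([22, 51, 999, 73]) == ["les", "ne"]
```

Because each run of `encipher` picks homophones at random, the same plaintext produces different ciphertexts, which is exactly what defeats simple frequency counting.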

The Paradox of the False Positive

One passage from Little Brother that particularly caught my attention was the part from chapter 8 in which Marcus discusses the paradox of the false positive.  It begins with Marcus explaining his plan to fight back against the Department of Homeland Security’s ramped-up surveillance and “safety protocols” that he believes to be violating the personal privacy of the citizens of San Francisco.  He talks about a critical flaw in the DHS terrorist detection system, which is that the accuracy of the terrorism tests isn’t nearly good enough to effectively identify actual terrorists without incorrectly accusing hundreds or even thousands of innocent people in the process.  Due to the extreme rarity of true terrorists, the tests meant to increase safety end up generating far too many false positives that result in people feeling even less safe.  As Marcus says, it’s like trying to point out an individual atom with the tip of a pencil.

This passage made me reconsider how reliable automatic detection algorithms really are.  It’s tempting to believe that a 99% accurate test is reliable, but when what you’re looking for is very rare in a very large population, a 1% error rate can cause major problems.  Thinking back to the article that discussed universities’ use of data mining to identify possible school shooters or other at-risk individuals, it’s clear that the paradox of the false positive could cause similar issues in real-world situations.  The number of would-be school shooters is so small compared to the total student population that it would be extremely difficult for any test to accurately identify them.  Overall, Little Brother’s discussion of the paradox of the false positive demonstrates the importance of identification tests whose accuracy matches the rarity of what they are meant to find.  Otherwise, you might just end up working against yourself.
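The arithmetic behind the paradox is easy to reproduce.  The figures below are hypothetical, chosen only to show the shape of the problem: a city of one million people, ten actual threats, and a detector that is “99% accurate” (a 1% false-positive rate and a 1% false-negative rate).

```python
# Back-of-the-envelope arithmetic for the paradox of the false positive.
# All figures are hypothetical.
population = 1_000_000
actual_threats = 10
accuracy = 0.99

true_positives = actual_threats * accuracy  # real threats flagged: 9.9
false_positives = (population - actual_threats) * (1 - accuracy)  # innocents flagged

flagged = true_positives + false_positives
precision = true_positives / flagged  # chance a flagged person is a real threat

print(f"innocent people flagged: {false_positives:,.0f}")  # ~10,000
print(f"chance a flag is a real threat: {precision:.3%}")  # ~0.099%
```

Even with 99% accuracy, roughly ten thousand innocent people get flagged for every handful of genuine threats, so fewer than one flag in a thousand means anything; this is the pencil-tip-on-an-atom problem Marcus describes.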

Data Mining: A Lifesaver if Done Right

In his essay “Mining Student Data Could Save Lives,” author Michael Morris argues that universities should use data mining to monitor the online activity of students as a safety precaution.  Access to information about students’ online behavior could theoretically be used to identify individuals at risk of committing acts of violence and allow university officials to intervene before anyone gets hurt.

Personally, I agree that university officials should have access to information that could end up saving lives.  While a certain degree of individual privacy would inevitably be sacrificed, I believe the overall benefits outweigh the costs.  It is unlikely that a typical student would even be affected by the presence of increased surveillance.  However, data mining is a controversial concept, and any implementation of such practices would require clearly outlined procedures and restrictions.  First of all, it would be necessary to ensure that the algorithm used to identify red-flag behavior is reliable.  You wouldn’t want it to constantly raise alarms at behavior that turns out to be completely harmless, but at the same time, it’s important that when there is a real threat, even a subtle one, university officials are able to catch it and determine the correct course of action.  Additionally, university protocol would have to be designed so that personal student information is only disclosed to appropriate parties, in accordance with FERPA regulations.

One of the most important considerations with online surveillance is the response protocol used when at-risk students are discovered.  In order for university data mining to be successful, potential threats must be handled delicately.  No accusations should be made based solely on analysis of online activity; intervention would have to be non-hostile and carried out with the intent to understand the student’s behavior without jumping to conclusions.  Officials must have the mindset to help at-risk students, not attack them.

In conclusion, universities should be allowed to monitor student activity via data mining, since it can potentially identify risks of violent behavior.  If implemented correctly, universities could prevent tragedies without interfering with students’ daily lives.  As Morris mentions, we are all already subject to data mining from other sources, and many people are still unaware of its existence.  To me, the fact that data mining could save lives makes it well worth the sacrifice of a small degree of privacy.

