The History and Mathematics of Codes and Code Breaking


How can we make security more “secure”?

On page 99 of Little Brother by Cory Doctorow, Marcus delineates the flaws of cryptology and how cracking the Enigma ultimately led to victory over the Nazis in WWII. One of those flaws was secrecy: after Alan Turing cracked the Enigma, any Nazi message could be deciphered because “Turing was smarter than the guy who thought up Enigma” (99). This sparked the realization that any security system is “vulnerable to someone smarter than you coming up with a way of breaking it” (99). Bruce Schneier points to the same flaw in his Afterword, explaining that it is useless to design a security system entirely by yourself because you have no way to detect the flaws in your own creation. You are limited by your own knowledge. Outsiders who think differently can help by suggesting the very approaches an attacker might use to break the system.

I think this concept is interesting: you are limited by what you know, and everyone around us knows something that we don’t. Recently I read a passage in Harvard Business Review on how companies and organizations should welcome people from different fields to evaluate an idea, because they won’t think the same way the people within a particular company do; a mathematician thinks differently than a historian does, and the distance between their perspectives has the potential to bolster ideas, limit flaws, and suggest possibilities that haven’t been thought of yet. Could this be the way to strengthen our current security systems? What kind of people do we need to evaluate them? How many people do we need (before we pass the point where the security measure is so widely known that it ironically becomes more vulnerable)?

I believe this is one of the fundamental questions of cryptology and of all security measures: how do we know a system is safe to use? The truth is, we really don’t know, but we can always come closer through cross-referencing and past experience, allowing security to get better and better every step of the way.


Crossing the Line?

In the article “Mining Student Data Could Save Lives,” Michael Morris argues that one option for mitigating, or even eliminating, the threat of student violence lies in “data mining”: the process of collecting massive amounts of data on students from their emails, their computer use at the university, and their other activities on the internet (Morris). At first I thought about the gains: safety for everyone. Then I thought about the repercussions a little more. While I agree that data mining could potentially save thousands of lives, at present it comes at too high a cost to every person’s privacy and is thus too intrusive into all of our personal lives.

Data mining is already happening to everyone: Google analyzes what shopping sites you visit and places ads to remind you of that dress you really wanted from Nordstrom. Banks conclude that you can’t be in Indiana and Maine at the same time using the same credit card, so they decline further transactions. In these cases, one could argue that there is a purpose to the data mining: Google tracks shopping sites to enable better marketing and sales, and you could say that ensuring the safety of your cards is one of a bank’s jobs. But what about mining the text messages between you and your friends or siblings or significant other? Mining your social media accounts to see who your friends are, what your personal interests are, and what your plans for the weekend look like?

Analyzing a student to see whether he or she might become a danger to campus safety would appear to be a solution, but what if an individual is nothing so simple as the student Morris describes: one with very obvious intentions to attack another person, as seen from his online browsing and social media activity? We must also consider that a person who was smart enough would try to cover his or her tracks as well as possible. Maybe this person doesn’t use social media platforms to vent their rage. Maybe this person doesn’t need a firearm to cause harm.

In addition, think about the consequences if this information were compromised. That is the danger of storing mountains of personal information on the web or in a database: it could cause more harm than good. It becomes easier for people to stalk you, to analyze your daily or weekly routine, and to follow you around without your knowledge.

We must ask: where is the line drawn between safety and security? At what cost does it come? In the plot of Little Brother by Cory Doctorow, people with *just enough* evidence are subjected to intense and humiliating questioning, even though nothing they did threatened the safety of their country. In the eyes of the DHS, however, whatever it deemed remotely suspicious was enough to constitute a potential threat. The character Marcus just happened to have an affinity for data encryption and hacking (maybe this itself was caused by the increase in ridiculously superfluous school security, and the push to make security tighter only exacerbated Marcus’s need to up his defenses).

People say that if you’re an honest and good person, you should have nothing to hide. But I believe that isn’t the real problem in most cases. The problem lies in feeling vulnerable, boxed in, with prying eyes watching you like a hawk; feeling controlled, as if you can no longer have individual freedom because your every step is monitored. For these reasons, I believe we are still a little too far from finding a solution that keeps everyone safe through the use of data mining.
