Cryptography

The History and Mathematics of Codes and Code Breaking

Author: scuddeam

No, the Fact I Don't Want You To Read My Texts Doesn't Mean I'm Obviously Breaking The Law

They want the right to be ignored by the people who they see as being “in their business.” Teens are not particularly concerned about organizational actors; rather, they wish to avoid paternalistic adults who use safety and protection as an excuse to monitor their everyday sociality. (Boyd, 56)

This chapter, and in particular this section, reminded me of a disagreement I've had with my parents time and time again: if I've got nothing to hide, then I should have nothing to fear, and therefore my parents should be allowed to access my digital communications without my protesting.

Parents often assume that if teens are being secretive, it means they're doing something illicit, and by monitoring communications, they are protecting their child from harm. Sure, it's true that teens do things that break house or school rules, or even laws, but if that's the case, it would make sense that they would use forms of communication that minimize the chance of later incrimination, meaning their messages would still be difficult to access even if their phone was confiscated. And of course, many teens are innocent in all these respects, and yet still want to maintain their privacy.

Kids (usually) don't want to keep their texts secret out of fear of punishment for illicit activities from their families, schools, or governments. As Boyd mentions throughout the chapter, it's simply because there's a certain level of privacy expected from what is essentially the digital form of a private conversation. Even if it takes place on social media - such as the comments section of a post - it's still considered the equivalent of a private space, in which only certain members are allowed. If you had a group of friends over in one room, it would be considered rude for someone to eavesdrop on your conversation even if they could technically access that space or a space adjacent to it.

This leads me to the second flaw in many parents' logic: when a parent reads their child's texts or looks through their social media interactions, they aren't invading just their own child's privacy. My parents, for example, often argued that as my parents, they were entitled to the right to invade my privacy. However, by looking at my texts without my permission, they are also invading the privacy of the other party in the conversation. Even if the information being shared isn't illicit or even that sensitive, it's awkward and socially odd for someone to have knowledge of the private conversations of people they only indirectly know. The discomfort in these situations doesn't arise from fear of punishment - rather, it comes from the fuzzy boundaries and awkward relationships that can result from such surveillance.

Implications of the Information Age

When I began reading Singh Ch. 7, I failed to realize, or remember, that the book was written 20 years ago. So, I was very surprised by some of the statements he made in the opening paragraph, such as the assertion that email would "soon" replace physical mail. In addition to email, text messages, instant messaging, and social media have also made electronic communication more common. At another point in the chapter, he mentioned that online shopping was still in its infancy - now, it's driving brick-and-mortar stores to extinction. Yet another development is the popularization of computerized systems for businesses, banks, healthcare facilities, and other services. Sure, we would all like privacy in our interpersonal communications, but it's particularly important for our interactions with these institutions, in order to protect ourselves from fraud, identity theft, and discrimination. One could argue, especially in Singh's time, that this could be achieved by only allowing such institutions to have access to encryption. But now, as these systems and functions become increasingly complex, widespread, and consumer-initiated, it's necessary for the general public to have control over their data and its security. Nearly our entire lives are stored in some way or another on our devices, so it is paramount that we are able to protect these devices from those who wish to exploit us or do us harm.

Easier Understood than Done

In one of my previous posts, I wrote about hindsight bias and how it affects our perception of surveillance and whether it would significantly improve security. Turns out, hindsight bias is significant once again. In chapter 3 (and throughout the book), Singh provides examples where he makes breaking what would've once been considered an unbreakable cipher look easy, obvious even. This leads us to believe that these techniques should've been obvious in the first place. However, that's just our own overconfidence and hindsight bias talking. In general, we tend to assume that things are more predictable and more obvious than they actually were, and that we "knew it all along". Additionally, we overestimate what we currently know and how well we can do things independently before we are asked to do them. For example, when students study for tests, they'll read a question and think to themselves, "Oh, I know that," but when asked that very question on a test, they draw a blank because they never really knew the answer - they just assumed they did because they recognized the concepts or vocabulary. When we see these examples and think they should've been obvious, we have the privilege of hindsight and someone else's guidance.

Also, it's important to note that Singh writes some of these examples himself, and it's much easier to decode something you encoded in the first place. These examples are also meant to teach, and were designed with that in mind. We're supposed to read them, understand how the decryption works, and find it logical - that's what a good explanation is supposed to do.

Necessity is the Mother of Invention

The invention of the telegraph revolutionized long-distance communication by allowing messages to travel many miles practically instantaneously. However, the drawback of telegraphs compared to letters was that the required intermediaries for transmission also had access to the contents of a message. While a postman is unlikely to open and read a sealed letter, a telegraph clerk has no choice but to read what they are sending.

This affected the development of cryptographic techniques in two major ways. One, it prompted the general public to become more interested in cryptography. Even if their messages were not necessarily "secret" per se, most people are uncomfortable with the idea of a half-dozen people reading their private correspondence. Two, individuals and organizations that already encrypted their messages needed to amp up their security, because their messages would pass through more hands and would be easier to intercept via wiretapping. This spurred the adoption of the Vigenère cipher for telegraph communications because of the increased security it provided.
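For readers who haven't met the Vigenère cipher yet, here is a minimal sketch in Python of how it shifts each letter by a different amount depending on the keyword - the keyword, message, and function name are my own illustrative choices, not anything taken from Singh:

```python
def vigenere_encrypt(plaintext: str, key: str) -> str:
    """Shift each letter of the plaintext by the corresponding key letter.

    Because the shift changes from position to position, the same plaintext
    letter can map to different ciphertext letters - which is what made this
    cipher so much harder to break than a simple monoalphabetic substitution.
    """
    key_shifts = [ord(k) - ord('A') for k in key.upper() if k.isalpha()]
    out = []
    i = 0
    for ch in plaintext.upper():
        if ch.isalpha():
            shift = key_shifts[i % len(key_shifts)]
            out.append(chr((ord(ch) - ord('A') + shift) % 26 + ord('A')))
            i += 1
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return ''.join(out)

# The three A's in the plaintext become L, O, and E in the ciphertext.
print(vigenere_encrypt("ATTACK AT DAWN", "LEMON"))  # prints "LXFOPV EF RNHR"
```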

Similarly, in recent times, the concept of encryption has become more popular and the technique has become more refined in response to the increase in digital communications. As new technologies emerge, so may new cryptographic techniques.

Power of a Test

After the terrorist attack on San Francisco, the Department of Homeland Security ramps up security and surveillance in hopes of catching the people responsible, but instead only manages to inconvenience, detain, and even seriously harm innocent civilians. Marcus explains that the problem with the DHS system is that it's looking for something too rare in too large a population, resulting in a very large number of false positives.

What Marcus is describing is referred to in statistics as a Type I error - that is, we reject the null hypothesis (the assumption that nothing is abnormal) when the null hypothesis is actually true. In this case, the null hypothesis is "not a terrorist", and if there's enough suspicious data, the null hypothesis is rejected in favor of flagging the person for investigation. Marcus claims that in order to look for rare things, you need a test that only rejects the null hypothesis at the same rate at which the thing we're testing for - in this case, terrorists - actually occurs. The problem is, there are also Type II errors. While Type I errors come from a test that is too quick to raise an alarm, Type II errors occur when our test "misses" the thing we are actually looking for. When determining how "tough" a test should be, we need to decide how to balance these two risks.
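A quick back-of-the-envelope sketch in Python makes the scale of the problem concrete. The numbers below are invented for illustration, not taken from the novel, but they show why even a seemingly accurate test drowns in false positives when the thing it hunts for is rare:

```python
# Toy illustration of the false-positive paradox Marcus describes.
# All numbers are made up for the sake of the example.

population = 20_000_000          # people being screened
base_rate = 1 / 1_000_000        # how rare the real targets are
false_positive_rate = 0.01       # Type I error: 1% of innocents get flagged
false_negative_rate = 0.01       # Type II error: 1% of real targets slip through

actual_targets = population * base_rate                       # 20 people
innocents = population - actual_targets

flagged_innocents = innocents * false_positive_rate           # ~200,000 people
caught_targets = actual_targets * (1 - false_negative_rate)   # ~19.8 people

print(f"Innocent people flagged: {flagged_innocents:,.0f}")
print(f"Actual targets caught:   {caught_targets:,.1f}")
print(f"Share of flags that are real: "
      f"{caught_targets / (caught_targets + flagged_innocents):.4%}")
```

With these made-up numbers, roughly 200,000 innocent people get flagged for every 20 real targets, so only about one flag in ten thousand points at the thing the system is actually looking for.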

Marcus is advocating for making the system less broad, thereby reducing false positives. However, this also increases the risk of false negatives. So, which is worse: a false positive or a false negative? That's a question of expected value, which is based on the probability of a result and its consequences. In this case, the result at one end of the spectrum is that the terrorists are caught because of this system, but many innocent people are subjected to surveillance and searches. At the other end, no one is caught because they slip through a timid test, and more people are hurt as a result. Clearly, this can easily turn into a much more complicated debate on the values of time, trust, privacy, and life, so I won't try to determine what the correct balance is myself. Although it's easy to describe some aspects of this conflict with numbers, as Marcus did, it just isn't that simple.

Hindsight is 20/20

In his essay "Mining Student Data Could Save Lives", Morris suggests that by analyzing students' digital activities, we could catch the oft-ignored signs of a future attack and take action before any lives are lost. At first glance, this seems like a perfect method to deter violence on campus. Sure, students' privacy is somewhat compromised, but the lives that could be saved are certainly worth the sacrifice, aren't they? However, even if we could justify the morality and ethics of such a system, there are some logical faults in this data-powered "crystal ball".

After a mass shooting, we often look at the evidence and wonder how no one noticed the signs - they seem so obvious. However, this is a classic example of hindsight bias, which refers to our tendency to see events that have already occurred as more predictable than they actually were. While some signs are indisputably concerning, such as outright threats and manifestos, many are not. Some may be subtle, and only stand out in the context of the attack. Or, it may be difficult to gauge the severity and sincerity of a message, especially since people tend to be emboldened on the internet. Many indicators can have perfectly innocent, plausible explanations, and innocent behavior can seem sinister depending on one's perspective. Finally, there's a risk that those who design the system will build their personal biases into it, unfairly targeting certain groups.

How do we handle this ambiguity? Do we err on the side of false positives and discrimination, or should we lean towards giving the benefit of the doubt, even if we risk some attackers slipping through? If a student is identified as a threat, how do we intervene, discipline, or serve justice when no crime has been committed? Perhaps there are other ways we can prevent these violent acts, such as limiting students' access to deadly weapons, building a strong community that prioritizes student care, and working to undo the societal norms, standards, and pressures that contribute to violence. Since there are many less inflammatory options, we ought to pursue them before turning to a faulty and unethical system of constant surveillance.

Inviting Suspicion

We generally don't bother to encrypt messages if we have nothing to hide. Using a code or cipher implies that the contents are sensitive or illicit in nature. In fact, as Singh points out, the contents are likely to be more explicit, because the encryption lulls the sender into a false sense of security and they write more openly about their plans. So by putting too much faith in an easily breakable cipher, you risk incriminating yourself further.

In addition, by using a cipher or code that is easily identifiable as such, you automatically invite suspicion. In her trial, Mary, Queen of Scots, claimed she knew nothing about the plot, but even without decrypting the messages, it was clear she was corresponding with conspirators. Also, the fact that she didn't write her messages in plain text implies she was concealing something. In situations like these, it may be better to stick to some sort of code that masks the message as something innocuous, or some sort of steganography that hides the secret message within another. Finding a way to hide a message in plain sight helps divert suspicion in the first place, rather than relying on an imperfect cipher once you've already drawn attention.
