# Cryptography

#### Author: wangs29

An interesting point Cory Doctorow brings up in his novel Little Brother is the idea of the “false positive.” He writes, “Say you have a new disease, called Super-AIDS. Only one in a million people gets Super-AIDS. You develop a test for Super-AIDS that’s 99 percent accurate… You give the test to a million people. One in a million people have Super-AIDS. One in a hundred people that you test will generate a ‘false positive’ -- the test will say he has Super-AIDS even though he doesn’t. That’s what ’99 percent accurate’ means: one percent wrong… If you test a million random people, you’ll probably only find one case of real Super-AIDS. But your test won’t identify one person as having Super-AIDS. It will identify 10,000 people as having it” (128).
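Doctorow’s arithmetic checks out, and it’s easy to verify with a short sketch (every number below comes straight from the quote):

```python
# False-positive paradox, using the numbers from Doctorow's example.
population = 1_000_000
true_cases = 1                  # "Only one in a million people gets Super-AIDS"
false_positive_rate = 0.01      # "99 percent accurate" means 1 percent wrong

# Everyone who is healthy has a 1% chance of being wrongly flagged.
false_positives = round((population - true_cases) * false_positive_rate)

# Chance that a person flagged by the test is actually sick:
posterior = true_cases / (true_cases + false_positives)

print(false_positives)  # about 10,000 healthy people flagged
print(posterior)        # roughly 1 in 10,000
```

So even a “99 percent accurate” test is wrong about nearly every positive result it produces, simply because the disease is so rare.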

This idea can be linked to Michael Morris’ essay on student data mining. Critics of Morris argue that monitoring students’ data would not be an effective method of school-shooting prevention, since many innocent behaviors can look “suspicious.” Even if mining student data were deemed 99% effective at detecting threatening individuals (and it is almost certainly nowhere near that accurate), the false-positive paradox means that far more harmless students would be flagged as suspicious than genuinely dangerous ones. However, one can argue that the pros of these “threat tracking” methods outweigh the cons: if data surveillance can prevent a dangerous school attack, then it is worth flagging a few innocent people as suspicious. (This opinion can be seen as a bit Machiavellian.)

The paradox of the false positive applies beyond data encryption; one can use it to examine how misleading statistics are in general. For example, hand sanitizer claims to kill 99.9% of bacteria. There are roughly 1,500 bacterial cells living on each square centimeter of your hands. If 99.9% of those cells are killed off, one or two per square centimeter still survive, which adds up to hundreds of cells across both hands, and the survivors are probably the hardy ones most capable of making you sick.
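The same quick check works here; a rough sketch (the cell density is the figure above, and the hand surface area is an assumed round number for illustration):

```python
# Rough estimate of bacteria surviving a "kills 99.9%" hand sanitizer.
cells_per_cm2 = 1500     # approximate density of bacterial cells on skin
hand_area_cm2 = 400      # assumed combined surface area of both hands
kill_rate = 0.999        # "kills 99.9% of bacteria"

survivors = cells_per_cm2 * hand_area_cm2 * (1 - kill_rate)
print(survivors)  # on the order of hundreds of cells left alive
```

The headline percentage sounds absolute, but a 0.1% remainder of a large number is still a lot of bacteria.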

The issue of Internet privacy and surveillance looms ever larger as our lives become increasingly linked with the digital world. In his essay “Mining Students’ Data Could Save Lives,” Michael Morris argues that schools and universities should employ data-mining technology on their networks to try to prevent potentially harmful acts against staff and students.

Morris’ stance on this topic is obviously extremely controversial. When presented with the notion that schools can track their data, most students would likely be upset, calling it a violation of their privacy. However, the article brings up an interesting and valid point: we already give up much of our personal information to websites, most notably for targeted advertising. Yet most people do not seem bothered by this, and continue to use these online services.

The reason most people would not agree to schools tracking students’ online activity, despite consenting to online surveillance daily, is a sense of personal disconnect. A student is at school nine months a year and has direct contact with the administration. As a result, it feels much more personal to be watched by a university than by a large corporation like Google, which has billions of users. In addition, students would likely grow suspicious of the school, imagining administrators examining their every move online with a magnifying glass. Still, I think university surveillance of students’ activity on school networks could be an effective way of keeping campuses safe. With gun violence being such a pressing issue in America, it’s reasonable for schools to be allowed to look into potentially suspicious activity. If you’re not doing anything wrong, there should be no reason to worry.

One may assume that any type of encryption is better than none, but in many situations that is not the case. Take the story of Mary Queen of Scots: her weakly encrypted correspondence with Babington was deciphered by the expert cryptanalyst Thomas Phelippes, leading to her eventual execution. Mary and Babington were so confident in their substitution cypher that they wrote explicitly about their plans to assassinate Queen Elizabeth. Unbeknownst to them, there had been huge advances in the field of cypher-breaking. Had Mary and Babington possessed an accurate sense of the weakness of their cypher, their correspondence would have been far more discreet, discussing their plans in a much more cautious manner.
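The weakness that undid Mary’s cypher is frequency analysis, the very technique Phelippes relied on, and it is easy to demonstrate. A minimal sketch (the key and the message below are invented for illustration, not taken from the actual correspondence):

```python
import random
import string
from collections import Counter

random.seed(0)

# A monoalphabetic substitution cypher: each letter maps to one other letter.
plain = string.ascii_lowercase
key = dict(zip(plain, random.sample(plain, len(plain))))

message = ("the queen must be removed before the plan can proceed "
           "meet the others at the appointed place")
cyphertext = "".join(key.get(ch, ch) for ch in message)

# Frequency analysis: letter frequencies survive the substitution, so the
# most common cyphertext letter almost certainly stands for a common
# plaintext letter such as 'e' or 't'.
counts = Counter(ch for ch in cyphertext if ch.isalpha())
most_common_letter, _ = counts.most_common(1)[0]
print(most_common_letter)  # in this message it is the image of plaintext 'e'
```

Because a substitution cypher never changes how often each symbol appears, a long enough intercepted message hands the analyst a statistical fingerprint of the plaintext, which is exactly what doomed Mary and Babington.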

This contrast between explicit and discreet communication appears in everyday situations where cyphers are not involved. For example, if a group of bilingual people wants to talk about someone nearby without their knowledge, the group will most likely switch to its second, less widely spoken language. They will talk about that person without any filter, assuming no one around them can understand what they are saying. If the group doesn’t have a second language to fall back on, however, they will probably communicate more discreetly, with facial expressions and gestures rather than clearly spoken words.

The case of Mary Queen of Scots is a lesson for anyone who wishes to communicate through encryption: communicate as if your cypher is breakable, no matter how secure you think it is.
