# Cryptography

#### Tag: surveillance

After the terrorist attack on San Francisco, the Department of Homeland Security ramps up security and surveillance in hopes of catching the people responsible, but instead only manages to inconvenience, detain, and even seriously harm innocent civilians. Marcus explains that the problem with the DHS system is that it's looking for something too rare in too large a population, which produces a very large number of false positives.

What Marcus is describing is referred to in statistics as a Type I error: rejecting the null hypothesis (the assumption that nothing is abnormal) when the null hypothesis is actually true. In this case, the null hypothesis is "not a terrorist," and if there's enough suspicious data, the null hypothesis is rejected in favor of flagging the person for investigation. Marcus claims that in order to look for rare things, you need a test that rejects the null hypothesis only at roughly the rate at which the thing you're testing for (in this case, terrorists) actually occurs. The problem is that there are also Type II errors. While Type I errors come from a test that is too eager, Type II errors occur when the test "misses" the thing we are actually looking for. When determining how "tough" a test should be, we need to decide how to balance these two risks.
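This balancing act can be made concrete with a toy sketch. The "suspicion scores" below are entirely invented for illustration; the point is only that raising the flagging threshold lowers the Type I rate while raising the Type II rate.

```python
# Toy "suspicion score" test with an adjustable threshold.
# All scores here are made-up numbers for illustration.

def error_rates(threshold, innocent_scores, guilty_scores):
    """Return (type_i_rate, type_ii_rate) at a given score threshold."""
    type_i = sum(s >= threshold for s in innocent_scores) / len(innocent_scores)
    type_ii = sum(s < threshold for s in guilty_scores) / len(guilty_scores)
    return type_i, type_ii

# Overlapping distributions: innocents mostly score low, the guilty mostly
# score high -- the overlap is exactly what makes the balance hard to strike.
innocent_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
guilty_scores = [0.5, 0.7, 0.9]

for t in (0.3, 0.5, 0.8):
    print(t, error_rates(t, innocent_scores, guilty_scores))
```

With these toy numbers, a low threshold (0.3) wrongly flags five of the seven innocents while missing no one, and a high threshold (0.8) flags no innocents but misses two of the three guilty scores. No threshold drives both error rates to zero.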

Marcus is advocating for making the system less broad, thereby reducing false positives. However, this also increases the risk of false negatives. So, which is worse: a false positive or a false negative? That's a question of expected value, which is based on the probability of a result and its consequences. In this case, the outcome at one end of the spectrum is that the terrorists are caught because of this system, but many innocent people are subjected to surveillance and searches. At the other end, no one is caught because they slip through a timid test, and more people are hurt as a result. Clearly, this can easily turn into a much more complicated debate about the values of time, trust, privacy, and life, so I won't try to determine the correct balance myself. Although it's easy to describe some aspects of this conflict with numbers, as Marcus did, it just isn't that simple.
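The expected-value computation itself is a one-liner; the hard part is choosing the inputs. Every number in this sketch is a made-up placeholder, which is exactly the point: the real costs of lost privacy, time, and life resist quantification.

```python
# Expected harm = (probability of an outcome) * (cost of that outcome).
# All figures below are hypothetical placeholders, not estimates.

def expected_cost(probability, cost):
    return probability * cost

cost_false_positive = 1        # one innocent person searched (relative units)
cost_false_negative = 10_000   # one attacker missed (relative units)
p_false_positive = 0.01        # per-person chance of being wrongly flagged
p_false_negative = 0.000001    # per-person chance of a missed attacker

print(expected_cost(p_false_positive, cost_false_positive))
print(expected_cost(p_false_negative, cost_false_negative))
```

With these invented numbers the two expected harms come out the same (0.01 each); nudge any single input and the "right" balance flips, which is why arithmetic alone can't settle the debate.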

An interesting point Cory Doctorow brings up in his novel, Little Brother, is the "paradox of the false positive." He writes, "Say you have a new disease, called Super-AIDS. Only one in a million people gets Super-AIDS. You develop a test for Super-AIDS that's 99 percent accurate... You give the test to a million people. One in a million people have Super-AIDS. One in a hundred people that you test will generate a 'false positive' -- the test will say he has Super-AIDS even though he doesn't. That's what '99 percent accurate' means: one percent wrong... If you test a million random people, you'll probably only find one case of real Super-AIDS. But your test won't identify one person as having Super-AIDS. It will identify 10,000 people as having it" (128).
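Doctorow's arithmetic is easy to verify directly; the population, prevalence, and error rate below are taken straight from the quoted passage:

```python
# Doctorow's Super-AIDS numbers, computed directly.
population = 1_000_000
prevalence = 1 / 1_000_000       # one in a million actually has Super-AIDS
false_positive_rate = 0.01       # "99 percent accurate" means 1 percent wrong

true_cases = population * prevalence
false_positives = (population - true_cases) * false_positive_rate

print(round(true_cases))         # 1 real case
print(round(false_positives))    # ~10,000 people wrongly flagged
```

One real case against roughly ten thousand false alarms: the problem is not the test's accuracy but the rarity of the thing being tested for.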

This idea can be linked to Michael Morris's essay on student data mining. Critics of Morris argue that examining students' data would not be an effective method of school-shooting prevention, since many innocent behaviors can look "suspicious." Even if student data mining were deemed 99% effective at detecting threatening individuals (which it is not; it is almost certainly nowhere near that figure), the paradox of the false positive means that far more non-suspicious students would be marked as suspicious than actually dangerous people. However, one can argue that the pros of these "threat tracking" methods outweigh the cons: if data surveillance can prevent a dangerous school attack, then it is worth identifying a few innocent people as suspicious. (This opinion can be seen as a bit Machiavellian.)

The paradox of the false positive applies well beyond surveillance; one can use this idea to examine how misleading statistics are in general. For example, hand sanitizer claims to kill 99.9% of bacteria. There are roughly 1,500 bacterial cells living on each square centimeter of your hands. If 99.9% of those cells are killed off, one or two cells per square centimeter still survive, which across both hands adds up to hundreds of cells, and the survivors are probably the hardy ones most capable of making you sick.
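As a quick sanity check on that arithmetic (the combined hand-area figure is a rough assumption of my own, not from the source):

```python
# "Kills 99.9%" still leaves survivors; how many depends on the starting count.
cells_per_cm2 = 1500
kill_rate = 0.999
hand_area_cm2 = 400      # rough assumed surface area of both hands combined

survivors = cells_per_cm2 * hand_area_cm2 * (1 - kill_rate)
print(round(survivors))  # ~600 surviving cells
```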

One of the topics most widely discussed throughout Little Brother by Cory Doctorow is government surveillance. Was it justifiable for the DHS to track every move of San Francisco's citizens in the name of national security? An instance where this ethical dilemma came into question occurred on pages 136-138, when Marcus and his father learned that the DHS was closely monitoring ground chatter. Marcus, who was responsible for the spike in chatter, opposed the DHS's involvement, while his father praised the DHS for its work attempting to catch the "methodical fools." According to Marcus's father, in today's society you must sacrifice some things in order to feel safe; he asks his son, "Would you rather have privacy or terrorists?" Marcus, on the other hand, sees the monitoring as an invasion of privacy and does not believe that surveillance will lead to the arrest of any terrorists.

I found both Marcus's and his father's arguments extremely interesting and compelling. On one hand, the terrorists who killed thousands of people were still physically free, and potentially able to cause more harm. On the other hand, the constant monitoring has only slowed society and created fear throughout the city. Although both arguments are valid, from an ethical standpoint I would have to side with Marcus. The use of algorithms and data mining to determine the likelihood that a person is a terrorist is extremely dehumanizing. In the US, we have already turned humans into mere digits by using Social Security numbers to keep track of virtually everything we do. Data mining for the purpose of finding criminals reduces human behavior to simple numbers. We are not computers. This dehumanization allows the government to treat us like statistics, but as the book shows, we go far beyond that assumption: our behavior is influenced by a range of variables (like emotions) that computers cannot comprehend.

Shootings, suicides, and other similar acts of violence, especially on campuses, have become more prevalent in the last decade than ever before. The free internet (though not one of the larger reasons for this increase, in my opinion) has expanded the accessibility of the resources needed to commit such acts. And in most cases, in the "aftermath of every large-scale act of campus violence," officials and investigators discover warning signs that, had they been found beforehand, could have given authorities reason to intervene.

In his essay "Mining Student Data Could Save Lives," Michael Morris argues for the use of data mining on campuses to prevent incidents of campus violence. Since the university essentially controls both the wired and wireless internet networks, campus administrators have the tools to use algorithms to identify at-risk student behavior. Morris believes that universities should take advantage of this ability to maximize campus safety. Since we already give up so much of our privacy and personal information through social media, what does it matter if we lose a little more?

I agree with this argument, but only to a slight extent. Giving universities the freedom to survey and monitor student activity on their networks could become an extremely slippery slope if not handled seriously and carefully. While I do agree that the benefits of preventing large-scale acts of violence outweigh the need for complete privacy, universities should be limited in how much access they have to student information and in how they use it. FERPA could be modified to give universities more freedom to monitor online student activity, but in a limited and controlled way. Contrary to how Morris makes it seem, there is a substantial amount of information on our computers that we haven't given up through social media. Though our lives are largely public, I do value personal privacy to some extent. Despite the continually growing need for surveillance and intervention to prevent violence, I believe that giving universities too much power could open a can of worms that may be difficult to close.

In his essay "Mining Student Data Could Save Lives," Morris suggests that by analyzing students' digital activities, we could catch the oft-ignored signs of a future attack and take action before any lives are lost. At first glance, this seems like a perfect method to deter violence on campus. Sure, students' privacy is somewhat compromised, but the lives that could be saved are certainly worth the sacrifice, aren't they? However, even if we could justify the morality and ethics of such a system, there are some logical faults in this data-powered "crystal ball."

After a mass shooting, we often look at the evidence and wonder how no one noticed the signs, since they seem so obvious. However, this is a classic example of hindsight bias, our tendency to see events that have already occurred as more predictable than they actually were. While some signs are indisputably concerning, such as outright threats and manifestos, many are not. Some may be subtle, and only stand out in the context of the attack. Or it may be difficult to gauge the severity and sincerity of a message, especially since people tend to be emboldened on the internet. Many indicators can have perfectly innocent, plausible explanations, and innocent behavior can seem sinister depending on one's perspective. Finally, there's a risk that those who design the system will build their personal biases into it, unfairly targeting certain groups.

How do we handle this ambiguity? Do we err on the side of false positives and discrimination, or should we lean towards giving the benefit of the doubt, even if we risk some attackers slipping through? If a student is identified as a threat, how do we intervene, discipline, or serve justice when no crime has been committed? Perhaps there are other ways we can prevent these violent acts, such as limiting students' access to deadly weapons, building a strong community that prioritizes student care, and working to undo societal norms, standards, and pressures that contribute to violence. Since there are many other less inflammatory options, we ought to pursue them before turning to a faulty and unethical system of constant surveillance.

The issue of Internet privacy and surveillance is large and ever-growing as our lives become more and more linked with the digital world. In his essay "Mining Student Data Could Save Lives," Michael Morris argues that schools and universities should employ data-mining technology on their networks to try to prevent potentially harmful acts against the staff and student body.

Morris's stance on this topic is obviously an extremely controversial one. When presented with the notion that schools can track their data, most students would likely be upset, calling it a violation of their privacy. However, the article brings up an interesting and valid point: we already give up much of our personal information to online websites, most notably for targeted advertising. Yet most people do not seem bothered by this, and continue to use these online services.

The reason most people wouldn't agree with schools tracking students' online activity, despite consenting to online surveillance daily, is a sense of personal disconnect. A student is at school nine months a year and has direct contact with the administration. As a result, it feels much more personal to be watched by a university than by a large corporation like Google, which has billions of users. In addition, students would likely grow suspicious of the school, imagining administrators watching their every move online with a magnifying glass. Even so, I think that university surveillance of students' activity on school networks could be an effective way of keeping schools safe. With gun violence being such a pressing issue in America, it's reasonable for schools to be allowed to look into potentially suspicious activity. If you're not doing anything wrong, there should be no reason for you to worry.

Almost everyone agrees that safety and privacy are two things that people have the fundamental right to enjoy.  Rarely do we hear an argument deliberately stating that either of these concepts should be intentionally disregarded.  In a perfect world, everyone could feel protected from physical harm as well as from privacy invasion.  Unfortunately, however, we do not live in a perfect world.  We live in a society where priorities must be evaluated and sacrifices must be made in order to promote the greater good.

Today, we face a growing prevalence of terrorism and violent crime that poses a threat to national security. It is important that our government be given the freedom to use electronic surveillance, because it allows agencies to collect information that could prevent these horrible incidents from ever taking place. If federal agencies such as the NSA or the FBI could monitor people's online behavior, they could identify red flags and potentially intervene before tragedy strikes. Even if the chances are slim, it's still worth a try.

Some believe that the government would be overstepping its bounds with surveillance like this, saying it has no right to collect personal data.  However, if surveillance has a chance to save lives, one could argue that it is acceptable to use it at the expense of some degree of personal privacy.  As long as you aren't doing anything wrong, you have nothing to be afraid of.  The primary purpose of any government is to protect its citizens.  It has no interest in snooping around an ordinary person's data, and would not go out of its way to bother anyone who doesn't pose a threat.  Overall, it's important that we have a little bit more faith in the intentions of our government.  We are currently in the midst of an informational arms race.  The enemy is using every resource at their disposal to try to come out on top - shouldn't we do the same?

When we talk about the battle between security and privacy, most of the discussion on both sides concerns one of two topics: the effectiveness of electronic mass surveillance in deterring and stopping crime, or the effect that surveillance has on individual freedoms (e.g., freedom of speech and expression). These are the most important questions in the debate, since we all agree that both individual freedom and safety matter; the debate is about how we prioritize those values and the effects we perceive surveillance having on them. As a debater on either side, it is often tempting (and quite easy) to exaggerate the importance of either privacy or security, for example by claiming that by letting the government monitor our phone calls, we are condemning ourselves to an Orwellian future. Obviously, it is possible to live in a free and healthy democratic society where the government has access to its citizens' phone calls. So instead of making that extreme claim, it might be more appropriate to simply note that we need to be deliberate and thoughtful about what freedoms we give up, and a similar approach applies to the safety side of the debate.

In addition to these value-driven issues, there is an important practical side to the debate that goes along with the above idea to be judicious in how we relinquish our freedoms, even when the end result is justified. It is important to keep in mind that any powers we grant to the government now are effectively permanent; they set a precedent for future regimes to do the same. So if we are going to give up a freedom in today's society, we should also be willing to give that up in a hypothetical society where our ruler is the kind of tyrant we fear the most. Obviously, our constitution is designed specifically to prevent such a government from coming to power, but recognizing the longstanding effects of our choices today is vital since we can't afford to get the answers to these questions wrong.

In the debate of privacy vs. surveillance in the United States, there are a few arguments that can be made in favor of having more surveillance as a security measure. The biggest and most obvious argument is that it aids in ensuring national security. Without electronic surveillance, it would be almost impossible to catch criminals and terrorists in our technology-filled modern society. At this point, video surveillance would not be enough even if the government were somehow allowed to put cameras in our houses. Now that so many of our day-to-day interactions occur on the internet, criminals can communicate with each other at the push of a button. Electronic surveillance of digital devices allows these communications to be monitored, which makes it much easier to crack down on these crimes.

Another argument for security builds on the previous one. When the government says it is collecting our data through electronic surveillance, it may mean just that: collecting. Having electronic surveillance helps a lot, but the fact that the government may be collecting data doesn't mean it is constantly looking through it. There most likely isn't an NSA agent reading through each of our texts and social media accounts. But once agents have reasonable suspicion that a seemingly ordinary citizen is doing something shady, the data is already there, and they can finally use it.

When I first entered this class, I was very pro-privacy. But after hearing arguments from both sides, I have come to understand that surveillance is necessary. The argument on our side is not against surveillance, but rather focuses on the phrase "wide latitude." What is considered a wide enough latitude? Who gets to assess that? When the government wants more surveillance than has been proven necessary, who is to stop it? The emphasis should be placed on the checks and balances that must be built into any surveillance system, and on establishing a boundary between surveillance and privacy.

One might argue that in the face of threats to national security, one's feelings about privacy should be disregarded. I agree that times of crisis call for certain crisis measures. However, it would be a great understatement to say that "privacy" is merely a word or a feeling. Privacy is tightly linked to freedom of speech: whoever controls the surveillance controls the flow of information, and in our time, information flow is everything.

Surveillance is not harmless, because it is placed in the hands of men. I need not draw any example from history; we can all agree that men can be evil, men can be wrong, and power can be abused. And surveillance is probably one of the greatest powers of government in our time. Electronic surveillance in the interest of national security is necessary, provided that it is effective and that it truly serves national security. However, downplaying the privacy of citizens is unacceptable. The foundation of the nation, the First Amendment's free speech and free press clauses, could be compromised if all privacy is invaded.
