Cryptography

The History and Mathematics of Codes and Code Breaking

Tag: surveillance

We Don’t Care That We’re Being Watched

The principal problem with the Panopticon metaphor is rooted in Bentham’s original purpose for the structure: behavioral modification. As Walker puts it, Bentham believed that the mere act of being watched constantly would alter a person’s behavior, adding a layer of accountability and therefore pushing the person in question towards a more moral or socially acceptable course of action.

As Walker points out, however, modern surveillance is completely incompatible with this idea. He uses the example of digital watchers overstepping their boundaries, but it is apparent that even in everyday, mundane examples of surveillance, people simply don’t change their behavior. Consider Facebook, for example. It’s no secret that Facebook tracks and stores almost every bit of information its users provide (how else will Zuckerberg learn what it means to be human?). Following the Cambridge Analytica scandal, that knowledge became headline news; everyone knew Facebook was effectively spying on them. Since then, Facebook has gained almost 100 million users.

If people know they’re being watched, why do they opt into the system?

Simply put, it’s because it’s impossible to live without the system. The Panopticon may have been a prison, but technology is so integral to modern life that opting out simply isn’t an option. Beyond just Facebook, social media provides a fast and efficient communication system, and Google is the premier tool for finding information in the blink of an eye. These systems are unlike a prison in that we want and need to be a part of them to survive the modern world. They’ve made life easy and convenient enough that the expectation is that we use them to augment our abilities to both work and play. For that reason, the Panopticon is a defunct metaphor that cannot encapsulate the complexity of modern surveillance. It’s not just that there are too many actors watching us from the watchtower, but that we have to remain in the prison if we want to maintain the standard of living we’re used to; we’ve collectively decided that the opportunity cost of opting out of the system is too great, even if opting out would let us keep some semblance of privacy. Yet we don’t begrudgingly use these apps, either. People still love to browse with Google, wish their friends ‘happy birthday’ on Facebook, and post their latest fire selfie on Instagram.

Altogether, we just really don’t care that we’re being watched.

Too Many Eyes to Fit in the Panopticon’s Tower

I would agree with Walker’s claim that the Panopticon is not an accurate metaphor for the average person’s interaction with surveillance today. While it could be argued that the government does watch over us and large corporations do silently collect our data, most people are not aware of this, so it does not produce the behavioral changes the Panopticon was designed to create. Additionally, Walker argues that the Panopticon metaphor limits its idea of surveillance solely to the “big brother” in the tower—the NSA or government, in our case—while today there exist so many other forms of surveillance, such as the “self-surveillance” present on so many social media sites, the ability of companies like Facebook to collect and sell your data, or Amazon’s Alexa listening in on your conversations to find out what you might want to buy next. I would argue, however, that our increasingly socially connected world allows for “self-surveillance” of another nature. Not only do social media sites allow so much scrutiny by the court of public opinion that it might feel like someone is always watching you online, but many social media outlets now offer means of physical surveillance by one’s own peers. Apps like “Find My Friends” allow those whom you “add” to track your location, while the widely used social media app Snapchat has now created a map that shows you where all of your Snapchat friends are at any given time, provided that they are not on “ghost mode.” Services like these allow for so much more surveillance from many sources, not just the man in the Panopticon’s tower.

The Walls Are Very Porous

Jeremy Bentham’s great theory was the Panopticon: a hypothetical prison design in which all inmates could be seen and observed by those in charge, but the inmates themselves could not see the observers, nor could they see any other inmates. It’s an interesting concept to think about in theory, but it is not useful as a metaphor in our conversations about surveillance, and, as time goes on, its effectiveness will only diminish.

There are two key features to the Panopticon that make it unique: the observer sees all, but is not observed, and those being observed are isolated from one another. The first feature fits fairly well as a metaphor into our conversations about surveillance. The observer (in this case, probably the government) takes information from the internet, from travel history, from any official record of our existence in the world, without our knowledge. We are observed, but we never see it happen.

Where the Panopticon metaphor breaks down is in the second feature: those being observed are isolated from each other. In the conversation of surveillance, it’s unclear exactly what this part would stand as a metaphor for. People are more connected now than at any point in human history, and that is made possible by the same technology that makes modern surveillance possible. Instead of building metaphorical walls between us, the internet gives us access to each other like nothing ever has. It’s called the information superhighway for a reason: it instantaneously connects us from across the world.

For the Panopticon to be a more useful metaphor, I would suggest a tweak to the design: make the walls between inmates out of glass. Better yet, remove them entirely.

We Live in a Panopticon, Here’s Why

In a practical sense, the concept of the panopticon seems inefficient, since the whole idea builds its power on the individuality of the worker: without collaboration, there is no workplace interference to slow workers down, which should in principle lead to increased productivity. However, without workers collaborating on projects and sharing information on how to maximize time and space, the quality of finished products would be inconsistent, as each individual worker would create a piece that varies from one colleague to the next. That would leave certain pieces of a project or product incompatible with one another, creating a faulty product and thus a flawed system.

Metaphorically, however, I do not agree with his thesis. I believe that the metaphor of the panopticon is accurate regarding our conversations about surveillance. As I understood it, the panopticon metaphor is about an authority watching us without our being able to see them, much like the government or an internet company watching over us regular people and collecting data on our online habits. The metaphor makes sense to me because we don’t know who has access to our online information, like our passwords or emails, much as people working in a panopticon have no idea who is in the tower watching them, or whether they are even being watched in the first place.

Power of a Test

After the terrorist attack on San Francisco, the Department of Homeland Security ramps up security and surveillance in hopes of catching the people responsible, but instead only manages to inconvenience, detain, and even seriously harm innocent civilians. Marcus explains that the problem with the DHS system is that they’re looking for something too rare in too large a population, resulting in a very large number of false positives.

What Marcus is describing is what statisticians call a Type I error – that is, rejecting the null hypothesis (the assumption that nothing is abnormal) when the null hypothesis is actually true. In this case, the null hypothesis is “not a terrorist,” and if there’s enough suspicious data, the null hypothesis is rejected in favor of flagging the person for investigation. Marcus claims that in order to look for rare things, you need a test that only rejects the null hypothesis at the same rate at which the thing you’re testing for – in this case, terrorists – actually occurs. The problem is, there are also Type II errors. While Type I errors are caused by being too cautious, Type II errors occur when our test “misses” the thing we are actually looking for. When determining how “tough” a test should be, we need to decide how to balance these two risks.
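
To make that balance concrete, here is a minimal sketch in Python; the population size, number of real threats, and error rates are invented for illustration and are not taken from the book:

```python
# Hypothetical numbers, not from the book: a city of 20 million people,
# 10 real threats, and three screening tests of increasing strictness.
population = 20_000_000
real_threats = 10

# (false positive rate, false negative rate) for each hypothetical test
tests = {
    "loose test":  (0.05, 0.001),
    "medium test": (0.01, 0.01),
    "strict test": (0.0001, 0.20),
}

for name, (fp_rate, fn_rate) in tests.items():
    type_1 = (population - real_threats) * fp_rate   # innocents flagged (Type I)
    type_2 = real_threats * fn_rate                  # real threats missed (Type II)
    print(f"{name:12s} innocents flagged: {type_1:>12,.0f}   threats missed: {type_2:.2f}")

# Even the strict test flags roughly 2,000 innocent people, and it now misses
# a fifth of the real threats: tightening the test shifts error from one kind
# to the other rather than eliminating it.
```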

Marcus is advocating for making the system less broad, thereby reducing false positives. However, this increases the risk of false negatives as well. So, which is worse: a false positive or a false negative? That’s a question of expected value, which is based on the probability of a result and its consequences. In this case, the result at one end of the spectrum is that the terrorists are caught because of this system, but many innocent people are subjected to surveillance and searches. At the other end, no one is caught because they slip through a timid test, and more people are hurt as a result. Clearly, this can easily turn into a much more complicated debate about the values of time, trust, privacy, and life, so I won’t try to determine what the correct balance is myself. Although it’s easy to describe some aspects of this conflict with numbers, as Marcus did, it just isn’t that simple.
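
An expected-value comparison could be sketched roughly like this; every probability and cost below is a placeholder chosen only to show the shape of the calculation, not a real estimate:

```python
# Placeholder inputs: the point is the structure of the comparison, not the numbers.
policies = {
    "broad test":  {"p_attack_succeeds": 0.01, "innocents_flagged": 200_000},
    "narrow test": {"p_attack_succeeds": 0.10, "innocents_flagged": 2_000},
}

harm_of_attack = 1_000_000    # arbitrary units of harm
harm_per_search = 1           # harm of searching one innocent person

for name, p in policies.items():
    expected_harm = (p["p_attack_succeeds"] * harm_of_attack
                     + p["innocents_flagged"] * harm_per_search)
    print(f"{name}: expected harm = {expected_harm:,.0f}")

# Which policy "wins" depends entirely on how you price privacy, time, and
# safety against one another -- which is exactly why the debate isn't simple.
```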

Misleading Statistics

An interesting point Cory Doctorow brought up in his novel, Little Brother, is the idea of the “false positive.” He writes, “Say you have a new disease, called Super-AIDS. Only one in a million people gets Super-AIDS. You develop a test for Super-AIDS that’s 99 percent accurate… You give the test to a million people. One in a million people have Super-AIDS. One in a hundred people that you test will generate a ‘false positive’ – the test will say he has Super-AIDS even though he doesn’t. That’s what ’99 percent accurate’ means: one percent wrong… If you test a million random people, you’ll probably only find one case of real Super-AIDS. But your test won’t identify one person as having Super-AIDS. It will identify 10,000 people as having it” (128).
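
Doctorow’s numbers check out; here is a quick sketch of the same arithmetic, which also shows how unlikely it is that any individual positive result is real:

```python
# Numbers taken from the quoted passage.
population = 1_000_000
base_rate = 1 / 1_000_000        # one in a million actually has Super-AIDS
accuracy = 0.99                  # "99 percent accurate"

true_positives = population * base_rate * accuracy                 # about 1 person
false_positives = population * (1 - base_rate) * (1 - accuracy)    # about 10,000 people

chance_real = true_positives / (true_positives + false_positives)
print(f"True positives:  {true_positives:.2f}")
print(f"False positives: {false_positives:,.0f}")
print(f"Chance a flagged person is actually sick: {chance_real:.4%}")
# Roughly 0.01% -- out of about 10,001 positives, only one is a real case.
```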

This idea can be linked to Michael Morris’ essay on student data mining. Critics of Morris argue that looking at students’ data would not be an effective method of school shooting prevention, as many innocent behaviors can be seen as “suspicious.” Even if looking into student data were deemed 99% effective at detecting threatening individuals (which it is not; in fact, it is most likely nowhere near that figure), the false positive paradox means that far more innocent students would be marked as suspicious than actually dangerous ones. However, one can argue that the pros of these “threat tracking” methods outweigh the cons. If data surveillance can prevent a dangerous school attack, then it is worth identifying a few innocent people as suspicious. (This opinion can be seen as a bit Machiavellian.)

The paradox of the false positive can be applied beyond data surveillance. One can use this idea to examine how misleading statistics are in general. For example, hand sanitizer claims to kill 99.9% of bacteria. There are about 1,500 bacterial cells living on each square centimeter of your hands. If 99.9% of those bacterial cells are killed off by hand sanitizer, there are still one or two left on every square centimeter – hundreds across your hands – and the ones left are probably the strong ones capable of making you sick.
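
The corrected arithmetic, assuming a round figure of about 400 square centimeters of skin on a pair of hands (an assumption, since no total area is given), looks like this:

```python
# The 1,500 cells/cm^2 figure comes from the post; the ~400 cm^2 of hand
# surface area is an assumed round number for illustration.
cells_per_cm2 = 1_500
hand_area_cm2 = 400

before = cells_per_cm2 * hand_area_cm2      # roughly 600,000 cells
survivors = before * (1 - 0.999)            # the 0.1% that sanitizer misses

print(f"Bacteria before sanitizing: {before:,}")
print(f"Bacteria left afterwards:   {survivors:,.0f}")
# "Kills 99.9%" still leaves hundreds of survivors -- plausibly the hardy ones.
```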

Surveillance = Dehumanization

One of the topics most widely discussed throughout Little Brother by Cory Doctorow is government surveillance. Was it justifiable for the DHS to track San Francisco citizens’ every move in the name of national security? An instance where this ethical dilemma came into question occurred on pages 136-138, when Marcus and his father learned that the DHS was closely monitoring ground chatter. Marcus, who was responsible for this spike in chatter, was opposed to the DHS’s involvement with the issue, while his father praised the DHS for its work attempting to catch the “methodical fools.” According to Marcus’ father, in today’s society you must sacrifice some things in order to feel safe; he asks his son, “Would you rather have privacy or terrorists?” Marcus, on the other hand, sees the monitoring as an invasion of privacy and does not believe that surveillance will amount to the arrest of terrorists.


I found both Marcus’ and his father’s arguments extremely interesting and compelling. On one hand, the terrorists who killed thousands of people were still physically free and potentially able to cause more harm. On the other hand, the constant monitoring has only slowed society and created fear throughout the city. Although both arguments are valid, from an ethical standpoint I would have to side with Marcus. The use of algorithms and data mining to determine the likelihood that a person is a terrorist is extremely dehumanizing. In the US, we have already turned humans into mere digits by using Social Security numbers to keep track of virtually everything we do. Data mining, for the purpose of finding criminals, reduces human behavior to simple numbers. We are not computers. This dehumanization allows the government to treat us like statistics. As the book shows, we go far beyond this assumption: our behavior is influenced by a range of variables, like emotions, that computers cannot comprehend.


A Slippery Slope

Shootings, suicides, and other similar acts of violence, especially on campuses, have become more prevalent in the last decade than ever before. The free internet (though not one of the larger reasons for this increase, in my opinion) has expanded access to the resources needed to commit such acts. And in most cases, in the “aftermath of every large-scale act of campus violence,” officials and investigators discover warning signs that, had they been found beforehand, could have given authorities reason to intervene.

In his essay “Mining Student Data Could Save Lives,” Michael Morris argues for the use of data mining on campuses to prevent incidents of campus violence. Since the university essentially controls both the wired and wireless internet networks on campus, administrators have the tools to use algorithms to identify at-risk student behavior. Morris believes that universities should take advantage of this ability to maximize campus safety. Since we already give up so much of our privacy and personal information through social media, what does it matter if we lose a little more?

I agree with this argument, but only to a slight extent. Giving universities the freedom to surveil and monitor student activity on their networks could be an extremely slippery slope if not handled seriously and carefully. While I do agree that the benefits of preventing large-scale acts of violence outweigh the need for complete privacy, universities should be limited in how much access they have to student information and in how they use it. FERPA could possibly be modified to give universities more freedom when it comes to monitoring online student activity, but in a limited and controlled way. Contrary to how Morris makes it seem, there is a substantial amount of information on our computers that we haven’t given up through social media. Though our lives are largely public, I do value personal privacy to some extent. Despite the continually growing need for surveillance and intervention to prevent violence, I do believe that giving universities too much power could open a can of worms that may be difficult to close.

Hindsight is 20/20

In his essay “Mining Student Data Could Save Lives,” Morris suggests that by analyzing students’ digital activities, we could catch the oft-ignored signs of a future attack and take action before any lives are lost. At first glance, this seems like a perfect method to deter violence on campus. Sure, students’ privacy is somewhat compromised, but the lives that could be saved are certainly worth the sacrifice, aren’t they? However, even if we could justify the morality and ethics of such a system, there are some logical faults in this data-powered “crystal ball.”

After a mass shooting, we often look at the evidence and wonder how no one noticed the signs – they seem so obvious. However, this is a classic example of hindsight bias, which refers to our tendency to see events that have already occurred as more predictable than they were. While some signs are indisputably concerning, such as outright threats and manifestos, many are not. Some may be subtle, and only stand out in context of the attack. Or, it may be difficult to gauge the severity and sincerity of a message, especially since people tend to be emboldened on the internet. Many indicators can have perfectly innocent, plausible explanations, and innocent behavior can seem sinister depending on one’s perspective. Finally, there’s a risk that those who design the system will build their personal biases into it, unfairly targeting certain groups.

How do we handle this ambiguity? Do we err on the side of false positives and discrimination, or should we lean towards giving the benefit of the doubt, even if we risk some attackers slipping through? If a student is identified as a threat, how do we intervene, discipline, or serve justice when no crime has been committed? Perhaps there are other ways we can prevent these violent acts, such as limiting students’ access to deadly weapons, building a strong community that prioritizes student care, and working to undo societal norms, standards, and pressures that contribute to violence. Since there are many other, less inflammatory options, we ought to pursue them before turning to a faulty and unethical system of constant surveillance.


Data Mining in Universities

The issue of internet privacy and surveillance is large and ever-growing as our lives become more and more linked with the digital world. In his essay “Mining Student Data Could Save Lives,” Michael Morris argues that schools and universities should employ data mining technology on their networks to try to prevent potentially harmful acts against staff and students.

Morris’ stance on this topic is obviously an extremely controversial one. When presented with the notion that schools can track their data, most students would likely be upset, saying it’s a violation of their privacy. However, the article brings up an interesting and valid point: we already give up much of our personal information to online websites, most notably for targeted advertising. Yet most people do not seem bothered by this, and they continue to use these online services.

The reason most people wouldn’t agree with schools tracking students’ online activity, despite consenting to online surveillance every day, is a sense of personal disconnect. A student is at school nine months a year and has direct contact with their administration. As a result, it feels much more personal to be watched by a university than by a large corporation like Google, which has billions of users. In addition, students would most likely grow suspicious of the school, thinking that administrators were watching their every move online with a magnifying glass. Still, I think that university surveillance of students’ activity on their networks could be an effective way of keeping schools safe. With gun violence being such a hot issue in America, it’s reasonable for schools to be allowed to look at potentially suspicious activity. If you’re not doing anything wrong, there should be no reason for you to worry.
