Cryptography

The History and Mathematics of Codes and Code Breaking

Tag: data mining

The Panopticon Metaphor Isn't All That Bad... Sue Me

In the 18th century, philosopher Jeremy Bentham designed the Panopticon, a prison in which inmates could be monitored by an all-seeing guard who himself could not be seen. Comparing this to surveillance now, particularly regarding the internet, even though the metaphor is kind of bad, it is not too far off from what could be happening.

Anyone and everyone who is using the internet knows that their usage habits are being monitored; if you did not know, now you know. It is called data mining. That is why when you are on Forever 21's website shopping for dresses, you see Forever 21 dress advertisements on Facebook not even minutes after you have clicked off of Forever 21. This fits with how a company, whether Facebook or Forever 21, watches your online activity, similar to how a guard watches several prisoners.

I am also going to take this time to compare internet users to prisoners within a Panopticon. Our data is constantly being mined and our usage monitored, yet we cannot really do anything about it. Before using most of these websites, we usually agree to let said websites do so. This is similar to how prisoners cannot (and will not) do much about a guard watching their every move.

Now, in relation to the government being said guard and internet users being "prisoners," the Panopticon metaphor is not the best. Although there is a vast number of theories out there, there is not an "all-seeing government," at least not within the United States. It is entirely possible for the government to obtain information about a person if it absolutely has to; however, the government is not constantly watching millions of citizens.

To sum it all up: the Panopticon metaphor for data mining? Good. For government surveillance? Bad.

The Panopticon Isn't That Bad... But It's Not Good Either

As explained in the podcast, the Panopticon is essentially the idea of a tower that looks over a prison. The cells are illuminated so that the guard in the tower can see the inmates, but the inmates cannot see the guard. Although this could be used to exemplify today's government surveillance, Walker disagrees, saying that it is a “terrible metaphor.”

To take a side in this debate is very difficult. On the one hand, the government has bribed sites like Yahoo and Facebook, allowing it to access all of our information without our consent. On the other hand, data mining goes further than the negative assumptions we place on it. As Walker points out, in today's society we see data mining as exclusively negative. The government must be using our information for its own gain, right? However, data mining does not solely have bad implications. One could use data mining to conduct studies to improve our internet experiences, or to research children in a comfortable setting. These are not bad things. In this case, I would agree with Walker.

However, things get a bit sketchy when you remember the negative possibilities of surveillance. We do not know what the government is using our data for. It could be simply conducting research to better our lives, or it could be discovering ways to most efficiently imprison members of society in order to carry out a mass genocide. That's a bit extreme, but I guess it could happen, theoretically. My point is that, just like the prisoners, we do not know what the people in charge are doing. Because of this, I could never definitively say that the Panopticon is a bad metaphor.

Mining Mystery: Should we mine Student Data for more Protection?

Morris’s central argument revolves around the incorporation of student data mining in order to counter possible future threats. He calls this “the next natural step” in using private information to prevent external threats. Morris goes on to detail how administrators could track social media usage, shopping patterns, and other online activity in order to assess whether a credible threat exists.

The central issue in this debate lies between privacy and security. Are students’ rights to privacy outweighed by administrators’ need to provide safety and security for their students? This question isn’t limited to college campuses; it can be applied to society as a whole. Discussing the role of authority, particularly governments, in our daily lives is of the utmost importance and a daily ideological struggle.

I both agree and disagree with Morris’s argument. It’s important for administrators to do whatever is necessary to protect their students, but violating students’ privacy is not the path to take. Aside from the obvious moral dilemma, such an act could give more power to authority and reduce self-accountability. Allowing the administration to monitor what students do online would lead to mistrust; dangerous, secretive behaviors; and a need for students to “hide” what they are doing online. A common-sense solution would combine certain aspects of Morris’s argument with the other side: allowing the student population to decide which aspects of their online lives they want monitored would lend more credibility to the administration’s efforts to increase safety, as well as build trust and hold authority accountable.

How much power we are willing to give authority is a central tenet of modern society, and no discrete answer exists. The best possible solution takes into account both sides’ arguments and will help administrators provide better security while also protecting student privacy.

Data Mining: It's Already Happening, So Why Not Push It Further

In the essay "Mining Student Data Could Save Lives," Michael Morris's central argument is essentially that a variety of online platforms already use data mining to decide what to advertise to users; since this is the case, why not allow colleges and universities to use the same technology to identify when a student is showing unhealthy, worrying, and potentially dangerous behavior through their internet usage?

At first, when I began reading the essay, I had already set it in my mind that colleges and universities being able to see what students were doing online was an invasion of their privacy, simply because it is so easy to abuse that power. But as I continued reading, Morris made points about how shopping sites and social media platforms already mine data, and that quickly changed my viewpoint.

Just as I can Google dresses and later have dresses advertised to me on Facebook, students can shop for guns or stalk faculty (as Morris said) and have that information available for their university to see. And even though this is not one hundred percent foolproof or guaranteed to prevent tragic events from happening on campuses, it is still a good step toward ensuring a little more safety and security on campus.

The Crystal Ball is Cloudy

Michael Morris argues that, through mining student data (examining the digital footprints left by students in their day-to-day lives), universities could prevent violence from occurring on campus. This belief is founded on the idea that students intending to commit violence might leave some evidence of their bad intentions in their online actions. Morris rightly suggests that, if a student has shown strong negative opinions about a particular professor, shopped online for weaponry, and drafted a suicide note, there is cause for concern.

Morris provides many examples of the added security against student violence that the practice of data mining would provide, but neglects to address in detail the privacy concerns that opening up this information to university authorities introduces. While many of the arguments Morris makes are valid, the article generally seems overly optimistic toward the idea of student data mining. It glosses over concerns of privacy and of false accusations. Morris uses the example of credit card companies tracking spending behavior to detect fraud. This is a practice I support, as it can frequently prevent the owner of the card from having money fraudulently taken away, but credit card companies are not one hundred percent accurate. Sometimes, the owner of the card has a purchase declined because the credit card company misidentifies suspicious spending habits. In the case of credit card companies, this is fine, as the owner of the card can simply inform the company that there was no fraudulent spending, and the matter is resolved. However, in the case of student terrorist activity, the stakes are much higher. If a student's actions are falsely identified as those of a future murderer, that student can potentially have their life permanently altered by false accusations.

While I have my criticisms of the viewpoints expressed in this article, I do not necessarily completely disagree with it. The issue is a complex one, and I don't believe there is one correct answer that can address all of the different concerns and competing priorities involved in deciding whether or not to go forward with mining student data. It's a complicated question that would take much more than 400 words to even begin to try to answer.

A Slippery Slope

Shootings, suicides, and other similar acts of violence, especially on campuses, have become more prevalent in the last decade than ever before. The free internet (though not one of the larger reasons for this increase, in my opinion) has expanded the accessibility of the resources needed to commit such acts. And in most cases, in the "aftermath of every large-scale act of campus violence," officials and investigators discover warning signs that, had they been found beforehand, could have given authorities reason to intervene.

In his essay Mining Student Data Could Save Lives, Michael Morris argues for the use of data mining on campuses to prevent incidents of campus violence. Since the university essentially controls both its wired and wireless internet networks, campus administrators have the tools to use algorithms to identify at-risk student behavior. Morris believes that universities should take advantage of this ability to maximize campus safety. Since we already give up so much of our privacy and personal information through social media, what does it matter if we lose a little more?

I agree with this argument, but only to a slight extent. Giving universities the freedom to survey and monitor student activity on their networks could be an extremely slippery slope if not handled seriously and carefully. While I do agree that the benefits of preventing large-scale acts of violence outweigh the need for complete privacy, universities should be limited in how much access they have to student information and in how they use it. FERPA could be modified to give universities more freedom when it comes to monitoring online student activity, but in a limited and controlled way. Contrary to how Morris makes it seem, there is a substantial amount of information on our computers that we haven't given up through social media. Though our lives are largely public, I do value personal privacy to some extent. Despite the continually growing need for surveillance and intervention to prevent violence, I believe that giving universities too much power could open a can of worms that may be difficult to close.

Hindsight is 20/20

In his essay "Mining Student Data Could Save Lives," Morris suggests that by analyzing students' digital activities, we could catch the oft-ignored signs of a future attack and take action before any lives are lost. At first glance, this seems like a perfect method to deter violence on campus. Sure, students' privacy is somewhat compromised, but the lives that could be saved are certainly worth the sacrifice, aren't they? However, even if we could justify the morality and ethics of such a system, there are some logical faults in this data-powered "crystal ball".

After a mass shooting, we often look at the evidence and wonder how no one noticed the signs; they seem so obvious. However, this is a classic example of hindsight bias: our tendency to see events that have already occurred as more predictable than they actually were. While some signs are indisputably concerning, such as outright threats and manifestos, many are not. Some may be subtle, and only stand out in the context of the attack. Or it may be difficult to gauge the severity and sincerity of a message, especially since people tend to be emboldened on the internet. Many indicators can have perfectly innocent, plausible explanations, and innocent behavior can seem sinister depending on one's perspective. Finally, there's a risk that those who design the system will build their personal biases into it, unfairly targeting certain groups.

How do we handle this ambiguity? Do we err on the side of false positives and discrimination, or should we lean toward giving the benefit of the doubt, even if we risk some attackers slipping through? If a student is identified as a threat, how do we intervene, discipline, or serve justice when no crime has been committed? Perhaps there are other ways we can prevent these violent acts, such as limiting students' access to deadly weapons, building a strong community that prioritizes student care, and working to undo societal norms, standards, and pressures that contribute to violence. Since there are many other less inflammatory options, we ought to pursue them before turning to a faulty and unethical system of constant surveillance.

Data Mining Should Become a Priority of Campus Officials

In his article “Mining Student Data Could Save Lives,” Michael Morris suggests that if universities were able to track troubling student behavior by mining traditionally private information, then more at-risk and potentially violent behavior would be caught early by university officials. Morris also notes that the Family Educational Rights and Privacy Act (FERPA), which originally disallowed the release of a student’s information without written consent, was altered because of the killings at Virginia Tech in 2007. Universities are now allowed to report data on students whose behavior they find concerning and potentially threatening.

The central argument that Morris makes is that increasing the capabilities of university threat-assessment teams through data mining would help avert violence on campus. I agree with Morris’s argument because universities are supposed to be a safe zone where students can learn while experiencing a lifestyle with more responsibilities. Allowing threat-assessment teams more access to data would ensure that student safety and well-being is a priority for campus officials.

Speaking from personal experience, I grew up in a suburb of San Diego where every high school graduate was expected to move on to a university. The academic pressure on a lot of students was immense. So immense, in fact, that some couldn’t live with the stress put on them by the school or by the society around them. A total of six students had committed suicide by the time I graduated from high school. I feel that data mining and the enhanced capacity of threat-assessment teams would allow officials to find data on students who are at risk of hurting themselves, and to cultivate a campus identity built upon health and conversation. I know Vanderbilt does a great job with this, especially with the Center of Student Well-being and accessible hotlines for students to call when they find themselves in hard situations, but this is more of a statement based on what I experienced prior to university.

The Role of Data Privacy in Campus Safety

In his essay “Mining Student Data Could Save Lives,” Michael Morris urges universities to act upon their unique ability to prevent possible acts of mass violence by screening students’ data footprints. Morris explains that the use of university email addresses and campus wireless networks gives most college IT departments the ability to mine their students’ data, but that restrictions within the Family Educational Rights and Privacy Act, as well as some administrators’ fears over student privacy, have slowed or prevented the implementation of student data mining for security purposes. Although the implications of universities having control over student internet usage are precarious, Morris argues that most students have already, whether willingly or not, relinquished most control over their online privacy through the use of social media and other websites, and he therefore raises the question of whether safety on campus is more important than the (according to Morris, somewhat superficial) notion of privacy.

While I understand the potential risks these practices could have to student privacy, I also believe that every student has a right to safety in their place of learning, and anything that can be done to prevent atrocities such as mass shootings on campuses is worth the slightest perceived loss in privacy.

The Modern Dichotomy: Protection or Privacy

Michael Morris' piece Mining Student Data Could Save Lives presents the argument that universities in the United States have the technological capability to monitor their student bodies and act upon any suspect behaviors they detect. Such a breach of privacy would better enable these institutions to ensure the safety of their students, but at the cost of each individual's privacy. While most of the piece is an objective analysis of the ways in which universities could employ data mining technology, Morris does eventually advocate a position, saying that the American university should make use of its "crystal ball" to better prevent violent incidents on campus.

I agree with this sentiment, given the current climate of gun violence within this country. Time and time again, it has been proven that the antiquated methodologies of yesteryear are insufficient to prevent heinous acts like those perpetrated at Virginia Tech from happening again. Too often, the response to these atrocities is to "send our thoughts and prayers" and simply wait with bated breath for the next one down the road. As such, the only effective system for preventing mass shootings and other premeditated violent acts is the one Morris describes: the use of a data mining algorithm to analyze suspicious behaviors and activities as they occur in real time, giving law enforcement time to respond. While this may constitute a breach of the fundamental privacy afforded to all Americans by the Constitution, the frequency and efficiency with which these acts are being carried out force us to reexamine the intrinsic worth of privacy within society. Given recent events, it seems that the answer is clear: university campuses have a moral obligation to use the data they have access to in order to protect their student bodies, even if that demands some intrusion on their digital privacy.

