Cryptography

The History and Mathematics of Codes and Code Breaking


The High Price of Safety

I believe that the whiteboard exhibition at the Newseum was nothing less than a work of art. While it appears to be a simple forum for people to share their viewpoints, it also reveals the array of opinions held by people with very different mindsets. In the given display alone, we see one person uncomfortable with sharing his location and personal texts, while another finds it reasonable for the government to go through his phone records and texts. Another still uses a quote to imply that giving up your privacy for security makes you unworthy of both privacy and security. Such conflicting viewpoints serve as an illustration of just how difficult it can be to find a reasonable compromise.

To answer the question asked by the display, I feel that I am comfortable with giving the government as much information as it needs as long as doing so has no repercussions in my day-to-day life. If the government can guarantee that the information will remain confidential, I don’t see why I should be bothered by a stranger going through my phone records. The only flaw I see in adopting this approach is the implications of false positives. Given the current state of technology and surveillance, the number of false positives generated would cause a majority of people to face intervention by the government even when they are innocent. This can be problematic because it directly counters the ideas of safety and security, since these victims can feel targeted by the very government they chose to protect them.

The Paradox of the False Positive

One passage from Little Brother that particularly caught my attention was the part of chapter 8 in which Marcus discusses the paradox of the false positive. It begins with Marcus explaining his plan to fight back against the Department of Homeland Security’s ramped-up surveillance and “safety protocols,” which he believes violate the personal privacy of the citizens of San Francisco. He talks about a critical flaw in the DHS terrorist detection system: the accuracy of the terrorism tests isn’t nearly good enough to identify actual terrorists without incorrectly accusing hundreds or even thousands of innocent people in the process. Because true terrorists are extremely rare, the tests meant to increase safety end up generating far too many false positives, leaving people feeling even less safe. As Marcus says, it’s like trying to point out an individual atom with the tip of a pencil.

This passage made me reconsider just how effective automatic detection algorithms really are. It’s natural to believe that a 99% accurate test is reliable, but when what you’re looking for is very rare in a very large population, even a 1% error rate can cause major problems. Thinking back to the article that discussed universities’ use of data mining to identify possible school shooters and other at-risk individuals, it’s clear that the paradox of the false positive could cause similar issues in real-world situations. The number of would-be school shooters is so small compared to the total student population that it would be extremely difficult for any test to accurately identify them, as the rough calculation below shows. Overall, Little Brother’s discussion of the paradox of the false positive demonstrates the importance of having identification tests accurate enough to overcome the rarity of what they are meant to find. Otherwise, you might just end up working against yourself.
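
To see how quickly a “reliable” test falls apart on a rare target, here is a minimal sketch of the arithmetic. The student-body size and the number of genuinely at-risk individuals are made-up figures chosen only for illustration; they are not taken from the article or from Little Brother.

```python
# Hypothetical illustration of the false positive paradox for a rare target.
# The population size, number of at-risk students, and the 99% figure are
# assumptions for illustration, not numbers from the article or the novel.

population = 30_000      # students screened (hypothetical)
at_risk = 3              # students who are genuinely at risk (hypothetical)
accuracy = 0.99          # the screen is "99% accurate" in both directions

caught = at_risk * accuracy                             # at-risk students correctly flagged
false_alarms = (population - at_risk) * (1 - accuracy)  # innocent students flagged anyway

flagged = caught + false_alarms
precision = caught / flagged  # chance that a flagged student is actually at risk

print(f"Students flagged:       {flagged:,.0f}")   # ~303
print(f"Genuinely at risk:      {caught:.1f}")     # ~3
print(f"Chance a flag is right: {precision:.2%}")  # ~0.98%
```

Even with a 99% accurate screen, roughly three hundred students are flagged for every three who are genuinely at risk, so about ninety-nine out of every hundred flags point at someone innocent.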

The False Positive Paradox

While I had previously worked with false positives in various statistics problems, I never considered the implications behind them. Cory Doctorow addresses this “false positive paradox” in his book Little Brother.

In the story, the narrator, Marcus, talks about a “99 percent accurate” test for “Super-AIDS” (Doctorow 47). However, this means the test is one percent inaccurate, and one percent of one million is 10,000. Since the disease strikes only about one person in a million, roughly 10,000 people would be flagged even though only one of them actually has it, and thus the “99 percent accurate test will perform with 99.99 percent inaccuracy” (Doctorow 47).
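
For anyone who wants to check Doctorow’s figure, here is a small sketch that simply reruns the book’s arithmetic: a disease that strikes about one person in a million, screened by a test that is wrong one percent of the time.

```python
# Rerunning the Super-AIDS arithmetic quoted from Little Brother (Doctorow 47):
# the disease strikes roughly one person in a million, and the test is
# "99 percent accurate", meaning it is wrong about 1 percent of the time.

tested = 1_000_000
sick = 1                  # about one person in a million actually has it
error_rate = 0.01         # the one percent of the test that is inaccurate

false_positives = (tested - sick) * error_rate  # healthy people flagged: ~10,000
total_flagged = false_positives + sick          # nearly every positive is false

share_wrong = false_positives / total_flagged
print(f"People flagged:       {total_flagged:,.0f}")  # ~10,001
print(f"Flags that are wrong: {share_wrong:.2%}")     # ~99.99%
```

Roughly ten thousand people are flagged, and only about one of them is actually sick, which is exactly where the 99.99 percent figure comes from.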

This was extremely interesting, as I had never considered how inaccurate these tests really were. The statistics and numbers had always seemed solid; it made sense, and 99% seemed like an extremely high percentage. But Doctorow makes an intriguing analogy: pointing at a single pixel on your screen with a sharp pencil works, but pointing at a single atom with that same pencil would be hopelessly imprecise. This highlights the flaw in any test that screens for something rare and can produce false positives. Whether detecting diseases or terrorism, such tests can waste enormous resources, including time and money.

This could also be connected to the process of cracking ciphers. For example, when testing cribs against an Enigma-enciphered message, a crib may seem to fit in a particular context but turn out to be an incorrect guess for the message as a whole. Even in general cryptanalysis, you could spend hours making apparent progress on a message, only to realize that an early mistake meant your previous progress had simply been a “false positive.” Clearly, false positives can be quite dangerous and misleading. The false positive paradox further magnifies the effect these false readings can have on important tests and examinations, and the consequences could be devastating. Imagine administering a drug thought to combat a certain disease to a patient who never actually had it: the patient was perfectly healthy, yet the unnecessary drug ends up causing birth defects in the patient’s children. A single false positive could have tragic repercussions.

99% Accurate Means 1% Wrong

One passage that caught my attention in Little Brother was the explanation of false positives and why they cause so many problems in systems like the book’s terrorism detection. For some things, a test that is 99% accurate works great. However, if the test is trying to detect something very uncommon in a very large group, such as terrorists, who the book estimates make up 1/20,000 of one percent of a city’s population, then that 1% of inaccuracy becomes a huge problem. In a city of twenty million people, incorrectly flagging 1% of the population as terrorists means investigating two hundred thousand innocent citizens in order to maybe catch ten terrorists. And since such a system would likely be far less than 99% accurate, the problem would be even worse.
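
The arithmetic behind those figures is easy to verify; the short sketch below uses the same numbers the passage cites: a population of twenty million, terrorists making up 1/20,000 of one percent of it, and a 1% error rate.

```python
# The terrorism numbers cited in the passage: a city of twenty million,
# real terrorists making up 1/20,000 of one percent of the population,
# and a detection system that is wrong 1 percent of the time.

population = 20_000_000
terrorist_share = (1 / 20_000) / 100   # 1/20,000 of one percent
error_rate = 0.01

terrorists = population * terrorist_share                   # about 10 people
innocents_flagged = (population - terrorists) * error_rate  # about 200,000 people

print(f"Actual terrorists:      {terrorists:,.0f}")
print(f"Innocents investigated: {innocents_flagged:,.0f}")
print(f"Investigations per real terrorist: {innocents_flagged / terrorists:,.0f}")
```

That works out to roughly twenty thousand investigations of innocent people for every real terrorist the system might catch.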

Things like this are important to take into consideration in today’s society, which is becoming ever more concerned with security and devising new ways to prevent terrorist attacks, even if that means invading people’s privacy. While programs like the one in the book are not currently in place in America, if an attack like the one on the Bay Bridge were to occur, there would likely be support for implementing them. However, there comes a point at which, in the name of “defending freedom,” freedom is actually taken away, and that is something we need to be very careful about.
