One passage from Little Brother that particularly caught my attention was the part of chapter 8 in which Marcus discusses the paradox of the false positive.  It begins with Marcus explaining his plan to fight back against the Department of Homeland Security's ramped-up surveillance and "safety protocols," which he believes violate the personal privacy of the citizens of San Francisco.  He points out a critical flaw in the DHS terrorist detection system: the accuracy of its tests isn't nearly good enough to identify actual terrorists without incorrectly accusing hundreds or even thousands of innocent people in the process.  Because true terrorists are so rare, the tests meant to increase safety end up generating far too many false positives, leaving people feeling even less safe.  As Marcus puts it, it's like trying to point out an individual atom with the tip of a pencil.

This passage made me reconsider how reliable automatic detection algorithms really are.  It's tempting to assume that a 99% accurate test is trustworthy, but when the thing you're looking for is extremely rare in a very large population, even a 1% error rate causes major problems.  Thinking back to the article about universities using data-mining to identify possible school shooters and other at-risk individuals, it's clear that the paradox of the false positive could cause similar issues in real-world situations.  The number of would-be school shooters is so small compared to the total student population that any test would struggle to identify them without flagging far more innocent students.  Overall, Little Brother's discussion of the paradox of the false positive demonstrates the importance of identification tests accurate enough to match the rarity of what they are meant to find.  Otherwise, you might just end up working against yourself.
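To see why a 1% error rate can swamp a rare signal, here is a minimal back-of-the-envelope sketch.  The population size and number of actual terrorists are assumptions chosen for illustration, not figures from the novel; only the 99% accuracy comes from the discussion above:

```python
# Base-rate arithmetic behind the paradox of the false positive.
# Assumed numbers for illustration: a city of 20 million with 10 actual
# terrorists, screened by a test that is 99% accurate in both directions.
population = 20_000_000
actual_terrorists = 10
accuracy = 0.99  # so the false-positive rate is 1%

# Innocent people wrongly flagged by the 1% error rate:
false_positives = (population - actual_terrorists) * (1 - accuracy)
# Actual terrorists correctly flagged:
true_positives = actual_terrorists * accuracy

flagged = false_positives + true_positives
# Of everyone the test accuses, what fraction is actually guilty?
precision = true_positives / flagged

print(f"People flagged: {flagged:,.0f}")
print(f"Innocent people flagged: {false_positives:,.0f}")
print(f"Chance a flagged person is guilty: {precision:.4%}")
```

Even though the test is "99% accurate," it flags roughly 200,000 innocent people to catch about ten terrorists, so the odds that any given flagged person is guilty are a tiny fraction of one percent.  That is the pencil-tip-and-atom problem in numbers.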