Cryptography

The History and Mathematics of Codes and Code Breaking


Easier Understood than Done

In one of my previous posts, I wrote about hindsight bias and how it affects our perception of surveillance and whether surveillance would significantly improve security. It turns out hindsight bias is significant once again. In chapter 3 (and throughout the book), Singh provides examples in which he makes breaking what would once have been considered an unbreakable cipher look easy, obvious even. This leads us to believe that these techniques should have been obvious in the first place. However, that’s just our own overconfidence and hindsight bias talking. In general, we tend to assume that things were more predictable and more obvious than they actually were, and that we “knew it all along”. Additionally, we overestimate what we currently know and how well we can do things independently before we are asked to do them. For example, when students study for tests, they’ll read a question and think to themselves, “Oh, I know that,” but when asked that very question on a test, they draw a blank because they never really knew the answer – they just assumed they did because they recognized the concepts or vocabulary. When we see these examples and think they should have been obvious, we have the privilege of hindsight and someone else’s guidance.

Also, it’s important to note that Singh is writing some of these examples himself, and it’s much easier to decode something you encoded in the first place. These examples are also meant to be examples, and were therefore designed with that in mind. We’re supposed to read these examples and understand how the decryption is working and find it logical – that’s what a good explanation is supposed to do.

Admiral Hall’s Ethics

I found the first question quite interesting as it related to a few topics that I discussed in my Ethics class of junior year. When is something morally justifiable? And, is a bad deed moral if it leads to the greater good? Obviously Admiral William Hall would argue that not telling President Woodrow Wilson about the United States’ potential danger in order to pull the wool over Germany’s eyes was ethical. He was focused on the greater good. This most closely follows consequentialism; the idea that the morality of an action lies in the consequences it bears. I have always disagreed with the ideas of consequentialism. To be completely frank, I think they are a bit ridiculous.

The results of an action are extremely important in determining the morality of the deed; however, the results are not everything. An action can, in itself, be ethical or unethical. Certain things, at least in my opinion, are never up for debate. For instance, murder is always unethical. Even if something good came from murder, the action would never be moral. Who are we to decide the value of a life? William Hall clearly had no issue valuing human lives. He saw what could have been death and destruction, and he judged that potential outcome of his actions to outweigh the more probable consequences. In the end, his decision paid off. However, the decision he made, although great, will never be ethical.


We in Fact Know

The argument that Benjamin Walker presents is that the analogy of the Panopticon does not correspond to the surveillance of our conversations and our actions. For the most part, I believe that Walker has every right to say this simply because of the fundamental basis of both of these concepts.

The Panopticon, in essence, is a building that serves as a “surveillance machine”. It was a structure that Jeremy Bentham advocated for, mainly as a prison, where the prisoners sat in their respective cells in an open circular building while the guards stood in an illuminated central tower, able to watch the prisoners at any given moment. Because of the tower’s illumination, the prisoners could not see into it, which meant they never knew whether they were being watched at any point in time. And while this analogy is generally acceptable, understanding what surveillance means in our context can help us see the flaws of the comparison.

One noticeable hole is that, in terms of modern surveillance, we do in fact know that we are being watched. In fact, we have come to accept that we are being watched practically all the time. Yet many a time we don’t let that thought affect what we decide to view or what we decide to say in our daily conversations. The comparison to the prison would be accurate if the government were policing our every word, our every Google search, and so on. But because this is not the case, the Panopticon cannot be an effective way to describe the surveillance that happens today.

The Panopticon

I both agree and disagree with Benjamin Walker’s assertion that the Panopticon is a faulty metaphor. The Panopticon is a theoretical structure in which a circular building surrounds a central watchtower. The watchtower shines bright light such that the people in the tower can observe what those in the building below are doing, but the observed individuals can’t see when they are being watched. Thus they must always assume that they are being watched. Originally meant to be a prison, the Panopticon can be applied to a wide variety of situations.

In today’s surveillance era, we are constantly tracked by cameras wherever we go; the cameras, as Walker argued of the watchtower, serve as a means of deterrence. The argument goes that if there were visual evidence of your actions, you would be discouraged from engaging in criminal acts. But in today’s digital age, there are no visible “eyes” tracking us as we move from news apps to games to video-sharing websites. Instead, giant corporations and governments silently track our data usage to build algorithms that can, supposedly, help protect us from bad actors. Without visible digital eyes, we are more likely to engage in harmful behaviors that we believe are anonymous. This, I believe, is the biggest strength of the concept of the Panopticon – the deterrence that comes from a constant state of being observed. But even though we know we are being watched today, we still act as if we are invisible. The watchtower is particularly interesting; it has migrated from being a physical building to being countless data-surveillance tools deployed by a variety of actors. The Panopticon is very much real today in its surveillance sense; whether our behavior is being normalized or corrected because of its presence (whether or not we know about it) is another issue.

The Fallacy of the Panopticon Metaphor

Jeremy Bentham’s Panopticon is a hypothetical prison based on two concepts: the idea that the officers can spy on the inmates without the inmates knowing they’re being spied on, and the premise that the inmates can’t communicate with each other due to the separation of their cells. The comparison between current government surveillance and the Panopticon, however, is not an accurate metaphor.

In the Panopticon, the prisoners know they’re in prison. There are physical cells keeping the inmates from talking to each other, reminding them of their imprisonment. In reality, however, the “prisoners” of the government often don’t even know they’re in prison. Many citizens are unaware of the government’s ability to see into their lives through the Internet. They live in ignorant bliss, thinking that their lives are in any way private. And because they don’t know they’re “imprisoned”, it doesn’t occur to them to protect their data and fight back against those doing the surveillance.

The other caveat is the premise that the prisoners are completely separated from each other. In reality, the web allows us to communicate with each other and gather information from online sources. If we want to educate ourselves about anything (including governmental surveillance procedures), we can do it. Those who are aware that they’re “imprisoned” do have the ability to band together and rebel, or at least try. The question of how we can best fight back has yet to be answered.


Panopticon: More than Meets the Eye

Jeremy Bentham, the famous utilitarian philosopher, is the original creator of the concept of the Panopticon. The Panopticon is a surveillance facility, typically a prison, which is supposed to achieve a foolproof system. In a Panopticon, prisoners are held in cells in a rotunda with an illuminated inspection tower in the middle. In this system, the prison guard in the inspection tower can see into every cell in the surrounding rotunda. However, since the tower is illuminated, the prisoners cannot see the guard. They never know when they are being watched, so they are incentivized to act as if they are always being watched, making for perfect order. Since Bentham first invented the concept, people have begun to think of the Panopticon as a metaphor for how the government surveils the public. This metaphor works for explaining how the government might like to surveil the public, but it ends up oversimplifying the situation.

The Panopticon works in an interesting way as a metaphor for how the government would like surveillance to go. First, in the Panopticon, the surveillance officer can see all yet cannot be seen. In many ways, this is how the government would like to maintain order. To be able to monitor all activity with ease would, in theory, be the best way to identify and shut out danger and crime. The real cost, however, would be the people’s feeling of being violated. Yet in the Panopticon, the prisoners cannot identify the prison guard or tell whether they’re being watched; in theory, they have no method of exposing those who have violated them. Moreover, the Panopticon is a great metaphor for the deterrent aspect of the system. Most people see government surveillance as a way to catch those who are already posing a threat to society. The Panopticon, when used as a metaphor, reminds us that the government can also use surveillance to deter people from acting out. In the Panopticon, prisoners behave because they are incentivized to act as if they are always being watched. Similarly, the government can use surveillance techniques to scare people out of acting out for fear that they may be caught.

Despite its strong suits, the Panopticon fails as a metaphor to accurately explain the give and take between privacy and security. In the Panopticon, the prisoners cannot interact with each other and are unable to learn anything about the guards who are watching them. But this is far from the truth in the real world. In reality, those who feel violated fight back by interacting with each other and teaming up against oppression. The Panopticon does not account for this happening, yet it always does. Additionally, privacy movements usually gain popularity when the public learns about a way the government has been violating privacy rights. In the real world, people are always learning new information about the government in relation to privacy. For these reasons, the Panopticon does not accurately explain surveillance systems and everything that happens around them.

The Great Cipher Was a Really Great Cipher

The 1600s were a strange time in the history of cryptography. Monoalphabetic ciphers had run their course, with cryptanalysts having the resources and know-how to crack any monoalphabetic cipher quickly. On the other hand, the newly developed polyalphabetic cipher, which uses multiple cipher alphabets, was effective but too tedious to be embraced by codemakers. People needed a method of encryption that was unbreakable and also not so difficult to use. That’s when the Rossignols, cryptanalysts employed by the French government, developed the Great Cipher of Louis XIV. It is worth asking why this particular cipher, which is essentially an enhanced monoalphabetic cipher, took 200 years to decipher.

The first reason the Great Cipher took so long to decipher was its complexity compared to ciphers cryptanalysts had seen in the past. The Great Cipher was a monoalphabetic cipher, meaning each symbol in the “cipher alphabet” mapped to one and only one thing in the plaintext. Still, it was extremely different from all other monoalphabetic ciphers. First, it used numbers. The use of numbers to stand for letters was a relatively new development in cryptography, and cryptanalysts still didn’t know the best method for deciphering numeric codes. More importantly, however, the Great Cipher used 576 numbers: many more symbols than there are letters in the alphabet. So great a mismatch between the quantity of symbols and the quantity of letters had never been seen before, so there was initially a huge gap between the experience of the cryptanalysts and the complexity of this cipher.
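To make that one-to-one structure concrete, here is a minimal Python sketch of a numeric monoalphabetic cipher. The key below is entirely made up for illustration (and maps only 26 letters, where the real Great Cipher used 576 numbers), but it shows the defining property: each plaintext unit maps to exactly one number.

```python
import random

# Toy numeric monoalphabetic cipher: each letter maps to one fixed number.
# This key is hypothetical -- the real Great Cipher used 576 numbers.
random.seed(1)  # fixed seed so the "key" is reproducible
letters = "abcdefghijklmnopqrstuvwxyz"
numbers = random.sample(range(100, 999), 26)  # 26 distinct code numbers
encrypt_key = dict(zip(letters, numbers))
decrypt_key = {n: l for l, n in encrypt_key.items()}

def encrypt(plaintext):
    """Replace each letter with its code number (non-letters dropped)."""
    return [encrypt_key[c] for c in plaintext if c in encrypt_key]

def decrypt(ciphertext):
    """Invert the mapping to recover the plaintext."""
    return "".join(decrypt_key[n] for n in ciphertext)

msg = "attackatdawn"
assert decrypt(encrypt(msg)) == msg
```

Because the mapping is strictly one-to-one, the frequency profile of the plaintext survives intact in the numbers, which is exactly the weakness frequency analysis exploits once a cryptanalyst recognizes what kind of cipher they are facing.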

The second reason the Great Cipher took so long to decipher was the technology available at the time. This was a cipher of 576 numbers. If it were 26 numbers, it would be fairly obvious that each number matched a letter. With 576 numbers, however, each number could mean anything, so many possibilities needed to be tested. Because everything had to be written out by hand, testing a possibility was extremely tedious, time-consuming, and daunting. Bazeries, the man who eventually deciphered the Great Cipher 200 years later, spent months testing whether it was a homophonic cipher and then spent more months testing whether the numbers represented pairs of letters. In short, no one wanted the herculean task of testing failed decryptions over and over again with pen and paper, so it took a long time before someone took it up.

Lastly, the Great Cipher took so long to decipher because of the creativity and effectiveness of the cipher itself. Every cipher in the past had been based on letters, but the Rossignols based their cipher on syllables, matching each number to a syllable of the French language. This system made it just as easy to send messages and decode them with a key, but it made the cipher extremely difficult to crack, given that there are so many potential syllables. The creativity put into this cipher showed, as no one thought to look at syllables for 200 years. The Great Cipher took so long to crack because it was everything a great cipher is: complex, daunting, and way ahead of its time.
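The syllable idea can be sketched the same way. Everything in the little table below is hypothetical (the real key had 576 entries keyed to French syllables), but it shows why the scheme stayed convenient for the legitimate users while hiding letter frequencies from everyone else. The example phrase mimics “les-en-ne-mi-s” (“les ennemis”), the kind of repeated syllable cluster that gave Bazeries his foothold.

```python
# Toy syllable cipher in the spirit of the Rossignols' design.
# This table and its numbers are hypothetical; the real key had 576 entries.
syllable_key = {
    "les": 124, "en": 22, "ne": 125, "mi": 46, "s": 345,
    "de": 78, "la": 201, "ru": 310, "se": 413, "ront": 502,
}
number_key = {n: syl for syl, n in syllable_key.items()}

def encrypt(syllables):
    """Encrypt a message already split into syllables."""
    return [syllable_key[s] for s in syllables]

def decrypt(numbers):
    """Look each number up in the key and join the syllables."""
    return "".join(number_key[n] for n in numbers)

# "les ennemis", split into syllables before encryption:
codes = encrypt(["les", "en", "ne", "mi", "s"])
print(codes)           # [124, 22, 125, 46, 345]
print(decrypt(codes))  # lesennemis
```

With a key in hand, encoding and decoding are simple table lookups; without it, a cryptanalyst counting single-letter frequencies finds nothing, because the basic unit is no longer the letter.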


Environmental change in Cryptological Perception

Mary Queen of Scots fully believed that her cipher was unbreakable, so she laid bare her plan to seize the English throne. Thus when her cipher was broken, there lay a written confession on the table, ready to take her to the gallows. This historical example led to the development of an environment of secrecy and mistrust, in which cryptanalysts held power over cryptographers. Even if one made a seemingly “unbreakable” code, one did not know whether an expert codebreaker was waiting to crack it. This never-ending cat-and-mouse game of codes has continued through the centuries, always adapting and evolving. The knowledge that one’s code could be broken fostered more caution on the part of cryptographers, who sent messages that were more cryptic in nature even in plaintext, knowing that an expert codebreaker might crack their code.

This strategy was a direct consequence of the knowledge that someone more experienced might crack your code – after all, if that were the case, why not make your plaintext message more difficult to understand as well? This would add an additional layer of security and ensure more protection. This shift was a significant one in the history of cryptography, representing a transition to a more secretive, hard-to-decipher language in which nothing was taken for granted.

Necessity is the Mother of Invention

The invention of the telegraph revolutionized long-distance communication by allowing messages to travel many miles practically instantaneously. However, the drawback of telegraphs compared to letters was that the required intermediaries for transmission also had access to the contents of a message. While a postman is unlikely to open and read a sealed letter, a telegraph clerk has no choice but to read what they are sending.

This affected the development of cryptographic techniques in two major ways. One, it prompted the general public to become more interested in cryptography: even if their messages were not necessarily “secret” per se, most people are uncomfortable with the idea of a half-dozen strangers reading their private correspondence. Two, individuals and organizations that already encrypted their messages needed to amp up their security, because their messages would be viewed by more people and would be easier to intercept via wiretapping. This spurred the adoption of the Vigenère cipher for telegraph communications because of the increased security it provided.
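For reference, the Vigenère cipher that telegraph users adopted is simple to sketch in Python: each letter is shifted by the corresponding letter of a repeating keyword, so identical plaintext letters encrypt differently depending on their position, which defeats straightforward frequency analysis. The keyword below is my own example, not a historical one.

```python
from itertools import cycle

def vigenere(text, key, decrypt=False):
    """Vigenère cipher for A-Z text: shift each letter by the matching
    letter of the repeating keyword (subtract the shift to decrypt)."""
    sign = -1 if decrypt else 1
    out = []
    for c, k in zip(text.upper(), cycle(key.upper())):
        shift = sign * (ord(k) - ord("A"))
        out.append(chr((ord(c) - ord("A") + shift) % 26 + ord("A")))
    return "".join(out)

ciphertext = vigenere("TELEGRAPH", "WIRE")
assert vigenere(ciphertext, "WIRE", decrypt=True) == "TELEGRAPH"
# Note: the two E's in TELEGRAPH encrypt to different letters, which is
# the polyalphabetic property a simple frequency count cannot undo.
```

A clerk transmitting the ciphertext learns nothing useful without the keyword, which is precisely what made the cipher attractive once telegraph operators became unavoidable intermediaries.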

Similarly, in recent times, the concept of encryption has become more popular and the technique has become more refined in response to the increase in digital communications. As new technologies emerge, so may new cryptographic techniques.

Riemann Sum (feat. Technology)

Technology has quite literally transformed our lives. We live in an age of undeniable prosperity and freedom, where even our poorest live a better life than ancient kings. But in recent years the very technologies that we use for pleasure have been turned against us by governments and bad-faith actors. Of course we don’t live in an era of absolute freedom; we agree to cede some of our rights for safety and security. For example, we as a society agree on the use of surveillance cameras as a means of deterrence and protection, but are we ready to make the leap to facial ID? We agree that police should use DNA testing to solve crime, but what about an artificial intelligence reconstruction of a criminal that may present flaws?

One of the most striking paragraphs from Little Brother comes on page 42, where Cory Doctorow discusses how, even though advancements in gait-recognition software allowed individuals to be recognized from their movements, the software’s success rate was reduced by any number of external factors, including floor material, ankle angle, and your energy level. This variability can lead to errors in the system, which can have devastating consequences, especially when people’s lives and security hang in the balance. The title, I believe, accurately reflects our society’s desire to perfect our creations: we input more data points, update more software, and create new tools in a never-ending journey to build the perfect AI tool. But at what point do the ethical complications of such a tool lead to sufficient harm that an objective cost-benefit analysis would overturn its progress? No matter how many data points we inject, a piece of technology will never perfectly emulate the human mind. Every error caused by the inaccuracy of technology threatens our stability, and the threat is only magnified as the scope of the instrument grows. One particular example exists in the NSA: what would be the fallout of an inaccurate terror watch list compiled using the latest data points? Although this question is astronomical, it is important that we examine this issue with the utmost scrutiny.

