< PREV | NEXT > | INDEX | SITEMAP | GOOGLE | UPDATES | BLOG | CONTACT | $Donate? | HOME

[4.0] The Future Of Cryptology

v1.1.0 / chapter 4 of 4 / 01 may 23 / greg goebel

* Cryptology now faces a potential upheaval from quantum technology. This chapter discusses quantum computing and its implications for codebreaking, goes on to quantum cryptography, then concludes with some personal comments and a list of sources.

MODERN CRYPTOLOGY


[4.1] QUANTUM COMPUTING & CODEBREAKING
[4.2] QUANTUM CRYPTOGRAPHY
[4.3] COMMENTS, SOURCES, & REVISION HISTORY

[4.1] QUANTUM COMPUTING & CODEBREAKING

* Current ciphers such as RSA or AES are extremely difficult to break by brute-force methods. To be sure, there is no such thing as an "unbreakable" cipher, since if a cipher can't be broken by analytical means, there are other links in the chain where it can be attacked. For example, Eve could write a software virus or even a modified version of PGP and slip it unnoticed onto Alice's computer. When Alice tries to encrypt a message, Eve's "stealth" software could obtain the key or the plaintext, and then sneakily send it on to Eve over the Internet later.

Such trickery does not appeal to most cryptanalysts, who would really like to be able to crack a strong cipher such as RSA by analytic means. That means factoring a very large number in some period of time hopefully much shorter than the age of the Universe. There is an approach known as "quantum computing" that might be able to do the trick, if anybody can figure out how to make it work.

Modern computers are based on simple logic operations that are easily understood, though their elaborations may become very complicated. Quantum computing, in contrast, is based on operations that those working in the field not only admit, but insist, cannot be understood at an intuitive level.

Quantum computing is based on quantum physics, the branch of modern physics that deals with the often bizarre laws of nature at the microscale. One of the best-known experiments in quantum physics, an experiment very relevant to quantum computing, is the "two-slit light interference paradox", or "two-slit paradox" for short. This paradox is based on the phenomenon of light interference, which has been understood for over two centuries, and for most of that time did not seem paradoxical at all.

In 1803, the English polymath Thomas Young published a paper on experiments he had conducted on light. In one such experiment, he shined a light onto a screen through two parallel slits, spaced a short distance apart. The strip of light cast through the two slits was not a broad swath of light, but alternated from dark to bright to dark to bright across the swath. This could be explained if light were assumed to be a type of wave phenomenon. The dark positions in the swath occurred where the pathlengths of light from the two slits were such that the light from one was shifted by a half-wavelength from the other, and so the light canceled out. The bright positions in the swath occurred where the pathlengths resulted in the light from the two slits being in the same phase, and so the light from the two slits added up. There had long been a controversy among physicists over whether light was a particle or wave phenomenon. Young's experiments seemed to conclusively prove that it was a wave phenomenon, and light was accepted as a wave for over a century.
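The bright and dark positions in Young's experiment can be predicted from the path-length difference between the two slits. A minimal sketch of the ideal two-slit intensity formula, using illustrative values for the slit spacing and wavelength (these particular numbers are not from Young's experiment):

```python
import math

def two_slit_intensity(theta_rad, slit_spacing_m, wavelength_m):
    """Relative intensity at angle theta for ideal two-slit interference.

    The path-length difference between the slits is d*sin(theta); the
    two waves add when it is a whole number of wavelengths and cancel
    when it is an odd number of half-wavelengths."""
    phase = math.pi * slit_spacing_m * math.sin(theta_rad) / wavelength_m
    return math.cos(phase) ** 2

d = 100e-6    # 100-micron slit spacing (illustrative)
lam = 500e-9  # 500-nanometer green light (illustrative)

print(two_slit_intensity(0.0, d, lam))   # central bright fringe
# First dark fringe, where the path difference is half a wavelength:
print(round(two_slit_intensity(math.asin(lam / (2 * d)), d, lam), 6))
```

The intensity swings smoothly between full brightness and zero as the angle changes, which is the alternating swath Young observed.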

However, in the 20th century, light was proven to have behaviors clearly characteristic of particles as well. Quantum physics was able to reconcile, uncomfortably, the conflicting ideas of light as a particle and light as a wave through a concept known as "wave-particle duality". It says, in essence, that an experiment on a subatomic particle intended to show wavelike properties will show wavelike properties, while an experiment intended to show particle properties will show particle properties.

Light is now known to travel as individual particles called "photons". The two-slit interference pattern observed by Thomas Young can be seen as the result of countless photons pouring through the two slits, building up the interference pattern one photon at a time. This is a bit puzzling, since if light were simply a particle, the photons would pile up in front of each slit, establishing maximum brightness at those locations and fading at locations farther away from the slits. There would be no interference pattern. However, wave-particle duality does allow the wave interference pattern to occur.

What is much more puzzling is that the interference pattern will build up over time even if only one photon is emitted at a time. This sounds perfectly crazy: if only one photon goes through at a time, what could it possibly be interfering with to build up the pattern? Although a photon absolutely has a wavelength, any one photon seems to be a point particle; there's nothing particularly "wavy" about it in itself.

One of the important features of the sub-microscopic world is that a measurement cannot be taken on a particle without seriously affecting it. This is known as the "uncertainty principle". According to traditional quantum physics, by a further implication of the uncertainty principle, nothing can be said about the properties of a particle, or its "state", until a measurement is made on it. This might not seem to be much of a condition, since intuitively it seems obvious that the particle is in some state before we measure it, even though the measurement will alter that state.

Intuition is wrong. Before a measurement is taken, nothing can be said about the properties of the particle; it is "indeterminate". It is not a question of the particle being in one of the particular states available to it before the particle is measured, and not knowing what that state is; the particle exists, in potential, in all the states at the same time. If no measurement is taken to see which slit the particle goes through, it could go through either slit, and goes through both of them. This principle is called "superposition of states", and the existence of the photon in these multiple states is known as "coherent superposition". Once the photon is measured, the superposition of states "collapses" into a measured event, a process known as "decoherence".

This traditional view of quantum physics -- known as the "Copenhagen interpretation" -- has its critics, and a number of alternative interpretations have been proposed. None of them sound much less outrageous and there's no good reason to discuss them here, but all the interpretations agree on the existence of the concepts of quantum indeterminacy and superposition of states. These notions are observables, not arbitrary theoretical contraptions, with the interpretations simply representing a struggle to explain the observed reality. When a student told famed American physicist Richard P. Feynman that he didn't believe in superposition of states, Feynman replied: "Well, go do the experiments until you do believe it."

* Just because superposition of states violates common-sense logic does not mean that it's useless. In 1985, following up on hints provided by Feynman a few years earlier, a British physicist named David Deutsch of the University of Oxford published a paper that suggested that superposition of states could be put to use in a "quantum computer".

A conventional computer encodes a bit, a 1 or 0, as the presence or absence of a voltage. A quantum computer, in contrast, uses a quantum-physical property of a particle to encode a 1 or 0, for example the spin axis of an electron (which is either "up" or "down" but nothing in between) or the polarization of a photon. In either case, calculations will be performed on a group of bits, say 32 bits, at one time. 32 bits can be arranged in a total of 4,294,967,296 different ways, allowing them to encode that number of different numbers. Suppose a conventional computer were to perform a calculation requiring a test of all possible values that can be represented by 32 bits. It would have to loop through every single value, one at a time.

In contrast, if a 32-bit value were represented by the spin states of 32 electrons or the polarization states of 32 photons, by the principle of superposition of states, all possible 32-bit values would be present at the same time! A quantum computer operating on these 32 particles could test all 4,294,967,296 values in a single calculation, without going in a loop.

Adding another bit doubles the number of loop iterations for a conventional computer. In a quantum computer, in contrast, adding another bit to the calculation would not slow it down: the calculation would still be done all at once, and would take no longer than before. The hardware would have to get bigger, but the computation time would remain the same.
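The scaling argument above can be sketched with an ordinary classical exhaustive search, where each extra bit doubles the number of candidate values that have to be looped through. A hypothetical example, searching a small keyspace for a known target value:

```python
def exhaustive_search(n_bits, is_match):
    """Classical brute force: test every n-bit value one at a time.

    Returns the matching value and the number of candidates tested.
    The loop count doubles with each added bit, whereas a quantum
    computer would in principle operate on all 2**n superposed
    values in one pass."""
    tested = 0
    for candidate in range(2 ** n_bits):
        tested += 1
        if is_match(candidate):
            return candidate, tested
    return None, tested

# Worst case: the target is the last value in a 16-bit keyspace.
value, tested = exhaustive_search(16, lambda v: v == 2 ** 16 - 1)
print(value, tested)   # 65535 65536
```

Running this with 17 bits instead of 16 doubles the worst-case loop count, and at 32 bits the loop runs 4,294,967,296 times; the quantum computer's promise is to sidestep that loop entirely.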

Since the use of the spin state of an electron or polarization state of a photon to represent a bit has vastly different properties from the use of a voltage level to represent a bit, in a quantum computer the bits are referred to as "quantum bits" or "qubits".

* This was a completely mind-boggling idea. It was also a purely abstract proposal, similar to Diffie's conceptual invention of public-key cryptography. Deutsch was in a worse situation, however, since not only did he not know how to implement a quantum computer, he didn't really know what could be specifically done with such a technology even if it existed. All he had was a marvelous "what if?" scenario. For that reason, quantum computing remained little more than an exotic theoretical toy for almost a decade. Researchers figured out examples of tasks that a quantum computer could execute, but these tasks were trivial and of little practical value.

That changed in 1994, when Peter Shor of the AT&T Bell Laboratories in New Jersey came up with a practical algorithm that could be implemented with a quantum computer: fast factoring of large numbers. Two years later, his colleague Lov Grover, working from Shor's research, came up with a second practical algorithm: a fast technique for searching for the location of a specific value in an unsorted list of values. Grover announced an even more efficient scheme in the summer of 2000.

Neither of these algorithms actually got a result in a single step using a quantum computer, but they were far more efficient than comparable calculations performed on a conventional computer. For example, while the average number of searches needed to find a specific entry in a list of N items is no better than N/2 with a conventional sequential search, with Grover's scheme it is on the order of SQRT(N).
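The gap between the classical N/2 average and the SQRT(N) quantum scaling grows quickly with list size. A quick back-of-the-envelope comparison (using plain square roots; the exact query count of Grover's algorithm, about (pi/4)*SQRT(N), is glossed over here):

```python
import math

# Average classical sequential searches (N/2) versus the SQRT(N)
# scaling of Grover's quantum search, for growing unsorted lists.
for exponent in (10, 20, 30):
    n = 2 ** exponent
    classical = n // 2
    quantum = round(math.sqrt(n))
    print(f"N = 2^{exponent}: classical ~{classical:,}, quantum ~{quantum:,}")
```

At a billion-entry list, the classical average is around half a billion probes while the quantum scaling is in the tens of thousands, which is why a key-search algorithm on this footing alarmed cryptographers.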

Cryptographers quickly realized that the first two practical algorithms to be devised for a quantum computer were not merely useful to cryptanalysis but of central importance to it. A fast factoring algorithm could crack RSA. A fast value-search algorithm could crack DES and other block ciphers. Interest in quantum computing increased significantly.

* A quantum computer is very high on the wishlist of cryptanalysts and the black organizations that hire them. However, nobody's figured out how to build a quantum computer yet. Controlling individual particles or small groups of particles is very difficult, and to make matters worse the qubits have to be isolated from the environment to achieve a superposition of states. This problem gets more difficult as more qubits are used at one time.

Researchers are painstakingly nailing down the technological pieces needed to build a quantum computer. They are tinkering with a range of schemes for quantum computing, based on semiconductor micromechanical systems implementing sets of single-ion traps; single photons fed through optical elements; single-electron transistors; oscillations of current across superconducting junction elements; or manipulation of magnetic flux states in superconducting loops.

An actual quantum computer will require thousands of qubits to be useful, including a large number of bits simply for error correction. There are already computers based on quantum principles on the market, but they are still far from quantum computers as they were envisioned by the pioneers. Progress continues.

BACK_TO_TOP

[4.2] QUANTUM CRYPTOGRAPHY

* As mentioned, work is underway on new public-key ciphers, such as lattice ciphers, that quantum computers won't so easily crack. Even if such an improved public-key cipher is found, symmetric-key ciphers are still likely to be vulnerable to quantum computers. However, quantum technologies may be able to produce a cipher that even a quantum computer can't crack.

In the late 1960s Stephen Wiesner, at that time a graduate student at Columbia University, came up with an unusual idea for "quantum money" that couldn't be counterfeited. Wiesner's quantum money was based on the physics of light photons. One of the ways in which light is wavelike is that it can be "polarized": it has a wave motion at a right angle to its direction of travel. Relative to the axis of travel, that wave motion could be oriented vertically, horizontally, or in any direction in between. A "Polaroid" lens can be used to screen for light of a given polarization. Such lenses are made of plastic with all the polymer molecules oriented in the same direction. If they are all oriented vertically, photons with vertical polarization can pass through, but photons with horizontal polarization cannot.

To explain Wiesner's quantum money, let's assume that we have filtered light so that we end up with photons polarized in only four directions: vertically, horizontally, +45 degrees, and -45 degrees. We'll designate these four polarizations as "V", "H", "P" (plus), and "M" (minus).

Suppose we try to send photons with these four polarizations through a vertically-polarized Polaroid lens. A V photon will pass through and an H photon will be blocked, as expected. However, the concept of quantum indeterminacy applies to the polarization of photons. What this means in practice is that a P photon has a 50:50 chance of passing through or being blocked, as does an M photon; if a P or M photon does pass through the vertically polarized lens, it will be vertically polarized and will be indistinguishable from a V photon. Similarly, if a P or M photon is passed through a horizontally polarized lens, it will have a 50:50 chance of being blocked; if it is passed through, it will be horizontally polarized and will be indistinguishable from an H photon.
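These measurement rules can be simulated directly. A minimal sketch, with the 50:50 collapse of a diagonal photon in a rectilinear measurement modeled as a coin flip (the photon labels and the two-basis model follow the text above; the function name is made up for illustration):

```python
import random

def measure_rectilinear(photon, rng=random):
    """Measure a photon's polarization in the rectilinear (V/H) basis.

    V and H photons give definite results; P and M photons collapse
    at random, and afterward are indistinguishable from true V or H
    photons."""
    if photon in ("V", "H"):
        return photon
    # Diagonal photon through a rectilinear filter: 50:50 outcome.
    return rng.choice(["V", "H"])

# Measuring 10,000 diagonally-polarized photons in the wrong basis
# gives each outcome about half the time.
counts = {"V": 0, "H": 0}
for _ in range(10_000):
    counts[measure_rectilinear("P")] += 1
print(counts)   # roughly 5,000 each
```

The crucial point, captured in the comment, is that once a P photon has passed a vertical filter, no later measurement can reveal that it was ever anything but a V photon.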

Wiesner's quantum money, as he envisioned it, included 20 "light traps", which were devices that could capture and store a photon. Each trap could be loaded with a single photon with V, H, P, or M polarization. The polarization sequence of the 20 photons would be unique for each bill. Each bill would also have a printed serial number, and a record could be kept of serial numbers and the corresponding polarization sequence loaded into each and every bill. To forge the bill, a counterfeiter would have to copy a legitimate serial number, which is trivial, and copy the polarization pattern for the bill with that serial number, which is not.

The problem arises because of the ambiguity in reading photon polarizations. If the counterfeiter uses a vertically polarized filter, it would pass V photons and block H photons. But what about the P and M photons? On the average, half the time they would be passed, with the counterfeiter interpreting them as V photons, and the other half of the time they would be blocked, and the counterfeiter interpreting them as H photons. The counterfeiter could use a Polaroid lens with an orientation of +45 degrees to discriminate between P and M photons, but then will be confounded by the V and H photons. Since the photons can only be released from the traps to be measured once, the counterfeiter cannot use a trial-and-error method to determine the polarization sequence.

The bank can reliably read the polarization sequence because the information describing the sequence has been recorded and so the proper filters can be used. To be sure, there is some ambiguity even for the bank, since M and P photons could be misinterpreted as V and H photons or the reverse, but the likelihood of properly testing all 20 photons by accident is very small. Of course, the bill has to be "reinitialized" with the recorded polarization sequence after it has been read, since the original is destroyed by reading it.

* Even Wiesner didn't think that quantum money was a practical idea; he just found it an interesting theoretical toy. Nobody else was interested, and all the papers he wrote on it were rejected by science journals.

One person who did find it interesting was his friend Charles Bennett, who kept tinkering with the idea over a period of several years. In the early 1980s, when Bennett was working for IBM, he got to talking with a computer scientist from the University of Montreal named Gilles Brassard, and the two realized that the quantum money concept had applications in cryptography. In particular, it offered the possibility of a cipher that could not be broken analytically.

Suppose Alice wants to send Bob an enciphered message in binary code. She could transmit each bit of the message using the polarization of photons: a V photon could represent a 1, while an H photon could represent a 0. Alternatively, she could use a P photon to send a 1, and an M photon to send a 0.

For purposes of this argument, we can call the V-H scheme the "rectilinear" scheme and designate it with a "+", and we can call the P-M scheme the "diagonal" scheme and designate it with an "x". Alice, being devious, decides to use the rectilinear scheme and the diagonal scheme interchangeably, at random. For example, if she wants to send the binary string "10111001110001", she could send it like this:

   1 0 1 1 1 0 0 1 1 1 0 0 0 1
   + + x + x x + + x + + x + x
   V H P V P M H V P V H M H P
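The little table above is just a mapping from (bit, scheme) to a photon polarization, and a sketch of it reproduces Alice's photon sequence exactly:

```python
# Map a bit and a polarization scheme to a photon, per the text:
# rectilinear "+": 1 -> V, 0 -> H; diagonal "x": 1 -> P, 0 -> M.
ENCODE = {("1", "+"): "V", ("0", "+"): "H",
          ("1", "x"): "P", ("0", "x"): "M"}

def encode(bits, schemes):
    """Encode a bit string as photon polarizations, one per scheme."""
    return [ENCODE[(b, s)] for b, s in zip(bits, schemes)]

bits    = "10111001110001"
schemes = "++x+xx++x++x+x"
print(" ".join(encode(bits, schemes)))   # V H P V P M H V P V H M H P
```
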

What makes this devious is that if Eve tries to intercept the message, just as with the quantum-money counterfeiter, she has no way of knowing what polarization filter to use to sense the polarization of a particular bit of the message, and she will get the wrong bit value about half the time. Eve simply can't read the message.

Of course, Bob can read the message reliably if he knows the pattern of polarization schemes that Alice is using. However, this leads straight back to the classic key-distribution problem, with Alice trying to think of a secure way to get the pattern of polarization schemes to Bob. Alice and Bob could use RSA to transfer the key, but then we would have a scheme that would be no more and no less secure than RSA. Bennett and Brassard chewed on this problem at length and finally came up with an answer in 1984. The result was quantum cryptography.

* A quantum cryptography session involves a set of preliminary initialization steps:

STEP 1: Alice sends Bob a purely random sequence of a given number of bits, using a purely random sequence of the two polarization schemes.

STEP 2: Bob has no idea what bits are being sent and no idea which of the two polarization schemes are being used for each bit. On his end, he tries the two polarization filters at random, sometimes getting the correct result, sometimes not.

STEP 3: Now Alice phones Bob, on an ordinary insecure line, and tells him what sequence of polarization schemes she used, but not what the bit values were. Bob then replies and tells Alice which of the bits he read using the correct filter, meaning he got the correct values, but does not tell her what those results were.

STEP 4: The actual values of the bits that were read correctly can then be used as a completely unbreakable one-time pad cipher key. Alice's message can then be sent, using one polarization scheme or the other but no longer switching between them at random, and will be completely secure.

STEP 5: Eve, the eavesdropper, is in the same position of using randomly-chosen filters to read Alice's sequence, but the likelihood of Eve using the same sequence of filters as Bob is vanishingly small. Eve cannot obtain the full cipher key by intercepting the transmission.

To illustrate, using an unrealistically short binary sequence:

   Alice's random bit sequence:   1 0 0 0 1 1 0 1 1 0 0 1 0 0 0 0 1 0 1 1 0
   Alice's random polarizations:  + + + + x x + + + x x + x + + x x + x + x
   Bob's random filters:          x + + x + x + + x x x + x x x + + + x + +
   Correctly read bits:           - 0 0 - - 1 0 1 - 0 0 1 0 - - - - 0 1 1 -

   Resulting key sequence:        001010010011

   Eve's random filters:          + x x + + x + x + + + x + x x x + + x + +
   Correctly read bits:           1 - - 0 - 1 0 - 1 - - - - - - 0 - 0 1 1 -

   Eve's reconstruction of key:   ??10?????011
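The sifting in this illustration is mechanical, and can be sketched in a few lines. Given the bases each party used, the key is just Alice's bits at the positions where her scheme and Bob's filter agree; Eve's view has a "?" wherever her filter disagreed with Alice's scheme (the function names here are made up for illustration):

```python
def sift_key(alice_bits, alice_bases, bob_bases):
    """Keep only the bits where Bob happened to pick Alice's scheme."""
    return "".join(bit for bit, a, b in zip(alice_bits, alice_bases, bob_bases)
                   if a == b)

def eavesdropper_view(alice_bits, alice_bases, bob_bases, eve_bases):
    """Eve's reconstruction: at the key positions, she knows the bit
    only if her filter matched Alice's scheme; otherwise her reading
    was a coin flip, shown here as '?'."""
    out = []
    for bit, a, b, e in zip(alice_bits, alice_bases, bob_bases, eve_bases):
        if a == b:                      # this position lands in the key
            out.append(bit if a == e else "?")
    return "".join(out)

# The sequences from the illustration above, with spaces removed.
alice_bits  = "100011011001000010110"
alice_bases = "++++xx+++xx+x++xx+x+x"
bob_bases   = "x++x+x++xxx+xxx+++x++"
eve_bases   = "+xx++x+x+++x+xxx++x++"

print(sift_key(alice_bits, alice_bases, bob_bases))                    # 001010010011
print(eavesdropper_view(alice_bits, alice_bases, bob_bases, eve_bases))  # ??10?????011
```
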

Eve is actually in a worse position than this scenario shows. Since only single photons are being sent between Alice and Bob, if Eve intercepts them she will affect their polarization, for example converting P photons to V photons, and this will corrupt Alice and Bob's encryption. If Bob tries to decrypt the message and gets gibberish, he knows someone's been listening in.

Since the measurement of photon polarizations is tricky, it is possible that Bob will make errors in measurement. Bob can perform a little error-checking by telling Alice some subset of the bit values he received, with Alice confirming whether they match. If they don't match, Alice and Bob construct another key. If they do, they simply discard the bits Bob told Alice, since Eve could have overheard that conversation on the insecure line, and use the remainder.

For example, they could build a key of 1,100 bits, use 100 of those bits as an error check, and then use the remaining 1,000 bits as a key. An alternative that would not throw away bits and would be as easy to do would be for Alice to simply tell Bob what the "parity" of a number of bits selected from the key would be, or in other words indicate if there were an even or odd number of "1" bits in that random selection. Less than 20 such parity comparisons would be enough to ensure that the key hadn't been tampered with by eavesdropping.
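The parity claim can be quantified: if eavesdropping has corrupted the key, each random-subset parity comparison has (in the usual idealization, assumed here) a 50:50 chance of exposing it, so k comparisons miss the intrusion with probability 1/2^k. A sketch:

```python
def parity(bits, positions):
    """Parity of the key bits at the given positions: 0 if there is an
    even number of '1' bits in the selection, 1 if odd."""
    return sum(int(bits[i]) for i in positions) % 2

# Probability that k independent parity checks ALL miss an
# eavesdropper, assuming each check has a 1/2 chance of catching a
# corrupted key.
for k in (1, 5, 10, 20):
    print(f"{k:2d} checks: miss probability = 1/2^{k} = {0.5 ** k:.7f}")

# Example: parity of a few bits selected from a 12-bit key.
key = "001010010011"
print(parity(key, [0, 2, 5, 7, 11]))
```

By 20 comparisons the miss probability is below one in a million, which is the basis for the "less than 20" figure in the text.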

Quantum cryptography promised to make the absolutely secure one-time pad cipher useful for common use. Cipher users could produce an absolutely random keystream for every individual message, and communicate it between each other with little fear of interception.
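The one-time pad step itself is the simple part: each message bit is XORed with the corresponding key bit, and the same operation deciphers. A sketch using the sifted key from the example above and an arbitrary 12-bit message:

```python
def xor_otp(bits, key):
    """One-time pad on bit strings: XOR each message bit with the
    corresponding key bit. The same function enciphers and deciphers."""
    assert len(bits) == len(key), "the pad must be as long as the message"
    return "".join(str(int(b) ^ int(k)) for b, k in zip(bits, key))

key        = "001010010011"   # the sifted key from the example above
plaintext  = "101101001110"   # arbitrary message bits
ciphertext = xor_otp(plaintext, key)
print(ciphertext)
print(xor_otp(ciphertext, key))   # recovers the plaintext
```

Since the key is random, used once, and as long as the message, the ciphertext is information-theoretically secure; quantum key distribution's contribution is getting that key to Bob safely.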

* The cryptography community was fascinated by the idea of quantum cryptography, but many doubted it could be made to work in practice. However, in 1988, Bennett and one of his students, John Smolin, managed to send a quantum-encrypted message between two personal computers, appropriately named "Alice" and "Bob", and read it properly. The distance between the two computers was only 30 centimeters (a foot), but other researchers have been steadily increasing the distance between transmitter and receiver.

A variation on the original quantum-cryptography scheme, based on what is known as "quantum entanglement (QE)", was proposed by Artur Ekert of Oxford University in the UK. QE is based on a quantum-physical quandary, the "Einstein-Podolsky-Rosen (EPR) paradox", which has been known since the 1930s. The EPR paradox imagines the generation of two photons from an event that creates them with, say, polarizations at right angles to each other, and sends them in opposite directions. In quantum physics terms, the two photons are said to be "entangled". Entangled photons with a right-angle relationship can be generated, for example, by shining a laser beam through certain types of optically-active crystals.

However, according to quantum indeterminacy, until the polarization of at least one of the photons is measured, the polarization of both photons remains indeterminate. Once the polarization of one photon is measured and "resolved", then the polarization of the other "entangled" photon is also implicitly resolved, even though it is far away by that time.

Now suppose Alice generates pairs of entangled photons, storing her half of the pairs in some kind of optical "trap" and sending the other half to Bob, who also stores his in an optical trap. When Alice wants to send an enciphered message to Bob, she calls him, and the two then perform measurements on the trapped photons with filters, much as with the quantum cryptography scheme outlined previously. The beauty of the scheme is that if Eve gets her hands on any of the entangled photons they do her no good, since she can't duplicate them without resolving them, and making them absolutely useless for the procedure. They can be delivered to Bob or to Eve, but not both. Alice and Bob can even use the same filter sequence -- all "+" for example, or all "x" -- and eliminate the guesswork, at least as long as Alice is certain she is talking with Bob.

Some starflight enthusiasts have jumped to the conclusion that QE permits instantaneous communications. In fact, Einstein and his colleagues came up with the EPR paradox just to show how absurd quantum physics is, since instantaneous communications are impossible in terms of relativistic physics. The joke was on them, of course, since the EPR paradox turned out to be provably true, but the joke cut both ways, since setting up the cipher key requires non-instantaneous communications between Alice and Bob. Quantum physics has a quirky tendency to cheat on classical physics, but in a way that can't really be detected or used, at least directly. The Creator does have a sense of humor.

By late 2008, quantum-cryptography technology had advanced to the level where a quantum cryptography system had been implemented as a demonstration in Vienna, Austria, with six sites linked over the city's fiber-optic communications network. Experimental satellites featuring quantum encryption technology are now being flown, using long-range free-space lasers, in hopes of ultimately creating a global quantum-encrypted communications system.

Or, in other words, what quantum physics gives, quantum physics takes away, and this applies at a higher level as well. Quantum computing may be able to crack many of the ciphers in use today, but quantum cryptography will result in ciphers that even quantum computers won't be able to crack. A question that will remain when quantum computing and cryptography become practical: Is the cryptographic game over? Or will it just move on to a new and even more devious level?

BACK_TO_TOP

[4.3] COMMENTS, SOURCES, & REVISION HISTORY

* This document started life as a comprehensive survey of cryptography, begun in 2001, which evolved over the next two decades. In the end, I broke the document down into one on classical cryptology, modern cryptology (this document), and data rights / data security. On breaking it apart, I gave this document a revcode of v1.0.0, though its history really goes back about 20 years.

* Sources include:

The cover illustration is by one Christoph Schloss, and is available for commercial use under the Creative Commons Share Alike license. Finally, I need to thank readers for their interest in my work, and welcome any useful feedback.

* Revision history:

   v1.0.0 / 01 jul 21 
   v1.1.0 / 01 may 23 / General update.
BACK_TO_TOP