Blog Post

The Enigmatic and the Pragmatic of Quantum Computing

A Deep Dive Into How It Works and How It Could Get Spooky

August 1, 2024



The conceptual framework for quantum computing was developed in the 1980s, and for a couple of decades it was rarely discussed outside the erudite circles of theoretical physicists. But since the 2010s, it has grown into something of a Kitsune.

Kitsune, you ask?

Kitsune
Like the Kitsune from Japanese folklore, quantum computing evokes a curious blend of mystery and practicality.

If you’re acquainted with Japanese culture, you likely already know. A Kitsune is a mythical figure from Japanese folklore which translates as “fox spirit”, inspired by the real-world foxes which were once a ubiquitous presence in agrarian life. According to legend, Kitsunes possess the magical ability to shapeshift into virtually any form, including human beings, to exploit people’s fears and desires. They can also manifest in dreams, weave elaborate illusions indiscernible from reality, and even manipulate time and space, often to lure their targets into a trap of some sort.

There is an undeniable parallel here when it comes to the way quantum computing has captivated popular imagination in recent years. My new novel, The Subtle Cause, is yet another example, although it explores a dimension I do not believe has been considered before. But we’ll address this later in the article.

For now, let’s focus on why quantum computing tends to trigger an unsettling sense of vertigo. The answer relates to the mind-bending principles and implications of quantum mechanics itself. When we think of quantum mechanics, our minds tend to swim with fantastical notions of parallel realities and branching universes, spooky action and coordination at a distance, particles that are also waves but also aren’t even objectively real, cats that are both dead and alive, and reality only existing when there is someone or something conscious to observe it—but good luck defining what constitutes a conscious observer in the first place.

Okay, that’s a whole field of rabbit holes in which we could easily become lost. Each of these ideas is no doubt intriguing, but they are also commonly misunderstood and misconstrued, and they deserve articles of their own. Suffice it to say, the spookiness of quantum mechanics presents fertile ground for wild speculation and sci-fi adventurism. But we should also acknowledge that there is a line of logic to support at least some of this conjecture. And if we must concede that quantum mechanics reveals some deep fissures in the presumed foundations of reality, then what havoc might a frighteningly powerful machine built on these principles unleash?

It is certainly fair to ask such questions. But there is another side to quantum computing that refuses to be ignored, just as there is another side to the Kitsunes which have preoccupied active minds since time immemorial. The practical side. Even while purportedly bewitching unsuspecting humans throughout the ages, foxes were also providing useful services. They hunted rodents to protect stores of grain and rice while also scavenging organic waste and helping to aerate and fertilize fields for agriculture.

Similarly, quantum computers promise to revolutionize several areas of critical importance to modernity, including cryptography, advanced modeling, simulations, optimization problems, drug discovery, unstructured database search, artificial intelligence and much more.

But even these applications will require a substantial amount of scientific and technological progress before they are viable, which evokes yet another question: How much of the hype surrounding quantum computing is plausible versus belonging to the realm of science fiction?

In this article, we’ll examine this question in-depth. But first, it’s important to understand what a quantum computer is and how it works.

What Is a Quantum Computer and How Does It Compare to a Classical Computer?

To put it simply, a quantum computer is a computing system built on the principles of quantum mechanics. Instead of bits, which can be set to 0 or 1, it is based on qubits, which can be set to 0 and 1. This duality enables exponential degrees of freedom and data representation which directly translates into computational power and data storage.

The quantum mechanical principles of superposition and entanglement are especially pertinent when it comes to this topic. Superposition refers to a quantum object's ability to exist in multiple states simultaneously, while entanglement enables quantum particles to remain correlated across distances, coordinating in ways that have no classical explanation (though without transmitting usable information faster than light, so relativity survives intact). Quantum computers exploit superposition and entanglement to achieve “Quantum Parallelism”, a form of parallel processing that is far beyond the reach of even the most powerful supercomputers. This capability allows quantum computers to explore a vast array of computational paths at the same time.

Qubits are controlled and manipulated by quantum gates, which are conceptually similar to the classical logic gates used in traditional computing. But there are some crucial differences. Classical gates are constructed from transistors that regulate electrical current, while quantum gates utilize methods such as lasers, microwave and radiofrequency pulses, magnetic and electric fields, and even acoustic waves. These techniques allow quantum gates to induce qubits into states of superposition and entanglement, while also executing the mathematical operations and algorithms needed to solve complex problems beyond the capabilities of classical computers. Like their classical counterparts, quantum gates are arranged into circuits, which are sequences of gates, algorithms and measurements applied to the qubits. But unlike most classical gates, quantum gates are reversible (unitary), a property that preserves superposition and entanglement throughout a computation and makes quantum parallelism possible.
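
To make the gate picture concrete, here is a minimal sketch in Python using nothing but linear algebra (no quantum hardware or vendor toolkit is assumed). A Hadamard gate places a single qubit into an equal superposition of 0 and 1, and a CNOT gate then entangles it with a second qubit:

    import numpy as np

    # Single-qubit basis state |0> as a state vector
    zero = np.array([1, 0], dtype=complex)

    # Hadamard gate: sends |0> to an equal superposition of |0> and |1>
    H = np.array([[1, 1],
                  [1, -1]], dtype=complex) / np.sqrt(2)

    # CNOT gate: flips the second (target) qubit when the first (control) qubit is 1
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]], dtype=complex)

    # Start with both qubits in |0>, apply the Hadamard to the first qubit only
    state = np.kron(H @ zero, zero)        # (|00> + |10>) / sqrt(2)

    # Apply CNOT: the result is the entangled Bell state (|00> + |11>) / sqrt(2)
    bell = CNOT @ state
    print(np.round(bell.real, 3))          # [0.707 0.    0.    0.707]
    print(np.abs(bell) ** 2)               # measurement probabilities: 50% |00>, 50% |11>

Reversibility shows up here as the fact that both matrices are unitary: applying the Hadamard or the CNOT a second time undoes the operation and returns the original state.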

The Quantum Revolution
We are nearing the maximum potential of integrated circuits. The next exponential leap in computing will require leveraging the principles of quantum mechanics.

Quantum computers also differ from classical computers in that they are not deterministic. They don't follow a single logical path to a clearly defined outcome. Instead, they operate on probabilistic principles, another fundamental aspect of quantum mechanics. This enables qubits to exist in a state of superposition where every possible solution is represented simultaneously. Quantum algorithms then amplify the probability of solutions likely to meet the specified conditions. When a measurement is made, the qubits collapse into a classical bit state, yielding one of the possible solutions—often the most likely to fit the established criteria, due to the amplification process. The result is then fed into a classical computer for verification. If the solution is incorrect, the process is repeated until the correct answer is found. This “quantum speedup” approach has the potential to exponentially reduce computation time compared to purely classical methods.

To summarize, a quantum computer can produce a promising candidate solution from an exponentially large set of possibilities in a single, extremely fast run, with a high probability of accuracy. While a classical computer can evaluate a single solution with perfect accuracy, it would require an impractical amount of time to work its way through an exponentially large dataset of possibilities.

To clarify this further, it's important to understand the operating principles behind classical computers and the inherent limitations they are now pushing up against. Put simply, classical computers can no longer achieve exponential improvements, only linear ones, and linear advancements are insufficient for the next generation of complex problems to be solved.

Classical computing systems, including supercomputers, are built on the more familiar principles of electrodynamics and thermodynamics. While this makes them deterministic and reliable, it also imposes physical constraints, such as the amount of heat, energy and material required per bit. This means there are complex algorithms that are simply too resource-intensive or time-prohibitive for classical computers to solve efficiently.

Since the mid-1960s, the number of transistors that can be fitted onto an integrated circuit has doubled approximately every two years, a phenomenon known as Moore’s Law, named after Gordon Moore, the co-founder of Intel who first identified this trend. But we are nearing the physical limits of Moore’s Law. This isn’t simply a technological hurdle that innovation could conceivably overcome. Our transistors are nearing the scale of individual atoms, and at this level, quantum effects, heat dissipation, and material limitations impose insurmountable barriers to further miniaturization. Moreover, because transistors are fundamentally designed as binary switches (0 or 1), we are inherently limited to one bit per transistor. This is significant because the number of bits directly corresponds to processing power, data storage, data transmission, bandwidth, etc. There is simply no cheating these limitations.

With miniaturization off the table, physical size and energy usage of the system must be scaled up to boost computational power. But we can only scale linearly. In other words, to vastly increase our computational power, we must increase materials and energy usage proportionally, as efficiency gains have nearly been maximized already.

There is no practical way to condense more bits into less space, nor to make them arbitrarily more energy efficient. This latter limit is explained by Landauer’s Principle, which dictates that a minimum amount of energy is required to irreversibly manipulate (for example, erase) one bit of information, an amount that scales with the temperature of the system. For every bit manipulated, a corresponding amount of heat is generated and must be dissipated, establishing an “energy floor”, the minimum level of energy required per bit. Just as transistor-shrinking has hit its physical limits, semiconductor technology is approaching this energy floor, meaning only marginal improvements at diminishing returns remain physically possible.
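
For a sense of the numbers involved, Landauer's bound works out to k_B · T · ln 2 joules per bit. Here is that arithmetic at room temperature as a quick Python sketch (a theoretical limit set by physics, independent of any particular chip technology):

    import math

    k_B = 1.380649e-23                # Boltzmann constant, joules per kelvin
    T = 300.0                         # room temperature, kelvin

    # Landauer's principle: minimum energy to irreversibly erase one bit
    e_per_bit = k_B * T * math.log(2)
    print(f"Energy floor per bit at {T:.0f} K: {e_per_bit:.2e} J")   # ~2.87e-21 J

    # Equivalently, one joule can pay for at most this many bit erasures
    print(f"Maximum bit erasures per joule: {1 / e_per_bit:.2e}")    # ~3.5e20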

But to conceptualize what we are up against, it helps to examine these challenges in the context of a real-world example.

The Cryptography Example

RSA 2048-bit is a current standard in digital encryption, widely used for web traffic and online transaction security, email encryption, digital signatures, VPNs, secure messaging apps, etc. It relies on the computational infeasibility of factoring an astronomically large semiprime number around 617 digits in length, expressed as the modulus N. A semiprime number is the product of exactly two prime numbers, denoted as P and Q. For an attacker to break RSA 2048-bit encryption, they would need to determine the values of P and Q, which are themselves astronomically large numbers.

When establishing a secure communication channel, a public key is generated and exchanged between the client and the server. The public key consists of the modulus N and an exponent e, where e is coprime to the totient function of N, which is equal to (P - 1)(Q - 1).

To encrypt a message, the sender uses the recipient's public key consisting of N and e. The recipient holds the corresponding private key, which includes the exponent d, where d is mathematically related to both N and e through a complex relationship involving the prime factors of N. Using d, the recipient can decrypt the message by reversing the operations performed by e during encryption. For an attacker to decrypt the message without the private key, they would need to find d, which is computationally unfeasible with only N and e from the public key. The only feasible way to derive d would be by factoring N into its prime components P and Q. Once an attacker knows N, e, P and Q, then calculating d is straightforward.
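
Here is a toy version of that arithmetic in Python, using deliberately tiny primes (real RSA 2048-bit primes each run to more than 300 decimal digits). It shows why knowing P and Q makes computing d trivial, while knowing only the public values N and e does not:

    from math import gcd

    # Toy primes; a real RSA 2048 modulus uses primes over 300 digits each
    P, Q = 61, 53
    N = P * Q                        # public modulus: 3233
    phi = (P - 1) * (Q - 1)          # totient: 3120 (computable only if you know P and Q)

    e = 17                           # public exponent, chosen coprime with the totient
    assert gcd(e, phi) == 1

    d = pow(e, -1, phi)              # private exponent: modular inverse of e mod phi
    print("private exponent d =", d)           # 2753

    message = 65
    ciphertext = pow(message, e, N)            # encrypt with the public key (N, e)
    recovered = pow(ciphertext, d, N)          # decrypt with the private key d
    print(ciphertext, "->", recovered)         # 2790 -> 65

The entire secret lies in the totient (P - 1)(Q - 1): anyone who can factor N can compute it, and with it, d.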

Easy-peasy, right? Except there’s a catch. The number of candidate prime factors for a 2048-bit number is astronomical. To put this into perspective, the number of potential pairs of prime factors P and Q far exceeds the total number of particles in the observable universe. A classical computer can only attack this problem through what amounts to brute force, testing candidate factorizations one after another (even the cleverest classical factoring algorithms remain hopelessly slow at this scale). This requires massive parallel processing. Classical computers achieve parallel processing through multicore processors, where each core can independently execute operations simultaneously. The most powerful supercomputer on earth as of this writing is the Frontier supercomputer at Oak Ridge National Laboratory in the U.S. With its 8.7 million cores, Frontier can perform over a quintillion calculations per second.

Sounds promising, right?

But here’s the reality check: Even with this astonishing computational capacity, it would take the Frontier supercomputer far longer than the age of the universe to brute-force a single RSA 2048-bit encryption. This demonstrates just how secure RSA encryption remains, despite advances in computational power.
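
A crude back-of-envelope sketch in Python bears this out. It generously assumes Frontier could test one candidate prime factor per calculation at its full exascale rate (real attacks use cleverer factoring algorithms than naive trial division, but they remain hopelessly slow at 2048 bits):

    import math

    bits = 2048
    tests_per_second = 1e18              # assumed: one candidate tested per calculation
    seconds_per_year = 3.15e7
    age_of_universe_years = 1.38e10

    # Trial division must test primes up to sqrt(N), which is about 2^1024. By the
    # prime number theorem there are roughly x / ln(x) primes below x. These numbers
    # overflow ordinary floats, so we work in base-10 logarithms.
    half_bits = bits / 2
    log10_sqrt_N = half_bits * math.log10(2)                        # ~308
    log10_primes = log10_sqrt_N - math.log10(half_bits * math.log(2))
    log10_years = log10_primes - math.log10(tests_per_second) - math.log10(seconds_per_year)

    print(f"Candidate primes to test: ~10^{log10_primes:.0f}")      # ~10^305
    print(f"Years at 1e18 tests/s:    ~10^{log10_years:.0f}")       # ~10^280
    print(f"Age-of-universe multiples: ~10^{log10_years - math.log10(age_of_universe_years):.0f}")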

As we have established, the only way to increase the computational power of a transistor-based computing system is to increase its size. The Frontier supercomputer is already a massive machine with a footprint of 7,300 square feet and a volume of roughly 51,000 cubic feet. But since we are near the limit of Moore’s Law, even if we doubled its size, we would only approximately double its computational power.

The Stellar Supercomputer
A supercomputer thousands of times larger than our sun would require far longer than the age of the universe to decrypt a single RSA 2048-bit encrypted message.

So how much would we need to scale a supercomputer to generate enough processing power to decrypt an RSA 2048-encrypted message? What if it were the size of an entire city? Or, for purely illustrative purposes, let’s imagine we built one the size of a continent. Well, if we’re going this far, we might as well scale our hypothetical supercomputer to the size of planet Earth, extending from the inner core to the outer atmosphere. But the fact is, this still wouldn’t come close. Even if our supercomputer were the size of Betelgeuse, a red supergiant star with a radius roughly 1,000 times that of our sun, it would still take longer to decrypt a single RSA 2048-bit message than the age of the universe, by many, many orders of magnitude, more than we could meaningfully quantify.

But once we reach the point where we can take full advantage of the principles of quantum mechanics, expanding computing power exponentially with minimal material and energy requirements, decryption becomes realistically within reach. And a quantum computer harnessing enough qubits to solve this problem might eventually occupy an amount of space comparable to a large, industrial refrigerator or a small room.

Welcome to the power of exponential data representation. Welcome to the power of qubits.

What Makes Qubits So Special?

As mentioned previously, a qubit, or quantum bit, is the basic unit of information in a quantum computer. It is based on a quantum object that can exist in more than one state at a time, which means each qubit can be set to both 0 and 1.

For example, a qubit could be an isolated electron that exists in a superposition of spin-up and spin-down states, representing a combination of both simultaneously. Qubits can also be based on superconducting circuits, trapped ions and photons. By having base-units of information capable of representing multiple values, processing power and data storage is dramatically increased with even a relatively small number of qubits. And then for every qubit added, the degrees of freedom or data representation increase exponentially. Even just three hundred qubits would give us more variables to work with than the number of particles in the universe. Now imagine the possibilities if we could scale up to a million, or even a trillion qubits. But more on that later.
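
That last claim is easy to check, since an n-qubit register spans 2 to the power of n basis states. A quick Python comparison against the commonly cited estimate of roughly 10^80 particles in the observable universe:

    # Number of distinct basis states an n-qubit register can hold in superposition
    def basis_states(n_qubits: int) -> int:
        return 2 ** n_qubits

    particles_in_observable_universe = 10 ** 80    # common order-of-magnitude estimate

    for n in (10, 50, 300):
        print(n, "qubits ->", f"{basis_states(n):.3e}", "basis states")

    print(basis_states(300) > particles_in_observable_universe)    # True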

For a quantum object to maintain this “magical” property of quantumness, it must remain in a state of superposition. We have established a simple definition of superposition, which refers to the ability of a quantum object to exist in multiple states simultaneously.

But this basic definition fails to do it justice. Let's delve deeper into the surrealness of superposition to fully appreciate its significance in quantum mechanics.

What Does Superposition Really Mean?

Quantum superposition is a highly complex and counterintuitive concept that is difficult to reconcile with our everyday experience and understanding. To help gain a better grasp, I have included a couple of excerpts from The Subtle Cause:

 

An analogy [of superposition] would be a small child with the potential of becoming a doctor, a teacher and an astronaut. In the familiar world, the child would be defined by their current properties. Perhaps they are forty inches tall and their hair is cut short and they are attending their first year of elementary school. This would be their defined state. Their future potentialities merely hypothetical. But in the quantum world, such a child would literally be a blurred mixture of an elementary school student, a doctor, a teacher and an astronaut, all at the same time and at varying heights with their hair at varying lengths. In other words, the child would be in a superposition of all these things.

 

Superposition isn’t an absolute condition within the totality of universes or all that exists. It’s not something that a defined system objectively has or doesn’t have. A system is only in a state of superposition in relation to some other system. So, let’s take our universe, or what we tend to think of as that objective reality of which we are ostensibly a part. I’m talking about that which began with what we call the big bang and expanded outward to give rise to space and time and all its contents and properties, from energy and matter, fundamental forces and particles, to stars and planets and entire galaxies. From the singularity that precipitated the big bang to the cosmic horizon which is still expanding outward … It’s really nothing other than a contiguous web of information being continually exchanged. A continuous, unbroken line of logic which has become self-reinforcing. So, let’s call this the Familiar-Universe System.

In order for there to be a system in superposition relative to the Familiar-Universe System, it must be partitioned from it in a way that an exchange of information cannot take place between the two. This is called coherence, and it can be accomplished through a variety of means, such as creating vacuum conditions, magnetic shielding or even ultra-low temperatures. So, in a sense, this partitioned system becomes its own universe where quantum rules predominate.

A Superposition of Pathways
Superposition enables a quantum computer to effectively explore every possible pathway simultaneously.

The maze example is another way to conceptualize superposition. Imagine an elaborate and complicated maze with a point of entry and a point of exit. We know there are a vast number of potential pathways that will successfully navigate us through the maze and many more that will lead to dead ends. Our goal is to find the shortest, most efficient pathway that solves the maze.

A classical computer would approach this problem through brute force, trying each possible pathway one at a time. Only after testing all the pathways could it determine which one is the shortest. But a quantum computer could represent a superposition of states, effectively exploring all possible pathways simultaneously.

But here’s the catch. We can’t simply peek inside to get the answer for which pathway is shortest. If we try, the qubits will collapse into a classical state of ordinary bits, leaving us with a single, randomly manifested pathway from the superposition which is virtually certain not to be the “right answer”. Even if it was, against all odds, there would be no way of verifying with certainty. This predicament illustrates the pesky and bizarre measurement problem and observer effect. The measurement problem refers to the enigma of how and why a quantum system’s many possible states, as described by a unified wavefunction, collapse to a single, definite outcome when observed or measured. The closely related observer effect suggests that the act of measurement or observation itself fundamentally alters the state of the quantum system.

So does this mean all the hype surrounding quantum computers is just a mirage? Are we chasing the pot of gold at the end of the rainbow? Well, we didn’t come all this way for nothing. Quantum computing might not be the magic wand that some of the hyperbole has led us to believe it is. But it’s far from a mirage. By leveraging a few mathematical techniques, quantum computers can be coaxed into helping solve otherwise unworkable problems within very reasonable timescales.

To understand how, let’s revisit our RSA 2048-bit cryptography example.

Decrypting RSA 2048-Bit Encryption with Shor’s Algorithm

Quantum computing doesn’t magically produce solutions to complex problems at the push of a button. The process is circuitous and indirect, though it can lead to workable results in a highly condensed timeframe compared to classical computers.

Shor’s Algorithm, which is a quantum algorithm designed to find factors of an integer, is a perfect example of this process.

The first step in Shor’s Algorithm, assuming we have enough logical qubits (an estimated 4,000 - 6,000), is to create a quantum register that holds, in superposition, every candidate value relevant to finding the prime factors P and Q. This part is accomplished in a matter of milliseconds. Even though there are on the order of 2^2048 possibilities, the quantum register can represent this vast amount of information efficiently.

The next step involves applying a mathematical operation known as Modular Exponentiation, which is an intermediary step for factoring very large numbers. It doesn’t directly find P and Q, but it can lead us to a value indirectly related to P and Q by finding the periodicity, or period r, for the modulus N.

Periodicity refers to a repeating pattern in the sequence of values generated by a mathematical function. To find this period, we choose a large random number a that is less than and coprime with N. We then raise a to successive powers, taking the remainder when divided by N.

Here is an example of this operation using small numbers for a (7) and N (15):

  ➣ 7^1 = 7; 7 ÷ 15 = quotient 0, remainder 7
  ➣ 7^2 = 49; 49 ÷ 15 = quotient 3, remainder 4
  ➣ 7^3 = 343; 343 ÷ 15 = quotient 22, remainder 13
  ➣ 7^4 = 2,401; 2,401 ÷ 15 = quotient 160, remainder 1
  ➣ 7^5 = 16,807; 16,807 ÷ 15 = quotient 1,120, remainder 7
  ➣ 7^6 = 117,649; 117,649 ÷ 15 = quotient 7,843, remainder 4
  ➣ 7^7 = 823,543; 823,543 ÷ 15 = quotient 54,902, remainder 13
  ➣ 7^8 = 5,764,801; 5,764,801 ÷ 15 = quotient 384,320, remainder 1

As you can see, the repeating sequence of remainders is 7, 4, 13, 1. The sequence repeats every 4 steps, which means the period (r) = 4. In the process of decrypting RSA 2048, these will be astronomically large numbers, including periodicity, if you can imagine that. This example simply illustrates the basic principle of modular exponentiation.
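
For these toy numbers, the role the period plays can be spelled out in a few lines of Python. The brute-force loop below is exactly the step a quantum computer would replace; the rest is the classical post-processing in Shor's Algorithm that turns the period into the prime factors:

    from math import gcd

    N, a = 15, 7

    # Classically find the period r of a^k mod N (this is the expensive step that
    # Shor's Algorithm performs in superposition; here we simply brute-force it)
    r = 1
    while pow(a, r, N) != 1:
        r += 1
    print("period r =", r)                 # 4

    # Classical post-processing: for even r with a^(r/2) not equal to -1 mod N,
    # the factors of N divide a^(r/2) - 1 and a^(r/2) + 1
    assert r % 2 == 0
    x = pow(a, r // 2, N)                  # 7^2 mod 15 = 4
    print("factors:", gcd(x - 1, N), gcd(x + 1, N))     # 3 and 5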

But we do not derive r directly, as we would be in the habit of doing with a classical computer. Due to the measurement problem in quantum mechanics, attempting to retrieve a value for periodicity would collapse the quantum system into a single state, losing the superposition of possibilities. Instead, the quantum register is designed in such a way that the period (r) becomes embedded within the phases and amplitudes of the quantum states. This makes sense if we think of the potential values as waves, which is essentially what they are when existing in a state of superposition. This relates to yet another counterintuitive feature of quantum mechanics, which is wave-particle duality.

The next step involves applying the Quantum Fourier Transform (QFT) to the quantum register. QFT is a mathematical function that maps the periodic structure within the quantum state and transforms it into a new superposition where the outcomes related to r are amplified. This makes periodicity (r) more explicit in the probability amplitudes of the superposed waves.

Let’s use another analogy to better understand how QFT works. Imagine you're at a concert and the band plays a repeating sequence of notes that form a melody. Initially, you're standing far from the stage and all the sounds from the different instruments blend together. You can hear the music, but the melody's repeating pattern is hard to distinguish clearly.

Now, suppose you move closer to the stage and wear a special pair of headphones that enhances the core melody, allowing you to focus on the repeating sequence of notes. Suddenly, the repeating pattern becomes very clear, and you can easily pick up on the structure of the melody.

In this analogy, the superposition of quantum states is like the blended sound of all the instruments playing together, and the period r—the repeating pattern—is like the melody. The Quantum Fourier Transform is the special pair of headphones that transforms the harmonized sound into an arrangement where the repeating melody is accentuated.

After the Quantum Fourier Transform (QFT) is applied, the quantum register remains in a superposition of states, but it is now structured in such a way that the period r is encoded in the probability amplitudes of these states.
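
A classical Fourier transform applied to the toy remainder sequence from earlier gives a flavor of what the QFT accomplishes: the repetition shows up as sharp peaks at multiples of (number of samples) ÷ r. This is only an intuition aid, since the real QFT acts on the amplitudes of a quantum register rather than on a list of sampled numbers:

    import numpy as np

    N, a, samples = 15, 7, 32

    # The remainder sequence 7^k mod 15 for k = 1, 2, 3, ... (period r = 4)
    sequence = np.array([pow(a, k, N) for k in range(1, samples + 1)], dtype=float)

    # Discrete Fourier transform; removing the mean discards the zero-frequency component
    spectrum = np.abs(np.fft.fft(sequence - sequence.mean()))

    # The dominant peaks sit at multiples of samples / r = 32 / 4 = 8
    peaks = sorted(np.argsort(spectrum)[-3:].tolist())
    print(peaks)          # [8, 16, 24]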

Now we are ready for measurement.

When the quantum state is measured, the superposition collapses to a single outcome. The result of this measurement is closely related to r, but it doesn’t provide r directly. Instead, it produces a value from which r can be deduced. This value will likely be close to an integer multiple of M/r, where M is the size of the quantum register (not to be confused with the prime factor Q). If it is sufficiently close, then it is not difficult for a classical computer, using a standard technique called continued fractions, to calculate r, even without supercomputing power. Once r is determined, it is not hard for a regular computer to find the prime factors P and Q, and from here, calculating the decryption key d becomes straightforward.

But let’s not uncork that bottle of champagne just yet. There is yet another wrinkle involved. The QFT does not guarantee we will get the “correct” value in a single measurement, although it drastically increases the odds. This relates to yet another fundamental aspect of quantum mechanics, which is its inherently probabilistic nature. But this is the reason the QFT works. By amplifying the amplitudes of those values that are within close range of the correct answer, there is a reasonable likelihood of obtaining a useful value when a measurement is made.

How reasonable?

With enough qubits for the task, it would typically require 10 to 100 attempts to find the correct value for r. An attempt begins with running Shor’s Algorithm and then measuring the quantum state, collapsing it to a single value. Each measurement draws one value from our refined superposition, with the amplified values far more likely to be the ones that manifest. Once a candidate value is obtained via measurement, it is fed into the conventional computer to determine if it works, a computationally trivial task. If we were lucky enough to draw a value that led to the correct period r, then our standard computer would easily determine P and Q, and then d. But if d fails to decrypt the message, then we know that the value derived from measurement was not correct and the process is simply repeated.

Each attempt might take between a few seconds to a few minutes. With 10 to 100 attempts being typical, decrypting an RSA 2048-bit message could take as little as a few seconds, or a few hours at most—certainly much faster than the several lifespans of the universe required by classical methods.

Game over, right? Time to completely overhaul online security. And I suppose we’ll be curing cancer and solving the mysteries of abiogenesis in short order as well.

But this is where we need to slow our roll because there is still a lot of progress to be made before 4,000 or more computational qubits can be harnessed. And this is to say nothing of the additional physical qubits required to support them. Now that we’ve explored how a quantum computer works in theory, let’s examine what it would take to actually assemble one that is capable of performing these complex operations.

Logical vs Physical Qubits, Decoherence and Quantum Error Correction

The larger a quantum system, the more susceptible it is to “noise”, which refers to any external or internal disturbance that affects its overall state, leading to errors in computations and other operations. These disturbances can cause quantum computers to lose “coherence”, a critical property for maintaining a quantum state in superposition. If the system succumbs to decoherence, losing its superposition, it will no longer retain the “quantumness” required to solve problems that are beyond the capability of classical supercomputers.

Noise can originate from environmental sources, such as interactions with electromagnetic fields, mechanical vibrations, or temperature fluctuations. It can also arise from operational imperfections, including inaccuracies in the control signals used to manipulate qubits and quantum gates. Additionally, measurement-related noise can introduce errors during the readout process, affecting the accuracy of the final results.

This sensitivity necessitates that quantum computers operate in environments with minimal external disturbances, which is typically achieved by maintaining extremely low temperatures and vacuum conditions to preserve quantum coherence.

The following is an excerpt from The Subtle Cause that gives us another way of thinking about decoherence:

 

Anything we do that pierces the partition facilitates an exchange of information, which is known as decoherence. In decoherence, the system in superposition assimilates into the system it was separated from, and in doing so, conforms to its expectations. In other words, it becomes enfolded into its contiguous web, or its immersive, self-reinforcing story. And in the familiar-universe system, the expectation is that every bit is either one or zero, not both. Every electron either spin-up or spin-down—not both.

To mitigate noise and decoherence, error correction methods must be employed. However, implementing these quantum error correction techniques requires additional qubits, significantly increasing the complexity and resource requirements of the quantum system.

Quantum Error Correction

During quantum operations, even slight noise can cause qubits to experience bit-flips or phase-flips, leading to errors that can render measurements inaccurate or even useless. To address this, fault-tolerance must be built into the system by maintaining thousands, or even hundreds of thousands, of physical qubits for each logical qubit.

A portion of these physical qubits make up the logical qubit itself, directly participating in computations. The rest serve as ancilla qubits, which perform auxiliary roles essential for error correction.

Ancilla qubits are entangled with the logical qubits in a way that allows them to gather information about potential errors during operations. When an error occurs, the ancilla qubits generate an "error syndrome," which identifies exactly which qubits have experienced an irregularity and the type of error. These error syndromes can be metaphorically viewed as quantum error reports. The surface code, which governs the error correction process, uses this information to prescribe the appropriate corrective operations to the affected logical qubits, employing a sophisticated consensus mechanism. This correction is applied in real-time, somewhat analogous to how blockchain technology ensures robustness and reliability through consensus among network nodes.

Once the operation is complete, the logical qubits are measured to obtain the final, probabilistically determined result. Additionally, the ancilla qubits are measured to produce a second error syndrome. This second syndrome is intended for human analysis, allowing operators to assess the level of errors that occurred. If the errors are within an acceptable threshold, operators can be confident in the reliability of the result. If not, they will know that further adjustments or fixes are needed to address the noise issue.

Let’s circle back to our cryptography example where it is estimated that 4,000 to 6,000 logical qubits are required to decrypt an RSA 2048-bit encrypted message. With error correction, we now understand that each of these logical qubits would demand a substantial overhead of physical qubits to support both computational (logical) and auxiliary (ancilla) functions. This overhead could range from thousands to potentially millions of physical qubits per logical qubit. For Shor’s Algorithm, this translates to a need for approximately four million to six billion physical qubits in total.
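
The arithmetic behind those totals is simple enough to spell out, using the ranges quoted above (both the logical-qubit estimate and the per-logical overhead vary widely across published analyses, so treat these as rough bounds):

    # Rough physical-qubit budget for running Shor's Algorithm against RSA 2048,
    # using the ranges quoted in this article (estimates in the literature vary widely)
    logical_qubits_low, logical_qubits_high = 4_000, 6_000
    overhead_low, overhead_high = 1_000, 1_000_000     # physical qubits per logical qubit

    low = logical_qubits_low * overhead_low
    high = logical_qubits_high * overhead_high
    print(f"{low:,} to {high:,} physical qubits")      # 4,000,000 to 6,000,000,000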

As of this writing, the quantum computer with the highest known qubit count contains 433 physical qubits, developed by IBM. Yes, you read that correctly—just 433 physical qubits, not logical qubits. Clearly, there is still significant progress to be made before we can harness the vast number of qubits required to make quantum computers a practical reality.

Beyond decoherence and noise, there are several other significant challenges that must be addressed. We need to enhance control precision to achieve high quantum gate fidelity. We also need to improve cooling and isolation methods to maintain quantum states, as well as develop more efficient algorithms for the many applications that quantum computing promises to make feasible. Moreover, scaling up the generation of qubits—and keeping them isolated—remains a daunting task.

Given these hurdles, it seems safe to say that an overhaul of cryptography isn’t imminent, nor is a seismic shift in our technological landscape. But could the age of quantum computing sneak up on us? Could a kind of Quantum Moore’s Law accelerate the timeline, potentially bringing it within a decade or two? Companies like IBM seem to believe this timeline is realistic.

Unstructured Database Search Example

Cryptography is one of the most straightforward applications of quantum computing, offering a clarifying entry point for understanding its operational principles. But this is only the tip of the iceberg.

With a grasp of the basics, let’s explore another example—one that brings us closer to the potential of quantum computing to perform incredibly advanced and detailed simulations, possibly revealing answers to some of the most perplexing mysteries of the universe and existence itself.

Imagine we have a massive medical database containing millions of patient records. Each record includes a wide range of information, such as patient history, conditions, treatments, outcomes, and more. The database is unstructured, meaning it lacks inherent organization or indexing that would make it easy to search for specific records based on particular conditions and treatments.

Our goal is to find the subset of patients who (1) have a very specific medical condition, such as a rare genetic disorder, and (2) have been administered a particular experimental treatment regimen.

Grover’s Algorithm

In this scenario, Grover’s Algorithm would be employed to extract the records that meet the specified criteria. The process begins by defining the database, where each individual record can be viewed as an entry. In our example, the size of the database is denoted by N. Let’s assume we have a total of ten million individual records, making N = 10,000,000.

The algorithm proceeds by placing all qubits into a superposition of all possible states, representing every entry in the database. Mathematically, the quantum state is a superposition of all N records, each with an equal probability amplitude.

Next, the algorithm performs an oracle function designed to "mark" the patient records that meet the specific criteria: (1) having the specified medical condition, and (2) having received the experimental treatment regimen. The oracle flips the phase of the amplitude corresponding to the correct entries, often by multiplying it by -1, without collapsing the quantum state.

Grover’s Algorithm then amplifies the probability of the correct solution by applying the Grover diffusion operator, also known as the Inversion About the Mean. This step increases the amplitudes of the correct solutions while decreasing those of the incorrect ones.

The oracle function and amplitude amplification are repeated several times, with each iteration increasing the likelihood that a qualifying record will be returned upon measurement. Grover’s Iteration is repeated approximately √N times.

In our example, this means √10,000,000 ≈ 3,162 iterations.

While this may seem like a large number, each iteration occurs in a matter of nanoseconds to microseconds. For 3,162 iterations, this would amount to somewhere between roughly 3 microseconds and a few milliseconds, making the process virtually instantaneous.

Finally, the quantum state is measured, causing it to collapse and produce a single result. Due to the amplitude amplification, there is a high probability that this result will meet our criteria. But this is not a certainty, due to the probabilistic nature of quantum mechanics. Either way, the result can then be easily verified using a classical computer.
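
Here is a toy state-vector sketch, in Python with numpy, of the oracle and diffusion steps just described, run on a miniature "database" of 1,024 entries rather than ten million. The optimal iteration count is close to (π/4)·√N, the same order of magnitude as the √N figure above:

    import numpy as np

    n_entries = 1024                  # toy database size (2^10, i.e. ten qubits)
    marked = 137                      # index of the single record matching our criteria

    # Start in an equal superposition over every entry
    amplitudes = np.full(n_entries, 1 / np.sqrt(n_entries))

    iterations = int(round((np.pi / 4) * np.sqrt(n_entries)))     # ~25 for 1,024 entries
    for _ in range(iterations):
        amplitudes[marked] *= -1                          # oracle: phase-flip the marked entry
        amplitudes = 2 * amplitudes.mean() - amplitudes   # diffusion: inversion about the mean

    print(iterations, "iterations")
    print(f"P(measuring the marked record) = {amplitudes[marked] ** 2:.3f}")   # ~0.999

After roughly 25 iterations, nearly all of the probability has been funneled into the single marked record, which is why a measurement is so likely to return a qualifying result.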

The algorithm will repeat this process as many times as necessary. But even after several runs, there is no direct way to determine with absolute certainty that all matching records have been found. However, when the rate of discovering new records drops significantly, it is likely that all, or nearly all, matching records have been identified and aggregated.

For instance, if we run the algorithm 120,000 times and obtain 12,000 unique records matching the criteria, with a database size of ten million, this process would take anywhere from 0.36 seconds to 6 minutes. If the rate of discovering new records diminishes significantly toward the end, with no new records found in the last several hundred runs, we can be statistically confident that we have retrieved all or nearly all records matching our criteria. To be even more certain, the algorithm can be run several thousand more times in just a few seconds. If no additional records are found, it becomes highly probable that all matching records have been extracted.

How does this compare to a supercomputer? A supercomputer would need to process each record sequentially. Unlike the exponential speedup seen in some quantum algorithms, such as in our cryptography example, Grover’s Algorithm offers a quadratic speedup. This means that the quantum computer's time improvement is proportional to the square root of the time a classical computer would need. For example, if a task using Grover’s Algorithm takes a quantum computer 6 minutes to complete, a supercomputer would take approximately 36 minutes. The quantum computer would also achieve significant energy and heat savings, which extend beyond the quadratic time improvement.

The Quantum Computing Game-Changer
Quantum computing promises to revolutionize search and rescue, drug discovery, DNA analysis, climate modeling and astronomy research.

At first glance, this might not seem like a drastic enough improvement to be transformative. However, consider a scenario where the unstructured database is much larger and the task is time-sensitive, such as in a search and rescue operation. If a supercomputer required 36 hours to analyze an enormous volume of unstructured satellite imagery, a quantum computer could potentially perform the operation in just 6 hours. This could easily be the difference between life and death.

Another example is molecular simulations where our unstructured database consists of all potential chemical compounds, each with unique properties, interactions, and potential side effects. The objective might be to identify specific compounds that could effectively target a particular disease by analyzing millions of molecules with various configurations.

Theoretically, this quadratic quantum speedup could reduce the time required to discover viable drug candidates for deadly, previously untreatable diseases from 36 years to 6 years, or from 100 years to 10 years. Think of the sheer number of lives that could be saved from such a quadratic quantum speedup.

There could be similar ramifications for genomic data analysis, enabling the identification of specific gene sequences associated with rare diseases. Quantum computing could also revolutionize global climate modeling, financial market analysis, and the ability to parse unstructured telescope image data to identify rare astronomical events or even extraterrestrial signals.

The possibilities are endless. The larger the dataset to be analyzed, the greater the potential time and efficiency gains, and the more quantum computers can make a real difference.

Infinity at Our Fingertips

Now that we’ve delved into how quantum computers work and what they are capable of accomplishing, you might find yourself thinking that while all of this is intriguing, and potentially game changing in a lot of areas, it is far from spooky.

But let’s explore the speculative side, where things could potentially get weird.

The limitations currently holding quantum computing back are primarily related to the difficulty of harnessing superposition effectively—running operations and deriving actionable insights from them without succumbing to decoherence or noise.

But what if we abandoned the conventional idea of running operations solely for measurement purposes? What if we endeavored to create a system of infinite possibilities within a perfectly sealed box or chamber, insulated from decoherence and noise? Imagine limitless simulations where every possible version of our universe unfolds simultaneously, right under our noses. What would be the metaphysical implications? What strange effects could emerge? And what if we discovered a way to bridge the seemingly impenetrable partition between two systems perfectly isolated from one another—namely, “our version of reality” and the “infinite potentialities” contained within the box or chamber?

This concept is explored in the book The Subtle Cause, where various storylines seem to intertwine around an accidental quantum computing breakthrough that taps into such an infinite system without collapsing it.

Before considering whether something this extraordinary is even theoretically possible, let’s first examine a more digestible aspect: Is it within the bounds of physical law to harness this level of degrees of freedom and data representation to perform these expansive and elaborate simulations?

Obviously, achieving this would require a scale far beyond the 433 qubits that IBM has currently reached—although even that number already provides far more variables than there are particles in the universe. However, if the goal were simply to isolate as many qubits as possible without worrying about error correction, it's conceivable that we could push this number substantially higher even with today’s technology. Of course, no one has attempted this (as far as we know) because there would be no practical use.

IBM and other companies ostensibly believe that tens of millions, or perhaps even billions, of physical qubits may be feasible in the not-so-distant future. But the need to perform measurements requires a significant overhead of physical qubits, meaning that tens of millions of qubits might still result in fewer than ten thousand logical qubits. While ten thousand logical qubits could enable very detailed and comprehensive simulations, it is still grossly insufficient to simulate a vast number of possible versions of our universe.

But what if we dedicated each of these tens of millions or billions of physical qubits to acting as a logical qubit of their own? Instead of ten thousand logical qubits, we could have billions of them. This would exponentially increase our variable and data representation, reaching an incomprehensible scale. Even with two-dimensional qubits, we could potentially simulate an enormous number of universes, each in granular detail. Now imagine how much we could enhance this capability by applying statistical mechanics to achieve greater information density. In this approach, not every particle would need to be represented individually on a 1:1 basis; instead, each qubit could represent a statistical configuration or ensemble of many particles or molecules, similar to how statistical mechanics predicts the behavior of entire clouds of molecules in thermodynamics.

This would result in another exponential gain on top of gains already exponential.

The temptation is to rely on adding more electrons to increase the number of qubits. Theoretically, if we could harness billions, what would stop us from eventually increasing this to trillions, quadrillions, or even quintillions? However, when we start discussing such astronomical numbers, we begin to encounter hard boundaries set by the laws of physics. Even with perfect isolation achieved through technologies like cryogenics, vacuum conditions, and magnetic shielding, other challenges emerge.

For example, there is the Coulomb force, the fundamental electrostatic repulsion between electrons. There is also the Pauli Exclusion Principle which limits how densely electrons can be condensed together for different reasons, relating to the mathematical rule that two electrons in the same locality cannot inhabit the same quantum state. Moreover, quantum fluctuations and the tendency for electrons to ionize and form plasma when densely packed add further complications. As we approach these physical limits, quantum computers could face constraints of their own where they would have to increase in size to achieve only linear gains in data storage and processing power.

Quantum objects other than electrons, such as photons, might be considered, but they come with their own set of challenges that can be even more difficult to overcome.

Given these constraints, let’s assume we eventually max out our logical qubit count in the trillions. This would be sufficient to simulate an incomprehensibly vast number of versions of our universe simultaneously, though certainly not every conceivable version.

The Many Degrees of Freedom of an Electron-Based Qubit
Using the position and momentum of an electron as a continuous variable could allow infinite data representation.

But there is a way to achieve another massive exponential gain. This involves getting more out of the qubits themselves. Instead of relying on the binary spin-up, spin-down variable, which is two-dimensional, position could be used, which is multi- or even omni-dimensional. By dividing position space into multiple distinct regions, each region could represent a different bit of information. For every level added beyond two, the data representation grows further, and the gain compounds across every unit in the system. By definition, a quantum unit of information with more than two levels is a “qudit” rather than a qubit.

Qudits are a relatively new and underexplored area of research, but researchers have already achieved qudits with at least ten levels using ions. There is good reason to expect that this number could be increased substantially. This is a more challenging process when it comes to electrons, but it is well within the realm of feasibility.
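
The gain from extra levels is straightforward to quantify: a d-level qudit carries log2(d) bits' worth of distinguishable labels, and n of them span d^n basis states instead of 2^n. A quick Python comparison, using the ten-level figure mentioned above purely for illustration:

    import math

    n_units = 300                     # number of quantum information units

    for levels in (2, 10):            # 2 = an ordinary qubit, 10 = a ten-level qudit
        log10_basis_states = n_units * math.log10(levels)
        print(f"{levels}-level units: ~{math.log2(levels):.2f} bits each; "
              f"{n_units} of them span ~10^{log10_basis_states:.0f} basis states")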

But it is also possible that we soar past even the many degrees of freedom gained by qudits. Continuous-variable quantum systems are also being actively researched and developed. By using the infinitely continuous variables of an electron’s position and momentum, in theory we could reach a level of truly infinite data representation.

Now we have a viable, theoretical basis allowing for infinite degrees of freedom, an idea explored in The Subtle Cause. Although the novel doesn’t delve into the specifics as deeply as we have here, it presents a scenario where an experimental research syndicate known as the Cohera Group develops a quantum computing system based on the concept of infinite data representation. They hire hundreds of programmers to experiment with writing programs that might forge a link with this unimaginably powerful but inaccessible system. Without rules or specific objectives, the programmers are encouraged to be creative and unconventional. But most of their efforts result in failure and frustration, as each attempt leads to the collapse of the system. One of these programmers, Joshua, the main character, is on the verge of quitting. He suspects the entire project is an elaborate psychological experiment masquerading as a pioneering computing enterprise. But on a whim, as he is embedding his Easter Egg, he inadvertently establishes a connection between our “familiar universe” system and the infinite system in superposition partitioned from it.

He accomplishes this, in part, by exploiting another mind-bending principle of quantum mechanics: quantum tunneling.

Quantum Tunneling

Quantum tunneling is a quantum mechanical phenomenon where particles pass through energy barriers they classically should not be able to overcome. Due to their wave-like nature, particles like electrons have a probability of "tunneling" through a barrier, even if they don't have enough energy to cross it by classical means.
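
For a rough sense of the probabilities involved, the standard rectangular-barrier approximation puts the tunneling probability near e^(-2κL), where κ = √(2m(V − E))/ħ, L is the barrier width, and V − E is how far the barrier exceeds the particle's energy. Here is that estimate in Python for an electron facing a barrier 1 eV above its energy (the specific numbers are purely illustrative):

    import math

    hbar = 1.054571817e-34       # reduced Planck constant, J*s
    m_e = 9.1093837015e-31       # electron mass, kg
    eV = 1.602176634e-19         # joules per electronvolt

    barrier_above_energy = 1.0 * eV          # V - E: barrier height above the electron's energy
    kappa = math.sqrt(2 * m_e * barrier_above_energy) / hbar    # decay constant, 1/m

    for width_nm in (0.5, 1.0, 2.0):
        L = width_nm * 1e-9
        T = math.exp(-2 * kappa * L)         # approximate tunneling probability
        print(f"barrier width {width_nm} nm -> tunneling probability ~{T:.1e}")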

To help clarify this concept, here is another excerpt from The Subtle Cause:

 

It’s the probabilistic nature of quantum particles that makes quantum tunneling possible. The Schrödinger equation at the heart of quantum mechanics does not produce a single, precise solution. Instead, it produces a distribution of probabilities. At the extremities of the spectrum, there is a small chance that the quantum object will suddenly find itself on the other side of a barrier it lacks the energy to surpass. So, once again, we are reminded that it is not the intuitive laws of motion that govern the universe at its most fundamental level, but rather an underlying mathematical process, which is inherently random.

But Joshua was careful not to “ask a question in expectation of an answer”, knowing that a measurement of any kind would lead to the collapse of the system. Instead, he crafted his script to “stop short of the gate”, so to speak, disguising it as part of the same wavefunction as the quantized “infinity system”. Rather than attempting to extract or command anything from the system, the script simply presents itself, and the universe from which it originated, as an inseparable part of the ensemble.

The idea behind this is that there would be a non-zero chance some part of the signal, within which the script is packaged, would find itself on the other side of this seemingly impassable barrier—inside the infinite-simulation system. In fact, over time, it would be statistically inevitable. When this occurs, it would register as a random fluctuation, which would not upset a system in superposition, as long as the system is not being explicitly linked to another by making demands of it.

Philosophical and Metaphysical Implications

Now we have officially entered speculative territory. But there is a line of logic to suggest such a scenario could be theoretically feasible at some point.

Here is the thought process behind it:

If an external particle or group of particles encoded with the programming script were to tunnel into such a quantum system without collapsing the wavefunction, it could potentially become entangled with the system, thereby becoming part of the overall superposition. This means the quantum state of these particles would now be intertwined with the quantum states of the qubits or qudits within the “infinity system”.

By joining the superposition, the quantum states of the encoded external particles would be added to the multitude of states already represented by the “infinity system”. In this way, the possibilities being explored by the quantum computer would now include scenarios influenced by these additional states, potentially altering the set of outcomes being simulated.

Universe in a Box
Quantum tunneling could be the way to subtly link "our version of the universe" with the infinite potentiality of universes being simulated within a perfectly partitioned system.

Even without measurement, the mere presence of the new quantum states in the superposition could change the configuration of the overall wavefunction. This could be thought of as an influence that doesn’t collapse the system but instead alters the "landscape" of possible universes being explored. Quantum mechanics allows for the possibility that even small, seemingly insignificant interactions can have far-reaching consequences in a highly interconnected quantum system. In other words, it could exert a quantum version of the butterfly effect. In this context, the introduction of new quantum states (via the encoded external particles) could subtly shift the probabilities and the range of possible outcomes in the simulations.

From a philosophical perspective, it could be speculated that introducing new quantum states to a system in superposition is akin to introducing new "branches" or "pathways" in the multiverse. These new pathways could expand the range of possibilities the system can explore, subtly influencing the future evolution of the quantum states without direct measurement.

Now let’s take the speculation a step further.

If this hypothetical script presented itself as part of the superposition, becoming mathematically inseparable from the Infinity System’s wavefunction while still being connected to “our version of reality,” could there be a backflow effect from the “black box” to “our reality”? One could argue that there would be a wavefunction overlap, as the script would share a reality (i.e. a contiguous exchange of information) with both the black box and our familiar universe. Due to entanglement, there could, in principle, be non-local effects where the black box influences entangled particles outside of it. This in turn could exert a subtle effect in “our version of reality” to which the system is indirectly connected, causing a leakage of the internal state to the external state.

This is another concept explored in The Subtle Cause.

This idea has implications not only for “our version” of the universe but for all versions. It aligns with the Many-Worlds Interpretation of quantum mechanics, introduced by physicist Hugh Everett in the 1950s. The Many-Worlds Interpretation deserves its own lengthy discussion, but according to at least some renowned theoretical physicists, it offers the best explanatory power for some of the otherwise inexplicable effects observed in quantum mechanics. Perhaps its most important feature is that it eliminates the need for wavefunction collapse. Instead of the wavefunction collapsing to a single outcome, all possible outcomes of a quantum event are theorized to occur, each in its own separate, branching universe. This interpretation is consistent with the mathematics of quantum mechanics and doesn't require additional rules or mechanisms to explain why a particular outcome is observed. It also avoids the whole quagmire of needing to fundamentally define a conscious observer. By treating the wavefunction as a real, physical entity, the Many-Worlds Interpretation provides a clear, if counterintuitive, explanation for quantum phenomena.

For the purpose of our speculation, if the tunneling particles of the script become entangled with the Infinity System, they might act as a bridge between various branches, allowing for subtle influences to propagate across different realities.

The idea that a quantum system could influence our reality without direct measurement challenges classical notions of causality. It suggests a highly interconnected universe where quantum states can subtly influence each other across different scales and systems. In other words, this setup we have described could reveal a deeper truth about the nature of reality. If particles from our reality are continuously connected to a vast quantum system through entanglement, this could imply a form of quantum interconnectedness that extends beyond any given isolated system. The implications of this would be profound, suggesting that what happens in an isolated quantum system could have subtle but real effects on our perception of the everyday world.

This raises another, perhaps unsettling, question: Is there a perspective in which “our version of reality”, or the “familiar-universe system”, exists in a state of superposition?

When we consider the totality of all that could exist, we are compelled to acknowledge that ultimately everything belongs to the same superposition. This implies that from a maximally broadened perspective, everything is working in enigmatic concert with everything else.

Perhaps this is the Kitsune of our time—to reconcile the mystical with our notions of causal reasoning, and in doing so, discovering the truth of our reality and the expanded realm of all that is possible.

About the Author

 

Casey Fisher has been a successful American entrepreneur for more than twenty years. He is now focusing his efforts on writing, having completed two epic novels in the last few years. Casey is also a husband, a father of five and a devoted pet daddy.