Quantum computing is an ongoing revolution and promises breakthroughs in many fields, like the ability to find the geometry of complex molecules, which is expected to boost fields like biology and pharmacy, and the factorization of large numbers into primes, which threatens the security of our confidential information. Such an important field relies on the availability of quantum computers, which I’m going to introduce in the simplest way possible. A more complete presentation of the subject is provided here.

First, let’s clarify the name. A modern computer is in essence a calculator operating with the binary system, and we heavily rely on it because it is far more precise and faster than we are. “Binary system” means that a computer is built with components capable of recognizing and manipulating symbols in a set containing only two elements, which are called “false” or “zero” (0), and “true” or “one” (1). The reason for choosing the binary system is that it ensures maximum reliability when implemented in electronics: the voltages representing the two states are as far apart as possible, which minimizes the probability of misreading a state. As a side effect, arithmetic and Boolean logic can be implemented with the same circuits. With computers, everything is encoded in sequences of zeros and ones, stored as sequences of information units called “bits”, each assuming the value 0 or 1 during the computation. Normally we group bits into bytes (1 B = 8 consecutive bits) or multiples of them, like the kilobyte (1 KB = 1024 B = 8192 consecutive bits). A computer is a state machine that, at each cycle, holds a well-defined binary value in each bit used by the algorithm.
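To make this concrete, here is a minimal Python sketch (illustrative only) showing how ordinary data reduces to sequences of bits and bytes:

```python
# Everything in a classical computer is ultimately a sequence of bits.
n = 42
print(f"{n} in binary: {n:08b}")        # one byte: 00101010

text = "Hi"
bits = "".join(f"{b:08b}" for b in text.encode("ascii"))
print(f"'Hi' as bits: {bits}")          # two bytes -> 16 bits

kilobyte_bits = 1024 * 8
print(f"1 KB = {kilobyte_bits} bits")   # 8192, as stated above
```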

On the other hand, quantum computers are based on qubits, which not only can represent the two logical states mentioned above, but can also exist in the so-called quantum superposition of such states. This is a unique phenomenon, completely absent from our daily life, characterizing quantum systems, i.e. the microscopic entities obeying Quantum Physics. If we consider two qubits prepared in a superposition of all four possible combinations of 0 and 1, a measurement of their state will end up in a single configuration among 00, 01, 10, and 11. By repeating the measurement many times on identically prepared two-qubit systems, the relative frequency of each possible outcome tells us its probability. An important step when designing quantum algorithms is indeed to ensure that there is a clear preference for the “right” or “optimal” result, such that one does not need to perform many repeated measurements to find the desired solution.
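As an illustration, the following toy Python simulation (with arbitrary, made-up amplitudes) mimics repeated measurements of a two-qubit superposition and shows how the relative frequencies reproduce the underlying probabilities:

```python
import numpy as np

# Toy simulation: a two-qubit state is a superposition of |00>, |01>, |10>, |11>.
# The amplitudes below are arbitrary, chosen only for illustration; a real
# quantum algorithm would prepare them so that the "right" answer dominates.
amplitudes = np.array([0.1, 0.2, 0.4, 0.887], dtype=complex)
amplitudes /= np.linalg.norm(amplitudes)          # normalize the state
probabilities = np.abs(amplitudes) ** 2           # Born rule

rng = np.random.default_rng(0)
outcomes = rng.choice(["00", "01", "10", "11"], size=10_000, p=probabilities)

# The relative frequencies estimate the probability of each configuration.
for state in ["00", "01", "10", "11"]:
    print(state, np.mean(outcomes == state))
```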

Nature does this all the time. For example, the three-dimensional (3D) geometry of a protein forms quickly and spontaneously while the amino acids are sequentially added by ribosomes using messenger RNA (mRNA) as a template. In practice, the 3D geometry of a protein minimizes the Gibbs free energy, so a classical algorithm to compute protein folding could be the following. Place the first amino acid at the origin of the reference system and compute its electrostatic field in the surrounding space. Next, add the second amino acid and place it such that the chemical bond with the previous one minimizes the Gibbs free energy of the resulting system. Then compute the electrostatic field of the system and add a third amino acid. At this stage, you have to consider all possible chemical bonds of the third unit with each of the previous ones, also allowing for a reconfiguration of the geometry, with the goal of finding the configuration with minimal Gibbs free energy. This step is obviously more difficult than the previous one, because there are more possible choices for the overall configuration. Next, you add the fourth amino acid and repeat the process, this time with many more possible choices. You easily understand that the computational complexity of this algorithm makes it unable to address long sequences of amino acids.
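The following rough Python sketch gives an idea of how quickly this brute-force search explodes. It rests on a purely hypothetical assumption of ten candidate placements per added amino acid and an optimistic evaluation speed; the exact numbers are not the point, the exponential growth is.

```python
# Why the sequential classical approach above explodes: if each newly added
# amino acid could attach to the growing chain in ~10 geometrically distinct
# ways (an illustrative assumption, not a measured value), the search space
# for the minimum Gibbs free energy grows as 10^(n-1).
choices_per_step = 10
evals_per_second = 1e12        # a very optimistic classical machine

for n_amino_acids in (10, 50, 100, 300):
    configurations = choices_per_step ** (n_amino_acids - 1)
    seconds = configurations / evals_per_second
    print(f"{n_amino_acids:4d} residues -> {configurations:.1e} configurations "
          f"(~{seconds / 3.15e7:.1e} years at {evals_per_second:.0e} evals/s)")
```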

If the problem is so complex, how can proteins be synthesized by our cells? The answer is that Nature does not apply a classical sequential algorithm like the one described above but exploits Quantum Physics to “probe” all possible solutions at the same time. If we have a quantum computer that can mimic this process with a suitable algorithm, then we can compute the 3D geometry of proteins in minutes, while the most powerful classical computer would need longer than the age of the universe to complete the task. Biochemistry, which is the set of chemical reactions related to living beings, heavily depends on the 3D structure of proteins. Therefore, protein structure is a very important field of study, for example in pharmacy, and you can easily imagine the high expectations placed on quantum computing in this area.

Of course, building quantum computers is not easy. The reason is not that we don’t understand them, but rather that they have a fundamental weak point: quantum superposition allows a small set of qubits to “sample” a huge parameter space, but it is also very fragile. Any interaction with the environment, whether desired (a measurement) or undesired (e.g. thermal fluctuations), will break the superposition and make the qubit system “collapse” onto one of the possible states. The main issue is that, once we prepare the initial configuration of qubits in a suitable superposition of states, random fluctuations in their environment (which cannot be completely isolated from the outside world) will sooner or later destroy this superposition. The problem is known as decoherence, and every quantum computer is characterized by a typical decoherence time, after which the carefully prepared state is lost. Hence, a quantum computer must be able to configure the desired initial superposition and then run the quantum algorithm in a time sufficiently shorter than the decoherence time to ensure successful completion of the task. This is not trivial at all.
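A toy model helps to quantify this. Assuming, purely for illustration, an exponential loss of coherence with a characteristic time T and the made-up hardware numbers below, the chance that the prepared state survives a run of the algorithm shrinks rapidly with the number of gates:

```python
import numpy as np

# Toy model: the prepared superposition survives the environment for a
# characteristic decoherence time T.  If the algorithm needs a time t to run,
# a crude estimate of the probability that the state is still intact at the
# end is exp(-t / T).  The numbers are illustrative, not from real hardware.
T_decoherence = 100e-6                      # 100 microseconds (assumed)
gate_time = 50e-9                           # 50 ns per quantum gate (assumed)

for n_gates in (10, 100, 1000, 10000):
    t_algorithm = n_gates * gate_time
    p_intact = np.exp(-t_algorithm / T_decoherence)
    print(f"{n_gates:6d} gates -> runtime {t_algorithm*1e6:7.2f} us, "
          f"P(state survives) ~ {p_intact:.3f}")
```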

The best way to increase the decoherence time is to improve the decoupling of the qubits from the external world, and the big enemy is heat. The first commercial quantum computer was the IBM Quantum System One, and IBM is engaged in a challenging 15-year quantum roadmap including a full technology stack and offering 10 minutes per month of free quantum computing time on a 100-qubit system. IBM is investing heavily in research on cryogenic electronics for quantum computing because of the very low temperature required by its qubits to minimize the effects of thermal fluctuations.

In addition to reducing unwanted interactions with the external world, quantum computers adopt error correction to detect and correct single-qubit errors. This requires entangling multiple qubits to store and protect a single logical qubit’s state. In practice, for each “useful” qubit one must actually implement a system of multiple entangled qubits, which means that the actual number of physical qubits is significantly larger than the declared number of usable qubits. The lower limit for correcting an arbitrary single-qubit error is 5 physical qubits, but common codes employ 7 or 9 qubits to encode a single usable qubit. A performance comparison of different approaches is provided by a recent Nature article written by Google Quantum AI and collaborators.

Quantum entanglement is a property that sets certain quantum systems far apart from macroscopic systems. When two quantum systems are entangled, a measurement performed on one of them not only “collapses” the previous superposition of states into a specific quantum state, but also immediately tells us the state of the other. For example, the gamma rays emitted back-to-back in a positron-electron annihilation process, widely used in Positron Emission Tomography (PET), are very energetic photons whose polarizations are strongly correlated, because they are emitted as an entangled state. This may be used to improve the signal-to-noise ratio in PET reconstruction, as explained by Watts et al. in their Nature paper. What matters here is that a measurement of the polarization of one photon also tells us the most likely polarization of the other, without the need for a second measurement.

This property of entangled states is widely used in quantum computing. First, it is used to encode quantum information in a redundant way on a set of entangled qubits. Second, if one qubit experiences an error like a bit flip, the entanglement ensures that the system as a whole can detect the problem without measuring the qubit directly. Finally, in such a case the system uses entanglement to correct the error, often via majority voting. Therefore, entanglement is a critical ingredient of quantum error correction.

A very simple scheme for error detection and correction, commonly adopted in critical systems like the space missions on which I worked for 20 years (including AMS-02 and Solar Orbiter), is to add to each byte two more bits, a parity bit and a CRC bit, whose values are known for the correct representation of the byte. This increases space occupancy by 25% (from 8 bits to 10 bits), but it allows the underlying hardware to perform real-time error detection and correction, which is a big advantage in noisy environments. In addition, mission-critical components add one more layer of redundancy by employing majority voting, which means that three identical copies of the same code and data exist and decisions are taken based on the values that appear at least twice among the three copies. In all cases, the main goal of error correction is to reduce the probability of a mistake to negligible levels with respect to the expected amount of computation.
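As a simplified sketch (the real hardware uses dedicated circuits and a multi-bit CRC, not this toy code), the two classical protections described above can be illustrated in a few lines of Python:

```python
def parity_bit(byte: int) -> int:
    """Even parity: 1 if the byte has an odd number of 1s, else 0."""
    return bin(byte).count("1") % 2

def majority_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three redundant copies of the same byte."""
    return (a & b) | (a & c) | (b & c)

stored = 0b10110010
copies = [stored, stored ^ 0b00000100, stored]   # one copy suffers a bit flip

# Parity detects that the corrupted copy no longer matches the stored parity.
print("parity mismatch on copy 1:", parity_bit(copies[1]) != parity_bit(stored))

# Majority voting recovers the original value despite the single-copy error.
print("byte recovered:", majority_vote(*copies) == stored)
```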

Assuming that random bit flips are due to uncorrelated phenomena, the probability of two bit flips is the product of the single bit-flip probabilities. As a probability is always a number between zero (the impossible event) and one (the certain event), the product of two probabilities cannot be larger than either individual probability. If the latter are low, a double failure has a negligible probability and can be neglected. Think, for example, of a single bit-flip probability of 10⁻⁶ (one part per million), which gives a double bit-flip probability of 10⁻¹² (one part per trillion, i.e. per thousand billion). This reasoning applies both to classical and quantum bits, of course.
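A quick back-of-the-envelope calculation, with assumed example numbers, makes the point:

```python
# Uncorrelated failures: the probability of two simultaneous bit flips is the
# product of the single-flip probabilities.
p_single = 1e-6                    # one flip per million operations (example)
p_double = p_single * p_single     # 1e-12, one per trillion
print(p_single, p_double)

# Expected number of unrecoverable (double-flip) events over a large workload:
operations = 1e9                   # a billion protected operations (assumed)
print("expected double flips:", operations * p_double)   # ~0.001
```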

Quantum error correction also aims at reducing the probability of unrecoverable errors. Similarly to ordinary error correction, redundancy is employed to “distribute” the information across multiple qubits. However, quantum error correction must be achieved without performing any measurement, which would otherwise destroy the carefully prepared quantum superposition. This is done with smart combinations of entangled qubits, such that a spontaneous bit flip in one of them is “healed” automatically by the entanglement itself, without external intervention on the system. Of course, this is only possible if the rate of errors is low enough that there is at most one “wrong” qubit in the entangled state (a condition at the heart of the threshold theorem). Compared to classical error correction, which requires a relatively small number of extra bits, quantum error correction requires significantly more “space”. The most famous error correction approach is probably the one proposed in 1995 by Peter Shor, which employs 9 physical qubits to obtain a single logical qubit (an entangled 9-qubit state), capable of automatically correcting any kind of single-qubit error. The overhead is essentially one order of magnitude greater than the number of usable qubits! This is one of the reasons why it’s so difficult to build quantum computers with many (usable) qubits.
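To give a flavour of how this works without reproducing the full 9-qubit Shor code, here is a simplified numpy sketch of the 3-qubit bit-flip code, one of the building blocks of Shor’s construction. The parity checks reveal which qubit flipped without ever measuring the logical amplitudes, so the superposition survives; the amplitudes below are arbitrary.

```python
import numpy as np

# Simplified sketch: a logical qubit a|0> + b|1> is stored as the entangled
# state a|000> + b|111>.  The parities Z1Z2 and Z2Z3 can be checked without
# learning a or b, so a bit flip is located and undone without collapsing
# the superposition.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

a, b = 0.6, 0.8                                   # arbitrary logical amplitudes
logical = np.zeros(8, dtype=complex)
logical[0b000] = a                                # a|000>
logical[0b111] = b                                # b|111>

corrupted = kron3(I, X, I) @ logical              # bit flip on qubit 2

# Syndrome extraction: the two parity checks are deterministic here.
s12 = np.real(corrupted.conj() @ kron3(Z, Z, I) @ corrupted)
s23 = np.real(corrupted.conj() @ kron3(I, Z, Z) @ corrupted)
syndrome = (int(round(s12)), int(round(s23)))

# Map the syndrome to the faulty qubit and apply the corrective X gate.
correction = {(+1, +1): kron3(I, I, I),           # no error
              (-1, +1): kron3(X, I, I),           # qubit 1 flipped
              (-1, -1): kron3(I, X, I),           # qubit 2 flipped
              (+1, -1): kron3(I, I, X)}[syndrome] # qubit 3 flipped
recovered = correction @ corrupted

print("syndrome:", syndrome)                      # (-1, -1) -> qubit 2 flipped
print("state restored:", np.allclose(recovered, logical))
```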

All promising approaches to quantum computing nowadays rely on sets of entangled qubits providing a reliable subset of logical qubits, thereby bringing part of the algorithm into the architecture of the quantum processor itself. Among the topological approaches, Microsoft has recently announced the Majorana 1 processor, which could boost scale-up in the coming years and win over competing approaches. This chip exploits Majorana fermions that can be produced and manipulated on a new class of materials, which Microsoft calls “topoconductors” and which exhibit a new state of matter. This was actually found when measuring the properties of semiconductor-superconductor devices, which provide high-mobility two-dimensional electron gases, as documented in this (very technical) 2023 review.

A fermion is a particle that follows Fermi-Dirac statistics, named after Enrico Fermi and Paul Dirac. Particles like electrons, protons and neutrons are fermions. Moreover, the fundamental particles that, in the Standard Model of particle physics, are the building blocks of all forms of matter are all fermions. They are all characterized by one important property: identical fermions cannot occupy the same quantum state. There is another class of particles, called bosons, which includes photons and the other particles mediating fundamental interactions, with a diametrically opposite behaviour: any number of identical bosons can occupy the same quantum state. They obey Bose-Einstein statistics, named after Satyendra Nath Bose and Albert Einstein.

Fermions and bosons are characterized by half-integer and integer spin, respectively. We call spin the intrinsic angular momentum of a particle, distinct from its motion in space (which may be associated with its own orbital angular momentum). Elementary particles, like those mentioned above, possess a tiny intrinsic angular momentum which, measured in units of the reduced Planck constant, assumes integer values like 0, ±1, ±2, etc. for bosons, and half-integer values like ±1/2, ±3/2, etc. for fermions. Systems composed of elementary particles may also have spin, which is the (vector) sum of the individual spins.

When we provide enough energy, we can create new matter in the form of particle-antiparticle pairs. This happens billions of times in particle accelerators and demonstrates with incredible precision the validity of the energy-matter equivalence implied by Einstein’s most famous equation: E = mc². Here, the constant c is the speed of light in vacuo, and its square is a huge number (just a bit below 10¹⁷ m²/s²). This means that a small mass corresponds to a huge amount of energy and vice versa. Particle and antiparticle (for example, electron and positron) must have opposite quantum numbers, such that they can disappear completely when they meet (this is called annihilation). Therefore, they must have opposite electric charges or be neutral, and their other additive quantum numbers must be opposite too. Hence, all known fundamental fermions differ from their antiparticles.
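A worked example with the mass of the electron shows the orders of magnitude involved (constants rounded):

```python
# Worked example of E = m c^2: the rest energy of a single electron.
m_electron = 9.109e-31        # kg
c = 2.998e8                   # m/s, so c^2 is just below 1e17 m^2/s^2

E_joule = m_electron * c**2
E_keV = E_joule / 1.602e-19 / 1e3
print(f"c^2 = {c**2:.3e} m^2/s^2")
print(f"E = {E_joule:.3e} J  (~{E_keV:.0f} keV)")
# Electron-positron annihilation releases twice this amount, carried away by
# the two back-to-back 511 keV photons exploited in PET.
```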

However, Mathematics tells us that it does not need to be so. In 1937 Ettore Majorana found that a relativistic wave equation may be written in such a way that only real numbers are needed. Quantum Mechanics heavily relies on complex numbers instead, and particles and antiparticles are described by complex-conjugate wave functions. In contrast, solutions of the Majorana equation are fermions that coincide with their antiparticles, called Majorana fermions. Although no known elementary particle behaves like them, there are physical phenomena, called quasiparticles, that behave like particles but are not: they are the collective behaviour of a set of particles or vibrations. One possible realization of Majorana fermions is exploited by the Majorana 1 processor to achieve fault-tolerant quantum computation (more details can be found in this article).

Are we so close to the epoch in which quantum computation becomes able to address biochemistry problems? There is quite some excitement about it in fields like pharmacy, where big investments are made each year to discover new drugs. While this is really promising, there is also a very different kind of excitement, arising from the threat that quantum computing poses to our current cryptography.
As most current encryption approaches (like RSA) depend on the difficulty of factoring large numbers, quantum algorithms like Shor’s algorithm could break them in minutes on a sufficiently powerful quantum computer. If you consider that the huge amount of confidential information and communications currently stored worldwide over the Internet will soon be vulnerable to this kind of cyberattack, you’ll certainly understand the worries of governments and other organizations about quantum computing. Anyway, Shor’s factorization algorithm is brilliant and fascinating: you can get a wonderful explanation of how it works in this video by Veritasium and have fun 🙂 while you learn how to adopt new encryption algorithms that enable quantum-resistant communication.
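To see why finding a period reveals the factors, here is a purely classical Python sketch of the number-theoretic core behind Shor’s algorithm, applied to a toy number; the quantum speed-up comes from finding the period exponentially faster, which this sketch does not attempt.

```python
from math import gcd

# Factoring N reduces to finding the period r of f(x) = a^x mod N.  For a
# tiny N we can find r by brute force and see how it exposes the factors.
def find_period(a: int, N: int) -> int:
    x, value = 1, a % N
    while value != 1:
        x += 1
        value = (value * a) % N
    return x

N, a = 15, 7                      # toy example; gcd(7, 15) = 1
r = find_period(a, N)             # r = 4
# If r is even and a^(r/2) != -1 mod N, then gcd(a^(r/2) +/- 1, N) are factors.
p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
print(f"period r = {r}, factors of {N}: {p} x {q}")   # 3 x 5
```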