By: Thomas Stahura
Editor’s note: This is the first in a three-part Token Talk series on quantum computing. Today’s post covers the fundamentals of how quantum machines work. Next week, we’ll dive into the key players in the field and the startups already building real applications.
You’ve probably heard of quantum computers.
First demonstrated in 1998, this breed of thinking machine is billed as the quintessential disrupter of classical computing. But when asked exactly why or how these machines will change the world, most folks just shrug.
Over the last 27 years, the field has gone from two qubits per chip to an astounding 1,121 qubits in IBM's latest quantum chip. Still, few have seen, let alone used, a quantum computer. What gives?
Before diving into the new world of quantum computers, let's quickly cover the old world of classical computers.
Classical computers (like the device you're looking at now) store information in binary bits: 1s and 0s. This information flows through a series of logic gates that each perform a certain mathematical operation. The basic logic gates are: NOT, AND, OR, NAND (Not AND), NOR (Not OR), XOR (Exclusive OR), and XNOR (Exclusive NOR, or Equivalence).
Take the NAND gate. Its function is to output 0 only if both of its inputs are 1; otherwise, it outputs 1.
So,
Input: 1, 1 → Output: 0
Input: 1, 0 → Output: 1
Input: 0, 1 → Output: 1
Input: 0, 0 → Output: 1
The NOR gate, on the other hand, outputs 1 only if both inputs are 0; otherwise, it outputs 0.
So,
Input: 0, 0 → Output: 1
Input: 0, 1 → Output: 0
Input: 1, 0 → Output: 0
Input: 1, 1 → Output: 0
And lastly, the NOT gate (AKA the inverter) flips the input.
So,
Input: 1 → Output: 0
Input: 0 → Output: 1
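
If it helps to see those truth tables as code, here's a minimal sketch of the three gates as plain Python functions (the names and the 0/1 integer representation are just for illustration):

```python
# Classical gates as ordinary functions on the integers 0 and 1.
def NOT(a):
    return 1 - a

def NAND(a, b):
    return 0 if (a, b) == (1, 1) else 1

def NOR(a, b):
    return 1 if (a, b) == (0, 0) else 0

# Reproduce the truth tables from the text.
for a in (0, 1):
    for b in (0, 1):
        print(f"NAND({a}, {b}) = {NAND(a, b)}   NOR({a}, {b}) = {NOR(a, b)}")
for a in (0, 1):
    print(f"NOT({a}) = {NOT(a)}")
```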
Logic gates are the LEGO bricks of computation. By chaining them together, you build circuits that can add, subtract, multiply, and more. OK, now to understand how quantum computers differ from classical ones, you also need to understand the concept of reversibility.
A logic gate is reversible if you can always uniquely recover the input from the output.
For example, say a NAND gate outputs a 1. What was the input? It could be 0,0 or 0,1 or 1,0. Since we cannot uniquely recover the input from the output, we say NAND gates are not reversible. In other words, information (about the input) is lost.
NOT gates, on the other hand, are reversible. For example, if a NOT gate outputs a 0, we know the input must be 1. And if it outputs a 1, its input must be 0.
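
You can check this yourself by grouping every input by the output it produces; another quick illustrative sketch:

```python
from collections import defaultdict

NAND = lambda a, b: 0 if (a, b) == (1, 1) else 1
NOT = lambda a: 1 - a

# Group every NAND input by its output: three inputs collide on output 1,
# so the input can't be uniquely recovered.
nand_preimages = defaultdict(list)
for a in (0, 1):
    for b in (0, 1):
        nand_preimages[NAND(a, b)].append((a, b))
print(dict(nand_preimages))  # {1: [(0, 0), (0, 1), (1, 0)], 0: [(1, 1)]}

# NOT maps each input to a distinct output, so it's reversible.
not_preimages = defaultdict(list)
for a in (0, 1):
    not_preimages[NOT(a)].append(a)
print(dict(not_preimages))   # {1: [0], 0: [1]}
```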
Now that you get classical gates — NAND, NOR, NOT, etc. — it's time to dive into quantum computers because they are playing a whole different game. Instead of bits, they use qubits.
Qubits aren’t just 0 or 1; they can be both at the same time (that’s superposition). And quantum gates are the logic gates that manipulate these qubits.
The first rule of quantum math is: Every quantum gate is reversible. Meaning you can always run them backward and recover your original state.
Classical gates (like NAND/NOR) can destroy info (not reversible). Quantum gates never do. They’re always reversible, always unitary (fancy math words for “no info lost”).
Because of this reversibility requirement, quantum computers have their own set of quantum logic gates that permit a certain kind of math. Let's go over two of them:
The Hadamard (H) gate is the superposition gate. Input a 0 and you get a 50/50 mix of 0 and 1. Imagine flipping a coin: while it spins in midair, it sweeps out a sphere, and its probability at that moment is a 50/50 chance of heads or tails. Input a 1 and it's the same deal: still a 50/50 mix, but with a phase flip. Imagine representing the direction and speed of the coin's spin as an arrow in 3D space; that arrow has a direction (phase) and a speed (magnitude), and flipping the phase reverses the direction of the coin's spin. The Hadamard gate is how you unlock quantum parallelism: it takes a boring, definite state and turns it into a quantum probabilistic state. In short, it's the logic gate that turns classical bits into quantum bits.
So,
Input: |0⟩ → Output: (|0⟩ + |1⟩)/√2, a 50% chance of measuring 1 or 0
Input: |1⟩ → Output: (|0⟩ - |1⟩)/√2, still 50/50, but with the phase flipped
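
One way to play with the H gate without any quantum hardware is the standard textbook picture: states as 2-element complex vectors and gates as 2x2 matrices. A minimal numpy sketch (not tied to any particular quantum SDK):

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)   # |0>
ket1 = np.array([0, 1], dtype=complex)   # |1>

# The Hadamard gate as a 2x2 unitary matrix.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

plus = H @ ket0    # (|0> + |1>) / sqrt(2)
minus = H @ ket1   # (|0> - |1>) / sqrt(2): same magnitudes, flipped phase

# Measurement probabilities are the squared magnitudes of the amplitudes.
print(np.abs(plus) ** 2)    # [0.5 0.5]
print(np.abs(minus) ** 2)   # [0.5 0.5]
```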
Once your qubit is in superposition, you can start doing some wild quantum tricks. The next essential gate is the Pauli-X gate (often just called the X gate). Think of the X gate as the quantum version of the classical NOT gate. It flips the state of a qubit:
Input: |0⟩ → Output: |1⟩
Input: |1⟩ → Output: |0⟩
If your qubit is in superposition (say, α|0⟩ + β|1⟩), the X gate swaps the amplitudes:
Input: α|0⟩ + β|1⟩ → Output: α|1⟩ + β|0⟩
Still reversible, still no info lost.
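
Here's the same matrix picture for the X gate, showing the amplitude swap and the fact that applying it twice undoes it (the values of α and β are arbitrary illustrative choices):

```python
import numpy as np

# The X gate: the quantum NOT.
X = np.array([[0, 1],
              [1, 0]], dtype=complex)

# An arbitrary superposition alpha|0> + beta|1>
# (chosen so alpha^2 + beta^2 = 1).
alpha, beta = 0.6, 0.8
state = np.array([alpha, beta], dtype=complex)

flipped = X @ state      # amplitudes swap: beta|0> + alpha|1>
print(flipped)           # [0.8+0.j 0.6+0.j]

# Reversible: apply X again and the original state comes back exactly.
print(X @ flipped)       # [0.6+0.j 0.8+0.j]
```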
In quantum computing, amplitudes (like α and β) are complex numbers that represent the arrows in 3D space mentioned earlier. They encode both the phase and the magnitude of a qubit's state, and the probability of measuring a given outcome is the squared magnitude of its amplitude. The phase (angle) of the amplitude affects how quantum states interfere, but is not directly observable as a probability.
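
A tiny numerical illustration of that paragraph, with made-up amplitude values:

```python
alpha = 0.5        # amplitude with phase 0
beta = -0.5        # same magnitude, phase flipped (angle of pi)

# Each amplitude on its own gives the same probability; the phase drops out.
print(abs(alpha) ** 2, abs(beta) ** 2)   # 0.25 0.25

# But when two paths lead to the same outcome, their amplitudes add first,
# and only then do you square. Phase decides whether they reinforce or cancel.
print(abs(alpha + alpha) ** 2)   # 1.0  (constructive interference)
print(abs(alpha + beta) ** 2)    # 0.0  (destructive interference)
```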
After many quantum logic gates, when you measure a qubit, its superposition collapses to a definite 0 or 1. So, to get a quantum speedup, your algorithm must:
Exploit superposition and entanglement to process many possibilities at once.
Be reversible (unitary operations only).
Use a technique called interference to amplify the correct probabilities and cancel out the wrong ones.
Most problems don’t fit this mold. If you just naively port classical code, you’ll get no speedup — or worse, a slowdown.
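
To make the interference point concrete, here's the smallest toy example (again in the numpy picture from above, and definitely not a real algorithm): send a qubit through a Hadamard twice. The amplitudes for measuring 1 cancel, and the qubit comes back to a definite 0.

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)
ket0 = np.array([1, 0], dtype=complex)

after_one_H = H @ ket0           # superposition: 50/50 if measured here
after_two_H = H @ after_one_H    # amplitudes interfere

print(np.abs(after_one_H) ** 2)  # [0.5 0.5]
print(np.abs(after_two_H) ** 2)  # [1. 0.]  the paths to |1> cancelled out
```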
As of today, only a handful of algorithms are known to take real advantage of quantum computers' unique properties. The big four are Shor's Algorithm (factoring integers), Grover's Algorithm (unstructured search), Quantum Simulation (physics simulations), and Quantum Machine Learning (QML).
Shor's algorithm, using the quantum Fourier transform, finds the prime factors of large numbers exponentially faster than the best known classical algorithms. This has massive implications for cryptography: it breaks RSA encryption, which relies on prime factoring being difficult and secures most of the internet today.
Grover's algorithm, using amplitude amplification to boost the probability of the correct answer, searches an unsorted database in roughly √N steps instead of N, which works out to about 99.9% fewer checks for a million items. And the speedup grows as the database gets bigger.
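
Where that 99.9% figure comes from is simple back-of-the-envelope arithmetic (ignoring Grover's constant factors):

```python
import math

N = 1_000_000
classical_checks = N               # worst case: look at every item
grover_iterations = math.sqrt(N)   # roughly sqrt(N) amplitude-amplification steps

print(grover_iterations)                         # 1000.0
print(1 - grover_iterations / classical_checks)  # 0.999 -> ~99.9% fewer checks
```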
Quantum Simulation, using entanglement and superposition, models complex quantum systems — like molecules, proteins, or new materials — that quickly become intractable for classical computers. This unlocks breakthroughs in drug discovery, chemistry, and materials science by letting us “test” new compounds in silico before ever touching a lab.
Quantum Machine Learning (QML), using quantum circuits, can turbocharge core tasks like linear algebra and sampling. Quantum computers, in theory, can solve huge systems of equations, invert matrices, and sample from complex probability distributions faster than classical machines. Though this is still very much in the domain of researchers.
A new wave of pre-quantum startups is building the application layer for quantum computing. Just as AI startups turned research into real-world value, these teams are doing the same for quantum by targeting proven algorithmic advantages. They are developing tools for drug discovery, molecular modeling, cybersecurity, faster search, and design optimization in aerospace and manufacturing. These companies are positioning themselves now so they are ready to scale when the hardware becomes readily available.
Ok, that was a crash course in quantum computing! Abstract, but just scratching the surface. And there’s still a whole universe left to explore: More quantum logic gates, quantum error correction (how do you keep qubits from falling apart?), decoherence (why do quantum states vanish so easily?), entanglement (spooky action at a distance, anyone?), and the wild world of quantum hardware (trapped ions, superconducting circuits, photonics, and more). We haven’t even touched on the real-world challenges — scaling up, keeping things cold, and making quantum computers actually useful outside the lab.