By: Thomas Stahura
How many times has the perfect sentence slipped away in the second it takes to articulate that thought?
Our interactions with computers evolved from punch cards to command lines, mice to touchscreens, and typing to talking, each step more natural than the last. Now, we stand on the brink of the ultimate leap: a direct brain-computer interface — AKA the closest thing we’ll experience to telepathy.
Right now, as you read this, the roughly eighty-six billion neurons inside your skull are chemically sparking across synapses at hundreds of miles an hour. Every letter you scan triggers a cascade of action potentials. Out of the statistical roar forms a pattern, and the pattern encodes a thought. At least that's how the thinking goes.
Brains are still stumped on how brains work.
The core problem for any BCI, then, is capturing and interpreting these signals. The most common way to do so is electroencephalography (EEG) — the path taken by companies like Emotiv, Neurable, and the open-source OpenBCI. These non-invasive devices rely on placing an array of electrodes on the scalp to measure the sum total of brain waves leaking through the cranium. The skull, however, acts as both insulator and filter, blurring the crisp detail of the original neural activity.
For higher-fidelity signals, there are always invasive methods, such as ECoG grids or Neuralink's fine wire arrays, which surgically place electrodes directly onto or into brain tissue. The arrays then pick up individual neuron firings or local field potentials, delivering ultra-high-resolution recordings at the cost of invasive brain surgery. But the technology remains confined to specialized clinical trials.
(We have an investment in the BCI space operating in stealth.)
Still, capturing brain data is only half the battle. It has to be cleaned, labeled, and turned into intent. The pipeline starts with applying Independent Component Analysis (ICA) to separate the mixed EEG signals into independent components: muscle artifacts, eye blinks, other unconscious activity, and genuine neural signal.
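Here's a minimal sketch of that unmixing step using scikit-learn's FastICA on synthetic data (the "alpha rhythm," "blink," and "muscle" sources below are made up for illustration; real pipelines like MNE-Python run ICA on actual multi-channel recordings):

```python
# ICA sketch: unmix synthetic "electrode" signals back into their
# statistically independent sources. Sources here are invented stand-ins
# for an alpha rhythm, an eye-blink artifact, and muscle noise.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 1000)

alpha = np.sin(2 * np.pi * 10 * t)                    # 10 Hz "alpha" rhythm
blink = np.sign(np.sin(2 * np.pi * 0.5 * t))          # slow square-ish "blinks"
muscle = rng.normal(size=t.size)                      # broadband "muscle" noise
sources = np.c_[alpha, blink, muscle]

# Each scalp electrode records a different weighted mix of the sources.
mixing = rng.normal(size=(3, 3))
electrodes = sources @ mixing.T

# FastICA recovers the components (up to order and scale).
ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(electrodes)
print(components.shape)  # (1000, 3): one column per recovered component
```

In practice you'd inspect each recovered component, flag the artifact-like ones (blinks, muscle), and reconstruct the EEG without them.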
From there, features like power and phase information are extracted from the classic EEG frequency bands: Delta (0.5–4 Hz), Theta (4–8 Hz), Alpha (8–13 Hz), Beta (13–30 Hz), and Gamma (30–100 Hz). Delta and Theta waves often track deep sleep, drowsiness, or meditative states. Alpha is linked to relaxed wakefulness and idle focus. Beta signals active thinking, concentration, and problem-solving, and Gamma is the realm of higher-order cognition.
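Band-power extraction is typically done with a power spectral density estimate. A hedged sketch using SciPy's Welch method on a synthetic signal (the 250 Hz sampling rate and the test signal are assumptions, not real EEG):

```python
# Band-power features via Welch's method (scipy.signal.welch).
# Band edges follow the classic EEG conventions.
import numpy as np
from scipy.signal import welch

fs = 250  # a common EEG sampling rate, in Hz
t = np.arange(0, 10, 1 / fs)
# Synthetic channel: strong 10 Hz alpha, weaker 20 Hz beta, plus noise.
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * 10 * t)
          + 0.3 * np.sin(2 * np.pi * 20 * t)
          + 0.1 * rng.normal(size=t.size))

freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)  # 0.5 Hz resolution

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 100)}
power = {name: psd[(freqs >= lo) & (freqs < hi)].sum()
         for name, (lo, hi) in bands.items()}
print(max(power, key=power.get))  # → alpha
```

The resulting five-number vector (one power value per band, usually per electrode and per time window) is the feature set fed downstream.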
Feed these bands into neural nets that learn to map, say, a sustained increase in beta over the motor cortex to "move the cursor left," or a distinct alpha suppression to "select item."
The frontier, though, is interpreting highly nuanced commands — intricate language, ideas, and emotions.
Researchers are using invasive ECoG grids to do this by intercepting the brain's motor commands for speech, and translating those intended phonemes into synthesized words in real-time. The visual equivalent is even wilder. By scanning the visual cortex with fMRI and feeding that data to generative AI models like Stable Diffusion, scientists can reconstruct ghostly, but recognizable, images of what a person is seeing.
Both are lab-confined, but prove a critical point: the once-unthinkable is now fundamentally a data science and engineering problem. The question is no longer if we can build a co-processor for the mind, but how quickly, and what happens when we do – so stay tuned!
My 8-channel OpenBCI EEG in action: On the left, each colored trace visualizes live electrical activity from an electrode on my scalp. The center widget translates that neural chaos into a real-time “Concentrating” score based on simple regression. On the right, you see the breakdown of my brain’s activity into the classic EEG frequency bands (delta, theta, alpha, beta, and gamma). Below, a timeline graph plots my concentration rising and falling over the last few seconds.