Sam Altman Backs Merge Labs to Build Brain-Computer Interfaces
Sam Altman and OpenAI are backing Merge Labs, a new brain-computer interface startup that competes directly with Elon Musk’s Neuralink. The move accelerates competition in neurotech, raises safety, privacy, and regulatory questions, and forces businesses and regulators to rethink ethics, security, and integration with AI systems.
What happened
Sam Altman, together with OpenAI, is backing Merge Labs, a new startup focused on brain-computer interfaces (BCIs). The Financial Times reports that the move positions Merge as a direct rival to Elon Musk’s Neuralink and marks a clear expansion of Altman’s long-stated interest in fusing human cognition with machines.
Why it matters
This development turns an ongoing tech rivalry into a race for the next interface between humans and machines. Altman has written about “the merge” as a best-case path toward close human–AI interaction, and Merge Labs signals an attempt to translate that idea into hardware and clinical systems.
The immediate effects are threefold: faster technical progress driven by competition, heightened public and regulatory scrutiny, and more urgent questions about safety, privacy, and data governance when neural signals become digital assets.
Real-world use cases and risks
BCIs promise transformative applications: restoring motor function, enabling new accessibility tools, and augmenting cognitive tasks. But they also introduce unique risks—permanent implants, sensitive neural data, attack surfaces for malicious actors, and potential misuse in workplace or consumer settings.
Consider medicine: a BCI that decodes intended speech could restore communication to patients, yet it must meet clinical standards, protect patient privacy under health laws, and defend against data exfiltration.
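To make that data-protection requirement concrete, here is a minimal sketch of device-side encryption, assuming a Python companion-device stack and the widely used cryptography library; the frame format, channel name, and sample values are all invented for illustration, not drawn from any real BCI product.

```python
import json
import time

from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical device-side key. In a real implant this would live in a
# hardware security module, not in application memory.
key = Fernet.generate_key()
cipher = Fernet(key)

def package_neural_frame(samples: list[float], channel: str) -> bytes:
    """Serialize and encrypt one frame of neural samples so that it
    leaves the device only as ciphertext."""
    frame = json.dumps({
        "channel": channel,
        "captured_at": time.time(),
        "samples": samples,
    }).encode("utf-8")
    return cipher.encrypt(frame)

# Synthetic electrode readings; real signals would come from the implant's ADC.
payload = package_neural_frame([0.12, -0.07, 0.33], channel="motor_cortex_01")
print(f"{len(payload)} encrypted bytes ready for upload")
```

Encrypting before transmission means a compromised uplink or cloud store sees only ciphertext, which is the property health-privacy and exfiltration defenses ultimately depend on.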
What organizations should do now
Companies, healthcare providers, and regulators should begin planning for a world where neural data interfaces with AI models. Early preparation reduces risk and preserves options for safe innovation.
- Map data flows: identify where neural signals are collected, stored, and processed (a minimal inventory sketch follows this list).
- Build threat models specific to implanted devices and their cloud integrations.
- Draft regulatory and ethical guardrails for consent, data ownership, and clinical validation.
- Plan integration tests between on‑device models and cloud AI to ensure robustness and privacy.
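For the first two items, here is a hedged sketch of what a machine-readable data-flow inventory might look like in Python; every device name, hop, and threat entry is hypothetical and would be replaced by an organization’s real pipeline.

```python
from dataclasses import dataclass, field

@dataclass
class DataFlow:
    """One hop a neural signal takes between collection and processing."""
    source: str
    destination: str
    data_class: str                 # e.g. raw samples vs. decoded intents
    encrypted_in_transit: bool
    threats: list[str] = field(default_factory=list)

# Hypothetical inventory for an implant -> phone -> cloud pipeline.
flows = [
    DataFlow("implant electrodes", "companion phone app",
             "raw neural samples", encrypted_in_transit=True,
             threats=["radio-link interception", "replay of stale frames"]),
    DataFlow("companion phone app", "cloud inference service",
             "decoded intent tokens", encrypted_in_transit=False,
             threats=["data exfiltration", "unauthorized model training"]),
]

# Surface any hop that would move neural data in the clear.
for flow in flows:
    if not flow.encrypted_in_transit:
        print(f"RISK: {flow.source} -> {flow.destination} is unencrypted")
```

Even a toy inventory like this makes the fourth item testable: integration tests can assert that no flow ships with encrypted_in_transit set to False.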
Big picture
Merge Labs’ entry into the BCI race accelerates a debate that has been theoretical for years: how do we merge the power of AI with the sanctity of human cognition? Competition can drive breakthroughs, but it also compresses timelines for safety and policy. Expect faster technical progress, louder public scrutiny, and more urgent demands for practical governance.
For tech leaders and regulators, the question isn’t whether BCIs will arrive—it’s how to make their arrival beneficial and safe. That requires technical rigor, cross-disciplinary policy work, and careful design choices that balance innovation with rights and protections.
QuarkyByte’s approach is to marry technical assessment with pragmatic governance: we analyze risks, design data and threat models, and help organizations plan compliance and clinical paths so innovation can proceed without sacrificing trust.