Dark Laboratories: AI Industry Veterans Launch New AI-Scientist Venture, Periodic Labs
Veterans of OpenAI, DeepMind, and Stanford University, including contributors to ChatGPT and DeepMind's GNoME, have unveiled their new venture: Periodic Labs, a startup that aims to create an AI scientist. Its mission is to automate the entire process of ideating, hypothesizing, experimenting, and even publishing results, potentially redefining how research is conducted.
The Shift Toward Autonomous Science
High-profile launches such as Periodic Labs and Lila Sciences signal a deeper transformation in how science itself may evolve in the coming decades. Artificial intelligence has so far advanced by automating the hypothetical–deductive process, analyzing vast datasets — papers, code, images, and protein sequences.
But even the largest models collide with a boundary: the internet, while vast, is finite. What happens when AI exhausts humanity’s recorded knowledge and can no longer generate new hypotheses from existing data?
From Studying Science to Doing Science
The founders of Periodic Labs offer a hypothesis of their own: AI shouldn’t just study science — it should do science. Their vision is to build AI scientists operating in autonomous “Dark Laboratories”, capable of designing hypotheses, running experiments, and generating new knowledge not yet found in any dataset.
While some companies like Lila Sciences aim to create AI that assists researchers, Periodic Labs seeks full automation — covering ideation, experimentation, deduction, and analysis.
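The cycle described above, proposing hypotheses, running experiments, and folding results back into the system's knowledge, is essentially a closed control loop. A minimal sketch of that loop is below; the class and methods are purely illustrative stand-ins, not Periodic Labs' actual system, and the "experiment" here is a toy function rather than a robotic lab.

```python
import random

class AutonomousLab:
    """Hypothetical sketch of a closed-loop AI-scientist cycle:
    propose a candidate, test it, record the result, repeat."""

    def __init__(self, target_property: float):
        self.target = target_property
        self.knowledge = []  # accumulated (candidate, result) pairs

    def propose(self) -> float:
        # Stand-in for a generative model proposing a candidate material.
        return random.uniform(0.0, 1.0)

    def experiment(self, candidate: float) -> float:
        # Stand-in for a physical measurement: score how close the
        # candidate's property is to the target.
        return 1.0 - abs(candidate - self.target)

    def run(self, iterations: int) -> float:
        best = 0.0
        for _ in range(iterations):
            candidate = self.propose()
            result = self.experiment(candidate)
            self.knowledge.append((candidate, result))
            best = max(best, result)
        return best

lab = AutonomousLab(target_property=0.7)
score = lab.run(iterations=100)
```

The point of the sketch is the shape of the loop, not the internals: a real system would replace `propose` with a trained generative model and `experiment` with instruments in an automated lab, and the loop would run without a human in between.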
The Promise and Power of Machine-Driven Discovery
If successful, the implications could be profound. It could remove human bottlenecks in research and drastically accelerate progress. Science has always advanced through tools that extend human capability — the telescope, the particle accelerator, PCR, the genome sequencer.
AI scientists could be the next such instrument — not just observing but proposing, testing, and refining ideas at machine speed.
Imagine “dark laboratories” requiring only electricity, compute power, and raw materials, yet producing research papers, novel materials, and even medical breakthroughs ready for clinical trials.
Starting with Superconductors
Periodic Labs begins with one of the most ambitious challenges in physics — discovering high-temperature superconductors, materials that could revolutionize energy transmission, computing, and transportation.
Its broader goal is to automate materials design itself. If achieved, it could reshape industries from semiconductors and fusion energy to aerospace and manufacturing.
Building on the Momentum of AI + Science
This movement doesn’t exist in isolation. DeepMind’s GNoME has identified hundreds of thousands of stable new materials, MatterGen has pushed generative modeling for physics, and cloud-based autonomous labs are reshaping experimental workflows.
Periodic Labs builds on this foundation, backed by top-tier investors — a16z, Accel, NVentures, and others — and advised by Nobel laureates and leading physicists.
Challenges and Questions Ahead
The vision, however, raises serious questions. How do we validate discoveries generated by non-human agents?
Who bears responsibility if an AI scientist misinterprets an experiment with real-world consequences?
How will the scientific community adapt when machines contribute not just assistance, but agency?
These challenges mirror those already faced by software developers who lean heavily on AI-generated code, "vibe coding" their way through thousands of lines without fully understanding the structure. The same risks apply: unseen errors, security flaws, and misinterpretations could multiply at scale in physical experiments.
Governance, Provenance, and Trust in AI Science
To avoid these pitfalls, governance must be built in from the start. That means embedding provenance and traceability — instrument logs, simulation seeds, model checkpoints, and sensor data — and publishing negative results and raw datasets wherever safe.
Partnerships with standards bodies and third-party verification labs will be critical. Robust governance doesn’t just reduce risk; it increases trust and adoption, ensuring that scientific discoveries come with verifiable lineage, reproducible protocols, and transparent safety checks.
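One concrete way to embed the provenance and traceability described above is to bundle each experiment's metadata (instrument logs, simulation seeds, model checkpoints) into a record that hashes its own payload, so reviewers can detect any after-the-fact tampering. The sketch below assumes a simple JSON-based format; the field names and values are illustrative, not any existing standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(hypothesis: str, instrument_log: str,
                      simulation_seed: int, model_checkpoint: str) -> dict:
    """Build a self-verifying provenance record for one experiment.

    The SHA-256 digest covers a canonical (sorted-key) serialization
    of the payload, so any modification changes the hash.
    """
    payload = {
        "hypothesis": hypothesis,
        "instrument_log": instrument_log,
        "simulation_seed": simulation_seed,
        "model_checkpoint": model_checkpoint,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return {"payload": payload, "sha256": digest}

# Hypothetical example values:
record = provenance_record(
    hypothesis="Candidate X superconducts above 200 K",
    instrument_log="run-0421.log",
    simulation_seed=1234,
    model_checkpoint="ckpt-2025-01-15",
)
```

A third-party verifier needs only the payload and the published digest to check integrity; no trust in the originating lab's software is required for that step.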
The Expanding Boundaries of the Scientific Method
Hype or not, one thing is clear: the boundaries of the scientific method are expanding. Periodic Labs is betting that the next great leap won’t come from bigger datasets or better textbooks, but from AI systems that act upon the world — not just read about it.
As co-founder Xiang Fu puts it: “Our vision is to give AI scientists not just intelligence, but the ability to act.”
And as Liam Fedus adds: “If successful, this shift could redefine the nature of discovery itself — moving us from the age of information to the age of autonomous experimentation.”
Check out the interview below for the founders' own optimistic view of the future of research.

