(Re)Share #68 - Simulate-breaking news
Robotic training | Acoustic engineering | DeSci 101 | Protein folding | AI infra | IVF organoids
Happy New Year and welcome to the first issue of 2026! I hope you all had a festive holiday season; I certainly did. As you may remember, my wife and I hosted Christmas for the first time, and I’m pleased to say it was a success. Fun fact: if you cook a 14 lb turkey for a family that doesn’t really like turkey, it will take you about 2.5 weeks to finish eating said turkey. But I digress. There’s been a lot of news since we last got together. No, I’m not referring to the increasingly aggressive return to imperialism, the decay of the rule of law, and the justification of military action by men who would probably sprain their wrists if they ever fired a gun. Let’s get to it.
Shameless Plug
The case against accelerators - I penned a bit of a hot-take dunk on programmatic company building and why accelerators have declined in relevance, at least from a VC sourcing perspective. Apart from a few select strategies, accelerators were a product of their time and no longer make sense.
Stuff Worth Sharing
Human resources - If you’re worried about being overtaken by AI-powered robotic overlords, fear not. They still need us—at least for a little while longer. Physical Intelligence shared research demonstrating that human-to-robot skill transfer emerges naturally in large vision-language-action (VLA) models once they are trained on sufficiently large and diverse robot datasets. Egocentric human videos, long considered difficult to use due to functional domain gaps, can significantly improve robot performance without any explicit transfer mechanisms. By simply co-fine-tuning a scaled VLA model on human videos and relevant robot data, performance on generalization tasks (bussing, sorting, tidying) improved by 2x. The result mirrors emergence in LLMs and suggests that scaling robotic foundation models unlocks new data sources—making cheap, abundant human video a viable training signal for real-world robots.
Lost and sound - Researchers at the University of Connecticut have developed a programmable acoustic metamaterial capable of reshaping sound waves in real time. The system consists of an 11×11 grid of asymmetrical pillars, each individually rotated by a motor in one-degree increments, allowing precise control over how sound is bent, focused, or dampened. Unlike traditional fixed metamaterials, the platform can be reconfigured instantly without remanufacturing, enabling it to adapt across frequencies and functions. Experiments show it can concentrate sound at specific points for applications like ultrasound imaging, acoustic tweezers, and targeted therapies, or guide waves along edges in a manner analogous to topological insulators. The team argues the vast combinatorial design space makes the material a candidate for AI-driven optimization, pointing toward future intelligent materials that autonomously tune themselves to achieve desired wave behaviors.
Tatooiner take all - Nobel Prize–winning chemist Omar Yaghi is betting that metal-organic frameworks (MOFs) can unlock a new way to produce potable water directly from air, even in deserts. If that sounds like Uncle Owen’s moisture farm - well, yes. Through his startup Atoco, Yaghi is developing machines that use MOFs’ vast internal surface areas to capture water molecules at low humidity and release them with minimal energy input, potentially using only sunlight. Unlike conventional atmospheric water generators that rely on energy-intensive refrigeration or membranes, MOFs can operate efficiently in much drier conditions. Atoco plans both grid-powered industrial systems capable of producing thousands of liters per day and off-grid units for remote regions, with early field trials slated this year in the Mojave Desert. There are certainly limitations to the technology today, but I get excited when I read research like this. It holds the potential to significantly reduce reliance on desalination and centralized water infrastructure—issues that many of my more environmentally focused investor friends are extremely concerned with. On multiple occasions I’ve heard the argument that water could drive geopolitics for the next century, and based on how this issue is shaping up, that’s not far-fetched.
Material evidence - AI-driven materials discovery is learning the lesson techbio learned a few years ago: simulations alone aren’t enough. Despite the hundreds of millions invested in models and the (purportedly) millions of hypothetical materials discovered, very few have translated into useful, real-world substances. When bits try to innovate on atoms, it’s the synthesis and testing of materials that counts—and that is a slow, expensive process. A new wave of startups, including Lila Sciences, Periodic Labs, and Radical AI, is building AI-directed, autonomous laboratories that combine robotics, high-throughput synthesis, and LLM-based agents to plan, run, and analyze experiments, promising to compress materials timelines from decades to years. Long-time readers will recall that I’ve opined on lab automation myself, and it remains one of my top priorities.
Weird science - My friends at Compound wrote a great piece on the promise and playbooks for decentralized science (DeSci)—the innovation that results from applying decentralized / crypto primitives to scientific discovery. I’ve always been a little skeptical of the investment viability of decentralized orgs (call me old fashioned), but far be it from me to disagree with Shelby or 0XSMAC. The piece makes some interesting arguments on the natural advantages of such a structure, especially for relatively fixed-term projects where traditional capital markets fail. DeSci enables rapid access to global capital, larger and more diverse contributor bases, and new incentive structures for data generation, experimentation, and peer review. It has already shown traction in underfunded areas like women’s health or rare disease. For the traditional scientists in my readership, the playbook makes it clear that DeSci is not a replacement for traditional biotech, but a complementary model that turns science into a more programmable, participatory, and capital-efficient system.
Simulate-breaking news - Some back-to-work preparation for a portfolio company brought me to this paper, and despite covering a complex topic (pun intended), the explanation is quite accessible even for those less familiar with the space. Cosmohedra is a physics-based method for rapidly predicting the structures of very large, symmetric protein complexes—an area that has long been a blind spot for AlphaFold-style models due to memory and computational limits. Cosmohedra assembles complexes using geometric and physical heuristics that scale beyond the somewhat arbitrary ~5,000 amino acid ceiling. Across 36 benchmark complexes, the method achieved near-experimental accuracy, approaching the resolution limits of cryo-electron microscopy data. Structures with up to ~40,000 amino acids were built in minutes on consumer hardware, making Cosmohedra 10³–10⁵× faster than molecular-dynamics docking methods.
Intelligentle giant - One of the biggest stories coming out of CES last week was the announced partnership between DeepMind and Boston Dynamics. The Gemini Robotics model will be deployed within Atlas humanoid robots and trialled in a Hyundai auto factory (an investor). This was a pretty obvious team-up in the making, and not just because of past ownership ties. Atlas (and most of the BD fleet) is the undeniable leader in physical capability, but it has been largely absent from meaningful applications because it lacks contextual awareness and object-manipulation skills. While Atlas already excels at balance and acrobatics, Gemini’s multimodal perception and decision-making framework promises to let Atlas identify unfamiliar objects and adapt to changing environments. This is the obvious strategy for every robotic intelligence play, and while BD is a big get, the sheer cost of those embodiments makes all but the highest-volume tasks economically unviable. Still, an exciting development to watch from the sidelines.
Scaffold under the pressure - The age-old design principle “less is more” has made its way to LLMs. Vercel reports some interesting results from rebuilding its internal text-to-SQL agent, d0. By removing most of the agent’s specialized tools, they made it significantly faster, simpler, and more reliable. The original system relied on heavy prompt engineering, schema-specific tools, and guardrails to prevent hallucinations, but it proved slow and fragile. Vercel gutted the entire setup and gave Claude direct access to raw analytics files and only a single capability: execute bash commands. The model now explores data the way a human analyst would, and the stripped-down agent achieved a 100% success rate (up from 80%), ran 3.5× faster, and used 37% fewer tokens. As models improve, complex agent systems can become a liability, and well-structured data plus minimal tooling often beats elaborate orchestration.
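To make the pattern concrete, here is a minimal sketch of a bash-only agent loop in the spirit of what Vercel describes. This is an illustration, not their implementation: the function names and the `ask_model` interface are hypothetical stand-ins for a real LLM call, and the only real capability wired in is shell execution.

```python
import subprocess

def run_bash(cmd: str, timeout: int = 30) -> str:
    """Execute one shell command and return its combined, stripped output."""
    result = subprocess.run(
        cmd, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return (result.stdout + result.stderr).strip()

def agent_loop(question: str, ask_model, max_steps: int = 10) -> str:
    """Single-tool agent: the model sees the question plus a transcript of
    every command it has run and the output, then either issues another
    bash command or returns a final answer.

    `ask_model` is a hypothetical callable standing in for an LLM; it takes
    the transcript and returns {"bash": cmd} or {"answer": text}.
    """
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        action = ask_model(transcript)
        if "answer" in action:
            return action["answer"]
        output = run_bash(action["bash"])
        transcript += f"$ {action['bash']}\n{output}\n"
    return "step limit reached"
```

The design choice this illustrates is exactly Vercel's point: all the schema-specific tools and guardrails collapse into one generic capability, and the model explores raw files (`ls`, `head`, `grep`, a SQL CLI) the way a human analyst would.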
The doctor won’t see you now - OpenAI has launched ChatGPT Health, a dedicated experience designed to help users better understand and manage their health—while preserving the privacy protections that I personally have completely thrown in the trash. The amount of mental and physical health information I’ve dumped into OpenAI’s data coffers is…concerning. In this new architecture, Health lives in a separate, isolated space within ChatGPT, allowing users to securely connect medical records and wellness apps to ground conversations in their own data. Conversations in Health are not used to train foundation models and are protected with additional encryption and access controls. Except for Europe—no soup for you.
Eggsecutive decision - Through both personal and professional experience I can say with extreme clarity that IVF is a broken system, so anytime I see interesting research pushing the boundaries in that arena, I perk up. Scientists in China, the US, and Europe have now created the most realistic lab models yet of early human pregnancy by combining human embryos (and embryo-like “blastoids”) with uterine organoids grown inside microfluidic chips. These systems recreate implantation—the moment an embryo attaches to the uterine lining—allowing researchers to directly observe a process that normally happens unseen inside the body. Across three Cell Press papers, embryos burrowed into endometrial tissue and initiated early placental development before experiments were halted at or before the 14-day ethical limit. Researchers ran thousands of trials using blastoids, enabling large-scale studies of why implantation often fails during IVF. In one experiment, screening 1,119 approved drugs identified compounds that increased implantation rates from ~5% to ~25%.
Portfolio Flex
Microsoft gave Wayve some nice coverage, describing the history of the contrarian bet the company took and announcing an expanded scope for their Azure partnership.
Some new additions to the Fly portfolio - (publicly) welcome Clicks, xMemory, TimeTrace Labs and Facet!


