(Re)Share #64 - Kill two birds with one clone
Synthetic avatars | BCI | Humanoid robotics | Autonomous bio | Scaling law limits
Summer is behind us and the startup world is returning to full speed. The past few weeks have been busy ones for Fly. We welcomed a new Associate to our ranks and there have been a lot of big developments on the portfolio side, as you’ll see below. But of course, what brings us together, as always, is the range of exciting developments in the world of deep tech. Let’s get to it.
Stuff Worth Sharing
- The doctor won’t see you now - A frequent topic at (Re)Share is the diverging behaviors of AI adoption in the East vs. the West. China’s strained healthcare system has created fertile ground for AI chatbots like DeepSeek, which patients increasingly treat as trusted medical companions. The article follows a kidney transplant patient who now relies on DeepSeek for symptom analysis, lifestyle advice, and emotional support, even reducing prescribed medication based on its suggestions. For her, the bot feels “more humane” than her overworked doctors. This dynamic reflects a wider trend: AI tools are stepping into gaps left by absent caregivers and overstretched health systems, offering 24/7 availability and empathy. This read put me in a weird place within the uncanny valley. On the one hand, digital access to medical care is one of the single biggest impacts that AI could have on the world. It is just not feasible to provide specialist-level coverage to every patient who needs it, and that’s especially true in more rural locales. At the same time, as the article shows, the risks of hallucination, bias, and misdiagnosis are very real. Model overconfidence comes with real-world costs, and there is only so much training that can be done on real patient engagement, at least ethically.
- Kill two birds with one clone - The undisputed leader of synthetic human video has extended its lead once again. Synthesia unveiled its Express-2 avatar offering, pushing its digital clone capability closer to photorealism. Compared to the earlier Express-1, the new system offers smoother body movements, more natural gestures, and voice cloning that preserves accent and intonation. Journalist Rhiannon Williams provides a head-to-head of the two versions using the same base training, and the improvement is quite significant. The gain is attributable to the seeming infallibility of scaling laws: where the Express-1 model had a few hundred million parameters, Express-2’s rendering model counts its parameters in the billions (a rough sketch of what that scaling relationship looks like follows this list). It still strikes me as odd that the killer app for a technology this powerful has turned out to be corporate training videos, but so speaks the market. Synthesia claims that the next frontier is interactivity, with avatars that can “talk back” in real time and answer questions mid-video. At which point, I would predict live commerce comes to the West in a big way.
- Emergency exit - Annoyingly talented writer/investor Mike Dempsey penned another great piece, this one on the changing dynamics of investing in a world of talent-based arms races and HALO’d (Hire-And-License-Out) pseudo-exits. AI startups like Inflection, Adept, Character.ai, and Windsurf raised vast sums only to be acqui-hired by larger labs in deals with questionable return structures. While these transactions may be justified as talent aggregation, Dempsey argues that HALOs reveal many startups were strategically mispositioned, e.g., betting billions on frontier LLMs without sustainable moats. Inflection, for example, raised over $1.3 billion and amassed an arsenal of 22,000 H100s but couldn’t compete with OpenAI, Anthropic, or Meta on compute, talent, or consumer distribution. Character.ai faced similar scale issues despite strong usage. Fast followers chasing platform shifts with massive early raises often end up boxed into one existential shot, burning capital and talent when incumbents consolidate. In AI, the winners are often first movers or last disruptors, not the rushed middle cohort.
- Come brain or shine - Two Canadian men in their 30s with spinal cord injuries have become the first patients outside the U.S. to receive Neuralink brain implants. Implanted at Toronto Western Hospital, the chip enabled them to control a cursor with thought alone within minutes of surgery, and both were discharged the next day. The Canadian trial will monitor up to six participants for a year, assessing safety and quality-of-life improvements while watching for risks like seizures or infection. Health advocates urge caution, and a prior U.S. patient experienced implant slippage; the technology remains experimental. Still, the impact on these two patients’ lives came materially faster than in the previous implantations we’ve covered in past issues. Some of that is due to the nature of the injury, but the pace of technological improvement is remarkable.
- Getting high on your own supply - Long-time readers will know that I am not particularly bullish on the viability of humanoid robotics, and this piece in IEEE Spectrum somewhat explains why. Humanoid robotics startups have promised factories of tens of thousands of robots, but the scaling challenge remains seriously unresolved. Agility Robotics claims its Oregon facility can build 10,000 Digits a year, and Tesla projects 50,000 Optimus units by 2026. But today, only a handful of robots are deployed, all in tightly controlled pilots. The bottleneck isn’t manufacturing; it’s demand. Companies haven’t found applications requiring thousands of humanoids in a single facility, and multipurpose AI is not yet robust enough to fill the gap. Market viability also hinges on a multitude of critical constraints: battery life, reliability, safety evidence, etc. This is the Perez curve being drawn in real time. I, for one, think that outside of consumer care (child/elderly), there is no environment that can’t be fairly easily adapted to a simpler embodiment form. Personally, I would much rather bet on basic arm functionality with a highly dexterous hand, but I may be a bit biased ;)
- Embody mass index - This is a great reflection piece from Decoding Bio on the state of biotech capacity in China, from a scientist on the ground. Cam Watson argues that the next biotech leap won’t come from in-silico prediction tools like Atomwise or AlphaFold but from machine intelligence combined with robotic labs. We’ve covered the potential of autonomous science in a number of past issues, both in coverage of leading labs like FutureHouse and in my own writing. But I have always had a Western lens, and Watson shows just how shortsighted that was. China, he argues, is uniquely positioned to lead this shift in AI-generated discovery. State-backed investment, national biofoundries, industrial robotics expertise, and reliable grid infrastructure have created fertile ground for embodied AI platforms like BioMARS, which orchestrates multi-agent AI “scientists” in fully automated labs. Integrated biotech campuses such as Suzhou’s BioBAY show the model’s commercial viability with frankly shocking success rates: 15 therapeutics to market, 54 unicorns, and 17 IPOs! He goes on to argue that biotech is about to have its internet moment and that China’s scale and speed could set the global agenda.
- Bad Apple - Researchers from Apple pressure-tested the capabilities of Large Reasoning Models (LRMs) vs. their predecessor LLMs, and the offspring didn’t do so hot. The models were tested on controllable puzzle environments like Tower of Hanoi and River Crossing, which allow for precise control of complexity and direct analysis of reasoning steps (vs. math benchmarking); a short sketch after this list shows why these puzzles make such a clean complexity dial. Findings revealed that on low-complexity tasks, standard LLMs (Claude 3.7 Sonnet) often outperform LRMs (DeepSeek-R1 and the OpenAI o-series) by solving puzzles faster and with fewer tokens. At medium complexity, LRMs gain an edge, leveraging longer chains of thought to navigate harder problems. But at high complexity, both collapse, with accuracy dropping to zero. Strikingly, as problems grow harder, LRMs sometimes reduce reasoning effort, cutting short their “thinking” despite having unused token budgets. This last bit is a big, big deal, as it suggests a fundamental scaling limit in inference-time reasoning. The study underscores that today’s reasoning models don’t possess generalizable problem-solving skills but rely on brittle heuristics that break under compositional depth. While reinforcement learning and self-reflection improve mid-range performance, LRMs remain bounded by structural limits. The authors warn against conflating “thinking tokens” with true reasoning capacity: more text is not equivalent to deeper cognition.
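
On the Synthesia item above, a quick illustration of what a parameter-count scaling law actually looks like. This is a minimal sketch with made-up coefficients (not Synthesia’s unpublished figures): the point is simply that quality tends to follow a power law in model size, so jumping from hundreds of millions to billions of parameters buys a predictable, if diminishing, improvement.

```python
# Minimal sketch of a parameter-count scaling law (illustrative coefficients only,
# not Synthesia's actual figures). Quality is proxied by a loss that follows a
# power law in the number of parameters N: L(N) = a * N**(-alpha) + c.
a, alpha, c = 50.0, 0.12, 1.0  # hypothetical constants


def loss(n_params: float) -> float:
    """Hypothetical loss for a model with n_params parameters."""
    return a * n_params ** (-alpha) + c


# Roughly Express-1 scale (hundreds of millions) vs. Express-2 scale (billions).
for n in (3e8, 3e9):
    print(f"{n:.0e} params -> loss {loss(n):.3f}")
```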
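
And for the Apple study, a sense of why Tower of Hanoi is such a useful testbed: the optimal solution for n disks takes exactly 2^n - 1 moves, so each extra disk doubles the required reasoning depth. The reference solver below is my own minimal sketch, not the paper’s evaluation harness.

```python
def hanoi(n: int, src: str = "A", aux: str = "B", dst: str = "C") -> list[tuple[str, str]]:
    """Return the optimal move sequence for an n-disk Tower of Hanoi."""
    if n == 0:
        return []
    # Move n-1 disks out of the way, move the largest disk, then restack on top of it.
    return hanoi(n - 1, src, dst, aux) + [(src, dst)] + hanoi(n - 1, aux, src, dst)


# Optimal length is 2**n - 1, so difficulty doubles with every disk -- exactly the
# kind of precise complexity control the study exploits before models collapse.
for n in range(1, 11):
    assert len(hanoi(n)) == 2**n - 1
print(len(hanoi(10)))  # 1023 moves
```

At 15 disks that is already 32,767 moves, so it is easy to see how a fixed inference budget gets swamped long before a model’s “thinking” can play out.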
Portfolio Flex
- Lakera announced a $300 million acquisition by security leader Check Point. Fly has been lucky enough to sit alongside the team since day one and couldn’t be more proud of them!
- Blue Morpho shared a sneak peek of their knowledge-graph-powered solution for finance.
- Agemo introduced CodeWords: the fastest way to go from idea to automation, simply by chatting with AI. 


