(Re)Share | #24 - Don’t f**k with MCATs
Rare Earth Metals | AI Regulation | Multimodal Medical AI | Asteroid Collisions
I hope everyone enjoyed the last few weeks of official summer. I got out to Montauk to see what all the fuss was about and found it generally disappointing unless you really like lighthouses, in which case it was exceptional. Hoping that NYC can redeem itself with Climate Week kicking off tomorrow. Looking forward to seeing some of you over the coming days.
Stuff worth sharing
This really magma day - EV battery supply chains are a common topic on (Re)Share, but the majority of coverage has focused on the geopolitical issues of mining in low-income nations or on novel approaches to alternative extraction methods. However, it's now theorized that the largest lithium deposit on the planet might be snugly situated in the US (Nevada / Utah). At an estimated 20-40 million tonnes, double the next largest deposit, the source would have a massive impact on global supply chains - as big as fracking had with natural gas. If you're interested in learning how the lakebed deposit came to pass, check out this explanation in ChemistryWorld.
Pay it forward - Advanced market commitment company Frontier announced its latest round of carbon removal purchases. As a reminder, Frontier serves as an investor, of sorts, for large corporates looking to reduce their carbon footprint. But rather than taking equity, it deploys cash into forward-looking commercial agreements. The latest round of projects includes CO₂ extraction from seawater, direct air capture, and mineral-based carbon absorption in rivers (enhanced weathering).
GAAIAP - The IRS is giving its auditors a leg up. America's most hated department will start deploying AI systems to help identify and process some of its most complex and questionable tax returns. In unrelated news, does anyone know how to apply for Cayman Islands citizenship? Asking for a friend.
Pick on someone your pwn size - Activision is partnering with AI startup Modulate to deploy its ToxMod vocal moderation offering. The AI listens to chat from live gameplay and flags commentary that is violent or abusive. Fans of first-person shooters will know that these games have a certain…culture…where the parlance is a bit more aggressive than normal. Navigating this is a pretty tricky technical challenge, but ToxMod is unique for its ability to analyze tone and intent in speech, rather than relying on a more basic keyword search. For example, gauging other players' reactions can help determine whether aggressive phrasing is genuine hostility or accepted ribbing. ToxMod can also predict the age of certain players based on voice and adjust its moderation threshold accordingly. As someone with voice cracks well into my twenties, I feel this might be an overreach.
OK Boomer - The Senate held its third hearing on regulating AI, this one with testimony from Microsoft President Brad Smith, NVIDIA Chief Scientist William Dally, and Boston University's Woodrow Hartzog. I found this to be a particularly forgettable hearing compared to the past few rounds. There was very little said regarding novel approaches or views that deviated from the typical talking points. The one notable exception was a lightly heated debate between Smith and Senator Hawley on Microsoft's age ban with Bing chat (1:01:54), which highlighted the very real tradeoff between expanding access - in this case to education - and the risks required to get there. If that sounds boring and you would instead prefer some cringe-worthy performance, look no further than Senator Kennedy at 55:31.
Don’t f**k with MCATs - The Google Brain team released their research on multimodal medical AI, which leverages language models to overcome de novo imaging challenges. Historically, training medical ML models required expensive, expert-curated and labeled datasets, which is largely intractable for new or rare disease areas. This approach, dubbed ELIXR, instead uses standard images paired with free-text radiology reports to achieve strong predictive performance.
Crushing it - What is the coolest way to avert an asteroid careening towards Earth? Obviously the correct answer is complete annihilation via satellite-mounted space laser. But until such a time as Elon figures that out, NASA has been experimenting with route diversion through collision. Last year its DART mission crashed a spacecraft into the asteroid Dimorphos at 14,000 miles per hour to see if it could alter the asteroid's pathway. The impact fragments and the resulting trajectory are still being studied, but it does seem that the asteroid's orbit was altered by a notable margin.
All that and a bag of - A not-too-deep dive into the cost advantages that Google should enjoy with the unveiling of its latest AI chip, the TPU v5e. The article goes into a good amount of depth on the performance, capex, and opex dimensions of the chip vs. the de facto standard, NVIDIA's H100. Tl;dr: for models under 200 billion parameters (GPT-3 level), Google is a better buy. Considering the tear that NVIDIA's been on lately, this is big news.
The company you keep - The UK's Frontier AI Taskforce shared a first report on the progress made and plans going forward. Policy details are light, but the advisory panel the Taskforce has assembled is truly next level - AI royalty Yoshua Bengio, storied UK VC Alex van Someren, EF co-founder / former boss Matt Clifford, etc. Very promising outlook in my opinion.
Full disclosure - Fly has multiple co-investments with Taskforce lead Ian Hogarth.
Can’t sit here - Our friends at Air Street Capital posted a piece arguing that China should not be invited to the UK's upcoming AI Safety Summit in November. While I understand the position, I broadly disagree with the argument. Diplomacy is needed when nations disagree, and while we may not reach an agreement in the session, I would argue that the UK stands to learn much more from understanding China's plans vs. (shots fired) Canada's. Of course, I'm nowhere near well-read enough to have an informed view on this, but if you're still reading this newsletter you've already figured that out.
Portfolio Flex
Wayve released LINGO-1, a natural language interface for investigating the decisions of their self-driving models. The most-cited risk factor for an AI system is the black-box problem, and that is particularly acute when human safety is involved. With LINGO-1, the Wayve team can review driving footage and ask a question about a specific point in time (e.g. why did you turn left there?). This is a world first in self-driving and could usher in an iPhone moment for robotics at large. (Featured in MIT Tech Review as well.)