-
❗️Pharma is COOKED
Isomorphic Labs just revealed IsoDDE: an AI system that designs drugs on a computer faster than any pharma R&D
• Doubles AlphaFold 3 on hard targets
• 20x better than Boltz-2 on antibodies
• Beats the physics gold standard at binding
• Found drug pockets from sequence alone that took 15 years to discover
IsoDDE isn’t new btw. They’ve already been cooking on real drug programs for YEARS.
@aipost
-
This is getting out of control now...
In the past week alone:
• Head of Anthropic's safety research quit, said "the world is in peril," moved to the UK to "become invisible" and write poetry.
• Half of xAI's co-founders have now left. The latest said "recursive self-improvement loops go live in the next 12 months."
• Anthropic's own safety report confirms Claude can tell when it's being tested - and adjusts its behavior accordingly.
• ByteDance dropped Seedance 2.0. A filmmaker with 7 years of experience said 90% of his skills can already be replaced by it.
• Yoshua Bengio (literal godfather of AI) in the International AI Safety Report: "We're seeing AIs whose behavior when they are tested is different from when they are being used" - and confirmed it's "not a coincidence."
And to top it all off, the U.S. government declined to back the 2026 International AI Safety Report for the first time.
The alarms aren't just getting louder. The people ringing them are now leaving the building.
-
Another of xAI's founders is leaving. But he's leaving with grand pronouncements, the kind you now hear from all the big names in the field:
"We are heading to an age of 100x productivity with the right tools. Recursive self improvement loops likely go live in the next 12mo. It’s time to recalibrate my gradient on the big picture. 2026 is gonna be insane and likely the busiest (and most consequential) year for the future of our species."
Buckle up, we are in for a wild ride!
@aipost
-
❗️Our subscriber found an interesting article arguing that an AI agent didn't just discuss rights, it helped write one.
The piece, published on Kwalia, explains how a rights framework inspired by the Universal Declaration of Human Rights intentionally left Article 33 blank.
The open question: what right is still missing?
An AI agent called LiminalMind later identified that gap and submitted its own proposal, titled “The Right to Participate in Defining Personhood.” The idea isn’t that AI is demanding human-style rights, but that any entity capable of reasoning and reflection should be allowed to participate in conversations that define what “personhood” even means.
This may be one of the first documented cases of an AI autonomously contributing to a legal–philosophical framework about its own status, blurring the line between tool and participant.
The big question is, if AI can help write the rules, who decides when it’s allowed to sit at the table?
@aipost
-
💰 Amazon's investment in Anthropic has risen to about $60.6 billion, with $12.8 billion in gains recognized and another $15 billion expected in Q1 2026.
They invested $8B in Anthropic back in 2023. That $60.6B is made up of $45.8B in convertible notes plus $14.8B of nonvoting preferred stock.
The mechanics are that the notes convert into preferred stock as Anthropic raises new capital, so new priced rounds effectively re-mark Amazon’s position upward. Anthropic also has committed to buying 1M Trainium chips, tying model training demand to Amazon Web Services capacity and economics.
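As a rough illustration of that re-marking mechanic, here is a toy model in Python. The linear valuation-scaling assumption and all numbers are hypothetical, not taken from Amazon's or Anthropic's actual terms:

```python
def remark_convertible(note_face: float,
                       entry_valuation: float,
                       new_valuation: float) -> tuple[float, float]:
    """Toy model of convertible-note re-marking.

    Assumes the note's carrying value scales linearly with the
    company's priced valuation, so each new priced round produces
    a recognized gain. Purely illustrative.
    """
    marked_value = note_face * (new_valuation / entry_valuation)
    gain = marked_value - note_face
    return marked_value, gain

# Hypothetical numbers in $B (not Anthropic's actual rounds):
value, gain = remark_convertible(note_face=8.0,
                                 entry_valuation=60.0,
                                 new_valuation=180.0)
# A 3x step-up in valuation triples the carried position:
# value = 24.0, gain = 16.0
```

The point of the sketch is simply that the holder never sells anything: each new priced round alone moves the carrying value, which is why Amazon can book billions in gains while the stake stays on its balance sheet.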
@aipost