Anthropic's quest to study the negative effects of AI is under pressure

The Verge's Hayden Field joins Decoder to discuss the politically fraught climate around AI safety.

by Nilay Patel, Editor-in-Chief | Dec 4, 2025, 3:00 PM UTC

Nilay Patel is editor-in-chief of The Verge, host of the Decoder podcast, and co-host of The Vergecast.

Today, I'm talking with Verge senior AI reporter Hayden Field about some of the people responsible for studying AI and deciding in what ways it might… well, ruin the world. Those folks work at Anthropic as part of a group called the societal impacts team, which Hayden just spent time with for a profile she published this week.

The team is just nine people out of more than 2,000 who work at Anthropic. Their only job, as the team members themselves say, is to investigate and publish, quote, "inconvenient truths" about how people are using AI tools, what chatbots might be doing to our mental health, and how all of that might be having broader ripple effects on the labor market, the economy, and even our elections.

That, of course, brings up a whole host of problems. The most important is whether this team can remain independent, or even exist at all, as it publicizes findings about Anthropic's own products that might be unflattering or politically fraught. After all, there's a lot of pressure on the AI industry in general and Anthropic specifically to fall in line with the Trump administration, which put out an executive order in July banning so-called "woke AI."

Verge subscribers, don't forget you get exclusive access to ad-free Decoder wherever you get your podcasts. Head here. Not a subscriber? You can sign up here.

If you've been following the tech industry, the outline of this story will feel familiar. We've seen this most recently with social media companies and the trust and safety teams responsible for doing content moderation. Meta went through countless cycles of this, where it dedicated resources to solving problems created by its own scale and the unpredictable nature of products like Facebook and Instagram.
And then, after a while, it seems like the resources dried up, or Mark Zuckerberg got bored or more interested in MMA or just cozying up to Trump, and the products didn't really change to reflect what the research showed.

We're living through one of those moments right now. The social platforms have slashed investments in election integrity and other forms of content moderation. Meanwhile, Silicon Valley is working closely with the Trump White House to resist meaningful attempts to regulate AI. So, as you'll hear, that's why Hayden was so interested in this team at Anthropic. It's fundamentally unique in the industry right now.

In fact, Anthropic is an outlier because of how amenable CEO Dario Amodei has been to calls for AI regulation, both at the state and federal level. Anthropic is also seen as the most safety-first of the leading AI labs, because it was formed by former research executives at OpenAI who were worried their concerns about AI safety weren't being taken seriously. There are actually quite a few companies formed by former OpenAI people worried about the company, Sam Altman, and AI safety. It's a real theme of the industry, one that Anthropic seems to be taking to the next level.

So I asked Hayden about all of these pressures, and how Anthropic's reputation within the industry might be affecting how the societal impacts team functions — and whether it can really meaningfully study and perhaps even influence AI product development. Or whether, as history suggests, this will just look good on paper until the team quietly goes away. There's a lot here, especially if you're interested in how AI companies think about safety from a cultural, moral, and business perspective.

A quick announcement: We're running a special end-of-the-year mailbag episode of Decoder later this month where we answer your questions about the show: who we should talk to, what topics we should cover in 2026, what you like, what you hate. All of it. Please send your questions to decoder@theverge.com and we'll do our best to feature as many as we can.

If you'd like to read more about what we discussed in this episode, check out these links:

It's their job to keep AI from destroying everything | The Verge
Anthropic details how it measures Claude's wokeness | The Verge
The White House orders tech companies to make AI bigoted again | The Verge
Chaos and lies: Why Sam Altman was booted from OpenAI | The Verge
Anthropic CEO Dario Amodei just made another call for AI regulation | Inc.
How Elon Musk is remaking Grok in his image | NYT
Anthropic tries to defuse White House backlash | Axios
New AI battle: White House vs. Anthropic | Axios
Anthropic CEO says company will pursue gulf state investments after all | Wired

Questions or comments about this episode? Hit us up at decoder@theverge.com.
We really do read every email!

Decoder with Nilay Patel: A podcast from The Verge about big ideas and other problems.