Anthropic’s AI model Claude gets popularity boost after US military feud
Claude, an AI model developed by Anthropic, has seen a significant rise in popularity following its blacklisting by the Pentagon over ethical issues.
Claude surged in popularity after the Pentagon blacklisted the model over concerns about the ethics of artificial intelligence applications. Following the controversy, Claude reached the top of Apple’s free-app chart in the United States, overtaking OpenAI’s ChatGPT, which the Pentagon had chosen to supply AI technologies for classified military networks. Despite its swift climb, Claude failed to displace ChatGPT at the top of the Android and iPhone charts in the UK.
The unexpected surge in demand also caused significant service outages, with Anthropic reporting that the model could not handle the unprecedented level of interest. On one morning, more than 1,400 users reported disruptions, which Anthropic resolved within hours. The spike illustrates the shifting dynamics of the AI space, where controversial decisions can paradoxically drive engagement and notoriety for certain technologies.
The feud with the Pentagon has not hurt Anthropic’s business; if anything, it has amplified Claude’s visibility and appeal amid growing concern about AI ethics in military applications. As consumers and businesses weigh those concerns, models like Claude could gain traction among users seeking alternatives to more mainstream options, marking a notable shift in how the AI app market responds to political tensions.