Just five months after Anthropic debuted its ChatGPT rival, Claude, the company is back with an updated version that promises longer answers, more detailed reasoning, fewer hallucinations and generally better performance. It also now scores above the 90th percentile of graduate school applicants on the GRE reading and writing exams.
The updated version, Claude 2, is available today for users in the US and the UK. It can now handle as many as 100,000 tokens (around 75,000 words, or a few hundred pages of documents that users can have Claude digest and analyze), up significantly from the previous version’s 9,000-token limit. In AI, tokens are the small pieces an input prompt is broken into so the model can process it more readily, hence Claude's ability to "digest" user data.
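For a rough sense of what that window means in practice, the ratio above (100,000 tokens to roughly 75,000 words) works out to about 0.75 words per token. The Python sketch below uses that back-of-the-envelope ratio to guess whether a document fits; it is only an illustration, since real tokenizers vary by model and by the text itself.

```python
# Rough sketch: estimate whether a document fits in Claude 2's 100,000-token
# window, using the article's ratio of ~75,000 words per 100,000 tokens.
# Real tokenizers vary by model and text, so treat this as a ballpark only.

WORDS_PER_TOKEN = 75_000 / 100_000   # ~0.75 words per token
CONTEXT_LIMIT_TOKENS = 100_000

def estimate_tokens(text: str) -> int:
    """Very rough token estimate based on whitespace-separated word count."""
    word_count = len(text.split())
    return int(word_count / WORDS_PER_TOKEN)

def fits_in_context(text: str, limit: int = CONTEXT_LIMIT_TOKENS) -> bool:
    """True if the rough estimate suggests the text fits in the window."""
    return estimate_tokens(text) <= limit

if __name__ == "__main__":
    sample = "word " * 75_000          # ~75,000 words of filler text
    print(estimate_tokens(sample))     # ~100,000 tokens by this estimate
    print(fits_in_context(sample))     # True, right at the limit
```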
This increased capacity will also translate into longer, more nuanced responses. Claude 2 can even generate short stories running “up to a few thousand tokens,” the company announced. Its coding capabilities have also improved, rising to a score of 71.2 percent on the Codex HumanEval benchmark, up from the previous version’s 56 percent.
Claude’s “Constitutional AI” system is guided by 10 secret “foundational” principles of fairness and autonomy. Extensive red-team testing since the release of the first version has tempered Claude 2 into a more emotionally stable, harder-to-fool AI; per the company’s announcement, it is “2x better at giving harmless responses compared to Claude 1.3.” If you’re already subscribed to the Claude 1.3 API, good news: you’ll be automatically rolled over to Claude 2 at no extra charge.
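For developers already using the API, the upgrade largely comes down to which model identifier a request names. The snippet below is a minimal, hypothetical sketch assuming the official anthropic Python SDK’s legacy text-completions interface and an ANTHROPIC_API_KEY set in the environment; exact parameter names and model strings can differ between SDK versions, so treat it as an illustration rather than Anthropic’s documented quickstart.

```python
# Minimal sketch (not Anthropic's official quickstart): send one prompt to
# Claude 2 via the anthropic Python SDK's legacy completions interface.
# Assumes ANTHROPIC_API_KEY is set in the environment.
import anthropic

client = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY automatically

completion = client.completions.create(
    model="claude-2",             # previously "claude-1.3"; name is illustrative
    max_tokens_to_sample=300,     # cap on the length of the reply
    prompt=(
        f"{anthropic.HUMAN_PROMPT} Summarize the attached report "
        f"in three bullet points.{anthropic.AI_PROMPT}"
    ),
)

print(completion.completion)      # the generated text
```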
This article originally appeared on Engadget at https://ift.tt/2lOS7CM via engadget.com