Together AI Opens Evaluations to OpenAI, Anthropic, Google Models

2026/02/03 04:01

Lawrence Jengar Feb 02, 2026 20:01

Together Evaluations now benchmarks proprietary AI models from OpenAI, Anthropic, and Google against open-source alternatives, claiming 10x cost savings.

Together AI has expanded its Evaluations platform to support direct benchmarking against proprietary models from OpenAI, Anthropic, and Google—a move that could reshape how enterprises make AI infrastructure decisions.

The update, announced February 3, enables side-by-side comparisons between open-source models and closed-source alternatives including GPT-5, Claude Sonnet 4.5, and Gemini 2.5 Pro. For AI-focused crypto projects and decentralized compute networks, this creates a standardized framework for proving cost-efficiency claims.

What's Actually New

Together Evaluations now accepts models from three major providers as both evaluation targets and judges:

OpenAI: GPT-5, GPT-5.2
Anthropic: Claude Sonnet 4.5, Claude Haiku 4.5, Claude Opus 4.5
Google: Gemini 2.5 Pro, Gemini 2.5 Flash

The platform also supports any OpenAI Chat Completions-compatible URL, meaning self-hosted and decentralized inference endpoints can plug directly into the benchmarking system.
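In practice, "OpenAI Chat Completions-compatible" means the endpoint accepts the standard request body at a `/chat/completions` path. The sketch below builds that request shape without sending it; the base URL is a hypothetical self-hosted endpoint, not anything named in the article.

```python
import json

# Hypothetical self-hosted or decentralized endpoint; any URL implementing
# the OpenAI Chat Completions request/response shape plugs in the same way.
BASE_URL = "https://inference.example.com/v1"

def chat_completions_request(model: str, prompt: str) -> dict:
    """Build the standard Chat Completions request body.

    This is the wire format such an endpoint expects at
    POST {BASE_URL}/chat/completions; actually sending it is omitted here.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = chat_completions_request("my-self-hosted-model", "Summarize this contract.")
print(json.dumps(payload, indent=2))
```

Because the evaluation target is identified only by a URL and this request shape, a decentralized inference network can be benchmarked with no Together-specific integration work.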

The Cost Argument Gets Data

Together AI published accompanying research showing fine-tuned open-source judges (GPT-OSS 120B, Qwen3 235B) outperforming GPT-5.2 as evaluators—62.63% accuracy versus 61.62%—while reportedly running at 10x lower cost and 15x higher speed.
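Combining the two reported figures gives a rough cost-per-correct-judgment comparison. The calculation below uses only the article's numbers (62.63% vs. 61.62% accuracy, 10x lower cost expressed as 0.1 on a relative scale); it is back-of-envelope arithmetic, not part of the published research.

```python
# Figures from the article: judge accuracy and relative cost
# (open-source judge reportedly 10x cheaper, i.e. 0.1 on a relative scale).
judges = {
    "open-source judge": {"accuracy": 0.6263, "relative_cost": 0.1},
    "GPT-5.2":           {"accuracy": 0.6162, "relative_cost": 1.0},
}

def cost_per_correct(j: dict) -> float:
    # Expected relative spend to obtain one correct judgment.
    return j["relative_cost"] / j["accuracy"]

ratio = cost_per_correct(judges["GPT-5.2"]) / cost_per_correct(judges["open-source judge"])
print(f"GPT-5.2 costs ~{ratio:.1f}x more per correct judgment")
```

Since the open judge is both slightly more accurate and 10x cheaper, the per-correct-judgment gap comes out marginally wider than 10x.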

That's a specific, testable claim. For decentralized AI networks competing on inference pricing, having a neutral benchmarking platform that accepts custom endpoints could prove valuable for customer acquisition.

The company, founded in 2020 and known for research innovations like FlashAttention-3, has positioned itself as infrastructure-agnostic. Its platform already offers access to over 200 open-source models with claimed 4x faster inference and 11x lower cost compared to GPT-4o, according to December 2024 benchmarks.

Why This Matters for Crypto AI

Several blockchain-based AI projects—from decentralized GPU marketplaces to inference networks—have struggled to prove their cost advantages aren't just marketing. A third-party evaluation framework that accepts any compatible endpoint changes that dynamic.

The Evaluations API runs on Together's Batch API at roughly 50% lower cost than real-time inference, making large-scale model comparisons economically viable for smaller teams.

Together AI remains a private company with no associated token. But its tooling increasingly touches the infrastructure layer where crypto AI projects compete—and now those projects have a standardized way to benchmark against the incumbents they're trying to displace.

Image source: Shutterstock
  • together ai
  • ai infrastructure
  • llm benchmarking
  • open source ai
  • enterprise ai