
GPT-5.5, Claude Opus 4.7, Kimi K2.6, or DeepSeek V4: how do you choose?

Test GPT-5.5 first for terminal-style coding agents; test Claude Opus 4.7 first for software repair and hard no-tools reasoning; look to Kimi K2.6 for open-weight deployment; and include DeepSeek-V4-Pro-Max in your validation set for cost-sensitive scenarios [1][18][24]. Do not conflate GPT-5.5 Pro with base GPT-5.5: in the rows where Pro is reported separately, it leads with 90.1% on BrowseComp and 57.2% on Humanity's Last Exam with tools [24].


Putting GPT-5.5, Claude Opus 4.7, Kimi K2.6, and DeepSeek V4 side by side makes it tempting to ask an overly simple question: which one is best? But the public sources do not provide a single unified leaderboard that covers all four under the same evaluation harness. The closest shared comparison covers GPT-5.5, GPT-5.5 Pro, Claude Opus 4.7, and DeepSeek-V4-Pro-Max; the Kimi K2.6 numbers come mainly from separate Kimi release coverage, its model card, and leaderboard sources [1][6][24].

So the more useful question is not which model wins, but which one you should test first for your workload.

One naming note: this article refers to DeepSeek V4 specifically as DeepSeek-V4-Pro-Max, because that is the variant for which the cited sources report benchmark and cost rows [18][24]. Likewise, wherever a source lists GPT-5.5 Pro separately, this article keeps it distinct from base GPT-5.5 rather than blending the scores [24].

The short answer: pick your starting point by task

  • **Terminal-heavy coding agents:** GPT-5.5 posts the highest Terminal-Bench 2.0 score in the shared comparison, at 82.7% [24].
  • **Software-repair benchmarks:** Claude Opus 4.7 scores 64.3% on the cited SWE-Bench Pro row and 87.6% on SWE-Bench Verified, leading the models compared here on both [18][24].
  • **Hard reasoning without tools:** Claude Opus 4.7 leads the GPQA Diamond and no-tools Humanity's Last Exam rows in the shared table [24].
  • **Tool-assisted reasoning and browse/search:** in the rows where GPT-5.5 Pro is reported separately, it leads with 57.2% on Humanity's Last Exam with tools and 90.1% on BrowseComp [24].
  • **Open-weight deployment:** Kimi K2.6 is the clearest open-weight candidate in the cited material, described as a 1T-parameter MoE with 32B active parameters and a 256K context window [1].
  • **Cost-sensitive hosted inference:** DeepSeek-V4-Pro-Max deserves a slot in your test set; LLM Stats lists it with a 1M context window, 80.6% on SWE-Bench Verified, and a cost column of $1.74/$3.48 [18].

Benchmark comparison table

A dash in the table means no score for that model was found in the cited sources; it does not mean a score of zero. Most fields for GPT-5.5, GPT-5.5 Pro, Claude Opus 4.7, and DeepSeek-V4-Pro-Max come from the same shared comparison; the Kimi K2.6 figures come from separate Kimi-related sources [1][6][24].

| Benchmark | GPT-5.5 | GPT-5.5 Pro | Claude Opus 4.7 | Kimi K2.6 | DeepSeek-V4-Pro-Max |
| --- | --- | --- | --- | --- | --- |
| GPQA Diamond | 93.6% [24] | — | 94.2% [24] | ~91% [28] | 90.1% [24] |
| Humanity's Last Exam, no tools | 41.4% [24] | 43.1% [24] | 46.9% [24] | — | 37.7% [24] |
| Humanity's Last Exam, with tools | 52.2% [24] | 57.2% [24] | 54.7% [24] | 54.0% [1] | 48.2% [24] |
| Terminal-Bench 2.0 | 82.7% [24] | — | 69.4% [24] | 66.7% [6] | 67.9% [24] |
| SWE-Bench Pro | 58.6% [24] | — | 64.3% [24] | 58.6% [6] | 55.4% [24] |
| BrowseComp | 84.4% [24] | 90.1% [24] | 79.3% [24] | 83.2% [1] | 83.4% [24] |
| MCP Atlas / MCPAtlas Public | 75.3% [24] | — | 79.1% [24] | — | 73.6% [24] |
| SWE-Bench Verified | — | — | 87.6% [18] | 80.2% [6] | 80.6% [18] |

Which model should you test first for your scenario?

| Priority | Test first | Why |
| --- | --- | --- |
| Terminal-style coding agents | GPT-5.5 | Highest Terminal-Bench 2.0 score in the shared comparison, at 82.7% [24] |
| Software-engineering repair | Claude Opus 4.7 | Leads the models compared here on the cited SWE-Bench Pro and SWE-Bench Verified rows [18][24] |
| Hard reasoning without tools | Claude Opus 4.7 | Leads the GPQA Diamond and no-tools Humanity's Last Exam rows in the shared comparison [24] |
| Tool-assisted hard reasoning or browse/search | GPT-5.5 Pro | Leads Humanity's Last Exam with tools and BrowseComp in the rows where Pro is reported separately [24] |
| Open-weight deployment | Kimi K2.6 | Described as an open-weight 1T-parameter MoE model, and its Hugging Face model card reports strong coding-benchmark rows [1][6] |
| Cost-sensitive hosted inference | DeepSeek-V4-Pro-Max | LLM Stats lists 1M context, 80.6% on SWE-Bench Verified, and a lower cost column than the Claude Opus 4.7 row on the same leaderboard [18] |
| Long-context requirements | GPT-5.5, Claude Opus 4.7, or DeepSeek-V4-Pro-Max | Cited sources list GPT-5.5, Claude Opus 4.7, and DeepSeek-V4-Pro-Max at 1M context, versus roughly 256K to 262K for Kimi K2.6 [1][11][16][18][27] |
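
The long-context row is easy to sanity-check against your own material before you shortlist anything. The sketch below is an assumption-laden illustration, not a real tokenizer: it estimates tokens at roughly 4 characters each, walks a hypothetical directory, and compares the total against a ~256K window and a 1M window while leaving some headroom for prompts and model output.

```python
from pathlib import Path

CHARS_PER_TOKEN = 4  # rough heuristic, not a real tokenizer

def estimate_tokens(root: str, suffixes=(".py", ".ts", ".md")) -> int:
    """Very rough token estimate for text files under a directory."""
    total_chars = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            total_chars += len(path.read_text(errors="ignore"))
    return total_chars // CHARS_PER_TOKEN

tokens = estimate_tokens("./my_repo")  # hypothetical path: point at your own repo or docs
for window, label in [(256_000, "~256K window (Kimi K2.6-class)"),
                      (1_000_000, "1M window (GPT-5.5 / Opus 4.7 / V4-Pro-Max-class)")]:
    # Leave ~20% headroom for system prompts, tool output, and the model's response.
    verdict = "fits" if tokens < window * 0.8 else "does not comfortably fit"
    print(f"~{tokens:,} estimated tokens: {verdict} in a {label}")
```

If your working set already overflows 256K with headroom, the 1M-context rows in the table above matter more than small benchmark deltas.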

Model-by-model breakdown

GPT-5.5

OpenAI describes GPT-5.5 as built for complex tasks, including coding, research, and data analysis [38]. In VentureBeat's shared comparison, GPT-5.5 scores 82.7% on Terminal-Bench 2.0, ahead of Claude Opus 4.7 at 69.4% and DeepSeek-V4-Pro-Max at 67.9% [24]. The same table lists GPT-5.5 at 93.6% on GPQA Diamond, 58.6% on SWE-Bench Pro, and 84.4% on BrowseComp [24].

The easiest trap here is GPT-5.5 Pro. In the shared table, GPT-5.5 Pro reaches 90.1% on BrowseComp and 57.2% on the with-tools Humanity's Last Exam row, but those numbers should not be mixed with base GPT-5.5 when comparing cost, latency, or model settings [24].

From a procurement and budgeting angle, BenchLM lists GPT-5.5 with a 1M-token context window, and a separate pricing report lists GPT-5.5 at $5 per million input tokens and $30 per million output tokens [27][36]. Treat these prices as budgeting signals and verify live provider pricing before you actually buy.

Claude Opus 4.7

Claude Opus 4.7 gives the strongest software-repair signal in this group. LLM Stats lists it at 87.6% on SWE-Bench Verified, and the shared comparison puts it at 64.3% on SWE-Bench Pro [18][24]. It also leads the shared table on GPQA Diamond at 94.2% and on the no-tools Humanity's Last Exam row at 46.9%, and reaches 79.1% on the MCP Atlas row [24].

LLM Stats also reports a 1M-token context window for Claude Opus 4.7, priced at $5/$25 per million tokens [16]. Comparability still warrants caution, though: Anthropic notes that some benchmark results use internal implementations or updated harness parameters, so some scores cannot be compared directly with public leaderboard numbers [17].

Kimi K2.6

Kimi K2.6 is the standout open-weight candidate in the cited material. Release coverage describes it as an open-weight 1T-parameter MoE model with 32B active parameters, 384 experts, native multimodality, INT4 quantization, and a 256K context window [1]. Its Hugging Face model card reports 80.2% on SWE-Bench Verified, 58.6% on SWE-Bench Pro, 66.7% on Terminal-Bench 2.0, and 89.6 on LiveCodeBench v6 [6].

The same release coverage lists Kimi K2.6 at 54.0 on Humanity's Last Exam with tools and 83.2 on BrowseComp [1]. LLM Stats lists Kimi K2.6 with a 262K context window, a $0.95/$4.00 price column, and an Open Source label [11]. The limitation is that these Kimi figures do not come from the same shared table as GPT-5.5, Claude Opus 4.7, and DeepSeek-V4-Pro-Max, so narrow gaps should be read as testing leads rather than final verdicts [1][6][24].

DeepSeek-V4-Pro-Max

DeepSeek-V4-Pro-Max reads more like a price/performance candidate than an all-around champion in the public data. LLM Stats lists it at 1.6T scale, 1M context, 80.6% on SWE-Bench Verified, and a cost column of $1.74/$3.48 [18]. In the shared comparison it scores 90.1% on GPQA Diamond, 37.7% on Humanity's Last Exam without tools, 48.2% with tools, 67.9% on Terminal-Bench 2.0, 55.4% on SWE-Bench Pro, 83.4% on BrowseComp, and 73.6% on MCP Atlas [24].

Those numbers make DeepSeek-V4-Pro-Max a good fit for cost-sensitive validation shortlists. But the same shared table shows GPT-5.5, GPT-5.5 Pro, or Claude Opus 4.7 still leading on most reported benchmark rows, so validate on your own tasks before swapping out a higher-priced model in production [24].

Context and pricing: treat them as signals, not quotes

Context windows and prices are not always reported by the same source or the same provider. Treat the information below as pre-procurement screening signals, not final contract pricing.

| Model | Context and price signals in the cited sources | Practical read |
| --- | --- | --- |
| GPT-5.5 | BenchLM lists 1M context; one pricing report lists $5 per million input tokens and $30 per million output tokens [27][36] | High-end hosted option; verify live pricing before budgeting. |
| Claude Opus 4.7 | LLM Stats reports 1M context and $5/$25 per million tokens [16] | Premium option for coding, reasoning, and long-context work. |
| Kimi K2.6 | Release coverage lists 256K context; LLM Stats lists 262K context and a $0.95/$4.00 price column [1][11] | Strong open-weight candidate; hosted pricing may vary by provider. |
| DeepSeek-V4-Pro-Max | LLM Stats lists 1M context, 1.6T scale, 80.6% on SWE-Bench Verified, and a $1.74/$3.48 cost column [18] | Strong value candidate if your own tasks meet the quality bar. |
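
To turn those per-million-token figures into a budget signal, simple arithmetic is enough. The sketch below hard-codes only the prices cited above and an assumed example workload; it is not a quote, and live provider pricing should still be checked.

```python
# Rough monthly cost estimate from per-million-token list prices.
# Prices are the figures cited in this article ([11][16][18][36]);
# the monthly token volumes are a made-up example workload.
PRICES_PER_M = {            # (input $/M tokens, output $/M tokens)
    "GPT-5.5": (5.00, 30.00),
    "Claude Opus 4.7": (5.00, 25.00),
    "Kimi K2.6 (hosted)": (0.95, 4.00),
    "DeepSeek-V4-Pro-Max": (1.74, 3.48),
}

MONTHLY_INPUT_TOKENS = 200_000_000   # assumed: 200M input tokens per month
MONTHLY_OUTPUT_TOKENS = 40_000_000   # assumed: 40M output tokens per month

for model, (in_price, out_price) in PRICES_PER_M.items():
    cost = (MONTHLY_INPUT_TOKENS / 1_000_000) * in_price \
         + (MONTHLY_OUTPUT_TOKENS / 1_000_000) * out_price
    print(f"{model:<22} ~${cost:,.0f}/month")
```

Under those assumed volumes, the cited list prices work out to roughly $2,200 per month for GPT-5.5, $2,000 for Claude Opus 4.7, about $490 for DeepSeek-V4-Pro-Max, and about $350 for hosted Kimi K2.6; the spread, not the exact figures, is the useful signal.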

Why do leaderboards contradict each other?

The reason is not mysterious: different benchmarks measure different capabilities. GPQA Diamond and Humanity's Last Exam lean toward hard reasoning; Terminal-Bench 2.0 and the SWE-Bench family lean toward coding and agentic software engineering; BrowseComp measures browse-and-search performance in the shared comparison [24]. When a model leads on one row and trails on another, the cause is usually different tasks, tool access, and evaluation harnesses.

Even the same-named benchmark can diverge across implementations. LLM Stats lists Claude Opus 4.7 at 87.6% on SWE-Bench Verified, while LMCouncil lists 83.5% ± 1.7 under its own setup [18][30]. Anthropic likewise notes that some results use internal implementations or updated harness parameters, so they may not be directly comparable with public leaderboards [17].

So a one- or two-point gap should not decide a production choice on its own. Public benchmarks are best used to shorten the candidate list; the final call should come from your own evaluation.
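
One way to see why small gaps are fragile is to put a rough confidence interval around a reported pass rate. The sketch below assumes a benchmark of about 500 independent tasks (roughly the size of SWE-Bench Verified) and uses a normal approximation; both are simplifications, not a claim about how these leaderboards compute error bars.

```python
import math

def pass_rate_ci(p: float, n_tasks: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation 95% confidence interval for a benchmark pass rate."""
    se = math.sqrt(p * (1 - p) / n_tasks)
    return p - z * se, p + z * se

# Assumed: ~500 tasks, roughly the size of SWE-Bench Verified.
for model, score in [("Claude Opus 4.7", 0.876),
                     ("DeepSeek-V4-Pro-Max", 0.806),
                     ("Kimi K2.6", 0.802)]:
    lo, hi = pass_rate_ci(score, 500)
    print(f"{model:<22} {score:.1%}  95% CI ~ {lo:.1%} to {hi:.1%}")
```

Under that approximation the interval is roughly plus or minus 3 percentage points, so the 0.4-point gap between Kimi K2.6 and DeepSeek-V4-Pro-Max on SWE-Bench Verified sits well inside the noise, while the roughly 7-point gap to Claude Opus 4.7 is more likely to reflect a real difference.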

How to test before you actually go to production

Before committing to one model, run your top two or three candidates through a round of your real tasks (a minimal harness sketch follows the list below).

  1. **Use real prompts, files, and repositories.** Public benchmarks rarely reproduce your code structure, documentation, business rules, and user behavior.
  2. **Match the real tool environment.** Coding-agent performance depends on terminal access, browsing, retrieval, repository context, and internal APIs.
  3. **Measure cost and latency under identical settings.** Options like Pro modes and higher reasoning effort change quality, token usage, and response time.
  4. **Manually inspect failure cases.** For coding tasks in particular, check test results, diff quality, maintainability, security regressions, and fabricated dependencies.
  5. **Include at least one low-cost challenger.** If open weights or inference cost matter, both Kimi K2.6 and DeepSeek-V4-Pro-Max deserve a slot in the test set [1][18].
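
A minimal harness covers most of the list above. The sketch below is illustrative only: `call_model` is a stub standing in for whatever client, gateway, or agent framework you actually use, and the task prompts and pass checks are invented placeholders to be replaced with your own acceptance tests.

```python
import time

# Hypothetical task set: replace prompts and pass checks with your own acceptance tests.
TASKS = [
    {"id": "fix-auth-bug",   "prompt": "Fix the login bug described in the attached issue ...",
     "passes": lambda out: "def login" in out},
    {"id": "summarize-spec", "prompt": "Summarize the attached API spec ...",
     "passes": lambda out: len(out) > 200},
]
CANDIDATES = ["gpt-5.5", "claude-opus-4.7", "kimi-k2.6", "deepseek-v4"]

def call_model(model: str, prompt: str) -> tuple[str, int, int]:
    """Stub: swap in your real client or agent framework here.
    Should return (output_text, input_tokens, output_tokens)."""
    return f"[dummy output from {model}]", len(prompt) // 4, 64

results = []
for model in CANDIDATES:
    for task in TASKS:
        start = time.perf_counter()
        output, tok_in, tok_out = call_model(model, task["prompt"])
        results.append({
            "model": model,
            "task": task["id"],
            "passed": bool(task["passes"](output)),
            "latency_s": time.perf_counter() - start,
            "tokens_in": tok_in,
            "tokens_out": tok_out,
        })

for model in CANDIDATES:
    rows = [r for r in results if r["model"] == model]
    pass_rate = sum(r["passed"] for r in rows) / len(rows)
    lat = sorted(r["latency_s"] for r in rows)
    print(f"{model:<18} pass={pass_rate:.0%}  median_latency={lat[len(lat) // 2]:.2f}s")
```

Running the same script against each candidate with identical settings gives you pass rate and latency on your own tasks, which is the comparison the public tables cannot do for you.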

Bottom line

If you want to narrow the field starting from high-end closed models, the most direct pairing is to test GPT-5.5 and Claude Opus 4.7 side by side: GPT-5.5 has the strongest Terminal-Bench 2.0 result in the cited data, while Claude Opus 4.7 is strongest on the cited SWE-Bench Pro and SWE-Bench Verified rows [18][24].

If you need open weights, look at Kimi K2.6 first [1][6]. If budget is a hard constraint, put DeepSeek-V4-Pro-Max on the shortlist, but validate quality, cost, and latency on your own workload before treating it as a drop-in replacement for a premium model [18][24].


Key takeaways

  • Test GPT-5.5 first for terminal-style coding agents; test Claude Opus 4.7 first for software repair and hard no-tools reasoning; look to Kimi K2.6 for open-weight deployment; and include DeepSeek-V4-Pro-Max in validation for cost-sensitive scenarios [1][18][24].
  • Do not conflate GPT-5.5 Pro with base GPT-5.5: in the rows where Pro is reported separately, it leads with 90.1% on BrowseComp and 57.2% on Humanity's Last Exam with tools [24].
  • Kimi K2.6 is described as an open-weight 1T-parameter MoE with 32B active parameters; LLM Stats lists DeepSeek-V4-Pro-Max at 1M context with a $1.74/$3.48 cost column [1][18].



Research conversation

Research question

Search for the benchmarks of GPT-5.5, Claude Opus 4.7, Kimi K2.6, DeepSeek V4. Compare them to help me understand more about them.

Studio Global AI (40 sources)

Cited answer

I found public benchmark data for all four, but not one fully independent, apples-to-apples leaderboard covering GPT-5.5, Claude Opus 4.7, Kimi K2.6, and DeepSeek V4 under the same harness. The practical read is: GPT-5.5 looks strongest for broad agentic coding/computer-use tasks, Claude Opus 4.7 is extremely strong on coding and hard reasoning, Kimi K2.6 is the strongest open-weight option in this group, and DeepSeek V4 looks like the best price/performance play if its reported results hold up across independent testing [4][5][6][13].

Key findings

  • GPT-5.5’s official benchmarks emphasize agentic coding and computer-use: 82.7% on Terminal-Bench 2.0, 58.6% on SWE-Bench Pro, 78.7% on OSWorld-Verified, 84.4% on BrowseComp, and 81.8% on CyberGym [4].

  • GPT-5.5 Pro appears stronger on some harder research/search/math tasks, including 90.1% on BrowseComp, 52.4% on FrontierMath Tier 1–3, and 39.6% on FrontierMath Tier 4 [4].

  • Claude Opus 4.7 is positioned by Anthropic as a premium hybrid reasoning model for coding and agents, with a 1M-token context window and API pricing of $5 per million input tokens and $25 per million output tokens [2]. Third-party benchmark reporting lists Claude Opus 4.7 at 87.6% on SWE-bench Verified and 94.2% on GPQA Diamond [5].

  • Kimi K2.6 is an open-weight multimodal MoE model with 1T total parameters, 32B active parameters, and a 256K-token context window [13]. Its model card reports 80.2% on SWE-Bench Verified, 58.6% on SWE-Bench Pro, 66.7% on Terminal-Bench 2.0, 90.5% on GPQA Diamond, and 54.0% on HLE-Full with tools [13].

  • DeepSeek V4-Pro-Max is reported as a 1.6T-parameter open model with much lower API pricing than GPT-5.5 and Claude Opus 4.7 [6]. Reported comparison data puts DeepSeek V4-Pro-Max at 90.1% on GPQA Diamond, 37.7% on HLE without tools, 48.2% on HLE with tools, and 67.9% on Terminal-Bench 2.0 [6].

Comparison

| Model | Best fit | Notable reported benchmarks | Main caveat |
| --- | --- | --- | --- |
| GPT-5.5 | Best general pick for agentic coding, computer use, research workflows, and tool-heavy tasks | Terminal-Bench 2.0: 82.7%; SWE-Bench Pro: 58.6%; OSWorld-Verified: 78.7%; BrowseComp: 84.4%; CyberGym: 81.8% [4] | OpenAI's strongest numbers include internal or vendor-run evals, so independent confirmation matters [4]. |
| GPT-5.5 Pro | Harder reasoning/search/math where cost is less important | BrowseComp: 90.1%; FrontierMath Tier 1–3: 52.4%; FrontierMath Tier 4: 39.6% [4] | Higher-tier "Pro" results are not directly comparable to base GPT-5.5 or open models unless the same effort/budget is used [4]. |
| Claude Opus 4.7 | Premium coding, long-context enterprise workflows, careful reasoning, and high-reliability agents | SWE-bench Verified: 87.6%; GPQA Diamond: 94.2%; 1M context window [2][5] | Some Anthropic benchmark results are presented in release materials or partner/internal evals, and not every benchmark is directly comparable to public leaderboard runs [8]. |
| Kimi K2.6 | Best open-weight option if you want strong coding/agentic performance and self-hostability | SWE-Bench Verified: 80.2%; SWE-Bench Pro: 58.6%; Terminal-Bench 2.0: 66.7%; GPQA Diamond: 90.5%; HLE-Full with tools: 54.0% [13] | Kimi's own model card notes different settings and re-evaluations for some comparison scores, so treat cross-model deltas cautiously [13]. |
| DeepSeek V4-Pro-Max | Best value candidate; strong open-model performance with much lower reported API cost | GPQA Diamond: 90.1%; HLE no tools: 37.7%; HLE with tools: 48.2%; Terminal-Bench 2.0: 67.9% [6] | Evidence is newer and less independently settled; I would wait for more third-party validation before treating it as equal to closed frontier models [6]. |

How to interpret the numbers

  • For software engineering, Claude Opus 4.7’s 87.6% SWE-bench Verified score is the standout among the reported figures I found, while GPT-5.5’s 82.7% Terminal-Bench 2.0 score is the strongest reported command-line/agentic coding number in this set [4][5].

  • For broad tool-using agent tasks, GPT-5.5 looks especially strong because its official release reports high scores across OSWorld-Verified, BrowseComp, Toolathlon, Terminal-Bench 2.0, and CyberGym rather than just one narrow benchmark [4].

  • For open-weight deployment, Kimi K2.6 and DeepSeek V4 are the two to watch: Kimi K2.6 has a strong official benchmark table across coding, reasoning, vision, and agentic tasks, while DeepSeek V4 emphasizes frontier-like performance at much lower cost [6][13].

  • For hard academic/scientific reasoning, Claude Opus 4.7’s reported 94.2% GPQA Diamond leads the figures I found, while GPT-5.5 and DeepSeek V4 are reported lower on GPQA Diamond in the comparison data I found [5][6].

  • For price/performance, DeepSeek V4 appears meaningfully cheaper than GPT-5.5 and Claude Opus 4.7 in reported API pricing, but cheaper inference only matters if its quality is sufficient for your workload [6].

Evidence notes

  • The strongest source for GPT-5.5 is OpenAI’s official GPT-5.5 release page, which includes the benchmark table for Terminal-Bench 2.0, SWE-Bench Pro, OSWorld-Verified, BrowseComp, FrontierMath, Toolathlon, and CyberGym [4].

  • The strongest source for Claude Opus 4.7 positioning, context window, availability, and pricing is Anthropic’s Opus 4.7 product/release material [2][8]. The cleanest text benchmark figures I found for Opus 4.7’s SWE-bench Verified and GPQA Diamond scores came from LLM Stats reporting [5].

  • The strongest source for Kimi K2.6 is Moonshot AI’s Hugging Face model card, which includes architecture details, context length, and the evaluation table [13].

  • The strongest text source I found for DeepSeek V4’s direct comparison against GPT-5.5 and Claude Opus 4.7 was VentureBeat’s launch/benchmark report [6].

Limitations / uncertainty

  • Insufficient evidence for one single, independent, fully apples-to-apples benchmark suite that evaluates all four models with the same prompts, tools, budgets, reasoning effort, and scoring harness.

  • Some numbers are vendor-reported, some are third-party summaries, and some benchmarks use different tool access or reasoning-effort settings, so small score differences should not be overinterpreted [4][8][13].

  • For real buying/deployment decisions, I would run your own 20–50 task eval set with your actual prompts, files, tools, latency targets, and cost constraints.
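
To make that concrete, the per-task records from such an eval can be aggregated with a few lines of code. The field names and sample rows below are assumptions about how you might record outcomes, not a prescribed schema.

```python
from statistics import median

# Assumed shape of per-task records from your own 20-50 task eval runs.
runs = [
    {"model": "claude-opus-4.7", "passed": True,  "latency_s": 41.0, "cost_usd": 0.38},
    {"model": "claude-opus-4.7", "passed": False, "latency_s": 55.2, "cost_usd": 0.51},
    {"model": "deepseek-v4",     "passed": True,  "latency_s": 62.3, "cost_usd": 0.09},
    {"model": "deepseek-v4",     "passed": True,  "latency_s": 48.7, "cost_usd": 0.07},
    # ... one record per (model, task) pair
]

def summarize(model: str) -> dict:
    rows = [r for r in runs if r["model"] == model]
    lat = sorted(r["latency_s"] for r in rows)
    return {
        "pass_rate": sum(r["passed"] for r in rows) / len(rows),
        "p50_latency_s": median(lat),
        # Crude p95 index; fine for small samples like a 20-50 task set.
        "p95_latency_s": lat[min(len(lat) - 1, int(len(lat) * 0.95))],
        "cost_per_task_usd": sum(r["cost_usd"] for r in rows) / len(rows),
    }

for m in sorted({r["model"] for r in runs}):
    print(m, summarize(m))
```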

Summary

If you want the safest high-end general choice, start with GPT-5.5. If your priority is top-tier coding reliability and long-context enterprise work, test Claude Opus 4.7. If you need open weights and strong frontier-adjacent performance, Kimi K2.6 is the most compelling open option. If cost is the constraint and you can tolerate some validation risk, DeepSeek V4 is the one to benchmark first.

Sources

  • [1] [AINews] Moonshot Kimi K2.6: the world's leading Open Model ... (latent.space)

    Moonshot’s Kimi K2.6 was the clear release of the day: an open-weight 1T-parameter MoE with 32B active, 384 experts (8 routed + 1 shared), MLA attention, 256K context, native multimodality, and INT4 quantization, with day-0 support in vLLM, OpenRouter, Clou...

  • [6] moonshotai/Kimi-K2.6 - Hugging Face (huggingface.co)

    OSWorld-Verified 73.1 75.0 72.7 63.3 Coding Terminal-Bench 2.0 (Terminus-2) 66.7 65.4 65.4 68.5 50.8 SWE-Bench Pro 58.6 57.7 53.4 54.2 50.7 SWE-Bench Multilingual 76.7 77.8 76.9 73.0 SWE-Bench Verified 80.2 80.8 80.6 76.8 SciCode 52.2 56.6 51.9 58.9 48.7 OJ...

  • [11] AI Leaderboard 2026 - Compare Top AI Models & Rankings (llm-stats.com)

    Moonshot AI Kimi K2.6 (NEW): 1,157 — 90.5% 80.2% 262K $0.95 $4.00 Open Source. OpenAI GPT-5.2 Codex: 1,148 812 — — 400K $1.75 $14.00 Proprietary [...] Anthropic Claude Opus 4.5: 1,614 1,342 87.0% 80...

  • [16] Claude Opus 4.7: Benchmarks, Pricing, Context & What's New (llm-stats.com)

    Claude Opus 4.7 scores 87.6% on SWE-bench Verified, 94.2% on GPQA, 1M token context, 3.3x higher-resolution vision, new xhigh effort level. $5/$2...

  • [17] Introducing Claude Opus 4.7 - Anthropic (anthropic.com)

    CyberGym: Opus 4.6’s score has been updated from the originally reported 66.6 to 73.8, as we updated our harness parameters to better elicit cyber capability. SWE-bench Multimodal: We used an internal implementation for both Opus 4.7 and Opus 4.6. Scores ar...

  • [18] SWE-Bench Verified Leaderboard - LLM Stats (llm-stats.com)

    Model Score Size Context Cost License --- --- --- 1 Anthropic Claude Mythos Preview Anthropic 0.939 — — $25.00 / $125.00 2 Anthropic Claude Opus 4.7 Anthropic 0.876 — 1.0M $5.00 / $25.00 3 Anthropic Claude Opus 4.5 Anthropic 0.809 — 200K $5.00 / $25.00 4 An...

  • [24] DeepSeek-V4 arrives with near state-of-the-art intelligence at 1/6th ... (venturebeat.com)

    Benchmark DeepSeek-V4-Pro-Max GPT-5.5 GPT-5.5 Pro, where shown Claude Opus 4.7 Best result among these GPQA Diamond 90.1% 93.6% — 94.2% Claude Opus 4.7 Humanity’s Last Exam, no tools 37.7% 41.4% 43.1% 46.9% Claude Opus 4.7 Humanity’s Last Exam, with tools 4...

  • [27] GPT-5.5 Benchmarks 2026: Scores, Rankings & Performance (benchlm.ai)

    Core Rankings Specialized Use Cases Dashboards Directories Guides & Lists Tools GPT-5.5 According to BenchLM.ai, GPT-5.5 ranks 5 out of 112 models on the provisional leaderboard with an overall score of 89/100. It also ranks 2 out of 16 on the verified lead...

  • [28] GPT-5.5: Pricing, Benchmarks & Performance - LLM Stats (llm-stats.com)

    GPQA: a challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. Questions are Google-proof and extremely difficult, w...

  • [30] AI Model Benchmarks Apr 2026 | Compare GPT-5, Claude ... (lmcouncil.ai)

    METR Time Horizons Model Minutes --- 1 Claude Opus 4.6 (unknown thinking) 718.8 ±1815.2 2 GPT-5.2 (high) 352.2 ±335.5 3 GPT-5.3 Codex 349.5 ±333.1 4 Claude Opus 4.5 (no thinking) 293.0 ±239.0 5 Claude Opus 4.5 (16k thinking) 288.9 ±558.2 SWE-bench Verified...

  • [36] GPT-5.5 Doubles the Price, Google Goes Full Agent, DeepSeek V4 ... (thecreatorsai.com)

    GPT-5.5 is out — $5 per million input, $30 per million output. That's exactly double GPT-5.4 and 20% more than Claude Opus 4.7. OpenAI released ... 21 hours ago

  • [38] Introducing GPT-5.5 - OpenAI (openai.com)

    Introducing GPT-5.5, our smartest model yet—faster, more capable, and built for complex tasks like coding, research, and data analysis ... 2 days ago