
GPT-5.5 vs Claude Opus 4.7 vs Kimi K2.6 vs DeepSeek V4: Benchmarks and Practical Model Selection

For terminal-style coding agents, test GPT-5.5 first; on software-repair benchmarks, Claude Opus 4.7 shows the strongest SWE-Bench Pro and SWE-Bench Verified signals [18][24]. GPT-5.5 Pro should not be lumped in with base GPT-5.5; where sources list it separately, it reaches 90.1% on BrowseComp and 57.2% on Humanity's Last Exam with tools [24].

[Hero image: AI-generated editorial illustration comparing GPT-5.5, Claude Opus 4.7, Kimi K2.6 and DeepSeek V4]

Benchmark charts look like a race, but for this group of models the real question is not who ranks first. The closest thing to a same-table comparison covers GPT-5.5, GPT-5.5 Pro, Claude Opus 4.7, and DeepSeek-V4-Pro-Max; Kimi K2.6's numbers come from separate release coverage, its model card, and leaderboard data [1][6][24]. The more useful question is therefore not which model wins everything, but which one your workload should test first.

A note on naming up front: this article refers to the comparable DeepSeek V4 variant as DeepSeek-V4-Pro-Max, because that is the variant for which the cited sources list benchmark and cost figures [18][24]. Likewise, GPT-5.5 Pro is treated separately from base GPT-5.5; wherever sources list their scores separately, the two are never merged [24].

The bottom line: which model should each workload test first?

  • **Terminal and command-line coding agents:** GPT-5.5 posts the highest Terminal-Bench 2.0 score in the shared comparison, at 82.7% [24]
  • **Software repair and engineering tasks:** Claude Opus 4.7 reaches 64.3% on SWE-Bench Pro and 87.6% on SWE-Bench Verified in the cited sources, the strongest software-repair signal in this group [18][24]
  • **Hard reasoning without tools:** Claude Opus 4.7 leads both the GPQA Diamond and Humanity's Last Exam (no tools) rows of the shared comparison [24]
  • **Tool-assisted reasoning and browsing:** among the rows that list a Pro variant, GPT-5.5 Pro reaches 57.2% on Humanity's Last Exam with tools and 90.1% on BrowseComp [24]
  • **Open-weight deployment:** Kimi K2.6 is the clearest open-weight candidate in the cited sources, described as a 1T-parameter MoE model with 32B active parameters and a 256K context window [1]
  • **Cost-sensitive hosted inference:** DeepSeek-V4-Pro-Max is worth validating first; LLM Stats lists it with a 1M-token context, 80.6% on SWE-Bench Verified, and a $1.74/$3.48 cost column [18]

Benchmark comparison table

A dash in the table means no matching score for that model was found in the cited sources; it does not mean zero. Most of the GPT-5.5, GPT-5.5 Pro, Claude Opus 4.7, and DeepSeek-V4-Pro-Max figures come from the same shared comparison, while Kimi K2.6's numbers come from Kimi-related sources [1][6][24].

| Benchmark | GPT-5.5 | GPT-5.5 Pro | Claude Opus 4.7 | Kimi K2.6 | DeepSeek-V4-Pro-Max |
| --- | --- | --- | --- | --- | --- |
| GPQA Diamond | 93.6% [24] | — | 94.2% [24] | ~91% [28] | 90.1% [24] |
| Humanity's Last Exam, no tools | 41.4% [24] | 43.1% [24] | 46.9% [24] | — | 37.7% [24] |
| Humanity's Last Exam, with tools | 52.2% [24] | 57.2% [24] | 54.7% [24] | 54.0% [1] | 48.2% [24] |
| Terminal-Bench 2.0 | 82.7% [24] | — | 69.4% [24] | 66.7% [6] | 67.9% [24] |
| SWE-Bench Pro | 58.6% [24] | — | 64.3% [24] | 58.6% [6] | 55.4% [24] |
| BrowseComp | 84.4% [24] | 90.1% [24] | 79.3% [24] | 83.2% [1] | 83.4% [24] |
| MCP Atlas / MCP Atlas Public | 75.3% [24] | — | 79.1% [24] | — | 73.6% [24] |
| SWE-Bench Verified | — | — | 87.6% [18] | 80.2% [6] | 80.6% [18] |
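
If you pull these rows into your own analysis, encode the dashes as missing values rather than zeros, or averages will quietly punish models the sources simply skipped. A minimal illustrative sketch (the dictionary mirrors one row of the table above):

```python
# Encode the table's dashes as None (missing), never 0: averaging a
# missing score as zero would wrongly punish models the sources skipped.
SCORES = {  # benchmark -> {model: score or None}, values from the table above
    "Terminal-Bench 2.0": {"GPT-5.5": 82.7, "GPT-5.5 Pro": None,
                           "Claude Opus 4.7": 69.4, "Kimi K2.6": 66.7,
                           "DeepSeek-V4-Pro-Max": 67.9},
}

def mean_of_reported(model: str) -> float | None:
    """Average a model's scores over benchmarks where a score exists."""
    vals = [row[model] for row in SCORES.values() if row.get(model) is not None]
    return sum(vals) / len(vals) if vals else None

print(mean_of_reported("GPT-5.5 Pro"))  # None: no reported score, not 0.0
```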

Quick selection guide

| Priority | Test first | Why |
| --- | --- | --- |
| Terminal-style coding agents | GPT-5.5 | Highest Terminal-Bench 2.0 score in the shared comparison, at 82.7% [24] |
| Software engineering repair | Claude Opus 4.7 | Leads the main candidates in this group on both SWE-Bench Pro and SWE-Bench Verified in the cited sources [18][24] |
| Hard reasoning without tools | Claude Opus 4.7 | Leads GPQA Diamond and Humanity's Last Exam (no tools) in the shared comparison [24] |
| Tool-assisted reasoning or browsing | GPT-5.5 Pro | Highest Humanity's Last Exam (with tools) and BrowseComp scores among rows where Pro is listed separately [24] |
| Open-weight deployment | Kimi K2.6 | Described as an open-weight 1T-parameter MoE model, with strong coding benchmark numbers on its Hugging Face model card [1][6] |
| Cost-sensitive hosted inference | DeepSeek-V4-Pro-Max | LLM Stats lists a 1M-token context, 80.6% on SWE-Bench Verified, and a lower cost column than Claude Opus 4.7 on the same leaderboard [18] |
| Long-context workloads | GPT-5.5, Claude Opus 4.7, or DeepSeek-V4-Pro-Max | Cited sources list all three with 1M-token contexts; Kimi K2.6 is around 256K to 262K [1][11][16][18][27] |

Model-by-model highlights

GPT-5.5

OpenAI positions GPT-5.5 as a model for complex tasks, including coding, research, and data analysis [38]. In the shared comparison, GPT-5.5 scores 82.7% on Terminal-Bench 2.0, ahead of Claude Opus 4.7 at 69.4% and DeepSeek-V4-Pro-Max at 67.9% [24]. In the same table it also posts 93.6% on GPQA Diamond, 58.6% on SWE-Bench Pro, and 84.4% on BrowseComp [24].

Note that GPT-5.5 Pro is a separate comparison point. In the shared table, GPT-5.5 Pro scores 90.1% on BrowseComp and 57.2% on Humanity's Last Exam with tools; those scores should not be folded into base GPT-5.5, especially when comparing cost, latency, or reasoning settings [24].

On the procurement side, treat the data as signals rather than quotes: BenchLM lists GPT-5.5 with a 1M-token context window, and one pricing report lists GPT-5.5 at $5 per million input tokens and $30 per million output tokens [27][36]. Check the vendor's live pricing before setting a formal budget.
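
To turn those list prices into a budget signal, a quick back-of-the-envelope sketch; the per-token prices are the reported figures [27][36], while the monthly traffic volumes are purely hypothetical:

```python
# Rough monthly cost estimate from the cited GPT-5.5 list prices [27][36].
# Prices are per million tokens; the traffic volumes below are hypothetical.
INPUT_PRICE_PER_MTOK = 5.00    # USD per 1M input tokens (reported)
OUTPUT_PRICE_PER_MTOK = 30.00  # USD per 1M output tokens (reported)

def monthly_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for one month of traffic."""
    return (input_tokens / 1e6) * INPUT_PRICE_PER_MTOK \
         + (output_tokens / 1e6) * OUTPUT_PRICE_PER_MTOK

# Example: 200M input tokens and 40M output tokens per month.
print(f"${monthly_cost(200_000_000, 40_000_000):,.2f}")  # -> $2,200.00
```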

Claude Opus 4.7

Claude Opus 4.7 has the strongest software-repair signal in this group. LLM Stats lists it at 87.6% on SWE-Bench Verified, and the shared comparison lists it at 64.3% on SWE-Bench Pro [18][24]. In the same shared comparison it also reaches 94.2% on GPQA Diamond, 46.9% on Humanity's Last Exam (no tools), and 79.1% on MCP Atlas, each the leading result in that table [24].

LLM Stats also lists Claude Opus 4.7 with a 1M-token context window, priced at $5/$25 per million tokens [16]. Comparability still calls for care, though: Anthropic notes that some benchmarks use internal implementations or updated harness parameters, so some scores cannot be compared directly with public leaderboards [17].

Kimi K2.6

Kimi K2.6 is the clearest open-weight option in the cited sources. Release coverage describes it as an open-weight 1T-parameter MoE model with 32B active parameters, 384 experts, native multimodality, INT4 quantization, and a 256K context window [1]. Its Hugging Face model card lists 80.2% on SWE-Bench Verified, 58.6% on SWE-Bench Pro, 66.7% on Terminal-Bench 2.0, and 89.6 on LiveCodeBench v6 [6].

The same release coverage lists Kimi K2.6 at 54.0 on Humanity's Last Exam with tools and 83.2 on BrowseComp [1]. LLM Stats lists Kimi K2.6 with a 262K context, a $0.95/$4.00 price column, and an Open Source label [11]. The limitation: Kimi's scores do not come from the same shared table as GPT-5.5, Claude Opus 4.7, and DeepSeek-V4-Pro-Max, so close deltas are best treated as testing leads rather than settled verdicts [1][6][24].
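
If you want to trial the open weights yourself, the release coverage notes day-0 vLLM support [1]. A minimal client sketch against a self-hosted vLLM endpoint might look like the following; the repository id matches the cited Hugging Face model card [6], but the endpoint address and generation settings are assumptions to adapt to your deployment:

```python
# Minimal client sketch for self-hosted Kimi K2.6 behind vLLM's
# OpenAI-compatible server (started elsewhere, e.g. `vllm serve
# moonshotai/Kimi-K2.6`; the exact serve flags for a 1T MoE are
# deployment-specific). URL and parameters here are illustrative.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # default vLLM server address
    api_key="EMPTY",                      # vLLM ignores the key by default
)

resp = client.chat.completions.create(
    model="moonshotai/Kimi-K2.6",
    messages=[{"role": "user", "content": "Summarize SWE-Bench Verified in one line."}],
    max_tokens=128,
)
print(resp.choices[0].message.content)
```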

DeepSeek-V4-Pro-Max

DeepSeek-V4-Pro-Max reads more like a price/performance candidate than an across-the-board benchmark champion. LLM Stats lists it at 1.6T size, with a 1M-token context, 80.6% on SWE-Bench Verified, and a $1.74/$3.48 cost column [18]. In the shared comparison it scores 90.1% on GPQA Diamond, 37.7% on Humanity's Last Exam (no tools), 48.2% with tools, 67.9% on Terminal-Bench 2.0, 55.4% on SWE-Bench Pro, 83.4% on BrowseComp, and 73.6% on MCP Atlas [24].

Those numbers make DeepSeek-V4-Pro-Max well worth shortlisting for cost-sensitive workloads. But most rows in the same shared table are still led by GPT-5.5, GPT-5.5 Pro, or Claude Opus 4.7; before using it to replace a pricier model, validate quality, stability, and failure modes on your own tasks first [24].

Price and context: procurement signals only

Price and context-window figures are not always reported by the same source or the same provider; the table below is suited to initial screening, not final quotes.

| Model | Context and price signals in the cited sources | Practical read |
| --- | --- | --- |
| GPT-5.5 | BenchLM lists a 1M-token context; one pricing report lists $5 per million input tokens and $30 per million output tokens [27][36] | Premium hosted option; check live pricing before formal procurement. |
| Claude Opus 4.7 | LLM Stats lists a 1M-token context and $5/$25 per million tokens [16] | Premium option for coding, reasoning, and long-context tasks. |
| Kimi K2.6 | Release coverage lists a 256K context; LLM Stats lists a 262K context and a $0.95/$4.00 price column [1][11] | Highly attractive for open-weight deployment; hosted pricing varies by platform. |
| DeepSeek-V4-Pro-Max | LLM Stats lists a 1M-token context, 1.6T size, 80.6% on SWE-Bench Verified, and a $1.74/$3.48 cost column [18] | Strong price/performance candidate if its quality holds for your workload. |
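
To make the gap concrete, here is a small blended-rate calculation over the cited price columns. The 4:1 input-to-output token mix is an assumption for illustration, not something the sources report:

```python
# Blended per-1M-token cost from the cited price columns, assuming a
# hypothetical 4:1 input:output token mix. (input_price, output_price) in USD.
PRICES = {
    "GPT-5.5": (5.00, 30.00),              # [36]
    "Claude Opus 4.7": (5.00, 25.00),      # [16]
    "Kimi K2.6 (hosted)": (0.95, 4.00),    # [11]
    "DeepSeek-V4-Pro-Max": (1.74, 3.48),   # [18]
}

def blended(input_price: float, output_price: float, in_share: float = 0.8) -> float:
    """Weighted average price per 1M tokens for a given input-token share."""
    return in_share * input_price + (1 - in_share) * output_price

for name, (p_in, p_out) in PRICES.items():
    print(f"{name:22s} ${blended(p_in, p_out):.2f} per 1M tokens")
# -> GPT-5.5 $10.00, Claude Opus 4.7 $9.00, Kimi K2.6 $1.56, DeepSeek $2.09
```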

Why do the rankings fight each other?

Different benchmarks measure different abilities. GPQA Diamond and Humanity's Last Exam lean toward hard reasoning; Terminal-Bench 2.0 and the SWE-Bench family lean toward coding and agentic software engineering; BrowseComp, in the shared comparison, measures browsing- and retrieval-style performance [24]. A model leading one row while trailing another is not a contradiction.

Even benchmarks with the same name can differ by implementation. LLM Stats lists Claude Opus 4.7 at 87.6% on SWE-Bench Verified, while LMCouncil lists 83.5% ± 1.7 under its own setup [18][30]. Anthropic likewise notes that some results use internal implementations or updated harness parameters, which limits direct comparability with public leaderboards [17].

A one- or two-point gap should therefore not decide a production rollout on its own. Public benchmarks are good for narrowing a shortlist; the real adoption decision still rests on your own tasks.
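
One cheap discipline before reading anything into a delta: check it against whatever uncertainty the source reports, as with LMCouncil's ± 1.7 figure [30]. The helper below is a heuristic sketch, not a formal statistical test:

```python
# Treat a score delta as meaningful only if it clears the reported
# uncertainty of both measurements. A heuristic, not a significance test.
def delta_is_meaningful(score_a: float, score_b: float,
                        err_a: float = 0.0, err_b: float = 0.0) -> bool:
    """True if |a - b| exceeds the combined reported error bars."""
    return abs(score_a - score_b) > (err_a + err_b)

# Claude Opus 4.7 on SWE-Bench Verified: 87.6% (LLM Stats [18], no error
# bar reported) vs 83.5% ± 1.7 (LMCouncil [30], different harness).
print(delta_is_meaningful(87.6, 83.5, err_a=0.0, err_b=1.7))  # True
# The harnesses differ, though, so even a "meaningful" delta here mostly
# says the setups are not comparable, not that one run is wrong.
```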

How should you run your own tests?

Before a formal rollout, put your top two or three candidates through the same internal test suite; a minimal harness sketch follows the checklist below.

  1. Use real prompts, files, and repositories. Public benchmarks rarely cover your codebase, documents, internal policies, or user behavior.
  2. Align the tool environment. Whether a coding agent gets a terminal, a browser, retrieval, repository context, or internal APIs can change results dramatically.
  3. Measure cost and latency under identical settings. Pro modes, higher effort settings, and longer outputs can all change quality, token usage, and wait times.
  4. Inspect failure cases by hand. For coding tasks, passing tests is not enough; also check diff quality, maintainability, security regressions, and hallucinated dependencies.
  5. Include at least one low-cost challenger. If open weights or inference cost matter to you, both Kimi K2.6 and DeepSeek-V4-Pro-Max belong in the test group [1][18]
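
As referenced above, here is a minimal harness sketch for that kind of side-by-side test. It assumes an OpenAI-compatible chat endpoint per candidate; the model ids, prices, and `score_fn` hook are placeholders to replace with your own:

```python
# Minimal side-by-side eval sketch: same tasks, same settings, per-model
# score, cost, and latency. Clients, model ids, and score_fn are placeholders.
import time
from openai import OpenAI

CANDIDATES = {
    # model id -> (client, USD per 1M input tokens, USD per 1M output tokens)
    "gpt-5.5": (OpenAI(), 5.00, 30.00),          # prices from [36]
    "claude-opus-4.7": (OpenAI(), 5.00, 25.00),  # swap in the real client/SDK
}

def run_eval(tasks, score_fn):
    """tasks: list of prompt strings; score_fn(prompt, answer) -> 0..1."""
    for model, (client, p_in, p_out) in CANDIDATES.items():
        total_score = cost = latency = 0.0
        for prompt in tasks:
            t0 = time.monotonic()
            resp = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            latency += time.monotonic() - t0
            usage = resp.usage
            cost += (usage.prompt_tokens / 1e6) * p_in \
                  + (usage.completion_tokens / 1e6) * p_out
            total_score += score_fn(prompt, resp.choices[0].message.content)
        n = len(tasks)
        print(f"{model}: score={total_score / n:.2f} "
              f"cost=${cost:.2f} avg_latency={latency / n:.1f}s")
```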

So how do you choose?

If you want a shortlist of premium closed models, test GPT-5.5 and Claude Opus 4.7 side by side first: GPT-5.5 holds the strongest Terminal-Bench 2.0 result in the cited sources, while Claude Opus 4.7 has the strongest SWE-Bench Pro and SWE-Bench Verified signals [18][24]. If you need open weights, start with Kimi K2.6 [1][6]. If cost is a hard constraint, put DeepSeek-V4-Pro-Max into the evaluation, but do not treat it as a drop-in replacement for pricier models on price alone; validate on your own tasks first [18][24].


Key takeaways

  • For terminal-style coding agents, test GPT-5.5 first; on software-repair benchmarks, Claude Opus 4.7's SWE-Bench Pro and SWE-Bench Verified signals stand out most [18][24].
  • GPT-5.5 Pro should not be lumped in with base GPT-5.5; where sources list it separately, it reaches 90.1% on BrowseComp and 57.2% on Humanity's Last Exam with tools [24].
  • Kimi K2.6 is the clearest open-weight candidate in the cited sources; DeepSeek-V4-Pro-Max, with its 1M-token context, 80.6% on SWE-Bench Verified, and lower cost column, belongs in price/performance testing [1][18].


Continue the research

Research conversation

Research question

Search for the benchmarks of GPT-5.5, Claude Opus 4.7, Kimi K2.6, DeepSeek V4. Compare them to help me understand more about them.

Studio Global AI · 40 sources

Cited answer

I found public benchmark data for all four, but not one fully independent, apples-to-apples leaderboard covering GPT-5.5, Claude Opus 4.7, Kimi K2.6, and DeepSeek V4 under the same harness. The practical read is: GPT-5.5 looks strongest for broad agentic coding/computer-use tasks, Claude Opus 4.7 is extremely strong on coding and hard reasoning, Kimi K2.6 is the strongest open-weight option in this group, and DeepSeek V4 looks like the best price/performance play if its reported results hold up across independent testing [4][5][6][13].

Key findings

  • GPT-5.5’s official benchmarks emphasize agentic coding and computer-use: 82.7% on Terminal-Bench 2.0, 58.6% on SWE-Bench Pro, 78.7% on OSWorld-Verified, 84.4% on BrowseComp, and 81.8% on CyberGym [4].

  • GPT-5.5 Pro appears stronger on some harder research/search/math tasks, including 90.1% on BrowseComp, 52.4% on FrontierMath Tier 1–3, and 39.6% on FrontierMath Tier 4 [4].

  • Claude Opus 4.7 is positioned by Anthropic as a premium hybrid reasoning model for coding and agents, with a 1M-token context window and API pricing of $5 per million input tokens and $25 per million output tokens [2]. Third-party benchmark reporting lists Claude Opus 4.7 at 87.6% on SWE-bench Verified and 94.2% on GPQA Diamond [5].

  • Kimi K2.6 is an open-weight multimodal MoE model with 1T total parameters, 32B active parameters, and a 256K-token context window [13]. Its model card reports 80.2% on SWE-Bench Verified, 58.6% on SWE-Bench Pro, 66.7% on Terminal-Bench 2.0, 90.5% on GPQA Diamond, and 54.0% on HLE-Full with tools [13].

  • DeepSeek V4-Pro-Max is reported as a 1.6T-parameter open model with much lower API pricing than GPT-5.5 and Claude Opus 4.7 [6]. Reported comparison data puts DeepSeek V4-Pro-Max at 90.1% on GPQA Diamond, 37.7% on HLE without tools, 48.2% on HLE with tools, and 67.9% on Terminal-Bench 2.0 [6].

Comparison

| Model | Best fit | Notable reported benchmarks | Main caveat |
| --- | --- | --- | --- |
| GPT-5.5 | Best general pick for agentic coding, computer use, research workflows, and tool-heavy tasks | Terminal-Bench 2.0: 82.7%; SWE-Bench Pro: 58.6%; OSWorld-Verified: 78.7%; BrowseComp: 84.4%; CyberGym: 81.8% [4] | OpenAI's strongest numbers include internal or vendor-run evals, so independent confirmation matters [4]. |
| GPT-5.5 Pro | Harder reasoning/search/math where cost is less important | BrowseComp: 90.1%; FrontierMath Tier 1–3: 52.4%; FrontierMath Tier 4: 39.6% [4] | Higher-tier "Pro" results are not directly comparable to base GPT-5.5 or open models unless the same effort/budget is used [4]. |
| Claude Opus 4.7 | Premium coding, long-context enterprise workflows, careful reasoning, and high-reliability agents | SWE-bench Verified: 87.6%; GPQA Diamond: 94.2%; 1M context window [2][5] | Some Anthropic benchmark results are presented in release materials or partner/internal evals, and not every benchmark is directly comparable to public leaderboard runs [8]. |
| Kimi K2.6 | Best open-weight option if you want strong coding/agentic performance and self-hostability | SWE-Bench Verified: 80.2%; SWE-Bench Pro: 58.6%; Terminal-Bench 2.0: 66.7%; GPQA Diamond: 90.5%; HLE-Full with tools: 54.0% [13] | Kimi's own model card notes different settings and re-evaluations for some comparison scores, so treat cross-model deltas cautiously [13]. |
| DeepSeek V4-Pro-Max | Best value candidate; strong open-model performance with much lower reported API cost | GPQA Diamond: 90.1%; HLE no tools: 37.7%; HLE with tools: 48.2%; Terminal-Bench 2.0: 67.9% [6] | Evidence is newer and less independently settled; I would wait for more third-party validation before treating it as equal to closed frontier models [6]. |

How to interpret the numbers

  • For software engineering, Claude Opus 4.7’s 87.6% SWE-bench Verified score is the standout among the reported figures I found, while GPT-5.5’s 82.7% Terminal-Bench 2.0 score is the strongest reported command-line/agentic coding number in this set [4][5].

  • For broad tool-using agent tasks, GPT-5.5 looks especially strong because its official release reports high scores across OSWorld-Verified, BrowseComp, Toolathlon, Terminal-Bench 2.0, and CyberGym rather than just one narrow benchmark [4].

  • For open-weight deployment, Kimi K2.6 and DeepSeek V4 are the two to watch: Kimi K2.6 has a strong official benchmark table across coding, reasoning, vision, and agentic tasks, while DeepSeek V4 emphasizes frontier-like performance at much lower cost [6][13].

  • For hard academic/scientific reasoning, Claude Opus 4.7’s reported 94.2% GPQA Diamond leads the figures I found, while GPT-5.5 and DeepSeek V4 are reported lower on GPQA Diamond in the comparison data I found [5][6].

  • For price/performance, DeepSeek V4 appears meaningfully cheaper than GPT-5.5 and Claude Opus 4.7 in reported API pricing, but cheaper inference only matters if its quality is sufficient for your workload [6].

Evidence notes

  • The strongest source for GPT-5.5 is OpenAI’s official GPT-5.5 release page, which includes the benchmark table for Terminal-Bench 2.0, SWE-Bench Pro, OSWorld-Verified, BrowseComp, FrontierMath, Toolathlon, and CyberGym [4].

  • The strongest source for Claude Opus 4.7 positioning, context window, availability, and pricing is Anthropic’s Opus 4.7 product/release material [2][8]. The cleanest text benchmark figures I found for Opus 4.7’s SWE-bench Verified and GPQA Diamond scores came from LLM Stats reporting [5].

  • The strongest source for Kimi K2.6 is Moonshot AI’s Hugging Face model card, which includes architecture details, context length, and the evaluation table [13].

  • The strongest text source I found for DeepSeek V4’s direct comparison against GPT-5.5 and Claude Opus 4.7 was VentureBeat’s launch/benchmark report [6].

Limitations / uncertainty

  • Insufficient evidence for one single, independent, fully apples-to-apples benchmark suite that evaluates all four models with the same prompts, tools, budgets, reasoning effort, and scoring harness.

  • Some numbers are vendor-reported, some are third-party summaries, and some benchmarks use different tool access or reasoning-effort settings, so small score differences should not be overinterpreted [4][8][13].

  • For real buying/deployment decisions, I would run your own 20–50 task eval set with your actual prompts, files, tools, latency targets, and cost constraints.

Summary

If you want the safest high-end general choice, start with GPT-5.5. If your priority is top-tier coding reliability and long-context enterprise work, test Claude Opus 4.7. If you need open weights and strong frontier-adjacent performance, Kimi K2.6 is the most compelling open option. If cost is the constraint and you can tolerate some validation risk, DeepSeek V4 is the one to benchmark first.

Sources

  • [1] [AINews] Moonshot Kimi K2.6: the world's leading Open Model ... (latent.space)

    Moonshot’s Kimi K2.6 was the clear release of the day: an open-weight 1T-parameter MoE with 32B active, 384 experts (8 routed + 1 shared), MLA attention, 256K context, native multimodality, and INT4 quantization, with day-0 support in vLLM, OpenRouter, Clou...

  • [6] moonshotai/Kimi-K2.6 - Hugging Face (huggingface.co)

    OSWorld-Verified 73.1 75.0 72.7 63.3 Coding Terminal-Bench 2.0 (Terminus-2) 66.7 65.4 65.4 68.5 50.8 SWE-Bench Pro 58.6 57.7 53.4 54.2 50.7 SWE-Bench Multilingual 76.7 77.8 76.9 73.0 SWE-Bench Verified 80.2 80.8 80.6 76.8 SciCode 52.2 56.6 51.9 58.9 48.7 OJ...

  • [11] AI Leaderboard 2026 - Compare Top AI Models & Rankings (llm-stats.com)

    19 Image 20: Moonshot AI Kimi K2.6NEW Moonshot AI 1,157 — 90.5% 80.2% 262K $0.95 $4.00 Open Source 20 Image 21: OpenAI GPT-5.2 Codex OpenAI 1,148 812 — — 400K $1.75 $14.00 Proprietary [...] 6 Image 7: Anthropic Claude Opus 4.5 Anthropic 1,614 1,342 87.0% 80...

  • [16] Claude Opus 4.7: Benchmarks, Pricing, Context & What's New (llm-stats.com)

    LLM Stats Logo Make AI phone calls with one API call Claude Opus 4.7: Benchmarks, Pricing, Context & What's New Claude Opus 4.7 scores 87.6% on SWE-bench Verified, 94.2% on GPQA, 1M token context, 3.3x higher-resolution vision, new xhigh effort level. $5/$2...

  • [17] Introducing Claude Opus 4.7 - Anthropic (anthropic.com)

    CyberGym: Opus 4.6’s score has been updated from the originally reported 66.6 to 73.8, as we updated our harness parameters to better elicit cyber capability. SWE-bench Multimodal: We used an internal implementation for both Opus 4.7 and Opus 4.6. Scores ar...

  • [18] SWE-Bench Verified Leaderboard - LLM Stats (llm-stats.com)

    Model Score Size Context Cost License --- --- --- 1 Anthropic Claude Mythos Preview Anthropic 0.939 — — $25.00 / $125.00 2 Anthropic Claude Opus 4.7 Anthropic 0.876 — 1.0M $5.00 / $25.00 3 Anthropic Claude Opus 4.5 Anthropic 0.809 — 200K $5.00 / $25.00 4 An...

  • [24] DeepSeek-V4 arrives with near state-of-the-art intelligence at 1/6th ... (venturebeat.com)

    Benchmark DeepSeek-V4-Pro-Max GPT-5.5 GPT-5.5 Pro, where shown Claude Opus 4.7 Best result among these GPQA Diamond 90.1% 93.6% — 94.2% Claude Opus 4.7 Humanity’s Last Exam, no tools 37.7% 41.4% 43.1% 46.9% Claude Opus 4.7 Humanity’s Last Exam, with tools 4...

  • [27] GPT-5.5 Benchmarks 2026: Scores, Rankings & Performance (benchlm.ai)

    Core Rankings Specialized Use Cases Dashboards Directories Guides & Lists Tools GPT-5.5 According to BenchLM.ai, GPT-5.5 ranks 5 out of 112 models on the provisional leaderboard with an overall score of 89/100. It also ranks 2 out of 16 on the verified lead...

  • [28] GPT-5.5: Pricing, Benchmarks & Performance - LLM Stats (llm-stats.com)

    9Image 42GPT-5 mini 0.22 10Image 43o3 0.16 GPQAView → 4 of 10 Image 44: LLM Stats Logo A challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. Questions are Google-proof and extremely difficult, w...

  • [30] AI Model Benchmarks Apr 2026 | Compare GPT-5, Claude ... (lmcouncil.ai)

    METR Time Horizons Model Minutes --- 1 Claude Opus 4.6 (unknown thinking) 718.8 ±1815.2 2 GPT-5.2 (high) 352.2 ±335.5 3 GPT-5.3 Codex 349.5 ±333.1 4 Claude Opus 4.5 (no thinking) 293.0 ±239.0 5 Claude Opus 4.5 (16k thinking) 288.9 ±558.2 SWE-bench Verified...

  • [36] GPT-5.5 Doubles the Price, Google Goes Full Agent, DeepSeek V4 ... (thecreatorsai.com)

    GPT-5.5 is out — $5 per million input, $30 per million output. That's exactly double GPT-5.4 and 20% more than Claude Opus 4.7. OpenAI released ... 21 hours ago

  • [38] Introducing GPT-5.5 - OpenAI (openai.com)

    Introducing GPT-5.5, our smartest model yet—faster, more capable, and built for complex tasks like coding, research, and data analysis ... 2 days ago