GPT-5.5 vs Claude Opus 4.7 vs Kimi K2.6 vs DeepSeek V4: Benchmarks Compared

Reading AI model benchmarks can easily turn into a horse race: whoever is one or two percentage points higher is declared the winner. But this comparison of GPT-5.5, Claude Opus 4.7, Kimi K2.6, and DeepSeek V4 is not decided at a single finish line. In the available material, the closest thing to a same-table comparison covers GPT-5.5, GPT-5.5 Pro, Claude Opus 4.7, and DeepSeek-V4-Pro-Max; Kimi K2.6's numbers come mainly from a separate set of sources, including the Kimi release coverage, model card, and leaderboards [1][6][24].

So the more practical question is not "which model is strongest?" but "for the kind of work on your plate, which model should you test first?"

One naming point to settle up front: this article uses DeepSeek-V4-Pro-Max to stand for DeepSeek V4, because that is the variant with benchmark and cost data [18][24]. Also, wherever a source lists GPT-5.5 Pro separately from base GPT-5.5, this article keeps them separate rather than merging their scores [24].

Bottom line first: pick by workload

  • **Terminal/command-line coding agents:** try GPT-5.5 first. In the shared comparison, GPT-5.5 scores 82.7% on Terminal-Bench 2.0, the highest in this set [24].
  • **Software-repair benchmarks:** try Claude Opus 4.7 first. It reaches 64.3% on the cited SWE-Bench Pro row and 87.6% on SWE-Bench Verified [18][24].
  • **Hard reasoning without tools:** try Claude Opus 4.7 first. In the shared comparison it leads GPQA Diamond and Humanity's Last Exam without tools [24].
  • **Tool-assisted reasoning / browsing-style search:** try GPT-5.5 Pro first. Where the data lists Pro separately, it reaches 57.2% on Humanity's Last Exam with tools and 90.1% on BrowseComp [24].
  • **Open-weight deployment:** Kimi K2.6 is the clearest candidate. Sources describe it as a 1T-parameter MoE model with 32B active parameters and a 256K context window [1].
  • **Cost-sensitive hosted inference:** DeepSeek-V4-Pro-Max belongs on your validation list. LLM Stats lists it with 1M context, 80.6% on SWE-Bench Verified, and a cost column of $1.74/$3.48 [18]. (See the shortlist sketch after this list.)
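
The workload-to-model mapping above can be captured in a few lines of code. This is only a minimal sketch: the workload labels and the `shortlist()` helper are illustrative and not part of any cited source; the picks simply mirror the bullets above.

```python
# Minimal shortlist router mirroring the workload-to-model bullets above.
# Workload labels and the shortlist() helper are illustrative only.
SHORTLISTS = {
    "terminal_coding_agent": ["GPT-5.5"],
    "software_repair": ["Claude Opus 4.7"],
    "hard_reasoning_no_tools": ["Claude Opus 4.7"],
    "tool_assisted_reasoning_or_browsing": ["GPT-5.5 Pro"],
    "open_weight_deployment": ["Kimi K2.6"],
    "cost_sensitive_hosted_inference": ["DeepSeek-V4-Pro-Max"],
}

def shortlist(workload: str) -> list[str]:
    """Return the models this article suggests testing first for a workload."""
    if workload not in SHORTLISTS:
        raise ValueError(f"unknown workload {workload!r}; expected one of {sorted(SHORTLISTS)}")
    return SHORTLISTS[workload]

print(shortlist("software_repair"))  # ['Claude Opus 4.7']
```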

Head-to-head benchmark table

In the table below, "—" means no score was found in the cited sources for that model on that benchmark; it does not mean the score is zero. The GPT-5.5, GPT-5.5 Pro, Claude Opus 4.7, and DeepSeek-V4-Pro-Max figures mostly come from one shared comparison; the Kimi K2.6 figures come from Kimi's release coverage and model card [1][6][24].

| Benchmark | GPT-5.5 | GPT-5.5 Pro | Claude Opus 4.7 | Kimi K2.6 | DeepSeek-V4-Pro-Max |
| --- | --- | --- | --- | --- | --- |
| GPQA Diamond | 93.6% [24] | — | 94.2% [24] | ~91% [28] | 90.1% [24] |
| Humanity's Last Exam, no tools | 41.4% [24] | 43.1% [24] | 46.9% [24] | — | 37.7% [24] |
| Humanity's Last Exam, with tools | 52.2% [24] | 57.2% [24] | 54.7% [24] | 54.0% [1] | 48.2% [24] |
| Terminal-Bench 2.0 | 82.7% [24] | — | 69.4% [24] | 66.7% [6] | 67.9% [24] |
| SWE-Bench Pro | 58.6% [24] | — | 64.3% [24] | 58.6% [6] | 55.4% [24] |
| BrowseComp | 84.4% [24] | 90.1% [24] | 79.3% [24] | 83.2% [1] | 83.4% [24] |
| MCP Atlas / MCPAtlas Public | 75.3% [24] | — | 79.1% [24] | — | 73.6% [24] |
| SWE-Bench Verified | — | — | 87.6% [18] | 80.2% [6] | 80.6% [18] |

If you are going to test, where should you start?

| Priority | Try first | Why |
| --- | --- | --- |
| Terminal-style coding agents | GPT-5.5 | Highest Terminal-Bench 2.0 score in the shared comparison, at 82.7% [24] |
| Software-engineering repair | Claude Opus 4.7 | Leads the SWE-Bench Pro and SWE-Bench Verified rows cited in this article [18][24] |
| Hard reasoning without tools | Claude Opus 4.7 | Leads GPQA Diamond and Humanity's Last Exam without tools in the shared comparison [24] |
| Tool-assisted hard reasoning or browsing | GPT-5.5 Pro | Leads Humanity's Last Exam with tools and BrowseComp where Pro is listed separately [24] |
| Open-weight deployment | Kimi K2.6 | Described as an open-weight 1T-parameter MoE model, with several strong coding benchmarks on its Hugging Face model card [1][6] |
| Cost-sensitive hosted inference | DeepSeek-V4-Pro-Max | LLM Stats lists 1M context, 80.6% SWE-Bench Verified, and a lower cost column than Claude Opus 4.7 on the same leaderboard [18] |
| Long-context requirements | GPT-5.5, Claude Opus 4.7, or DeepSeek-V4-Pro-Max | Cited sources list GPT-5.5, Claude Opus 4.7, and DeepSeek-V4-Pro-Max at 1M context; Kimi K2.6 sits at roughly 256K to 262K [1][11][16][18][27] |

Model by model

GPT-5.5

OpenAI describes GPT-5.5 as built for complex tasks such as coding, research, and data analysis [38]. In VentureBeat's shared comparison, GPT-5.5 scores 82.7% on Terminal-Bench 2.0, ahead of Claude Opus 4.7 at 69.4% and DeepSeek-V4-Pro-Max at 67.9% [24]. The same table also lists GPT-5.5 at 93.6% on GPQA Diamond, 58.6% on SWE-Bench Pro, and 84.4% on BrowseComp [24].

The caveat is that GPT-5.5 Pro is a separate comparison point. In the same shared table, GPT-5.5 Pro reaches 90.1% on BrowseComp and 57.2% on Humanity's Last Exam with tools; those scores should not be folded into base GPT-5.5, especially when you are comparing cost, latency, or model configuration [24].

From a procurement angle, BenchLM lists GPT-5.5 with a 1M-token context window, and one pricing report puts GPT-5.5 at $5 per million input tokens and $30 per million output tokens [27][36]. Treat these prices as signals only; verify live provider pricing before committing budget.

Claude Opus 4.7

Claude Opus 4.7 has the strongest software-repair signal in this group. LLM Stats lists it at 87.6% on SWE-Bench Verified, and the shared comparison lists it at 64.3% on SWE-Bench Pro [18][24]. It also leads the shared comparison on GPQA Diamond at 94.2%, with 46.9% on Humanity's Last Exam without tools and 79.1% on MCP Atlas [24].

LLM Stats reports Claude Opus 4.7 with a 1M-token context window and pricing of $5/$25 per million tokens [16]. Comparability comes with a caveat, though: Anthropic notes that some benchmarks use internal implementations or updated harness parameters, so some scores cannot be compared directly with public leaderboard numbers [17].

Kimi K2.6

If you want open weights, Kimi K2.6 is the clearest candidate in the material cited here. Launch coverage describes it as an open-weight 1T-parameter MoE model with 32B active parameters, 384 experts, native multimodality, INT4 quantization, and 256K context [1]. The Hugging Face model card lists 80.2% on SWE-Bench Verified, 58.6% on SWE-Bench Pro, 66.7% on Terminal-Bench 2.0, and 89.6 on LiveCodeBench v6 [6].

The same launch coverage lists Kimi K2.6 at 54.0 on Humanity's Last Exam with tools and 83.2 on BrowseComp [1]. LLM Stats lists it with 262K context, a $0.95/$4.00 cost column, and an Open Source label [11]. The limitation: Kimi's numbers do not come from exactly the same shared table as GPT-5.5, Claude Opus 4.7, and DeepSeek-V4-Pro-Max, so small score gaps should be read as prompts to test rather than settled verdicts [1][6][24].
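
For self-hosting planning, the architecture figures above support a quick back-of-envelope check. The sketch below is a rough lower bound only, assuming INT4 weights and ignoring KV cache, activations, and any tensors kept at higher precision; the inputs come from the cited 1T total / 32B active / INT4 description [1].

```python
# Rough deployment math from the cited Kimi K2.6 figures: ~1T total parameters,
# ~32B active per token, INT4 weights [1]. Lower bound only: ignores KV cache,
# activations, and any tensors kept at higher precision.
TOTAL_PARAMS = 1_000_000_000_000      # ~1T total parameters
ACTIVE_PARAMS = 32_000_000_000        # ~32B active parameters per token
BYTES_PER_PARAM_INT4 = 0.5            # 4 bits per weight

weight_gb = TOTAL_PARAMS * BYTES_PER_PARAM_INT4 / 1e9
active_share = ACTIVE_PARAMS / TOTAL_PARAMS

print(f"Approx. INT4 weight footprint: {weight_gb:,.0f} GB")  # roughly 500 GB
print(f"Active parameters per token:   {active_share:.1%}")   # roughly 3.2%
```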

DeepSeek-V4-Pro-Max

DeepSeek-V4-Pro-Max reads more like a price/performance candidate than an outright all-round winner. LLM Stats lists its size at 1.6T, context at 1M, SWE-Bench Verified at 80.6%, and a cost column of $1.74/$3.48 [18]. In the shared comparison it scores 90.1% on GPQA Diamond, 37.7% on Humanity's Last Exam without tools, 48.2% with tools, 67.9% on Terminal-Bench 2.0, 55.4% on SWE-Bench Pro, 83.4% on BrowseComp, and 73.6% on MCP Atlas [24].

Those numbers make DeepSeek-V4-Pro-Max well worth testing in cost-sensitive scenarios. In the same shared table, though, most benchmark rows are still led by GPT-5.5, GPT-5.5 Pro, or Claude Opus 4.7, so validate on your own tasks before using it to replace a premium model [24].

Context windows and pricing: treat as procurement clues only

Different sources do not necessarily report pricing the same way, and context windows are not always quoted from the same provider page. Treat the following as pre-procurement signals, not final quotes.

| Model | Cited context/price signals | Practical read |
| --- | --- | --- |
| GPT-5.5 | BenchLM lists 1M context; one pricing report lists $5 per million input / $30 per million output [27][36] | Premium hosted option; verify live pricing. |
| Claude Opus 4.7 | LLM Stats reports 1M context and $5/$25 per million tokens [16] | Fits premium coding, reasoning, and long-context work. |
| Kimi K2.6 | Launch coverage lists 256K context; LLM Stats lists 262K context and a $0.95/$4.00 cost column [1][11] | Strong open-weight candidate; hosted pricing depends on the provider. |
| DeepSeek-V4-Pro-Max | LLM Stats lists 1M context, 1.6T size, 80.6% SWE-Bench Verified, and a $1.74/$3.48 cost column [18] | Strong price/performance candidate if quality holds on your workload. |
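
To turn those per-million-token figures into a budget estimate, a simple calculation is enough. The sketch below uses the prices as cited above; the traffic profile (requests per month, tokens per request) is purely hypothetical, and live provider pricing should be re-checked before committing.

```python
# Back-of-envelope monthly cost from the per-million-token price signals above.
# The traffic profile is hypothetical; verify live provider pricing.
PRICE_PER_MILLION = {                        # (input $, output $)
    "GPT-5.5": (5.00, 30.00),                # [27][36]
    "Claude Opus 4.7": (5.00, 25.00),        # [16]
    "Kimi K2.6 (hosted)": (0.95, 4.00),      # [11]
    "DeepSeek-V4-Pro-Max": (1.74, 3.48),     # [18]
}

def monthly_cost(model: str, requests: int, input_tokens: int, output_tokens: int) -> float:
    """Estimated monthly spend in dollars for a flat traffic profile."""
    in_price, out_price = PRICE_PER_MILLION[model]
    return requests * (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Hypothetical profile: 100k requests/month, 3k input + 1k output tokens each.
for name in PRICE_PER_MILLION:
    print(f"{name:22s} ${monthly_cost(name, 100_000, 3_000, 1_000):>10,.2f}")
```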

Why do the leaderboards "disagree"?

Because each benchmark measures a different capability. GPQA Diamond and Humanity's Last Exam lean toward hard reasoning; Terminal-Bench 2.0 and the SWE-Bench family lean toward coding and agentic software work; BrowseComp, in the shared comparison, measures browsing-style retrieval [24]. A model can be first on one and behind on another without contradiction; the task types, tool access, and evaluation harnesses differ.

Even when benchmark names match, implementations can differ. LLM Stats lists Claude Opus 4.7 at 87.6% on SWE-Bench Verified, while LMCouncil, under its own setup, lists Claude Opus 4.7 at 83.5% ± 1.7 [18][30]. Anthropic also notes that some results use internal implementations or updated harness parameters, which limits direct comparison with public leaderboard runs [17].

So a one- or two-point gap should not, on its own, decide a production rollout. Public benchmarks are best used to narrow a shortlist; the final call should come from your own test set.
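
One way to act on that advice is to check whether a gap on your own test set is larger than sampling noise before treating it as real. The sketch below uses a standard two-proportion z-test as an approximation; it is not the methodology of any cited leaderboard, and it assumes you record per-task pass/fail for two models on the same task set.

```python
# Is a pass-rate gap between two models on the SAME task set larger than what
# sampling noise alone could explain? Standard two-proportion z-test sketch;
# not the methodology used by any leaderboard cited in this article.
from math import sqrt

def pass_rate_gap_z(passes_a: int, passes_b: int, n_tasks: int) -> float:
    """z-score for the difference in pass rates of two models on n_tasks tasks."""
    p_a, p_b = passes_a / n_tasks, passes_b / n_tasks
    pooled = (passes_a + passes_b) / (2 * n_tasks)
    se = sqrt(2 * pooled * (1 - pooled) / n_tasks)
    return (p_a - p_b) / se if se > 0 else 0.0

# Example: a 4-point gap on a 50-task eval set (41/50 vs 39/50 passes).
print(f"z = {pass_rate_gap_z(41, 39, 50):.2f}")  # 0.50, far below ~1.96 (95% threshold)
```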

Testing the finalists: a practical checklist

Before actually switching to a model, test your top two or three candidates on the tasks you will really run. (A minimal harness sketch follows the checklist.)

  1. Use real prompts, files, and repositories. Public benchmarks rarely capture your codebase, documents, policies, and user behaviour in full.
  2. Keep the tool environment consistent. Coding-agent results are affected by terminal access, browsing, retrieval, repository context, and internal APIs.
  3. Measure cost and latency under the same settings. Pro mode or higher reasoning effort may improve quality, but it can also increase token usage and response time.
  4. Review failure cases by hand. For coding tasks, look at tests, diffs, maintainability, security regressions, and hallucinated dependencies.
  5. Include at least one low-cost challenger. If open weights or inference cost matter to you, Kimi K2.6 and DeepSeek-V4-Pro-Max both belong on the test list [1][18].
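
A minimal harness for the checklist above can stay small. In the sketch below, `call_model` is a placeholder for whichever provider SDK or local runtime you use, and the task and grading format is illustrative; the point is simply to capture pass/fail, latency, and token counts per task so failure cases can be reviewed by hand.

```python
# Minimal eval-harness loop: run the same tasks against each finalist and record
# outcome, latency, and token usage. call_model is a placeholder for your
# provider SDK; the task/grading shape here is illustrative only.
import json
import time
from dataclasses import dataclass, asdict
from typing import Callable

@dataclass
class Result:
    model: str
    task_id: str
    passed: bool
    latency_s: float
    input_tokens: int
    output_tokens: int

def run_eval(models: dict[str, Callable[[str], dict]],
             tasks: list[dict],
             grade: Callable[[dict, str], bool]) -> list[Result]:
    results: list[Result] = []
    for name, call_model in models.items():
        for task in tasks:
            start = time.monotonic()
            reply = call_model(task["prompt"])  # expected keys: text, in_tok, out_tok
            results.append(Result(
                model=name,
                task_id=task["id"],
                passed=grade(task, reply["text"]),
                latency_s=time.monotonic() - start,
                input_tokens=reply["in_tok"],
                output_tokens=reply["out_tok"],
            ))
    return results

def save_results(results: list[Result], path: str = "eval_results.jsonl") -> None:
    """Persist raw results so failure cases can be reviewed by hand later."""
    with open(path, "w") as f:
        for r in results:
            f.write(json.dumps(asdict(r)) + "\n")
```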

Summary

If you want to start with the highest-end candidates, test GPT-5.5 and Claude Opus 4.7 side by side: GPT-5.5 has the strongest Terminal-Bench 2.0 score cited here, while Claude Opus 4.7 has the strongest cited SWE-Bench Pro and SWE-Bench Verified scores [18][24]. If you need open weights, look at Kimi K2.6 first [1][6]. If cost is the binding constraint, put DeepSeek-V4-Pro-Max on the shortlist, but do not assume it can replace the premium options seamlessly before you have tested it [18][24].


Key takeaways

  • For terminal-heavy coding agents, try GPT-5.5 first; for software repair and hard reasoning without tools, try Claude Opus 4.7; for open-weight deployment, look at Kimi K2.6; for cost-sensitive hosted inference, put DeepSeek-V4-Pro-Max on the test list.
  • GPT-5.5 Pro should not be conflated with base GPT-5.5; where it is listed separately, GPT-5.5 Pro reaches 90.1% on BrowseComp and 57.2% on Humanity's Last Exam with tools [24].
  • Kimi K2.6 is described as an open-weight 1T-parameter MoE model with 32B active parameters; LLM Stats lists DeepSeek-V4-Pro-Max with 1M context and a cost column of $1.74/$3.48 [1][18].


Research conversation

Research question

Search for the benchmarks of GPT-5.5, Claude Opus 4.7, Kimi K2.6, DeepSeek V4. Compare them to help me understand more about them.

Studio Global AI (40 sources)

Cited answer

I found public benchmark data for all four, but not one fully independent, apples-to-apples leaderboard covering GPT-5.5, Claude Opus 4.7, Kimi K2.6, and DeepSeek V4 under the same harness. The practical read is: GPT-5.5 looks strongest for broad agentic coding/computer-use tasks, Claude Opus 4.7 is extremely strong on coding and hard reasoning, Kimi K2.6 is the strongest open-weight option in this group, and DeepSeek V4 looks like the best price/performance play if its reported results hold up across independent testing [4][5][6][13].

Key findings

  • GPT-5.5’s official benchmarks emphasize agentic coding and computer-use: 82.7% on Terminal-Bench 2.0, 58.6% on SWE-Bench Pro, 78.7% on OSWorld-Verified, 84.4% on BrowseComp, and 81.8% on CyberGym [4].

  • GPT-5.5 Pro appears stronger on some harder research/search/math tasks, including 90.1% on BrowseComp, 52.4% on FrontierMath Tier 1–3, and 39.6% on FrontierMath Tier 4 [4].

  • Claude Opus 4.7 is positioned by Anthropic as a premium hybrid reasoning model for coding and agents, with a 1M-token context window and API pricing of $5 per million input tokens and $25 per million output tokens [2]. Third-party benchmark reporting lists Claude Opus 4.7 at 87.6% on SWE-bench Verified and 94.2% on GPQA Diamond [5].

  • Kimi K2.6 is an open-weight multimodal MoE model with 1T total parameters, 32B active parameters, and a 256K-token context window [13]. Its model card reports 80.2% on SWE-Bench Verified, 58.6% on SWE-Bench Pro, 66.7% on Terminal-Bench 2.0, 90.5% on GPQA Diamond, and 54.0% on HLE-Full with tools [13].

  • DeepSeek V4-Pro-Max is reported as a 1.6T-parameter open model with much lower API pricing than GPT-5.5 and Claude Opus 4.7 [6]. Reported comparison data puts DeepSeek V4-Pro-Max at 90.1% on GPQA Diamond, 37.7% on HLE without tools, 48.2% on HLE with tools, and 67.9% on Terminal-Bench 2.0 [6].

Comparison

| Model | Best fit | Notable reported benchmarks | Main caveat |
| --- | --- | --- | --- |
| GPT-5.5 | Best general pick for agentic coding, computer use, research workflows, and tool-heavy tasks | Terminal-Bench 2.0: 82.7%; SWE-Bench Pro: 58.6%; OSWorld-Verified: 78.7%; BrowseComp: 84.4%; CyberGym: 81.8% [4] | OpenAI's strongest numbers include internal or vendor-run evals, so independent confirmation matters [4]. |
| GPT-5.5 Pro | Harder reasoning/search/math where cost is less important | BrowseComp: 90.1%; FrontierMath Tier 1–3: 52.4%; FrontierMath Tier 4: 39.6% [4] | Higher-tier "Pro" results are not directly comparable to base GPT-5.5 or open models unless the same effort/budget is used [4]. |
| Claude Opus 4.7 | Premium coding, long-context enterprise workflows, careful reasoning, and high-reliability agents | SWE-bench Verified: 87.6%; GPQA Diamond: 94.2%; 1M context window [2][5] | Some Anthropic benchmark results are presented in release materials or partner/internal evals, and not every benchmark is directly comparable to public leaderboard runs [8]. |
| Kimi K2.6 | Best open-weight option if you want strong coding/agentic performance and self-hostability | SWE-Bench Verified: 80.2%; SWE-Bench Pro: 58.6%; Terminal-Bench 2.0: 66.7%; GPQA Diamond: 90.5%; HLE-Full with tools: 54.0% [13] | Kimi's own model card notes different settings and re-evaluations for some comparison scores, so treat cross-model deltas cautiously [13]. |
| DeepSeek V4-Pro-Max | Best value candidate; strong open-model performance with much lower reported API cost | GPQA Diamond: 90.1%; HLE no tools: 37.7%; HLE with tools: 48.2%; Terminal-Bench 2.0: 67.9% [6] | Evidence is newer and less independently settled; I would wait for more third-party validation before treating it as equal to closed frontier models [6]. |

How to interpret the numbers

  • For software engineering, Claude Opus 4.7’s 87.6% SWE-bench Verified score is the standout among the reported figures I found, while GPT-5.5’s 82.7% Terminal-Bench 2.0 score is the strongest reported command-line/agentic coding number in this set [4][5].

  • For broad tool-using agent tasks, GPT-5.5 looks especially strong because its official release reports high scores across OSWorld-Verified, BrowseComp, Toolathlon, Terminal-Bench 2.0, and CyberGym rather than just one narrow benchmark [4].

  • For open-weight deployment, Kimi K2.6 and DeepSeek V4 are the two to watch: Kimi K2.6 has a strong official benchmark table across coding, reasoning, vision, and agentic tasks, while DeepSeek V4 emphasizes frontier-like performance at much lower cost [6][13].

  • For hard academic/scientific reasoning, Claude Opus 4.7’s reported 94.2% GPQA Diamond leads the figures I found, while GPT-5.5 and DeepSeek V4 are reported lower on GPQA Diamond in the comparison data I found [5][6].

  • For price/performance, DeepSeek V4 appears meaningfully cheaper than GPT-5.5 and Claude Opus 4.7 in reported API pricing, but cheaper inference only matters if its quality is sufficient for your workload [6].

Evidence notes

  • The strongest source for GPT-5.5 is OpenAI’s official GPT-5.5 release page, which includes the benchmark table for Terminal-Bench 2.0, SWE-Bench Pro, OSWorld-Verified, BrowseComp, FrontierMath, Toolathlon, and CyberGym [4].

  • The strongest source for Claude Opus 4.7 positioning, context window, availability, and pricing is Anthropic’s Opus 4.7 product/release material [2][8]. The cleanest text benchmark figures I found for Opus 4.7’s SWE-bench Verified and GPQA Diamond scores came from LLM Stats reporting [5].

  • The strongest source for Kimi K2.6 is Moonshot AI’s Hugging Face model card, which includes architecture details, context length, and the evaluation table [13].

  • The strongest text source I found for DeepSeek V4’s direct comparison against GPT-5.5 and Claude Opus 4.7 was VentureBeat’s launch/benchmark report [6].

Limitations / uncertainty

  • Insufficient evidence for one single, independent, fully apples-to-apples benchmark suite that evaluates all four models with the same prompts, tools, budgets, reasoning effort, and scoring harness.

  • Some numbers are vendor-reported, some are third-party summaries, and some benchmarks use different tool access or reasoning-effort settings, so small score differences should not be overinterpreted [4][8][13].

  • For real buying/deployment decisions, I would run your own 20–50 task eval set with your actual prompts, files, tools, latency targets, and cost constraints.

Summary

If you want the safest high-end general choice, start with GPT-5.5. If your priority is top-tier coding reliability and long-context enterprise work, test Claude Opus 4.7. If you need open weights and strong frontier-adjacent performance, Kimi K2.6 is the most compelling open option. If cost is the constraint and you can tolerate some validation risk, DeepSeek V4 is the one to benchmark first.

Sources

  • [1] [AINews] Moonshot Kimi K2.6: the world's leading Open Model ... (latent.space)

    Moonshot’s Kimi K2.6 was the clear release of the day: an open-weight 1T-parameter MoE with 32B active, 384 experts (8 routed + 1 shared), MLA attention, 256K context, native multimodality, and INT4 quantization, with day-0 support in vLLM, OpenRouter, Clou...

  • [6] moonshotai/Kimi-K2.6 - Hugging Face (huggingface.co)

    OSWorld-Verified 73.1 75.0 72.7 63.3 Coding Terminal-Bench 2.0 (Terminus-2) 66.7 65.4 65.4 68.5 50.8 SWE-Bench Pro 58.6 57.7 53.4 54.2 50.7 SWE-Bench Multilingual 76.7 77.8 76.9 73.0 SWE-Bench Verified 80.2 80.8 80.6 76.8 SciCode 52.2 56.6 51.9 58.9 48.7 OJ...

  • [11] AI Leaderboard 2026 - Compare Top AI Models & Rankings (llm-stats.com)

    19 Image 20: Moonshot AI Kimi K2.6NEW Moonshot AI 1,157 — 90.5% 80.2% 262K $0.95 $4.00 Open Source 20 Image 21: OpenAI GPT-5.2 Codex OpenAI 1,148 812 — — 400K $1.75 $14.00 Proprietary [...] 6 Image 7: Anthropic Claude Opus 4.5 Anthropic 1,614 1,342 87.0% 80...

  • [16] Claude Opus 4.7: Benchmarks, Pricing, Context & What's New (llm-stats.com)

    LLM Stats Logo Make AI phone calls with one API call Claude Opus 4.7: Benchmarks, Pricing, Context & What's New Claude Opus 4.7 scores 87.6% on SWE-bench Verified, 94.2% on GPQA, 1M token context, 3.3x higher-resolution vision, new xhigh effort level. $5/$2...

  • [17] Introducing Claude Opus 4.7 - Anthropic (anthropic.com)

    CyberGym: Opus 4.6’s score has been updated from the originally reported 66.6 to 73.8, as we updated our harness parameters to better elicit cyber capability. SWE-bench Multimodal: We used an internal implementation for both Opus 4.7 and Opus 4.6. Scores ar...

  • [18] SWE-Bench Verified Leaderboard - LLM Stats (llm-stats.com)

    Model Score Size Context Cost License --- --- --- 1 Anthropic Claude Mythos Preview Anthropic 0.939 — — $25.00 / $125.00 2 Anthropic Claude Opus 4.7 Anthropic 0.876 — 1.0M $5.00 / $25.00 3 Anthropic Claude Opus 4.5 Anthropic 0.809 — 200K $5.00 / $25.00 4 An...

  • [24] DeepSeek-V4 arrives with near state-of-the-art intelligence at 1/6th ... (venturebeat.com)

    Benchmark DeepSeek-V4-Pro-Max GPT-5.5 GPT-5.5 Pro, where shown Claude Opus 4.7 Best result among these GPQA Diamond 90.1% 93.6% — 94.2% Claude Opus 4.7 Humanity’s Last Exam, no tools 37.7% 41.4% 43.1% 46.9% Claude Opus 4.7 Humanity’s Last Exam, with tools 4...

  • [27] GPT-5.5 Benchmarks 2026: Scores, Rankings & Performance (benchlm.ai)

    Core Rankings Specialized Use Cases Dashboards Directories Guides & Lists Tools GPT-5.5 According to BenchLM.ai, GPT-5.5 ranks 5 out of 112 models on the provisional leaderboard with an overall score of 89/100. It also ranks 2 out of 16 on the verified lead...

  • [28] GPT-5.5: Pricing, Benchmarks & Performance - LLM Stats (llm-stats.com)

    9Image 42GPT-5 mini 0.22 10Image 43o3 0.16 GPQAView → 4 of 10 Image 44: LLM Stats Logo A challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. Questions are Google-proof and extremely difficult, w...

  • [30] AI Model Benchmarks Apr 2026 | Compare GPT-5, Claude ... (lmcouncil.ai)

    METR Time Horizons Model Minutes --- 1 Claude Opus 4.6 (unknown thinking) 718.8 ±1815.2 2 GPT-5.2 (high) 352.2 ±335.5 3 GPT-5.3 Codex 349.5 ±333.1 4 Claude Opus 4.5 (no thinking) 293.0 ±239.0 5 Claude Opus 4.5 (16k thinking) 288.9 ±558.2 SWE-bench Verified...

  • [36] GPT-5.5 Doubles the Price, Google Goes Full Agent, DeepSeek V4 ... (thecreatorsai.com)

    GPT-5.5 is out — $5 per million input, $30 per million output. That's exactly double GPT-5.4 and 20% more than Claude Opus 4.7. OpenAI released ... 21 hours ago

  • [38] Introducing GPT-5.5 - OpenAI (openai.com)

    Introducing GPT-5.5, our smartest model yet—faster, more capable, and built for complex tasks like coding, research, and data analysis ... 2 days ago