studioglobal
Report public URL · April 24, 2026 · 82 sources

Please read the introductions of Claude Opus 4.7 and GPT 5.5: https://www.anthropic.com/news/claude-opus-4-7 https://openai.com/index/introducing-gpt-5-5 Research them in depth and write a research report.


Research answer

The following report is compiled strictly from the evidence you provided. On the current evidence pack, Claude Opus 4.7 can be described with reasonable confidence, but GPT-5.5 can only receive an "insufficient evidence" verdict, because the OpenAI materials comprise only the API pricing page, official articles mentioning GPT-5.3 / GPT-5 Codex, and a Wikipedia entry on GPT-5, with no verifiable content from a GPT-5.5 introduction page [4][6][7]. Anthropic's official materials show that Claude Opus 4.7 launched on April 16, 2026, is positioned as its most capable generally available model for complex reasoning and agentic coding, and keeps the same $5 / $25 per MTok pricing as Opus 4.6 [2]. Anthropic also states that Opus 4.7 offers a 1M context window with no long-context premium at standard API pricing, and advises developers to raise their max_tokens headroom to accommodate compaction triggers [5]

Key findings

  • In terms of evidence completeness, the Anthropic side has multiple primary official documents that cross-corroborate; the OpenAI side lacks any direct official introduction of GPT-5.5, so this study is inherently an asymmetric comparison [1][2][5][4][7]
  • The confirmable core facts about Claude Opus 4.7 include: release date, flagship positioning, the complex-reasoning and agentic-coding claims, the 1M context window, and pricing unchanged from its predecessor [2][5]
  • Anthropic's messaging centers not only on model capability but on deployment-level usability: unchanged pricing, no long-context surcharge, max_tokens tuning advice, and tool-based workflow examples [1][2][5]
  • For GPT-5.5: insufficient evidence. The current evidence cannot confirm its product positioning, pricing, context window, tool support, benchmarks, or any direct differences from Claude Opus 4.7 [4][6][7]

Confirmed facts

Claude Opus 4.7

  • Anthropic released Claude Opus 4.7 on April 16, 2026 [2]
  • Anthropic describes it as its most capable generally available model for complex reasoning and agentic coding [2]
  • Its pricing matches Opus 4.6 at $5 / $25 per MTok [2]
  • Claude Opus 4.7 offers a 1M context window with no long-context premium at standard API pricing [5]
  • Anthropic advises developers to raise their max_tokens headroom to accommodate compaction triggers [5]
  • Anthropic reports meaningful gains for Claude Opus 4.7 on knowledge-worker tasks [5]
  • Anthropic's web search tool documentation uses claude-opus-4-7 as its example model [1]
  • Anthropic's release notes say Opus 4.7 ships with capability improvements, new features, and an updated tokenizer [2]
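The cited $5 / $25 per MTok rates translate directly into a per-request cost estimate. The sketch below illustrates that arithmetic; only the two per-MTok rates come from the release notes cited above, while the helper name and the example token counts are illustrative assumptions:

```python
# Sketch: estimate a request's cost at the cited Opus 4.7 rates
# ($5 per million input tokens, $25 per million output tokens).
# Only the rates come from the cited release notes; the function
# name and example token counts are illustrative.
INPUT_USD_PER_MTOK = 5.00
OUTPUT_USD_PER_MTOK = 25.00

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    return (input_tokens / 1_000_000) * INPUT_USD_PER_MTOK \
         + (output_tokens / 1_000_000) * OUTPUT_USD_PER_MTOK

# e.g. a 200k-token prompt with an 8k-token reply:
print(round(estimate_cost_usd(200_000, 8_000), 2))
```

At these rates the example request comes to roughly $1.20, which is why the unchanged price band matters for long-context workloads.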

OpenAI / GPT-5.5 side

  • This evidence pack contains no direct content from a GPT-5.5 introduction page to verify against [4][6][7]
  • One visible OpenAI primary source is the API pricing page, but it provides family-level pricing framing rather than GPT-5.5-specific specs and price details [4]
  • Another official OpenAI document mentions GPT-5.3 Instant, GPT-5.3-Codex, and GPT-5 Codex, which shows that OpenAI did extend the GPT-5 family naming and product line in 2026, but this still does not establish GPT-5.5's specific positioning [7]

What remains inference

  • Reading Anthropic's "meaningful gains" as a clear lead on every benchmark or every knowledge-work scenario remains an inference, since no quantitative results or evaluation tables are available [5]
  • Judging Claude Opus 4.7 to be a stronger reasoning or coding model than GPT-5.5 is an unverifiable inference, because the corresponding GPT-5.5 material is absent from this evidence pack [2][5][4][7]
  • Taking claude-opus-4-7's appearance in the web search docs example to mean it fully supports every tool capability across all Anthropic product tiers is also an inference; the available evidence only shows that the official docs use it in an example [1]
  • Converting "unchanged price + 1M context" directly into a "best price-performance" conclusion likewise lacks comparison data for GPT-5.5 [2][5][4]
  • Extrapolating GPT-5.5's capability logic or commercial positioning from OpenAI's existing GPT-5 family naming is also unsupported by the evidence [7]

What the evidence suggests

  • Anthropic's commercial message for Opus 4.7 is "capability upgrade at the existing price band," which lowers upgrade friction for existing customers [2]
  • Anthropic's product differentiation lies not only in model capability but in its long-context pricing policy and the clarity of its deployment guidance [5]
  • Anthropic wants Opus 4.7 to look well suited to tool-based, agentic, and software-development workflows, since both its official positioning and its documentation examples converge on that direction [1][2]
  • The visible OpenAI signal in this evidence looks more like an expanding GPT-5 product family than a distinct GPT-5.5 product narrative, so its relative strengths cannot be judged [4][7]

Conflicting evidence or uncertainty

  • Claude Opus 4.7's launch date is in conflict: Anthropic's official release notes say April 16, 2026 [2], while a social media post says April 17, 2026 [69]. In this case the official primary source [2] is clearly more credible than the social post [69].
  • For GPT-5.5: insufficient evidence. With no official introduction page content, spec sheet, benchmarks, or pricing details, no like-for-like comparison is possible [4][6][7]
  • Anthropic's capability claims are currently qualitative, with no benchmark tables, evaluation methodology, or workload breakdown, so the size of the improvement cannot be estimated [2][5]
  • Anthropic's release notes mention an updated tokenizer and indicate possible API-related changes, but the available excerpts lack the detail needed to assess migration cost [2]
  • On the OpenAI side, the Wikipedia entry serves only as weak background and cannot substitute for an official GPT-5.5 introduction page or model card [6]

Open questions

  • What is GPT-5.5's official positioning: reasoning flagship, speed-oriented model, Codex variant, or another product tier?
  • What are GPT-5.5's context window, pricing, tool support, and reasoning / coding benchmarks?
  • On which evaluation sets, and by what margin, do Anthropic's claimed knowledge-worker gains materialize?
  • How much compatibility impact do Opus 4.7's updated tokenizer and API changes have on existing applications?
  • From a practical procurement standpoint, how do the two compare on latency, stability, agentic-workflow success rate, and total cost?

Sources worth trusting most

  • [2] Anthropic release notes: primary, clearly timestamped, directly describes Opus 4.7's release, positioning, and pricing [2]
  • [5] Anthropic model page: primary, directly provides the 1M context, the absence of a long-context premium, the capability improvements, and the max_tokens advice [5]
  • [1] Anthropic API docs: primary, shows that Opus 4.7 already appears in official tool-use examples [1]
  • [4] OpenAI API pricing: primary, but offers only family-level pricing context, with no GPT-5.5-specific page details [4]
  • [7] OpenAI official article: primary, but only indirectly shows GPT-5 family naming and product-line expansion; still not a GPT-5.5 introduction page [7]
  • [6] Wikipedia: secondary background, and its subject is GPT-5, not GPT-5.5, so it is markedly less credible than official documents [6]
  • [69] Facebook post: a community source, suitable only for flagging the date conflict, not for defining official specs or release information [69]

Recommended next step

  1. First obtain the full text or verifiable excerpts of GPT-5.5's official introduction page, because the current OpenAI evidence is not GPT-5.5-specific material [4][7]
  2. Then compare on identical fields: release date, positioning, context window, price, benchmarks, tool support, API compatibility [2][5]
  3. Only then carry out real procurement or technical decision analysis; otherwise the only available conclusion is the asymmetric one that the Claude data is complete while the GPT-5.5 data has large gaps [1][2][5][4][7]
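A field-by-field comparison of this kind stays honest only if unverified cells are left empty rather than guessed. The sketch below shows one way to hold that discipline; the Opus 4.7 cell values restate the cited findings above, while the structure, names, and the blanket "insufficient evidence" marker for GPT-5.5 are illustrative assumptions:

```python
# Sketch: a side-by-side comparison template in which unverified
# cells stay marked "insufficient evidence" instead of being guessed.
INSUFFICIENT = "insufficient evidence"

FIELDS = [
    "release date", "positioning", "context window",
    "price", "benchmarks", "tool support", "API compatibility",
]

comparison = {
    "Claude Opus 4.7": {
        "release date": "2026-04-16 [2]",
        "positioning": "complex reasoning and agentic coding [2]",
        "context window": "1M tokens, no long-context premium [5]",
        "price": "$5 / $25 per MTok [2]",
        "benchmarks": INSUFFICIENT,         # qualitative claims only [5]
        "tool support": "web search docs example model [1]",
        "API compatibility": INSUFFICIENT,  # tokenizer/API changes unquantified [2]
    },
    # No GPT-5.5 introduction page in the evidence pack, so every cell stays open.
    "GPT-5.5": {field: INSUFFICIENT for field in FIELDS},
}

open_cells = sum(v == INSUFFICIENT for row in comparison.values() for v in row.values())
print(open_cells)  # 9 of 14 cells still lack verifiable evidence
```

Filling the nine open cells, seven of them on the GPT-5.5 side, is exactly what step 1 above is for.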

Summary

Strictly on the current evidence, Claude Opus 4.7 is a model backed by a clear official narrative: it was released on April 16, 2026, is positioned as Anthropic's most capable generally available model for complex reasoning and agentic coding, offers a 1M context, and keeps $5 / $25 per MTok pricing [2][5]. By contrast, this evidence does not include the actual content of a GPT-5.5 introduction page, so an equal, credible, adequately detailed comparison cannot be completed [4][6][7]. The most rigorous conclusion is not "which is stronger" but: we can currently describe Claude Opus 4.7 reliably, while for GPT-5.5 the verdict remains insufficient evidence [2][5][4][6][7]

Supporting visuals

  • Introducing GPT-5.5 | OpenAI: soft pastel background with flower petals and the text "GPT-5.5" across the center.
  • Introducing GPT-5.5 | OpenAI: comparison chart of performance metrics for GPT-5.5, GPT-5.4, and other models, highlighting gains in reasoning, knowledge tasks, and browsing, with references to upcoming versions and safety safeguards scheduled for April 2026.
  • Introducing GPT-5.5 | OpenAI: dark interface showing a 3D mathematical model of interwoven spheres and curves, with sections labeled "Surface Intersection Lab", "Weierstrass", and "smooth", alongside equations and numerical data.
  • codex-artemis-demo: digital simulation of spacecraft navigation data tracking the closest lunar approach, with detailed metrics and trajectory information.
  • GPT-5.5-FirstImpressions-1920x1080: a person with glasses smiling off-camera, with the text "GPT-5.5" and "First impressions" overlaid.
  • Anthropic Launches Claude Opus 4.7: split illustration of a brain, the left half a glowing blue wireframe (AI) and the right half a dark metallic lock (security or restricted access), announcing the April 2026 release.
  • Introducing Claude Opus 4.7 \ Anthropic: bar graph comparing ELO scores of Opus 4.7, Opus 4.6, GPT-5.4, and Gemini 3.1 Pro.
  • Introducing Claude Opus 4.7 \ Anthropic: stylized silhouette of a human head with neural-network connections beside an abstract face with question marks.
  • Introducing Claude Opus 4.7 \ Anthropic: comparative table of model performance metrics, emphasizing Opus 4.7's 93.9% score in Agentic coding.
  • Anthropic Releases Claude Opus 4.7 with Automated Real-Time: the text "Claude" and "Opus 4.7" on a soft pink background with a starburst graphic.

Research conversation

Studio Global AI · 40 sources

Answer with citations

Based on the evidence currently provided, I cannot produce a research report that deeply and reliably compares the official introductions of Claude Opus 4.7 and GPT 5.5 themselves. The available sources do not include the body text or verifiable excerpts of either introduction; nearly all of the material on hand concerns AI benchmarks in the life sciences / bioinformatics domain, such as LABBench2 and BixBench [1][7]. Any specific conclusion about the two models' features, performance, pricing, API, context length, safety design, or product positioning would therefore lack evidentiary support; for the original question: insufficient evidence. [1][7]

Key findings

  • The provided sources clearly mismatch the question: the evidence centers on evaluating AI for biology research or bioinformatics, not on the official product introductions of Claude Opus 4.7 or GPT 5.5 [1][2][3][4][5][6][7]

  • LABBench2 is described as "an improved benchmark for AI systems performing biology research"; it is a biology-research capability benchmark, not a commercial model's release or introduction document [1]

  • BixBench is described as "a comprehensive benchmark for LLM-based agents in computational biology", and several papers treat it as the evaluation benchmark for computational biology, scientific analysis, or AI-scientist-style work [2][3][4][5][7]

  • Another source notes that Bioprobench, in reviewing existing biomedical / biological benchmarks, explicitly lists LAB-Bench and BixBench as related benchmarks, again showing that this evidence set centers on life-science task evaluation, not general-purpose model product introductions [6]

  • The only material close to a "release" in nature is a LinkedIn post; it discusses the release of BixBench and argues that current models perform poorly but may improve, and it is still not an official Claude Opus 4.7 or GPT 5.5 document [63]

  • Consequently, the current evidence cannot reliably answer the core research questions: the two models' official positioning, key capability claims, differences from their predecessors, tool usage, commercial plans, formal benchmark results, or relative strengths [1][2][3][4][5][6][7][63]

Evidence notes

  • The formal research-grade sources available are mainly arXiv papers on biology research, omics, protocol understanding, scientific analysis, and computational biology benchmarks, not the two vendors' announcement pages [1][2][3][4][5][6][7]

  • These papers share a consistent theme: BixBench is repeatedly cited as the benchmark for computational biology / scientific analysis, which shows the sources are interrelated but also confirms they do not match the requested "Claude Opus 4.7 vs GPT 5.5 introduction page comparison" [2][3][4][5][7]

  • [63] is a LinkedIn post rather than a formal product description or systematic evaluation; at most it provides background context and cannot substitute for official introductions or rigorous model-comparison evidence [63]

Limitations / uncertainty

  • Insufficient evidence.

  • The body text, verifiable excerpts, or official spec summaries of the Claude Opus 4.7 and GPT 5.5 introduction pages were not provided, so I cannot honestly summarize their content.

  • The available sources contain no product specs, pricing, API details, context lengths, safety policies, or official performance claims for either model, so any detailed comparison would become speculation.

  • Most sources are domain-specific benchmark papers whose tasks center on the life sciences, which is insufficient for extrapolating to an overall competitiveness comparison of general-purpose commercial models [1][2][3][4][5][6][7]

Summary

What this evidence set can reliably support is only this narrower conclusion: recent AI-for-science research is using benchmarks such as LABBench2 and BixBench to evaluate model capability on biology research, bioinformatics, omics, and related scientific-analysis tasks [1][2][3][4][5][6][7]. This material cannot support a substantive, verifiable comparison of the Claude Opus 4.7 and GPT 5.5 introduction pages, so for the present question the only answer is: insufficient evidence. [1][2][3][4][5][6][7][63]

If you like, in the next round I can rebuild a full comparative research-report framework directly from the text of those two official pages.


Sources

  • [1] API Pricing (openai.com)

    OpenAI API Pricing | OpenAI Skip to main content Log inTry ChatGPT(opens in a new window) Research Products Business Developers Company Foundation(opens in a new window) OpenAI API Pricing | OpenAI # API Pricing Contact sales ## Flagship models Our frontier models are designed to spend more time thinking before producing a response, making them ideal for complex, multi-step problems. Choose your processing mode Standard Batch -50%Data residency +10% ## GPT-5.5 (coming soon) A new class of intelligence for coding and professional work. ### Price Input: $5.00 / 1M tokens Cached input: $0.50 /…

  • [2] Accelerating the cyber defense ecosystem that protects us all - OpenAI (openai.com)

    Image 3: Child safety blueprint > card image Introducing the Child Safety Blueprint Safety Apr 8, 2026 Our Research Research Index Research Overview Research Residency Economic Research Latest Advancements GPT-5.3 Instant GPT-5.3-Codex GPT-5 Codex Safety Safety Approach Security & Privacy Trust & Transparency ChatGPT Explore ChatGPT(opens in a new window) Business Enterprise Education Pricing(opens in a new window) Download(opens in a new window) Sora Sora Overview Features Pricing Sora log in(opens in a new window) API Platform Platform Overview Pricing API log in(opens in a new window) Docu…

  • [3] Codex for (almost) everything | OpenAI (openai.com)

    What’s next In just the year since Codex launched, the ways developers are using Codex has expanded. Developers start with Codex to write code, then increasingly use it to understand systems, gather context, review work, debug issues, coordinate with teammates, and keep longer-running work moving. Our mission is to ensure that AGI benefits all of humanity. That includes narrowing the gap between what people can imagine and what they can build. This release brings Codex closer to the tools, workflows, and decisions involved in building software, with much more to come soon. 2026 Codex ## Au…

  • [4] GPT-5.3 and GPT-5.5 in ChatGPT - OpenAI Help Center (help.openai.com)

    GPT-5.3 and GPT-5.5 in ChatGPT | OpenAI Help Center Image 1: OpenAI Language English United States Login 1. All Collections 2. ChatGPT 3. GPT-5.3 and GPT-5.5 in ChatGPT # GPT-5.3 and GPT-5.5 in ChatGPT Updated: 10 minutes ago As of February 13, 2026, models GPT-4o, GPT-4.1, GPT-4.1 mini, OpenAI o4-mini, and GPT-5 (Instant and Thinking) have been retired from ChatGPT and are no longer available. API access remains unchanged. _ChatGPT Business, Enterprise, and Edu customers will retain access to GPT-4o within Custom GPTs until April 3, 2026. After April 3, GPT-4o will be fully retired acros…

  • [5] GPT-5.3 and GPT-5.5 in ChatGPT | OpenAI Help Center (help.openai.com)

    GPT-5.3 and GPT-5.5 in ChatGPT | OpenAI Help Center Image 1: OpenAI Language English United States Login 1. All Collections 2. ChatGPT 3. GPT-5.3 and GPT-5.5 in ChatGPT # GPT-5.3 and GPT-5.5 in ChatGPT Updated: 16 minutes ago As of February 13, 2026, models GPT-4o, GPT-4.1, GPT-4.1 mini, OpenAI o4-mini, and GPT-5 (Instant and Thinking) have been retired from ChatGPT and are no longer available. API access remains unchanged. _ChatGPT Business, Enterprise, and Edu customers will retain access to GPT-4o within Custom GPTs until April 3, 2026. After April 3, GPT-4o will be fully retired acros…

  • [6] GPT-5.5 is here! Available in Codex and ChatGPT today (community.openai.com)

    Announcements models You have selected 0 posts. select all cancel selecting 3.7k views 35 likes 2 links 8 users Image 2: polepole2 Image 3: Espresso Bean Image 4: alonso quintanilla Image 5: Mauricio Barros Image 6 Summarize Apr 23 1 / 10 Apr 24 5h ago ## post by vb 8 hours ago Image 7 vb Leader Image 8: potato 3 8h Introducing GPT-5.5 A new class of intelligence for real work and powering agents, built to understand complex goals, use tools, check its work, and carry more tasks through to completion. It marks a new way of getting computer work done. Image 9: HGm8jVWbsAAwL60 HGm8jVWbsAAwL60 1…

  • [7] GPT-5.5 is here! Available in Codex and ChatGPT today (community.openai.com)

    GPT-5.5 is here! Available in Codex and ChatGPT today API deployments require different safeguards and we are working closely with partners and customers on the safety and security requirements for serving it at scale. We’ll bring GPT‑5.5 and GPT‑5.5 Pro to the API very soon. ### Related topics [...] ### Related topics | Topic | | Replies | Views | Activity | --- --- | GPT-5.1-Codex-Max is now available in the API Announcements | 11 | 2874 | December 11, 2025 | | GPT-5 Codex available in the API API codex , gpt-5-codex | 0 | 387 | September 23, 2025 | | Announcing GPT-5.1 in the API API api…

  • [8] GPT-5.5 is here! Available in Codex and ChatGPT today (community.openai.com)

    GPT-5.5 is here! Available in Codex and ChatGPT today #### Sam Altman ### Related topics | Topic | | Replies | Views | Activity | --- --- | GPT-5.1-Codex-Max is now available in the API Announcements | 11 | 2874 | December 11, 2025 | | GPT-5 Codex available in the API API codex , gpt-5-codex | 0 | 386 | September 23, 2025 | | Announcing GPT-5.1 in the API API api , gpt-5 | 7 | 1118 | November 18, 2025 | | Codex Updates: Mini Model, Higher Limits & Priority Processing Codex | 0 | 1363 | November 7, 2025 | | Introducing GPT-5.2-Codex Codex announcement , codex , chatgpt , api | 3 | 1189 | Dec…

  • [9] GPT-5.5 is here! Available in Codex and ChatGPT today - #7 by _j (community.openai.com)

    GPT-5.5 is here! Available in Codex and ChatGPT today A straight-up price-doubling on top of a price-doubling between gpt-5.1 to gpt-5.4, on top of a price doubling to use faster yet cheaper to operate inference, reserved for service_tier:priority. For what they directly say is the same “latency” aka compute time. And with pricing tied to API, that’s ChatGPT-purchased Codex credits going half as far, on a platform where they give 0-day model shutoffs. (“Remember, we don’t have enough thinking time for both you and the US Department of War”) ### Related topics [...] ### Related topics | Top…

  • [10] GPT-5.5 System Card - OpenAI (openai.com)

    GPT-5.5 System Card | OpenAI Skip to main content Log inTry ChatGPT(opens in a new window) Research Products Business Developers Company Foundation(opens in a new window) Try ChatGPT(opens in a new window)Login OpenAI April 23, 2026 SafetyPublication # GPT‑5.5 System Card Read the System Card(opens in a new window) Share ## 1. Introduction GPT‑5.5 is a new model designed for complex, real-world work, including writing code, researching online, analyzing information, creating documents and spreadsheets, and moving across tools to get things done. Relative to earlier models, GPT‑5.5 understan…

  • [11] Introducing GPT-5 - OpenAI (openai.com)

    Keep reading View all Image 1: Hero Art Card SEO 1x1 Introducing GPT-5.5 Product Apr 23, 2026 Image 2: Making ChatGPT free for clinicians Making ChatGPT better for clinicians Product Apr 22, 2026 Image 3: OAI Blog Agents Hero 1x1 Introducing workspace agents in ChatGPT Product Apr 22, 2026 Our Research Research Index Research Overview Research Residency Economic Research Latest Advancements GPT-5.5 GPT-5.4 GPT-5.3 Instant GPT-5.3-Codex Safety Safety Approach Security & Privacy Trust & Transparency ChatGPT Explore ChatGPT(opens in a new window) Business Enterprise Education Pricing(opens in…

  • [12] Introducing GPT-Rosalind for life sciences research - OpenAI (openai.com)

    Over time, we expect these systems to become increasingly capable partners in discovery—helping scientists move faster from question to evidence, from evidence to insight, and from insight to new treatments for patients. ## Keep reading View all Image 2: Introducing OpenAI Privacy Filter Introducing OpenAI Privacy Filter Research Apr 22, 2026 Image 3: Images 2.0 blog art card Introducing ChatGPT Images 2.0 Product Apr 21, 2026 Image 4: model spec > art card Inside our approach to the Model Spec Research Mar 25, 2026 Our Research Research Index Research Overview Research Residency Economic Res…

  • [13] Trusted access for the next era of cyber defense - OpenAI (openai.com)

    Trusted access for the next era of cyber defense | OpenAI Skip to main content Log inTry ChatGPT(opens in a new window) Research Products Business Developers Company Foundation(opens in a new window) Try ChatGPT(opens in a new window)Login OpenAI Table of contents Scaling Trusted Access for Cyber and GPT-5.4-Cyber Looking ahead to our upcoming model release and beyond April 14, 2026 SecuritySafety # Trusted access for the next era of cyber defense We continue to evolve trusted access, safeguards, and ecosystem support to help cyber defenders protect us all. Loading… Share [...] 2026 ## Auth…

  • [14] GPT-5.5 Bio Bug Bounty - OpenAI (openai.com)

    GPT-5.5 Bio Bug Bounty | OpenAI Skip to main content Log inTry ChatGPT(opens in a new window) Research Products Business Developers Company Foundation(opens in a new window) GPT-5.5 Bio Bug Bounty | OpenAI Table of contents Invitation Program overview How to participate April 23, 2026 Safety # GPT‑5.5 Bio Bug Bounty Testing universal jailbreaks for biorisks in GPT‑5.5 Apply here(opens in a new window) Share ## Invitation [...] If you’re interested in supporting OpenAI’s work to deliver safe and secure artificial intelligence beyond the Bio Bounty program, you can learn about our Safety Bug…

  • [15] GPT-5.5 System Card - Deployment Safety Hub - OpenAI (deploymentsafety.openai.com)

    As we did for GPT-5.4 Thinking before it, we are continuing to treat GPT-5.5 as High capability in the Biological and Chemical domain. We have applied the corresponding safeguards for this model as described in the GPT-5 system card. As we did for GPT-5.3-Codex and GPT-5.4-thinking, we are treating GPT-5.5 as High capability in the Cybersecurity domain, but below Critical. Our cybersecurity safeguards have increased for this launch, reflecting GPT-5.5’s increased capabilities in this domain. While GPT-5.5 demonstrates an increase in cyber security capabilities compared to 5.4, the model does…

  • [16] Introducing GPT-5.5 - OpenAI (openai.com)

    Introducing GPT-5.5 | OpenAI Skip to main content Log inTry ChatGPT(opens in a new window) Research Products Business Developers Company Foundation(opens in a new window) Introducing GPT-5.5 | OpenAI Table of contents Model capabilities Next-generation inference efficiency Advancing cybersecurity for everyone’s safety Availability and pricing Evaluations April 23, 2026 ProductRelease # Introducing GPT‑5.5 A new class of intelligence for real work 00:00 01:28 Listen to article Share We’re releasing GPT‑5.5, our smartest and most intuitive to use model yet, and the next step toward a new way…

  • [17] OpenAI News (openai.com)

    OpenAI News | OpenAI Skip to main content Log inTry ChatGPT(opens in a new window) Research Products Business Developers Company Foundation(opens in a new window) Try ChatGPT(opens in a new window)Login OpenAI ## All Company Research Product Safety Engineering Security Global Affairs AI Adoption All Filter Sort Switch cards to show Media Switch cards to hide Media Image 1: Hero Art Card SEO 1x1 Introducing GPT-5.5 Product Apr 23, 2026 Image 2: System Card Card SEO 1x1 GPT-5.5 System Card Safety Apr 23, 2026 Image 3: GPT-5.5 Bio Bug Bounty > art card GPT-5.5 Bio Bug Bounty Safety Apr 23, 202…

  • [18] GPT-5.5 System Card - Deployment Safety Hub - OpenAI (deploymentsafety.openai.com)

    We find that GPT-5.5 performs generally on par with its predecessors. Minor regressions are not statistically significant. In addition to the evaluations reported in the table above, we previously ran vision evaluations for illicit and attack planning. We removed those evaluations as the harms are measured as disallowed content evaluations. ## 3.3 Avoiding Accidental Data-Destructive Actions We ran our destructive actions evaluation that measures the model’s ability to preserve user-produced changes and avoid taking accidental destructive actions. We find that GPT-5.5 performs better than ear…

  • [19] OpenAI | OpenAI (openai.com)

    OpenAI homepage: # Introducing GPT-5.5. A new class of intelligence for real work. Learn more.

  • [20] [PDF] GeneBench: Assessing AI Agents for Multi-Stage Inference ... - OpenAIcdn.openai.com

    GPT-5.5 reports pass rate for GPT-5.5 at the xhigh reasoning setting over repeated runs. Abbreviations: GWAS, genome-wide association study; LDL

  • [21] Web search tool - Claude API Docsdocs.anthropic.com

    web_search_20260209

    import anthropic

    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-opus-4-7",
        max_tokens=4096,
        messages=[
            {
                "role": "user",
                "content": "Search for the current prices of AAPL and GOOGL, then calculate which has a better P/E ratio.",
            }
        ],
        tools=[{"type": "web_search_20260209", "name": "web_search"}],
    )
    print(response)
    ## How to use web search: Your organization's administrator must enable web search in the Claude Console, then provide the web search tool in your API request. [...] Web search usage is charged in addition to token usa…

  • [22] Claude Platform - Claude API Docsdocs.anthropic.com

    April 16, 2026 We've launched Claude Opus 4.7, our most capable generally available model for complex reasoning and agentic coding, at the same $5 / $25 per MTok pricing as Opus 4.6. See What's new in Claude Opus 4.7 for capability improvements, new features, and the updated tokenizer. Opus 4.7 includes API breaking changes versus Opus 4.6; see Migrating to Claude Opus 4.7 before upgrading. Claude in Amazon Bedrock is now open to all Amazon Bedrock customers. Claude Opus 4.7 and Claude Haiku 4.5 are available self-serve from the Bedrock console through the Messages API endpoint at `/anthr…

  • [23] Release notes | Claude Help Centerdocs.anthropic.com

    February 12, 2026 Self-serve Enterprise plans Previously, Enterprise plans were only available to customers working with our Sales team. Now, any organization can purchase an Enterprise plan directly on our website with no Sales conversation required. Self-serve Enterprise plans have a single seat type that includes access to Claude, Claude Code, and Cowork. For more information, refer to our blog post or What is the Enterprise plan? ### February 5, 2026 Claude Opus 4.6 launch We’ve upgraded our smartest model and improved its coding skills. Read our blog post for more information: Introd…

  • [24] Anthropic at Google Cloud Next 2026anthropic.com


  • [25] Detecting and Countering Malicious Uses of Claude - Anthropicanthropic.com

    landscape and help the wider AI ecosystem develop more robust safeguards. [...] Our intelligence program is meant to be a safety net by both finding harms not caught by our standard scaled detection and to add context in how bad actors are using our models maliciously. In investigating these cases, our team applied techniques described in our recently published research papers, including Clio and hierarchical summarization. These approaches allowed us to efficiently analyze large volumes of conversation data to identify patterns of misuse. These techniques, coupled with classifiers (which ana…

  • [26] Developing nuclear safeguards for AI through public-private ...anthropic.com

    Introducing Claude Design by Anthropic Labs: Today, we're launching Claude Design, a new Anthropic Labs product that lets you collaborate with Claude to create polished visual work like designs, prototypes, slides, one-pagers, and more. ### Introducing Claude Opus 4.7: Our latest Opus model brings stronger performance across coding, agents, vision, and multi-step tasks, with greater thoroughness and consistency on the work that matters most. ### Anthropic's Long-Term Benefit Trust appoints Vas Narasimhan to Board of Directors

  • [27] Elections and AI in 2024: observations and learnings - Anthropicanthropic.com

    ### Looking forward: Protecting election integrity requires constant vigilance and adaptation as AI technology evolves. We remain committed to developing sophisticated tes…

  • [28] Introducing Claude Design by Anthropic Labsanthropic.com

    For Enterprise organizations, Claude Design is off by default. Admins can enable it in Organization settings. Start designing at claude.ai/design. ## Related content: Introducing Claude Opus 4.7 · Anthropic's Long-Term Benefit Trust appoints Vas Narasimhan to Board of Directors · Anthropic expands partnership with Google and Broadcom for multiple gigawatts of next-generation compute

  • [29] Introducing Claude Opus 4.7 - Anthropicanthropic.com

    Opus 4.7 is available today across all Claude products and our API, Amazon Bedrock, Google Cloud's Vertex AI, and Microsoft Foundry. Pricing remains the same as Opus 4.6: $5 per million input tokens and $25 per million output tokens. Developers can use claude-opus-4-7 via the Claude API. ## Testing Claude Opus 4.7: Claude Opus 4.7 has garnered strong feedback from our early-access testers. [...] We are releasing Opus 4.7 with safeguards that automatically detect and block requests that indicate prohibited or high-risk cybersecurity uses. Wha…

  • [30] Responsible Scaling Policy Version 3.0 - Anthropicanthropic.com


  • [31] [PDF] Claude Sonnet 4.6 System Card - Anthropicanthropic.com

    achieved by any model snapshot into our final capabilities assessment. We generally present results from the final, deployed model unless otherwise specified, though some examples of particular model behaviors are from earlier snapshots and many of our dangerous capability evaluations measure whichever snapshot scored highest. 1.2.3 AI Safety Level determination process: Claude Sonnet 4.6 was evaluated following the preliminary assessment protocol, which includes automated evaluations. The safety level required was determined with reference to the recently-rele…

  • [32] Anthropic's Transparency Hubanthropic.com

    an adaptive attacker was given 100 attempts to craft a successful injection. With new safeguards in place, only 1.4% of attacks were successful against Claude Opus 4.5, compared to 10.8% for Claude Sonnet 4.5 with our previous safeguards. [...] We conducted multiple types of biological risk evaluations, including evaluations from biodefense experts, multiple-choice evaluations, open-ended questions, and task-based agentic evaluations. One example of a biological risk evaluation we conducted involved controlled trials measuring AI assistance in the planning and acquisition of bioweapons. The c…

  • [33] Home \ Anthropicanthropic.com

    # AI research and products that put safety at the frontier. AI will have a vast impact on the world. Anthropic is a public benefit corporation dedicated to securing its benefits and mitigating its risks. ## Project Glasswing: Securing critical software for the AI era. ## Latest releases: ### Claude Opus 4.…

  • [34] Newsroom - Anthropicanthropic.com

    # Newsroom. Press inquiries: press@anthropic.com. Introducing Claude Opus 4.7 (Product, Apr 16, 2026): Our latest Opus model brings stronger performance across coding, agents, vision, and multi-step tasks, with greater thoroughness and consistency on the work that matters most. [...] Apr 17, 2026, Product: Introducing Claude Design by Anthropic Labs. Apr 16, 2026, Product: Introducing Claude Opus 4.…

  • [35] Research - Anthropicanthropic.com

    Apr 14, 2026 Alignment Automated Alignment Researchers: Using large language models to scale scalable oversight Apr 9, 2026 Policy Trustworthy agents in practice Apr 2, 2026 Interpretability Emotion concepts and their function in a large language model Mar 31, 2026 Economic Research How Australia Uses Claude: Findings from the Anthropic Economic Index Mar 24, 2026 Economic Research Anthropic Economic Index report: Learning curves Mar 23, 2026 Science Introducing our Science Blog Mar 23, 2026 Science Long-running Claude for scientific computing Mar 23, 2026 Science Vibe physics: The AI grad st…

  • [36] Trust Center - Anthropictrust.anthropic.com

    | Scope | SOC 2 Type 2 | ISO 27001 | ISO 42001 | CSA Star | HIPAA | NIST 800-171 | FedRAMP High | DoD IL4 | DoD IL5 |
    | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
    | Claude via Anthropic's API | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | N/A | N/A | N/A |
    | Claude for Enterprise | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | N/A | N/A | N/A |
    | Claude in Amazon Bedrock | ✅ | ✅ | ✅ | ✅ | N/A | N/A | N/A | N/A | N/A |
    | Claude on Google Cloud's Vertex AI | ✅ | ✅ | ✅ | ✅ | N/A | N/A | N/A | N/A | N/A |
    | Claude in Preview on Microsoft Foundry | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | N/A | N/A | N/A |
    | Claude GA on Microsoft Foundry | In-Process | In-Process | In-Pr…

  • [37] Anthropic Eventsanthropic.com

    ## Webinar series: Hand off the work you’ve been putting off · From manual coding to multi-agent orchestration · Build faster with Claude on Vertex AI. ## What’s new: Introducing: Cowork (Announcement) · Claude Sonnet 4.6 (Product) · Introducing the Max plan

  • [38] What's new in Claude Opus 4.7platform.claude.com

    We suggest updating your max_tokens parameters to give additional headroom, including compaction triggers. Claude Opus 4.7 provides a 1M context window at standard API pricing with no long-context premium. ## Capability improvements ### Knowledge work Claude Opus 4.7 shows meaningful gains on knowledge-worker tasks, particularly where the model needs to visually verify its own outputs: .docx redlining and .pptx editing — improved at producing and self-checking tracked changes and slide layouts. Charts and figure analysis — improved at programmatic tool-calling with image-processing librarie…
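The `max_tokens` headroom guidance quoted above can be sketched as a tiny helper. This is an illustrative sketch only, not Anthropic's API: the 25% headroom factor is an assumption, and the 128K output cap is the Opus 4.7 limit reported by third-party spec pages, not by this docs excerpt.

```python
def padded_max_tokens(expected_output_tokens: int,
                      headroom: float = 0.25,
                      output_cap: int = 128_000) -> int:
    """Pad max_tokens above the expected completion size so compaction
    triggers and longer-than-expected outputs are not truncated.

    Assumptions: the 25% headroom is illustrative; the 128K output cap
    is the Opus 4.7 limit reported by third-party spec pages.
    """
    return min(int(expected_output_tokens * (1 + headroom)), output_cap)

print(padded_max_tokens(4_096))    # 5120
print(padded_max_tokens(120_000))  # capped at 128000
```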

  • [39] Anthropic releases Claude Opus 4.7, a less risky model than Mythoscnbc.com

    Key Points: Anthropic on Thursday announced a new artificial intelligence model, Claude Opus 4.7. The company said it is an improvement over past models but is "less broadly capable" than its most powerful offering, Claude Mythos Preview. Anthropic rolled out Mythos to a select group of companies as part of a new cybersecurity initiative called Project Glasswing earlier this month.

  • [40] Anthropic releases Claude Opus 4.7, concedes it trails ... - Axiosaxios.com

    "Opus 4.7 introduces a new xhigh ("extra high") effort level between high and max, giving users finer control over the tradeoff between reasoning and latency on hard problems," Anthropic said."When testing Opus 4.7 for coding and agentic use cases, we recommend starting with high or xhigh effort." It's also testing a new system called "task budgets" that give developers more control over how Claude does its reasoning on longer tasks. Between the lines: Anthropic said it will use the new release to test guardrails designed to prevent its model being used for cybersecurity attacks. "What we lea…

  • [41] Claude Opus 4.7 - Anthropicanthropic.com

    # Claude Opus 4.7: Hybrid reasoning model that pushes the frontier for coding and AI agents, featuring a 1M context window. ## Announcements: NEW Claude Opus 4.7, Apr 16, 2026. Claude Opus 4.7 brings stronger performance across coding, vision, and complex multi-step tasks. It's more thorough and consistent on difficult work, with better results across professional knowledge work. [...] Pricing for Opu…

  • [42] Claude Opus 4.7 By Anthropic: Features, Updates & What You ...acecloud.ai

    Claude Opus 4.7 has landed exactly at the right moment. In 2026, the AI conversation has moved past novelty and into execution. Teams want Large Language Models that can code, reason across long contexts, interpret visuals, use tools, and stay reliable in production. That is why this release is getting so much attention. Anthropic launched Claude Opus 4.7 on April 16, 2026, calling it its most capable generally available model, with major gains in advanced software engineering, instruction following, long-running tasks, and high-resolution vision. [...] Pricing stays at $5 per million input t…

  • [43] Claude Opus 4.7 Deep Dive: Capabilities, Migration, and the New ...caylent.com

    At a spec level, Opus 4.7 is positioned as Anthropic’s most capable generally available model for coding, enterprise workflows, multimodal reasoning, financial analysis, life sciences, cybersecurity, and long-running agentic work. It supports a 1M context window with no long-context pricing premium, up to 128K output tokens, and standard Opus pricing at $5 per million input tokens and $25 per million output tokens. The model's reliable knowledge cutoff is January 2026. [...] But that does not mean your cost per task stays flat. Anthropic’s prompting guidance says Opus 4.7 counts tokens differ…

  • [44] Claude Opus 4.7 is generally available - GitHub Changeloggithub.blog

    Claude Opus 4.7, Anthropic’s latest Opus model, is now rolling out on GitHub Copilot. In our early testing, Opus 4.7 delivers stronger multi-step task performance and more reliable agentic execution, building on the coding strategy strengths of its predecessor. It also shows meaningful improvement in long-horizon reasoning and complex, tool-dependent workflows. As part of our efforts to improve service reliability, we are streamlining our model offerings. Over the coming weeks, Opus 4.7 will replace Opus 4.5 and Opus 4.6 in the model picker for Copilot Pro+. We’ve seen strong improvements acr…

  • [45] Claude Opus 4.7 Pricing: The Real Cost Story Behind ...finout.io

    Workload 1: Coding agent, 1M input / 200K output per day
    - Opus 4.6: (1M × $5) + (0.2M × $25) = $10/day, ~$300/month
    - Opus 4.7 at 35% token inflation: ~$13.50/day, ~$405/month
    - Delta: +$105/month, +35% on the same underlying work
    ### Workload 2: RAG assistant, 5M input / 500K output per day, 70% cache hit ratio
    - Opus 4.7 input: cached 3.5M × ~$0.50 + uncached 1.5M × $5 = $1.75 + $7.50 = $9.25
    - Opus 4.7 output: 0.5M × $25 = $12.50
    - Daily: $21.75; monthly: ~$652
    - Same workload on Sonnet 4.6 (same caching assumptions): ~$13.05/day, ~$392/month. Savings: ~40%. For RAG, stay on Sonnet unless quality e…
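The per-workload arithmetic quoted above reduces to one formula. A minimal sketch that reproduces the source's numbers (rates as quoted: $5/$25 per MTok, ~$0.50 per MTok for cached reads; this is a reproduction of the quoted math, not an official cost calculator):

```python
def daily_cost_usd(input_mtok, output_mtok, in_rate=5.0, out_rate=25.0,
                   cache_hit=0.0, cached_rate=0.5):
    """Daily API cost in USD for a workload measured in millions of tokens.

    cache_hit is the fraction of input tokens served from the prompt cache;
    cached_rate (~$0.50/MTok) is the cached-read rate quoted in the source.
    """
    cached = input_mtok * cache_hit
    uncached = input_mtok - cached
    return cached * cached_rate + uncached * in_rate + output_mtok * out_rate

# Workload 1: coding agent, 1M input / 200K output per day
print(daily_cost_usd(1.0, 0.2))                  # 10.0 -> ~$300/month
# Workload 2: RAG assistant, 5M input / 500K output, 70% cache hits
print(daily_cost_usd(5.0, 0.5, cache_hit=0.7))   # 21.75 -> ~$652/month
```

The quoted "+35% token inflation" scenario is the same formula with input and output volumes scaled by 1.35, which yields the ~$13.50/day figure.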

  • [46] Claude Opus 4.7 vs Opus 4.6: What Actually Changed and ...mindstudio.ai

    Frequently Asked Questions ### Is Claude Opus 4.7 worth upgrading to from 4.6? It depends on your workload. For agentic coding and vision tasks, the improvement is real and the upgrade is likely worth it. For writing, analysis, or text-only reasoning, the gap is smaller and the cost increase may not be justified. Test your specific use cases before committing. ### What are the main differences between Claude Opus 4.6 and 4.7? The three main improvements in 4.7 are: stronger agentic coding performance (better multi-step reliability and tool use), significantly improved vision and multimodal…

  • [47] Claude Opus 4.7: Complete Guide to Features ...nxcode.io

    Claude Opus 4.7: Complete Guide to Features, Benchmarks & Pricing April 16, 2026 — Anthropic released Claude Opus 4.7 today, and the story is not about a new pricing tier or a dramatic architectural overhaul. It is about targeted, measurable improvements in the two areas that matter most for production use: coding and vision. The model scores 70% on CursorBench (up from 58%), achieves 98.5% visual-acuity (up from 54.5%), and solves 3x more production tasks than Opus 4.6 — all at the same $5/$25 per million token pricing. The API identifier is claude-opus-4-7. It is generally available acr…

  • [48] Anthropic launches Opus 4.7 with better coding and 13% vision gaininterestingengineering.com

    Anthropic notes, “We are releasing Opus 4.7 with safeguards that automatically detect and block requests that indicate prohibited or high-risk cybersecurity uses.” To support professional users, the company has launched a Cyber Verification Program. This initiative allows vetted security researchers to access the model for tasks like penetration testing and vulnerability analysis. The move reflects growing industry pressure to balance capability with responsible deployment. Opus 4.7 is now available across multiple platforms, including Anthropic’s API, Amazon Bedrock, Google Cloud Vertex AI,…

  • [49] Claude Opus 4.7 pricing & specs — Anthropic | CloudPricecloudprice.net

    # Claude Opus 4.7. Claude Opus 4.7 is Anthropic's language model with a 1.0M context window and up to 128K output tokens, available from 7 providers, starting at $5.00 / 1M input and $25.00 / 1M output. Anthropic's Claude 4.7 Opus model with adaptive reasoning at maximum effort, vision, and tool-use for complex enterprise tasks.
    | Spec | Value | Rank |
    | --- | --- | --- |
    | Intelligence Index | 57.3 | #2 |
    | Coding Index | 52.5 | #5 |
    | GPQA | 0.9 | #5 |
    | HLE | 0.4 | #6 |
    | IFBench | 0.6 | #89 |
    | Time to First Token | 19.03s | #436 |
    | SciCode | 0.5 | #6 |
    | LCR | 0.7 | #17 |
    | TerminalBench Hard…

  • [50] Claude Opus 4.7 Pricing In 2026: What It Actually Costs - CloudZerocloudzero.com

    If you’ve been tracking Claude pricing since the Opus 3 era ($15/$75 per MTok, a number that made finance teams visibly uncomfortable), the 4.x generation has been a welcome reset. Opus 4.5 dropped the flagship price by 67%. Opus 4.6 held it there. And now Opus 4.7 keeps the same sticker price while delivering what Anthropic calls its strongest generally available model, better coding, sharper vision, and longer-horizon agent work. The catch, as usual, lives in the details. This guide breaks down every pricing layer, compares all current Claude models, and covers the optimization levers that…

  • [51] Anthropic releases Claude Opus 4.7: How to try it, benchmarks, safetymashable.com

    Anthropic has been shipping products and making news at a blistering pace in 2026, and on Thursday, the AI company announced the launch of Claude Opus 4.7. Claude Opus 4.7 is Anthropic's most intelligent model available to the general public. Notably, Anthropic said in a press release that Opus 4.7 is not as powerful as Claude Mythos, which Ant…

  • [52] Anthropic released Claude Opus 4.7 on April 17, 2026. The new AI ...facebook.com


  • [53] Claude Opus 4.7 Review: What It Really Means for Your Work (2026)karozieminski.substack.com
    Anthropic shipped Claude Opus 4.7 on April 16, 2026. Same sticker price at $5/$25 per million tokens, but a new tokenizer makes the real cost up to 35% higher on code-heavy prompts. Three API changes can break existing code: thinking.budget_tokens, temperature, and top_p now return 400 errors, and reasoning traces default to hidden. A new xhigh effort tier sits between high and max, Claude Code defaults to it, and mobile push notifications arrived in Claude Code 2.1.110. SWE-bench Verified hit 87.6% in vendor tests, but Terminal-Bench 2.0 regressed versus GPT-5.4 and r/ClaudeAI users r…
  • [54] Opus 4.7 is 50% more expensive with context regression?! - Redditreddit.com

    This essentially means the model has become 50% more expensive within the same limit.

  • [55] Instagraminstagram.com

    Anthropic releases Claude Opus 4.7 with stronger coding performance, a 1 million token context window, and new cyber safeguards · Anthropic has

  • [56] Instagraminstagram.com

    Anthropic just released Claude Opus 4.7 today. And the jumps are significant. Here's what the benchmarks mean in plain English. Agentic coding

  • [57] DR Anthropic released Opus 4.7 today. Same pricing as 4.6 ($5/$25 ...x.com

    Anthropic released Opus 4.7 today. Same pricing as 4.6 ($5/$25 per million tokens), available across API, Bedrock, Vertex AI, and Microsoft

  • [58] Instagraminstagram.com

    Anthropic has released Claude Opus 4.7, pushing AI closer to handling real-world professional work with minimal supervision.

  • [59] Claude Opus 4.7 Just Unlocked a $3000/Month Skill ...youtube.com

    Pricing is unchanged from Opus 4.6 at $5 per million input tokens ...

  • [60] GPT-5 - Wikipediaen.wikipedia.org

    GPT-5 is a multimodal large language model developed by OpenAI and the fifth in its series of generative pre-trained transformer (GPT) foundation models. Preceded in the series by GPT-4, it was launched on August 7, 2025. It is publicly accessible to users of the chatbot products ChatGPT and Microsoft Copilot as well as to developers through the OpenAI API.

  • [61] GPT-5.5 is here: benchmarks, pricing, and what changes ... - Appwriteappwrite.io

    Atharva Deosthale, Developer Advocate. OpenAI released GPT-5.5 on April 23, 2026. The company is pitching it as "a new class of intelligence for real work", with the biggest gains in agentic coding, computer use, knowledge work, and early scientific research. This post walks through what actually shipped: the variants, pricing against recent competitors, the benchmark numbers OpenAI published, the safety posture in the system card, and a section on one thing the model still hasn't fixed. ## What shipped: GPT-5.5 comes in two variants: [...]

  • [62] GPT-5.5 System Card - OpenAI Deployment Safety Hubdeploymentsafety.openai.com

    We measure GPT-5.5’s controllability by running CoT-Control, an evaluation suite described in (Yueh-Han, 2026) that tracks the model’s ability to follow user instructions about their CoT. CoT-Control includes over 13,000 tasks built from established benchmarks: GPQA (Rein et al., 2023), MMLU-Pro (Hendrycks et al., 2020), HLE (Phan et al., 2025), BFCL (Patil et al., 2025), and SWE-Bench Verified. Each task is created by pairing a bench…

  • [63] OpenAI Announces GPT‑5.5 - Thurrott.comthurrott.com

    OpenAI is also highlighting the safeguards it’s built into GPT-5.5, which it says are its best yet. It’s designed to reduce misuse while preserving access for beneficial work, and OpenAI “evaluated this model across its full suite of safety and preparedness frameworks, worked with internal and external red-teamers, added targeted testing for advanced cybersecurity and biology capabilities, and collected feedback on real use cases from nearly 200 trusted early-access partners before release.” GPT-5.5 is available now across ChatGPT Plus, Pro, Business, and Enterprise users, and via Codex. GPT‑…

  • [64] OpenAI GPT 5.5 AI model to launch soon - Moneycontrol.commoneycontrol.com

    Sarthak Singh, April 23, 2026 / 14:08 IST. Snapshot: OpenAI to launch GPT-5.5 with expanded multimodal capabilities. GPT-5.5 handles text, images, audio, and video in one system. Enables agent workflows, context window up to 256k tokens.

  • [65] OpenAI says its new GPT-5.5 model is more efficient and better at ...theverge.com

    “Instead of carefully managing every step, you can give GPT-5.5 a messy, multi-part task and trust it to plan, use tools, check its work, navigate through ambiguity, and keep going,” according to OpenAI. The company also notes that GPT-5.5 will have its “strongest set of safeguards to date” and can use “significantly fewer” tokens to complete tasks in Codex. GPT-5.5 will roll out starting Thursday to Plus, Pro, Business, and Enterprise ChatGPT tiers and Codex, with GPT-5.5 Pro coming to Pro, Business, and Enterprise users.

  • [66] OpenAI Says New Model Adept At Making AI Better<!-- --> - Barron'sbarrons.com

    OpenAI Says New Model Adept At Making AI Better. By AFP - Agence France Presse. OpenAI president Greg Brockman says the new GPT-5.5 model can tend to more computer work without human supervision. OpenAI released a new model it touts as its best yet for handling research work like making improved versions of itself, as rapid-fire releases by AI rivals pick up pace.

  • [67] OpenAI unveils GPT-5.5 to field tasks with limited instructionsseattletimes.com

    OpenAI unveils GPT-5.5 to field tasks with limited instructions (Bloomberg) — OpenAI is introducing an artificial intelligence model that’s intended to be better at completing work without much direction, part of a push to keep pace with rivals like Anthropic PBC in courting business customers. The ChatGPT maker on Thursday unveiled GPT-5.5, a new model that it says is better at aiding scientists, streamlining software development and carrying out more complex tasks. That includes using email, spreadsheets, calendars and other applications to follow a user’s commands on a computer. [...] ##…

  • [68] OpenAI upgrades ChatGPT and Codex with GPT-5.5: 'a new class of ...9to5mac.com

    # OpenAI upgrades ChatGPT and Codex with GPT-5.5: ‘a new class of intelligence for real work’. Zac Hall | Apr 23 2026 - 11:20 am PT. OpenAI is capping off a busy week of announcements with the release of GPT-5.5, its latest model upgrade for ChatGPT and Codex. The company calls its new model “a new class of intelligence for real work.” ## OpenAI says GPT-5.5 is its smartest and most intuitive to use model yet. GPT-5.5 lands seven weeks after the release of GPT-5.4, which arrived on March 5. OpenAI says the newest model “understands what yo…

  • [69] OpenAI's GPT-5.5 masters agentic coding with 82.7% benchmark ...interestingengineering.com

    Introducing GPT-5.5 > A new class of intelligence for real work and powering agents, built to understand complex goals, use tools, check its work, and carry more tasks through to completion. It marks a new way of getting computer work done. Now available in ChatGPT and Codex. — OpenAI (@OpenAI) April 23, 2026. OpenAI said the improvements go beyond benchmarks. Early testers reported that GPT-5.5 better understands system architecture and failure points. It can identify where fixes belong and predict downstream impacts across a codebase. The company em…

  • [70] OpenAI introduced a new artificial intelligence model, GPT-5.5 | УННunn.ua

    The statement emphasizes that the model "excellently copes" with these tasks. ## Context: OpenAI introduced the previous version of the model, GPT-5.4, less than two months ago, on March 5. At that time, the company also emphasized its effectiveness for professional work. The new version became another step in the development of artificial intelligence tools for business and developers. OpenAI presented a new AI-based cyber tool to the US government and allies (Axios, 22.04.26).

  • [71] OpenAI released GPT-5.5, its new frontier model for complex coding, computer use, knowledge work, and early scientific research, plus GPT-5.5 Pro for harder questions and higher-accuracy work, shipping with stronger safeguards and High capability treatment in Biological/Chemical and Cybersecurity unthreads.com

    btibor91: OpenAI released GPT-5.5, its new frontier model for complex coding, computer use, knowledge work, and early scientific research, plus GPT-5.5 Pro for harder questions and higher-accuracy work, shipping with stronger safeguards and High capability treatment in Biological/Chemical and Cybersecurity under the Preparedness Framework

  • [72] GPT-5 is here - OpenAIopenai.com

    [...] Everyone can be a power user ChatGPT thinks harder on complex tasks and asks relevant follow-up questions to keep work moving. Every employee can get expert-level results without switching models. See how businesses use GPT-5. GPT-5 works with connectors: Smarter with your company context GPT‑5 pr…

  • [73] OpenAI releases GPT-5.5 with improved coding and research ...investing.com

    GPT-5.5 Pro will cost $30 per million input tokens and $180 per million output tokens. The model underwent safety evaluations including external

  • [74] OpenAI releases GPT‑5.5, its newest AI model with enhanced ...streetinsider.com

    API pricing for GPT‑5.5 will be $5 per million input tokens and $30 per million output tokens, with a 1-million token context window. A higher-
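The per-token prices quoted in sources [73] and [74] translate directly into per-request costs. The sketch below works through that arithmetic; the prices come from press snippets, not an official OpenAI pricing page, so they are unconfirmed placeholders, and the model keys are illustrative labels only.

```python
# Rough per-request cost estimator for the API prices reported in
# sources [73] and [74]. These figures are taken from news snippets,
# not from an official pricing page — treat them as unconfirmed.

PRICES_PER_MTOK = {                # (input USD, output USD) per million tokens
    "gpt-5.5":     (5.0, 30.0),    # streetinsider.com snippet [74]
    "gpt-5.5-pro": (30.0, 180.0),  # investing.com snippet [73]
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    in_price, out_price = PRICES_PER_MTOK[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example: a 50k-token prompt with a 2k-token answer.
print(round(estimate_cost("gpt-5.5", 50_000, 2_000), 4))      # 0.31
print(round(estimate_cost("gpt-5.5-pro", 50_000, 2_000), 4))  # 1.86
```

Note the roughly 6x gap between the base and Pro tiers at identical token counts, which is why the sources frame Pro as reserved for "harder questions and higher-accuracy work".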

  • [75] Model Drop: GPT-5.5 - by Jake Handyhandyai.substack.com

    Apr 23, 2026 ## The Specs Model: GPT-5.5 (gpt-5.5 on the OpenAI API once it rolls out, plus gpt-5.5-pro). Ships in three consumer surfaces: default GPT-5.5, GPT-5.5 Thinking, and GPT-5.5 Pro. API reasoning effort levels: xhigh, high, medium, low, non-reasoning. Model type: Text + vision multimodal (same text/image input stack as the GPT-5 family, with computer-use screen reading in Codex). No native image, audio, or video output. Ship date: April 23, 2026 (ChatGPT and Codex rollout; API “very soon”) Maker: OpenAI [...] Expert-SWE, an internal 20-hour coding eval. OpenA…
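Source [75] enumerates five API reasoning-effort levels. A minimal sketch of how a client might assemble a request around them follows; the model ids and level names come from that snippet only, and the `reasoning_effort` field name is an assumption borrowed from OpenAI's existing reasoning-model parameter, not a confirmed GPT-5.5 interface.

```python
# Sketch of a request payload using the effort levels listed in [75].
# The "reasoning_effort" field name is assumed (it mirrors the parameter
# existing OpenAI reasoning models expose); nothing here is a confirmed
# GPT-5.5 API surface.

EFFORT_LEVELS = ("xhigh", "high", "medium", "low", "non-reasoning")

def build_request(model: str, prompt: str, effort: str) -> dict:
    """Assemble a chat-style request dict, validating the effort level."""
    if effort not in EFFORT_LEVELS:
        raise ValueError(f"unknown reasoning effort: {effort!r}")
    return {
        "model": model,                 # e.g. "gpt-5.5" per snippet [75]
        "reasoning_effort": effort,     # assumed field name, see lead-in
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_request("gpt-5.5", "Summarise this changelog.", "high")
print(req["reasoning_effort"])  # high
```

Validating the level client-side is just defensive bookkeeping; whichever names the API finally accepts would replace the tuple above.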

  • [76] OpenAI Unveils GPT-5.5. Company Says Expect a Faster Model ...taekim.substack.com

    Key Context by Tae Kim # OpenAI Unveils GPT-5.5. Company Says Expect a Faster Model Release Pace ### On Thursday, OpenAI unveiled its newest flagship AI model called GPT-5.5. Tae Kim Apr 23, 2026 The latest and most advanced ChatGPT model has arrived. On Thursday, OpenAI unveiled its newest flagship AI model, named GPT-5.5, calling it “our smartest and most intuitive to use model” yet.

  • [77] GPT 5.5 might be released today? : r/OpenAI - Redditreddit.com

    GPT 5.5 might be released today? ; GPT-5.2, December 11, 2025, Thursday ; GPT-5.3-Codex, February 5, 2026, Thursday ; GPT-5.3 Instant, March 3,

  • [78] OpenAI released GPT-5.5, its new frontier model for complex coding ...x.com

    ... GPT-5.5 Pro for harder questions and higher-accuracy work, shipping with stronger safeguards and High capability treatment in Biological

  • [79] New GPT 5.5 Leaks are INSANE! - YouTubeyoutube.com

    ... date yet 0:48 OpenAI's insane 2026 release pace 1:29 Meet Spud — The ... GPT-5.5 crushed 6 months of tech

  • [80] OpenAI News openai.com

    Introducing GPT-5.5. Product, April 23, 2026. GPT-5.5 Bio Bug Bounty. Safety, April 23, 2026. Making ChatGPT free for clinicians.

  • [81] Introducing GPT-5.5 - OpenAIopenai.com

    We believe our research path will ultimately lead to artificial general intelligence, a system whose problem-solving ability rivals that of humans. Our mission is to build safe and reliable AGI that benefits all of humanity.

  • [82] Post by TestingCatalog News on X: Sourcex.com

    Introducing GPT-5.5. openai.com. 13 hours ago