GPT Image 2 is not best understood as text-to-image only. Available API docs and integration pages support workflows where an existing input image is supplied and modified, but the exact implementation details depend on the API surface you use: OpenAI’s docs establish the edit/input-image workflow, while the clearest GPT Image 2-specific edit examples in the checked sources come from Replicate and fal.ai.[5][7][15][17][18]
Yes—if “uploaded image” means an existing image or reference image supplied to an editing workflow. OpenAI’s image guide separates Generations, which create images from scratch based on a prompt, from Edits, which modify existing images.[17] OpenAI’s API reference also lists image generation and image editing as separate operations, so editing is not merely a prompting trick layered on top of generation.
Studio Global AI
Yes—GPT Image 2 can be used to edit existing input images, not only create new ones from text. The safest technical term is input image: fal.ai shows image URLs, while OpenAI’s reference uses broader input image language.[7][18]
Masks can guide edits, but OpenAI cautions they may not keep every protected pixel unchanged.[19]
openai/gpt-image-2: OpenAI’s state-of-the-art image generation model. Create and edit images from text with strong instruction following, sharp text rendering, and detailed editing.
import { fal } from "@fal-ai/client";

const result = await fal.subscribe("openai/gpt-image-2/edit", {
  input: {
    prompt: "Change the background to a rainy Tokyo street at night",
    image_urls: ["https://example.com/input.png"], // placeholder; the URL in the source snippet was truncated
  },
  logs: true,
  onQueueUpdate: (update) => {
    if (update.status === "IN_PROGRESS") {
      update.logs.map((log) => log.message).forEach(console.log);
    }
  },
});
Returned by default for the GPT image models, and only present if response_format is set to b64_json for dall-e-2 and dall-e-3. The number of image output tokens generated by the model.
OpenAI’s broader image reference says a model can generate a new image from “a prompt and/or an input image,” which supports input-image workflows rather than text-only generation.[18] For GPT Image 2 specifically, Replicate describes the model as able to create images from text or edit existing images, and fal.ai exposes an openai/gpt-image-2/edit endpoint whose example request includes a prompt plus image_urls.[5][7]
The official OpenAI material in the checked sources is strongest on the workflow category: image generation and image editing are separate documented operations.[13][15][17] One OpenAI edit-reference snippet also refers to behavior “returned by default for the GPT image models,” which connects the edit method to the GPT image model family, although that snippet does not by itself spell out GPT Image 2’s full capability list.[14]
That distinction matters because an edit workflow starts from an existing visual input and produces a new image, while a generation workflow starts from a text prompt alone.[17][18] In practical terms, GPT Image 2 should not be described as only a new-image generator when the available GPT Image 2 integration pages explicitly document editing existing images.[5][7]
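The structural difference between the two workflows can be sketched as request shapes. This is a hedged sketch, not the definitive schema: the field names (model, prompt, image, mask) follow OpenAI’s documented images API vocabulary, but the model id string and exact request layout are assumptions to verify against the current OpenAI reference.

```javascript
// Sketch only: field names follow OpenAI's images API vocabulary
// (model, prompt, image, mask); the model id string is an assumption
// to confirm against the current OpenAI model page.

// A generation request starts from a text prompt alone.
function buildGenerationRequest(prompt) {
  return {
    model: "gpt-image-2", // assumed id — verify before use
    prompt,
  };
}

// An edit request starts from an existing input image, optionally with
// a mask marking the region that is open to editing.
function buildEditRequest(prompt, inputImage, mask) {
  const request = {
    model: "gpt-image-2", // assumed id — verify before use
    prompt,
    image: inputImage, // the existing image to modify
  };
  if (mask !== undefined) request.mask = mask;
  return request;
}
```

The difference that matters here is the image field: an edit request always carries an existing input image, while a generation request never does.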
The safest wording is “input image.” fal.ai’s GPT Image 2 edit example uses image_urls, so that integration clearly accepts image URLs as image inputs.[7] OpenAI’s own reference wording is broader and says “a prompt and/or an input image,” without the snippet here showing every native OpenAI transport detail for GPT Image 2.[18]
That means developers should avoid assuming that a provider wrapper parameter such as image_urls is identical to the direct OpenAI API schema. The checked OpenAI GPT Image 2 model-page snippet does not expose the full request schema, input limits, or account-specific availability, so those details should be verified in the current OpenAI model page and image-edit reference before production use.[1][15]
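One way to act on that caveat in code is to keep each provider’s parameter names behind a single adapter boundary. In this sketch only the fal.ai mapping is filled in, because its image_urls parameter is documented in the checked sources; the native OpenAI mapping is deliberately left as a guarded stub until the current schema is confirmed.

```javascript
// image_urls is confirmed for fal.ai's openai/gpt-image-2/edit endpoint;
// the native OpenAI edit schema is NOT assumed identical, so that mapping
// stays a stub until verified against the current OpenAI reference.
function toFalEditInput({ prompt, inputImageUrls }) {
  return {
    prompt,
    image_urls: inputImageUrls, // fal.ai's documented array-of-URLs field
  };
}

function toOpenAIEditInput() {
  // Deliberately unimplemented: confirm parameter names, transport
  // (file upload vs URL), and input limits before filling this in.
  throw new Error("Verify the current native OpenAI image-edit schema first.");
}
```

Application code then depends on the neutral shape (prompt plus input image URLs), and a schema change on either side is contained to one function.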
OpenAI’s GPT Image cookbook describes an edit workflow where a mask can be supplied if you do not want the model to change a specific part of the input image.[19] The same cookbook note cautions that the model might still edit some parts inside the mask and recommends using an image segmentation model if an exact mask is required.[19]
So masks are useful for guiding edits, but the provided documentation does not support treating them as pixel-perfect boundaries.[19]
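Because the cookbook warns that protected pixels are not guaranteed to survive, one defensive option is a post-processing step that re-composites them from the original image. This is a workaround sketch, not an OpenAI-documented feature: pixels are flat RGBA arrays of the kind a canvas ImageData produces, and keep[i] is true where pixel i must stay untouched.

```javascript
// Workaround sketch for the mask caveat: after an edited image comes
// back, restore every protected pixel from the original so the "do not
// change" region is exact, even if the model drifted inside the mask.
// originalRGBA/editedRGBA are flat [r,g,b,a, r,g,b,a, ...] arrays;
// keep[i] is true where pixel i is protected.
function enforceProtectedPixels(originalRGBA, editedRGBA, keep) {
  const out = editedRGBA.slice();
  for (let px = 0; px < keep.length; px++) {
    if (!keep[px]) continue; // pixel was open to editing
    for (let ch = 0; ch < 4; ch++) {
      out[px * 4 + ch] = originalRGBA[px * 4 + ch]; // restore protected pixel
    }
  }
  return out;
}
```

For a two-pixel image where only the first pixel is protected, the output keeps the original first pixel and the edited second pixel.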
Treat image_urls as confirmed for fal.ai’s GPT Image 2 edit integration, not automatically as the universal OpenAI-native parameter name.

Yes: GPT Image 2 can edit supplied input images; it is not limited to generating brand-new images from text. The strongest general support comes from OpenAI’s documented edit and input-image workflows, while the strongest GPT Image 2-specific examples come from Replicate and fal.ai; developers still need to verify the current native OpenAI schema and limits before shipping.[1][5][7][15][17][18]
Generations: generate images from scratch based on a text prompt. Edits: modify existing images.
Given a prompt and/or an input image, the model will generate a new image.
You can also provide a mask if you don’t want the model to change a specific part of the input image. Please note that the model might still edit some parts of the image inside the mask, but it will try to avoid it. If you need an exact mask, consider using an image segmentation model.