Creates an edited or extended image given one or more source images and a prompt. This endpoint only supports gpt-image-1 and dall-e-2.
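For orientation, here is a minimal sketch of the underlying API call this node wraps, using the official openai Python package directly rather than the KNIME node; the file name and prompt are placeholders.

```python
# Minimal sketch of the image edit call, using the official `openai` package.
# File name and prompt are placeholders; OPENAI_API_KEY is read from the environment.
from openai import OpenAI

client = OpenAI()

# dall-e-2 edit: a single square PNG (less than 4MB) plus a text prompt.
with open("original.png", "rb") as image_file:
    result = client.images.edit(
        model="dall-e-2",
        image=image_file,
        prompt="Add a red bicycle leaning against the wall",
        n=1,
        size="1024x1024",
    )

print(result.data[0].url)  # dall-e-2 returns a URL that stays valid for 60 minutes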
Image: The image(s) to edit. Must be a supported image file or an array of images. For gpt-image-1, each image should be a png, webp, or jpg file less than 50MB. You can provide up to 16 images. For dall-e-2, you can only provide one image, and it should be a square png file less than 4MB.
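A sketch of passing several source images to gpt-image-1 follows; it assumes a recent openai SDK release that accepts a list of file objects for the image argument, and the file names are placeholders.

```python
# Sketch: several source images for gpt-image-1 (up to 16, each png/webp/jpg under 50MB).
# Assumes a recent `openai` SDK that accepts a list of file objects for `image`.
from openai import OpenAI

client = OpenAI()

paths = ["room.png", "sofa.webp", "lamp.jpg"]  # placeholder file names
files = [open(p, "rb") for p in paths]
try:
    result = client.images.edit(
        model="gpt-image-1",
        image=files,
        prompt="Combine these into a single furnished living room scene",
    )
finally:
    for f in files:
        f.close()

print(len(result.data))  # one entry per generated output image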
Prompt: A text description of the desired image(s). The maximum length is 1000 characters for dall-e-2, and 32000 characters for gpt-image-1.
Mask: An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where image should be edited. If there are multiple images provided, the mask will be applied on the first image. Must be a valid PNG file, less than 4MB, and have the same dimensions as image.
Background: Allows setting transparency for the background of the generated image(s). This parameter is only supported for gpt-image-1. Must be one of transparent, opaque or auto (default value). When auto is used, the model will automatically determine the best background for the image. If transparent, the output format needs to support transparency, so it should be set to either png (default value) or webp.
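The following sketch combines a mask with a transparent background; the background and output_format keyword arguments assume a recent openai SDK release that exposes the gpt-image-1 parameters, and all file names are placeholders.

```python
# Sketch: masked edit with a transparent background. The mask must be a PNG with the
# same dimensions as the (first) input image; its fully transparent pixels mark the
# region to edit. `background` and `output_format` assume a recent `openai` SDK.
import base64
from openai import OpenAI

client = OpenAI()

with open("product.png", "rb") as image_file, open("mask.png", "rb") as mask_file:
    result = client.images.edit(
        model="gpt-image-1",
        image=image_file,
        mask=mask_file,
        prompt="Remove the scene behind the product and keep only the product",
        background="transparent",   # transparency requires png or webp output
        output_format="png",
    )

# gpt-image-1 always returns base64-encoded image data
with open("edited.png", "wb") as out:
    out.write(base64.b64decode(result.data[0].b64_json))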
Model: The model to use for image generation. Only dall-e-2 and gpt-image-1 are supported. Defaults to dall-e-2 unless a parameter specific to gpt-image-1 is used.
Size: The size of the generated images. Must be one of 1024x1024, 1536x1024 (landscape), 1024x1536 (portrait), or auto (default value) for gpt-image-1, and one of 256x256, 512x512, or 1024x1024 for dall-e-2.
Response format: The format in which the generated images are returned. Must be one of url or b64_json. URLs are only valid for 60 minutes after the image has been generated. This parameter is only supported for dall-e-2, as gpt-image-1 will always return base64-encoded images.
Output format: The format in which the edited images are returned. This parameter is only supported for gpt-image-1. Must be one of png, jpeg, or webp. The default value is png.
Output compression: The compression level (0-100%) of the generated images. This parameter is only supported for gpt-image-1 with the webp or jpeg output formats, and defaults to 100.
Input fidelity: Controls how closely the model matches the style and features, especially facial features, of the input images. This parameter is only supported for gpt-image-1. Supports high and low. Defaults to low.
Stream: Edit the image in streaming mode. Defaults to false. See the Image generation guide for more information.
Partial images: The number of partial images to generate. This parameter is used for streaming responses that return partial images. Value must be between 0 and 3. When set to 0, the response will be a single image sent in one streaming event. Note that the final image may be sent before the full number of partial images are generated if the full image is generated more quickly.
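A streaming sketch follows; it assumes an openai SDK release that supports stream=True and partial_images on images.edit, and the event type names (image_edit.partial_image, image_edit.completed) are taken from OpenAI's streaming documentation, so check them against your SDK version.

```python
# Sketch: streaming an edit with partial previews. Event type names below are
# assumptions based on OpenAI's streaming docs; verify against your SDK version.
import base64
from openai import OpenAI

client = OpenAI()

with open("sketch.png", "rb") as image_file:
    stream = client.images.edit(
        model="gpt-image-1",
        image=image_file,
        prompt="Turn this rough sketch into a watercolor painting",
        stream=True,
        partial_images=2,  # 0-3 partial previews before the final image
    )

    for event in stream:
        if event.type == "image_edit.partial_image":
            name = f"partial_{event.partial_image_index}.png"
        elif event.type == "image_edit.completed":
            name = "final.png"
        else:
            continue
        with open(name, "wb") as out:
            out.write(base64.b64decode(event.b64_json))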
Quality: The quality of the image that will be generated. high, medium and low are only supported for gpt-image-1. dall-e-2 only supports standard quality. Defaults to auto.

Specify how the response should be mapped to the table output. The following formats are available:
Structured Table: Returns a parsed table with data split into rows and columns.
Background: transparent or opaque.
Output format: png, webp, or jpeg.
Size: 1024x1024, 1024x1536, or 1536x1024.
Quality: low, medium, or high.
Usage: gpt-image-1 only, the token usage information for the image generation.
Raw Response: Returns the raw response in a single row with the following columns:
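To illustrate the difference between the two mappings, here is a small sketch that flattens a raw JSON response body into rows; the field names follow the OpenAI images response (data, background, output_format, size, quality, usage), but the node's actual column names may differ, so treat this as an illustration rather than the node's implementation.

```python
# Sketch of the two output mappings applied to a raw JSON response body.
# Field and column names are assumptions modeled on the OpenAI images response.
import json
from typing import Any

def to_structured_rows(body: dict[str, Any]) -> list[dict[str, Any]]:
    """Structured Table idea: one row per generated image, shared metadata repeated."""
    shared = {
        "background": body.get("background"),
        "output_format": body.get("output_format"),
        "size": body.get("size"),
        "quality": body.get("quality"),
        "usage": body.get("usage"),  # gpt-image-1 only
    }
    return [{"image_b64": item.get("b64_json"), **shared} for item in body.get("data", [])]

def to_raw_row(body: dict[str, Any]) -> dict[str, Any]:
    """Raw Response idea: a single row holding the unparsed response."""
    return {"response": json.dumps(body)}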
To use this node in KNIME, install the OpenAI Nodes extension from the update site, following the NodePit Product and Node Installation Guide.