
02 Multimodal prompting

This workflow demonstrates how to prompt an LLM using both text and images.

Textual data from insurance claims and corresponding accident images are imported, combined into row-wise multimodal prompts, and sent to a connected LLM. The model then returns a severity assessment for each accident based on both the text and image inputs.

Import textual data (car accident insurance claims) and images (accident images) and combine them into a multimodal prompt

Set up the OpenAI API key, authenticate, and select the LLM

For each claim, send the multimodal prompt to an LLM to assess severity
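
A minimal Python sketch of these three steps, assuming the official openai package, a hypothetical claim text and image file, and an OpenAI vision-capable model such as gpt-4o-mini (none of which are part of the workflow itself, which performs the equivalent work with KNIME nodes):

import base64
from openai import OpenAI

# Authenticate with the OpenAI API (reads OPENAI_API_KEY from the environment).
client = OpenAI()

# Hypothetical inputs: one insurance claim and its accident photo.
claim_text = "Rear-end collision at low speed; bumper damage reported."
image_path = "accident_001.jpg"

# Encode the image as a base64 data URL so it can be embedded in the prompt.
with open(image_path, "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

# Combine text and image into one multimodal prompt.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Assess the severity of this car accident "
                     "(minor / moderate / severe) based on the claim and the photo.\n\n"
                     f"Claim: {claim_text}"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }
]

# Send the multimodal prompt to the LLM and print its severity assessment.
response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)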

02 Multimodal Prompting - Car Accident Severity Analysis

This workflow demonstrates how to prompt an LLM using both text and images to assess car accident severity.

Textual data from insurance claims and corresponding accident images are imported, combined into row-wise multimodal prompts, and sent to a connected LLM. The model then returns a severity assessment for each accident based on both the text and image inputs.

Pull the model locally before running the adapted workflow:

ollama pull qwen3:8b

Adapted from "02 Multimodal prompting":

https://hub.knime.com/s/8acGwIQTQmAJM1q9
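
In Python terms, the adaptation amounts to pointing an OpenAI-compatible client at the local Ollama server and supplying a placeholder API key. A minimal sketch, assuming the openai package and an Ollama instance running on the default port:

from openai import OpenAI

# Ollama exposes an OpenAI-compatible endpoint; the API key is required by the
# client but ignored by Ollama, so any dummy value works.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="dummy")

# Quick connectivity check with a text-only prompt to the pulled model.
reply = client.chat.completions.create(
    model="qwen3:8b",
    messages=[{"role": "user", "content": "Reply with OK if you can read this."}],
)
print(reply.choices[0].message.content)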

Set a dummy credential for the OpenAI client
Credentials Configuration
Path to URI
Model: qwen3:8b
OpenAI LLM Selector
Insurance claims & accident image paths
Table Reader
Point to the Ollama server by providing the localhost URL: http://localhost:11434/v1
OpenAI Authenticator
View the results of the LLM analysis
Table View
Keep only 2 rows
Row Filter
Combine insurance claims and accident images into a message for the prompt
Message Creator
Select an Ollama model like "qwen3:8b" that supports both images and text
Select Ollama Model
Prompt an LLM with both text and images (a Python equivalent is sketched after this node list)
LLM Prompter
Import images from paths into the table
Read Images
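
The node chain above can be approximated in Python as a row-wise loop: read the claims table, keep only two rows, combine each claim with its accident image into a multimodal message, and prompt the model through the local Ollama server. A sketch, assuming a hypothetical claims.csv with "claim" and "image_path" columns and the openai package:

import base64
import csv
from openai import OpenAI

# Dummy credential; Ollama's OpenAI-compatible endpoint ignores the key.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="dummy")

def to_data_url(path: str) -> str:
    # Read an image from its path and embed it as a base64 data URL (Read Images equivalent).
    with open(path, "rb") as f:
        return "data:image/jpeg;base64," + base64.b64encode(f.read()).decode("utf-8")

# Table Reader equivalent: load the claims table from a hypothetical CSV file.
with open("claims.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))[:2]  # Row Filter equivalent: keep only 2 rows

results = []
for row in rows:
    # Message Creator equivalent: combine claim text and accident image into one message.
    messages = [{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Assess the accident severity (minor / moderate / severe). "
                     f"Claim: {row['claim']}"},
            {"type": "image_url", "image_url": {"url": to_data_url(row["image_path"])}},
        ],
    }]
    # LLM Prompter equivalent: send the multimodal prompt and collect the answer.
    answer = client.chat.completions.create(model="qwen3:8b", messages=messages)
    results.append({"claim": row["claim"], "severity": answer.choices[0].message.content})

# Table View equivalent: print the per-claim severity assessments.
for r in results:
    print(r["severity"], "-", r["claim"][:60])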
