
Ollama KNIME - Multimodal Prompting - Car Accident Severity Analysis

<p>This workflow demonstrates how to <strong>prompt a local LLM using both text and images.</strong></p><p>Textual data from insurance claims and corresponding accident images are imported, combined into row-wise multimodal prompts, and sent to a connected LLM. The model then returns a severity assessment for each accident based on both the text and image inputs.</p>

URL: KNIME Forum about multimodal LLM prompts https://forum.knime.com/t/llm-prompter-node-genai-data-workflow-error/90020/3?u=mlauber71
URL: Ollama: mistral-small3.1 https://ollama.com/library/mistral-small3.1
URL: Medium: Multimodal Prompting with local LLMs using KNIME and Ollama https://medium.com/low-code-for-advanced-data-science/multimodal-prompting-with-local-llms-using-knime-and-ollama-74928cf5d09f

Import textual data (car accident insurance claims) and images (accident images) and combine them into a multimodal prompt

Set up a dummy OpenAI API key and select the local LLM via Ollama

For each claim, send the multimodal prompt to an LLM to assess severity
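Outside KNIME, the row-wise multimodal message that the Message Creator node builds can be sketched in Python. This is a minimal sketch in OpenAI chat format; the claim text and image bytes are stand-ins, and the prompt wording is an assumption, not the workflow's actual prompt:

```python
import base64

def build_message(claim_text: str, image_bytes: bytes) -> dict:
    """Combine one insurance claim and its accident image into a
    single OpenAI-style chat message with text and image parts."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            # Text part: the claim plus the severity question (example wording)
            {"type": "text",
             "text": f"Assess the severity of this accident:\n{claim_text}"},
            # Image part: the accident photo, inlined as a base64 data URL
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ],
    }

msg = build_message("Rear-end collision at low speed.", b"\x89PNG...")
```

One such message is built per table row, so each claim travels together with its own image.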

Multimodal Prompting - Car Accident Severity Analysis

This workflow demonstrates how to prompt a local LLM using both text and images to assess car accident severity.

Textual data from insurance claims and corresponding accident images are imported, combined into row-wise multimodal prompts, and sent to a connected LLM. The model then returns a severity assessment for each accident based on both the text and image inputs.

ollama pull qwen3:8b

ollama pull mistral-small3.1:latest
mistral-small3.1 actually seems to provide descriptions of the images (https://ollama.com/library/mistral-small3.1)

adapted from "02 Multimodal prompting" - you will also find more data there

https://hub.knime.com/s/8acGwIQTQmAJM1q9

see the discussion on the KNIME forum, which also explains how to install Ollama and configure it for use with KNIME

https://forum.knime.com/t/llm-prompter-node-genai-data-workflow-error/90020/3?u=mlauber71
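The combination of OpenAI Authenticator (dummy key) and the local base URL amounts to a plain HTTP call against Ollama's OpenAI-compatible endpoint. A stdlib-only sketch of that request follows; the endpoint URL and model name come from this workflow, while the helper name and the actual send (commented out, since it needs a running `ollama serve`) are illustrative:

```python
import json
import urllib.request

# Local Ollama endpoint used by the workflow (OpenAI-compatible API)
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

def build_request(model: str, messages: list) -> urllib.request.Request:
    """Prepare a chat-completions call against the local Ollama server.
    The API key is a dummy: the local endpoint does not check it."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=body,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer dummy-key"},
        method="POST",
    )

req = build_request("mistral-small3.1:latest",
                    [{"role": "user", "content": "Assess this accident."}])
# resp = urllib.request.urlopen(req)  # requires a running Ollama server
```

This mirrors what the KNIME nodes do: authenticate with a placeholder key, redirect the client to localhost:11434, and post the multimodal messages.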

Extract and format the response from the JSON returned by the LLM
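The extraction step handled here by the Expression and String to JSON nodes can be sketched in plain Python. The example response is made up, since real model output varies; the approach simply grabs the first JSON object embedded in the reply text:

```python
import json
import re

def extract_json(response: str) -> dict:
    """Pull the first {...} block out of an LLM response that may
    wrap its JSON in extra prose or markdown fences."""
    match = re.search(r"\{.*\}", response, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in response")
    return json.loads(match.group(0))

reply = ('Here is my assessment:\n'
         '```json\n{"severity": "moderate", "reason": "bumper damage"}\n```')
result = extract_json(reply)
print(result["severity"])  # moderate
```

A greedy match from the first `{` to the last `}` is a simple heuristic that works when the model emits exactly one JSON object; nested or multiple objects would need a stricter parse.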

Multimodal Prompting with local LLMs using KNIME and Ollama

https://medium.com/p/74928cf5d09f

Set a dummy credential for the OpenAI client
Credentials Configuration
Model: mistral-small3.1:latest
OpenAI LLM Selector
Combine insurance claims and accident images into a message for the prompt
Message Creator
insurance_claims.table
Table Reader
results_claims.table
Table Writer
Column Filter
JSON to Table
extract the JSON part from the response
Expression
String to JSON
Point to the Ollama server by providing the URL of a local host: http://localhost:11434/v1
OpenAI Authenticator
View the results of the LLM analysis
Table View
keep only 5 rows
Row Filter
bring back the original pictures and claims
Joiner
Select an Ollama model like "mistral-small3.1:latest" that supports both images and text
Select Ollama Model
String Format Manager
Prompt an LLM withboth text and images
LLM Prompter

Nodes

Extensions

Links