
Mitigate hallucinations in LLMs with RAG

This workflow shows how to mitigate factual hallucinations in LLM responses about KNIME nodes for deep learning by implementing a Retrieval Augmented Generation (RAG) framework. The question we ask is: "What KNIME node should I use for transfer learning?"

We first import and embed a knowledge base containing the node descriptions of the KNIME Deep Learning - Keras Integration. Next, we create a Vector Store of that knowledge base and export it.
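The embed-and-store step above can be sketched in Python. Note that this is a toy illustration, not the workflow itself: the workflow uses the OpenAI Embedding Model Selector and FAISS Vector Store Creator nodes, whereas here a simple bag-of-words embedding and a JSON export stand in for the real embedding model and FAISS index, and the abbreviated node descriptions are illustrative.

```python
# Hedged sketch of "load & embed knowledge base" and "create and export
# vector store". The embed() function below is a toy stand-in for the
# OpenAI embedding model used in the workflow.
import json
import math

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy embedding: hash each word into a fixed-size, unit-normalized vector."""
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[sum(ord(c) for c in word) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

# Knowledge base: node descriptions of the KNIME Deep Learning - Keras
# Integration (abbreviated, illustrative examples)
knowledge_base = [
    "Keras Network Learner: trains a Keras deep learning network.",
    "Keras Freeze Layers: freezes selected layers, e.g. for transfer learning.",
    "Keras Network Reader: reads a Keras network from a file.",
]

# "Vector store": each document paired with its embedding, then exported
# (the workflow writes the FAISS store to disk via the Model Writer node)
store = [{"text": doc, "vector": embed(doc)} for doc in knowledge_base]
serialized = json.dumps(store)
```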

We implement a RAG process where we query the Vector Store and retrieve the five documents that are most similar to the query. Next, we use the retrieved documents to augment the prompt with more context. Finally, we prompt ChatGPT to generate a response.
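The retrieval and augmentation steps can be sketched as follows. Again this is a hedged stand-in: the workflow's Vector Store Retriever node performs the similarity search against the FAISS store, and the String Manipulation node builds the augmented prompt; here a toy embedding and cosine similarity replace those nodes, and the final call to ChatGPT is deliberately omitted.

```python
# Hedged sketch of the retrieval and augmentation steps of the RAG process.
import math

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy embedding: hash each word into a fixed-size, unit-normalized vector."""
    vec = [0.0] * dim
    for word in text.lower().split():
        vec[sum(ord(c) for c in word) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query: str, docs: list[str], k: int = 5) -> list[str]:
    """Return the k documents with the highest cosine similarity to the query."""
    q = embed(query)
    scored = sorted(docs, key=lambda d: -sum(a * b for a, b in zip(q, embed(d))))
    return scored[:k]

# Illustrative node descriptions standing in for the real knowledge base
docs = [
    "Keras Freeze Layers: freezes selected layers, e.g. for transfer learning.",
    "Keras Network Learner: trains a Keras deep learning network.",
    "Keras Network Reader: reads a Keras network from a file.",
    "Keras Network Writer: writes a Keras network to a file.",
    "DL Python Network Creator: creates a network with a Python script.",
    "Keras Dense Layer: adds a fully connected layer to a network.",
]

query = "What KNIME node should I use for transfer learning?"
context = "\n".join(retrieve(query, docs, k=5))

# Augmentation: splice the retrieved documents into the prompt
prompt = (
    "Answer the question using only the context below.\n\n"
    f"Context:\n{context}\n\n"
    f"Question: {query}"
)
# Generation (omitted): the prompt would now be sent to ChatGPT
# via the OpenAI LLM Prompter node.
```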

URL: OpenAI API Keys https://platform.openai.com/account/api-keys
URL: What are hallucinations in AI? (Blog) https://www.knime.com/blog/ai-hallucinations
URL: Mitigate hallucinations in LLMs using RAG with KNIME (Blog) https://www.knime.com/blog/mitigate-hallucinations-in-LLMs-with-RAG

Mitigation of hallucinations in LLMs with RAG

2. Retrieval Augmented Generation

Retrieval
Augmentation
Generation

1. Create a vector store from a knowledge base

Load & embed Knowledge Base
Create and export Vector Store
LLM Prompter
prompt engineering
String Manipulation
ChatGPT
OpenAI LLM Selector
Import vector store
Model Reader
Visualize answers about KNIME Keras Integration nodes
RAG viz
KNIME DL Keras Integration node descriptions
Excel Reader
Model Writer
OpenAI API Key
Credentials Configuration
Questions to the store (similarity search)
Table Creator
Excel Writer
Column Filter
Expression
FAISS Vector Store Creator
OpenAI Embedding Model Selector
OpenAI Authenticator
Retrieve documents fitting the query from the vector store
Vector Store Retriever
