This connector allows you to connect to a local GPT4All LLM. To get started, download a model either through the GPT4All client or by downloading a GGUF model from the Hugging Face Hub. Once you have downloaded the model, specify its file path in the configuration dialog to use it. It is not necessary to install the GPT4All client to execute the node.
Some models (e.g. Llama 2) have been fine-tuned for chat applications, so they might behave unexpectedly if their prompts don't follow a chat-like structure:
User: <The prompt you want to send to the model> Assistant:
Use the prompt template for the specific model from the GPT4All model list if one is provided.
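As an illustration, wrapping a raw prompt in such a chat-like structure could look like the sketch below. This is a hypothetical helper, not part of the node or the GPT4All API; the exact template varies by model, so always prefer the template given in the GPT4All model list.

```python
def apply_chat_template(user_prompt: str) -> str:
    """Wrap a raw prompt in a generic chat-style structure.

    The "User: ... Assistant:" format shown here is only an example;
    chat-tuned models each expect their own specific template.
    """
    return f"User: {user_prompt} Assistant:"

# Example: the wrapped prompt is what gets sent to a chat-tuned model.
prompt = apply_chat_template("Summarize the following text.")
```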
The currently supported models are based on GPT-J, LLaMA, MPT, Replit, Falcon and StarCoder.
For more information and detailed instructions on downloading compatible models, please visit the GPT4All GitHub repository.
Note: This node cannot be used on the KNIME Hub, as the models cannot be embedded into the workflow due to their large size.