This node allows you to connect to a local GPT4All LLM. To get started, you need to download a model either through the GPT4All client or by downloading a GGUF model from the Hugging Face Hub. Once you have downloaded the model, specify its file path in the configuration dialog to use it.
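The node itself is configured entirely through its dialog, but the same local-model setup can be illustrated with the GPT4All Python bindings. The sketch below assumes a hypothetical GGUF file name and directory; substitute the path of the model you actually downloaded.

```python
from gpt4all import GPT4All

# Point the GPT4All bindings at a model file you downloaded yourself.
# The file name and directory below are placeholders, not required values.
model = GPT4All(
    model_name="mistral-7b-instruct.Q4_0.gguf",  # hypothetical local GGUF file
    model_path="/path/to/models",                # directory containing that file
    allow_download=False,                        # use only the local file, never download
)

print(model.generate("Briefly explain what a GGUF file is.", max_tokens=100))
```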
It is not necessary to install the GPT4All client to execute the node.
It is recommended to use models (e.g. Llama 2) that have been fine-tuned for chat applications. For model specifications including prompt templates, see the GPT4All model list.
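With a chat-tuned model, the GPT4All Python bindings can apply the model's prompt template automatically inside a chat session. This is only a minimal sketch of that behavior, assuming a hypothetical chat-tuned GGUF file name and path.

```python
from gpt4all import GPT4All

# Hypothetical chat-tuned model file; adjust the name and path to your download.
model = GPT4All(
    "llama-2-7b-chat.Q4_0.gguf",
    model_path="/path/to/models",
    allow_download=False,
)

# chat_session() wraps each prompt in the model's chat template and keeps the
# conversation history between successive generate() calls.
with model.chat_session():
    print(model.generate("What is KNIME?", max_tokens=120))
    print(model.generate("Summarize that in one sentence.", max_tokens=40))
```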
The currently supported models are based on GPT-J, LLaMA, MPT, Replit, Falcon and StarCoder.
For more information and detailed instructions on downloading compatible models, please visit the GPT4All GitHub repository.
Note: This node cannot be used on the KNIME Hub, as the models cannot be embedded into the workflow due to their large size.