Get structured output from an LLM
This workflow demonstrates how to reliably generate structured output from a large language model and present it as a formatted output table. It uses a small set of customer feedback examples to show how structured prompting leads to consistent, machine-readable results.
The workflow shows how to:
Instruct an LLM to return valid structured output using an explicit prompt schema
Define multiple output fields with different data types
Return each extracted field as a separate column in the output table
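The steps above can be sketched in plain Python, outside KNIME: embed an explicit schema in the prompt, then validate that the model's reply contains the declared fields with the declared types. The field names (sentiment, topic, rating, needs_followup) and the prompt wording are illustrative assumptions, not the workflow's actual schema, and the model reply is simulated here rather than coming from a real LLM call.

```python
import json

# Explicit schema in the prompt: the model is told exactly which fields to
# return and what type each one has. Field names are illustrative.
SCHEMA_PROMPT = """Extract the following fields from the customer feedback
and reply with ONLY a valid JSON object, no extra text:
  "sentiment": string, one of "positive", "neutral", "negative"
  "topic": string, a short label for the main subject
  "rating": integer from 1 to 5
  "needs_followup": boolean

Feedback: {feedback}"""

def build_prompt(feedback: str) -> str:
    return SCHEMA_PROMPT.format(feedback=feedback)

def parse_response(raw: str) -> dict:
    """Parse the model's reply and check it against the declared schema."""
    record = json.loads(raw)
    assert isinstance(record["sentiment"], str)
    assert isinstance(record["topic"], str)
    assert isinstance(record["rating"], int)
    assert isinstance(record["needs_followup"], bool)
    return record

# Simulated model reply -- in practice this would come from the LLM call.
raw_reply = ('{"sentiment": "negative", "topic": "shipping", '
             '"rating": 2, "needs_followup": true}')
record = parse_response(raw_reply)
```

Validating the reply immediately after parsing is what makes the output reliable: a malformed or incomplete reply fails fast instead of silently producing a broken row.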
The LLM Prompter outputs the structured results directly as a KNIME table, making them easy to validate and integrate into downstream workflows such as dashboards, data pipelines, or agent tools.
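To illustrate what "results as a table" means outside KNIME, the sketch below turns a list of parsed replies into a table with one row per feedback item and one column per extracted field. The records are simulated sample data, not output from the actual workflow.

```python
import csv
import io

# Simulated parsed LLM replies -- one dict per customer feedback item.
records = [
    {"sentiment": "negative", "topic": "shipping", "rating": 2, "needs_followup": True},
    {"sentiment": "positive", "topic": "support", "rating": 5, "needs_followup": False},
]

# One column per schema field, one row per record, mirroring the output table.
columns = ["sentiment", "topic", "rating", "needs_followup"]
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(columns)
for r in records:
    writer.writerow([r[c] for c in columns])

table = buf.getvalue()
print(table)
```

Because every record follows the same schema, the column layout is stable across runs, which is what lets downstream dashboards or pipelines consume the table without per-run adjustments.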