This node passes JSON objects to a workflow, executes that workflow, and fetches the JSON objects it returns. This happens once for each row of the input table, and the fetched results are appended to that row as new cells.
Sending data. The called workflow can receive data from this node via Container Input nodes, e.g., Container Input (JSON), Container Input (Row), or Container Input (Table). All of these expect a JSON object but make different assumptions about its structure. For instance, Container Input (JSON) accepts any JSON object, while Container Input (Row) expects a JSON object in which each key corresponds to a column name and the associated value holds the corresponding cell content.
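As an illustration (the column names and values below are made up), a Container Input (Row) node in the called workflow would expect a flat payload shaped like this, where each key maps to a column of the row being sent, whereas a Container Input (JSON) node would accept any well-formed JSON object, including nested ones:

```json
{
  "customer_id": "C-1042",
  "name": "Alice",
  "age": 42
}
```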
What is passed to a specific Container Input node affects the execution of the called workflow. There are three options:
- From column: pass the JSON contained in a selected column of the current input row. The called workflow is executed for each row.
- Use custom JSON: pass static JSON. The called workflow is executed only once, and the result is reused for all subsequent rows.
- Use default: send nothing, causing the default JSON object defined by the corresponding Container Input node to be used. The called workflow is executed only once, and the result is reused for all subsequent rows. Note that if the called workflow has been saved with one of its output nodes in an executed state, the return value for that output node is the JSON value null.
Receiving data. The called workflow can send data back via Container Output nodes (Row, Table, or JSON). Each Container Output node results in a column being appended to the output table.
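For example (hypothetical values), if the called workflow contains a single Container Output (Row) node, the JSON it returns might look like the snippet below; for each input row, this object is appended as a cell in the new column corresponding to that output node:

```json
{
  "prediction": "spam",
  "confidence": 0.91
}
```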
Concurrent execution. Note that if the called workflow is local, concurrent calls to it will not be processed in parallel, i.e., each call will be executed sequentially. If the called workflow is remote, each call will result in a new job, which can be executed in parallel with other jobs.