Spark Correlation Filter

Manipulator

This node uses the model generated by a Correlation node to determine which columns are redundant (i.e. highly correlated) and filters them out. The output contains the reduced set of columns.

The filtering step works roughly as follows: For each column in the correlation model, the number of correlated columns is counted, given a threshold value for the correlation coefficient (specified in the dialog). The column with the most correlated columns is chosen to "survive", and all columns correlated with it are filtered out. This procedure is repeated until no more correlated columns can be identified. The problem of finding a minimum set of columns that satisfies the constraints is difficult to solve exactly; the greedy method applied here is, however, known to be a good approximation.
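
The greedy procedure described above can be illustrated with a few lines of Python. This is only a sketch of the idea, not the node's actual implementation: the correlation_filter helper, the column names, the sample matrix and the 0.9 threshold are made up for the example, and the matrix is assumed to hold correlation coefficients in the same order as the column list.

    import numpy as np

    def correlation_filter(corr, columns, threshold=0.8):
        """Greedily reduce a set of columns given their correlation matrix.

        corr      -- square matrix of correlation coefficients
        columns   -- column names, in the same order as the matrix rows
        threshold -- coefficient above which two columns count as correlated
        """
        remaining = set(range(len(columns)))
        keep = []
        while True:
            # For every remaining column, count how many other remaining
            # columns it is correlated with above the threshold.
            counts = {i: sum(1 for j in remaining
                             if j != i and abs(corr[i][j]) >= threshold)
                      for i in remaining}
            best = max(counts, key=counts.get)
            if counts[best] == 0:
                break  # no correlated pairs left to resolve
            # The column with the most correlated partners survives ...
            keep.append(columns[best])
            # ... and it, together with all its correlated partners,
            # leaves the pool of candidates.
            remaining -= {j for j in remaining
                          if j == best or abs(corr[best][j]) >= threshold}
        # Columns that never exceeded the threshold survive as well.
        keep.extend(columns[i] for i in sorted(remaining))
        return keep

    # Toy example: "a" and "b" are almost identical, so one of them is dropped.
    names = ["a", "b", "c"]
    corr = np.array([[1.0, 0.97, 0.20],
                     [0.97, 1.0, 0.15],
                     [0.20, 0.15, 1.0]])
    print(correlation_filter(corr, names, threshold=0.9))  # -> ['a', 'c'] (tie-breaking may pick 'b' instead of 'a')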

Input Ports

  1. Type: Correlation. The correlation model from the Correlation node.
  2. Type: Spark Data. Numeric input data to filter. It must contain the set of columns that were used to create the correlation model. (Typically you connect the input data of the Correlation node here.)

Output Ports

  1. Type: Spark Data. The filtered input data (a stand-alone PySpark sketch of a comparable filtering step follows below).
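
For readers who want to reproduce a comparable filter directly on a Spark DataFrame, the following sketch uses PySpark's VectorAssembler and pyspark.ml.stat.Correlation to build a correlation matrix and then keeps only the surviving columns. It is an illustration under assumptions, not the node's implementation: the column names, sample values and 0.9 threshold are invented, and it reuses the correlation_filter helper from the sketch above.

    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.stat import Correlation

    spark = SparkSession.builder.appName("correlation-filter-sketch").getOrCreate()

    names = ["a", "b", "c"]
    df = spark.createDataFrame(
        [(1.0, 2.1, 0.3), (2.0, 4.0, 0.9), (3.0, 6.2, 0.1), (4.0, 7.9, 0.6)],
        names)

    # Assemble the numeric columns into a vector column and compute the
    # Pearson correlation matrix -- roughly what the Correlation node's
    # model provides.
    vectors = VectorAssembler(inputCols=names, outputCol="features").transform(df)
    matrix = Correlation.corr(vectors, "features", "pearson").head()[0].toArray()

    # Greedily pick the surviving columns (correlation_filter as sketched
    # above) and keep only those in the output DataFrame.
    surviving = correlation_filter(matrix, names, threshold=0.9)
    df.select(*surviving).show()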

Find here

Tools & Services > Apache Spark > Statistics

Make sure to have this extension installed:

KNIME Extension for Apache Spark
