Spark Correlation Filter

This node uses the model as generated by a Correlation node to determine which columns are redundant (i.e. correlated) and filters them out. The output will contain the reduced set of columns.

The filtering step works roughly as follows: for each column in the correlation model, the number of correlated columns is counted, given a threshold value for the correlation coefficient (specified in the dialog). The column with the most correlated columns is chosen to "survive", and all columns correlated with it are filtered out. This procedure is repeated until no more correlated columns can be identified. The problem of finding a minimum set of columns that satisfies the constraints is hard to solve exactly; the greedy method applied here is known to be a good approximation, however.
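The greedy procedure above can be sketched in plain Python. This is an illustrative approximation, not the node's actual implementation: it assumes the correlation model is available as a symmetric matrix (here a dict of dicts), and the function name and tie-breaking rule are assumptions.

```python
def correlation_filter(corr, threshold):
    """Return the set of columns that 'survive' greedy correlation filtering.

    corr: symmetric correlation matrix as {col: {col: coefficient}}.
    threshold: minimum absolute correlation to treat two columns as redundant.
    """
    remaining = set(corr)
    survivors = set()
    while remaining:
        # For each remaining column, count how many other remaining columns
        # it is correlated with above the threshold.
        counts = {
            c: sum(1 for o in remaining
                   if o != c and abs(corr[c][o]) >= threshold)
            for c in remaining
        }
        # Pick the column with the most correlated partners
        # (ties broken alphabetically for determinism).
        col = max(sorted(remaining), key=counts.get)
        if counts[col] == 0:
            # No correlated pairs left: all remaining columns survive.
            survivors |= remaining
            break
        # The chosen column survives; its correlated partners are dropped.
        survivors.add(col)
        remaining -= {o for o in remaining
                      if o != col and abs(corr[col][o]) >= threshold}
        remaining.discard(col)
    return survivors


# Hypothetical example: 'a' and 'b' are strongly correlated, 'c' is not.
corr = {
    "a": {"a": 1.0, "b": 0.95, "c": 0.1},
    "b": {"a": 0.95, "b": 1.0, "c": 0.2},
    "c": {"a": 0.1, "b": 0.2, "c": 1.0},
}
kept = correlation_filter(corr, threshold=0.9)
```

With a threshold of 0.9, one of the pair `a`/`b` is kept and `c` survives untouched, so the output contains the reduced column set, as described above.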

Node details

Input ports
  1. Type: Correlation
    Correlation Model
    The model from the correlation node.
  2. Type: Spark Data
    Spark DataFrame/RDD
    Numeric input data to filter. It must contain the set of columns that were used to create the correlation model. (Typically you connect the input data from the correlation node here.)
Output ports
  1. Type: Spark Data
    Filtered data from input
    The input data with the redundant (correlated) columns removed.

