This rule learner* uses Spark MLlib to compute frequent item sets and then extract association rules from the given input data. Association rules describe relations between items in a set of transactions. For example, if a customer bought onions, potatoes and meat in a transaction, this implies that a new customer who buys onions and potatoes is likely to also buy meat. This can be written as an association rule with onions and potatoes as the antecedents and meat as the consequent.
Transactions/item sets are represented as collection columns. The Spark GroupBy or Spark SQL nodes are recommended for creating collection columns in Spark.
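The following PySpark sketch illustrates how such a collection column can be built by grouping line items per transaction. The column names (transaction_id, item, items) and the sample data are illustrative assumptions only; inside KNIME this step is performed by the Spark GroupBy or Spark SQL node rather than by hand-written code.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical line-item data: one row per item bought in a transaction.
line_items = spark.createDataFrame(
    [(1, "onions"), (1, "potatoes"), (1, "meat"),
     (2, "onions"), (2, "potatoes"),
     (3, "onions"), (3, "meat")],
    ["transaction_id", "item"],
)

# Collect the items of each transaction into a collection column, similar
# to a "collect set" aggregation in the Spark GroupBy node. collect_set
# drops duplicate items; Spark's FP-growth expects unique items per transaction.
transactions = line_items.groupBy("transaction_id").agg(
    F.collect_set("item").alias("items")
)

transactions.show(truncate=False)
```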
Frequent item sets are computed using the FP-growth implementation provided by Spark MLlib, using input data with a collection column, where each cell holds the items of a transaction. Rows with missing values in the selected item column are ignored. FP-growth uses a suffix-tree (FP-tree) structure to encode transactions without generating candidate sets explicitly and then extracts the frequent item sets from this FP-tree. This approach avoids the usually expensive generation of explicit candidate sets used in Apriori-like algorithms designed for the same purpose. More information about the FP-growth algorithm can be found in Han et al., Mining frequent patterns without candidate generation. Spark implements Parallel FP-growth (PFP), described in Li et al., PFP: Parallel FP-Growth for Query Recommendation.
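As a rough illustration of the underlying computation, the sketch below continues the example above and runs the DataFrame-based FP-growth implementation from pyspark.ml.fpm (available since Spark 2.2). The minSupport and minConfidence values are arbitrary, and whether the node internally uses this API or the older RDD-based org.apache.spark.mllib.fpm API is not stated here.

```python
from pyspark.ml.fpm import FPGrowth

# minSupport: minimum fraction of transactions in which an item set must
# occur to be reported as frequent. minConfidence only affects the rule
# extraction step, not the frequent item sets themselves.
fp_growth = FPGrowth(itemsCol="items", minSupport=0.5, minConfidence=0.6)
model = fp_growth.fit(transactions)

# Frequent item sets together with their absolute occurrence counts.
model.freqItemsets.show(truncate=False)
```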
Association rules are extracted with Spark MLlib from the previously computed frequent item sets. Each association rule maps an item set (antecedent) to a single item (consequent). The Spark Association Rule Apply node can be used to apply the rules produced by this node.
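Continuing the sketch, inspecting and applying the rules could look as follows. The exact output columns depend on the Spark version (lift and support were added in later releases), and the transform step is only meant to mirror conceptually what the Spark Association Rule Apply node does.

```python
# Each rule has an antecedent (item set), a consequent (single item) and a
# confidence; newer Spark versions also report lift and support.
model.associationRules.show(truncate=False)

# For each transaction, collect the consequents of all rules whose
# antecedent is fully contained in the transaction's item set.
model.transform(transactions).show(truncate=False)
```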
See Association rule learning (Wikipedia) for general information.
This node requires at least Apache Spark 2.0.
(*) RULE LEARNER is a registered trademark of Minitab, LLC and is used with Minitab’s permission.