137 results
- This is the first workflow in the PubChem Big Data story. In the top part of the workflow we download the assay data from the Pub…
- This is the third workflow in the PubChem Big Data story. First, we obtain the SMILES of the necessary CIDs using PubChem REST se…
- AWS Authentication component, Paths to Livy and S3 component, and Create Spark Context (Livy) node require configuration.
- Creates a fully functional local big data environment including Apache Hive, Apache Spark and HDFS. The Spark WebUI of the create…
- Creates a new Spark context via Spark Jobserver. Support for Spark Jobserver is deprecated and the Create Spark Context (Livy) no…
- Creates a new Spark context via Apache Livy. This node requires access to a remote file system such as HDFS/webHDFS/httpFS or S3…
- Creates a fully functional big data environment for testing purposes, including Apache Hive, Apache Spark and a remote file syste…
- Creates a Databricks Environment connected to an existing Databricks cluster. See AWS or Azure Databricks documentation for more…
- This node allows columns to be filtered from the input Spark DataFrame/RDD while only the remaining columns are passed to the out…
- This node joins two Spark DataFrame/RDDs in a database-like way. The join is based on the joining columns of both DataFrame/RDDs.
- Reads missing value replacement settings from the PMML port and applies them to the data. The node can handle the output of the K…
- This node helps handle missing values found in the incoming Spark data. The first tab in the dialog (labeled Default) provides de…
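Several of the items above create a Spark context via Apache Livy. For readers unfamiliar with Livy, here is a minimal sketch of what that handshake looks like outside KNIME, using Livy's REST API directly; the endpoint URL is a placeholder assumption, and this is not how the KNIME node is implemented internally.

```python
import time

import requests

# Hypothetical Livy endpoint; replace with your cluster's URL.
LIVY_URL = "http://livy-host:8998"

# Ask Livy to start a new interactive Spark session (PySpark kind).
resp = requests.post(f"{LIVY_URL}/sessions", json={"kind": "pyspark"})
resp.raise_for_status()
session_url = f"{LIVY_URL}/sessions/{resp.json()['id']}"

# Poll until the session leaves the 'starting' state.
while requests.get(session_url).json()["state"] == "starting":
    time.sleep(2)

# Submit a statement against the new Spark context.
stmt = requests.post(
    f"{session_url}/statements",
    json={"code": "spark.range(10).count()"},
)
print(stmt.json())
```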
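The last few items describe Spark DataFrame operations: column filtering, a database-like join, and missing-value handling. As a non-KNIME reference point, here is a rough PySpark sketch of the same three operations; the data, column names, and fill value are invented for illustration, and the KNIME nodes themselves are configured through dialogs rather than code.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sketch").getOrCreate()

# Invented example data standing in for two Spark DataFrames.
assays = spark.createDataFrame(
    [(1, "active", None), (2, "inactive", 0.7)],
    ["cid", "outcome", "score"],
)
smiles = spark.createDataFrame(
    [(1, "CCO"), (2, "c1ccccc1")],
    ["cid", "smiles"],
)

# Column filter: keep only the listed columns.
filtered = assays.select("cid", "outcome", "score")

# Database-like join on the shared joining column.
joined = filtered.join(smiles, on="cid", how="inner")

# Missing-value handling: replace nulls in 'score' with a default.
cleaned = joined.fillna({"score": 0.0})

cleaned.show()
```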