This node uses database-specific bulk loading functionality, which only some databases (e.g. Hive, MySQL, PostgreSQL and H2) support, to load large amounts of data into an existing database table.
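To illustrate what database-specific bulk loading means, the following sketch contrasts row-wise INSERT statements with PostgreSQL's COPY command, one of the bulk mechanisms such databases offer. This is not the node's implementation; it assumes the psycopg2 driver, and the table name "sales" and the connection parameters are placeholders.

```python
# Minimal sketch (assumed PostgreSQL + psycopg2), not the node's own code.
import io
import psycopg2

rows = [("2024-01-01", 42, 9.99), ("2024-01-02", 17, 4.50)]
conn = psycopg2.connect(host="localhost", dbname="shop", user="loader")

with conn, conn.cursor() as cur:
    # Row-wise loading: one INSERT statement per row (slow for large data).
    cur.executemany("INSERT INTO sales VALUES (%s, %s, %s)", rows)

    # Bulk loading: the database's COPY command ingests the whole data
    # stream in a single operation, which is what bulk loaders rely on.
    buf = io.StringIO("".join(",".join(map(str, r)) + "\n" for r in rows))
    cur.copy_expert("COPY sales FROM STDIN WITH (FORMAT csv)", buf)
```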
Notice:
Most databases do not perform any data checks when bulk loading data into a table, which might lead to a corrupt table. The node does some preliminary checks, such as verifying that the column order and column names are compatible; however, it does not check column type compatibility. So before using this node, please make sure that the types of the KNIME columns and the database columns are compatible.
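As a rough illustration of the kind of preliminary check described above, the following hypothetical helper compares the incoming column names and order against the target table's metadata. It is not part of the node; the information_schema query assumes a PostgreSQL-like database, and the cursor is expected to come from a driver such as psycopg2.

```python
# Hypothetical pre-check sketch; "cur" is an open database cursor.
def check_columns(cur, table, knime_columns):
    cur.execute(
        """SELECT column_name, data_type
           FROM information_schema.columns
           WHERE table_name = %s
           ORDER BY ordinal_position""",
        (table,),
    )
    db_columns = [name for name, _ in cur.fetchall()]
    if [c.lower() for c in knime_columns] != [c.lower() for c in db_columns]:
        raise ValueError(
            f"Column mismatch: input {knime_columns} vs. table {db_columns}"
        )
    # Like the node itself, this only checks names and order. Type
    # compatibility (e.g. string vs. integer) still has to be verified
    # manually, because most databases will not reject mismatched values.
```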
Depending on the database, an intermediate file format is often used for efficiency, which might require uploading the file to a server first. If a file needs to be uploaded, any of the protocols supported by the file handling nodes can be used, e.g. HDFS or webHDFS for Apache Hive. After the data has been loaded into the table, the uploaded file is deleted if it is no longer needed by the database.
If there is no need to upload or store the file for any reason, a connected remote file connection prevents the node from executing.
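The following sketch shows one way the upload-then-load pattern described above could look for Apache Hive reached over webHDFS. It uses the third-party hdfs and pyhive Python packages purely for illustration, not what the node executes internally; host names, ports, paths and the table name "sales" are placeholders.

```python
# Illustrative upload-then-load sketch (assumed hdfs + pyhive packages).
from hdfs import InsecureClient
from pyhive import hive

# 1. Upload the intermediate file (e.g. a locally written CSV) to HDFS.
hdfs_client = InsecureClient("http://namenode:9870", user="hive")
hdfs_client.upload("/tmp/knime_upload/data.csv", "data.csv")

# 2. Ask Hive to load the uploaded file into the target table.
conn = hive.connect(host="hiveserver", port=10000, username="hive")
cur = conn.cursor()
cur.execute("LOAD DATA INPATH '/tmp/knime_upload/data.csv' INTO TABLE sales")

# 3. Clean up: LOAD DATA INPATH moves the file into the table's storage
#    directory, so only the now-empty staging directory may need removing.
hdfs_client.delete("/tmp/knime_upload", recursive=True)
```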
Some databases, such as MySQL and PostgreSQL, support both file-based and memory-based uploading, which require different rights in the database. If this is the case and you do not have the rights to perform file-based loading of the data, try the memory-based method instead. If supported, the different modes can be selected in the Loader mode section of the "Options" tab, which is otherwise hidden.
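For PostgreSQL, the difference between the two loader modes can be sketched as follows, again assuming the psycopg2 driver; the table "sales", the file paths and the connection parameters are placeholders. Server-side file loading usually needs elevated rights, while streaming the data over the connection does not.

```python
# Minimal sketch of the two loader modes, assuming PostgreSQL + psycopg2.
import psycopg2
from psycopg2 import errors

conn = psycopg2.connect(host="localhost", dbname="shop", user="loader")

def load_file_based(cur):
    # Server-side COPY reads a file that already lives on the database host
    # and usually needs superuser rights or the pg_read_server_files role.
    cur.execute("COPY sales FROM '/var/lib/postgresql/import/data.csv' "
                "WITH (FORMAT csv)")

def load_memory_based(cur):
    # COPY ... FROM STDIN streams the data through the client connection,
    # so plain INSERT privileges on the table are sufficient.
    with open("data.csv") as f:
        cur.copy_expert("COPY sales FROM STDIN WITH (FORMAT csv)", f)

with conn, conn.cursor() as cur:
    try:
        load_file_based(cur)
    except errors.InsufficientPrivilege:
        conn.rollback()             # abort the failed attempt ...
        load_memory_based(cur)      # ... and fall back to streaming
```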
Depending on the connected database, the dialog settings may change. For example, MySQL and PostgreSQL use a CSV file for the data transfer. To change how the CSV file is created, go to the "Advanced" tab.
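Purely as an illustration of the kind of CSV formatting choices such an "Advanced" tab typically exposes, the snippet below writes a CSV file with an explicit delimiter, quote character and line terminator. The option values are arbitrary examples, not the node's defaults.

```python
# Illustrative CSV creation with explicit formatting options.
import csv

with open("data.csv", "w", newline="") as f:
    writer = csv.writer(
        f,
        delimiter=",",          # column separator
        quotechar='"',          # used when a value contains the delimiter
        quoting=csv.QUOTE_MINIMAL,
        lineterminator="\n",    # line ending expected by the database loader
    )
    writer.writerow(["2024-01-01", 42, 9.99])
    writer.writerow(["2024-01-02", "", 4.50])   # empty field for a missing value
```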
This node can access a variety of file systems. More information about file handling in KNIME can be found in the official File Handling Guide.