This workflow shows how the KNIME Keras integration can be used to train and deploy a specialized deep neural network for semantic segmentation: the network decides, for each pixel in the input image, which class of object it belongs to.

To run the example, please make sure you have the following KNIME extensions installed:

* KNIME Deep Learning - Keras Integration (Labs)
* KNIME Image Processing (Community Contributions Trusted)
* KNIME Image Processing - Deep Learning Extension (Community Contributions Trusted)
* KNIME Streaming Execution (Beta) (Labs)
* KNIME Image Processing - Python Extension (Community Contributions Trusted)

You also need a local Python installation that includes Keras. Please refer to https://www.knime.com/deeplearning#keras for installation recommendations and further information.

Acknowledgements: The network architecture we use is an adaptation of the U-Net proposed by Ronneberger et al. in "U-Net: Convolutional Networks for Biomedical Image Segmentation" (https://arxiv.org/abs/1505.04597). The dataset we used is taken from Gould et al., "Decomposing a Scene into Geometric and Semantically Consistent Regions" (http://dags.stanford.edu/projects/scenedataset.html).
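To make the per-pixel classification concrete: a semantic segmentation network outputs one score per class for every pixel, and the predicted label map is obtained by taking, at each pixel, the class with the highest score. The following minimal NumPy sketch illustrates only this last step; the array shapes and random scores are hypothetical stand-ins for a real network's output, not part of the workflow itself.

```python
import numpy as np

# Hypothetical per-pixel class scores for a 4x4 image with 3 classes,
# shaped (height, width, n_classes) -- as a segmentation network
# with a softmax output layer would produce.
rng = np.random.default_rng(0)
scores = rng.random((4, 4, 3))

# Per-pixel prediction: for each pixel, pick the class with the
# highest score. The result is a label map of shape (height, width).
label_map = np.argmax(scores, axis=-1)

print(label_map.shape)  # (4, 4)
```

Every entry of `label_map` is an integer class index between 0 and 2, i.e. each pixel has been assigned exactly one object class.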
Created with KNIME Analytics Platform version 4.3.2