**560** results

- Freezes the parameters of the selected layers. If the model is trained afterwards, the parameters of the selected layers are not …
- This node executes a Keras deep learning network on a compatible external back end that can be selected by the user.
- This node performs supervised learning on a Keras deep learning network.
- This node reads a Keras deep learning network from an input file. The file can either contain a full, pre-trained network (.h5 fi…
- This layer performs convolution in a single dimension with a factorization of the convolution kernel into two smaller kernels. Co…
- ~~Keras Separable Convolution 1D Layer (deprecated)~~ This layer performs convolution in a single dimension with a factorization of the convolution kernel into two smaller kernels. Co…
- ~~Keras Separable Convolution 2D Layer (deprecated)~~ This layer performs convolution in two dimensions with a factorization of the convolution kernel into two smaller kernels. Corres…
- This layer performs convolution in two dimensions with a factorization of the convolution kernel into two smaller kernels. Corres…
- ~~Keras Transposed Convolution 2D Layer (deprecated)~~ The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of …
- The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of …
- Repeats the layer input element-wise in a single dimension. Corresponds to the Keras Upsampling 1D Layer.
- Repeats the rows and columns of the layer input. Corresponds to the Keras Upsampling 2D Layer.
- Writes a Keras network to a file.
- Allows manipulating the network architecture of a Keras deep learning model by choosing a new set of output tensors of the model…
- Exponential linear units were introduced to alleviate the disadvantages of ReLU and LeakyReLU units, namely to push the mean acti…
- A leaky ReLU is a rectified linear unit (ReLU) with a slope in the negative part of its input space. The motivation for leaky ReL…
- Like the leaky ReLU, the parametric ReLU introduces a slope in the negative part of the input space to improve learning dynamics …
- Like the leaky ReLU, the parametric ReLU introduces a slope in the negative part of the input space to improve learning dynamics …
- The softmax function is commonly used as the last layer in a classification network. It transforms an unconstrained n-dimensional…
- Similar to ordinary ReLUs but shifted by theta. f(x) = x for x > theta, f(x) = 0 otherwise. Corresponds to the Keras Thresholded…
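The activation layers listed above all apply simple element-wise functions (softmax excepted, which normalizes a whole vector). As a reference for what these nodes compute, here is a minimal NumPy sketch of the underlying functions; the default `alpha` and `theta` values are illustrative assumptions, not the nodes' defaults, and this is not the KNIME or Keras implementation.

```python
import numpy as np

def leaky_relu(x, alpha=0.1):
    # Leaky ReLU: a fixed slope `alpha` in the negative part of the
    # input space, instead of the hard zero of an ordinary ReLU.
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # Exponential linear unit: saturates smoothly toward -alpha for
    # large negative inputs, pushing mean activations toward zero.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def thresholded_relu(x, theta=1.0):
    # Thresholded ReLU as described above:
    # f(x) = x for x > theta, f(x) = 0 otherwise.
    return np.where(x > theta, x, 0.0)

def softmax(x):
    # Transforms an unconstrained n-dimensional vector into a
    # probability distribution (non-negative entries summing to 1).
    e = np.exp(x - np.max(x))  # subtract the max for numerical stability
    return e / e.sum()
```

The parametric ReLU entries compute the same function as `leaky_relu`, except that `alpha` is a learned parameter rather than a fixed constant.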
