- This layer performs convolution in a single dimension with a factorization of the convolution kernel into two smaller kernels. Corresponds to the Keras SeparableConv1D layer.
- This layer performs convolution in two dimensions with a factorization of the convolution kernel into two smaller kernels. Corresponds to the Keras SeparableConv2D layer (see the separable-convolution sketch after this list).
- The need for transposed convolutions generally arises from the desire to use a transformation going in the opposite direction of a normal convolution, i.e. from something that has the shape of the output of some convolution to something that has the shape of its input (see the transposed-convolution sketch after this list).
- Freezes the parameters of the selected layers. If the model is trained afterwards, the parameters of the selected layers are not updated (see the layer-freezing sketch after this list).
- This node executes a Keras deep learning network on a compatible external back end that can be selected by the user.
- This node reads a Keras deep learning network from an input file. The file can either contain a full, pre-trained network (.h5 file) or only a network architecture (.json/.yaml file) (see the network-loading sketch after this list).
- Similar to ordinary ReLUs but shifted by theta: f(x) = x for x > theta, f(x) = 0 otherwise. Corresponds to the Keras ThresholdedReLU layer (see the activation sketch after this list).
- This layer creates a convolution kernel that is convolved with the layer input over a single dimension. Corresponds to the Keras Conv1D layer.
- This layer creates a convolution kernel that is convolved with the layer input over two dimensions. Corresponds to the Keras Conv2D layer (see the convolution sketch after this list).
- Allows manipulating the network architecture of a Keras deep learning model by choosing a new set of output tensors of the model (see the output-selection sketch after this list).
- Exponential linear units were introduced to alleviate the disadvantages of ReLU and LeakyReLU units, namely to push the mean activations closer to zero, which speeds up learning.
- A leaky ReLU is a rectified linear unit (ReLU) with a slope in the negative part of its input space. The motivation for leaky ReLUs is to keep a small, nonzero gradient for negative inputs so that units do not "die" during training.
- Like the leaky ReLU, the parametric ReLU introduces a slope in the negative part of the input space to improve learning dynamics; in contrast to the leaky ReLU, the slope is learned during training rather than fixed in advance (see the activation sketch after this list).
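
Separable-convolution sketch: a minimal illustration of the kernel factorization described above, assuming TensorFlow's tf.keras API; the tensor shapes are arbitrary examples. The depthwise/pointwise split produces the same output shape as a full convolution with far fewer parameters.

```python
import tensorflow as tf

x = tf.random.normal((1, 32, 32, 16))          # batch, height, width, channels

full = tf.keras.layers.Conv2D(64, kernel_size=3)
separable = tf.keras.layers.SeparableConv2D(64, kernel_size=3)

print(full(x).shape, separable(x).shape)       # both (1, 30, 30, 64)

# The factorization needs far fewer weights than the full 3x3x16x64 kernel:
print(full.count_params())       # 9280 = 3*3*16*64 + 64
print(separable.count_params())  # 1232 = 144 (depthwise) + 1024 (pointwise) + 64
```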
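Transposed-convolution sketch: a minimal illustration of the shape relationship behind transposed convolutions, again assuming tf.keras. A strided Conv2D shrinks the feature map; a Conv2DTranspose with the same stride maps back to the input shape.

```python
import tensorflow as tf

x = tf.random.normal((1, 8, 8, 32))

down = tf.keras.layers.Conv2D(16, kernel_size=3, strides=2, padding="same")
up = tf.keras.layers.Conv2DTranspose(32, kernel_size=3, strides=2, padding="same")

y = down(x)    # (1, 4, 4, 16): a normal strided convolution shrinks the map
z = up(y)      # (1, 8, 8, 32): the transposed convolution restores the input shape
print(y.shape, z.shape)
```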
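Layer-freezing sketch: what freezing roughly corresponds to in plain Keras, assuming tf.keras; the tiny model is hypothetical.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])

model.layers[0].trainable = False            # freeze the first layer's weights
model.compile(optimizer="adam", loss="mse")  # recompile so the change takes effect
print([layer.trainable for layer in model.layers])  # [False, True]
```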
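Network-loading sketch: the two reading modes in plain Keras, assuming tf.keras; the file names network.h5 and architecture.json are hypothetical.

```python
import tensorflow as tf

# Full pre-trained network: architecture plus weights in one HDF5 file.
model = tf.keras.models.load_model("network.h5")

# Architecture only: rebuild the model from a JSON description;
# the resulting weights are uninitialized until trained or loaded.
with open("architecture.json") as f:
    architecture_only = tf.keras.models.model_from_json(f.read())
```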
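Convolution sketch: a minimal contrast of one- and two-dimensional convolution, assuming tf.keras; shapes are arbitrary examples.

```python
import tensorflow as tf

seq = tf.random.normal((1, 100, 8))      # batch, steps, channels
img = tf.random.normal((1, 28, 28, 3))   # batch, height, width, channels

conv1d = tf.keras.layers.Conv1D(16, kernel_size=5)  # kernel slides along one dimension
conv2d = tf.keras.layers.Conv2D(16, kernel_size=5)  # kernel slides along two dimensions

print(conv1d(seq).shape)   # (1, 96, 16)
print(conv2d(img).shape)   # (1, 24, 24, 16)
```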
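Output-selection sketch: choosing a new set of output tensors in plain Keras, assuming tf.keras; the base model and the layer name "hidden" are hypothetical.

```python
import tensorflow as tf

# A hypothetical base model with a named intermediate layer.
inputs = tf.keras.Input(shape=(8,))
hidden = tf.keras.layers.Dense(32, activation="relu", name="hidden")(inputs)
outputs = tf.keras.layers.Dense(1, name="out")(hidden)
model = tf.keras.Model(inputs, outputs)

# Pick a new set of output tensors: here the intermediate activation
# becomes the output of a derived model.
feature_extractor = tf.keras.Model(
    inputs=model.input,
    outputs=model.get_layer("hidden").output,
)
print(feature_extractor(tf.random.normal((1, 8))).shape)  # (1, 32)
```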
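Activation sketch: the activation functions above written out in plain NumPy; theta and the slope values are arbitrary examples. The parametric ReLU is the same formula as the leaky ReLU except that the slope is a learned parameter (in Keras, the PReLU layer) rather than a constant.

```python
import numpy as np

def thresholded_relu(x, theta=1.0):
    # f(x) = x for x > theta, f(x) = 0 otherwise
    return np.where(x > theta, x, 0.0)

def leaky_relu(x, alpha=0.1):
    # fixed small slope alpha in the negative part of the input space
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # smoothly saturates to -alpha, pushing mean activations closer to zero
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

x = np.array([-2.0, -0.5, 0.5, 2.0])
print(thresholded_relu(x))  # [0.    0.    0.    2.  ]
print(leaky_relu(x))        # [-0.2  -0.05  0.5   2.  ]
print(elu(x))               # [-0.865 -0.393 0.5  2.  ]
```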