Gated recurrent unit (GRU) as introduced by Cho et al. There are two variants. The default one is based on 1406.1078v3 and applies the reset gate to the hidden state before the matrix multiplication. The other one is based on the original 1406.1078v1 and has the order reversed. Corresponds to the GRU Keras layer.
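The difference between the two variants can be sketched as a single GRU step in plain NumPy. This is a minimal illustration, not the node's actual implementation; the `reset_after` flag name mirrors the Keras parameter, biases are omitted for brevity, and the weight shapes are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh, reset_after=False):
    """One GRU time step for a batch of inputs x and hidden states h.

    reset_after=False: reset gate multiplies the hidden state *before*
    the recurrent matrix multiplication (the 1406.1078v3 variant).
    reset_after=True: reset gate multiplies *after* the matrix
    multiplication (the v1 ordering). Biases omitted for brevity.
    """
    z = sigmoid(x @ Wz + h @ Uz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur)              # reset gate
    if reset_after:
        h_cand = np.tanh(x @ Wh + r * (h @ Uh))
    else:
        h_cand = np.tanh(x @ Wh + (r * h) @ Uh)
    # interpolate between the previous and the candidate hidden state
    return z * h + (1.0 - z) * h_cand

rng = np.random.default_rng(0)
units, features = 16, 8
params = [rng.standard_normal(s) * 0.1
          for s in [(features, units), (units, units)] * 3]
x = rng.standard_normal((4, features))        # batch of 4 inputs
h = np.zeros((4, units))                      # initial hidden state
h_next = gru_step(x, h, *params)
print(h_next.shape)  # (4, 16)
```

The returned hidden state has shape `[batch, units]`, which matches the `[units]` shape required per example by the optional initial-state port below.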
- Type: Keras Deep Learning Network. Keras Network: the Keras deep learning network to which to add a GRU layer.
- Type: PortObject. Keras Network: an optional Keras deep learning network providing the initial state for this GRU layer. The hidden state must have shape [units], where units must correspond to the number of units this layer uses.