
Fig. 1

From: Enhancing recurrent neural network-based language models by word tokenization

Basic recurrent neural network language model. The model consists of three layers: an input layer, a hidden layer, and an output layer. The input word is presented to the input layer using a 1-of-n encoding. The feedback connection between the hidden and input layers allows the hidden neurons to remember the history of previously processed words. The hidden-layer output is computed using the tanh function [2], and the final network output is computed using the softmax activation function [3]
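The architecture described in the caption corresponds to a simple Elman-style recurrent network. As a minimal sketch of one forward step (assuming NumPy; the weight names U, W, V, the toy dimensions, and the random initialization are illustrative assumptions, not taken from the article):

```python
import numpy as np

# Illustrative sketch of the basic RNN language model in Fig. 1.
# Dimensions and weight names are assumptions for demonstration only.
vocab_size, hidden_size = 10, 16
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(hidden_size, vocab_size))   # input -> hidden
W = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden (feedback)
V = rng.normal(scale=0.1, size=(vocab_size, hidden_size))   # hidden -> output

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def step(word_index, h_prev):
    """One forward step: 1-of-n input, tanh hidden layer, softmax output."""
    x = np.zeros(vocab_size)
    x[word_index] = 1.0                 # 1-of-n (one-hot) encoding of the input word
    h = np.tanh(U @ x + W @ h_prev)     # hidden state carries the word history
    y = softmax(V @ h)                  # probability distribution over the next word
    return y, h

h = np.zeros(hidden_size)               # empty history at the start of a sentence
for w in [1, 4, 2]:                     # toy sequence of word indices
    y, h = step(w, h)
print(y.sum())                          # probabilities sum to 1
```

Because the previous hidden state h_prev is fed back alongside each new input word, the hidden neurons accumulate context across the sequence rather than seeing each word in isolation.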
