Fig. 5 | Human-centric Computing and Information Sciences

From: Enhancing recurrent neural network-based language models by word tokenization

Input layer size of the proposed architecture versus the basic RNN architecture as the training corpus grows. The figure compares the number of neurons in the input layer of the two architectures: at a corpus of nearly 9.6 M words, the proposed architecture reduces the input layer from 360K to only 98K neurons, a substantial saving in memory usage and processing cost.
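
To make the caption's arithmetic concrete: in a one-hot RNN language model, the input layer has one neuron per vocabulary entry, so shrinking the vocabulary shrinks the input layer directly. The following minimal Python sketch is illustrative only, not the paper's tokenizer; the suffix list, token markers, and toy corpus are assumptions chosen to show how splitting words into shared stems and affixes reduces the number of distinct input units.

    # Illustrative sketch (not the paper's actual tokenizer): compare the
    # vocabulary size, and hence the one-hot input layer size, of word-level
    # versus sub-word tokenization.

    from typing import List, Set

    # Hypothetical suffix inventory, for illustration only.
    SUFFIXES = ["ing", "ed", "s"]

    def word_vocab(corpus: List[str]) -> Set[str]:
        """Word-level vocabulary: one input neuron per distinct word."""
        return set(corpus)

    def subword_vocab(corpus: List[str]) -> Set[str]:
        """Sub-word vocabulary: split each word into stem + suffix when possible."""
        vocab: Set[str] = set()
        for word in corpus:
            for suffix in SUFFIXES:
                if word.endswith(suffix) and len(word) > len(suffix) + 2:
                    vocab.add(word[: -len(suffix)])  # shared stem
                    vocab.add("+" + suffix)          # marked suffix token
                    break
            else:
                vocab.add(word)  # no suffix matched; keep the whole word
        return vocab

    corpus = "walk walks walked walking talk talks talked talking".split()
    print(len(word_vocab(corpus)))     # 8 distinct words -> 8 input neurons
    print(len(subword_vocab(corpus)))  # 5 units (2 stems + 3 suffixes)

Because inflected forms share stems and suffixes, the sub-word vocabulary grows far more slowly with corpus size than the word-level vocabulary, which is the gap the figure traces out to 9.6 M words.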
