Sequence Models: RNN & LSTM (Easy)
A vanilla RNN is trained to predict the next word in: "The trophy didn't fit in the bag because it was too ___." The correct completion depends on what "it" refers to ("big" if it refers to the trophy, "small" if it refers to the bag), but the model consistently outputs a generic filler such as "large" without resolving that reference. What architectural limitation causes this failure?
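To make the limitation concrete, here is a minimal sketch (untrained, random weights, hypothetical embeddings) of a vanilla RNN cell processing the sentence. It shows the structural point behind the question: every token in the left context must be compressed through a single fixed-size hidden state, updated by the same squashing transformation at each step.

```python
import numpy as np

# Illustrative vanilla RNN cell -- weights are random, not trained.
rng = np.random.default_rng(0)
hidden_size, embed_size = 8, 4

W_xh = rng.normal(scale=0.1, size=(hidden_size, embed_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

def rnn_forward(embeddings):
    """Run the cell over a token sequence; return the final hidden state."""
    h = np.zeros(hidden_size)
    for x in embeddings:
        # Same update at every step: the whole left context is squeezed
        # into this one fixed-size vector h.
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
    return h

sentence = ["The", "trophy", "didn't", "fit", "in", "the", "bag",
            "because", "it", "was", "too"]
# Hypothetical embeddings: one random vector per token.
embeds = [rng.normal(size=embed_size) for _ in sentence]
h_final = rnn_forward(embeds)

# However long the sentence, the summary is only `hidden_size` numbers;
# information about early tokens ("trophy") must survive repeated tanh
# squashing and matrix multiplications to influence the next-word choice.
print(h_final.shape)  # (8,)
```

There is no mechanism here for the prediction step to look back at "trophy" or "bag" directly; whatever the model knows about them exists only insofar as it survived in `h_final`, which is why long-range dependencies and coreference-sensitive completions are hard for this architecture.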