3. The word2vec embeddings were then fed into a BLSTM model consisting of three normal (unidirectional) layers and one bi-directional layer.

Results: The BLSTM model was trained successfully for this problem statement. We focused on recall rather than precision, because we wanted to capture as many names as possible rather than tag only the ones we were certain about.
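Below is a minimal sketch of such an architecture, assuming a Keras/TensorFlow stack, a 300-dimensional word2vec embedding matrix, and interpreting the "normal layers" as unidirectional LSTM layers; the vocabulary size, sequence length, layer widths, and tag set are illustrative placeholders, not values from the original work.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models, initializers

VOCAB_SIZE = 20000   # assumed vocabulary size
EMBED_DIM = 300      # typical word2vec dimensionality
MAX_LEN = 50         # assumed maximum sentence length
NUM_TAGS = 2         # name vs. non-name token (assumed tag scheme)

# In practice this matrix would be filled with the trained word2vec vectors;
# random values are used here only to keep the sketch self-contained.
embedding_matrix = np.random.rand(VOCAB_SIZE, EMBED_DIM).astype("float32")

model = models.Sequential([
    layers.InputLayer(input_shape=(MAX_LEN,)),
    # Embedding layer initialized with the word2vec vectors and kept frozen
    layers.Embedding(
        VOCAB_SIZE, EMBED_DIM,
        embeddings_initializer=initializers.Constant(embedding_matrix),
        trainable=False, mask_zero=True),
    # One bi-directional LSTM layer
    layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
    # Three "normal" (unidirectional) LSTM layers
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(64, return_sequences=True),
    # Per-token tag prediction
    layers.TimeDistributed(layers.Dense(NUM_TAGS, activation="softmax")),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

During evaluation, recall on the name class would be the metric to track (rather than overall accuracy or precision), since the stated goal is to miss as few names as possible even at the cost of some false positives.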