The document analyzes various architectures for neural machine translation (NMT), with a particular focus on training under low computational resources. It compares transformer models with LSTM and convolutional networks on an English-Marathi dataset, finding that transformers deliver better accuracy but at a higher training cost, while LSTMs perform well with significantly lower training times. The findings suggest that the most suitable architecture for a machine translation task depends on the computational resources available.
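
The comparison below is a minimal sketch, not the document's actual experimental setup: it assumes PyTorch and uses illustrative vocabulary sizes, model dimensions, and batch shapes chosen only for demonstration. It builds a small Transformer and a small LSTM encoder-decoder of the kind compared above and reports each model's parameter count and the wall-clock time of a single training step, which is the sort of accuracy-versus-training-cost trade-off the document measures.

```python
# Sketch: compare parameter counts and per-step training time of a small
# Transformer vs. an LSTM seq2seq model. All sizes below are assumptions
# for illustration, not values taken from the document.
import time
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB = 8000, 8000          # assumed subword vocabulary sizes
D_MODEL, BATCH, SRC_LEN, TGT_LEN = 256, 32, 30, 30

class TransformerNMT(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(SRC_VOCAB, D_MODEL)
        self.tgt_emb = nn.Embedding(TGT_VOCAB, D_MODEL)
        self.transformer = nn.Transformer(
            d_model=D_MODEL, nhead=4,
            num_encoder_layers=2, num_decoder_layers=2,
            dim_feedforward=512, batch_first=True)
        self.out = nn.Linear(D_MODEL, TGT_VOCAB)

    def forward(self, src, tgt):
        # Causal mask so the decoder cannot attend to future target tokens.
        tgt_mask = nn.Transformer.generate_square_subsequent_mask(tgt.size(1))
        hidden = self.transformer(self.src_emb(src), self.tgt_emb(tgt),
                                  tgt_mask=tgt_mask)
        return self.out(hidden)

class LSTMNMT(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(SRC_VOCAB, D_MODEL)
        self.tgt_emb = nn.Embedding(TGT_VOCAB, D_MODEL)
        self.encoder = nn.LSTM(D_MODEL, D_MODEL, num_layers=2, batch_first=True)
        self.decoder = nn.LSTM(D_MODEL, D_MODEL, num_layers=2, batch_first=True)
        self.out = nn.Linear(D_MODEL, TGT_VOCAB)

    def forward(self, src, tgt):
        _, state = self.encoder(self.src_emb(src))            # encode source
        dec_out, _ = self.decoder(self.tgt_emb(tgt), state)   # decode from encoder state
        return self.out(dec_out)

def one_training_step(model):
    """Time one forward/backward/update pass on random token ids."""
    src = torch.randint(0, SRC_VOCAB, (BATCH, SRC_LEN))
    tgt = torch.randint(0, TGT_VOCAB, (BATCH, TGT_LEN))
    loss_fn = nn.CrossEntropyLoss()
    opt = torch.optim.Adam(model.parameters())
    start = time.time()
    logits = model(src, tgt)
    loss = loss_fn(logits.reshape(-1, TGT_VOCAB), tgt.reshape(-1))
    loss.backward()
    opt.step()
    return time.time() - start

for name, model in [("transformer", TransformerNMT()), ("lstm", LSTMNMT())]:
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.1f}M params, "
          f"one step in {one_training_step(model):.3f}s")
```

On a real dataset the accuracy comparison (e.g., BLEU) would also matter; this sketch only surfaces the size and training-time side of the trade-off.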