This study investigates neural machine translation architectures under low computational resource budgets, comparing transformer and LSTM models on Marathi-to-English translation. The findings show that while transformers achieve higher BLEU scores, they require significantly more training time than LSTMs, which are the more efficient choice under tight time constraints. The study therefore emphasizes selecting an architecture according to the resources available; future work aims to explore further improvements and additional language pairs.
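As a minimal illustration of the compute trade-off discussed above, the following Python sketch (assuming PyTorch) compares parameter counts and forward-pass latency for a small encoder-decoder transformer and a stacked LSTM of comparable width. The model sizes, sequence lengths, and batch size here are hypothetical stand-ins chosen for illustration, not the paper's actual configuration.

```python
import time
import torch
import torch.nn as nn

# Hypothetical sizes for illustration only; the paper's actual
# hyperparameters are not specified in this summary.
D_MODEL, SEQ_LEN, BATCH = 256, 32, 16

def count_params(model: nn.Module) -> int:
    """Total number of trainable parameters."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

def time_forward(model: nn.Module, *inputs, n: int = 10) -> float:
    """Average wall-clock seconds for one forward pass."""
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(n):
            model(*inputs)
    return (time.perf_counter() - start) / n

# Small stand-ins for the two architecture families.
transformer = nn.Transformer(
    d_model=D_MODEL, nhead=4,
    num_encoder_layers=2, num_decoder_layers=2,
    dim_feedforward=512, batch_first=True,
)
lstm = nn.LSTM(input_size=D_MODEL, hidden_size=D_MODEL,
               num_layers=2, batch_first=True)

# Random tensors in place of embedded source/target sentences.
src = torch.randn(BATCH, SEQ_LEN, D_MODEL)
tgt = torch.randn(BATCH, SEQ_LEN, D_MODEL)

print(f"transformer: {count_params(transformer):,} params, "
      f"{time_forward(transformer, src, tgt):.4f}s/forward")
print(f"lstm:        {count_params(lstm):,} params, "
      f"{time_forward(lstm, src):.4f}s/forward")
```

A comparison along these lines makes the abstract's point concrete: the transformer's quality advantage (higher BLEU) comes at a measurable cost in parameters and per-step compute, which dominates when the training-time budget is fixed.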