Large language models (LLMs) have shown remarkable capabilities on natural language processing tasks, but fine-tuning them for specific domains remains crucial to unlocking their full potential. This project adapts Google's Gemma model to the analysis of Indian history, using a dedicated Indian history dataset. We apply BitsAndBytes quantization and LoRA adapters (configured through PEFT's LoraConfig) to fine-tune the model efficiently for causal language modeling in this domain.
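A minimal sketch of this kind of setup is shown below, assuming the Hugging Face transformers and peft libraries. The checkpoint name, dataset, and all hyperparameter values (rank, alpha, target modules, quantization settings) are illustrative assumptions, not the project's actual configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit BitsAndBytes quantization (assumed QLoRA-style NF4 settings)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model_id = "google/gemma-2b"  # assumed Gemma checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapter configuration for causal language modeling;
# only the small adapter matrices are trained, the base weights stay frozen
lora_config = LoraConfig(
    r=8,                      # adapter rank (assumed)
    lora_alpha=16,            # scaling factor (assumed)
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small trainable fraction
```

The wrapped model can then be passed to a standard Trainer loop over the Indian history dataset; combining 4-bit quantization with LoRA keeps the memory footprint small enough to fine-tune on a single consumer GPU.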