Built a RAG-powered question-answering system that combines vector embeddings with LLM generation. Documents are automatically chunked, embedded, and stored in a Qdrant vector database. When a user asks a question, the system runs a semantic similarity search to retrieve the most relevant document chunks, then passes them to Google Gemini to generate a contextually grounded answer. The solution also includes a complete document management layer with upload/delete capabilities, automatic re-indexing, and clickable source references that jump back to the original documents.
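The core retrieval flow (chunk, embed, rank by similarity, then prompt the LLM) can be sketched in plain Python. This is a minimal illustration, not the actual implementation: the real system delegates vector storage and search to Qdrant and embedding/generation to Gemini, so the chunk sizes, overlap, and the pre-computed vectors below are purely hypothetical stand-ins.

```python
import math

def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping character windows (overlap preserves
    context that would otherwise be cut at chunk boundaries)."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity — the same metric Qdrant typically uses for ranking."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_vec: list[float],
             index: list[tuple[str, list[float]]],
             top_k: int = 3) -> list[str]:
    """Rank stored (chunk, vector) pairs by similarity to the query vector
    and return the top_k chunk texts — the context handed to the LLM."""
    ranked = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

# Toy example with hand-made 2-D "embeddings" (real ones are high-dimensional):
index = [("chunk about billing", [1.0, 0.0]),
         ("chunk about login",   [0.0, 1.0]),
         ("chunk about refunds", [0.7, 0.7])]
context = retrieve([1.0, 0.1], index, top_k=2)
# The retrieved chunks would then be interpolated into the Gemini prompt,
# e.g. f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

In the deployed system these steps map onto `qdrant-client` upserts/searches and Gemini embedding/generation calls; the sketch only shows the ranking logic that makes retrieval "semantic" rather than keyword-based.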