ChatTensorFlow is an intelligent assistant for TensorFlow-related queries, built on LangGraph-based agents, vector embeddings, and a conversational interface powered by multiple LLM providers.
Features
Multi-provider support: Works with Google Gemini, OpenAI, Anthropic, and Cohere (see the provider-selection sketch after this list)
Content processing: Splits and embeds TensorFlow documentation for semantic search
Query routing: Intelligently classifies user questions and determines next steps
Assisted research: Follows a research plan to answer complex TensorFlow queries
Context-aware responses: Provides answers with citations from official TensorFlow documentation
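As a rough illustration of how provider selection could work, here is a minimal sketch using LangChain's chat-model classes; the LLM_PROVIDER environment variable and the specific model IDs are assumptions for illustration, not names taken from this project.

import os
from typing import Optional

from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_cohere import ChatCohere

def get_llm(provider: Optional[str] = None):
    # Pick a chat model based on a (hypothetical) LLM_PROVIDER setting.
    provider = provider or os.getenv("LLM_PROVIDER", "openai")
    if provider == "google":
        return ChatGoogleGenerativeAI(model="gemini-1.5-flash")
    if provider == "anthropic":
        return ChatAnthropic(model="claude-3-5-sonnet-20240620")
    if provider == "cohere":
        return ChatCohere(model="command-r-plus")
    return ChatOpenAI(model="gpt-4o-mini")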
Architecture
ChatTensorFlow is built with a modular architecture:
Document Crawler: Efficiently crawls TensorFlow documentation using sitemap.xml
Data Ingestion: Processes web content into chunks for embedding
Vector Store: Stores embeddings for semantic search using Chroma
Retriever: Retrieves relevant content when answering questions
Researcher Graph: Follows a structured plan to research answers
Router Graph: Determines how to handle each user query
Response Generator: Creates coherent, accurate responses based on retrieved information
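To make the architecture concrete, the components could be wired together with LangGraph roughly as in the sketch below; the state fields, node names, and routing labels are illustrative assumptions rather than the project's actual graph definitions.

from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class AgentState(TypedDict):
    question: str
    route: str
    answer: str

def route_query(state: AgentState) -> dict:
    # Classify the question (e.g. with an LLM call) and record the decision.
    return {"route": "research"}  # placeholder decision

def research(state: AgentState) -> dict:
    # Retrieve documentation chunks and draft findings.
    return {"answer": "..."}

def respond(state: AgentState) -> dict:
    # Generate the final cited answer from the research results.
    return {"answer": state["answer"]}

builder = StateGraph(AgentState)
builder.add_node("router", route_query)
builder.add_node("researcher", research)
builder.add_node("responder", respond)
builder.add_edge(START, "router")
builder.add_conditional_edges("router", lambda s: s["route"],
                              {"research": "researcher", "respond": "responder"})
builder.add_edge("researcher", "responder")
builder.add_edge("responder", END)
graph = builder.compile()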
Graphs
Getting Started
Prerequisites
Python 3.9+
API keys for at least one supported LLM provider
Installation
Clone the repository:
git clone https://github.com/yourusername/chatTensorFlow.git
cd chatTensorFlow
Create a virtual environment:
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
Install dependencies:
pip install -r requirements.txt
Set up your environment variables:
cp .env.example .env # Edit .env with your API keys
Data Collection
To crawl and process the TensorFlow documentation:
Run the web crawler:
python -m processor.crawler
Process and ingest the data:
python -m processor.data_ingestion
This will create a Chroma vector database with the embedded documentation chunks.
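As a rough sketch of what the ingestion step involves, splitting and embedding with LangChain's Chroma integration might look like this; the chunk sizes, embedding model, and persist directory are assumptions for illustration.

from langchain_core.documents import Document
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_openai import OpenAIEmbeddings
from langchain_chroma import Chroma

# Stand-in for the pages produced by the crawler step.
docs = [Document(page_content="...", metadata={"source": "https://www.tensorflow.org/guide"})]

# Split pages into overlapping chunks suitable for embedding.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
chunks = splitter.split_documents(docs)

# Embed the chunks and persist them to a local Chroma database.
vector_store = Chroma.from_documents(
    chunks,
    embedding=OpenAIEmbeddings(model="text-embedding-3-small"),
    persist_directory="chroma_db",  # hypothetical path
)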
Usage
Run the application:
python app/main.py
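Under the hood, answering a question starts with a similarity search against the persisted vector store, roughly along these lines (the directory and embedding model mirror the assumptions in the ingestion sketch above):

from langchain_chroma import Chroma
from langchain_openai import OpenAIEmbeddings

vector_store = Chroma(
    persist_directory="chroma_db",  # hypothetical path, as above
    embedding_function=OpenAIEmbeddings(model="text-embedding-3-small"),
)

# Fetch the most relevant documentation chunks for a question.
results = vector_store.similarity_search("How do I compile and train a Keras model?", k=4)
for doc in results:
    print(doc.metadata.get("source"), doc.page_content[:100])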
The assistant can answer questions about:
TensorFlow API usage
Deep learning concepts
Model building and training
TensorFlow data pipelines
Common errors and troubleshooting
Example queries:
"How do I compile and train a Keras model?"
"How do I use tf.data.Dataset to load and preprocess large datasets?"
"What are the parameters for a Conv2D layer?"
"How do I create and use a custom loss function?"
Configuration
The application can be configured by setting the following environment variables: