## Setup

Create a `.env` file in the project root (a sample built from the defaults in the Configuration section appears after the Extending section below).

## Commands

Available commands (an example session appears just before the Contributing section):

- `/research [topic] [--depth=<1-5>]` - Research a topic with specified depth
- `/memory [query]` - Search through memory for specific information
- `/status` - Show current system status and components
- `/clear [--confirm]` - Clear chat history
- `/settings [setting] [value]` - View or modify chat settings
- `/browse <url> [--headless]` - Open browser to research specific URL
- `/save [type] [filename]` - Save research or conversation
- `/preferences [action] [key] [value]` - View or update user preferences
- `/help [command]` - Display available commands
- `/exit` - Exit the application

## Autonomous Browsing

The `AutonomousBrowser` class provides enhanced browsing capabilities with self-healing and intelligent navigation (an illustrative sketch follows the Extending section). For Firecrawl-backed scraping, set the `FIRECRAWL_API_KEY` environment variable with your API key.

## Models

- `mxbai-embed-large` via Ollama for fixed-size embeddings
- `Azazel-AI/llama-3.2-1b-instruct-abliterated.q8_0` for quick summarization
- `krith/mistral-nemo-instruct-2407-abliterated:IQ3_M` for extended summaries

## Configuration

### Memory settings

| Variable | Description | Default | Type |
| --- | --- | --- | --- |
| `MEMORY_PROVIDER` | Backend storage provider | `redis` | string |
| `MEMORY_RETENTION_PERIOD` | Days to keep memories | `90` | integer |
| `MEMORY_IMPORTANCE_THRESHOLD` | Minimum importance score to store memory | `0.5` | float |
| `EMBEDDING_MODEL` | Model used for vectorizing text | `mxbai-embed-large` | string |
| `MEMORY_CHUNK_SIZE` | Maximum token size for memory chunks | `500` | integer |

### Research settings

| Variable | Description | Default | Type |
| --- | --- | --- | --- |
| `RESEARCH_PROVIDER` | Search API provider | `serpapi` | string |
| `RESEARCH_MAX_SOURCES` | Maximum sources per query | `5` | integer |
| `RESEARCH_MAX_DEPTH` | How deep to search (1-3) | `2` | integer |
| `RESEARCH_CACHE_HOURS` | Hours to cache research results | `24` | integer |
| `SOURCE_CITATION_STYLE` | Citation format for sources | `mla` | string |

### LLM settings

| Variable | Description | Default | Type |
| --- | --- | --- | --- |
| `LLM_PROVIDER` | LLM API provider | `ollama` | string |
| `LLM_MODEL` | Default language model | `mannix/dolphin-2.9-llama3-8b:latest` | string |
| `MAX_CONTEXT_TOKENS` | Maximum token context window | `8192` | integer |
| `TEMPERATURE` | Creativity of responses (0-2) | `0.7` | float |
| `SYSTEM_PROMPT` | Base system instructions | see `prompts.py` | string |

## Project Structure

- `memory/manager.py` - Central memory coordination
- `memory/models.py` - Data structures for memory
- `memory/storage/` - Storage backends (Redis, ChromaDB)
- `research/system.py` - Research orchestration
- `research/models.py` - Data structures for research
- `research/browser_manager.py` - Browser automation
- `research/browser_control.py` - Enhanced autonomous browser
- `research/content_processor.py` - Content extraction and analysis
- `llm/system.py` - LLM coordination
- `llm/client.py` - Ollama client
- `core/chat.py` - Main application logic
- `core/commands.py` - Command handling
- `core/system.py` - System management

## Troubleshooting

- **Browser problems**: pass `headless=True` for server environments.
- **Ollama not responding**: verify the server with `curl http://localhost:11434/api/version`.
- **Redis connection errors**: check the server with `redis-cli ping`.
- **Logs**: written to the `logs/` directory by default; set `LOG_LEVEL=DEBUG` for verbose logging.
- **Missing saved files**: check the configured `projects_dir` location.

## Extending Sheppard

- Add a storage backend under `memory/storage/` by implementing the `BaseStorage` interface (a hypothetical sketch follows this list)
- Add a search provider under `research/providers/` by implementing the `BaseResearchProvider` interface
- Add an LLM provider under `llm/providers/` by implementing the `BaseLLMProvider` interface
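The `BaseStorage` interface is named above but its methods are not shown in this README, so the following is only a hypothetical reading of the storage extension point: an abstract contract plus a toy in-process backend. The `store` and `search` signatures and the cosine ranking are assumptions, not Sheppard's actual API.

```python
from abc import ABC, abstractmethod


class BaseStorage(ABC):
    """Hypothetical storage contract; method names are assumptions."""

    @abstractmethod
    def store(self, key: str, text: str, embedding: list[float]) -> None:
        """Persist a memory chunk alongside its embedding vector."""

    @abstractmethod
    def search(self, embedding: list[float], top_k: int = 5) -> list[str]:
        """Return the texts of the top_k most similar stored chunks."""


class InMemoryStorage(BaseStorage):
    """Toy backend: brute-force cosine similarity over a dict."""

    def __init__(self) -> None:
        self._items: dict[str, tuple[str, list[float]]] = {}

    def store(self, key: str, text: str, embedding: list[float]) -> None:
        self._items[key] = (text, embedding)

    def search(self, embedding: list[float], top_k: int = 5) -> list[str]:
        def cosine(a: list[float], b: list[float]) -> float:
            dot = sum(x * y for x, y in zip(a, b))
            norm = sum(x * x for x in a) ** 0.5 * sum(y * y for y in b) ** 0.5
            return dot / norm if norm else 0.0

        # Rank every stored chunk by similarity to the query vector.
        ranked = sorted(
            self._items.values(),
            key=lambda item: cosine(item[1], embedding),
            reverse=True,
        )
        return [text for text, _ in ranked[:top_k]]
```

A real backend would keep the same shape: Redis or ChromaDB replaces the dict, and similarity search is delegated to the store's own vector index.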
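A starting `.env` can simply restate the defaults from the Configuration tables above. Every variable name and value below comes from those tables; `FIRECRAWL_API_KEY` is the one secret you must fill in yourself.

```env
# LLM settings
LLM_PROVIDER=ollama
LLM_MODEL=mannix/dolphin-2.9-llama3-8b:latest
MAX_CONTEXT_TOKENS=8192
TEMPERATURE=0.7

# Memory settings
MEMORY_PROVIDER=redis
MEMORY_RETENTION_PERIOD=90
MEMORY_IMPORTANCE_THRESHOLD=0.5
EMBEDDING_MODEL=mxbai-embed-large
MEMORY_CHUNK_SIZE=500

# Research settings
RESEARCH_PROVIDER=serpapi
RESEARCH_MAX_SOURCES=5
RESEARCH_MAX_DEPTH=2
RESEARCH_CACHE_HOURS=24
SOURCE_CITATION_STYLE=mla

# Autonomous browsing (required for Firecrawl-backed scraping)
FIRECRAWL_API_KEY=your-api-key-here
```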
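Memory chunks are vectorized with `mxbai-embed-large` served by Ollama. Here is a minimal sketch of that step, assuming only a stock Ollama install on its default port; it calls Ollama's public `/api/embeddings` endpoint directly rather than Sheppard's `llm/client.py`.

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/embeddings"  # Ollama's default port


def embed(text: str) -> list[float]:
    """Return a fixed-size embedding for `text` from mxbai-embed-large."""
    resp = requests.post(
        OLLAMA_URL,
        json={"model": "mxbai-embed-large", "prompt": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["embedding"]


vector = embed("Sheppard stores memories as embedded chunks.")
print(len(vector))  # mxbai-embed-large produces 1024-dimensional vectors
```

If this call fails, the `curl http://localhost:11434/api/version` check from the Troubleshooting section is the first thing to try.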
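The README does not say which browser library `AutonomousBrowser` wraps, so the sketch below only illustrates the self-healing idea (retry navigation with exponential backoff), written against Playwright as an assumed stand-in. The `headless=True` default matches the server-environment advice in Troubleshooting.

```python
import time

from playwright.sync_api import sync_playwright


def fetch_with_retries(url: str, attempts: int = 3, headless: bool = True) -> str:
    """Fetch page HTML, retrying with exponential backoff on failure."""
    with sync_playwright() as p:
        for attempt in range(1, attempts + 1):
            browser = p.chromium.launch(headless=headless)
            try:
                page = browser.new_page()
                page.goto(url, timeout=15_000)  # 15 s navigation timeout
                return page.content()
            except Exception:
                if attempt == attempts:
                    raise  # give up after the final attempt
                time.sleep(2 ** attempt)  # back off: 2 s, 4 s, ...
            finally:
                browser.close()  # fresh browser per attempt
```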
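Tying the commands together, a typical session might look like the following; the topic, query, and filename are invented for illustration, but the syntax follows the Commands list above.

```text
/research local-first AI agents --depth=3
/memory local-first
/save research local-first-agents.md
/status
/exit
```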
## Contributing

1. Create your feature branch (`git checkout -b feature/amazing-feature`)
2. Commit your changes (`git commit -m 'Add some amazing feature'`)
3. Push to the branch (`git push origin feature/amazing-feature`)

Sheppard is an AI agent for Ollama, handling memory, automation, and tool creation using Redis, PostgreSQL, and ChromaDB. In active development since October 22, 2024.