The FastAPI backend exposes a `/run_agent` endpoint that receives chat requests, selects the requested model, and invokes the agent. The agent answers queries with the selected model and can call the TavilySearch tool when a query needs live web results.

The Streamlit frontend lets users pick a model, define the agent's system prompt, and chat with the agent. Responses, including any web-search results the agent retrieved, are rendered in the UI.

Setup: