`create_structured_output_chain` is a function in the LangChain library which forces the LLM to generate a structured output. Internally, it creates a function chain (an LLMChain) and returns that LLMChain, which consists of a series of components that process the user input.
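A minimal sketch of how it is typically called; the `Person` schema, prompt wording, and model settings below are illustrative placeholders rather than the ones from the original code:

```python
from typing import Optional

from langchain.chains.openai_functions import create_structured_output_chain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import ChatPromptTemplate
from pydantic import BaseModel, Field  # pydantic v1, as expected by legacy LangChain


class Person(BaseModel):
    """Placeholder schema describing the structured output we want back."""

    name: str = Field(..., description="The person's name")
    role: Optional[str] = Field(None, description="The person's role, if mentioned")


llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "Extract structured information from the given text."),
        ("human", "{input}"),
    ]
)

# create_structured_output_chain returns an LLMChain that forces the model to
# answer via an OpenAI function call matching the Person schema.
chain = create_structured_output_chain(Person, llm, prompt)
print(chain.run("Ada Lovelace worked as a mathematician."))
```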
`Neo4jGraph` creates a new Neo4j graph wrapper instance.
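For example (the connection details are placeholders for your own Neo4j instance):

```python
from langchain.graphs import Neo4jGraph

# Placeholder credentials; point these at your own Neo4j database.
graph = Neo4jGraph(
    url="bolt://localhost:7687",
    username="neo4j",
    password="password",
)
```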
`create_structured_output_chain` returns an LLMChain. Run the chain on the given text, stored inside the variable `document.page_content`, to generate a response.
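As a sketch, assuming `chain` is the extraction chain built above and `document` is one chunk produced by a text splitter:

```python
# One extraction call for a single chunk; the chain returns the structured output.
data = chain.run(document.page_content)
```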
The extracted nodes and relationships are then stored through the `add_graph_documents` method, a convenient way of ingestion without the need to write Cypher code.
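A sketch of that ingestion step; the `Node` and `Relationship` values are made up here (in practice they are mapped from the structured output of the extraction chain), and the `graph_document` import path may differ between LangChain versions:

```python
from langchain.graphs.graph_document import GraphDocument, Node, Relationship

# Hypothetical extraction result, for illustration only.
alice = Node(id="Alice", type="Person")
acme = Node(id="Acme", type="Company")
works_at = Relationship(source=alice, target=acme, type="WORKS_AT")

graph_document = GraphDocument(
    nodes=[alice, acme],
    relationships=[works_at],
    source=document,  # the Document the nodes and relationships came from
)

# Writes the nodes and relationships to Neo4j without hand-written Cypher.
graph.add_graph_documents([graph_document])
```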
Summarizing the chunks with a map-reduce chain, on the other hand, can hit the OpenAI rate limit:

RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-3.5-turbo in organization XXX on requests per min (RPM): Limit 3, Used 3, Requested 1

This happens because `MapReduceDocumentsChain` issues its requests without being queued. It could also be because there are 5 of these chains and 3 of them finish within a minute.
By contrast, the LLMChain returned by `create_structured_output_chain` is called once per chunk (i.e. one API call per chunk). Since each chunk takes a few minutes to generate a response, and the function is only called again when the previous LLMChain has finished, the 3 RPM limit is not reached.
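In other words, the extraction runs as a plain sequential loop, sketched below with `documents` standing for the list of chunks:

```python
# Strictly sequential: the next request is only sent after the previous chain
# has returned, so no more than a few requests land in any one minute.
for document in documents:
    extracted = chain.run(document.page_content)
    # ... map `extracted` to nodes/relationships and ingest it as shown above ...
```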
The summarization chain that triggers the rate limit is built by `load_summarize_chain` with `chain_type="map_reduce"`, which creates the `MapReduceDocumentsChain` described above.
Finally, `"cl100k_base"` is the specific tiktoken encoding used by gpt-3.5-turbo and gpt-4. Tiktoken was created for use with OpenAI's models, so while there are many other possible tokenizers, only use tiktoken if the LLM is from OpenAI.
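For example, counting tokens with that encoding:

```python
import tiktoken

# cl100k_base is the encoding gpt-3.5-turbo and gpt-4 use, so these counts
# match the token counts the OpenAI API works with.
encoding = tiktoken.get_encoding("cl100k_base")
tokens = encoding.encode("Knowledge graphs turn unstructured text into nodes and relationships.")
print(len(tokens))
```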