Aidan Tilgner
What's the problem?
For my entire career I've been fascinated by the idea of a Jarvis-like digital assistant: one that would understand what I was saying, anticipate my needs, and act intelligently in response to stimuli. This fascination has led me to repeated attempts at building such a system, either from scratch or by leveraging existing technologies. In fact, if you go over to my GitHub, you can see the evidence of those attempts for yourself.
Throughout my exploration of this concept, certain criteria emerged that I came to see as necessary for any successful implementation. First, it had to be open-source and self-hostable, both to ensure privacy and to make it useful for certain specialized operations. Second, it had to be highly customizable, so that people could tailor the assistant to their specific needs. Lastly, it had to be highly capable, with artificial intelligence advanced enough to carry out its duties correctly and effectively.
Unfortunately, while certain projects do exist which meet some of these criteria (Leon AI and AutoGPT, for example), none that I've found satisfies all of them.
Ok cool, so how do we fix it?
Introducing the Assistant Framework, a declarative framework for building out complex, powerful digital assistants. Acting as a wrapper around a language model of choice, most notably gpt-4 or gpt-3.5-turbo, the Assistant Framework provides additional functionality and utilities to extend the model. Essentially, it does the heavy lifting of creating the base assistant, while allowing you to build your use case around it. Inspired by React and other declarative libraries, the framework allows you to focus on structure over implementation.
Relying on an architecture composed of multiple modules, users can extend the framework's base functionality using Services and Channels. In a nutshell, a Service is like a toolbox with predefined tools that an agent will have access to. A Channel, on the other hand, is a path for communication between the assistant and a user, allowing for simple conversational interfaces as well as more complex interactions. In the future, Events will also be available to allow the Assistant to respond to non-communicative stimuli.
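To make that split a little more concrete, here's a minimal sketch of what a Service and a Channel might look like. To be clear, the interfaces, names, and the weather example below are hypothetical, not the framework's actual API; they just illustrate the idea of declaring a toolbox and a communication path.

```typescript
// Hypothetical shapes for a Service and a Channel. These are illustrative
// only -- not the framework's actual API -- but they capture the split
// described above: Services hold tools, Channels carry conversations.

// A single tool the assistant is allowed to call.
interface Tool {
  name: string;
  description: string; // shown to the language model so it can pick a tool
  run: (args: Record<string, string>) => Promise<string>;
}

// A Service is a named toolbox of related tools.
interface Service {
  name: string;
  tools: Tool[];
}

// A Channel is a two-way communication path between a user and the assistant.
interface Channel {
  name: string;
  send: (message: string) => Promise<void>; // assistant -> user
  onMessage: (handler: (message: string) => Promise<void>) => void; // user -> assistant
}

// Example Service: a tiny "weather" toolbox (the lookup itself is stubbed).
const weatherService: Service = {
  name: "weather",
  tools: [
    {
      name: "current_conditions",
      description: "Look up the current weather for a city",
      run: async ({ city }) => `It is currently sunny in ${city}.`,
    },
  ],
};

// Example Channel: log replies to the console and feed in messages from code.
const consoleChannel: Channel = {
  name: "console",
  send: async (message) => console.log(`assistant> ${message}`),
  onMessage: (handler) => {
    // A real channel would hook into a chat UI, a Discord bot, etc.
    void handler("What's the weather in Portland?");
  },
};
```

The declarative payoff is that a Service only states which tools exist and what they do; deciding when to use them is left to the assistant and the underlying model.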
How does it work and how can I use it?
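As a rough illustration of the moving parts, the sketch below stubs out the model call and runs the simplest possible conversational loop over the command line. It is not the framework's real quick-start (that lives in the repository); it only shows the kind of plumbing the framework is meant to handle for you.

```typescript
import * as readline from "node:readline";

// Stand-in for a chat completion against gpt-3.5-turbo or gpt-4.
// Stubbed here so the example runs without an OpenAI API key.
async function callModel(prompt: string): Promise<string> {
  return `(stubbed reply to: "${prompt}")`;
}

// The simplest possible conversational interface: stdin in, stdout out.
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
  prompt: "you> ",
});

rl.prompt();
rl.on("line", async (line) => {
  // Read a message from the "channel", hand it to the model, send the reply back.
  const reply = await callModel(line.trim());
  console.log(`assistant> ${reply}`);
  rl.prompt();
});
```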
More detailed usage and system information is available in the Repository Readme.