Assistant Framework - Build Extensible AI Assistants with Ease

Aidan Tilgner


Watch on YouTube

What's the problem?

My entire career I've been fascinated by the idea of a Jarvis-like digital assistant, one that would understand what I was saying, anticipate my needs, and act intelligently in response to stimuli. This fascination has led me to repeatedly attempt to build such a system, either from scratch or by leveraging existing technologies. In fact, if you go over to my GitHub and search for "Onyx", you'll find a graveyard of attempted implementations of my dream digital assistant, named "Onyx".

Throughout my exploration of this concept, certain values and criteria emerged which I came to regard as necessary for any successful implementation. First, it had to be open-source and self-hostable, both to ensure privacy and to make it useful for certain specialized operations. Second, it had to be highly customizable, so that people could tailor their assistant to their specific needs. Lastly, it had to be highly capable, with advanced artificial intelligence able to perform its duties correctly and powerfully.

Unfortunately, while certain projects do exist which meet some of these criteria (Leon AI, or AutoGPT), they suffer from limitations such as rigid architecture or a purely agentic design. Additionally, while some of these projects allow for some extensibility, they are largely built to be used directly as applications, rather than as frameworks to be built upon. This is great for most consumers; however, when more flexibility is desired, the tradeoff becomes noticeable.

Ok cool, so how do we fix it?

Introducing the Assistant Framework, a declarative framework for building complex, powerful digital assistants. Acting as a wrapper around a language model of choice, most notably gpt-4 or gpt-3.5-turbo, the Assistant Framework provides additional functionality and utilities to extend the model. Essentially, it does the heavy lifting of creating the base assistant, while allowing you to build your use case around it. Inspired by React and other declarative libraries, the framework lets you focus on structure over implementation.
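As a rough sketch of what this declarative style might look like in TypeScript (the package name, the `Assistant` class, and its options below are illustrative assumptions for the example, not necessarily the framework's actual API):

```typescript
// Illustrative sketch only: "assistant-framework", the Assistant class, and
// its option names are assumptions for this example, not confirmed API.
import { Assistant } from "assistant-framework";

// Declare what the assistant is; the framework handles talking to the model.
const onyx = new Assistant({
  name: "Onyx",
  model: "gpt-4", // could also be "gpt-3.5-turbo"
  description: "A personal digital assistant",
});

async function main() {
  // Send a message and log the model's reply.
  const reply = await onyx.respond("What can you help me with?");
  console.log(reply);
}

main().catch(console.error);
```

The point of the declarative approach is that you describe the assistant you want, and the framework takes care of prompting, model calls, and routing behind the scenes.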

Relying on an architecture composed of multiple modules, users can extend the base functionality of the framework using Services and Channels. In a nutshell, a Service is like a toolbox of predefined tools that an agent has access to. A Channel, on the other hand, is a path of communication between the assistant and a user, allowing for simple conversational interfaces as well as more complex interactions. In the future, Events will also be available, allowing the Assistant to respond to non-communicative stimuli.
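To make Services and Channels more concrete, here is a hypothetical sketch of registering both on an assistant (again, `defineService`, `defineChannel`, and the option shapes are illustrative assumptions rather than the framework's documented API):

```typescript
// Illustrative sketch only: these helpers and option shapes are assumptions.
import { Assistant, defineService, defineChannel } from "assistant-framework";

// A Service: a toolbox of predefined tools the assistant can call.
const calendarService = defineService({
  name: "calendar",
  tools: {
    // Each tool is an ordinary function the assistant may be routed to.
    listEvents: async (date: string): Promise<string[]> => {
      return [`(stub) no events found for ${date}`];
    },
  },
});

// A Channel: a path of communication between a user and the assistant.
const cliChannel = defineChannel({
  name: "cli",
  // How the assistant's responses reach the user on this channel.
  send: async (message: string): Promise<void> => {
    console.log(`Onyx: ${message}`);
  },
});

// Wire the extensions into the assistant declaratively.
const onyx = new Assistant({
  name: "Onyx",
  model: "gpt-4",
  services: [calendarService],
  channels: [cliChannel],
});
```

The split keeps concerns separate: Services describe what the assistant can do, while Channels describe where and how it talks to people.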

How does it work and how can I use it?

More detailed usage and system information is available in the Repository Readme, or in the YouTube video at the top of this page.
