Merge pull request kreneskyp#35 from kreneskyp/generic_agents
v0.1 merge

Major features
- langchain chains
- artifact system
- planning system
- multi-agent chat
- real time chat stream
- graphical chain viewer
- refined ux for messages
- refined ux light/dark mode
kreneskyp committed May 22, 2023
2 parents 4318f2d + 143b678 commit 2d206df
Showing 183 changed files with 9,303 additions and 2,091 deletions.
5 changes: 5 additions & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -9,6 +9,8 @@ node_modules
.pyc
.coverage
__generated__
frontend/schema.graphql
.ipython

__pycache__

@@ -18,3 +20,6 @@ public
react-app-unused
tmp
workspace
.bak
*.bak
workdir
90 changes: 90 additions & 0 deletions CHANGELOG.md
@@ -0,0 +1,90 @@


## 0.1 - Alpha release of core functionality.

This is the first tagged release and it focuses on establishing core functionality. This project is
still in the early stages of development and is not ready for general use, but it is in a state that allows
developers to begin hacking on chains and agents.

### Langchain
The `AgentProcess` has been converted to use langchain chains, LLMs, and memory. This introduces support for
multi-step LLM processes.

The default agent is now a zero-shot agent that runs a root chain. An `LLMToolChooser` may be used to replicate
the decision making of an agent. Future releases will introduce additional agent types.

#### JSON Config
A custom JSON config loading mechanism was introduced for loading langchain chains. This allows loading
custom chain classes provided by Ix and integrating Ix-specific functionality.

It is a goal to provide better interoperability between vanilla langchain config objects and Ix. This will
be explored in a future release.
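As a rough illustration, loading a chain from a JSON config might look like the sketch below. The `class_path`/`config` field names are assumptions for this example, not the actual Ix schema.

```python
import json

# Hypothetical JSON chain config; the real schema is defined by the Ix
# config loader and may differ from these field names.
raw = """
{
    "class_path": "ix.chains.llm_chain.LLMChain",
    "config": {
        "llm": {"class_path": "langchain.chat_models.ChatOpenAI"},
        "prompt": "Summarize the user's request: {user_input}"
    }
}
"""
chain_config = json.loads(raw)
```

A loader would then resolve `class_path` to a Python class and pass `config` to its constructor.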

#### Data model

New models `ChainNode` and `ChainEdge` were introduced to store langchain chains. These include helper methods
to convert between `ChainNode`/`ChainEdge` records, a JSON config, and a `Chain` instance.
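A pure-Python sketch of that idea: nodes hold a class path plus config, edges link them in order, and the graph can be flattened back into a JSON-style config. The real models are Django ORM classes; every name below is illustrative.

```python
# Illustrative node/edge graph; the real ChainNode/ChainEdge are Django models.
nodes = {
    "n1": {"class_path": "ix.chains.LLMChain", "config": {"prompt": "plan"}},
    "n2": {"class_path": "ix.chains.ParseJSON", "config": {}},
}
edges = [("n1", "n2")]

def to_config(nodes, edges):
    # Order nodes by walking the edge list from the first source node.
    order = [edges[0][0]] + [target for _, target in edges]
    return {
        "class_path": "ix.chains.IxSequence",
        "config": {"chains": [nodes[name] for name in order]},
    }
```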

#### Custom Chain types
Multiple custom chains were added to support the Ix style of implementing chains and to add flow control.

New basic chains:
- LLMToolChain: chain that has `tools` available in the prompt.
- LLMChain: wrapper around `langchain.LLMChain` that adds a config loader.
- LLMReply: chain that replies to a message with an LLM prompt.
- ParseJSON: chain that parses a JSON string into a Python object.

New routing / flow control chains:
- IxSequence: wrapper for `Sequence` that provides a config loader.
- MapSubchain: chain that runs a subchain for each value in a list.
- ToolChooser: chain that chooses a tool with a subchain.
- LLMToolChooser: chain that chooses a tool with an LLM prompt.
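The MapSubchain behavior can be sketched in a few lines of plain Python: run a subchain once for each value in an input list and collect the results. The real implementation is a langchain chain; this version is illustrative only.

```python
# Minimal sketch of the MapSubchain idea; "subchain" here is any callable.
def map_subchain(subchain, values):
    return [subchain(value) for value in values]

shouted = map_subchain(str.upper, ["hello", "world"])
# shouted == ["HELLO", "WORLD"]
```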

New chains were added:
- ChatModerator: moderates a chat by delegating tasks to other agents.
- Planner v3: plans and executes a sequence of tasks.

Test / Example chains were added:
- MockChain: echoes inputs back for tests.
- Fake Weather: generates fake weather data.
- Dad Jokes: tells dad jokes.


### Multi-agent chat
Chats now include the ability to add multiple agents. The main agent is a moderator that delegates tasks
to the other agents. Agents may be targeted directly by starting a message with an @mention.
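The @mention targeting described above can be sketched as follows; the actual Ix routing logic may differ, and the function name is illustrative.

```python
import re

# If a message starts with "@name", route it to that agent; otherwise
# return None and let the moderator delegate.
def target_agent(message: str):
    match = re.match(r"@(\w+)", message)
    return match.group(1) if match else None
```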


### UX Updates

#### Chain viewer
This release introduces a chain viewer that allows the user to view a graphical representation of a chain. The
viewer will be converted to an editor in a future release.

#### Chat
- Chat replaces the task view as the default view
- Chat message updates are now over websockets for real-time updates.
- Chat messages are now grouped by execution to provide a clean and concise view of the chat.
- Chat messages now indicate the agent that sent the message.
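The "grouped by execution" behavior above amounts to collapsing consecutive messages that share an execution id. Field names in this sketch are illustrative; the real message records live in the Ix database.

```python
from itertools import groupby

# Sample message stream; consecutive messages with the same execution id
# collapse into one group in the chat view.
messages = [
    {"execution": "task-1", "agent": "moderator", "text": "delegating to @coder"},
    {"execution": "task-1", "agent": "coder", "text": "working on it"},
    {"execution": "task-2", "agent": "coder", "text": "done"},
]
grouped = {
    execution: [m["text"] for m in group]
    for execution, group in groupby(messages, key=lambda m: m["execution"])
}
```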


#### Misc
- various tweaks to improve light/dark style

### Artifacts

This version introduces the artifact system. Artifacts represent the results of tasks the agent has performed.
They provide a form of object permanence and a common understanding for the agent. Artifacts are stored in the database
and can be viewed by the user or used by the agent in future tasks.

Future releases will introduce the ability to reference artifacts in tasks and to create new artifacts
from existing ones.
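A hypothetical shape for an artifact record is sketched below. The real artifact model is a Django model stored in PostgreSQL, so these field names are assumptions for illustration only.

```python
import uuid
from dataclasses import dataclass, field

# Assumed artifact fields: a lookup key, a type, a human-readable
# description, and storage metadata pointing at the produced result.
@dataclass
class Artifact:
    key: str
    artifact_type: str
    description: str
    storage: dict = field(default_factory=dict)
    id: uuid.UUID = field(default_factory=uuid.uuid4)

report = Artifact(
    key="weather_report",
    artifact_type="file",
    description="Generated fake-weather summary",
    storage={"backend": "filesystem", "path": "workdir/weather.md"},
)
```

Because artifacts persist in the database, a later task could look up `report` by its key and reuse the stored result.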

### System

- The webserver is now Nginx and uvicorn, to support both websockets and HTTP requests.
- Django upgraded to 4.2.1.
- Switched to psycopg v3 for async query support.
- Django Channels is now available for inter-process communication.
- ReactFlow is now available to the frontend.
- Webpack now supports CSS, required by ReactFlow.
29 changes: 20 additions & 9 deletions Makefile
@@ -1,3 +1,4 @@

DOCKER_COMPOSE=docker-compose.yml
DOCKERFILE=Dockerfile
DOCKER_REGISTRY=ghcr.io
@@ -7,8 +8,8 @@ IMAGE_TAG=$(shell cat $(HASH_FILES) | md5sum | cut -d ' ' -f 1)
IMAGE_URL=$(DOCKER_REPOSITORY):$(IMAGE_TAG)
IMAGE_SENTINEL=.sentinel/image

DOCKER_COMPOSE_RUN=docker-compose run --rm sandbox
DOCKER_COMPOSE_RUN_WITH_PORT=docker-compose run -p 8000:8000 --rm sandbox
DOCKER_COMPOSE_RUN=docker-compose run --rm web
DOCKER_COMPOSE_RUN_WITH_PORT=docker-compose run -p 8000:8000 --rm web

# set to skip build, primarily used by github workflows to skip builds when image is cached
NO_IMAGE_BUILD?=0
@@ -59,13 +60,13 @@ frontend: compose npm_install graphene_to_graphql compile_relay webpack

# install npm packages
.PHONY: npm_install
npm_install: compose
docker-compose run --rm sandbox npm install
npm_install: compose package.json
docker-compose run --rm web npm install

# compile javascript
.PHONY: webpack
webpack: compose
docker-compose run --rm sandbox webpack --progress
${DOCKER_COMPOSE_RUN} webpack --progress

# compile javascript in watcher mode
.PHONY: webpack-watch
@@ -75,19 +76,26 @@ webpack-watch: compose
# compile graphene graphql classes into schema.graphql for javascript
.PHONY: graphene_to_graphql
graphene_to_graphql: compose
docker-compose run --rm sandbox ./manage.py graphql_schema --out ./frontend/schema.graphql
${DOCKER_COMPOSE_RUN} ./manage.py graphql_schema --out ./frontend/schema.graphql

# compile javascript
.PHONY: compile_relay
compile_relay: compose
docker-compose run --rm sandbox npm run relay
${DOCKER_COMPOSE_RUN} npm run relay


# =========================================================
# Run
# =========================================================

# run backend and frontend
# run backend and frontend. This starts uvicorn for asgi + websockets
# and nginx to serve static files
.PHONY: server
server: compose
docker-compose up web nginx


# run django debug server, backup in case nginx ever breaks
.PHONY: runserver
runserver: compose
${DOCKER_COMPOSE_RUN_WITH_PORT} ./manage.py runserver 0.0.0.0:8000
@@ -126,7 +134,10 @@ migrations: compose
# load initial data needed for dev environment
.PHONY: dev_fixtures
dev_fixtures: compose
${DOCKER_COMPOSE_RUN} ./manage.py loaddata fake_user.json default_agent.json
${DOCKER_COMPOSE_RUN} ./manage.py loaddata fake_user
# use management commands for now since fixtures didn't load correctly
${DOCKER_COMPOSE_RUN} ./manage.py create_moderator_v1
${DOCKER_COMPOSE_RUN} ./manage.py create_coder_v1


# =========================================================
42 changes: 21 additions & 21 deletions README.md
@@ -29,9 +29,12 @@ knowledge, casting a shadow of intrigue over the galaxy.

## About
<div>
iX is a platform to run semi-autonomous GPT-4 agents, providing a scalable and responsive solution for delegating tasks.
Agents can be spawned as individual processes to research and complete tasks through a web based interface while the
backend architecture efficiently manages message queues and inter-agent communication between a fleet of workers.
Ix is an experimental platform for designing and deploying semi-autonomous LLM agents. It provides a scalable and
responsive solution for delegating tasks to AI powered agents. The platform is designed to be extensible, allowing
developers to create custom agents and chains to perform a wide variety of tasks.

The backend is designed to support multiple agents running in parallel and communicating with each other. Each agent
may be customized and may utilize parallel processes to complete tasks.
<br>
<br>
The platform supports deployment using Docker containers, ensuring a consistent environment and enabling easy scaling
@@ -40,44 +43,41 @@ to multiple worker containers.

## How does it work

You provide a task and goals and an agent uses that direction to investigate, plan, and complete tasks. The agents are
### Basic Usage
You chat with an agent that uses that direction to investigate, plan, and complete tasks. The agents are
capable of searching the web, writing code, creating images, interacting with other APIs and services. If it can be
coded, it's within the realm of possibility for an agent to assist.
coded, it's within the realm of possibility that an agent can be built to assist you.

1. Start by create a task and goals for the agent to work towards.
![Dialog for entering task name and goals](docs/create_task.png)
1. Set up the server and visit `http://localhost:8000`; a new chat will be created automatically

2. The agent will begin researching and planning to complete the task.
![chat interface displaying log](docs/chat.png)
2. Enter a request and the Ix moderator will delegate the task to the agent best suited to respond, or @mention
an agent to request that a specific agent complete the task.

3. The chat system provides an interaction model for the agent and user to communicate. The agent will respond to
user input and provide updates on its progress. The Agent will request additional information and authorization
from the user as needed.
3. Customized agents may be added to or removed from the chat as needed to process your tasks.

![chat interface displaying log](docs/chat_interactions.png)

4. Agents are customizable and can be extended to support new functionality. The platform provides a plugin architecture
for adding new agents and customizing existing agents.
![chat interface displaying log](docs/model_options.png)
### Creating Custom Agents and Chains

View the [documentation](docs/chains/chains.rst) to create custom agents and chains.


## Key Features

- Scalable model for running a fleet of GPT agents.
- Responsive user interface. for interacting with agents.
- Responsive user interface for interacting with agents.
- Persistent storage of interactions, processes, and metrics.
- Message queue for agent jobs and inter-agent communication.
- Extensible model for customizing agents.
- Deployment using Docker.


## Technologies:
## Stack:
- Python 3.11
- Django 4.2
- PostgreSQL 14.4
- Graphql
- Graphql / Graphene / Relay
- React 18
- Langchain
- Integrated with OpenAI GPT models
- Plugin architecture to support extending agent functionality (e.g. web browsing, writing code, etc)
- Generic framework for vector database based agent memory
@@ -131,7 +131,7 @@ make dev_setup
Run the dev server

```bash
make runserver
make server
```

Start a worker
@@ -155,7 +155,7 @@ Run as many worker processes as you want with `make worker`.
Here are some helpful commands for developers to set up and manage the development environment:

### Running:
- `make runserver`: Start the application in development mode on `0.0.0.0:8000`.
- `make server`: Start the application in development mode on `0.0.0.0:8000`.
- `make worker`: Start an Agent worker.

### Building:
1 change: 1 addition & 0 deletions bin/celery.sh
@@ -2,5 +2,6 @@
set -o errexit

CELERY_CONCURRENCY=${CELERY_CONCURRENCY:=1}
DJANGO_SETTINGS_MODULE=ix.server.celery_settings

celery -A ix worker --loglevel=info --concurrency=${CELERY_CONCURRENCY}
16 changes: 14 additions & 2 deletions docker-compose.yml
@@ -9,8 +9,12 @@ services:
POSTGRES_DB: ix
POSTGRES_HOST_AUTH_METHOD: trust

sandbox:
web:
image: ghcr.io/ix/sandbox:latest
#image: ghcr.io/ix:latest
command: uvicorn ix.server.asgi:application --host 0.0.0.0 --port 8001 --reload
ports:
- "8001:8001"
links:
- db
- redis
@@ -19,12 +23,21 @@
#- .node_modules:/var/npm/node_modules
- ./bin:/usr/bin/ix
- .bash_profile:/root/.bash_profile
- .ipython:/root/.ipython
env_file:
- .env
environment:
DJANGO_SETTINGS_MODULE: "ix.server.settings"

nginx:
image: nginx:latest
ports:
- "8000:8000"
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
- ./.compiled-static:/var/static/
depends_on:
- web

worker:
image: ghcr.io/ix/sandbox:latest
@@ -41,7 +54,6 @@ services:
- .env



redis:
image: redis

