chore: Merge the upstream into main
lehcode committed Apr 15, 2024
2 parents d714eae + 2491a35 commit b0899c8
Showing 28 changed files with 8,173 additions and 357 deletions.
3 changes: 3 additions & 0 deletions .github/ISSUE_TEMPLATE/bug_report.md
@@ -16,11 +16,14 @@ assignees: ''
```bash
```

* Operating System:

<!-- tell us everything about your environment -->
**My config.toml and environment vars** (be sure to redact API keys):
```toml
```


**My model and agent** (you can see these settings in the UI):
* Model:
* Agent:
16 changes: 16 additions & 0 deletions .github/ISSUE_TEMPLATE/question.md
@@ -0,0 +1,16 @@
---
name: Question
about: Use this template to ask a question regarding the project.
title: ''
labels: question
assignees: ''

---

## Describe your question

<!--A clear and concise description of what you want to know.-->

## Additional context

<!--Add any other context about the question here, like what you've tried so far.-->
1 change: 1 addition & 0 deletions .github/workflows/dogfood.yml
@@ -13,6 +13,7 @@ jobs:
open-devin:
if: github.event.label.name == 'dogfood-this'
runs-on: ubuntu-latest
environment: OpenAI

steps:
- name: Checkout Repository
66 changes: 66 additions & 0 deletions Development.md
@@ -0,0 +1,66 @@
# Development Guide
This guide is for people working on OpenDevin and editing the source code.

## Start the server for development

### 1. Requirements
* Linux, macOS, or [WSL on Windows](https://learn.microsoft.com/en-us/windows/wsl/install)
* [Docker](https://docs.docker.com/engine/install/) (on macOS, make sure to allow the default Docker socket to be used from the advanced settings!)
* [Python](https://www.python.org/downloads/) >= 3.11
* [NodeJS](https://nodejs.org/en/download/package-manager) >= 18.17.1
* [Poetry](https://python-poetry.org/docs/#installing-with-the-official-installer) >= 1.8

Make sure you have all these dependencies installed before moving on to `make build`.
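A quick sanity check can confirm the toolchain is present before you run `make build`; a minimal sketch (it only checks that each command is on your `PATH`, not its version):

```shell
# Check that each prerequisite command is installed and on PATH.
for cmd in docker python3 node poetry; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING"
  fi
done
```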

### 2. Build and Setup The Environment

- **Build the Project:** Begin by building the project, which includes setting up the environment and installing dependencies. This step ensures that OpenDevin is ready to run smoothly on your system.
```bash
make build
```

### 3. Configuring the Language Model

OpenDevin supports a diverse array of Language Models (LMs) through the powerful [litellm](https://docs.litellm.ai) library. By default, we've chosen the mighty GPT-4 from OpenAI as our go-to model, but the world is your oyster! You can unleash the potential of Anthropic's suave Claude, the enigmatic Llama, or any other LM that piques your interest.

To configure the LM of your choice, follow these steps:

1. **Using the Makefile: The Effortless Approach**
With a single command, you can have a smooth LM setup for your OpenDevin experience. Simply run:
```bash
make setup-config
```
This command will prompt you to enter the LLM API key and model name, ensuring that OpenDevin is tailored to your specific needs.
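For reference, the resulting `config.toml` might look like the fragment below (the key names and values are illustrative; the exact fields depend on your OpenDevin version and the answers you give at the prompt):

```toml
LLM_MODEL="gpt-4-0125-preview"
LLM_API_KEY="sk-..."
WORKSPACE_DIR="./workspace"
```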

**Note on Alternative Models:**
Some alternative models may prove more challenging to tame than others. Fear not, brave adventurer! We shall soon unveil LLM-specific documentation to guide you on your quest. And if you've already mastered the art of wielding a model other than OpenAI's GPT, we encourage you to [share your setup instructions with us](https://github.com/OpenDevin/OpenDevin/issues/417).

For a full list of the LM providers and models available, please consult the [litellm documentation](https://docs.litellm.ai/docs/providers).

There is also [documentation for running with local models using Ollama](./docs/documentation/LOCAL_LLM_GUIDE.md).

### 4. Run the Application

- **Run the Application:** Once the setup is complete, launching OpenDevin is as simple as running a single command. This command starts both the backend and frontend servers seamlessly, allowing you to interact with OpenDevin without any hassle.
```bash
make run
```

### 5. Individual Server Startup

- **Start the Backend Server:** If you prefer, you can start the backend server independently to focus on backend-related tasks or configurations.
```bash
make start-backend
```

- **Start the Frontend Server:** Similarly, you can start the frontend server on its own to work on frontend-related components or interface enhancements.
```bash
make start-frontend
```

### 6. Help

- **Get Some Help:** Need assistance or information on available targets and commands? The help command provides all the necessary guidance to ensure a smooth experience with OpenDevin.
```bash
make help
```
107 changes: 41 additions & 66 deletions README.md
@@ -121,78 +121,53 @@ After completing the MVP, the team will focus on research in various areas, incl

Getting started with the OpenDevin project is incredibly easy. Follow these simple steps to set up and run OpenDevin on your system:

### 1. Requirements
* Linux, macOS, or [WSL on Windows](https://learn.microsoft.com/en-us/windows/wsl/install)
* [Docker](https://docs.docker.com/engine/install/) (on macOS, make sure to allow the default Docker socket to be used from the advanced settings!)
* [Python](https://www.python.org/downloads/) >= 3.11
* [NodeJS](https://nodejs.org/en/download/package-manager) >= 18.17.1
* [Poetry](https://python-poetry.org/docs/#installing-with-the-official-installer) >= 1.8

Make sure you have all these dependencies installed before moving on to `make build`.

### 2. Build and Setup The Environment

- **Build the Project:** Begin by building the project, which includes setting up the environment and installing dependencies. This step ensures that OpenDevin is ready to run smoothly on your system.
```bash
make build
```

### 3. Configuring the Language Model

OpenDevin supports a diverse array of Language Models (LMs) through the powerful [litellm](https://docs.litellm.ai) library. By default, we've chosen the mighty GPT-4 from OpenAI as our go-to model, but the world is your oyster! You can unleash the potential of Anthropic's suave Claude, the enigmatic Llama, or any other LM that piques your interest.

To configure the LM of your choice, follow these steps:

1. **Using the Makefile: The Effortless Approach**
With a single command, you can have a smooth LM setup for your OpenDevin experience. Simply run:
```bash
make setup-config
```
This command will prompt you to enter the LLM API key and model name, ensuring that OpenDevin is tailored to your specific needs.
The easiest way to run OpenDevin is inside a Docker container.
You can run:
```bash
# Your OpenAI API key, or any other LLM API key
export LLM_API_KEY="sk-..."

# The directory you want OpenDevin to modify. MUST be an absolute path!
export WORKSPACE_DIR=$(pwd)/workspace

docker build -t opendevin-app -f container/Dockerfile .

docker run \
-e LLM_API_KEY \
-e WORKSPACE_MOUNT_PATH=$WORKSPACE_DIR \
-v $WORKSPACE_DIR:/opt/workspace_base \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
opendevin-app
```
Replace `$(pwd)/workspace` with the path to the code you want OpenDevin to work with.
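Since the workspace path must be absolute, a small guard before `docker run` can catch mistakes early; a sketch (the default of `$(pwd)/workspace` mirrors the snippet above, and `$(pwd)` is always absolute):

```shell
# Default WORKSPACE_DIR and verify it is an absolute path before running the container.
WORKSPACE_DIR="${WORKSPACE_DIR:-$(pwd)/workspace}"

case "$WORKSPACE_DIR" in
  /*) echo "WORKSPACE_DIR is absolute: $WORKSPACE_DIR" ;;
  *)  echo "error: WORKSPACE_DIR must be an absolute path" >&2; exit 1 ;;
esac
```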

You can find OpenDevin running at `http://localhost:3000`.

See [Development.md](Development.md) for instructions on running OpenDevin without Docker.

## 🤖 LLM Backends
OpenDevin can work with any LLM backend.
For a full list of the LM providers and models available, please consult the
[litellm documentation](https://docs.litellm.ai/docs/providers).

The following environment variables might be necessary for some LLMs:
* `LLM_API_KEY`
* `LLM_BASE_URL`
* `LLM_EMBEDDING_MODEL`
* `LLM_DEPLOYMENT_NAME`
* `LLM_API_VERSION`
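For example, pointing OpenDevin at an OpenAI-compatible server running locally might look like this (the endpoint URL is illustrative, not a project default):

```shell
# Illustrative environment for an OpenAI-compatible endpoint served locally.
export LLM_API_KEY="sk-..."                  # your provider's API key
export LLM_BASE_URL="http://localhost:8000"  # assumed local endpoint; adjust to your server
```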

**Note on Alternative Models:**
Some alternative models may prove more challenging to tame than others.
Fear not, brave adventurer! We shall soon unveil LLM-specific documentation to guide you on your quest.
And if you've already mastered the art of wielding a model other than OpenAI's GPT,
we encourage you to [share your setup instructions with us](https://github.com/OpenDevin/OpenDevin/issues/417).

There is also [documentation for running with local models using Ollama](./docs/documentation/LOCAL_LLM_GUIDE.md).

We are working on a [guide for running OpenDevin with Azure](./docs/documentation/AZURE_LLM_GUIDE.md).

[Docker Compose Documentation](./docker/README.md)

### 4. Run the Application

- **Run the Application:** Once the setup is complete, launching OpenDevin is as simple as running a single command. This command starts both the backend and frontend servers seamlessly, allowing you to interact with OpenDevin without any hassle.
```bash
make run
```

### 5. Individual Server Startup

- **Start the Backend Server:** If you prefer, you can start the backend server independently to focus on backend-related tasks or configurations.
```bash
make start-backend
```

- **Start the Frontend Server:** Similarly, you can start the frontend server on its own to work on frontend-related components or interface enhancements.
```bash
make start-frontend
```

### 6. Help

- **Get Some Help:** Need assistance or information on available targets and commands? The help command provides all the necessary guidance to ensure a smooth experience with OpenDevin.
```bash
make help
```

<p align="right" style="font-size: 14px; color: #555; margin-top: 20px;">
<a href="#readme-top" style="text-decoration: none; color: #007bff; font-weight: bold;">
↑ Back to Top ↑
</a>
</p>

## ⭐️ Research Strategy

Achieving full replication of production-grade applications with LLMs is a complex endeavor. Our strategy involves:
@@ -226,7 +201,7 @@ For details, please check [this document](./CONTRIBUTING.md).

## 🤖 Join Our Community

We now have both a Slack workspace for collaboration on building OpenDevin and a Discord server for discussion about anything related, e.g., this project, LLMs, agents, etc.

* [Slack workspace](https://join.slack.com/t/opendevin/shared_invite/zt-2etftj1dd-X1fDL2PYIVpsmJZkqEYANw)
* [Discord server](https://discord.gg/mBuDGRzzES)
34 changes: 34 additions & 0 deletions container/Dockerfile
@@ -0,0 +1,34 @@
FROM node:21.7.2-bookworm-slim as frontend-builder

WORKDIR /app

COPY ./frontend/package.json frontend/package-lock.json ./
RUN npm install

COPY ./frontend ./
RUN npm run make-i18n && npm run build

FROM python:3.12-slim as runtime

WORKDIR /app
ENV PYTHONPATH=/app
ENV RUN_AS_DEVIN=false
ENV USE_HOST_NETWORK=false
ENV SSH_HOSTNAME=host.docker.internal
ENV WORKSPACE_BASE=/opt/workspace_base
RUN mkdir -p $WORKSPACE_BASE

RUN apt-get update -y \
&& apt-get install -y curl make git build-essential \
&& python3 -m pip install poetry --break-system-packages

COPY ./pyproject.toml ./poetry.lock ./
RUN poetry install --without evaluation

COPY ./opendevin ./opendevin
COPY ./agenthub ./agenthub
RUN poetry run python opendevin/download.py # No-op to download assets

COPY --from=frontend-builder /app/dist ./frontend/dist

CMD ["poetry", "run", "uvicorn", "opendevin.server.listen:app", "--host", "0.0.0.0", "--port", "3000"]
31 changes: 31 additions & 0 deletions container/Makefile
@@ -0,0 +1,31 @@
DOCKER_BUILD_REGISTRY=ghcr.io
DOCKER_BUILD_ORG=opendevin
DOCKER_BUILD_REPO=opendevin
DOCKER_BUILD_TAG=v0.2
FULL_IMAGE=$(DOCKER_BUILD_REGISTRY)/$(DOCKER_BUILD_ORG)/$(DOCKER_BUILD_REPO):$(DOCKER_BUILD_TAG)

LATEST_FULL_IMAGE=$(DOCKER_BUILD_REGISTRY)/$(DOCKER_BUILD_ORG)/$(DOCKER_BUILD_REPO):latest

MAJOR_VERSION=$(shell echo $(DOCKER_BUILD_TAG) | cut -d. -f1)
MAJOR_FULL_IMAGE=$(DOCKER_BUILD_REGISTRY)/$(DOCKER_BUILD_ORG)/$(DOCKER_BUILD_REPO):$(MAJOR_VERSION)
MINOR_VERSION=$(shell echo $(DOCKER_BUILD_TAG) | cut -d. -f1,2)
MINOR_FULL_IMAGE=$(DOCKER_BUILD_REGISTRY)/$(DOCKER_BUILD_ORG)/$(DOCKER_BUILD_REPO):$(MINOR_VERSION)

# Normal build, for local testing or development. Use the cross-platform build for sharing images with others.
build:
docker build -f Dockerfile -t ${FULL_IMAGE} -t ${LATEST_FULL_IMAGE} ..

push:
	docker push ${FULL_IMAGE}
	docker push ${LATEST_FULL_IMAGE}

test:
docker buildx build --platform linux/amd64 \
-t ${FULL_IMAGE} -t ${LATEST_FULL_IMAGE} --load -f Dockerfile ..

# Cross-platform build; you may need to manually stop the buildx (BuildKit) container afterwards.
all:
docker buildx build --platform linux/amd64,linux/arm64 \
-t ${FULL_IMAGE} -t ${LATEST_FULL_IMAGE} -t ${MINOR_FULL_IMAGE} --push -f Dockerfile ..

get-full-image:
@echo ${FULL_IMAGE}
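The `cut` pipelines in the Makefile above derive the major and minor image tags from `DOCKER_BUILD_TAG`; the same logic can be checked in a plain shell (values mirror the Makefile's `v0.2`):

```shell
# Derive coarser tags from the full tag, exactly as the Makefile does.
DOCKER_BUILD_TAG=v0.2
MAJOR_VERSION=$(echo "$DOCKER_BUILD_TAG" | cut -d. -f1)   # first dot-separated field
MINOR_VERSION=$(echo "$DOCKER_BUILD_TAG" | cut -d. -f1,2) # first two fields
echo "$MAJOR_VERSION $MINOR_VERSION"   # → v0 v0.2
```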
