
Merge develop #14

Merged: 29 commits, Apr 14, 2024

Commits
1ba9396
Add ollama, support, memGPT services
lehcode Apr 4, 2024
0ca8af9
feat: Docker services
lehcode Apr 4, 2024
309df69
hotfix: Restore useTranslation()
lehcode Apr 10, 2024
751442b
hotfix: Frontend integration
lehcode Apr 11, 2024
94619eb
hotfix: Backend app service dependencies fix under Conda
lehcode Apr 12, 2024
6fe6b59
feat: Add API startup script
lehcode Apr 12, 2024
ffbad57
feat: Add FastAPI server and Vite dev server logging for debug and li…
lehcode Apr 12, 2024
187ca9d
chore: Cleanup after local rebase
lehcode Apr 12, 2024
a3d6c03
feat: Improve docker compose services integration
lehcode Apr 12, 2024
03a530a
hotfix: Frontend and API integration. Build improvements.
lehcode Apr 14, 2024
e826a5c
feat/poetry-build (#8)
lehcode Apr 14, 2024
0676e94
fix: fix some of the styling to more closely match figma (#927)
Sparkier Apr 12, 2024
dea68c4
Add Italian, Spanish and Português (#1017)
PierrunoYT Apr 12, 2024
a52a495
Add Azure configuration doc (#1035)
enyst Apr 12, 2024
9165ba1
Formatting AZURE_LLM_GUIDE (#1046)
enyst Apr 12, 2024
c1754bf
Feat add agent manager (#904)
iFurySt Apr 12, 2024
1e73863
simplified get (#962)
SmartManoj Apr 12, 2024
8fdc728
Response recognition for weak llms (#523)
namtacs Apr 12, 2024
48efce0
Traffic Control: Add new config MAX_CHARS (#1015)
li-boxuan Apr 12, 2024
ea2abcf
fix: print the wrong ssh port number (#1054)
iFurySt Apr 13, 2024
30c4969
fix(editor): ui enhancements and code refactor (#1069)
akhilvc10 Apr 13, 2024
043ee5a
Add new sandbox type - local (#1029)
foragerr Apr 14, 2024
a53d4af
Auto-close stale issues and PRs (#1032)
rbren Apr 14, 2024
5a8553d
Throw error if an illegal sandbox type is used (#1087)
yimothysu Apr 14, 2024
fb30ad3
Unify linter behaviour across CI and pre-commit-hook (#1071)
li-boxuan Apr 14, 2024
a5051cb
Revamp Exception handling (#1080)
li-boxuan Apr 14, 2024
c5af998
doc: Add supplementary notes for WSL2 users to Local LLM Guide (#1031)
FZFR Apr 14, 2024
784f7ab
added to sudo group (#1091)
SmartManoj Apr 14, 2024
cddc385
chore: Merge .dockerignore
lehcode Apr 14, 2024
25 changes: 24 additions & 1 deletion .dockerignore
@@ -1,5 +1,28 @@
**/__pycache__
**/.venv
**/.classpath
**/.dockerignore
**/.gitignore
.github
.idea
.ollama
LICENSE
**/.project
**/.settings
**/.toolstarget
**/.vs
**/.vscode
**/*.*proj.user
**/*.dbmdl
**/*.jfm
**/charts
**/docker-compose*
**/docs
**/compose*
**/Dockerfile*
**/node_modules
**/npm-debug.log
**/obj
**/secrets.dev.yaml
**/values.dev.yaml
LICENSE
README.md
7 changes: 6 additions & 1 deletion .env
@@ -28,11 +28,16 @@ POSTGRES_HOST_PORT=15432
POSTGRES_CONTAINER_PORT=5432
#
# Directories inside a container
APP_ROOT=/opt/opendevin/app
APP_ROOT=/opt/opendevin
WORKSPACE_DIR=/opt/opendevin/workspace
CONDA_ROOT=/var/lib/miniconda
# Directories
APP_DIR=/opt/opendevin
UI_DIR=/var/www/od_ui
# Path to ollama models directory at the host machine
HOST_MODELS_DIR=/mnt/g/LLMs/ollama/models
WORKSPACE_DIR=/opt/opendevin/workspace
PYTHONPATH=/opt/opendevin
# Name of the container's Conda vitual environment
VENV_NAME=od_env
#
12 changes: 4 additions & 8 deletions .github/workflows/lint.yml
@@ -32,11 +32,7 @@ jobs:
uses: actions/setup-python@v5
with:
python-version: 3.11
- name: Create mypy cache directory
run: mkdir -p .mypy_cache
- name: Install dependencies
run: pip install ruff mypy==1.9.0 types-PyYAML types-toml
- name: Run mypy
run: python -m mypy --install-types --non-interactive --config-file dev_config/python/mypy.ini opendevin/ agenthub/
- name: Run ruff
run: ruff check --config dev_config/python/ruff.toml opendevin/ agenthub/
- name: Install pre-commit
run: pip install pre-commit
- name: Run pre-commit hooks
run: pre-commit run --files opendevin/**/* agenthub/**/* --show-diff-on-failure --config ./dev_config/python/.pre-commit-config.yaml
29 changes: 29 additions & 0 deletions .github/workflows/stale.yml
@@ -0,0 +1,29 @@
name: 'Close stale issues'
on:
schedule:
- cron: '30 1 * * *'

jobs:
stale:
runs-on: ubuntu-latest
steps:
- uses: actions/stale@v9
with:
# Aggressively close issues that have been explicitly labeled `age-out`
any-of-labels: age-out
stale-issue-message: 'This issue is stale because it has been open for 7 days with no activity. Remove stale label or comment or this will be closed in 1 day.'
close-issue-message: 'This issue was closed because it has been stalled for over 7 days with no activity.'
stale-pr-message: 'This PR is stale because it has been open for 7 days with no activity. Remove stale label or comment or this will be closed in 1 days.'
close-pr-message: 'This PR was closed because it has been stalled for over 7 days with no activity.'
days-before-stale: 7
days-before-close: 1

- uses: actions/stale@v9
with:
# Be more lenient with other issues
stale-issue-message: 'This issue is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.'
close-issue-message: 'This issue was closed because it has been stalled for over 30 days with no activity.'
stale-pr-message: 'This PR is stale because it has been open for 30 days with no activity. Remove stale label or comment or this will be closed in 7 days.'
close-pr-message: 'This PR was closed because it has been stalled for over 30 days with no activity.'
days-before-stale: 30
days-before-close: 7
4 changes: 3 additions & 1 deletion README.md
@@ -157,6 +157,8 @@ For a full list of the LM providers and models available, please consult the [li

There is also [documentation for running with local models using ollama](./docs/documentation/LOCAL_LLM_GUIDE.md).

We are working on a [guide for running OpenDevin with Azure](./docs/documentation/AZURE_LLM_GUIDE.md).

### 4. Run the Application

- **Run the Application:** Once the setup is complete, launching OpenDevin is as simple as running a single command. This command starts both the backend and frontend servers seamlessly, allowing you to interact with OpenDevin without any hassle.
@@ -222,7 +224,7 @@ For details, please check [this document](./CONTRIBUTING.md).

## 🤖 Join Our Community

Now we have both Slack workspace for the collaboration on building OpenDevin and Discord server for discussion about anything related, e.g., this project, LLM, agent, etc.
Now we have both Slack workspace for the collaboration on building OpenDevin and Discord server for discussion about anything related, e.g., this project, LLM, agent, etc.

* [Slack workspace](https://join.slack.com/t/opendevin/shared_invite/zt-2etftj1dd-X1fDL2PYIVpsmJZkqEYANw)
* [Discord server](https://discord.gg/mBuDGRzzES)
6 changes: 3 additions & 3 deletions agenthub/__init__.py
@@ -2,8 +2,8 @@
load_dotenv()

# Import agents after environment variables are loaded
from . import monologue_agent # noqa: E402
from . import codeact_agent # noqa: E402
from . import planner_agent # noqa: E402
from . import monologue_agent # noqa: E402
from . import codeact_agent # noqa: E402
from . import planner_agent # noqa: E402

__all__ = ['monologue_agent', 'codeact_agent', 'planner_agent']
4 changes: 2 additions & 2 deletions agenthub/codeact_agent/README.md
@@ -1,6 +1,6 @@
# CodeAct-based Agent Framework

This folder implements the [CodeAct idea](https://arxiv.org/abs/2402.13463) that relies on LLM to autonomously perform actions in a Bash shell. It requires more from the LLM itself: LLM needs to be capable enough to do all the stuff autonomously, instead of stuck in an infinite loop.
This folder implements the [CodeAct idea](https://arxiv.org/abs/2402.13463) that relies on LLM to autonomously perform actions in a Bash shell. It requires more from the LLM itself: LLM needs to be capable enough to do all the stuff autonomously, instead of stuck in an infinite loop.

**NOTE: This agent is still highly experimental and under active development to reach the capability described in the original paper & [repo](https://github.com/xingyaoww/code-act).**

@@ -18,6 +18,6 @@ Example: prompts `gpt-4-0125-preview` to write a flask server, install `flask` l

<img width="957" alt="image" src="https://github.com/OpenDevin/OpenDevin/assets/38853559/68ad10c1-744a-4e9d-bb29-0f163d665a0a">

Most of the things are working as expected, except at the end, the model did not follow the instruction to stop the interaction by outputting `<execute> exit </execute>` as instructed.
Most of the things are working as expected, except at the end, the model did not follow the instruction to stop the interaction by outputting `<execute> exit </execute>` as instructed.

**TODO**: This should be fixable by either (1) including a complete in-context example like [this](https://github.com/xingyaoww/mint-bench/blob/main/mint/tasks/in_context_examples/reasoning/with_tool.txt), OR (2) collect some interaction data like this and fine-tune a model (like [this](https://github.com/xingyaoww/code-act), a more complex route).
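The `<execute>` convention this README describes is what the agent extracts from each model response (see the `codeact_agent.py` changes further down in this PR). A minimal sketch of that extraction, using a hypothetical response string rather than a real model call:

```python
import re

# Hypothetical model output following the <execute> convention described above.
response_text = 'Sure, let me check the directory first.\n<execute> ls -la </execute>'

# Capture everything between the tags, as the agent code in this PR does.
match = re.search(r'<execute>(.*)</execute>', response_text, re.DOTALL)
if match is not None:
    command = match.group(1).strip()
    if command == 'exit':
        print('Model chose to end the conversation.')
    else:
        print(f'Command for the sandbox shell: {command}')
else:
    print('No executable command found; treat the text as a plain message.')
```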
2 changes: 1 addition & 1 deletion agenthub/codeact_agent/__init__.py
@@ -1,4 +1,4 @@
from opendevin.agent import Agent
from .codeact_agent import CodeActAgent

Agent.register("CodeActAgent", CodeActAgent)
Agent.register('CodeActAgent', CodeActAgent)
51 changes: 28 additions & 23 deletions agenthub/codeact_agent/codeact_agent.py
@@ -24,7 +24,7 @@
{COMMAND_DOCS}
"""
if COMMAND_DOCS is not None
else ""
else ''
)
SYSTEM_MESSAGE = f"""You are a helpful assistant. You will be provided access (as root) to a bash shell to complete user-provided tasks.
You will be able to execute commands in the bash shell, interact with the file system, install packages, and receive the output of your commands.
@@ -46,27 +46,29 @@
{COMMAND_SEGMENT}

When you are done, execute the following to close the shell and end the conversation:
<execute>exit</execute>
<execute>exit</execute>
"""

INVALID_INPUT_MESSAGE = (
"I don't understand your input. \n"
"If you want to execute command, please use <execute> YOUR_COMMAND_HERE </execute>.\n"
"If you already completed the task, please exit the shell by generating: <execute> exit </execute>."
'If you want to execute command, please use <execute> YOUR_COMMAND_HERE </execute>.\n'
'If you already completed the task, please exit the shell by generating: <execute> exit </execute>.'
)


def parse_response(response) -> str:
action = response.choices[0].message.content
if "<execute>" in action and "</execute>" not in action:
action += "</execute>"
if '<execute>' in action and '</execute>' not in action:
action += '</execute>'
return action


class CodeActAgent(Agent):
"""
The Code Act Agent is a minimalist agent.
The Code Act Agent is a minimalist agent.
The agent works by passing the model a list of action-observation pairs and prompting the model to take the next step.
"""

def __init__(
self,
llm: LLM,
@@ -82,7 +84,7 @@ def __init__(

def step(self, state: State) -> Action:
"""
Performs one step using the Code Act Agent.
Performs one step using the Code Act Agent.
This includes gathering info on previous steps and prompting the model to make a command to execute.

Parameters:
@@ -97,42 +99,45 @@ def step(self, state: State) -> Action:
"""

if len(self.messages) == 0:
assert state.plan.main_goal, "Expecting instruction to be set"
assert state.plan.main_goal, 'Expecting instruction to be set'
self.messages = [
{"role": "system", "content": SYSTEM_MESSAGE},
{"role": "user", "content": state.plan.main_goal},
{'role': 'system', 'content': SYSTEM_MESSAGE},
{'role': 'user', 'content': state.plan.main_goal},
]
updated_info = state.updated_info
if updated_info:
for prev_action, obs in updated_info:
assert isinstance(
prev_action, (CmdRunAction, AgentEchoAction)
), "Expecting CmdRunAction or AgentEchoAction for Action"
), 'Expecting CmdRunAction or AgentEchoAction for Action'
if isinstance(
obs, AgentMessageObservation
): # warning message from itself
self.messages.append({"role": "user", "content": obs.content})
self.messages.append(
{'role': 'user', 'content': obs.content})
elif isinstance(obs, CmdOutputObservation):
content = "OBSERVATION:\n" + obs.content
content += f"\n[Command {obs.command_id} finished with exit code {obs.exit_code}]]"
self.messages.append({"role": "user", "content": content})
content = 'OBSERVATION:\n' + obs.content
content += f'\n[Command {obs.command_id} finished with exit code {obs.exit_code}]]'
self.messages.append({'role': 'user', 'content': content})
else:
raise NotImplementedError(
f"Unknown observation type: {obs.__class__}"
f'Unknown observation type: {obs.__class__}'
)
response = self.llm.completion(
messages=self.messages,
stop=["</execute>"],
stop=['</execute>'],
temperature=0.0
)
action_str: str = parse_response(response)
self.messages.append({"role": "assistant", "content": action_str})
state.num_of_chars += sum(len(message['content'])
for message in self.messages) + len(action_str)
self.messages.append({'role': 'assistant', 'content': action_str})

command = re.search(r"<execute>(.*)</execute>", action_str, re.DOTALL)
command = re.search(r'<execute>(.*)</execute>', action_str, re.DOTALL)
if command is not None:
# a command was found
command_group = command.group(1)
if command_group.strip() == "exit":
if command_group.strip() == 'exit':
return AgentFinishAction()
return CmdRunAction(command=command_group)
# # execute the code
@@ -149,4 +154,4 @@ def step(self, state: State) -> Action:
) # warning message to itself

def search_memory(self, query: str) -> List[str]:
raise NotImplementedError("Implement this abstract method")
raise NotImplementedError('Implement this abstract method')
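One detail worth noting in the diff above: the completion call passes `stop=['</execute>']`, so the model's text is truncated before the closing tag, and `parse_response` re-appends it so the regex in `step` can still match. A minimal sketch of that interplay, with a hypothetical truncated response object standing in for the real completion result:

```python
import re
from types import SimpleNamespace

def parse_response(response) -> str:
    # Same logic as the diff above: generation stops at '</execute>',
    # so the closing tag is missing and has to be re-appended.
    action = response.choices[0].message.content
    if '<execute>' in action and '</execute>' not in action:
        action += '</execute>'
    return action

# Hypothetical truncated completion object, shaped like the
# response.choices[0].message.content access used in the agent code.
truncated = SimpleNamespace(
    choices=[SimpleNamespace(message=SimpleNamespace(content='<execute> pip install flask '))]
)

action_str = parse_response(truncated)
match = re.search(r'<execute>(.*)</execute>', action_str, re.DOTALL)
assert match is not None
print(match.group(1).strip())  # pip install flask
```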
1 change: 0 additions & 1 deletion agenthub/monologue_agent/TODO.md
@@ -6,4 +6,3 @@ There's a lot of low-hanging fruit for this agent:
* Improve memory condensing--condense earlier memories more aggressively
* Limit the time that `run` can wait (in case agent runs an interactive command and it's hanging)
* Figure out how to run background processes, e.g. `node server.js` to start a server

2 changes: 1 addition & 1 deletion agenthub/monologue_agent/__init__.py
@@ -1,4 +1,4 @@
from opendevin.agent import Agent
from .agent import MonologueAgent

Agent.register("MonologueAgent", MonologueAgent)
Agent.register('MonologueAgent', MonologueAgent)