feat: Integrate Azure OpenAI API. #402
Conversation
Hi @lightaime, I've checked all the failed tests, and nearly all of them are "openai.APIConnectionError: Connection error.". I found a comment in this MR that may be related to my problem. I ran some of the failed tests locally and all of them passed. Could you help me with this? There are also two other failed tests.
Thanks @L4zyy for the PR! Overall it looks good. I have some questions:
- About the `AzureOpenAIModel`: do we need a separate class for Azure OpenAI models? It seems the only difference between `AzureOpenAIModel` and `OpenAIModel` will be the creation of `self._client`. Should we just support Azure models with `OpenAIModel`?
- Should we just introduce enums in https://github.com/camel-ai/camel/blob/master/camel/types/enums.py to support the client creation? We could introduce something like `ModelType.Azure_GPT_3_5_TURBO` and so on.

@dandansamax @Wendong-Fan any suggestions?
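The enum suggestion above could be sketched roughly like this; the member names and the `is_azure` helper are assumptions for illustration, not CAMEL's actual definitions:

```python
from enum import Enum

class ModelType(Enum):
    # Existing OpenAI-style entries (values match OpenAI model names):
    GPT_3_5_TURBO = "gpt-3.5-turbo"
    GPT_3_5_TURBO_16K = "gpt-3.5-turbo-16k"
    # Hypothetical Azure counterparts; Azure spells the base model
    # "gpt-35-turbo", so the values stay distinct from the OpenAI ones:
    AZURE_GPT_3_5_TURBO = "gpt-35-turbo"
    AZURE_GPT_3_5_TURBO_16K = "gpt-35-turbo-16k"

    @property
    def is_azure(self) -> bool:
        # A model factory could branch on this flag when building the client.
        return self.name.startswith("AZURE_")
```

One downside, noted later in the thread, is that Azure deployment names are user-specific, so fixed enum values may not match real deployments.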
camel/models/azure_openai_model.py
azure_model_name_to_type = {
    "gpt-35-turbo": ModelType.GPT_3_5_TURBO,
    "gpt-35-turbo-16k": ModelType.GPT_3_5_TURBO_16K,
    "gpt-4": ModelType.GPT_4,
    "gpt-4-32k": ModelType.GPT_4_32K,
    "gpt-4-1106-preview": ModelType.GPT_4_TURBO,
}
What is this used for?
This is code from an older implementation; it will be removed in the next commit.
camel/models/azure_openai_model.py
assert self.deployment_name is not None, \
    "Azure model deployment name is not provided."

self.model_name = os.environ.get('AZURE_MODEL_TOKEN_LIMIT', 8192)
Why is it called `model_name`?
Apologies for this typo; it will be corrected in the next commit.
camel/models/azure_openai_model.py
self.deployment_name = os.environ.get('AZURE_MODEL_DEPLOYMENT_NAME',
                                      'gpt-3.5-turbo-1106')
Do we need `self.deployment_name`? Could we just use `self.model_type.value`?
This is explained in the comment below. I will try to find a better way to do it.
camel/models/model_factory.py
use_azure_api = 'USE_AZURE_API' in os.environ
if use_azure_api:
    model_class = AzureOpenAIModel
else:
    model_class = OpenAIModel
We may sometimes want to use Azure OpenAI models for some agents while using OpenAI models for other agents. Using an environment variable for the `ModelFactory` may not be a very good idea.
I will try to find a better way to do it in the next commit.
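A per-agent alternative to the environment-variable toggle might look like the following sketch; the factory signature and the empty model classes are placeholders, not the PR's actual code:

```python
# Placeholder stand-ins for the PR's model classes; only the dispatch
# logic is the point of this sketch.
class OpenAIModel:
    pass

class AzureOpenAIModel(OpenAIModel):
    pass

def create_model(use_azure: bool = False) -> OpenAIModel:
    # An explicit per-call flag lets one agent run on Azure while another
    # runs on OpenAI in the same process, which a single USE_AZURE_API
    # environment variable cannot express.
    model_class = AzureOpenAIModel if use_azure else OpenAIModel
    return model_class()
```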
Hi @lightaime @L4zyy,

These are my initial ideas for integrating the Azure OpenAI API. However, the API allows users to deploy models like GPT-3/GPT-4 under different deployment names. Therefore, when calling the

In this MR, there is some unused code left over from when I was trying other ways to integrate the API. I apologize for that and will remove it in the next commit. After considering your review, I think it would still be better to implement this integration in `OpenAIModel`. I will try to integrate it there and commit a new implementation later.

I still have a question about the tests. Is it okay to leave some tests failing? If so, which tests should I pass before I can call for a review and merge to master?

Thank you very much for the review. 😊
Thank you very much for your suggestions. I will consider whether it is better to integrate with a new class. Since the deployment name is user-specific, I think it may not be feasible to move it to `enums.py`, unless we ask the user to deploy the model with the same name as OpenAI's. Instead, I will try to move this to the `model_config_dict` instead of using an environment variable and see if it's a good idea.
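The `model_config_dict` idea could be sketched like this; the key names (`deployment_name`, `api_version`) are assumptions for illustration, not the PR's final API:

```python
# Azure-specific keys are carried inside model_config_dict rather than in
# environment variables, then split back out before the OpenAI call so
# only real API kwargs are forwarded.
AZURE_KEYS = {"deployment_name", "api_version"}

def split_azure_config(config: dict) -> tuple:
    """Separate Azure-specific entries from kwargs meant for the API call."""
    azure = {k: v for k, v in config.items() if k in AZURE_KEYS}
    call_kwargs = {k: v for k, v in config.items() if k not in AZURE_KEYS}
    return azure, call_kwargs
```

For example, `split_azure_config({"temperature": 0.2, "deployment_name": "my-gpt4"})` would separate the hypothetical deployment name from the `temperature` kwarg.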
No problem. The tests failed because an MR opened from your fork repo cannot use GitHub
Hi @L4zyy, thank you for your contribution. Actually, I don't think putting it in the
Thank you for the advice. I don't think it's worth adding a parameter in `ModelFactory` and `ChatAgent` only for the Azure API. I have three plans to re-implement this integration:

I prefer the third method, since it is compatible with future differences between the Azure API and the OpenAI API, and it leaves the original `OpenAIModel` class unchanged. The first method would introduce extra variables in `OpenAIModel` and add an if statement in the run method, which makes it unclean. The second method would have the same defect and also introduce redundant enums in `enums.py` (since we would need to add an Azure entry for every supported GPT model version). I have implemented the third method and updated the MR. I have run through nearly all tests I can run locally; these are the tests that cannot pass locally:

Which one do you think is more appropriate for this integration? @lightaime @Wendong-Fan @dandansamax
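A separate class that overrides only client creation, as discussed in this thread, might be structured roughly as follows. The client is stubbed here so the sketch runs without credentials; in real code it would be `openai.OpenAI` / `openai.AzureOpenAI`, which accept these same keyword arguments:

```python
import os

class _StubClient:
    # Stand-in for openai.OpenAI / openai.AzureOpenAI, kept inert so the
    # structure is visible without network access or API keys.
    def __init__(self, **kwargs):
        self.kwargs = kwargs

class OpenAIModel:
    def __init__(self, model_type: str):
        self.model_type = model_type
        self._client = self._create_client()

    def _create_client(self):
        return _StubClient(api_key=os.environ.get("OPENAI_API_KEY", ""))

class AzureOpenAIModel(OpenAIModel):
    # Only client creation differs: Azure additionally needs an endpoint
    # and an API version, and requests target a deployment name.
    def _create_client(self):
        return _StubClient(
            api_key=os.environ.get("AZURE_OPENAI_API_KEY", ""),
            azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT", ""),
            api_version=os.environ.get("AZURE_API_VERSION", ""),
        )
```

This keeps `OpenAIModel` untouched while isolating every Azure-specific difference in one override.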
Thank you for your feedback. However, I think changing the model name is a common operation. For example, if I want to use OpenAI legacy APIs, it's very difficult in the current implementation. But if we can pass the model name to

Also, this feature could be widely used with open-source models. I still don't think modifying
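The per-call override idea raised above could be sketched like this; the class shape and the returned payload are illustrative assumptions, not CAMEL's actual API:

```python
class OpenAIModel:
    def __init__(self, model_type: str, model_config_dict=None):
        self.model_type = model_type
        self.model_config_dict = dict(model_config_dict or {})

    def run(self, messages, **overrides):
        # Per-call overrides win over the constructor config; "model" may
        # name a legacy OpenAI model or an Azure deployment.
        config = {**self.model_config_dict, **overrides}
        model_name = config.pop("model", self.model_type)
        # A real implementation would forward these kwargs to
        # client.chat.completions.create(); they are returned here so the
        # merge logic can be inspected.
        return {"model": model_name, "messages": messages, **config}
```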
Ensure your newly created tests pass and it will be fine.
I agree with you that adding another parameter to pass extra information would make the API integration much easier and more flexible. However, I am not familiar with the implementation of the open-source model class, and I don't want to introduce too much variation for this single API integration. If it's okay to add a new parameter for this integration, I will re-implement it later.
I see. I will pass the extra information in the new parameter and see if it's better.
Okay, thanks for the clarification. 😊
Hi @dandansamax, I have re-implemented the integration as we discussed. The new implementation allows us to pass extra backend information through the
Hi @L4zyy, thanks for the contribution! I'm going to review this PR, but the pipeline still has errors; would you like to fix those first? Thanks!
Certainly, I will update my code and address these errors later. |
Closing this PR since another new PR has been opened: #499
Description
This PR integrates the Azure OpenAI API into CAMEL.

Motivation and Context
Integrate Azure OpenAI API, close #231

Types of changes
What types of changes does your code introduce? Put an `x` in all the boxes that apply:

Implemented Tasks

Checklist
Go over all the following points, and put an `x` in all the boxes that apply. If you are unsure about any of these, don't hesitate to ask. We are here to help!