PKCE client authentication in shared public/private server #146

Closed
night opened this issue Sep 9, 2019 · 16 comments

@night

night commented Sep 9, 2019

This issue is similar to my other reported issue #115, but essentially: in an authorization server where applications are both public and private, it isn't possible to natively authenticate to the token endpoint as either a public or a private client depending on the type of authorization request.

The only way to achieve this currently is to extend the client auth methods with a custom authentication scheme:

def authenticate_rfc7636(query_client, request):
    # If the token request carries a PKCE code_verifier, treat the client as
    # public and authenticate it with Authlib's built-in "none" method,
    # skipping client_secret verification.
    data = request.form
    code_verifier = data.get('code_verifier')
    if code_verifier:
        return authenticate_none(query_client, request)

This works because rfc7636 always runs validate_code_verifier in the after_validate_token_request hook, which validates the provided code verifier.
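
For illustration, a minimal sketch of how such a custom scheme could be plugged in. It assumes Authlib's register_client_auth_method hook on the authorization server; the 'pkce' method name here is made up, and the grant must also list it among its allowed auth methods (see the TOKEN_ENDPOINT_AUTH_METHODS discussion later in this thread).

# `authorization_server` is an already configured Authlib AuthorizationServer.
# Register the custom scheme under a hypothetical name so the token endpoint
# can try it alongside the built-in client auth methods.
authorization_server.register_client_auth_method('pkce', authenticate_rfc7636)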

@lepture
Owner

lepture commented Sep 10, 2019

I don't think I'm going to support this case in Authlib. But you can rewrite the authenticate_token_endpoint_client method of AuthorizationCodeGrant.

In my experience, one client SHOULD have only one client type. For instance, in https://tools.ietf.org/html/rfc7591 a client has a specific token_endpoint_auth_method.
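
A rough sketch of the override lepture suggests, not code from Authlib itself: it assumes the server object still exposes the query_client callable it was configured with, and that the token request form is available as request.form as in the snippet above; error handling for an unknown client_id is omitted.

from authlib.oauth2.rfc6749 import grants

class MyAuthorizationCodeGrant(grants.AuthorizationCodeGrant):
    def authenticate_token_endpoint_client(self):
        # Requests carrying a PKCE code_verifier are treated as public clients;
        # the rfc7636 extension still has to verify the code_verifier before a
        # token is issued.
        if self.request.form.get('code_verifier'):
            client = self.server.query_client(self.request.form.get('client_id'))
            self.request.client = client
            return client
        # Otherwise fall back to the normal confidential-client authentication.
        return super().authenticate_token_endpoint_client()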

@night
Author

night commented Sep 10, 2019

Do you have an example of a service where clients are explicitly either public or private? In my experience most OAuth2 providers allow a client to be both public and private so that the client can authenticate with its preferred method across multiple platforms. For example, if you have an application that logs into Google on the web and also on mobile, you use PKCE in the mobile app but the normal authorization code flow on the web. With how Authlib is designed, this is not natively possible.

If you refer to https://tools.ietf.org/html/rfc6749#section-2.1 on client types, you can see some important points:

The authorization server SHOULD NOT make assumptions about the client type.

Authlib currently assumes the client type from stored configuration rather than from the authorization request itself.

A client may be implemented as a distributed set of components, each with a different client type and security context (e.g., a distributed client with both a confidential server-based component and a public browser-based component).

This is essentially my argument from above, whereby you have multiple clients in different security contexts.

If the authorization server does not provide support for such clients or does not provide guidance with regard to their registration, the client SHOULD register each component as a separate client.

This is essentially the state of Authlib right now, which is what I would like to avoid. Most OAuth2 servers do not require clients to explicitly predefine their client type.

@lepture
Owner

lepture commented Sep 10, 2019

Yes, you are right. I just realized that I had intended to remove check_client_type from ClientMixin, but I didn't remove all usages of check_client_type.

lepture added a commit that referenced this issue Sep 10, 2019
@lepture
Owner

lepture commented Sep 10, 2019

@night thanks for your report. It seems check_client_type is only used in PKCE, which I intended to remove but forgot. Here is the fix: b7a96d2

Can you check if it works for you?

@night
Author

night commented Sep 10, 2019

That looks close, but I think there's still an issue with that fix.

If you set none as an allowed authentication method on the token endpoint, normal code grants can now be exchanged for a token without the secret. (This is why the code sample in my initial post filters on requests with a code_verifier, since that forces the request to be validated against the rfc7636 code.)

A proper fix for this might be that the grant type should be responsible for knowing which auth methods are supported. For example, for the implicit grant and rfc7636 only none is supported; for authorization code, client credentials, password grant, and refresh token only client_secret_post and client_secret_basic are supported.

Edit: It looks like many of the grants already do this with TOKEN_ENDPOINT_AUTH_METHODS, but I suspect that because rfc7636 is an extension on top of AuthorizationCodeGrant it has this limitation.
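
For reference, the kind of per-grant declaration this refers to: a subclass can widen the class attribute so the token endpoint also accepts public (PKCE) clients. The subclass name is made up; whether widening it is safe depends on the CodeChallenge discussion below.

from authlib.oauth2.rfc6749 import grants

class PublicAndConfidentialCodeGrant(grants.AuthorizationCodeGrant):
    TOKEN_ENDPOINT_AUTH_METHODS = [
        'client_secret_basic',
        'client_secret_post',
        'none',  # allow public clients; PKCE must then enforce the code_verifier
    ]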

@lepture
Owner

lepture commented Sep 10, 2019

@night In CodeChallenge.validate_code_verifier, there is a check of the token endpoint auth method:

        # public client MUST verify code challenge
        if self.required and request.auth_method == 'none' and not verifier:
            raise InvalidRequestError('Missing "code_verifier"')

Note the request.auth_method == 'none' check. If you use CodeChallenge(required=True), this extension will verify the code_verifier before issuing a token.
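
For reference, the usual way the extension is attached when registering the grant; `server` here stands for your AuthorizationServer instance and MyAuthorizationCodeGrant for your grant subclass.

from authlib.oauth2.rfc7636 import CodeChallenge

# require a code_verifier whenever the client authenticated with method 'none'
server.register_grant(MyAuthorizationCodeGrant, [CodeChallenge(required=True)])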

@night
Author

night commented Sep 10, 2019

Hmmm, I see; that should work. I'll try it out later today to confirm. My only remaining question would be: should this become the default behavior? Usually you want to minimize footguns, and I can totally see someone not reading/understanding the implications of adding PKCE without required=True.

Come to think of it, it would be cool if, when you extended the authorization grant with CodeChallenge, it:

  • added none to the list of allowed auth methods
  • required public clients to specify a verifier

lepture added a commit that referenced this issue Sep 10, 2019
Also make CodeChallenge required=True by default.
ref: #146
@lepture
Owner

lepture commented Sep 11, 2019

@night I've changed the default value to required=True. But the check for the none method in AuthorizationCodeGrant has not been added.

lepture added the spec label Sep 11, 2019
@night
Author

night commented Sep 11, 2019

I tried upgrading to the latest commit to test out this fix, but we now have failing tests that appear to be caused by the changes to scope validation in 18858b7.

get_allowed_scope doesn't appear to be checked properly on authorize, so we have tests failing because invalid_scope is not returned.

After digging through this commit, it looks like it checks arbitrary configuration on the server called scopes_supported inside a new metadata dict, but it bypasses validation completely if that isn't set. Is that working as intended? Shouldn't this actually be set(scope) == set(get_allowed_scope(scope))?

@lepture
Owner

lepture commented Sep 12, 2019

@night see authlib/example-oauth2-server#52

The behavior was changed to omit the scopes the client doesn't support. You can roll back to the previous checking by rewriting def validate_requested_scope(self).

@night
Author

night commented Sep 12, 2019

@lepture I'm not sure I understand. From your code:

def validate_requested_scope(self):
    if self.request.scope and self.server.metadata:

This is saying: if the request provided a scope and the server has arbitrary metadata. Why does the server need to have this stored? Clients may have access to different scopes (i.e. get_allowed_scope returns different scopes per client).

Doing the following would effectively achieve what you want, no? scope is optional on the request, and every request gets validated against the client without hardcoding static scopes on the server itself (which can vary in availability from client to client).

# `scope` here is the requested scope (self.request.scope)
self.generate_token(
    ...,
    scope=scope or client.get_allowed_scope(scope),  # pull in the client's default scopes if none were requested
    ...
)
# reject the request if it asked for more than the client is allowed
if self.request.scope:
    if set(scope_to_list(self.request.scope)) != set(scope_to_list(client.get_allowed_scope(scope))):
        raise InvalidScopeError(state=self.request.state)

@lepture
Owner

lepture commented Sep 12, 2019

@night the metadata is designed for https://tools.ietf.org/html/rfc8414

If you have defined Authorization Server Metadata, there may be a scopes_supported entry in the metadata. Authlib will validate request.scope against scopes_supported. If the requested scope is not in scopes_supported, the authorization server will raise an invalid_scope error.

From the spec, RFC 6749:

    invalid_scope
        The requested scope is invalid, unknown, or malformed.

In this case, the requested scope is unknown to the authorization server.


Oh, I just found that you can't roll back to the previous behavior now. There is no self.request.client when the requested scope is validated. :(

I'll make a v0.12.1 release to fix it.


From my understanding of https://tools.ietf.org/html/rfc6749#section-3.3

I would say that if a requested scope is not in the client's allowed scopes, the authorization server should just omit it. For instance:

GET /authorize?scope=a+b+c

But the client only allows a and b. Then, the authorize page would display:

Client is requesting:

1. a
2. b

[Confirm ?]

Please correct me if I am wrong. However, I will still make a new release so that you can rewrite the validate_requested_scope method to roll back the behavior.
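
A small sketch of the "omit unsupported scopes" behavior described above, using Authlib's scope helpers; the helper names and import path are assumed from the 0.x layout.

from authlib.oauth2.rfc6749.util import scope_to_list, list_to_scope

def narrow_scope(requested_scope, client_allowed_scope):
    # Keep only the requested scopes that the client is allowed to use,
    # instead of rejecting the whole request with invalid_scope.
    allowed = set(scope_to_list(client_allowed_scope) or [])
    kept = [s for s in (scope_to_list(requested_scope) or []) if s in allowed]
    return list_to_scope(kept)

# narrow_scope('a b c', 'a b') -> 'a b'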

@lepture
Owner

lepture commented Sep 12, 2019

v0.12.1 has been released. If you want to roll back to the previous behavior, here is the sample code:

from authlib.oauth2.rfc6749 import grants
from authlib.oauth2.rfc6749.errors import InvalidScopeError
from authlib.oauth2.rfc6749.util import scope_to_list


class YourAuthorizationCodeGrant(grants.AuthorizationCodeGrant):
    # implement missing methods

    def validate_requested_scope(self):
        client = self.request.client
        scopes = scope_to_list(self.request.scope)
        # raise if the request asks for scopes the client is not allowed to use
        if scopes and not set(client.scope.split()).issuperset(set(scopes)):
            raise InvalidScopeError(state=self.request.state)

lepture closed this as completed Nov 12, 2019
@ThiefMaster
Contributor

@@ -31,6 +31,12 @@ class OAuth2Client(db.Model, OAuth2ClientMixin):
         db.Integer, db.ForeignKey('user.id', ondelete='CASCADE'))
     user = db.relationship('User')

+    def check_endpoint_auth_method(self, method, endpoint):
+        if endpoint == 'token' and method == 'none':
+            return True
+        return super().check_endpoint_auth_method(method, endpoint)

@lepture @night Can you confirm that this is the proper solution to support both PKCE client auth (without a client_secret) and regular code auth (with a client_secret) in the same client, without introducing any security issues? Based on my tests in the example app this works fine, but I'd like to know for sure.

And if this has no disadvantages, wouldn't it make sense to include an allow_pkce_flow-like column in the client mixin and add it to the check above? Authlib 1.0 coming up soon would be a good opportunity, since AFAIK it already contains changes to the model mixins...
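
To make that concrete, a hypothetical sketch of such a column layered onto the model from the diff above; allow_pkce_flow is not an existing Authlib column, and db / OAuth2ClientMixin are the same objects used in the example app.

class OAuth2Client(db.Model, OAuth2ClientMixin):
    # ... existing columns from the example app ...
    allow_pkce_flow = db.Column(db.Boolean, nullable=False, default=False)

    def check_endpoint_auth_method(self, method, endpoint):
        if endpoint == 'token' and method == 'none':
            # only clients that explicitly opted in may skip client authentication
            return self.allow_pkce_flow
        return super().check_endpoint_auth_method(method, endpoint)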

@night
Author

night commented Feb 15, 2021

The check_endpoint_auth_method method just checks whether the client is permitted to use method for that type of endpoint; it isn't really providing any security. As long as you have the code challenge extension mixed into the server and the code challenge is configured as required, a request that specifies no credentials will be required to present a code verifier when authorizing.

@ThiefMaster
Contributor

OK, that makes sense. I think I'll actually override check_endpoint_auth_method as I originally planned, and maybe store a list of allowed token_endpoint_auth_methods on the model like it's already done for response/grant types.
