Access Management and Micro-services – Part 3: Advanced Authorisation and Assurance

Continuing from the previous post, which dealt with the core concepts around performing authentication and authorisation in a distributed environment, this post expands upon those concepts to look at additional factors in authorisation decisions, including supplementary identity information, authentication challenges and risk assessment. While basic authentication and authorisation requirements can be met through the use of JWTs and OAuth, this post shifts to tackling bespoke requirements, outlining potential services which could provide capabilities above and beyond what is captured in those standards.

Exposing and Sharing Identity Information

In the initial discussion of authorisation approaches, the focus was upon utilising identity claims from a user’s JWT; however, that discussion glossed over the complexities involved in selecting those claims in the first place. I view defining the information about the bearer that should be put in the payload of the tokens issued by the authentication services as one of the larger challenges from a security design perspective. The payload has to provide enough information for services to perform authorisation, but as the data is simply base64-encoded (or encrypted, but with widely distributed keys), any information it contains should probably be considered public. For many use cases this will not be significant, as most authorisation decisions can be made on public information, for instance validating that the owner of some data matches the user principal in the token, or ensuring that a particular role appears in a list of roles.
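As a concrete illustration of decisions that can be made from public claims alone, the following sketch (Python, assuming the JWT signature and expiry have already been validated upstream, and using illustrative claim names such as ‘sub’ and ‘roles’) checks resource ownership and role membership:

# Minimal sketch of authorisation using only public JWT claims.
# Assumes the token has already been validated and decoded into a dict;
# the claim names used here are illustrative conventions, not a standard.

def user_owns_resource(claims: dict, resource_owner_id: str) -> bool:
    # The 'sub' claim identifies the user principal the token was issued for.
    return claims.get("sub") == resource_owner_id

def user_has_role(claims: dict, required_role: str) -> bool:
    # A 'roles' claim carrying a list of role names is a common convention.
    return required_role in claims.get("roles", [])

def can_update_record(claims: dict, record: dict) -> bool:
    # Either the caller owns the record, or they hold an administrative role.
    return user_owns_resource(claims, record["owner_id"]) or user_has_role(claims, "admin")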

There are, however, authorisation scenarios that require more sensitive information in order to make a decision, and in those cases the public claims from the token will need to be supplemented by other user identity information held by the access management system. As I stated in the initial post in this series, I feel that the access management services are responsible for everything related to user identities, and so providing services with appropriate access to this additional information is squarely within their remit.

These ‘additional identity information’ services are subject to the earlier statement that services are responsible for their own authorisation, and so during design it is the role of the access management team to determine an appropriate authorisation model. Given that the service is providing access to information deemed too sensitive to be included in the JWT claims, such access ought to be tightly scoped to both the user being queried and the service making the request.

In order to implement this, services will need to be registered in the access management component with a set of ‘sensitive authorisation information’ entitlements assigned as available to that service. There are a number of variables in how this could be set up: whether access is scoped at a coarse or fine-grained level (e.g. a general profile-information entitlement vs. specific LDAP attributes), and how the entitlements are initialised, presumably based upon agreement between the access management team and the service developers.

Such a service could look something like this (though this is by no means prescriptive):

POST …/authorisation_information

Headers:
Authorization: Bearer <service client token> OR Basic <service client credentials>

Body:
{
  user_assertion: <user JWT>,
  required_scopes: <list of authorisation scopes/attributes, e.g. "location phoneNumber">
}

The service can then perform the following flow:

  1. Validate the Client Token/Credentials
  2. Validate the User JWT
  3. Confirm that the client is entitled to the requested scopes/attributes
  4. Retrieve the relevant attributes from the user store for the user in the JWT
  5. Return these attributes to the requesting service

In this flow, step 3 is the explicit authorisation step, though there are a number of other steps that could be making ‘authorisation-like’ decisions, such as ensuring that the user JWT was issued for use by the calling client (by checking the ‘aud’ claim). If the service is a third-party one, there could also be a standard OAuth consent management step for validating that the user has consented to provide the service with access to the requested scopes. The fact that the data returned is specific to the user whose valid token is provided is also something of a security control, as services are only able to retrieve records associated with users who are actively calling them.
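A minimal sketch of that flow is below (Python; the in-memory client registry and user store, the use of the PyJWT library, and the HS256 shared secret are all simplifying assumptions for illustration, not a prescription):

# Sketch of the authorisation_information flow described above. Registered
# clients and the user store are in-memory dicts purely for illustration, and
# a shared-secret HS256 key stands in for a real (likely asymmetric) signing key.
import jwt  # PyJWT, assumed available

SIGNING_KEY = "illustrative-shared-secret"

REGISTERED_CLIENTS = {
    "orders-service": {
        "secret": "client-secret",
        "entitled_scopes": {"location", "phoneNumber"},
    }
}

USER_STORE = {
    "user-123": {"location": "Sydney", "phoneNumber": "+61 400 000 000", "dateOfBirth": "1990-01-01"},
}

def authorisation_information(client_id, client_secret, user_assertion, required_scopes):
    # 1. Validate the client credentials.
    client = REGISTERED_CLIENTS.get(client_id)
    if client is None or client["secret"] != client_secret:
        raise PermissionError("unknown client or invalid credentials")

    # 2. Validate the user JWT (signature, expiry and audience).
    user_claims = jwt.decode(user_assertion, SIGNING_KEY, algorithms=["HS256"], audience=client_id)

    # 3. Confirm the client is entitled to the requested scopes/attributes.
    requested = set(required_scopes.split())
    if not requested.issubset(client["entitled_scopes"]):
        raise PermissionError("client not entitled to one or more requested scopes")

    # 4. Retrieve the relevant attributes for the user identified in the token.
    record = USER_STORE.get(user_claims["sub"], {})

    # 5. Return only the requested attributes to the requesting service.
    return {attribute: record[attribute] for attribute in requested if attribute in record}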

The outcome of this call is that the service is provided with service- and user-specific identity information that can be combined with the claims in the JWT to make whatever authorisation decisions are relevant to the service. The idea is that this type of call will be made sparingly, for more sensitive transactions, as most authorisation decisions can be made based upon the claims in the JWT alone, and this call adds the latency of an additional service invocation. It may make sense to cache the results at the service, though obviously this is subject to considerations around storage of sensitive data, how rapidly the requested attributes change, whether the services can (or ought to) maintain any state, etc.

A separate problem with using JWTs is that, because validation is performed in a distributed fashion at each service, there isn’t an effective way to revoke them. Typically in an OAuth scenario, revocation is performed on the refresh_token, to prevent new access tokens from being issued, rather than revoking access tokens directly. In the majority of situations this will not be an issue, as token expiry should be sensibly set, though for certain high-risk or high-sensitivity transactions, checking for token revocation may be desirable. A service similar to the above (or even the same service, with no scopes requested) can be made available for services to check for token revocation.

POST …/token/validate

Body:
{
  user_assertion: <user JWT>
}
Response (though this could be handled with status codes, e.g. 204/401):
{
  token_status: "valid" OR "invalid"
}

This needs to be complemented by a service which allows a token to be revoked, which itself carries some security design considerations. Obviously a user should be able to revoke tokens that they own, but the identifier for that user is the JWT itself. As a result, the design of the service inevitably becomes the user POSTing the token to a revoke/logout endpoint (or calling DELETE on a token endpoint, with the token in the Authorization header, which might be more intuitive). A side effect of this is that any service with access to the token is effectively able to revoke it, potentially impacting authorisation for other services, so the capability needs to be used with care.

The addition of the capability to revoke JWTs also requires some setup on the access management services side in order to maintain JWT validity state. The most straightforward approach is to simply create a cache entry for each issued token, with a time-to-live equal to the token expiry. The JWT spec (RFC 7519) provides an optional ‘jti’ (JWT ID) claim which can act as an ideal key for this entry. Calls to confirm the validity of a token then simply check whether the token exists in the cache, or alternatively, whether a revocation flag associated with that jti has been set (in addition to checking integrity, expiry, etc.).
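As an illustration, a sketch of that cache-per-token approach is below (Python; the use of Redis and the specific key naming are assumptions, not a prescription):

# Sketch of jti-based token validity tracking using a Redis cache.
# A key exists only while the token is valid; revocation deletes the key.
import time
import redis

cache = redis.Redis()

def record_issued_token(jti: str, expires_at: int) -> None:
    # Create an entry whose TTL matches the remaining lifetime of the token.
    ttl = max(int(expires_at - time.time()), 1)
    cache.setex(f"jwt:{jti}", ttl, "valid")

def revoke_token(jti: str) -> None:
    # Removing the entry causes subsequent validity checks to fail.
    cache.delete(f"jwt:{jti}")

def token_is_valid(jti: str) -> bool:
    # Integrity and expiry of the JWT itself should already have been checked;
    # this only answers "has the token been revoked (or never issued)?"
    return cache.exists(f"jwt:{jti}") == 1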

Assurance for Sensitive Operations

In addition to checking whether a token has been revoked, there are times when sensitive operations may require a higher level of assurance before they can be completed. In a given environment there may be a range of services which can be accessed simply by providing a username and password, while other, more privileged services might require multi-factor authentication or dynamic challenges. This capability can be incorporated into the access management services by including, in the token itself, the authentication methods that were used to obtain it.

This can be supported by JWTs, as they can incorporate arbitrary claims, for instance an ‘assurance_level’ claim. This could simply be something like ‘0’ for client_credentials/resource owner password flows, ‘1’ for a 3-legged flow where the user has directly interacted with the access management UIs, and ‘2’ if the authentication flow included a multi-factor element. Alternatively, it could be something as complex as a score between 0 and 1000, calculated from dynamic risk scores based upon access context and offset against an authentication score made up of one or more factors. Either way, a level of assurance is available to other services, which they can use to make an authorisation decision.
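By way of illustration, a sketch of issuing and checking such a claim is below (Python, using the PyJWT library; the library choice, the HS256 shared secret, the claim name and the threshold logic are all assumptions for illustration):

# Sketch: embedding an 'assurance_level' claim at issuance and gating a
# sensitive operation on it. Claim names and key handling are illustrative only.
import time
import jwt  # PyJWT, assumed available

SIGNING_KEY = "illustrative-shared-secret"

def issue_token(subject: str, assurance_level: int) -> str:
    payload = {
        "sub": subject,
        "assurance_level": assurance_level,  # e.g. 0, 1 or 2 as described above
        "exp": int(time.time()) + 900,
    }
    return jwt.encode(payload, SIGNING_KEY, algorithm="HS256")

def require_assurance(token: str, minimum_level: int) -> dict:
    # Decode and verify the token, then refuse if the asserted assurance is too low.
    claims = jwt.decode(token, SIGNING_KEY, algorithms=["HS256"])
    if claims.get("assurance_level", 0) < minimum_level:
        raise PermissionError("step-up authentication required")
    return claims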

As an extension to a set of authentication services, the access management services could make multi-factor authentication available as a service in order to provide step-up during authorisation decisions.

There are some considerations here, as multi-factor challenges, like logins, should only take place in consistent and controlled UIs. This mitigates the risk of phishing attacks, by avoiding training your users that it’s OK to enter authentication information without validating the interface first. As a result, requests for step-up authentication from services should be handled in the same manner as an OAuth 3-legged flow. This could be done as follows:

  • Redirect user to the ‘authorize’ endpoint, requesting a scope associated with the step-up (such as second_factor)
  • User presents their existing JWT to the endpoint in the browser cookies
  • The existing JWT is used to identify the user and bypass initial factors (username/password) directing them to a second factor screen
  • User completes second factor authentication
  • Browser callback to requesting service endpoint, with new authorisation code
  • Service exchanges the authorisation code for a new ‘stepped-up’ token (with PKCE, state validation, etc)

As OAuth doesn’t actually specify anything about how authentication is performed, this can be handled with a vanilla OAuth 3-legged flow. The actual selection of a step-up authentication mechanism could be based upon a number of factors, including the factors supported for the user (obviously), user preferences (to accommodate user disabilities/security postures), and possibly aspects of the risk scoring versus the required assurance for the requested operation.
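For illustration, constructing the initial redirect in that flow, with a step-up scope, state and a PKCE code challenge, might look like the following sketch (Python standard library only; the endpoint URL and the ‘second_factor’ scope name are assumptions consistent with the flow described above and RFC 7636):

# Sketch: building the 'authorize' redirect for a step-up request, including
# state and a PKCE code challenge (S256), using only the standard library.
import base64
import hashlib
import secrets
from urllib.parse import urlencode

AUTHORIZE_ENDPOINT = "https://login.example.com/oauth2/authorize"  # illustrative

def build_step_up_redirect(client_id: str, redirect_uri: str):
    state = secrets.token_urlsafe(16)
    code_verifier = secrets.token_urlsafe(64)
    code_challenge = base64.urlsafe_b64encode(
        hashlib.sha256(code_verifier.encode("ascii")).digest()
    ).rstrip(b"=").decode("ascii")

    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "second_factor",  # the step-up scope suggested above
        "state": state,
        "code_challenge": code_challenge,
        "code_challenge_method": "S256",
    }
    # The caller must persist state and code_verifier in order to validate the
    # callback and exchange the authorisation code for the stepped-up token.
    return f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}", state, code_verifier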

There are a number of ways of handling the actual process of second-factor authentication: server-initiated, such as push-notification challenges; client-initiated, such as mechanisms similar to those advocated by the FIDO Alliance; or pre-initialised challenges, such as TOTP. All of these are solid approaches, though they require varying levels of development on the client side. In an initial implementation of access management micro-services, I would advocate implementing client-agnostic second factors in the first pass, for instance SMS OTPs, email OTPs/‘magic links’ and possibly TOTPs, using one of the many ‘Authenticator’ applications that exist. Once clients have become more established, and there is a better understanding of first-party consumers, these mechanisms could be expanded with other factors which require greater client-side effort/capability.
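As a small example of the pre-initialised style, a TOTP enrolment and verification sketch is below (Python, using the pyotp library; the library choice and issuer name are assumptions, and any RFC 6238 implementation would do):

# Sketch: enrolling a user for TOTP and verifying a submitted code using pyotp.
import pyotp

def enrol_totp(username: str):
    secret = pyotp.random_base32()
    # The provisioning URI can be rendered as a QR code for an authenticator app.
    uri = pyotp.TOTP(secret).provisioning_uri(name=username, issuer_name="ExampleAccessMgmt")
    return secret, uri

def verify_totp(secret: str, submitted_code: str) -> bool:
    # valid_window=1 tolerates one 30-second step of clock drift either side.
    return pyotp.TOTP(secret).verify(submitted_code, valid_window=1)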

There are a number of ways in which the core capabilities around authentication and authorisation provided by standards such as JWT and OAuth can be extended with bespoke access management services which provide discrete business functionality. This is in keeping with the spirit of micro-services development; access management is not just a set of standards which can be consumed by other business-focused services, but a swathe of business capabilities in its own right. The same development paradigm that applies to building out other micro-services can apply to access management services, at least once a solid core, likely based upon authentication and authorisation standards, is in place. If other services require additional user information, a service ought to deliver it. If step-up authentication is required, consuming services need some mechanism by which to request it. If registering to use the access management services is part of an automated build process, then there need to be registration services to facilitate that. This post has discussed a handful of these requirements, but the important takeaway is that in a micro-services model, access management needs to be thought of in the same way as any other set of services, with a continual cycle of architecture, development, deployment, testing and maintenance, expanding to encompass new requirements as they emerge, rather than as a rigid, unchanging support service.

In the final post of this series, we will look beyond the services provided by the access management team, at how that team can accommodate new service development and ensure it appropriately leverages existing security, as well as discussing how security can be applied when services invoke each other.
