Authenticating users with AWS ALB

It is usually wise to let somebody else handle authentication. Instead of rolling your own passwords, you can use Google, Facebook, GitHub, or dozens of other providers. It might not be as fun to code, but realistically, the auth team at any of those companies might be larger than your entire company, and they handle hundreds of things you don’t, all the way to using government ID to recover account access.

There are libraries that make using those OAuth providers easy, but it can be even easier to hide your service behind an auth proxy that handles all OAuth and session management, and only passes authenticated requests through. The Google Identity-Aware Proxy is the canonical solution, with a lot of features.

If you’re using AWS, using Google IAP might be convoluted. Fortunately, AWS Application Load Balancer (ALB) can now handle authentication too, and in this post, I’ll show how to take an existing Kubernetes service in AWS and have ALB protect access to it.

Prerequisite: K8S service

Before adding authentication, we need to have a K8S service. You need a deployment, a service, and an ingress. In order to map the ingress to an AWS ALB, you also need to install the AWS Load Balancer Controller. While this is covered in many tutorials, I have also put together an example setup.
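
For orientation, a minimal ingress for such a service could look like the following sketch (the nanoproxy names, namespace, and port 80 are assumptions matching my example setup; adjust them to your service):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nanoproxy
  namespace: nanoproxy
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nanoproxy
            port:
              number: 80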

Prerequisite: the domain and certificate

We also need to have a domain name, and a TLS certificate for it. We need it because, as part of the authentication flow, Google redirects users to your domain, and naturally, it needs to use HTTPS to make sure it’s really your domain.

Registering your domain is out of scope here, while for obtaining a TLS certificate, the AWS Certificate Manager is the easiest option. Once the certificate is issued, you’ll get its ARN, which we’ll use later.

Creating the Google OAuth client

This step, unfortunately, is both tricky and entirely manual. Google Cloud does not provide an API to manage projects and OAuth clients, so we’ll need to do everything by hand.

First, navigate to the new project wizard. Use nanoproxy as the project name, and accept whatever project ID is suggested automatically.

Then, navigate to the auth settings. Enable the auth API, and create a new OAuth client. Use the Web Application type; the other details should be straightforward.

Finally, you need to tell Google about your domain. Under “Authorized redirect URIs”, specify https://<your-domain>/oauth2/idpresponse, the fixed path where ALB handles the OAuth redirect.

At the end, you’ll have two strings: the client ID and the client secret.

Saving the Google OAuth secrets

We need to save the secrets from the Google Console inside a Kubernetes Secret object, with the following YAML:

apiVersion: v1
kind: Secret
metadata:
  name: oauth
  namespace: nanoproxy
type: Opaque
stringData:
  clientId: <client-id-from-Google-Console>
  clientSecret: <client-secret-from-Google-Console>

Note that we’re using the stringData block and therefore don’t need to base64-encode the values.
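
For comparison, the same secret written with the regular data block would need the values base64-encoded first; here the placeholders themselves are encoded, purely for illustration:

data:
  clientId: PGNsaWVudC1pZD4=            # base64 of "<client-id>"
  clientSecret: PGNsaWVudC1zZWNyZXQ+    # base64 of "<client-secret>"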

We also need to make sure that the ALB controller will be able to access our secret, using the usual RBAC dance:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-reader
  namespace: nanoproxy
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alb-reads-secrets
  namespace: nanoproxy
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-reader
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:serviceaccount:kube-system:aws-load-balancer-controller
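
To check that the binding works, you can impersonate the controller’s service account with kubectl (assuming the nanoproxy namespace used above); it should answer yes:

kubectl auth can-i get secrets \
  --namespace nanoproxy \
  --as system:serviceaccount:kube-system:aws-load-balancer-controller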

Configuring the load balancer

Once everything is in place, actually making the load balancer perform authentication is a matter of a few annotations:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  ...
  annotations:
    # Force SSL and specify the certificate
    alb.ingress.kubernetes.io/ssl-policy: ELBSecurityPolicy-FS-1-2-Res-2020-10
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'

    alb.ingress.kubernetes.io/certificate-arn: <certificate ARN>

    # The following annotations make ALB use Google for authentication
    alb.ingress.kubernetes.io/auth-type: oidc
    alb.ingress.kubernetes.io/auth-idp-oidc: |
      {
        "issuer": "https://accounts.google.com",
        "authorizationEndpoint": "https://accounts.google.com/o/oauth2/v2/auth",
        "tokenEndpoint": "https://oauth2.googleapis.com/token",
        "userInfoEndpoint": "https://openidconnect.googleapis.com/v1/userinfo",
        "secretName": "oauth"
      }
    alb.ingress.kubernetes.io/auth-on-unauthenticated-request: authenticate
    alb.ingress.kubernetes.io/auth-scope: 'email'

First, we force SSL for our load balancer and attach the certificate. Then, we tell ALB which Google OIDC endpoints to use and where to find the client secret. Finally, we specify that if a request is unauthenticated, we want the load balancer to authenticate it.

Checking the user

After authenticating, the load balancer will add a JWT to the request passed to our service. We can simply trust the token and extract the user email, or we can go the zero-trust way and verify that the token is properly signed by AWS.

As a reminder, a JWT has a payload and a cryptographic signature. We can validate that the signature matches the payload, but we need to know the public key of the signing entity, and with multiple regions, there are many possible signing keys. Therefore, the token header includes a key id, and we use a URL published by AWS to obtain the public key for that id.
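
To make this concrete, here is how one could peek at the key id by hand, a sketch using only the standard library (strings, encoding/base64, and encoding/json); in the validation code below, the JWT library extracts it for us:

// A JWT is three base64url-encoded parts separated by dots; the first
// part is the header, which carries the key id.
parts := strings.Split(tokenString, ".")
headerJSON, _ := base64.RawURLEncoding.DecodeString(parts[0])
var header struct {
	Kid string `json:"kid"`
}
_ = json.Unmarshal(headerJSON, &header)
// header.Kid is the id to look up at the AWS public-keys endpoint.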

The code to obtain the signing key is relatively simple if we skip error handling and resource cleanup, and take advantage of the fact that the key is always ECDSA:

// The endpoint is region-specific; this example queries eu-central-1.
url := fmt.Sprintf("https://public-keys.auth.elb.eu-central-1.amazonaws.com/%v", kid)
response, err := http.Get(url)
body, err := io.ReadAll(response.Body)
// The key is served PEM-encoded; decode the PEM block and parse the
// ECDSA public key inside it.
block, _ := pem.Decode(body)
key, err := x509.ParsePKIXPublicKey(block.Bytes)
result := key.(*ecdsa.PublicKey)

Once we have the key, validating the token is also simple:

tokenString := r.Header.Get("X-Amzn-Oidc-Data")

// Parse the JWT token, and verify it. To verify the cryptographic signature,
// we need to obtain the signing key, which is specified in the 'kid' field.
token, err := jwtParser.Parse(tokenString, func(token *jwt.Token) (interface{}, error) {
	// Note that the parser has a fixed list of valid signing methods, no need to check it
	if kid, ok := token.Header["kid"]; ok {
		return albKeys.Get(kid.(string))
	}
	return nil, errors.New("no kid field")
})

if err != nil || !token.Valid {
	w.WriteHeader(http.StatusUnauthorized)
	return
}

claims := token.Claims.(jwt.MapClaims)
email := claims["email"].(string)
log.Info().Msgf("authorized user %s", email)
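
The albKeys.Get call above hides a small cache, so that we don’t fetch the key from AWS on every request. A minimal sketch of such a cache might look like this (a hypothetical illustration; fetchKey stands for the key-fetching snippet shown earlier, and the real alb.go also handles errors more carefully):

// keyCache caches ALB public keys by key id, fetching missing ones on demand.
type keyCache struct {
	mu   sync.Mutex
	keys map[string]*ecdsa.PublicKey
}

func (c *keyCache) Get(kid string) (*ecdsa.PublicKey, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if key, ok := c.keys[kid]; ok {
		return key, nil
	}
	key, err := fetchKey(kid) // the HTTP fetch shown earlier
	if err != nil {
		return nil, err
	}
	if c.keys == nil {
		c.keys = make(map[string]*ecdsa.PublicKey)
	}
	c.keys[kid] = key
	return key, nil
}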

The complete source code, with error handling and caching, can be found in alb.go.

Conclusion

We have built a fairly robust way to authenticate users. The AWS load balancer is the only public-facing component; it will not let any unauthenticated requests through to your service, and we rely on Google to know the users and ask them for their passwords.

Of course, we lost something. The flow is not exactly polished: the user is unceremoniously redirected to Google, and there is no option to select another identity provider. Still, if you want a secure service as soon as possible, using ALB authentication might be a good approach.

The complete setup is available in the nanoproxy repository.