Whose Encryption Key is this?

David Levitsky
Published in Simply CloudSec
10 min read · Sep 6, 2022


How a subtle configuration can result in your AWS log data being encrypted with a key that’s not yours.

This blog post is a summarized version of David Levitsky and Matt Lorimor’s talk at BSides Las Vegas 2022. You can watch the recording or read the abstract on the BSides website.

Introduction

Collecting logs in AWS is a key pillar in securing any cloud environment, as they provide important security telemetry to keep track of what is happening inside of an AWS account. Typically, these logs are sent to S3 for storage, where they are available to be queried or processed downstream for custom workflows. During a security investigation, these logs are often critical when piecing together actions taken by a threat actor. However, what happens if you start collecting these logs and then find them to be completely inaccessible during an investigation?

In this blog post, we’ll describe a scenario we uncovered in AWS that could affect the accessibility of your log data. We’ll outline how subtleties in a bucket’s server-side encryption configuration can result in your critical logs being encrypted with a key that you do not have access to, and also detail how you can identify and prevent this issue in your own environment. If you have an AWS logging service (e.g., VPC flow logs) writing to an S3 bucket configured this way, your logs may be inaccessible.

Background

To discuss the issue, we must first have a high-level understanding of how logs are stored and encrypted inside of AWS. AWS logging (for example, VPC flow logs or CloudTrail) is a managed service, which is fantastic from a customer usability perspective: just turn it on, specify a destination, and the logs magically appear.

A common scenario is sending these logs to S3 for cheap long-term storage and taking advantage of S3’s server-side encryption options to ensure that they are encrypted at rest. In fact, your organization probably has compliance or security requirements mandating that all data be encrypted at rest.

See the diagram below for a visualization of an extremely basic logging configuration.

Figure 1 — Basic AWS Logging Architecture

S3 provides several different options for server-side encryption:

  • SSE-S3: Keys are managed by the S3 service. Each object is encrypted with a unique key.
  • SSE-KMS: Similar to SSE-S3, but with the benefits of KMS. Access control and key rotation policies can be customized.
  • SSE-C: Server-side encryption with customer-provided keys. You supply and manage the encryption keys, and Amazon S3 performs the encryption and decryption.

SSE-KMS

While SSE-S3 is easy to set up, in order to ensure more granular and scoped access to logs, SSE-KMS is typically used. Let’s dive a little bit deeper into this encryption configuration.

With SSE-KMS, you have the ability to specify which key you would like to use for encryption purposes. Your options are:

  • The default AWS-managed key present in customer accounts
  • Your own custom KMS key

The AWS-managed key is a special key. Aliased as aws/s3 inside of accounts, it has a key policy that grants any principal inside the account permission to use it, as long as that principal is going through S3. The key policy looks like this:

{
  "Version": "2012-10-17",
  "Id": "auto-s3-2",
  "Statement": [
    {
      "Sid": "Allow access through S3 for all principals in the account that are authorized to use S3",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": [
        "kms:Encrypt",
        "kms:Decrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "kms:ViaService": "s3.<REGION>.amazonaws.com",
          "kms:CallerAccount": "<AWS_ACCOUNT_ID>"
        }
      }
    },
    {
      "Sid": "Allow direct access to key metadata to the account",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:root"
      },
      "Action": [
        "kms:Describe*",
        "kms:Get*",
        "kms:List*"
      ],
      "Resource": "*"
    }
  ]
}

It’s important to note that this key policy is not modifiable by the customer — it’s essentially written in stone. This means that this key is not accessible by entities outside of your account, which will come into play later in this post.

This key is convenient for getting up and running quickly if you need a key to encrypt S3 data inside of your account. However, due to its broad scope and lack of access control customization, specifying a custom KMS key is recommended.

When you provide your own custom KMS key, you have the ability to configure things like the key policy, key rotation schedules, and more.
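To make this concrete, here is a minimal sketch of what a scoped custom key policy might look like, granting decryption only to a single role. The SecurityInvestigator role name is a hypothetical placeholder, and the first statement follows the standard key administration pattern:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Allow the account to administer the key",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:root"
      },
      "Action": "kms:*",
      "Resource": "*"
    },
    {
      "Sid": "Allow only a specific role to decrypt log data",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<AWS_ACCOUNT_ID>:role/SecurityInvestigator"
      },
      "Action": [
        "kms:Decrypt",
        "kms:DescribeKey"
      ],
      "Resource": "*"
    }
  ]
}

Unlike the aws/s3 key policy above, every part of this policy is yours to modify.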

Unexpected Behavior

Now that we have a better understanding of server-side encryption configurations, let’s get back to our original discussion about collecting AWS logs. In order to do this, let’s walk through a scenario we encountered.

Imagine that you’re a security engineer who has just become responsible for some new infrastructure in AWS accounts that needs to be integrated into your existing organization and secured. In order to do this, you might come up with a plan that involves:

  1. Collecting security telemetry to enable threat detection
  2. Assessing the security posture of the new infrastructure
  3. Working to close any gaps between current and desired state

For the scope of this post, we’re only going to focus on the first step.

Let’s assume you have been granted a read-only role in order to start pulling security telemetry, such as VPC flow logs. You’re able to find the correct bucket and list the objects, but when you try to access them, you’re faced with an access denied error. There are no explicit deny statements in the bucket’s resource policy, and your IAM role has all the necessary permissions. What could be going on?

Taking a closer look at the objects, we see that they’re all encrypted with a KMS key, which is great: we want our data to always be encrypted at rest, and the fact that it’s a custom KMS key is even better. Let’s take a look at the key and see if we can glean any additional information about it; perhaps we’re getting an access denied error because the key policy doesn’t grant us access.

Figure 2 — A sample object encrypted with a KMS key.

However, when we click on the key and try to access it, we get another error!

Figure 3 — Error accessing the key.

Upon investigation, we find that the account ID in the key’s ARN is different from the account ID we’re actually operating in. Where is this key coming from? Is this even the AWS log writer that’s writing to the bucket, or something different? Did the data somehow get struck by some cloud-based ransomware?

At this point in time, we only know a few things:

  • The log data is encrypted with a key that we do not have access to for decryption
  • The key lives in an account that we have no knowledge of

Let’s take a look at the bucket and see what else we can learn.

Diving Deeper

When we check the default encryption configuration of the bucket, we notice something strange. SSE-KMS is turned on to encrypt data at rest, but there’s no key specified:

Figure 4 — Encryption is on, but where’s the key?

There are two things to note here. First, this almost looks like a bug in the console: default encryption is enabled, but no key is listed. How could this be possible? It turns out that when configuring SSE-KMS encryption, specifying a key is optional; if no key is provided, S3 implicitly defaults to the AWS-managed aws/s3 key.
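This is also visible outside of the console. Querying such a bucket with the s3api get-bucket-encryption call returns a configuration that simply omits the key. Here is a sketch of the output shape; note the absence of a KMSMasterKeyID field:

{
  "ServerSideEncryptionConfiguration": {
    "Rules": [
      {
        "ApplyServerSideEncryptionByDefault": {
          "SSEAlgorithm": "aws:kms"
        }
      }
    ]
  }
}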

Second, we see from Figure 2 that the key used to encrypt the objects is definitely not the aws/s3 key. This means that the log writer is providing its own key when encrypting these objects. PutObject requests can specify which key to use, and, as long as the bucket policy doesn’t forbid it and there isn’t some key access issue, the object will happily be written with the request-specified encryption. Since the logging service is a managed service, this means that a key is being chosen, outside of your control, to encrypt your log data.
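To illustrate, the PutObject API accepts encryption parameters alongside the object itself. Sketched below as the JSON input shape used by the s3api CLI (the bucket name, object key, and key ARN are all placeholders), a request like this overrides the bucket’s default encryption with whatever key the writer chooses:

{
  "Bucket": "example-log-bucket",
  "Key": "AWSLogs/<ACCOUNT_ID>/vpcflowlogs/example.log.gz",
  "ServerSideEncryption": "aws:kms",
  "SSEKMSKeyId": "arn:aws:kms:<REGION>:<EXTERNAL_ACCOUNT_ID>:key/<KEY_ID>"
}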

Putting It All Together

After lots of experimentation, we discovered that there are three key conditions that combine to trigger this behavior.

  1. SSE-KMS is set as the S3 bucket’s default encryption method
  2. No key is specified in this encryption configuration
  3. An AWS logging service is turned on to write to this bucket

There are some additional subtleties here. If only the first two conditions are present, and you have some service operating inside of your account that writes to the S3 bucket, everything will work fine. This is because the service is running inside of your AWS account, and the aws/s3 key policy will allow access to the key via S3 operations. However, since AWS logging services operate outside of your account, the addition of this third condition results in unexpected behavior where your data is no longer accessible.

The figures below demonstrate the full flow conceptually. Note that we do not work for AWS and are not privy to the exact operations under the hood. This is not an accurate technical diagram and merely serves to illustrate the concepts outlined in this post.

In Figure 5, the logging flow kicks off. A log is generated (for example, a VPC flow log) and the log writer, operating in an AWS-owned service account, decides it’s time to write it to the specified S3 bucket in your account. When the writer goes to write to the bucket, it sees that SSE-KMS encryption is configured. However, since no key is specified, this implicitly defaults to the aws/s3 key, which is not accessible from outside the customer account, so the encryption operation fails.

Figure 5 — First half of writer flow.

This is where the behavior becomes unexpected. Typically, if an operation that encrypts or decrypts data is unable to access the key it needs, it will simply abort and error out. That would result in no logs being written to the bucket and clearly signal that the logging configuration is not valid and needs to be fixed. However, in this scenario, the logging service proceeds by choosing its own key (not configured or specified by the customer) and using it to encrypt the log file. This results in logs being written to the bucket that are no longer accessible by the customer, since the key used to encrypt them lives inside an external account and does not grant access for its usage.

Figure 6 — Full writer flow.

Preventing The Issue

The crux of the issue lies in using SSE-KMS for server-side encryption without specifying a KMS key, which defaults to the aws/s3 key. While AWS makes some references to this being an unsupported configuration for logging services, it’s very easy to miss, and the logging services will even appear to succeed, continuing to deliver logs to your logging bucket; you just won’t be able to decrypt them. Take a look at the IaC snippets below and picture them in a PR containing hundreds or thousands of lines. If you’re not explicitly looking for the absence of a KMS key and aren’t intimately familiar with all the different server-side encryption configurations, you may very well miss it. In fact, you may even commend the author for enforcing encryption on their buckets!

Figure 7 — Terraform snippet with a missing key.
Figure 8 — CloudFormation snippet with a missing key.
Figure 9 — CDK snippet with a missing key.
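The pattern is the same across all three tools. As a hedged sketch, here is what the problematic resource might look like in CloudFormation-style JSON: SSE-KMS is enabled, but no KMSMasterKeyID accompanies the algorithm, so the aws/s3 key is implied.

{
  "Type": "AWS::S3::Bucket",
  "Properties": {
    "BucketName": "example-log-bucket",
    "BucketEncryption": {
      "ServerSideEncryptionConfiguration": [
        {
          "ServerSideEncryptionByDefault": {
            "SSEAlgorithm": "aws:kms"
          }
        }
      ]
    }
  }
}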

In general, custom KMS keys provide more granular security controls and key rotation abilities, so specifying your own key improves the security posture of your cloud environment. There are several different ways to catch and prevent this issue.
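As a reference point, a correct default encryption configuration names the key explicitly. Here is a minimal sketch in the JSON shape accepted by the s3api put-bucket-encryption call (the key ARN is a placeholder for a key you control):

{
  "Rules": [
    {
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:<REGION>:<AWS_ACCOUNT_ID>:key/<KEY_ID>"
      }
    }
  ]
}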

Proactive
If you have the ability to perform static analysis on your codebase, you could leverage a tool to scan your Infrastructure as Code (IaC) resource definitions and flag any encryption configurations that use SSE-KMS but do not specify a KMS key. See below for a sample policy written in Rego that could catch this:

Figure 10 — Sample Rego policy for static IaC analysis.

Additionally, you could take advantage of the x-amz-server-side-encryption-aws-kms-key-id condition key and apply a bucket policy to your logging buckets that requires a specific KMS key (one that you control) to be used. This would deny any log writes to the bucket that do not use the outlined key.

In fact, because any potential object writer can specify any valid encryption settings on a PutObject request, enforcement of object encryption at rest in S3 should not be considered complete unless, at a minimum, a bucket policy also restricts the encryption method and key used on PutObject requests to whatever your requirements are.
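Here is a hedged sketch of such a bucket policy. It denies any PutObject request whose specified KMS key does not match the one you control (the bucket name and key ARN are placeholders; depending on how your writers rely on default encryption, you may need to tune the condition, since StringNotEquals also matches requests that omit the encryption header entirely):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Deny writes that do not use the approved KMS key",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-log-bucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption-aws-kms-key-id": "arn:aws:kms:<REGION>:<AWS_ACCOUNT_ID>:key/<KEY_ID>"
        }
      }
    }
  ]
}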

Reactive

If statically analyzing your infrastructure definitions is not possible, you could also leverage AWS services such as CloudTrail or AWS Config to detect this configuration and generate an alert so it can be fixed. You may also have a custom asset inventory solution, which would give you the ability to search and alert on this configuration.

Summary

AWS logs are an important component of security telemetry in a cloud environment. When collecting these logs, it’s vital to be able to access them immediately to answer questions during security investigations. In this blog post, we outlined how subtleties in the interactions between AWS logging services and S3 encryption configurations can leave your data encrypted with a key that you do not have access to. We also illustrated some methods to prevent this from happening in your environment.

Thanks for reading!
