List s3 stage failed with error Status Code: 403; Error Code: AccessDenied

The Question

For the past 5 days I have tried to deploy a CDK Pipeline using the instructions available here, but with no luck. The buckets that get provisioned consistently seem to lack the relevant policies. As soon as I deploy the pipeline, the upload to S3 fails with: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: xxxx; S3 Extended Request ID: xxxx; Proxy: null) (Service: null; Status Code: 0; Error Code: null; Request ID: null; S3 Extended Request ID: null; Proxy: null) (Service: null; Status Code: 0; Error Code: null; Request ID: null; S3 Extended Request ID: null; Proxy: null). I have deleted the stacks (including the CDK Toolkit one) and restarted the whole configuration from scratch 4 or 5 times. I have also tried versions 1.94.0 and 1.94.1 of the CDK, but no luck there either.
I partly resolved some of the issues by manually adding bucket policies, but that only pushed the failure further down the pipeline (first to the UpdatePipeline stage and then to the Prepare stage), where I ended up completely stuck.

Any help would be greatly appreciated, I am running out of ideas.

Thanks

Environment

  • CDK CLI Version: 1.89.0 (build df7253c)
  • Module Version: 1.94.0 && 1.94.1
  • Node.js Version: 14.15.1
  • OS: macOS Catalina 10.15.7
  • Language (Version): TypeScript (4.2.3)

Other information

Deps:

"devDependencies": { "@aws-cdk/assert": "^1.94.0", "@types/jest": "^26.0.21", "@types/node": "14.14.35", "aws-cdk": "^1.94.0", "jest": "^26.4.2", "ts-jest": "^26.5.4", "ts-node": "^9.0.0", "typescript": "4.2.3" }, "dependencies": { "@aws-cdk/aws-appsync": "^1.94.0", "@aws-cdk/aws-cloudfront": "^1.94.0", "@aws-cdk/aws-cloudfront-origins": "^1.94.0", "@aws-cdk/aws-codebuild": "^1.94.0", "@aws-cdk/aws-codepipeline": "^1.94.0", "@aws-cdk/aws-codepipeline-actions": "^1.94.0", "@aws-cdk/aws-cognito": "^1.94.0", "@aws-cdk/aws-dynamodb": "^1.94.0", "@aws-cdk/aws-iam": "^1.94.0", "@aws-cdk/aws-kms": "^1.94.0", "@aws-cdk/aws-lambda": "^1.94.0", "@aws-cdk/aws-lambda-nodejs": "^1.94.0", "@aws-cdk/aws-pinpoint": "^1.94.0", "@aws-cdk/aws-s3": "^1.94.0", "@aws-cdk/aws-s3-deployment": "^1.94.0", "@aws-cdk/core": "^1.94.0", "@aws-cdk/custom-resources": "^1.94.0", "@aws-cdk/pipelines": "^1.94.0", "@aws-sdk/s3-request-presigner": "^3.9.0", "source-map-support": "^0.5.16" }

Edit: Upgraded to CLI 1.94.1 and deps to 1.94.1, deleted stacks, redeployed, same issue. The buckets are created without policies.

Edit2: Downgraded to "typescript": "~3.9.7" and "@types/node": "10.17.27" to align with what ships today on a cdk init, deleted the package lock, deleted the stacks, bootstrapped, redeployed, and hit the same problem.

Edit3: Since I am using AWS SSO (via this Python package to use SSO from the CLI), I created 2 IAM users with AdminAccess on my pipeline account and my target account to rule out an IAM problem. I deleted my package lock, deleted the stacks, bootstrapped, redeployed, and hit the same problem.

Last updated: 2022-07-28

My users are trying to access objects in my Amazon Simple Storage Service (Amazon S3) bucket, but Amazon S3 is returning the 403 Access Denied error. How can I troubleshoot this error?

Resolution

Use the AWS Systems Manager automation document

Use the AWSSupport-TroubleshootS3PublicRead automation document on AWS Systems Manager. This automation document helps you diagnose issues reading objects from a public S3 bucket that you specify.

Check bucket and object ownership

For AccessDenied errors from GetObject or HeadObject requests, check whether the object is also owned by the bucket owner. Also, verify whether the bucket owner has read or full control access control list (ACL) permissions.

Confirm the account that owns the objects

By default, an S3 object is owned by the AWS account that uploaded it. This is true even when the bucket is owned by another account. If other accounts can upload objects to your bucket, then verify the account that owns the objects that your users can't access.

Note: If you receive errors when running AWS CLI commands, make sure that you’re using the most recent version of the AWS CLI.

1.    Run the list-buckets AWS Command Line Interface (AWS CLI) command to get the Amazon S3 canonical ID for your account by querying the Owner ID.

aws s3api list-buckets --query "Owner.ID"

2.    Run the list-objects command to get the Amazon S3 canonical ID of the account that owns the object that users can't access. Replace DOC-EXAMPLE-BUCKET with the name of your bucket and exampleprefix with your prefix value.

aws s3api list-objects --bucket DOC-EXAMPLE-BUCKET --prefix exampleprefix

Tip: Use the list-objects command to check several objects.

3.    If the canonical IDs don't match, then you don't own the object. The object owner can grant you full control of the object by running the put-object-acl command. Replace DOC-EXAMPLE-BUCKET with the name of the bucket that contains the objects. Replace exampleobject.jpg with your key name.

aws s3api put-object-acl --bucket DOC-EXAMPLE-BUCKET --key exampleobject.jpg --acl bucket-owner-full-control

Check the bucket policy or IAM user policies

Review the bucket policy or associated IAM user policies for any statements that might be denying access. Verify that the requests to your bucket meet any conditions in the bucket policy or IAM policies. Check for any incorrect deny statements, missing actions, or incorrect spacing in a policy.

Deny statement conditions

Check deny statements for conditions that block access based on the following:

  • multi-factor authentication (MFA)
  • encryption keys
  • specific IP address
  • specific VPCs or VPC endpoints
  • specific IAM users or roles

Note: If you require MFA and users send requests through the AWS CLI, then make sure that the users configure the AWS CLI to use MFA.

For example, in the following bucket policy, Statement1 allows public access to download objects (s3:GetObject) from DOC-EXAMPLE-BUCKET. However, Statement2 explicitly denies everyone access to download objects from DOC-EXAMPLE-BUCKET unless the request is from the VPC endpoint vpce-1a2b3c4d. In this case, the deny statement takes precedence. This means that users who try to download objects from outside of vpce-1a2b3c4d are denied access.

{ "Id": "Policy1234567890123", "Version": "2012-10-17", "Statement": [ { "Sid": "Statement1", "Action": [ "s3:GetObject" ], "Effect": "Allow", "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*", "Principal": "*" }, { "Sid": "Statement2", "Action": [ "s3:GetObject" ], "Effect": "Deny", "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*", "Condition": { "StringNotEquals": { "aws:SourceVpce": "vpce-1a2b3c4d" } }, "Principal": "*" } ] }

Bucket policies or IAM policies

Check that the bucket policy or IAM policies allow the Amazon S3 actions that your users need. For example, the following bucket policy doesn't grant the s3:PutObjectAcl action. If the IAM user tries to modify the access control list (ACL) of an object, then the user gets an Access Denied error.

{ "Id": "Policy1234567890123", "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1234567890123", "Action": [ "s3:PutObject" ], "Effect": "Allow", "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*", "Principal": { "AWS": [ "arn:aws:iam::111122223333:user/Dave" ] } } ] }

Other policy errors

Check that there aren’t any extra spaces or incorrect ARNs in the bucket policy or IAM user policies.

For example, an IAM policy might have an extra space in the Amazon Resource Name (ARN), as follows: arn:aws:s3::: DOC-EXAMPLE-BUCKET/*. In this case, the ARN is incorrectly evaluated as arn:aws:s3:::%20DOC-EXAMPLE-BUCKET/ and the IAM user gets an access denied error.
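A quick way to see why the space breaks the policy: the space becomes part of the resource name (percent-encoded as %20), so the policy's ARN never matches the real bucket ARN. A minimal sketch:

```python
from urllib.parse import quote

# A stray space inside an ARN is treated as part of the resource name.
bad_arn = "arn:aws:s3::: DOC-EXAMPLE-BUCKET/*"
good_arn = "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"

# Percent-encoding (keeping the ARN's own separators) exposes the space:
encoded = quote(bad_arn, safe=":/*")
print(encoded)                # arn:aws:s3:::%20DOC-EXAMPLE-BUCKET/*
print(bad_arn == good_arn)    # False - the two ARNs name different resources
```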

Confirm that IAM permissions boundaries allow access to Amazon S3

Review the IAM permissions boundaries that are set on the IAM identities that are trying to access the bucket. Confirm that the IAM permissions boundaries allow access to Amazon S3.

Check the bucket's Amazon S3 Block Public Access settings

If you're getting Access Denied errors on public read requests that should be allowed, check the bucket's Amazon S3 Block Public Access settings.

Review the S3 Block Public Access settings at both the account and bucket level. These settings can override permissions that allow public read access. Amazon S3 Block Public Access can apply to individual buckets or AWS accounts.
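As a sketch, the four settings in a Block Public Access configuration (the shape that get-public-access-block returns under PublicAccessBlockConfiguration) can be inspected to see which ones could override public read access. The configuration dict below is a hypothetical example, not a real API call:

```python
# Sketch: report which Block Public Access settings are turned on.
# Any of these can override permissions that allow public read access.
def blocking_flags(config):
    flags = ["BlockPublicAcls", "IgnorePublicAcls",
             "BlockPublicPolicy", "RestrictPublicBuckets"]
    return [f for f in flags if config.get(f)]

# Hypothetical configuration for a bucket:
config = {"BlockPublicAcls": False, "IgnorePublicAcls": True,
          "BlockPublicPolicy": False, "RestrictPublicBuckets": True}

print(blocking_flags(config))  # ['IgnorePublicAcls', 'RestrictPublicBuckets']
```

Remember to run the same check at the account level, because account-level settings apply to every bucket in the account.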

Review user credentials

Review the credentials that your users have configured to access Amazon S3. AWS SDKs and the AWS CLI must be configured to use the credentials of the IAM user or role with access to your bucket.

For the AWS CLI, run the configure list command to check the configured credentials:

aws configure list

If users access your bucket through an Amazon Elastic Compute Cloud (Amazon EC2) instance, then verify that the instance is using the correct role. Connect to the instance, then run the get-caller-identity command:

aws sts get-caller-identity

Review temporary security credentials

If users receive Access Denied errors from temporary security credentials granted using AWS Security Token Service (AWS STS), then review the associated session policy. When an administrator creates temporary security credentials using the AssumeRole API call, or the assume-role command, they can pass session-specific policies.

To find the session policies associated with the Access Denied errors from Amazon S3, look for AssumeRole events within the AWS CloudTrail event history. Make sure to look for AssumeRole events in the same timeframe as the failed requests to access Amazon S3. Then, review the requestParameters field in the relevant CloudTrail logs for any policy or policyArns parameters. Confirm that the associated policy or policy ARN grants the necessary Amazon S3 permissions.

For example, the following snippet of a CloudTrail log shows that the temporary credentials include an inline session policy that grants s3:GetObject permissions to DOC-EXAMPLE-BUCKET:

"requestParameters": { "roleArn": "arn:aws:iam::123412341234:role/S3AdminAccess", "roleSessionName": "s3rolesession", "policy": "{ "Version": "2012-10-17", "Statement": [{ "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": ["arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"] } } ] } " }

Confirm that the Amazon VPC endpoint policy includes the correct permissions to access your S3 buckets and objects

If users access your bucket with an EC2 instance routed through a VPC endpoint, then check the VPC endpoint policy.

For example, the following VPC endpoint policy allows access only to DOC-EXAMPLE-BUCKET. Users who send requests through this VPC endpoint can’t access any other bucket.

{ "Id": "Policy1234567890123", "Version": "2012-10-17", "Statement": [ { "Sid": "Stmt1234567890123", "Action": [ "s3:GetObject", "s3:PutObject", "s3:ListBucket" ], "Effect": "Allow", "Resource": [ "arn:aws:s3:::DOC-EXAMPLE-BUCKET", "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*" ], "Principal": "*" } ] }

Review your Amazon S3 access point's IAM policy

If you use an Amazon S3 access point to manage access to your bucket, then review the access point's IAM policy.

Permissions granted in an access point policy are only effective if the underlying bucket policy also allows the same access. Confirm that the bucket policy and access point policy grant the correct permissions.

Confirm that the object isn't missing and doesn't contain special characters

Check whether the requested object exists in the bucket. If it doesn't, the request fails because Amazon S3 can't find the object, and without proper s3:ListBucket permissions you receive an Access Denied error instead of a 404 Not Found error.

An object that has a special character (such as a space) requires special handling to retrieve the object.

Run the head-object AWS CLI command to check if an object exists in the bucket. Replace DOC-EXAMPLE-BUCKET with the name of the bucket that you want to check.

aws s3api head-object --bucket DOC-EXAMPLE-BUCKET --key exampleobject.jpg

If the object exists in the bucket, then the Access Denied error isn't masking a 404 Not Found error. Check other configuration requirements to resolve the Access Denied error.

If the object isn’t in the bucket, then the Access Denied error is masking a 404 Not Found error. Resolve the issue related to the missing object.
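The decision logic above can be sketched as a small function. This is only an illustration of the article's reasoning about 403-masking-404, not an AWS API call:

```python
# Sketch: interpret a HeadObject outcome. Without s3:ListBucket permissions,
# S3 returns 403 for a missing object instead of 404, so a 403 alone is
# not conclusive.
def diagnose_head_object(status_code, has_list_bucket):
    if status_code == 200:
        return "object exists; the Access Denied error has another cause"
    if status_code == 404:
        return "object missing; fix the missing-object issue"
    if status_code == 403 and not has_list_bucket:
        return "denied; could be a genuine deny or a masked 404"
    if status_code == 403:
        return "denied; the object exists but access is blocked"
    return "unexpected status"

print(diagnose_head_object(404, True))   # object missing; fix the missing-object issue
print(diagnose_head_object(403, False))  # denied; could be a genuine deny or a masked 404
```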

Check the AWS KMS encryption configuration

Note the following about AWS KMS (SSE-KMS) encryption:

  • If an IAM user can’t access an object that the user has full permissions to, then check if the object is encrypted by SSE-KMS. You can use the Amazon S3 console to view the object’s properties, which include the object’s server-side encryption information.
  • If the object is SSE-KMS encrypted, then make sure that the KMS key policy grants the IAM user the minimum required permissions for using the key. For example, if the IAM user is using the key only for downloading an S3 object, then the IAM user must have kms:Decrypt permissions. For more information, see Allows access to the AWS account and enables IAM policies.
  • If the IAM identity and key are in the same account, then kms:Decrypt permissions should be granted using the key policy. The key policy must reference the same IAM identity as the IAM policy.
  • If the IAM user belongs to a different account than the AWS KMS key, then these permissions must also be granted on the IAM policy. For example, to download the SSE-KMS encrypted objects, the kms:Decrypt permissions must be specified in both the key policy and IAM policy. For more information about cross-account access between the IAM user and KMS key, see Allowing users in other accounts to use a KMS key.
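As a hedged sketch of the same-account case, a minimal key-policy statement granting decrypt access might look like the following (the Sid, account ID, and user name are placeholders, not values from this article):

```json
{
  "Sid": "AllowS3ObjectDecrypt",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::111122223333:user/Dave" },
  "Action": "kms:Decrypt",
  "Resource": "*"
}
```

In a key policy, "Resource": "*" refers to the KMS key that the policy is attached to, and the Principal must name the same IAM identity that the IAM policy grants S3 access to.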

Confirm that the request-payer parameter is specified by users (if you're using Requester Pays)

If your bucket has Requester Pays activated, then users from other accounts must specify the request-payer parameter when they send requests to your bucket. To check whether Requester Pays is enabled, use the Amazon S3 console to view your bucket’s properties.

The following example AWS CLI command includes the correct parameter to access a cross-account bucket with Requester Pays:

aws s3 cp exampleobject.jpg s3://DOC-EXAMPLE-BUCKET/exampleobject.jpg --request-payer requester

Check your AWS Organizations service control policy

If you're using AWS Organizations, then check the service control policies to make sure that access to Amazon S3 is allowed. Service control policies specify the maximum permissions for the affected accounts. For example, the following policy explicitly denies access to Amazon S3 and results in an Access Denied error:

{ "Version": "2012-10-17", "Statement": [ { "Effect": "Deny", "Action": "s3:*", "Resource": "*" } ] }

