
botocore.exceptions.ClientError: An error occurred (InvalidClientTokenId) #5294

Closed
askkhan84 opened this issue Jul 7, 2022 · 1 comment

askkhan84 commented Jul 7, 2022

I have read this issue and this one, but they don't work for me.

I am trying to set up a mock test for my Lambda. The Lambda is triggered by an S3 PutObject event; it downloads the file, processes it, and uploads the result to AWS CloudSearch. So I am mocking S3 and SNS but not CloudSearch (it's not supported by moto, but it's acceptable to me if the test hits the real AWS API in this case).

My test file looks like this:

import json
import pytest
from src.upload_file import upload_files
import mock
import os
import moto
from moto import mock_s3
import boto3


@pytest.fixture()
def s3_event():
    """ Generates S3 Event"""

    return {
        "Records": [
            {
                "eventVersion": "2.1",
                "eventSource": "aws:s3",
                "awsRegion": "ap-southeast-2",
                "eventTime": "2022-07-07T06:19:19.407Z",
                "eventName": "ObjectCreated:Put",
                "userIdentity": {
                    "principalId": "AWS:AROAXxxxxxxxxxxxxxxxx:test"
                },
                "requestParameters": {
                    "sourceIPAddress": "141.168.xx.xx"
                },
                "responseElements": {
                    "x-amz-request-id": "K3940Q1FFVEJZERT",
                    "x-amz-id-2": "asdfa"
                },
                "s3": {
                    "s3SchemaVersion": "1.0",
                    "configurationId": "02c99d23-183a-4397-927e-7678b4866df2",
                    "bucket": {
                        "name": "test-cloudsearch-111111111111",
                        "ownerIdentity": {
                            "principalId": "A1XXXXXXXX"
                        },
                        "arn": "arn:aws:s3:::test-cloudsearch-111111111111"
                    },
                    "object": {
                        "key": "upload_your_word_doc_here/ID.docx",
                        "size": 160063,
                        "eTag": "b56af65b7xxxxxxxxxxxxxxxxx",
                        "sequencer": "0062C67AE751BC5BE4"
                    }
                }
            }
        ]
    }

BUCKET_NAME = 'test-cloudsearch-111111111111'  # must match the bucket name in the S3 event above
FILE_NAME = '/sample_reports/ID.docx'
FILE_LOCATION = 'upload_your_word_doc_here/ID.docx'

#@mock_lambda
#@mock_sts
@mock_s3
@mock.patch.dict(os.environ, {'SNS_TOPIC_ARN': "arn:aws:sns:ap-southeast-2:1111111111:Test-CloudSearch-Alert"})
@mock.patch.dict(os.environ, {'CS_DOMAIN_NAME': "testdomain"})
@mock.patch.dict(os.environ, {'FAILED_PROCESSING_FOLDER': "failed_processing"})
@mock.patch.dict(os.environ, {'SUCCESS_PROCESSING_FOLDER': "succesfully_processed/"})
@mock.patch.dict(os.environ, {'TABLES_SEARCH_STRING': "Test Findings"})
def test_lambda_handler(s3_event):
    s3 = boto3.resource("s3")
    bucket = s3.create_bucket(Bucket=BUCKET_NAME, CreateBucketConfiguration={'LocationConstraint': 'ap-southeast-2'})
    s3.meta.client.upload_file(FILE_NAME, BUCKET_NAME, FILE_LOCATION)
    boto3.setup_default_session()
    # Calling my actual Lambda handler
    response = upload_files.lambda_handler(s3_event, None)

    # Verify the response
    assert response == "File successfully processed"

The upload_files.lambda_handler code, which lives in a separate file, looks like this:

import logging
import os
from urllib.parse import unquote_plus

import boto3

logger = logging.getLogger()


def lambda_handler(event, context):
    s3 = boto3.resource('s3')
    user_identity = event['Records'][0]['userIdentity']['principalId']
    email = user_identity[user_identity.rindex(":") + 1:user_identity.rindex('-')] + '@test.com'
    logger.debug("Email address:{}".format(email))
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = unquote_plus(event['Records'][0]['s3']['object']['key'])
    logger.debug("bucket name:{}".format(bucket))
    logger.debug("filename:{}".format(key))
    temp_file = key[key.rindex('/') + 1:]
    logger.debug("Filename without the prefix:{}".format(temp_file))
    local_file_name = f'/tmp/{temp_file}'
    s3.Bucket(bucket).download_file(key, local_file_name)

    # performs some processing on the downloaded file and attempts to upload to CloudSearch
    client = boto3.client('cloudsearch')
    domain_name = os.environ['CS_DOMAIN_NAME']
    logger.debug("CS domain name:{}".format(domain_name))
    info = client.describe_domains(DomainNames=[domain_name])
    endpoint_url = info['DomainStatusList'][0]['DocService']['Endpoint']
    logger.debug("Endpoint document url:{}".format(endpoint_url))
    client = boto3.client('cloudsearchdomain', endpoint_url=f'https://{endpoint_url}')

It throws this error:

Traceback (most recent call last):
  File "src/upload_file/upload_files.py", line 148, in lambda_handler
    info = client.describe_domains(DomainNames=[domain_name])
  File "Library/Python/3.9/lib/python/site-packages/botocore/client.py", line 388, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "Library/Python/3.9/lib/python/site-packages/botocore/client.py", line 708, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (InvalidClientTokenId) when calling the DescribeDomains operation: The security token included in the request is invalid.

I tried calling sts.get_caller_identity() and it gives me the same error. If I disable the S3 mocking, it works fine and reads the AWS credential environment variables properly.
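
For reference, a minimal sketch of that check (the explicit region name and the credential env-var print are added here just for illustration):

import os

import boto3
from moto import mock_s3


@mock_s3
def check_identity():
    # While the S3 mock is active, moto swaps the AWS credential
    # environment variables for fake values, so this STS call is signed
    # with a token that real AWS rejects.
    print(os.environ.get("AWS_ACCESS_KEY_ID"))
    sts = boto3.client("sts", region_name="ap-southeast-2")
    sts.get_caller_identity()  # botocore.exceptions.ClientError: InvalidClientTokenId


check_identity()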

I cannot disable the S3 mock, because I need the Lambda to be able to read the file and upload it to CloudSearch. I am not sure what the workaround for this could be.

bblommers (Collaborator) commented Jul 26, 2022

Hi @askkhan84, this behavior is actually on purpose. We intercept all requests to AWS to prevent people from accidentally changing their real infrastructure.

There is no way to intercept only some services and let others pass through. There is an outstanding issue to tackle this, #4597, but there has been no real progress on it yet.
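
In the meantime, one possible workaround (not suggested in the thread itself, just a sketch) is to stub the CloudSearch clients in the test yourself, so moto keeps mocking S3 while the CloudSearch calls never leave the process. A minimal sketch, assuming the handler creates its clients via boto3.client as shown above; the endpoint string and the DomainStatusList payload are placeholders:

from unittest import mock

import boto3

_real_client = boto3.client


def fake_client(service_name, *args, **kwargs):
    # Return a stub for the CloudSearch clients only; every other service
    # (S3, SNS, ...) still gets a real client, which moto intercepts.
    if service_name in ("cloudsearch", "cloudsearchdomain"):
        stub = mock.MagicMock()
        stub.describe_domains.return_value = {
            "DomainStatusList": [
                {"DocService": {"Endpoint": "doc-testdomain.ap-southeast-2.cloudsearch.amazonaws.com"}}
            ]
        }
        return stub
    return _real_client(service_name, *args, **kwargs)


# Inside test_lambda_handler, wrap the handler call:
# with mock.patch("boto3.client", side_effect=fake_client):
#     response = upload_files.lambda_handler(s3_event, None)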
