chore: patch for AWS::Bedrock::DataSource DataSourceConfiguration #1255

Merged: 3 commits into main from yuanhaoz/bedrock-patch, Aug 22, 2024

Conversation

GavinZZ (Contributor) commented Aug 22, 2024
@aws-cdk/aws-service-spec: Model database diff detected

└[~] service aws-bedrock
  └ resources
     └[~] resource AWS::Bedrock::DataSource
       ├ properties
       │  └[+] DataSourceConfiguration: DataSourceConfiguration (required)
       └ types
          ├[+] type ConfluenceCrawlerConfiguration
          │ ├  documentation: The configuration of the Confluence content. For example, configuring specific types of Confluence content.
          │ │  name: ConfluenceCrawlerConfiguration
          │ └ properties
          │    └FilterConfiguration: CrawlFilterConfiguration
          ├[+] type ConfluenceDataSourceConfiguration
          │ ├  documentation: The configuration information to connect to Confluence as your data source.
          │ │  name: ConfluenceDataSourceConfiguration
          │ └ properties
          │    ├SourceConfiguration: ConfluenceSourceConfiguration (required)
          │    └CrawlerConfiguration: ConfluenceCrawlerConfiguration
          ├[+] type ConfluenceSourceConfiguration
          │ ├  documentation: The endpoint information to connect to your Confluence data source.
          │ │  name: ConfluenceSourceConfiguration
          │ └ properties
          │    ├HostUrl: string (required)
          │    ├HostType: string (required)
          │    ├AuthType: string (required)
          │    └CredentialsSecretArn: string (required)
          ├[+] type CrawlFilterConfiguration
          │ ├  documentation: The configuration of filtering the data source content. For example, configuring regular expression patterns to include or exclude certain content.
          │ │  name: CrawlFilterConfiguration
          │ └ properties
          │    ├Type: string (required)
          │    └PatternObjectFilter: PatternObjectFilterConfiguration
          ├[+] type DataSourceConfiguration
          │ ├  documentation: The connection configuration for the data source.
          │ │  name: DataSourceConfiguration
          │ └ properties
          │    ├Type: string (required, immutable)
          │    ├S3Configuration: S3DataSourceConfiguration
          │    ├ConfluenceConfiguration: ConfluenceDataSourceConfiguration
          │    ├SalesforceConfiguration: SalesforceDataSourceConfiguration
          │    ├SharePointConfiguration: SharePointDataSourceConfiguration
          │    └WebConfiguration: WebDataSourceConfiguration
          ├[+] type PatternObjectFilter
          │ ├  documentation: The specific filters applied to your data source content. You can filter out or include certain content.
          │ │  name: PatternObjectFilter
          │ └ properties
          │    ├ObjectType: string (required)
          │    ├InclusionFilters: Array<string>
          │    └ExclusionFilters: Array<string>
          ├[+] type PatternObjectFilterConfiguration
          │ ├  documentation: The configuration of filtering certain objects or content types of the data source.
          │ │  name: PatternObjectFilterConfiguration
          │ └ properties
          │    └Filters: Array<PatternObjectFilter> (required)
          ├[+] type S3DataSourceConfiguration
          │ ├  documentation: The configuration information to connect to Amazon S3 as your data source.
          │ │  name: S3DataSourceConfiguration
          │ └ properties
          │    ├BucketArn: string (required)
          │    ├InclusionPrefixes: Array<string>
          │    └BucketOwnerAccountId: string
          ├[+] type SalesforceCrawlerConfiguration
          │ ├  documentation: The configuration of the Salesforce content. For example, configuring specific types of Salesforce content.
          │ │  name: SalesforceCrawlerConfiguration
          │ └ properties
          │    └FilterConfiguration: CrawlFilterConfiguration
          ├[+] type SalesforceDataSourceConfiguration
          │ ├  documentation: The configuration information to connect to Salesforce as your data source.
          │ │  name: SalesforceDataSourceConfiguration
          │ └ properties
          │    ├SourceConfiguration: SalesforceSourceConfiguration (required)
          │    └CrawlerConfiguration: SalesforceCrawlerConfiguration
          ├[+] type SalesforceSourceConfiguration
          │ ├  documentation: The endpoint information to connect to your Salesforce data source.
          │ │  name: SalesforceSourceConfiguration
          │ └ properties
          │    ├HostUrl: string (required)
          │    ├AuthType: string (required)
          │    └CredentialsSecretArn: string (required)
          ├[+] type SeedUrl
          │ ├  documentation: The seed or starting point URL. You should be authorized to crawl the URL.
          │ │  name: SeedUrl
          │ └ properties
          │    └Url: string (required)
          ├[+] type SharePointCrawlerConfiguration
          │ ├  documentation: The configuration of the SharePoint content. For example, configuring specific types of SharePoint content.
          │ │  name: SharePointCrawlerConfiguration
          │ └ properties
          │    └FilterConfiguration: CrawlFilterConfiguration
          ├[+] type SharePointDataSourceConfiguration
          │ ├  documentation: The configuration information to connect to SharePoint as your data source.
          │ │  name: SharePointDataSourceConfiguration
          │ └ properties
          │    ├SourceConfiguration: SharePointSourceConfiguration (required)
          │    └CrawlerConfiguration: SharePointCrawlerConfiguration
          ├[+] type SharePointSourceConfiguration
          │ ├  documentation: The endpoint information to connect to your SharePoint data source.
          │ │  name: SharePointSourceConfiguration
          │ └ properties
          │    ├SiteUrls: Array<string> (required)
          │    ├HostType: string (required)
          │    ├AuthType: string (required)
          │    ├CredentialsSecretArn: string (required)
          │    ├TenantId: string
          │    └Domain: string (required)
          ├[+] type UrlConfiguration
          │ ├  documentation: The configuration of web URLs that you want to crawl. You should be authorized to crawl the URLs.
          │ │  name: UrlConfiguration
          │ └ properties
          │    └SeedUrls: Array<SeedUrl> (required)
          ├[+] type WebCrawlerConfiguration
          │ ├  documentation: The configuration of web URLs that you want to crawl. You should be authorized to crawl the URLs.
          │ │  name: WebCrawlerConfiguration
          │ └ properties
          │    ├CrawlerLimits: WebCrawlerLimits
          │    ├InclusionFilters: Array<string>
          │    ├ExclusionFilters: Array<string>
          │    └Scope: string
          ├[+] type WebCrawlerLimits
          │ ├  documentation: The rate limits for the URLs that you want to crawl. You should be authorized to crawl the URLs.
          │ │  name: WebCrawlerLimits
          │ └ properties
          │    └RateLimit: integer
          ├[+] type WebDataSourceConfiguration
          │ ├  documentation: The configuration details for the web data source.
          │ │  name: WebDataSourceConfiguration
          │ └ properties
          │    ├SourceConfiguration: WebSourceConfiguration (required)
          │    └CrawlerConfiguration: WebCrawlerConfiguration
          └[+] type WebSourceConfiguration
            ├  documentation: The configuration of the URL/URLs for the web content that you want to crawl. You should be authorized to crawl the URLs.
            │  name: WebSourceConfiguration
            └ properties
               └UrlConfiguration: UrlConfiguration (required)
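As the diff shows, the new `DataSourceConfiguration` property acts as a tagged union: `Type` is required and immutable, and one of the per-source blocks (`S3Configuration`, `ConfluenceConfiguration`, `SalesforceConfiguration`, `SharePointConfiguration`, `WebConfiguration`) carries the connector details. A minimal sketch of what a template body using the new shape might look like, with property names taken from the diff above; the `Type` enum values, the `KnowledgeBaseId`/`Name` resource properties, and all ARNs/URLs here are illustrative placeholders, not part of this diff:

```python
import json

def bedrock_data_source(name, ds_type, variant_key, variant_config):
    """Build a minimal AWS::Bedrock::DataSource resource body.

    DataSourceConfiguration property names follow the model diff above;
    Name and KnowledgeBaseId are placeholder resource-level properties
    not covered by this diff.
    """
    return {
        "Type": "AWS::Bedrock::DataSource",
        "Properties": {
            "Name": name,
            "KnowledgeBaseId": "KB_ID_PLACEHOLDER",
            "DataSourceConfiguration": {
                "Type": ds_type,               # required, immutable
                variant_key: variant_config,   # one variant per Type
            },
        },
    }

# S3 variant: only BucketArn is required; prefixes and owner are optional.
s3_source = bedrock_data_source(
    "docs-bucket",
    "S3",
    "S3Configuration",
    {
        "BucketArn": "arn:aws:s3:::example-bucket",
        "InclusionPrefixes": ["docs/"],
    },
)

# Web-crawler variant: SeedUrls nest under SourceConfiguration ->
# UrlConfiguration, while rate limits live in CrawlerConfiguration.
web_source = bedrock_data_source(
    "site-crawl",
    "WEB",
    "WebConfiguration",
    {
        "SourceConfiguration": {
            "UrlConfiguration": {
                "SeedUrls": [{"Url": "https://example.com"}],
            },
        },
        "CrawlerConfiguration": {
            "CrawlerLimits": {"RateLimit": 50},
        },
    },
)

print(json.dumps(s3_source, indent=2))
```

Note that because `DataSourceConfiguration.Type` is modeled as immutable, switching a deployed data source from, say, `S3` to `WEB` would force a resource replacement rather than an in-place update.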

@aws-cdk-automation aws-cdk-automation added this pull request to the merge queue Aug 22, 2024
Merged via the queue into main with commit 2926118 Aug 22, 2024
8 checks passed
@aws-cdk-automation aws-cdk-automation deleted the yuanhaoz/bedrock-patch branch August 22, 2024 20:47