feat(aws-android-sdk-rekognition): update models to latest (#2573)
awsmobilesdk authored Aug 9, 2021
1 parent c4357f8 commit a1efe56
Showing 33 changed files with 1,743 additions and 303 deletions.
@@ -1084,8 +1084,8 @@ DetectProtectiveEquipmentResult detectProtectiveEquipment(
  * </p>
  * <p>
  * A word is one or more ISO basic latin script characters that are not
- * separated by spaces. <code>DetectText</code> can detect up to 50 words in
- * an image.
+ * separated by spaces. <code>DetectText</code> can detect up to 100 words
+ * in an image.
  * </p>
  * <p>
  * A line is a string of equally spaced words. A line isn't necessarily a
@@ -1260,18 +1260,23 @@ GetCelebrityRecognitionResult getCelebrityRecognition(
 
 /**
  * <p>
- * Gets the unsafe content analysis results for a Amazon Rekognition Video
- * analysis started by <a>StartContentModeration</a>.
+ * Gets the inappropriate, unwanted, or offensive content analysis results
+ * for a Amazon Rekognition Video analysis started by
+ * <a>StartContentModeration</a>. For a list of moderation labels in Amazon
+ * Rekognition, see <a href=
+ * "https://docs.aws.amazon.com/rekognition/latest/dg/moderation.html#moderation-api"
+ * >Using the image and video moderation APIs</a>.
  * </p>
  * <p>
- * Unsafe content analysis of a video is an asynchronous operation. You
- * start analysis by calling <a>StartContentModeration</a> which returns a
- * job identifier (<code>JobId</code>). When analysis finishes, Amazon
- * Rekognition Video publishes a completion status to the Amazon Simple
- * Notification Service topic registered in the initial call to
- * <code>StartContentModeration</code>. To get the results of the unsafe
- * content analysis, first check that the status value published to the
- * Amazon SNS topic is <code>SUCCEEDED</code>. If so, call
+ * Amazon Rekognition Video inappropriate or offensive content detection in
+ * a stored video is an asynchronous operation. You start analysis by
+ * calling <a>StartContentModeration</a> which returns a job identifier (
+ * <code>JobId</code>). When analysis finishes, Amazon Rekognition Video
+ * publishes a completion status to the Amazon Simple Notification Service
+ * topic registered in the initial call to
+ * <code>StartContentModeration</code>. To get the results of the content
+ * analysis, first check that the status value published to the Amazon SNS
+ * topic is <code>SUCCEEDED</code>. If so, call
  * <code>GetContentModeration</code> and pass the job identifier (
  * <code>JobId</code>) from the initial call to
  * <code>StartContentModeration</code>.
@@ -1281,10 +1286,10 @@ GetCelebrityRecognitionResult getCelebrityRecognition(
  * Rekognition Devlopers Guide.
  * </p>
  * <p>
- * <code>GetContentModeration</code> returns detected unsafe content labels,
- * and the time they are detected, in an array,
- * <code>ModerationLabels</code>, of <a>ContentModerationDetection</a>
- * objects.
+ * <code>GetContentModeration</code> returns detected inappropriate,
+ * unwanted, or offensive content moderation labels, and the time they are
+ * detected, in an array, <code>ModerationLabels</code>, of
+ * <a>ContentModerationDetection</a> objects.
  * </p>
  * <p>
  * By default, the moderated labels are returned sorted by time, in
@@ -1305,8 +1310,8 @@ GetCelebrityRecognitionResult getCelebrityRecognition(
  * <code>GetContentModeration</code>.
  * </p>
  * <p>
- * For more information, see Detecting Unsafe Content in the Amazon
- * Rekognition Developer Guide.
+ * For more information, see Content moderation in the Amazon Rekognition
+ * Developer Guide.
  * </p>
  *
  * @param getContentModerationRequest
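The MaxResults/NextToken pagination that the Javadoc above describes can be sketched as a loop. The `ModerationPage` and `PageSource` types below are minimal hypothetical stand-ins for the SDK's `GetContentModerationResult` and client, not the real API surface:

```java
import java.util.ArrayList;
import java.util.List;

public class ModerationPager {
    // Minimal stand-in for GetContentModerationResult; the real SDK
    // result class carries more fields.
    static class ModerationPage {
        final List<String> labels;  // stands in for ModerationLabels
        final String nextToken;     // null when there are no more pages
        ModerationPage(List<String> labels, String nextToken) {
            this.labels = labels;
            this.nextToken = nextToken;
        }
    }

    // Stands in for calling GetContentModeration with a JobId and the
    // pagination token from the previous response.
    interface PageSource {
        ModerationPage getPage(String jobId, String nextToken);
    }

    // Follow NextToken until the service reports no further pages,
    // collecting all moderation labels along the way.
    static List<String> fetchAllLabels(PageSource client, String jobId) {
        List<String> all = new ArrayList<>();
        String token = null;
        do {
            ModerationPage page = client.getPage(jobId, token);
            all.addAll(page.labels);
            token = page.nextToken;
        } while (token != null);
        return all;
    }
}
```

With a real client, each iteration would set the request's NextToken from the previous result before calling GetContentModeration again.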
@@ -2295,27 +2300,31 @@ StartCelebrityRecognitionResult startCelebrityRecognition(
 
 /**
  * <p>
- * Starts asynchronous detection of unsafe content in a stored video.
+ * Starts asynchronous detection of inappropriate, unwanted, or offensive
+ * content in a stored video. For a list of moderation labels in Amazon
+ * Rekognition, see <a href=
+ * "https://docs.aws.amazon.com/rekognition/latest/dg/moderation.html#moderation-api"
+ * >Using the image and video moderation APIs</a>.
  * </p>
  * <p>
  * Amazon Rekognition Video can moderate content in a video stored in an
  * Amazon S3 bucket. Use <a>Video</a> to specify the bucket name and the
  * filename of the video. <code>StartContentModeration</code> returns a job
  * identifier (<code>JobId</code>) which you use to get the results of the
- * analysis. When unsafe content analysis is finished, Amazon Rekognition
- * Video publishes a completion status to the Amazon Simple Notification
- * Service topic that you specify in <code>NotificationChannel</code>.
+ * analysis. When content analysis is finished, Amazon Rekognition Video
+ * publishes a completion status to the Amazon Simple Notification Service
+ * topic that you specify in <code>NotificationChannel</code>.
  * </p>
  * <p>
- * To get the results of the unsafe content analysis, first check that the
- * status value published to the Amazon SNS topic is <code>SUCCEEDED</code>.
- * If so, call <a>GetContentModeration</a> and pass the job identifier (
+ * To get the results of the content analysis, first check that the status
+ * value published to the Amazon SNS topic is <code>SUCCEEDED</code>. If so,
+ * call <a>GetContentModeration</a> and pass the job identifier (
  * <code>JobId</code>) from the initial call to
  * <code>StartContentModeration</code>.
  * </p>
  * <p>
- * For more information, see Detecting Unsafe Content in the Amazon
- * Rekognition Developer Guide.
+ * For more information, see Content moderation in the Amazon Rekognition
+ * Developer Guide.
  * </p>
  *
  * @param startContentModerationRequest
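The asynchronous Start → SNS status → Get flow described in the Javadoc above can be sketched as follows. A real integration would use the SDK's StartContentModeration/GetContentModeration operations and an SNS subscription; the interfaces and names below are hypothetical stand-ins:

```java
public class ContentModerationFlow {
    // Stand-in for the Rekognition Video client: start analysis on an
    // S3-stored video, then fetch results by JobId.
    interface ModerationClient {
        String startContentModeration(String s3Bucket, String s3Key); // returns JobId
        String getResults(String jobId);                              // results summary
    }

    // Stand-in for the completion status published to the SNS topic
    // registered via NotificationChannel.
    interface StatusTopic {
        String statusFor(String jobId); // e.g. "IN_PROGRESS", "SUCCEEDED", "FAILED"
    }

    // Start analysis, then fetch results only once the published
    // status is SUCCEEDED, as the Javadoc instructs.
    static String runModeration(ModerationClient client, StatusTopic topic,
                                String bucket, String key) {
        String jobId = client.startContentModeration(bucket, key);
        String status = topic.statusFor(jobId);
        if (!"SUCCEEDED".equals(status)) {
            throw new IllegalStateException("Job " + jobId + " status: " + status);
        }
        return client.getResults(jobId);
    }
}
```

In production the status would arrive as an SNS message rather than a synchronous call; the key point is that GetContentModeration is only called after SUCCEEDED is observed.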
@@ -1866,8 +1866,8 @@ public DetectProtectiveEquipmentResult detectProtectiveEquipment(
  * </p>
  * <p>
  * A word is one or more ISO basic latin script characters that are not
- * separated by spaces. <code>DetectText</code> can detect up to 50 words in
- * an image.
+ * separated by spaces. <code>DetectText</code> can detect up to 100 words
+ * in an image.
  * </p>
  * <p>
  * A line is a string of equally spaced words. A line isn't necessarily a
@@ -2121,18 +2121,23 @@ public GetCelebrityRecognitionResult getCelebrityRecognition(
 
 /**
  * <p>
- * Gets the unsafe content analysis results for a Amazon Rekognition Video
- * analysis started by <a>StartContentModeration</a>.
+ * Gets the inappropriate, unwanted, or offensive content analysis results
+ * for a Amazon Rekognition Video analysis started by
+ * <a>StartContentModeration</a>. For a list of moderation labels in Amazon
+ * Rekognition, see <a href=
+ * "https://docs.aws.amazon.com/rekognition/latest/dg/moderation.html#moderation-api"
+ * >Using the image and video moderation APIs</a>.
  * </p>
  * <p>
- * Unsafe content analysis of a video is an asynchronous operation. You
- * start analysis by calling <a>StartContentModeration</a> which returns a
- * job identifier (<code>JobId</code>). When analysis finishes, Amazon
- * Rekognition Video publishes a completion status to the Amazon Simple
- * Notification Service topic registered in the initial call to
- * <code>StartContentModeration</code>. To get the results of the unsafe
- * content analysis, first check that the status value published to the
- * Amazon SNS topic is <code>SUCCEEDED</code>. If so, call
+ * Amazon Rekognition Video inappropriate or offensive content detection in
+ * a stored video is an asynchronous operation. You start analysis by
+ * calling <a>StartContentModeration</a> which returns a job identifier (
+ * <code>JobId</code>). When analysis finishes, Amazon Rekognition Video
+ * publishes a completion status to the Amazon Simple Notification Service
+ * topic registered in the initial call to
+ * <code>StartContentModeration</code>. To get the results of the content
+ * analysis, first check that the status value published to the Amazon SNS
+ * topic is <code>SUCCEEDED</code>. If so, call
  * <code>GetContentModeration</code> and pass the job identifier (
  * <code>JobId</code>) from the initial call to
  * <code>StartContentModeration</code>.
@@ -2142,10 +2147,10 @@ public GetCelebrityRecognitionResult getCelebrityRecognition(
  * Rekognition Devlopers Guide.
  * </p>
  * <p>
- * <code>GetContentModeration</code> returns detected unsafe content labels,
- * and the time they are detected, in an array,
- * <code>ModerationLabels</code>, of <a>ContentModerationDetection</a>
- * objects.
+ * <code>GetContentModeration</code> returns detected inappropriate,
+ * unwanted, or offensive content moderation labels, and the time they are
+ * detected, in an array, <code>ModerationLabels</code>, of
+ * <a>ContentModerationDetection</a> objects.
  * </p>
  * <p>
  * By default, the moderated labels are returned sorted by time, in
@@ -2166,8 +2171,8 @@ public GetCelebrityRecognitionResult getCelebrityRecognition(
  * <code>GetContentModeration</code>.
  * </p>
  * <p>
- * For more information, see Detecting Unsafe Content in the Amazon
- * Rekognition Developer Guide.
+ * For more information, see Content moderation in the Amazon Rekognition
+ * Developer Guide.
  * </p>
  *
  * @param getContentModerationRequest
@@ -3584,27 +3589,31 @@ public StartCelebrityRecognitionResult startCelebrityRecognition(
 
 /**
  * <p>
- * Starts asynchronous detection of unsafe content in a stored video.
+ * Starts asynchronous detection of inappropriate, unwanted, or offensive
+ * content in a stored video. For a list of moderation labels in Amazon
+ * Rekognition, see <a href=
+ * "https://docs.aws.amazon.com/rekognition/latest/dg/moderation.html#moderation-api"
+ * >Using the image and video moderation APIs</a>.
  * </p>
  * <p>
  * Amazon Rekognition Video can moderate content in a video stored in an
  * Amazon S3 bucket. Use <a>Video</a> to specify the bucket name and the
  * filename of the video. <code>StartContentModeration</code> returns a job
  * identifier (<code>JobId</code>) which you use to get the results of the
- * analysis. When unsafe content analysis is finished, Amazon Rekognition
- * Video publishes a completion status to the Amazon Simple Notification
- * Service topic that you specify in <code>NotificationChannel</code>.
+ * analysis. When content analysis is finished, Amazon Rekognition Video
+ * publishes a completion status to the Amazon Simple Notification Service
+ * topic that you specify in <code>NotificationChannel</code>.
  * </p>
  * <p>
- * To get the results of the unsafe content analysis, first check that the
- * status value published to the Amazon SNS topic is <code>SUCCEEDED</code>.
- * If so, call <a>GetContentModeration</a> and pass the job identifier (
+ * To get the results of the content analysis, first check that the status
+ * value published to the Amazon SNS topic is <code>SUCCEEDED</code>. If so,
+ * call <a>GetContentModeration</a> and pass the job identifier (
  * <code>JobId</code>) from the initial call to
  * <code>StartContentModeration</code>.
  * </p>
  * <p>
- * For more information, see Detecting Unsafe Content in the Amazon
- * Rekognition Developer Guide.
+ * For more information, see Content moderation in the Amazon Rekognition
+ * Developer Guide.
  * </p>
  *
  * @param startContentModerationRequest