
Add typos check to CI #81

Merged
merged 4 commits into from Apr 24, 2024
28 changes: 21 additions & 7 deletions .github/workflows/main.yml → .github/workflows/ci.yml
@@ -1,8 +1,11 @@
name: master
name: ci
on:
push:
branches:
- master
pull_request:


permissions:
contents: read
id-token: write
@@ -13,6 +16,7 @@ env:
AWS_REGION: us-east-2
jobs:
changes:
if: github.event_name != 'pull_request'
runs-on: ubuntu-latest
outputs:
ci: "${{ steps.filter.outputs.ci }}"
@@ -32,13 +36,21 @@ jobs:
- 'aws-backup-elastio-integration/**'
elastio-s3-changelog:
- 'elastio-s3-changelog/**'

typos:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: crate-ci/typos@v1.20.10

upload-aws-backup-elastio-integration:
runs-on:
- ubuntu-latest
runs-on: ubuntu-latest
needs: changes
if: >-
needs.changes.outputs.aws-backup-elastio-integration == 'true' ||
needs.changes.outputs.ci == 'true'
github.event_name != 'pull_request' && (
needs.changes.outputs.aws-backup-elastio-integration == 'true' ||
needs.changes.outputs.ci == 'true'
)
steps:
- name: Checkout repository
uses: actions/checkout@v4
@@ -64,8 +76,10 @@ jobs:
runs-on: ubuntu-latest
needs: changes
if: >-
needs.changes.outputs.elastio-s3-changelog == 'true' ||
needs.changes.outputs.ci == 'true'
github.event_name != 'pull_request' && (
needs.changes.outputs.elastio-s3-changelog == 'true' ||
needs.changes.outputs.ci == 'true'
)
env:
S3_BUCKET: elastio-prod-artifacts-us-east-2
steps:
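The new `typos` job gives contributors a quick way to catch spelling mistakes before CI does. A minimal local sketch, assuming the crate-ci `typos` binary is installed (for example via `cargo install typos-cli`):

```shell
# Run the same spell check locally that the CI "typos" job runs.
# Assumption: the `typos` CLI from crate-ci is on PATH; it reads
# typos.toml from the repository root automatically.
if command -v typos >/dev/null 2>&1; then
    typos .
else
    echo "typos CLI not found; install with: cargo install typos-cli"
fi
```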
2 changes: 1 addition & 1 deletion atlassian_backup-1.0.0/README.md
@@ -35,6 +35,6 @@ export ATLASSIAN_TOKEN = "your_site_token"

### Download of large files times out.

When including attachments, the download file size can get rather large. To seperate scope of conerns, download is kept seperate from protect to ensure protect only starts on successful downloads.
When including attachments, the download file size can get rather large. To separate scope of concerns, download is kept separate from protect to ensure protect only starts on successful downloads.


8 changes: 4 additions & 4 deletions aws-backup-elastio-integration/lambda_handler.py
@@ -61,7 +61,7 @@ def handle_aws_backup_event(event):
tag_list = response.get('Tags')
enable_elastio_scan = any(item in tag_list for item in ENABLE_ELASTIO_SCAN_TAG_LIST)

#Existance of Scan enable or the Lambda trigger tag
#Existence of Scan enable or the Lambda trigger tag
if (enable_elastio_scan):
elastio_status_eb = os.environ.get('ElastioStatusEB')
if not elastio_status_eb:
@@ -140,7 +140,7 @@ def save_event_data_to_s3(s3_log_bucket,json_content):

def process_ransomware_details(account_id,product_arn,generator_id,scan_timestamp,aws_asset_id,aws_backup_rp_arn,elastio_rp_id,ransomware_details):
"""
This is the function responsbile to create ransomware findings based on ransomware_details
This is the function responsible to create ransomware findings based on ransomware_details
"""
try:
logger.info(f'Starting process_ransomware_details')
@@ -175,7 +175,7 @@ def process_ransomware_details(account_id,product_arn,generator_id,scan_timestam

def process_malware_details(account_id,product_arn,generator_id,scan_timestamp,aws_asset_id,aws_backup_rp_arn,elastio_rp_id,malware_details):
"""
This is the function responsbile to create malware findings based on malware_details
This is the function responsible to create malware findings based on malware_details
"""
try:
logger.info(f'Starting process_malware_details')
@@ -386,7 +386,7 @@ def handler(event, context):
if s3_log_bucket:
save_event_data_to_s3(s3_log_bucket,event)
else:
logger.info('S3 Log Bucket Name Env Paramter LogsBucketName is missing. Skipping logging to S3 Bucket')
logger.info('S3 Log Bucket Name Env Parameter LogsBucketName is missing. Skipping logging to S3 Bucket')

generate_security_hub_findings(event)

2 changes: 1 addition & 1 deletion demo-pipeline/buildspec/instance-backup.yaml
@@ -18,7 +18,7 @@ phases:
commands:
- export INSTANCE_ID=$(aws ec2 describe-instances | jq -r --arg env $ENVIRONMENT '.Reservations[].Instances[] | select(.Tags != null) | select(.Tags[].Key == "Environment" and .Tags[].Value == $env) | .InstanceId')
- echo "Backing up instance ${INSTANCE_ID}"
- OUTPUT=$(elastio ec2 backup --instance-id $INSTANCE_ID --tag relase:$CODEBUILD_RESOLVED_SOURCE_VERSION --output-format json)
- OUTPUT=$(elastio ec2 backup --instance-id $INSTANCE_ID --tag release:$CODEBUILD_RESOLVED_SOURCE_VERSION --output-format json)
- echo "${OUTPUT}"
- export JOB_ID=$(echo "${OUTPUT}" | jq -r '.job_id')
- export ABORT_TOKEN=$(echo "${OUTPUT}" | jq -r '.abort_token')
2 changes: 1 addition & 1 deletion demo-pipeline/terraform/iam.tf
@@ -121,7 +121,7 @@ data "aws_iam_policy_document" "codepipeline" {
}

statement {
sid = "ComputeDatabaseQueueNotifcationManagementPolicy"
sid = "ComputeDatabaseQueueNotificationManagementPolicy"
effect = "Allow"
resources = ["*"]
actions = [
4 changes: 2 additions & 2 deletions elastio-api-php-lumen/app/Service/EapService.php
@@ -209,8 +209,8 @@ public static function iScanRp($json)

public static function iScanFile($request)
{
$direcotry = "iscan_" . date("U") . "_" . rand(100,1000);
$path = "/var/www/iscan_temp/" . $direcotry;
$directory = "iscan_" . date("U") . "_" . rand(100,1000);
$path = "/var/www/iscan_temp/" . $directory;
@mkdir($path, $mode = 0777, false);

$file = $request->file('iscan_file');
2 changes: 1 addition & 1 deletion elastio-fargate-mysql-backup/README.md
@@ -29,7 +29,7 @@
4. Select `ElastioMySQLBackupRole` as Task role
5. Type Elastio-CLI as container name
6. Paste `public.ecr.aws/elastio-dev/elastio-cli:latest` in container image URI
7. Expand Docker configuration and paste `sh,-c` in Entry point and following comman in Command:
7. Expand Docker configuration and paste `sh,-c` in Entry point and following command in Command:
```
apt-get install awscli jq default-mysql-client -y && creds=$(aws secretsmanager get-secret-value --secret-id MySQLBackupCreds | jq ".SecretString | fromjson") && mysqldump -h $(echo $creds | jq -r ".host") -u $(echo $creds | jq -r ".username") -P $(echo $creds | jq -r ".port") -p"$(echo $creds | jq -r '.password')" DATABASE | elastio stream backup --stream-name MySQL-Daily-backup --hostname-override MySQL-hostname
```
2 changes: 1 addition & 1 deletion elastio-lambda-stream-backup/readme.md
@@ -8,7 +8,7 @@

3. Download code from contrib repository and open directory elastio-lambda-stream-backup.

> if you want to use arm64/amd64 architecture, you should use the coresponding architecture for the instance
> if you want to use arm64/amd64 architecture, you should use the corresponding architecture for the instance

4. Build and push docker image:
1. Review the dockerfile and read the following comments for arguments such as ARCH(architecture), VERSION_TAG(elastio cli version)
4 changes: 2 additions & 2 deletions elastio-sql-backup-ssstar-stream/README.md
@@ -1,4 +1,4 @@
This article describs the procedure of backup and restore Miscrosoft SQL server databse. If your database hosted in Amazon RDS see [Amazon RDS SQL Server](https://github.com/elastio/contrib/edit/MSSQL/elastio-sql-backup-ssstar-stream/README.md#amazon-rds-sql-server), if you have selfhosted database see [self hosted SQL Server](https://github.com/elastio/contrib/edit/MSSQL/elastio-sql-backup-ssstar-stream/README.md#self-hosted-sql-server).
This article describes the procedure for backing up and restoring a Microsoft SQL Server database. If your database is hosted in Amazon RDS, see [Amazon RDS SQL Server](https://github.com/elastio/contrib/edit/MSSQL/elastio-sql-backup-ssstar-stream/README.md#amazon-rds-sql-server); if you have a self-hosted database, see [self hosted SQL Server](https://github.com/elastio/contrib/edit/MSSQL/elastio-sql-backup-ssstar-stream/README.md#self-hosted-sql-server).


# Amazon RDS SQL Server
@@ -24,7 +24,7 @@ This article describs the procedure of backup and restore Miscrosoft SQL server
exec msdb.dbo.rds_backup_database
@source_db_name='database_name',
@s3_arn_to_backup_to='arn:aws:s3:::bucket_name/file_name.extension',
[@kms_master_key_arn='arn:aws:kms:region:account-id:key/key-id'],
[@kms_master_key_arn='arn:aws:kms:region:account-id:key/key-id'],
[@overwrite_s3_backup_file=0|1],
[@type='DIFFERENTIAL|FULL'],
[@number_of_files=n];
2 changes: 1 addition & 1 deletion elastio-stream-kafka/common.py
@@ -13,7 +13,7 @@ def id_generator(size=6, chars=string.ascii_lowercase + string.digits) -> str:

def new_message_exists(topic: str, bootstrap_servers: list, partition: int, offset: int) -> dict:
"""
Check Kafka topic partiton for new message.
Check Kafka topic partition for new message.
The function connects to Kafka and reads the message at the specified partition,
then compares the message offset with the specified offset.
Returns True if the message offset is 2 greater than the specified offset.
14 changes: 7 additions & 7 deletions elastio-stream-kafka/elastio_stream_kafka.py
@@ -17,7 +17,7 @@
prog="Elastio stream kafka",
)
subparser = parser.add_subparsers(dest="mod")
# subparser accept two posible modes of work this script backup and restore
# The subparser accepts the two possible modes this script runs in: backup and restore

backup_parser = subparser.add_parser("backup")
# backup mode arguments
@@ -41,13 +41,13 @@
topic_info_data = {}
topic_info_data['topic_name'] = args.topic_name

# Creating Kafka consummer with random group.
# Creating Kafka consumer with random group.
consumer = KafkaConsumer(
group_id=f'{_id}-group',
bootstrap_servers=bootstrap_servers,
auto_offset_reset='earliest', # latest/earliest
enable_auto_commit=True,
auto_commit_interval_ms=1000, # 1s
auto_commit_interval_ms=1000, # 1s
consumer_timeout_ms=10000, # 10s
api_version=(0, 10, 1)
)
@@ -56,7 +56,7 @@
Call the Elastio CLI to get the stream recovery points list.
Check whether the topic was already backed up.
If the topic was previously backed up, set the variable topic_previously_backed_up = True
and geting offset last message what be stored in last time.
and getting the offset of the last message that was stored last time.
Last message offset stored in recovery point tags with name "partition_<PARTITION_NUMBER>_last_msg_offset".
"""
topic_previously_backed_up = False
@@ -203,8 +203,8 @@
msg_count = 0
# Read data from elastio stream restore process.
# Load to json format.
datas = (json.loads(line.decode()) for line in res.stdout.splitlines())
for data in datas:
lines = (json.loads(line.decode()) for line in res.stdout.splitlines())
for data in lines:
# Write data to the Kafka topic.
msg_stat = prod.send(
topic=data['topic'],
@@ -215,4 +215,4 @@
)
msg_count+=1
prod.close()
print("Restore finished successfuly!\nRestored messeges count: {msg_count}".format(msg_count=msg_count))
print("Restore finished successfully!\nRestored messages count: {msg_count}".format(msg_count=msg_count))
2 changes: 1 addition & 1 deletion elastio-velero-integration/README.md
@@ -73,7 +73,7 @@ export veleroBackupName=BackupName
export veleroS3Bucket=BucketName
export AWS_DEFAULT_REGION=AWSRegion
```
Replace `BackupName` and `BucketName` with coresponding names.
Replace `BackupName` and `BucketName` with corresponding names.

Retrieve the restored EBS volume ID and its tags using the following commands.

8 changes: 8 additions & 0 deletions typos.toml
@@ -0,0 +1,8 @@
# See documentation for this config file at https://github.com/crate-ci/typos/blob/master/docs/reference.md

[default.extend-identifiers]
# Abbreviation: Security information and event management
SIEMs = "SIEMs"

# Abbreviation: Recovery Time Objectives
RTO = "RTO"
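Conceptually, `extend-identifiers` whitelists exact tokens so the checker never flags them, even when they resemble dictionary words. A hypothetical miniature of that idea in Python (not the real crate-ci/typos implementation; the typo dictionary below is made up for illustration):

```python
import re

# Made-up typo dictionary for illustration only.
TYPO_DICT = {"successfuly": "successfully", "coresponding": "corresponding"}
# Mirrors [default.extend-identifiers]: these exact tokens are never flagged.
EXTEND_IDENTIFIERS = {"SIEMs", "RTO"}

def find_typos(text):
    hits = []
    for token in re.findall(r"[A-Za-z]+", text):
        if token in EXTEND_IDENTIFIERS:
            continue  # whitelisted identifier, skip it
        fix = TYPO_DICT.get(token.lower())
        if fix:
            hits.append((token, fix))
    return hits

print(find_typos("Restore finished successfuly! SIEMs unaffected."))
# → [('successfuly', 'successfully')]
```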