
Releases: mohammadzainabbas/pulumi-labs

🚀 Deploy PWA with MERN stack via Docker + Pulumi ✨

03 Jan 02:42

Pulumi Fundamentals - Tutorial


Overview

A tutorial-style, TypeScript-based Pulumi program that deploys a sample progressive web application (PWA), the Pulumipus Boba Tea Shop, built with MongoDB, ExpressJS, ReactJS, and NodeJS (the MERN stack).

Note

This tutorial is part of the official Pulumi tutorials. You can follow along here.

Key Concepts

  • Create a Pulumi Project
  • Pull Docker Images
  • Configure and Provision Containers
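
The three key concepts above map onto just a few resources. As a rough illustration, a minimal TypeScript sketch using the @pulumi/docker provider could look like the following (the image names, the backendPort config key, and the resource names are illustrative placeholders rather than the tutorial's exact values):

import * as pulumi from "@pulumi/pulumi";
import * as docker from "@pulumi/docker";

// Read ports from the stack configuration (e.g. `pulumi config set frontendPort 3001`).
const config = new pulumi.Config();
const frontendPort = config.requireNumber("frontendPort");
const backendPort = config.requireNumber("backendPort"); // assumed config key

// Pull the pre-built application images (image names are illustrative).
const backendImage = new docker.RemoteImage("backend-image", {
    name: "pulumi/tutorial-pulumi-fundamentals-backend:latest",
});
const frontendImage = new docker.RemoteImage("frontend-image", {
    name: "pulumi/tutorial-pulumi-fundamentals-frontend:latest",
});

// A user-defined network so the containers can reach each other by name.
const network = new docker.Network("network", { name: "services" });

// Provision the backend and frontend containers on that network.
new docker.Container("backend-container", {
    image: backendImage.repoDigest,
    ports: [{ internal: backendPort, external: backendPort }],
    networksAdvanced: [{ name: network.name }],
});
new docker.Container("frontend-container", {
    image: frontendImage.repoDigest,
    ports: [{ internal: frontendPort, external: frontendPort }],
    networksAdvanced: [{ name: network.name }],
});

// Exported so `pulumi stack output frontendUrl` returns the app's address.
export const frontendUrl = pulumi.interpolate`http://localhost:${frontendPort}`;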

Prerequisites

  • Pulumi (Account and CLI)
  • Docker
  • TypeScript or JavaScript (NodeJS v14+)

Quick Start

Setup

  1. Clone the repo:
git clone https://github.com/mohammadzainabbas/pulumi-labs.git

or if GitHub CLI is installed:

gh repo clone mohammadzainabbas/pulumi-labs
  2. Change directory:
cd tutorial-pulumi-fundamentals
  3. Install dependencies:
yarn

or if you prefer npm:

npm install

Caution

If you are using npm, make sure to delete the yarn.lock file before installing dependencies.

  4. Deploy via Pulumi:
pulumi up

Note

The source code of the Pulumipus Boba Tea Shop application (backend, frontend, and database) is available here.

  5. Access the application by opening the URL returned by the following command in your browser:
pulumi stack output frontendUrl

Tip

By default, the application is deployed to http://localhost:3001. You can change the port by setting the frontendPort Pulumi config value as follows:

pulumi config set frontendPort <port>
  6. Cleanup:
pulumi destroy

Warning

This will delete all the resources (pulled images, containers, network) created by this Pulumi program.

Add new item

If you want to add a new item to your app, you can do so by making a POST request like the following:

curl --location --request POST 'http://localhost:3000/api/products' \
--header 'Content-Type: application/json' \
--data-raw '{
    "ratings": {
        "reviews": [],
        "total": 63,
        "avg": 5
    },
    "created": 1600979464567,
    "currency": {
        "id": "USD",
        "format": "$"
    },
    "sizes": [
        "M",
        "L"
    ],
    "category": "boba",
    "teaType": 2,
    "status": 1,
    "_id": "5f6d025008a1b6f0e5636bc7",
    "images": [
        {
            "src": "classic_boba.png"
        }
    ],
    "name": "My New Milk Tea",
    "price": 5,
    "description": "none",
    "productCode": "852542-107"
}'

Important

The above request will add a new drink item called My New Milk Tea to the app. This is just an example; you can change the request body as per your needs.

Full Changelog: v2.0...v3.0

AWS Autoscaling Group with Spot Fleet

30 Dec 15:43

AWS Autoscaling Group with Spot Fleet


Overview

A Pulumi IaC program written in Python to deploy an AWS Auto Scaling group with a launch template that requests Spot Instances across multiple instance types.

Included:

  • Look up the latest AWS Deep Learning AMI (GPU CUDA) ID
  • Create a new VPC, subnets (public and private), route tables, and a security group
  • Define a launch template with user data
  • Create an Auto Scaling group with the launch template and a Spot Fleet
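
As a rough illustration of how these pieces fit together, a minimal Python sketch using pulumi_aws could look like the following (the AMI name filter, instance types, subnet ID, and sizing are illustrative assumptions, and the VPC, subnet, route table, and security group resources are elided):

import base64
import pulumi
import pulumi_aws as aws

# Look up the latest Deep Learning AMI (the name filter here is illustrative).
ami = aws.ec2.get_ami(
    most_recent=True,
    owners=["amazon"],
    filters=[aws.ec2.GetAmiFilterArgs(name="name", values=["Deep Learning AMI GPU CUDA*"])],
)

# Launch templates expect base64-encoded user data.
user_data = base64.b64encode(b"#!/bin/bash\necho 'bootstrapping...'\n").decode()

launch_template = aws.ec2.LaunchTemplate(
    "launch-template",
    image_id=ami.id,
    instance_type="g4dn.xlarge",
    user_data=user_data,
)

# Auto Scaling group that fulfils its capacity with Spot Instances across several instance types.
asg = aws.autoscaling.Group(
    "spot-asg",
    min_size=0,
    max_size=4,
    desired_capacity=1,
    vpc_zone_identifiers=["subnet-0123456789abcdef0"],  # placeholder; the real program uses its own subnets
    mixed_instances_policy={
        "instances_distribution": {
            "on_demand_percentage_above_base_capacity": 0,  # 100% of additional capacity as Spot
            "spot_allocation_strategy": "lowest-price",
        },
        "launch_template": {
            "launch_template_specification": {
                "launch_template_id": launch_template.id,
                "version": "$Latest",
            },
            "overrides": [
                {"instance_type": "g4dn.xlarge"},
                {"instance_type": "g5.xlarge"},
            ],
        },
    },
)

pulumi.export("asg_name", asg.name)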

Prerequisites

  • Python 3.9+
  • Pulumi
  • AWS CLI v2 (with valid credentials configured)
  • AWS Native CLI

Quick Start

Setup

  1. Configure OpenID Connect for AWS:

Follow the guide here to configure Pulumi to authenticate with AWS using OpenID Connect.

  2. Clone the repo:
git clone https://github.com/mohammadzainabbas/pulumi-labs.git

or if GitHub CLI is installed:

gh repo clone mohammadzainabbas/pulumi-labs
  3. Change directory:
cd aws-fleet-python
  4. Create a new Python virtualenv, activate it, and install dependencies:
python3 -m venv venv
source venv/bin/activate
pip3 install -r requirements.txt
  5. Create a new Pulumi stack, which is an isolated deployment target for this example:
pulumi stack init
  6. Update your environment:

Now, reference the environment you set up in step 1 in Pulumi.dev.yaml, like the following:

environment:
  - aws-jarvis

Note that aws-jarvis is the name of the environment I created in step 1.

  7. Set the AWS region (optional):

To deploy to a region other than the default one configured for your AWS CLI profile, run pulumi config set aws:region <region>

pulumi config set aws:region us-east-1

If you don't specify anything, everything will be deployed in the eu-west-3 region.

  8. Run pulumi up to preview and deploy changes:
pulumi up

Note: you can pass the --yes flag to skip the confirmation prompt.

And voilà! You've deployed an Auto Scaling group using a Spot Fleet, along with your custom launch template, to AWS.

Cleanup

To destroy the Pulumi stack and all of its resources:

pulumi destroy

Note: you can pass the --yes flag to skip the confirmation prompt.

Full Changelog: v1.0...v2.0

Amazon SageMaker + Hugging Face LLM Deployment via Pulumi

10 Dec 13:51

Amazon SageMaker + Hugging Face LLM Deployment

Overview

A Pulumi IaC program written in Python to deploy a Hugging Face large language model (LLM) on Amazon SageMaker.

Included:

  • IAM roles
  • SageMaker model endpoint
  • CloudWatch alarms
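
As a rough illustration, the core resources can be sketched in Python with pulumi_aws as follows (the container image URI, Hub model id, and instance type are placeholders, and the CloudWatch alarms are omitted from this sketch):

import json
import pulumi
import pulumi_aws as aws

# Execution role that SageMaker assumes to run the model container.
role = aws.iam.Role(
    "sagemaker-role",
    assume_role_policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sagemaker.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }),
)
aws.iam.RolePolicyAttachment(
    "sagemaker-full-access",
    role=role.name,
    policy_arn="arn:aws:iam::aws:policy/AmazonSageMakerFullAccess",
)

# Model backed by a Hugging Face LLM inference container image (placeholder URI),
# configured through environment variables such as the Hub model id.
model = aws.sagemaker.Model(
    "llm-model",
    execution_role_arn=role.arn,
    primary_container={
        "image": "<huggingface-llm-inference-image-uri>",  # placeholder
        "environment": {
            "HF_MODEL_ID": "<hugging-face-model-id>",  # placeholder
        },
    },
)

# Endpoint configuration pins the instance type and count serving the model.
endpoint_config = aws.sagemaker.EndpointConfiguration(
    "llm-endpoint-config",
    production_variants=[{
        "model_name": model.name,
        "variant_name": "AllTraffic",
        "instance_type": "ml.g5.2xlarge",
        "initial_instance_count": 1,
    }],
)

endpoint = aws.sagemaker.Endpoint(
    "llm-endpoint",
    endpoint_config_name=endpoint_config.name,
)

# Exported so that `pulumi stack output EndpointName` (used in the test step below) returns the endpoint's name.
pulumi.export("EndpointName", endpoint.name)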

Prerequisites

  • Python 3.9+
  • Pulumi
  • AWS CLI v2 (with valid credentials configured)

Quick Start

Setup

  1. Create a new directory & initialize a new project:
mkdir newProject && cd newProject
pulumi new sagemaker-aws-python
  2. Deploy the stack:
pulumi up

Note that Pulumi will provide the SageMaker endpoint name as an output.

Test the SageMaker Endpoint

Use this rudimentary Python snippet to test the deployed SageMaker endpoint.

  1. Activate the Python venv locally:
# On Linux & macOS
source venv/bin/activate
  2. Save the following as test.py:

NOTE: change region_name in the snippet below if you are using a region other than us-east-1.

import json, boto3, argparse

def main(endpoint_name, text):
    # Invoke the SageMaker endpoint with a JSON payload and print the decoded response.
    client = boto3.client('sagemaker-runtime', region_name='us-east-1')
    payload = json.dumps({"inputs": text})
    response = client.invoke_endpoint(EndpointName=endpoint_name, ContentType="application/json", Body=payload)
    print("Response:", json.loads(response['Body'].read().decode()))

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("endpoint_name")
    parser.add_argument("--text", default="In 3 words, name the biggest mountain on earth?")
    args = parser.parse_args()  # parse once and reuse
    main(args.endpoint_name, args.text)
  3. Run the test:

Note: the pulumi stack output command is used to return EndpointName from the Pulumi state.

python3 test.py $(pulumi stack output EndpointName)

or

python3 test.py $(pulumi stack output EndpointName) --text "What's the most beautiful thing in life ? Provide a short essay on this."

Cleanup

To destroy the Pulumi stack and all of its resources:

pulumi destroy

Full Changelog: https://github.com/mohammadzainabbas/pulumi-labs/commits/v1.0