Commit
docs(gen): fixed typos (#4109)
jcirinosclwy authored Dec 13, 2024
1 parent 63070e0 commit 3a1dfb5
Showing 7 changed files with 33 additions and 37 deletions.
6 changes: 3 additions & 3 deletions ai-data/generative-apis/api-cli/using-chat-api.mdx
@@ -56,8 +56,8 @@ Find required headers in [this page](/ai-data/generative-apis/api-cli/using-gene

| Param | Type | Description |
| ------------- |-------------|-------------|
- | **messages*** | array of objects | A list of messages comprising the conversation so far. |
- | **model*** | string | The name of the model to query. |
+ | **messages** | array of objects | A list of messages comprising the conversation so far. |
+ | **model** | string | The name of the model to query. |

Our chat API is OpenAI compatible. Use OpenAI’s [API reference](https://platform.openai.com/docs/api-reference/chat/create) for more detailed information on the usage.
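As a minimal sketch of the two required parameters, the snippet below assembles the JSON body for a chat request. The endpoint URL and model name are placeholders, not values from this page; because the API is OpenAI compatible, the body follows OpenAI's `chat/completions` schema.

```python
import json

# Placeholder URL for illustration only; substitute your actual
# Scaleway Generative APIs endpoint.
CHAT_URL = "https://<your-endpoint>/v1/chat/completions"

def build_chat_body(model, messages):
    """Assemble the request body; `model` and `messages` are both required."""
    if not model or not messages:
        raise ValueError("`model` and `messages` are required parameters")
    return {"model": model, "messages": messages}

body = build_chat_body(
    "example-model",  # placeholder model name; pick one available in your account
    [{"role": "user", "content": "Hello!"}],
)
payload = json.dumps(body)  # serialized JSON, ready to POST with your HTTP client
```

Sending `payload` with the required headers to the chat endpoint is all that is needed for a basic completion; the optional parameters discussed below extend this same body.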

@@ -88,6 +88,6 @@ If you have a use case requiring one of these unsupported parameters, please [co

## Going further

- 1. [Python code examples](/ai-data/generative-apis/how-to/query-language-models/#querying-language-models-via-api) to query text models using Scaleway's Chat API.
+ 1. [Python code examples](/ai-data/generative-apis/how-to/query-language-models/#querying-language-models-via-api) to query text models using Scaleway's Chat API
2. [How to use structured outputs](/ai-data/generative-apis/how-to/use-structured-outputs) with the `response_format` parameter
3. [How to use function calling](/ai-data/generative-apis/how-to/use-function-calling) with `tools` and `tool_choice`
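The `response_format` and `tools`/`tool_choice` parameters listed above extend the same request body. The sketch below shows one hedged example of each, following the OpenAI-compatible schema; the model name and the `get_weather` tool are invented for illustration, and the exact shapes Scaleway accepts should be checked against the linked how-to pages.

```python
# Hedged examples of the optional parameters named above; the model name
# and the `get_weather` function are hypothetical.
structured_request = {
    "model": "example-model",  # placeholder
    "messages": [{"role": "user", "content": "List three colors as JSON."}],
    "response_format": {"type": "json_object"},  # request JSON-formatted output
}

tool_request = {
    "model": "example-model",  # placeholder
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}
```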
4 changes: 2 additions & 2 deletions ai-data/generative-apis/api-cli/using-embeddings-api.mdx
@@ -40,8 +40,8 @@ Find required headers in [this page](/ai-data/generative-apis/api-cli/using-gene

| Param | Type | Description |
| ------------- |-------------|-------------|
- | **input*** | string or array | Input text to embed, encoded as a string or array of strings. It cannot be an empty string. |
- | **model*** | string | The name of the model to query. |
+ | **input** | string or array | Input text to embed, encoded as a string or array of strings. It cannot be an empty string. |
+ | **model** | string | The name of the model to query. |

Our embeddings API is OpenAI compatible. Use OpenAI’s [API reference](https://platform.openai.com/docs/api-reference/embeddings) for more detailed information on the usage.
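A minimal sketch of the embeddings request body, mirroring the table above: `input` may be a single string or a list of strings, but must not be empty. The endpoint URL and model name are placeholders, not values from this page.

```python
# Placeholder endpoint for illustration; use your actual Scaleway endpoint
# and an embedding model available in your account.
EMBEDDINGS_URL = "https://<your-endpoint>/v1/embeddings"

def build_embeddings_body(model, text):
    """Assemble the request body; `input` is a string or a list of strings
    and cannot be empty."""
    if not text:
        raise ValueError("`input` cannot be an empty string or empty list")
    return {"model": model, "input": text}

single = build_embeddings_body("example-embedding-model", "Hello world")
batch = build_embeddings_body("example-embedding-model", ["first text", "second text"])
```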

2 changes: 1 addition & 1 deletion compute/gpu/how-to/create-manage-gpu-instance.mdx
@@ -30,7 +30,7 @@ When you have completed your calculations using the GPU Instance, [deletion can
## How to create a GPU Instance

1. Navigate to **Instances** under the **Compute** section in the side menu of the [Scaleway console](https://console.scaleway.com/). This action will direct you to the [Instance dashboard](https://console.scaleway.com/instance/servers).
- 2. Click ***+ Create Instance** to proceed to the [Instance creation page](https://console.scaleway.com/instance/servers).
+ 2. Click **+ Create Instance** to proceed to the [Instance creation page](https://console.scaleway.com/instance/servers).
3. Follow these steps to configure your GPU Instance:
- **Availability Zone**: Choose the geographical region, represented by the Availability Zone, where your Instance will be deployed.
- **Instance Type (GPU)**: Select the desired GPU Instance type, considering factors such as processing power, memory, storage options, and bandwidth. Refer to our guide on [choosing the right GPU Instance type](/compute/gpu/reference-content/choosing-gpu-instance-type/) for more information.
24 changes: 12 additions & 12 deletions compute/gpu/quickstart.mdx
@@ -15,28 +15,28 @@ categories:

<Macro id="requirements" />

- Scaleway GPU Instances are virtual compute instances equipped with dedicated high-end Nvidia graphical processing unit (GPUs). They are ideal for data processing, artificial intelligence, rendering and video encoding. After you have created your GPU Instance, you can [connect to it via SSH](/compute/instances/how-to/connect-to-instance/) and run one of our ready-made [Docker images](/compute/gpu/how-to/use-gpu-with-docker/) to access a [preinstalled environment](/compute/gpu/how-to/use-preinstalled-env/) with all your [favorite AI libraries and tools](/compute/gpu/reference-content/docker-images/) preinstalled.
+ Scaleway GPU Instances are virtual compute instances equipped with dedicated high-end Nvidia graphics processing units (GPUs). They are ideal for data processing, artificial intelligence, rendering, and video encoding. After you have created your GPU Instance, you can [connect to it via SSH](/compute/instances/how-to/connect-to-instance/) and run one of our ready-made [Docker images](/compute/gpu/how-to/use-gpu-with-docker/) to access a [preinstalled environment](/compute/gpu/how-to/use-preinstalled-env/) with all your [favorite AI libraries and tools](/compute/gpu/reference-content/docker-images/) preinstalled.

In addition to this, GPU Instances have all the features of our regular Instances, including [flexible IPs](/compute/instances/how-to/use-flexips/), [security groups](/compute/instances/how-to/use-security-groups/), [Private Networks](/compute/instances/how-to/use-private-networks/), [backups](/compute/instances/how-to/create-a-backup/) and more. When you are done using your GPU Instance, you can easily [delete it](/compute/instances/how-to/delete-instance/) from the Scaleway console.

## How to create a GPU Instance

1. Navigate to **Instances** under the **Compute** section in the side menu of the [Scaleway console](https://console.scaleway.com/). This action will direct you to the [Instance dashboard](https://console.scaleway.com/instance/servers).
- 2. Click ***+ Create Instance** to proceed to the [Instance creation page](https://console.scaleway.com/instance/servers).
+ 2. Click **+ Create Instance** to proceed to the [Instance creation page](https://console.scaleway.com/instance/servers).
3. Follow these steps to configure your GPU Instance:
- **Availability Zone**: Choose the geographical region, represented by the Availability Zone, where your Instance will be deployed.
- **Instance Type (GPU)**: Select the desired GPU Instance type, considering factors such as processing power, memory, storage options, and bandwidth. Refer to our guide on [choosing the right GPU Instance type](/compute/gpu/reference-content/choosing-gpu-instance-type/) for more information.
- - **Image**: Pick an operating system image suitable for your GPU Instance. For example, select **Ubuntu Jammy GPU OS 12**, which comes with preinstalled Nvidia drivers and an Nvidia Docker environment. You have the flexibility to customize your working environment using Docker with our provided Docker images or your own containers.
+ - **Image**: Pick an operating system image suitable for your GPU Instance. For example, select **Ubuntu Jammy GPU OS 12**, which comes with preinstalled Nvidia drivers and an Nvidia Docker environment. You can customize your working environment using Docker with our provided Docker images or your own containers.
- **Volumes**: Optionally, add storage volumes for your Instance. You can adjust settings such as Block and Local Storage volumes according to your requirements.
<Message type="note">
* The recommended minimum volume size for GPU OS images is 125 GB.
* If your GPU Instance supports scratch storage, the scratch volume displays but cannot be edited. [Learn more about scratch storage.](/compute/gpu/how-to/use-scratch-storage-h100-instances/)
</Message>
- - **Network Configuration**: Choose between a routed public IP or a NAT public IP for your Instance. We recommend using a routed public IP. You can allocate IPv4 and IPv6 addresses as needed, with a maximum of 5 IPs per Instance.
- - **Instance Name and Tags**: Assign a name to your Instance for easy identification. You can also add tags to organize your Instances efficiently.
- - **Advanced Options**: Configure `cloud-init` settings if required, or leave them at default values.
- - **SSH Keys**: Verify the SSH keys that will grant you access to your Instance.
- - **Estimated Cost**: Review the estimated cost of your GPU Instance based on the selected specifications.
+ - **Network configuration**: Choose between a routed public IP or a NAT public IP for your Instance. We recommend using a routed public IP. You can allocate IPv4 and IPv6 addresses as needed, with a maximum of 5 IPs per Instance.
+ - **Instance name and tags**: Assign a name to your Instance for easy identification. You can also add tags to organize your Instances efficiently.
+ - **Advanced options**: Configure `cloud-init` settings if required, or leave them at default values.
+ - **SSH keys**: Verify the SSH keys that will grant you access to your Instance.
+ - **Estimated cost**: Review the estimated cost of your GPU Instance based on the selected specifications.
4. Once you have completed the configuration, click on **Create Instance** to initiate the creation process. You will receive a notification once the GPU Instance is ready for use.

## How to connect to your GPU Instance
@@ -59,18 +59,18 @@ To access a preinstalled working environment with all your favorite Python packa

## How to use the preinstalled environment

- When you run your Docker container as shown above, the container launches, and you are taken to its `ai` directory, where the Pipenv virtual environment is already activated.
+ When you run your Docker container as shown above, the container launches and you are taken to its `ai` directory, where the Pipenv virtual environment is already activated.

<Message type="tip">
Use the command `pipenv graph` to see a list of all installed packages and their versions, as well as all the dependencies of each package. For more help with pipenv, see our [dedicated documentation](/compute/gpu/how-to/use-pipenv/).
</Message>

### Launching an application in your local browser

- Some applications, such as [Jupyter Lab](https://jupyter.org/), [Tensorboard](https://www.tensorflow.org/tensorboard) and [Code Server](https://code-server.dev/), require a browser to run. You can launch these from the `ai` virtual environment of your Docker container, and view them in the browser of your local machine. This is thanks to the possibility to add port mapping arguments when launching a container with the `docker run` command. In our example, we added the port mapping arguments `-p 8888:8888 -p 6006:6006` when we launched our container, mapping `8888:8888` for Jupyter Lab and `6006:6006` for Tensorboard.
+ Some applications, such as [Jupyter Lab](https://jupyter.org/), [Tensorboard](https://www.tensorflow.org/tensorboard) and [Code Server](https://code-server.dev/), require a browser to run. You can launch these from the `ai` virtual environment of your Docker container, and view them in the browser of your local machine. This is possible thanks to the addition of port mapping arguments when launching a container with the `docker run` command. In our example, we added the port mapping arguments `-p 8888:8888 -p 6006:6006` when we launched our container, mapping `8888:8888` for Jupyter Lab and `6006:6006` for Tensorboard.

<Message type="tip">
- Code Server runs in Jupyter Lab via Jupyter Hub, so does not need port mapping in this case. You can add other port mapping arguments for other applications as you wish.
+ Code Server runs in Jupyter Lab via Jupyter Hub and does not need port mapping in this case. You can add other port mapping arguments for other applications as you wish.
</Message>

1. Launch an application. Here, we launch Jupyter Lab:
@@ -88,7 +88,7 @@ Some applications, such as [Jupyter Lab](https://jupyter.org/), [Tensorboard](ht
```

<Message type="tip">
- Jupyter Lab is launched automatically when you run any Scaleway container image. You will see a message upon start up telling how to access the notebook in your browser. To override Jupyter Lab being launched automatically in this way, add `/bin/bash` to the end of your `docker run` command, e.g. `docker run --runtime=nvidia -it --rm -p 8888:8888 -p 6006:6006 rg.fr-par.scw.cloud/scw-ai/pytorch:latest /bin/bash`. This preempts the launch of Jupyter Lab at container startup, and replaces it with the specified command, in this case a bash shell.
+ Jupyter Lab is launched automatically when you run any Scaleway container image. You will see a message upon start up telling how to access the notebook in your browser. To override Jupyter Lab being launched automatically in this way, add `/bin/bash` to the end of your `docker run` command, e.g. `docker run --runtime=nvidia -it --rm -p 8888:8888 -p 6006:6006 rg.fr-par.scw.cloud/scw-ai/pytorch:latest /bin/bash`. This preempts the launch of Jupyter Lab at container startup and replaces it with the specified command. In this case, a bash shell.
</Message>
2. On your local computer, open a browser window and enter the following URL. Replace `<ip-address>` with the IP address of your Scaleway GPU Instance, and `<my-token>` with the token displayed in the last lines of terminal output after the `jupyter-lab` command.
```
12 changes: 5 additions & 7 deletions compute/instances/how-to/create-a-block-volume.mdx
@@ -28,18 +28,16 @@ categories:
The volume must be in the same Availability Zone as the Instance you want to connect it to.
</Message>
5. Enter the volume name and select a size.
- 6. Configure the volume type. You can either create a **Block Storage** or ***Local Storage** volume.
+ 6. Configure the volume type. You can either create a **Block Storage** or **Local Storage** volume.
<Message type="note">
To use **[Block Storage Low Latency](/storage/block/) 15k**, your Instance must have at least **3 GiB/s of block bandwidth**.
You can verify if an Instance type meets this requirement by consulting the [Scaleway Instances internet and Block Storage bandwidth overview](/compute/instances/reference-content/instances-bandwidth-overview/) documentation.

Alternatively, use the [Instances API](https://www.scaleway.com/en/developers/api/instance/#path-instance-types-list-instance-types) to check the `block_bandwidth` field at `/instance/v1/zones/{zone}/products/servers`.
This field provides detailed information about block bandwidth for each Instance type, helping you ensure compatibility with high-performance storage.
</Message>
7. Click **Create volume** to confirm.

<Message type="tip">
You can also [create](/storage/block/how-to/create-a-volume/) Block Storage volumes from the [Block Storage console section](https://console.scaleway.com/block-storage/).
</Message>
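The API check described in the note above can be scripted. The sketch below filters a hard-coded sample of the `/products/servers` response for Instance types offering at least 3 GiB/s of block bandwidth; in practice the data would come from an authenticated GET to that endpoint, and the Instance type names and values here are invented for illustration.

```python
# Sketch: filter Instance types by the `block_bandwidth` field returned by
# /instance/v1/zones/{zone}/products/servers. The entries below are
# invented sample data, not real Scaleway offers.
GIB = 1024 ** 3
REQUIRED = 3 * GIB  # 3 GiB/s minimum for Block Storage Low Latency 15k

servers = {
    "EXAMPLE-TYPE-S": {"block_bandwidth": 1 * GIB},
    "EXAMPLE-TYPE-L": {"block_bandwidth": 4 * GIB},
}

def supports_15k(offers, required=REQUIRED):
    """Return the Instance types whose block bandwidth meets `required` (bytes/s)."""
    return sorted(
        name for name, spec in offers.items()
        if spec.get("block_bandwidth", 0) >= required
    )

eligible = supports_15k(servers)
```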


10 changes: 5 additions & 5 deletions compute/instances/quickstart.mdx
@@ -29,7 +29,7 @@ Scaleway [Instances](/compute/instances/concepts/#instance) are computing units
- **Choose an Availability Zone**, which represents the geographical region where your Instance will be deployed.
- **Choose an Instance type**. <br />
Instance offers vary in pricing, processing power, memory, storage, and bandwidth. [Discover the best Instance type for your needs](/compute/instances/reference-content/choosing-instance-type/).
- - ***Choose an image** to run on your Instance. <br />
+ - **Choose an image** to run on your Instance. <br />
This can be an operating system, an InstantApp, or a custom image. [Check all available Linux distributions and InstantApps](/compute/instances/reference-content/images-and-instantapps/).
- **Name your Instance**, or leave the randomly-generated name in place. Optionally, you can add [tags](/compute/instances/concepts/#tags) to help you organize your Instance.
- **Add Volumes**, which are storage spaces used by your Instances. A block volume with a default name and 5,000 IOPS is automatically provided for your system volume. You can customize this volume and attach up to 16 local and/or block type volumes as needed.
@@ -49,7 +49,7 @@ Scaleway [Instances](/compute/instances/concepts/#instance) are computing units
- **Verify the [SSH keys](/identity-and-access-management/organizations-and-projects/concepts/#ssh-key)** that will give you access to your Instance.
- **Verify the Estimated cost** of your Instance, based on the specifications you chose.
4. Click **Create Instance**. The creation of your Instance begins, and you will be informed when the Instance is ready.
- Once the Instance is created, you can connect to it using the SSH keys you have configured, and begin setting up your applications
+ Once the Instance is created, you can connect to it using the SSH keys you have configured, and begin setting up your applications.

## How to connect to an Instance

@@ -74,7 +74,7 @@ You are now connected to your Instance.

To connect to your Instance from Windows, you will need to use a small application called **PuTTY**, an SSH client.

- 1. Download and install PuTTY [here](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html)
+ 1. [Download and install PuTTY](https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html).
2. Launch PuTTY on your computer.
3. Enter your Instance's IP address in the **Hostname** field.
4. In the side menu, under **Connection**, navigate to the **Auth** sub-category. (**Connection** > **SSH** > **Auth**).
@@ -86,7 +86,7 @@ To connect to your Instance from Windows, you will need to use a small applicati
You are now connected to your Instance.

<Message type="tip">
- For instructions how to connect to Instances running Microsoft Windows Server OS, refer to our dedicated documentation [How to connect to Instances running Microsoft Windows Server OS](/compute/instances/how-to/connect-to-instance/).
+ For instructions on how to connect to Instances running Microsoft Windows Server OS, refer to our dedicated documentation [How to connect to Instances running Microsoft Windows Server OS](/compute/instances/how-to/connect-to-instance/).
</Message>

## How to delete an Instance
@@ -96,5 +96,5 @@ To connect to your Instance from Windows, you will need to use a small applicati
</Message>

1. Click **Instances** in the **Compute** section of the side menu. The [Instances page](https://console.scaleway.com/instance/servers) displays.
- 2. Click the <Icon name="more" /> icon next to the Instance you want to delete, and select **Delete** from the drop-down menu. A pop-up asks you to confirm the action.
+ 2. Click the <Icon name="more" /> icon next to the Instance you want to delete and select **Delete** from the drop-down menu. A pop-up asks you to confirm the action.
3. Type **DELETE** and then click **Delete Instance**.
