Commit

Remove --no-spark flag from docs tests (#3625)
William Shin authored Nov 2, 2021
1 parent 948d6ed commit 06f39ff
Showing 4 changed files with 15 additions and 23 deletions.
2 changes: 1 addition & 1 deletion azure-pipelines-docs-integration.yml
@@ -96,7 +96,7 @@ stages:
- script: |
pip install pytest pytest-azurepipelines
# TODO enable spark tests
-pytest -v --docs-tests -m docs --no-spark --mysql --mssql tests/integration/test_script_runner.py
+pytest -v --docs-tests -m docs --mysql --mssql tests/integration/test_script_runner.py
displayName: 'pytest'
env:
# snowflake credentials
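For context: `--docs-tests`, `--no-spark`, `--mysql`, and `--mssql` are repository-defined pytest options rather than built-in pytest flags, so dropping `--no-spark` from the invocation is what lets the Spark-backed docs tests run in this pipeline. The sketch below is a minimal, hypothetical illustration of how such an opt-out flag is commonly registered and consumed in a `conftest.py`; the fixture name and skip logic are assumptions, not the actual Great Expectations test configuration.

```python
# conftest.py -- illustrative sketch only, not the actual Great Expectations conftest.
import pytest


def pytest_addoption(parser):
    # Register an opt-out flag: passing --no-spark skips Spark-dependent tests.
    parser.addoption(
        "--no-spark",
        action="store_true",
        default=False,
        help="Skip tests that require a Spark session.",
    )


@pytest.fixture
def spark_session(request):
    # Skip at fixture time when the flag is present; otherwise build (or reuse) a session.
    if request.config.getoption("--no-spark"):
        pytest.skip("Spark tests disabled via --no-spark")
    from pyspark.sql import SparkSession

    return SparkSession.builder.appName("docs-tests").getOrCreate()
```

With the flag removed from the command above, tests gated on a fixture like this are collected and executed instead of being skipped.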
20 changes: 10 additions & 10 deletions docs/guides/connecting_to_your_data/in_memory/spark.md
@@ -29,7 +29,7 @@ This will allow you to validate and explore your data.

Import these necessary packages and modules.

-```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_yaml_example.py#L1-L12
+```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_yaml_example.py#L1-L10
```

<SparkDataContextNote />
@@ -47,23 +47,23 @@ Using this example configuration add in the path to a directory that contains so
]}>
<TabItem value="yaml">

-```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_yaml_example.py#L37-L47
+```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_yaml_example.py#L33-L43
```

Run this code to test your configuration.

-```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_yaml_example.py#L49
+```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_yaml_example.py#L45
```

</TabItem>
<TabItem value="python">

-```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_python_example.py#L37-L47
+```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_python_example.py#L33-L43
```

Run this code to test your configuration.

-```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_python_example.py#L49
+```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_python_example.py#L45
```

</TabItem>
@@ -84,13 +84,13 @@ Save the configuration into your `DataContext` by using the `add_datasource()` f
]}>
<TabItem value="yaml">

-```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_yaml_example.py#L51
+```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_yaml_example.py#L47
```

</TabItem>
<TabItem value="python">

-```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_python_example.py#L51
+```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_python_example.py#L47
```

</TabItem>
@@ -102,17 +102,17 @@ Verify your new Datasource by loading data from it into a `Validator` using a `B

Add the variable containing your dataframe (`df` in this example) to the `batch_data` key under `runtime_parameters` in your `BatchRequest`.

-```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_yaml_example.py#L54-L60
+```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_yaml_example.py#L50-L56
```

:::note Note this guide uses a toy dataframe that looks like this.

-```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_yaml_example.py#L20-L25
+```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_yaml_example.py#L16-L20
```
:::

Then load data into the `Validator`.
-```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_yaml_example.py#L62-L67
+```python file=../../../../tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_yaml_example.py#L58-L63
```

<Congratulations />
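Every change in this guide is a shift in the `#Lx-Ly` line references: the example scripts shrink by four lines of Spark setup (see the script diffs below), so the referenced snippets move up accordingly. For orientation, here is a condensed, hypothetical sketch of the flow those snippets walk through with the new setup: in-memory `DataContext`, runtime Spark Datasource, and a `RuntimeBatchRequest` carrying the dataframe. Names such as `my_spark_datasource`, `my_data_asset`, and `my_suite` are placeholders; the authoritative snippets are the referenced test scripts.

```python
from ruamel import yaml

from great_expectations.core.batch import RuntimeBatchRequest
from great_expectations.core.util import get_or_create_spark_session
from great_expectations.data_context import BaseDataContext
from great_expectations.data_context.types.base import (
    DataContextConfig,
    InMemoryStoreBackendDefaults,
)

spark = get_or_create_spark_session()
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])  # stand-in dataframe

# In-memory DataContext, mirroring the example scripts' use of InMemoryStoreBackendDefaults.
context = BaseDataContext(
    project_config=DataContextConfig(store_backend_defaults=InMemoryStoreBackendDefaults())
)

# Runtime Spark Datasource (YAML variant); the Python-tab example builds the same config as a dict.
datasource_yaml = """
name: my_spark_datasource
class_name: Datasource
execution_engine:
  class_name: SparkDFExecutionEngine
data_connectors:
  default_runtime_data_connector_name:
    class_name: RuntimeDataConnector
    batch_identifiers:
      - batch_id
"""
context.add_datasource(**yaml.load(datasource_yaml))

# Hand the in-memory dataframe to the datasource via runtime_parameters["batch_data"].
batch_request = RuntimeBatchRequest(
    datasource_name="my_spark_datasource",
    data_connector_name="default_runtime_data_connector_name",
    data_asset_name="my_data_asset",
    batch_identifiers={"batch_id": "default_identifier"},
    runtime_parameters={"batch_data": df},
)

# Load the batch into a Validator to confirm the datasource works end to end.
context.create_expectation_suite("my_suite", overwrite_existing=True)
validator = context.get_validator(
    batch_request=batch_request, expectation_suite_name="my_suite"
)
print(validator.head())
```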
8 changes: 2 additions & 6 deletions tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_python_example.py
@@ -1,20 +1,16 @@
-import findspark
-from pyspark import SparkContext
-from pyspark.sql import SparkSession
from ruamel import yaml

import great_expectations as ge
from great_expectations.core.batch import BatchRequest, RuntimeBatchRequest
+from great_expectations.core.util import get_or_create_spark_session
from great_expectations.data_context import BaseDataContext
from great_expectations.data_context.types.base import (
    DataContextConfig,
    InMemoryStoreBackendDefaults,
)

# Set up a basic spark dataframe
-findspark.init()
-sc = SparkContext(appName="app")
-spark = SparkSession(sc)
+spark = get_or_create_spark_session()

# basic dataframe
data = [
8 changes: 2 additions & 6 deletions tests/integration/docusaurus/connecting_to_your_data/in_memory/spark_yaml_example.py
@@ -1,20 +1,16 @@
-import findspark
-from pyspark import SparkContext
-from pyspark.sql import SparkSession
from ruamel import yaml

import great_expectations as ge
from great_expectations.core.batch import BatchRequest, RuntimeBatchRequest
+from great_expectations.core.util import get_or_create_spark_session
from great_expectations.data_context import BaseDataContext
from great_expectations.data_context.types.base import (
    DataContextConfig,
    InMemoryStoreBackendDefaults,
)

# Set up a basic spark dataframe
-findspark.init()
-sc = SparkContext(appName="app")
-spark = SparkSession(sc)
+spark = get_or_create_spark_session()

# basic dataframe
data = [
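The substantive change in both example scripts is the session setup: the `findspark.init()` / `SparkContext` / `SparkSession` boilerplate is replaced by Great Expectations' `get_or_create_spark_session()` helper, which, assuming it wraps Spark's builder `getOrCreate()` semantics as the name suggests, also reuses a session that is already running. A short sketch of the new setup with placeholder rows (the real scripts define their own toy data after the truncated `data = [`):

```python
from pyspark.sql import Row

from great_expectations.core.util import get_or_create_spark_session

# One call replaces findspark.init(), SparkContext(appName="app"), and SparkSession(sc).
spark = get_or_create_spark_session()

# Placeholder rows; the actual example scripts supply their own toy data.
data = [Row(a=1, b=2, c=3), Row(a=4, b=5, c=6)]
df = spark.createDataFrame(data)
df.show()
```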
