[WIP] Add s390x architecture #4777

Open

buger wants to merge 38 commits into master
Conversation

buger
Member

@buger buger commented Feb 15, 2023

Description

Related Issue

Motivation and Context

How This Has Been Tested

Screenshots (if appropriate)

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Refactoring or add test (improvements in base code or adds test coverage to functionality)

Checklist

  • I ensured that the documentation is up to date
  • I explained why this PR updates go.mod in detail with reasoning why it's required
  • I would like a code coverage CI quality gate exception and have explained why

@buger buger requested a review from a team as a code owner February 15, 2023 10:24
@Tyk-ITS
Contributor

Tyk-ITS commented Feb 15, 2023

API tests result: success
Branch used: refs/pull/4777/merge
Commit:
Triggered by: pull_request (@buger)
Execution page

@buger buger changed the title Add s390x architecture [WIP] Add s390x architecture Feb 15, 2023
@buger buger marked this pull request as draft February 15, 2023 10:56
@v1gnesh

v1gnesh commented Feb 27, 2023

Hello @buger,

I've got good news :) To start off, it builds ok for s390x.

user@mac tyk % GOOS=linux GOARCH=s390x go build
user@mac tyk % file tyk
tyk: ELF 64-bit MSB executable, IBM S/390, version 1 (SYSV), statically linked, Go BuildID=JT_V362ULqATfMQF5A4I/pbf0Q2lHP9AT_xp81udl/j-ut4eSR3jveR8KpKvDU/TbcFds5l5WHQAtENSD6z, with debug_info, not stripped

EDIT: I see that redis is needed for running go test.
This should help - https://hub.docker.com/r/s390x/redis/
And then install golang within the same image, I suppose.
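Something like this is roughly what I have in mind for the test run - just a sketch, assuming the suite picks up Redis on the default localhost:6379:

$ docker run -d --name redis -p 6379:6379 s390x/redis   # Redis for the test suite
$ go test ./...                                         # run on (or inside) an s390x environment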

@buger
Member Author

buger commented Feb 28, 2023

@v1gnesh yes. The main issue is not the gateway itself but its dependencies. For example, the current tests are failing because the Python plugins can't be made to work. So if we want this architecture, we may have to declare Python plugins not tested/supported and disable them in the tests.

Contributor

@alephnull alephnull left a comment

I'm afraid this PR is incomplete. I would approach this by first getting a build validated on s390x, and then adding s390x to the build pipeline.

@v1gnesh

v1gnesh commented Feb 28, 2023

@v1gnesh yes. The main issue is not the gateway itself but its dependencies. For example, the current tests are failing because the Python plugins can't be made to work. So if we want this architecture, we may have to declare Python plugins not tested/supported and disable them in the tests.

I may be completely off on this one, please bear with me.
So... in the CI test, I see it fails at pip install grpcio.
I see gRPC doesn't (and probably won't) support s390x.
When looking around, I found a couple of places where there are suggestions to build with an env. var.
Python build errors on Linux ppc64 #25454
GRPC support on s390x

This issue details GitLab's quest with BoringSSL and s390x - Path Forward for grpc s390x architecture support

And finally, IBM provides build instructions for BoringSSL for s390x.
https://github.com/linux-on-ibm-z/docs/wiki/Building-BoringSSL

In summary, one way is to build with OpenSSL in place of BoringSSL.
Another is perhaps looking at IBM's method above.

Either way, this is quite a deep rabbit hole.
GitLab seems to have successfully got past a lot of problems (in general) trying to build for s390x, per the issue linked above.

Is any of this relevant or helpful? 😅

@Tyk-ITS
Contributor

Tyk-ITS commented Feb 28, 2023

API tests result: success
Branch used: refs/pull/4777/merge
Commit: 885009a
Triggered by: pull_request (@buger)
Execution page

@Tyk-ITS
Contributor

Tyk-ITS commented Feb 28, 2023

API tests result: success
Branch used: refs/pull/4777/merge
Commit: cfd3e0c
Triggered by: pull_request (@buger)
Execution page

@v1gnesh

v1gnesh commented Mar 1, 2023

Is there an option of trying the python grpcio build with RHEL?

grpcio Python module fails to build on IBM/z (s390x) machine #23797
Apart from the specific post, the whole discussion may be useful.
And this one too - openssl/openssl#15941.

For a bunch of these errors, it seems that installing libssl-dev (for Debian/Ubuntu) or openssl-devel (for RHEL) helps.

2023-02-28T21:33:35.1370098Z #23 1549.7       ./src/core/tsi/ssl/key_logging/ssl_key_logging.h:23:10: fatal error: openssl/ssl.h: No such file or directory
2023-02-28T21:33:35.1370479Z #23 1549.7          23 | #include <openssl/ssl.h>
2023-02-28T21:33:35.1370769Z #23 1549.7             |          ^~~~~~~~~~~~~~~

Alternatively, I came across a command-line option to point openssl or "whoever's asking" to where the headers are.
I'm sure there are loads of other quirks addressed in this page - https://github.com/linux-on-ibm-z/docs/wiki
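If it helps, here's roughly the shape of that workaround on Debian/Ubuntu - untested, and the include/lib paths are just the usual multiarch defaults, so adjust as needed:

$ sudo apt-get install libssl-dev
$ GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1 \
    CFLAGS="-I/usr/include" \
    LDFLAGS="-L/usr/lib/s390x-linux-gnu" \
    pip3 install grpcio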

I have to say, I'm pretty out of my depth now lol.

@buger
Member Author

buger commented Mar 1, 2023

@v1gnesh to be frank, I'm thinking of just removing this from the Docker image - the gRPC piece is optional and only required for the Python and coprocess plugins. So I think it would be a good compromise.

@v1gnesh

v1gnesh commented Mar 1, 2023

On an unrelated note, is there an option of using plugins in Rust, if we're unable to work out Python?

@v1gnesh

v1gnesh commented Mar 1, 2023

@v1gnesh to be frank, I'm thinking of just removing this from the Docker image - the gRPC piece is optional and only required for the Python and coprocess plugins. So I think it would be a good compromise.

Okay. Another strange question - is there an option of using NATS for the inter-language connection? I do understand that RPC and messaging are different; just wondering if it makes sense for this platform or in some contexts, since you know more about both.

@buger
Member Author

buger commented Mar 1, 2023

It is a bit of a different context, yes. RPC is about a real-time response, e.g. request -> response. NATS is more about async work: you send a message and wait for a worker to process it (and don't always expect a response back). But if you have a specific use case for it, it can be implemented using Go plugins.

@buger
Member Author

buger commented Mar 1, 2023

As for your question about Rust: gRPC plugins were designed exactly for this case, and in general they may work without installing extra dependencies. For gRPC plugins to work, only the Go codebase is required - they do not depend on the Python deps. So even if I remove Python support from this image, a gRPC plugin written in Rust should still be an option.

@v1gnesh

v1gnesh commented Mar 1, 2023

Is there a significant (like maybe an order of magnitude) difference in response time when using NATS (async) for RPC (real-time)?
They do offer exactly once processing.

@v1gnesh

v1gnesh commented Mar 1, 2023

As for your question about Rust: gRPC plugins were designed exactly for this case, and in general they may work without installing extra dependencies. For gRPC plugins to work, only the Go codebase is required - they do not depend on the Python deps. So even if I remove Python support from this image, a gRPC plugin written in Rust should still be an option.

That's good to hear. I wonder how much one can re-use stuff from axum.

@buger
Member Author

buger commented Mar 1, 2023

Here is a simple Rust gRPC plugin, written by ChatGPT :)

use tonic::{transport::Server, Request, Response, Status};
// `tyk` is assumed to be the module generated by tonic-build from Tyk's coprocess .proto;
// the exact type, trait and method names depend on that proto definition.
use tyk::dispatcher_server::{Dispatcher, DispatcherServer};
use tyk::{CoprocessObject, CoprocessRequest};

#[derive(Default)]
struct MyService {}

#[tonic::async_trait]
impl Dispatcher for MyService {
    async fn dispatch(
        &self,
        request: Request<CoprocessRequest>,
    ) -> Result<Response<CoprocessObject>, Status> {
        // Here you can implement your logic to communicate with Tyk API Gateway
        let _incoming = request.into_inner();
        let response = CoprocessObject {
            body: "footer".to_string(),
            // prost-generated messages derive Default, so the remaining fields can be left as-is
            ..Default::default()
        };
        Ok(Response::new(response))
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let addr = "0.0.0.0:50051".parse()?;
    let service = MyService::default();

    println!("Server listening on {}", addr);

    // Register the coprocess service with tonic's gRPC server and start listening
    Server::builder()
        .add_service(DispatcherServer::new(service))
        .serve(addr)
        .await?;

    Ok(())
}

I have not tested it, and it probably won't work out of the box, but you get the idea - it is similar to the other gRPC plugins, like https://tyk.io/docs/plugins/supported-languages/rich-plugins/grpc/request-transformation-java/
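For the crate setup, something like this should be close - assuming the `tyk` module gets generated from the coprocess .proto via tonic-build, and with the project name as a placeholder:

$ cargo new tyk-grpc-plugin && cd tyk-grpc-plugin
$ cargo add tonic prost
$ cargo add tokio --features full
$ cargo add --build tonic-build    # generate the `tyk` module from the .proto in build.rs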

@github-actions
Contributor

github-actions bot commented Mar 1, 2023

💥 CI tests failed 🙈

git-state

all ok

Please look at the run or in the Checks tab.

@Tyk-ITS
Contributor

Tyk-ITS commented Mar 1, 2023

API tests result: success
Branch used: refs/pull/4777/merge
Commit: c02d2c1
Triggered by: pull_request (@buger)
Execution page

@Tyk-ITS
Contributor

Tyk-ITS commented Mar 1, 2023

API tests result: success
Branch used: refs/pull/4777/merge
Commit: a96937b
Triggered by: pull_request (@buger)
Execution page

@v1gnesh

v1gnesh commented Mar 19, 2023

Hi @buger,

I've managed to get access to s390x VMs through this IBM LinuxONE OSS Community Cloud form.
With this, RHEL9.1, ubuntu22.04, and SLES15SP4 are available... in addition to some previous releases.

To start off, I got a RHEL9.1 VM and did the following:

$ sudo dnf install gcc-c++ openssl-devel golang python3-devel python3-pip redis
$ pip install --upgrade pip setuptools wheel

... and grpcio has built successfully.

$ GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1 pip3 install grpcio
Defaulting to user installation because normal site-packages is not writeable
Collecting grpcio
  Using cached grpcio-1.51.3.tar.gz (22.1 MB)
  Preparing metadata (setup.py) ... done
Building wheels for collected packages: grpcio
  Building wheel for grpcio (setup.py) ... done
  Created wheel for grpcio: filename=grpcio-1.51.3-cp39-cp39-linux_s390x.whl size=78562413 sha256=eae086a6e53fade54c33ec1a6ccf0e00d1b369bdd56b9fe1ef3ce5cdb0fdae01
  Stored in directory: /home/linux1/.cache/pip/wheels/a9/cd/20/1347f1406df5f8bec22089be8f7406b843870f7685cc3fcfe4
Successfully built grpcio
Installing collected packages: grpcio
Successfully installed grpcio-1.51.3

If you could sign up for an s390x VM (which is free for OSS projects) in the above link, you'll get access to a machine to build/test for s390x.
I don't know if GitHub Actions can be hooked up to this machine via SSH though...

@buger
Member Author

buger commented Mar 20, 2023

That's great research! But it is also quite a complex process. I would say it will depend on prioritisation. In its current state it works well, even with gRPC plugins (only Python plugins do not work), so for the majority of users it should not be a big deal. If we get some commercial lever for Python plugin support on this platform, then we will dig deeper. Hope that makes sense!

@v1gnesh

v1gnesh commented Mar 20, 2023

I understand. Do you think it's worth re-running the build process once, with these additional packages going in before grpcio in the pip install command? It's really just 3 commands.
The example below is for RHEL/Fedora.

$ sudo dnf install gcc-c++ openssl-devel golang python3-devel python3-pip
$ pip install --upgrade pip setuptools wheel
$ GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1 pip3 install grpcio

On Ubuntu, I guess the equivalent would be
python3-setuptools, libpython3.9, python3.9-dev, build-essential, gcc, g++, libssl-dev, and pip.
Then grpcio via pip (with the env var GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1 set before pip3 install grpcio).
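So roughly, on Ubuntu (untested - package names taken from the list above):

$ sudo apt-get install python3-setuptools python3.9-dev build-essential gcc g++ libssl-dev python3-pip
$ pip3 install --upgrade pip setuptools wheel
$ GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1 pip3 install grpcio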

Adding a link to the original build failure logs for future reference -
https://github.com/TykTechnologies/tyk/actions/runs/4297128472/jobs/7489899139#step:10:2086

I'm sure you already know this, but having the whole thing working unlocks (for Tyk) new customers who run on a whole new platform. For a reasonably small amount of extra work, your product becomes an important and compelling answer to API management across domains, for any company with distributed and mainframe tech.

If it were Java plugins, I wouldn't care lol. A lot more people know Python (than Go, for example), and having that piece of the puzzle work on this platform would be nice.

@chrisanderton

One challenge with building grpcio from source as part of the release is the time it takes. It's not easy to cross-compile Python modules and it takes a long time to build under emulation; I've also found emulation of s390x and ppc64le through QEMU to be a bit buggy at times (especially when running locally on an M1 Mac). Self-hosted Github runners don't support many architectures at present; Gitlab runners are more flexible but that's a whole different topic.

There's also the question of supporting 3rd party package compilation.. what happens when a new release of the package fails to compile on a given target architecture? There's probably many more questions I haven't thought of yet :)

One approach might be to separate building of grpcio from the release of Tyk itself.. I've experimented with building the package as a wheel that is installed in a later build stage. This could be separated into a different build process altogether and once compiled the wheels could be pushed to a package repository rather than being built as part of the core Tyk release process: during the Tyk release the pre-compiled package for the target architecture would be installed. However, this also introduces a bunch of work/validation: Python wheels are sensitive to architecture, Python version and possibly pip version too.
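As a very rough sketch of that split (the package index URL here is just a placeholder, and the exact pinning would need more thought):

# one-off job on an s390x builder: compile the wheel once and publish it
$ GRPC_PYTHON_BUILD_SYSTEM_OPENSSL=1 pip3 wheel grpcio --wheel-dir ./wheels
$ twine upload --repository-url https://<internal-package-index>/ ./wheels/grpcio-*.whl

# later, during the Tyk image/package build: install the pre-compiled wheel instead of building from source
$ pip3 install --index-url https://<internal-package-index>/simple/ grpcio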

Not everybody uses Docker.. so an interim step might just be to document how to build/install the Python packages on architectures where pre-compiled versions are not available?

@v1gnesh

v1gnesh commented Mar 25, 2023

Thank you for sharing the details Chris, always fun learning the thought processes behind decisions.

One challenge with building grpcio from source as part of the release is the time it takes.

It might be worth taking a quick peek at https://earthly.dev/.
TBH, I don't know how well it'll fare against GHA's runners.
I'm not expecting any drastic changes in Tyk's workflows/tooling, just a suggestion that may help :)

It's not easy to cross-compile Python modules and it takes a long time to build under emulation; I've also found emulation of s390x and ppc64le through QEMU to be a bit buggy at times (especially when running locally on an M1 Mac). Self-hosted Github runners don't support many architectures at present; Gitlab runners are more flexible but that's a whole different topic.

IBM provides s390x virtual machine(s) for open source projects, and sign-up is here:
https://www.ibm.com/community/z/open-source/virtual-machines-request/
I do realize that GHA can't integrate with this directly, but at least this will be on real Z hardware.

There's also the question of supporting 3rd party package compilation.. what happens when a new release of the package fails to compile on a given target architecture?
There's probably many more questions I haven't thought of yet :)

You sharing your thought process here is great, everyone reading this can learn from it too!

One approach might be to separate building of grpcio from the release of Tyk itself.. ...during the Tyk release the pre-compiled package for the target architecture would be installed.
However, this also introduces a bunch of work/validation: Python wheels are sensitive to architecture, Python version and possibly pip version too.

True, I guess the only way to find out is to test the waters.
The cost of maintaining for a specific arch vs the market opportunity that comes with the arch.

Not everybody uses Docker.. so an interim step might just be to document how to build/install the Python packages on architectures where pre-compiled versions are not available?

Yup, I haven't gone beyond working out grpcio, because there's a lot of Tyk-specific auth & code that's in the CI process, which will be hard for me to recreate separately.

EDIT: In addition to a 120-day free trial, IBM can also provide a permanent set of machines, which can be worked out with them (no cost for OSS projects).
There's some T&C to sign (digitally), but I don't think there's anything out of the ordinary there.

@v1gnesh

v1gnesh commented Mar 29, 2023

Hey @chrisanderton, it seems you can also sign up for OpenShift on IBM/Marist's community cloud - https://linuxone.cloud.marist.edu/#/register?flag=OCP
This will allow deploying directly from git - https://video.ibm.com/recorded/129580742

@buger buger marked this pull request as ready for review March 31, 2023 14:39
@github-actions
Contributor

💥 CI tests failed 🙈

git-state

all ok

Please look at the run or in the Checks tab.

@buger
Member Author

buger commented Mar 31, 2023

I have added the proper cross-compilation toolchain for the given architectures to the base golang-cross image, which should resolve the Go plugins issue. We are migrating to Go 1.19 soon, and this PR will move to it too. Once 1.19 is merged, we will start planning the merge of this PR as well.
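For reference, the kind of invocation the cross toolchain enables looks roughly like this (illustrative only - the real pipeline wires this up inside the golang-cross image):

$ sudo apt-get install gcc-s390x-linux-gnu
$ CGO_ENABLED=1 CC=s390x-linux-gnu-gcc GOOS=linux GOARCH=s390x go build -o tyk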

@Tyk-ITS
Contributor

Tyk-ITS commented Mar 31, 2023

API tests result: success
Branch used: refs/pull/4777/merge
Commit: 249b68e
Triggered by: pull_request (@buger)
Execution page

@github-actions
Contributor

💥 CI tests failed 🙈

git-state

all ok

Please look at the run or in the Checks tab.

@github-advanced-security
Contributor

You have successfully added a new SonarCloud configuration ``. As part of the setup process, we have scanned this repository and found no existing alerts. In the future, you will see all code scanning alerts on the repository Security tab.

@Tyk-ITS
Contributor

Tyk-ITS commented Mar 31, 2023

API tests result: success
Branch used: refs/pull/4777/merge
Commit: c2d6713
Triggered by: pull_request (@buger)
Execution page

@chrisanderton

Thanks for your continued engagement @v1gnesh 👍

True, I guess the only way to find out is to test the waters. The cost of maintaining for a specific arch vs the market opportunity that comes with the arch.

Exactly this. We're looking at this from all sides, both business and technical. The technical team are taking a closer look at including it in our build process so we can better evaluate the end to end situation. I expect you'll have seen some of the commits. From a business perspective having a customer agree to pay for a supported environment would obviously accelerate some of the thinking!

Yup, I haven't gone beyond working out grpcio, because there's a lot of Tyk-specific auth & code that's in the CI process, which will be hard for me to recreate separately.

The good thing is that getting all the packages built without Python plugin support is quite straightforward. As before, including Python plugin support in the build has a dependency on compiling grpcio against OpenSSL; doing this in the core build process is out of scope for our initial investigation. I know from my own experiments that it's possible to build and install the grpcio Python package on both s390x and ppc64le.. so this isn't a technical problem to solve, more of a packaging challenge/topic.

EDIT: In addition to a 120-day free trial, IBM can also provide a permanent set of machines, which can be worked out with them (no cost for OSS projects). There's some T&C to sign (digitally), but I don't think there's anything out of the ordinary there.

Yep.. I have one of these on a trial basis and saw the other option for longer term use. The gap is not so much access to virtual environments, but more having a CI-system that supports remote runners on the architecture. We currently use Github actions and self-hosted runners are limited in architecture support at present. It's all part of the mix :)

We're on an invite together from a few weeks back, so if you ever want to drop me an email or talk about this on a call, I'm available. We'll keep you posted on any progress in the meantime.

@v1gnesh

v1gnesh commented Apr 1, 2023

We currently use Github actions and self-hosted runners are limited in architecture support at present

IBM & others involved have been working a lot on bringing more open source to s390x.
Seems a hard thing to go any faster on... given the huge number of direct & indirect dependencies.
For GHA, it's down to Microsoft and IBM now..

I found these recently -
Issue - Support self-hosted runner on linux-s390x platform #2263
PR - [RFC] Support self-hosted runner on linux-s390x platform #2264

@sonarqubecloud

Kudos, SonarCloud Quality Gate passed!

Bugs: 0 (rating A)
Vulnerabilities: 0 (rating A)
Security Hotspots: 0 (rating A)
Code Smells: 0 (rating A)

No Coverage information
No Duplication information

@Tyk-ITS
Contributor

Tyk-ITS commented Apr 27, 2023

API tests result: success
Branch used: refs/pull/4777/merge
Commit: af307a5
Triggered by: pull_request (@buger)
Execution page
