HuBMAP Data Portal: This is a Flask app, using React on the front end and primarily Elasticsearch on the back end, wrapped in a Docker container for deployment with Docker Compose. The front end depends on AWS S3 and CloudFront for the hosting and delivery of images. It is deployed at portal.hubmapconsortium.org.
The Data Portal depends on many APIs, and directly or indirectly, on many other HuBMAP repos.
```mermaid
graph LR
gateway
click gateway href "https://github.com/hubmapconsortium/gateway"
top[portal-ui] --> commons
click top href "https://github.com/hubmapconsortium/portal-ui"
click commons href "https://github.com/hubmapconsortium/commons"
top --> ccf-ui
click ccf-ui href "https://github.com/hubmapconsortium/ccf-ui"
top --> vitessce --> viv
click vitessce href "https://github.com/vitessce/vitessce"
click viv href "https://github.com/hms-dbmi/viv"
top --> portal-visualization --> vitessce-python
click portal-visualization href "https://github.com/hubmapconsortium/portal-visualization"
click vitessce-python href "https://github.com/vitessce/vitessce-python"
top --> cells-sdk --> cells-api --> pipe
click cells-sdk href "https://github.com/hubmapconsortium/cells-api-py-client"
click cells-api href "https://github.com/hubmapconsortium/cross_modality_query"
top --> gateway
gateway --> entity-api --> pipe[ingest-pipeline]
click entity-api href "https://github.com/hubmapconsortium/entity-api"
click pipe href "https://github.com/hubmapconsortium/ingest-pipeline"
gateway --> assets-api --> pipe
%% assets-api is just a file server: There is no repo.
gateway --> search-api --> pipe
click search-api href "https://github.com/hubmapconsortium/search-api"
gateway --> workspaces-api
click workspaces-api href "https://github.com/hubmapconsortium/user_workspaces_server"
pipe --> valid
pipe --> portal-containers
click portal-containers href "https://github.com/hubmapconsortium/portal-containers/"
subgraph APIs
entity-api
search-api
cells-api
assets-api
workspaces-api
end
subgraph Git Submodules
valid
end
subgraph Python Packages
commons
portal-visualization
vitessce-python
cells-sdk
end
subgraph NPM Packages
vitessce
viv
end
subgraph cdn.jsdelivr.net
ccf-ui
end
subgraph legend
owner
contributor
not-harvard
end
classDef contrib fill:#ddffdd,stroke:#88AA88,color:#000;
class owner,contributor,top,vitessce,viv,portal-visualization,vitessce-python,cells-sdk,portal-containers,valid,search-api contrib
classDef owner stroke-width:3px,font-style:italic,color:#000;
class owner,top,vitessce,viv,portal-visualization,vitessce-python,portal-containers owner
style legend fill:#f8f8f8,stroke:#888888;
```
Issues with the Portal can be reported via email. More information on how issues are tracked across HuBMAP is available here.
We try to have a design ready before we start coding.
Often, issues are filed in pairs, tagged `design` and `enhancement`.
All designs are in Figma.
(Note that if that link redirects to `/files/recent`, you'll need to be added to the project, preferably with a `.edu` email, if you want write access.)
- `git`: Suggest installing Apple Xcode.
- `python 3.9`:
  - MiniConda: Install miniconda and create a new conda environment:
    ```
    conda create -n portal python=$(cat .python-version)
    ```
  - pyenv:
    ```
    brew install pyenv
    brew install pyenv-virtualenv
    ```
    cd into portal-ui (or provide the full path to the `/portal-ui/.python-version` file):
    ```
    pyenv install `cat .python-version`
    pyenv virtualenv `cat .python-version` portal
    pyenv activate portal
    ```
- `nodejs`/`npm`: Suggest installing nvm and then using it to install the appropriate node version:
  ```
  nvm install `cat .nvmrc`
  nvm use `cat .nvmrc`
  ```
- Optional: VS Code, with recommended extensions.
  - While this is optional, it is worth noting that it is in use by the whole development team.
  - Using VS Code lets us share default configuration settings and easily run scripts using VS Code tasks.
- `docker`:
  - Docker is necessary in order to create images for the deploy process.
  - It is also used to run a local instance of the application when using the test scripts in the `./etc` directory.
After checking out the project, cd-ing into it, and setting up a Python 3.9 virtual environment:
- Get `app.conf` from Confluence or from another developer and place it at `context/instance/app.conf`.
- Run `etc/dev/dev-start.sh` to start the webpack dev and flask servers and then visit localhost:5001.
  - If using VS Code, you can also use the `dev-start` task, which will launch these services in separate terminal windows.
The webpack dev server serves all files within the public directory and provides hot module replacement for the React application; requests for anything outside the public directory are proxied to the flask server.
Note: Searchkit, our interface to Elasticsearch, has changed significantly in the latest release. Documentation for version 2.0 can be found here.
Every PR should be reviewed, and every PR should include a new `CHANGELOG-something.md` at the root of the repository. These are concatenated by `etc/build/push.sh` during deploy.
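As a rough sketch of what that concatenation amounts to (the real logic lives in `etc/build/push.sh`; the function name and behavior here are illustrative, not a copy of the script):

```python
from pathlib import Path

def concat_changelogs(repo_root: str, target: str = "CHANGELOG.md") -> list[str]:
    """Fold CHANGELOG-*.md fragments into a single file; return the fragment names."""
    root = Path(repo_root)
    fragments = sorted(root.glob("CHANGELOG-*.md"))
    combined = "\n".join(f.read_text() for f in fragments)
    (root / target).write_text(combined)
    return [f.name for f in fragments]
```

Because each PR adds its own fragment file, concurrent PRs never conflict on a single shared changelog.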
⚛️ React

Note: Any mentions of `.js`/`.jsx` in the following guidelines are interchangeable with `.ts`/`.tsx`. New features should ideally be developed in TypeScript.

- Components with tests or styles should be placed in their own directory.
- Styles should follow the `style.*` pattern, where the extension is `js` for styled components or `css` for stylesheets.
  - New styled components should use `styled` from `@mui/material/styles`.
- Supporting test files have specific naming conventions:
  - Jest tests should follow the `*.spec.js` pattern.
  - Stories should follow the `*.stories.js` pattern.
  - Cypress tests should follow the `*.cy.js` pattern.
  - For all test files, the prefix is the name of the component.
- Each component directory should have an `index.js` which exports the component as default.
- Components which share a common domain can be placed in a directory within `components` named after the domain.
🖼️ Images
Images should be displayed using the `srcset` attribute of the `source` element. You should prepare four versions of the image, starting at its original size and at 75%, 50%, and 25% of the original size, preserving its aspect ratio. If available, you should also provide a 2x resolution for higher-density screens.
- For example, to resize images using Mac's Preview, you can visit the 'Tools' menu and select 'Adjust Size'; from there you can change the image's width while making sure 'Scale Proportionally' and 'Resample Image' are checked. You can also use the `resize-images.sh` script. Once ready, each version of the image should be processed with an image optimizer such as ImageOptim or Online Image Compressor.
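The arithmetic for those four sizes, and the resulting `srcset` string, can be sketched as follows (the `{name}-{width}w` file-naming pattern is an assumption for illustration, not the portal's actual convention):

```python
def scaled_sizes(width: int, height: int) -> list[tuple[int, int, int]]:
    """Return (percent, width, height) for the 100/75/50/25% versions,
    preserving the aspect ratio."""
    return [(pct, round(width * pct / 100), round(height * pct / 100))
            for pct in (100, 75, 50, 25)]

def build_srcset(basename: str, ext: str, width: int, height: int) -> str:
    """Build a srcset value using a hypothetical '{name}-{width}w' file pattern."""
    return ", ".join(f"{basename}-{w}w.{ext} {w}w"
                     for _pct, w, _h in scaled_sizes(width, height))
```

For example, `build_srcset("hero", "jpg", 1400, 788)` yields entries at widths 1400, 1050, 700, and 350.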
Homepage images should also be provided in `.webp` format; a batch conversion script is provided to aid this process.
Finally, after processing, the images should be added to the S3 bucket, `portal-ui-images-s3-origin`, to be delivered by the CloudFront CDN.
SVG files larger than 5KB should also be stored in S3 and delivered by the CDN. SVG files smaller than 5KB can be included in the repository in `context/app/static/assets/svg/`.
The CDN responds with a `cache-control: max-age=1555200` header for all items, but this can be overridden on a per-image basis by setting the `cache-control` header for the object in S3.
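One way to set that per-object override, sketched with boto3 (the bucket and key are placeholders, and the call is untested against AWS; copying an object onto itself with `MetadataDirective="REPLACE"` is the standard way to rewrite its metadata):

```python
def max_age(days: int) -> int:
    """Seconds for a cache-control max-age; the CDN default, 1555200, is 18 days."""
    return days * 24 * 60 * 60

def set_cache_control(bucket: str, key: str, seconds: int) -> None:
    """Rewrite one S3 object's Cache-Control header (illustrative sketch)."""
    import boto3  # deferred so the sketch imports without AWS credentials
    s3 = boto3.client("s3")
    s3.copy_object(
        Bucket=bucket,
        Key=key,
        CopySource={"Bucket": bucket, "Key": key},
        MetadataDirective="REPLACE",  # replace, rather than copy, the metadata
        CacheControl=f"max-age={seconds}",
    )
```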
If an uploaded file replaces an existing one and uses the same file name, a CloudFront cache invalidation should be run, targeting the specific file(s) that have been updated.
- Log in to the AWS console and go to distributions.
- Select the distribution corresponding to the S3 server.
- Go to the `Invalidations` tab and click `Create Invalidation`.
- Enter the file names which should be invalidated in cache, with the full path; you can target multiple similar file names by using wildcards.
  - e.g. to invalidate all files in `/` starting with `publication-slide`, you would enter `/publication-slide*`, which would select all the different sizes of that image.
- After confirming that you are targeting only the intended files, click `Create Invalidation` again.
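The same invalidation can be scripted; a sketch with boto3 (the distribution ID is a placeholder, and the API call is untested against AWS):

```python
import time

def invalidation_batch(paths: list[str]) -> dict:
    """Build the InvalidationBatch payload; CallerReference must be unique per request."""
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        "CallerReference": str(time.time()),
    }

def invalidate(distribution_id: str, paths: list[str]):
    """Submit a CloudFront invalidation for the given paths (illustrative sketch)."""
    import boto3  # deferred so the sketch imports without AWS credentials
    client = boto3.client("cloudfront")
    return client.create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch=invalidation_batch(paths),
    )
```

Wildcards work here as in the console, e.g. `invalidate("E123EXAMPLE", ["/publication-slide*"])`.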
For the homepage carousel, images should have a 16:9 aspect ratio, a width of at least 1400px, a title, a description, and, if desired, a url to be used for the 'Get Started' button.
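A quick check of the aspect-ratio and width requirements above (illustrative only; the tolerance is an assumption, since an exactly 16:9 integer size is not always possible):

```python
def valid_carousel_image(width: int, height: int, tolerance: float = 0.01) -> bool:
    """True if the image is at least 1400px wide and roughly 16:9."""
    return width >= 1400 and abs(width / height - 16 / 9) < tolerance
```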
Python unit tests use Pytest, front end tests use Jest, and end-to-end tests use Cypress. Each suite is run separately on GitHub CI.
Load tests are available, but they are not run as part of CI.
- Jest: `cd context; npm run test`
- Cypress: With the application running, `cd end-to-end; npm run cypress:open`
  - If using WSL2, see the WSL2-specific steps in the end-to-end readme.
  - Note that the Cypress tests (particularly for the publication page) are expected to be run with the `test` environment enabled in `app.conf`.
- Pytest: `cd context; pytest app --ignore app/api/vitessce_conf_builder`
CI lints the codebase, and to save time, we also lint in a pre-commit hook.
If you want to bypass the hook, set `HUSKY_SKIP_HOOKS=1`.
You can also lint and auto-correct from the command-line:
```
cd context
npm run lint
npm run lint:fix
```
```
EXCLUDE=node_modules,etc/dev/organ-utils
autopep8 --in-place --aggressive -r . --exclude $EXCLUDE
```
To start Storybook locally you can either run `etc/dev/dev-start.sh`, or just `npm run storybook`, and after it has started, visit localhost:6006.
The build, tag, deploy, and QA procedures are detailed here.
Instructions for Production are provided here.
Webpack
To view visualizations of the production webpack bundle, run `npm run build:analyze`.
The script will generate two files, `report.html` and `stats.html`, inside the public directory, each showing a different visual representation of the bundle.
Docker
To build and run the docker image locally:
```
etc/dev/docker.sh 5001 --follow
```
Our base image is based on this template.
Docker Compose
In the deployments, our container is behind an NGINX reverse proxy; here's a simple demonstration of how that works.
The portal team contributes code to a subdirectory within `search-api` to clean up the raw Neo4j export and provide us with clean, usable facets. Within that directory, `config.yaml` configures the Elasticsearch index itself.
Data visualization is an integral part of the portal, allowing users to view the results of analysis pipelines or raw uploaded data directly in the browser. How such data is processed and prepared for visualization in the client-side JavaScript via `vitessce` can be found here.

General-purpose tools:
- `viv`: JavaScript library for rendering OME-TIFF and OME-NGFF (Zarr) directly in the browser. Packaged as deck.gl layers.
- `vitessce`: Visual integration tool for exploration of spatial single-cell experiments. Built on top of deck.gl.
- `vitessce-python`: Python wrapper classes which make it easier to build configurations.
Particular to HuBMAP:
- `portal-visualization`: Given HuBMAP Dataset JSON, creates a Vitessce configuration.
- `portal-containers`: Docker containers for visualization preprocessing.
- `airflow-dev`: CWL pipelines wrapping those Docker containers.
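For orientation, a Vitessce view config is ultimately just JSON. The sketch below follows the general shape of the public schema, but it is illustrative only, not what `portal-visualization` actually emits; the schema version, file type string, and URL are assumptions:

```python
def minimal_vitessce_conf(name: str, image_url: str) -> dict:
    """A toy Vitessce configuration: one OME-TIFF image shown in a spatial view."""
    return {
        "version": "1.0.15",  # schema version; assumed for illustration
        "name": name,
        "datasets": [{
            "uid": "A",
            "name": name,
            "files": [{"fileType": "image.ome-tiff", "url": image_url}],
        }],
        "coordinationSpace": {"dataset": {"A": "A"}},
        "layout": [{
            "component": "spatial",
            "coordinationScopes": {"dataset": "A"},
            "x": 0, "y": 0, "w": 12, "h": 12,
        }],
        "initStrategy": "auto",
    }
```

`vitessce-python` exists precisely so that configurations like this can be built with classes instead of hand-written dicts.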