Knesset data pipelines

Data processing pipelines for loading, processing and visualizing data about the Knesset

Uses the datapackage pipelines and DataFlows frameworks.
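
Both frameworks are Python-based: a pipeline is a chain of steps that load, transform, and save tabular data as datapackages. As a rough illustration (not an actual pipeline from this repository), a minimal DataFlows flow looks like this:

from dataflows import Flow, add_field, dump_to_path, printer

# Illustrative sketch only: derive a new field and write the result as a datapackage.
def set_full_name(row):
    row['full_name'] = f"{row['first_name']} {row['last_name']}"

Flow(
    [{'first_name': 'Example', 'last_name': 'Member'}],  # toy input rows
    add_field('full_name', 'string'),                     # add the new field to the schema
    set_full_name,                                        # fill in its value for each row
    printer(),                                            # print rows while processing
    dump_to_path('example_output'),                       # write a datapackage to ./example_output
).process()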

Quickstart for data science

Follow these steps to get started quickly with exploring, processing, and testing the Knesset data.

Running with Docker

Install Docker for Windows, Mac or Linux

Pull the latest Docker image

docker pull orihoch/knesset-data-pipelines

Create a directory that will be shared between the host machine and the container:

sudo mkdir -p /opt/knesset-data-pipelines

Start the Jupyter Lab server:

docker run -it -p 8888:8888 --entrypoint jupyter \
           -v /opt/knesset-data-pipelines:/pipelines \
           orihoch/knesset-data-pipelines lab --allow-root --ip 0.0.0.0 --no-browser \
                --NotebookApp.token= --NotebookApp.custom_display_url=http://localhost:8888/

Access the server at http://localhost:8888/

Open a terminal inside the Jupyter Lab web UI and clone the knesset-data-pipelines project into the current directory:

git clone https://github.com/hasadna/knesset-data-pipelines.git .

You should now see the project files in the left sidebar.

Browse to the jupyter-notebooks directory and open one of the available notebooks.
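
Inside a notebook, one way to explore the data is with the datapackage library, loading a datapackage.json produced by the pipelines. The path below is only a placeholder; use whichever datapackage the notebook you are working with points to.

from datapackage import Package

# Placeholder path: substitute the datapackage.json produced by the
# pipeline (or published dataset) you want to explore.
package = Package('data/committees/datapackage.json')
print(package.resource_names)

# Read the first resource as a list of keyed rows (dicts).
rows = package.resources[0].read(keyed=True)
print(rows[:3])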

You can now add notebooks or modify existing ones, then open a pull request with your changes.

You can also modify the pipelines code from the host machine, and the changes will be reflected in the notebook environment.

Contributing

Looking to contribute? Check out the Help Wanted Issues or the Noob Friendly Issues for some ideas.

Useful resources for getting acquainted:

  • DPP documentation
  • Code for the periodic execution component
  • Info on available data from the Knesset site
  • Living document with short list of ongoing project activities
