
Run a model predicting the 2024 USA presidential party winner: an Allora worker for topic 11, providing initial inferences for token R (Republican) and D (Democrat)


Disclaimer: read this before running!

This campaign rewards users who run worker nodes that provide inferences for the US presidential election party winner once a day. Each inference should be the probability of the Republican party winning the election. Source: run-inference-political

1. Components

  • Worker: The node that publishes inferences to the Allora chain.
  • Inference: A container that conducts inferences, maintains the model state, and responds to internal inference requests via a Flask application. This node runs a basic regression model for probability predictions.
  • Updater: A cron-like container that updates the inference node's data by fetching the latest market information from the data provider daily, keeping the model current with new market trends.
  • Topic ID: This worker runs on topic 11.
  • TOKEN=D for Democrat (D) inferences
  • TOKEN=R for Republican (R) inferences
  • MODEL: Use your own model, or modify model.py in the /models folder.
  • Probability: Predicted percentage, 0 - 100%
  • Dataset: polymarket.com
  • Expected result: One inference every 24 hours
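To make the Inference component concrete, here is a minimal sketch of the kind of Flask endpoint it exposes. The route shape `/inference/<token>` and the `{"value": "..."}` response follow this README; the in-memory `FAKE_PROBS` table is a stand-in for the real pickled model, and all names here are illustrative, not the repo's actual code.

```python
# Minimal sketch of the inference container's Flask endpoint.
# FAKE_PROBS is a placeholder; the real app loads a pickled model
# from inference-data/model/model.pkl instead.
from flask import Flask, jsonify

app = Flask(__name__)

# Placeholder winning probabilities (%) standing in for model output.
FAKE_PROBS = {"R": 55.1234, "D": 44.8766}

@app.route("/inference/<token>")
def inference(token):
    # Only the two party tokens from the README are accepted.
    if token not in FAKE_PROBS:
        return jsonify(error="token must be R or D"), 400
    # Respond with the probability formatted as the README shows.
    return jsonify(value=f"{FAKE_PROBS[token]:.4f}")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

Queried with `curl http://127.0.0.1:8000/inference/D`, a server like this returns a JSON body of the `"value":"xx.xxxx"` shape.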

Setup Worker

The structure of topic 11 - Allora Worker Node

./root/allora-usa-election
├── config.json
├── docker-compose.yml
├── Dockerfile
├── app.py
├── model.py
├── requirements.txt
├── worker-data
│   └── environment
└── inference-data
    ├── dataset.csv (R/D)
    └── model
        └── model.pkl (R/D)
  1. Clone this repository

    git clone https://github.com/arcxteam/allora-usa-election.git
    cd allora-usa-election
  2. Provide and configure the environment file; tune the model

    Copy the example .env.example and set your variables:

    cp .env.example .env
    nano .env

    Here are the currently accepted configuration variables:

    • TOKEN= (D or R)
    • MODEL= the model name (defaults to SVR, or supply your own model)
    • Save with Ctrl+X, then Y, then Enter

    Tune the model or check the /models folder:

    nano model.py
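As a rough illustration of what model tuning in model.py involves with the default SVR, the sketch below fits an SVR on a toy series of daily Polymarket-style probabilities and pickles it the way the worker layout suggests. The toy data, file path, and hyperparameters are assumptions for illustration, not the repo's actual training code.

```python
# Hedged sketch of SVR-based model tuning (toy data, assumed paths).
import pickle

import numpy as np
from sklearn.svm import SVR

# Toy dataset: day index -> probability (%) of the chosen party winning.
days = np.arange(10).reshape(-1, 1)
probs = np.array([52.0, 53.1, 51.8, 54.2, 55.0,
                  54.7, 56.1, 55.5, 57.0, 56.4])

# Hyperparameters here are the kind of knobs one would tune in model.py.
model = SVR(kernel="rbf", C=100, epsilon=0.1)
model.fit(days, probs)

# Predict the next day's probability and persist the fitted model,
# mirroring the inference-data/model/model.pkl layout from the tree above.
next_day = np.array([[10]])
prediction = float(model.predict(next_day)[0])

with open("model.pkl", "wb") as f:
    pickle.dump(model, f)
```

Swapping the kernel, `C`, or `epsilon`, or replacing SVR entirely, is the kind of change the MODEL variable and model.py editing step are pointing at.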
  3. Edit your config & initialize the worker

    Edit the wallet key, wallet name, RPC endpoint, interval, etc.:

    nano config.json

    Run the following commands from the root directory to initialize the worker:

    chmod +x init.config
    ./init.config
  4. Start the Services

    Run the following command to start the worker node, inference, and updater nodes:

    docker compose up --build -d

    Check that the services are running:

    docker compose logs -f --tail=100

    To confirm that the worker successfully sends the inferences to the chain, look for the following logs:

    {"level":"debug","msg":"Send Worker Data to chain","txHash":<tx-hash>,"time":<timestamp>,"message":"Success"}
    


2. Testing Inference Only

Send requests to the inference model. For example, request the probability for Democrat (D) or Republican (R):

curl http://127.0.0.1:8000/inference/D

Expected response: a numeric value, e.g. "value":"xx.xxxx"
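If you want to check the response from Python instead of eyeballing curl output, a small parser like the one below validates the expected shape. The `{"value": "..."}` format follows this README; here a canned response body is parsed rather than hitting a live node, and the helper name is illustrative.

```python
# Sketch: validate an inference response body of the README's shape.
import json

def parse_inference(body: str) -> float:
    """Extract and sanity-check the probability from a response body."""
    value = json.loads(body)["value"]
    prob = float(value)
    # The README states predictions are percentages in 0 - 100%.
    if not 0.0 <= prob <= 100.0:
        raise ValueError(f"probability out of range: {prob}")
    return prob

# Canned example in place of: curl http://127.0.0.1:8000/inference/D
sample = '{"value":"63.1250"}'
print(parse_inference(sample))  # prints 63.125
```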

3. Note: check the logs to confirm the worker registered successfully

docker logs -f <your-container-id> 2>&1 | head -n 70
docker logs -f <your-container-id> 2>&1 | grep -i "Success"

Result every 24 hours: topic 11 issues only one request per day, and points depend on how your worker's prediction compares with the ground truth from Polymarket.
