First Time

Pre talk

  • Check out the pre-early17 branch:
git checkout pre-early17
  • Adjust hostPath in datagrid/datagrid.yml to point to the correct folder.

  • Make sure Docker is running.

  • Docker settings: 5 CPUs, 8 GB memory

  • Start OpenShift cluster:

oc cluster up --public-hostname=127.0.0.1
  • Deploy all components:
./deploy-all.sh

You can follow progress of deployment of Infinispan server pods via:

kubetail -l cluster=datagrid
  • Start the Jupyter notebook:
cd analytics/analytics-jupyter
~/anaconda/bin/jupyter notebook
  • Clear Jupyter output by clicking: Cell / All Output / Clear
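The pre-talk steps above can be consolidated into a small wrapper script. This is a hypothetical helper, not part of the repo; it defaults to DRY_RUN=1 so it only prints each command for review, and it skips the manual steps (editing hostPath, checking Docker settings, clearing Jupyter output), which stay manual.

```shell
#!/usr/bin/env bash
# Hypothetical pre-talk setup wrapper (not in the repo).
# DRY_RUN=1 (the default) prints each command instead of executing it;
# set DRY_RUN=0 to actually run the setup.
set -euo pipefail
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "+ $*"
  else
    "$@"
  fi
}

run git checkout pre-early17                      # demo starting point
run oc cluster up --public-hostname=127.0.0.1     # local OpenShift cluster
run ./deploy-all.sh                               # deploy all components
```

Remember that hostPath in datagrid/datagrid.yml still has to be adjusted by hand before running this.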

Real Time Demo

  • Start delays.query.continuous.fx.FxApp from the IDE and show that no delays are displayed yet.

  • Implement query in delays.query.continuous.ContinuousQueryVerticle:

Query query = qf.from(StationBoard.class)
   .having("entries.delayMin").gt(0L)
   .build();
  • Implement publishing delay in delays.query.continuous.ContinuousQueryVerticle:
value.entries.stream()
   .filter(e -> e.delayMin > 0)
   .forEach(e -> {
      publishDelay(key, e);
   });
  • Add continuous query in delays.query.continuous.ContinuousQueryVerticle:
continuousQuery.addContinuousQueryListener(query, listener);
  • Redeploy delays.query.continuous.ContinuousQueryVerticle verticle:
cd real-time/real-time-vertx
mvn clean package -DskipTests=true
oc start-build real-time --from-dir=. --follow
  • Start delays.query.continuous.fx.FxApp from the IDE again; within a few seconds you should see delays appearing.

Analytics Demo

  • Go to the Jupyter notebook, open live-demo.ipynb and verify that the URL returns 0 entries.

  • Implement delay ratio task in delays.java.stream.task.DelayRatioTask class:

Map<Integer, Long> totalPerHour = cache.values().stream()
      .collect(
            serialize(() -> Collectors.groupingBy(
                  e -> getHourOfDay(e.departureTs),
                  Collectors.counting()
            )));

Map<Integer, Long> delayedPerHour = cache.values().stream()
      .filter(e -> e.delayMin > 0)
      .collect(
            serialize(() -> Collectors.groupingBy(
                  e -> getHourOfDay(e.departureTs),
                  Collectors.counting()
            )));
  • Recompile and redeploy the server task:
cd analytics
mvn clean package -pl analytics-server
yes | cp analytics-server/target/analytics-server-1.0-SNAPSHOT.jar ../datagrid/target/analytics-server.jar
  • Go to the Jupyter notebook and run each cell of live-demo.ipynb again. The value of analytics.size should be 48, and the hour with the biggest ratio of delayed trains should be 2am.
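The two groupingBy/counting passes above feed the delayed/total ratio per hour that the notebook plots. The sketch below shows that computation self-contained: Stop is a hypothetical simplified record standing in for the cached entries, getHourOfDay here assumes UTC (the real getHourOfDay may use a local timezone), and the Infinispan serialize() wrapper is dropped since this runs as a plain local stream rather than a distributed server task.

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

public class DelayRatioSketch {
    // Hypothetical simplified stand-in for the cached station-board entries.
    record Stop(long departureTs, long delayMin) {}

    // Assumption: hour extracted in UTC; the demo's helper may differ.
    static int getHourOfDay(long ts) {
        return Instant.ofEpochMilli(ts).atZone(ZoneOffset.UTC).getHour();
    }

    // Same two passes as DelayRatioTask (total per hour, delayed per hour),
    // then combined into a per-hour ratio of delayed trains.
    static Map<Integer, Double> delayRatioPerHour(List<Stop> stops) {
        Map<Integer, Long> totalPerHour = stops.stream()
                .collect(Collectors.groupingBy(
                        s -> getHourOfDay(s.departureTs()),
                        Collectors.counting()));
        Map<Integer, Long> delayedPerHour = stops.stream()
                .filter(s -> s.delayMin() > 0)
                .collect(Collectors.groupingBy(
                        s -> getHourOfDay(s.departureTs()),
                        Collectors.counting()));
        Map<Integer, Double> ratio = new TreeMap<>();
        totalPerHour.forEach((hour, total) ->
                ratio.put(hour, delayedPerHour.getOrDefault(hour, 0L) / (double) total));
        return ratio;
    }

    public static void main(String[] args) {
        // Two departures at 02:00 UTC (one delayed), one at 03:00 UTC (on time).
        System.out.println(delayRatioPerHour(List.of(
                new Stop(7_200_000L, 5),
                new Stop(7_200_000L, 0),
                new Stop(10_800_000L, 0))));
    }
}
```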