Commit d06d5fd
Deployed 35de29b with MkDocs version: 1.4.3
cbyrohl committed Aug 14, 2023
1 parent ee9497d commit d06d5fd
Showing 9 changed files with 2,267 additions and 764 deletions.
2,488 changes: 1,849 additions & 639 deletions api_docs/index.html

Large diffs are not rendered by default.

27 changes: 20 additions & 7 deletions halocatalogs/index.html
@@ -629,11 +629,11 @@
</li>

<li class="md-nav__item">
<a href="#operations-on-particle-data-for-all-groups" class="md-nav__link">
Operations on particle data for all groups
<a href="#applying-to-all-groups-in-parallel" class="md-nav__link">
Applying to all groups in parallel
</a>

<nav class="md-nav" aria-label="Operations on particle data for all groups">
<nav class="md-nav" aria-label="Applying to all groups in parallel">
<ul class="md-nav__list">

<li class="md-nav__item">
@@ -789,11 +789,11 @@
</li>

<li class="md-nav__item">
<a href="#operations-on-particle-data-for-all-groups" class="md-nav__link">
Operations on particle data for all groups
<a href="#applying-to-all-groups-in-parallel" class="md-nav__link">
Applying to all groups in parallel
</a>

<nav class="md-nav" aria-label="Operations on particle data for all groups">
<nav class="md-nav" aria-label="Applying to all groups in parallel">
<ul class="md-nav__list">

<li class="md-nav__item">
@@ -906,8 +906,21 @@ <h3 id="query-all-particles-belonging-to-some-group">Query all particles belonging to some group</h3>
<div class="highlight"><pre><span></span><code><span class="n">data</span> <span class="o">=</span> <span class="n">ds</span><span class="o">.</span><span class="n">return_data</span><span class="p">(</span><span class="n">haloID</span><span class="o">=</span><span class="mi">42</span><span class="p">)</span>
</code></pre></div>
<p><em>data</em> will have the same structure as <em>ds.data</em> but restricted to particles of a given group.</p>
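<p>The returned container can then be used like the full dataset; for instance (a small sketch reusing the gas particle fields from the examples below):</p>
<div class="highlight"><pre><code># total gas mass of the particles bound to halo 42
data["PartType0"]["Masses"].sum().compute()
</code></pre></div>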
<h3 id="operations-on-particle-data-for-all-groups">Operations on particle data for all groups</h3>
<h3 id="applying-to-all-groups-in-parallel">Applying to all groups in parallel</h3>
<p>In many cases, we do not want the particle data of an individual group, but we want to calculate some reduced statistic from the bound particles of each group. For this, we provide the <em>grouped</em> functionality. In the following we give a range of examples of its use.</p>
<details class="warning" open="open">
<summary>Warning</summary>
<p>Executing the following commands can be demanding on compute resources and memory.
Usually, one wants to restrict the set of groups to run on. You can specify "nmax"
to limit the maximum halo ID up to which to evaluate; since halos are ordered by mass
in descending order, this is usually desired in any case. For more fine-grained control,
you can also pass an explicit list of halo IDs via the "idxlist" keyword.
Both keywords should be passed to the "evaluate" call.</p>
</details>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>By default, operations are done for halos. By passing <code>objtype="subhalo"</code> to the
<code>grouped</code> call, the operation is done on subhalos instead.</p>
</div>
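<p>As a minimal sketch (the particular values for <code>nmax</code> and <code>idxlist</code>, and combining them with <code>compute=True</code>, are illustrative assumptions), restricting the reductions shown below to a subset of groups could look like this:</p>
<div class="highlight"><pre><code># only evaluate halos with ID below 100, i.e. the 100 most massive halos
mass100 = ds.grouped("Masses", parttype="PartType0").sum().evaluate(nmax=100, compute=True)

# explicitly select groups by ID, operating on subhalos instead of halos
mass_sel = ds.grouped("Masses", parttype="PartType0", objtype="subhalo").sum().evaluate(idxlist=[1, 42, 100], compute=True)
</code></pre></div>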
<h4 id="baryon-mass">Baryon mass</h4>
<p>Let's say we want to calculate the baryon mass for each halo from the particles.</p>
<div class="highlight"><pre><span></span><code><span class="n">mass</span> <span class="o">=</span> <span class="n">ds</span><span class="o">.</span><span class="n">grouped</span><span class="p">(</span><span class="s2">"Masses"</span><span class="p">,</span> <span class="n">parttype</span><span class="o">=</span><span class="s2">"PartType0"</span><span class="p">)</span><span class="o">.</span><span class="n">sum</span><span class="p">()</span><span class="o">.</span><span class="n">evaluate</span><span class="p">(</span><span class="n">compute</span><span class="o">=</span><span class="kc">True</span><span class="p">)</span>
Expand Down
138 changes: 130 additions & 8 deletions largedatasets/index.html
@@ -482,10 +482,68 @@



<label class="md-nav__link md-nav__link--active" for="__toc">
Large datasets
<span class="md-nav__icon md-icon"></span>
</label>

<a href="./" class="md-nav__link md-nav__link--active">
Large datasets
</a>



<nav class="md-nav md-nav--secondary" aria-label="Table of contents">






<label class="md-nav__title" for="__toc">
<span class="md-nav__icon md-icon"></span>
Table of contents
</label>
<ul class="md-nav__list" data-md-component="toc" data-md-scrollfix>

<li class="md-nav__item">
<a href="#starting-simple-computing-in-chunks" class="md-nav__link">
Starting simple: computing in chunks
</a>

</li>

<li class="md-nav__item">
<a href="#more-advanced-computing-in-parallel" class="md-nav__link">
More advanced: computing in parallel
</a>

<nav class="md-nav" aria-label="More advanced: computing in parallel">
<ul class="md-nav__list">

<li class="md-nav__item">
<a href="#running-a-localcluster" class="md-nav__link">
Running a LocalCluster
</a>

</li>

<li class="md-nav__item">
<a href="#running-a-slurmcluster" class="md-nav__link">
Running a SLURMCluster
</a>

</li>

</ul>
</nav>

</li>

</ul>

</nav>

</li>


@@ -638,6 +696,48 @@



<label class="md-nav__title" for="__toc">
<span class="md-nav__icon md-icon"></span>
Table of contents
</label>
<ul class="md-nav__list" data-md-component="toc" data-md-scrollfix>

<li class="md-nav__item">
<a href="#starting-simple-computing-in-chunks" class="md-nav__link">
Starting simple: computing in chunks
</a>

</li>

<li class="md-nav__item">
<a href="#more-advanced-computing-in-parallel" class="md-nav__link">
More advanced: computing in parallel
</a>

<nav class="md-nav" aria-label="More advanced: computing in parallel">
<ul class="md-nav__list">

<li class="md-nav__item">
<a href="#running-a-localcluster" class="md-nav__link">
Running a LocalCluster
</a>

</li>

<li class="md-nav__item">
<a href="#running-a-slurmcluster" class="md-nav__link">
Running a SLURMCluster
</a>

</li>

</ul>
</nav>

</li>

</ul>

</nav>
</div>
</div>
@@ -658,9 +758,9 @@

<div><h1 id="handling-large-data-sets">Handling Large Data Sets</h1>
<p>Until now, we have applied our framework to a very small simulation.
However, what if we are working with a very large data set
(like the TNG50-1 cosmological simulation, which has <span class="arithmatex">\(2160^3\)</span> particles, <span class="arithmatex">\(512\)</span> times more than TNG50-4)?</p>
<h1 id="starting-simple-computing-in-chunks">Starting simple: computing in chunks</h1>
<h2 id="starting-simple-computing-in-chunks">Starting simple: computing in chunks</h2>
<p>First, we can still run the same calculation as above, and it will "just work" (hopefully).</p>
<p>This is because Dask provides versions of many common algorithms and functions
that work on "blocks" or "chunks" of the data, splitting the large array into smaller arrays.
@@ -683,15 +783 @@
<span class="go">52722.6796875 code_mass</span>
</code></pre></div>
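<p>A minimal sketch of such a chunked reduction (assuming the same snapshot alias and particle fields as in the SLURMCluster example further below):</p>
<div class="highlight"><pre><code>>>> from scida import load
>>> ds = load("TNG50_snapshot")
>>> ds.data["PartType0"]["Masses"].sum().compute()  # dask evaluates the sum chunk by chunk
</code></pre></div>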
<h2 id="more-advanced-computing-in-parallel">More advanced: computing in parallel</h2>
<p>Rather than sequentially calculating large tasks, we can also run the computation in parallel.</p>
<p>To do so, different advanced dask schedulers are available.
Here, we use the most straightforward <a href="https://docs.dask.org/en/latest/how-to/deploy-dask/single-distributed.html">distributed scheduler</a>.</p>
<h3 id="running-a-localcluster">Running a LocalCluster</h3>
<p>Usually, we would start a scheduler and then connect new workers (e.g. running on multiple compute/backend nodes of an HPC cluster).
Afterwards, tasks (either interactive or scripted) can leverage the power of these connected resources.</p>
<p>For this example, we will use the same "distributed" scheduler/API, but keep things simple by using just the one (local) node we are currently running on.</p>
<p>While the result is eventually computed, it is a bit slow, primarily because the actual reading of the data off disk is the limiting factor, and we can only use resources available on our local machine.</p>
<div class="highlight"><pre><span></span><code><span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">dask.distributed</span> <span class="kn">import</span> <span class="n">Client</span><span class="p">,</span> <span class="n">LocalCluster</span>
<span class="gp">&gt;&gt;&gt; </span><span class="n">cluster</span> <span class="o">=</span> <span class="n">LocalCluster</span><span class="p">(</span><span class="n">n_workers</span><span class="o">=</span><span class="mi">16</span><span class="p">,</span> <span class="n">threads_per_worker</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span>
<span class="gp">&gt;&gt;&gt; </span><span class="n">cluster</span> <span class="o">=</span> <span class="n">LocalCluster</span><span class="p">(</span><span class="n">n_workers</span><span class="o">=</span><span class="mi">16</span><span class="p">,</span> <span class="n">threads_per_worker</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span>
<span class="go"> dashboard_address=":8787")</span>
<span class="gp">&gt;&gt;&gt; </span><span class="n">client</span> <span class="o">=</span> <span class="n">Client</span><span class="p">(</span><span class="n">cluster</span><span class="p">)</span>
<span class="gp">&gt;&gt;&gt; </span><span class="n">client</span>
@@ -706,15 +807,36 @@

<span class="go">52722.6796875 code_mass</span>
</code></pre></div>
<p>The progress bar we could use with the default scheduler (before initializing <code>LocalCluster</code>)
is unavailable for the distributed scheduler.
However, we can still view the progress of this task as it executes using its status dashboard
(as a webpage in a new browser tab or within <a href="https://github.com/dask/dask-labextension">jupyter lab</a>).
You can find it by clicking on the "Dashboard" link above.
If running this notebook server remotely, e.g. on a login node of an HPC cluster,
you may have to change the '127.0.0.1' part of the address to the respective machine name/IP.</p>
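<p>The dashboard URL can also be read off the client object directly; the snippet below is a small sketch using the <code>dashboard_link</code> attribute of <code>dask.distributed.Client</code> (the printed address depends on the <code>dashboard_address</code> configured above and the machine you run on):</p>
<div class="highlight"><pre><code>>>> client.dashboard_link
'http://127.0.0.1:8787/status'
</code></pre></div>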
<h3 id="running-a-slurmcluster">Running a SLURMCluster</h3>
<p>If you are working with HPC resources, such as compute clusters with common schedulers (e.g. SLURM),
check out <a href="https://jobqueue.dask.org/en/latest/">Dask-Jobqueue</a> to automatically submit batch jobs that spawn dask workers.</p>
<p>Below is an example using the SLURMCluster.
We configure the job and node resources before submitting the job via the <code>scale()</code> method.</p>
<div class="highlight"><pre><span></span><code><span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">dask.distributed</span> <span class="kn">import</span> <span class="n">Client</span>
<span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">dask_jobqueue</span> <span class="kn">import</span> <span class="n">SLURMCluster</span>
<span class="gp">&gt;&gt;&gt; </span><span class="n">cluster</span> <span class="o">=</span> <span class="n">SLURMCluster</span><span class="p">(</span><span class="n">queue</span><span class="o">=</span><span class="s1">'p.large'</span><span class="p">,</span> <span class="n">cores</span><span class="o">=</span><span class="mi">72</span><span class="p">,</span> <span class="n">memory</span><span class="o">=</span><span class="s2">"500 GB"</span><span class="p">,</span>
<span class="gp">&gt;&gt;&gt; </span> <span class="n">processes</span><span class="o">=</span><span class="mi">36</span><span class="p">,</span>
<span class="gp">&gt;&gt;&gt; </span> <span class="n">scheduler_options</span><span class="o">=</span><span class="p">{</span><span class="s2">"dashboard_address"</span><span class="p">:</span> <span class="s2">":8811"</span><span class="p">})</span>
<span class="gp">&gt;&gt;&gt; </span><span class="n">cluster</span><span class="o">.</span><span class="n">scale</span><span class="p">(</span><span class="n">jobs</span><span class="o">=</span><span class="mi">1</span><span class="p">)</span> <span class="c1"># submit 1 job for 1 node</span>
<span class="gp">&gt;&gt;&gt; </span><span class="n">client</span> <span class="o">=</span> <span class="n">Client</span><span class="p">(</span><span class="n">cluster</span><span class="p">)</span>

<span class="gp">&gt;&gt;&gt; </span><span class="kn">from</span> <span class="nn">scida</span> <span class="kn">import</span> <span class="n">load</span>
<span class="gp">&gt;&gt;&gt; </span><span class="n">ds</span> <span class="o">=</span> <span class="n">load</span><span class="p">(</span><span class="s2">"TNG50_snapshot"</span><span class="p">)</span>
<span class="gp">&gt;&gt;&gt; </span><span class="o">%</span><span class="n">time</span> <span class="n">ds</span><span class="o">.</span><span class="n">data</span><span class="p">[</span><span class="s2">"PartType0"</span><span class="p">][</span><span class="s2">"Masses"</span><span class="p">]</span><span class="o">.</span><span class="n">sum</span><span class="p">()</span><span class="o">.</span><span class="n">compute</span><span class="p">()</span>
<span class="go">CPU times: user 1.27 s, sys: 152 ms, total: 1.43 s</span>
<span class="go">Wall time: 21.4 s</span>
<span class="gp">&gt;&gt;&gt; </span><span class="n">client</span><span class="o">.</span><span class="n">shutdown</span><span class="p">()</span>
</code></pre></div>
<p>The SLURM job will be terminated by invoking <code>client.shutdown()</code>, or when the spawning Python process or IPython kernel dies.
Make sure to handle exceptions properly, particularly in active jupyter notebooks, as allocated nodes might otherwise
sit idle and not be cleaned up.</p>
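<p>A simple pattern to guard against this (a minimal sketch building on the objects defined above, not part of the scida documentation) is to wrap the computation so the cluster is shut down even if an exception is raised:</p>
<div class="highlight"><pre><code>>>> try:
...     total = ds.data["PartType0"]["Masses"].sum().compute()
... finally:
...     client.shutdown()  # release the SLURM allocation even on errors
</code></pre></div>
</div>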


