Ska3 runtime environment for users
The Ska3 runtime environment consists of an integrated suite of Python-3 packages and associated applications. The installation and update of these packages is done using the Anaconda Python distribution and the Conda package manager. This allows simple installation on linux machines or Mac laptops. Similar support for Windows is under consideration.
For discussion about motivation for Python 3, new features, and converting from Python 2.7 to Python 3, see:
Power users note: on the HEAD linux network there can be a substantial improvement in package import
times if you install and maintain your own Ska3 installation on a local disk. The production
installation on /proj/sot
is served via a remote NetApp which can be quite slow.
The basis for using and managing Ska3 is the Conda package manager. So this needs to be installed before anything else can be done. If you already have the conda package manager installed then you can skip these steps and go to the Create Ska3 environment section.
The steps for creating a new Anaconda environment are as follows:
- Do one of the following from the command line (depending on your OS):

% curl -O https://repo.continuum.io/miniconda/Miniconda3-4.3.21-Linux-x86_64.sh     # Linux
% curl -O https://repo.continuum.io/miniconda/Miniconda3-4.3.21-MacOSX-x86_64.sh    # MacOSX

(For GRETA, the miniconda installer has been sync'd into /proj/sot/ska3/conda-builds.)
- Open a terminal window and change to the downloads directory for your browser.
- The installer is called something like Miniconda3-<version>-<OS>.sh. Find it and then enter the following (without the %, which is used as the linux prompt in all these examples):

% bash Miniconda3-4.3.21-MacOSX-x86_64.sh

- Hit ENTER, then agree to the license terms, and finally specify the installation directory. This will be created by the installer and needs to be a directory that doesn't currently exist. A fine choice is the default of ~/miniconda3 (which is what is assumed for the rest of the installation), but you may want to install to a different directory depending on the details of your system.
- At the end it asks about updating your .bashrc or .bash_profile file. If you mostly use Python for Chandra work and want to make things easy, then say yes.
- Now create a new terminal window. If you answered no above about updating your bash configuration file then you need to do one of the following:
# FOR csh / tcsh
% set path=(${HOME}/miniconda3 $path)
% rehash
# For bash
% export PATH=$HOME/miniconda3:$PATH
% hash -r
- This gives you a minimal working Python along with the package manager utility conda. Check that this is working by doing:
% which python
--> shows the expected path, e.g. $HOME/miniconda3/bin/python
To install and maintain Ska3 environments there is a one-time configuration setup that is
required. This adds an environment variable ska3conda
to your bash or csh environment
to facilitate access to the Ska3 conda package repository via the https://icxc.cfa.harvard.edu
server. To access this you must be on the HEAD network or SI VPN or OCC VPN.
Add the following line to your bash or csh / tcsh initialization script (typically
~/.bash_profile
or ~/.bashrc
or ~/.cshrc
):
# FOR csh / tcsh
setenv ska3conda https://icxc.cfa.harvard.edu/aspect/ska3-conda
# For bash
export ska3conda=https://icxc.cfa.harvard.edu/aspect/ska3-conda
With this in place, all conda commands where you need access to the Ska3 conda
package repository should be done as conda <cmd> --channel $ska3conda ...
As an alternative you can put the https://icxc.cfa.harvard.edu/aspect/ska3-conda
URL into your .condarc
channels list. The downside here is that this applies
globally (whether or not you are in a Ska3 environment), and conda
can hang unexpectedly
on that URL if https://icxc
is not accessible.
Note that if your goal is to make a Ska3 environment on CentOS5, 6 or 7 that is identical to the official /proj/sot/ska3/flight environment, that is only guaranteed at this time by explicitly setting your .condarc
to be just the ska3-conda repositories (nothing from continuum or conda-forge etc).
channels:
- https://icxc.cfa.harvard.edu/aspect/ska3-conda/
- https://icxc.cfa.harvard.edu/aspect/ska3-conda/core-pkg-repo/
If using this method with the explicit list of repositories in .condarc, the
--channel $ska3conda or -c $ska3conda would be omitted from the
conda create commands below.
Now you can install all the Ska3 packages and required dependencies to an environment
named ska3
:
% conda create --channel $ska3conda --name ska3 ska3-flight
You could also use the short-form command line arguments:
% conda create -c $ska3conda -n ska3 ska3-flight
After you see the list of packages, accept, and see that the install finished. Then you can get into the Ska3 environment (activate only works with bash at this time, so you'll need to be in that shell to use this method):
% source activate ska3
It's done and now you have a ska3
environment! The name ska3
is not magic, and you
could have called it my-ska
or whatever you want.
If you make a new shell window then you need to source activate ska3
to get into Ska3
again.
tcsh is not supported as a shell for working with the ska3 conda environment: activate
does not work to switch into the environment, and conda list and conda install
may operate on the root environment instead of the ska3 env that has been created.
For some limited and unsupported functionality, tcsh users can instead put the ska3 conda env bin directory at the front of PATH and set the SKA environment variable (see the Ska data section below for actually getting ska data into the $SKA/data directory):
% setenv PATH ${HOME}/miniconda3/envs/ska3/bin:${PATH}
% setenv SKA ${HOME}/ska
Congratulations, you should now have all the Ska3 packages installed!
As a quick smoke test that things are working, confirm that the following works and makes a line plot:
% ipython --matplotlib
>>> import matplotlib.pyplot as plt
>>> from Chandra.Time import DateTime
>>> import ska_path
>>>
>>> plt.plot([1, 2])
>>> DateTime(100).date
'1998:001:00:00:36.816'
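The DateTime(100).date result can be sanity-checked by hand: Chandra.Time counts seconds from the 1998.0 (Terrestrial Time) epoch and reports dates in UTC. A stdlib-only sketch of that arithmetic is below; the TT-UTC offset is hard-coded for 1998 and this is an illustration of the conversion, not the Chandra.Time implementation.

```python
from datetime import datetime, timedelta

# Illustration only: DateTime(100) means 100 seconds after the 1998.0 TT epoch.
# In 1998, TT - UTC = 32.184 s + 31 accumulated leap seconds = 63.184 s.
epoch_tt = datetime(1998, 1, 1)
utc = epoch_tt + timedelta(seconds=100 - 63.184)
date = utc.strftime("%Y:%j:%H:%M:%S") + f".{utc.microsecond // 1000:03d}"
print(date)  # 1998:001:00:00:36.816
```

This reproduces the '1998:001:00:00:36.816' value shown in the smoke test above.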
To make full use of the Ska environment you need various package data, e.g.:
- Ska engineering archive
- Kadi databases
- Commanded states databases
- AGASC star catalog
- Telemetry database (MSIDs)
The very first thing to do is define the disk location where you want to store the Ska data. Packages and applications in the Ska runtime environment use an environment variable SKA to define the root directory of data and other shared resources. On the HEAD or GRETA networks this root is the familiar /proj/sot/ska3/flight.
For a machine (e.g. your laptop) not on the HEAD or GRETA networks you need to make a directory that will hold the Ska data. A reasonable choice is putting this in a ska directory in your home directory:
% mkdir ~/ska
% setenv SKA ${HOME}/ska # csh / tcsh
% export SKA=${HOME}/ska # bash
If you have made a local Ska environment on a machine on the HEAD or GRETA networks, you can just set the SKA environment variable to point at the existing root:
% setenv SKA /proj/sot/ska # csh / tcsh
% export SKA=/proj/sot/ska # bash
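Either way, Ska packages locate shared data through this variable. A minimal sketch of the lookup pattern is below; it is illustrative only (not actual Ska library code), with the HEAD/GRETA production root used as an assumed fallback.

```python
import os
from pathlib import Path

# Illustrative only: Ska packages resolve shared data under $SKA,
# falling back here to the HEAD/GRETA production root if SKA is unset.
ska_root = Path(os.environ.get("SKA", "/proj/sot/ska"))
eng_archive = ska_root / "data" / "eng_archive"
print(eng_archive)
```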
For a machine not on the HEAD or GRETA networks you need to get at least a subset of the data onto your machine. FOT users may want to start up MATLAB, which will run a task to update the Ska data used by those tools. Other users, or FOT users with different data needs, will use a script called ska_sync that is installed as part of Ska. This assumes you have set the SKA environment variable and created a directory as shown above.
The first thing is to try running it and getting the online help:
% ska_sync --help
usage: ska_sync [-h] [--user USER] [--install] [--force]
Synchronize data files for Ska runtime environment
Arguments
=========
optional arguments:
-h, --help show this help message and exit
--user USER User name for remote host
--install Install sync config file in default location
--force Force overwrite of sync config file
Next you need to install a copy of the template configuration file into your Ska root directory. This will let you customize which data you want and how to get it. This installation is done with:
% echo $SKA # Check the SKA root directory
/Users/aldcroft/ska
% ska_sync --install
Wrote ska sync config file to /Users/aldcroft/ska/ska_sync_config
Now you need to edit the file it just created and set the remote host for getting data and the remote host user name. Choose either kadi.cfa.harvard.edu (HEAD) or the IP address for chimchim (OCC) and edit accordingly. Likewise put in the corresponding user name:
# Host machine to supply Ska data (could also be chimchim but kadi works
# from OCC VPN just as well).
host: kadi.cfa.harvard.edu
# Remote host user name. Default is local user name.
# user: name
If you want Ska engineering archive access then follow the instructions in Syncing Ska engineering archive data.
Finally do the sync step:
% ska_sync
Loaded config from /Users/aldcroft/ska/ska_sync_config
This prints something like below:
COPY and PASTE the following at your terminal command line prompt:
rsync -arzv --progress --size-only --files-from="/Users/aldcroft/ska/ska_sync_files" \
aldcroft@kadi.cfa.harvard.edu:/proj/sot/ska/ "/Users/aldcroft/ska/"
As instructed copy and paste the rsync line, after which you will need to enter your password. At this point it will sync the relevant Ska data files into your local Ska root.
As long as you don't change your config file, you can just re-run that same command to re-sync as needed.
To test that the data are really there make sure you can reproduce the following:
% ipython --matplotlib
>>> from Ska.tdb import msids
>>> msids.find('tephin')
[<MsidView msid="TEPHIN" technical_name="EPHIN SENSOR HOUSING TEMP">]
>>> from kadi import events
>>> events.normal_suns.filter('2014:001')
<NormalSun: start=2014:207:07:04:09.331 dur=65207>
>>> from Chandra.cmd_states import fetch_states
>>> fetch_states('2011:100', '2011:101', vals=['obsid'])
...
SOME WARNINGS WHICH ARE OK and will get patched up later
...
[ ('2011:100:11:53:12.378', '2011:101:00:26:01.434', 418823658.562, 418868827.618, 13255)
('2011:101:00:26:01.434', '2011:102:13:39:07.421', 418868827.618, 419002813.605, 12878)]
Syncing Ska engineering data to your standalone installation (laptop) is feasible. Circa mid-2019, the archive was about 170 GB. Copying the entire archive over a VPN connection will take a while, but it has been done. Once you have the whole archive on local disk, it typically takes 2-3 hours to bring up to date if you do this once a week.
However, the preferred practice at this point is to sync only the MSIDs or content types that you need. To do this read on.
The archive files that you need are divided by so-called "content type", which is the CXCDS way of splitting telemetry into parts that are in the same subsystem and are sampled at the same rate. The available content types are shown below. This splitting is convenient because you can sync only particular subsystems, e.g. HRC data, without worrying about PCAD telemetry.
kadi$ ls $SKA/data/eng_archive/data
acis2eng/ dp_acispow128/ eps6eng/ obc3eng/ prop2eng/
acis3eng/ dp_eps16/ eps7eng/ obc4eng/ sim1eng/
acis4eng/ dp_eps8/ eps9eng/ obc5eng/ sim21eng/
acisdeahk/ dp_orbit1280/ hrc0hk/ orbitephem0/ sim2eng/
angleephem/ dp_pcad1/ hrc0ss/ orbitephem1/ sim3eng/
ccdm10eng/ dp_pcad16/ hrc2eng/ pcad10eng/ simcoor/
ccdm11eng/ dp_pcad32/ hrc4eng/ pcad11eng/ simdiag/
ccdm12eng/ dp_pcad4/ hrc5eng/ pcad12eng/ sim_mrg/
ccdm13eng/ dp_thermal1/ hrc7eng/ pcad13eng/ sms1eng/
ccdm14eng/ dp_thermal128/ lunarephem0/ pcad14eng/ sms2eng/
ccdm15eng/ ephhk/ lunarephem1/ pcad15eng/ solarephem0/
ccdm1eng/ ephin1eng/ misc1eng/ pcad3eng/ solarephem1/
ccdm2eng/ ephin2eng/ misc2eng/ pcad4eng/ tel1eng/
ccdm3eng/ eps10eng/ misc3eng/ pcad5eng/ tel2eng/
ccdm4eng/ eps1eng/ misc4eng/ pcad6eng/ tel3eng/
ccdm5eng/ eps2eng/ misc5eng/ pcad7eng/ thm1eng/
ccdm7eng/ eps3eng/ misc6eng/ pcad8eng/ thm2eng/
ccdm8eng/ eps4eng/ misc7eng/ pcad9eng/ thm3eng/
cpe1eng/ eps5eng/ misc8eng/ prop1eng/
If you are interested in a particular MSID, you can use fetch
to
determine the corresponding content type:
In [1]: from Ska.engarchive import fetch
In [2]: fetch.content['1WRAT']
Out[2]: 'acis4eng'
In [3]: fetch.content['DP_ROLL']
Out[3]: 'dp_pcad4'
You can reverse this to find all the MSIDs in a content type:
In [5]: [msid for msid in fetch.content if fetch.content[msid] == 'acis4eng']
Out[5]:
['1PIN1AT',
'1SSMYT',
'1SSPYT',
'1VAHCAT',
'1VAHCBT',
'1VAHOAT',
'1VAHOBT',
'1WRAT',
'1WRBT']
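If you need this reverse lookup repeatedly, it is cheaper to invert the mapping once. The sketch below uses a small toy stand-in for fetch.content, since the real mapping comes from Ska.engarchive and requires the Ska environment:

```python
from collections import defaultdict

# Toy stand-in for Ska.engarchive's fetch.content (MSID -> content type).
content = {"1WRAT": "acis4eng", "1WRBT": "acis4eng", "DP_ROLL": "dp_pcad4"}

# Invert the mapping once so each content-type lookup is a dict access,
# instead of scanning all MSIDs per query as in the list comprehension above.
by_content = defaultdict(list)
for msid, ctype in content.items():
    by_content[ctype].append(msid)

print(sorted(by_content["acis4eng"]))  # ['1WRAT', '1WRBT']
```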
The example below shows how you can set up your ska_sync_config
file
to pull in various telemetry sets, ranging from everything in a content
type down to individual MSIDs.
# Example configuration to sync Ska engineering archive data
eng_archive:
# All HRC telemetry (full-resolution, 5-min, daily)
- data/hrc0hk
- data/hrc0ss
- data/hrc2eng
- data/hrc4eng
- data/hrc5eng
- data/hrc7eng
# Full resolution orbitephem0_{x,y,z}
- data/orbitephem0/colnames.pickle # Always needed for each content type
- data/orbitephem0/archfiles.db3 # Required for full-res data for content
- data/orbitephem0/TIME.h5 # Required for full-res data
- data/orbitephem0/ORBITEPHEM0_X.h5 # Individual MSID data file
- data/orbitephem0/ORBITEPHEM0_Y.h5
- data/orbitephem0/ORBITEPHEM0_Z.h5
# Just the 5-minute roll and pitch derived parameters
- data/dp_pcad4/colnames.pickle # Needed for content type
- data/dp_pcad4/5min/DP_ROLL.h5
- data/dp_pcad4/5min/DP_PITCH.h5
# Just 5-minute and daily AACCCDPT
- data/pcad5eng/5min/colnames.pickle
- data/pcad5eng/5min/AACCCDPT.h5
- data/pcad5eng/daily/AACCCDPT.h5
kadi$ du -sh $SKA/data/eng_archive/data/*
824M /proj/sot/ska/data/eng_archive/data/acis2eng
207M /proj/sot/ska/data/eng_archive/data/acis3eng
202M /proj/sot/ska/data/eng_archive/data/acis4eng
901M /proj/sot/ska/data/eng_archive/data/acisdeahk
680M /proj/sot/ska/data/eng_archive/data/angleephem
219M /proj/sot/ska/data/eng_archive/data/ccdm10eng
2.1G /proj/sot/ska/data/eng_archive/data/ccdm11eng
662M /proj/sot/ska/data/eng_archive/data/ccdm12eng
337M /proj/sot/ska/data/eng_archive/data/ccdm13eng
6.4M /proj/sot/ska/data/eng_archive/data/ccdm14eng
16M /proj/sot/ska/data/eng_archive/data/ccdm15eng
397M /proj/sot/ska/data/eng_archive/data/ccdm1eng
359M /proj/sot/ska/data/eng_archive/data/ccdm2eng
123M /proj/sot/ska/data/eng_archive/data/ccdm3eng
3.9G /proj/sot/ska/data/eng_archive/data/ccdm4eng
365M /proj/sot/ska/data/eng_archive/data/ccdm5eng
518M /proj/sot/ska/data/eng_archive/data/ccdm7eng
561M /proj/sot/ska/data/eng_archive/data/ccdm8eng
404M /proj/sot/ska/data/eng_archive/data/cpe1eng
240M /proj/sot/ska/data/eng_archive/data/dp_acispow128
863M /proj/sot/ska/data/eng_archive/data/dp_eps16
147M /proj/sot/ska/data/eng_archive/data/dp_eps8
357M /proj/sot/ska/data/eng_archive/data/dp_orbit1280
554M /proj/sot/ska/data/eng_archive/data/dp_pcad1
394M /proj/sot/ska/data/eng_archive/data/dp_pcad16
920M /proj/sot/ska/data/eng_archive/data/dp_pcad32
5.5G /proj/sot/ska/data/eng_archive/data/dp_pcad4
17G /proj/sot/ska/data/eng_archive/data/dp_thermal1
972M /proj/sot/ska/data/eng_archive/data/dp_thermal128
440M /proj/sot/ska/data/eng_archive/data/ephhk
50M /proj/sot/ska/data/eng_archive/data/ephin1eng
75M /proj/sot/ska/data/eng_archive/data/ephin2eng
2.7G /proj/sot/ska/data/eng_archive/data/eps10eng
667M /proj/sot/ska/data/eng_archive/data/eps1eng
191M /proj/sot/ska/data/eng_archive/data/eps2eng
625M /proj/sot/ska/data/eng_archive/data/eps3eng
181M /proj/sot/ska/data/eng_archive/data/eps4eng
104M /proj/sot/ska/data/eng_archive/data/eps5eng
622M /proj/sot/ska/data/eng_archive/data/eps6eng
435M /proj/sot/ska/data/eng_archive/data/eps7eng
4.8G /proj/sot/ska/data/eng_archive/data/eps9eng
151M /proj/sot/ska/data/eng_archive/data/hrc0hk
329M /proj/sot/ska/data/eng_archive/data/hrc0ss
108M /proj/sot/ska/data/eng_archive/data/hrc2eng
348M /proj/sot/ska/data/eng_archive/data/hrc4eng
94M /proj/sot/ska/data/eng_archive/data/hrc5eng
5.1M /proj/sot/ska/data/eng_archive/data/hrc7eng
1.5G /proj/sot/ska/data/eng_archive/data/lunarephem0
614M /proj/sot/ska/data/eng_archive/data/lunarephem1
237M /proj/sot/ska/data/eng_archive/data/misc1eng
307M /proj/sot/ska/data/eng_archive/data/misc2eng
112M /proj/sot/ska/data/eng_archive/data/misc3eng
556M /proj/sot/ska/data/eng_archive/data/misc4eng
76M /proj/sot/ska/data/eng_archive/data/misc5eng
99M /proj/sot/ska/data/eng_archive/data/misc6eng
217M /proj/sot/ska/data/eng_archive/data/misc7eng
803M /proj/sot/ska/data/eng_archive/data/misc8eng
224M /proj/sot/ska/data/eng_archive/data/obc3eng
1.6G /proj/sot/ska/data/eng_archive/data/obc4eng
303M /proj/sot/ska/data/eng_archive/data/obc5eng
1.6G /proj/sot/ska/data/eng_archive/data/orbitephem0
645M /proj/sot/ska/data/eng_archive/data/orbitephem1
126M /proj/sot/ska/data/eng_archive/data/pcad10eng
67M /proj/sot/ska/data/eng_archive/data/pcad11eng
569M /proj/sot/ska/data/eng_archive/data/pcad12eng
793M /proj/sot/ska/data/eng_archive/data/pcad13eng
239M /proj/sot/ska/data/eng_archive/data/pcad14eng
6.8G /proj/sot/ska/data/eng_archive/data/pcad15eng
55G /proj/sot/ska/data/eng_archive/data/pcad3eng
357M /proj/sot/ska/data/eng_archive/data/pcad4eng
736M /proj/sot/ska/data/eng_archive/data/pcad5eng
669M /proj/sot/ska/data/eng_archive/data/pcad6eng
20G /proj/sot/ska/data/eng_archive/data/pcad7eng
26G /proj/sot/ska/data/eng_archive/data/pcad8eng
8.8M /proj/sot/ska/data/eng_archive/data/pcad9eng
978M /proj/sot/ska/data/eng_archive/data/prop1eng
524M /proj/sot/ska/data/eng_archive/data/prop2eng
103M /proj/sot/ska/data/eng_archive/data/sim1eng
51M /proj/sot/ska/data/eng_archive/data/sim21eng
50M /proj/sot/ska/data/eng_archive/data/sim2eng
258M /proj/sot/ska/data/eng_archive/data/sim3eng
54M /proj/sot/ska/data/eng_archive/data/simcoor
106M /proj/sot/ska/data/eng_archive/data/simdiag
78M /proj/sot/ska/data/eng_archive/data/sim_mrg
168M /proj/sot/ska/data/eng_archive/data/sms1eng
66M /proj/sot/ska/data/eng_archive/data/sms2eng
1.4G /proj/sot/ska/data/eng_archive/data/solarephem0
548M /proj/sot/ska/data/eng_archive/data/solarephem1
474M /proj/sot/ska/data/eng_archive/data/tel1eng
228M /proj/sot/ska/data/eng_archive/data/tel2eng
536M /proj/sot/ska/data/eng_archive/data/tel3eng
616M /proj/sot/ska/data/eng_archive/data/thm1eng
151M /proj/sot/ska/data/eng_archive/data/thm2eng
168M /proj/sot/ska/data/eng_archive/data/thm3eng
A similar rsync approach can be used to copy mission planning products (here the 2019 backstop and OR files) into the local Ska root:
rsync -av --prune-empty-dirs --exclude='/*/*/*/' --include='*/' \
--include='???????/ofls?/*.backstop' --include='???????/ofls?/*.or' --exclude='*' \
kady:/data/mpcrit1/mplogs/2019/ $SKA/data/mpcrit1/mplogs/2019/
Maintaining your environment is straightforward. When there is a new release (or just periodically) do:
% conda update -c $ska3conda ska3-flight