\addvspace {10\p@ }
\addvspace {10\p@ }
\contentsline {figure}{\numberline {2.1}{\ignorespaces \textbf {Plus and Cross Polarisation.} The effect of a GW traveling into/out of the page on an initially circular ring of test particles floating in empty space. Starting on the left, the effect of the plus polarisation is shown on top and the cross polarisation on the bottom. \cite {mckechan-thesis} }}{8}{figure.2.1}
\contentsline {figure}{\numberline {2.2}{\ignorespaces \textbf {The First Direct Detection of a Gravitational Wave.} The top panel shows a theoretical model of the first detected GW with an inset cartoon of the state of the binary at four different phases of the binary inspiral. The bottom panel shows the Keplerian separation distance in units of Schwarzschild radii ($R_s=\frac {2GM}{c^2}$ where $M$ is the total mass of the system). Also shown is the relative velocity of the black holes. \cite {150914-det-paper} }}{14}{figure.2.2}
\contentsline {figure}{\numberline {2.3}{\ignorespaces A schematic of a Michelson interferometer. The end test masses of the X and Y arms (ETMX and ETMY) are the mirrors at the end of the interferometer arms. The laser light enters the interferometer from the symmetric port on the left, and the photodetector that measures the differential arm length ($L_Y-L_X$) is in the anti-symmetric port. \cite {ifo_tech} }}{15}{figure.2.3}
\contentsline {figure}{\numberline {2.4}{\ignorespaces \textbf {Electric Fields in a Michelson Interferometer.} This figure shows our labeling convention for the electric fields in the different parts of the interferometer. The input laser light is labeled $E_0$. At the beam splitter, half of this light $E_1$ is reflected into the Y arm and the other half $E_2$ is transmitted into the X arm. The light then travels down each arm and is reflected off of the test mass mirrors. The light accumulates a phase shift in the arms, and we call this phase shifted light $E_3$ and $E_4$ for the Y and X arms respectively. The light in both arms then reaches the beam splitter again, where it is either reflected or transmitted into the symmetric port or the anti-symmetric port. The light in the symmetric port is labeled $E_5$ and in the anti-symmetric port is labeled $E_6$. \cite {ifo_tech} }}{16}{figure.2.4}
\contentsline {figure}{\numberline {2.5}{\ignorespaces \textbf {Interferometer Antenna Pattern.} Here we see the sensitivity to an unpolarised GW, given by the distance from the origin, of an interferometer with arms aligned with the x and y axes. This is calculated as the root-sum-square of the plus and cross antenna patterns. We can see that the detector is sensitive to most of the sky, and most sensitive to GWs coming from perpendicular to the detector plane. However, GWs approaching from within the detector plane and at an angle offset from the arms by $\pi /4$ radians will be in a null of the detector.}}{21}{figure.2.5}
\contentsline {figure}{\numberline {2.6}{\ignorespaces \textbf {ASD for LIGO Livingston Observatory.} Plotted in blue is the ASD of the background noise for the LIGO Livingston observatory as it was on the 1st August 2019. This plot is taken from the LIGO summary pages, which are used to monitor the detectors. In grey we see the gravitational wave interferometer noise curve (GWINC), a theoretical model of all the noise in the detector, and in orange we see the ASD from 13th May as a reference. }}{23}{figure.2.6}
\contentsline {figure}{\numberline {2.7}{\ignorespaces \textbf {Noise Budget for LIGO Livingston Observatory.} Here we have the noise budget for the LIGO Livingston observatory on 20th August 2017. A noise budget shows many different noise sources and how they affect the ASD for the detector. Some of these are determined theoretically, such as the quantum noise. Others are determined via sensors. The sum of all the known noise sources is shown with the dotted black line. Where the black line is close to the measured differential arm length (DARM, shown in blue), the noise is well understood. This is the case at high frequency, where the detector is dominated by quantum noise, and low frequency, where it is dominated by seismic and control system noise. From about 20-80 Hz, we can see that the sum of noises is far below DARM. This indicates that there is a noise source in this frequency range that we do not know about yet. The noise sources in LIGO detectors are described in detail in \cite {noise_budget_martynov} and \cite {GW150914-detector}.}}{24}{figure.2.7}
\addvspace {10\p@ }
\contentsline {figure}{\numberline {3.1}{\ignorespaces \textbf {BATSE gamma-ray light curves.} Here we see the light curves of a selection of GRBs. The duration and flux vary significantly between GRBs. \cite {GRBprompt} }}{30}{figure.3.1}
\contentsline {figure}{\numberline {3.2}{\ignorespaces \textbf {BATSE GRB Fluence.} This plot shows the fluence (given by the colour of each point) and the sky position of each GRB detected by the BATSE mission. \cite {BATSE_dist} }}{32}{figure.3.2}
\contentsline {figure}{\numberline {3.3}{\ignorespaces \textbf {Log N-log P for BATSE and PVO.} Here we plot the log of the number of GRBs against the log of the peak flux. The sample includes GRBs detected by BATSE and by PVO. The energy range for BATSE was 50-300 keV, and the energy range for PVO was 100-2000 keV. For uniformly distributed GRBs, we expect this plot to have a gradient of $-3/2$. The expected gradient is observed for bright GRBs but not at lower peak fluxes. This suggests a limited distance to which GRBs can be observed. \cite {batse_vpo2, batse_vpo1} }}{33}{figure.3.3}
\contentsline {figure}{\numberline {3.4}{\ignorespaces \textbf {$T_{90}$ vs the Spectral Hardness Ratio.} Here we plot the $T_{90}$ values and the spectral hardness ratio for the BATSE GRBs. The top panel shows a histogram of the $T_{90}$ data, which clearly has two populations of GRBs, short and long. The main plot shows $T_{90}$ against spectral hardness, which makes the two populations even clearer and shows that short GRBs have harder spectra than long GRBs. Those GRBs with the greatest ratio of energy in the X-ray to gamma ray band, generally those with a peak energy of less than 15 keV, are called \textit {X-ray flashes} (XRF). Those with comparable energy in the gamma-ray and X-ray band are called \textit {X-ray rich} (XRR) GRBs. All other GRBs are simply called GRBs. These different classes of GRBs are marked on the plot. Also shown is the 2 second dividing line between short and long GRBs.\cite {bloom_grbs}}}{35}{figure.3.4}
\contentsline {figure}{\numberline {3.5}{\ignorespaces \textbf {Break in spectrum due to jetting.} Here we see the optical light curves for the afterglow GRB 990510. A break in the spectrum is visible after approximately one day.\cite {Harrison_1999} }}{39}{figure.3.5}
\contentsline {figure}{\numberline {3.6}{\ignorespaces \textbf {GRB170817A and GW170817.} Here we see a coherent combination of the Hanford and Livingston strain data from GW 170817 in the bottom panel. The top two panels show the Fermi GBM light curves in the 10-50 keV and the 50-300 keV ranges respectively. The INTEGRAL/SPI-ACS data is shown in the third plot. The background estimate for each GRB detector is indicated by the red line. Note that the GRB was detected 1.7 seconds after the GW signal was detected. We can also see that Fermi detected a longer, softer signal in the 10-50 keV range that lasted for a few seconds after the triggering pulse. \cite {GW170817_GRB} }}{44}{figure.3.6}
\contentsline {figure}{\numberline {3.7}{\ignorespaces \textbf {Glitch in the LIGO Livingston Observatory.} The top panel shows a time frequency map for the whitened Livingston observatory data at the detection time of GW 170817. A glitch is clearly visible approximately 1.1 seconds before the end of the signal. Despite this the signal is still clearly visible. The bottom plot shows the raw strain data from the Livingston observatory. This data is bandpassed between 30 Hz and 2 kHz to emphasise the sensitive range of the detector. The grey curve (and right axis) shows the inverse Tukey window used to smoothly zero out the data around the glitch before the rapid reanalysis of the data. The blue curve shows the waveform model used to subtract the glitch from the data before measurements of the source's properties were made. \cite {GW170817_det} }}{45}{figure.3.7}
\contentsline {figure}{\numberline {3.8}{\ignorespaces \textbf {GW 170817 Detection.} Here we see time frequency maps of the LIGO Hanford and Livingston observatories, and the Virgo observatory at the detection time of GW 170817. This data has been whitened and independently observable noise sources have been subtracted, including a glitch in the Livingston data. The non-detection by Virgo significantly reduced the amount of the sky that the signal could have originated from.\cite {GW170817_det}}}{46}{figure.3.8}
\contentsline {figure}{\numberline {3.9}{\ignorespaces \textbf {Sky map for GW 170817/GRB 170817A.} Here we see the 28 deg$^2$ 90\% confidence sky localisation for the LIGO and Virgo collaborations in green, the $\sim 1100$ deg$^2$ \cite {grb170817a_mm} 90\% localisation obtained by GBM in purple, and the annulus formed by Fermi and INTEGRAL timing information in grey. \cite {GW170817_GRB} }}{47}{figure.3.9}
\contentsline {figure}{\numberline {3.10}{\ignorespaces \textbf {NGC 4993.} Image of NGC 4993 taken in 1992 by the Anglo-Australian Observatory (left) and August 18th 2017 by the Las Cumbres Observatory (right). Note the appearance of a bright new object to the North East of the galactic center. \cite {Arcavi:2017xiz} }}{48}{figure.3.10}
\contentsline {figure}{\numberline {3.11}{\ignorespaces \textbf {Brightness/Luminosity against redshift.} Here we see the distribution of the isotropic equivalent energy $E_\text {iso}$ and luminosity $L_\text {iso}$ against redshift for every GBM-detected GRB with a measured redshift. For GRBs with power law spectra, marked with a downward pointing arrow, this is taken to be an upper limit. This is because the spectra must have curvature, and so extrapolating a power law leads to an overestimation. The green dashed line shows the approximate detection threshold for the GBM. These plots show that GRB 170817A was more than two orders of magnitude dimmer than any other GRB in the sample.\cite {GW170817_GRB}}}{49}{figure.3.11}
\contentsline {figure}{\numberline {3.12}{\ignorespaces \textbf {Jet Structure Scenarios.} Three different scenarios that could explain the low luminosity of GRB 170817A. The first scenario is that a top-hat jet was viewed off-axis. The second is that the jet is structured, with photons emitted further from the axis being lower energy and fewer in number, and viewed relatively far from the axis. The third scenario is that a uniform jet has a surrounding cocoon that emits lower energy photons, and it was these lower energy photons that were detected.\cite {GW170817_GRB} }}{49}{figure.3.12}
\contentsline {figure}{\numberline {3.13}{\ignorespaces \textbf {Jet Model Comparison.} Here we see a comparison of the best fit for the structured jet, top-hat jet seen off-axis, and isotropic models. The afterglow's measured flux density at 3 GHz is shown by the blue symbols (though the fits were performed with multi-wavelength data). The inset shows the best fit isotropic energy and Lorentz factor for each model as a function of viewing angle. The arrows show the position of the observer for the structured and top-hat jet models.\cite {Lazzati_afterglow} }}{51}{figure.3.13}
\contentsline {figure}{\numberline {3.14}{\ignorespaces \textbf {Structured Jet.} Left panel: A pseudocolour density image of the simulation used to compute the afterglow curves. The low density core of the jet is the blue region near the middle. The orange and green regions around the core are the slow moving wings. Top right panel: Here we see the 3 GHz flux detected by an observer at $33^\circ $ from the jet axis from different parts of the structured jet as time progresses. The angle is relative to the jet axis, so the blue curve is the core of the jet, the orange curve is the fast wings of the jet, the green curve is the material moving along the line of sight (an angle of about $33^\circ $ in this case), and the pink and brown curves correspond to large angles that do not contribute much to the observed flux. Bottom right panel: The distribution of energy as a function of angular separation from the jet.\cite {Lazzati_afterglow} }}{52}{figure.3.14}
\addvspace {10\p@ }
\contentsline {figure}{\numberline {4.1}{\ignorespaces \textbf {Bank $\chi ^2$.} Here we plot the bank $\chi ^2$ values for a single template on real data from the O2 observing run. We can see that the bank $\chi ^2$ values approximate a $\chi ^2$ distribution with 40 degrees of freedom, plotted in red. We can also see a long tail of triggers with a high bank $\chi ^2$. These are glitches that can be cut. }}{74}{figure.4.1}
\contentsline {figure}{\numberline {4.2}{\ignorespaces \textbf {Autocorrelation $\chi ^2$.} Here we plot the autocorrelation $\chi ^2$ values for a single template on real data from the O2 observing run. The autocorrelation $\chi ^2$ test was calculated with 40 time slides, which would follow a $\chi ^2$ distribution with 160 degrees of freedom if the different time slides were not correlated. This is not the case, as can be seen from the $\chi ^2$ distribution in red. There is also a long tail of triggers with a high autocorrelation $\chi ^2$ (the plot has been truncated to not include the highest values). These are glitches that can be cut. }}{75}{figure.4.2}
\contentsline {figure}{\numberline {4.3}{\ignorespaces \textbf {Coherent $\chi ^2$.} Here we plot the coherent $\chi ^2$ values for a single template on real data from the O2 observing run. The test used 16 frequency bins, which would follow a $\chi ^2$ distribution with 60 degrees of freedom if the different frequency bins were not correlated. This is the case, as can be seen from how closely the distribution follows the $\chi ^2$ distribution, shown in red.}}{76}{figure.4.3}
\contentsline {figure}{\numberline {4.4}{\ignorespaces \textbf {Null Statistic Cut.} Here we plot the coherent SNR against the null SNR. The blue crosses are background triggers. The red pluses are signal injections. The black line is the veto line, with all triggers in the shaded region above the line being discarded. The green line indicates the expected SNR for optimally oriented injections. The magenta line shows the one sigma error on the green line. }}{78}{figure.4.4}
\contentsline {figure}{\numberline {4.5}{\ignorespaces \textbf {PyGRB Sky Grid.} Here we see an example of a full search grid used by PyGRB, indicated by the blue dots, and a reduced sky grid parsed by PyGRB in the case of a two detector search using the Hanford and Livingston detectors, the empty circles labeled 'parsed'. The parsed circles do not form a line due to the parsing routine, but this has no effect on the analysis. \cite {pygrb_Williamson:2014}}}{82}{figure.4.5}
\contentsline {figure}{\numberline {4.6}{\ignorespaces \textbf {PyGRB Workflow.} The workflow starts in two parallel branches, one that runs the injection jobs and one that analyses the background and onsource. }}{85}{figure.4.6}
\contentsline {figure}{\numberline {4.7}{\ignorespaces \textbf {P-value for each GRB.} This is the p-value distribution for the 41 GRBs other than GRB 170817A. The GRBs with no trigger in the onsource window have upper and lower limits on the p-value. The upper limit is a p-value of 1. The lower limit is the fraction of offsource trials that also had no trigger. The distribution lies within the $2\sigma $ range, shown by the upper and lower dotted lines. }}{87}{figure.4.7}
\contentsline {figure}{\numberline {4.8}{\ignorespaces \textbf {Cumulative exclusion distance.} This is the cumulative $90\%$ exclusion distance for every GRB analysed by PyGRB except GRB 170817A. The $90\%$ exclusion distance is the distance at which $90\%$ of injected simulated signals are recovered with a greater coherent SNR than the loudest trigger in the onsource. }}{88}{figure.4.8}
\contentsline {figure}{\numberline {4.9}{\ignorespaces \textbf {Cumulative Rate of BNS and short GRB Events.} The magenta lines show the 90\% confidence bounds for joint GRB/GW events as a function of redshift. This was calculated using the 41 non-detections and single detection by PyGRB during O2. The black line and the grey region show the estimated BNS merger rate $1210^{+3230}_{-1040}$. In green is shown the estimated Fermi detection rate and its 90\% confidence region. \cite {Howell} The measured redshifts of every short GRB apart from GRB 170817A are shown in brown. The gold sample refers to those GRBs that were localised to near a host galaxy, making the redshift measurement more reliable than for short GRBs measured further from a host galaxy. Our results are compatible with both the Fermi-GBM observed rate and the predicted BNS merger rate. \cite {o2grb}}}{90}{figure.4.9}
\addvspace {10\p@ }
\contentsline {figure}{\numberline {5.1}{\ignorespaces \textbf {Coherent and Reweighted SNR Time Series for GRB 170817A.} The top panel shows the coherent SNR vs time for GRB 170817A. The GW is clearly visible, as are some smaller peaks that are due to noise. The bottom panel shows the reweighted SNR time series. The background noise has been downweighted but the GW is still very prominent. It is noteworthy that the peaks in coherent SNR that were due to noise have mostly been downweighted to be less significant than the median background trigger.}}{98}{figure.5.1}
\contentsline {figure}{\numberline {5.2}{\ignorespaces \textbf {Coherent and Reweighted SNR Time Series for GRB 170112A.} The top panel shows the coherent SNR vs time for GRB 170112A. There is no GW, but several glitches are clearly visible. The bottom panel shows the reweighted SNR. We can see that the glitches have been downweighted to be less significant than the median background trigger.}}{99}{figure.5.2}
\contentsline {figure}{\numberline {5.3}{\ignorespaces \textbf {Null SNR vs Reweighted SNR for GRB 170817A.} Here we plot the null SNR against the coherent SNR (left) and the reweighted SNR (right) for GRB 170817A. Only triggers with a null SNR above 4.25 are reweighted by the null SNR (with the other triggers being reweighted only by their $\chi ^2$ values). We can see that these triggers have been downweighted more than the triggers with a low null SNR. The GW is clearly visible on the right of both plots.}}{100}{figure.5.3}
\contentsline {figure}{\numberline {5.4}{\ignorespaces \textbf {Network $\chi ^2$ vs Coherent and Reweighted SNR for GRB 170817A.} Here we plot the network $\chi ^2$ against the coherent SNR (left) and reweighted SNR (right) for GRB 170817A. We can see that the higher the network $\chi ^2$ of a trigger, the more it is downweighted. The GW is clearly visible on the right of both plots. }}{101}{figure.5.4}
\contentsline {figure}{\numberline {5.5}{\ignorespaces \textbf {Network $\chi ^2$ vs Coherent and Reweighted SNR for GRB 170112A.} Here we plot the network $\chi ^2$ against the coherent SNR (left) and reweighted SNR (right) for GRB 170112A. The data contained several glitches, which are apparent from the triggers with a high network $\chi ^2$ and high coherent SNR. As we can see, these triggers are downweighted appropriately, such that the reweighted SNR contains no significant peaks. }}{102}{figure.5.5}
\contentsline {figure}{\numberline {5.6}{\ignorespaces \textbf {Loudest Event per 6-second Trial for GRB 170817A.} Here we see the peak coherent SNR (orange line) and reweighted SNR (blue line) in each 6-second trial for the GRB 170817A analysis, with the stars indicating the GW. Again, we can see that the on-source has a much higher reweighted SNR than any of the off-source trials. Also note that the tail of events with a coherent SNR of about 7-9 does not appear in the reweighted SNR. This shows that the reweighting is lowering the significance of glitches.}}{103}{figure.5.6}
\contentsline {figure}{\numberline {5.7}{\ignorespaces \textbf {Loudest Event per 6-second Trial for GRB 170112A.} Here we see the peak coherent SNR (orange line) and reweighted SNR (blue line) in each 6-second trial for the GRB 170112A analysis. The peak coherent and reweighted SNR in the on-source trial are indicated by the red and blue stars respectively. We see that the on-source results are consistent with background. We can also see that reweighting the SNR removed the long tail of high SNR glitches for this analysis. }}{104}{figure.5.7}
\contentsline {figure}{\numberline {5.8}{\ignorespaces \textbf {Injection Distance against Time.} Here we plot distance against time for the BNS injections of the GRB 170817A analysis. We can see that the analysis is able to better detect nearby injections than far ones. It also has a range of about 200 Mpc, which is comparable to the PyGRB analysis in O2.}}{104}{figure.5.8}
\contentsline {figure}{\numberline {5.9}{\ignorespaces \textbf {Injection Distance against Time.} This is the distance (Mpc) vs time (seconds) plot for the BNS injection run in the PyGRB O2 analysis of GRB 170817A. Blue crosses indicate that the injection was found and was more significant than any event in the background data. Red crosses indicate that a trigger was found that was coincident with the injection, but it was vetoed. Black crosses indicate that no trigger was found that was coincident with the injection. Coloured circles indicate that the injection was found but was not louder than all of the background, and in this case the circle colour indicates the FAP of the trigger. We can see that nearby injections are almost always found, and typically with a low FAP. More distant injections tend to be vetoed, missed completely, or have a relatively high FAP. It is at a distance of about 200 Mpc that injections start to be missed.}}{105}{figure.5.9}
\addvspace {10\p@ }
\contentsline {figure}{\numberline {6.1}{\ignorespaces \textbf {X-pipeline Time-Frequency Map.} This figure shows a time-frequency map from X-pipeline for a $1.4-10 M_\odot $ NSBH merger using simulated background from the two Hanford detectors. The top figure shows a coherent signal stream called the \textit {standard likelihood} $E_\text {SL}$ and the bottom figure shows the top 1\% of pixels. \cite {xpipeline} }}{111}{figure.6.1}
\contentsline {figure}{\numberline {6.2}{\ignorespaces \textbf {X-pipeline Background Rejection Test.} This figure shows an example of X-pipeline background rejection. The axes show two of the statistics that X-pipeline calculates. Specifically, the x-axis shows the coherent null energy and the y-axis shows the incoherent null energy (see section \ref {sec:xcuts} for more details on these statistics). The red squares show simulated GW signals, and the crosses show background triggers. The colour bar shows the base 10 logarithm of the significance of each trigger. The injection amplitude plotted is chosen such that approximately 90\% of injections will survive the cut. Hence, the cut eliminates most of the noise but only a few signals. \cite {xpipeline} }}{112}{figure.6.2}
\contentsline {figure}{\numberline {6.3}{\ignorespaces \textbf {Cumulative Distribution of p-values.} Here we plot the p-values for every GRB analysed by X-pipeline in O2 apart from GRB 170817A. Also plotted is the expected distribution and the $2\sigma $ deviation. The results are consistent with the no-signal hypothesis. \cite {o2grb}}}{119}{figure.6.3}
\contentsline {figure}{\numberline {6.4}{\ignorespaces \textbf {Cumulative Distribution of Exclusion Distance.} Here we plot the 90\% exclusion distance for every GRB analysed by X-pipeline in O2 apart from GRB 170817A. This is the distance to which 90\% of injections can be recovered with a significance greater than the loudest event in the on-source.\cite {o2grb} }}{120}{figure.6.4}
\contentsline {figure}{\numberline {6.5}{\ignorespaces \textbf {Schematic Decision Tree.} To determine if a trigger is a signal or noise event the tree makes a series of cuts on the attributes x[N]. If the inequality in a node is true, then the next node is the branch to the left. Otherwise the next node is the one to the right. The properties of the tree, such as the number of layers it has, are set by the user (see section \ref {sec:opt}). }}{124}{figure.6.5}
\contentsline {figure}{\numberline {6.6}{\ignorespaces \textbf {Visualising the Classifier.} In the top plot you can see the values of log(Enull) and log(Inull) for all the signal and background training data used to build the classifier. We chose one of these events at random (indicated by the star) and varied Enull and Inull to see how it changed the MVA score, indicated by the colour in the bottom plot. As we can see, increasing Inull and decreasing Enull leads to the event being more likely to be classed as a signal. This is akin to the X-pipeline cut shown in figure \ref {fig:xcuts}. }}{125}{figure.6.6}
\contentsline {figure}{\numberline {6.7}{\ignorespaces \textbf {Removing WNB and Cusp Waveforms from Training Set.} Here we plot the percentage change in 50\% upper limit injection scale per waveform after removing WNB and Cusp waveforms from the training set. Negative values indicate that the sensitivity improved after the change. We see that the sensitivity to most waveforms drops, but by less than 1\%. As we use a few hundred injections at each injection scale, this is not a statistically significant result. This shows that the MVA is able to detect GW morphologies that it has not been trained on. }}{130}{figure.6.7}
\contentsline {figure}{\numberline {6.8}{\ignorespaces \textbf {Effect of Hyperparameter Optimisation.} Here we see the effects of optimisation on the 50\% upper limit injection scale. Lower values indicate a more sensitive search. The top panel shows the absolute values and the bottom panel shows the percentage change. The benefit of optimising the hyperparameters is no more than a $\sim 3\%$ improvement in sensitivity when compared to the default settings of the TMVA boosted decision tree classifier. It is also interesting to note that the three waveforms that have their sensitivity drop after optimisation (adi-a, adi-c, and adi-d) are all long waveforms. We will discuss the problem with these waveforms in section \ref {sec:mva future}. }}{134}{figure.6.8}
\contentsline {figure}{\numberline {6.9}{\ignorespaces \textbf {MVA Improvement.} Here we see the effects of using the MVA on the 50\% upper limit injection scale for the same GRB that was used for optimisation. The top panel shows the absolute values and the bottom panel shows the percentage change. We can see that the MVA outperforms X-pipeline on every waveform. As this was the GRB used to optimise the hyperparameters, it cannot be guaranteed that these results will hold for other GRB analyses. }}{135}{figure.6.9}
\contentsline {figure}{\numberline {6.10}{\ignorespaces \textbf {X-pipeline and XTMVA ADI-a 50\% Injection Scale Upper Limit by GRB.} Here we plot the sensitivity to the ADI-a waveform of both X-pipeline and XTMVA. The lower injection scales for XTMVA show that XTMVA is more sensitive than X-pipeline to the ADI-a waveform. Also, note the lower variation in injection scale between GRBs for XTMVA, suggesting that XTMVA is more stable than X-pipeline. }}{136}{figure.6.10}
\contentsline {figure}{\numberline {6.11}{\ignorespaces \textbf {X-pipeline and XTMVA CSG 50\% Injection Scale Upper Limit by GRB.} Here we plot the sensitivity to the 150 Hz circular sine-Gaussian waveform of both X-pipeline and XTMVA. The lower injection scales for XTMVA show that XTMVA is more sensitive than X-pipeline to the CSG waveform. Again, note the lower variation in injection scale between GRBs for XTMVA, suggesting that XTMVA is more stable than X-pipeline. }}{137}{figure.6.11}
\contentsline {figure}{\numberline {6.12}{\ignorespaces \textbf {Median 50\% Injection Scale Upper Limit by Waveform.} Here we plot the median sensitivity to each waveform of both X-pipeline and XTMVA. Overall, XTMVA is more sensitive, especially to shorter waveforms such as sine-Gaussians. Apart from ADI-a, the MVA is worse than X-pipeline for long waveforms, though the difference is small. If the MVA could use the long injection code that X-pipeline uses, it is reasonable to expect that the MVA would outperform X-pipeline for long waveforms as well. }}{138}{figure.6.12}
\contentsline {figure}{\numberline {6.13}{\ignorespaces \textbf {XTMVA p-values.} Here we have plotted the p-values for 13 of the GRBs analysed with the MVA. The blue triangles indicate the p-value reported by the MVA, the black dotted lines show the expected distribution and a $2\sigma $ deviation. Two GRBs that were analysed were left out from this plot: GRB 170817A as it had a known GW counterpart and E264930 as it was used to tune the hyperparameters. The analysis shows some bias towards low p-values. In particular, two out of the 13 analysed GRBs have a p-value of $\sim 1\%$. This can be compared to figure \ref {fig:xpvalue} which shows the X-pipeline p-values for O2. In particular, X-pipeline did not have the same bias towards low p-values that XTMVA does. This needs further investigation.}}{139}{figure.6.13}
\contentsline {figure}{\numberline {6.14}{\ignorespaces \textbf {Cumulative Distribution of Exclusion Distance.} Here we plot the XTMVA 90\% exclusion distance for the 13 GRBs in the results set. This is the distance to which 90\% of injections can be recovered with a significance greater than the loudest event in the on-source. }}{140}{figure.6.14}
\contentsline {figure}{\numberline {6.15}{\ignorespaces \textbf {Generalised Clustering Sensitivity Change.} Here we see the change in 50\% injection scale upper limit for XTMVA with and without using generalised clustering. A lower value indicates a more sensitive search. The sensitivity of XTMVA is significantly improved for long waveforms such as ADIs, BNS, and NSBH when using generalised clustering. There is, however, a slight reduction in sensitivity to short waveforms. }}{143}{figure.6.15}
\contentsline {figure}{\numberline {6.16}{\ignorespaces \textbf {Detection Efficiency Curve without Generalised Clustering.} This is the detection efficiency curve for an XTMVA analysis without generalised clustering. The x-axis shows the root-sum-square amplitude of the injected waveforms and the y-axis shows the fraction of injections detected. This plot shows that at low amplitude no injections are found, while for very loud injections there is almost a 100\% detection efficiency. This is the typical, expected behaviour. }}{145}{figure.6.16}
\contentsline {figure}{\numberline {6.17}{\ignorespaces \textbf {Detection Efficiency Curve with Generalised Clustering.} This is the detection efficiency curve for an XTMVA analysis using generalised clustering. The x-axis shows the root-sum-square amplitude of the injected waveforms and the y-axis shows the fraction of injections detected. We can see that some very loud injections are being missed, despite the fact that close to 100\% of some lower energy injection sets are being found. }}{146}{figure.6.17}
\contentsline {figure}{\numberline {6.18}{\ignorespaces \textbf {Time-Frequency Box Size.} Here we have a histogram of the time-frequency box size of triggers in the signal training set for an analysis with generalised clustering and without. We can see that the default run not only has many more triggers overall, but also has more triggers with a large time-frequency box. }}{146}{figure.6.18}
\addvspace {10\p@ }
\addvspace {10\p@ }
\addvspace {10\p@ }