diff --git a/docs/theory/force_calibration/diode.rst b/docs/theory/force_calibration/diode.rst new file mode 100644 index 000000000..524e7f469 --- /dev/null +++ b/docs/theory/force_calibration/diode.rst @@ -0,0 +1,65 @@ +.. _diode_theory: + +Position sensitive detector +--------------------------- + +The previous section introduced the origin of the frequency spectrum of a bead in an optical trap. +In reality, our measurement is affected by two processes: + +1. The motion of the bead in the trap. +2. The response of the detector to the incident light. + +.. image:: figures/diode_filtering.png + :nbattach: + +This second factor depends on the type of measurement device being used. +Typical position sensitive detectors are made of silicon. +Such a detector has a very high bandwidth for visible light (in the MHz range). +Unfortunately, the bandwidth is markedly reduced for the near infra-red light of the trapping laser :cite:`berg2003unintended,berg2006power`. +This makes it less sensitive to changes in signal at high frequencies. + +Why is the bandwidth limited? +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +The high bandwidth of visible light detection of a silicon photodiode is achieved when incoming photons are absorbed in the so-called depletion layer of the diode. +Unfortunately, silicon has an increased transparency at the near infra-red wavelength of the trapping laser. +The result of this is that light penetrates deeper into the substrate of the diode, where it generates charge carriers in a different region of the diode. +These charge carriers then have to diffuse back to the depletion layer, which takes time. +As a result, a fraction of the signal has a much slower dynamic response (i.e. a lower bandwidth). + +.. image:: figures/diode.png + :nbattach: + +This effect is often referred to as the parasitic filtering effect and is frequently modelled as a first order lowpass filter. 
+This model is characterized by two numbers whose values depend on the incident laser power :cite:`berg2003unintended`: + +- A frequency `f_diode`, given in hertz. +- A unitless relaxation factor `alpha`, which reflects the fraction of light that is transmitted instantaneously. + +.. _high_corner_freq: + +High corner frequencies +^^^^^^^^^^^^^^^^^^^^^^^ + +In the literature, the diode parameters are frequently estimated simultaneously with the calibration data :cite:`berg2003unintended,hansen2006tweezercalib,berg2006power,tolic2006calibration,tolic2004matlab,berg2004power`. +Unfortunately, this can cause issues when calibrating at high powers. + +Recall that the physical spectrum is characterized by a corner frequency `fc` and a diffusion constant `D`. +The corner frequency depends on the laser power and bead size (smaller beads resulting in higher corner frequencies). +The parasitic filtering effect also has a corner frequency (`f_diode`), which depends on the incident intensity :cite:`berg2003unintended`. + +When these two frequencies get close, they can no longer be estimated reliably from the power spectrum. +The reason for this is that the effects these parameters have on the power spectrum become very similar. +When working with small beads or at high laser powers, it is important to verify that the corner frequency `fc` does not approach the frequency of the filtering effect `f_diode`. + +Sometimes, the parameters of this diode have been characterized independently. +In that case, the arguments `fixed_diode` and `fixed_alpha` can be passed to :func:`~lumicks.pylake.calibrate_force()` to fix these parameters to their predetermined values, resolving this issue. + +Mathematical background +^^^^^^^^^^^^^^^^^^^^^^^ + +In the literature, the parasitic filtering effect is frequently modelled to good accuracy with a first order approximation :cite:`berg2003unintended,tolic2006calibration,berg2006power`. + +.. 
math:: + + g(f, f_\mathrm{diode}, \alpha) = \alpha^2 + \frac{1 - \alpha ^ 2}{1 + (f / f_\mathrm{diode})^2} \tag{$-$} diff --git a/docs/theory/force_calibration/figures/blocking.gif b/docs/theory/force_calibration/figures/blocking.gif new file mode 100644 index 000000000..03b921170 --- /dev/null +++ b/docs/theory/force_calibration/figures/blocking.gif @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e526c60c7eb0470f5d0554912a4d1f56bb5bc4eb2ebd3e7ffc5f7381e301fec +size 234543 diff --git a/docs/theory/force_calibration/figures/diode.png b/docs/theory/force_calibration/figures/diode.png new file mode 100644 index 000000000..072dd1bd8 --- /dev/null +++ b/docs/theory/force_calibration/figures/diode.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:46c1e473593aff5873db12f51b11b062d43e1d3ee420a9a6fde5e75651be848e +size 136470 diff --git a/docs/theory/force_calibration/figures/diode_filtering.png b/docs/theory/force_calibration/figures/diode_filtering.png new file mode 100644 index 000000000..a377ed62c --- /dev/null +++ b/docs/theory/force_calibration/figures/diode_filtering.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b4dfedf32d3a57da7077bd3bef93f599db4a1e28571087ed9351928ec1387925 +size 202936 diff --git a/docs/theory/force_calibration/figures/hydro_fast.png b/docs/theory/force_calibration/figures/hydro_fast.png new file mode 100644 index 000000000..07a8751a9 --- /dev/null +++ b/docs/theory/force_calibration/figures/hydro_fast.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:35cd996d7220906950b9c6bf8e3fd2cbbf7fbd0cae256dc2347dd61bff7ec04d +size 63964 diff --git a/docs/theory/force_calibration/figures/sim_trap_opt.gif b/docs/theory/force_calibration/figures/sim_trap_opt.gif new file mode 100644 index 000000000..3caa15796 --- /dev/null +++ b/docs/theory/force_calibration/figures/sim_trap_opt.gif @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid 
sha256:e5254184598adb1fd8d801119d1b3d81e7a417632ac0c9dda035a38b7f784edf +size 849263 diff --git a/docs/theory/force_calibration/fitting.rst b/docs/theory/force_calibration/fitting.rst index 3c5b7637c..fe0ecaee5 100644 --- a/docs/theory/force_calibration/fitting.rst +++ b/docs/theory/force_calibration/fitting.rst @@ -1,13 +1,14 @@ Fitting a power spectrum ------------------------ -In the previous section, the physical origin of the power spectrum was introduced. -However, there are some practical aspects to consider. -So far, we have only considered the expectation value of the power spectrum. -In reality, power spectral values follow a distribution. +In the previous sections, the physical origin of the power spectrum was introduced. +However, there are some additional practical aspects to consider. -The real and imaginary part of the frequency spectrum are normally distributed. +So far, we have only considered the expected value of the power spectrum. +In reality, power spectral values follow a distribution. +The real and imaginary part of the spectrum are normally distributed. As a consequence, the squared magnitude of the power spectrum is exponentially distributed. + This has two consequences: - Fitting the power spectral values directly using a simple least squares fitting routine, we would @@ -15,9 +16,15 @@ This has two consequences: resulting in overestimated trap stiffness and force response and an underestimated distance response. - The signal to noise ratio is poor (equal to one :cite:`norrelykke2010power`). +.. _blocking_windowing: + +Spectral down-sampling +^^^^^^^^^^^^^^^^^^^^^^ + A commonly used method for dealing with this involves data averaging, which trades resolution for an -improved signal to noise ratio. In addition, by virtue of the central limit theorem, data averaging -leads to a more symmetric data distribution (more amenable to standard least-squares fitting procedures). +improved signal to noise ratio. 
+By virtue of the central limit theorem, as we average more data, the distribution of the data points +becomes more and more Gaussian and therefore more amenable to standard least-squares fitting procedures. There are two ways to perform such averaging: @@ -27,23 +34,71 @@ There are two ways to perform such averaging: spectral domain by averaging adjacent bins according to :cite:`berg2004power`. This procedure is referred to as *blocking*. -We use the blocking method for spectral averaging, since this allows us to reject noise peaks at high -resolution prior to averaging. Note however, that the error incurred by this blocking procedure depends -on :math:`n_b`, the number of points per block, :math:`\Delta f`, the spectral resolution and inversely -on the corner frequency :cite:`berg2004power`. +.. image:: figures/blocking.gif + :nbattach: + +Blocking +"""""""" + +Pylake uses the blocking method for spectral averaging, since this allows us to reject noise peaks +at high resolution prior to averaging (more on this later). +Note however, that the error incurred by this blocking procedure depends on :math:`n_b`, the number +of points per block, :math:`\Delta f`, the spectral resolution and inversely on the corner +frequency :cite:`berg2004power`. + +.. math:: + + \bar{f} &= \frac{1}{n_b} \sum_{f \in block} f\\ + \bar{P}_{meas} &= \frac{1}{n_b} \sum_{f \in block} P_{meas}(f) + +Setting the number of points per block too low results in a bias from insufficient averaging :cite:`berg2004power`. +Insufficient averaging would result in an overestimation of the force response :math:`R_f` and an +underestimation of the distance response :math:`R_d`. +In practice, one should use a high number of points per block (:math:`n_b \gg 100`), +unless a very low corner frequency precludes this. In such cases, it is preferable to increase the +measurement time. 
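As an illustrative sketch (not Pylake's internal implementation), the blocking procedure described above can be mimicked with NumPy by averaging each group of `n_b` adjacent frequency bins; the function name `block_spectrum` and the Lorentzian-shaped test spectrum are hypothetical:

```python
import numpy as np

def block_spectrum(freq, power, n_b):
    """Average adjacent frequency bins in blocks of n_b points.

    Returns the block-averaged frequencies and powers; trailing points
    that do not fill a complete block are discarded.
    """
    n_blocks = len(freq) // n_b
    f_bar = freq[:n_blocks * n_b].reshape(n_blocks, n_b).mean(axis=1)
    p_bar = power[:n_blocks * n_b].reshape(n_blocks, n_b).mean(axis=1)
    return f_bar, p_bar

# Block a Lorentzian-shaped spectrum using 100 points per block
freq = np.arange(1.0, 10001.0)  # frequency axis in Hz
power = 1.0 / (freq**2 + 1000.0**2)  # arbitrary units
f_bar, p_bar = block_spectrum(freq, power, n_b=100)
```

Each block-averaged point is the mean of `n_b` exponentially distributed values, which is why its distribution approaches a Gaussian with relative standard deviation :math:`1/\sqrt{n_b}`.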
+ +Bias correction +""""""""""""""" + +When sufficient blocking has taken place and noise peaks have been excluded prior to blocking, +the spectral data points are approximately Gaussian distributed with standard deviation: + +.. math:: + + \sigma(\bar{f}) = \frac{P(\bar{f})}{\sqrt{n_b}} + +This means that the fit can be performed with regular weighted least squares (WLS). +To ensure unbiased estimates in WLS, the data and squared weights must be uncorrelated. +However, these two are correlated, which results +in a known bias in the estimate for the diffusion constant that can be corrected after +fitting :cite:`norrelykke2010power`: + +.. math:: + + D_{corrected} = D_{wls} \frac{n_b}{n_b + 1} + +Noise floor +^^^^^^^^^^^ + +When operating at very low powers (and, by extension, low corner frequencies), a noise floor may be visible at high frequencies. +It is important to ensure that the upper limit of the fitting range does *not* include the noise floor, as it is not taken into account in the calibration model. + +Noise peaks +^^^^^^^^^^^ -Setting the number of points per block too low would result in a bias from insufficient averaging -:cite:`berg2004power`. Insufficient averaging would result in an overestimation of the force response -(:math:`R_f`) and an underestimation of the distance response (:math:`R_d`). In practice, one should -use a high number of points per block (:math:`n_b \gg 100`), unless a very low corner frequency precludes this. -In such cases, it is preferable to increase the measurement time. +Optical tweezers are precision instruments. +Despite careful determination and elimination of noise sources, it is not always possible to exclude all potential sources of noise. +One downside of weighted least squares estimation is that it is very sensitive to outliers. +It is therefore important to either exclude noise peaks from the data prior to fitting or use :ref:`robust fitting`. 
+Noise peaks are always excluded prior to blocking to minimize data loss. .. _goodness_of_fit: Goodness of fit --------------- -Based on the model and noise assumptions, we can calculate a goodness of fit criterion. +When working with the Gaussian error model, we can calculate a goodness of fit criterion. When sufficient blocking has taken place, the sum of squared residuals that is being minimized during the fitting procedure is distributed according to a chi-squared distribution characterized by :math:`N_{\mathit{dof}} = N_{\mathit{data}} - N_{\mathit{free}}` degrees of freedom. Here :math:`N_{\mathit{data}}` corresponds to the number of data points we fitted (after blocking) and :math:`N_{\mathit{free}}` corresponds to the number of parameters we fitted. We can use the value we obtain to determine how unusual the fit error we obtained is. diff --git a/docs/theory/force_calibration/force_calibration.rst b/docs/theory/force_calibration/force_calibration.rst index 3cc2280e2..eb8acae89 100644 --- a/docs/theory/force_calibration/force_calibration.rst +++ b/docs/theory/force_calibration/force_calibration.rst @@ -8,9 +8,8 @@ Introduction Why is force calibration necessary? ----------------------------------- -Optical tweezers typically measure forces and displacements by detecting deflections of a trapping -laser by a trapped bead. These deflections are measured at the back focal plane of the beam using -position sensitive detectors (PSDs). +Optical tweezers typically measure forces and displacements by detecting deflections of a trapping laser by a trapped bead. +These deflections are measured at the back focal plane of the beam using position sensitive detectors (PSDs). .. image:: figures/back_focal.png :nbattach: @@ -30,15 +29,46 @@ and Where :math:`V` is the position-dependent voltage signal from the PSD and :math:`R_d` and :math:`R_f` are the displacement and force sensitivity proportionality constants, respectively. 
-Force calibration refers to computing these conversion factors. + +Force calibration refers to computing the calibration factors needed to convert from raw voltages to actual forces and displacements. +The values we wish to calculate are: + +- Trap stiffness :math:`\kappa`, which reflects how strongly a bead is held by a trap. +- Force response :math:`R_f`, the proportionality constant between voltage and force. +- Distance response :math:`R_d`, the proportionality constant between voltage and distance. Several methods exist to calibrate optical traps based on sensor signals. In this section, we will provide an overview of the physical background of power spectral calibration. -Why does the power spectrum look the way it does? -------------------------------------------------- +How can we calibrate? +--------------------- + +Consider a small bead suspended in fluid (no optical trapping taking place). +This bead moves around due to the random collisions of fluid molecules with the bead. +This unimpeded movement is called free diffusion. +If there is no optical trap keeping it in place, the bead slowly drifts off from its starting position. + +Now we introduce the optical trap. +The trap pulls the bead back to the laser focus. +The strength of this pull depends on how far the bead is from the focus (like a spring). + +.. image:: figures/sim_trap_opt.gif + :nbattach: + +This effectively limits the amplitude of motion away from the focus. +Consider again the frequency spectrum of diffusion. +What we saw was that the vertical axis is proportional to amplitude squared (or, loosely speaking, travelled distance). +We can now intuitively understand why the trap limits this amplitude and why this manifests itself as a sharp reduction of amplitudes above a certain threshold frequency. + +Important takeaways +------------------- + +- The spectrum of bead motion in a trap can be characterized by a diffusion constant and a corner frequency. 
+- At low frequencies the trapping force dominates, limiting the amplitudes, while at high frequencies the drag on the bead does. + +Mathematical background +----------------------- -Consider a small bead freely diffusing in a medium (no optical trapping taking place). Neglecting hydrodynamic and inertial effects (more on this later), we obtain the following equation of motion: .. math:: diff --git a/docs/theory/force_calibration/hyco.rst b/docs/theory/force_calibration/hyco.rst index 19b0aecce..d80a57193 100644 --- a/docs/theory/force_calibration/hyco.rst +++ b/docs/theory/force_calibration/hyco.rst @@ -1,17 +1,82 @@ Hydrodynamically correct model ------------------------------ -While the idealized model discussed in the previous section is sometimes sufficiently accurate, -there are scenarios where more detailed models are necessary. - -The frictional forces applied by the viscous environment to the bead are proportional to the bead's -velocity. The idealized model is based on the assumption that the bead's velocity is constant, which, -for a stochastic process such as Brownian motion, is not an accurate assumption. In addition, the -bead and the surrounding fluid have their own mass and inertia, which are also neglected in the idealized model. -Together, the non-constant speed and the inertial effects result in frequency-dependent frictional -forces that a more accurate hydrodynamically correct model takes into account. +.. only:: html + + :nbexport:`Download this page as a Jupyter notebook ` + +.. _hydro_model_theory: + +While the idealized model of the bead motion is sometimes sufficiently accurate, +it neglects inertial and hydrodynamical effects of the fluid and bead(s). + +The frictional forces applied by the viscous environment to the bead are proportional to the bead's velocity relative to the fluid. +The idealized model is based on the assumption that the bead's relative velocity is constant. 
+There is no dynamical change of the fluid motion around the bead. +In reality, when the bead moves through the fluid, the frictional force between the bead and the fluid depends on the past motion, since that determines the fluid's current motion. +For a stochastic process such as Brownian motion, constant motion is not an accurate assumption. +In addition, the bead and the surrounding fluid have their own mass and inertia, which are also neglected in the idealized model. + +Together, the non-constant relative velocity and the inertial effects result in frequency-dependent +frictional forces that a more accurate hydrodynamically correct model takes into account. These effects are strongest at higher frequencies, and for larger bead diameters. +The figure below shows the difference between the hydrodynamically correct model (solid lines) and the +idealized Lorentzian model (dashed lines) for various bead sizes. It can be seen that for large bead +sizes and higher trap powers the differences can be substantial. + +.. image:: figures/hydro.png + :nbattach: + +.. _fast_sensor_hyco: + +Fast sensor measurement +^^^^^^^^^^^^^^^^^^^^^^^ + +When fitting a power spectrum, one may ask: why does the fit look good if the model is inaccurate? +The answer lies in the model that is used to capture the :ref:`parasitic filtering effect <diode_theory>`. +When the parameters of this model are estimated, they can "hide" the mis-specification of the model. + +Fast detectors respond much more quickly to incoming light, resulting in no visible filtering effect in the frequency range we are fitting. +This means that for a fast detector, we do not need to include such a filtering effect in our model and can see the power spectrum for what it really is. + +We can omit this effect by passing `fast_sensor=True` to the calibration models or to :func:`~lumicks.pylake.calibrate_force()`. 
+Note however, that this makes using the hydrodynamically correct model critical, as the simple model doesn't actually capture the data very well. +The following example data acquired on a fast sensor will illustrate why:: + + volts = f.force2y / f.force2y.calibration[0].force_sensitivity + + shared_parameters = { + "force_voltage_data": volts.data, + "bead_diameter": 4.38, + "temperature": 25, + "sample_rate": volts.sample_rate, + "fit_range": (1e2, 23e3), + "num_points_per_block": 200, + "excluded_ranges": ([190, 210], [13600, 14600]) + } + + plt.figure(figsize=(13, 4)) + plt.subplot(1, 3, 1) + fit = lk.calibrate_force(**shared_parameters, hydrodynamically_correct=False, fast_sensor=False) + fit.plot() + plt.title(f"Simple model + Slow (kappa={fit['kappa'].value:.2f})") + plt.subplot(1, 3, 2) + fit = lk.calibrate_force(**shared_parameters, hydrodynamically_correct=False, fast_sensor=True) + fit.plot() + plt.title(f"Simple model + Fast (kappa={fit['kappa'].value:.2f})") + plt.subplot(1, 3, 3) + fit = lk.calibrate_force(**shared_parameters, hydrodynamically_correct=True, fast_sensor=True) + fit.plot() + plt.title(f"Hydrodynamically correct + Fast (kappa={fit['kappa'].value:.2f})") + plt.tight_layout() + plt.show() + +.. image:: figures/hydro_fast.png + +Mathematical background +^^^^^^^^^^^^^^^^^^^^^^^ + The following equation accounts for a frequency dependent drag :cite:`tolic2006calibration`: .. math:: @@ -46,101 +111,3 @@ Here :math:`f_{\nu}` is the frequency at which the penetration depth equals the :math:`4 \nu/(\pi d^2)` with :math:`\nu` the kinematic viscosity. This approximation is reasonable, when the bead is far from the surface. - -When approaching the surface, the drag experienced by the bead depends on the distance between the -bead and the surface of the flow cell. An approximate expression for the frequency dependent drag is -then given by :cite:`tolic2006calibration`: - -.. 
math:: - - \gamma(f, R/l) = \frac{\gamma_\mathrm{stokes}(f)}{1 - \frac{9}{16}\frac{R}{l} - \left(1 - \left((1 - i)/3\right)\sqrt{\frac{f}{f_{\nu}}} + \frac{2}{9}\frac{f}{f_{\nu}}i - - \frac{4}{3}(1 - e^{-(1-i)(2l-R)/\delta})\right)} \tag{$\mathrm{kg/s}$} - -Where :math:`\delta = R \sqrt{\frac{f_{\nu}}{f}}` represents the aforementioned penetration depth, -:math:`R` corresponds to the bead radius and :math:`l` to the distance from the bead center to the nearest surface. - -While these models may look daunting, they are all available in Pylake and can be used by simply -providing a few additional arguments to the :class:`~.PassiveCalibrationModel`. It is recommended to -use these equations when less than 10% systematic error is desired :cite:`tolic2006calibration`. -No general statement can be made regarding the accuracy that can be achieved with the simple Lorentzian -model, nor the direction of the systematic error, as it depends on several physical parameters involved -in calibration :cite:`tolic2006calibration,berg2006power`. - -The figure below shows the difference between the hydrodynamically correct model (solid lines) and the -idealized Lorentzian model (dashed lines) for various bead sizes. It can be seen that for large bead -sizes and higher trap powers the differences can be substantial. - -.. image:: figures/hydro.png - :nbattach: - -.. note:: - - One thing to note is that when considering the surface in the calibration procedure, the drag - coefficient returned from the model corresponds to the drag coefficient extrapolated back to its - bulk value. - -Faxen's law -^^^^^^^^^^^ - -The hydrodynamically correct model presented in the previous section works well when the bead center -is at least 1.5 times the radius above the surface. When moving closer than this limit, we fall back -to a model that more accurately describes the change in drag at low frequencies, but neglects the -frequency dependent effects. 
- -To understand why, let's introduce Faxen's approximation for drag on a sphere near a surface under -creeping flow conditions. This model is used for lateral calibration very close to a surface -:cite:`schaffer2007surface` and is given by the following equation: - -.. math:: - - \gamma_\mathrm{faxen}(R/l) = \frac{\gamma_0}{ - 1 - \frac{9R}{16l} + \frac{1R^3}{8l^3} - \frac{45R^4}{256l^4} - \frac{1R^5}{16l^5} - } \tag{$\mathrm{kg/s}$} - -At frequency zero, the frequency dependent model used in the previous section reproduces this model -up to and including its second order term in :math:`R/l`. It is, however, a lower order model and the -accuracy decreases rapidly as the distance between the bead and surface become very small. -The figure below shows how the model predictions at frequency zero deviate strongly from the higher order model: - -.. image:: figures/freq_dependent_drag_zero.png - :nbattach: - -In addition, the deviation from a Lorentzian due to the frequency dependence of the drag is reduced -upon approaching a surface :cite:`schaffer2007surface`. - -.. image:: figures/freq_dependence_near.png - :nbattach: - -These two aspects make using Faxen's law in combination with a Lorentzian a more suitable model for -situations where we have to calibrate extremely close to the surface. - -Axial Calibration -^^^^^^^^^^^^^^^^^ - -For calibration in the axial direction, no hydrodynamically correct theory exists. - -Similarly as for the lateral component, we will fall back to a model that describes the change in -drag at low frequencies. However, while we had a simple expression for the lateral drag as a function -of distance, no simple closed-form equation exists for the axial dimension. Brenner et al provide an -exact infinite series solution :cite:`brenner1961slow`. Based on this solution :cite:`schaffer2007surface` -derived a simple equation which approximates the distance dependence of the axial drag coefficient. - -.. 
math:: - - \gamma_\mathrm{axial}(R/l) = \frac{\gamma_0}{ - 1.0 - - \frac{9R}{8l} - + \frac{1R^3}{2l^3} - - \frac{57R^4}{100l^4} - + \frac{1R^5}{5l^5} - + \frac{7R^{11}}{200l^{11}} - - \frac{1R^{12}}{25l^{12}} - } \tag{$\mathrm{kg/s}$} - -This model deviates less than 0.1% from Brenner's exact formula for :math:`l/R >= 1.1` and less than -0.3% over the entire range of :math:`l` :cite:`schaffer2007surface`. -Plotting these reveals that there is a larger effect of the surface in the axial than lateral direction. - -.. image:: figures/drag_coefficient.png - :nbattach: diff --git a/docs/theory/force_calibration/index.rst b/docs/theory/force_calibration/index.rst index d1a1c3c8b..f6e5c8a6f 100644 --- a/docs/theory/force_calibration/index.rst +++ b/docs/theory/force_calibration/index.rst @@ -8,7 +8,9 @@ Find out about force calibration :maxdepth: 1 force_calibration + diode fitting passive hyco + surface active diff --git a/docs/theory/force_calibration/surface.rst b/docs/theory/force_calibration/surface.rst new file mode 100644 index 000000000..fc33a438e --- /dev/null +++ b/docs/theory/force_calibration/surface.rst @@ -0,0 +1,117 @@ +Surface models +-------------- + +.. only:: html + + :nbexport:`Download this page as a Jupyter notebook ` + +.. _surface_models: + +The theory in the previous section holds for beads far away from any surface (in bulk). +When doing experiments near the surface, the surface starts to have an effect on the friction felt by the bead. +When the height and bead radius are known, the hydrodynamically correct model can take this into account. + +The hydrodynamically correct model works well when the bead center is at least 1.5 times the radius +above the surface. When moving closer than this limit, it is better to use a model that more +accurately describes the change in drag at low frequencies, but neglects the frequency dependent effects. 
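As a rough sketch (not Pylake's API), the low-frequency lateral drag correction from Faxen's approximation, given in the mathematical background below, can be evaluated to see how quickly the drag grows as the bead approaches the surface. The function name `faxen_drag_factor` is hypothetical:

```python
def faxen_drag_factor(l_over_r):
    """Lateral drag correction gamma / gamma_0 near a surface (Faxen's law).

    l_over_r is l / R: the distance from the bead center to the surface,
    divided by the bead radius. Valid under creeping-flow conditions.
    """
    x = 1.0 / l_over_r  # R / l
    return 1.0 / (1 - 9 * x / 16 + x**3 / 8 - 45 * x**4 / 256 - x**5 / 16)

# Drag relative to its bulk value for a bead approaching the surface
for l_over_r in (10.0, 2.0, 1.1):
    print(f"l/R = {l_over_r:>4}: gamma/gamma_0 = {faxen_drag_factor(l_over_r):.2f}")
```

Far from the surface the factor approaches one (bulk Stokes drag), while close to the surface the drag rises steeply, which is why the surface correction matters for near-surface calibration.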
+ +To see why this is, consider model predictions for the drag coefficient at frequency zero (constant flow). +At frequency zero, the Lorentzian and hydrodynamically correct model should predict similar behavior +(as the hydrodynamic effects should only be present at higher frequencies). Let's compare the +difference between the simple Lorentzian model and the hydrodynamically correct model near the surface: + +.. image:: figures/freq_dependent_drag_zero.png + :nbattach: + +In addition, the frequency dependent effects diminish as we approach the surface :cite:`schaffer2007surface`. +We can see this when we plot the spectrum for two different ratios of `l/R`. + +.. image:: figures/freq_dependence_near.png + :nbattach: + +These two aspects make the simpler drag model in combination with a Lorentzian a +more suitable choice for situations where we have to calibrate extremely close to the surface. + +Finally, note that the height dependence of the *axial* force differs from that of the +lateral force. For axial force, no hydrodynamically correct theory for the +power spectrum near the surface exists. + +.. image:: figures/drag_coefficient.png + :nbattach: + +Mathematical background +^^^^^^^^^^^^^^^^^^^^^^^ + +**Hydrodynamically correct theory** + +When approaching the surface, the drag experienced by the bead depends on the distance between the +bead and the surface of the flow cell. An approximate expression for the frequency dependent drag is +then given by :cite:`tolic2006calibration`: + +.. 
math:: + + \gamma(f, R/l) = \frac{\gamma_\mathrm{stokes}(f)}{1 - \frac{9}{16}\frac{R}{l} + \left(1 - \left((1 - i)/3\right)\sqrt{\frac{f}{f_{\nu}}} + \frac{2}{9}\frac{f}{f_{\nu}}i - + \frac{4}{3}(1 - e^{-(1-i)(2l-R)/\delta})\right)} \tag{$\mathrm{kg/s}$} + +Where :math:`\delta = R \sqrt{\frac{f_{\nu}}{f}}` represents the aforementioned penetration depth, +:math:`R` corresponds to the bead radius and :math:`l` to the distance from the bead center to the nearest surface. + +While these models may look daunting, they are all available in Pylake and can be used by simply +providing a few additional arguments to the :class:`~.PassiveCalibrationModel`. It is recommended to +use these equations when less than 10% systematic error is desired :cite:`tolic2006calibration`. +No general statement can be made regarding the accuracy that can be achieved with the simple Lorentzian +model, nor the direction of the systematic error, as it depends on several physical parameters involved +in calibration :cite:`tolic2006calibration,berg2006power`. + +.. note:: + + One thing to note is that when considering the surface in the calibration procedure, the drag + coefficient returned from the model corresponds to the drag coefficient extrapolated back to its + bulk value. + +**Lorentzian model (lateral)** + +The hydrodynamically correct model presented in the previous section works well when the bead center +is at least 1.5 times the radius above the surface. When moving closer than this limit, we fall back +to a model that more accurately describes the change in drag at low frequencies, but neglects the +frequency dependent effects. + +To understand why, let's introduce Faxen's approximation for drag on a sphere near a surface under +creeping flow conditions. This model is used for lateral calibration very close to a surface +:cite:`schaffer2007surface` and is given by the following equation: + +.. 
math:: + + \gamma_\mathrm{faxen}(R/l) = \frac{\gamma_0}{ + 1 - \frac{9R}{16l} + \frac{1R^3}{8l^3} - \frac{45R^4}{256l^4} - \frac{1R^5}{16l^5} + } \tag{$\mathrm{kg/s}$} + +What we see is that the frequency dependent model used in the previous section reproduces this model +up to and including its second order term in :math:`R/l`. It is, however, a lower order model and its +accuracy decreases rapidly as the distance between the bead and the surface becomes very small. + +**Lorentzian model (axial)** + +For calibration in the axial direction, no hydrodynamically correct theory exists. + +As for the lateral component, we fall back to a model that describes the change in +drag at low frequencies. However, while we had a simple expression for the lateral drag as a function +of distance, no simple closed-form equation exists for the axial dimension. Brenner et al. provide an +exact infinite series solution :cite:`brenner1961slow`. Based on this solution, :cite:`schaffer2007surface` +derived a simple equation which approximates the distance dependence of the axial drag coefficient. + +.. math:: + + \gamma_\mathrm{axial}(R/l) = \frac{\gamma_0}{ + 1.0 + - \frac{9R}{8l} + + \frac{1R^3}{2l^3} + - \frac{57R^4}{100l^4} + + \frac{1R^5}{5l^5} + + \frac{7R^{11}}{200l^{11}} + - \frac{1R^{12}}{25l^{12}} + } \tag{$\mathrm{kg/s}$} + +This model deviates less than 0.1% from Brenner's exact formula for :math:`l/R \geq 1.1` and less than +0.3% over the entire range of :math:`l` :cite:`schaffer2007surface`. diff --git a/docs/tutorial/force_calibration.rst b/docs/tutorial/force_calibration.rst index d14264e1d..92bd990c4 100644 --- a/docs/tutorial/force_calibration.rst +++ b/docs/tutorial/force_calibration.rst @@ -1,89 +1,3 @@ -Force Calibration -================= - -.. 
only:: html - - :nbexport:`Download this page as a Jupyter notebook ` - -Force calibration refers to computing the calibration factors needed to convert a raw voltage recorded by a position sensitive detector to actual force and displacement. -The values we wish to calculate are: - -- Trap stiffness :math:`\kappa`, which reflects how strongly a bead is held by a trap. -- Force response :math:`R_d`, the proportionality constant between voltage and force. -- Distance response :math:`R_f`, the proportionality constant between voltage and distance. - -This tutorial will focus on performing force calibration with Pylake. -Note that force calibration in Bluelake is actually performed by Pylake. -Therefore, the results obtained with Bluelake can be reproduced exactly when using the same data and parameters. - -To calibrate the optical traps we fit a power spectrum of the voltage measured by a position sensitive diode (PSD) to a theoretical model. The measured voltage (and thus the shape of the power spectrum) is mainly determined by - -1. The Brownian motion of the bead within the trap. -2. The instantaneous response of the PSD to the incident light. - -The contribution to the power spectrum by the bead in the optical trap is characterized by a diffusion constant and a corner frequency. -The second contribution refers to a filtering effect where the PSD becomes less sensitive to changes in signal at high frequencies. -This filtering effect is characterized by a constant that reflects the fraction of light that is transmitted instantaneously and a corner frequency (this is referred to as the diode frequency to be able to differentiate). - -.. image:: figures/force_calibration/diode_filtering.png - :nbattach: - -.. warning:: - - For high corner frequencies, the parameter estimation procedure can become unreliable. - A warning sign for this is when the corner frequency `f_c` approaches or goes beyond the diode frequency `f_diode`. 
- For more information see the section on `High corner frequencies`_. - -The various theoretical models that can be used to fit these data are described in the :doc:`theory section on force calibration`, while their use is described below. - -We can download the data needed for this tutorial directly from Zenodo using Pylake. -Since we don't want it in our working folder, we'll put it in a folder called `"test_data"`:: - - filenames = lk.download_from_doi("10.5281/zenodo.7729823", "test_data") - -Undoing the previous calibration --------------------------------- - -Our starting point will be data acquired with Bluelake, which is already calibrated. -Therefore, the first step is to undo the calibration applied by Bluelake to get the raw voltage signals. -Let's load the dataset:: - - f = lk.File("test_data/passive_calibration.h5") - -The force timeline in this file contains a single calibration measurement. -We can see calibrations relevant to this file by inspecting the `calibration` attribute for the entire force :class:`~lumicks.pylake.channel.Slice`. -This is a list of dictionaries containing all of the relevant calibration parameters:: - - print(len(f.force1x.calibration)) - print(f.force1x.calibration[1]) - -The first calibration item in the dictionary always refers to the calibration that was active at the start of the slice (usually from a calibration that occurred before the start of the slice). -The second calibration refers to the calibration measurement performed in this slice. -This is illustrated in the figure below: - -.. image:: figures/force_calibration/calibrations.png - :nbattach: - -Our goal here is to reproduce the calibration factors calculated from the calibration measurement corresponding to `calibrations[1]`. 
-First we grab the chunk of data that was used for that calibration in Bluelake:: - - start = f.force1x.calibration[1]["Start time (ns)"] - stop = f.force1x.calibration[1]["Stop time (ns)"] - force_slice = f.force1x[start:stop] - -Now we can see that this new :class:`~lumicks.pylake.channel.Slice` only contains a single calibration:: - - print(len(force_slice.calibration)) - -Next we'll decalibrate the force data back to the original voltage measured by the PSD. Let's write a short function for this so we can reuse it elsewhere:: - - def decalibrate(force_slice): - offset = force_slice.calibration[0]["Offset (pN)"] - response = force_slice.calibration[0]["Rf (pN/V)"] - return (force_slice - offset) / response - - volts = decalibrate(force_slice) - Performing the calibration -------------------------- @@ -102,123 +16,6 @@ This method returns a :class:`~lumicks.pylake.force_calibration.power_spectrum_c The rest of this tutorial illustrates the various steps involved when performing such a calibration. -Obtaining the power spectrum ----------------------------- - -To use the more manual lower-level API, we first need the power spectrum to fit. To compute a power spectrum from our data we can invoke :func:`~lumicks.pylake.calculate_power_spectrum()`:: - - power_spectrum = lk.calculate_power_spectrum(volts.data, sample_rate=volts.sample_rate) - -This function returns a :class:`~lumicks.pylake.force_calibration.power_spectrum.PowerSpectrum` which we can plot:: - - plt.figure() - power_spectrum.plot() - plt.show() - -.. image:: figures/force_calibration/power_spectrum.png - -The power spectrum is smoothed by downsampling adjacent power spectral values (known as blocking). -Downsampling the spectrum is required to fulfill some of the assumptions in the fitting procedure, but it comes at the cost of spectral resolution. -One must be careful that the shape of the power spectrum is still sufficiently preserved. 
-If the corner frequency is very low then downsampling too much can lead to biases in the calibration parameters. -In such cases, it is better to either measure a longer interval to increase the spectral resolution or reduce the number of points (`num_points_per_block`) used for blocking. - -The range over which to compute the spectrum can be controlled using the `fit_range` argument. -One can also exclude specific frequency ranges from the spectrum (`excluded_ranges`) which can be useful if there are noise peaks in the spectrum. -Let's see which ranges were excluded in our Bluelake calibration:: - - force_slice.calibration[0] - -.. image:: figures/force_calibration/bl_dictionary.png - -Here, they are listed as `Exclusion range 0 (min.) (Hz)`, `Exclusion range 0 (max.) (Hz)` etc. -To reproduce the result obtained with Bluelake, these should be excluded from the power spectrum:: - - power_spectrum = lk.calculate_power_spectrum( - volts.data, - sample_rate=volts.sample_rate, - fit_range=(1e2, 23e3), - num_points_per_block=2000, - excluded_ranges=([19348, 19668], [24308, 24548]) - ) - - plt.figure() - power_spectrum.plot(marker=".") - plt.show() - -.. image:: figures/force_calibration/power_spectrum_excluded_ranges.png - -Note that exclusion ranges are excluded *prior* to downsampling. -Considering that a noise peak may be very narrow, it is beneficial to lower the number of points per block temporarily to find the exact exclusion range. -After determination of this exclusion range, the number of points per block can be increased again. However, also see `Robust fitting`_ for an automated peak identification routine. - -Passive calibration -------------------- - -In literature, passive calibration is often referred to as thermal calibration. -It involves fitting a physical model to the power spectrum obtained in the previous step. -This physical model relies on a number of parameters that have to be specified in order to get the correct calibration factors. 
- -The most important parameters are the bead diameter (in microns) and viscosity. -Let's use the bead diameter found in the calibration performed in Bluelake. - -Note that the viscosity of water strongly depends on :ref:`temperature`. -To find the viscosity of water at a particular temperature, Pylake uses :func:`~lumicks.pylake.viscosity_of_water` which implements the model presented in :cite:`huber2009new`. -When omitted, this function will automatically be used to look up the viscosity of water for that particular temperature - -.. note:: - - Note that for experiments that use a different medium than water, the viscosity at the experimental temperature should explicitly be provided. - -The next step is setting up the calibration model:: - - bead_diameter = f.force1x.calibration[1]["Bead diameter (um)"] - force_model = lk.PassiveCalibrationModel(bead_diameter, temperature=25) - -To fit this model to the data use :func:`~lumicks.pylake.fit_power_spectrum()`:: - - calibration = lk.fit_power_spectrum(power_spectrum, force_model) - calibration - -.. image:: figures/force_calibration/calibration_item.png - -This will produce a table with your fitted calibration parameters. -These parameters can be accessed as follows:: - - >>> print(calibration["kappa"].value) - >>> print(f.force1x.calibration[1]["kappa (pN/nm)"]) - 0.12872206850762546 - 0.1287225353482303 - -.. note:: - - Note that by default, a bias correction is applied to the fitted results :cite:`norrelykke2010power`. - This bias correction is applied to the diffusion constant and amounts to a correction of :math:`\frac{N}{N+1}`, where :math:`N` refers to the number of points used for a particular spectral data point. - It can optionally be disabled by passing `bias_correction=False` to :func:`~lumicks.pylake.fit_power_spectrum` or :func:`~lumicks.pylake.calibrate_force`. - -We can plot the calibration by calling:: - - plt.figure() - calibration.plot() - plt.show() - -.. 
image:: figures/force_calibration/fitted_spectrum.png - -Hydrodynamically correct model ------------------------------- - -While the simple theory can suffice for small beads, it is usually a good idea to use the more realistic hydrodynamically correct model. -This model takes into account hydrodynamic and inertial effects (which scale with the size of the bead) leading to more accurate estimates. -As such, it requires a few extra parameters: the density of the sample and bead:: - - force_model = lk.PassiveCalibrationModel( - bead_diameter, - hydrodynamically_correct=True, - rho_sample=999, - rho_bead=1060.0 - ) - -Note that when `rho_sample` and `rho_bead` are omitted, values for water and polystyrene are used for the sample and bead density respectively. Calibration near the surface ---------------------------- @@ -505,157 +302,3 @@ If we compare the different fits, we can see that the simple model with filterin While this is true for this particular dataset, no general statement can be made about the bias caused by fitting the simple model rather than the hydrodynamically correct power spectrum. If low bias is desired, one should always use the hydrodynamically correct model when possible. On regular sensors, it is best to fit the hydrodynamically correct model with the filtering effect enabled. - -High corner frequencies ------------------------ - -In specific situations, the filtering effect of the position sensitive detector can cause issues when calibrating. -The power spectrum of the bead in the optical trap is characterized by a diffusion constant and a corner frequency. -The filtering effect is characterized by a constant that reflects the fraction of light that is transmitted instantaneously and a corner frequency (referred to as the diode frequency to be able to differentiate). 
- -The corner frequency of the physical spectrum can be found in the results as `f_c` and depends on the laser power and bead size (smaller beads resulting in higher corner frequencies) . -The corner frequency of the filtering effect can be found in the results as `f_diode` (which stands for diode frequency) and depends on the incident intensity :cite:`berg2003unintended`. -When these two frequencies get close, they cannot be determined reliably anymore. -The reason for this is that the effect of one can be compensated by the other. -When working with small beads or at high laser powers, it is important to verify that the corner frequency `f_c` does not approach the frequency of the filtering effect `f_diode`. - -Sometimes, the filtering effect has been characterized independently. -In that case, the arguments `fixed_diode` and `fixed_alpha` can be passed to :func:`~lumicks.pylake.calibrate_force()` to fix these parameters to their predetermined values. - -.. _robust_fitting: - -Robust fitting --------------- - -So far, we have been using least-squares fitting routines for force calibration. In that case, we assume that the error in the power at each frequency is distributed according to a Gaussian distribution. -Blocking or windowing the power spectrum ensures that this assumption is close enough to the truth such that the fit provides accurate estimates of the unknown parameters. -Occasionally, the power spectrum might show a spurious noise peak. -Such a peak is an outlier in the expected behavior of the spectrum and therefore interferes with the assumption of having a Gaussian error distribution. -As a result, the fit is skewed. In those cases, it can be beneficial to do a robust fit. When a robust fit is performed, one assumes that the probability of encountering one or multiple outliers is non-negligible. -By taking this into account during fitting, the fit can be made more robust to outliers in the data. The following example illustrates the method. 
- -To see this effect, let's load a dataset of uncalibrated force sensor data of a 4.4 μm bead showing Brownian motion while being trapped. In particular, look at the `Force 2y` sensor signal:: - - f = lk.File("test_data/robust_fit_data.h5") - f2y = f.force2y - -First create a power spectrum without blocking or windowing for later use. Then derive a power spectrum with blocking from the first power spectrum:: - - ps = lk.calculate_power_spectrum(f2y.data, sample_rate=f2y.sample_rate, num_points_per_block=1, fit_range=(10, 23e3)) - ps_blocked = ps.downsampled_by(200) - -First use a passive calibration model using the hydrodynamically correct model to perform a least-squares fit and plot the result:: - - model = lk.PassiveCalibrationModel(4.4, temperature=25.0, hydrodynamically_correct=True) - fit = lk.fit_power_spectrum(ps_blocked, model) - - plt.figure() - fit.plot() - plt.title( - f"Skewed fit: $f_c$ = {fit.results['fc'].value:.1f}, " - f"$D$ = {fit.results['D'].value:.4f}, " - f"$f_d$ = {fit.results['f_diode'].value:.1f}" - ) - plt.show() - -.. image:: figures/force_calibration/power_spectrum_noise_peak.png - -Notice how the tail of the model is skewed towards the peak, in order to reduce the least-squares error. In this case, the free parameters to fit the diode filter contribution are 'abused' to reduce the error between the model and the outlier. -This results in biased parameter estimates. - -Now do a robust fit. We do this by specifying a loss function in the function :func:`~lumicks.pylake.fit_power_spectrum()`. -For least-squares fitting, the loss function is `'gaussian'`, which is the default if nothing is specified. However, if we specify `'lorentzian'`, a robust fitting routine will be used instead. 
-Because `bias_correction` and robust fitting are mutually exclusive, we need to explicitly turn it off:: - - fit = lk.fit_power_spectrum(ps_blocked, model, bias_correction=False, loss_function="lorentzian") - -Now plot the robust fit:: - - plt.figure() - fit.plot() - plt.title( - f"Robust fit: $f_c$ = {fit.results['fc'].value:.1f}, " - f"$D$ = {fit.results['D'].value:.4f}, " - f"$f_d$ = {fit.results['f_diode'].value:.1f}" - ) - plt.show() - -.. image:: figures/force_calibration/power_spectrum_noise_peak_robust.png - -Notice how the model now follows the power spectrum nearly perfectly. The value for `f_diode` has increased significantly, now that it is not abused to reduce the error induced by the outlier. - -This example shows that a robust fitting method is less likely to fail on outliers in the power spectrum data. It is therefore a fair question why one would not use it all the time? -Robust fitting leads to a small bias in the fit results for which Pylake has no correction. -Least-squares fitting also leads to a bias, but this bias is known (:cite:`norrelykke2010power`) and can be corrected with `bias_correction=True`. -Secondly, for least-squares fitting, methods exist to estimate the expected standard errors in the estimates of the free parameters, which are implemented in the least-squares fitting routines that Pylake uses :cite:`press1990numerical`. -These error estimates are not implemented for robust fitting, and as such, the fit results will show `nan` for the error estimates after a robust fit. -However, as will be shown below, the robust fitting results may be used as a start to identify outliers automatically, in order to exclude these from a second, regular least-squares, fit. - -Automated spurious peak detection ---------------------------------- - -We will continue the tutorial with the results of the previous section. If you did not yet do that part of the tutorial, please go back and execute the code examples in that section. 
- -We still have the power spectrum `ps` that was created without blocking or windowing. Here we will use it to identify the peak and automatically obtain frequency exclusion ranges. -We will use the method :meth:`~lumicks.pylake.force_calibration.power_spectrum.PowerSpectrum.identify_peaks()` in order to do so. -This method takes a function that accurately models the power spectrum as a function of frequency, in order to normalize it. -It then identifies peaks based on the likelihood of encountering a peak of a certain magnitude in the resulting data set. -If we have a "good fit", then the easiest way to get that function is to use our fitted model:: - - plt.figure() - frequency_range = np.arange(100, 22000) - # We can call the fit with a list of frequencies to evaluate the model at those frequencies. - # This uses the best fit parameters from fit.fitted_params. - plt.plot(frequency_range, fit(frequency_range)) - plt.xscale("log") - plt.yscale("log") - -If there are no spurious peaks, then normalizing the unblocked power spectrum results in random numbers with an exponential distribution with a mean value of 1. -The chance of encountering increasingly larger numbers decays exponentially, and this fact is used by `identify_peaks()`:: - - frequency_exclusions = ps.identify_peaks(fit, peak_cutoff=20, baseline=1) - -The parameter `peak_cutoff` is taken as the minimum magnitude of any value in the normalized power spectrum in order to be concidered a peak. -The default value is 20, and it corresponds to a chance of about 2 in a billion of a peak of magnitude 20 or larger occuring naturally in a data set. -If a peak is found with this or a higher magnitude, the algorithm then expands the range to the left and right until the first time the power spectrum drops below the value `baseline`. -The frequencies at which this occurs end up as the lower and higher frequency of a frequency exclusion range. 
As such, the value of `baseline` controls the width of the frequency exclusion range. -We can visualize the excluded peaks as follows:: - - fig, ax = plt.subplots(1, 2, sharey=True) - for axis, title in zip(ax, ('Full spectrum', 'Zoom')): - axis.loglog(ps.frequency, ps.power, label="Power spectrum") - for idx, item in enumerate(frequency_exclusions, 1): - to_plot = np.logical_and(item[0] <= ps.frequency, ps.frequency < item[1]) - axis.plot(ps.frequency[to_plot], ps.power[to_plot], 'r', label=f'peak {idx}') - axis.legend() - axis.set_title(title) - axis.set_xlabel('Frequency [Hz]') - ax[1].set_xlim(frequency_exclusions[0][0] - 1.0, frequency_exclusions[-1][1] + 1.0) - ax[1].set_xscale('linear') - ax[0].set_ylabel('Power [V$^2$/Hz]') - plt.suptitle('Identified peaks') - plt.show() - -.. image:: figures/force_calibration/identify_peaks.png - -Finally, we can do a least-squares fit, but in this case we will filter out the frequency ranges that contain peaks. -Because we use a least-squares method, we get error estimates on the fit parameters, and bias in the fit result can be corrected. -The default values of `loss_function='gaussian'` and `bias_correction=True` ensure least-squares fitting and bias correction, so we do not need to specify them:: - - ps_no_peak = lk.calculate_power_spectrum( - f2y.data, sample_rate=f2y.sample_rate, num_points_per_block=200, fit_range=(10, 23e3), excluded_ranges=frequency_exclusions, - ) - fit_no_peak = lk.fit_power_spectrum(ps_no_peak, model) - - plt.figure() - fit_no_peak.plot() - plt.title( - f"Least squares (ex. peaks): $f_c$ = {fit_no_peak.results['fc'].value:.1f}, " - f"$D$ = {fit_no_peak.results['D'].value:.4f}, " - f"$f_d$ = {fit_no_peak.results['f_diode'].value:.1f}" - ) - plt.show() - -.. image:: figures/force_calibration/power_spectrum_no_noise_peak.png - -Notice that no skewing occurs, and that the values of `fc`, `D` and `f_diode` are now closer to values found via robust fitting in the section above. 
diff --git a/docs/tutorial/force_calibration/calibration_items.rst b/docs/tutorial/force_calibration/calibration_items.rst new file mode 100644 index 000000000..ebcab7fa4 --- /dev/null +++ b/docs/tutorial/force_calibration/calibration_items.rst @@ -0,0 +1,125 @@ +Calibration items +----------------- + +.. only:: html + + :nbexport:`Download this page as a Jupyter notebook ` + +This tutorial focuses on performing force calibration. +It is deliberately light on theory, concentrating instead on the practical use of the calibration procedures. +For more background on the theory, please refer to the :doc:`theory section on force calibration`. + +We can download the data needed for this tutorial directly from Zenodo using Pylake:: + + filenames = lk.download_from_doi("10.5281/zenodo.7729823", "test_data") + +When force calibration is applied in Bluelake, it uses Pylake to calibrate the force, after which a force calibration item is added to the timeline. +To see what such items look like, let's load the dataset:: + + f = lk.File("test_data/passive_calibration.h5") + +The force timeline in this file contains a single calibration measurement. +Note that every force axis (e.g. `1x`, `1y`, `2x` etc.) has its own calibration. +We can see the calibrations relevant for a single force channel (in this case `1x`) by inspecting the `calibration` attribute of the force :class:`~lumicks.pylake.channel.Slice`:: + + f.force1x.calibration + +This produces a list with the following calibration items: + +.. image:: figures/listing.png + +This list provides a quick overview of which calibration items are present in the file and when they were applied. +More importantly, however, it tells us whether the raw data for the calibration is present in the :class:`~lumicks.pylake.channel.Slice`. +When Bluelake creates a calibration item, it only contains the results of a calibration as well as information on when the data was acquired, but *not* the raw data. 
+We can see this if we plot the time ranges over which these calibration items were acquired and then applied:: + + plt.figure(figsize=(8, 2)) + f.force1x.plot(color='k') + f.force1x.highlight_time_range(f.force1x.calibration[0], color="C0", annotation="0") + f.force1x.highlight_time_range(f.force1x.calibration[0].applied_at, color="C0") + f.force1x.highlight_time_range(f.force1x.calibration[1], color="C1", annotation="1") + f.force1x.highlight_time_range(f.force1x.calibration[1].applied_at, color="C1") + +.. image:: figures/time_ranges.png + +This helps us see how the calibration items relate to the data present in the file. +Calibration item `0` is the calibration that was acquired *before* the start of this file (and is therefore the calibration that is active when the file starts). +Calibration item `1` is the one acquired in *this* file. + +We can see what is in a calibration item by extracting it:: + + f.force1x.calibration[1] + +.. image:: figures/calibration_item.png + +This shows us all the relevant calibration parameters. +These parameters are properties and can be extracted as such:: + + >>> calibration_item = f.force1x.calibration[1] + ... calibration_item.stiffness + 0.1287225353482303 + +Redoing a Bluelake calibration +------------------------------ + +We can directly slice the channel by the calibration item we want to reproduce to extract the relevant data:: + + force1x_slice = f.force1x[f.force1x.calibration[1]] + +To recalibrate the data, we first have to de-calibrate it to get back to voltages. +To do this, we divide our data by the force sensitivity that was active at the start of the slice:: + + >>> old_calibration = force1x_slice.calibration[0] + ... 
volts1x_slice = force1x_slice / old_calibration.force_sensitivity + +To reproduce a calibration performed with Bluelake, the easiest way to extract all the relevant parameters is to use :meth:`~lumicks.pylake.force_calibration.calibration_item.ForceCalibrationItem.calibration_params()`:: + + >>> calibration_params = calibration_item.calibration_params() + ... calibration_params + {'num_points_per_block': 2000, + 'sample_rate': 78125, + 'excluded_ranges': [(19348.0, 19668.0), (24308.0, 24548.0)], + 'fit_range': (100.0, 23000.0), + 'bead_diameter': 4.89, + 'viscosity': 0.00089, + 'temperature': 25.0, + 'fast_sensor': False, + 'axial': False, + 'hydrodynamically_correct': False, + 'active_calibration': False} + +This returns a dictionary with exactly those parameters needed to reproduce this calibration, in a format that Pylake will accept. +Depending on the type of calibration that was performed, the number of parameters may vary. +To quickly perform the same calibration as was performed in Bluelake, we can use the function :func:`~lumicks.pylake.calibrate_force()`. +The easiest way to use this function is to unpack the dictionary of parameters into it using the `**` notation:: + + >>> recalibrated = lk.calibrate_force(volts1x_slice.data, **calibration_params) + +We can plot this calibration:: + + recalibrated.plot() + +.. image:: figures/passive_calibration.png + +And the fitting residual: + +.. image:: figures/residual.png + +We can see that this reproduces the calibration:: + + >>> recalibrated.stiffness + 0.12872253516809967 + + >>> f.force1x.calibration[1].stiffness + 0.1287225353482303 + +In this particular case, it looks like we calibrated with the `hydrodynamically_correct` model disabled. +Given that we used big beads, we should probably have enabled it instead. +We can still retroactively change this:: + + >>> calibration_params["hydrodynamically_correct"] = True + ... recalibrated_hyco = lk.calibrate_force(volts1x_slice.data, **calibration_params) + ... 
recalibrated_hyco.stiffness + 0.15453110071085924 + +As expected, the difference in this case is substantial. diff --git a/docs/tutorial/figures/force_calibration/bl_dictionary.png b/docs/tutorial/force_calibration/figures/bl_dictionary.png similarity index 100% rename from docs/tutorial/figures/force_calibration/bl_dictionary.png rename to docs/tutorial/force_calibration/figures/bl_dictionary.png diff --git a/docs/tutorial/figures/force_calibration/calibration_item.png b/docs/tutorial/force_calibration/figures/calibration_item.png similarity index 100% rename from docs/tutorial/figures/force_calibration/calibration_item.png rename to docs/tutorial/force_calibration/figures/calibration_item.png diff --git a/docs/tutorial/figures/force_calibration/calibration_item_active.png b/docs/tutorial/force_calibration/figures/calibration_item_active.png similarity index 100% rename from docs/tutorial/figures/force_calibration/calibration_item_active.png rename to docs/tutorial/force_calibration/figures/calibration_item_active.png diff --git a/docs/tutorial/force_calibration/figures/fitted_spectrum.png b/docs/tutorial/force_calibration/figures/fitted_spectrum.png new file mode 100644 index 000000000..d38ef1523 --- /dev/null +++ b/docs/tutorial/force_calibration/figures/fitted_spectrum.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e8c49dfba636eaa5040d3f27105763d8179500994eeb03fbc83c31ebcb99193 +size 149661 diff --git a/docs/tutorial/figures/force_calibration/identify_peaks.png b/docs/tutorial/force_calibration/figures/identify_peaks.png similarity index 100% rename from docs/tutorial/figures/force_calibration/identify_peaks.png rename to docs/tutorial/force_calibration/figures/identify_peaks.png diff --git a/docs/tutorial/force_calibration/figures/listing.png b/docs/tutorial/force_calibration/figures/listing.png new file mode 100644 index 000000000..52d19f728 --- /dev/null +++ b/docs/tutorial/force_calibration/figures/listing.png @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:53c360d24db91039ad31412f9680f4d95628177611a47daef7dc88ecf5c4b688 +size 49210 diff --git a/docs/tutorial/force_calibration/figures/passive_calibration.png b/docs/tutorial/force_calibration/figures/passive_calibration.png new file mode 100644 index 000000000..2107a482e --- /dev/null +++ b/docs/tutorial/force_calibration/figures/passive_calibration.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7e440d35ab5090268b7ab768eda3c9c40fadeedba6dbb0435d3a077fa804afc6 +size 28735 diff --git a/docs/tutorial/figures/force_calibration/power_spectrum.png b/docs/tutorial/force_calibration/figures/power_spectrum.png similarity index 100% rename from docs/tutorial/figures/force_calibration/power_spectrum.png rename to docs/tutorial/force_calibration/figures/power_spectrum.png diff --git a/docs/tutorial/figures/force_calibration/power_spectrum_excluded_ranges.png b/docs/tutorial/force_calibration/figures/power_spectrum_excluded_ranges.png similarity index 100% rename from docs/tutorial/figures/force_calibration/power_spectrum_excluded_ranges.png rename to docs/tutorial/force_calibration/figures/power_spectrum_excluded_ranges.png diff --git a/docs/tutorial/figures/force_calibration/power_spectrum_no_noise_peak.png b/docs/tutorial/force_calibration/figures/power_spectrum_no_noise_peak.png similarity index 100% rename from docs/tutorial/figures/force_calibration/power_spectrum_no_noise_peak.png rename to docs/tutorial/force_calibration/figures/power_spectrum_no_noise_peak.png diff --git a/docs/tutorial/figures/force_calibration/power_spectrum_noise_peak.png b/docs/tutorial/force_calibration/figures/power_spectrum_noise_peak.png similarity index 100% rename from docs/tutorial/figures/force_calibration/power_spectrum_noise_peak.png rename to docs/tutorial/force_calibration/figures/power_spectrum_noise_peak.png diff --git a/docs/tutorial/figures/force_calibration/power_spectrum_noise_peak_robust.png 
b/docs/tutorial/force_calibration/figures/power_spectrum_noise_peak_robust.png similarity index 100% rename from docs/tutorial/figures/force_calibration/power_spectrum_noise_peak_robust.png rename to docs/tutorial/force_calibration/figures/power_spectrum_noise_peak_robust.png diff --git a/docs/tutorial/force_calibration/figures/residual.png b/docs/tutorial/force_calibration/figures/residual.png new file mode 100644 index 000000000..cab57a5f2 --- /dev/null +++ b/docs/tutorial/force_calibration/figures/residual.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1ef1c29aba130dee755cc2db7af81ac08b5d47ade760e5826f3857ce17e0a5bf +size 19901 diff --git a/docs/tutorial/force_calibration/figures/temperature_dependence.png b/docs/tutorial/force_calibration/figures/temperature_dependence.png new file mode 100644 index 000000000..ebb9c54b9 --- /dev/null +++ b/docs/tutorial/force_calibration/figures/temperature_dependence.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:16c919dbd78567d8129cb22a9d6b98915b36888f5f20fbe0240b8f781a03bcf1 +size 252340 diff --git a/docs/tutorial/force_calibration/figures/time_ranges.png b/docs/tutorial/force_calibration/figures/time_ranges.png new file mode 100644 index 000000000..00eef67b7 --- /dev/null +++ b/docs/tutorial/force_calibration/figures/time_ranges.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:dea26afb3bb7cf15b74f7787cfcabac61775dc86fb0810b1bb9ec64c35fe9b26 +size 13043 diff --git a/docs/tutorial/force_calibration/force_calibration.rst b/docs/tutorial/force_calibration/force_calibration.rst new file mode 100644 index 000000000..f023e602a --- /dev/null +++ b/docs/tutorial/force_calibration/force_calibration.rst @@ -0,0 +1,168 @@ +Getting good calibrations +========================= + +The number of options presented when calling :func:`lk.calibrate_force()` may be daunting at +first, but hopefully, this chapter will provide some guidelines on how to obtain a 
precise and accurate force calibration using Pylake.
+
+Force calibration involves two sets of decisions.
+The first is deciding which calibration to use.
+We consider these `Experimental considerations` as they pertain to the experiment.
+The second is deciding whether the fits look good and there are no warning signs in the obtained estimates.
+The latter is referred to as `Technical considerations`.
+
+Experimental considerations
+***************************
+
+Core parameters
+"""""""""""""""
+
+Calibration involves fitting a model to data.
+This model relies on a number of parameters that have to be specified in order to get the correct
+calibration factors.
+The most important parameters are:
+
+- The bead diameter (in microns).
+- The viscosity of the medium, which strongly depends on :ref:`temperature`, which must therefore also be provided. The viscosity of water at a particular temperature can be found using :func:`~lumicks.pylake.viscosity_of_water` :cite:`huber2009new`.
+
+.. image:: figures/temperature_dependence.png
+    :nbattach:
+
+Hydrodynamically correct model
+""""""""""""""""""""""""""""""
+
+For lateral calibration, it is recommended to use the
+:ref:`hydrodynamically correct theory` (by setting `hydrodynamically_correct` to
+`True`), as it provides an improved model of the underlying physics.
+For small beads (< 1 micron) the differences will be small, but for larger beads substantial
+differences can occur. The only exception to this recommendation is when the beads
+are so close to the flowcell surface (closer than 0.75 x diameter) that this model becomes invalid.
+
+Using the hydrodynamically correct theory requires a few extra parameters: the density of the
+sample `rho_sample` and bead `rho_bead`. When `rho_sample` and `rho_bead` are not provided,
+Pylake uses values for water and polystyrene for the sample and bead density respectively.
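To get a feeling for how strongly viscosity depends on temperature, the curve above can be approximated with a simple Vogel-type empirical fit. This is an illustration only, using generic literature constants; in practice, use :func:`~lumicks.pylake.viscosity_of_water`, which implements the more accurate model of :cite:`huber2009new`:

```python
import math

def water_viscosity_pa_s(temperature_celsius):
    """Approximate viscosity of water [Pa*s] from a Vogel-type empirical fit.

    Illustrative only; generic literature constants, not the Huber et al. (2009)
    model that Pylake's viscosity_of_water() implements."""
    a, b, c = 2.939e-5, 507.88, 149.3  # [Pa*s], [K], [K]
    return a * math.exp(b / (temperature_celsius + 273.15 - c))

# A few degrees of laser heating already shifts the viscosity by several percent
# (roughly 1.00, 0.89 and 0.80 mPa*s for the temperatures below):
for t in (20, 25, 30):
    print(f"{t} degC: {1e3 * water_viscosity_pa_s(t):.3f} mPa*s")
```

Note how a 5 degree mis-estimate of the temperature changes the viscosity, and hence the drag coefficient entering the calibration, by roughly ten percent.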
+
+Experiments near the surface
+""""""""""""""""""""""""""""
+
+When doing experiments near the surface, it is recommended to provide a `distance_to_surface`.
+This distance should be the distance from the center of the bead to the surface of the flowcell.
+Since it can be challenging to determine this distance, it is recommended to use active calibration
+when calibrating near the surface, as this makes calibration far less sensitive to mis-specification
+of the bead diameter and height.
+
+A few other situations require additional arguments:
+
+- For axial calibration the flag `axial=True` should be set.
+- In the case of a pre-characterized diode, the values for its parameters can be passed to
+  `fixed_alpha` and `fixed_diode`.
+- In the case of active calibration, it is mandatory to provide a nanostage signal, as well as a
+  guess of the driving frequency.
+
+Technical considerations
+************************
+
+Sensor
+""""""
+
+In addition to the model that describes the bead's motion, it is important to take into account the
+:ref:`characteristics of the sensor` used to measure the data.
+
+A silicon diode sensor is characterized by two parameters: a "relaxation factor" `alpha` and a
+frequency `f_diode`.
+These parameters can either be estimated along with the other parameters or characterized independently.
+
+When estimated, care must be taken that the corner frequency of the power spectrum `fc` is
+:ref:`lower than the estimated diode frequency`.
+You can check whether a calibration item had to estimate these parameters from the calibration
+data by checking the property `recalibrated.fitted_diode`.
+When this property is `True`, it means that the diode parameters were not fixed during the fit.
+
+.. warning::
+
+    For high corner frequencies, calibration can become unreliable when the diode parameters are fitted.
+    A warning sign for this is when the corner frequency `fc` approaches or goes beyond the diode frequency `f_diode`.
+    For more information see the section on :ref:`High corner frequencies`.
+
+Fit range
+"""""""""
+
+The fit range determines which section of the power spectrum is actually fit. Two things are important when choosing a fitting range:
+
+- The corner frequency should be clearly visible in the fitted range (it is the frequency where the spectrum transitions from a plateau into a steeper decay).
+- When working at low laser powers, it is possible that the noise floor becomes visible at higher frequencies. This noise floor should be excluded.
+
+Exclusion ranges
+""""""""""""""""
+
+Frequency ranges that contain noise peaks can be excluded from the fit by passing them as `excluded_ranges`, specified as a list of `(frequency_min, frequency_max)` pairs.
+
+Blocking
+""""""""
+
+Prior to fitting, the power spectrum is block averaged using `num_points_per_block` points per block. Blocking reduces the noise on the spectrum, but too much blocking distorts the spectrum near the corner frequency and biases the calibration parameters.
+
+.. list-table:: Calibration Parameters
+   :widths: 25 75
+   :header-rows: 1
+
+   * - parameter
+     - recommendation
+   * - bead_diameter
+     - Bead diameter in microns
+   * - temperature
+     - Temperature in Celsius. Note that the temperature may be higher than ambient due to laser heating.
+   * - viscosity
+     - Can be omitted if the medium is water. When provided, remember that this strongly depends on temperature.
+   * - hydrodynamically_correct
+     - Set to `True` when further away from the surface than 0.75 x diameter.
+   * - axial
+     - Set to `True` if this is an axial calibration.
+   * - fast_sensor
+     - Disables the diode model. Set to `True` when using a fast detector, such as the one used in axial calibration.
+   * - fit_range
+     - Set such that the noise floor is excluded from the fitting range. Ensure the corner frequency is clearly visible in this range.
+
+For reference, the remaining parameters of :func:`~lumicks.pylake.calibrate_force()` are listed below:
+
+active_calibration : bool, optional
+    Active calibration, when set to True, driving_data must also be provided.
+driving_data : numpy.ndarray, optional
+    Array of driving data.
+driving_frequency_guess : float, optional
+    Guess of the driving frequency. Required for active calibration.
+axial : bool, optional
+    Is this an axial calibration? Only valid for a passive calibration.
+hydrodynamically_correct : bool, optional
+    Enable hydrodynamically correct model.
+rho_sample : float, optional
+    Density of the sample [kg/m**3].
Only used with the hydrodynamically correct model.
+rho_bead : float, optional
+    Density of the bead [kg/m**3]. Only used with the hydrodynamically correct model.
+distance_to_surface : float, optional
+    Distance from bead center to the surface [um].
+    When specifying `None`, the model will use an approximation which is only suitable for
+    measurements performed deep in bulk.
+fast_sensor : bool, optional
+    Set to `True` for fast sensors; these do not have the diode effect included in the model.
+fit_range : tuple of float, optional
+    Tuple of two floats (f_min, f_max), indicating the frequency range to use for the full model
+    fit. [Hz]
+num_points_per_block : int, optional
+    The spectrum is first block averaged by this number of points per block.
+    Default: 2000.
+excluded_ranges : list of tuple of float, optional
+    List of ranges to exclude specified as a list of (frequency_min, frequency_max).
+drag : float, optional
+    Overrides the drag coefficient to this particular value. Note that you want to use the
+    bulk drag coefficient for this (obtained from the field `gamma_ex`). This can be used to
+    carry over an estimate of the drag coefficient obtained using an active calibration
+    procedure.
+fixed_diode : float, optional
+    Fix the diode frequency to a particular value [Hz].
+fixed_alpha : float, optional
+    Fix the diode relaxation factor to a particular value.
diff --git a/docs/tutorial/force_calibration/index.rst b/docs/tutorial/force_calibration/index.rst
new file mode 100644
index 000000000..2da9eb886
--- /dev/null
+++ b/docs/tutorial/force_calibration/index.rst
@@ -0,0 +1,13 @@
+Force Calibration
+=================
+
+Find out about force calibration.
+
+..
toctree::
+    :caption: Contents
+    :maxdepth: 1
+
+    calibration_items
+    force_calibration
+    low_level_api
+    robust_fitting
diff --git a/docs/tutorial/force_calibration/low_level_api.rst b/docs/tutorial/force_calibration/low_level_api.rst
new file mode 100644
index 000000000..68f2bd3f0
--- /dev/null
+++ b/docs/tutorial/force_calibration/low_level_api.rst
@@ -0,0 +1,123 @@
+Low level API
+-------------
+
+For those who want an API that is a little more composable, Pylake also offers a low level API to perform force calibration.
+This API is intended for advanced users and separates the steps of creating a power spectrum and fitting models to it.
+
+Obtaining the power spectrum
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To use this lower-level API, we first need the power spectrum to fit.
+To compute a power spectrum from our data we can invoke :func:`~lumicks.pylake.calculate_power_spectrum()`::
+
+    power_spectrum = lk.calculate_power_spectrum(volts.data, sample_rate=volts.sample_rate)
+
+This function returns a :class:`~lumicks.pylake.force_calibration.power_spectrum.PowerSpectrum` which we can plot::
+
+    plt.figure()
+    power_spectrum.plot()
+    plt.show()
+
+.. image:: figures/power_spectrum.png
+
+The power spectrum is smoothed by downsampling adjacent power spectral values (known as blocking).
+Downsampling the spectrum is required to fulfill some of the assumptions in the fitting procedure, but it comes at the cost of spectral resolution.
+One must be careful that the shape of the power spectrum is still sufficiently preserved.
+If the corner frequency is very low, too much downsampling can lead to biases in the calibration parameters.
+In such cases, it is better to either measure a longer interval to increase the spectral resolution or reduce the number of points (`num_points_per_block`) used for blocking.
+
+The range over which to compute the spectrum can be controlled using the `fit_range` argument.
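The effect of blocking can be illustrated with synthetic data: the values of an unblocked power spectrum scatter around the true spectrum with close to 100% relative noise, and averaging `n` adjacent values reduces this noise by roughly a factor of the square root of `n`. The sketch below uses made-up parameters and plain NumPy; it is not Pylake's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
fc, diffusion = 1500.0, 1.0  # made-up corner frequency [Hz] and diffusion constant
f = np.arange(1.0, 40000.0)  # frequency grid [Hz]
lorentzian = diffusion / (2 * np.pi**2 * (fc**2 + f**2))  # true (Lorentzian) spectrum

# A raw periodogram scatters around the true spectrum with ~100% relative noise:
raw = lorentzian * rng.exponential(1.0, size=f.size)

def block_average(x, n):
    """Average adjacent values in blocks of n points (remainder discarded)."""
    return x[: x.size // n * n].reshape(-1, n).mean(axis=1)

noise_raw = np.std(raw / lorentzian)  # close to 1, i.e. 100% noise
noise_blocked = np.std(block_average(raw, 2000) / block_average(lorentzian, 2000))
print(noise_raw, noise_blocked)  # blocking reduces the noise by roughly sqrt(2000)
```

This is why heavier blocking gives a smoother spectrum, while the loss of spectral resolution is what distorts the shape near a low corner frequency.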
+One can also exclude specific frequency ranges from the spectrum (`excluded_ranges`), which can be useful if there are noise peaks in the spectrum.
+Let's see which ranges were excluded in our Bluelake calibration::
+
+    force_slice.calibration[0]
+
+.. image:: figures/bl_dictionary.png
+
+Here, they are listed as `Exclusion range 0 (min.) (Hz)`, `Exclusion range 0 (max.) (Hz)` etc.
+To reproduce the result obtained with Bluelake, these should be excluded from the power spectrum::
+
+    power_spectrum = lk.calculate_power_spectrum(
+        volts.data,
+        sample_rate=volts.sample_rate,
+        fit_range=(1e2, 23e3),
+        num_points_per_block=2000,
+        excluded_ranges=([19348, 19668], [24308, 24548])
+    )
+
+    plt.figure()
+    power_spectrum.plot(marker=".")
+    plt.show()
+
+.. image:: figures/power_spectrum_excluded_ranges.png
+
+Note that exclusion ranges are applied *prior* to downsampling.
+Considering that a noise peak may be very narrow, it can help to temporarily lower the number of points per block in order to find the exact exclusion range.
+After determining this exclusion range, the number of points per block can be increased again. However, see also `Robust fitting`_ for an automated peak identification routine.
+
+Passive calibration
+^^^^^^^^^^^^^^^^^^^
+
+In the literature, passive calibration is often referred to as thermal calibration.
+It involves fitting a physical model to the power spectrum obtained in the previous step.
+This physical model relies on a number of parameters that have to be specified in order to get the correct calibration factors.
+
+The most important parameters are the bead diameter (in microns) and the viscosity of the medium.
+Let's use the bead diameter found in the calibration performed in Bluelake.
+
+Note that the viscosity of water strongly depends on :ref:`temperature`.
+To find the viscosity of water at a particular temperature, Pylake uses :func:`~lumicks.pylake.viscosity_of_water`, which implements the model presented in :cite:`huber2009new`.
+When omitted, this function will automatically be used to look up the viscosity of water at that particular temperature.
+
+.. note::
+
+    For experiments that use a medium other than water, the viscosity at the experimental temperature should be provided explicitly.
+
+The next step is setting up the calibration model::
+
+    bead_diameter = f.force1x.calibration[1]["Bead diameter (um)"]
+    force_model = lk.PassiveCalibrationModel(bead_diameter, temperature=25)
+
+To fit this model to the data use :func:`~lumicks.pylake.fit_power_spectrum()`::
+
+    calibration = lk.fit_power_spectrum(power_spectrum, force_model)
+    calibration
+
+.. image:: figures/calibration_item.png
+
+This will produce a table with your fitted calibration parameters.
+These parameters can be accessed as follows::
+
+    >>> print(calibration["kappa"].value)
+    0.12872206850762546
+    >>> print(f.force1x.calibration[1]["kappa (pN/nm)"])
+    0.1287225353482303
+
+.. note::
+
+    By default, a bias correction is applied to the fitted results :cite:`norrelykke2010power`.
+    This correction is applied to the diffusion constant and amounts to a factor of :math:`\frac{N}{N+1}`, where :math:`N` refers to the number of points used for a particular spectral data point.
+    It can optionally be disabled by passing `bias_correction=False` to :func:`~lumicks.pylake.fit_power_spectrum`.
+
+We can plot the calibration by calling::
+
+    plt.figure()
+    calibration.plot()
+    plt.show()
+
+.. image:: figures/fitted_spectrum.png
+
+Setting up a model
+^^^^^^^^^^^^^^^^^^
+
+We can also set up a model for passive calibration that uses the hydrodynamically correct theory::
+
+    force_model = lk.PassiveCalibrationModel(
+        bead_diameter,
+        hydrodynamically_correct=True,
+        rho_sample=999,
+        rho_bead=1060.0
+    )
+
+Note that when `rho_sample` and `rho_bead` are omitted, values for water and polystyrene are used for the sample and bead density respectively.
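For intuition, the fitted quantities connect to the trap stiffness through the drag coefficient: for a bead deep in bulk, Stokes' law gives the drag as three pi times viscosity times diameter, and the corner frequency satisfies `fc = kappa / (2 * pi * gamma)`. Below is a minimal sketch of this textbook conversion with illustrative numbers (not the output of a real fit; Pylake performs this conversion internally):

```python
import math

# Illustrative inputs (not from a real fit):
fc = 1500.0             # fitted corner frequency [Hz]
bead_diameter = 4.4e-6  # bead diameter [m]
viscosity = 0.00089     # viscosity of water at ~25 degC [Pa*s]

# Stokes drag for a sphere deep in bulk: gamma = 3 * pi * eta * d
gamma = 3 * math.pi * viscosity * bead_diameter  # [kg/s]

# The corner frequency is fc = kappa / (2 * pi * gamma), hence:
kappa = 2 * math.pi * gamma * fc  # trap stiffness [N/m]
print(f"stiffness: {kappa * 1e3:.3f} pN/nm")  # 1 N/m = 1e3 pN/nm; roughly 0.35 here
```

This also makes it clear why a wrong bead diameter or viscosity propagates directly into the reported stiffness.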
diff --git a/docs/tutorial/force_calibration/robust_fitting.rst b/docs/tutorial/force_calibration/robust_fitting.rst new file mode 100644 index 000000000..c77a0e337 --- /dev/null +++ b/docs/tutorial/force_calibration/robust_fitting.rst @@ -0,0 +1,143 @@ +.. _robust_fitting: + +Robust fitting +-------------- + +.. only:: html + + :nbexport:`Download this page as a Jupyter notebook ` + +So far, we have been using least-squares fitting routines for force calibration. +In that case, we assume that the error in the power at each frequency is distributed according to a Gaussian distribution. +:ref:`Blocking or windowing` the power spectrum ensures that this assumption is close enough to the truth such that the fit provides accurate estimates of the unknown parameters. +Occasionally, the power spectrum might show a spurious noise peak. +Such a peak is an outlier in the expected behavior of the spectrum and therefore interferes with the assumption of having a Gaussian error distribution. +As a result, the fit is skewed. In those cases, it can be beneficial to do a robust fit. When a robust fit is performed, one assumes that the probability of encountering one or multiple outliers is non-negligible. +By taking this into account during fitting, the fit can be made more robust to outliers in the data. The following example illustrates the method. + +To see this effect, let's load a dataset of uncalibrated force sensor data of a 4.4 μm bead showing Brownian motion while being trapped. In particular, look at the `Force 2y` sensor signal:: + + f = lk.File("test_data/robust_fit_data.h5") + f2y = f.force2y + +First create a power spectrum without blocking or windowing for later use. 
Then derive a blocked power spectrum from it by downsampling::
+
+    ps = lk.calculate_power_spectrum(f2y.data, sample_rate=f2y.sample_rate, num_points_per_block=1, fit_range=(10, 23e3))
+    ps_blocked = ps.downsampled_by(200)
+
+Next, use a passive calibration model based on the hydrodynamically correct theory to perform a least-squares fit and plot the result::
+
+    model = lk.PassiveCalibrationModel(4.4, temperature=25.0, hydrodynamically_correct=True)
+    fit = lk.fit_power_spectrum(ps_blocked, model)
+
+    plt.figure()
+    fit.plot()
+    plt.title(
+        f"Skewed fit: $f_c$ = {fit.results['fc'].value:.1f}, "
+        f"$D$ = {fit.results['D'].value:.4f}, "
+        f"$f_d$ = {fit.results['f_diode'].value:.1f}"
+    )
+    plt.show()
+
+.. image:: figures/power_spectrum_noise_peak.png
+
+Notice how the tail of the model is skewed towards the peak, in order to reduce the least-squares error.
+In this case, the free parameters that fit the diode filter contribution are 'abused' to reduce the error between the model and the outlier.
+This results in biased parameter estimates.
+
+Now perform a robust fit by specifying a loss function in :func:`~lumicks.pylake.fit_power_spectrum()`.
+For least-squares fitting, the loss function is `'gaussian'`, which is the default if nothing is specified. However, if we specify `'lorentzian'`, a robust fitting routine will be used instead.
+Because `bias_correction` and robust fitting are mutually exclusive, we need to turn the bias correction off explicitly::
+
+    fit = lk.fit_power_spectrum(ps_blocked, model, bias_correction=False, loss_function="lorentzian")
+
+Now plot the robust fit::
+
+    plt.figure()
+    fit.plot()
+    plt.title(
+        f"Robust fit: $f_c$ = {fit.results['fc'].value:.1f}, "
+        f"$D$ = {fit.results['D'].value:.4f}, "
+        f"$f_d$ = {fit.results['f_diode'].value:.1f}"
+    )
+    plt.show()
+
+.. image:: figures/power_spectrum_noise_peak_robust.png
+
+Notice how the model now follows the power spectrum nearly perfectly.
The value for `f_diode` has increased significantly, now that it is not abused to reduce the error induced by the outlier.
+
+This example shows that a robust fitting method is less likely to fail on outliers in the power spectrum data. One may therefore wonder why it is not simply used all the time.
+First, robust fitting leads to a small bias in the fit results for which Pylake has no correction.
+Least-squares fitting also leads to a bias, but this bias is known (:cite:`norrelykke2010power`) and can be corrected with `bias_correction=True`.
+Second, for least-squares fitting, methods exist to estimate the expected standard errors in the estimates of the free parameters, which are implemented in the least-squares fitting routines that Pylake uses :cite:`press1990numerical`.
+These error estimates are not implemented for robust fitting, and as such, the fit results will show `nan` for the error estimates after a robust fit.
+However, as will be shown below, the robust fitting results may be used as a starting point to identify outliers automatically, in order to exclude these from a second, regular least-squares fit.
+
+Automated spurious peak detection
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+We will continue the tutorial with the results of the previous section. If you did not yet do that part of the tutorial, please go back and execute the code examples in that section.
+
+We still have the power spectrum `ps` that was created without blocking or windowing. Here we will use it to identify the peak and automatically obtain frequency exclusion ranges.
+We will use the method :meth:`~lumicks.pylake.force_calibration.power_spectrum.PowerSpectrum.identify_peaks()` in order to do so.
+This method takes a function that accurately models the power spectrum as a function of frequency, in order to normalize it.
+It then identifies peaks based on the likelihood of encountering a peak of a certain magnitude in the resulting data set.
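The statistics behind this likelihood can be made concrete: if the normalized, unblocked power spectrum follows an exponential distribution with mean 1, the probability that a single spectral value reaches a cutoff `c` is `exp(-c)`. A quick sketch of this reasoning (not Pylake's implementation):

```python
import math

# Probability that a single exponentially distributed value (mean 1)
# reaches `peak_cutoff` or higher:
peak_cutoff = 20
p_single = math.exp(-peak_cutoff)
print(p_single)  # ~2.1e-9, i.e. roughly 2 in a billion

# Even across tens of thousands of spectral points, the expected number of
# false positives remains tiny (assuming ~23000 points as an example):
print(23000 * p_single)
```

This is why a value exceeding the cutoff can safely be attributed to a spurious peak rather than to the thermal spectrum itself.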
+If we have a "good fit", then the easiest way to get that function is to use our fitted model::
+
+    plt.figure()
+    frequency_range = np.arange(100, 22000)
+    # We can call the fit with a list of frequencies to evaluate the model at those frequencies.
+    # This uses the best fit parameters from fit.fitted_params.
+    plt.plot(frequency_range, fit(frequency_range))
+    plt.xscale("log")
+    plt.yscale("log")
+
+If there are no spurious peaks, then normalizing the unblocked power spectrum results in random numbers with an exponential distribution with a mean value of 1.
+The chance of encountering increasingly larger numbers decays exponentially, and this fact is used by `identify_peaks()`::
+
+    frequency_exclusions = ps.identify_peaks(fit, peak_cutoff=20, baseline=1)
+
+The parameter `peak_cutoff` sets the minimum magnitude a value in the normalized power spectrum must have in order to be considered a peak.
+The default value is 20, and it corresponds to a chance of about 2 in a billion of a peak of magnitude 20 or larger occurring naturally in a data set.
+If a peak is found with this or a higher magnitude, the algorithm then expands the range to the left and right until the first time the power spectrum drops below the value `baseline`.
+The frequencies at which this occurs become the lower and upper bounds of a frequency exclusion range. As such, the value of `baseline` controls the width of the frequency exclusion range.
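The range-growing procedure described above can be sketched as follows. This is a toy re-implementation for illustration only; in practice, use :meth:`~lumicks.pylake.force_calibration.power_spectrum.PowerSpectrum.identify_peaks()`:

```python
import numpy as np

def find_peak_ranges(freq, normalized_power, peak_cutoff=20.0, baseline=1.0):
    """Toy sketch of the peak search described above (illustration only).

    Any point exceeding `peak_cutoff` seeds a peak; the range is then grown
    left and right until the normalized power first drops below `baseline`."""
    ranges = []
    for idx in np.flatnonzero(normalized_power >= peak_cutoff):
        lo = idx
        while lo > 0 and normalized_power[lo] >= baseline:
            lo -= 1
        hi = idx
        while hi < len(normalized_power) - 1 and normalized_power[hi] >= baseline:
            hi += 1
        if not ranges or freq[lo] > ranges[-1][1]:  # skip seeds inside the last range
            ranges.append((float(freq[lo]), float(freq[hi])))
    return ranges

# A flat normalized spectrum with one spurious peak around 150 Hz:
freq = np.arange(100, 200, dtype=float)
power = np.full(freq.size, 0.5)
power[48:53] = [5.0, 30.0, 80.0, 30.0, 5.0]
print(find_peak_ranges(freq, power))  # → [(147.0, 153.0)]
```

Note how raising `baseline` would tighten the reported range around the peak, while lowering it widens the excluded band.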
+We can visualize the excluded peaks as follows:: + + fig, ax = plt.subplots(1, 2, sharey=True) + for axis, title in zip(ax, ('Full spectrum', 'Zoom')): + axis.loglog(ps.frequency, ps.power, label="Power spectrum") + for idx, item in enumerate(frequency_exclusions, 1): + to_plot = np.logical_and(item[0] <= ps.frequency, ps.frequency < item[1]) + axis.plot(ps.frequency[to_plot], ps.power[to_plot], 'r', label=f'peak {idx}') + axis.legend() + axis.set_title(title) + axis.set_xlabel('Frequency [Hz]') + ax[1].set_xlim(frequency_exclusions[0][0] - 1.0, frequency_exclusions[-1][1] + 1.0) + ax[1].set_xscale('linear') + ax[0].set_ylabel('Power [V$^2$/Hz]') + plt.suptitle('Identified peaks') + plt.show() + +.. image:: figures/identify_peaks.png + +Finally, we can do a least-squares fit, but in this case we will filter out the frequency ranges that contain peaks. +Because we use a least-squares method, we get error estimates on the fit parameters, and bias in the fit result can be corrected. +The default values of `loss_function='gaussian'` and `bias_correction=True` ensure least-squares fitting and bias correction, so we do not need to specify them:: + + ps_no_peak = lk.calculate_power_spectrum( + f2y.data, sample_rate=f2y.sample_rate, num_points_per_block=200, fit_range=(10, 23e3), excluded_ranges=frequency_exclusions, + ) + fit_no_peak = lk.fit_power_spectrum(ps_no_peak, model) + + plt.figure() + fit_no_peak.plot() + plt.title( + f"Least squares (ex. peaks): $f_c$ = {fit_no_peak.results['fc'].value:.1f}, " + f"$D$ = {fit_no_peak.results['D'].value:.4f}, " + f"$f_d$ = {fit_no_peak.results['f_diode'].value:.1f}" + ) + plt.show() + +.. image:: figures/power_spectrum_no_noise_peak.png + +Notice that no skewing occurs, and that the values of `fc`, `D` and `f_diode` are now closer to values found via robust fitting in the section above. 
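As an aside, the difference between the two loss functions can be illustrated numerically. Least squares penalizes a residual `r` with `r**2 / 2`, whereas a Lorentzian loss grows only logarithmically, so a single outlier contributes far less to the total loss. This sketches the general idea; the exact scaling used by Pylake may differ:

```python
import math

def gaussian_loss(r):
    """Least-squares penalty for a residual r."""
    return r**2 / 2

def lorentzian_loss(r):
    """Robust (Lorentzian) penalty: grows logarithmically for large residuals."""
    return math.log(1 + r**2 / 2)

# An outlier residual of 10 dominates a least-squares fit,
# but contributes only modestly to the robust loss:
for r in (0.5, 1.0, 10.0):
    print(f"r={r}: gaussian={gaussian_loss(r):.2f}, lorentzian={lorentzian_loss(r):.2f}")
```

This is why the robust fit above ignores the noise peak instead of skewing the model towards it.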
diff --git a/docs/tutorial/index.rst b/docs/tutorial/index.rst index 12be84214..0c5d0167a 100644 --- a/docs/tutorial/index.rst +++ b/docs/tutorial/index.rst @@ -31,6 +31,6 @@ These import conventions are used consistently in the tutorial. fdfitting nbwidgets kymotracking - force_calibration + force_calibration/index population_dynamics piezotracking