MOPTOP

Introduction

MOPTOP, the Multicolour OPTimised Optical Polarimeter, is specially designed for time domain astrophysics. It takes the already-novel aspects of the RINGO series of polarimeters and adds a unique optical dual-camera configuration to both minimise systematic errors and provide the highest possible sensitivity.

MOPTOP's design enables the measurement of polarisation and photometric variability on timescales as short as a few seconds. Overall the instrument allows accurate measurements of the intra-nightly variability of the polarisation of sources such as gamma-ray bursts and blazars, allowing the constraint of magnetic field models to reveal more information about the formation, ejection and collimation of jets.

Current Status

ONLINE Last changed: 24 October 2020

MOPTOP was mounted on the telescope for commissioning and calibration in early 2020, just before coronavirus lockdown travel restrictions between the UK and the Canary Islands came into force. The instrument began observing robotically in October 2020.

Description


Optical schematic of MOPTOP
Based on Shrestha et al. (2020)

MOPTOP is a dual-beam polarimeter. Incoming collimated light first passes through a continuously rotating half-wave plate which modulates the beam's polarisation angle. The polarised light then passes through a wire-grid polarising beamsplitter. This splits the light into the p and s polarised states and sends the now-separate beams through filter wheels to a pair of low-noise fast-readout imaging cameras.

Image acquisition is electronically synchronised to the rotation angle of the half-wave plate. This combination of half-wave plate and beamsplitter provides about twice as much throughput as conventional polarimeters that use polaroid filters as the analyser.

Technical Spec

Optical Performance
  • 16 waveplate angle positions per revolution
  • field of view: 7 × 7 arcmin
  • exposure time vs polarisation accuracy for sources of magnitude 12–17 is plotted in the accompanying figure
Cameras
  • Nikon AF Nikkor 50mm f/1.4D imaging lens
  • Andor Zyla sCMOS detectors - science grade CMOS
  • 4.2 megapixel
  • 6.5 µm pixels
  • 82% peak QE
  • 0.9e- read noise
  • ~0.1–1 Hz frame rate
Half-wave Plate
  • ThorLabs achromatic half-wave plate, 400–800nm
  • two user-selectable rotation speed modes: fast (8s period) and slow (80s period)
  • exposure time per waveplate position: 0.45 seconds (fast) or 4.95 seconds (slow), linked to the rotation rate
Beamsplitter
  • ThorLabs Wire-Grid Polarising Beamsplitter 400-700nm
Filters
  • MOP-B "Blue" filter: 380–520nm
  • MOP-V "Green" filter: 490–570nm
  • MOP-R "Red" filter: 580–695nm
  • MOP-I "Infrared" filter: 695nm – cutoff defined by detector QE
  • MOP-L broadband "Luminosity" filter: 400–700nm

Operational Principle

Key to MOPTOP's operational principle is the use of a half-wave plate to modulate the polarisation angle of the incoming beam, plus camera systems that simultaneously record the resulting two orthogonal polarisation states. A 22.5° rotation of the waveplate rotates the polarisation angle of the beam by 45°, effectively swapping the Stokes q and u parameters. This allows variations in the polarisation response of MOPTOP to be corrected by a differential technique, reducing systematic errors below the level produced by RINGO3.

Sixteen images are acquired in each camera for every rotation of the waveplate. Four frames in succession are needed to make one polarisation measurement, so four measurements are obtained per complete rotation.

The figure below left shows exposure times and angles for each rotation position. The blue shaded areas in the top figure indicate the waveplate angle for each exposure. The white areas denote the readout time. The plot below right shows waveplate angle, and the resulting electric vector position angle (EVPA), as a function of time.


Left: MOPTOP operational principle, showing the sequence of frames taken at different angles of the rotating wave plate as a function of time when operating in "fast mode" (one revolution in 8 sec — more info on modes below). Exposure of a frame simultaneously begins on both cameras 0.5s after the previous one, with the waveplate rotation angle having increased by a total of 22.5° over that time period. The blue shaded areas indicate the time the shutter is open (duration 0.45s), with the shorter white regions indicating the 0.05s readout gaps between frames. Frames are numbered 1-16. Right: waveplate rotation angle and resulting EVPA rotation of the incoming beam as a function of time. (from Shrestha et al. (2020))

"Fast" and "Slow" Speed Modes

attribute                 fast mode   slow mode
rotation period (s)       8           80
frame interval (s)        0.5         5
frame exposure time (s)   0.45        4.95
frames per second         2           0.2
frames per minute         120         12

The half-wave plate can rotate at two different speeds, chosen by the observer. In "fast mode", the waveplate completes one revolution in 8 seconds (7.5rpm), while in "slow mode" one revolution lasts 80 seconds (0.75rpm).

In fast mode, each of the 16 rotation positions is imaged every 0.5 seconds. Exposure time is actually 0.45s, with 0.05s for readout time. In slow mode, positions are imaged every 5s with exposures lasting 4.95s, and again 0.05s is used for readout time.

Observers should choose fast mode if the target is brighter than mv = 12, or if there is a need for time resolution better than a few seconds in polarisation measurement. Using slow mode whenever possible is recommended, as this yields a smaller data set which is easier to handle.

Cassegrain Rotator

We also strongly recommend that all observations are made with the Cassegrain mount angle set to zero degrees.

Exposure Time

MOPTOP exposure times are treated differently from those of the other instruments. Each rotation produces 16 exposures, and MOPTOP only observes for a whole number of rotations. The time entered into the Phase2UI therefore corresponds to a number of rotations rather than to individual exposures, so we refer to it as the "duration" to distinguish it from the individual exposure times.

A complicating factor, however, is that MOPTOP rounds the duration down to a whole number of rotations. If a duration of 200s is entered in slow mode (period 80s), only two rotations (2×80 = 160s) are observed, because a third rotation (240s total) would exceed the requested duration.

If 200s of integration is genuinely required, three rotations are needed. You should therefore enter the duration corresponding to the whole number of rotations that covers the integration time you want.

Another consideration is that the individual exposure times are not simply the rotation period divided by 16 (i.e. 5s or 0.5s). Frame readout takes up 0.05s per exposure, so the actual exposure times are 4.95s or 0.45s for slow and fast modes respectively. This difference can add up over time for long durations, and if total integration time on sky is important to the observation then this extra effect must be taken into account.

An equation to calculate the duration td to enter into the Phase2UI to make sure a total integration time on-sky of at least ti seconds is achieved is:

\[ t_{d} = P \; \left\lceil\frac{t_{i}}{16 \; t_{e}}\right\rceil \]

where ⌈⌉ is the notation for the "ceiling" function that rounds up to the nearest integer, and P and te are the rotation period and exposure time of each frame for the speed mode selected.

Example:

Required:
Duration time to ensure an integration of at least 700s in slow speed mode.

Answer:

  • Slow mode is selected, so we use P = 80s and te = 4.95s
  • Entering 700s for ti into the equation above we get:
  • \[ t_{d} = 80 \; \left\lceil\frac{700}{16 \times 4.95}\right\rceil = 720 \]
  • Therefore entering 720s as the duration ensures we get at least 700s on-sky.
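The duration calculation can be sketched in Python (the function name and the table of mode constants are ours, taken from the specification above):

```python
import math

# Rotation period P (s) and per-frame exposure time t_e (s) per speed mode.
MODES = {"fast": (8.0, 0.45), "slow": (80.0, 4.95)}

def duration(t_i, mode):
    """Phase2UI duration t_d guaranteeing at least t_i seconds on-sky.

    Implements t_d = P * ceil(t_i / (16 * t_e)).
    """
    P, t_e = MODES[mode]
    n_rotations = math.ceil(t_i / (16.0 * t_e))
    return P * n_rotations

# Worked example from the text: 700 s on-sky in slow mode.
print(duration(700, "slow"))   # 720.0
```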

Duration td to enter for typical values of required total integration time ti:

Total Integration   Duration Time td (s)
Time ti (s)         fast mode   slow mode
60                  72          80
100                 112         160
200                 224         240
500                 560         560
1000                1112        1040

Note also that the fast and slow rotation periods set the minimum duration that can be given to MOPTOP. Durations shorter than 8s in fast mode or 80s in slow mode are not allowed and will be rejected by the Phase2UI validator. If such a group is submitted anyway, the telescope will try to observe it, but MOPTOP itself will reject the attempt.

Filename Convention

MOPTOP filenames are organised slightly differently from the usual instrument filename notation:

[Example MOPTOP filename, annotated as defined below. Image © 2020 Doug Arnold]

The first few terms (separated by underscores) are the usual:

  • instrument label (cameras "1" and "2" in MOPTOP's case)
  • observation type ("e" for science exposure, "s" for standard, "f" for flat)
  • observation date = date of start of night (YYYYMMDD); does not change after midnight
  • run number = the "run" or observation number for that instrument for the night

The next two terms are unique to MOPTOP:

  • rotation number = the number of the current waveplate rotation
  • rotator position = the current angular position of the waveplate, ranging from 1–16

The last term is as normal:

  • flag that shows the reduction status of the file (0 = raw, 1 = reduced)
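A filename following this convention can be split into its fields with a short helper (the function name and the example filename are invented for illustration):

```python
# Split a MOPTOP filename into the fields listed above. The example
# filename below is invented; real run/rotation numbers will differ.
FIELDS = ("instrument", "obstype", "date", "run",
          "rotation", "position", "reduction")

def parse_moptop_filename(name):
    stem = name.rsplit(".", 1)[0]          # drop any extension
    parts = stem.split("_")
    if len(parts) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(parts)}")
    return dict(zip(FIELDS, parts))

info = parse_moptop_filename("1_e_20201024_7_3_12_0.fits")
print(info["position"])    # "12": 12th of the 16 waveplate positions
```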

Data Reduction Pipeline

MOPTOP raw frames undergo bias subtraction, dark subtraction and flat fielding to remove the normal instrumental detector signatures. The pipeline is automated and runs on all data before they are stored in the archive and distributed to users.

Bias and Dark

Since there are only two possible exposure times (one each for the fast and slow rotor speeds), no scaling is needed to account for varying integration times, and there is no need to separate the bias and dark contributions to the signal. Both are independent of rotor position, so the pipeline maintains a combined 'bias-plus-dark' frame and subtracts it from every image.

Flat-field 

(Caution with MOPTOP photometry)

Each frame is divided by a flat-field image. The flat field applied is an average stack over all sixteen rotor positions; that is, the same flat field is used for all sixteen frames in a rotation set. A separate flat-field image is created for each filter. The flat field therefore represents well the pixel-to-pixel sensitivity variations of the detector, the wavelength dependence of the filters, and the instrumental vignetting of the beam. It will not represent or correct any polarisation-dependent features of the illumination pattern.

Instrument performance analysis is still ongoing, but we currently believe this bias is corrected by the differential analysis of the signal from the two cameras described in the "Deriving Polarisation" section below. However, we urge caution when using single-camera MOPTOP data for photometry: the flat field may introduce systematic errors for photometry, though not for polarimetry.

WCS

A World Coordinate System (WCS) is applied to every image:

  • All frames from a specific observation are co-added to create a high signal-to-noise image stack of the entire integration
  • Sources are detected in that image using SExtractor
  • A WCS is created using the imwcs software, based on cross-matching to the USNO-B1.0 or 2MASS point source catalogues
  • The WCS is then transcribed into each individual frame and the stack discarded

In cases where the WCS fit fails, an approximate WCS is constructed on the basis of the telescope's blind pointing.

Deriving Polarisation

(Most of the information in this section is adapted from "Characterisation of a dual-beam, dual-camera optical imaging polarimeter", Shrestha et al, Monthly Notices of the Royal Astronomical Society, Vol 494, Issue 4, 2020. The paper is referred to as "Shrestha (2020)" in this text)

If time resolution during the observation is not important, all frames with the same rotator position can be stacked to obtain 16 frames per camera, each stacked frame corresponding to a 22.5° rotation of the waveplate.
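As a sketch, stacking frames by rotator position with NumPy might look like this (the array names and the use of a mean stack are our assumptions; summing the frames would work equally well):

```python
import numpy as np

# Collapse a time series of frames to one stacked frame per rotator
# position. `frames` has shape (n_frames, ny, nx) and `positions`
# holds each frame's waveplate position (1-16).
rng = np.random.default_rng(0)
frames = rng.normal(100.0, 1.0, size=(64, 32, 32))   # 4 full rotations
positions = np.tile(np.arange(1, 17), 4)

stacked = {p: frames[positions == p].mean(axis=0) for p in range(1, 17)}

print(len(stacked), stacked[1].shape)   # 16 (32, 32)
```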

Reduction begins by using aperture photometry to extract sky-subtracted source counts. Two means of converting these data to polarisation values are outlined in Shrestha (2020): the one-camera and two-camera techniques. The two-camera technique produces better results, giving a smaller error in degree of polarisation and less scatter in Stokes q and u.

For this reason, the example that follows uses the two-camera technique. For brevity we show only the first half of the calculation, using waveplate positions 1–8; applied to all 16 waveplate positions, the same procedure yields four Stokes q and u values per rotation.

Obtaining polarisation of a source from the post-pipeline reduced data is achieved by:

  1. Correcting for the sensitivity difference between camera 1 and camera 2 by calculating the sensitivity factor
  2. Applying this sensitivity factor to correct the counts from camera 2
  3. Calculating the Stokes parameters q and u from the variation in counts with waveplate angle position
  4. Accounting for instrument polarisation
  5. Calculating percentage polarisation and position angle

Each of these steps is described in turn below.

1. Calculate sensitivity factor between cameras

First we must make sure that the two MOPTOP cameras have the same effective sensitivity. Let m and n be the observed counts from camera 1 and camera 2 respectively. Taking positions 1 and 3, equation 16 of Shrestha (2020) gives the relative sensitivity factor F as:

\[ F = \sqrt{\frac{n_{1}\,n_{3}}{m_{1}\,m_{3}}} \]

For example, the q and u sensitivity factors for the first eight waveplate positions (two q measurements and two u measurements) are:

\[ F_{1q} = \sqrt{\frac{n_{1}\,n_{3}}{m_{1}\,m_{3}}} \] \[ F_{2q} = \sqrt{\frac{n_{5}\,n_{7}}{m_{5}\,m_{7}}} \] \[ F_{1u} = \sqrt{\frac{n_{2}\,n_{4}}{m_{2}\,m_{4}}} \] \[ F_{2u} = \sqrt{\frac{n_{6}\,n_{8}}{m_{6}\,m_{8}}} \]

2. Correct camera 2 counts

With these sensitivity factors we can correct the counts for camera 2 to end up with corrected counts c and d for all positions for camera 1 and camera 2 respectively:

\( c_{1} = m_{1}, \quad d_{1} = n_{1} / F_{1q} \)
\( c_{2} = m_{2}, \quad d_{2} = n_{2} / F_{1u} \)
\( c_{3} = m_{3}, \quad d_{3} = n_{3} / F_{1q} \)
\( c_{4} = m_{4}, \quad d_{4} = n_{4} / F_{1u} \)
\( c_{5} = m_{5}, \quad d_{5} = n_{5} / F_{2q} \)
\( c_{6} = m_{6}, \quad d_{6} = n_{6} / F_{2u} \)
\( c_{7} = m_{7}, \quad d_{7} = n_{7} / F_{2q} \)
\( c_{8} = m_{8}, \quad d_{8} = n_{8} / F_{2u} \)
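Steps 1 and 2 can be sketched as follows (the function and variable names are ours; the toy counts simulate a camera 2 that is 10% more sensitive than camera 1):

```python
import math

def sensitivity_factor(n_a, n_b, m_a, m_b):
    """Relative sensitivity F = sqrt((n_a*n_b)/(m_a*m_b)) for one
    position pair (e.g. positions 1 and 3 for F_1q)."""
    return math.sqrt((n_a * n_b) / (m_a * m_b))

# Toy counts for waveplate positions 1-8 (invented for illustration).
m = {i: 1000.0 for i in range(1, 9)}   # camera 1
n = {i: 1100.0 for i in range(1, 9)}   # camera 2, 10% more sensitive

F = {"1q": sensitivity_factor(n[1], n[3], m[1], m[3]),
     "2q": sensitivity_factor(n[5], n[7], m[5], m[7]),
     "1u": sensitivity_factor(n[2], n[4], m[2], m[4]),
     "2u": sensitivity_factor(n[6], n[8], m[6], m[8])}

# Corrected counts: camera 1 unchanged, camera 2 divided by F.
pair = {1: "1q", 2: "1u", 3: "1q", 4: "1u",
        5: "2q", 6: "2u", 7: "2q", 8: "2u"}
c = {i: m[i] for i in range(1, 9)}
d = {i: n[i] / F[pair[i]] for i in range(1, 9)}

print(round(F["1q"], 3), round(d[1], 1))   # 1.1 1000.0
```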

3. Calculate Stokes q and u

Corrected counts can now be used to calculate Stokes Q, U and I:

\( Q_{1} = c_{1} - d_{1} \)
\( U_{1} = c_{2} - d_{2} \)
\( I_{1} = \left( c_{1} + d_{1} + c_{2} + d_{2} \right) / 2 \)
\( q_{1} = Q_{1} / I_{1} \)
\( u_{1} = U_{1} / I_{1} \)

\( Q_{2} = c_{3} - d_{3} \)
\( U_{2} = c_{4} - d_{4} \)
\( I_{2} = \left( c_{3} + d_{3} + c_{4} + d_{4} \right) / 2 \)
\( q_{2} = Q_{2} / I_{2} \)
\( u_{2} = U_{2} / I_{2} \)
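Step 3 can then be sketched as (the function name and toy corrected counts are ours):

```python
def stokes_qu(c_a, d_a, c_b, d_b):
    """One (q, u) pair from corrected counts at a q-position pair
    (c_a, d_a) and the following u-position pair (c_b, d_b)."""
    Q = c_a - d_a
    U = c_b - d_b
    I = (c_a + d_a + c_b + d_b) / 2.0
    return Q / I, U / I

# Toy corrected counts for positions 1 and 2 (invented): a source
# with a small positive q and zero u.
q1, u1 = stokes_qu(c_a=1020.0, d_a=980.0, c_b=1000.0, d_b=1000.0)
print(round(q1, 3), round(u1, 3))   # 0.02 0.0
```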

Calculating the Error in Stokes q and u

The error in Stokes q and u are found by propagating the errors in the counts from the photometry. Let merr and nerr be the error in counts from cameras 1 and 2 respectively, and let qerr and uerr be the errors in q and u respectively.

Writing the corrected camera-2 counts as d1 = n1/F1q and d2 = n2/F1u, propagating the photometric errors in quadrature through F gives their uncertainties Aq and Au:

\[ A_{q} = \frac{n_{1}}{2\,F_{1q}} \left[ \left(\frac{n_{err_{1}}}{n_{1}}\right)^2 + \left(\frac{n_{err_{3}}}{n_{3}}\right)^2 + \left(\frac{m_{err_{1}}}{m_{1}}\right)^2 + \left(\frac{m_{err_{3}}}{m_{3}}\right)^2 \right]^{\frac{1}{2}} \]

and

\[ A_{u} = \frac{n_{2}}{2\,F_{1u}} \left[ \left(\frac{n_{err_{2}}}{n_{2}}\right)^2 + \left(\frac{n_{err_{4}}}{n_{4}}\right)^2 + \left(\frac{m_{err_{2}}}{m_{2}}\right)^2 + \left(\frac{m_{err_{4}}}{m_{4}}\right)^2 \right]^{\frac{1}{2}} \]

Propagating these, together with the camera-1 errors, through the difference and sum of the corrected counts then gives qerr and uerr:

\[ q_{err} = |q| \left[ \frac{m_{err_{1}}^2 + A_{q}^2}{(c_{1}-d_{1})^2} + \frac{m_{err_{1}}^2 + A_{q}^2}{(c_{1}+d_{1})^2} \right]^{\frac{1}{2}} \]

\[ u_{err} = |u| \left[ \frac{m_{err_{2}}^2 + A_{u}^2}{(c_{2}-d_{2})^2} + \frac{m_{err_{2}}^2 + A_{u}^2}{(c_{2}+d_{2})^2} \right]^{\frac{1}{2}} \]
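A sketch of the corresponding quadrature error propagation (the function names, toy values, and exact grouping of terms are our assumptions; see Shrestha (2020) for the definitive expressions):

```python
import math

def d_error(n_a, n_b, m_a, m_b, ne_a, ne_b, me_a, me_b):
    """Error on the corrected count d = n_a / F, propagated in
    quadrature through F = sqrt((n_a * n_b) / (m_a * m_b))."""
    d = math.sqrt(n_a * m_a * m_b / n_b)     # equals n_a / F
    rel = ((ne_a / n_a) ** 2 + (ne_b / n_b) ** 2 +
           (me_a / m_a) ** 2 + (me_b / m_b) ** 2)
    return 0.5 * d * math.sqrt(rel)

def q_error(q, c, d, c_err, d_err):
    """Propagate the errors on c and d through the difference and
    sum of the corrected counts into Stokes q."""
    s = math.hypot(c_err, d_err)
    return abs(q) * math.hypot(s / (c - d), s / (c + d))

# Toy values: roughly sqrt(N) photometric errors on counts of ~1000.
sigma_d = d_error(1100.0, 1100.0, 1000.0, 1000.0, 33.0, 33.0, 32.0, 32.0)
qe = q_error(0.02, 1020.0, 980.0, 32.0, sigma_d)
```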

[Binder implementation of a Jupyter notebook of the above steps in full detail]

4. Account for instrument polarisation

Finding the true polarisation of the science target also requires accounting for the instrumental polarisation. This is done by additionally observing an unpolarised standard star.

If qt, ut are the Stokes parameters of the target, and qs, us are the Stokes parameters of the standard, then the corrected Stokes values of the target, qc and uc, are:

\[q_{c} = q_{t} - q_{s}\] \[u_{c} = u_{t} - u_{s}\]

5. Percentage polarisation & position angle

From qc and uc, percentage polarisation %p and position angle PA are given by:

\[ \%p = 100 \sqrt{{q_{c}}^2 + {u_{c}}^2} \] \[ PA = 0.5\;\mathrm{atan2}(u_{c},q_{c}) \]

The measured PA is relative to the orientation of the telescope and instrument. The telescope orientation is given by the ROTSKYPA FITS header keyword, which records the angle of rotation East of North on the sky. The corrected position angle is given by:

\[ EVPA = PA + ROTSKYPA + K \]

where K is a correction angle calculated using the standard stars observed.
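Steps 4 and 5 can be sketched together (the function name and toy Stokes values are ours; in practice ROTSKYPA would be read from the FITS header and K derived from polarised standards):

```python
import math

def polarisation(q_t, u_t, q_s, u_s, rotskypa, K=0.0):
    """Percentage polarisation and sky position angle (degrees).

    q_s, u_s are the Stokes parameters of an unpolarised standard;
    K is the calibration angle derived from polarised standards.
    """
    q_c = q_t - q_s                       # remove instrumental polarisation
    u_c = u_t - u_s
    p = 100.0 * math.hypot(q_c, u_c)      # %p
    pa = 0.5 * math.degrees(math.atan2(u_c, q_c))
    evpa = pa + rotskypa + K              # angle East of North on the sky
    return p, evpa

# Toy values: target q of 3.1% with 1.1% instrumental polarisation.
p, evpa = polarisation(q_t=0.031, u_t=0.001, q_s=0.011, u_s=0.001,
                       rotskypa=0.0)
print(round(p, 1), round(evpa, 1))   # 2.0 0.0
```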