Imaging of Moon Polarization in Support of Nighttime Cloud Phase Remote Sensing

Sierra Dabby
Montana State University Solar Physics REU 2022
Advisers: Dr. Joseph Shaw and Erica Venkatesulu

About Me

I am a rising junior at the University of California, Berkeley, where I am studying atmospheric science and physics. This summer I had the honor of working in the Optical and Remote Sensing Laboratory (ORSL) during my time in the Solar Physics REU program at Montana State University. Alongside Erica Venkatesulu, I worked to expand remote visible polarization imaging of cloud thermodynamic phase to nighttime. Using moonlight as a source of illumination, as opposed to sunlight, means that we now have to characterize the polarization of moonlight throughout the lunar phase cycle.

Project Description

Cloud thermodynamic phase (CTP) refers to whether a cloud is made up of liquid water droplets or ice crystals. The composition of clouds is incredibly important because ice and water clouds behave and interact very differently within the climate system. Supercooled liquid clouds, in which the droplets stay liquid even though the cloud's temperature is below the freezing point, are also particularly dangerous for aircraft safety. This also showcases why thermal measurements alone aren't enough to determine whether a cloud is ice or water. There are many different methods for measuring CTP, but the most prominent and well-tested one in Dr. Shaw's lab is dual-polarization lidar.

This summer, I helped develop visible polarization imaging methods for nighttime measurements. The benefit of polarization imaging is that it is much lower in cost than lidar, passive, and simpler to operate. This has been done successfully for daytime CTP measurements, but we want to extend the technique to nighttime so that CTP can be measured around the clock, as well as at polar locations that experience periods of total darkness.

I plan to present my work at the American Geophysical Union (AGU) Fall Meeting this December in Chicago, IL.

This halo effect only occurs when an ice cloud aligns perfectly with the sun. It is one definitive way to tell that a cloud is made of ice crystals, but it is a rare phenomenon (image credit: Erica Venkatesulu).


Measuring Cloud Thermodynamic Phase (CTP)

Current methods for CTP measurements in the Optical and Remote Sensing Laboratory include dual-polarization lidar as well as the visible imaging methods under development. The MAML dual-polarization lidar is a tried-and-true method of measuring CTP, so we use the lidar to validate our imaging measurements.

Methodology

We are using a FLIR Blackfly monochrome polarization camera. This is a division-of-focal-plane (DoFP) camera, meaning its sensor is composed of superpixels. Each superpixel contains four separate pixels that measure linear polarization along the 0°, 45°, 90°, and 135° axes. By separating each of these pixels out of its superpixel, we obtain the I0, I45, I90, and I135 data and thus the Stokes parameters, and from there we can calculate DoLP and AoP. We have a Matlab script that does all of this computation.
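
As a rough illustration, here is a minimal Matlab sketch of that kind of computation (this is not the lab's actual script; the file name and the 2x2 superpixel layout shown are assumptions that should be checked against the camera documentation):

% Minimal sketch: split a DoFP frame into its four polarization channels and
% compute the Stokes parameters, DoLP, and AoP. The superpixel layout
% (0/45 on the top row, 135/90 on the bottom) and the file name are assumptions.
raw = double(imread('moon_frame.tiff'));   % hypothetical file name

I0   = raw(1:2:end, 1:2:end);   % pixels behind 0-degree polarizers
I45  = raw(1:2:end, 2:2:end);   % 45-degree polarizers
I135 = raw(2:2:end, 1:2:end);   % 135-degree polarizers
I90  = raw(2:2:end, 2:2:end);   % 90-degree polarizers

S0 = I0 + I90;                  % total intensity
S1 = I0 - I90;                  % horizontal minus vertical polarization
S2 = I45 - I135;                % 45-degree minus 135-degree polarization

DoLP = sqrt(S1.^2 + S2.^2) ./ S0;   % degree of linear polarization (0 to 1)
AoP  = 0.5 * atan2(S2, S1);         % angle of polarization, in radians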

The FLIR Blackfly monochrome polarization camera and our physical setup for taking measurements: the camera is mounted at the back of a 2 m focal length telescope, connected to the computer, and controlled through a Matlab script.

Polarization and the Stokes Parameters

Light travels as transverse electromagnetic waves, and in the case of the sun, these waves oscillate in every direction, making the light unpolarized. A perfectly horizontally polarized wave would oscillate only in the horizontal direction. Partially polarized light means that some fraction of the waves oscillate along one linear polarization axis, and this is what occurs with moonlight. Polarization is measured through the Stokes parameters, which describe the brightness of the light and its linear polarization along four axes: 0°, 45°, 90°, and 135°, represented by the intensities I0, I45, I90, and I135 below. The S0 Stokes parameter adds together the horizontal and vertical polarization components, so it captures the total intensity of the light. The S1 Stokes parameter is the difference between the horizontal and vertical polarization states, so a positive value means the horizontal intensity (I0) is greater than the vertical and the light is more horizontally polarized. In a similar manner, S2 is the difference between the 45° and 135° intensities, and if S2 is positive, more light is polarized along the 45° axis.
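
In standard notation, the relations described above can be written as:

\[
S_0 = I_0 + I_{90}, \qquad S_1 = I_0 - I_{90}, \qquad S_2 = I_{45} - I_{135}
\]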

From these Stokes parameters we can further characterize the polarization with the degree of linear polarization (DoLP) and the angle of polarization (AoP). DoLP is the fraction of the light that is linearly polarized and AoP is the orientation of the polarization. The formulas are seen below.
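
In standard form, these quantities are:

\[
\mathrm{DoLP} = \frac{\sqrt{S_1^2 + S_2^2}}{S_0}, \qquad
\mathrm{AoP} = \frac{1}{2}\arctan\!\left(\frac{S_2}{S_1}\right)
\]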

Why is moonlight partially polarized?

As the moon travels around the earth and the earth around the sun, the angle at which sunlight strikes the moon changes. As this incidence angle changes, the reflected sunlight becomes partially polarized to differing degrees, so as the moon moves through its phases the DoLP and AoP of moonlight change accordingly.

Lunar phase cycle and selenographic coordinates

The roughly 29-day lunar phase cycle starts with a new moon, waxes until the full moon around the 15th day, and wanes from there onward. Here is a lunar calendar with images of the whole moon that I collected this summer; the percentages are percent illumination (i.e., percent full).

Phase angle is the sun–moon–earth angle, i.e., the angle at the moon between the directions to the sun and to the earth, and the formula to convert from percent illumination to phase angle is below.
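
Treating the moon as a uniformly illuminated sphere, the standard geometric relation is as follows (a sketch, where k is the illuminated fraction, i.e., percent illumination divided by 100, and g is the phase angle):

\[
k = \frac{1 + \cos g}{2} \quad\Longrightarrow\quad g = \arccos\left(2k - 1\right)
\]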

Selenographic coordinates are the latitude and longitude system for the moon. The Mösting crater marks the center of the coordinate system. Key features to look for are Sinus Iridum in the north, Tycho in the south, the Grimaldi crater in the west, and Mare Crisium in the east. The image below is an S0 plot we took of the moon, rotated to match the selenographic coordinate system, and I've labeled each of the features mentioned above.

An S0 image of the moon converted to grayscale, with a few key features labeled so we can see how the moon looks when oriented in selenographic coordinates.

Image Processing

(1) The first script we wrote organizes all of the images into separate folders. When we take data, we point the telescope at one edge of the moon and let the moon drift through the telescope's field of view, taking images about every 30 seconds as it moves. All of the images taken in one stretch make up a swath, and we create a separate folder for each swath.

(2) Next we stitch the images together into one swath. This script lets the user select a common moon feature between two images, and the images are then overlaid at that feature. This repeats for all images, two at a time, until a full swath is formed.

(3) Crop all of the swaths so that they are the same size and there are no issues with rotation in the following step.

(4) To account for the rotation that occurs as we move the camera on the telescope mount, we must rotate each of the swaths before stitching them together. In this script, the user finds and selects two common points between two swaths. The angle between these two features is then calculated for each swath, and the difference between the angles is how much the second swath is rotated by. All rotation is done in reference to the first swath (a sketch of this step appears after the list).

(5) Now that the swaths are rotated, we stitch them together in much the same way as the image-stitching step (2), except this time the user finds one common moon feature between two swaths rather than between two images of individual sections.

(6) Crop the entire image of the moon to get rid of empty space and cut down on the size of the data file.

(7) Final rotation. A rotated image of the full moon appears and becomes the reference image. Similar to the swath rotation, the user finds and selects two features common to both moon images. The angle between the two features is then calculated for each moon image, and the difference between the angles is used to rotate the image into selenographic coordinates.

(8) The last script is a post-processing script used to calculate DoLP, AoP, and S0 and to produce figures complete with color bars, labels, and titles. To create these figures and process the data correctly, the user also selects a moon mask in this step so that we average only the moon pixels and leave out the background.
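
As a rough illustration of the angle calculation in step (4), here is a minimal Matlab sketch (an assumed workflow, not the lab's actual script; the variable names refSwath and newSwath are hypothetical):

% Minimal sketch of the swath rotation in step (4): the user clicks the same
% two lunar features on the reference swath and on the swath being aligned,
% and the difference between the angles of the lines joining the clicked
% points gives the rotation to apply.
figure; imshow(refSwath, []); title('Click two features on the reference swath');
[xr, yr] = ginput(2);

figure; imshow(newSwath, []); title('Click the same two features on this swath');
[xs, ys] = ginput(2);

angRef = atan2d(yr(2) - yr(1), xr(2) - xr(1));   % feature-line angle, reference swath
angNew = atan2d(ys(2) - ys(1), xs(2) - xs(1));   % feature-line angle, new swath

% Rotate the new swath so its features line up with the reference swath
% (the sign may need to be flipped depending on the image axis conventions).
aligned = imrotate(newSwath, angNew - angRef, 'bilinear', 'crop');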

Analyzing results

Below we will look at both DoLP and AoP images of the moon at two different phases: third quarter and full.

References/Helpful Resources

Contact Info

Email: sdabby@berkeley.edu
LinkedIn: See Profile