Exploring GCNS: the Gaia Catalogue of Nearby Stars

The GCNS is described in an open access A&A article by Smart et al. (2021). This is believed to be a nearly complete compilation of stars within 100 pc (0.1 kpc). Table 2 contains a detailed explanation of all the columns of data available in the catalog. Note that some of the header names are slightly different in the database than they are in this table. Why? To keep out the riffraff, I guess!

FITS tables are available on CDS. I found the connection a bit slow, but it worked with no hiccups. Click the FTP tab to get to the data tables. What you want is table1c, which is available in several formats. The plain text version is human-readable, and can be snarfed up using pandas.read_table.

In [1]:
# Somewhat modeled on Acquaviva's k-means example.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist
from sklearn import metrics
from cmcrameri import cm     # scientific colormaps, colorblind friendly https://www.fabiocrameri.ch/colourmaps/
# %matplotlib widget

Loading GCNS as a Pandas DataFrame

In [2]:
# Load catalogue into a Pandas dataframe.
import pandas as pd
df = pd.read_table('J_A+A_649_A6_table1c.dat.gz.txt', sep='|',
                   header=5,            # header is in row 5.
                   skiprows=[6,331319]) # Skip horizontal rules in the table.
df.rename(columns=lambda x: x.strip(), inplace=True) # Strip whitespace from headers
df  # display the dataframe to see that it read in properly.
Out[2]:
[DataFrame display: first and last five rows; columns include GaiaEDR3, RAdeg, e_RAdeg, DEdeg, e_DEdeg, Plx, e_Plx, pmRA, e_pmRA, pmDE, ..., 2MASS Ks and WISE W1-W4 photometry]

331312 rows × 74 columns

Minor Data Selection Issue

Note that the GCNS selection requires the 1st-percentile distance (Dist1) to be < 0.1 kpc, so the 50th-percentile (median) distances extend beyond 0.1 kpc. That may not sound like a big deal, but those additional stars can create artifacts in color-magnitude diagrams. This is a known issue, discussed in the GCNS paper. So we will cull the sample based on 50th-percentile distances.

In [3]:
# Sort the dataframe by distance (50th percentile, kpc)
df.sort_values(by=['Dist50'], inplace=True, 
               ignore_index=True) # Forget the old index values.
df['Dist50'] # prints index, Dist50
Out[3]:
0         0.00130
1         0.00184
2         0.00241
3         0.00255
4         0.00267
           ...   
331307    0.11864
331308    0.11880
331309    0.11890
331310    0.11907
331311    0.11931
Name: Dist50, Length: 331312, dtype: float64
In [4]:
# Illustrate the selection effect described in the previous cell.
plt.figure()
df['Dist50'].plot(xlabel='Data Frame Row Index (from 0)', ylabel='Distance (kpc), 50th percentile',
                  title='GCNS selection anomaly at large Dist50')
plt.xscale('log')
plt.yscale('log')
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
[Figure: GCNS selection anomaly at large Dist50]
In [5]:
# Downselect to Dist50 <= 0.1 kpc.
for j in range(1, 100000):
    if df['Dist50'].iloc[-j] <= 0.1:
        break
print('trimming ', j-1, ' rows.')
df = df.drop(index=df.index[(1-j):])
print(df['Dist50'])
trimming  30745  rows.
0         0.00130
1         0.00184
2         0.00241
3         0.00255
4         0.00267
           ...   
300562    0.10000
300563    0.10000
300564    0.10000
300565    0.10000
300566    0.10000
Name: Dist50, Length: 300567, dtype: float64
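Since the frame is already sorted by Dist50, a boolean mask would accomplish the same trim in one line; a minimal equivalent (not executed here):

df = df[df['Dist50'] <= 0.1].reset_index(drop=True)  # keep rows with Dist50 <= 0.1 kpc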
In [6]:
dist_pc = 1e3 * df['Dist50'].to_numpy()        # median distances in pc
Nstars = len(dist_pc)
cum_distro = (1+np.arange(Nstars))/Nstars      # empirical CDF (distances already sorted)
model_dist = np.array([1,100])
model_cum = (model_dist/model_dist[-1])**3     # uniform space density implies N(<d) proportional to d^3
plt.figure(figsize=[8,6])
plt.loglog(dist_pc, cum_distro, label='GCNS')
plt.plot(model_dist, model_cum,'--', label='cubic')
plt.xlabel('distance (pc)')
plt.ylabel('cumulative distribution (normalized)')
plt.legend()
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
[Figure: cumulative distance distribution vs. cubic model]

Color-Absolute Magnitude (Hertzsprung-Russell) Diagrams

Absolute Magnitude

Apparent brightnesses of stars are given in magnitudes, $m$, proportional to the logarithm of irradiance, $E$: $$ m = -\frac{5}{2} \log_{10}E + C, $$ where $C$ is an arbitrary constant, set by convention (e.g., Vega is magnitude 0). Each step in magnitude corresponds to a factor of $\sqrt[5]{100} \approx 2.512$ in brightness. To take distance out of this measurement, the absolute magnitude is defined as the apparent magnitude a star would have if observed from 10 pc: $$ M = m -5\left( \log_{10} d_{\mathrm{pc}} - 1 \right). $$ The absolute magnitude is independent of distance. It depends only on the spectral flux (watts per unit wavelength) of the star integrated over whatever passband we are observing.
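As a quick numerical check, with made-up values rather than GCNS data:

import numpy as np                    # already imported above
m, d_pc = 10.0, 25.0                  # hypothetical star: apparent magnitude 10 at 25 pc
M = m - 5*(np.log10(d_pc) - 1)        # absolute magnitude
print(M)                              # about 8.01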

Color Indices

Apparent magnitudes through different filters, e.g. $B$ and $V$, can be combined to form a color index, which is proportional to the log of the filter ratio. For example: $$ B-V = \frac{5}{2}\log_{10}\frac{E_V}{E_B} + C', $$ where $C'$ absorbs the two filters' zero-point constants. Since it depends only on the filter ratio, the color index is also independent of distance.
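For instance, with made-up irradiances (arbitrary units, zero-point constant ignored):

import numpy as np                    # already imported above
E_B, E_V = 2.0, 3.0                   # hypothetical irradiances through B and V
BV = 2.5*np.log10(E_V/E_B)            # color index, up to the zero-point constant
print(BV)                             # about 0.44: fainter in B than V, a reddish star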

CAMDs

The above two distance-independent measures of stars, absolute magnitude and color index, can be combined to form a color-absolute magnitude diagram (CAMD), also called a Hertzsprung-Russell (HR) diagram. On a CAMD, stellar populations can be distinguished, including the main sequence, white dwarfs, red giants, etc. The CAMD is a major tool in the study of stellar evolution.

Gaia

The Gaia orbital observatory (2013-2025) had three filter channels: $G$, $G_{\mathrm{RP}}$, and $G_{\mathrm{BP}}$. The best combination for color-magnitude diagrams is $M_G$ vs. $G - G_{\mathrm{RP}}$.

In [7]:
# Construct arrays for CAMDs (Color-Absolute Magnitude Diagrams)
# Filter passbands are at https://www.cosmos.esa.int/web/gaia/edr3-passbands
G    = pd.to_numeric(df['Gmag'],  errors='coerce').to_numpy()  # apparent mag, GAIA G Band (~400-850 nm)
G_BP = pd.to_numeric(df['BPmag'], errors='coerce').to_numpy()  # apparent mag, GAIA BP Band (~400-660 nm)
G_RP = pd.to_numeric(df['RPmag'], errors='coerce').to_numpy()  # apparent mag, GAIA RP Band (~630-920 nm)
M_G = G - 5 * ( np.log10(dist_pc) - 1 )                        # Absolute magnitude
In [8]:
plt.figure(figsize=(8,6))
plt.plot(G-G_RP, M_G,'.', markersize=0.5, alpha=0.1)
plt.gca().invert_yaxis()
plt.xlabel(r'$G-G_{\mathrm{RP}}$ (mag)')
plt.ylabel(r'Absolute $M_G$ (mag)')
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
[Figure: CAMD scatter plot, absolute M_G vs. G-G_RP]
In [9]:
xmin = -0.5
xmax = 2.5
ymin = -2
ymax = 20
asp = (xmax-xmin)/(ymax-ymin) # aspect ratio to make the plot square
CAMD1_hist, xedges, yedges = np.histogram2d(G-G_RP, M_G, bins=512, range=[[xmin,xmax],[ymin,ymax]])
log_CAMD1_hist = np.log10(np.maximum( np.transpose(CAMD1_hist), 10**(-0.1) ))
    # Careful job of log scaling for a nice display.

plt.figure(figsize=(8,6))
plt.imshow(log_CAMD1_hist, interpolation='nearest', extent=[xmin,xmax,ymax,ymin], 
           aspect=asp, cmap=cm.bilbao_r)
plt.colorbar(label=r'$\log_{10}$ # of stars')
plt.xlabel(r'$G-G_{\mathrm{RP}}$ (mag)')
plt.ylabel(r'Absolute $M_G$ (mag)')
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
[Figure: CAMD as a 2D histogram, log-scaled star counts]

Data Cleaning

Often, data contains bad or highly uncertain values that should not be trusted.

  • Earlier, I removed the statistically anomalous stars that were included in GCNS because of the peculiar choice to use Dist1 for the distance threshold. Those anomalous stars create artifacts in the color-magnitude diagram if left in.
  • In the next cell, I remove data with non-finite values such as IEEE NaN.
  • I could probably do more to clean up GCNS before using it.
  • Inspect your data carefully:
    • Were bad data flags added during processing/reduction?
    • If uncertainties are given, are they sometimes unusually large?
In [10]:
###### Data Prep ######
data1 = np.array([G-G_RP, M_G]).transpose() # 2 columns, with one row per star.

# Remove rows with NaNs
clean_data1 = data1[np.isfinite(data1).all(axis=1)]
print(data1.shape, clean_data1.shape)
(300567, 2) (294748, 2)

K-Means Clustering

Can clustering be used to "discover" a natural classification of stars?

In [11]:
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist
from sklearn import metrics

###### K-Means ######
N_CLUSTERS = 8
kmeans1 = KMeans(n_clusters=N_CLUSTERS, n_init=10)
%time kmeans1.fit(clean_data1)
clusters1 = kmeans1.predict(clean_data1)
centers1 = kmeans1.cluster_centers_

# Plot
plt.figure(figsize=[8,6])
# plt.style.use('tableau-colorblind10') 
plt.scatter(clean_data1[:, 0], clean_data1[:, 1], s = 0.5, alpha=0.2, c = clusters1, cmap='viridis')
plt.plot(centers1[:,0], centers1[:,1], 'o', label='centroids', 
         markerfacecolor='white', markeredgecolor='k', markersize=15, alpha=0.5 )
plt.xlabel(r'$G-G_{\mathrm{RP}}$ (mag)')
plt.ylabel(r'Absolute $M_G$ (mag)')
plt.title('K-Means')
plt.legend()
plt.gca().invert_yaxis()
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
CPU times: user 1.11 s, sys: 275 ms, total: 1.38 s
Wall time: 359 ms
[Figure: K-Means clusters and centroids on the CAMD]

Q: Why are the clusters horizontal bands?

In [12]:
# Plot
plt.figure(figsize=[8,6])
# plt.style.use('tableau-colorblind10') 
plt.scatter(clean_data1[:, 0], clean_data1[:, 1], s = 0.5, alpha=0.2, c = clusters1, cmap='viridis')
plt.plot(centers1[:,0], centers1[:,1], 'o', label='centroids', 
         markerfacecolor='white', markeredgecolor='k', markersize=15, alpha=0.5 )
plt.xlabel(r'$G-G_{\mathrm{RP}}$ (mag)')
plt.ylabel(r'Absolute $M_G$ (mag)')
plt.title('K-Means, with equal horizontal & vertical scales')
plt.legend()
plt.gca().invert_yaxis()
plt.axis('equal')
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
[Figure: K-Means clusters, equal horizontal and vertical scales]

Feature Scaling

Feature scaling affects how clustering works because the distance measure weighs every feature in its raw units.

Some options:

  1. Normalize by dividing out some physically meaningful scale.
  2. Normalize by dividing out the width of the data distribution (standard deviation or interquartile range).

Some ML tasks, such as regression, may be affected by offsets.

  1. Choose some appropriate zero point.
  2. Subtract a mean or median.
  3. For ratios, use a log scale.

When the ML is done, I still probably want to plot the results with the original scaling.
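As a sketch of option 2, scikit-learn's StandardScaler performs the same mean/std rescaling as the manual version in the next cell, and can invert it afterward for plotting:

from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()                      # per feature: subtract mean, divide by std
scaled = scaler.fit_transform(clean_data1)
unscaled = scaler.inverse_transform(scaled)    # back to original units for plotting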

In [13]:
# Rescale data (comment out one....)
offsetA = np.mean(clean_data1,axis=0)
rescaleB = np.std(clean_data1,axis=0)
rescaled_data2 = (clean_data1 - offsetA)/rescaleB
In [14]:
###### K-Means ######
kmeans2 = KMeans(n_clusters=N_CLUSTERS, n_init=10)
%time kmeans2.fit(rescaled_data2)
clusters2 = kmeans2.predict(rescaled_data2)
centers2 = kmeans2.cluster_centers_ * rescaleB + offsetA
    # Note how centers2 is carefully put back into clean_data1 coordinates, inverting
    # the transformation I used to make rescaled_data2.

# Plot
plt.figure(figsize=[8,6])
# plt.style.use('tableau-colorblind10') 
plt.scatter(clean_data1[:, 0], clean_data1[:, 1], s = 0.5, alpha=0.1, c = clusters2, cmap='viridis')
plt.plot(centers2[:,0], centers2[:,1], 'o', label='centroids', 
         markerfacecolor='white', markeredgecolor='k', markersize=15, alpha=0.5 )
plt.xlabel(r'$G-G_{\mathrm{RP}}$ (mag)')
plt.ylabel(r'Absolute $M_G$ (mag)')
plt.title('K-Means: Rescaled Data')
plt.legend()
plt.gca().invert_yaxis()
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
CPU times: user 941 ms, sys: 220 ms, total: 1.16 s
Wall time: 303 ms
[Figure: K-Means clusters after rescaling the data]

DBSCAN: Density-Based Spatial Clustering of Applications with Noise

In [15]:
####### DBSCAN #######
from sklearn.cluster import DBSCAN
%time db = DBSCAN(eps=0.15, min_samples=200).fit(rescaled_data2) # Try eps=0.15, min_samples=300
labels = np.array(db.labels_) + 1
n_clusters_ = np.amax(labels); n_noise_ = list(labels).count(0)
print( 'Found {:d} clusters, {:d} noise points.'.format(n_clusters_,n_noise_) )

# Plot
plt.figure(figsize=(8,6))
plt.scatter((clean_data1[:, 0])[labels>0], (clean_data1[:, 1])[labels>0], 
            c = labels[labels>0], marker='.', s=1, alpha=0.1)
plt.plot((clean_data1[:, 0])[labels==0],(clean_data1[:, 1])[labels==0], 
         'k.', markersize=1, alpha=0.1, label='noise (gray/black)')
plt.xlabel(r'$G-G_{\mathrm{RP}}$ (mag)')
plt.ylabel(r'Absolute $M_G$ (mag)')
plt.title('DBSCAN')
plt.legend()
plt.gca().invert_yaxis()
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
CPU times: user 16.3 s, sys: 2.66 s, total: 19 s
Wall time: 19 s
Found 3 clusters, 6796 noise points.
[Figure: DBSCAN clusters and noise points on the CAMD]

Conclusions Regarding Clustering

  • Data scaling strongly affects (at least K-Means) clustering.
  • K-Means clustering of the CAMD requires a large number of clusters to distinguish white dwarfs from the main sequence. A side effect is that the main sequence is divided into multiple clusters. Perhaps this segmentation is not crazy, if you compare it with the OBAFGKM spectral classification of stars.
  • DBSCAN naturally follows the contours of an irregularly shaped cluster. It quite naturally distinguished white dwarf, main sequence, and even red giant stars.

Introducing a new color dimension

As I mentioned earlier, Gaia had three filter channels: $G$, $G_{\mathrm{RP}}$, and $G_{\mathrm{BP}}$.

  • Recall that color indices are formed by subtracting magnitudes from different filters.
    • By convention, the relatively redder channel is subtracted from the relatively bluer one.
    • Since increasing magnitude corresponds to decreasing brightness, temperature decreases as color index increases.
  • We have so far used $G-G_{\mathrm{RP}}$ as a color index.
  • Why not $G_{\mathrm{BP}}-G$?
In [16]:
xmin = -1
xmax = 4
ymin = -2
ymax = 20
asp = (xmax-xmin)/(ymax-ymin) # aspect ratio to make the plot square
#CAMD2_hist, xedges, yedges = np.histogram2d(G_BP-G_RP, M_G, bins=512, range=[[xmin,xmax],[ymin,ymax]])
CAMD2_hist, xedges, yedges = np.histogram2d(G_BP-G, M_G, bins=512, range=[[xmin,xmax],[ymin,ymax]])
log_CAMD2_hist = np.log10(np.maximum( np.transpose(CAMD2_hist), 10**(-0.1) ))
    # Careful job of log scaling for a nice display.

plt.figure(figsize=(8,6))
plt.imshow(log_CAMD2_hist, interpolation='nearest', extent=[xmin,xmax,ymax,ymin], aspect=asp, cmap=cm.bilbao_r)
plt.colorbar(label=r'$\log_{10}$ # of stars')
plt.xlabel(r'$G_{\mathrm{BP}}-G$ (mag)')
plt.ylabel(r'Absolute $M_G$ (mag)');
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
[Figure: CAMD using G_BP-G as the color index]

Counterfactual Exercise: Reconstruct a Missing Filter Channel

The preferred Gaia filter combination for color-magnitude diagrams is what we used previously: $M_G$ vs. $G - G_{\mathrm{RP}}$. As you can see above, a CAMD formed using $G_{\mathrm{BP}} - G$ in place of $G - G_{\mathrm{RP}}$ has a strange artifact, a swoosh that links the lower ends of the main sequence and white dwarf populations. And as you can see in the color-color diagram below, the relationship between the two color indices is complicated. I now pose a counterfactual problem:

Suppose the Gaia $G_{\mathrm{RP}}$ channel had failed partway through building the GCNS. Could we reconstruct the missing information for subsequent observations?

Supervised ML

Supervised ML singles out a feature (or features) as the target, the parameter(s) we are trying to infer from all the rest. That is, we want to use known features to infer something that is unknown.

We need as many examples as possible for which the target is known, and these will be divided into two or three sets:

  1. Training set for building the ML model. This is the β€œfitting” step.
  2. Validation set (optional) for tuning hyperparameters.
  3. Test set for assessing the performance of the ML model.

Important: fit only the training data! Validation and test sets are kept strictly separate from the training data. A typical train/test split is 80/20. It can be very informative to repeat the selection of data, training, validation, and testing 5 or so times to assess the robustness of the modeling approach.
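A minimal sketch of that repeated-split idea, using scikit-learn's ShuffleSplit on a small synthetic stand-in dataset (the arrays here are illustrative, not GCNS):

from sklearn.model_selection import ShuffleSplit
from sklearn.neighbors import KNeighborsRegressor
import numpy as np

X = np.random.rand(100, 2)                  # synthetic features
y = X[:, 0] + 0.1*np.random.randn(100)      # synthetic target
for i, (tr, te) in enumerate(ShuffleSplit(n_splits=5, test_size=0.2).split(X)):
    model = KNeighborsRegressor(n_neighbors=3).fit(X[tr], y[tr])
    print('split', i, 'R^2 =', round(model.score(X[te], y[te]), 3))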

In supervised ML, it is sometimes necessary to bias the training sample to ensure that important qualitative aspects of the data are adequately represented. For example, perhaps only a tiny fraction of the stars in GCNS are red giants. If red giants are important to you, you might sample the data in such a way as to overrepresent that category.

If we have a dearth of data, it may be necessary to use the leave-one-out approach (leave-one-out cross-validation, LOOCV), where a single data point is used for validation and/or test. Training, validation, and testing are repeated iteratively, letting each member of the dataset serve in the validation/test role.
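A minimal LOOCV sketch with scikit-learn's LeaveOneOut, again on tiny synthetic data (LOOCV on all of GCNS would be hopelessly slow):

from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsRegressor
import numpy as np

X = np.random.rand(20, 2)                   # tiny synthetic dataset
y = X[:, 0] + 0.1*np.random.randn(20)
scores = cross_val_score(KNeighborsRegressor(n_neighbors=3), X, y,
                         cv=LeaveOneOut(), scoring='neg_mean_squared_error')
print('LOOCV mean squared error:', -scores.mean())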

Regression

If the target is a continuous variable, the ML model is called a regression.

Estimation of discrete variables (e.g., yes/no or class labels) is called classification.

In [18]:
# Dataset [Note: Column 0 is the "target"]
tmp_data = np.array([G-G_RP,G_BP-G, M_G]).transpose()

# Clean -- Remove rows with NaNs
clean_cdata = tmp_data[np.isfinite(tmp_data).all(axis=1)]
print(tmp_data.shape, clean_cdata.shape)

# Separate target from data features
ctarget = clean_cdata[:,0]
cdata = clean_cdata[:, 1:]
(300567, 3) (294704, 3)

Bivariate Regression

Try something simple first!

  • Model $G-G_{\mathrm{RP}}$ as a function of $G_{\mathrm{BP}}-G$
  • "Soft L1" fit using training data (least squares, smoothly transitioning to min abs dev for points beyond f_scale of model)
  • Train-test split emulates the idea of pre-post filter failure
In [19]:
# Train-test split
from sklearn.model_selection import train_test_split
training_fraction = 0.5  # In our counterfactual scenario, this represents the completeness 
                         # of the GCNS survey when the G_RP channel fails.

x_train, x_test, y_train, y_test = train_test_split(cdata, ctarget, test_size=(1-training_fraction))

# special versions for bivariate regression
xb_train = x_train[:,0]  # only need color in the independent variable (leave out M_G)
xb_test = x_test[:,0]    # only need color in the independent variable (leave out M_G)
yb_train = y_train
yb_test = y_test
In [20]:
from scipy.optimize import least_squares
from scipy.interpolate import CubicSpline

#def G_RP_model(x, prams):
#    """
#    Bivariate regression model for G-G_RP as a function of G_BP-G.
#    """
#    return np.log( prams[0] + prams[1]*x + prams[2]*(x**2) )
#prams0 = [1,2,-0.3]  # initial guess, chi-by-eye

def G_RP_model(x, ys):
    """
    Bivariate regression model for G-G_RP as a function of G_BP-G.
    The parameters are the tie points of a cubic spline.
    """
    xs = [0, 0.25, 0.5, 1, 2, 2.5]
    spline = CubicSpline(xs, ys, bc_type='natural')
    return spline(x)

prams0 = [-0.003, 0.422, 0.657, 0.995, 1.321, 1.43]  # initial guess, chi-by-eye

def objective(p, x, y):
    """
    Objective function (residuals).
    """
    return G_RP_model(x, p) - y

# Fit using 'soft_l1' (similar to absolute deviation)
result = least_squares(objective, prams0, args=(xb_train, yb_train), loss='soft_l1', f_scale=0.1)
prams = result.x
print("Initial guess parameters: ", prams0, " Fitted: ", prams)

GBPm = np.linspace(-0.2, 3.2, 200)
GRPm_0 = G_RP_model(GBPm, prams0)
GRPm = G_RP_model(GBPm, prams)


# Color-Color Diagram
xmin = -1
xmax = 4
ymin = -0.5
ymax = 2.5
asp = (xmax-xmin)/(ymax-ymin) # aspect ratio to make the plot square
CCD1_hist, xedges, yedges = np.histogram2d(G_BP-G, G-G_RP, bins=1024, range=[[xmin,xmax],[ymin,ymax]])
log_CCD1_hist = np.log10(np.maximum( np.transpose(CCD1_hist), 10**(-0.1) ))
    # Careful job of log scaling for a nice display.

plt.figure(figsize=(8,6))
plt.imshow(log_CCD1_hist, interpolation='nearest', extent=[xmin,xmax,ymin,ymax], aspect=asp, 
           origin='lower', cmap=cm.bilbao_r)
plt.colorbar(label=r'$\log_{10}$ # of stars')
plt.plot(GBPm,GRPm_0,'c--',label='CCK initial guess')
plt.plot(GBPm,GRPm,'b:',label='alleged best fit :(')
plt.xlabel(r'$G_{\mathrm{BP}}-G$ (mag)')
plt.ylabel(r'$G-G_{\mathrm{RP}}$ (mag)');
plt.legend()
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
Initial guess parameters:  [-0.003, 0.422, 0.657, 0.995, 1.321, 1.43]  Fitted:  [0.13646321 0.41745564 0.66378161 1.00893156 1.33836489 1.45639709]
[Figure: color-color diagram with initial-guess and fitted splines]
In [21]:
### Evaluate the bivariate model
predictionsb = G_RP_model(xb_test, prams0)  # use the chi-by-eye parameters, since the fitted spline looked worse (the 'alleged best fit' above)

plt.figure(figsize=(8,6))
plt.plot(yb_test, predictionsb,'.',markersize=1,alpha=0.1, label='test data')
plt.xlabel(r'$G-G_{RP}$ (mag)')
plt.ylabel(r'Bivariate model of $G-G_{\mathrm{RP}}$ (mag)')
plt.xlim((-0.3,1.8))
plt.ylim((-0.3,1.8))
foo = [np.amin(y_test),np.amax(y_test)]
plt.plot(foo,foo, '--', label='y=x')
plt.legend()
plt.show()
[Figure: bivariate model vs. true G-G_RP for test data]
In [22]:
plt.figure(figsize=(10,6))

plt.subplot(121)
plt.plot(predictionsb, x_test[:,1],'.',markersize=1,alpha=0.1)
plt.title('Test Data')
plt.xlabel(r'Bivariate model of $G-G_{\mathrm{RP}}$ (mag)')
plt.ylabel(r'Absolute magnitude $M_G$')
plt.xlim((-0.4,2.2))
plt.ylim((-1,19))
plt.gca().invert_yaxis()
plt.rcParams['figure.constrained_layout.use'] = True

plt.subplot(122)
plt.plot(y_test, x_test[:,1],'.',markersize=1,alpha=0.1)
plt.title('Test Data')
plt.xlabel(r'True $G-G_{\mathrm{RP}}$ (mag)')
plt.ylabel(r'Absolute magnitude $M_G$')
plt.xlim((-0.4,2.2))
plt.ylim((-1,19))
plt.gca().invert_yaxis()
plt.rcParams['figure.constrained_layout.use'] = True

plt.show()
[Figure: side-by-side CAMDs, bivariate model vs. true color]

K-Nearest-Neighbors (KNN) Regression

from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

KNN is relatively simple. It is nonparametric (aside from the choice of distance measure), and works well for relatively low-dimensional problems. For higher dimensional problems, if KNN is not cutting it for you, consider random forests.

Parameters

  • $K$ is the number of neighbors to consider.
  • weights = { 'uniform' | 'distance' | [callable] }
  • test_size (fractional) for train_test_split()

Procedure

  • The target is estimated as the mean of the targets of the $K$ nearest neighbors (a distance-weighted mean if weights='distance'), as in the sketch below.
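A toy illustration of that averaging, with hypothetical one-feature data:

from sklearn.neighbors import KNeighborsRegressor
import numpy as np

X = np.array([[0.0], [1.0], [2.0], [3.0]])   # hypothetical feature values
y = np.array([0.0, 1.0, 4.0, 9.0])           # hypothetical targets
knn = KNeighborsRegressor(n_neighbors=2, weights='uniform').fit(X, y)
print(knn.predict([[1.4]]))                  # mean of targets at x=1 and x=2 -> [2.5]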

Discuss:

Our task is to overcome the loss of $G_{\mathrm{RP}}$. Consider the following possible statements of the regression problem:

  1. Infer $G_{\mathrm{RP}}$ from $\left\{ G,\ G_{\mathrm{BP}} \right\}$.
  2. Infer $M_{\mathrm{RP}}$ from $\left\{ M_G,\ M_{\mathrm{BP}} \right\}$.
  3. Infer $G - G_{\mathrm{RP}}$ from $\left\{ M_G,\ G_{\mathrm{BP}} - G \right\}$.

Would you expect the choice of representation to influence the likelihood of success?

In [23]:
from sklearn.neighbors import KNeighborsRegressor

Feature Scaling

e.g., from sklearn.preprocessing import StandardScaler

As we observed with K-Means clustering, feature scaling can be very important for ML algorithms. When employing feature scaling for regression or other supervised ML tasks, however, it is essential to perform the scaling after the train/test split to avoid data leakage and misleading results. I commend to you the following best practices:

  1. Perform the train/test split first.
  2. Retain the original values for plotting and reporting statistics, so that the original units and meaning will be preserved.
  3. Use only training data statistics to work out the appropriate scaling.
  4. Apply the same scaling to the training and test data (i.e., do not recalculate the scaling parameters based on the test data!).

For more information, see Acquaviva §§2.3.1, 3.4.3.
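A minimal sketch of those practices with StandardScaler, applied to the x_train/x_test split from earlier:

from sklearn.preprocessing import StandardScaler
scaler = StandardScaler().fit(x_train)   # scaling statistics from training data only
x_train_s = scaler.transform(x_train)
x_test_s = scaler.transform(x_test)      # same transform; never refit on the test data
# Keep x_train/x_test themselves around for plots in the original units.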

I have omitted feature scaling for this exercise.

I tried KNN with and without feature scaling at 50% training fraction; scaling did not appear to make a difference. Maybe it would at a lower training fraction. TBR.

In [24]:
# Run KNN on training data
knn_regressor = KNeighborsRegressor(n_neighbors=3, weights='distance')
knn_regressor.fit(x_train, y_train)

# Apply the model to the test data
predictions = knn_regressor.predict(x_test)

# Note: feature scaling was omitted above, so there is no scaling to undo before plotting.
In [25]:
plt.figure(figsize=(8,6))
plt.plot(y_test, predictions,'.',markersize=1,alpha=0.1, label='test data')
plt.xlabel(r'$G-G_{\mathrm{RP}}$ (mag)')
plt.ylabel(r'KNN modeled $G-G_{\mathrm{RP}}$ (mag)')
plt.xlim((-0.3,2))
plt.ylim((-0.3,2))
foo = [np.amin(y_test),np.amax(y_test)]
plt.plot(foo,foo, '--', label='y=x')
plt.legend()
plt.show()
[Figure: KNN model vs. true G-G_RP for test data]
In [26]:
plt.figure(figsize=(10,6))

plt.subplot(121)
plt.plot(predictions, x_test[:,1],'.',markersize=1,alpha=0.1)
plt.title('Test Data')
plt.xlabel(r'KNN modeled $G-G_{\mathrm{RP}}$ (mag)')
plt.ylabel(r'Absolute magnitude $M_G$')
plt.xlim((-0.4,2.2))
plt.ylim((-1,19))
plt.gca().invert_yaxis()
plt.rcParams['figure.constrained_layout.use'] = True

plt.subplot(122)
plt.plot(y_test, x_test[:,1],'.',markersize=1,alpha=0.1)
plt.title('Test Data')
plt.xlabel(r'True $G-G_{\mathrm{RP}}$ (mag)')
plt.ylabel(r'Absolute magnitude $M_G$')
plt.xlim((-0.4,2.2))
plt.ylim((-1,19))
plt.gca().invert_yaxis()
plt.rcParams['figure.constrained_layout.use'] = True

plt.show()
[Figure: side-by-side CAMDs, KNN model vs. true color]

Conclusions on Regression

  • When the $G_{\mathrm{BP}}-G$ color variable is used for color-magnitude diagrams, there is a peculiar artifact linking the lower main sequence to the white dwarf branch.
  • I first tried mapping directly from $G_{\mathrm{BP}}-G$ to $G-G_{\mathrm{RP}}$ with a simple bivariate regression. Not surprisingly, the artifact cannot be removed by simply distorting the color axis.
  • The KNN multivariate model, taking absolute magnitude into account, reproduced $G-G_{\mathrm{RP}}$ with hardly a trace of the artifact.