Exploring GCNS—Gaia Catalogue of Nearby Stars¶
Prelude: Demonstrating the Kolmogorov-Smirnov Test
The GCNS is described in an open access A&A article by Smart et al. (2021). This is believed to be a nearly complete compilation of stars within 100 pc (0.1 kpc). Table 2 contains a detailed explanation of all the columns of data available in the catalog. Note that some of the header names are slightly different in the database than they are in this table. Why? To keep out the riffraff, I guess!
FITS tables are available on CDS. I found the connection a bit slow, but it worked with no hiccups. Click the FTP tab to get to the data tables. What you want is table1c, which is available in several formats. The plain text version is human-readable, and can be snarfed up using pandas.read_table.
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm # Simple way to display a progress bar for Python for loops.
# Load catalogue into a Pandas dataframe.
import pandas as pd
df = pd.read_table('J_A+A_649_A6_table1c.dat.gz.txt', sep='|',
header=5, # header is in row 5.
skiprows=[6,331319]) # Skip horizontal rules in the table.
df.rename(columns=lambda x: x.strip(), inplace=True) # Strip whitespace from headers
df # display the dataframe to see that it read in properly.
(Output: 331312 rows × 74 columns. The displayed columns include GaiaEDR3 source IDs, positions (RAdeg, DEdeg) with uncertainties, parallaxes (Plx), proper motions (pmRA, pmDE), and Ks- and WISE W1–W4-band magnitudes with their uncertainties; the wide table is truncated in this view.)
# Sort the dataframe by distance (Dist50 = 50th-percentile distance, in kpc).
# Note that the GCNS selection uses the 1st-percentile distance (< 0.1 kpc),
# so some Dist50 values will exceed 0.1 kpc.
df.sort_values(by=['Dist50'], inplace=True,
ignore_index=True) # Forget the old index values.
df['Dist50'] # prints index, Dist50
0 0.00130
1 0.00184
2 0.00241
3 0.00255
4 0.00267
...
331307 0.11864
331308 0.11880
331309 0.11890
331310 0.11907
331311 0.11931
Name: Dist50, Length: 331312, dtype: float64
# Illustrate the selection effect described in the previous cell.
plt.figure()
df['Dist50'].plot(xlabel='Data Frame Row Index (from 0)', ylabel='Distance (kpc), 50th percentile',
title='GCNS selection anomaly at large Dist50')
plt.xscale('log')
plt.yscale('log')
plt.show()
# Downselect to Dist50 ≤ 0.1 kpc. Since the dataframe is sorted by Dist50,
# the rows beyond the limit are all at the end.
R = 0.1 # GCNS radius limit in kpc
R_pc = 1000*R # GCNS radius in pc
n_trim = (df['Dist50'] > R).sum()
print('trimming ', n_trim, ' rows.')
df = df[df['Dist50'] <= R]
print(df['Dist50'])
trimming 30745 rows.
0 0.00130
1 0.00184
2 0.00241
3 0.00255
4 0.00267
...
300562 0.10000
300563 0.10000
300564 0.10000
300565 0.10000
300566 0.10000
Name: Dist50, Length: 300567, dtype: float64
dist_pc = 1e3 * df['Dist50'].to_numpy()
Nstars = len(dist_pc)
cum_distro = (0.5+np.arange(Nstars))/Nstars
model_cum = (dist_pc/R_pc)**3 # Null-hypothesis CDF at dist_pc: uniform space density gives N(<r) ∝ r^3.
plt.figure()
plt.loglog(dist_pc, cum_distro, label='GCNS')
plt.plot(dist_pc, model_cum,'--', label='cubic')
plt.xlabel('distance (pc)')
plt.ylabel('cumulative distribution (normalized)')
plt.legend()
plt.show()
Kolmogorov-Smirnov (KS) Test¶
The KS test (e.g., Numerical Recipes, 2nd ed., §14.3) asks: Do two distributions differ? The KS statistic, $D$, is defined as the maximum absolute difference between two CDFs: $$ D = \max_x \left| C(x) - C'(x) \right|, $$ where $C(x)$ is the sample CDF evaluated for $N$ discrete samples, $x \in x_1, x_2, \ldots, x_N$.
- In the one-sample test, $C'(x)$ is a proposed analytic CDF. The effective sample size is $N_e=N$.
- In the two-sample test, $C'(x)$ is evaluated for $N'$ discrete samples, $x \in x'_1, x'_2, \ldots, x'_{N'}$. The effective sample size is $$ N_e = \frac{N N'}{N+N'}.$$
Note: Many authors include a factor of $\sqrt{N_e}$ in the definition of $D$. I have instead followed the NR approach, in which $D \in (0,1)$ is a useful measure of the degree of discrepancy, independent of sample size. This convention is similar to how we define correlation coefficients (Pearson's $r$, Spearman's $r_s$, Kendall's $\tau$). See the "What does this mean?" heading below.
The $p$-value is $$ \Pr(D \ge \text{observed}) = Q_{\mathrm{KS}}(\lambda) \equiv 2\sum_{j=1}^{\infty}{(-1)^{j-1}e^{-2j^2\lambda^2}}, $$ where $$ \lambda \equiv D \left( \sqrt{N_e} + 0.12 + \frac{0.11}{\sqrt{N_e}} \right). $$
Critical $p$-values do not depend on the form of the distributions $C, C'$. The terms after $\sqrt{N_e}$ in the $\lambda$ formula are intended to correct for small $N_e$. Using these terms, NR (see references therein) reports that the $p$-values are accurate for $N_e \ge 4$. I have verified this. Moreover, as we shall see, the sum for $Q_{\mathrm{KS}}(\lambda)$ converges extremely rapidly for $\lambda > 1$. Even the first term ($j=1$) is sufficient for all practical purposes. Note that the 98% confidence level (confidence refers to $1-p$) is at about $\lambda = 1.5$, and 90% at about $\lambda = 1.2$, so $\lambda < 1$ never results in rejection of the null hypothesis at any conventional confidence level.
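As a quick check on the convergence claim (my own arithmetic): at $\lambda = 1.2$, $$ Q_{\mathrm{KS}}(1.2) = 2\left(e^{-2.88} - e^{-11.52} + \cdots\right) \approx 2\left(0.0561 - 1.0\times10^{-5}\right) \approx 0.112, $$ so truncating the sum after $j=1$ changes the $p$-value by only $\sim 2\times10^{-5}$, and the 90% confidence level indeed sits near $\lambda \approx 1.2$.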
!!! KS Implementation Note !!!¶
To calculate $D$ correctly, we must exercise care in thinking about the cumulative distribution of a discrete set of points $x_1,x_2,...x_N$. The sample CDF jumps by $1/N$ at each point, starting at $0$ for $x < x_1$, and ending at $1$ for $x > x_N$. Because of the jumps, the sample CDF is technically undefined at the data points. Practically, we may consider the values on the data points as double-valued: $$ C(x_i) = \frac{i - \frac{1}{2} \pm \frac{1}{2}}{N} $$ Consequently, for the one-sample test, $$ D = \max_x \left| C(x) - C'(x) \right| = \max_i \left| \frac{i - \frac{1}{2}}{N} - C'(x_i) \right| + \frac{1}{2N}. $$ If the above prescription is not adhered to, the $p$-values implied by $Q_{\mathrm{KS}}$ will be incorrect!
_**Above:** Implementation of $C(x)$ and $D$ in relation to data $x_i$ and analytic distribution $C'(x)$ for the one-sample KS test._
The two-sample test is more complicated since generally $N \ne N'$. There are then two stairstep distributions, which must be compared at every $x_i$ and at every $x'_j$; a sketch of that case follows the one-sample code below.
In Python, we index from $0$, so the implementation is something like this:
# Let x be the sorted data array, with N elements.
def Cp(x):
    """
    Analytic (null-hypothesis) CDF evaluated at x.
    """
    cdf = ...   # fill in the analytic CDF here
    return cdf

C = (np.arange(N) + 0.5) / N            # Sample CDF, center of the jump at each x.
D = np.amax(abs(C - Cp(x))) + 0.5/N     # KS statistic (NR style)
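For the two-sample case mentioned above (not needed in this notebook), a minimal sketch: pool the two samples and compare the two stairstep CDFs at every pooled point, which is where the maximum difference must occur. No $\frac{1}{2N}$ correction enters here, because both CDFs are stairsteps evaluated exactly at their jumps.
def ks_two_sample_D(x, y):
    """
    Two-sample KS statistic D = max_t |C(t) - C'(t)| for samples x (N points) and y (N' points).
    """
    x = np.sort(x)
    y = np.sort(y)
    pooled = np.sort(np.concatenate((x, y)))
    # side='right' counts members <= t, i.e., the CDF value just after each jump.
    Cx = np.searchsorted(x, pooled, side='right') / len(x)
    Cy = np.searchsorted(y, pooled, side='right') / len(y)
    return np.max(np.abs(Cx - Cy))
The effective sample size $N_e = N N'/(N+N')$ from above then feeds into the same $\lambda$ and $Q_{\mathrm{KS}}$.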
def Q_KS(jmax, lam, progress=False):
    """
    Q_KS(lambda) defined in Numerical Recipes (2nd ed.) for calculating
    p-values for the KS test.
    lam: lambda, the Ne-dependent version of the KS statistic.
    jmax: maximum index for truncating the infinite sum.
    progress: if True, display a progress bar while calculating the sum.
    Result: probability, under the null hypothesis, of a lambda this large or larger.
    """
    result = 0
    terms = tqdm(range(1, jmax+1)) if progress else range(1, jmax+1)
    for j in terms:
        result += (-1.0)**(j-1) * np.exp( -2.0 * (j*lam)**2 )
    return 2*result
lam_arr = np.linspace(0.01,3, num=1000)
plt.figure()
plt.loglog(lam_arr, Q_KS(1,lam_arr),'--', label=r'$j_{\mathrm{max}}=1$ (adequate)')
plt.plot(lam_arr, Q_KS(2,lam_arr), label=r'$j_{\mathrm{max}}=2$')
plt.plot(lam_arr, Q_KS(5,lam_arr), '--', label=r'$j_{\mathrm{max}}=5$')
plt.plot(lam_arr, Q_KS(10,lam_arr), label=r'$j_{\mathrm{max}}=10$')
plt.plot(lam_arr, Q_KS(11,lam_arr), '--', label=r'$j_{\mathrm{max}}=11$')
plt.plot(lam_arr, Q_KS(1000,lam_arr), 'k', label=r'$j_{\mathrm{max}}=1000$')
plt.xlabel(r'$\lambda$')
plt.ylabel(r'$Q_{\mathrm{KS}}(\lambda)$')
plt.legend()
plt.show()
Applying KS to the distribution of star distances¶
I could use scipy.stats.kstest, but that would require me to write a separate routine to evaluate the analytic CDF. Since KS is very simple, and I have done all of the steps myself anyway to generate the plots above, using scipy.stats would only make more work.
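That said, for anyone who does want a cross-check, here is a minimal sketch with scipy (assuming dist_pc and R_pc from the cells above; scipy computes its $p$-value from its own exact/asymptotic null distribution rather than the NR $\lambda$ formula, so small differences from the numbers below are expected):
from scipy import stats
cubic_cdf = lambda d: np.clip(d / R_pc, 0.0, 1.0)**3   # null-hypothesis CDF (uniform density)
ks_result = stats.kstest(dist_pc, cubic_cdf)
print("scipy D =", ks_result.statistic, " p =", ks_result.pvalue)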
# Back to our star sample....
plt.figure()
plt.semilogx(dist_pc, cum_distro - model_cum)
plt.xlabel('distance (pc)')
plt.ylabel('CDF discrepancy')
plt.show()
# Kolmogorov-Smirnov (K-S) test
Ne = len(dist_pc) # For testing data against an analytic CDF, effective N is just N.
D = np.amax(np.abs(cum_distro - model_cum)) + 0.5/Ne # KS D-statistic
Lambda = D * ( np.sqrt(Ne) + 0.12 + 0.11/np.sqrt(Ne) )
print("Sample size, Ne = ", Ne)
print("KS statistic D = ", D)
print("Lambda = ", Lambda)
print("p = Q_KS(Lambda) = ", Q_KS(1000,Lambda))
Sample size, Ne =  300567
KS statistic D =  0.012503124751393419
Lambda =  6.8562148843975175
p = Q_KS(Lambda) =  2.9558046389820403e-41
What does this mean?¶
| Result |  | Interpretation |
|---|---|---|
| Small $D$ | $\implies$ | Star distribution is nearly uniform! |
| $p\rightarrow 0$ | $\implies$ | Star distribution is NOT uniform! |
Is there a contradiction? Discuss.
The low $p$-value says we can reject the null hypothesis, which was that the distribution is uniform. The small value of $D=0.0125$ tells us that the cumulative distribution deviates from the uniform-density model by at most 1.25%. The $p$-value tells us that this small deviation is nonetheless statistically significant.
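To put numbers on this (my arithmetic, using the formulas above): with $N_e \approx 3\times10^5$, $\sqrt{N_e} \approx 548$, so $\lambda \approx 0.0125 \times 548 \approx 6.9$, far beyond the $\lambda \approx 1.5$ that marks 98% confidence. Turned around, at this sample size any deviation $D \gtrsim 1.5/548 \approx 0.003$ would be flagged as significant.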
Aside: Anderson-Darling¶
The KS test is robust and quite useful, but there is a more powerful test called Anderson-Darling (AD). Unfortunately, the $p$-values for AD depend on the form of the expected distribution, $C'(x)$. The current implementation, scipy.stats.anderson, is limited to comparison with a short list of distributions, and even then only a handful of critical values are tabulated. Consequently, if you want to use the AD test, be prepared to generate your own $p$-values by Monte Carlo.
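To illustrate what is (and is not) available, a minimal sketch of the scipy call, using a toy normal sample rather than the GCNS data:
from scipy import stats
rng = np.random.default_rng(0)
toy = rng.normal(size=500)              # toy data, just to show the interface
ad = stats.anderson(toy, dist='norm')   # only a few named distributions are supported
print(ad.statistic)                     # AD statistic
print(ad.critical_values)               # the handful of tabulated critical values...
print(ad.significance_level)            # ...at these significance levels (percent)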
Are the $p$-values given by $Q_{\mathrm{KS}}$ correct?¶
Monte Carlo test
If you have time, it is worth running the code a few times. Suggested parameters:
| $\mathtt{Ntrials}$ | $\mathtt{Nstarsmc}$ |
|---|---|
| $\mathtt{1000000}$ | $\mathtt{10}$ |
| $\mathtt{1000000}$ | $\mathtt{100}$ |
| $\mathtt{1000000}$ | $\mathtt{1000}$ |
| $\mathtt{10000}$ | $\mathtt{Ne}$ |
Ntrials = 1000000
Nstarsmc = 100 # Set equal to Ne to replicate the GCNS result above; use a smaller value for a quick test.
Ncube = int( 2*(1+6/np.sqrt(Nstarsmc))*Nstarsmc ) # Stars in the cube (need extras: only ~pi/6 of the cube lies within r < R_pc).
cum_distro_mc = (0.5+np.arange(Nstarsmc))/Nstarsmc
D_mc = np.empty((Ntrials))
for i in tqdm(range(Ntrials)):
    # Create a uniformly distributed sample of "stars" within 0.1 kpc.
    x = R_pc * np.random.random((Ncube))
    y = R_pc * np.random.random((Ncube))
    z = R_pc * np.random.random((Ncube))
    r = np.sqrt(x**2 + y**2 + z**2)
    r = r[r<R_pc]    # Select only stars within the GCNS distance limit.
    r = r[:Nstarsmc] # Keep the same number of stars as the trimmed GCNS sample.
    r = np.sort(r)   # Sorted distances
    cum_model_mc = (r/R_pc)**3
    D_mc[i] = np.amax(np.abs(cum_model_mc - cum_distro_mc)) + 0.5/Nstarsmc
100%|██████████████████████████████| 1000000/1000000 [00:11<00:00, 89761.50it/s]
D_mc = np.sort(D_mc)
lam_mc = D_mc * ( np.sqrt(Nstarsmc) + 0.12 + 0.11/np.sqrt(Nstarsmc) )
cum_mc = np.flip((0.5+np.arange(Ntrials))/Ntrials) # Flipped cum distro, as with Q_KS.
print('Gonna use a very large jmax in Q_KS, this could take awhile....')
QKS_mc = Q_KS(10000, lam_mc, progress=True)
plt.figure()
plt.semilogy(lam_mc, cum_mc, label='MonteCarlo')
plt.plot(lam_mc, QKS_mc, "--", label=r'$Q_{\mathrm{KS}}(\lambda)$')
plt.ylabel('Cumulative Distribution of Trials')
plt.xlabel(r'$\lambda$')
plt.legend()
plt.show()
Gonna use a very large jmax in Q_KS, this could take awhile....
100%|████████████████████████████████████| 10000/10000 [00:22<00:00, 435.20it/s]
plt.figure()
plt.plot(lam_mc, (QKS_mc-cum_mc)/cum_mc)
plt.ylim((-0.2,0.2))
plt.ylabel(r'Fractional Discrepancy in $p$-value')
plt.xlabel(r'$\lambda$')
plt.show()
D = np.amax(np.abs(cum_mc-QKS_mc)) + 0.5/Ntrials
plt.figure()
plt.title(r'Testing $Q_{\mathrm{KS}}$...with Kolmogorov-Smirnov!')
plt.plot(lam_mc, cum_mc-QKS_mc)
plt.ylabel(r'CDF discrepancy')
plt.xlabel(r'$\lambda$')
plt.annotate(r'$D={:.3f}$, $\lambda={:.1f}$'.format(D,D*np.sqrt(Ntrials)),(1.5,-D/2))
plt.show()
Conclusion¶
The Kolmogorov-Smirnov test is a handy, nonparametric test of whether two distributions differ. KS is so straightforward that I did not bother using the scipy.stats implementation. The Anderson-Darling test is more sensitive, especially in the tails of the distribution, but since its $p$-values depend on the form of the distribution, you will likely have to calculate your own $p$-values by Monte Carlo.
The analytic cumulative distribution, $Q_{\mathrm{KS}}(\lambda)$, used to generate $p$-values for the KS test is quite reliable. A careful Monte Carlo test reveals deviations only at small $\lambda$, where they are inconsequential.