Machine Learning¶

I highly recommend Machine Learning for Physics and Astronomy by Viviana Acquaviva (Princeton, 2023). Acquaviva is a seasoned teacher and machine learning practitioner. Her book is written at an introductory level, offering clear descriptions, insight into a broad selection of techniques, and practical examples in the form of downloadable Jupyter notebooks. Her writing also comes with a certain joie de vivre that is often lacking in academic texts. Our brief intro will be just a sampling of select methods, with my own unique demonstrations. If you enjoy the appetizer, I recommend investing in the full meal.

Machine learning is the use of computers to find patterns in data. We imagine the data as something like a table:

Index   W     clr       typ   …
0       0.1   'green'   4     …
1       3.5   'blue'    4     …
2       1.2   'cyan'    5     …
…       …     …         …     …

Columns (W, clr, typ, …) are called features; each row is a single sample (data point).
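
For concreteness, here is a minimal sketch of the table above as a pandas DataFrame (pandas is my own choice for illustration; it is not used elsewhere in this notebook):

import pandas as pd

df = pd.DataFrame({
    'W':   [0.1, 3.5, 1.2],
    'clr': ['green', 'blue', 'cyan'],
    'typ': [4, 4, 5],
})
print(df)   # each column is a feature; each row is a sample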

Supervised vs. Unsupervised ML¶

In supervised ML problems, you have a data set for which you know the desired outputs as well as the inputs. You use that data to train the algorithm and then evaluate its performance. Examples include classification and regression problems.

In unsupervised ML, you turn an algorithm loose on a data set to discover patterns without prior guidance. Examples include clustering and anomaly detection.

Even in “unsupervised” ML, human insight and interaction are essential, as we will see!

In [1]:
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib widget
from sklearn.cluster import KMeans, DBSCAN

# Set before any figure is created so constrained layout applies to every plot.
plt.rcParams['figure.constrained_layout.use'] = True

Clustering¶

Clustering is a type of unsupervised machine learning in which the algorithm attempts to categorize data based on its features. Because this is unsupervised ML, the categories are not defined in advance; rather, they emerge (perhaps deterministically, perhaps not) from the algorithm's exploration of the data. Clustering methods are either partitional (dividing data into non-overlapping categories) or hierarchical (having categories and subcategories).

In this demonstration, we will look at two partitional clustering algorithms:

  1. K-Means
  2. DBSCAN

There exist many variations on the above algorithms, which we will not have time to explore.

Data¶

Assume that we have $M$ data points indexed by $0 \le i < M$. The features are components of vectors $\mathbf{x}_i \in \mathbb{R}^N$.

Both algorithms require a measure of distance. Typically one employs the Euclidean distance between points in the data space, e.g., $$ \left| \mathbf{x}_j - \mathbf{x}_i \right|. $$ Alternative distance measures are sometimes used, but I'll assume Euclidean distance below.
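
As a quick sketch (my own illustration, independent of either algorithm), the full matrix of pairwise Euclidean distances can be computed with NumPy broadcasting:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))              # M = 5 points in R^N with N = 2
diff = X[:, None, :] - X[None, :, :]     # shape (M, M, N)
dist = np.linalg.norm(diff, axis=-1)     # dist[i, j] = |x_j - x_i|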

K-Means¶

from sklearn.cluster import KMeans

Parameters:¶

  • $K$, the number of clusters (keyword: n_clusters)
  • Initial cluster centroids, $\mathbf{z}_j \in \mathbb{R}^N$, $0 \le j < K$.
    • Can be chosen by hand, at random, or by some algorithm.
    • keyword: init='k-means++'
  • Since initialization is pseudo-random, it helps to try multiple times.
    • keyword: n_init
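
A minimal usage sketch tying the keywords above together (the data array here is a random placeholder):

import numpy as np
from sklearn.cluster import KMeans

data = np.random.normal(size=(100, 2))              # placeholder (M, N) array
km = KMeans(n_clusters=3, init='k-means++', n_init=10)
labels = km.fit_predict(data)                       # cluster index for each point
centroids = km.cluster_centers_                     # the K final centroids z_j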

Procedure:¶

Repeat until centroids don't change:

  1. Form $K$ clusters, $\mathcal{C}_j$, by assigning every point to the closest centroid.
  2. Recompute each centroid as the mean of the points assigned to it.

This procedure has the effect of minimizing the following cost function: $$ \mathrm{Cost} \equiv \sum_j \sum_{\mathbf{x}_i \in \mathcal{C}_j} \left| \mathbf{x}_i - \mathbf{z}_j \right|^2 $$
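
For concreteness, here is a minimal sketch of that procedure (Lloyd's algorithm); it assumes no cluster ever goes empty and is not meant to reproduce scikit-learn's optimized implementation:

import numpy as np

def kmeans_lloyd(X, z, max_iter=100):
    # X: (M, N) data; z: (K, N) initial centroids
    for _ in range(max_iter):
        # 1. Assign every point to the closest centroid.
        dist = np.linalg.norm(X[:, None, :] - z[None, :, :], axis=-1)
        labels = dist.argmin(axis=1)
        # 2. Recompute each centroid as the mean of its cluster
        #    (assumes every cluster keeps at least one point).
        z_new = np.array([X[labels == j].mean(axis=0) for j in range(len(z))])
        if np.allclose(z_new, z):    # stop when the centroids no longer move
            break
        z = z_new
    return labels, z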

Density-based spatial clustering of applications with noise (DBSCAN)¶

from sklearn.cluster import DBSCAN

Parameters:¶

  • Neighborhood radius, $\epsilon$ (keyword: eps)
  • Minimum number of points, $\mu$, within the neighborhood to form a dense region (keyword: min_samples)
    • Rule of thumb: $\mu > N$.

Procedure:¶

  1. Identify all core points, which have at least $\mu$ neighbors (including the core point itself) within radius $\epsilon$.
  2. Neighboring core points belong to the same cluster by definition (iterate and merge!).
  3. Non-core points within $\epsilon$ of a core point are boundary points belonging to that core point's cluster.
  4. Remaining points (neither core nor boundary) are noise points (outliers, which do not belong to a cluster).
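
The steps above in a naive sketch of my own (for clarity only: the full $M \times M$ distance matrix is wasteful, and scikit-learn's implementation uses spatial indexing instead; as in scikit-learn, label $-1$ marks noise):

import numpy as np

def dbscan_naive(X, eps, mu):
    M = len(X)
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    neighbors = [np.nonzero(dist[i] <= eps)[0] for i in range(M)]
    core = np.array([len(nb) >= mu for nb in neighbors])    # step 1

    labels = np.full(M, -1)             # -1 = noise until proven otherwise
    cluster = 0
    for i in range(M):
        if not core[i] or labels[i] != -1:
            continue
        labels[i] = cluster             # start a new cluster at this core point
        stack = list(neighbors[i])
        while stack:                    # steps 2-3: grow the cluster outward
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster               # core or boundary point
                if core[j]:
                    stack.extend(neighbors[j])    # only core points keep expanding
        cluster += 1
    return labels                       # step 4: anything still -1 is noise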

Example: Clusters of Varying Density¶

In [2]:
# Blob parameters
Npts = [20, 100, 500]
x0 = [-1, 0, 1]
w0 = [0.2, 0.2, 0.2]


# Make data
bx = np.empty((0))
by = np.empty((0))
bcolor = np.empty((0))
Nblobs = len(Npts)
for i in range(Nblobs):
    bx = np.append( bx, w0[i] * np.random.normal(size=Npts[i]) + x0[i] )
    by = np.append( by, w0[i] * np.random.normal(size=Npts[i]) )
    bcolor = np.append( bcolor, i/Nblobs * np.ones((Npts[i])) )
bdata = np.array([bx,by]).transpose() # 2 columns

plt.figure(figsize=[7,3])
plt.scatter(bx, by, c=bcolor, marker='o', alpha=0.5)
plt.title('Native Categories'); plt.xlabel(r'$x$'); plt.ylabel(r'$y$')
plt.gca().set_aspect('equal')
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
In [3]:
# K-Means
kmeans = KMeans(n_clusters=3, n_init=10).fit(bdata)
bclusters = kmeans.predict(bdata)
bcenters = kmeans.cluster_centers_
# Plot
plt.figure(figsize=[7,3])
plt.scatter(bdata[:, 0], bdata[:, 1], c = bclusters, marker='o', alpha=0.5)
plt.plot(bcenters[:,0], bcenters[:,1], 'o', label='centroids', 
         markerfacecolor='white', markeredgecolor='k', markersize=15, alpha=0.5 )
plt.legend()
plt.title('K-Means'); plt.xlabel(r'$x$'); plt.ylabel(r'$y$')
plt.gca().set_aspect('equal')
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
In [4]:
# DBSCAN
db = DBSCAN(eps=0.25, min_samples=7).fit(bdata)
labels = np.array(db.labels_) + 1  # shift labels so noise (-1) becomes 0 and clusters are 1..K
n_clusters_ = np.amax(labels); n_noise_ = list(labels).count(0)
print( 'Found {:d} clusters, {:d} noise points.'.format(n_clusters_,n_noise_) )

# Plot 
plt.figure(figsize=(7,3))
plt.scatter(bx[labels>0], by[labels>0], c = labels[labels>0], marker='o', alpha=0.5)
plt.plot(bx[labels==0],by[labels==0], 'kx', label='noise')
plt.title('DBSCAN'); plt.xlabel(r'$x$'); plt.ylabel(r'$y$')
plt.legend()
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
Found 2 clusters, 4 noise points.

Example: Arbitrarily Shaped Clusters¶

In [5]:
Nface =  500 # _approximate_ number of points in the face (subject to rounding error)
err = 0.025  # standard deviation of position error

# (1) Elliptical outline
Noutline = int(0.8*Nface)
theta1 = 2*np.pi*np.random.rand((Noutline))
r1 = np.ones((Noutline)) - 0.2*np.cos(theta1)**2
category = 0.1 * np.ones((Noutline))

# (2) Smile
Nsmile = int(0.2*Nface)
r2 = 0.6 * np.ones((Nsmile))
theta2 = 1.5*np.pi + 1.5*(np.random.rand((Nsmile)) - 0.5)
category = np.append(category, 0.2*np.ones((Nsmile)) )

# (3) Eyes
Neye = int(0.04*Nface)
r3 = 0.5 * np.ones((2*Neye))
theta3 = np.pi/2 + 0.6*np.concatenate([np.ones((Neye)), -np.ones((Neye))])
category = np.concatenate( [category, 0.3*np.ones((Neye)), 0.4*np.ones((Neye))] )

# (4) Eyebrows
Neyebrow = int(0.1*Nface)
r4 = 0.65 * np.ones((2*Neyebrow))
theta4 = np.pi/2 + 0.3*np.concatenate([1+2*np.random.random((Neyebrow)), -1-2*np.random.random((Neyebrow))])
category = np.concatenate( [category, 0.5*np.ones((Neyebrow)), 0.6*np.ones((Neyebrow))] )

# (5) Nose
Nnose = int(0.1*Nface)
r5 = 0.2*np.ones((Nnose))
theta5 = np.pi * (np.random.random((Nnose)) + 0.5)
category = np.append( category, 0.7*np.ones((Nnose)) )

# (0) Concatenated arrays for the whole face
r0 = np.concatenate([r1,r2,r3,r4,r5])
theta0 = np.concatenate([theta1,theta2,theta3,theta4,theta5])
x0 = r0 * np.cos(theta0) 
y0 = r0 * np.sin(theta0) 
N = len(r0)

# Noised data
x = x0 + err*np.random.normal(size=(N))
y = y0 + err*np.random.normal(size=(N))

plt.figure(figsize=(8,6))
plt.scatter(x,y, marker='.', c=category, alpha=0.5)
plt.title('Native Categories'); plt.xlabel(r'$x$'); plt.ylabel(r'$y$')
plt.gca().set_aspect('equal')
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
In [6]:
# Dataset
facedata = np.array([x,y]).transpose()

# K-Means
kmeans = KMeans(n_clusters=5, n_init=10).fit(facedata)
faceclusters = kmeans.predict(facedata)
facecenters = kmeans.cluster_centers_
# Plot
fig_face = plt.figure(figsize=[8,6])
plt.scatter(x, y, c = faceclusters, marker='.', alpha=0.5)
plt.plot(facecenters[:,0], facecenters[:,1], 'o', label='centroids', 
         markerfacecolor='white', markeredgecolor='k', markersize=15, alpha=0.5 )
fig_face.legend(loc='outside upper right')
plt.title('K-Means'); plt.xlabel(r'$x$'); plt.ylabel(r'$y$')
plt.gca().set_aspect('equal')
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
In [7]:
# DBSCAN
db = DBSCAN(eps=0.1, min_samples=3).fit(facedata)
labels = np.array(db.labels_) + 1
n_clusters_ = np.amax(labels); n_noise_ = list(labels).count(0)
print( 'Found {:d} clusters, {:d} noise points.'.format(n_clusters_,n_noise_) )

# Plot 
plt.figure(figsize=(8,6))
plt.scatter(x[labels>0], y[labels>0], c = labels[labels>0], marker='.', alpha=0.5)
plt.plot(x[labels==0],y[labels==0], 'kx', label='noise')
plt.title('DBSCAN'); plt.xlabel(r'$x$'); plt.ylabel(r'$y$')
plt.gca().set_aspect('equal')
plt.legend()
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
Found 5 clusters, 0 noise points.

Feature Engineering¶

Sometimes it helps to replace or add to the features already present in the dataset. In the face example, I found it somewhat helpful to change my representation from $(x,y)$ to $(x,y,\pi\,r)$. The new coordinate adds no new information, but because the radius enters scaled by $\pi$, radial separations weigh more heavily in the Euclidean distance, and the face's features, which lie at distinct radii, become marginally more distinguishable.
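
A two-point sketch of that reweighting effect (the numbers here are hypothetical):

import numpy as np

a = np.array([1.0, 0.0]); b = np.array([0.0, 1.0])
print(np.linalg.norm(a - b))            # 1.414...: both features weighted equally
w = np.array([np.pi, 1.0])              # stretch the first feature by pi
print(np.linalg.norm(w*a - w*b))        # 3.297...: the first feature now dominates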

In [8]:
# Prospective new features for the dataset
pir = np.pi * np.sqrt(x**2 + y**2)
theta = np.arctan2(y,x)
cos = np.cos(theta)
sin = np.sin(theta)
In [9]:
plt.figure(figsize=(8,4))
plt.scatter(theta,pir, c=category, marker='.', alpha=0.5)
plt.title('Native Categories'); plt.xlabel(r'$\theta$'); plt.ylabel(r'$\pi\, r$')
plt.gca().set_aspect('equal')
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
In [10]:
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(projection='3d')
ax.scatter(pir, x, y, marker='.', c=category, alpha=0.5)
ax.set_xlabel(r'$\pi\,r$'); ax.set_ylabel(r'$x$'); ax.set_zlabel(r'$y$')
ax.set_title('Native Categories')
ax.set_aspect('equal')
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
In [11]:
# Revised dataset
facedata2 = np.array([x,y,pir]).transpose() # use any of [x, y, pir, theta, cos, sin]

# K-Means
kmeans = KMeans(n_clusters=5, n_init=10).fit(facedata2)
faceclusters = kmeans.predict(facedata2)
facecenters = kmeans.cluster_centers_
# Plot
fig_face = plt.figure(figsize=[8,6])
plt.scatter(x, y, c = faceclusters, marker='.', alpha=0.5)
plt.plot(facecenters[:,0], facecenters[:,1], 'o', label='centroids', 
         markerfacecolor='white', markeredgecolor='k', markersize=15, alpha=0.5 )
fig_face.legend(loc='outside upper right')
plt.title('K-Means'); plt.xlabel(r'$x$'); plt.ylabel(r'$y$')
plt.gca().set_aspect('equal')
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
In [12]:
# DBSCAN
db = DBSCAN(eps=0.13, min_samples=3).fit(facedata2)
labels = np.array(db.labels_) + 1
n_clusters_ = np.amax(labels); n_noise_ = list(labels).count(0)
print( 'Found {:d} clusters, {:d} noise points.'.format(n_clusters_,n_noise_) )

# Plot 
plt.figure(figsize=(8,6))
plt.scatter(x[labels>0], y[labels>0], c = labels[labels>0], marker='.', alpha=0.5)
plt.plot(x[labels==0],y[labels==0], 'kx', label='noise')
plt.title('DBSCAN'); plt.xlabel(r'$x$'); plt.ylabel(r'$y$')
plt.gca().set_aspect('equal')
plt.legend()
plt.rcParams['figure.constrained_layout.use'] = True
plt.show()
Found 8 clusters, 1 noise points.