Student ID: 20709541
Note: Please include your numerical student ID only, do not include your name.
Note: Refer to the PDF for the full instructions (including some hints), this notebook contains abbreviated instructions only. Cells you need to fill out are marked with a "writing hand" symbol. Of course, you can add new cells in between the instructions, but please leave the instructions intact to facilitate marking.
# Import numpy and matplotlib -- you shouldn't need any other libraries
import numpy as np
import matplotlib.pyplot as plt
import scipy.optimize as sci # For question 2.1b)
# Fix the numpy random seed for reproducible results
np.random.seed(18945)
# Some formatting options
%config InlineBackend.figure_formats = ['svg']
a) Computing gain and bias. In general, for a neuron model $a(x) = G\left[\alpha e x + J^\mathrm{bias}\right]$ with maximum rate $a^\mathrm{max}$ attained at $x = e$ and $x$-intercept $\xi$ (where the rate falls to zero), solving the following system of equations we see that:
$$a^\mathrm{max} = \alpha e \cdot e + J^\mathrm{bias} \tag{1}$$
and
$$0 = \alpha e \xi + J^\mathrm{bias} . \tag{2}$$
From (2) it follows that
$$J^\mathrm{bias} = -\alpha e \xi , \tag{3}$$
then plugging (3) into (1) we see that
$$a^\mathrm{max} = \alpha e^{2} - \alpha e \xi = \alpha e \left(e - \xi\right) . \tag{4}$$
Re-arranging (4) we can find an equation for $\alpha$,
$$\alpha = \frac{a^\mathrm{max}}{e \left(e - \xi\right)} , \tag{5}$$
and plugging (5) back into (3) we can find the following equation for $J^\mathrm{bias}$:
$$J^\mathrm{bias} = -\frac{a^\mathrm{max} \xi}{e - \xi} . \tag{6}$$
Now, simplify these equations for the specific case $e \in \{-1, 1\}$. When $e = 1$,
$$\alpha = \frac{a^\mathrm{max}}{1 - \xi} \tag{7}$$
and when $e = -1$,
$$\alpha = \frac{a^\mathrm{max}}{1 + \xi} . \tag{8}$$
Solving equations (7) and (8) we find that both cases collapse to
$$\alpha = \frac{a^\mathrm{max}}{\left|e - \xi\right|} , \quad\quad J^\mathrm{bias} = -\alpha \xi e ,$$
where $\xi \in (-1, 1)$, so the denominator never vanishes. We also know that in the case of the ReLU rate approximation encoder, $G\left[J\right] = \max(J, 0)$, so the rate is simply $a(x) = \max\left(\alpha e x + J^\mathrm{bias},\, 0\right)$, which is what the helper functions below implement.
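A quick numeric check of the gain and bias equations (a minimal sketch; the sample values $a^\mathrm{max} = 150$ Hz, $\xi = 0.3$, $e = 1$ are arbitrary):
# verify: the rate is a_max at x = e and zero at the intercept x = xi
a_max_chk, xi_chk, e_chk = 150.0, 0.3, 1
alpha_chk = a_max_chk / abs(e_chk - xi_chk)
j_bias_chk = -alpha_chk * xi_chk * e_chk
print(max(alpha_chk * e_chk * 1.0 + j_bias_chk, 0))  # 150.0 (= a_max)
print(max(alpha_chk * e_chk * xi_chk + j_bias_chk, 0))  # 0 at the intercept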
b) Neuron tuning curves. Plot the tuning curves of $n = 16$ rectified linear neurons over the range $x \in [-1, 1]$.
number_of_neurons = 16
Generate random maximum firing rates $a^\mathrm{max}$ uniformly between 100 and 200 Hz:
low_freq = 100 # Hz
high_freq = 200 # Hz
num_a_max_samples = number_of_neurons
a_max_set = np.random.uniform(low_freq, high_freq, num_a_max_samples)
Generate random $x$-intercepts $\xi$ uniformly between $-0.95$ and $0.95$:
min_intercept = -0.95
max_intercept = 0.95
num_intercepts = number_of_neurons
intercept_set = np.random.uniform(min_intercept, max_intercept, num_intercepts)
Create helper functions:
tau_ref = 0.002
tau_rc = 0.020
def gain(a_max, x_intercept, e, type):
    # ReLU: alpha = a_max / |e - xi|, from equation (5) above
    if type == "relu":
        return a_max / abs(e - x_intercept)
    # LIF: alpha = 1 / (|e - xi| * (1 - exp((tau_ref - 1/a_max) / tau_rc)))
    if type == "lif":
        encode = abs(e - x_intercept)
        exp_term = 1 - np.exp((tau_ref - 1 / a_max) / tau_rc)
        return 1 / (encode * exp_term)
    # 2D LIF: same expression with dot products in place of scalar products
    if type == "lif-2d":
        encode = np.vdot(e, e) - np.vdot(e, x_intercept)
        exp_term = 1 - np.exp((tau_ref - 1 / a_max) / tau_rc)
        return 1 / (encode * exp_term)
    return 0
def bias(x_intercept, e, gain, type):
    # J_bias = -alpha * xi * e, from equation (3) above
    if type == "relu" or type == "lif":
        return -(gain) * x_intercept * e
    if type == "lif-2d":
        return -(gain) * np.vdot(e, x_intercept)
    return 0
def relu_encode(neuron, x):
    # rectified linear rate: a(x) = max(alpha * e * x + J_bias, 0)
    rate = max(neuron.a * x * neuron.encoder_sign + neuron.j_bias, 0)
    return rate
def lif_encode(neuron, x):
    # LIF rate: a(J) = 1 / (tau_ref - tau_rc * ln(1 - 1/J)) for J > 1, else 0
    J = neuron.a * x * neuron.encoder_sign + neuron.j_bias
    if J > 1:
        return 1 / (tau_ref - tau_rc * np.log(1 - 1 / J))
    return 0
def lif_encode_2d(neuron, xy):
    # 2D LIF: input current J = alpha * <xy, e> + J_bias, encoder e = neuron.circ
    J = neuron.a * np.vdot(xy, neuron.circ) + neuron.j_bias
    if J > 1:
        return 1 / (tau_ref - tau_rc * np.log(1 - 1 / J))
    return 0
def print_block(title, data):
print(title + " ----------")
print(data)
print("-----------------")
class Neuron:
def __init__(self, a_max, x_intercept, id, type):
self.id = id
self.a_max = a_max
self.encoder_sign = np.random.choice([-1, 1])
a = gain(a_max, x_intercept, self.encoder_sign, type)
j_bias = bias(x_intercept, self.encoder_sign, a, type)
self.a = a
self.j_bias = j_bias
self.rate = []
def rate_at_point(self, x, type):
if type == "relu":
return relu_encode(self, x)
if type == "lif":
return lif_encode(self, x)
def find_rate(self, space, type):
for element in space:
self.rate.append(self.rate_at_point(element, type))
def print_details(self):
print("Neuron: --------------")
print("id " + str(self.id))
print("a_max " + str(self.a_max))
print("gain " + str(self.a))
print("bias " + str(self.j_bias))
print("--------------")
# create array of neuron objects
neurons = []
for i in range(number_of_neurons):
neurons.append(Neuron(a_max_set[i], intercept_set[i], i, "relu"))
# create a linspace of stimuli to plot over
x = np.linspace(-1, 1, 41)
for neuron in neurons:
neuron.find_rate(x, "relu")
Plot the ReLU encoder rates
plt.figure()
plt.suptitle("Rectified Tuning Curves $a_i(x)$, $i = 1 \dots 16$, versus stimulus $x$")
for neuron in neurons:
    plt.plot(x, neuron.rate)
plt.xlabel("stimulus $x$")
plt.ylabel("rate $a_i(x)$ (Hz)")
plt.show()
c) Computing identity decoders. Compute the optimal identity decoder $\mathbf{D}$ for this population.
Solving the least-squares problem (below) for the general, noise-free case gives $$ \mathbf{A}\mathbf{A}^{T}\mathbf{D} = \mathbf{A}\mathbf{X}^{T} \quad\Rightarrow\quad \mathbf{D} = \left(\mathbf{A}\mathbf{A}^{T}\right)^{-1}\mathbf{A}\mathbf{X}^{T} . $$
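The normal equations above follow from minimizing the squared reconstruction error (a standard least-squares derivation sketch):
$$
E=\frac{1}{2N}\sum_{k=1}^{N}\left(x_k-\sum_{i} d_i\, a_i(x_k)\right)^{2} , \qquad \frac{\partial E}{\partial d_j}=0 \;\Rightarrow\; \mathbf{A}\mathbf{A}^{T}\mathbf{D}=\mathbf{A}\mathbf{X}^{T} .
$$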
# make a list of all the activities
activities = []
inputs = []
for neuron in neurons:
activities.append(neuron.rate)
inputs.append(x)
# make A matrix
A = np.array(activities)
X = np.array(inputs)
# Calculate Decoders
D = np.linalg.lstsq(A.T, X.T, rcond=None)[0].T[0]
print_block("Decoders",D)
Decoders ----------
[-9.04442718e-05 -8.18141634e-05 9.40587950e-04 -1.39753131e-03
5.26991976e-05 -1.32515445e-03 -6.58968919e-04 -4.26051588e-04
8.56513564e-04 -1.80305414e-04 9.20342728e-03 -1.03734207e-02
6.72187738e-03 1.82491677e-03 3.28703379e-04 -4.72469121e-03]
-----------------
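As a cross-check (a minimal sketch, assuming $\mathbf{A}\mathbf{A}^{T}$ is invertible, which can fail for degenerate tuning curves), the lstsq solution above should agree with a direct solve of the normal equations:
# cross-check: solve the normal equations directly
D_check = np.linalg.solve(A @ A.T, A @ x)
print_block("Max |D - D_check|", np.max(np.abs(D - D_check)))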
d) Evaluating decoding errors. Compute and plot the decoded estimate $\hat x$ against $x$, along with the decoding error $x - \hat x$, and report the RMSE.
Plotting decoded values
x_hat = np.dot(D, A)
plt.figure()
plt.suptitle("Neural Representation of Stimuli")
plt.plot(x, x, "r--")
plt.plot(x, x_hat, "b")
plt.xlabel("$x$ (red) and $\hat x$ (blue)")
plt.ylabel("approximation")
plt.show()
Plotting the difference between $x$ and $\hat x$
err = x - x_hat
plt.figure()
plt.suptitle("Error $x-\hat x$ versus $x$")
plt.plot(x, err)
plt.xlabel("$x$")
plt.ylabel("Error")
plt.show()
Report the Root Mean Squared Error (RMSE)
def rmse(x1, x2):
return np.sqrt(np.mean(np.power(x1 - x2, 2)))
x_rmse = rmse(x, x_hat)
x_rmse_rounded = np.round(x_rmse, 10)
print_block("Root Mean Squared Error", x_rmse_rounded)
Root Mean Squared Error ----------
0.0016309119
-----------------
e) Decoding under noise. Now try decoding under noise: add random normally distributed noise with standard deviation $\sigma = 0.2 \cdot a^\mathrm{max}$ to the activities $\mathbf{A}$.
Generate the noisy activity matrix and decode from it:
noise_sdtdev = 0.2 * np.amax(A)  # sigma = 0.2 * (maximum firing rate)
W_noise = np.random.normal(scale=noise_sdtdev, size=np.shape(A))
A_noise = A + W_noise
x_hat_noise = np.dot(D, A_noise)
Plot the decoded values
plt.figure()
plt.suptitle("Neural Representation of Stimuli with Noise Added")
plt.plot(x, x, "r--")
plt.plot(x, x_hat_noise, "b")
plt.xlabel("$x$ (red) and $\hat x_{noise}$ (blue)")
plt.ylabel("approximation")
plt.show()
Plotting the difference between $x$ and $\hat x_{noise}$
err_noise = x - x_hat_noise
plt.figure()
plt.suptitle("Error $x-\hat x_{noise}$ versus $x$")
plt.plot(x, err_noise)
plt.xlabel("$x$")
plt.ylabel("$Error_{noise}$")
plt.show()
Report the Root Mean Squared Error (RMSE)
x_rmse_noise = rmse(x, x_hat_noise)
x_rmse_noise_rounded=np.round(x_rmse_noise, 10)
print_block("Root Mean Squared Error (Noise) ", x_rmse_noise_rounded)
Root Mean Squared Error (Noise) ----------
0.6866944254
-----------------
f) Accounting for decoder noise. Recompute the decoder, this time taking the noise into account.
Taking noise into account and performing regularization, we see that $$ \mathbf{D} \approx \left(\mathbf{A}\mathbf{A}^{T}+N\sigma^{2}\mathbf{I}\right)^{-1}\mathbf{A}\mathbf{X}^{T} $$
# taking noise into account
N = len(x)
n = number_of_neurons
D_noisey = np.linalg.lstsq(
A @ A.T + 0.5 * N * np.square(noise_sdtdev) * np.eye(n),
A @ X.T,
rcond=None,
)[0].T[0]
print_block("Noisey Decoders",D_noisey)
Noisey Decoders ----------
[ 0.00052308 0.00066024 -0.00089647 -0.00092633 0.00113458 0.00054764
0.00042239 0.00084794 -0.00068381 -0.00078467 -0.00089853 -0.00100487
0.00069608 0.00042051 0.00058708 -0.0007256 ]
-----------------
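The lstsq call above applies the regularized normal equations; a direct solve should produce the same decoders (a minimal cross-check sketch, using the same $0.5 N \sigma^2$ regularizer as the cell above):
# cross-check: the ridge term makes the Gram matrix invertible for sigma > 0
D_noisey_check = np.linalg.solve(
    A @ A.T + 0.5 * N * np.square(noise_sdtdev) * np.eye(n),
    A @ x,
)
print_block("Max |D_noisey - D_noisey_check|", np.max(np.abs(D_noisey - D_noisey_check)))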
Plotting decoded values
x_hat_noisey_decoder = np.dot(D_noisey, A)
plt.figure()
plt.suptitle("Neural Representation of Stimuli with noisey decoder and no noise in A")
plt.plot(x, x, "r", "--")
plt.plot(x, x_hat_noisey_decoder, "b")
plt.xlabel("$x$ (red) and $\hat x$ (blue)")
plt.ylabel("approximation")
plt.show()
Plotting the difference between $x$ and $\hat x$
err_noise_decoder = x - x_hat_noisey_decoder
plt.figure()
plt.suptitle("Error $x-\hat x$ versus $x$ with noisey decoder and no noise in A")
plt.plot(x, err_noise_decoder)
plt.xlabel("$x$")
plt.ylabel("Error")
plt.show()
Report the Root Mean Squared Error (RMSE)
x_rmse_noisey_decoder = rmse(x, x_hat_noisey_decoder)
x_rmse_noisey_decoder_rounded = np.round(x_rmse_noisey_decoder, 10)
print_block("Root Mean Squared Error (Noisey Decoder) and no noise in A ", x_rmse_noisey_decoder_rounded)
Root Mean Squared Error (Noisey Decoder) and no noise in A ----------
0.0204626651
-----------------
Plotting decoded values
x_hat_noisey_decoder_noisey_A = np.dot(D_noisey, A_noise)
plt.figure()
plt.suptitle(
"Neural Representation of Stimuli with noisey decoder and noise in A as $A_{noise}$"
)
plt.plot(x, x, "r--")
plt.plot(x, x_hat_noisey_decoder_noisey_A, "b")
plt.xlabel("$x$ (red) and $\hat x$ (blue)")
plt.ylabel("approximation with noise")
plt.show()
Plotting the difference between $x$ and $\hat x$
err_noise_decoder_noisey_A = x - x_hat_noisey_decoder_noisey_A
plt.figure()
plt.suptitle(
"Error $x-\hat x$ versus $x$ with noisey decoder and noise in A as $A_{noise}$"
)
plt.plot(x, err_noise_decoder_noisey_A)
plt.xlabel("$x$")
plt.ylabel("Error")
plt.show()
Report the Root Mean Squared Error (RMSE)
x_rmse_noisey_decoder_noisey_A = rmse(x, x_hat_noisey_decoder_noisey_A)
x_rmse_noisey_decoder_noisey_A_rounded = np.round(x_rmse_noisey_decoder_noisey_A, 10)
print_block(
"Root Mean Squared Error (Noisey Decoder) and noise in A",
x_rmse_noisey_decoder_noisey_A_rounded,
)
Root Mean Squared Error (Noisy Decoder) and noise in A ----------
0.1270503631
-----------------
g) Interpretation. Show a 2x2 table of the four RMSE values reported in parts d), e), and f). This should show the effects of adding noise and whether the decoders take noise into account.
Plotting the table of all the different RMSE values
fig, ax = plt.subplots()
ax.set_axis_off()
rmse_table_values = [
[x_rmse_rounded, x_rmse_noisey_decoder_rounded],
[x_rmse_noise_rounded, x_rmse_noisey_decoder_noisey_A_rounded],
]
table_rows = ["No Noise", "Gaussian Noise"]
table_cols = ["Simple Decoder", "Noise-Tuned Decoder"]
plt.table(
cellText=rmse_table_values,
rowLabels=table_rows,
colLabels=table_cols,
loc="upper left",
)
plt.show()
<Figure size 432x288 with 0 Axes>
When adding noise to the activities in the case of the simple decoder, the noise increases the RMSE by more than two orders of magnitude (from roughly 0.0016 to 0.69). The noise-tuned decoder performs slightly worse on clean activities (0.020 versus 0.0016) but dramatically better on noisy activities (0.13 versus 0.69), showing that accounting for noise when computing the decoders pays off whenever noise is actually present.
a) Exploring error due to distortion and noise. Plot the error due to distortion $E_{dist}$ and the error due to noise $E_{noise}$ against the number of neurons on a log-log plot, alongside reference curves $1/n$ and $1/n^2$. Average each point over 5 runs.
num_neurons_to_eval = [4, 8, 16, 32, 64, 128, 256, 512]
num_runs = 5
def find_error(stddev_factor, runs, neurons_to_eval):
    E_dist = []
    E_noise = []
    for amount_of_neurons in neurons_to_eval:
        e_dist = []
        e_noise = []
        for r in range(runs):
            # build a fresh random population of the requested size
            local_neurons = []
            for i in range(amount_of_neurons):
                local_neurons.append(
                    Neuron(
                        np.random.uniform(low_freq, high_freq),
                        np.random.uniform(min_intercept, max_intercept),
                        i,
                        "relu",
                    )
                )
            local_activities = []
            local_inputs = []
            for neuron in local_neurons:
                neuron.find_rate(x, "relu")
            for neuron in local_neurons:
                local_activities.append(neuron.rate)
                local_inputs.append(x)
            # make A matrix
            A_L = np.array(local_activities)
            X_L = np.array(local_inputs)
            sigma_noise = stddev_factor * np.amax(A_L)
            N_L = len(x)
            n_L = len(local_neurons)
            # noise-regularized decoders, as in 1.1f)
            D_L_noisey = np.linalg.lstsq(
                A_L @ A_L.T + 0.5 * N_L * np.square(sigma_noise) * np.eye(n_L),
                A_L @ X_L.T,
                rcond=None,
            )[0].T[0]
            local_x_hat = np.dot(D_L_noisey, A_L)
            # error due to distortion: mean squared reconstruction error
            e_dist_L = np.sum(np.power(x - local_x_hat, 2)) / N_L
            # error due to noise: sigma^2 * sum_i d_i^2
            e_noise_L = np.square(sigma_noise) * np.sum(np.power(D_L_noisey, 2))
            e_dist.append(e_dist_L)
            e_noise.append(e_noise_L)
        E_dist.append(np.mean(e_dist))
        E_noise.append(np.mean(e_noise))
    return E_dist, E_noise
Find Errors
factor_1 = 0.1
error_dist, error_noise = find_error(factor_1, num_runs, num_neurons_to_eval)
# reference curves 1/n and 1/n^2 for the log-log plots
inv_n = [1 / m for m in num_neurons_to_eval]
inv_n2 = [1 / np.power(m, 2) for m in num_neurons_to_eval]
# function to plot an error curve on log-log axes alongside 1/n and 1/n^2
def plot_error(title, source, ylabel):
    fig = plt.figure()
    ax = fig.add_subplot(1, 1, 1)
    plt.suptitle(title)
    ax.plot(num_neurons_to_eval, source, label="Error")
    ax.plot(num_neurons_to_eval, inv_n, "--", label="$1/n$")
    ax.plot(num_neurons_to_eval, inv_n2, "--", label="$1/n^2$")
    ax.legend()
    ax.set_xscale("log")
    ax.set_yscale("log")
    plt.xlabel("Number of neurons")
    plt.ylabel(ylabel)
    plt.show()
Plot $E_{noise}$
plot_error(
    "Log-Log Plot of error due to noise $E_{noise}$ alongside $1/n$ and $1/n^2$ with $factor=0.1$",
    error_noise,
    "$E_{noise}$",
)
Plot $E_{dist}$
plot_error(
    "Log-Log Plot of error due to distortion $E_{dist}$ alongside $1/n$ and $1/n^2$ with $factor=0.1$",
    error_dist,
    "$E_{dist}$",
)
b) Adapting the noise level. Repeat part a) with the noise standard deviation reduced to $\sigma = 0.01 \cdot a^\mathrm{max}$.
factor_2 = 0.01
error_dist_2, error_noise_2 = find_error(factor_2, num_runs, num_neurons_to_eval)
plot_error(
    "Log-Log Plot of error due to noise $E_{noise}$ alongside $1/n$ and $1/n^2$ with $factor=0.01$",
    error_noise_2,
    "$E_{noise}$",
)
plot_error(
    "Log-Log Plot of error due to distortion $E_{dist}$ alongside $1/n$ and $1/n^2$ with $factor=0.01$",
    error_dist_2,
    "$E_{dist}$",
)
c) Interpretation. What does the difference between the graphs in a) and b) tell us about the sources of error in neural populations?
As the number of neurons $n$ grows, the error due to noise falls off roughly as $1/n$, while the error due to distortion falls off roughly as $1/n^2$. Lowering the noise standard deviation from $0.1 \cdot a^\mathrm{max}$ to $0.01 \cdot a^\mathrm{max}$ scales $E_{noise}$ down by a constant factor but does not change how either error scales with $n$. This tells us that distortion can dominate in small populations, but because the noise error decays more slowly, noise becomes the limiting source of error in large populations.
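These scaling claims can be quantified by fitting the slope of each curve on the log-log axes (a minimal sketch; a slope near $-1$ corresponds to $1/n$ scaling and a slope near $-2$ to $1/n^2$):
# fit log-log slopes of the error curves from part a)
log_n = np.log(num_neurons_to_eval)
for name, curve in [("E_noise", error_noise), ("E_dist", error_dist)]:
    slope = np.polyfit(log_n, np.log(curve), 1)[0]
    print(name, "log-log slope:", np.round(slope, 2))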
a) Computing gain and bias. As in the second part of 1.1a), given a maximum firing rate $a^\mathrm{max}$ and $x$-intercept $\xi$, solve for the gain $\alpha$ and bias $J^\mathrm{bias}$ of an LIF neuron.
Given that when $x = e$ the neuron fires at its maximum rate $a^\mathrm{max}$, and when $x = \xi$ the rate falls to zero, since the LIF rate for a current $J > 1$ is
$$
a(J) = \frac{1}{\tau_{ref}-\tau_{RC}\ln\left(1-\frac{1}{J}\right)} ,
$$
then
$$
a^\mathrm{max}=\frac{1}{\tau_{ref}-\tau_{RC}\ln\left(1-\frac{1}{\alpha + J^\mathrm{bias}}\right)} , \quad\quad 0 =\alpha \xi + J^\mathrm{bias} .
$$
These can be re-arranged into the following equations:
$$
\frac{1}{\alpha+J^\mathrm{bias}}=1-\exp\left(\frac{\tau_{ref}-\frac{1}{a^\mathrm{max}}}{\tau_{RC}}\right) , \quad\quad J^\mathrm{bias}=-\alpha \xi .
$$
Plugging in $J^\mathrm{bias}=-\alpha \xi$ and solving for $\alpha$ gives
$$
\alpha = \frac{1}{\left|e-\xi\right|\left(1-\exp\left(\frac{\tau_{ref}-\frac{1}{a^\mathrm{max}}}{\tau_{RC}}\right)\right)} ,
$$
which is what the gain and bias helpers above implement for the "lif" case.
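A quick numeric check of these LIF equations (a minimal sketch; the sample values $a^\mathrm{max} = 150$ Hz, $\xi = 0.3$, $e = 1$ are arbitrary):
# verify: with alpha and J_bias from the equations above, the LIF rate at x = 1 is a_max
a_max_chk, xi_chk = 150.0, 0.3
alpha_chk = 1 / (abs(1 - xi_chk) * (1 - np.exp((tau_ref - 1 / a_max_chk) / tau_rc)))
j_bias_chk = -alpha_chk * xi_chk
J_chk = alpha_chk * 1.0 + j_bias_chk  # input current at x = e = 1
print(1 / (tau_ref - tau_rc * np.log(1 - 1 / J_chk)))  # ~150.0 (= a_max)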
b) Neuron tuning curves. Generate the same plot as in 1.1b). Use the same maximum firing rates and $x$-intercepts as before, but with LIF neurons.
lif_neurons = []
for i in range(number_of_neurons):
lif_neurons.append(Neuron(a_max_set[i], intercept_set[i], i, "lif"))
# create a linspace of stimuli to plot over
x = np.linspace(-1, 1, 41)
for neuron in lif_neurons:
neuron.find_rate(x, "lif")
plt.figure()
plt.suptitle("LIF Tuning Curves $a_i(x)$, $i = 1 \dots 16$, versus stimulus $x$")
for neuron in lif_neurons:
    plt.plot(x, neuron.rate)
plt.xlabel("stimulus $x$")
plt.ylabel("rate $a_i(x)$ (Hz)")
plt.show()
c) Impact of noise. Generate the same four plots as in 1.1f) (adding/not adding noise to $\mathbf{A}$), and report the RMSE for both cases.
lif_activities = []
lif_inputs = []
for neuron in lif_neurons:
lif_activities.append(neuron.rate)
lif_inputs.append(x)
# make A matrix and matrix of activities
A_LIF = np.array(lif_activities)
X_LIF = np.array(lif_inputs)
lif_noise_sdtdev = 0.2 * np.amax(A_LIF)
W_noise_lif = np.random.normal(scale=lif_noise_sdtdev, size=np.shape(A_LIF))
A_noise_lif = A_LIF + W_noise_lif
N_LIF = len(x)
n_lif = number_of_neurons
D_noisey_lif = np.linalg.lstsq(
A_LIF @ A_LIF.T + 0.5 * N_LIF * np.square(lif_noise_sdtdev) * np.eye(n_lif),
A_LIF @ X_LIF.T,
rcond=None,
)[0].T[0]
print_block("Noisey LIF Decoders", D_noisey_lif)
Noisey LIF Decoders ----------
[ 0.0008891 0.0011683 -0.00052936 -0.0006679 0.00115335 -0.00038266
-0.00071348 0.00083195 -0.00049516 0.0005447 0.00106202 -0.00069248
-0.00045101 -0.00071007 -0.00056998 -0.0004199 ]
-----------------
Case with noise-optimized decoders and no noise in A
x_hat_noisey_decoder_lif = np.dot(D_noisey_lif, A_LIF)
plt.figure()
plt.suptitle("Neural Representation of Stimuli with noisey decoder and no noise in A (LIF)")
plt.plot(x, x, "r", "--")
plt.plot(x, x_hat_noisey_decoder_lif, "b")
plt.xlabel("$x$ (red) and $\hat x$ (blue)")
plt.ylabel("approximation")
plt.show()
Plotting the difference between $x$ and $\hat x_{LIF}$
err_noise_decoder_lif = x - x_hat_noisey_decoder_lif
plt.figure()
plt.suptitle(
"Error $x-\hat x_{LIF}$ versus $x_{LIF}$ with noisey decoder and no noise in $A_{LIF}$"
)
plt.plot(x, err_noise_decoder_lif)
plt.xlabel("$x$")
plt.ylabel("Error")
plt.show()
Report Root mean squared error (RMSE)
x_rmse_noisey_decoder_lif = rmse(x, x_hat_noisey_decoder_lif)
x_rmse_noisey_decoder_rounded_lif = np.round(x_rmse_noisey_decoder_lif, 10)
print_block(
    "Root Mean Squared Error (Noisy Decoder) and no noise in A",
    x_rmse_noisey_decoder_rounded_lif,
)
Root Mean Squared Error (Noisy Decoder) and no noise in A ----------
0.0305808108
-----------------
Case with noise-optimized decoders and noise in A
x_hat_noisey_decoder_noisey_A_lif = np.dot(D_noisey_lif, A_noise_lif)
plt.figure()
plt.suptitle(
    "Neural Representation of Stimuli with noisy decoder and noise in A (LIF)"
)
plt.plot(x, x, "r--")
plt.plot(x, x_hat_noisey_decoder_noisey_A_lif, "b")
plt.xlabel("$x$ (red) and $\hat x$ (blue)")
plt.ylabel("approximation")
plt.show()
Plotting the difference between $x$ and $\hat x_{LIF}$
err_noise_decoder_noisey_A_lif = x - x_hat_noisey_decoder_noisey_A_lif
plt.figure()
plt.suptitle(
"Error $x-\hat x_{LIF}$ versus $x_{LIF}$ with noisey decoder and noise in $A_{LIF}$"
)
plt.plot(x, err_noise_decoder_noisey_A_lif)
plt.xlabel("$x$")
plt.ylabel("Error")
plt.show()
Report Root Mean Squared Error (RMSE)
x_rmse_noisey_decoder_noisey_A_lif = rmse(x, x_hat_noisey_decoder_noisey_A_lif)
x_rmse_noisey_decoder_noisey_A_rounded_lif = np.round(
x_rmse_noisey_decoder_noisey_A_lif, 10
)
print_block(
    "Root Mean Squared Error (Noisy Decoder) and noise in A",
    x_rmse_noisey_decoder_noisey_A_rounded_lif,
)
Root Mean Squared Error (Noisy Decoder) and noise in A ----------
0.0961018259
-----------------
a) Plotting 2D tuning curves. Plot the tuning curve of an LIF neuron whose 2D preferred direction vector is at an angle of $\theta = -\pi/4$, with $a^\mathrm{max} = 100$ Hz and an $x$-intercept at the origin.
Create 2D LIF Neuron Class
class LIFNeuron2D:
def __init__(self, a_max, x_intercept, angle, id):
self.id = id
self.a_max = a_max
self.circ = [np.cos(angle), np.sin(angle)]
a = gain(a_max, x_intercept, self.circ, "lif-2d")
j_bias = bias(x_intercept, self.circ, a, "lif-2d")
self.a = a
self.j_bias = j_bias
self.rate = []
    def rate_at_point(self, point):
        return lif_encode_2d(self, point)
    def find_rate(self, point):
        self.rate.append(self.rate_at_point(point))
    def find_rate_2d(self, points):
        for point in points:
            self.rate.append(self.rate_at_point(point))
def clear_rate(self):
self.rate = []
def print_details(self):
print("Neuron: --------------")
print("id " + str(self.id))
print("a_max " + str(self.a_max))
print("gain " + str(self.a))
print("angle " + str(self.circ))
print("bias " + str(self.j_bias))
print("rate " + str(len(self.rate)))
print("--------------")
Instantiate the 2D LIF neuron and find the required rates
a_max2d = 100
angle = -np.pi / 4
x_intercept = [0, 0]
neuron2d = LIFNeuron2D(a_max2d, x_intercept, angle, 0)
l = 41
x, y = np.linspace(-1, 1, l), np.linspace(-1, 1, l)
x, y = np.meshgrid(x, y)
points = []
for i in range(len(x)):
for j in range(len(x)):
points.append([x[j][i], y[j][i]])
for point in points:
neuron2d.find_rate(point)
rates = neuron2d.rate
rates = np.reshape(rates, (l, l))
from matplotlib import cm
from matplotlib.ticker import LinearLocator
fig, ax = plt.subplots(subplot_kw={"projection": "3d"})
plt.suptitle("Two-Dimensional Tuning Curve for LIF Neuron")
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
ax.set_zlabel("$a$ (Hz)")
surf = ax.plot_surface(
X=x, Y=y, Z=rates, cmap=cm.turbo, linewidth=0, antialiased=False
)
ax.zaxis.set_major_locator(LinearLocator(5))
fig.colorbar(surf, shrink=0.5, aspect=5)
ax.zaxis.set_major_formatter('{x:.02f}')
plt.show()
b) Plotting the 2D tuning curve along the unit circle. Plot the tuning curve for the same neuron as in a), but only considering the points around the unit circle, i.e., sample the activation for different angles $\theta$ at the points $(\cos\theta, \sin\theta)$, and fit a curve of the form $c_1\cos(c_2\theta + c_3) + c_4$ to the result.
# curve to fit: c1 * cos(c2 * theta + c3) + c4
def fcos(theta, c1, c2, c3, c4):
    return c1 * np.cos(c2 * theta + c3) + c4
samples = 250
thetas = np.linspace(-np.pi, np.pi, samples)
# clear the rates from the previous question
neuron2d.clear_rate()
for theta in thetas:
neuron2d.find_rate([np.cos(theta), np.sin(theta)])
theta_rates = neuron2d.rate
popt, pcov = sci.curve_fit(fcos, thetas, theta_rates)
fit = []
for theta in thetas:
fit.append(fcos(theta, *popt))
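Plot the sampled rates together with the fitted cosine (a minimal plotting cell for the fit computed above):
plt.figure()
plt.suptitle("Tuning Curve Along the Unit Circle with Fitted Cosine")
plt.plot(thetas, theta_rates, "b", label="LIF rates")
plt.plot(thetas, fit, "r--", label="cosine fit")
plt.legend()
plt.xlabel(r"$\theta$ (rad)")
plt.ylabel(r"rate $a(\theta)$ (Hz)")
plt.show()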
c) Discussion. What makes a cosine a good choice for the curve fit in 2.1b? Why does it differ from the ideal curve?
Around its preferred direction, the neuron's response falls off smoothly and symmetrically as the stimulus rotates away, much like the peak of a cosine, so a cosine is a reasonable choice of fit. Where it differs from the ideal curve is that the cosine is continuous and takes nonzero values in regions where the actual tuning curve is clamped to 0 (the LIF neuron cannot fire at a negative rate), as we can see in the graph. It is at these points that the fit fails.
a) Choosing encoding vectors. Generate a set of 100 random unit-length encoding vectors, uniformly distributed around the unit circle, and plot them.
def generate_encoders(samples):
thetas = []
encs = []
for i in range(samples):
theta = np.random.uniform(0, 2 * np.pi)
enc = [np.cos(theta), np.sin(theta)]
thetas.append(theta)
encs.append(enc)
return encs, thetas
num_samples = 100
encs, thetas = generate_encoders(num_samples)
U, V = zip(*encs)
X1Y1 = np.zeros(len(encs))
plt.figure()
plt.suptitle(" 100 Encoders as Unit Vectors Uniformly Distributed About Unit Circle")
ax = plt.gca()
plt.plot(U, V, ".",color="r")
ax.quiver(
X1Y1, X1Y1, U, V, angles="xy", scale_units="xy", scale=1, color="b", width=0.0025
)
domain = [-1.1, 1.1]
ax.set_xlim(domain)
ax.set_ylim(domain)
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.show()
b) Computing the identity decoder. Use LIF neurons with the same properties as in question 1.3. When computing the decoders, take into account noise with standard deviation $\sigma = 0.2 \cdot a^\mathrm{max}$. Plot the decoding vectors.
l = 41
x, y = np.linspace(-1, 1, l), np.linspace(-1, 1, l)
x, y = np.meshgrid(x, y)
inputs = []
for i in range(len(x)):
for j in range(len(x)):
inputs.append([x[j][i], y[j][i]])
neurons = []
# one LIF neuron per encoder angle theta from part a); each x-intercept is a
# random point on the unit circle
for theta in thetas:
    a_max = np.random.uniform(low_freq, high_freq)
    angle = np.random.uniform(0, 2 * np.pi)
    x_intercept = [np.cos(angle), np.sin(angle)]
    neuron = LIFNeuron2D(a_max, x_intercept, theta, theta)
    neurons.append(neuron)
for neuron in neurons:
neuron.find_rate_2d(inputs)
activities = []
for neuron in neurons:
activities.append(neuron.rate)
# make A matrix and matrix of activities
A = np.array(activities)
X = np.array(inputs)
noise_sdtdev = 0.2 * np.amax(A)
W_noise = np.random.normal(scale=noise_sdtdev, size=np.shape(A))
A_noise_lif = A + W_noise
N = len(inputs)
n = len(neurons)
D = np.linalg.lstsq(
A @ A.T + 0.5 * N * np.square(noise_sdtdev) * np.eye(n),
A @ X,
rcond=None,
)[0]
Plot Decoders
U, V = zip(*D)
X1Y1 = np.zeros(len(D))
plt.figure()
plt.suptitle("Decoders for 2-D LIF Neurons")
ax = plt.gca()
plt.plot(U, V, ".",color="r")
ax.quiver(
X1Y1, X1Y1, U, V, angles="xy", scale_units="xy", scale=1, color="b", width=0.0025
)
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.show()
c) Discussion. How do these decoding vectors compare to the encoding vectors?
The encoders all have a magnitude of 1 by construction, whereas the decoding vectors have much smaller magnitudes (they must scale firing rates in the hundreds of hertz back down to values within the unit circle). In both cases, however, the angles of the decoders and encoders are similarly distributed. This suggests that the decoders preserve information about direction, but not about magnitude.
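These observations can be quantified (a minimal sketch; it assumes decoder $i$ corresponds to encoder $i$, which holds here because the neurons were built by iterating over the encoder angles in order):
# compare decoder and encoder magnitudes and angles
E_arr = np.array(encs)
print_block("Mean decoder magnitude", np.mean(np.linalg.norm(D, axis=1)))
print_block("Mean encoder magnitude", np.mean(np.linalg.norm(E_arr, axis=1)))  # 1 by construction
dec_angles = np.arctan2(D[:, 1], D[:, 0])
enc_angles = np.arctan2(E_arr[:, 1], E_arr[:, 0])
# angular difference wrapped to [-pi, pi]; small values mean decoders align with encoders
ang_diff = np.angle(np.exp(1j * (dec_angles - enc_angles)))
print_block("Mean |angle difference| (rad)", np.round(np.mean(np.abs(ang_diff)), 3))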
d) Testing the decoder. Generate 20 random $\mathbf{x}$ values throughout the unit circle (random angle and radius), decode them from the population activity, plot the true and decoded values, and report the RMSE.
num_vecs = 20
inputs = []
for i in range(num_vecs):
angle = np.random.uniform(0, 2 * np.pi)
radius = np.random.uniform(0, 1)
point = [radius * np.cos(angle), radius * np.sin(angle)]
inputs.append(point)
# neurons2 aliases the same Neuron objects as neurons (not a copy)
neurons2 = neurons
# clear rates for neurons
for neuron in neurons2:
neuron.clear_rate()
for neuron in neurons2:
neuron.find_rate_2d(inputs)
activities = []
for neuron in neurons2:
activities.append(neuron.rate)
# make A matrix and matrix of activities
A = np.array(activities)
x_hat = np.dot(D.T, A)
U, V = zip(*inputs)
W, M = zip(*x_hat.T)
X1Y1 = np.zeros(20)
plt.figure()
plt.suptitle("Decoded Values and True Values for 2-D LIF Neurons")
ax = plt.gca()
plt.plot(U, V, ".", color="r", label="True Inputs")
plt.plot(W, M, ".", color="b", label="Decoded Inputs")
ax.quiver(
    X1Y1, X1Y1, U, V, angles="xy", scale_units="xy", scale=1, color="r", width=0.0025
)
ax.quiver(
    X1Y1, X1Y1, W, M, angles="xy", scale_units="xy", scale=1, color="b", width=0.0025
)
plt.legend()
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.show()
Calculate Root Mean Squared Error (RMSE)
def n_dim_rmse(x1, x2):
    # RMSE over all components of all vectors
    d = np.power(x1 - x2, 2).flatten()
    return np.round(np.sqrt(np.mean(d)), 10)
error = n_dim_rmse(np.array(inputs), x_hat.T)
print_block("RMSE between Decoded Values and True Values", error)
RMSE between Decoded Values and True Values ----------
0.050592665
-----------------
e) Using encoders as decoders. Repeat part d) but use the encoders as decoders. This is what Georgopoulos used in his original approach to decoding information from populations of neurons. Plot the decoded values and compute the RMSE. In addition, recompute the RMSE in both cases, but ignore the magnitude of the decoded vectors by normalizing before computing the RMSE.
E = np.array(encs)
x_hat_encs = np.dot(E.T, A)
U, V = zip(*inputs)
W, M = zip(*x_hat_encs.T)
X1Y1 = np.zeros(20)
plt.figure()
plt.suptitle(
"Decoded Values and True Values for 2-D LIF Neurons using Encoders as the Decoders"
)
ax = plt.gca()
true = plt.plot(U, V, "*", color="r", label="True Inputs")
decoded = plt.plot(W, M, ".", color="b", label="Decoded Inputs")
ax.quiver(
X1Y1, X1Y1, W, M, angles="xy", scale_units="xy", scale=1, color="b", width=0.0025
)
plt.legend(
handles=[
true,
decoded,
],
labels=[],
)
plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
plt.show()
Calculate Root Mean Squared Error (RMSE)
error = n_dim_rmse(np.array(inputs), x_hat_encs.T)
print_block("RMSE Between Decoded Values and True Values Using Encoders as Decoders", error)
RMSE Between Decoded Values and True Values Using Encoders as Decoders ----------
946.0418891939
-----------------
Calculate the Root Mean Squared Error (RMSE) ignoring the magnitude of the decoded vectors
# normalize inputs (dividing by the Frobenius norm of the whole set, which
# rescales all vectors jointly rather than normalizing each vector individually)
X = np.array(inputs)
norm_x = np.linalg.norm(X)
X_norm = X / norm_x
# normalize x_hat
x_hat = x_hat.T
norm_x_hat = np.linalg.norm(x_hat)
x_hat_norm = x_hat / norm_x_hat
# normalize x_hat_encs
x_hat_encs = x_hat_encs.T
norm_x_hat_encs = np.linalg.norm(x_hat_encs)
x_hat_encs_norm = x_hat_encs / norm_x_hat_encs
rmse_norm_x = n_dim_rmse(X_norm, x_hat_norm)
print_block("RMSE When Normalizing Vectors", rmse_norm_x)
rmse_norm_x_encs = n_dim_rmse(X_norm, x_hat_encs_norm)
print_block("RMSE When Normalizing Vectors and Using Encoder as Decoder", rmse_norm_x_encs)
RMSE When Normalizing Vectors ----------
0.0142102314
-----------------
RMSE When Normalizing Vectors and Using Encoder as Decoder ----------
0.0345973893
-----------------
f) Discussion. When computing the RMSE on the normalized vectors, using the encoders as decoders should result in a larger, yet still surprisingly small error. Thinking about random unit vectors in high dimensional spaces, why is this the case? What are the relative merits of these two approaches to decoding?
Random unit vectors in high-dimensional spaces are nearly orthogonal with high probability, so a set of random encoders behaves somewhat like an (unscaled) orthogonal basis. This is why, once magnitude is normalized away, using the encoders as decoders yields an RMSE that is surprisingly small and close to that of the optimized decoders. It therefore stands to reason that in some cases it may be advantageous to use the encoders as decoders for the sake of computational efficiency: doing so avoids the least-squares computation of the optimal decoders, which is computationally intensive. With large neuron populations, that saving becomes substantial, at the cost of losing magnitude information and some accuracy.
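The near-orthogonality intuition can be illustrated with a small simulation (a minimal sketch, independent of the populations above): the average dot product between pairs of random unit vectors shrinks as the dimension grows, so random encoders increasingly resemble an (unscaled) orthogonal set.
# mean |dot product| between 1000 pairs of random unit vectors per dimension
for dim in [2, 10, 100, 1000]:
    u = np.random.randn(1000, dim)
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    v = np.random.randn(1000, dim)
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    print(dim, np.round(np.mean(np.abs(np.sum(u * v, axis=1))), 4))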