Topics_of_interest
autopoiesis
reducing prediction error:
bayesian filtering
predictive coding
markov blankets in the ECoG data
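The Bayesian-filtering view of reducing prediction error can be sketched with a scalar Kalman-style filter, where the belief update is driven by a precision-weighted prediction error; the state, noise levels, and variable names below are illustrative, not taken from any particular ECoG pipeline:

```python
import numpy as np

def kalman_step(mu, var, obs, obs_var, proc_var):
    """One Bayesian filtering step: predict, then correct by the
    precision-weighted prediction error (the core of predictive coding)."""
    # Predict: random-walk prior inflates uncertainty
    mu_pred, var_pred = mu, var + proc_var
    # Prediction error and Kalman gain (precision weighting)
    error = obs - mu_pred
    gain = var_pred / (var_pred + obs_var)
    # Correct: belief moves toward the observation in proportion to the gain
    mu_new = mu_pred + gain * error
    var_new = (1.0 - gain) * var_pred
    return mu_new, var_new

rng = np.random.default_rng(0)
true_state = 2.0
mu, var = 0.0, 10.0
for _ in range(200):
    obs = true_state + rng.normal(0.0, 0.5)
    mu, var = kalman_step(mu, var, obs, obs_var=0.25, proc_var=0.001)
```

The gain is the ratio of prior to total precision, so noisy observations move the belief less — the same precision-weighting idea that predictive-coding accounts attribute to cortical error units.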
bayesian belief updating:
predictive coding with reflexes
second order prediction errors
action, perception, expectation
active inference
free energy minimization, external states
descending proprioceptive predictions
corollary discharge visual predictions
heteroclinic cycle (central pattern generator)
action-observation
active planning as inference
expected free energy minimization
complexity, accuracy
divergence, log-evidence
risk, ambiguity
intrinsic value, extrinsic value
negative free energy, evidence bound
KL or risk-sensitive control
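The risk/ambiguity decomposition of expected free energy can be made concrete with a small discrete example; the likelihood matrix and outcome preferences below are invented for illustration:

```python
import numpy as np

def expected_free_energy(q_s, A, log_c):
    """Expected free energy of a policy step, decomposed as risk + ambiguity.
    q_s   : predicted hidden-state distribution under the policy
    A     : likelihood matrix p(o | s), one column per state
    log_c : log preferences over outcomes
    """
    q_o = A @ q_s                                        # predicted outcomes
    risk = np.sum(q_o * (np.log(q_o + 1e-16) - log_c))   # KL[q(o) || p(o)]
    H_A = -np.sum(A * np.log(A + 1e-16), axis=0)         # entropy of p(o|s)
    ambiguity = H_A @ q_s                                # expected outcome entropy
    return risk + ambiguity

A = np.array([[0.9, 0.1],
              [0.1, 0.9]])                 # fairly precise likelihood
log_c = np.log(np.array([0.75, 0.25]))     # agent prefers outcome 0
g_good = expected_free_energy(np.array([1.0, 0.0]), A, log_c)  # toward preferred
g_bad  = expected_free_energy(np.array([0.0, 1.0]), A, log_c)  # toward dispreferred
```

Policies leading to preferred, unambiguous outcomes score a lower expected free energy, which is why its minimization covers both expected utility and infomax-style epistemic value.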
bayesian surprise and infomax
mutual information
expected utility theory
active sampling
stimulus, visual input, salience
multi-objective cost functions and optimization
factor graph
generative neural networks
attention mechanisms
transformer neural networks
Adaptive Learning
Using predictions to efficiently encode neural activity
Continuously updating the predictive models to adapt to changes in neural activity patterns
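Continuously updating a predictive model can be sketched with a least-mean-squares (LMS) update of a single AR(1) coefficient, nudged by each new one-step prediction error; the learning rate and signal model are placeholder choices, not a tuned pipeline:

```python
import numpy as np

def lms_update(w, x_prev, x_now, lr=0.2):
    """Least-mean-squares step: nudge the AR(1) coefficient in the
    direction that reduces the one-step prediction error."""
    pred = w * x_prev
    err = x_now - pred
    return w + lr * err * x_prev, err

rng = np.random.default_rng(1)
true_w = 0.8       # ground-truth autoregressive coefficient
x, w = 0.0, 0.0    # signal sample and model estimate
for _ in range(2000):
    x_next = true_w * x + rng.normal(0.0, 0.1)
    w, err = lms_update(w, x, x_next)
    x = x_next
```

Because each update uses only the latest sample, the model tracks slow drifts in the underlying statistics — the property that matters when neural activity patterns change over a session.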
Error Minimization
predictive model
Geodesic Paths
Modeling Neural Manifolds
Curvature Analysis
Persistent Homology to identify stable and robust features in the neural activity
Betti Numbers to quantify the topological complexity of neural activity patterns
Mapper Algorithm to create simplified representations of high-dimensional neural data
Probabilistic Inference on Manifolds: Extend Bayesian filtering and predictive coding frameworks to operate on curved spaces, taking into account the geometric structure of neural manifolds
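A minimal sketch of inference constrained to a curved space: take the innovation in the tangent plane at the current estimate, step, then retract back onto the manifold. The unit sphere, step size, and toy target below are assumptions for illustration, not a model of any specific neural manifold:

```python
import numpy as np

def sphere_filter_step(mu, obs, lr=0.3):
    """Move the estimate toward the observation within the tangent
    plane at mu, then retract (renormalize) onto the unit sphere."""
    innovation = obs - mu
    # Project out the component normal to the sphere at mu
    tangent = innovation - np.dot(innovation, mu) * mu
    mu_new = mu + lr * tangent
    return mu_new / np.linalg.norm(mu_new)   # retraction to the manifold

mu = np.array([1.0, 0.0, 0.0])       # current belief on the sphere
target = np.array([0.0, 1.0, 0.0])   # repeatedly observed direction
for _ in range(50):
    mu = sphere_filter_step(mu, target)
```

The projection-then-retraction pattern is the basic move of geometric filtering: updates never leave the manifold, so the belief always remains a valid point of the curved state space.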
Topological Analysis
Topology-Informed Machine Learning: use topological features as inputs to machine learning models for decoding neural intentions
Geometric Learning algorithms that are aware of the curved structure of neural spaces, making sure updates and adaptations respect the manifold structure
Topological Regularization
Topological Summaries
Attractors
Stable States
Transition Dynamics
Modeling Neural Plasticity
Manifold Learning
Symbolic Dynamics
Stable states in neural activity can be associated with specific thoughts, intentions, or actions.
because stable states are less likely to change abruptly, they provide a level of predictability
Active states are periods of heightened neural activity that are often associated with specific cognitive or motor functions
understanding the transition between active and inactive states can help in setting adaptive thresholds
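One simple adaptive threshold along these lines: flag samples whose band power exceeds a running baseline by k standard deviations. The window length, k, and the synthetic burst below are arbitrary choices for illustration:

```python
import numpy as np

def adaptive_active_mask(power, window=50, k=2.0):
    """Flag samples whose power exceeds a running baseline mean by
    k baseline standard deviations -- an adaptive active-state detector."""
    active = np.zeros(len(power), dtype=bool)
    for t in range(window, len(power)):
        base = power[t - window:t]               # trailing baseline window
        active[t] = power[t] > base.mean() + k * base.std()
    return active

rng = np.random.default_rng(3)
power = rng.normal(1.0, 0.1, 300)   # resting-level band power
power[150:160] += 2.0               # a burst of heightened activity
mask = adaptive_active_mask(power)
```

Because the baseline is recomputed from a trailing window, the threshold drifts with slow changes in resting activity instead of being fixed once at calibration time.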
By focusing on the Markov blanket, a BCI system can filter out irrelevant information, improving decoding accuracy.
State Estimation
Markov blanket can be used to understand how external stimuli (encoded signals) influence the internal states of the brain
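For a directed graphical model, the Markov blanket of a node is its parents, its children, and its children's other parents; conditioning on these renders the node independent of everything else, which is the formal basis for filtering out irrelevant channels. The toy stimulus/sensory/internal/motor graph below is purely illustrative:

```python
def markov_blanket(parents, node):
    """Markov blanket of `node` in a DAG given as {child: set(parents)}:
    its parents, its children, and its children's other parents."""
    children = {c for c, ps in parents.items() if node in ps}
    co_parents = set()
    for c in children:
        co_parents |= parents[c] - {node}
    return parents.get(node, set()) | children | co_parents

# Toy model: stimulus -> sensory -> internal -> motor, with noise -> sensory
parents = {
    "sensory":  {"stimulus", "noise"},
    "internal": {"sensory"},
    "motor":    {"internal"},
}
blanket = markov_blanket(parents, "internal")
```

Here the internal states are screened off from the stimulus by their blanket, so a decoder that conditions on the sensory and motor variables loses nothing by ignoring the rest.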
multivariate embedding theory is applied to obtain multivariate embedded vectors and multivariate dispersion patterns; by assigning labels to these patterns, the original multichannel time series can be transformed into a symbolic sequence over multiple symbols instead of the original binary conversion
multivariate multiscale dispersion Lempel-Ziv complexity (mvMDLZC)
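A rough sketch of the symbolization idea: quantize each channel into dispersion classes, fuse the per-channel classes into one multichannel symbol per time point, and measure the Lempel-Ziv complexity of the resulting symbolic sequence. The class mapping and greedy LZ parsing here are simplified stand-ins for the published mvMDLZC procedure, not a reimplementation of it:

```python
import numpy as np

def dispersion_symbols(x, n_classes=3):
    """Quantize each channel's z-scored samples into n_classes via a
    logistic map (a smooth stand-in for the normal-CDF mapping), then
    fuse the per-channel classes into one symbol per time point."""
    z = (x - x.mean(axis=1, keepdims=True)) / x.std(axis=1, keepdims=True)
    cdf = 1.0 / (1.0 + np.exp(-z))
    classes = np.minimum((cdf * n_classes).astype(int), n_classes - 1)
    return [tuple(col) for col in classes.T]   # one multichannel symbol per t

def lempel_ziv_complexity(seq):
    """Number of distinct phrases in a greedy LZ76-style parsing."""
    phrases, i = set(), 0
    while i < len(seq):
        j = i + 1
        while tuple(seq[i:j]) in phrases and j <= len(seq):
            j += 1                              # extend until phrase is new
        phrases.add(tuple(seq[i:j]))
        i = j
    return len(phrases)

rng = np.random.default_rng(4)
noise = rng.normal(size=(2, 400))               # irregular 2-channel signal
period = np.tile(rng.normal(size=(2, 4)), 100)  # strongly periodic signal
c_noise = lempel_ziv_complexity(dispersion_symbols(noise))
c_period = lempel_ziv_complexity(dispersion_symbols(period))
```

Regular multichannel dynamics compress into few phrases while irregular activity does not, which is the contrast the complexity measure is meant to quantify.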
adaptive embedding fusion