Possible Algorithms
A list of possible algorithms for sorting bat echolocation calls
Format:
Name
-URL
++Comment
ANJI: Another Java NEAT Implementation
-http://anji.sourceforge.net/
++
Algorithm 1 (unproven)
++Assume we are interested in upward-curving graphs, since these may resemble a bat echolocation call. We propose a left-to-right scan through the graph data, comparing each adjacent pair of points. Looking at the x and y coordinates, we check where point B lies relative to point A: since we are interested in the upward motion of the graph, point B's y coordinate must show a positive increment over point A's as x increases. Repeat this process until the trend between each pair of y values is consistent.
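A minimal sketch of this scan, assuming the call is given as a list of (x, y) points already ordered left to right (the function name and example data are illustrative, not from the original):

```python
def is_upward_trend(points):
    """Scan (x, y) points left to right and check that every adjacent
    pair (A, B) shows y increasing as x increases."""
    for (ax, ay), (bx, by) in zip(points, points[1:]):
        # For an upward curve, B must lie to the right of A and above it.
        if bx <= ax or by <= ay:
            return False
    return True

# Illustrative data: a rising sweep passes, a falling sweep does not.
rising = [(0, 10), (1, 12), (2, 15), (3, 19)]
falling = [(0, 19), (1, 15), (2, 12), (3, 10)]
```

This checks strict pairwise increase; a real detector would likely tolerate some noise (e.g. a small negative increment) rather than rejecting on the first dip.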
Algorithm 2
++Assume a graph of echolocation data has been condensed so that only the dense curves remain and the scattered dots are absent. First, we take two points A and B, where A is the leftmost endpoint of the curve and B is the rightmost. Then we calculate the slope of the line through A and B, reading left to right. If that slope is negative, the curve represents a normal bat call; otherwise it represents an abnormal call. Finally, we sort the curves into separate groups based on the slope.
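A sketch of this endpoint-slope test, assuming each curve is a list of (x, y) points ordered left to right (function names and the "normal"/"abnormal" labels follow the description above; the example sweeps are illustrative):

```python
def classify_call(curve):
    """Classify a condensed curve by the slope of the line through its
    leftmost point A and rightmost point B: negative slope -> 'normal'."""
    ax, ay = curve[0]    # leftmost endpoint A
    bx, by = curve[-1]   # rightmost endpoint B
    slope = (by - ay) / (bx - ax)
    return "normal" if slope < 0 else "abnormal"

def sort_calls(curves):
    """Group curves into 'normal' and 'abnormal' bins by endpoint slope."""
    groups = {"normal": [], "abnormal": []}
    for curve in curves:
        groups[classify_call(curve)].append(curve)
    return groups

# Illustrative calls: a downward frequency sweep and an upward one.
downsweep = [(0.0, 50.0), (2.0, 35.0), (5.0, 20.0)]
upsweep = [(0.0, 20.0), (2.0, 35.0), (5.0, 50.0)]
```

Note that this only looks at the two endpoints, so a curve that dips and recovers would be classified by its net slope alone.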
Bagging
-https://machinelearningmastery.com/implement-bagging-scratch-python/
Support Vector Machine
-https://medium.com/machine-learning-101/chapter-2-svm-support-vector-machine-theory-f0812effc72
Supervised learning algorithms:
Classification:
K-Nearest Neighbors
-https://medium.com/@adi.bronshtein/a-quick-introduction-to-k-nearest-neighbors-algorithm-62214cea29c7
Random Forest
-https://www.datacamp.com/community/tutorials/random-forests-classifier-python
Gradient Boosting
-in scikit-learn package
-http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.GradientBoostingClassifier.html
Time Series Classification
-http://alexminnaar.com/time-series-classification-and-clustering-with-python.html
-https://www.datasciencecentral.com/profiles/blogs/time-series-classification-with-tensorflow
Unsupervised learning algorithm:
Clustering:
MiniBatch KMeans
-in scikit-learn package
-http://scikit-learn.org/stable/modules/clustering.html#mini-batch-kmeans
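The scikit-learn class referenced above is sklearn.cluster.MiniBatchKMeans; as a stripped-down, pure-Python sketch of the underlying idea (each iteration draws a random mini-batch and nudges each point's nearest centroid toward it with a decaying learning rate), with all names and the 1-D data illustrative:

```python
import random

def mini_batch_kmeans(data, k, batch_size=4, iters=200, seed=0):
    """Toy mini-batch k-means on 1-D points: per mini-batch, move each
    point's nearest centroid toward it with a 1/count learning rate."""
    rng = random.Random(seed)
    # Naive deterministic init for this sketch: first and last data points.
    # (scikit-learn's MiniBatchKMeans uses smarter k-means++ seeding.)
    centroids = [float(data[0]), float(data[-1])][:k]
    counts = [0] * k  # per-centroid update counts
    for _ in range(iters):
        batch = rng.sample(data, batch_size)
        for x in batch:
            # Index of the nearest centroid to this point.
            j = min(range(k), key=lambda i: (x - centroids[i]) ** 2)
            counts[j] += 1
            eta = 1.0 / counts[j]  # decaying learning rate
            centroids[j] += eta * (x - centroids[j])
    return sorted(centroids)

# Two obvious 1-D clusters, around ~0.75 and ~9.75.
data = [0.0, 0.5, 1.0, 1.5, 9.0, 9.5, 10.0, 10.5]
centroids = mini_batch_kmeans(data, k=2)
```

In practice we would use the scikit-learn implementation directly on the call features; this sketch is only meant to show why mini-batching keeps each update cheap on large data sets.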
Note:
Classification algorithms such as Random Forest, Gradient Boosting, and the Time Series Classification methods require labeled data. A clustering method like MiniBatch KMeans can work without labeled data, but on its own it cannot produce the classified results we expect.
Edit (3 October 2018):
-https://towardsdatascience.com/the-5-clustering-algorithms-data-scientists-need-to-know-a36d136ef68
--article providing an overview of K-means, mean-shift, DBSCAN, Gaussian mixture expectation–maximization, and agglomerative hierarchical clustering algorithms