This project is supervised by Professor Thomas Sinclair of Purdue Mathematics. The contributors are Darshini Rajamani, Luke Luschwitz, and Karim El-Sharkawy of Purdue University.
Below are two research papers we have completed so far:
- "Insights into the Untanglement Mapping, the Properties of Positive Mappings, and the Conditions for Extension" – This paper explores the theoretical underpinnings of positive mappings, their structural properties, and the necessary conditions for their extension. (Positive_Mappings_Proofs-3.pdf)
- "An Elementary Approach to Farkas' Lemma and Its Relation to Hyperplane Separation" – This paper presents a fundamental perspective on Farkas' Lemma, highlighting its connections to hyperplane separation in optimization theory. (Farkas Lemma.pdf)
Our research focuses on analyzing positive mappings and their extendibility. This involves developing code to evaluate specific matrix properties and visualize their cones. We use Python with NumPy, SciPy (specifically scipy.optimize.linprog), and scikit-learn. The code classifies and validates matrices against mathematical criteria such as extendibility, drawing on a blend of disciplines: linear algebra, optimization, linear programming, Euclidean distance geometry, and machine learning, particularly SVMs. Please read our papers for more information on the theory of positive mappings and their extensions.
Our main goal is to find patterns within extendable matrices. In other words, we want to determine what differentiates extendable matrices from nonextendable ones, which would reduce the time and effort needed to identify whether a matrix extends. Currently, we are investigating collinearity and coplanarity to determine whether these properties are key indicators.
- Creating_Extendable_and_Nonextendable_Maps.ipynb: Generates a large number of mappings, some extendable and others not, then applies ML (sklearn.linear_model.LogisticRegression) to classify them.
- pattern_recognition_and_visualization.ipynb: Analyzes patterns and tests theoretical assumptions.
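The classification step in the first notebook can be sketched as follows. This is a minimal stand-in, not the notebook's actual pipeline: the random features and toy labels below are placeholders for the flattened 4×4 mappings and their extendability labels.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: each row plays the role of a flattened
# 4x4 mapping (16 features); labels 1 = extendable, 0 = nonextendable.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y = (X.sum(axis=1) > 0).astype(int)  # toy labels for illustration only

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

The fitted coefficient vector `clf.coef_` then gives one linear direction separating the two classes in feature space.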
- Data Sets Folder:
- (Non)ExtendableMappings: Contains two files of extendable and nonextendable mappings (100k total) generated by the first notebook.
- farthestBsMORE: Lists the nonextendable mappings farthest from the extendable mappings.
- trueClassifiersGood: Stores the best classifiers found through SVMs and verification processes.
We generate 4×4 matrices that satisfy specific mathematical properties, including:
- Row and column sum conditions
- Positivity constraints
- Classification using linear programming
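A matrix generator satisfying conditions of this kind might look like the sketch below. The specific row/column-sum and positivity conditions we actually impose are detailed in the papers; normalizing each row to sum to 1 is an illustrative assumption here, not our exact construction.

```python
import numpy as np

def random_candidate(rng):
    """Sketch: draw an entrywise-positive 4x4 matrix and normalize each
    row to sum to 1.  The true row/column-sum conditions used in the
    notebooks are described in the papers; this is only a stand-in."""
    M = rng.random((4, 4)) + 1e-6       # strictly positive entries
    return M / M.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
M = random_candidate(rng)
```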
Each mapping is classified as extendable or nonextendable using scipy.optimize.linprog. The constraints are derived from the matrix entries to determine feasibility.
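The feasibility test itself reduces to running linprog with a zero objective and checking whether the solver finds any point satisfying the constraints. The constraint matrices in our notebooks are built from the mapping's entries; the tiny system below is only a toy example of the mechanism.

```python
import numpy as np
from scipy.optimize import linprog

def is_feasible(A_ub, b_ub):
    """Feasibility check: minimize the zero objective subject to
    A_ub @ x <= b_ub and x >= 0.  Success means the constraint
    system (here a stand-in for the extendability conditions) has
    a solution."""
    res = linprog(c=np.zeros(A_ub.shape[1]), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * A_ub.shape[1], method="highs")
    return res.status == 0  # status 0 = an optimal (feasible) point found

# Toy system: x1 + x2 <= 1 with x >= 0 is feasible;
# x1 + x2 <= -1 with x >= 0 is not.
A = np.array([[1.0, 1.0]])
feasible = is_feasible(A, np.array([1.0]))      # True
infeasible = is_feasible(A, np.array([-1.0]))   # False
```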
We analyze the nonextendable matrices farthest from extendable ones and use Support Vector Machines (SVMs) to find linear separators.
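Finding a linear separator with an SVM amounts to fitting `sklearn.svm.SVC` with a linear kernel and reading off the hyperplane. The two Gaussian clouds below are hypothetical stand-ins for the extendable mappings and the farthest nonextendable ones.

```python
import numpy as np
from sklearn.svm import SVC

# Stand-in data: two well-separated clouds playing the role of the
# extendable and farthest-nonextendable mapping families.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2.0, 0.5, size=(50, 3)),
               rng.normal(+2.0, 0.5, size=(50, 3))])
y = np.array([0] * 50 + [1] * 50)

svm = SVC(kernel="linear").fit(X, y)
w, b = svm.coef_[0], svm.intercept_[0]  # separating hyperplane w.x + b = 0
```

A candidate classifier like `(w, b)` is what we then verify against conditions on the matrix rows before storing it in trueClassifiersGood.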
We employ 3D scatter plots to visualize extendable, nonextendable, and classifier mappings. These visualizations help in understanding how these mappings form clusters in space.
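A minimal version of such a scatter plot, assuming the 16-dimensional mappings have already been projected down to three coordinates (the random points here are placeholders for those projections):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted runs
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical 3D coordinates for two mapping families.
rng = np.random.default_rng(3)
ext = rng.normal(0.0, 1.0, size=(30, 3))
nonext = rng.normal(3.0, 1.0, size=(30, 3))

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(*ext.T, label="extendable")
ax.scatter(*nonext.T, label="nonextendable")
ax.legend()
fig.savefig("mappings_3d.png")
```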
We compute key matrix properties such as:
- Rank, determinant, eigenvalues, and eigenvectors
- Reduced row echelon form (RREF)
- Null space and column space
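Most of these properties come straight from NumPy and SciPy. The example matrix below is deliberately rank-deficient (its second row is twice the first) so the null space is nontrivial:

```python
import numpy as np
from scipy.linalg import null_space

M = np.array([[1., 2., 3., 4.],
              [2., 4., 6., 8.],   # = 2 * row 1, so rank drops to 3
              [0., 1., 0., 1.],
              [1., 0., 1., 0.]])

rank = np.linalg.matrix_rank(M)      # rank of the mapping
det = np.linalg.det(M)               # 0 for a rank-deficient matrix
eigvals, eigvecs = np.linalg.eig(M)  # spectrum and eigenvectors
kernel = null_space(M)               # orthonormal basis of the null space
```

(For RREF specifically, NumPy has no built-in; one option is `sympy.Matrix(M).rref()`.)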
We analyze whether matrices are coplanar or collinear to identify structural similarities between extendable and nonextendable mappings.
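Collinearity and coplanarity of a set of points (mappings viewed as vectors) can be tested with one rank computation on the differences from a base point: affine rank at most 1 means collinear, at most 2 means coplanar. The helper name below is ours, not from the notebooks.

```python
import numpy as np

def affine_rank(points):
    """Rank of the point cloud relative to its first point:
    rank <= 1 means the points are collinear, rank <= 2 coplanar."""
    diffs = points[1:] - points[0]
    return np.linalg.matrix_rank(diffs)

line = np.array([[0, 0, 0], [1, 1, 1], [2, 2, 2]], dtype=float)
plane = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
```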
Support Vector Machines (SVMs) are used to classify extendable and nonextendable matrices. We verify classifier validity by testing conditions on matrix rows.
We analyze the positivity and negativity distributions across extendable, nonextendable, and classifier mappings to explore potential patterns.
- Python 3.10.12
- NumPy 1.26.4
- Scikit-learn 1.3.2
- SciPy 1.13.1
- Matplotlib 3.7.1