diff --git a/docs/kakomonn/kyoto_university/index.md b/docs/kakomonn/kyoto_university/index.md
index 10ffbd383..fd1a916e8 100644
--- a/docs/kakomonn/kyoto_university/index.md
+++ b/docs/kakomonn/kyoto_university/index.md
@@ -61,7 +61,7 @@ tags:
- [力学系数学](informatics/amp_201808_mathematics_for_dynamical_systems.md)
- 2018年度:
- [線形計画](informatics/amp_201708_linear_programming.md)
- - [アルゴリズム基礎](informatics/amp_201808_algorithm.md)
+ - [アルゴリズム基礎](informatics/amp_201708_algorithm.md)
- [オペレーションズ・リサーチ](informatics/amp_201708_operation_research.md)
- [力学系数学](informatics/amp_201708_mathematics_for_dynamical_systems.md)
- 2017年度:
diff --git a/docs/kakomonn/tokyo_university/frontier_sciences/cbms_201608_10.md b/docs/kakomonn/tokyo_university/frontier_sciences/cbms_201608_10.md
new file mode 100644
index 000000000..7526aaf21
--- /dev/null
+++ b/docs/kakomonn/tokyo_university/frontier_sciences/cbms_201608_10.md
@@ -0,0 +1,183 @@
+---
+comments: false
+title: 東京大学 新領域創成科学研究科 メディカル情報生命専攻 2016年8月実施 問題10
+tags:
+ - Tokyo-University
+ - Graph-Theory
+ - Shortest-Path-Problem
+---
+
+# 東京大学 新領域創成科学研究科 メディカル情報生命専攻 2016年8月実施 問題10
+
+## **Author**
+[zephyr](https://inshi-notes.zephyr-zdz.space/)
+
+## **Description**
+We define the shortest distance from a vertex $i$ to a vertex $j$ on a graph as the number of edges in a path from $i$ to $j$ that contains the smallest number of edges, except that the shortest distance is $+\infty$ when no such path exists and that it is $0$ when $i$ and $j$ are identical.
+
+(1) Let us consider the directed graph shown below.
+
+- (A) Show the adjacency matrix.
+- (B) Show a matrix $\mathbf{S}$, whose element $s_{i,j}$ is the shortest distance from a vertex $i$ to a vertex $j$.
+
+
+
+
+
+(2) Suppose we are given a simple directed graph $G = (V, E)$, where the vertex set $V = \{1, 2, \ldots, n\}$ and $E$ is the edge set. $E$ is represented by a matrix $\mathbf{D^{(0)}} = (d_{i,j}^{(0)})$, where
+
+$$
+d_{i,j}^{(0)} = \begin{cases}
+0 & \text{(if } i = j \text{)} \\
+1 & \text{(if an edge } i \to j \text{ exists)} \\
++\infty & \text{(otherwise)}
+\end{cases}
+$$
+
+- (A) Let $\mathbf{V_{i,j}^{(k)}} = \{1, 2, \ldots, k\} \cup \{i, j\}$. Let $\mathbf{E_{i,j}^{(k)}}$ be the set of edges in $E$ that start from and end at a vertex in $\mathbf{V_{i,j}^{(k)}}$. Let $d_{i,j}^{(k)}$ be the shortest distance from a vertex $i$ to a vertex $j$ on a directed graph $G_{i,j}^{(k)} = (\mathbf{V_{i,j}^{(k)}}, \mathbf{E_{i,j}^{(k)}})$, and let $\mathbf{D^{(k)}} = (d_{i,j}^{(k)})$. Express $\mathbf{D^{(1)}}$ in terms of $\mathbf{D^{(0)}}$.
+
+- (B) $\mathbf{D^{(k+1)}}$ can be computed from $\mathbf{D^{(k)}}$ as follows. Fill in the two blanks.
+
+$$
+d_{i,j}^{(k+1)} = \min \left( d_{i,j}^{(k)}, \boxed{\phantom{ddd}} + \boxed{\phantom{ddd}} \right)
+$$
+
+- (C) Given $G$, show an algorithm to compute the all-pair shortest distances, and find its time complexity with regard to $n$.
+
+---
+
+我们将图上从顶点 $i$ 到顶点 $j$ 的最短距离定义为从 $i$ 到 $j$ 的路径中边数最少的那条所包含的边数;当不存在这样的路径时,最短距离为 $+\infty$;当 $i$ 与 $j$ 相同时,最短距离为 $0$。
+
+(1) 让我们考虑下图所示的有向图。
+
+- (A) 写出邻接矩阵。
+- (B) 写出矩阵 $\mathbf{S}$,其元素 $s_{i,j}$ 是从顶点 $i$ 到顶点 $j$ 的最短距离。
+
+
+
+
+
+(2) 假设给定一个简单有向图 $G = (V, E)$,其中顶点集为 $V = \{1, 2, \ldots, n\}$,$E$ 为边集。$E$ 由矩阵 $\mathbf{D^{(0)}} = (d_{i,j}^{(0)})$ 表示,其中
+
+$$
+d_{i,j}^{(0)} = \begin{cases}
+0 & \text{(如果 } i = j \text{)} \\
+1 & \text{(如果存在边 } i \to j \text{)} \\
++\infty & \text{(否则)}
+\end{cases}
+$$
+
+- (A) 设 $\mathbf{V_{i,j}^{(k)}} = \{1, 2, \ldots, k\} \cup \{i, j\}$。设 $\mathbf{E_{i,j}^{(k)}}$ 为从顶点 $\mathbf{V_{i,j}^{(k)}}$ 中的顶点出发并结束于顶点的边集。设 $d_{i,j}^{(k)}$ 为有向图 $G_{i,j}^{(k)} = (\mathbf{V_{i,j}^{(k)}}, \mathbf{E_{i,j}^{(k)}})$ 上从顶点 $i$ 到顶点 $j$ 的最短距离,并设 $\mathbf{D^{(k)}} = (d_{i,j}^{(k)})$。用 $\mathbf{D^{(0)}}$ 表示 $\mathbf{D^{(1)}}$。
+- (B) $\mathbf{D^{(k+1)}}$ 可以从 $\mathbf{D^{(k)}}$ 计算如下。填写两个空格。
+
+$$
+d_{i,j}^{(k+1)} = \min \left( d_{i,j}^{(k)}, \boxed{\phantom{ddd}} + \boxed{\phantom{ddd}} \right)
+$$
+
+- (C) 给定 $G$,给出计算所有顶点对之间最短距离的算法,并求其关于 $n$ 的时间复杂度。
+
+## **Kai**
+### (1)
+#### (A)
+
+The adjacency matrix $\mathbf{A}$ for the graph is a square matrix where the element $a_{i,j}$ is 1 if there is an edge from vertex $i$ to vertex $j$, and 0 otherwise.
+
+$$
+\mathbf{A} = \begin{bmatrix}
+0 & 1 & 0 & 0 & 0 & 0 & 0 \\
+0 & 0 & 1 & 0 & 0 & 0 & 0 \\
+1 & 0 & 0 & 0 & 0 & 1 & 0 \\
+1 & 0 & 0 & 0 & 0 & 0 & 0 \\
+1 & 0 & 0 & 1 & 0 & 0 & 0 \\
+0 & 0 & 0 & 1 & 0 & 0 & 0 \\
+0 & 0 & 0 & 0 & 1 & 1 & 0 \\
+\end{bmatrix}
+$$
+
+#### (B)
+
+The matrix $\mathbf{S}$ will be computed using the Floyd-Warshall algorithm. The element $s_{i,j}$ is the shortest distance from vertex $i$ to vertex $j$.
+
+1. Initialize $\mathbf{S}$ with:
+ - $s_{i,j} = 0$ if $i = j$
+ - $s_{i,j} = 1$ if there is an edge from $i$ to $j$
+ - $s_{i,j} = +\infty$ otherwise
+
+2. For each intermediate vertex $k = 1, \ldots, 7$, update every entry:
+
+ $$
+ s_{i,j} = \min(s_{i,j}, s_{i,k} + s_{k,j})
+ $$
+
+The final $\mathbf{S}$ matrix is:
+
+$$
+\mathbf{S} = \begin{bmatrix}
+0 & 1 & 2 & 4 & \infty & 3 & \infty \\
+2 & 0 & 1 & 3 & \infty & 2 & \infty \\
+1 & 2 & 0 & 2 & \infty & 1 & \infty \\
+1 & 2 & 3 & 0 & \infty & 4 & \infty \\
+1 & 2 & 3 & 1 & 0 & 4 & \infty \\
+2 & 3 & 4 & 1 & \infty & 0 & \infty \\
+2 & 3 & 4 & 2 & 1 & 1 & 0 \\
+\end{bmatrix}
+$$
+
+### (2)
+#### (A)
+
+The matrix $\mathbf{D^{(1)}}$ is obtained from $\mathbf{D^{(0)}}$ by additionally allowing paths that pass through vertex 1 as an intermediate vertex:
+
+$$
+d_{i,j}^{(1)} = \min(d_{i,j}^{(0)}, d_{i,1}^{(0)} + d_{1,j}^{(0)})
+$$
+
+#### (B)
+
+To find $\mathbf{D^{(k+1)}}$ from $\mathbf{D^{(k)}}$, use:
+
+$$
+d_{i,j}^{(k+1)} = \min(d_{i,j}^{(k)}, d_{i,k+1}^{(k)} + d_{k+1,j}^{(k)})
+$$
+
+#### (C)
+
+The Floyd-Warshall algorithm is suitable for computing all-pair shortest distances:
+
+1. Initialize $\mathbf{D}$ where $d_{i,j}$ is 0 if $i = j$, 1 if there is an edge $i \to j$, and $+\infty$ otherwise.
+2. Update $\mathbf{D}$ using: $d_{i,j} = \min(d_{i,j}, d_{i,k} + d_{k,j})$ for all vertices $k$ from 1 to $n$.
+
+**Algorithm:**
+
+```python
+import math
+
+def floyd_warshall(n, edges):
+    """All-pairs shortest distances on a directed graph with vertices 0..n-1."""
+    INF = math.inf
+    # d_{i,j}^{(0)}: 0 on the diagonal, +infinity elsewhere, then 1 per edge
+    D = [[0 if i == j else INF for j in range(n)] for i in range(n)]
+    for u, v in edges:
+        D[u][v] = 1
+    for k in range(n):          # allow vertex k as an intermediate vertex
+        for i in range(n):
+            for j in range(n):
+                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
+    return D
+```
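+
+For the graph in (1), reading the edges off the adjacency matrix $\mathbf{A}$ (0-indexed here), this reproduces the matrix $\mathbf{S}$ from (1)(B):
+
+```python
+edges = [(0, 1), (1, 2), (2, 0), (2, 5), (3, 0),
+         (4, 0), (4, 3), (5, 3), (6, 4), (6, 5)]
+for row in floyd_warshall(7, edges):
+    print(row)  # inf stands for +infinity
+```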
+
+**Time Complexity:**
+
+The time complexity of the Floyd-Warshall algorithm is $O(|V|^3) = O(n^3)$, since it runs three nested loops, each over all $n = |V|$ vertices.
+
+## **Knowledge**
+
+最短路径 Floyd-Warshall算法 图论
+
+### 重点词汇
+
+- adjacency matrix 邻接矩阵
+- shortest distance 最短距离
+- edge 边
+- vertex 顶点
+
+### 参考资料
+
+1. 《算法导论》 第 25 章 最短路径算法
diff --git a/docs/kakomonn/tokyo_university/frontier_sciences/cbms_201608_8.md b/docs/kakomonn/tokyo_university/frontier_sciences/cbms_201608_8.md
new file mode 100644
index 000000000..5c9138c32
--- /dev/null
+++ b/docs/kakomonn/tokyo_university/frontier_sciences/cbms_201608_8.md
@@ -0,0 +1,238 @@
+---
+comments: false
+title: 東京大学 新領域創成科学研究科 メディカル情報生命専攻 2016年8月実施 問題8
+tags:
+ - Tokyo-University
+ - Linear-Algebra
+---
+
+# 東京大学 新領域創成科学研究科 メディカル情報生命専攻 2016年8月実施 問題8
+
+## **Author**
+[zephyr](https://inshi-notes.zephyr-zdz.space/)
+
+## **Description**
+Answer the following questions about linear algebra.
+
+(1) Denote by $\mathbf{o}$ the zero vector. Let $\mathbf{a}$ denote a two-dimensional vector that is not $\mathbf{o}$. $T_{\mathbf{a}}(\mathbf{x})$ is the orthogonal projection of a point $\mathbf{x}$ on $\mathbf{a}$. Prove the following propositions.
+
+- (1.1) $T_{\mathbf{a}}(T_{\mathbf{a}}(\mathbf{x})) = T_{\mathbf{a}}(\mathbf{x})$ for any two-dimensional point $\mathbf{x}$.
+
+- (1.2) $T_{\mathbf{b}}(T_{\mathbf{a}}(\mathbf{x})) = \mathbf{o}$ for any non-zero two-dimensional vector $\mathbf{b}$ orthogonal to $\mathbf{a}$.
+
+
+
+
+
+(2) Assume that a real symmetric matrix $\mathbf{P}$ satisfies $\mathbf{P}^2 = \mathbf{P}$. Prove that the eigenvalues of $\mathbf{P}$ are either 0 or 1.
+
+(3) Denote by $\mathbf{a_1}, \mathbf{a_2}$ the column vectors corresponding to the bases of a two-dimensional subspace of the three dimensional space. Describe the projection matrix to the subspace using $\mathbf{A} = [\mathbf{a_1}, \mathbf{a_2}]$.
+
+---
+
+回答以下有关线性代数的问题。
+
+(1) 用 $\mathbf{o}$ 表示零向量。设 $\mathbf{a}$ 为一个不等于 $\mathbf{o}$ 的二维向量。$T_{\mathbf{a}}(\mathbf{x})$ 表示点 $\mathbf{x}$ 在 $\mathbf{a}$ 上的正交投影。证明以下命题。
+
+- (1.1) 对于任意二维点 $\mathbf{x}$,$T_{\mathbf{a}}(T_{\mathbf{a}}(\mathbf{x})) = T_{\mathbf{a}}(\mathbf{x})$。
+
+- (1.2) 对于任意与 $\mathbf{a}$ 正交的非零二维向量 $\mathbf{b}$,$T_{\mathbf{b}}(T_{\mathbf{a}}(\mathbf{x})) = \mathbf{o}$。
+
+(2) 假设一个实对称矩阵 $\mathbf{P}$ 满足 $\mathbf{P}^2 = \mathbf{P}$。证明 $\mathbf{P}$ 的特征值要么是 0,要么是 1。
+
+(3) 设 $\mathbf{a_1}, \mathbf{a_2}$ 为三维空间中某二维子空间的一组基所对应的列向量。用 $\mathbf{A} = [\mathbf{a_1}, \mathbf{a_2}]$ 表示到该子空间的投影矩阵。
+
+## **Kai**
+### (1)
+#### (1.1)
+
+The orthogonal projection of $\mathbf{x}$ on $\mathbf{a}$ is given by:
+
+$$
+T_{\mathbf{a}}(\mathbf{x}) = \frac{\mathbf{a} \cdot \mathbf{x}}{\mathbf{a} \cdot \mathbf{a}} \mathbf{a}
+$$
+
+To prove the proposition, we apply $T_{\mathbf{a}}$ again on $T_{\mathbf{a}}(\mathbf{x})$:
+
+$$
+T_{\mathbf{a}}(T_{\mathbf{a}}(\mathbf{x})) = T_{\mathbf{a}}\left( \frac{\mathbf{a} \cdot \mathbf{x}}{\mathbf{a} \cdot \mathbf{a}} \mathbf{a} \right)
+$$
+
+Using the definition of orthogonal projection:
+
+$$
+T_{\mathbf{a}}\left( \frac{\mathbf{a} \cdot \mathbf{x}}{\mathbf{a} \cdot \mathbf{a}} \mathbf{a} \right) = \frac{\mathbf{a} \cdot \left( \frac{\mathbf{a} \cdot \mathbf{x}}{\mathbf{a} \cdot \mathbf{a}} \mathbf{a} \right)}{\mathbf{a} \cdot \mathbf{a}} \mathbf{a}
+$$
+
+Simplifying the dot products:
+
+$$
+= \frac{\frac{(\mathbf{a} \cdot \mathbf{x})(\mathbf{a} \cdot \mathbf{a})}{(\mathbf{a} \cdot \mathbf{a})}}{\mathbf{a} \cdot \mathbf{a}} \mathbf{a} = \frac{\mathbf{a} \cdot \mathbf{x}}{\mathbf{a} \cdot \mathbf{a}} \mathbf{a}
+$$
+
+Thus:
+
+$$
+T_{\mathbf{a}}(T_{\mathbf{a}}(\mathbf{x})) = T_{\mathbf{a}}(\mathbf{x})
+$$
+
+#### (1.2)
+
+Given $\mathbf{b} \cdot \mathbf{a} = 0$, we need to show that $T_{\mathbf{b}}(T_{\mathbf{a}}(\mathbf{x})) = \mathbf{o}$. Substituting the projection formula:
+
+$$
+T_{\mathbf{b}}(T_{\mathbf{a}}(\mathbf{x})) = T_{\mathbf{b}}\left( \frac{\mathbf{a} \cdot \mathbf{x}}{\mathbf{a} \cdot \mathbf{a}} \mathbf{a} \right)
+$$
+
+Using the definition of orthogonal projection:
+
+$$
+T_{\mathbf{b}}\left( \frac{\mathbf{a} \cdot \mathbf{x}}{\mathbf{a} \cdot \mathbf{a}} \mathbf{a} \right) = \frac{\mathbf{b} \cdot \left( \frac{\mathbf{a} \cdot \mathbf{x}}{\mathbf{a} \cdot \mathbf{a}} \mathbf{a} \right)}{\mathbf{b} \cdot \mathbf{b}} \mathbf{b}
+$$
+
+Since $\mathbf{b} \cdot \mathbf{a} = 0$:
+
+$$
+= \frac{\frac{(\mathbf{a} \cdot \mathbf{x})(\mathbf{b} \cdot \mathbf{a})}{(\mathbf{a} \cdot \mathbf{a})}}{\mathbf{b} \cdot \mathbf{b}} \mathbf{b} = \frac{(\mathbf{a} \cdot \mathbf{x}) \cdot 0}{(\mathbf{a} \cdot \mathbf{a})(\mathbf{b} \cdot \mathbf{b})} \mathbf{b} = \mathbf{o}
+$$
+
+Thus:
+
+$$
+T_{\mathbf{b}}(T_{\mathbf{a}}(\mathbf{x})) = \mathbf{o}
+$$
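+
+A quick numerical check of both propositions (a sketch with arbitrarily chosen $\mathbf{a}$, $\mathbf{b}$, and $\mathbf{x}$; any non-zero $\mathbf{b}$ with $\mathbf{a} \cdot \mathbf{b} = 0$ works):
+
+```python
+import numpy as np
+
+def T(a, x):
+    """Orthogonal projection of the point x onto the line spanned by a."""
+    return (a @ x) / (a @ a) * a
+
+a = np.array([2.0, 1.0])
+b = np.array([-1.0, 2.0])  # orthogonal to a: a . b = -2 + 2 = 0
+x = np.array([3.0, 4.0])
+
+print(np.allclose(T(a, T(a, x)), T(a, x)))      # True, proposition (1.1)
+print(np.allclose(T(b, T(a, x)), np.zeros(2)))  # True, proposition (1.2)
+```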
+
+### (2)
+
+Since $\mathbf{P}$ is real and symmetric, its eigenvalues are real and it is diagonalizable. Let $\mathbf{v}$ be an eigenvector of $\mathbf{P}$ (hence $\mathbf{v} \neq \mathbf{o}$) with eigenvalue $\lambda$:
+
+$$
+\mathbf{P}\mathbf{v} = \lambda \mathbf{v}
+$$
+
+Applying $\mathbf{P}$ again:
+
+$$
+\mathbf{P}^2 \mathbf{v} = \mathbf{P} (\mathbf{P} \mathbf{v}) = \mathbf{P} (\lambda \mathbf{v}) = \lambda \mathbf{P} \mathbf{v} = \lambda (\lambda \mathbf{v}) = \lambda^2 \mathbf{v}
+$$
+
+Since $\mathbf{P}^2 = \mathbf{P}$, we have:
+
+$$
+\mathbf{P}^2 \mathbf{v} = \mathbf{P} \mathbf{v} = \lambda \mathbf{v}
+$$
+
+Thus:
+
+$$
+\lambda^2 \mathbf{v} = \lambda \mathbf{v}
+$$
+
+Since $\mathbf{v}$ is a non-zero vector, we can conclude:
+
+$$
+\lambda^2 = \lambda
+$$
+
+Thus, the eigenvalues $\lambda$ must satisfy:
+
+$$
+\lambda (\lambda - 1) = 0
+$$
+
+Therefore:
+
+$$
+\lambda = 0 \quad \text{or} \quad \lambda = 1
+$$
+
+### (3)
+
+Given the matrix $\mathbf{A}$ formed by two column vectors $\mathbf{a_1}$ and $\mathbf{a_2}$, which represent the basis of a two-dimensional subspace in three-dimensional space, we want to find the projection matrix $\mathbf{P}$ that projects any vector in $\mathbb{R}^3$ onto this subspace.
+
+Matrix $\mathbf{A}$ is:
+
+$$
+\mathbf{A} = [\mathbf{a_1}, \mathbf{a_2}]
+$$
+
+where $\mathbf{A}$ is a $3 \times 2$ matrix.
+
+#### Derivation of the Projection Matrix
+
+##### 1. Projection of a Vector
+
+The projection of a vector $\mathbf{x}$ onto the subspace spanned by the columns of $\mathbf{A}$ can be expressed as a linear combination of the columns of $\mathbf{A}$:
+
+$$
+\mathbf{P}\mathbf{x} = c_1 \mathbf{a_1} + c_2 \mathbf{a_2}
+$$
+
+In matrix form, we write:
+
+$$
+\mathbf{P}\mathbf{x} = \mathbf{A}\mathbf{c}
+$$
+
+where $\mathbf{c}$ is a column vector of coefficients:
+
+$$
+\mathbf{c} = \begin{bmatrix}
+c_1 \\
+c_2
+\end{bmatrix}
+$$
+
+##### 2. Finding the Coefficients
+
+To determine the coefficients $\mathbf{c}$, we use the property that the projection minimizes the distance from $\mathbf{x}$ to the subspace, which holds exactly when the residual $\mathbf{x} - \mathbf{A}\mathbf{c}$ is orthogonal to the columns of $\mathbf{A}$:
+
+$$
+\mathbf{A}^T (\mathbf{x} - \mathbf{A}\mathbf{c}) = \mathbf{0}
+$$
+
+This equation implies:
+
+$$
+\mathbf{A}^T \mathbf{x} = \mathbf{A}^T \mathbf{A} \mathbf{c}
+$$
+
+Since $\mathbf{a_1}$ and $\mathbf{a_2}$ are linearly independent (they form a basis of the subspace), $\mathbf{A}^T \mathbf{A}$ is invertible, and we solve for $\mathbf{c}$:
+
+$$
+\mathbf{c} = (\mathbf{A}^T \mathbf{A})^{-1} \mathbf{A}^T \mathbf{x}
+$$
+
+##### 3. Constructing the Projection Matrix
+
+Substituting $\mathbf{c}$ back into the projection formula, we have:
+
+$$
+\mathbf{P}\mathbf{x} = \mathbf{A} (\mathbf{A}^T \mathbf{A})^{-1} \mathbf{A}^T \mathbf{x}
+$$
+
+Since this holds for any vector $\mathbf{x}$, the projection matrix $\mathbf{P}$ can be identified as:
+
+$$
+\mathbf{P} = \mathbf{A} (\mathbf{A}^T \mathbf{A})^{-1} \mathbf{A}^T
+$$
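+
+A short numerical sketch (with arbitrarily chosen, linearly independent $\mathbf{a_1}$, $\mathbf{a_2}$) confirms that $\mathbf{P}$ is symmetric and idempotent, and that its eigenvalues are 0 and 1, consistent with (2):
+
+```python
+import numpy as np
+
+a1 = np.array([1.0, 0.0, 1.0])
+a2 = np.array([0.0, 1.0, 1.0])
+A = np.column_stack([a1, a2])         # 3 x 2 matrix [a1, a2]
+
+P = A @ np.linalg.inv(A.T @ A) @ A.T  # projection onto span{a1, a2}
+
+print(np.allclose(P, P.T))            # True: P is symmetric
+print(np.allclose(P @ P, P))          # True: P^2 = P
+print(np.round(np.linalg.eigvalsh(P), 6))  # approximately [0. 1. 1.]
+```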
+
+## **Knowledge**
+
+对称矩阵 特征值和特征向量 投影矩阵
+
+### 重点词汇
+
+- Orthogonal projection 正交投影
+- Symmetric matrix 对称矩阵
+- Eigenvalue 特征值
+- Column vector 列向量
+- Subspace 子空间
+- Projection matrix 投影矩阵
+
+### 参考资料
+
+1. Gilbert Strang, "Linear Algebra and Its Applications," Chap. 3, 5.
+2. David C. Lay, "Linear Algebra and Its Applications," Chap. 6.
diff --git a/docs/kakomonn/tokyo_university/frontier_sciences/cbms_201608_9.md b/docs/kakomonn/tokyo_university/frontier_sciences/cbms_201608_9.md
new file mode 100644
index 000000000..67de0209c
--- /dev/null
+++ b/docs/kakomonn/tokyo_university/frontier_sciences/cbms_201608_9.md
@@ -0,0 +1,295 @@
+---
+comments: false
+title: 東京大学 新領域創成科学研究科 メディカル情報生命専攻 2016年8月実施 問題9
+tags:
+ - Tokyo-University
+ - Sorting-Algorithm
+ - Quick-Sort
+---
+
+# 東京大学 新領域創成科学研究科 メディカル情報生命専攻 2016年8月実施 問題9
+
+## **Author**
+[zephyr](https://inshi-notes.zephyr-zdz.space/)
+
+## **Description**
+Suppose that $\mathbf{M}$ is an array with $n (\geq 1)$ distinct integers. The quicksort algorithm for sorting $\mathbf{M}$ in the ascending order has the following three steps.
+
+A) Select and remove an element $x$ from $\mathbf{M}$. Call $x$ a pivot.
+
+B) Divide $\mathbf{M}$ into arrays $\mathbf{M_1}$ and $\mathbf{M_2}$ such that $y \leq x$ for $y \in \mathbf{M_1}$ and $x \leq z$ for $z \in \mathbf{M_2}$.
+
+C) Sort $\mathbf{M_1}$ and $\mathbf{M_2}$ in the ascending order using quicksort, and return the concatenation of $\mathbf{M_1}$, $x$, and $\mathbf{M_2}$.
+
+Answer the following questions.
+
+(1) How would you implement Step B in quicksort?
+
+(2) In Step A, if we always set the first element in $\mathbf{M}$ to pivot $x$, show an input array that the algorithm sorts in $O(n^2)$ worst-case time, and prove this property.
+
+(3) In Step A, if we select a position in $\mathbf{M}$ at random and set the element at the position to pivot $x$, prove that the expected time complexity is $O(n \log n)$ for an arbitrary input array.
+
+(4) Design an algorithm that calculates the $k$-th smallest element in $\mathbf{M}$ in $O(n)$ expected time, and prove this property.
+
+---
+
+假设 $\mathbf{M}$ 是一个包含 $n (\geq 1)$ 个不同整数的数组。用于升序排列 $\mathbf{M}$ 的快速排序算法有以下三个步骤。
+
+A) 从 $\mathbf{M}$ 中选择并移除一个元素 $x$。称 $x$ 为枢轴。
+
+B) 将 $\mathbf{M}$ 分为数组 $\mathbf{M_1}$ 和 $\mathbf{M_2}$,使得对所有 $y \in \mathbf{M_1}$ 有 $y \leq x$,且对所有 $z \in \mathbf{M_2}$ 有 $x \leq z$。
+
+C) 使用快速排序按升序排列 $\mathbf{M_1}$ 和 $\mathbf{M_2}$,并返回 $\mathbf{M_1}$,$x$ 和 $\mathbf{M_2}$ 的连接。
+
+回答以下问题。
+
+(1) 你将如何在快速排序中实现步骤 B?
+
+(2) 在步骤 A 中,如果我们总是将 $\mathbf{M}$ 的第一个元素作为枢轴 $x$,请给出一个使该算法在最坏情况下需要 $O(n^2)$ 时间才能排序完成的输入数组,并证明这一性质。
+
+(3) 在步骤 A 中,如果我们在 $\mathbf{M}$ 中随机选择一个位置并将该位置的元素设置为枢轴 $x$,证明对于任意输入数组,预期时间复杂度为 $O(n \log n)$。
+
+(4) 设计一个算法,在 $O(n)$ 预期时间内计算 $\mathbf{M}$ 中的第 $k$ 小元素,并证明这一性质。
+
+## **Kai**
+### (1)
+
+To implement Step B, we need to partition the array $\mathbf{M}$ around the pivot element $x$. Here is a common way to do it, known as the Lomuto partition scheme:
+
+1. Initialize an index $i$ to track the boundary of elements less than or equal to the pivot.
+2. Traverse the array from left to right, comparing each element with the pivot.
+3. Swap elements to ensure all elements less than or equal to the pivot are on its left, and all elements greater than the pivot are on its right.
+
+Here is a Python function to achieve this:
+
+```python
+def partition(arr, low, high):
+ pivot = arr[high] # Choose the last element as pivot
+ i = low - 1 # i: Index of smaller element
+
+ for j in range(low, high):
+ if arr[j] <= pivot:
+ i += 1 # Increment index of smaller element
+ arr[i], arr[j] = arr[j], arr[i]
+            # Swap the current element to the end of the <= pivot block
+
+ arr[i+1], arr[high] = arr[high], arr[i+1] # Place pivot in correct position
+ return i+1 # Return the partition index
+```
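+
+For example, with `arr = [3, 8, 2, 5, 1, 4, 7, 6]`, calling `partition(arr, 0, 7)` returns `5` and leaves `arr` as `[3, 2, 5, 1, 4, 6, 7, 8]`: the pivot `6` ends up at its final sorted index `5`, with smaller elements to its left and larger ones to its right.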
+
+### (2)
+
+If we always select the first element as the pivot, the worst-case scenario occurs when the array is already sorted (either in ascending or descending order).
+
+**Example of Worst-case Input:**
+
+$$
+\mathbf{M} = [1, 2, 3, \ldots, n]
+$$
+
+**Proof of $O(n^2)$ Time Complexity:**
+
+1. In the first call, the pivot is the smallest element (1), and the array is divided into $\mathbf{M_1} = []$ and $\mathbf{M_2} = [2, 3, \ldots, n]$.
+2. In the second call, the pivot is the smallest element in $\mathbf{M_2}$ (2), and the array is divided into $\mathbf{M_1} = []$ and $\mathbf{M_2} = [3, 4, \ldots, n]$.
+3. This process continues, making $n-1$ comparisons in the first step, $n-2$ in the second step, and so on.
+
+The total number of comparisons is:
+
+$$
+(n-1) + (n-2) + \cdots + 1 = \frac{n(n-1)}{2} = O(n^2)
+$$
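+
+A small counting experiment (a sketch of the first-element-pivot variant, tallying the $|\mathbf{M}| - 1$ comparisons per partition) reproduces $\frac{n(n-1)}{2}$ exactly on sorted input:
+
+```python
+def quicksort_count(m):
+    """First-element-pivot quicksort; returns (sorted list, #comparisons)."""
+    if len(m) <= 1:
+        return m, 0
+    pivot, rest = m[0], m[1:]
+    m1 = [y for y in rest if y <= pivot]       # Step B
+    m2 = [z for z in rest if z > pivot]
+    s1, c1 = quicksort_count(m1)
+    s2, c2 = quicksort_count(m2)
+    return s1 + [pivot] + s2, len(rest) + c1 + c2
+
+for n in (100, 200, 400):
+    _, comps = quicksort_count(list(range(1, n + 1)))
+    print(n, comps, n * (n - 1) // 2)          # comps == n(n-1)/2
+```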
+
+### (3)
+
+To rigorously prove that the expected time complexity of Quicksort with a random pivot selection is $O(n \log n)$, we will employ probabilistic analysis.
+
+#### Definitions and Assumptions
+
+- Let $T(n)$ be the expected time complexity of Quicksort on an array of size $n$.
+- Each pivot is chosen uniformly at random from the array.
+- $C(n)$ is the number of comparisons made by Quicksort on an array of size $n$.
+
+#### Key Observations
+
+1. When a pivot $x$ is chosen, it partitions the array into two subarrays $\mathbf{M_1}$ and $\mathbf{M_2}$. The sizes of $\mathbf{M_1}$ and $\mathbf{M_2}$ depend on the position of $x$ in the sorted array.
+
+2. If the pivot is the $i$-th smallest element, $\mathbf{M_1}$ will have $i-1$ elements and $\mathbf{M_2}$ will have $n-i$ elements.
+
+3. The number of comparisons required to partition the array around the pivot is $n-1$.
+
+#### Expected Comparisons
+
+We aim to find $E[C(n)]$, the expected number of comparisons for an array of size $n$.
+
+The recurrence relation for $C(n)$ is:
+
+$$
+C(n) = C(i-1) + C(n-i) + (n-1)
+$$
+
+where $i$ is the position of the pivot in the sorted array, chosen uniformly at random from $1$ to $n$.
+
+Taking expectations over the random pivot position $i$ and using linearity of expectation:
+
+$$
+E[C(n)] = E\left[ C(i-1) \right] + E\left[ C(n-i) \right] + (n-1)
+$$
+
+#### Taking Expectation
+
+Since the pivot position $i$ is uniformly distributed on $\{1, 2, \ldots, n\}$, conditioning on each value of $i$ gives:
+
+$$
+E[C(n)] = \frac{1}{n} \sum_{i=1}^{n} \left( E[C(i-1)] + E[C(n-i)] \right) + (n-1)
+$$
+
+Because $\sum_{i=1}^{n} E[C(i-1)]$ and $\sum_{i=1}^{n} E[C(n-i)]$ contain exactly the same terms, this simplifies to:
+
+$$
+E[C(n)] = \frac{2}{n} \sum_{i=0}^{n-1} E[C(i)] + (n-1)
+$$
+
+#### Solving the Recurrence Relation
+
+Since the running time of Quicksort is proportional to the number of comparisons, it suffices to bound $E[C(n)]$; below we write $T(n)$ for $E[C(n)]$ and solve the recurrence by the substitution method.
+
+We hypothesize that $T(n) = O(n \log n)$; that is, assume inductively that $T(i) \leq c \cdot i \log i$ for all $i < n$ and some constant $c > 0$. Substituting into the recurrence,
+
+$$
+T(n) \leq \frac{2}{n} \sum_{i=0}^{n-1} c \cdot i \log i + (n-1)
+$$
+
+We can approximate the sum $\sum_{i=0}^{n-1} i \log i$ using integral approximation:
+
+$$
+\sum_{i=0}^{n-1} i \log i \approx \int_1^n x \log x \, \mathrm{d}x
+$$
+
+Compute the integral:
+
+$$
+\int x \log x \, \mathrm{d}x = \frac{x^2}{2} \log x - \frac{x^2}{4} + C
+$$
+
+Evaluate the integral from $1$ to $n$:
+
+$$
+\int_1^n x \log x \, \mathrm{d}x = \left[ \frac{x^2}{2} \log x - \frac{x^2}{4} \right]_1^n
+$$
+
+At $x=n$:
+
+$$
+\frac{n^2}{2} \log n - \frac{n^2}{4}
+$$
+
+At $x=1$:
+
+$$
+\frac{1}{2} \log 1 - \frac{1}{4} = -\frac{1}{4}
+$$
+
+So the integral result is approximately:
+
+$$
+\frac{n^2}{2} \log n - \frac{n^2}{4} + \frac{1}{4}
+$$
+
+Simplifying this, we get:
+
+$$
+\int_1^n x \log x \, \mathrm{d}x \approx \frac{n^2}{2} \log n - \frac{n^2}{4}
+$$
+
+Thus,
+
+$$
+\sum_{i=0}^{n-1} i \log i \approx \frac{n^2}{2} \log n - \frac{n^2}{4}
+$$
+
+Substitute this back into the recurrence relation:
+
+$$
+T(n) \leq \frac{2c}{n} \left( \frac{n^2}{2} \log n - \frac{n^2}{4} \right) + (n-1) = c\, n \log n - \frac{c\, n}{2} + (n-1)
+$$
+
+For any constant $c \geq 2$, the remainder $(n-1) - \frac{c\, n}{2}$ is non-positive, so:
+
+$$
+T(n) \leq c\, n \log n = O(n \log n)
+$$
+
+This confirms the hypothesis: the expected number of comparisons of Quicksort with random pivot selection, and hence its expected running time, is $O(n \log n)$.
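+
+An empirical check is easy to run (a sketch; the exact expectation for random-pivot Quicksort is known to be $2(n+1)H_n - 4n$, where $H_n$ is the $n$-th harmonic number, which grows like $2n \ln n$):
+
+```python
+import math
+import random
+
+def qsort_count(m):
+    """Random-pivot quicksort; returns (sorted list, #comparisons)."""
+    if len(m) <= 1:
+        return m, 0
+    pivot = m.pop(random.randrange(len(m)))  # Step A with a random pivot
+    m1 = [y for y in m if y <= pivot]        # Step B
+    m2 = [z for z in m if z > pivot]
+    s1, c1 = qsort_count(m1)
+    s2, c2 = qsort_count(m2)
+    return s1 + [pivot] + s2, len(m) + c1 + c2
+
+random.seed(0)
+for n in (1000, 2000, 4000):
+    avg = sum(qsort_count(list(range(n)))[1] for _ in range(20)) / 20
+    print(n, round(avg), round(2 * n * math.log(n)))  # both grow like n log n
+```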
+
+### (4)
+
+This can be achieved using the Quick-select algorithm, which is similar to Quicksort but only recurses into one partition.
+
+**Algorithm:**
+
+1. Choose a pivot element $x$ randomly.
+2. Partition the array into $\mathbf{M_1}$ and $\mathbf{M_2}$ as described in Step B.
+3. If the size of $\mathbf{M_1}$ is $k-1$, then $x$ is the $k$-th smallest element.
+4. If the size of $\mathbf{M_1}$ is greater than $k-1$, recurse into $\mathbf{M_1}$.
+5. Otherwise, recurse into $\mathbf{M_2}$ with the adjusted $k$ value.
+
+**Python Code:**
+
+```python
+import random
+
+def quickselect(arr, low, high, k):
+ if low == high:
+ return arr[low]
+
+    # partition() from (1) pivots on arr[high], so first move a randomly
+    # chosen element into that position
+    rand_index = random.randint(low, high)
+    arr[rand_index], arr[high] = arr[high], arr[rand_index]
+    pivot_index = partition(arr, low, high)
+
+ if k == pivot_index:
+ return arr[k]
+ elif k < pivot_index:
+ return quickselect(arr, low, pivot_index - 1, k)
+ else:
+ return quickselect(arr, pivot_index + 1, high, k)
+
+def find_kth_smallest(arr, k):
+ return quickselect(arr, 0, len(arr) - 1, k - 1)
+```
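+
+For example, `find_kth_smallest([7, 2, 9, 4, 1], 2)` returns `2`, the second-smallest element; note that `quickselect` reorders `arr` in place as it partitions.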
+
+**Proof of $O(n)$ Expected Time Complexity:**
+
+Unlike Quicksort, Quickselect recurses into only one of the two subarrays. With a random pivot, the pivot lands in the middle half of the array with probability $1/2$, so after an expected constant number of partitioning rounds the remaining subproblem has at most $\frac{3n}{4}$ elements. The expected cost therefore satisfies a recurrence of the form:
+
+$$
+T(n) = T\left(\frac{3n}{4}\right) + O(n)
+$$
+
+This solves to $T(n) = O(n)$ by the Master Theorem, or directly: the per-level costs $n + \frac{3}{4}n + \left(\frac{3}{4}\right)^2 n + \cdots$ form a geometric series bounded by $4n$.
+
+## **Knowledge**
+
+快速排序 时间复杂度 分治算法
+
+### 重点词汇
+
+- Quicksort 快速排序
+- Pivot 枢轴
+- Partition 分区
+- Expected time complexity 期望时间复杂度
+- Recurrence relation 递推关系
+
+### 参考资料
+
+1. **Introduction to Algorithms**, Cormen, Leiserson, Rivest, and Stein, 3rd Edition, Chap. 7 (Quicksort), Chap. 9 (Medians and Order Statistics)
diff --git a/docs/kakomonn/tokyo_university/index.md b/docs/kakomonn/tokyo_university/index.md
index cbeb0a5e9..8209f9294 100644
--- a/docs/kakomonn/tokyo_university/index.md
+++ b/docs/kakomonn/tokyo_university/index.md
@@ -262,6 +262,9 @@ tags:
- [8月 問題12](frontier_sciences/cbms_201708_12.md)
- 2017年度:
- [8月 問題7](frontier_sciences/cbms_201608_7.md)
+ - [8月 問題8](frontier_sciences/cbms_201608_8.md)
+ - [8月 問題9](frontier_sciences/cbms_201608_9.md)
+ - [8月 問題10](frontier_sciences/cbms_201608_10.md)
- 海洋技術環境学専攻:
- 2022年度:
- [第1~6問](frontier_sciences/otpe_2022_all.md)
diff --git a/docs/tags.md b/docs/tags.md
index 1cad661c0..0c2a75a86 100644
--- a/docs/tags.md
+++ b/docs/tags.md
@@ -1,3 +1,3 @@
# Tags
-
\ No newline at end of file
+
diff --git a/mkdocs.yml b/mkdocs.yml
index c0120e8e0..ee69fcaec 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -383,6 +383,9 @@ nav:
- 8月 問題12: kakomonn/tokyo_university/frontier_sciences/cbms_201708_12.md
- 2017年度:
- 8月 問題7: kakomonn/tokyo_university/frontier_sciences/cbms_201608_7.md
+ - 8月 問題8: kakomonn/tokyo_university/frontier_sciences/cbms_201608_8.md
+ - 8月 問題9: kakomonn/tokyo_university/frontier_sciences/cbms_201608_9.md
+ - 8月 問題10: kakomonn/tokyo_university/frontier_sciences/cbms_201608_10.md
- 海洋技術環境学専攻:
- 2022年度:
- 第1~6問: kakomonn/tokyo_university/frontier_sciences/otpe_2022_all.md
@@ -441,7 +444,7 @@ nav:
- 2024年度:
- グラフ理論: kakomonn/kyoto_university/informatics/amp_202308_graph_theory.md
- 凸最適化: kakomonn/kyoto_university/informatics/amp_202308_convex_optimaztion.md
- - 常微分方程式: kakomonn/kyoto_university/informatics/amp_202308_ordinary_differential_equations
+ - 常微分方程式: kakomonn/kyoto_university/informatics/amp_202308_ordinary_differential_equations.md
- 2023年度:
- 線形計画: kakomonn/kyoto_university/informatics/amp_202208_linear_programming.md
- グラフ理論: kakomonn/kyoto_university/informatics/amp_202208_graph_theory.md
diff --git a/requirements.txt b/requirements.txt
index f399cf700..eef9fcdcd 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,4 +1,4 @@
-mkdocs-material
+mkdocs-material==9.5.50
mkdocs-git-revision-date-localized-plugin
mkdocs-git-authors-plugin
mkdocs-img2fig-plugin