From 049bf19b4933bb0e14c47d40faa28880de5288f1 Mon Sep 17 00:00:00 2001 From: myyura Date: Thu, 20 Feb 2025 02:06:00 +0800 Subject: [PATCH] =?UTF-8?q?=E6=9D=B1=E4=BA=AC=E5=A4=A7=E5=AD=A6=20?= =?UTF-8?q?=E6=83=85=E5=A0=B1=E7=90=86=E5=B7=A5=E5=AD=A6=E7=B3=BB=E7=A0=94?= =?UTF-8?q?=E7=A9=B6=E7=A7=91=20=E3=82=B3=E3=83=B3=E3=83=94=E3=83=A5?= =?UTF-8?q?=E3=83=BC=E3=82=BF=E7=A7=91=E5=AD=A6=E5=B0=82=E6=94=BB=202020?= =?UTF-8?q?=E5=B9=B48=E6=9C=88=E5=AE=9F=E6=96=BD=20=E5=B0=82=E9=96=80?= =?UTF-8?q?=E7=A7=91=E7=9B=AE=20=E5=95=8F=E9=A1=8C1~4?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit --- .../IST/cs_202008_senmon_1.md | 107 ++++++++++++ .../IST/cs_202008_senmon_2.md | 121 +++++++++++++ .../IST/cs_202008_senmon_3.md | 146 ++++++++++++++++ .../IST/cs_202008_senmon_4.md | 159 ++++++++++++++++++ docs/kakomonn/tokyo_university/index.md | 4 + mkdocs.yml | 4 + 6 files changed, 541 insertions(+) create mode 100644 docs/kakomonn/tokyo_university/IST/cs_202008_senmon_1.md create mode 100644 docs/kakomonn/tokyo_university/IST/cs_202008_senmon_2.md create mode 100644 docs/kakomonn/tokyo_university/IST/cs_202008_senmon_3.md create mode 100644 docs/kakomonn/tokyo_university/IST/cs_202008_senmon_4.md diff --git a/docs/kakomonn/tokyo_university/IST/cs_202008_senmon_1.md b/docs/kakomonn/tokyo_university/IST/cs_202008_senmon_1.md new file mode 100644 index 000000000..b60654628 --- /dev/null +++ b/docs/kakomonn/tokyo_university/IST/cs_202008_senmon_1.md @@ -0,0 +1,107 @@ +--- +comments: false +title: 東京大学 情報理工学系研究科 コンピュータ科学専攻 2020年8月実施 専門科目 問題1 +tags: + - Tokyo-University + - Graph-Theory +--- +# 東京大学 情報理工学系研究科 コンピュータ科学専攻 2020年8月実施 専門科目 問題1 + +## **Author** +[zephyr](https://inshi-notes.zephyr-zdz.space/) + +## **Description** +In undirected graphs, a self-loop is an edge connecting the same vertex, and multi-edges are multiple edges connecting the same pair of vertices. 
From now on, we consider undirected graphs without self-loops and possibly with multi-edges. We say that a graph $\mathbf{G}$ is an $\mathbf{A}$-graph if a graph consisting of a single edge can be obtained from $\mathbf{G}$ by repeatedly applying the following two operations. + +### B-operation + +When two multi-edges connect a pair of vertices, replace the multi-edges with a single edge connecting the pair of vertices. + +### C-operation + +When one edge connects vertices $\mathbf{u}$ and $\mathbf{v}$, another edge connects $\mathbf{v}$ and $\mathbf{w}$ (where $\mathbf{u} \neq \mathbf{w}$), and there is no other edge incident to $\mathbf{v}$, remove the vertex $\mathbf{v}$ and replace the two edges with a new edge connecting $\mathbf{u}$ and $\mathbf{w}$. + +Answer the following questions. + +(1) Let $\mathbf{K}_n$ be a complete graph of $\mathbf{n}$ vertices. Answer whether each of $\mathbf{K}_3$ and $\mathbf{K}_4$ is an $\mathbf{A}$-graph or not. + +(2) Show that every $\mathbf{A}$-graph is planar. + +(3) Give the maximum number of edges of an $\mathbf{A}$-graph with $\mathbf{n}$ vertices without multi-edges, with a proof. Also, give such an $\mathbf{A}$-graph attaining the maximum for general $\mathbf{n}$, with an explanation. + +(4) Give an $\mathbf{O(m + n)}$-time algorithm which, given an undirected graph with $\mathbf{n}$ vertices and $\mathbf{m}$ edges as an input, determines whether it is an $\mathbf{A}$-graph or not. Explain also the graph data structures used in the algorithm for realizing $\mathbf{B}$-operations and $\mathbf{C}$-operations. + +## **Kai** +### (1) + +**$\mathbf{K}_3$:** +The complete graph $\mathbf{K}_3$ consists of 3 vertices and 3 edges, forming a triangle. Since there are no multi-edges in $\mathbf{K}_3$, the $\mathbf{B}$-operation does not apply. To apply the $\mathbf{C}$-operation, we need a vertex $\mathbf{v}$ with exactly two incident edges, connecting to vertices $\mathbf{u}$ and $\mathbf{w}$. 
In $\mathbf{K}_3$, each vertex has degree 2, so the $\mathbf{C}$-operation can be applied to any vertex; this removes that vertex and leaves two parallel edges between the remaining two vertices. A single $\mathbf{B}$-operation then merges them into one edge. Therefore, $\mathbf{K}_3$ is an $\mathbf{A}$-graph.
+
+**$\mathbf{K}_4$:**
+The complete graph $\mathbf{K}_4$ consists of 4 vertices and 6 edges, forming a tetrahedron. There are no multi-edges, so the $\mathbf{B}$-operation does not apply, and every vertex has degree 3, so the $\mathbf{C}$-operation does not apply either. Since no operation can ever be performed, the graph can never change, and because $\mathbf{K}_4$ is not a single edge, $\mathbf{K}_4$ is not an $\mathbf{A}$-graph.
+
+### (2)
+
+Planar graphs are graphs that can be embedded in the plane without edge crossings.
+
+Every $\mathbf{A}$-graph is planar. To see this, consider the inverses of the two operations:
+
+- The inverse of the $\mathbf{B}$-operation duplicates an edge into two parallel edges. This preserves planarity, since the new edge can be drawn alongside the existing one.
+- The inverse of the $\mathbf{C}$-operation subdivides an edge by inserting a new vertex of degree 2 on it. This also preserves planarity.
+
+By definition, every $\mathbf{A}$-graph is obtained from a graph consisting of a single edge, which is trivially planar, by a sequence of these inverse operations. Since both inverse operations preserve planarity, every $\mathbf{A}$-graph is planar.
+
+### (3)
+
+The maximum number of edges in an $\mathbf{A}$-graph with $\mathbf{n}$ vertices ($\mathbf{n} \geq 2$) without multi-edges is $\mathbf{2n-3}$.
+
+**Proof of the upper bound (induction on $\mathbf{n}$):**
+
+- For $\mathbf{n} = 2$, a graph without multi-edges has at most $1 = 2 \cdot 2 - 3$ edge.
+- Let $\mathbf{G}$ be an $\mathbf{A}$-graph with $\mathbf{n} \geq 3$ vertices, $\mathbf{m}$ edges, and no multi-edges. Since the $\mathbf{B}$-operation does not apply, the first operation in any reduction sequence must be a $\mathbf{C}$-operation, so $\mathbf{G}$ has a vertex $\mathbf{v}$ of degree 2 with neighbors $\mathbf{u} \neq \mathbf{w}$. Apply the $\mathbf{C}$-operation at $\mathbf{v}$; if the new edge $\mathbf{(u,w)}$ is parallel to an existing edge, merge the pair immediately with a $\mathbf{B}$-operation (performing this merge first never blocks the remaining reduction). The result is an $\mathbf{A}$-graph without multi-edges that has $\mathbf{n-1}$ vertices and at least $\mathbf{m-2}$ edges, so by the induction hypothesis $\mathbf{m} - 2 \leq 2(\mathbf{n}-1) - 3$, that is, $\mathbf{m} \leq 2\mathbf{n} - 3$.
+
+This is consistent with (1): $\mathbf{K}_3$ is an $\mathbf{A}$-graph with $3 = 2 \cdot 3 - 3$ edges.
+
+**An $\mathbf{A}$-graph attaining the maximum for general $\mathbf{n}$:** take vertices $\mathbf{u}, \mathbf{w}, \mathbf{v}_1, \ldots, \mathbf{v}_{n-2}$ and the edges $\mathbf{(u,w)}$ and $\mathbf{(u,v_i)}, \mathbf{(v_i,w)}$ for each $i$, giving $2(\mathbf{n}-2) + 1 = 2\mathbf{n} - 3$ edges and no multi-edges. Every $\mathbf{v}_i$ has degree 2, so we can repeatedly apply a $\mathbf{C}$-operation at some $\mathbf{v}_i$, which creates an edge parallel to $\mathbf{(u,w)}$, followed by a $\mathbf{B}$-operation merging the pair. After $\mathbf{n} - 2$ such rounds only the single edge $\mathbf{(u,w)}$ remains, so this graph is an $\mathbf{A}$-graph.
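The reduction process can also be simulated directly. The following Python sketch (the function name and the edge-multiset representation are my own, not part of the problem) greedily applies $\mathbf{B}$- and $\mathbf{C}$-operations and reports whether a multigraph collapses to a single edge; greedy order suffices here because the two reductions are confluent, though a genuinely $\mathbf{O(m+n)}$ implementation needs the worklist data structures asked for in question (4).

```python
from collections import Counter

def is_a_graph(edges):
    """Greedily apply B- and C-operations to an undirected multigraph
    (no self-loops); return True iff it reduces to a single edge."""
    # Edge multiplicities, keyed by the unordered pair of endpoints.
    mult = Counter(frozenset(e) for e in edges)
    changed = True
    while changed:
        changed = False
        # B-operation: collapse any set of parallel edges into a single edge.
        for pair in list(mult):
            if mult[pair] >= 2:
                mult[pair] = 1
                changed = True
        # C-operation: remove a vertex v of degree 2 whose two edges go to
        # distinct neighbours u and w, and connect u and w directly.
        deg = Counter()
        for pair, k in mult.items():
            for v in pair:
                deg[v] += k
        for v, d in deg.items():
            if d == 2:
                nbrs = [p for p in mult if v in p]
                if len(nbrs) == 2:  # two distinct neighbours, so u != w
                    (u,), (w,) = (p - {v} for p in nbrs)
                    del mult[nbrs[0]], mult[nbrs[1]]
                    mult[frozenset((u, w))] += 1
                    changed = True
                    break  # degrees are stale now; recompute
    return sum(mult.values()) == 1

k3 = [(0, 1), (1, 2), (0, 2)]
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
print(is_a_graph(k3), is_a_graph(k4))  # True False
```

Running it confirms the answers from (1): $\mathbf{K}_3$ reduces to a single edge, while $\mathbf{K}_4$ admits no operation at all.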
+
+### (4)
+
+To determine whether a given undirected graph with $\mathbf{n}$ vertices and $\mathbf{m}$ edges is an $\mathbf{A}$-graph, repeatedly apply $\mathbf{B}$- and $\mathbf{C}$-operations until neither applies, and accept if and only if a single edge remains. To achieve $\mathbf{O(m+n)}$ time, every operation must be found and executed in (amortized) constant time:
+
+1. **Graph representation:** Store the graph in adjacency lists in which the two list entries of each edge hold cross-pointers to each other, and keep a degree counter for every vertex, so that deleting an edge and relinking an endpoint take $\mathbf{O(1)}$ time.
+2. **Merge the initial multi-edges:** Sort the edge list by its endpoint pairs with a two-pass bucket (radix) sort in $\mathbf{O(m+n)}$ time; parallel edges become adjacent in the sorted list and are merged by $\mathbf{B}$-operations.
+3. **Initialize a worklist:** Put every vertex of degree 2 into a queue.
+4. **Reduce:** Pop a vertex $\mathbf{v}$ of degree 2 with neighbors $\mathbf{u} \neq \mathbf{w}$, remove $\mathbf{v}$, and add the edge $\mathbf{(u,w)}$ ($\mathbf{C}$-operation). If $\mathbf{u}$ and $\mathbf{w}$ were already adjacent (detectable in $\mathbf{O(1)}$ expected time with a hash table of the current edge set), merge the parallel pair by a $\mathbf{B}$-operation; this lowers the degrees of $\mathbf{u}$ and $\mathbf{w}$, and any vertex whose degree drops to 2 is pushed onto the queue.
+5. **Check the result:** When the queue is empty, no operation applies any more. The input is an $\mathbf{A}$-graph if and only if the remaining graph is a single edge.
+
+Every operation removes at least one edge, so at most $\mathbf{m}$ operations are performed, each in amortized (expected) constant time; together with the initialization this gives $\mathbf{O(m+n)}$ total time.
+
+**Graph Data Structures:**
+- **Adjacency lists with cross-pointers:** allow $\mathbf{O(1)}$ edge deletion and endpoint relinking for the $\mathbf{C}$-operation.
+- **Degree counters and a queue of degree-2 vertices:** identify vertices eligible for the $\mathbf{C}$-operation in $\mathbf{O(1)}$.
+- **A hash table over endpoint pairs (and the initial bucket sort):** detects the parallel edges to which the $\mathbf{B}$-operation applies.
+
+## **Knowledge**
+
+Graph theory, planar graphs, algorithms
+
+### Key points
+
+Recognizing when the $\mathbf{B}$- and $\mathbf{C}$-operations apply is the key to deciding whether a graph is an $\mathbf{A}$-graph. For large graphs, the operations can be made efficient by maintaining adjacency lists and a degree list.
+
+### Techniques and tips
+
+For problems that define a graph property through specific operations, simulate the operations and simplify the graph step by step; understanding how each operation changes the structure is essential.
+
+### Vocabulary
+
+- Self-loop 自环
+- Multi-edges 多重边
+- Planar 平面
+- Complete graph 完全图
+- Algorithm 算法
+
+### References
+
+1. "Introduction to Graph Theory" by Douglas B. West, Chapter 4
+2. 
"Graph Theory" by Reinhard Diestel, Chapter 5 + diff --git a/docs/kakomonn/tokyo_university/IST/cs_202008_senmon_2.md b/docs/kakomonn/tokyo_university/IST/cs_202008_senmon_2.md new file mode 100644 index 000000000..dfc735789 --- /dev/null +++ b/docs/kakomonn/tokyo_university/IST/cs_202008_senmon_2.md @@ -0,0 +1,121 @@ +--- +comments: false +title: 東京大学 情報理工学系研究科 コンピュータ科学専攻 2020年8月実施 専門科目 問題2 +tags: + - Tokyo-University +--- +# 東京大学 情報理工学系研究科 コンピュータ科学専攻 2020年8月実施 専門科目 問題2 + +## **Author** +[zephyr](https://inshi-notes.zephyr-zdz.space/) + +## **Description** +Let $\Sigma$ be the set $\{a, b\}$ of letters. For a word $w \in \Sigma^*$ and two languages $L_a, L_b \subseteq \Sigma^*$ over $\Sigma$, we define the language $w\{a \mapsto L_a, b \mapsto L_b\} \subseteq \Sigma^*$ as follows, by induction on $w$. + +$$ +\epsilon\{a \mapsto L_a, b \mapsto L_b\} = \{\epsilon\} +$$ + +$$ +(aw)\{a \mapsto L_a, b \mapsto L_b\} = \{w_1 w_2 \mid w_1 \in L_a, w_2 \in w\{a \mapsto L_a, b \mapsto L_b\}\} +$$ + +$$ +(bw)\{a \mapsto L_a, b \mapsto L_b\} = \{w_1 w_2 \mid w_1 \in L_b, w_2 \in w\{a \mapsto L_a, b \mapsto L_b\}\} +$$ + +Here, $\epsilon$ represents the empty word. For example, if $w = aba$, $L_a = \{b^n \mid n \geq 0\}$, and $L_b = \{a^n \mid n \geq 0\}$, then $w\{a \mapsto L_a, b \mapsto L_b\} = \{b^l a^m b^n \mid l, m, n \geq 0\}$. Furthermore, for languages $L, L_a, L_b \subseteq \Sigma^*$, we define $L\{a \mapsto L_a, b \mapsto L_b\}$ as $\bigcup_{w \in L} w\{a \mapsto L_a, b \mapsto L_b\}$. For example, if $L = \{a^n b \mid n \geq 0\}$, $L_a = \{ab\}$, and $L_b = \{a^n \mid n \geq 0\}$, then $L\{a \mapsto L_a, b \mapsto L_b\} = \{(ab)^m a^n \mid m, n \geq 0\}$. + +Answer the following questions. + +(1) Let $L = \{(ab)^m a^n \mid m, n \geq 0\}$, $L_a = \{bb\}$, and $L_b = \{ab, a\}$. Express $L\{a \mapsto L_a, b \mapsto L_b\}$ using a regular expression. 
+ +(2) Let $L' = \{a^m b^n \mid m \geq n \geq 0\}$, $L_a' = \{a^n \mid n \geq 0\}$, and $L_b' = \{a^m b^m \mid m \geq 0\}$. Express $\{w \in \Sigma^* \mid w\{a \mapsto L_a', b \mapsto L_b'\} \subseteq L'\}$ using a regular expression. + +(3) Let $A_0 = (Q_0, \Sigma, \delta_0, q_{0,0}, F_0)$, $A_1 = (Q_1, \Sigma, \delta_1, q_{1,0}, F_1)$, and $A_2 = (Q_2, \Sigma, \delta_2, q_{2,0}, F_2)$ be deterministic finite automata, and for each $i \in \{0, 1, 2\}$, let $L_i$ be the language accepted by $A_i$. Here, $Q_i, \delta_i, q_{i,0}, F_i$ are the set of states, the transition function, the initial state, and the set of final states of $A_i$ ($i \in \{0, 1, 2\}$), respectively. Assume that the transition functions $\delta_i \in Q_i \times \Sigma \rightarrow Q_i$ ($i \in \{0, 1, 2\}$) are total. Give a non-deterministic finite automaton that accepts $L_0 \{a \mapsto L_1, b \mapsto L_2\}$, with a brief explanation. You may use $\epsilon$-transitions. + +(4) For $A_i$ and $L_i$ ($i \in \{0, 1, 2\}$) in question (3), give a deterministic finite automaton that accepts $\{w \in \Sigma^* \mid w\{a \mapsto L_1, b \mapsto L_2\} \subseteq L_0\}$, with a brief explanation. + +## **Kai** +### (1) + +Given $L = \{(ab)^m a^n \mid m, n \geq 0\}$, $L_a = \{bb\}$, and $L_b = \{ab, a\}$: + +$$ +L\{a \mapsto L_a, b \mapsto L_b\} = (bb(ab+a))^*(bb)^* +$$ + +This expression represents the language where every $a$ in the original language is replaced by $bb$ and every $b$ is replaced by either $ab$ or $a$. + +### (2) + +Given $L' = \{a^m b^n \mid m \geq n \geq 0\}$, $L_a' = \{a^n \mid n \geq 0\}$, and $L_b' = \{a^m b^m \mid m \geq 0\}$, for $w \in \Sigma^*$, suppose $w' = w\{a \mapsto L_a', b \mapsto L_b'\}$. + +Since $w' \subseteq L'$, we can express any element of $w'$, $w'_i$ as $a^x a^y b^y$ where $x, y \geq 0$. This implies that $w'$ contains $a^x$ followed by $a^y b^y$ for some $x, y \geq 0$. 
+
+Since every $b$ in a substituted string can only come from a $b$ in $w$ (substituting $a$ yields only $a$'s), $w$ may contain at most one $b$, and no $a$ may follow it. Indeed, if $w$ contained two $b$'s, choosing $a^m b^m$ with $m = 1$ for both would produce two separated blocks of $b$'s, which is not in $L'$; and if an $a$ followed a $b$, choosing $b \mapsto ab$ and $a \mapsto a$ would produce an $a$ after a $b$, which is not in $L'$ either. Conversely, every $w = a^x$ or $w = a^x b$ satisfies the condition, since it only produces strings of the form $a^k a^m b^m = a^{k+m} b^m$ with $k + m \geq m \geq 0$.
+
+Therefore, the regular expression for $\{w \in \Sigma^* \mid w\{a \mapsto L_a', b \mapsto L_b'\} \subseteq L'\}$ is:
+
+$$
+a^*(\epsilon + b)
+$$
+
+### (3)
+
+We construct an NFA that accepts $L_0\{a \mapsto L_1, b \mapsto L_2\}$ using $\epsilon$-transitions. The NFA keeps the states of $A_0$, but each transition of $A_0$ is replaced by a fresh copy of $A_1$ or $A_2$: for every transition $q \xrightarrow{a} q'$ of $A_0$, add a new copy of $A_1$, an $\epsilon$-transition from $q$ to the initial state of that copy, and an $\epsilon$-transition from every final state of the copy to $q'$:
+
+$$
+q \xrightarrow{\epsilon} q_{1,0} \xrightarrow{} \ldots \xrightarrow{} f \in F_1 \xrightarrow{\epsilon} q'
+$$
+
+Every transition $q \xrightarrow{b} q'$ of $A_0$ is replaced in the same way by a copy of $A_2$. The initial state of the NFA is $q_{0,0}$, and its final states are $F_0$.
+
+**Explanation:** A string belongs to $L_0\{a \mapsto L_1, b \mapsto L_2\}$ exactly when it can be split into segments $w_1 w_2 \cdots w_k$ such that replacing each segment $w_i \in L_1$ by $a$ and each segment $w_i \in L_2$ by $b$ yields a word of $L_0$. The NFA guesses this decomposition: it simulates $A_0$ on the hidden word while verifying each segment with the corresponding copy of $A_1$ or $A_2$.
+
+### (4)
+
+We construct a DFA for $\{w \in \Sigma^* \mid w\{a \mapsto L_1, b \mapsto L_2\} \subseteq L_0\}$ by a subset construction over the states of $A_0$. Let $\hat{\delta}_0 \in Q_0 \times \Sigma^* \rightarrow Q_0$ denote the extension of $\delta_0$ to strings. Define
+
+$$
+D = (2^{Q_0}, \Sigma, \Delta, \{q_{0,0}\}, \mathcal{F}), \quad \mathcal{F} = \{S \subseteq Q_0 \mid S \subseteq F_0\},
+$$
+
+$$
+\Delta(S, a) = \{\hat{\delta}_0(q, u) \mid q \in S, u \in L_1\}, \quad \Delta(S, b) = \{\hat{\delta}_0(q, u) \mid q \in S, u \in L_2\}.
+$$
+
+**Explanation:** By induction on $w$, after reading $w$ the state of $D$ is exactly the set of states that $A_0$ can reach from $q_{0,0}$ by reading some element of $w\{a \mapsto L_1, b \mapsto L_2\}$. Hence $w\{a \mapsto L_1, b \mapsto L_2\} \subseteq L_0$ holds if and only if this set is contained in $F_0$, which is exactly the acceptance condition $\mathcal{F}$. Each transition is effectively computable: for a state $q \in Q_0$, the set $\{\hat{\delta}_0(q, u) \mid u \in L_1\}$ is obtained from the product automaton of $A_0$ (with initial state $q$) and $A_1$ by collecting the $Q_0$-components of all reachable product states whose $A_1$-component lies in $F_1$; the case of $b$ and $A_2$ is analogous.
+
+## **Knowledge**
+
+Language substitution, regular expressions, NFA, DFA
+
+### Key points
+
+1. Expressing a language substitution as a regular expression requires understanding the substitution process and the pattern of the resulting language.
+2. Identifying the set of strings that satisfy a substitution condition requires relating the original language to the substituted one.
+3. Building an NFA for a language substitution uses $\epsilon$-transitions to stitch the component automata together.
+4. Building a DFA for the containment condition requires tracking a set of states of $A_0$ simultaneously.
+
+### Techniques and tips
+
+- When handling substitution problems, understand how each replacement step affects the final result.
+- When constructing NFAs and DFAs, pay attention to the transition relations between states and to how $\epsilon$-transitions connect the automata of the different languages.
+
+### Vocabulary
+
+- $\epsilon$-transitions: $\epsilon$-转换
+- Regular Expression: 正则表达式
+- Non-deterministic Finite Automaton (NFA): 非确定性有限自动机
+- Deterministic Finite Automaton (DFA): 确定性有限自动机
+
+### References
+
+1. Michael Sipser, "Introduction to the Theory of Computation", Chapter 2
+2. John E. Hopcroft, Rajeev Motwani, Jeffrey D. Ullman, "Introduction to Automata Theory, Languages, and Computation", Chapter 3
diff --git a/docs/kakomonn/tokyo_university/IST/cs_202008_senmon_3.md b/docs/kakomonn/tokyo_university/IST/cs_202008_senmon_3.md
new file mode 100644
index 000000000..b04926432
--- /dev/null
+++ b/docs/kakomonn/tokyo_university/IST/cs_202008_senmon_3.md
@@ -0,0 +1,146 @@
+---
+comments: false
+title: 東京大学 情報理工学系研究科 コンピュータ科学専攻 2020年8月実施 専門科目 問題3
+tags:
+  - Tokyo-University
+---
+# 東京大学 情報理工学系研究科 コンピュータ科学専攻 2020年8月実施 専門科目 問題3
+
+## **Author**
+[zephyr](https://inshi-notes.zephyr-zdz.space/)
+
+## **Description**
+Consider bit-serial communication circuits which send and receive 5-bit information bit-by-bit in a noisy environment. 
The 5-bit information consists of a 2-bit start-bit signal, 2-bit payload data, and a 1-bit odd-parity signal. + +The sender circuit always outputs '0' in the initial state. At the beginning of a communication, the sender outputs 2-bit data '11' bit-by-bit. It subsequently outputs 2-bit payload data bit-by-bit from the most significant bit. It finally outputs an odd-parity signal such that the number of '1's in the sent bit sequence including the 2-bit start-bit signal, the payload data, and the parity signal itself is an odd number. After sending the parity signal, the sender circuit goes to the initial state, and it outputs '0' until the next sending. + +The receiver circuit has a 1-bit input A from the sender circuit, a 1-bit output B for the parity check result, and a 2-bit output for the received payload data. It obtains payload data from a received bit sequence and does the odd parity checking. + +In the initial state, the receiver circuit waits for '1' corresponding to the first bit of a start-bit signal. In the next clock cycle after receiving the first bit of a start-bit signal, it receives a value corresponding to the second bit of a start-bit signal. If the received value corresponding to the second bit of a start-bit signal is '0', the receiver circuit judges that the first received bit '1' was an error caused by a noise, and goes back to the initial state. Otherwise, in the next 2 clock cycles, it stores each value of the input A as payload data. At the next clock cycle, it receives a parity-bit, and it verifies that the number of '1's in the received 5-bit sequence consisting of the 2-bit start-bit signal, the 2-bit payload data, and the parity-bit is odd. It assigns '1' to the output B if the number of '1's is odd, and it assigns '0' otherwise. The value of the output B is always '0', except in the clock cycles for receiving a parity-bit. The receiver circuit then goes to the initial state, regardless of the parity-check result. 
+
+Answer the following questions.
+
+(1) Give the state transition diagram of a Mealy-type finite state machine (FSM), consisting of 6 states, for the parity check circuit with the input A and the output B in the receiver circuit. Based on the state transition diagram, give also a corresponding state transition table and an output table by using the one-hot encoding. One-hot encoding is a method for encoding each state as a bit sequence where only one bit is '1' and the other bits are '0'.
+
+(2) Based on the state transition table and the output table in question (1), express the output B as a Boolean expression in terms of the input A and the one-hot encoding representation of the current state of the FSM. Based on the Boolean expression, give also a corresponding gate-level circuit of the parity check circuit that outputs B, given A and the one-hot encoding representation of the current state of the FSM as inputs. You are allowed to use only 2-input AND gates, 2-input OR-gates, and NOT-gates. There is no limitation on the number of gates. You need not describe unused input signals.
+
+(3) According to the Boolean expression answered in question (2), give a CMOS transistor level circuit that outputs B, given A and the one-hot encoding representation of the current state of the FSM as inputs. You are not allowed to use more than 12 transistors. You may use the inverter mark, but the number of transistors required for the inverters must be included in the total number of transistors. You need not describe unused input signals.
+
+## **Kai**
+### (1)
+
+#### State Transition Diagram
+
+The Mealy-type FSM uses the following states (the initial state is re-entered after every frame):
+
+- **S0**: Initial state, waiting for the first start bit. On '0' stay in S0; on '1' (the first start bit) go to S1.
+- **S1**: First start bit received, waiting for the second start bit. On '0' the first '1' is judged to be noise and the FSM returns to S0; on '1' go to S2.
+- **S2**: Both start bits received; the running number of '1's (two) is even. The next input is the first payload bit.
+- **S3 / S4**: One payload bit received; the running number of '1's is even (S3) or odd (S4).
+- **S5 / S6**: Both payload bits received, waiting for the parity bit; the running number of '1's is even (S5) or odd (S6). In S5 the parity bit must be '1', and in S6 it must be '0', for the total number of '1's to be odd.
+
+State transitions and outputs B for input A are as follows (A/B means input/output; S0 and S0' both denote the initial state):
+
+```mermaid
+graph LR
+    S0 -->|0/0| S0
+    S0 -->|1/0| S1
+    S1 -->|0/0| S0
+    S1 -->|1/0| S2
+    S2 -->|0/0| S3
+    S3 -->|0/0| S5
+    S2 -->|1/0| S4
+    S3 -->|1/0| S6
+    S4 -->|0/0| S6
+    S4 -->|1/0| S5
+    S5 -->|0/0, 1/1| S0'
+    S6 -->|1/0, 0/1| S0'
+```
+
+The corresponding state transition table and output table using the one-hot encoding are as follows:
+
+| State | S0 | S1 | S2 | S3 | S4 | S5 | S6 |
+| ------------------ | --- | --- | --- | --- | --- | --- | --- |
+| Next State $(A=0)$ | S0 | S0 | S3 | S5 | S6 | S0 | S0 |
+| Output B $(A=0)$ | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
+| Next State $(A=1)$ | S1 | S2 | S4 | S6 | S5 | S0 | S0 |
+| Output B $(A=1)$ | 0 | 0 | 0 | 0 | 0 | 1 | 0 |
+
+### (2)
+
+Encoding the states one-hot over S1–S6, with the initial state S0 represented by the all-zero code (a common simplification that keeps the code 6 bits wide):
+
+- S0: 000000
+- S1: 100000
+- S2: 010000
+- S3: 001000
+- S4: 000100
+- S5: 000010
+- S6: 000001
+
+From the output table, B is '1' only when $A = 1$ in state S5 or when $A = 0$ in state S6. Hence, as a Boolean function of the current state and the input A:
+
+- $B = A \cdot S5 + \overline{A} \cdot S6$
+
+**Gate-Level Circuit:**
+We can construct the circuit using 2-input AND gates, 2-input OR gates, and NOT gates:
+
+- $Q1 = A \cdot S5$
+- $Q2 = \overline{A} \cdot S6$
+- $B = Q1 + Q2$
+
+```plaintext
+S5 -------------\
+                 AND----\
+A ---+----------/        \
+     |                    OR----> B
+     \---NOT---\         /
+                AND-----/
+S6 ------------/
+```
+
+### (3)
+
+Starting from $B = A \cdot S5 + \overline{A} \cdot S6$, apply De Morgan's theorem to obtain the complement, which a single AND-OR-Invert (AOI22) complex gate can realize:
+
+$$
+\overline{B} = \overline{A \cdot S5 + \overline{A} \cdot S6}
+$$
+
+The CMOS circuit therefore consists of:
+
+- An inverter generating $\overline{A}$ (2 transistors).
+- An AOI22 complex gate computing $\overline{B}$ (8 transistors): the pull-down network is ($A$ in series with $S5$) in parallel with ($\overline{A}$ in series with $S6$); the pull-up network is the dual, ($A$ in parallel with $S5$) in series with ($\overline{A}$ in parallel with $S6$).
+- An output inverter restoring $B$ from $\overline{B}$ (2 transistors).
+
+In total $2 + 8 + 2 = 12$ transistors, which meets the limit of 12.
+
+## **Knowledge**
+
+Finite state machines, Mealy-type FSM, one-hot encoding, Boolean algebra, logic circuits, CMOS circuits
+
+### Key points
+
+- Make sure the operation flow of the finite state machine is correctly understood.
+- Make sure the state transition diagram is correct and covers all states and transition conditions.
+- When converting to Boolean expressions, handle every state encoding and its transitions carefully.
+
+### Techniques and tips
+
+- One-hot encoding simplifies the state transition logic when designing a state machine.
+- For Boolean expressions, start with basic logic gates, then optimize to meet the design constraints.
+- In CMOS circuit design, count the transistors required by every logic gate and optimize accordingly.
+
+### Vocabulary
+
+finite state machine 有限状态机
+
+Mealy-type FSM 梅利型有限状态机
+
+one-hot encoding 一热编码
+
+parity check 奇偶校验
+
+boolean expression 布尔表达式
+
+### References
+
+1. "Digital Design and Computer Architecture" by David Harris, Sarah Harris, Chap. 3
+2. "CMOS VLSI Design: A Circuits and Systems Perspective" by Neil Weste, David Harris, Chap. 4
diff --git a/docs/kakomonn/tokyo_university/IST/cs_202008_senmon_4.md b/docs/kakomonn/tokyo_university/IST/cs_202008_senmon_4.md
new file mode 100644
index 000000000..9e9f9c459
--- /dev/null
+++ b/docs/kakomonn/tokyo_university/IST/cs_202008_senmon_4.md
@@ -0,0 +1,159 @@
+---
+comments: false
+title: 東京大学 情報理工学系研究科 コンピュータ科学専攻 2020年8月実施 専門科目 問題4
+tags:
+  - Tokyo-University
+---
+# 東京大学 情報理工学系研究科 コンピュータ科学専攻 2020年8月実施 専門科目 問題4
+
+## **Author**
+[zephyr](https://inshi-notes.zephyr-zdz.space/)
+
+## **Description**
+Let $\mathbb{R}$ be the set of real numbers. 
Denote by $\mathbf{T}$ the transposition operator of a vector and a matrix. When $\mathbf{w} = (w_1, w_2, \ldots, w_d)^\mathbf{T} \in \mathbb{R}^d$ is a $d$-dimensional column vector, the norm $\|\mathbf{w}\|_2$ is defined by $\|\mathbf{w}\|_2 = \sqrt{w_1^2 + w_2^2 + \ldots + w_d^2}$. Define the inner product of two column vectors $\mathbf{x}_1, \mathbf{x}_2 \in \mathbb{R}^d$ as $\mathbf{x}_1^\mathbf{T} \mathbf{x}_2 \in \mathbb{R}$. For a $d \times d$ matrix $\mathbf{A} \in \mathbb{R}^{d \times d}$, define $\|\mathbf{w}\|_{\mathbf{A}} = \sqrt{\mathbf{w}^\mathbf{T} \mathbf{A} \mathbf{w}}$. Let $\mathbf{tr}(\mathbf{B})$ be the trace of the matrix $\mathbf{B}$. + +Consider the problem of predicting a real-valued label $y \in \mathbb{R}$ from a $d$-dimensional real vector $\mathbf{x} \in \mathbb{R}^d$. For learning a predictor, suppose that $n$ training samples + +$$ +\{(\mathbf{x}_i, y_i) \mid \mathbf{x}_i \in \mathbb{R}^d, y_i \in \mathbb{R}, i = 1, 2, \ldots, n\} +$$ + +are given where $(\mathbf{x}_i, y_i)$ means that $y_i$ is the real-valued label of $\mathbf{x}_i$. In addition, by using a $d$-dimensional vector $\mathbf{w}^* \in \mathbb{R}^d$ and observational noise $\epsilon_i (i = 1, 2, \ldots, n)$ that is independent and identically distributed, assume the data generation process as + +$$ +y_i = \mathbf{w}^{*\mathbf{T}} \mathbf{x}_i + \epsilon_i \quad (i = 1, 2, \ldots, n), +$$ + +where the expectation $\mathbb{E}[\epsilon_i] = 0$ and variance $\mathbb{V}[\epsilon_i] = \sigma^2 > 0 \quad (i = 1, \ldots, n)$. Let us introduce the symbols + +$$ +\mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_n]^\mathbf{T} \in \mathbb{R}^{n \times d}, \quad \mathbf{Y} = [y_1, \ldots, y_n]^\mathbf{T} \in \mathbb{R}^n, \quad \mathbf{\epsilon} = [\epsilon_1, \ldots, \epsilon_n]^\mathbf{T} \in \mathbb{R}^n. +$$ + +We also use the symbol $\mathbf{\Phi} = \frac{1}{n} \mathbf{X}^\mathbf{T} \mathbf{X} \in \mathbb{R}^{d \times d}$ where $\mathbf{\Phi}$ is assumed to be a regular matrix. 
The expectation over the observational noises is expressed by $\mathbb{E}_{\mathbf{\epsilon}}[\cdot]$. + +We formulate the learning of a predictor $f(\mathbf{x}) = \hat{\mathbf{w}}^\mathbf{T} \mathbf{x}$ as the following optimization problem. + +$$ +\mathbf{\hat{w}} = \mathop{\arg\min}\limits_{\mathbf{w} \in \mathbb{R}^d} L(\mathbf{w}) +$$ + +$$ +L(\mathbf{w}) = \frac{1}{2n} \sum_{i=1}^{n} (y_i - \mathbf{w}^\mathbf{T} \mathbf{x}_i)^2 = \frac{1}{2n} \|\mathbf{Y} - \mathbf{Xw}\|_2^2. +$$ + +Answer the following questions. Describe not only an answer but also the derivation process. + +(1) Express $\mathbf{\hat{w}}$ using $\mathbf{X}, \mathbf{Y}, \mathbf{\Phi}$, and $n$. + +(2) Suppose we wish to express $\mathbb{E}_{\mathbf{\epsilon}}[L(\mathbf{w})]$ in the form of $\frac{1}{2} \|\mathbf{w} - \mathbf{w}^*\|_{\mathbf{A}}^2 + b$. Express the matrix $\mathbf{A} \in \mathbb{R}^{d \times d}$ and the positive real number $b > 0$ using $\mathbf{\Phi}$ and $\sigma^2$. + +(3) Suppose we wish to express $\mathbb{E}_{\mathbf{\epsilon}}[L(\hat{\mathbf{w}})] - \mathbb{E}_{\mathbf{\epsilon}}[L(\mathbf{w^*})]$ in the form of $\frac{\sigma^2}{2n} \mathbf{tr}(\mathbf{B})$. Express the matrix $\mathbf{B} \in \mathbb{R}^{d \times d}$ using the matrix $\mathbf{X}$. + +(4) Explain what problem arises when $\mathbf{\Phi}$ is not a regular matrix and suggest a way to remedy the problem. + +## **Kai** +### (1) + +To find the optimal weight vector $\mathbf{\hat{w}}$, we minimize the loss function $L(\mathbf{w})$ defined as: + +$$ +L(\mathbf{w}) = \frac{1}{2n} \sum_{i=1}^{n} (y_i - \mathbf{w}^\mathbf{T} \mathbf{x}_i)^2 = \frac{1}{2n} \|\mathbf{Y} - \mathbf{Xw}\|_2^2. +$$ + +To minimize $L(\mathbf{w})$, we take the derivative of $L(\mathbf{w})$ with respect to $\mathbf{w}$ and set it to zero: + +$$ +\nabla L(\mathbf{w}) = -\frac{1}{n} \mathbf{X}^\mathbf{T} (\mathbf{Y} - \mathbf{Xw}) = 0. 
+$$
+
+Solving for $\mathbf{w}$ gives:
+
+$$
+\mathbf{X}^\mathbf{T} \mathbf{Y} = \mathbf{X}^\mathbf{T} \mathbf{X} \mathbf{w}.
+$$
+
+Thus, the optimal weight vector $\mathbf{\hat{w}}$ is:
+
+$$
+\mathbf{\hat{w}} = (\mathbf{X}^\mathbf{T} \mathbf{X})^{-1} \mathbf{X}^\mathbf{T} \mathbf{Y} = \mathbf{\Phi}^{-1} \left( \frac{1}{n} \mathbf{X}^\mathbf{T} \mathbf{Y} \right).
+$$
+
+### (2)
+
+To express $\mathbb{E}_{\mathbf{\epsilon}}[L(\mathbf{w})]$, we first express $L(\mathbf{w})$:
+
+$$
+L(\mathbf{w}) = \frac{1}{2n} (\mathbf{Y} - \mathbf{Xw})^\mathbf{T} (\mathbf{Y} - \mathbf{Xw}).
+$$
+
+Using the data generation model $y_i = \mathbf{w}^{*\mathbf{T}} \mathbf{x}_i + \epsilon_i$, we can write $\mathbf{Y} = \mathbf{X} \mathbf{w}^* + \mathbf{\epsilon}$. Then:
+
+$$
+\mathbb{E}_{\mathbf{\epsilon}}[L(\mathbf{w})] = \frac{1}{2n} \mathbb{E}_{\mathbf{\epsilon}}\left[(\mathbf{X} \mathbf{w}^* + \mathbf{\epsilon} - \mathbf{X} \mathbf{w})^\mathbf{T} (\mathbf{X} \mathbf{w}^* + \mathbf{\epsilon} - \mathbf{X} \mathbf{w})\right].
+$$
+
+Expanding, the cross term $2 (\mathbf{X}(\mathbf{w}^* - \mathbf{w}))^\mathbf{T} \mathbb{E}_{\mathbf{\epsilon}}[\mathbf{\epsilon}]$ vanishes because $\mathbb{E}[\mathbf{\epsilon}] = \mathbf{0}$, leaving:
+
+$$
+\mathbb{E}_{\mathbf{\epsilon}}[L(\mathbf{w})] = \frac{1}{2n} \left[(\mathbf{w} - \mathbf{w}^*)^\mathbf{T} \mathbf{X}^\mathbf{T} \mathbf{X} (\mathbf{w} - \mathbf{w}^*) + \mathbb{E}_{\mathbf{\epsilon}}[\mathbf{\epsilon}^\mathbf{T} \mathbf{\epsilon}]\right].
+$$
+
+Since $\mathbb{E}_{\mathbf{\epsilon}}[\mathbf{\epsilon}^\mathbf{T} \mathbf{\epsilon}] = n \sigma^2$ and $\frac{1}{n} \mathbf{X}^\mathbf{T} \mathbf{X} = \mathbf{\Phi}$, we have:
+
+$$
+\mathbb{E}_{\mathbf{\epsilon}}[L(\mathbf{w})] = \frac{1}{2} \|\mathbf{w} - \mathbf{w}^*\|_{\mathbf{\Phi}}^2 + \frac{\sigma^2}{2}.
+$$
+
+Here, the matrix $\mathbf{A}$ is $\mathbf{\Phi}$ and the scalar $b$ is $\frac{\sigma^2}{2}$.
+
+### (3)
+
+Write $\mathbf{H} = \mathbf{X} (\mathbf{X}^\mathbf{T} \mathbf{X})^{-1} \mathbf{X}^\mathbf{T}$ for the projection ("hat") matrix, and note that $\mathbf{H}^\mathbf{T} = \mathbf{H}$ and $\mathbf{H}^2 = \mathbf{H}$. From (1), $\mathbf{\hat{w}} = \mathbf{w}^* + (\mathbf{X}^\mathbf{T} \mathbf{X})^{-1} \mathbf{X}^\mathbf{T} \mathbf{\epsilon}$, hence:
+
+$$
+\mathbf{Y} - \mathbf{X} \mathbf{\hat{w}} = \mathbf{\epsilon} - \mathbf{H} \mathbf{\epsilon} = (\mathbf{I} - \mathbf{H}) \mathbf{\epsilon},
+$$
+
+$$
+L(\mathbf{\hat{w}}) = \frac{1}{2n} \mathbf{\epsilon}^\mathbf{T} (\mathbf{I} - \mathbf{H})^\mathbf{T} (\mathbf{I} - \mathbf{H}) \mathbf{\epsilon} = \frac{1}{2n} \mathbf{\epsilon}^\mathbf{T} (\mathbf{I} - \mathbf{H}) \mathbf{\epsilon}.
+$$
+
+Using $\mathbb{E}_{\mathbf{\epsilon}}[\mathbf{\epsilon}^\mathbf{T} \mathbf{M} \mathbf{\epsilon}] = \sigma^2 \, \mathbf{tr}(\mathbf{M})$:
+
+$$
+\mathbb{E}_{\mathbf{\epsilon}}[L(\mathbf{\hat{w}})] = \frac{\sigma^2}{2n} \mathbf{tr}(\mathbf{I} - \mathbf{H}), \quad \mathbb{E}_{\mathbf{\epsilon}}[L(\mathbf{w}^*)] = \frac{1}{2n} \mathbb{E}_{\mathbf{\epsilon}}[\mathbf{\epsilon}^\mathbf{T} \mathbf{\epsilon}] = \frac{\sigma^2}{2n} \mathbf{tr}(\mathbf{I}).
+$$
+
+Thus:
+
+$$
+\mathbb{E}_{\mathbf{\epsilon}}[L(\hat{\mathbf{w}})] - \mathbb{E}_{\mathbf{\epsilon}}[L(\mathbf{w^*})] = -\frac{\sigma^2}{2n} \mathbf{tr}(\mathbf{H}) = \frac{\sigma^2}{2n} \mathbf{tr}\left(-(\mathbf{X}^\mathbf{T} \mathbf{X})^{-1} \mathbf{X}^\mathbf{T} \mathbf{X}\right),
+$$
+
+where the last step uses the cyclic property of the trace. Therefore, the matrix is $\mathbf{B} = -(\mathbf{X}^\mathbf{T} \mathbf{X})^{-1} \mathbf{X}^\mathbf{T} \mathbf{X} \in \mathbb{R}^{d \times d}$, which equals $-\mathbf{I}_d$, so $\mathbf{tr}(\mathbf{B}) = -d$ and the difference is $-\frac{d \sigma^2}{2n} \leq 0$, as expected since $\mathbf{\hat{w}}$ minimizes $L$.
+
+### (4)
+
+When $\mathbf{\Phi}$ is not a regular matrix, it is singular and cannot be inverted. This typically happens when the features are linearly dependent (multicollinearity) or when $n < d$; the normal equation then has infinitely many solutions, making the computation of $\mathbf{\hat{w}}$ unstable or impossible.
+
+A common remedy is to add a regularization term to the loss function, which is known as **Ridge Regression**. The modified loss function becomes:
+
+$$
+L(\mathbf{w}) = \frac{1}{2n} \|\mathbf{Y} - \mathbf{Xw}\|_2^2 + \frac{\lambda}{2} \|\mathbf{w}\|_2^2,
+$$
+
+where $\lambda > 0$ is a regularization parameter. Since $\mathbf{\Phi} + \lambda \mathbf{I}$ is positive definite and hence always invertible, the solution becomes:
+
+$$
+\mathbf{\hat{w}} = (\mathbf{\Phi} + \lambda \mathbf{I})^{-1} \left( \frac{1}{n} \mathbf{X}^\mathbf{T} \mathbf{Y} \right).
+$$
+
+## **Knowledge**
+
+Machine learning, linear regression, least squares, ridge regression
+
+### Techniques and tips
+
+In regression problems, when the explanatory variables are collinear, ridge regression stabilizes the model and keeps the parameters from growing too large. It is important to understand how the least-squares optimization problem is turned into a matrix equation. Adding a regularization term is also an effective way to deal with overfitting.
+
+### Vocabulary
+
+- **trace** (迹): the sum of the diagonal elements of a matrix
+- **regular matrix** (正规矩阵): a full-rank matrix, i.e., one whose determinant is nonzero
+- **regularization** (正则化): an extra term added to the loss function to constrain model complexity and improve generalization
+
+### References
+
+1. **The Elements of Statistical Learning**, Trevor Hastie, Robert Tibshirani, and Jerome Friedman, Chap. 3
+2. **Pattern Recognition and Machine Learning**, Christopher Bishop, Chap. 
4 diff --git a/docs/kakomonn/tokyo_university/index.md b/docs/kakomonn/tokyo_university/index.md index abcfef9ee..93602d5d4 100644 --- a/docs/kakomonn/tokyo_university/index.md +++ b/docs/kakomonn/tokyo_university/index.md @@ -133,6 +133,10 @@ tags: - [2月 問題3](IST/cs_202202_3.md) - [2月 問題4](IST/cs_202202_4.md) - 2021年度: + - [8月 専門科目 問題1](IST/cs_202008_senmon_1.md) + - [8月 専門科目 問題2](IST/cs_202008_senmon_2.md) + - [8月 専門科目 問題3](IST/cs_202008_senmon_3.md) + - [8月 専門科目 問題4](IST/cs_202008_senmon_4.md) - [2月 問題1](IST/cs_202102_1.md) - [2月 問題2](IST/cs_202102_2.md) - [2月 問題3](IST/cs_202102_3.md) diff --git a/mkdocs.yml b/mkdocs.yml index c15f716e9..f8771c110 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -254,6 +254,10 @@ nav: - 2月 問題3: kakomonn/tokyo_university/IST/cs_202202_3.md - 2月 問題4: kakomonn/tokyo_university/IST/cs_202202_4.md - 2021年度: + - 8月 専門科目 問題1: kakomonn/tokyo_university/IST/cs_202008_senmon_1.md + - 8月 専門科目 問題2: kakomonn/tokyo_university/IST/cs_202008_senmon_2.md + - 8月 専門科目 問題3: kakomonn/tokyo_university/IST/cs_202008_senmon_3.md + - 8月 専門科目 問題4: kakomonn/tokyo_university/IST/cs_202008_senmon_4.md - 2月 問題1: kakomonn/tokyo_university/IST/cs_202102_1.md - 2月 問題2: kakomonn/tokyo_university/IST/cs_202102_2.md - 2月 問題3: kakomonn/tokyo_university/IST/cs_202102_3.md