
This repo contains my notes and a record of my algorithm practice on Leetcode/Lintcode.

Parts of these notes refer to Jiuzhang, Labuladong, and other internet resources. Thanks to them.

(It is written mainly in English with occasional Chinese. Ignoring the Chinese parts does not affect the understanding of this repo.)

Contents

0. Before Starting

Recommended Steps:

  1. Read through this whole README.md. Get familiar with the "Theory" subsection of each section. No need to dive into the linked files.
  2. Go through those problems marked with "❗️".
  3. Go through the remaining problems.

Steps for all questions

  1. Clarification. Confirm the question with the interviewer, including input, output, and examples; discuss corner-case examples.
  2. Think out loud. Describe your idea/thoughts before coding. Talk before you write.
  3. Point out your assumptions, e.g., whether the arguments/inputs fit in memory and whether they are already sorted.
  4. Talk through. Write your code while explaining it.
  5. Introduce your tests, run them, and handle errors. Test in real time: use the given examples, then edge cases.
  6. Time and space complexity analysis. Scale analysis.
  7. Optimal solution or follow up.

Data structure

  • Underlying storage methods: only array (sequential storage) and linked list (chain storage).
  • VS
  • Operations: traversing and accessing. More specifically, add, delete, search, alter.
  • Must-master data structures:

Algorithms

  • Must-master algorithms: (Google Code Interview 3:04)

    • Sorting, searching, binary searching
    • Divide and conquer
    • Dynamic programming and memoization
    • Greedy algorithms
    • Recursion
    • Graph traversal, breadth-first and depth-first
  • Algorithms with Time complexity O(n):

    • Test frequency 考得频率: two pointers (without sort) >> keep-the-champion scan 打擂台算法 ~= monotonic stack 单调栈算法 ~= monotonic queue 单调队列算法
  • Common Time Complexity:

| Time Complexity | Possible Corresponding Algorithms | Note |
| --- | --- | --- |
| O(1) | Bit operations 位运算 | Usually won't be asked. 常数级复杂度,一般面试中不会有 |
| O(logn) | Binary search 二分法, doubling 倍增法, fast exponentiation 快速幂算法, Euclidean algorithm 辗转相除法 | Mainly binary search; anything better than O(n) is almost certainly BS. 比O(n)更好几乎一定是BS |
| O(sqrt(n)) | Factorization into prime factors 分解质因数 | Rarely asked 极少 |
| O(n) | Enumeration 枚举法, two pointers (w/o sorting) 双指针算法, monotonic stack 单调栈算法, KMP, Rabin-Karp, Manacher's algorithm | A.k.a. linear time complexity 又称作线性时间复杂度 |
| O(nlogn) | Quick sort 快速排序, merge sort 归并排序, heap sort 堆排序, or O(n) work on an O(logn) data structure | O(logn) data structures include balanced trees, heaps, RB-trees. |
| O(n^2) | Enumeration 枚举法, dynamic programming 动态规划, Dijkstra | |
| O(n^3) | Enumeration 枚举法, dynamic programming 动态规划, Floyd | |
| O(2^n) | Combination-related search problems 与组合有关的搜索问题 | |
| O(n!) | Permutation-related search problems 与排列有关的搜索问题 | |

Problems

  • Find all plans/solutions problems
    • paths == plans == nodes combinations/permutations
    • DFS (with recursion)
    • BFS
  • Find amount of all plans
    • DP
  • Find best among all solutions problems
    • Iteration/loop
    • Greedy / Epsilon-Greedy
    • Dynamic Programming
    • MDP/ Reinforcement Learning
  • Shortest Path problems
    • BFS (simple Graph)
    • Floyd, Dijkstra, Bellman-ford, SPFA (complex graph)
  • Longest Path problems
    • Graph can be layered: DP
    • Cannot: DFS

1. Two Pointers Method (most frequent) - O(n)

1Theory:

  1. Face to Face Two Pointers 相向双指针 (most frequently asked)
    • Two Sum type
      • sum of two numbers.
      • sum of three numbers.
    • Reverse type
      • 判断回文串 palindrome.
      • 翻转字符串 flip string.
    • Partition type
      • quick sort.
      • color sort.
  2. Back to Back Two Pointers 背向双指针 (only the 3 types below:)
    • Longest Palindromic Substring 最长回文子串 - 中心线枚举算法
    • Find K closest elements (also in binary search)
    • Square of sorted array
  3. Same Direction Two Pointers 同向双指针
    • Sliding Window 滑动窗口类
    • Fast & Slow pointers 快慢指针类
| Approach | Time Complexity | Space Complexity |
| --- | --- | --- |
| Use hashmap | O(n) | O(n) |
| Use [sort O(nlogn) +] two pointers O(n) | O(nlogn) | O(1) |
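
As a quick illustration of the face-to-face (two-sum type) pattern above, here is a minimal sketch of two-sum on a sorted array; the function name and the assumption that the input is already sorted in ascending order are mine, not taken from a specific problem above.

```python
def two_sum_sorted(nums, target):
    """Return indices (left, right) with nums[left] + nums[right] == target,
    or (-1, -1) if no such pair exists. Assumes nums is sorted ascending."""
    left, right = 0, len(nums) - 1
    while left < right:          # the two pointers move toward each other
        s = nums[left] + nums[right]
        if s == target:
            return left, right
        elif s < target:         # sum too small: move the left pointer right
            left += 1
        else:                    # sum too large: move the right pointer left
            right -= 1
    return -1, -1
```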

1Practice:

  1. Face to Face Two Pointers 相向双指针 (most frequently asked)
  2. Back to Back Two Pointers 背向双指针 (only the 3 types below:)
    • Longest Palindromic Substring 最长回文子串 - 中心线枚举算法
    • Find K closest elements (also in binary search)
    • Square of sorted array
  3. Same Direction Two Pointers 同向双指针
    • Sliding Window 滑动窗口类
    • Fast & Slow pointers 快慢指针类

2. Recursion, Heap and Stack (Hardware), Tree

2Theory:

2.1 Recursion: 3 key elements

  • Definition, Stopping Condition, Divide. 递归的定义,出口,拆解。
  • Recursion definition 递归的定义: what parameters it takes, what value it returns, and what it means
  • Recursion decomposition 递归的拆解: each recursive call reduces the problem size
  • Recursion exit 递归的出口: there must be a clear stopping condition

=> The result is a recursive function that other functions can call.
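
A tiny sketch that labels the three elements on a deliberately simple example (computing n!); the example is mine, chosen only to illustrate the three parts.

```python
def factorial(n):
    # Definition: takes a non-negative integer n, returns n!
    if n <= 1:                       # Exit: a clear stopping condition
        return 1
    return n * factorial(n - 1)      # Decomposition: shrink the problem to n - 1
```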

2.2 Heap, stack and overflow

Stack and Heap in Memory 内存中的堆栈

| | Heap | Stack |
| --- | --- | --- |
| Stores what? | Objects created by "new" | References to objects, variables, arrays, function calls |
| Size | No fixed limit; all remaining memory | Limited, usually only a few MB (stack overflow / segmentation fault if recursion depth is too large) |

Stack overflow

  • Java: about 18615 levels of recursion depth.
  • C/C++: about 262009 levels of recursion depth.
  • Python: about 998 levels of recursion depth. Not a stack overflow, but a "RuntimeError: maximum recursion depth exceeded", which is a configurable setting.
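
A minimal sketch showing that Python's recursion depth cap is a runtime setting rather than a hardware stack limit (the new limit value below is arbitrary):

```python
import sys

print(sys.getrecursionlimit())   # typically 1000 by default
sys.setrecursionlimit(3000)      # raise the cap; note the OS stack can still
                                 # overflow if this is set far too high
```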

2.3 Pass by value vs Pass by reference 值传递与引用传递

| | Pass by value | Pass by reference |
| --- | --- | --- |
| Java | Primitive data types (byte, short, int, long, float, double, char, boolean); class members declared 'final' | Class instances |
| Python | Value types (similar to Java) | Reference types (similar to Java) |
| C++ | Default | Parameters declared with '&' |
| Difference: copies the content? | Yes (more space/time; does not affect the caller) | No (changes affect the caller) |

2.4 Recursion with Tree, Binary Search (recursive version 递归版本)

  • Binary Tree
    • Inorder Traversal
    • Preorder Traversal
    • Postorder Traversal
  • Construct binary tree from 2 traversals
    • from Inorder and (Preorder or Postorder Traversal)
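
A minimal recursive inorder-traversal sketch; the TreeNode class here is a generic assumption, not a specific Leetcode definition, and preorder/postorder differ only in where the node's value is appended.

```python
class TreeNode:
    def __init__(self, val=0, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def inorder(root, out=None):
    """Append node values in left -> node -> right order."""
    if out is None:
        out = []
    if root:
        inorder(root.left, out)    # visit the left subtree
        out.append(root.val)       # visit the node itself
        inorder(root.right, out)   # visit the right subtree
    return out
```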

2.5 Use Recursion V.S. Non-Recursion (Iteration)

| Use Recursion | Don't Use Recursion |
| --- | --- |
| The interviewer asks for it. | The interviewer asks you not to. |
| Recursion depth is small; no stack overflow would happen. | Recursion depth is deep; a stack overflow may happen. Don't use recursion on a LinkedList (it can easily have 100k nodes). |
| Hard to implement without recursion. | Easy to implement without recursion (like binary search). |
| Common: DFS uses recursion. | Common: binary tree inorder traversal uses non-recursion; other algorithms often use non-recursion. |

Note recursion (more details inside if needed)

2Practice:

3. Binary Search O(logn)

3Theory:

3.0 Make the problem smaller 由大化小:

  • Recursion
  • Binary Search
  • Divide and Conquer
  • Dynamic Programming

3.1 Prerequisites and BS Applications:

  • For BS, the array must be sorted (ascending or descending).
  • BS also works for some questions where the array is not fully sorted, like i585.Maximum_Number_in_Mountain_Sequence
  • BS on the answer set instead of the question input.

3.2 Binary Search V.S. Regular Search

| | Time Complexity | Space Complexity | Must be sorted? |
| --- | --- | --- | --- |
| Binary Search | O(logn) | Same | Yes, requires a sorted array |
| For-loop Search | O(n) | Same | No |

Implementation

  • Recursion
  • For-loop (best choice)
  • Both

3.3 Binary Search Template

  • Attention Point 1: start + 1 < end
  • Attention Point 2: start + (end - start) / 2. Overflow if mid = (start + end ) / 2 in Java/C++.
  • Attention Point 3: Separate cases for =, <, >. Don't change the mid value (no mid ± 1).
  • Attention Point 4: Deal with start and end after while-loop.
  • Overall idea: narrow the range from n down to 2 elements (start and end), then deal with start and end outside the loop. (If you use while start < end or while start <= end, you may run into an infinite loop.)
start, end = 0, len(nums) - 1
while start + 1 < end: # point 1
    mid = start + (end - start) // 2 # point 2: integer division; this form avoids overflow in Java/C++
    if nums[mid] < target: # point 3
        start = mid
    elif nums[mid] == target:
        end = mid # (if asked for the first position) or return mid (if asked for existence)
    else:
        end = mid

if nums[start] == target: # point 4: these two ifs may be swapped
    return start
if nums[end] == target:
    return end
return -1

Note Binary Search (more details inside if needed), Template

3Practice:

4. Queue/Stack, Set/Map/List

4Theory:

  • Queue: FIFO, abstract data type. Widely used for BFS.

4.1 Two ways to store elements internally for queue:

  • Array: Better performance for random access: O(1).
  • LinkedList: Better performance for insert and delete: O(1).

4.2 Queue Implementation:

  • Use Array
  • Use Array to implement Circular Queue
  • Use LinkedList
  • Use ArrayList
  • Java Interface Queue (Class PriorityQueue, AbstractQueue)
  • Java Interface Deque (Class ArrayDeque)

4.3 Java common Interfaces:

  • Set
  • Map
  • List
  • Queue
| Set | Duplicates | Null | Order |
| --- | --- | --- | --- |
| HashSet | No duplicate elements | Can contain null | No order |
| TreeSet | No duplicate elements | Cannot contain null | In order (default: natural/alphabetical order) |

| Map | Duplicates | Null | Order |
| --- | --- | --- | --- |
| HashMap | Keys cannot duplicate; values can | Key and values can both be null | No order |
| TreeMap | Keys cannot duplicate; values can | No null key | In order (by key) |

| List | Better at |
| --- | --- |
| ArrayList | Random access: get, set. O(1) |
| LinkedList | Add, delete (at a given position). O(1) |

| Queue | Underlying implementation | FIFO? |
| --- | --- | --- |
| PriorityQueue | Based on a heap | Non-FIFO (natural order if no comparator is given, otherwise ordered by the comparator's priority) |
| Queue (LinkedList) | Based on a LinkedList | FIFO |

4.4 Interfaces VS Abstract Class

| | Interface | Abstract Class |
| --- | --- | --- |
| Keyword | "implements" | "extends" |
| Methods | Only abstract methods (Java 8 adds default and static methods) | Both abstract and non-abstract methods; its abstract methods must be implemented by the subclass |
| Class variables | final and static (by default) | final, non-final, static, non-static |
| Class members | public (by default) | public, private, protected |
| Multiple inheritance | Yes | No (C++: yes) |
| Implementation of each other | Cannot implement an abstract class | Can implement an interface |
| Extend | Can extend other interfaces | Can extend one other class and implement multiple interfaces |
| Java/C++ | Partially interchangeable in Java | C++ only has abstract classes |

Note Interface, Abstract Class (more details inside if needed)

4Practice:

  • ❗️Easy 232. Implement Queue

    • Java:
      • Using Array
      • Array (Circular Queue)
      • Class LinkedList
      • Interface Queue (Class LinkedList)
      • Abstract Class (AbstractQueue)
      • Interface Deque (Class ArrayDeque)
      • Class ArrayList
      • Implement Queue Using Stack (Use 2 stacks)
    • Python:
      • list (data type)
      • collections (module, containers datatypes) - Class deque
      • queue module (synchronized) - Class Queue (no peek() method)
  • ❗️Easy 225. Implement Stack

    • Java:
      • Class Stack
      • Class LinkedList
      • Class ArrayList
      • Interface Deque (Class ArrayDeque)
      • Implement Stack Using Queue (Sol1: use 2 queues; Sol2: use 1 queue)
    • Python:
      • list (data type)
      • collections (module, containers datatypes) - Class deque
      • queue module (synchronized) - Class LifoQueue (no peek() method)
      • Implement Stack Using Queue (use queue.Queue())
  • If you just need a Queue/Stack:

Java:

It's better to use ArrayDeque instead of LinkedList when you need a stack or queue in Java. ArrayDeque is likely to be faster than the Stack class (though Stack is thread-safe) when used as a stack, and faster than LinkedList when used as a queue. Refer to the link.

Deque/Queue<E> myqueue = new ArrayDeque<E>(); (interface Deque extends interface Queue )

Deque<E> mystack = new ArrayDeque<E>(); (use like a stack) (ArrayDeque is faster than LinkedList)

| Queue Method (use these directly) | Equivalent Deque Method | Stack Method (use these directly) | Equivalent Deque Method |
| --- | --- | --- | --- |
| add(e) | addLast(e) | push(e) | addFirst(e) |
| offer(e) | offerLast(e) | | |
| remove() | removeFirst() | | |
| poll() | pollFirst() | pop() | removeFirst() |
| element() | getFirst() | | |
| peek() | peekFirst() | peek() | peekFirst() |

Python:

from collections import deque

mystack = deque() # left as top

myqueue = deque() # left end as head/first (the synchronized queue.Queue is slower)

| Queue Method (use these directly) | Equivalent collections.deque Method | Stack Method (use these directly) | Equivalent collections.deque Method |
| --- | --- | --- | --- |
| push | myqueue.append(x) | push | mystack.appendleft(x) |
| pop | myqueue.popleft() | pop | mystack.popleft() |
| peek | myqueue[0] if myqueue else -1 | top | mystack[0] if mystack else -1 |
| empty | len(myqueue) == 0 | empty | len(mystack) == 0 |
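
A small runnable sketch of the two conventions above (left end as the stack top, left end as the queue head); the variable names are just placeholders.

```python
from collections import deque

mystack = deque()           # stack: left end is the top
mystack.appendleft(1)       # push 1
mystack.appendleft(2)       # push 2
print(mystack[0])           # top -> 2
print(mystack.popleft())    # pop -> 2

myqueue = deque()           # queue: left end is the head
myqueue.append(1)           # enqueue 1
myqueue.append(2)           # enqueue 2
print(myqueue[0])           # peek -> 1
print(myqueue.popleft())    # dequeue -> 1
```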

5. BFS + Graph

5Theory

5.1 Graph Data Structure

  1. Adjacency Matrix
    • int[][] matrix; 2D array.
    • Space: O(n^2)
[
[1,0,0,1],
[0,1,1,0],
[0,1,1,0],
[1,0,0,1]
]
  2. Adjacency List (More frequently used)
    • Each node records its neighbors.
    • Space O(E). Worst case O(n^2).
    • Implement an Adjacency List:
      • Use self-defined Class
        • more popular in software engineering; recommended if you have enough time
      • Use HashMap<T, HashSet>
        • Map<Integer, Set<Integer>> myGraph = new HashMap<>(); myGraph.put(0, new HashSet<Integer>());
        • (less code)
[
[1],
[0,2,3],
[1],
[1]
]

Java:

class DirectedGraphNode {
  int label;
  List<DirectedGraphNode> neighbors;
  ...
}

Python:

class DirectedGraphNode:
  def __init__(self, label):
    self.label = label
    self.neighbors = [] # a list of DirectedGraphNode's
    ...

Java:

Map<Integer, Set<Integer>> myGraph = new HashMap<>();
myGraph.put(0, new HashSet<Integer>());

Python:

adjacency_list = {x: set() for x in nodes}
# or equivalently:
adjacency_list = {}
for x in nodes:
  adjacency_list[x] = set()

5.2 Three scenarios BFS fits

  • level order traversal 分层遍历
    • traverse a Graph, Tree, or Matrix level by level
      • BFS on tree
      • BFS on Graph: O(V+E)
      • BFS on Matrix: O(row * col)
    • shortest path in a simple graph (all edge lengths are 1)
  • connected component 连通块问题
    • find all connected cells
    • non-recursive implementation for find-all-plans problems
  • topological sort 拓扑排序 (almost every company's interview includes a topological sort question)
    • Can also be done using DFS, but BFS is much easier. (When both work, always use BFS; a sketch follows after this subsection.)
    • A topological order only exists for a DAG (Directed Acyclic Graph). If the graph is not a DAG, there is no topological order; if a directed graph has no topological order, it must contain a cycle.
    • Examples: Course Select, Compile Order
    • Questions:
      • find any Topological order
      • whether a topological order exists (a graph may have zero or more topological orders)
        • If the length of the order found above equals the number of nodes, one exists; otherwise it doesn't.
      • determine whether the topological order is unique
        • Yes if queue.size == 1 during the whole process (only one choice for the next node).
        • If at some step queue.size > 1, it is not unique (more than one choice).
      • find the topological order with the minimum lexicographical order
        • Min/Max -> use PriorityQueue

Min/Max -> use PriorityQueue in Java / heapq in Python

  • Java:
    • Queue<> myqueue = new PriorityQueue<>()
    • myqueue.poll()
    • myqueue.offer(newnode)
  • Python:
    • import heapq
    • myqueue = []
    • heapq.heapify(myqueue)
    • heapq.heappop(myqueue)
    • heapq.heappush(myqueue, newnode)
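
A minimal sketch of BFS topological sort (Kahn's algorithm), using a min-heap so ties are broken by the smallest node first, which also yields the minimum lexicographical order; the graph representation ({node: set of out-neighbors}, with every node present as a key) is an assumption for illustration.

```python
import heapq

def topological_order(graph):
    """Return the lexicographically smallest topological order of a DAG,
    or [] if the graph has a cycle (i.e., no topological order exists)."""
    indegree = {node: 0 for node in graph}
    for node in graph:
        for nxt in graph[node]:
            indegree[nxt] += 1

    heap = [node for node in graph if indegree[node] == 0]
    heapq.heapify(heap)                 # min-heap: smallest eligible node first

    order = []
    while heap:
        node = heapq.heappop(heap)
        order.append(node)
        for nxt in graph[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:      # all prerequisites of nxt are done
                heapq.heappush(heap, nxt)

    return order if len(order) == len(graph) else []
```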

5.3 Implementation

5.3.1 Use Queue and hashset

  • The queue records which nodes need to be processed
  • A hash table records visited nodes if necessary:
    • to avoid repeated calculation
    • to prune in a tree
    • to avoid cycles in a graph
    • Java: HashSet/HashMap
    • Python: set/dict
    • C++: unordered_set/unordered_map

5.3.2 Three implementation ways

  • Single Queue (simplest, recommended)
  • Two Queues (easier to understand)
  • Dummy Node 哨兵节点
    • Usually points to the first node of a LinkedList
    • pop(): delete the head node. DummyNode.next = DummyNode.next.next
    • In BFS: used to indicate the end of each level.
    • Advantage: removes one for-loop.

5.3.3 layered or non-layered 分层还是不分层

  • ❗️BFS Template: the queue records nodes to process; the hash table records visited nodes to prune or avoid cycles. (A sketch follows below.)
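
A minimal layered BFS sketch, assuming the graph is given as an adjacency dict ({node: iterable of neighbors}); drop the inner loop over the level size when levels don't matter.

```python
from collections import deque

def bfs_levels(graph, start):
    """Return the nodes reachable from start, grouped level by level."""
    queue = deque([start])
    visited = {start}                    # avoid cycles / repeated processing
    levels = []
    while queue:
        level = []
        for _ in range(len(queue)):      # process exactly one level per pass
            node = queue.popleft()
            level.append(node)
            for neighbor in graph[node]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append(neighbor)
        levels.append(level)
    return levels
```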

5.3.4 If BFS and DFS both work

  • Use BFS (almost always easier).
    • Recursive DFS can easily overflow the stack.
    • Iterative DFS is hard to write, and the interviewer may not follow it either.
  • It is hard to get BFS wrong.

5.4 Other notable points:

  • BFS for shortest path
    • Simple graph: BFS
    • Complex graph: Floyd, Dijkstra, Bellman-ford, SPFA. (Usually not appear in interviews)
  • Longest path:
    • The graph can be layered (directed, no cycle): DP
    • Cannot: DFS
  • BFS for Binary Tree vs BFS for Graph
    • Tree is a special Graph
    • A tree has no cycles, so there is no need to record visited nodes
    • HashSet/dict/unordered_map -> record visited nodes
      • A graph may contain cycles, so a node could be enqueued more than once -> solution: record visited nodes

5.5 BFS for “all plans” problems 使用宽度优先搜索找所有方案

5.6 Bidirectional BFS 双向

  • Applies when both the start and the end point are given 如果同时给了start and end point
  • For graphs, it is usually an undirected graph 图的话一般是无向图
  • Bi-BFS Template

5Practice

other

6. HashMap + Heap

HashMap

  • O(1)
    • get(key), set(key, value)
    • actually O(size of the key), since hashing the key takes time proportional to its length
  • Capacity, load factor, rehash
    • Java HashMap: default initial capacity (16) and the default load factor (0.75).
    • HashSet: the backing HashMap instance has default initial capacity (16) and load factor (0.75).
    • rehash when size >= 16 * 0.75 = 12

Java - HashMap vs HashSet

| HashMap | HashSet | HashTable |
| --- | --- | --- |
| Implements the Map interface | Implements the Set interface | Implements the Map interface; inherits the Dictionary class |
| Stores key-value pairs | Stores objects | Stores key-value pairs |
| Unique keys | Unique elements | Unique keys |
| One null key; multiple null values allowed | One null element allowed | No null keys or values |
| put() adds an element | add() adds an element | put() adds an element |
| Uses the key to calculate the hashcode | Uses the member object to calculate the hashcode; two objects may share a hashcode, so equals() decides equality (returns false if the objects differ) | Uses the key to calculate the hashcode |
| Not thread-safe (not synchronized), but gives better performance | Like HashMap, not thread-safe (not synchronized) | Thread-safe (synchronized) |
| HashSet | TreeSet | LinkedHashSet |
| --- | --- | --- |
| Fastest of the three | Slow compared with HashSet and LinkedHashSet | Second fastest, next to HashSet |
| Does not maintain any order | Maintains sorted order | Maintains insertion order |
| Allows null | Does not allow null | Allows null |
| Uses the equals() method | Uses the compareTo() method | Uses the equals() method |
| Backed by a HashMap | Backed by a NavigableMap | Backed by a HashSet |
  • Python - set()/dict()
    • Collision: open-hashing.
  • C++ - unordered_set() / unordered_map()

Collision

  • open hashing / closed addressing / separate chaining
    • keys are stored in linked lists attached to cells of a hash table.
  • closed hashing / open addressing/ linear probing
    • all keys are stored in the hash table itself without the use of linked lists.
    • tombstone (deleted-element flag)
    • double hashing (reduces the chance of hitting clusters)
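
A toy sketch of open hashing / separate chaining (fixed number of buckets, no resizing), just to make the idea concrete; it is not how Python's dict or Java's HashMap is actually implemented.

```python
class ChainedHashMap:
    """Toy separate-chaining map: each bucket is a list (chain) of (key, value) pairs."""

    def __init__(self, num_buckets=16):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                  # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))       # new key (or collision): append to the chain

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default
```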

Note Hash more

7. DFS

7Theory (note to self: redo these in Java)

7.1 Combination and Permutation DFS

  • Apart from binary tree problems, 90% of DFS problems are combination or permutation problems, especially combinations.
  • DFS with Combinations
    • C(n, k) = n! / {k! (n-k)!}
    • Implicit graph search problem
      • If a problem doesn't tell you what the vertices and edges are but asks you to search, then it's an implicit graph search problem.
      • First we need to figure out what the vertices and edges are.
      • ❗️Medium 78. Subset
        • Find all plans problem -> Use DFS
        • Solution 1: Combination-specific DFS (layered 分层)
        • Solution 2: General DFS (not layered 不分层)
        • Solution 3 BFS
      • ❗️Medium 90. Subset ii Duplicate elements
        • Remove duplicates while searching
          • Sort and compare to avoid duplicates
          • Hash (Python: set/dict; Java: HashSet/HashMap)
  • DFS with Permutations
    • Permutations consider order. P(n, k) = A(n, k) = n! / (n-k)!; a full permutation has P(n, n) = n! arrangements.
    • ❗️Medium 46. Permutations Distinct elements. 1.DFS-Permutation-Recursion; 2.Non-Recursion
  • Combination DFS vs Permutation DFS
    • Search Tree shape
    • Templates
  • DFS Time Complexity
    • O(#plans * time to construct one plan)
      • Combinations: O(2^N * N)
      • Permutations: O(N! * N)
      • => small depth but possibly large width => use DFS (see the combination-DFS sketch below)
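
As a concrete instance of combination-type DFS, here is a minimal subsets sketch in the style of problem 78 (assuming distinct elements); the parameter names are mine.

```python
def subsets(nums):
    """Return all subsets of a list of distinct numbers via combination DFS."""
    results = []

    def dfs(start, current):
        results.append(list(current))    # every node of the search tree is a subset
        for i in range(start, len(nums)):
            current.append(nums[i])      # choose nums[i]
            dfs(i + 1, current)          # only consider later elements -> combinations
            current.pop()                # backtrack: un-choose nums[i]

    dfs(0, [])
    return results
```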

7.2 All-plans problems

  • Find all plans that meet a requirement 找所有满足某个条件的方案
  • Find all qualifying paths in a graph 找到图中的所有满足条件的路径
    • path == plan == a combination/permutation of nodes in the graph 路径 == 方案 == 图中节点的排列组合
  • Can be solved using BFS?
    • Yes
    • Need to convert find-all-paths to find-all-nodes

7.3 When to use DFS vs BFS

  • The space complexity of BFS depends on its width 宽度优先搜索的空间复杂度取决于宽度
  • The space complexity of DFS depends on its depth 深度优先搜索的空间复杂度取决于深度
  • If the search tree is deep but not wide then BFS
    • Matrix uses BFS
      • For example, in matrix, the width is <= sqrt(n) while depth is <= n^2 so it's better to use BFS.
  • If the search tree is wide but not deep then DFS
    • Combinations/Permutations use DFS
      • They are O(2^n)/O(n!) so the n cannot be large => the depth won't be large but the width might be large => DFS

7Practice

TSP DFS, pruning, DP

  1. Shortest Path Visiting All Nodes
  2. Minimum Time Visiting All Points

(TODO: then finish BFS and bi-BFS, then DC, then DP.)

8. DC - Divide and Conquer

8Theory

8.1 Recursion V.S. DFS V.S. BackTracking

| Recursion | DFS | BackTracking |
| --- | --- | --- |
| Recursion function: one way to implement a program, i.e. a function calls itself. | DFS can be implemented with a recursive function. | Backtracking method: it is the DFS algorithm. |
| Recursion algorithm: the result of a bigger problem depends on the results of smaller problems, so a recursive function is used to solve the smaller problems. | DFS can also be implemented without a recursive function, e.g. you maintain a stack and operate on it yourself. | Backtracking operation: when the recursive function returns to the previous level, some parameters need to be changed back to their previous values. |
| Usually when we say recursion, we mean the recursion function. | DFS is a traversal/searching algorithm. | |

| Traversal 遍历法 | DC 分治法 |
| --- | --- |
| A helper walks the structure and stores all visited nodes. | Ask your gang to solve the sub-problems, and you sum up their results yourself. |
| Usually uses a global variable or a shared argument. | Usually uses return values to store the results of sub-problems. |
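
To make the traversal-vs-divide-and-conquer distinction concrete, here is a sketch computing the maximum depth of a binary tree both ways (nodes with .left/.right as in the earlier TreeNode sketch; the function names are mine).

```python
# Traversal style: a helper walks the tree and updates a shared result.
def max_depth_traverse(root):
    best = [0]                              # shared result the helper writes into
    def traverse(node, depth):
        if not node:
            return
        best[0] = max(best[0], depth)
        traverse(node.left, depth + 1)
        traverse(node.right, depth + 1)
    traverse(root, 1)
    return best[0]

# Divide-and-conquer style: each call returns its sub-result; the caller combines them.
def max_depth_dc(root):
    if not root:
        return 0
    return 1 + max(max_depth_dc(root.left), max_depth_dc(root.right))
```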

8Practice

9. Dynamic Programming - Memoization 记忆化搜索 动态规划

9Theory:

9.1 DFS-Traverse -> DC -> DC-with-Hashmap/Memoization Search

  • ❗️Easy 120 Triangle

    • DFS-Traverse O(2^n),
    • Divide & Conquer O(2^n),
    • DC-with-Hashmap/Memoization Search O(n^2)
    • (DP solution to be added later 以后还待加上DP)
  • Memoization Search 记忆化搜索

    • Use a hashmap/dict to record intermediate results of the search process, to avoid repeated calculation
    • Record the result of a calculation, so that the next time the same arguments appear, the recorded result is returned directly without calculating again
    • Reduces exponential time complexity to polynomial level (e.g., O(2^n) to O(n^2))
  • Memoization Search Usage Limit 对这个函数的限制

    • The function must have arguments and a return value; otherwise its results cannot be stored in a hashmap.
    • The function needs to be a pure function, meaning the return value depends only on the arguments.
    • Similar to a cache in architecture/system design.
  • Memoization Search is essentially DP 记忆化搜索的本质就是动态规划

    • MS is an implementation method of DP
    • DP implementation methods:
      • Memoization Search
      • Non-recursion/Multi-level for loop
      • see more in below subsection
  • MS vs DC

    • The key difference: whether there is repeated computation
    • After the divide step:
      • If the left and right sub-parts overlap => DP
      • If not => DC
  • MS advantages and disadvantages:

    • cons
      • If the time complexity is O(n), the recursion depth is also O(n), so a stack overflow may occur
      • If the time complexity is O(n^2) while the recursion depth is only O(n), it will not overflow the stack
      • MS is not suitable for O(n) problems because of the stack overflow risk
    • pros
      • Only a few lines added on top of DC
      • The state transition is simple
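
A minimal memoized-search sketch in the style of problem 120 (Triangle): divide and conquer on (row, col) with a dict keyed by the arguments to avoid repeated computation. The function and parameter names are mine.

```python
def minimum_total(triangle):
    """Minimum path sum from top to bottom of a triangle (a list of lists)."""
    memo = {}

    def dfs(row, col):
        if row == len(triangle):           # below the last row: nothing left to add
            return 0
        if (row, col) in memo:             # memoization: sub-problem already solved
            return memo[(row, col)]
        best = triangle[row][col] + min(dfs(row + 1, col), dfs(row + 1, col + 1))
        memo[(row, col)] = best
        return best

    return dfs(0, 0)
```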

9.2 Dynamic Programming

  • DP is a thought, not an algorithm
    • Key: divide big problems into smaller problems 核心思想:由大化小
    • Algorithms using same thought:
      • Recursion
      • Divide and Conquer (DC)
  • DP vs DC
    • whether there is overlap subproblems
      • For example:
        • Binary tree, left sub-tree and right sub-tree: no overlap => DC
        • Triangle problem: left child and right child: have overlap => DP
      • also see "MS vs DC" above
  • DP vs Greedy
    • DP may sacrifice current benefit for long-term benefit
    • Greedy always pursues maximizing the current benefit
  • DP implementation
    • Memoization
    • For-loop
      • Top-down
      • Bottom-up
  • DP 4 elements:
    • State 动规的状态
    • Function (state transition function) 动规的方程
    • Initialize (base case) 动规的初始化
    • Answer 动规的答案
  • DP 4 elements 1-to-1 correspond with Recursion 3 elements 一一对应

9.3 DP usually used for 3 types of problems:

  • min value/max value 最值问题
  • number of all plans 方案总数
  • feasibility 可行性
| DP 4 Elements | 1-to-1 Corresponding Recursion Element 一一对应 |
| --- | --- |
| DP State 动规的状态: dp[i] or dp[i][j] represents the answer to a smaller-scale sub-problem under certain conditions; the smaller scale is indexed by parameters such as i, j. | Recursion Definition 递归的定义 |
| DP Function 动规的方程: how the big problem is split into sub-problems; derive dp[i][j] from smaller-scale states via max / min / sum / or. | Recursion Decomposition 递归的拆解 |
| DP Initialize 动规的初始化: base case, the values of the smallest states that cannot be split further, e.g. dp[i][0] or dp[0][i]. | Recursion Stop Condition 递归的出口 |
| DP Answer 动规的答案: what is finally asked for, e.g. dp[n][m] or max(dp[n][0], dp[n][1], ..., dp[n][m]). | Recursion Calling 递归的调用 |
  • This is why Memoization (using recursion) can be one implementation of DP.
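
For contrast with the memoized version sketched earlier, here is a minimal bottom-up for-loop DP for the same Triangle-style problem, with the four DP elements marked in comments; the function name is mine.

```python
def minimum_total_dp(triangle):
    n = len(triangle)
    # State: dp[col] = minimum path sum from the current (row, col) down to the bottom.
    dp = triangle[-1][:]                     # Initialize: the last row is its own answer
    for row in range(n - 2, -1, -1):         # bottom-up over the remaining rows
        for col in range(row + 1):
            # Function: dp[col] = triangle[row][col] + min of the two children below
            dp[col] = triangle[row][col] + min(dp[col], dp[col + 1])
    return dp[0]                             # Answer: the value at the top of the triangle
```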

Note DP (more details inside if needed)

9Practice:
