Data Structures and Algorithms are the building blocks of computer science. They are the tools you'll use to build software systems.
A data structure is a way of storing and organising data in memory so that access, management, and modification of the data become efficient. Data structures fall into two categories:
- Linear
- Non-Linear
Linear Data Structures:

- Array: A collection of elements of the same type stored in contiguous memory locations.
- String: A sequence of characters.
- Linked List: A linear data structure whose elements are not stored in contiguous memory locations; each node holds a pointer to the next.
- Stack: A linear data structure that follows the Last In, First Out (LIFO) principle.
- Queue: A linear data structure that follows the First In, First Out (FIFO) principle.
Non-Linear Data Structures:

- Graph: A non-linear data structure consisting of vertices (nodes) connected by edges.
- Tree: A non-linear data structure used to store data hierarchically.
- Trie: A tree-like data structure used to store a dynamic set of strings, typically for fast prefix lookups.
Linear Data Structures | Non-Linear Data Structures |
---|---|
Sequential arrangement | Non-sequential arrangement |
All items on a single level | Items span multiple levels |
Memory utilization tends to be less efficient | Memory utilization is relatively efficient |
Traversal time grows directly with the amount of data | Traversal time can grow more slowly for the same amount of data |
Can be traversed in a single run | May require multiple runs to traverse |
Algorithms are step-by-step procedures designed to perform a specific task, solve a particular problem, or carry out a computation.
Here are some common algorithms you should be familiar with:
- Sorting: Methods like Quick Sort, Merge Sort, and Bubble Sort that rearrange elements into a specific order.
- Searching: Techniques like Binary Search and Linear Search used to find elements within a data structure.
- Recursion: A technique where a function calls itself to solve a smaller instance of the same problem.
- Dynamic Programming: A bottom-up approach that solves all possible small subproblems and then combines their solutions to obtain solutions for bigger problems.
- Greedy Algorithms: The greedy method is used to solve optimization problems, i.e. problems in which a set of input values must be maximized or minimized (the objective), usually subject to some constraints or conditions; it always takes the locally best choice at each step.
- Divide and Conquer: A top-down approach. Algorithms that follow the divide-and-conquer technique involve three steps:
  - Divide the original problem into a set of subproblems.
  - Solve every subproblem individually, recursively.
  - Combine the solutions of the subproblems (at the top level) into a solution for the whole original problem.
- Brute Force: Solving problems by trying all possible solutions.
- Bit Manipulation: Manipulating bits or binary digits algorithmically.
- Graph Algorithms: Solving problems on graphs, such as Dijkstra's algorithm for finding the shortest path.
- String Matching: Finding a substring within a string.
Every algorithm has three components:

- Input: The data provided to the algorithm to process.
- Output: The result produced by the algorithm after processing the input.
- Process/Steps: A sequence of operations or instructions that transform the input into the output.
Algorithms are fundamental to computer science and programming, as they provide the blueprint for solving problems efficiently and effectively. They are essential for tasks ranging from simple calculations to complex data processing in software applications.
A good algorithm should be:

- Correct
- Efficient (in time and space complexity)
- Easy to implement
Time complexity is a way to measure the amount of time an algorithm takes to complete as a function of the length of the input. It provides a high-level understanding of the algorithm's efficiency and helps compare the performance of different algorithms.
For example, if I asked you for the sum of all numbers from 1 to n, there are two ways to solve it; decide for yourself which is better after seeing both.
Solution 1 (iterative):

```cpp
int sum_n(int n) {
    int res = 0;
    for (int i = 1; i <= n; i++) {
        res += i;
    }
    return res;
}
```

Solution 2 (closed-form formula):

```cpp
int sum_n(int n) {
    return n * (n + 1) / 2;
}
```
Both methods give the same result but with different algorithms. So which do you think is better?
Big O notation is a mathematical notation used to describe the upper bound of an algorithm's runtime. It provides a worst-case scenario for the time complexity of an algorithm.
- Constant Time: O(1)
  - The execution time does not change with the input size.
  - Example: Accessing an element in an array by index.
- Logarithmic Time: O(log(n))
  - The execution time grows logarithmically as the input size increases.
  - Example: Binary search in a sorted array.
- Linear Time: O(n)
  - The execution time grows linearly with the input size.
  - Example: Linear search in an unsorted array.
- Linearithmic Time: O(n log(n))
  - The execution time grows as n multiplied by the logarithm of n.
  - Example: Efficient sorting algorithms like Merge Sort and Quick Sort.
- Quadratic Time: O(n²)
  - Execution time grows quadratically as input size increases.
  - Example: Bubble Sort, where two nested loops iterate through the input.
- Cubic Time: O(n³)
  - Execution time grows cubically with the input size.
  - Example: Certain dynamic programming solutions.
- Exponential Time: O(2ⁿ)
  - Execution time doubles with each additional input element.
  - Example: Enumerating all subsets of a set.
- Factorial Time: O(n!)
  - Execution time grows factorially with the input size.
  - Example: Generating all permutations of a set, as in solving the Traveling Salesman Problem by brute force.
Besides Big O notation, there are two other asymptotic notations:
- Big Omega Notation (Ω)
- Big Theta Notation (Θ)
Big O notation, Big Theta notation, and Big Omega notation are all mathematical concepts used to describe the time complexity of algorithms, but they serve different purposes. Here’s a comparison of each:
- Big Omega Notation (Ω)
  - Definition: Big Omega notation describes the lower bound of an algorithm's running time. It provides a best-case scenario for how the algorithm performs.
  - Usage: Used to express the minimum time an algorithm will take.
- Big Theta Notation (Θ)
  - Definition: Big Theta notation provides a tight bound on the running time of an algorithm. It describes both the upper and lower bounds, giving an asymptotically tight estimate.
  - Usage: Used when the algorithm's running time grows at the same rate for both upper and lower bounds.
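These bounds have standard formal definitions. For functions f and g and some constant c > 0 and threshold n₀:

```latex
f(n) = O(g(n))      \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\ f(n) \le c \cdot g(n) \\
f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\ f(n) \ge c \cdot g(n) \\
f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))
```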
Notation | Definition | Purpose | Example |
---|---|---|---|
Big O (O) | Upper bound (worst case) | Maximum time | O(n²) |
Big Omega (Ω) | Lower bound (best case) | Minimum time | Ω(n) |
Big Theta (Θ) | Tight bound (both upper and lower) | Exact asymptotic growth | Θ(n log(n)) |
When to use each notation:

- Big O: When you want to describe how an algorithm performs in the worst-case scenario.
- Big Omega: When you want to convey the best-case performance of an algorithm.
- Big Theta: When you need to provide a precise characterization of the algorithm’s growth rate, indicating that it behaves consistently in both upper and lower bounds.
In practice, Big O is the most commonly used notation, but understanding the other two notations is crucial for a comprehensive analysis of algorithm performance.
To estimate an algorithm's time complexity:

- Identify the basic operations: Determine the most significant operations in the algorithm (e.g., comparisons, assignments) that contribute to its running time.
- Count how many times these operations execute: Analyze loops, recursive calls, and other structures to count how many times the basic operations run as a function of the input size n.
- Express this count using Big O notation: Simplify the count to its most significant term, ignoring constant factors and lower-order terms.
After each topic, I will post problems on it, most of them from LeetCode (Easy, Medium, Hard), and I will strive to upload my solutions to those problems in C++, and possibly later in Python.
Happy coding❤ Happy Learning❤💡