
Understanding Algorithms and Big O Notation

Algorithms are fundamental to computer science and software development. They are step-by-step procedures or formulas for solving problems. To evaluate and compare algorithms, we often use Big O notation. This article will introduce you to the basics of algorithms and Big O notation, providing the foundational knowledge needed to analyze and optimize code.

What is an Algorithm?

An algorithm is a finite set of instructions that, when followed, perform a specific task or solve a particular problem. Algorithms can range from simple (like adding two numbers) to complex (like sorting a list of millions of numbers).

Key Characteristics of Algorithms

  1. Input: The algorithm takes zero or more inputs.
  2. Output: It produces at least one output.
  3. Definiteness: Each step is clearly defined.
  4. Finiteness: The algorithm terminates after a finite number of steps.
  5. Effectiveness: Each step is basic enough to be performed, in principle, by a human using a pencil and paper.

Introduction to Big O Notation

Big O notation is a mathematical notation used to describe the efficiency of an algorithm in terms of time and space. Formally, it gives an asymptotic upper bound on the time or space required as a function of the input size; in practice it is most often quoted for the worst-case scenario.

Common Big O Notations

  1. O(1): Constant time - The algorithm's performance is independent of the input size.
  2. O(log n): Logarithmic time - The performance increases logarithmically as the input size increases.
  3. O(n): Linear time - The performance increases linearly with the input size.
  4. O(n log n): Linearithmic time - A combination of linear and logarithmic growth.
  5. O(n^2): Quadratic time - The performance is proportional to the square of the input size.
  6. O(2^n): Exponential time - The performance doubles with each addition to the input data set.
  7. O(n!): Factorial time - The performance is a factorial function of the input size.

Examples

  1. O(1): Accessing an element in an array by index.
  2. O(log n): Binary search in a sorted array.
  3. O(n): Iterating through a list.
  4. O(n log n): Efficient sorting algorithms like Merge Sort and, on average, Quick Sort.
  5. O(n^2): Simple sorting algorithms like Bubble Sort, Insertion Sort, and Selection Sort.
  6. O(2^n): Solving the Tower of Hanoi problem.
  7. O(n!): Solving the traveling salesman problem with a brute-force approach.
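
To make these classes concrete, here is a minimal Kotlin sketch with one small function per growth rate (the function names are ours, chosen for illustration):

fun first(arr: IntArray): Int = arr[0]   // O(1): one array access, regardless of size

fun contains(arr: IntArray, target: Int): Boolean {
    for (num in arr) {                   // O(n): at most one pass over the input
        if (num == target) return true
    }
    return false
}

fun hasDuplicate(arr: IntArray): Boolean {
    for (i in arr.indices) {
        for (j in i + 1 until arr.size) {  // O(n^2): examines every pair once
            if (arr[i] == arr[j]) return true
        }
    }
    return false
}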

Analyzing Algorithms with Big O

Time Complexity

Time complexity is a function that describes the amount of computational time an algorithm takes relative to the input size. It helps determine the scalability of the algorithm.

Space Complexity

Space complexity refers to the amount of memory an algorithm needs to run to completion. It's important when considering memory limitations and efficiency.

Example Analysis

Let's analyze the time complexity of a simple algorithm that adds up all the elements of an array:

fun sumArray(arr: IntArray): Int {
    var total = 0
    for (num in arr) {   // one constant-time addition per element
        total += num
    }
    return total
}
  • Input Size (n): The number of elements in the array.
  • Number of Operations: The loop runs n times, each time performing a constant-time addition operation.
  • Time Complexity: O(n), because the time it takes to complete the function grows linearly with the input size.
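
As a quick sanity check (with hypothetical input values):

val total = sumArray(intArrayOf(2, 4, 6))  // 12, after exactly three additions

Doubling the array's length doubles the number of loop iterations, which is exactly what O(n) predicts.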

Sorting Algorithms: Bubble Sort, Selection Sort, Quick Sort, and Insertion Sort

1. Bubble Sort

Explanation: Bubble Sort is a simple comparison-based algorithm where each pair of adjacent elements is compared, and the elements are swapped if they are in the wrong order. This process repeats from the beginning of the list until the list is sorted.

Advantages:

  • Easy to understand and implement.
  • No additional memory is required (in-place sorting).

Disadvantages:

  • Very inefficient for large datasets.
  • Performs poorly compared to other sorting algorithms.

Time Complexity:

  • Best Case: O(n) (when the array is already sorted and the implementation stops early after a pass with no swaps).
  • Average Case: O(n^2).
  • Worst Case: O(n^2).
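
A minimal in-place Kotlin sketch, including the early-exit check that produces the O(n) best case:

fun bubbleSort(arr: IntArray) {
    for (pass in 0 until arr.size - 1) {
        var swapped = false
        // Each pass bubbles the largest unsorted element to the end.
        for (i in 0 until arr.size - 1 - pass) {
            if (arr[i] > arr[i + 1]) {
                val tmp = arr[i]; arr[i] = arr[i + 1]; arr[i + 1] = tmp
                swapped = true
            }
        }
        if (!swapped) return  // no swaps: already sorted (O(n) best case)
    }
}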

2. Selection Sort

Explanation: Selection Sort divides the list into a sorted and an unsorted region. It repeatedly selects the smallest (or largest, depending on sorting order) element from the unsorted region and moves it to the end of the sorted region.

Advantages:

  • Simple to implement.
  • Performs well on small lists.

Disadvantages:

  • Inefficient on large lists.
  • The number of comparisons is fixed (n(n-1)/2), leading to poor performance.

Time Complexity:

  • Best Case: O(n^2).
  • Average Case: O(n^2).
  • Worst Case: O(n^2).
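
A minimal Kotlin sketch of the ascending-order variant:

fun selectionSort(arr: IntArray) {
    for (i in 0 until arr.size - 1) {
        // Find the smallest element in the unsorted region arr[i..end].
        var minIndex = i
        for (j in i + 1 until arr.size) {
            if (arr[j] < arr[minIndex]) minIndex = j
        }
        // Swap it to the end of the sorted region.
        val tmp = arr[i]; arr[i] = arr[minIndex]; arr[minIndex] = tmp
    }
}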

3. Quick Sort

Explanation: Quick Sort is a divide-and-conquer algorithm. It selects a 'pivot' element and partitions the array into two sub-arrays, according to whether the elements are less than or greater than the pivot. The sub-arrays are then sorted recursively.

Advantages:

  • Efficient for large datasets.
  • Generally performs better than other O(n log n) algorithms in practice.
  • In-place sorting (requires little additional memory).

Disadvantages:

  • Performance can degrade to O(n^2) if the pivot selection is poor.
  • Recursive implementation may lead to stack overflow for very large lists.

Time Complexity:

  • Best Case: O(n log n).
  • Average Case: O(n log n).
  • Worst Case: O(n^2).
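
A recursive Kotlin sketch using the Lomuto partition scheme with the last element as pivot (one of several common pivot strategies):

fun quickSort(arr: IntArray, low: Int = 0, high: Int = arr.size - 1) {
    if (low < high) {
        val p = partition(arr, low, high)
        quickSort(arr, low, p - 1)   // recursively sort elements left of the pivot
        quickSort(arr, p + 1, high)  // recursively sort elements right of the pivot
    }
}

fun partition(arr: IntArray, low: Int, high: Int): Int {
    val pivot = arr[high]            // last element as pivot
    var i = low
    for (j in low until high) {
        if (arr[j] < pivot) {        // move smaller elements to the left side
            val tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp
            i++
        }
    }
    val tmp = arr[i]; arr[i] = arr[high]; arr[high] = tmp  // place pivot between the halves
    return i
}

Picking the last element as pivot is what makes an already sorted array a worst case; randomized or median-of-three pivot selection avoids that degradation in practice.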

4. Insertion Sort

Explanation: Insertion Sort builds the sorted array one element at a time. It takes each element and inserts it into its correct position within the already sorted part of the array, shifting elements as necessary.

Advantages:

  • Simple to implement.
  • Efficient for small datasets or nearly sorted lists.
  • Stable sort (maintains relative order of equal elements).

Disadvantages:

  • Inefficient for large datasets.
  • Requires more shifts and comparisons than more advanced algorithms.

Time Complexity:

  • Best Case: O(n) (when the array is already sorted).
  • Average Case: O(n^2).
  • Worst Case: O(n^2).
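
A minimal Kotlin sketch:

fun insertionSort(arr: IntArray) {
    for (i in 1 until arr.size) {
        val key = arr[i]
        var j = i - 1
        // Shift larger elements of the sorted region one slot to the right.
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j]
            j--
        }
        arr[j + 1] = key  // insert key into its correct position
    }
}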

Pathfinding Algorithms: Dijkstra

Dijkstra's algorithm is a fundamental algorithm in graph theory, used for finding the shortest paths between nodes in a graph, which may represent, for example, road networks.

Explanation

Dijkstra's algorithm works by iteratively selecting the node with the smallest tentative distance, updating the distances to its neighbors, and marking the node as "visited." This process continues until all nodes have been visited or the shortest path to the target node has been found.

Steps of Dijkstra's Algorithm:

  1. Initialization:

    • Set the distance to the start node to zero.
    • Set the distance to all other nodes to infinity.
    • Set all nodes as unvisited.
    • Set the start node as the current node.
  2. Visit the Current Node:

    • For the current node, consider all its unvisited neighbors and calculate their tentative distances (current node's distance + edge weight to neighbor).
    • If the calculated tentative distance is less than the current known distance to the neighbor, update the shortest distance to the neighbor.
  3. Mark as Visited:

    • After visiting all neighbors of the current node, mark the current node as visited. A visited node will not be checked again.
  4. Select the Next Node:

    • Select the unvisited node with the smallest tentative distance and set it as the new current node.
    • Repeat the process until all nodes are visited or the shortest path to the target node is determined.

Advantages

  • Efficiency: Efficiently finds the shortest path in graphs with non-negative weights.
  • Wide Applicability: Useful in various practical applications, such as routing and navigation systems.

Disadvantages

  • Non-negative Weights Only: Does not handle graphs with negative weight edges. For such graphs, the Bellman-Ford algorithm is more appropriate.
  • Single-Source Shortest Path: Finds the shortest paths from a single source node to all other nodes. If multiple sources are needed, the algorithm needs to be run multiple times.
  • Space Complexity: Requires additional memory to maintain the set of unvisited nodes and the distance table.

Time Complexity

The time complexity of Dijkstra's algorithm depends on the data structures used:

  • Using a simple list: O(V^2), where V is the number of vertices.
  • Using a priority queue (binary heap): O((V + E) log V), where E is the number of edges.
  • Using a Fibonacci heap: O(E + V log V), which is more efficient for graphs with a large number of edges.

Example of Dijkstra's Algorithm

Imagine a graph representing a road network where nodes are intersections, and edges are roads with weights representing the travel distance:

     (A)
    / | \
   1  6  3
  /   |   \
(B)-- 5 --(C)
  \   |   /
   4  |  2
    \ | /
     (D)

Edge weights: A-B = 1, A-C = 3, A-D = 6, B-C = 5, B-D = 4, C-D = 2.

  1. Initialization:

    • Start at node A.
    • Set distance to A = 0, and distances to B, C, D = ∞.
  2. Visit A:

    • Update distances to neighbors: B (1), C (3), D (6).
    • Mark A as visited.
  3. Select B (smallest tentative distance):

    • Tentative distance to D through B: 1 + 4 = 5, which improves on 6, so update D to 5. (C through B would be 1 + 5 = 6, worse than its current 3.)
    • Mark B as visited.
  4. Select C:

    • Tentative distance to D through C: 3 + 2 = 5, which does not improve on the current 5, so D keeps its value.
    • Mark C as visited.
  5. Select D:

    • All neighbors visited, algorithm terminates.

Final shortest paths from A:

  • A to B: 1
  • A to C: 3
  • A to D: 5
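
A Kotlin sketch of the binary-heap variant, run on this example graph. The adjacency-map representation is our choice for illustration, and stale queue entries are skipped lazily instead of keeping an explicit visited set, which is equivalent:

import java.util.PriorityQueue

fun dijkstra(graph: Map<Char, List<Pair<Char, Int>>>, start: Char): Map<Char, Int> {
    val dist = mutableMapOf(start to 0).withDefault { Int.MAX_VALUE }
    // Queue of (tentative distance, node) pairs, smallest distance first.
    val queue = PriorityQueue<Pair<Int, Char>>(compareBy { it.first })
    queue.add(0 to start)
    while (queue.isNotEmpty()) {
        val (d, node) = queue.poll()
        if (d > dist.getValue(node)) continue  // stale entry: node already settled
        for ((neighbor, weight) in graph[node].orEmpty()) {
            val candidate = d + weight         // relaxation step
            if (candidate < dist.getValue(neighbor)) {
                dist[neighbor] = candidate
                queue.add(candidate to neighbor)
            }
        }
    }
    return dist
}

// Edges from the example: A-B = 1, A-C = 3, A-D = 6, B-C = 5, B-D = 4, C-D = 2.
val roads = mapOf(
    'A' to listOf('B' to 1, 'C' to 3, 'D' to 6),
    'B' to listOf('A' to 1, 'C' to 5, 'D' to 4),
    'C' to listOf('A' to 3, 'B' to 5, 'D' to 2),
    'D' to listOf('A' to 6, 'B' to 4, 'C' to 2)
)
// dijkstra(roads, 'A') yields A=0, B=1, C=3, D=5, matching the walkthrough above.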

Search Algorithms: Binary Search

Binary Search is an efficient algorithm for finding an item in a sorted list. It works by repeatedly halving the portion of the list that could contain the item until the possible locations have been narrowed down to just one.

Explanation

Binary Search operates on the principle of divide-and-conquer. The algorithm compares the target value to the middle element of the list. If the target value matches the middle element, the search is successful. If the target value is less than the middle element, the algorithm repeats the search on the left half of the list. If the target value is greater, the algorithm repeats the search on the right half of the list. This process continues, halving the list each time, until the target value is found or the sublist has size zero.

Steps of Binary Search:

  1. Initialization: Start with the entire list and determine the middle element.
  2. Comparison: Compare the target value to the middle element.
  3. Sublist Selection: Based on the comparison, decide whether to search the left or right sublist.
  4. Repeat: Continue the process on the selected sublist until the target value is found or the sublist is empty.
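
An iterative Kotlin sketch that returns the index of the target in a sorted array, or -1 if it is absent:

fun binarySearch(arr: IntArray, target: Int): Int {
    var low = 0
    var high = arr.size - 1
    while (low <= high) {
        val mid = low + (high - low) / 2  // midpoint; avoids overflow of low + high
        when {
            arr[mid] == target -> return mid
            arr[mid] < target -> low = mid + 1   // discard the left half
            else -> high = mid - 1               // discard the right half
        }
    }
    return -1  // sublist is empty: target not present
}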

Advantages

  • Efficiency: Binary Search is much faster than linear search for large datasets. By halving the search space at each step, it quickly reduces the size of the problem.
  • Predictable Performance: The time complexity of binary search is logarithmic, which means it grows very slowly with the size of the input.
  • Simple Implementation: Once the list is sorted, binary search is relatively straightforward to implement.

Disadvantages

  • Requires Sorted Data: Binary Search can only be applied to a list that is already sorted. This means there is an upfront cost to sorting the list if it is not already sorted.
  • Overhead for Small Lists: For very small lists, the overhead of the repeated comparisons and divisions may make binary search less efficient than a simple linear search.
  • Complexity with Dynamic Data: In scenarios where the data is frequently changing, maintaining a sorted list can add complexity and overhead.

Time Complexity

  • Best Case: O(1) - The target value is found at the middle of the list on the first comparison.
  • Average Case: O(log n) - The list is halved repeatedly, leading to logarithmic growth relative to the number of elements.
  • Worst Case: O(log n) - The target value is in the last possible position checked, still requiring a logarithmic number of steps.
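
As a usage check of the sketch above (with hypothetical values): binarySearch(intArrayOf(1, 3, 5, 7, 9), 7) inspects index 2 (value 5), then index 3 (value 7), and returns 3 after two iterations, consistent with the logarithmic bound.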
