Participation Guidelines


Week 1: Introductions

Write a blog post introducing yourself and your goals for this course. Reflect on your initial thoughts and expectations.

Blog Post:

About Me

My name is Daniel Monbrod. (Phonetic Pronunciation: Dan-yuhl Mon-brahd)

I am currently the Manager of a Beggars Pizza location, but my background includes hands-on experience in cybersecurity, compliance, and IT problem-solving.

Professional Experience

I recently worked as a Security Compliance Analyst Intern at iManage, where I implemented security compliance programs, conducted risk assessments, and collaborated with cross-functional teams to improve security practices. In my current role as Manager, I have developed leadership, organizational, and team-building skills while maintaining operational excellence.

Hobbies and Interests

I have a passion for astronomy and enjoy photographing celestial objects with my telescope. Philosophy is another major interest, particularly exploring ethical questions and reflecting on the nature of existence. Fitness, specifically calisthenics, is also a part of my daily routine. Additionally, I love experimenting with music theory and audio acoustics in my free time.

Academic Pursuits

I am pursuing a Bachelor of Science in Cybersecurity, expected to graduate in May 2025. My primary motivation is to create secure systems that protect individuals and organizations while exploring the ethical implications of digital advancements.

Future Goals

In the future, I aspire to work as a cybersecurity professional specializing in ethical hacking and cybercrime prevention. I also hope to contribute to discussions about international cybersecurity through ideas like a global cyber constitution. Beyond my career goals, I aim to continue exploring my passions for astronomy, philosophy, and music.


Week 3: Algorithmic Analysis

The Importance of Algorithm Analysis

Write a blog post covering one of the following subtopics:

Ensure your blog is at least 500 words and includes examples like code snippets and real-world applications.

Week 3 Blog Post

Algorithmic Analysis: Understanding Big O Notation

Introduction

Have you ever wondered why some programs run faster than others? Or why one solution to a problem is more efficient than another? The answer lies in algorithmic analysis, a fundamental concept in computer science that helps us measure and compare the efficiency of different algorithms. One of the most widely used tools for this is Big O Notation, which allows us to quantify an algorithm’s performance as input size grows.

In this post, I’ll break down Big O Notation, provide real-world examples, and explain why understanding algorithm efficiency is crucial for writing better code. This discussion is based on Chapter 3: Algorithm Analysis from our coursework and incorporates key insights from the accompanying PowerPoint presentation.


What is Big O Notation?

Big O Notation is a mathematical tool for describing an upper bound on an algorithm’s growth rate, most commonly applied to its worst-case time complexity (how running time grows) and space complexity (how memory use grows). It helps computer scientists and developers understand how an algorithm scales as the input size (n) increases.

Common Big O Complexities

Here are some common time complexities and their significance:

Summary of Big O complexities and corresponding example algorithms:

Big O Notation   Complexity Type     Example Algorithm
O(1)             Constant Time       Hash table lookup
O(log n)         Logarithmic Time    Binary search (sketched below)
O(n)             Linear Time         Iterating through an array
O(n log n)       Log-Linear Time     Merge sort; quicksort (average case)
O(n^2)           Quadratic Time      Bubble sort, insertion sort
O(2^n)           Exponential Time    Recursive Fibonacci
O(n!)            Factorial Time      Traveling Salesman Problem (brute force)
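
To make the O(log n) row concrete, here is a minimal binary search sketch (my own illustrative code, not from the coursework). It works only on sorted input, halving the search range on every iteration:

def binary_search(sorted_lst, target):
    """Return the index of target in sorted_lst, or -1 if absent. O(log n)."""
    low, high = 0, len(sorted_lst) - 1
    while low <= high:
        mid = (low + high) // 2  # Halve the search range each iteration
        if sorted_lst[mid] == target:
            return mid
        elif sorted_lst[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1

print(binary_search([2, 5, 8, 12, 16, 23, 38], 16))  # Output: 4

Doubling the list length adds only one extra iteration, which is exactly what O(log n) growth means.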

Why Does Big O Matter?

Big O notation helps us make data-driven decisions about which algorithm to use. If you’re working on a large dataset, an O(n log n) algorithm (like merge sort) is preferable to an O(n^2) algorithm (like bubble sort) because it scales better with increasing input sizes.
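
To put rough numbers on that claim, here is a back-of-the-envelope sketch (my simplification: one unit of work per comparison):

import math

n = 1_000_000
print(f"n log n is about {n * math.log2(n):,.0f} operations")  # ~19,931,569
print(f"n^2 is {n ** 2:,} operations")  # 1,000,000,000,000

At a (generous) billion operations per second, that is roughly 0.02 seconds versus about 17 minutes for the same input.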


Example: Comparing Algorithms

Example 1: Linear vs. Quadratic Complexity

Consider two different approaches to checking for duplicate elements in a list:

O(n^2) - Naive Approach
# Brute force approach (Nested loop) - O(n^2)
def has_duplicates(lst):
    for i in range(len(lst)):
        for j in range(i + 1, len(lst)):
            if lst[i] == lst[j]:
                return True
    return False
                
O(n) - Optimized Approach
# Using a set (Hash table) - O(n)
def has_duplicates_optimized(lst):
    seen = set()
    for num in lst:
        if num in seen:
            return True
        seen.add(num)
    return False
                

Here, the first approach checks each element against every other element, resulting in O(n^2) time complexity. The second approach, using a set, reduces the complexity to O(n), making it much more efficient for large datasets.
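
An informal benchmark makes the gap tangible. This sketch is my own addition for illustration; absolute timings vary by machine, and a duplicate-free list is the worst case for both functions:

import timeit

data = list(range(2_000))  # No duplicates, so both functions must scan everything

naive_time = timeit.timeit(lambda: has_duplicates(data), number=10)
optimized_time = timeit.timeit(lambda: has_duplicates_optimized(data), number=10)
print(f"O(n^2) version: {naive_time:.3f}s | O(n) version: {optimized_time:.3f}s")

On typical hardware the set-based version finishes orders of magnitude faster, and the gap widens as the list grows.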


Insights from the PowerPoint: Why Theoretical Analysis Matters

The PowerPoint on Algorithm Analysis emphasizes the importance of theoretical analysis over purely experimental studies. Key takeaways:

  • Experimental analysis requires implementation and hardware-specific benchmarking, making it inconsistent across different systems.
  • Theoretical analysis allows us to evaluate an algorithm’s performance independently of hardware by considering its asymptotic behavior.
  • The RAM Model assumes that accessing any memory cell takes constant time, making it useful for analyzing algorithms in a generalized way.
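
In that spirit, here is a small counting sketch in the style of the textbook’s theoretical analysis (the function and the per-line counts are my own illustration):

def find_max(data):
    """Return the largest value in data (assumed non-empty); ~2n primitive operations."""
    biggest = data[0]        # 1 index access + 1 assignment
    for value in data:       # the loop body runs n times
        if value > biggest:  # 1 comparison per iteration
            biggest = value  # at most 1 assignment per iteration
    return biggest           # 1 return

Dropping the constant factors, the RAM-model conclusion is that find_max runs in O(n) time on any machine the model approximates, with no benchmarking required.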

Real-World Applications of Big O

1. Search Engines

Search engines like Google need fast, scalable lookups. Their index structures rely on hash tables (average O(1)) and balanced search trees (O(log n)) so that query latency stays low even as the indexed data grows.

2. Sorting Large Data Sets

When sorting massive datasets (e.g., database records), algorithms like merge sort (O(n log n)) are preferred over bubble sort (O(n^2)).
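
As a sketch of where the O(n log n) comes from, here is a compact merge sort (the standard algorithm; this particular code is my own illustration, not taken from the slides):

def merge_sort(lst):
    """Sort lst in O(n log n) time by recursively splitting and merging."""
    if len(lst) <= 1:
        return lst
    mid = len(lst) // 2
    left = merge_sort(lst[:mid])   # O(log n) levels of splitting...
    right = merge_sort(lst[mid:])
    merged = []                    # ...with O(n) merge work per level
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))  # Output: [1, 2, 5, 7, 9]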

3. Machine Learning and AI

Training machine learning models is dominated by matrix operations: naive matrix multiplication is O(n^3), and pairwise computations such as distance matrices are O(n^2), so algorithmic improvements translate directly into shorter training times.


Best Practices and Common Pitfalls

Best Practices

  • ✔ Always consider the worst-case scenario.
  • ✔ Use hash tables or binary search trees when possible to optimize performance.
  • ✔ Prefer divide and conquer approaches (e.g., quicksort, merge sort) for sorting.
  • ✔ Profile and benchmark code to identify bottlenecks (see the profiling sketch after this list).
  • ✔ Leverage amortized analysis for understanding efficiency over multiple operations.
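
To make the profiling advice concrete, here is a minimal sketch using the standard library’s cProfile module (it assumes has_duplicates and data are defined at the top level of the same script):

import cProfile

data = list(range(3_000))
cProfile.run("has_duplicates(data)")  # Prints per-function call counts and cumulative times

The output shows where the time actually goes, which often contradicts intuition and is the fastest way to find the loop worth optimizing.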

Common Pitfalls

  • ❌ Ignoring time complexity when writing code.
  • ❌ Using inefficient loops where hash tables or sorting algorithms could be applied.
  • ❌ Assuming that a faster processor will always compensate for a bad algorithm.
  • ❌ Confusing Big O with Big Theta (Θ) or Omega (Ω)—Big O gives an upper bound, but Θ provides a tight bound on performance.

Conclusion

Understanding Big O Notation is essential for any programmer who wants to write efficient and scalable code. By analyzing an algorithm’s time and space complexity, we can make informed choices about which approach to use in different scenarios. Whether you're working on search engines, sorting data, or building AI models, knowing how to optimize algorithms will save you time, money, and computing resources.

Your Turn!

What’s an example of a slow algorithm you’ve encountered? How would you optimize it? Let’s discuss in the comments!


References

  1. Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2009). Introduction to Algorithms (3rd ed.). MIT Press.
  2. Goodrich, M. T., Tamassia, R., & Goldwasser, M. H. (2013). Data Structures and Algorithms in Python. Wiley.
  3. Course materials from CPSC 34000, Week 3 PowerPoint: Algorithm Analysis.

(This blog post is part of my coursework for CPSC 34000: Algorithms and Data Structures.)


Week 5: Coding and Reflection — Dynamic Arrays in Python

Implementing and Reflecting on Dynamic Arrays

Blog Post Requirements:

Part 1: Implementing a Dynamic Array

Part 2: Reflection on Your Work


Discussion: Coding and Reflection -- Dynamic Arrays in Python

Understanding Dynamic Arrays: Implementation and Insights

Introduction

Have you ever wondered how Python’s list effortlessly expands when you add elements? Unlike static arrays, which have a fixed size, dynamic arrays adjust their capacity automatically. This ability is crucial in data structures and algorithm optimization.

In this week's blog post, I’ll walk through the implementation of a simplified dynamic array, discuss its challenges, and reflect on what I learned. Understanding how dynamic arrays work under the hood strengthens problem-solving skills and enhances efficiency when writing Python programs.

Part 1: Implementing a Dynamic Array

Python’s built-in list functions as a dynamic array, handling memory reallocation internally. However, manually implementing a dynamic array offers insights into how resizing, insertion, and deletion work at a lower level.

Key Features of Our Dynamic Array Implementation

  • ✔ Supports append(), insert(), delete(), get(), and resize()
  • ✔ Uses low-level memory allocation with ctypes to mimic a C-like array
  • ✔ Dynamically expands when full by allocating a new array with double capacity

Below is the Python implementation of a simplified dynamic array. This implementation supports basic operations like append(), insert(), delete(), and resize(). It also dynamically grows or shrinks based on the number of elements, mimicking how Python's list works under the hood.

Python Implementation


import ctypes

class DynamicArray:
    """A simplified dynamic array similar to Python's list.

    Features:
    - Dynamically resizes when full by doubling its capacity.
    - Shrinks its capacity to half when the number of elements is less than 25% of the current capacity.
    - Supports append(), insert(), delete(), index access via __getitem__, and resizing via _resize().

    Note:
    This implementation is for educational purposes only and is not optimized for production use.
    """
                
    def __init__(self):
        """Initialize an empty array with capacity 1."""
        self._n = 0  # Number of elements currently in the array
        self._capacity = 1  # Initial capacity of the array
        self._A = self._make_array(self._capacity)  # Allocate low-level array with initial capacity

    def __len__(self):
        """Return the number of elements stored in the array."""
        return self._n
                
    def __getitem__(self, k):
        """Return the element at index k."""
        if not 0 <= k < self._n:  # Check if index is within bounds (0 <= k < self._n)
            raise IndexError("Invalid index")
        return self._A[k]
            
    def __repr__(self):
        """Return a string representation of the array."""
        return "[" + ", ".join(str(self._A[i]) for i in range(self._n)) + "]"

    def append(self, obj):
        """Add an element at the end of the array, resizing if necessary."""
        if self._n == self._capacity:  # Check if the array is full
            self._resize(2 * self._capacity)  # Double the capacity to accommodate more elements
        self._A[self._n] = obj  # Add the new element
        self._n += 1  # Increment the count of elements

    def insert(self, k, value):
        """Insert an element at index k, shifting elements to the right."""
        if not 0 <= k <= self._n:  # Ensure the index is valid (0 <= k <= self._n; k == self._n appends at the end)
            raise IndexError("Invalid index")
        if self._n == self._capacity:  # Resize if the array is full
            self._resize(2 * self._capacity)
        for i in range(self._n, k, -1):  # Shift elements to the right to make space
            self._A[i] = self._A[i - 1]
        self._A[k] = value  # Insert the new value
        self._n += 1  # Increment the count of elements

    def delete(self, k):
        """Remove the element at index k, shifting elements to the left."""
        if not 0 <= k < self._n:  # Ensure the index is valid (0 <= k < self._n)
            raise IndexError("Invalid index")
        for i in range(k, self._n - 1):  # Shift elements to the left to fill the gap
            self._A[i] = self._A[i + 1]
        self._n -= 1  # Decrement the count of elements
        # Optionally shrink the array if it's less than 25% full
        if 0 < self._n < self._capacity // 4 and self._capacity > 16:
            self._resize(self._capacity // 2)

    def _resize(self, c):
        """Resize the internal array to a new capacity c."""
        B = self._make_array(c)  # Create a new array with the desired capacity
        for k in range(self._n):  # Copy existing elements to the new array
            B[k] = self._A[k]
        self._A = B  # Replace the old array with the new one
        self._capacity = c  # Update the capacity

    def _make_array(self, c):
        """Return a new low-level array with capacity c."""
        return (c * ctypes.py_object)()  # Use ctypes to allocate an array of Python object references

Example Usage

Here’s how you can use the DynamicArray class:


if __name__ == "__main__":
    arr = DynamicArray()
    arr.append(10)
    arr.append(20)
    arr.insert(1, 15)  # Insert 15 at index 1 -> [10, 15, 20]
    arr.delete(2)      # Delete element at index 2 -> [10, 15]
    print(arr)         # Output: [10, 15]

This example demonstrates appending, inserting, and deleting elements in the dynamic array.


Part 2: Reflection on Implementation

Understanding Dynamic Arrays

Dynamic arrays are versatile and efficient data structures, offering flexibility and performance benefits in various scenarios. This project forced me to really think about what happens under the hood when Python’s list dynamically resizes itself: it allocates new memory, copies the old data, and carries on. Each resize is an O(n) operation, but because the capacity doubles, resizes happen rarely enough that append() stays O(1) amortized.

Why Dynamic Over Static?

  • Resizes itself—No awkward "Uh, I don’t have space for that" moments like with static arrays.
  • Memory-efficient—It only grows when needed, unlike a linked list hogging memory for pointers.
  • Fast append()—Except for those annoying moments when it has to resize.

Implementation Challenges

  • Memory Management: Manually allocating memory with ctypes was a wake-up call.
  • Shifting Elements: Inserting or deleting in the middle required shifting elements, leading to O(n) operations.
  • Resizing Bugs: Debugging index errors due to incorrect copying during resizing.

Considering Big O

Big O complexity of dynamic array operations:

Operation      Complexity        Notes
append()       O(1) amortized    Usually O(1); the occasional resize costs O(n). Doubling keeps resizes rare (CPython’s built-in list actually over-allocates by roughly 12.5% rather than doubling, but the amortized argument is the same). Demonstrated below.
insert(k, v)   O(n)              Requires shifting elements right to make room for the insertion.
delete(k)      O(n)              Shifts elements left to maintain order.
get(k)         O(1)              Direct index-based access is constant time.
_resize(c)     O(n)              Copies every element into the new array; expensive, but infrequent thanks to the growth factor.
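
To see the over-allocation behavior from the append() row, here is a small experiment on Python’s built-in list, a classic exercise from the Goodrich et al. textbook (exact byte counts vary by Python version):

import sys

data = []
last_size = sys.getsizeof(data)
print(f"length {len(data):>3}: {last_size} bytes")
for _ in range(64):
    data.append(None)
    size = sys.getsizeof(data)
    if size != last_size:  # A size jump means the list just over-allocated
        print(f"length {len(data):>3}: {size} bytes")
        last_size = size

The size stays flat between jumps, which is the amortized O(1) story in action: most appends are free, and each occasional reallocation pre-pays for many future ones.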

Takeaways

This project was an eye-opener. I knew Python lists were convenient, but now I get why they work so well. It’s all about handling memory intelligently and trading off between speed and flexibility.

At the end of the day, understanding how data structures actually work gives me an edge in writing better code. While Python handles a lot of this automatically, knowing what’s happening under the hood makes debugging and optimization much easier.


References

  1. Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2009). Introduction to Algorithms (3rd ed.). MIT Press.
  2. Goodrich, M. T., Tamassia, R., & Goldwasser, M. H. (2013). Data Structures and Algorithms in Python. Wiley.
  3. Python Software Foundation. (n.d.). Python Lists - Link
  4. GeeksForGeeks. (n.d.). Dynamic Array in Python - Link
  5. CS50 Harvard. (n.d.). Memory and Dynamic Arrays Lecture - Link
  6. Course Instructional Material from CPSC 34000, Week 5 PowerPoint: Chapter 5 | Array-Based Sequences.

(This blog post is part of my coursework for CPSC 34000: Algorithms and Data Structures.)