Operating Systems

This is a repository for Operating Systems (CMSC 125) laboratory exercises using C.


LAB 1

This laboratory exercise demonstrates inter-process communication and process management in C using the fork(), waitpid(), and exit() system calls. The program defines a parent process that creates two child processes and generates a Fibonacci sequence up to the 10th term. Initially, the main function declares an integer array fibonacci[] to store the sequence values and computes the Fibonacci numbers using a loop.

The fork() system call is used to create the first child process (child1_pid). If creation succeeds, this child outputs its process ID and prints the entire Fibonacci sequence. After creating child1_pid, the program attempts to create a second child process (child2_pid), but it reads child2_pid before the variable has been assigned by a second fork() call. This is a logical error: child2_pid is used while uninitialized, so the second child process is never created correctly.

In the parent process, the program uses waitpid() to synchronize the parent process with both child processes, waiting for them to finish before printing a final message and exiting. The purpose of this exercise is to provide hands-on experience with process control and observe the output generated by each process. However, a correction to the fork() logic for child2_pid would be necessary to ensure both child processes function as intended.


LAB 2

The purpose of this exercise is to simulate and analyze different CPU scheduling algorithms, which are essential for managing the execution of processes in an operating system. The exercise focuses on four fundamental scheduling algorithms: First-Come-First-Served (FCFS), Shortest Job First (SJF), Priority-Based Scheduling, and Round Robin (RR). Each algorithm allocates CPU time to processes in different ways, offering unique approaches to process execution. Understanding these algorithms is crucial for gaining insights into how operating systems prioritize and manage multiple tasks, ensuring efficient CPU utilization.

In this exercise, the implementation of each scheduling algorithm helps simulate their behavior and performance in managing processes. FCFS operates in a simple, non-preemptive manner, where processes are executed in the order they arrive. While easy to implement, it can lead to the convoy effect, where long processes delay shorter ones. SJF, another non-preemptive algorithm, selects the process with the shortest burst time next, minimizing average waiting time but requiring prior knowledge of burst times. Priority-Based Scheduling assigns priorities to processes and executes the highest-priority one first. While efficient for critical tasks, it can lead to starvation of lower-priority processes. Round Robin is a preemptive algorithm where each process is assigned a fixed time quantum, allowing for fairness in time-sharing systems, but it may increase turnaround times, especially for processes with varying execution requirements.

The implementation also involves calculating key performance metrics such as waiting time and turnaround time for each algorithm. These metrics provide a basis for comparing the efficiency and fairness of the scheduling strategies. The results of these simulations are visually represented in a Gantt chart, which illustrates the execution timeline of processes, highlighting idle times and periods when processes are being executed. This visual representation helps in understanding the flow of processes and the impact of different scheduling policies on system performance.

By comparing these algorithms, the exercise enables users to understand their respective advantages and trade-offs. For example, while FCFS is simple, it can result in high waiting times for processes with longer burst times. SJF is optimal for minimizing waiting times but is impractical without knowledge of burst times. Priority-Based Scheduling is ideal when certain tasks need to be prioritized, though it may lead to starvation for low-priority processes. Round Robin ensures fairness but can cause higher turnaround times, especially when processes have significantly different CPU time requirements.


LAB 3

The purpose of this program is to demonstrate multithreading in C using POSIX threads (pthreads) while ensuring data consistency and synchronization using mutexes. It creates multiple threads that increment a shared global counter variable, counter, up to a predefined limit (MAXIMUM_NUMBER), while using a mutex to prevent data races, ensuring that only one thread modifies the counter at a time. The program also measures and outputs the total time taken for all threads to complete their execution.

The program starts by initializing a mutex using pthread_mutex_init(). This mutex is then used inside the count_to_1000 function, which is executed by each thread. Inside this function, each thread repeatedly locks the mutex before accessing and incrementing the counter variable, ensuring that the counter's value is updated atomically and preventing concurrent access by multiple threads. Once the thread increments the counter, it unlocks the mutex, allowing other threads to access and modify the counter.

In the body of the count_to_1000 function, the counter's current value is stored in a local variable, number, which is used to control the execution of nested loops. These nested loops do not perform any specific computation but are included to simulate work being done, making the execution of the threads noticeable in terms of time taken. After each thread finishes executing the loops and increments the counter, it prints a message indicating which thread finished counting and the value it counted to.

The main function creates two threads (NUM_THREADS is set to 2) using pthread_create() and waits for all threads to finish using pthread_join(). It measures the total time taken for the program execution using clock(), and at the end, it prints this time. The program also properly cleans up by destroying the mutex with pthread_mutex_destroy() and exiting the threads using pthread_exit() to ensure that all resources are released appropriately.

The program demonstrates key concepts in multithreading, such as the use of mutexes to synchronize access to shared resources, thread creation and management with pthread_create() and pthread_join(), and performance measurement using the clock() function. This exercise is useful for understanding how to manage concurrency in a multithreaded environment, preventing issues like race conditions, while also evaluating the time complexity and execution efficiency of multithreaded applications.


LAB 4

This program implements a producer-consumer problem using pthreads and condition variables to synchronize the interaction between producer and consumer threads. The producer generates random items and places them in a shared buffer, while the consumer consumes items from the buffer. The buffer has a fixed size (BUFFER_SIZE), and the program ensures proper synchronization between threads to prevent race conditions and manage resource contention. The use of mutexes ensures mutual exclusion when accessing the shared buffer, and condition variables are employed to allow threads to wait when the buffer is either full or empty.

The program starts by initializing a buffer of fixed size (BUFFER_SIZE set to 5) and an integer counter (count) to track the number of items in the buffer. It creates two condition variables—buffer_empty and buffer_full—that control the flow of the producer and consumer threads. A mutex is used to protect the critical section, ensuring that only one thread can access the buffer at any given time.

The producer function runs in an infinite loop, where it locks the mutex and checks if the buffer is full. If the buffer is full, the producer waits for the buffer_empty condition variable to be signaled by the consumer, indicating that there is space in the buffer. Once space is available, the producer generates a random item, adds it to the buffer, increments the item count, and signals the buffer_full condition variable to notify the consumer that there is an item available for consumption.

Similarly, the consumer function also runs in an infinite loop, where it locks the mutex and checks if the buffer is empty. If the buffer is empty, the consumer waits for the buffer_full condition variable to be signaled by the producer, indicating that there is an item to consume. Once an item is available, the consumer randomly selects a slot in the buffer, consumes a random quantity of the item, and updates the buffer accordingly. If the consumed item reduces the quantity in a slot to zero, the consumer shifts the remaining items in the buffer to fill the gap and decrements the item count. After consuming, the consumer signals the buffer_empty condition variable to notify the producer that there is space in the buffer.

The main function takes two arguments: the number of producer threads and the number of consumer threads. It creates the specified number of producer and consumer threads, passing each thread a unique ID. The main function then waits for all threads to finish by calling pthread_join on each thread.

In summary, the program demonstrates how to manage concurrent access to a shared resource (the buffer) using mutexes and condition variables in a multithreaded environment. The producers and consumers run concurrently, with synchronization mechanisms ensuring that no producer tries to add items when the buffer is full, and no consumer tries to consume when the buffer is empty. This problem and its solution provide valuable insights into thread synchronization and managing shared resources in multithreaded applications.
