2 changes: 1 addition & 1 deletion .idea/misc.xml


126 changes: 126 additions & 0 deletions Report.md
@@ -0,0 +1,126 @@
# Atomic Variables

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    private static AtomicInteger atomicCounter = new AtomicInteger(0);
    private static int normalCounter = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1_000_000; i++) {
                atomicCounter.incrementAndGet();
                normalCounter++; // not atomic: read-modify-write race
            }
        };

        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println("Atomic Counter: " + atomicCounter);
        System.out.println("Normal Counter: " + normalCounter);
    }
}
```

## **Questions:**

- What output do you get from the program? Why?

- What is the purpose of `AtomicInteger` in this code?

- What thread-safety guarantees does `atomicCounter.incrementAndGet()` provide?

- In which situations would using a lock be a better choice than an atomic variable?

- Besides `AtomicInteger`, what other data types are available in the `java.util.concurrent.atomic` package?

---

## **Answers:**

### Q1:

- Atomic Counter: 2000000
- Normal Counter: Less than 2000000

#### Why:

- `atomicCounter` uses `AtomicInteger`, which ensures atomic increments, resulting in exactly 2,000,000 (1M increments per thread).

- `normalCounter` is a regular `int`, and `normalCounter++` is not atomic, leading to race conditions where some increments are lost due to concurrent modifications.
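The lost update becomes visible when `normalCounter++` is expanded into the three separate steps it compiles to. A minimal sketch (the class name `LostUpdateSketch` and the helper method are illustrative, not part of the assignment code):

```java
// Illustrates why normalCounter++ is not atomic: it is a read, a modify,
// and a write, and two threads can interleave between any of these steps.
public class LostUpdateSketch {
    static int normalCounter = 0;

    static void incrementNonAtomic() {
        int temp = normalCounter; // 1. read the current value
        temp = temp + 1;          // 2. modify the local copy
        normalCounter = temp;     // 3. write back (may overwrite another thread's write)
    }

    public static void main(String[] args) {
        incrementNonAtomic();
        System.out.println(normalCounter); // 1 here; under contention, step 3 can discard a concurrent update
    }
}
```

If two threads both execute step 1 before either reaches step 3, both write the same value back and one increment is lost.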

### Q2:

`AtomicInteger` provides thread-safe, atomic operations to safely increment `atomicCounter` without locks, preventing race conditions in a multithreaded environment.

### Q3:

`incrementAndGet()` is atomic, ensuring that the read, increment, and write operations are performed as a single, indivisible unit.

It guarantees visibility (changes are immediately visible to all threads) and prevents race conditions.
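Conceptually, `incrementAndGet()` can be understood as a compare-and-set retry loop. The sketch below mirrors those semantics (the real JDK implementation uses a hardware intrinsic, so this is an illustration, not the actual source):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the CAS retry loop behind incrementAndGet(): the update only
// succeeds if no other thread changed the value between the read and the write.
public class CasLoopSketch {
    static int incrementAndGetViaCas(AtomicInteger counter) {
        while (true) {
            int current = counter.get(); // volatile read: sees the latest value
            int next = current + 1;
            // compareAndSet atomically writes 'next' only if the value is still 'current'
            if (counter.compareAndSet(current, next)) {
                return next;
            }
            // another thread won the race; loop and retry with the fresh value
        }
    }

    public static void main(String[] args) {
        AtomicInteger c = new AtomicInteger(41);
        System.out.println(incrementAndGetViaCas(c)); // 42
    }
}
```

No increment is ever lost: a failed CAS simply retries against the other thread's updated value.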

### Q4:

- When operations involve multiple variables or complex logic that require mutual exclusion (e.g., updating two counters consistently).
- When fairness or explicit control over locking (e.g., ReentrantLock) is needed.
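A minimal sketch of the first point: two counters whose sum must stay constant. A single atomic variable cannot protect a multi-variable invariant, but holding one lock across both updates can (the class and field names are hypothetical):

```java
import java.util.concurrent.locks.ReentrantLock;

// Two fields share an invariant (credits + debits == 100), so both updates
// must happen under the same lock; no thread may observe a partial state.
public class PairedCounters {
    private final ReentrantLock lock = new ReentrantLock();
    private int credits = 100;
    private int debits = 0;

    public void move(int amount) {
        lock.lock();
        try {
            credits -= amount; // both mutations are inside one critical section,
            debits += amount;  // so the invariant holds at every observable point
        } finally {
            lock.unlock();
        }
    }

    public int sum() {
        lock.lock();
        try {
            return credits + debits; // always 100
        } finally {
            lock.unlock();
        }
    }
}
```

Replacing the two `int` fields with two independent `AtomicInteger`s would not help: a reader could still see one counter updated and the other not.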

### Q5:

#### Other Data Types:

- `AtomicLong`: For atomic operations on `long` values.
- `AtomicBoolean`: For atomic operations on `boolean` values.
- `AtomicReference<V>`: For atomic operations on object references.
- `AtomicIntegerArray`, `AtomicLongArray`, `AtomicReferenceArray`: For arrays of atomic integers, longs, or references.
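As a small demonstration of `AtomicReference<V>`, `compareAndSet` swaps the reference only while it still holds the expected value (the state strings here are made up for illustration):

```java
import java.util.concurrent.atomic.AtomicReference;

// compareAndSet on an AtomicReference: the second attempt fails because the
// reference no longer holds the expected value "draft".
public class AtomicRefDemo {
    public static void main(String[] args) {
        AtomicReference<String> ref = new AtomicReference<>("draft");
        boolean first = ref.compareAndSet("draft", "published");  // succeeds
        boolean second = ref.compareAndSet("draft", "archived");  // fails: value is now "published"
        System.out.println(first + " " + second + " " + ref.get()); // true false published
    }
}
```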

---

# Monte Carlo π Estimation Report

## Performance Comparison

- **Single-Threaded Version:**
- Execution Time: ~1.4-2 seconds for 50,000,000 points.
- Estimated π: ~3.1416 (accuracy depends on point count).

- **Multi-Threaded Version (4 threads):**
- Execution Time: ~0.1-0.3 seconds for 50,000,000 points.
- Estimated π: ~3.1416 (same accuracy as single-threaded).

One of the observed outputs:
```
Single threaded calculation started:
Monte Carlo Pi Approximation (single thread): 3.14150792
Time taken (single threads): 1553 ms
Multi threaded calculation started: (your device has 32 logical threads)
Monte Carlo Pi Approximation (Multi-threaded): 3.14165904
Time taken (Multi-threaded): 161 ms
```
## Questions

### Was the multi-threaded implementation always faster than the single-threaded one?

No, the multi-threaded implementation is not always faster.

**Why not?**
- Small point counts (e.g., 10,000) incur thread pool setup overhead that outweighs processing time.
- Excessive threads beyond CPU cores (e.g., 16 threads on 4 cores) cause context switching overhead.
- Single-core systems lack parallelism, making multi-threading slower.
- Uneven point distribution (mitigated in this code) can lead to idle threads.

### If not, what factors are the cause and what can you do to mitigate these issues?

**Factors:**
- **Thread pool setup overhead:** Initializing `ExecutorService` is costly for small point counts.
- **Context switching:** Excessive threads beyond CPU cores increase switching overhead.
- **Hardware limitations:** Single-core systems lack parallelism.

**Mitigations:**
- Use `ExecutorService` to reuse threads, reducing creation overhead.
- Set thread count to match CPU cores (`availableProcessors()`).
- Fall back to single-threaded version for small point counts (<1M).
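The last two mitigations can be sketched as one helper. The class name, method, and the 1M-point cutoff are assumptions for illustration; only `Runtime.getRuntime().availableProcessors()` is a real JDK API:

```java
// Hypothetical helper: skip threading below an (assumed) 1M-point threshold,
// otherwise size the worker count to the number of available cores.
public class ThreadCountChooser {
    static final long SINGLE_THREAD_THRESHOLD = 1_000_000;

    static int chooseThreads(long numPoints) {
        if (numPoints < SINGLE_THREAD_THRESHOLD) {
            return 1; // pool setup would cost more than the work saves
        }
        // more threads than cores only adds context-switching overhead
        return Runtime.getRuntime().availableProcessors();
    }
}
```

The caller would then invoke the single-threaded estimator when `chooseThreads` returns 1 and the pooled version otherwise.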
37 changes: 30 additions & 7 deletions src/main/java/Banking/BankAccount.java
@@ -17,25 +17,48 @@ public int getId(){
return id;
}
public int getBalance() {
return balance; // Unlocked read: may observe a stale value; acquire the lock here too if an up-to-date balance is required
}

public Lock getLock() {
return lock;
}

public void deposit(int amount) {
lock.lock();
try {
balance += amount;
} finally {
lock.unlock();
}
}

public void withdraw(int amount) {
lock.lock();
try {
balance -= amount;
} finally {
lock.unlock();
}
}

public void transfer(BankAccount target, int amount) {
// Locking order by id to avoid deadlock
Lock firstLock = this.id < target.id ? this.lock : target.lock;
Lock secondLock = this.id < target.id ? target.lock : this.lock;

firstLock.lock();
try {
secondLock.lock();
try {
this.balance -= amount; // Deduction from the first account
target.balance += amount; // Deposit to destination account
} finally {
secondLock.unlock();
}
} finally {
firstLock.unlock();
}
}

}
2 changes: 1 addition & 1 deletion src/main/java/Banking/BankingMain.java
@@ -15,7 +15,7 @@ public List<BankAccount> calculate() throws InterruptedException {
Thread[] threads = new Thread[4];
for(int i = 1; i <= 4; i++){
String fileName = i + ".txt";
threads[i - 1] = new Thread(new TransactionProcessor(accounts.get(i - 1), fileName, accounts), "Acc-" + i);
}

for(Thread thread : threads){
76 changes: 60 additions & 16 deletions src/main/java/MonteCarloPI/MonteCarloPi.java
@@ -1,8 +1,10 @@
package MonteCarloPI;

import java.util.Random;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.atomic.AtomicLong;

public class MonteCarloPi {

@@ -25,31 +27,73 @@ public static void main(String[] args) throws InterruptedException, ExecutionExc
endTime = System.nanoTime();
System.out.println("Monte Carlo Pi Approximation (Multi-threaded): " + piWithThreads);
System.out.println("Time taken (Multi-threaded): " + (endTime - startTime) / 1_000_000 + " ms");

}

// Monte Carlo Pi Approximation without threads
public static double estimatePiWithoutThreads(long numPoints)
{
Random random = new Random();
long pointsInsideCircle = 0;

for (long i = 0; i < numPoints; i++) {
double x = random.nextDouble() * 2 - 1; // x in [-1, 1]
double y = random.nextDouble() * 2 - 1; // y in [-1, 1]
if (x * x + y * y <= 1) { // Inside circle
pointsInsideCircle++;
}
}

return 4.0 * pointsInsideCircle / numPoints;
}

// Monte Carlo Pi Approximation with threads
public static double estimatePiWithThreads(long numPoints, int numThreads) throws InterruptedException {
AtomicLong pointsInsideCircle = new AtomicLong(0);
long pointsPerThread = numPoints / numThreads;
long remainingPoints = numPoints % numThreads;

// Submit one task per worker; try-with-resources (Java 19+) closes the
// executor on exit, which shuts it down and waits for all tasks to finish
try (ExecutorService executor = Executors.newFixedThreadPool(numThreads)) {
for (int i = 0; i < numThreads; i++) {
long pointsForThisThread = pointsPerThread + (i == 0 ? remainingPoints : 0);
executor.execute(new PiTask(pointsForThisThread, pointsInsideCircle));
}
} // ExecutorService is fully terminated here

return 4.0 * pointsInsideCircle.get() / numPoints;
}

// Task for each thread
private static class PiTask implements Runnable {
private final long numPoints;
private final AtomicLong pointsInsideCircle;

public PiTask(long numPoints, AtomicLong pointsInsideCircle) {
this.numPoints = numPoints;
this.pointsInsideCircle = pointsInsideCircle;
}

@Override
public void run() {
Random random = new Random();
long localCount = 0;

for (long i = 0; i < numPoints; i++) {
double x = random.nextDouble() * 2 - 1;
double y = random.nextDouble() * 2 - 1;
if (x * x + y * y <= 1) {
localCount++;
}
}

pointsInsideCircle.addAndGet(localCount);
}
}
}