YashwanthNavari/smart-campus-network

Smart Campus Network with Automatic Access Control


👇 TEST THE LIVE PROJECT HERE 👇

Live Dashboard


Course: Computer Networks (24TU04MJM2) — Woxsen University
Version: 2.0 | Date: February 2026
Team/Author: Woxsen University CN PBL 2026



📖 Table of Contents

  1. What Is This Project?
  2. How It Works β€” The Big Picture
  3. System Architecture
  4. Pipeline β€” Step by Step
  5. The Trust Score Engine
  6. Machine Learning Models
  7. Network & Firewall Design
  8. Project Structure
  9. Quick Start
  10. The Dashboard
  11. Generated Visualisation Plots
  12. Results & Accuracy
  13. Access Policy Reference
  14. Technologies Used
  15. References

🧠 What Is This Project?

Imagine a university campus where hundreds of devices — laptops, phones, tablets — try to connect to the campus network every hour. The network administrator needs to answer three questions automatically:

  1. Should this device be allowed in? (Access Control)
  2. Is this device acting suspiciously? (Intrusion Detection)
  3. How trustworthy is this device, right now? (Trust Scoring)

This project builds a Smart Campus Network that answers all three questions in real time using a combination of:

  • Rule-Based Logic β€” simple, transparent access policies
  • Dynamic Trust Scoring β€” a live score (0–100) for every login attempt
  • Machine Learning β€” a Decision Tree (access control) and an SVM (intrusion detection)
  • A Live Streamlit Dashboard β€” to monitor all of this in real time

The whole system is simulation-based β€” we generate realistic authentication logs, process them through our rule engine and ML pipeline, and visualise everything in an interactive dashboard.


🌐 How It Works — The Big Picture

When a device connects to the campus network, it goes through the following process:

Device Tries to Connect
        │
        ▼
┌───────────────────────┐
│   Authentication Log  │  MAC address, OS, Role, Timestamp, Login Result
└───────────┬───────────┘
            │
            ▼
┌───────────────────────┐
│  Parser + Rule Engine │  Applies time/role-based access rules
│  Trust Score Engine   │  Calculates a score from 0 to 100
└───────────┬───────────┘
            │
            ▼
┌───────────────────────┐
│   ML Models           │  Decision Tree → Access Control
│                       │  SVM           → Intrusion Detection
│                       │  K-Means       → Device Clustering
└───────────┬───────────┘
            │
            ▼
┌───────────────────────┐
│  Streamlit Dashboard  │  Real-time feed, charts, AI prediction panel
└───────────────────────┘

Result: Every login attempt gets labelled as ALLOW, RESTRICT, or BLOCK — and any suspicious activity is flagged as an anomaly, instantly visible on the dashboard.


πŸ—οΈ System Architecture

Overall Data Flow

flowchart TD
    A[🖥️ Device Login Attempt] --> B{Authentication Gateway}
    B -->|Sends log entry| C[auth_logs.csv]
    C --> D[parser_and_rules.py]
    D --> E{Rule Engine}
    E -->|MAC registered?| F[Trust Score Engine]
    E -->|Unknown MAC| G[🚫 BLOCK]
    F --> H{Score ≥ 70?}
    H -->|Yes| I[✅ ALLOW]
    H -->|40–69| J[⚠️ RESTRICT]
    H -->|< 40| G
    D --> K[parsed_logs.csv]
    K --> L[ml_models.py]
    L --> M[🌲 Decision Tree]
    L --> N[🔵 SVM RBF Kernel]
    L --> O[📊 K-Means Clustering]
    M --> P[Access Decision Prediction]
    N --> Q[Anomaly / Intrusion Flag]
    O --> R[Device Fingerprint Cluster]
    K --> S[analysis_plots.py]
    S --> T[6 Visualisation Plots]
    K --> U[dashboard.py]
    P --> U
    Q --> U
    T --> U
    U --> V[🖥️ Live Streamlit Dashboard]

Network Topology

graph TD
    Internet["🌐 Internet"]
    FW["🔥 Firewall\nfirewall_rules.bat"]
    Router["🔀 Core Router"]

    VLAN10["📡 VLAN 10\n192.168.10.x\nFaculty"]
    VLAN20["📡 VLAN 20\n192.168.20.x\nStudents"]
    VLAN30["📡 VLAN 30\n192.168.30.x\nGuests"]
    VLAN40["📡 VLAN 40\n192.168.40.x\nServers / LMS"]

    Internet --> FW
    FW --> Router
    Router --> VLAN10
    Router --> VLAN20
    Router --> VLAN30
    Router --> VLAN40

    VLAN10 -->|"Full Access\n(Anytime)"| VLAN40
    VLAN20 -->|"LMS Access\n(07:00–22:00)"| VLAN40
    VLAN30 -->|"BLOCKED\nfrom Faculty & Servers"| VLAN40

🔄 Pipeline — Step by Step

The project runs in a 5-step pipeline. Each script feeds the next one.

sequenceDiagram
    participant You
    participant gen as generate_data.py
    participant par as parser_and_rules.py
    participant ml as ml_models.py
    participant plot as analysis_plots.py
    participant dash as dashboard.py

    You->>gen: python generate_data.py
    gen-->>You: data/auth_logs.csv (110 records)

    You->>par: python parser_and_rules.py
    par-->>You: data/parsed_logs.csv (+ trust scores + anomaly flags)

    You->>ml: python ml_models.py
    ml-->>You: models/*.pkl + models/metrics.json

    You->>plot: python analysis_plots.py
    plot-->>You: plots/*.png (6 charts)

    You->>dash: streamlit run dashboard.py
    dash-->>You: Live dashboard at localhost:8501

Step 1 — generate_data.py : Simulating the Campus Network

This script creates realistic synthetic authentication logs — the kind of data a RADIUS server on a university campus would produce.

It generates 110 records spread across 5 user groups:

| User Group | Count | Behaviour |
|---|---|---|
| 🧑‍🏫 Faculty | 25 | Daytime logins (08:00–20:00), registered MACs, 90% success rate |
| 🎒 Student | 35 | Day/evening logins (07:00–22:00), registered MACs, 70% success rate |
| 👤 Guest | 25 | Random hours, mix of unknown MACs, mostly failures |
| 🔴 Intruders | 15 | Off-hours (00:00–05:00), unknown MACs, Unknown OS, all failures |
| 🟡 Suspicious Students | 10 | Late-night (23:00), registered MACs but login failures |

Each record has these fields:

timestamp  | mac               | os      | role    | login_result
-----------+-------------------+---------+---------+-------------
08:30      | AA:BB:CC:DD:EE:01 | Windows | faculty | success
02:15      | FF:FF:FF:FF:FF:07 | Unknown | guest   | failure

MAC Addresses:

  • AA:BB:CC:DD:EE:01 to AA:BB:CC:DD:EE:50 β†’ Registered (known) campus devices
  • FF:FF:FF:FF:FF:01 to FF:FF:FF:FF:FF:20 β†’ Unknown/unregistered devices

Step 2 — parser_and_rules.py : Rules + Trust Scoring

This script reads auth_logs.csv and applies two engines to every record.

Engine A — Access Control Rules

Simple rule-based logic that mirrors a real network access policy:

flowchart TD
    start([Login Attempt]) --> mac{MAC in whitelist?}
    mac -- No --> BLOCK1[🚫 BLOCK]
    mac -- Yes --> os{OS = Unknown?}
    os -- Yes --> BLOCK2[🚫 BLOCK]
    os -- No --> role{User Role?}
    role -- Faculty --> ALLOW1[✅ ALLOW]
    role -- Student --> time{Hour 07–22?}
    time -- No --> BLOCK3[🚫 BLOCK]
    time -- Yes --> login{Login Success?}
    login -- No --> BLOCK4[🚫 BLOCK]
    login -- Yes --> ALLOW2[✅ ALLOW]
    role -- Guest --> BLOCK5[🚫 BLOCK]
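The flowchart above translates directly into a small rule function. This is a sketch with assumed argument names, not the actual parser_and_rules.py implementation:

```python
def access_decision(mac_registered, os_name, role, hour, login_ok):
    """Rule engine mirroring the flowchart: whitelist, OS, role, time, result."""
    if not mac_registered:
        return "BLOCK"          # MAC not in whitelist
    if os_name == "Unknown":
        return "BLOCK"          # unrecognised operating system
    if role == "faculty":
        return "ALLOW"          # faculty may log in at any hour
    if role == "student":
        if not (7 <= hour <= 22):
            return "BLOCK"      # outside the 07:00-22:00 window
        return "ALLOW" if login_ok else "BLOCK"
    return "BLOCK"              # guests and everything else
```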

Engine B — Dynamic Trust Score

Every login gets a score from 0 to 100 based on 5 factors:

Trust Score = Baseline (50)
            + Device Factor   (MAC registered: +20, Unknown MAC: -30)
            + OS Factor       (Known OS: +10, Unknown OS: -20)
            + Role Factor     (Faculty: +15, Student: +5, Guest: -10)
            + Time Factor     (Business hours 08–18: +10, Off-hours: -20)
            + Behavior Factor (Login success: +10, Failure: -15)

Score ≥ 70  →  ALLOW
Score 40–69 →  RESTRICT
Score < 40  →  BLOCK

Example Calculations:

| Scenario | Score | Decision |
|---|---|---|
| Faculty, registered MAC, Windows, 10:00, success | 50+20+10+15+10+10 = 115 → capped at 100 | ✅ ALLOW |
| Student, registered MAC, Windows, 14:00, success | 50+20+10+5+10+10 = 105 → capped at 100 | ✅ ALLOW |
| Student, registered MAC, Windows, 23:00, failure | 50+20+10+5-20-15 = 50 | ⚠️ RESTRICT |
| Intruder, unknown MAC, Unknown OS, 03:00, failure | 50-30-20-10-20-15 = -45 → floored at 0 | 🚫 BLOCK |
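The scoring table can be written out as a short function. This sketch assumes the two-valued time factor from the formula above (business hours vs everything else) and clamps the result to 0–100; the worked examples in the table reproduce exactly:

```python
def trust_score(mac_registered, os_known, role, hour, login_ok):
    """Compute the 0-100 trust score from the five factors plus the baseline."""
    score = 50                                              # baseline
    score += 20 if mac_registered else -30                  # device factor
    score += 10 if os_known else -20                        # OS factor
    score += {"faculty": 15, "student": 5}.get(role, -10)   # role factor (guest/other: -10)
    score += 10 if 8 <= hour <= 18 else -20                 # time factor
    score += 10 if login_ok else -15                        # behaviour factor
    return max(0, min(100, score))                          # clamp to 0-100

def decision(score):
    """Map a trust score onto the three access bands."""
    return "ALLOW" if score >= 70 else "RESTRICT" if score >= 40 else "BLOCK"
```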

Engine C — Anomaly Detection

A rule flags a record as an anomaly (1) if all of the following are true:

  • Login result = failure AND
  • At least one of: hour < 06:00 OR unknown MAC OR Unknown OS

Step 3 — ml_models.py : Training the ML Models

Reads parsed_logs.csv and trains two classifiers. All categorical columns (mac, os, role, access_decision) are label-encoded to numbers before training.

Features used:

mac_encoded | os_encoded | role_encoded | hour | trust_score

Train/Test Split: 75% training, 25% testing (stratified with random_state=42).
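A sketch of this feature-engineering step, using a tiny stand-in DataFrame in place of parsed_logs.csv (the values are illustrative; the column names follow the description above):

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

# Tiny stand-in for parsed_logs.csv
df = pd.DataFrame({
    "mac":  ["AA:01", "AA:02", "FF:01", "AA:01", "FF:02", "AA:03", "AA:02", "FF:01"],
    "os":   ["Windows", "macOS", "Unknown", "Windows", "Unknown", "Windows", "macOS", "Unknown"],
    "role": ["faculty", "student", "guest", "faculty", "guest", "student", "student", "guest"],
    "hour": [9, 14, 3, 11, 2, 20, 15, 4],
    "trust_score": [100, 95, 0, 100, 5, 80, 90, 10],
    "access_decision": ["ALLOW", "ALLOW", "BLOCK", "ALLOW", "BLOCK", "ALLOW", "ALLOW", "BLOCK"],
})

# Label-encode every categorical column, keeping the encoders for later decoding
encoders = {}
for col in ["mac", "os", "role", "access_decision"]:
    enc = LabelEncoder()
    df[col + "_encoded"] = enc.fit_transform(df[col])
    encoders[col] = enc

X = df[["mac_encoded", "os_encoded", "role_encoded", "hour", "trust_score"]]
y = df["access_decision_encoded"]

# 75/25 stratified split, as described above
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)
```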

Model 1 — Decision Tree (Access Control)

graph TD
    root["trust_score ≤ 65?"]
    root -- Yes --> n1["role = guest?"]
    root -- No --> leaf1["✅ ALLOW"]
    n1 -- Yes --> leaf2["🚫 BLOCK"]
    n1 -- No --> n2["mac unknown?"]
    n2 -- Yes --> leaf3["🚫 BLOCK"]
    n2 -- No --> leaf4["⚠️ ALLOW/RESTRICT"]

  • Task: Predict whether a login should be ALLOW or BLOCK
  • Algorithm: DecisionTreeClassifier(max_depth=5)
  • Why Decision Tree? Transparent, interpretable β€” you can see exactly why a decision was made
  • Typical Accuracy: ~90%+

Model 2 — SVM (Intrusion / Anomaly Detection)

  • Task: Predict whether a login is Normal (0) or an Anomaly (1) (potential intrusion)
  • Algorithm: SVC(kernel='rbf', C=1.0, gamma='scale', probability=True)
  • Why SVM with RBF kernel? SVMs are excellent for binary classification with a non-linear boundary, perfect for detecting the edge-case patterns of intrusion attempts
  • Typical Accuracy: ~95%+

Model 3 — K-Means Clustering (Device Fingerprinting)

  • Task: Group devices into 3 clusters based on OS type and user role, without any labels
  • Algorithm: KMeans(n_clusters=3)
  • Why K-Means? Unsupervised clustering can reveal natural groupings in device behaviour, useful for discovering unknown device profiles

All models and label encoders are saved to the models/ folder as .pkl files using joblib.
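A condensed sketch of the training-and-saving mechanics. The estimator settings match those listed above, but the feature matrix and labels are random stand-ins, so any accuracy it prints is meaningless; only the workflow is real:

```python
import joblib
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
# Stand-in feature matrix: [mac_encoded, os_encoded, role_encoded, hour, trust_score]
X = np.column_stack([
    rng.integers(0, 70, 110),    # mac_encoded
    rng.integers(0, 5, 110),     # os_encoded
    rng.integers(0, 3, 110),     # role_encoded
    rng.integers(0, 24, 110),    # hour
    rng.integers(0, 101, 110),   # trust_score
])
y_access  = (X[:, 4] >= 40).astype(int)  # toy label: enough trust -> ALLOW
y_anomaly = (X[:, 3] < 6).astype(int)    # toy label: early-hours login -> anomaly

dt  = DecisionTreeClassifier(max_depth=5, random_state=42).fit(X, y_access)
svm = SVC(kernel="rbf", C=1.0, gamma="scale", probability=True).fit(X, y_anomaly)
km  = KMeans(n_clusters=3, n_init=10, random_state=42).fit(X[:, [1, 2]])

print("DT accuracy:", accuracy_score(y_access, dt.predict(X)))
joblib.dump(dt, "dt_model.pkl")   # same .pkl pattern as the models/ folder
joblib.dump(svm, "svm_model.pkl")
```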


Step 4 — analysis_plots.py : Generating Visualisations

Generates 6 charts from parsed_logs.csv and saves them to plots/:

| Plot | File | What it Shows |
|---|---|---|
| Login Distribution | login_distribution.png | Bar chart of success vs failure counts |
| Hourly Activity | hourly_activity.png | Stacked bar chart of logins per hour of day |
| Anomaly Scatter | anomalies.png | Scatter plot highlighting intrusion attempts by hour & role |
| Trust Score Distribution | trust_scores.png | Scatter of all login trust scores with ALLOW/RESTRICT/BLOCK zones |
| Access by Role | access_by_role.png | Grouped bar chart of ALLOW vs BLOCK per user role |
| Device Clustering | device_clustering.png | Scatter plot of K-Means device fingerprint clusters |
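For example, the first chart could be produced roughly like this. The counts are stand-ins; the real script derives them from parsed_logs.csv:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: write the PNG without needing a display
import matplotlib.pyplot as plt
import pandas as pd

# Stand-in success/failure counts for the 110-record dataset
counts = pd.Series({"success": 62, "failure": 48})

ax = counts.plot(kind="bar", color=["green", "red"])
ax.set_title("Login Distribution")
ax.set_xlabel("login_result")
ax.set_ylabel("Number of attempts")
plt.tight_layout()
plt.savefig("login_distribution.png")
plt.close()
```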

Step 5 — dashboard.py : The Live Streamlit Dashboard

A real-time monitoring dashboard that auto-refreshes every few seconds, simulating live campus network activity.


πŸ” The Trust Score Engine

The Trust Score is the core innovation of this system. It is a Zero Trust-inspired continuous evaluation of every login attempt. Rather than just checking if a user has the right password, it asks: "How confident are we that this is a legitimate, safe login?"

graph LR
    MAC["🖧 Device Factor\nMAC in whitelist?\n+20 / -30"] --> SUM
    OS["💻 OS Factor\nKnown OS?\n+10 / -20"] --> SUM
    ROLE["👤 Role Factor\nFaculty / Student / Guest\n+15 / +5 / -10"] --> SUM
    TIME["🕐 Time Factor\nBusiness hours?\n+10 / 0 / -20"] --> SUM
    BEH["🔑 Behavior Factor\nLogin success?\n+10 / -15"] --> SUM
    SUM["∑ Score\nBaseline = 50\nClamped: 0–100"] --> DEC{Decision}
    DEC -->|"≥ 70"| ALLOW["✅ ALLOW"]
    DEC -->|"40–69"| RESTRICT["⚠️ RESTRICT"]
    DEC -->|"< 40"| BLOCK["🚫 BLOCK"]

This is inspired by the concept of Context-Based Access Control and Zero Trust Architecture, where no device is trusted by default — trust must be earned and continuously re-evaluated.


🤖 Machine Learning Models

Why Do We Need ML If We Already Have Rules?

The rule engine is fast and transparent but rigid — it can only catch patterns we explicitly coded. ML models can:

  • Learn subtle patterns from the data automatically
  • Generalise to new, unseen attack patterns
  • Provide a second layer of validation on top of the rules

Model Comparison

| Model | Type | Task | Accuracy |
|---|---|---|---|
| Decision Tree | Supervised | Access Control Classification | ~90%+ |
| SVM (RBF Kernel) | Supervised | Intrusion / Anomaly Detection | ~95%+ |
| K-Means (k=3) | Unsupervised | Device Fingerprint Clustering | N/A (unsupervised) |

Feature Engineering

Before training, categorical data is converted using LabelEncoder:

  • mac β†’ mac_encoded (integer)
  • os β†’ os_encoded (integer)
  • role β†’ role_encoded (integer)
  • access_decision β†’ access_encoded (0 = ALLOW, 1 = BLOCK)

Final feature matrix X:

[mac_encoded, os_encoded, role_encoded, hour, trust_score]

🌐 Network & Firewall Design

VLAN Segmentation

The campus network is segmented into 4 VLANs to isolate traffic by user type:

VLAN 10 → 192.168.10.0/24 → Faculty   (Full access, anytime)
VLAN 20 → 192.168.20.0/24 → Students  (LMS access, 07:00–22:00)
VLAN 30 → 192.168.30.0/24 → Guests    (Internet only, blocked from internal)
VLAN 40 → 192.168.40.0/24 → Servers   (LMS, file servers, faculty resources)
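Mapping an address to its VLAN is a simple subnet lookup, which Python's standard ipaddress module handles directly. The dictionary keys here are illustrative labels; the subnets come from the table above:

```python
import ipaddress

# Subnet-to-VLAN map from the segmentation plan above
VLANS = {
    "VLAN 10 (Faculty)":  ipaddress.ip_network("192.168.10.0/24"),
    "VLAN 20 (Students)": ipaddress.ip_network("192.168.20.0/24"),
    "VLAN 30 (Guests)":   ipaddress.ip_network("192.168.30.0/24"),
    "VLAN 40 (Servers)":  ipaddress.ip_network("192.168.40.0/24"),
}

def vlan_of(ip):
    """Return the VLAN label for an address, or None if it is outside campus ranges."""
    addr = ipaddress.ip_address(ip)
    for name, net in VLANS.items():
        if addr in net:
            return name
    return None
```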

Firewall Rules (firewall_rules.bat)

The batch script simulates Windows Firewall rules that enforce the network policy:

| Rule | Action | Description |
|---|---|---|
| Block Unknown OS | BLOCK inbound from VLAN 30 | Prevents unrecognised OS devices from accessing LAN |
| Block Guest → Faculty | BLOCK TCP from 192.168.30.x to 192.168.10.x | Isolates guest devices from faculty resources |
| Allow Faculty | ALLOW all from 192.168.10.x | Faculty get unrestricted access |
| Allow Student → LMS | ALLOW TCP from 192.168.20.x to 192.168.40.x | Students can reach learning servers only |
| MAC Port Security | Simulated note | Real implementation via Cisco switch CLI (switchport port-security maximum 2) |

Note: firewall_rules.bat is a simulation for demonstration. In a real deployment, these rules would be applied through a managed switch (e.g., Cisco CLI) and a RADIUS authentication server (e.g., FreeRADIUS).


📂 Project Structure

CN Live Code (1)/
│
├── README.md                   ← You are here
│
├── code/
│   ├── generate_data.py        ← STEP 1: Generates 110 synthetic auth log records
│   ├── parser_and_rules.py     ← STEP 2: Applies access rules + Trust Score Engine
│   ├── ml_models.py            ← STEP 3: Trains Decision Tree, SVM; saves .pkl files
│   ├── analysis_plots.py       ← STEP 4: Generates 6 visualisation charts
│   ├── dashboard.py            ← STEP 5: Streamlit real-time dashboard (675 lines)
│   └── firewall_rules.bat      ← Simulated Windows firewall rules (VLAN policies)
│
├── data/                       ← Auto-generated by running the scripts
│   ├── auth_logs.csv           ← Raw authentication logs (110 records)
│   └── parsed_logs.csv         ← Processed logs with trust scores + anomaly flags
│
├── models/                     ← Auto-generated by ml_models.py
│   ├── dt_model.pkl            ← Trained Decision Tree model
│   ├── svm_model.pkl           ← Trained SVM model
│   ├── le_mac.pkl              ← MAC address label encoder
│   ├── le_os.pkl               ← OS label encoder
│   ├── le_role.pkl             ← Role label encoder
│   ├── le_access.pkl           ← Access decision label encoder
│   └── metrics.json            ← Accuracy scores (loaded by dashboard)
│
└── plots/                      ← Auto-generated by analysis_plots.py
    ├── login_distribution.png  ← Success vs Failure bar chart
    ├── hourly_activity.png     ← Logins per hour of day
    ├── anomalies.png           ← Anomaly scatter plot (role vs hour)
    ├── trust_scores.png        ← Trust score scatter with ALLOW/RESTRICT/BLOCK zones
    ├── access_by_role.png      ← Access decisions grouped by user role
    └── device_clustering.png   ← K-Means device fingerprint clusters

⚑ Quick Start

Prerequisites

Make sure you have Python 3.8+ installed. Then install all dependencies:

pip install pandas matplotlib seaborn scikit-learn streamlit joblib

Run the Pipeline (in order)

Navigate into the code/ folder first:

cd code

Step 1 — Generate the Dataset

python generate_data.py

📤 Output: ../data/auth_logs.csv (110 records)

Step 2 — Parse Logs & Apply Rules

python parser_and_rules.py

📤 Output: ../data/parsed_logs.csv (adds trust_score, anomaly, access_decision columns)

Example terminal output:
--- Summary ---
Total Records   : 110
ALLOW decisions : 47
BLOCK decisions : 63
Anomalies Flagged: 22
Avg Trust Score : 54.3

Step 3 — Train the ML Models

python ml_models.py

📤 Output: ../models/dt_model.pkl, ../models/svm_model.pkl, ../models/metrics.json, and encoder .pkl files.

Example terminal output:
[1] Decision Tree — Access Control Classification
    Accuracy : 92.59%
[2] SVM — Intrusion / Anomaly Detection
    Accuracy : 96.30%

Step 4 — Generate Visualisation Plots

python analysis_plots.py

📤 Output: 6 chart images saved to ../plots/

Step 5 — Launch the Live Dashboard

streamlit run dashboard.py

🌐 Open your browser at: http://localhost:8501


🖥️ The Dashboard

The dashboard (dashboard.py) is a real-time network monitoring system built with Streamlit. It has 4 tabs:

Tab 1 — 📡 Live Feed

  • Latest 5 event ticker: Shows the most recent login attempts with colour-coded ALLOW / BLOCK / ANOMALY badges
  • Live Activity Timeline: Rolling 2-minute plot of trust scores (auto-updates every few seconds)
  • Full Authentication Log: Filterable table (by role, login result, anomaly status)

Tab 2 — 📈 Analytics

  • Live-updating versions of all 6 analysis charts, reflecting the growing dataset as new events arrive

Tab 3 — 🤖 AI Security Engine

  • Interactive prediction panel: Enter a device's role, OS, hour, MAC type, and login result
  • Click Run Security Check to get:
    • A Trust Score (0–100)
    • An access decision (ALLOW / RESTRICT / BLOCK)
    • Decision Tree classification
    • SVM threat-level prediction
    • Full score breakdown table showing each factor's contribution

Tab 4 — ℹ️ System Info

  • Pipeline architecture overview
  • Dataset summary table per role
  • Model accuracy results
  • Academic references

Sidebar Controls

| Control | Description |
|---|---|
| ⚡ Real-Time Monitoring toggle | Start / pause live event generation |
| Refresh Interval slider | Choose 2s / 3s / 5s / 8s auto-refresh |
| Session Stats | Uptime, live events generated, total records |
| Model Performance | Decision Tree & SVM accuracy |
| Access Policy table | Quick reference for the firewall rules |

graph LR
    Sidebar["⚙️ Sidebar\nControls + Stats"] --> Dashboard
    Dashboard["🖥️ Dashboard"] --> T1["📡 Tab 1\nLive Feed"]
    Dashboard --> T2["📈 Tab 2\nAnalytics"]
    Dashboard --> T3["🤖 Tab 3\nAI Engine"]
    Dashboard --> T4["ℹ️ Tab 4\nSystem Info"]
    T1 --> Ticker["🔔 Event Ticker"]
    T1 --> Timeline["📈 2-min Timeline"]
    T1 --> LogTable["📋 Filterable Log"]
    T3 --> Predict["▶ Run Security Check"]
    Predict --> TrustScore["📊 Trust Score"]
    Predict --> DTpred["🌲 DT Prediction"]
    Predict --> SVMpred["🔵 SVM Threat Level"]

📊 Generated Visualisation Plots

| # | Chart | Description |
|---|---|---|
| 1 | Login Distribution | Simple bar chart — how many logins succeeded vs failed overall |
| 2 | Hourly Activity | Stacked bar chart showing activity patterns across all 24 hours |
| 3 | Anomaly Scatter | Shows suspicious login attempts (red ✕) by role and hour; highlights the 00:00–06:00 high-risk zone |
| 4 | Trust Score Distribution | Each dot = one login. Three colour zones: green (ALLOW ≥ 70), orange (RESTRICT 40–69), red (BLOCK < 40) |
| 5 | Access by Role | Side-by-side bars showing how many ALLOW vs BLOCK decisions each role received |
| 6 | Device Clustering | K-Means clusters plotted by encoded OS vs encoded Role — reveals natural device groupings |

πŸ† Results & Accuracy

| Model | Task | Accuracy |
|---|---|---|
| 🌲 Decision Tree | Access Control Classification | ~90%+ |
| 🔵 SVM (RBF Kernel) | Intrusion / Anomaly Detection | ~95%+ |
| 📊 K-Means (k=3) | Device Fingerprint Clustering | Unsupervised (no accuracy metric) |

Exact numbers vary slightly between runs because the synthetic data generator is randomised; the ML train/test split itself uses random_state=42 for reproducibility.


📋 Access Policy Reference

| Role | Hours Allowed | MAC Required | OS Required | Decision |
|---|---|---|---|---|
| 🧑‍🏫 Faculty | Anytime (00:00–23:59) | Registered | Known | ✅ ALLOW |
| 🎒 Student | 07:00–22:00 (login must succeed) | Registered | Known | ✅ ALLOW |
| 👤 Guest | — | Registered | — | 🚫 BLOCK |
| ❓ Unknown MAC | — | — | — | 🚫 BLOCK |
| ❓ Unknown OS | — | — | — | 🚫 BLOCK |
| 🔴 Off-hours + failure | Before 06:00 or after 22:00 | — | — | 🚨 ANOMALY |

πŸ› οΈ Technologies Used

| Technology | Version | Purpose |
|---|---|---|
| Python | 3.8+ | Core language |
| pandas | latest | Data manipulation |
| scikit-learn | latest | Decision Tree, SVM, K-Means, LabelEncoder |
| matplotlib | latest | Static chart generation |
| seaborn | latest | Chart styling |
| Streamlit | latest | Real-time web dashboard |
| joblib | latest | Saving / loading ML models |
| Windows Firewall (netsh) | — | Simulated network access rules |

📚 References

  1. DEBAC: Dynamic Explainable Behavior-Based Access Control — IEEE, 2023
  2. Zero Trust Architecture SLR — TechRxiv, 2025
  3. Context-Based Access Control Using Dynamic Trust Scores — 2020
  4. Kindervag, J. — Zero Trust Network Architecture — Forrester Research, 2010
  5. scikit-learn documentation: https://scikit-learn.org
  6. Streamlit documentation: https://docs.streamlit.io

Developed for Computer Networks PBL — Woxsen University, 2026
Course: 24TU04MJM2 | Smart Campus Network with Automatic Access Control
