diff --git a/README.md b/README.md index bdb87090be..68984bc014 100644 --- a/README.md +++ b/README.md @@ -9,3 +9,38 @@ - You can choose to develop individual microservices within separate folders within this repository **OR** use individual repositories (all public) for each microservice. - In the latter scenario, you should enable sub-modules on this GitHub classroom repository to manage the development/deployment **AND** add your mentor to the individual repositories as a collaborator. - The teaching team should be given access to the repositories as we may require viewing the history of the repository in case of any disputes or disagreements. + +--- + +## Architecture Diagram + +![Overall Architecture Diagram](./docs/architecture_diagram.png) + +The overall architecture of PeerPrep follows a microservices architecture. The client acts as an orchestrator for the interaction between the different services. + +## Screenshots + +![Home Page](./docs/home_page.png) + +![Collaboration Page](./docs/collab_page_1.png) + +![Collaboration Page](./docs/collab_page_2.png) + +![Question Page](./docs/question_page.png) + +![Question Page](./docs/indiv_question_page.png) + +![History Page](./docs/submission_history_page.png) + +## More details + +- [Frontend](./apps/frontend/README.md) +- [User Service](./apps/user-service/README.md) +- [Question Service](./apps/question-service/README.md) +- [Matching Service](./apps/matching-service/README.md) +- [Signalling Service](./apps/signalling-service/README.md) +- [History Service](./apps/history-service/README.md) +- [Execution Service](./apps/execution-service/README.md) +- [CI/CD Guide](./docs/cicid.md) +- [Docker Compose Guide](./apps/README.md) +- [Set Up Guide](./docs/setup.md) diff --git a/apps/README.md b/apps/README.md index ae12a36fc1..c54d4922f9 100644 --- a/apps/README.md +++ b/apps/README.md @@ -2,6 +2,8 @@ This project uses Docker Compose to manage multiple services such as a frontend, backend, and a database. The configuration is defined in the `docker-compose.yml` file, and environment variables can be stored in environment files for different environments (e.g., development, production). +More details on how to set up Docker Compose can be found [here](../docs/setup.md) + ## Prerequisites Before you begin, ensure you have the following installed on your machine: @@ -30,7 +32,15 @@ In the `./apps` directory: ├── user-service │ ├── Dockerfile # Dockerfile for user-service │ └── ... (other user-service files) - +├── execution-service +│ ├── Dockerfile # Dockerfile for execution-service +│ └── ... (other execution-service files) +├── signalling-service +│ ├── Dockerfile # Dockerfile for signalling-service +│ └── ... (other signalling-service files) +├── history-service +│ ├── Dockerfile # Dockerfile for history-service +│ └── ... 
(other history-service files) ``` ## Docker Compose Setup @@ -54,11 +64,15 @@ This will: Once running, you can access: -- The **frontend** at http://localhost:3000 -- The **user service** at http://localhost:3001 -- The **question service** at http://localhost:8080 (REST) and http://localhost:50051 (gRPC) -- The **matching service** at http://localhost:8081 -- The **redis service** at http://localhost:6379 +- The [**frontend**](./frontend/README.md) at http://localhost:3000 +- The [**user-service**](./user-service/README.md) at http://localhost:3001 +- The [**question-service**](./question-service/README.md) at http://localhost:8080 (REST) and http://localhost:50051 (gRPC) +- The [**matching-service**](./matching-service/README.md) at http://localhost:8081 +- The [**history-service**](./history-service/README.md) at http://localhost:8082 +- The [**execution-service**](./execution-service/README.md) at http://localhost:8083 +- The [**signalling-service**](./signalling-service/README.md) at http://localhost:4444 +- The **redis** at http://localhost:6379 +- The **rabbitmq** at http://localhost:5672 3. Stopping Services diff --git a/apps/execution-service/README.md b/apps/execution-service/README.md index c53dc3ab9f..bce92e1b6f 100644 --- a/apps/execution-service/README.md +++ b/apps/execution-service/README.md @@ -1,5 +1,59 @@ # Execution Service +The Execution Service provides backend functionality for running and validating code executions or submissions within a coding platform. It enables users to execute code against test cases and receive feedback on the correctness of their solutions. + +The Execution Service incorporates a code execution mechanism designed to run user-submitted solutions within an isolated, sandboxed environment. This approach enhances security by preventing arbitrary code from interacting with the host system directly and allows for performance monitoring + +### Technology Stack + +- Golang (Go): Statically typed, compiled language with low latency. Fast and efficient processing is ideal for high-read, high-write environments like in Execution Service, when many users run tests or submit tests. +- Rest Server: chi router was utilized which supports CORS, logging and timeout via middlewares. It is stateless, which reduces coupling and enhances scalability and reliability, simplicity and flexibility. For example, clients may make requests to different server instances when scaled. +- Firebase Firestore: NoSQL Document database that is designed for automatic horizontal scaling and schema-less design that allows for flexibility as number of tests increases or more users run tests. +- Docker: used to containerize the Execution Service to simplify deployment. Additionally used to provide a sandboxed execution environment for user-submitted code, ensuring security by limiting code access to the host system and managing dependencies independently. + +### Execution Process + +For execution of user code (running of test cases without submission), only visible (public) and custom test cases are executed. + +![Diagram of code execution process](../../docs/exeuction_process.png) + +### Submission Process + +For submission of user code, both visible (public) and hidden testcases are executed, before calling the history-service API to submit the submission data, code and test results. + +![Diagram of code submission process](../../docs/submission_process.png) + +### Design Decisions + +1. **Docker Containerisation** + a. 
Upon receiving a code execution request, the service dynamically creates a Docker container with a controlled environment tailored to Python + b. The Docker container is set up with only the minimal permissions and resources needed to execute the code, restricting the execution environment to reduce risk + c. This containerized environment is automatically destroyed after execution, ensuring no residual data or state remains between executions + +2. **Security and Isolation** + a. Containers provide isolation from the host system, limiting any interaction between user code and the underlying infrastructure + b. Only essential files and libraries required for code execution are included, reducing the potential attack surface within each container + +The sandboxed, container-based execution system provides a secure and efficient way to run user code submissions. + +### Communication between Execution and History Service + +The communication between the Execution Service and the History Service is implemented through a RabbitMQ message queue. RabbitMQ is ideal for message queues in microservices due to its reliability, flexible routing, and scalability. It ensures messages aren’t lost through durable queues and supports complex routing to handle diverse messaging needs. + +Asynchronous communication was chosen as a user’s submission history does not need to be updated immediately. Instead of waiting for a response, the Execution Service can put the message in a queue and continue processing other requests. + +![RabbitMQ Message Queue](./../../docs/rabbit_mq_queue.png) + +A message queue allows services to communicate without depending on each other's availability. The Execution Service can send a message to the queue, and the History Service can process it when it’s ready. This decoupling reduces dependencies between services, which helps maintain a robust and adaptable system. + +--- + +## Setup + +### Prerequisites + +Ensure you have Go installed on your machine. + ### Installation 1. Install dependencies: @@ -61,10 +115,10 @@ The server will be available at http://localhost:8083. ## API Endpoints -- `POST /tests/populate` -- `GET /tests/{questionDocRefId}/` -- `POST /tests/{questionDocRefId}/execute` -- `POST /tests/{questionDocRefId}/submit` +- `POST: /tests/populate`: Deletes and repopulates all tests in Firebase +- `GET: /{questionDocRefId}`: Reads the public testcases for the question, identified by the question reference ID +- `POST: /{questionDocRefId}/execute`: Executes the public testcases for the question, identified by the question reference ID +- `POST: /{questionDocRefId}/submit`: Executes the public and hidden testcases for the question, identified by the question reference ID, and submits the code submission to the History Service ## Managing Firebase diff --git a/apps/frontend/README.md b/apps/frontend/README.md index 2572193fb1..a763465b0d 100644 --- a/apps/frontend/README.md +++ b/apps/frontend/README.md @@ -1,11 +1,27 @@ -This is the frontend for the question service. +# Frontend -## Tech Stack +![Home page](../../docs/home_page.png) -- Next.js -- TypeScript -- Ant Design -- SCSS +### Tech Stack + +- React: React is one of the most popular UI libraries that allows the creation of reusable UI functional components.
Its community ecosystem also offers React hooks that simplify the implementation of some of our frontend components, such as websockets. +- Next.js: A React framework for building single-page applications. It comes with several useful features such as automatic page routing based on filesystem. +- Ant Design: An enterprise-level design system that comes with several extensible UI components and solutions out-of-the-box, which allows us to quickly create nice-looking components that can be adjusted according to our requirements. +- Typescript: A language extension of Javascript that allows us to perform static type-checking, to ensure that issues with incorrectly used types are caught and resolved as early as possible, improving code maintainability. + +### Authorization-based Route Protection with Next.js Middleware + +Middleware is a Next.js feature that allows the webpage server to intercept page requests and perform checks before serving the webpage. We used this feature to protect page access from unauthenticated users. This was done by checking the request’s JWT token (passed as a cookie) against the user service and redirecting users without authorized access to a public route (namely, the login page). + +### User Flow and Communication between Microservices + +Clients interact with the microservices through dedicated endpoints, with each microservice managing its own database for independent reading and writing. + +Having individual databases per microservice improves data security, scalability, fault isolation, flexibility in database choice, and development efficiency. This approach allows each microservice to operate independently, optimizing stability, performance, and adaptability in the system. + +![Diagram for user flow and communication between microservices](../../docs/userflow.png) + +--- ## Getting Started diff --git a/apps/history-service/README.md b/apps/history-service/README.md index 6fec9e5bbd..1d02df4807 100644 --- a/apps/history-service/README.md +++ b/apps/history-service/README.md @@ -1,4 +1,27 @@ -# Question Service +# History Service + +The History Service is designed to store and retrieve a user’s code submission history. Users can view their past submission records of a collaboration session, with details such as the submitted date, the question attempted on, and the matched username. The information on the submission is stored within the history service’s database, and the data is accessed through querying the history-service from the frontend. It uses Google Firestore as a cloud-based NoSQL database for efficient and scalable data storage and retrieval. It is developed with a RESTful API structure, allowing flexibility for client applications to interact with the service. + +### Technology Stack + +- Golang (Go): Statically typed, compiled language with low latency. Fast and efficient processing is ideal for high-read, high-write environments like in history service. +- Firebase Firestore: NoSQL Document database that is designed for automatic horizontal scaling and schema-less design that allows for flexibility as application grows and new features are added. +- REST Server: chi router was utilized which supports CORS, logging and timeout via middlewares. It is stateless, which reduces coupling and enhances scalability and reliability, simplicity and flexibility. For example, clients may make requests to different server instances when scaled. +- Docker: used to containerize the History Service to simplify deployment. 
+ +### Design Decisions + +The submission history is organized with the most recent submission displayed first, making it easy for users to review their past submissions. Pagination is implemented to help users find specific records efficiently and reduce the amount of data transferred when loading the page. + +![Screenshot of the submission history page with pagination](../../docs/submission_history_page.png) + +On the question page, users can view their past submissions for that question, allowing them to see their submitted code alongside the question details for better context. + +![Screenshot of a question’s submission history](../../docs/indiv_question_page.png) + +Each submission record is created through the execution service via an asynchronous call, ensuring smooth and efficient processing (more details provided in the next section). + +--- ## Overview @@ -90,10 +113,10 @@ The server will be available at http://localhost:8082. ## API Endpoints -- `POST /histories` -- `GET /histories/{docRefId}` -- `PUT /histories/{docRefId}` -- `DELETE /histories/{docRefId}` +- `POST: /histories`: Create a history record of the code submission +- `GET: /histories/{historyDocRefId}`: Reads the history record of the code submission, identified by the history reference ID +- `GET: /histories/user/{username}`: Returns a paginated list of history records by a user, identified by the username +- `GET: /histories/user/{username}/question/{questionDocRefId}`: Returns a paginated list of history records by a user, for a question, identified by the username and question reference ID ```bash go run main.go diff --git a/apps/matching-service/README.md b/apps/matching-service/README.md index 92b957c7b0..116a821356 100644 --- a/apps/matching-service/README.md +++ b/apps/matching-service/README.md @@ -1,7 +1,90 @@ # Matching Service +The Matching Service is responsible for processing client match requests, efficiently pairing users based on specific criteria. + The Matching Service provides a WebSocket server to manage real-time matching requests through WebSocket connections. It is implemented in Go and uses the Gorilla WebSocket package to facilitate communication. +## Technology Stack + +- Golang (Go): A high-performance, statically typed language, ideal for low-latency and fast data processing, ensuring quick response times in high-traffic matching scenarios. +- Redis: An in-memory data store used for fast data retrieval and management of match parameters. Redis supports key-value pairs and complex structures like lists and sets, providing rapid access for real-time matching. +- Websocket Server: Enables persistent, bidirectional communication between clients and the Matching Service, ensuring real-time updates and continuous connection without the need for frequent polling. +- gRPC Server: Chosen for its high-performance, efficient communication, and real-time capabilities, gRPC facilitates seamless communication within the system with the Question Service. +- Docker: Used to containerize the Matching Service, ensuring consistent deployment and simplified scaling across various environments. + +## Design Decisions + +### Websocket communication between clients and matching service + +The Matching Service uses WebSockets for a persistent, real-time communication between the multiple clients and the backend, ensuring users remain connected during the matching process. Each client is assigned a unique connection that allows for real-time, two-way communication, eliminating the need for constant polling. 
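+
+The snippet below is a minimal, illustrative sketch of the client side of this interaction; the message fields and the result format are assumptions for illustration only, not the service’s actual schema.
+
+```typescript
+// Open a persistent WebSocket connection to the matching service and send the
+// user's matching criteria. The payload fields below are hypothetical.
+const socket = new WebSocket("ws://localhost:8081/match");
+
+socket.onopen = () => {
+  socket.send(
+    JSON.stringify({
+      username: "AdminUser",
+      topics: ["Arrays", "Strings"],
+      difficulties: ["Easy", "Medium"],
+    })
+  );
+};
+
+// The server pushes the final match result to the client over the same connection.
+socket.onmessage = (event: MessageEvent) => {
+  const result = JSON.parse(event.data);
+  console.log("Match result:", result);
+};
+
+// A closed socket lets either side detect a disconnection immediately.
+socket.onclose = () => console.log("Matching connection closed");
+```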
+ +![Websocket connection](../../docs/websocket.png) + +When a client sends a match request, the Websocket server receives the data. Once a match is found, the server pushes the final result directly to the client, ensuring that the user receives real-time feedback without delay, optimizing the user experience. + +Websocket’s persistent connection model helps to detect connection issues immediately. If a client disconnects, the server can identify the loss of connection through the closed WebSocket, triggering appropriate error handling. + +### Frontend Matchmaking Interface + +Several considerations were made when designing the component that receives the websocket connection and displays it in the UI. + +- The Matchmaking API should be agnostic to the underlying websocket implementation. +- Each layer keeps its own intermediate state independently (e.g. the matchmaking API stores whether it is searching for a match, but not how long the matchmaking has taken). +- The asynchronous nature of the websocket connection meant that the component must be designed to handle push events from the server. + +A 3-layer architecture was implemented to meet these requirements. + +1. A React Hook (an external library) is used to establish the websocket connection to the matching service. +2. A custom component (useMatchMaking) processes the raw messages and events sent by the websocket component and interprets them in the context of the matchmaking state. It then exposes this state to the user of the component. +3. The React UI component responsible for displaying the match state receives the state and renders it to the user. + +![Matchmaking Interface](../../docs/frontend_matchmaking_architecture.png) + +When the user sends commands to the UI Component, such as to start a match or cancel matching, the Component determines whether to propagate the action to the next layer or not. For example, when the user closes the modal, this change does not propagate to the matchmaking component. However, if the user chooses to retry matchmaking, the UI Component must inform the matchmaking component to start another matchmaking session, which then informs the websocket component to open a connection to the matchmaking server. + +### Matching Algorithm + +The matching algorithm consists of two stages. The first stage finds two matching users. The second stage finds a matching question fitting the matched criteria. The first stage is implemented in the Matching service and the second stage is implemented in the Question service, in order to have separation of concerns of the different responsibilities. + +The implication of having two stages is that it is possible for users to be matched on a question with a different topic or difficulty from what was specified by the user, if such a question does not exist in the database. + +**Stage 1: Finding two matching users (in Matching Service)** + +The matching algorithm prioritizes matching users that have been in the Redis queue the longest first, followed by users that have common topics/difficulties. If a user does not select any topics/difficulties, it is treated as though the user does not need to match based on that topic/difficulty. + +1. A new user joins the queue, starting the process of comparing the new user’s topics and difficulties against existing users in the queue. If the user is already in the queue, an error is returned to the user. +2.
If two different users have common topics or difficulties, the users are matched together, and a random match ID is generated. +3. The matched topics and difficulties are then used to query the question service to find a matching question in stage 2. + +**Stage 2: Finding a matching question (in Question Service)** + +When a matching question is found, the result is returned to the matching service to complete the matching process for the matched users. + +1. If there are questions with any of the specified topics and any of the specified difficulties, return one of those questions at random. +2. If no questions with any of the specified topics and difficulties are found, find a question with any of the specified topics. If found, return one at random. +3. If no questions with any of the specified topics are found, find a question with any of the specified difficulties. If found, return one at random. +4. If still no question is found, return a random question. + +### Communication between Matching service and Question service + +The communication between the Matching Service and the Question Service is implemented through gRPC API calls. Synchronous communication was chosen as the match result should be returned as soon as possible. gRPC was chosen as it has better performance compared to REST. + +![Diagram of gRPC Communication](../../docs/grpc.png) + +Each matching process in the Matching Service communicates via gRPC with the gRPC server within the Question Service when a match between two users is found (end of stage 1 in the matching algorithm). The information sent to the Question Service is used during stage 2 of the matching algorithm, and the result of stage 2 is sent as a response back to the matching process. + +### Communication within Matching service + +The Matching Service handles internal communication after the matching question is returned (end of stage 2 of the matching algorithm), utilizing Redis Pub/Sub messaging for inter-process communication. + +![Diagram of Redis Pub/Sub Communication within the Matching Service](../../docs/redis_pubsub.png) + +The matching process that holds the match result only needs to know the topics (usernames) to publish to, and the matching process for each username can subscribe to only its own topic and retrieve the matching result from it (topics are based on usernames). + +This design decouples the process of determining the match and retrieving the question from the process of sending the match results back to the user, ensuring efficient and asynchronous communication between components. + +--- + +## Setup + ### Prerequisites diff --git a/apps/question-service/README.md b/apps/question-service/README.md index 3045231030..8c06dd4821 100644 --- a/apps/question-service/README.md +++ b/apps/question-service/README.md @@ -1,5 +1,37 @@ # Question Service +The Question Service is a core component of the system, dedicated to managing all interactions with question-related data. It handles the creation, retrieval, updating, and deletion of questions in the database, ensuring that users can efficiently access and manage question content for the application. + +### Technology Stack + +- Golang (Go): A fast, compiled language ideal for handling high-throughput, low-latency operations, making it perfect for rapid question data processing. +- Firebase Firestore: A NoSQL, schema-less database that scales horizontally and efficiently handles evolving data structures and growing amounts of question data.
+- REST Server (chi router): A lightweight, stateless router for handling HTTP requests, offering flexibility, scalability, and built-in middleware support for CORS, logging, and timeouts. +- gRPC Server: Enables high-performance, real-time communication between microservices for efficient, reliable data exchange in the system with the Matching Service. +- Docker: Containerised the Question Service for consistent deployment across environments, simplifying testing, scaling, and management. + +### Detailed Design and Implementation + +The Question Service exposes several RESTful endpoints that allow clients to interact with question data. Each endpoint is designed to support the specific functionality needed for managing question information. + +**Listing of Questions** + +The `GET /questions` endpoint supports advanced pagination, search, filtering, and sorting features to enhance the user experience when querying for questions. By supporting multiple filtering and sorting criteria, the API adapts to a variety of user needs and use cases. + +The endpoint supports efficient pagination, allowing clients to retrieve questions in smaller, manageable chunks. This prevents performance bottlenecks when dealing with large datasets. Clients can specify the limit (number of questions per page) to navigate through the data easily. + +Users can search for questions based on specific keywords in the question name, providing relevant results quickly. + +The endpoint supports robust filtering options that allow users to narrow down results based on several attributes like category and difficulty. + +The endpoint also supports sorting of various fields like difficulty or date created. + +These features combine to create a more dynamic, user-friendly interface for discovering questions, especially in large datasets, improving search efficiency and allowing users to quickly find the questions that match their needs. As the database grows, these features ensure that performance remains consistent, even with large volumes of data. + +![Screenshot of the question page with the pagination, filtering, sorting and search features](../../docs/question_page.png) + +--- + ## Overview The Question Service is built with Go, utilizing Firestore as the database and Chi as the HTTP router. It allows for basic operations such as creating, reading, updating, and deleting question records. @@ -65,11 +97,13 @@ The server will be available at http://localhost:8080. ## API Endpoints -- `POST /questions` -- `GET /questions/{id}` -- `GET /questions` -- `PUT /questions/{id}` -- `DELETE /questions/{id}` +REST Service Endpoints + +- `POST /questions`: Creates a new question in the database. This endpoint accepts the necessary data to define a new question, including the title, difficulty, topic, and any related metadata. +- `GET /questions/{id}`: Retrieves the data for a specific question using its unique ID. This endpoint returns all relevant details, including the question text, answers, and other metadata associated with the question. +- `GET /questions`: Fetches a list of questions. This can be customized with query parameters such as difficulty, topic, or pagination, allowing clients to retrieve a set of questions based on specific criteria. +- `PUT /questions/{id}`: Updates the data of an existing question. This allows partial updates, meaning clients can modify specific attributes (like difficulty or topic) without affecting other parts of the question data. 
+- `DELETE /questions/{id}`: Deletes a question from the database based on its ID. This removes the question and all its related data from the system. ## Managing Firebase diff --git a/apps/signalling-service/README.md b/apps/signalling-service/README.md index c49ca99356..a34619c0ce 100644 --- a/apps/signalling-service/README.md +++ b/apps/signalling-service/README.md @@ -1,5 +1,85 @@ +# Signalling Service + This is a signalling server that is used to establish WebRTC connections between users. It is built using Node.js and Socket.IO. +## Collaboration Service + +The Collaboration Service is a microservice that is responsible for enabling real-time collaboration (e.g., code editing) between two matched users and providing them with a live collaboration space (e.g., a code editor). + +### Technology Stack + +- Yjs: Yjs is a high-performance conflict-free replicated data type (CRDT) for building collaborative applications that sync automatically. It supports multiple editors, including CodeMirror and Monaco, which suits our use case of connecting two peers to a collaboration space. +- WebRTC: WebRTC is a real-time communication protocol that enables direct peer-to-peer connections. It allows us to connect the matched peers via a signaling server\* to establish the connection. Using y-webrtc, a provider for Yjs that leverages the WebRTC protocol, real-time updates are enabled by allowing updates to a Yjs document that is shared between the users. (y-webrtc) +- CodeMirror: CodeMirror is an open source project that provides a versatile text editor implemented in JavaScript for the browser. It is well suited for code editing and comes with built-in multi-language support, with add-ons for syntax highlighting, CSS theming and a rich programming API. This allows us to customize CodeMirror to the needs of PeerPrep, e.g., auto-completion, syntax highlighting and cursor hover for collaborating users. (y-codemirror) +- Signaling Server\*: Rather than using a public WebRTC signaling server, we decided to host our own signaling server using the sample code provided by Yjs. This gives us better control over scalability and less reliance on a third-party service. It also helps to ensure data privacy and lets us customize the signaling server for events such as inactivity timeouts. + +### Communication between two users in a collaboration session + +Each user will first initialize a websocket connection to the signaling server where the exchange of necessary information is done. This includes the exchange of Interactive Connectivity Establishment (ICE) candidates, which in turn uses a STUN/TURN server to allow peers to find a path for direct communication. The role of the signaling server is to act as the middleman that handles the initial connection and establishment of the WebRTC connection between peers. After the connection is established, the signaling server is no longer involved in the communication between peers, and the peers can communicate directly with one another via the data channel. + +Utilizing Yjs’s CRDT structure and CodeMirror, peers can synchronize updates to the coding space via the Yjs Document that is shared among them, which enables them to perform live collaboration. The Yjs Document is uniquely identified through the `matchId` returned by the matching algorithm for both peers. The diagram below illustrates the establishment of the WebRTC data channel through the signaling server.
+ +![Collaboration Service Data Flow Diagram](../../docs/collaboration_service.png) + +The key design decision to use a peer-to-peer (P2P) WebRTC connection rather than hosting a collaboration server is that each matched session only involves two users rather than a group of users. By creating a direct data channel between the matched users, we reduce latency compared to relaying through a collaboration server. Utilizing a WebRTC connection also abstracts away the need to host and manage our own collaboration server, allowing the application to scale without the collaboration server becoming a bottleneck as the number of users increases. The decision also helps to reduce the overall complexity of the system. + +### Communication between Matching Service and Collaboration Service + +When two users are matched, a unique `matchId` will be generated within the Matching Microservice and returned to the Frontend Client. The Frontend Client will subsequently pass the `matchId` over to the collaboration service, where it is used as the identifier for the room name in Yjs’s WebRTC provider. This enables us to create a collaborative environment for the two matched users via the unique `matchId`. The matching service and the collaboration service communicate as shown in the diagram below. + +![Diagram of Communication with Collaboration and Matching Service](../../docs/communication_between_collab_and_matching.png) + +![Screenshot of Collaboration Page](../../docs/collab_page_1.png) + +## Communication + +To enhance collaboration between users in the collaboration space, we implemented video and audio functionality which allows users to communicate visually and verbally. + +### Technology Stack + +- PeerJS: PeerJS is a wrapper around the browser’s WebRTC implementation that provides an easy P2P connection API, allowing transmission of data and/or media streams to remote peers. + +### Design Decisions + +We chose to implement video and audio functionality over a chat functionality as it allows for immediate feedback while collaborating, without the need to swap between the code environment and a chat window, which can be a hassle. Furthermore, since the collaborative code editor can technically serve as a chat function, this offered more reason to opt for the video and audio functionality. + +We chose PeerJS as it abstracts away the complexity of handling the exchange of ICE candidates, network traversal and Network Address Translation (NAT) problems. It provides us with a simple wrapper and interface to work with, so that we can focus on enhancing the UI components for the video display on the frontend with less worry about implementing the protocols on the backend. + +The reason behind using the provided public PeerJS server rather than hosting our own is our small user base and the fact that we do not require custom server configurations. Without having to worry about maintaining the server’s infrastructure and the overhead involved, we can focus on the application logic. This simplifies the development process, enabling us to focus our efforts on delivering the primary functionality. Since the existing public server is capable of handling low- to medium-load applications, it can handle the existing needs of our application. However, if we foresee the load increasing, we will swap to hosting our own server to cater to the needs of PeerPrep.
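+
+As a rough illustration of the PeerJS wrapper described above, the sketch below shows a caller dialling a matched peer and a callee answering with its own media stream. The peer ID follows the username-matchId pattern described in the section below, but the wiring here is an illustrative assumption, not the exact frontend implementation.
+
+```typescript
+import Peer, { MediaConnection } from "peerjs";
+
+// With no server options, the peer registers against the public PeerJS server.
+const peer = new Peer("AdminUser-af6debdaae1daec04c9f33aa1eee0a6c");
+
+async function getLocalStream(): Promise<MediaStream> {
+  return navigator.mediaDevices.getUserMedia({ video: true, audio: true });
+}
+
+// Caller side: dial the matched peer and render their stream when it arrives.
+async function callPeer(remotePeerId: string): Promise<void> {
+  const call: MediaConnection = peer.call(remotePeerId, await getLocalStream());
+  call.on("stream", (remoteStream: MediaStream) => {
+    // Attach remoteStream to a <video> element in the UI.
+  });
+}
+
+// Callee side: answer incoming calls with our own media stream.
+peer.on("call", async (incoming: MediaConnection) => {
+  incoming.answer(await getLocalStream());
+  incoming.on("stream", (remoteStream: MediaStream) => {
+    // Attach remoteStream to a <video> element in the UI.
+  });
+});
+```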
+ +### Communication between two users in the collaboration space + +Similar to the collaborative service, PeerJS also utilizes a signaling server to help establish connection between two users. We establish the unique peer IDs as follows: + +- Peer1’s ID: - for e.g., `AdminUser-af6debdaae1daec04c9f33aa1eee0a6c` +- Peer2’s ID: - for e.g, `test-af6debdaae1daec04c9f33aa1eee0a6c` + +With the two unique IDs, an exchange of information is performed through the central signaling server provided by PeerJS. The signaling data comprises the unique peer’s ID, the ICE candidates and media capabilities. The process of establishing the connection between peers is similar to how one would try to contact a friend via their phone. Peer1 would initiate a call to Peer2 by specifying Peer2’s ID coupled with its own media stream. Peer2 on receiving the call, will accept and provide their own media stream. + +Once the connection is established, the stream event will be fired and both users will be able to see one another’s media stream (both video and audio). + +![PeerPrep Integration with PeerJS](../../docs/peerjs.png) + +We also provided functionalities to allow users more media control options over the video/audio functionality. This enables them to on/off their video camera, on/off their microphone and initiate/end call sessions. The screenshot image below showcases an example of the video/audio functionality in PeerPrep. + +![Screenshot of the collaboration service video functionality](../../docs/collab_page_2.png) + +## Enhancing Collaboration Service + +The Collaboration Service is enhanced with a powerful code editor that facilitates real-time collaborative coding between users. By integrating CodeMirror, it provides an intuitive and interactive coding experience with features like syntax highlighting, code formatting, and automatic indentation. + +This enhances user productivity and improves collaboration by making the code more readable and easier to work with. Users can seamlessly write, edit, and view code with minimal effort, improving the overall experience of real-time coding sessions. + +### Technology Stack + +- CodeMirror: A versatile code editor component for the web, providing features like syntax highlighting and code formatting. It supports a wide range of programming languages and offers a rich API for customization and extension. + +![Screenshot of the code editor](../../docs/code_editor.png) + +CodeMirror allows for code formatting and syntax highlighting. With reference to the screenshot above, “def” and “name” are highlighted, and the 2nd line is auto-indented. + +--- + ## Getting Started First, install the dependencies: diff --git a/apps/user-service/README.md b/apps/user-service/README.md index 1e030a25f9..567276c9a0 100644 --- a/apps/user-service/README.md +++ b/apps/user-service/README.md @@ -1,5 +1,36 @@ # User Service Guide +NOTE: The User Service is adapted from [here](https://github.com/CS3219-AY2425S1/PeerPrep-UserService/tree/main). + +The User Service is a microservice responsible for handling user-related operations, such as registration, authentication, profile management, and authorization. It is part of a larger system architecture and integrates with other microservices within the project. Below outlines the requirements, architecture, and design for the User Service, and provides a thorough overview of its functionality, structure, and technology stack. 
+ +The User Service is designed using a microservices architecture, integrating with other +services through RESTful APIs. Below is a description of the architecture components +and their interactions. + +1. User Service: This microservice handles all user-related operations, including user creation, authentication, data retrieval, updates, and role management. It exposes REST endpoints that allow clients to interact with the database. +2. MongoDB: MongoDB is used as the database to store user information. It is hosted on MongoDB Cloud. +3. Authentication: JWT tokens are generated upon login to authenticate users. This service ensures secure access to data based on user roles and verifies token validity for routing. + +### Technology Stack + +- Node.js: The primary runtime environment for the User Service due to its scalability, ease of use for API development, and compatibility with MongoDB. +- MongoDB Cloud: Used for its scalability and ease of management. MongoDB’s document-based structure is well-suited for handling user data. +- JWT (JSON Web Tokens): Used for secure and stateless authentication. + +### Authentication and Authorization + +The User Service uses JWT for secure token-based authentication. Upon login, a JWT is issued. + +This token is used in the Authorization header to grant or restrict access to specific routes based on the user’s role: + +- Admin Users: Can access all routes, including those that involve managing other users’ data or editing question details. +- Non-admin Users: Restricted to actions related to their own data only. + +--- + +NOTE: The following parts of this README are adapted from [here](https://github.com/CS3219-AY2425S1/PeerPrep-UserService/tree/main). + ## Setting-up > :notebook: If you are familiar to MongoDB and wish to use a local instance, please feel free to do so. This guide utilizes MongoDB Cloud Services. diff --git a/docs/cicid.md b/docs/cicid.md new file mode 100644 index 0000000000..d234b54bbf --- /dev/null +++ b/docs/cicid.md @@ -0,0 +1,47 @@ +# CI/CD Guide + +## CI with Github Actions + +The variables and secrets were placed in GitHub Actions’ environment secrets and environment variables, and referenced when running the workflows. + +Whenever code is pushed to our main or staging branches, or a Pull Request is opened to these branches, the CI/CD pipeline is run. The pipeline workflow allows us to run the tests automatically on each commit. + +## Frontend unit testing + +Unit tests were written for the client-side functions that fetch data from the question service API. + +We decided to use the Jest testing library for this purpose as the Next.js framework has built-in configurations for it, making integration into our work processes easier. Jest’s mocking feature also allowed us to replace dependencies in code that would be called while running tests without modifying production code, even for functions in the global namespace such as the Fetch API. Injecting mock functions into test code allowed us to (a) simulate responses from the Question Service so that tests are deterministic and (b) verify that the functions under test format queries correctly. Testing was done during CI by downloading the relevant dependencies in the runner and running the tests.
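+
+A simplified sketch of this mocking approach is shown below; the `fetchQuestions` helper and the response shape are hypothetical stand-ins for the actual client-side functions under test.
+
+```typescript
+import { fetchQuestions } from "./questionService"; // hypothetical module under test
+
+describe("fetchQuestions", () => {
+  beforeEach(() => {
+    // Replace the global Fetch API with a mock returning a canned response,
+    // so the test is deterministic and never calls the real Question Service.
+    global.fetch = jest.fn().mockResolvedValue({
+      ok: true,
+      json: async () => ({ questions: [{ id: "1", title: "Two Sum" }], total: 1 }),
+    }) as unknown as typeof fetch;
+  });
+
+  it("formats the query and surfaces the mocked response", async () => {
+    const result = await fetchQuestions({ page: 1, limit: 10, difficulty: "Easy" });
+
+    // (a) the simulated response is returned to the caller...
+    expect(result.questions).toHaveLength(1);
+    // (b) ...and the request URL was formatted with the expected parameters.
+    expect((global.fetch as jest.Mock).mock.calls[0][0]).toContain("difficulty=Easy");
+  });
+});
+```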
+ +![Frontend Unit Testing](./unit_testing_workflow.png) + +## Question service Integration Testing + +Integration testing was done for the Question service functions that access the Firestore database by starting a local Firestore emulator and modifying the client to point to it instead of the production database when making requests. Running a local emulator has the following advantages over testing against a remote database: + +- When running tests, the emulator is loaded with a list of sample questions to create a controlled environment where the results of calls to the database are deterministic. +- Calls to a remote database have unpredictable latency and are susceptible to unexpected downtime which reduces testability Testing is done in CI by installing the emulator via firebase.tools and running the tests using the Go testing package. + +![Question Service Integration Testing](./question_service_testing.png) + +## Browser Compatibility Testing + +Browser compatibility testing was done for Google Chrome using Selenium, a popular browser automation tool. +For CI, Selenium and a publicly available webdriver for Chrome was downloaded. An automation script using the Jest library was used to emulate a basic case of a user accessing the page, entering login credentials and submitting the login form. + +![Browser Compatibility Testing](./browser_test.png) + +## Jobs in CI Workflow + +1. question-service-test + a. This job handles testing for the Question Service, which is dependent on Firebase and Go. + b. It sets up the environment variables, firebase credentials, Go environment and dependencies and runs tests with firebase emulator. + +2. frontend-unit-tests + a. This job tests the frontend application using Node.js and pnpm + b. It sets up the environment and Node.js and installs pnpm and dependencies, before running the frontend tests. + +3. test-docker-compose + a. This job uses Docker Compose to run multiple services together and validate inter-service connectivity. + b. It sets up the environment files and database credential files across all services and builds and runs services with Docker Compose. + c. The services’ availability are checked using curl and websocat to validate the HTTP endpoints and WebSocket endpoints respectively. + d. Chrome compatibility test was done against the running frontend and user service endpoint. diff --git a/docs/setup.md b/docs/setup.md new file mode 100644 index 0000000000..b7aec7d69b --- /dev/null +++ b/docs/setup.md @@ -0,0 +1,150 @@ +# Setup Guide + +## Instructions to set up + +### Setting up the local environment + +1. Visit the repository at the following [url](https://github.com/CS3219-AY2425S1/cs3219-ay2425s1-project-g24) +2. Clone the repository +3. Ensure that you are in the correct branch (main) + a. Open terminal and run the following command: `git branch` +4. Install Docker + a. https://www.docker.com/get-started +5. Install Docker Compose + a. https://docs.docker.com/compose/install/ + +### Setting up the database secrets + +1. Copy the `cs3219-staging-codeexecution-firebase-adminsdk-ce48j-00ab09514c.json` file found in the zip file into the `./apps/execution-service/` directory. +2. Copy the `cs3219-staging-codehisto-bb61c-firebase-adminsdk-egopb-95cfaf9b87.json` file found in the zip file into the `./apps/history-service/` directory. +3. Copy the `cs3219-g24-firebase-adminsdk-9cm7h-b1675603ab.json` file found in the zip file into the `./apps/question-service/` directory. + +### Setting up environment variables + +1. 
Create/Update the `./apps/execution-service/.env` file with the following variables: + +``` +FIREBASE_CREDENTIAL_PATH=cs3219-staging-codeexecution-firebase-adminsdk-ce48j-00ab09514c.json +PORT=8083 +HISTORY_SERVICE_URL=http://history-service:8082/ +RABBITMQ_URL=amqp://guest:guest@rabbitmq:5672/ +``` + +2. Create/Update the `./apps/frontend/.env` file with the following variables: + +``` +NEXT_PUBLIC_QUESTION_SERVICE_URL="http://localhost:8080/" +NEXT_PUBLIC_USER_SERVICE_URL="http://localhost:3001/" +NEXT_PUBLIC_MATCHING_SERVICE_URL="ws://localhost:8081/match" +NEXT_PUBLIC_SIGNALLING_SERVICE_URL="ws://localhost:4444/" +NEXT_PUBLIC_HISTORY_SERVICE_URL="http://localhost:8082/" +NEXT_PUBLIC_EXECUTION_SERVICE_URL="http://localhost:8083/" +``` + +3. Create/Update the `./apps/history-service/.env` file with the following variables: + +``` +FIREBASE_CREDENTIAL_PATH=cs3219-staging-codehisto-bb61c-firebase-adminsdk-egopb-95cfaf9b87.json +PORT=8082 +RABBITMQ_URL=amqp://guest:guest@rabbitmq:5672/ +``` + +4. Create/Update the `./apps/matching-service/.env` file with the following variables: + +``` +PORT=8081 +MATCH_TIMEOUT=30 +JWT_SECRET=63059c735adba274cde40f2b1c0b955842d531b115bb4df3058d769b173dcc78 +REDIS_URL=redis-container:6379 +QUESTION_SERVICE_GRPC_URL=question-service:50051 +``` + +5. Create/Update the `./apps/question-service/.env` file with the following variables: + +``` +FIREBASE_CREDENTIAL_PATH=cs3219-g24-firebase-adminsdk-9cm7h-b1675603ab.json +JWT_SECRET=63059c735adba274cde40f2b1c0b955842d531b115bb4df3058d769b173dcc78 +EXECUTION_SERVICE_URL="http://execution-service:8083/" +``` + +6. Create/Update the `./apps/signalling-service/.env` file with the following variables: + +``` +PORT=4444 +JWT_SECRET=63059c735adba274cde40f2b1c0b955842d531b115bb4df3058d769b173dcc78 +``` + +7. Create/Update the `./apps/user-service/.env` file with the following variables: + +``` +DB_CLOUD_URI=mongodb+srv://admin:admin_user_service@cs3219-user-service.bbmji.mongodb.net/?retryWrites=true&w=majority&appName=cs3219-user-service +DB_LOCAL_URI=mongodb://127.0.0.1:27017/peerprepUserServiceDB +PORT=3001 +ENV=PROD +JWT_SECRET=63059c735adba274cde40f2b1c0b955842d531b115bb4df3058d769b173dcc78 +``` + +### Set-up and run ALL Services via Docker Compose + +1. Change directory to `./apps/` directory, run: `cd ./apps` +2. To set up and run all services via docker compose, run: `docker compose up --build` + +### Set-up and run Docker Container for Execution Service + +1. Change directory to `./apps/execution-service`, run: `cd ./apps/execution-service` +2. To set up the docker container for the execution service, run: `docker build -t execution-service .` +3. To run the docker container for the execution service, run: `docker run -p 8083:8083 --env-file .env -d execution-service` + +### Set-up and run Docker Container for Frontend + +1. Change directory to `./apps/frontend` directory, run: `cd ./apps/frontend` +2. To set up the docker container for the frontend, run: `docker build -t frontend -f Dockerfile .` +3. To run the docker container for the frontend, run: `docker run -p 3000:3000 --env-file .env -d frontend` + +### Set-up and run Docker Container for History Service + +1. Change directory to `./apps/history-service` directory, run: `cd ./apps/history-service` +2. To set up the docker container for the history service, run: `docker build -t history-service .` +3. 
To run the docker container for the history service, run: `docker run -p 8082:8082 -d history-service` + +### Set-up and run Docker Container for Matching-Service + +1. Change directory to `./apps/matching-service` directory, run: `cd ./apps/matching-service` +2. Ensure that `REDIS_URL=redis-container:6379` in .env file for matching-service +3. To set up the go docker container for the matching service, run: `docker build -f Dockerfile -t match-go-app .` +4. To create the docker network for redis and go, run: `docker network create redis-go-network` +5. To start a new Redis container in detached mode using the redis image from Docker Hub, run: `docker run -d --name redis-container --network redis-go-network redis` +6. To run the go docker container for the matching-service, run: `docker run -d -p 8081:8081 --name go-app-container --network redis-go-network match-go-app` + +### Set-up and run Docker Container for Question-Service + +1. Change directory to `./apps/question-service` directory, run: `cd ./apps/question-service` +2. To set up the docker container for the question service, run: `docker build -t question-service .` +3. To run the docker container for the question service, run: `docker run -p 8080:8080 --env-file .env -d question-service` + +### Set-up and run Docker Container for Signalling-Service + +1. Change directory to `./apps/signalling-service` directory, run: `cd ./apps/signalling-service` +2. To set up the docker container for the signalling service, run: `docker build -t signalling-service -f Dockerfile .` +3. To run the docker container for the signalling service, run: `docker run -p 4444:4444 --env-file .env -d signalling-service` + +### Set-up and run Docker Container for User-Service + +1. Change directory to `./apps/user-service` directory, run: `cd ./apps/user-service` +2. To set up the docker container for the user service, run: `docker build -t user-service -f Dockerfile .` +3. To run the docker container for the user service, run: `docker run -p 3001:3001 --env-file .env -d user-service` + +## Running the application + +1. To check if all the Docker containers are up, run `docker ps`. +2. Navigate to `http://localhost:3000/register` to register to the application. +3. Navigate to `http://localhost:3000/login` to login to the application. +4. Access the application via an admin account with the following login details: + a. email: `admin@gmail.com` + b. password: `admin` +5. Access the application via non-admin account with the following login details (via + incognito mode to test matchmaking/collaboration function): + a. email: `notadmin@gmail.com` + b. password: `notadmin` + +NOTE: The MongoDB connection might get blocked on NUS Wi-Fi, so might have to use another Wi-Fi.