79 changes: 79 additions & 0 deletions .github/workflows/Dev_CD.yml
@@ -0,0 +1,79 @@
name: dev-cd

on:
push:
branches:
- "develop"
- "SCRUM-117"

permissions:
contents: read
checks: write
actions: read
pull-requests: write

jobs:
test:
uses: ./.github/workflows/Dev_CI.yml
secrets: inherit

build:
needs: test
runs-on: ubuntu-latest

steps:
- name: Checkout Code
uses: actions/checkout@v4

- name: Setting dev-secret.yml
run: |
echo "${{ secrets.DEV_SECRET_YML }}" > ./src/main/resources/dev-secret.yml

Comment on lines +28 to +31

⚠️ Potential issue

Risk: dev-secret.yml embedded in the build artifact

Writing dev-secret.yml into resources during the build step embeds the secret inside the uploaded JAR, which is an artifact-leak risk. Move the file creation and injection to the deploy step only.

Remove it from the build step and create it in the deploy step instead:

-      - name: Setting dev-secret.yml
-        run: |
-          echo "${{ secrets.DEV_SECRET_YML }}" > ./src/main/resources/dev-secret.yml
+      # dev-secret.yml is not included in the build artifact.

Add to the deploy step:

     steps:
+      - name: Create dev-secret.yml on runner
+        run: |
+          mkdir -p ~/app/config
+          echo "${{ secrets.DEV_SECRET_YML }}" > ~/app/config/dev-secret.yml
+
       - name: Download artifact file
         uses: actions/download-artifact@v4
         with:
           name: app-artifact
           path: ~/app
+
+      - name: Verify secret file exists
+        run: test -s ~/app/config/dev-secret.yml

Example JVM option so the application reads the external file (set on the code/script side):
-Dspring.config.import=optional:file:~/app/config/dev-secret.yml
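A minimal sketch of the deploy-time flow, assuming a writable runner home directory; the inline YAML value is a placeholder standing in for `${{ secrets.DEV_SECRET_YML }}`:

```shell
#!/bin/sh
# Placeholder for the GitHub secret; in the workflow this value comes from
# ${{ secrets.DEV_SECRET_YML }} and never enters the build tree or the JAR.
DEV_SECRET_YML='spring:
  datasource:
    password: example'

CONFIG_DIR="$HOME/app/config"
mkdir -p "$CONFIG_DIR"
printf '%s\n' "$DEV_SECRET_YML" > "$CONFIG_DIR/dev-secret.yml"

# Mirrors the "Verify secret file exists" step: -s fails on a missing or
# empty file, so a misconfigured secret aborts the deploy early.
test -s "$CONFIG_DIR/dev-secret.yml"
```

The application then picks the file up at startup via the `-Dspring.config.import=optional:file:...` option.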

🤖 Prompt for AI Agents
.github/workflows/Dev_CD.yml lines 28-31: the workflow writes dev-secret.yml
into src/main/resources during the build which embeds secrets into the produced
JAR; remove that run step from the build job, and instead add a step in the
deploy job that creates the file on the runner (or injects it as a secret file)
just before deployment; ensure the build uses no local secret file, update
deployment steps to write the secret from the GitHub secret into an external
path (e.g., ~/app/config/dev-secret.yml) and/or set the JVM/SPRING config to
import an external optional file (for example via
-Dspring.config.import=optional:file:~/app/config/dev-secret.yml) so secrets are
injected at deploy time only and not baked into artifacts.

- name: Set up JDK 21
uses: actions/setup-java@v4
with:
distribution: 'temurin'
java-version: '21'

- name: Make gradlew executable
run: chmod +x gradlew

- name: bootJar with Gradle
run: ./gradlew bootJar --info

- name: Change artifact file name
run: mv build/libs/*.jar build/libs/app.jar

- name: Upload artifact file
uses: actions/upload-artifact@v4
with:
name: app-artifact
path: ./build/libs/app.jar
if-no-files-found: error

Comment on lines +47 to +53

🛠️ Refactor suggestion

Keep the upload path consistent with the submodule layout

The artifact upload path must be corrected to the artifact's actual location.

-      - name: Upload artifact file
+      - name: Upload artifact file
         uses: actions/upload-artifact@v4
         with:
           name: app-artifact
-          path: ./build/libs/app.jar
+          path: ./api-owner/build/libs/app.jar
           if-no-files-found: error
🤖 Prompt for AI Agents
In .github/workflows/Dev_CD.yml around lines 47 to 53, the artifact upload path
points to ./build/libs/app.jar which doesn't match the submodule layout; update
the path to the actual artifact location inside the submodule (for example
./your-submodule-name/build/libs/app.jar) so the Upload artifact step points to
the correct relative path, and verify if-no-files-found behavior is appropriate.

- name: Upload deploy scripts
uses: actions/upload-artifact@v4
with:
name: deploy-scripts
path: ./scripts/dev/
if-no-files-found: error

deploy:
needs: build
runs-on: dev

steps:
- name: Download artifact file
uses: actions/download-artifact@v4
with:
name: app-artifact
path: ~/app

- name: Download deploy scripts
uses: actions/download-artifact@v4
with:
name: deploy-scripts
path: ~/app/scripts

- name: Replace application to latest
run: sudo sh ~/app/scripts/replace-new-version.sh
91 changes: 91 additions & 0 deletions .github/workflows/Dev_CI.yml
@@ -0,0 +1,91 @@
name: dev-ci

on:
pull_request:
branches:
- develop
workflow_call:

permissions:
contents: read
checks: write
pull-requests: write

jobs:
build-and-push:
runs-on: ubuntu-latest
timeout-minutes: 10
env:
TEST_REPORT: true

services:
mysql:
image: mysql:8.0
env:
MYSQL_ROOT_PASSWORD: ""
MYSQL_ALLOW_EMPTY_PASSWORD: "yes"
MYSQL_DATABASE: wellmeet_noti_test
ports:
- 3306:3306
Comment on lines +25 to +29

💡 Verification agent

🧩 Analysis chain

Possible MySQL DB name mismatch (wellmeet_noti_test vs wellmeet_noti)

  • The workflow creates MYSQL_DATABASE: wellmeet_noti_test, but application-local.yml points at wellmeet_noti. Without a test profile, this can mean connection failures or an empty schema.

Please confirm whether a test profile exists and which DB it uses:

  • Option 1: set SPRING_PROFILES_ACTIVE=test in the Run Tests step and use wellmeet_noti_test from application-test.yml.
  • Option 2: set SPRING_PROFILES_ACTIVE=local so tests use the local profile instead.

🏁 Script executed:

#!/bin/bash
# Look for a test profile and the DB URL
fd -HI "application-test.yml" || echo "⚠️ application-test.yml not found"
rg -n "spring\.datasource\.url" -C2 src/main/resources || true

Length of output: 133


To pin down the DB name and URL actually used by the test and local profiles, please run the script below:

I will use the result to confirm whether wellmeet_noti vs wellmeet_noti_test is referenced.


🏁 Script executed:

#!/bin/bash
echo "===== application-local.yml datasource settings ====="
rg -n "datasource" -C3 src/main/resources/application-local.yml || true

echo "===== application-test.yml datasource settings ====="
rg -n "datasource" -C3 src/main/resources/application-test.yml || true

Length of output: 793


Please check whether SPRING_PROFILES_ACTIVE is set in the Run Tests step:


🏁 Script executed:

#!/bin/bash
echo "===== Run Tests section ====="
rg -n "Run Tests" -C5 .github/workflows/Dev_CI.yml || true

echo "===== SPRING_PROFILES_ACTIVE setting ====="
rg -n "SPRING_PROFILES_ACTIVE" -C3 .github/workflows/Dev_CI.yml || echo "⚠️ SPRING_PROFILES_ACTIVE not set"

Length of output: 527


Please check whether application.yml, the file loaded under the default profile, contains datasource settings:


🏁 Script executed:

#!/bin/bash
echo "===== Check application.yml location ====="
fd -HI src/main/resources/application.yml || echo "⚠️ application.yml not found"

echo "===== application.yml datasource settings ====="
rg -n "datasource" -C3 src/main/resources/application.yml || true

Length of output: 359


Set SPRING_PROFILES_ACTIVE=test in the Run Tests step
application-test.yml references jdbc:mysql://localhost:3306/wellmeet_noti_test and application-local.yml references …/wellmeet_noti, but there is no default application.yml, so if the workflow does not specify a profile the tests connect to the wrong DB or fail to connect.
Please add the following env setting to the Run Tests With Gradle step in .github/workflows/Dev_CI.yml:

       - name: Run Tests With Gradle
         run: ./gradlew test
+        env:
+          SPRING_PROFILES_ACTIVE: test
🤖 Prompt for AI Agents
In .github/workflows/Dev_CI.yml around lines 25 to 29 the CI job doesn't set
SPRING_PROFILES_ACTIVE, so Gradle tests can pick the wrong profile/DB; add an
environment variable SPRING_PROFILES_ACTIVE=test to the "Run Tests With Gradle"
step's env block so the test profile (and application-test.yml DB URL) is used
during test execution.
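The fix works because a step-level `env:` entry becomes an ordinary environment variable of the Gradle process. A small shell sketch of the same selection mechanism; `resolve_profile` is a hypothetical stand-in for Spring's profile resolution, not a real CLI:

```shell
#!/bin/sh
# Hypothetical stand-in for Spring's profile resolution: use
# SPRING_PROFILES_ACTIVE when set, otherwise fall back to "default".
resolve_profile() {
    echo "${SPRING_PROFILES_ACTIVE:-default}"
}

unset SPRING_PROFILES_ACTIVE
echo "without env: $(resolve_profile)"    # without env: default

SPRING_PROFILES_ACTIVE=test
echo "with env:    $(resolve_profile)"    # with env:    test
```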

options: >-
--health-cmd="mysqladmin ping"
--health-interval=10s
--health-timeout=5s
--health-retries=3

zookeeper:
image: confluentinc/cp-zookeeper:7.0.1
env:
ZOOKEEPER_CLIENT_PORT: 2181
ZOOKEEPER_TICK_TIME: 2000
ports:
- 2181:2181
options: >-
--health-cmd="curl -f http://localhost:8080/commands/stat || exit 1"
--health-interval=10s
--health-timeout=10s
--health-retries=5
Comment on lines +44 to +47

⚠️ Potential issue

The Zookeeper health-check command is wrong.

  • The stock cp-zookeeper image exposes no HTTP endpoint on port 8080. Replace the check with ruok.
-          --health-cmd="curl -f http://localhost:8080/commands/stat || exit 1"
+          --health-cmd="bash -c 'echo ruok | nc -w 2 localhost 2181 | grep imok'"
           --health-interval=10s
           --health-timeout=10s
           --health-retries=5
🤖 Prompt for AI Agents
.github/workflows/Dev_CI.yml lines 44-47: the current health-cmd targets a
non-existent HTTP endpoint on port 8080; replace it with a Zookeeper "ruok"
check against the Zookeeper client port (2181). Update the --health-cmd to run a
TCP check that sends "ruok" and verifies the "imok" response (for example use:
echo ruok | nc -w1 localhost 2181 | grep imok || exit 1), keeping the existing
--health-interval, --health-timeout, and --health-retries values.


kafka:
image: confluentinc/cp-kafka:7.0.1
env:
KAFKA_BROKER_ID: 1
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
Comment on lines +50 to +56

🛠️ Refactor suggestion

Recommend explicitly declaring the listener binding (add KAFKA_LISTENERS)

  • There are cases where Kafka fails to boot when only KAFKA_ADVERTISED_LISTENERS is set, so explicitly declaring the listener binding is safer.
       kafka:
         image: confluentinc/cp-kafka:7.0.1
         env:
           KAFKA_BROKER_ID: 1
           KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
           KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT
+          KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092
           KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
🤖 Prompt for AI Agents
.github/workflows/Dev_CI.yml around lines 50 to 56: the workflow sets
KAFKA_ADVERTISED_LISTENERS but omits KAFKA_LISTENERS which can cause Kafka to
fail to bind in some environments; add an env entry KAFKA_LISTENERS set to
PLAINTEXT://0.0.0.0:9092 (or matching interface and port you intend to bind)
alongside the existing advertised listeners so the broker explicitly binds to
the correct network interface.

KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
KAFKA_AUTO_CREATE_TOPICS_ENABLE: true
ports:
- 9092:9092
options: >-
--health-cmd="timeout 10s bash -c 'until printf \"\" 2>>/dev/null >>/dev/tcp/localhost/9092; do sleep 1; done'"
--health-interval=10s
--health-timeout=10s
--health-retries=10

steps:
- uses: actions/checkout@v4

- name: Setting local-secret.yml
run: |
echo "${{ secrets.LOCAL_SECRET_YML }}" > ./src/main/resources/local-secret.yml

- name: Set up JDK 21
uses: actions/setup-java@v4
with:
java-version: '21'
distribution: 'temurin'

- name: Setup Gradle
uses: gradle/actions/setup-gradle@v4

- name: Grant Permission
run: chmod +x ./gradlew

- name: Build With Gradle
run: ./gradlew clean build -x test

- name: Run Tests With Gradle
run: ./gradlew test
1 change: 1 addition & 0 deletions .gitignore
Expand Up @@ -36,3 +36,4 @@ out/
### VS Code ###
.vscode/
/src/main/resources/local-secret.yml
.serena/
2 changes: 2 additions & 0 deletions build.gradle
Expand Up @@ -27,6 +27,8 @@ dependencies {
implementation 'org.springframework.boot:spring-boot-starter-data-jpa'
implementation 'org.springframework.boot:spring-boot-starter-validation'
implementation 'org.springframework.boot:spring-boot-starter-web'
implementation 'org.springframework.boot:spring-boot-starter-mail'
implementation 'org.springframework.kafka:spring-kafka'
implementation('nl.martijndwars:web-push:5.1.1') {
exclude group: 'org.asynchttpclient', module: 'async-http-client'
}
Expand Down
40 changes: 40 additions & 0 deletions scripts/dev/replace-new-version.sh
@@ -0,0 +1,40 @@
#!/bin/bash

PID=$(lsof -t -i:8080)

# Terminate the process
if [ -z "$PID" ]; then
echo "No process is using port 8080."
else
echo "Killing process with PID: $PID"
kill -15 "$PID"

# Check whether the previous command (the kill) succeeded
if [ $? -eq 0 ]; then
echo "Process $PID terminated successfully."
else
echo "Failed to terminate process $PID."
fi
fi
Comment on lines +3 to +18

🛠️ Refactor suggestion

Add a termination-wait / force-kill loop

The script only checks whether kill -15 succeeded; it does not guarantee the port is released. Recommend a loop that waits for a fixed period and force-kills with -9 if the process has not exited.

-if [ -z "$PID" ]; then
+if [ -z "$PID" ]; then
   echo "No process is using port 8080."
 else
   echo "Killing process with PID: $PID"
   kill -15 "$PID"
 
   # Check whether the previous command (the kill) succeeded
   if [ $? -eq 0 ]; then
     echo "Process $PID terminated successfully."
   else
     echo "Failed to terminate process $PID."
   fi
+  # Wait for the port to be released (up to 30s); SIGKILL if still held
+  for i in {1..30}; do
+    if ! lsof -t -i:8080 >/dev/null; then
+      break
+    fi
+    sleep 1
+  done
+  if lsof -t -i:8080 >/dev/null; then
+    echo "Force killing process $PID"
+    kill -9 "$PID" || true
+  fi
 fi
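The suggested wait-then-force-kill pattern can be exercised locally against a throwaway process instead of the real app on port 8080; the 5-second budget here is an illustrative stand-in for the 30-second wait (a sketch, not the deploy script itself):

```shell
#!/bin/sh
# Dummy long-running process standing in for the app on port 8080.
# Launching it through a short-lived `sh -c` detaches it from this shell,
# so `kill -0` tracks real liveness rather than an unreaped child.
PID=$(sh -c 'sleep 300 >/dev/null 2>&1 & echo $!')

echo "Killing process with PID: $PID"
kill -15 "$PID"

# Wait up to 5 seconds for the process to exit...
for i in 1 2 3 4 5; do
    kill -0 "$PID" 2>/dev/null || break
    sleep 1
done

# ...then escalate to SIGKILL if it is still holding on.
if kill -0 "$PID" 2>/dev/null; then
    echo "Force killing process $PID"
    kill -9 "$PID" || true
fi
```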


JAR_FILE=$(ls /home/ubuntu/app/*.jar | head -n 1)

echo "Running JAR: $JAR_FILE"

Comment on lines +20 to +23

⚠️ Potential issue

Remove the hardcoded path and match the artifact name

The workflow downloads to ~/app with the file named app.jar, but the script globs /home/ubuntu/app/*.jar and can fail depending on the environment. Use a $HOME-based path and the fixed filename, and add a check that fails immediately when the JAR is missing.

-JAR_FILE=$(ls /home/ubuntu/app/*.jar | head -n 1)
+APP_DIR="${APP_DIR:-$HOME/app}"
+JAR_FILE="$APP_DIR/app.jar"
+if [ ! -f "$JAR_FILE" ]; then
+  echo "JAR not found: $JAR_FILE"
+  exit 1
+fi
-
-echo "Running JAR: $JAR_FILE"
+echo "Running JAR: $JAR_FILE"
🤖 Prompt for AI Agents
In scripts/dev/replace-new-version.sh around lines 20 to 23, the script
currently hardcodes /home/ubuntu/app/*.jar which can fail in other environments
and with the known artifact name; change it to use $HOME-based path and the
fixed filename (e.g. $HOME/app/app.jar) and add an existence check that
immediately exits non-zero if the JAR is missing; ensure subsequent code uses
the JAR_FILE variable set to that path and that the script prints the JAR path
before proceeding.
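The fail-fast lookup can be sketched as a small helper; APP_DIR defaulting to $HOME/app mirrors the workflow's download path, and the function name `find_app_jar` is illustrative:

```shell
#!/bin/sh
# Resolve the deployed JAR, honoring an APP_DIR override and failing fast
# when the artifact is missing. -s also rejects an empty (truncated) file,
# which a plain -f test would let through.
find_app_jar() {
    app_dir="${APP_DIR:-$HOME/app}"
    jar_file="$app_dir/app.jar"
    if [ ! -s "$jar_file" ]; then
        echo "JAR not found or empty: $jar_file" >&2
        return 1
    fi
    echo "$jar_file"
}
```

In the deploy script itself the `return 1` would be `exit 1`, so the deployment aborts before the old process is replaced with nothing.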

# Application log file settings
APP_LOG_DIR="/home/ubuntu/app/logs"
APP_LOG_FILE="$APP_LOG_DIR/application-$(date +%Y%m%d-%H%M%S).log"

echo "Application log file: $APP_LOG_FILE"

Comment on lines +25 to +29

🛠️ Refactor suggestion

Missing log directory creation

If the log directory does not exist, the redirection fails. Recommend adding mkdir -p.

 APP_LOG_DIR="/home/ubuntu/app/logs"
 APP_LOG_FILE="$APP_LOG_DIR/application-$(date +%Y%m%d-%H%M%S).log"
 
 echo "Application log file: $APP_LOG_FILE"
+mkdir -p "$APP_LOG_DIR"
🤖 Prompt for AI Agents
In scripts/dev/replace-new-version.sh around lines 25 to 29, the script sets
APP_LOG_DIR and APP_LOG_FILE but doesn't ensure the log directory exists which
can cause redirection failures; add a directory creation step before using
APP_LOG_FILE by running mkdir -p "$APP_LOG_DIR" and check its exit status (exit
with non-zero and a clear error message if creation fails) so the log file can
be created reliably.
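A runnable sketch of the guard, with the directory made overridable so it does not have to live under /home/ubuntu:

```shell
#!/bin/sh
# Create the log directory before any redirection targets it. mkdir -p is
# idempotent, so re-running the deploy script stays safe.
APP_LOG_DIR="${APP_LOG_DIR:-$HOME/app/logs}"
if ! mkdir -p "$APP_LOG_DIR"; then
    echo "Failed to create log directory: $APP_LOG_DIR" >&2
    exit 1
fi

APP_LOG_FILE="$APP_LOG_DIR/application-$(date +%Y%m%d-%H%M%S).log"
echo "Application log file: $APP_LOG_FILE"
```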

sudo nohup java \
-Dspring.profiles.active=dev \
-Duser.timezone=Asia/Seoul \
-Dserver.port=8080 \
-Ddd.service=wellmeet-notification \
-Ddd.env=dev \
-jar "$JAR_FILE" > "$APP_LOG_FILE" 2>&1 &
Comment on lines +30 to +36

⚠️ Potential issue

SC2024: sudo does not affect redirects; the file write must be guaranteed root privileges

The redirection is currently performed with the calling shell's privileges and can fail with a permission error. Elevate the whole command with sudo sh -c.

-sudo nohup java \
-    -Dspring.profiles.active=dev \
-    -Duser.timezone=Asia/Seoul \
-    -Dserver.port=8080 \
-    -Ddd.service=wellmeet-notification \
-    -Ddd.env=dev \
-    -jar "$JAR_FILE" > "$APP_LOG_FILE" 2>&1 &
+sudo sh -c "nohup java \
+    -Dspring.profiles.active=dev \
+    -Duser.timezone=Asia/Seoul \
+    -Dserver.port=8080 \
+    -Ddd.service=wellmeet-notification \
+    -Ddd.env=dev \
+    -jar \"$JAR_FILE\" >> \"$APP_LOG_FILE\" 2>&1 &"
🧰 Tools
🪛 Shellcheck (0.10.0)

[warning] 36-36: sudo doesn't affect redirects. Use ..| sudo tee file

(SC2024)

🤖 Prompt for AI Agents
In scripts/dev/replace-new-version.sh around lines 30 to 36, the nohup command's
redirection runs under the calling shell (not sudo) which can cause permission
errors when writing the log; wrap the entire command and its redirection in a
privileged shell by using sudo sh -c '... &' so that the redirection is executed
as root, ensure proper quoting of variables inside the single-quoted command or
escape inner quotes as needed, and keep nohup and the trailing ampersand to run
the process in background.
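The essence of SC2024 is that in `sudo cmd > file` the redirection is opened by the calling, unprivileged shell before sudo ever runs; wrapping both the command and the redirection in `sh -c` moves them into the child shell. The same structure can be verified without sudo (paths here are illustrative):

```shell
#!/bin/sh
LOG_FILE="$(mktemp)"

# The command and its redirection both run inside the child shell, which is
# exactly what `sudo sh -c "..."` achieves with root privileges.
sh -c "echo started >> \"$LOG_FILE\" 2>&1"

# In the deploy script the privileged form is:
#   sudo sh -c "nohup java -jar \"$JAR_FILE\" >> \"$APP_LOG_FILE\" 2>&1 &"
cat "$LOG_FILE"    # prints: started
```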


echo "The application is now running in the background."
echo "View logs: tail -f $APP_LOG_FILE"
echo "=== Deployment complete ==="
49 changes: 49 additions & 0 deletions src/main/java/com/wellmeet/config/KafkaConfig.java
@@ -0,0 +1,49 @@
package com.wellmeet.config;

import com.wellmeet.notification.consumer.dto.NotificationMessage;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.EnableKafka;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.support.serializer.JsonDeserializer;

@EnableKafka
@Configuration
public class KafkaConfig {

@Value("${spring.kafka.bootstrap-servers}")
private String bootstrapServers;

@Value("${spring.kafka.consumer.group-id}")
private String groupId;

@Bean
public ConsumerFactory<String, NotificationMessage> consumerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

props.put(JsonDeserializer.USE_TYPE_INFO_HEADERS, false);
props.put(JsonDeserializer.TRUSTED_PACKAGES, "com.wellmeet.consumer.dto");
props.put(JsonDeserializer.VALUE_DEFAULT_TYPE, NotificationMessage.class);
return new DefaultKafkaConsumerFactory<>(props);
Comment on lines +36 to +39

⚠️ Potential issue

A typo in the JsonDeserializer TRUSTED_PACKAGES path can break deserialization

The DTO package is com.wellmeet.notification.consumer.dto, but the trusted package is set to com.wellmeet.consumer.dto, so deserialization can fail. It is also safer to specify VALUE_DEFAULT_TYPE as an FQCN string.

Recommended minimal patch:

-        props.put(JsonDeserializer.TRUSTED_PACKAGES, "com.wellmeet.consumer.dto");
-        props.put(JsonDeserializer.VALUE_DEFAULT_TYPE, NotificationMessage.class);
+        props.put(JsonDeserializer.TRUSTED_PACKAGES, "com.wellmeet.notification.consumer.dto");
+        props.put(JsonDeserializer.VALUE_DEFAULT_TYPE, "com.wellmeet.notification.consumer.dto.NotificationMessage");
🤖 Prompt for AI Agents
In src/main/java/com/wellmeet/config/KafkaConfig.java around lines 36 to 39, the
JsonDeserializer TRUSTED_PACKAGES value is incorrect (com.wellmeet.consumer.dto)
which can cause deserialization failures; change it to the correct package
com.wellmeet.notification.consumer.dto and set
JsonDeserializer.VALUE_DEFAULT_TYPE to the fully-qualified class name string
"com.wellmeet.notification.consumer.dto.NotificationMessage" (rather than the
class literal) so the deserializer can resolve the type reliably; keep
USE_TYPE_INFO_HEADERS = false and return the DefaultKafkaConsumerFactory with
the updated props.

}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, NotificationMessage> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, NotificationMessage> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
return factory;
}
}
@@ -0,0 +1,19 @@
package com.wellmeet.notification.consumer;

import com.wellmeet.notification.repository.OwnerNotificationEnabledRepository;
import com.wellmeet.notification.repository.UserNotificationEnabledRepository;
import java.util.List;
import lombok.RequiredArgsConstructor;
import org.springframework.stereotype.Component;

@Component
@RequiredArgsConstructor
public class NotificationChecker {

private final OwnerNotificationEnabledRepository ownerNotificationEnabledRepository;
private final UserNotificationEnabledRepository userNotificationEnabledRepository;

public List<String> check(String recipient, String recipientType) {
return null;
}
Comment on lines +16 to +18

⚠️ Potential issue | 🔴 Critical

NPE and channel-check failure caused by returning null
Line 16: if check always returns null, callers hit an immediate NullPointerException, or channel determination becomes impossible altogether. Please implement logic that queries the repositories and returns the list of enabled channels (return at least an empty list when none are found).

🤖 Prompt for AI Agents
In src/main/java/com/wellmeet/notification/consumer/NotificationChecker.java
around lines 16 to 18, the check method currently returns null which causes NPEs
and prevents channel determination; replace this stub with logic that queries
the notification/channel repository using recipient and recipientType, filters
for active/enabled channels, returns a List<String> of channel identifiers, and
never returns null (return Collections.emptyList() when none found); also catch
repository exceptions, log appropriately and return an empty list on error.

}
@@ -0,0 +1,24 @@
package com.wellmeet.notification.consumer;

import com.wellmeet.notification.consumer.dto.NotificationMessage;
import java.util.List;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Slf4j
@Service
@RequiredArgsConstructor
public class NotificationConsumer {

private final NotificationChecker notificationChecker;
private final NotificationSender notificationSender;

@KafkaListener(topics = "notification", groupId = "notification-group")
public void consume(NotificationMessage message) {
List<String> enabled = notificationChecker.check(message.getNotification().getRecipient(),
message.getNotification().getRecipientType());
notificationSender.send(message, enabled);
}
Comment on lines 18 to 23

🛠️ Refactor suggestion

Remove hardcoded topic/group and improve operability with metadata-rich logging

  • Externalizing them as properties adds per-environment flexibility, and logging partition/offset makes failure tracing easier.
-import org.springframework.kafka.annotation.KafkaListener;
+import org.springframework.kafka.annotation.KafkaListener;
+import org.apache.kafka.clients.consumer.ConsumerRecord;
@@
-    @KafkaListener(topics = "notification", groupId = "notification-group")
-    public void consume(String message) {
-        log.info("Received message from notification topic: {}", message);
+    @KafkaListener(
+        topics = "${notification.topic:notification}",
+        groupId = "${spring.kafka.consumer.group-id}"
+    )
+    public void consume(ConsumerRecord<String, String> record) {
+        log.info("Consumed reservation event: topic={}, partition={}, offset={}, key={}, value={}",
+                 record.topic(), record.partition(), record.offset(), record.key(), record.value());
     }

If you need an implementation that wires up mail/SMS/web-push delivery, I can provide a draft interface and a simple adapter (event to domain DTO conversion).

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
@KafkaListener(topics = "notification", groupId = "notification-group")
public void consume(String message) {
    log.info("Received message from notification topic: {}", message);
}
import org.springframework.kafka.annotation.KafkaListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;

@KafkaListener(
    topics = "${notification.topic:notification}",
    groupId = "${spring.kafka.consumer.group-id}"
)
public void consume(ConsumerRecord<String, String> record) {
    log.info("Consumed reservation event: topic={}, partition={}, offset={}, key={}, value={}",
            record.topic(), record.partition(), record.offset(), record.key(), record.value());
}

Comment on lines +19 to +23

⚠️ Potential issue | 🟠 Major

Missing error handling degrades message-processing reliability

If notificationChecker.check() or notificationSender.send() throws, the message may be reprocessed or routed to a DLQ depending on the Kafka listener's default behavior. Add explicit error handling and logging to improve failure tracing and the recovery strategy.

Apply the following diff to add error handling and logging:

     @KafkaListener(topics = "notification", groupId = "notification-group")
     public void consume(NotificationMessage message) {
+        try {
+            log.info("Processing notification message: recipient={}, type={}", 
+                     message.getNotification().getRecipient(),
+                     message.getNotification().getRecipientType());
+            
             List<String> enabled = notificationChecker.check(message.getNotification().getRecipient(),
                     message.getNotification().getRecipientType());
             notificationSender.send(message, enabled);
+            
+            log.info("Successfully processed notification message for recipient={}",
+                     message.getNotification().getRecipient());
+        } catch (Exception e) {
+            log.error("Failed to process notification message: recipient={}, error={}",
+                      message.getNotification().getRecipient(), e.getMessage(), e);
+            throw e; // Re-throw to trigger Kafka retry/DLQ logic
+        }
     }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
public void consume(NotificationMessage message) {
    List<String> enabled = notificationChecker.check(message.getNotification().getRecipient(),
            message.getNotification().getRecipientType());
    notificationSender.send(message, enabled);
}
@KafkaListener(topics = "notification", groupId = "notification-group")
public void consume(NotificationMessage message) {
    try {
        log.info("Processing notification message: recipient={}, type={}",
                message.getNotification().getRecipient(),
                message.getNotification().getRecipientType());
        List<String> enabled = notificationChecker.check(
                message.getNotification().getRecipient(),
                message.getNotification().getRecipientType());
        notificationSender.send(message, enabled);
        log.info("Successfully processed notification message for recipient={}",
                message.getNotification().getRecipient());
    } catch (Exception e) {
        log.error("Failed to process notification message: recipient={}, error={}",
                message.getNotification().getRecipient(), e.getMessage(), e);
        throw e; // Re-throw to trigger Kafka retry/DLQ logic
    }
}
🤖 Prompt for AI Agents
In src/main/java/com/wellmeet/notification/consumer/NotificationConsumer.java
around lines 19 to 23, add explicit error handling and logging around the calls
to notificationChecker.check(...) and notificationSender.send(...) so exceptions
are not silently propagated: wrap the existing logic in a try-catch, log a
detailed error (include message id, recipient, recipientType and stacktrace)
when either call fails, and then decide to either rethrow a
RuntimeException/KafkaException to let the Kafka listener handle retries/DLQ or
call a recovery/dlqPublisher if you have one; ensure logging uses the class
logger and includes enough context to trace the message and failure.

}
@@ -0,0 +1,20 @@
package com.wellmeet.notification.consumer;

import com.wellmeet.notification.consumer.dto.NotificationMessage;
import com.wellmeet.notification.repository.NotificationHistoryRepository;
import com.wellmeet.notification.webpush.WebPushService;
import java.util.List;
import lombok.RequiredArgsConstructor;
import org.springframework.stereotype.Component;

@Component
@RequiredArgsConstructor
public class NotificationSender {

    private final WebPushService webPushService;
    private final NotificationHistoryRepository notificationHistoryRepository;

    public void send(NotificationMessage payload, List<String> enabled) {

    }
Comment on lines +17 to +19

⚠️ Potential issue | 🔴 Critical

The empty send implementation performs no work
Line 17: The send method is empty, so even when NotificationConsumer calls it, no notifications are actually sent and no history is saved. This blocks the core feature of this PR. At minimum, iterate over the enabled channels, call WebPushService, and persist via NotificationHistoryRepository.

🤖 Prompt for AI Agents
In src/main/java/com/wellmeet/notification/consumer/NotificationSender.java
around lines 17 to 19, the send method is empty so notifications are never sent
or recorded; implement it to iterate over the provided enabled channel list, for
each channel call the appropriate service (e.g., WebPushService.send or other
channel-specific send methods), build and persist a NotificationHistory entity
via NotificationHistoryRepository.save (include payload, channel, status,
timestamp), and catch/log exceptions per-channel so one failure doesn't stop
others; ensure dependencies (WebPushService, NotificationHistoryRepository,
logger) are injected/used and update method signature behavior to
return/propagate errors if required by surrounding code.

}
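The review above asks send to iterate the enabled channels, isolate per-channel failures, and record a history entry per channel. A minimal plain-Java sketch of that dispatch pattern follows; the channel names, the WebPushService call site, and the history strings are hypothetical stand-ins for the real Spring beans:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

// Sketch only: real code would inject WebPushService and
// NotificationHistoryRepository instead of the map/list used here.
public class NotificationSenderSketch {

    // Stands in for NotificationHistoryRepository: one "channel:status" entry per attempt.
    static final List<String> history = new ArrayList<>();

    static void send(String payload, List<String> enabled,
                     Map<String, BiConsumer<String, String>> channelSenders) {
        for (String channel : enabled) {
            BiConsumer<String, String> sender = channelSenders.get(channel);
            if (sender == null) {
                history.add(channel + ":SKIPPED"); // no sender registered for this channel
                continue;
            }
            try {
                sender.accept(channel, payload);
                history.add(channel + ":SENT");
            } catch (Exception e) {
                // One failing channel must not stop delivery to the others.
                history.add(channel + ":FAILED");
            }
        }
    }

    public static void main(String[] args) {
        Map<String, BiConsumer<String, String>> senders = Map.of(
                "WEBPUSH", (ch, msg) -> { /* webPushService.send(msg) would go here */ },
                "EMAIL", (ch, msg) -> { throw new RuntimeException("smtp down"); });
        send("reservation created", List.of("WEBPUSH", "EMAIL", "SMS"), senders);
        System.out.println(history); // [WEBPUSH:SENT, EMAIL:FAILED, SMS:SKIPPED]
    }
}
```

The per-channel try/catch gives failure isolation; whether a failed channel should also rethrow (to trigger Kafka retry/DLQ) depends on the retry strategy chosen above.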
@@ -0,0 +1,14 @@
package com.wellmeet.notification.consumer.dto;

import java.time.LocalDateTime;
import lombok.Getter;
import lombok.NoArgsConstructor;

@Getter
@NoArgsConstructor
public class MessageHeader {

    private String messageId;
    private LocalDateTime timestamp;
    private String source;
}
@@ -0,0 +1,14 @@
package com.wellmeet.notification.consumer.dto;

import lombok.Getter;
import lombok.NoArgsConstructor;

@Getter
@NoArgsConstructor
public class NotificationInfo {

    private String type;
    private String category;
    private String recipient;
Comment on lines +6 to +12

⚠️ Potential issue

🧩 Analysis chain

Check Jackson deserialization compatibility + improve type safety

  • With only @Getter and @NoArgsConstructor, deserialization of private fields can fail under the default ObjectMapper settings. Allow field-based access, or provide setters/a constructor.
  • The type/category strings also risk typos and mismatches, so consider switching to enums (e.g., NotificationType plus a separate Category enum).

Example (allow field access and ignore unknown fields):

 package com.wellmeet.notification.consumer.dto;

+import com.fasterxml.jackson.annotation.JsonAutoDetect;
+import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
 import lombok.Getter;
 import lombok.NoArgsConstructor;

 @Getter
 @NoArgsConstructor
+@JsonAutoDetect(fieldVisibility = JsonAutoDetect.Visibility.ANY)
+@JsonIgnoreProperties(ignoreUnknown = true)
 public class NotificationInfo {

     private String type;
     private String category;
     private String recipient;
 }

Run the following script to check whether deserialization settings (field visibility / JsonDeserializer / trusted packages) are present:


🏁 Script executed:

#!/bin/bash
# Search for Jackson field-visibility/AutoDetect/deserializer settings
rg -n -C2 -e '@JsonAutoDetect' -e 'setVisibility' -e 'JsonAutoDetect.Visibility' -e 'fieldVisibility' --type=java --type=yml --type=properties

# Search for Spring Kafka JsonDeserializer and trusted.packages settings
rg -n -C2 -e 'JsonDeserializer' -e 'spring.kafka.consumer' -e 'trusted.packages' --type=java --type=yml --type=properties

# Search for custom ObjectMapper bean definitions
rg -n -C2 -P 'ObjectMapper\s+\w+\s*\(' --type=java

Length of output: 55


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "=== File: NotificationInfo.java (first 200 lines) ==="
sed -n '1,200p' src/main/java/com/wellmeet/notification/consumer/dto/NotificationInfo.java || true
echo

echo "=== Search: Jackson annotations / ObjectMapper config / module registration ==="
rg -n -C2 --hidden -S '@JsonAutoDetect|@JsonIgnoreProperties|JsonAutoDetect.Visibility|setVisibility|fieldVisibility|Jackson2ObjectMapperBuilder|new ObjectMapper|@Bean\s+ObjectMapper|registerModule|SimpleModule|addDeserializer|@JsonComponent' || true
echo

echo "=== Search: Kafka JsonDeserializer / trusted.packages / spring.kafka.consumer ==="
rg -n -C2 --hidden -S 'JsonDeserializer|trusted.packages|spring.kafka.consumer|value-deserializer|key-deserializer|spring.kafka.properties' || true
echo

echo "=== Search: application.yml / application.yaml / application.properties for trusted.packages or spring.kafka.consumer ==="
rg -n -C2 --hidden -S 'trusted.packages|spring.kafka.consumer' -g 'application.*' -g '**/application.*' || true
echo

echo "=== Search: any mention of NotificationInfo usage (deserialization targets) ==="
rg -n -C2 --hidden -S 'NotificationInfo' || true

Length of output: 5962


Action required: fix NotificationInfo deserialization accessibility and the Kafka JsonDeserializer trusted.packages setting (critical)

  • NotificationInfo (src/main/java/com/wellmeet/notification/consumer/dto/NotificationInfo.java): with only @NoArgsConstructor plus private fields (@Getter), deserialization may fail under Jackson's default settings. Add @JsonAutoDetect(fieldVisibility = JsonAutoDetect.Visibility.ANY) and @JsonIgnoreProperties(ignoreUnknown = true) to the class, or provide setters/@JsonProperty annotations or a @JsonCreator constructor to guarantee deserialization.
  • KafkaConfig (src/main/java/com/wellmeet/config/KafkaConfig.java): props.put(JsonDeserializer.TRUSTED_PACKAGES, "com.wellmeet.consumer.dto"); does not match the actual DTO package (com.wellmeet.notification.consumer.dto). Unless the trusted.packages value is corrected to the actual package (or that package is added), JsonDeserializer will block deserialization.
  • Recommended change: convert the type/category fields to enums (NotificationType, Category) for type safety.
🤖 Prompt for AI Agents
In src/main/java/com/wellmeet/notification/consumer/dto/NotificationInfo.java
around lines 6-12, the class only has @NoArgsConstructor and private fields with
@Getter which can prevent Jackson from deserializing; update the class to allow
Jackson deserialization by adding @JsonIgnoreProperties(ignoreUnknown = true)
and @JsonAutoDetect(fieldVisibility = JsonAutoDetect.Visibility.ANY) to the
class or by adding setters/@JsonProperty annotations or a @JsonCreator
constructor for all fields; additionally, fix Kafka config in
src/main/java/com/wellmeet/config/KafkaConfig.java by setting
JsonDeserializer.TRUSTED_PACKAGES to the actual DTO package
"com.wellmeet.notification.consumer.dto" (or include it alongside existing
packages); optionally migrate type and category to enums (NotificationType,
Category) for stronger typing and update deserialization accordingly.

    private String recipientType;
}
@@ -0,0 +1,13 @@
package com.wellmeet.notification.consumer.dto;

import lombok.Getter;
import lombok.NoArgsConstructor;

@Getter
@NoArgsConstructor
public class NotificationMessage {

    private MessageHeader header;
    private NotificationInfo notification;
    private NotificationPayload payload;
}
Comment on lines +6 to +13

⚠️ Potential issue

Possible Kafka/Jackson deserialization failure: missing setters + polymorphic payload issue

  • The DTO currently exposes only getters, so Jackson binding may fail.
  • The payload field is an abstract type (NotificationPayload), so deserialization fails without polymorphic-deserialization configuration.

The following patch is recommended (add setters + allow unknown fields):

 package com.wellmeet.notification.consumer.dto;

+import com.fasterxml.jackson.annotation.JsonIgnoreProperties;
 import lombok.Getter;
+import lombok.Setter;
 import lombok.NoArgsConstructor;

 @Getter
+@Setter
 @NoArgsConstructor
+@JsonIgnoreProperties(ignoreUnknown = true)
 public class NotificationMessage {

     private MessageHeader header;
     private NotificationInfo notification;
     private NotificationPayload payload;
 }

Also, declare type information on NotificationPayload for polymorphic deserialization (a change in another file):

// src/main/java/com/wellmeet/notification/consumer/dto/NotificationPayload.java
package com.wellmeet.notification.consumer.dto;

import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;

@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY, property = "payloadType")
@JsonSubTypes({
    @JsonSubTypes.Type(value = ReservationCreatedPayload.class, name = "RESERVATION_CREATED")
})
public abstract class NotificationPayload {}

Have the producer include "payloadType": "RESERVATION_CREATED" in the JSON.

🤖 Prompt for AI Agents
In src/main/java/com/wellmeet/notification/consumer/dto/NotificationMessage.java
around lines 6 to 13, Jackson deserialization can fail because the class only
exposes getters and the payload field is an abstract type without polymorphic
type info; add setters for all fields and annotate the class with
@JsonIgnoreProperties(ignoreUnknown = true) so unknown Kafka fields are allowed,
and update the abstract NotificationPayload class (separate file) to include
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, include = JsonTypeInfo.As.PROPERTY,
property = "payloadType") and @JsonSubTypes registrations for concrete payload
implementations; finally ensure producers include "payloadType" in the JSON
(e.g., "RESERVATION_CREATED") so Jackson can instantiate the correct subtype.
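Putting these suggestions together, a message on the notification topic would carry the payloadType discriminator alongside the fields defined by MessageHeader, NotificationInfo, and the payload. A hypothetical example; only the field names come from the DTOs in this PR, all values are illustrative:

```json
{
  "header": {
    "messageId": "a1b2c3",
    "timestamp": "2025-01-15T10:30:00",
    "source": "reservation-service"
  },
  "notification": {
    "type": "RESERVATION",
    "category": "CREATED",
    "recipient": "user-42",
    "recipientType": "USER"
  },
  "payload": {
    "payloadType": "RESERVATION_CREATED"
  }
}
```

Note that binding the LocalDateTime timestamp with a plain ObjectMapper requires the JavaTimeModule (jackson-datatype-jsr310); Spring Boot registers it automatically when the dependency is on the classpath.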

@@ -0,0 +1,4 @@
package com.wellmeet.notification.consumer.dto;

public abstract class NotificationPayload {
}