🎯 Intended Audience
Translators, translation trainees, educators, RLHF data curators
This repository provides a structured and retrainable self-review template for translation quality assurance, focusing on sentence-level structural fidelity and AI-alignment suitability.
- Semantic Layout: Semantic core retention, syntax node mapping
- Structural Flow: Syntactic trunk stability, punctuation, conjunction usage
- Spacetime Dynamics: Temporal, causal, and emotional sequencing
- FPE Standards: Post-editing corrections, style appropriateness, register consistency
- Neural Alignment: Syntactic reproducibility, parallel corpus suitability
- Optional: Translation Resonance (rhythm, imagery, poetic cadence)
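The five axes (plus the optional resonance axis) can be captured as a simple record. A minimal sketch in Python; the snake_case axis keys, the 1–5 scale, and the `SelfReview` class are illustrative assumptions, not something this repository specifies:

```python
# Hypothetical record for one five-axis self-review.
# Axis keys mirror the list above; the 1-5 scale is an assumption.
from dataclasses import dataclass, field
from typing import Optional

AXES = [
    "semantic_layout",
    "structural_flow",
    "spacetime_dynamics",
    "fpe_standards",
    "neural_alignment",
]

@dataclass
class SelfReview:
    source: str
    translation: str
    scores: dict = field(default_factory=dict)  # axis -> score (1..5)
    resonance: Optional[int] = None             # optional sixth axis
    comments: str = ""

    def mean_score(self) -> float:
        """Average over the five core axes that have been scored."""
        vals = [self.scores[a] for a in AXES if a in self.scores]
        return sum(vals) / len(vals) if vals else 0.0

review = SelfReview(
    source="The cat sat on the mat.",
    translation="猫がマットの上に座った。",
    scores={a: 4 for a in AXES},
)
print(review.mean_score())  # 4.0
```

A dict-of-scores keeps the record trivially serializable to the JSON/CSV exports mentioned below.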
- `docs/system_instructions.md`: Full specification (8,000 chars; transparency & reproducibility)
- `docs/scoring_logic.md`: Pseudo BLEU/ROUGE structural scoring logic
- `docs/self_review_axes.md`: Five-axis evaluation guide
- `docs/examples.md`: Sample reviews with revision proposals
- `examples/en-ja_review.md`: English→Japanese review case
- `examples/cn-ja_review.md`: Chinese→Japanese review case
- `examples/en-fr_review.md`: English→French review case
- `docs/`: Full specifications & guides
- `examples/`: Case studies (EN→JA, CN→JA, EN→FR, …)
- `templates/`: Self-review forms for Notion, Google Docs, Word (planned)
✅ Usable as a self-review sheet in Notion / Google Docs / Word
✅ Compatible with BLEU/ROUGE auto-scores (comparative comments)
✅ Exportable to JSON/CSV for RLHF training and translation QA education
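The JSON→CSV export step could look like the following sketch; the `axes` field layout and the `review_to_csv` helper are assumptions for illustration, since the repository does not pin down a schema here:

```python
# Hypothetical converter: flatten a review JSON object into CSV text,
# one row per evaluation axis. Field names are assumptions.
import csv
import io

def review_to_csv(review: dict) -> str:
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["axis", "score", "comment"])
    for axis, entry in review["axes"].items():
        writer.writerow([axis, entry["score"], entry.get("comment", "")])
    return buf.getvalue()

sample = {
    "axes": {
        "semantic_layout": {"score": 4, "comment": "core meaning kept"},
        "structural_flow": {"score": 3},
    }
}
print(review_to_csv(sample))
```

One row per axis (rather than one wide row per review) keeps the CSV easy to aggregate across many reviews in a QA course or RLHF curation pipeline.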
Example self-review workflow:
```bash
# Evaluate a translation draft with pseudo scoring
python evaluate.py --input my_translation.txt --axes 5 --output review.json
```
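For the "pseudo BLEU/ROUGE" structural scoring that `docs/scoring_logic.md` refers to, a minimal n-gram-precision sketch gives the flavor. This is an illustrative toy, not the repository's actual scoring logic:

```python
# Toy pseudo-BLEU: average unigram and bigram precision of a candidate
# translation against a reference. Illustrative only.
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def pseudo_bleu(candidate: str, reference: str) -> float:
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in (1, 2):
        c, r = ngrams(cand, n), ngrams(ref, n)
        overlap = sum(min(count, r[gram]) for gram, count in c.items())
        total = sum(c.values())
        precisions.append(overlap / total if total else 0.0)
    return sum(precisions) / len(precisions)

score = pseudo_bleu("the cat sat on the mat", "the cat is on the mat")
print(round(score, 4))
```

A real pipeline would add brevity penalties and ROUGE-style recall, but even this toy is enough to attach the "comparative comments" the checklist mentions alongside each axis score.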