Improvement/llm question styling #1183
base: master
Conversation
Codecov Report
Attention: Patch coverage is …
Additional details and impacted files

@@            Coverage Diff             @@
##           master    #1183      +/-   ##
==========================================
- Coverage   36.54%   36.50%    -0.04%
==========================================
  Files         432      435        +3
  Lines       19220    19271       +51
  Branches     5667     5678       +11
==========================================
+ Hits         7023     7034       +11
- Misses      12159    12199       +40
  Partials       38       38

☔ View full report in Codecov by Sentry.
All the changes listed look good, both in code style and page styling!
While we're making styling updates anyway, I have a new request: the tooltip for remaining attempts looks odd, and the Figma design doc has a different look we should probably use instead (I think it looks better and is consistent with the rest of the site). To test it out, I coded it up in this branch, so if you agree we should use it, that saves the effort of building it twice.
@@ -124,6 +125,9 @@ const GameboardBuilderRow = (
     {question.subtitle && <>
         <span className="small text-muted d-none d-sm-block">{question.subtitle}</span>
     </>}
+    {question.tags?.includes("llm_question_page") && <div className="ms-n1 my-2 mb-lg-0">
+        <LLMFreeTextQuestionIndicator small />
+    </div>}
I think this "LLM marked question" icon in the search/gameboard builder makes the card quite vertically clunky. For maximum clarity this is probably still the best approach for now, but it might be worth turning it into an inline icon with a tooltip at some point in the future, once users are accustomed to these questions. (The tooltip would say "LLM marked question" instead of the label being displayed directly; see the sketch below.)
Good for now, but I certainly think it's worth discussing with RPF.
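For illustration, a minimal sketch of that inline variant, assuming Reactstrap's `UncontrolledTooltip` (already used by the app); the component name and icon class here are hypothetical and would need to match the design system:

```tsx
import React from "react";
import {UncontrolledTooltip} from "reactstrap";

// Sketch only: renders a small inline icon whose meaning is conveyed by a
// tooltip rather than a full-width badge. `icon-llm` is a placeholder class.
export const LLMInlineIndicator = ({questionId}: {questionId: string}) => {
    const targetId = `llm-indicator-${questionId}`;
    return <>
        <span id={targetId} className="icon-llm" aria-label="LLM marked question"/>
        <UncontrolledTooltip placement="top" target={targetId}>
            LLM marked question
        </UncontrolledTooltip>
    </>;
};
```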
@@ -156,7 +158,7 @@ export const IsaacQuestion = withRouter(({doc, location}: {doc: ApiTypes.Questio
     isInlineQuestion ? {disabled: !canSubmit, value: submitButtonLabel, type: "submit", onClick: () => {
         submitInlineRegion(inlineContext, currentGameboard, currentUser, pageQuestions, dispatch, hidingAttempts);
     }} :
-    /* else ? */ {disabled: !canSubmit, value: submitButtonLabel, type: "submit"};
+    /* else ? */ {disabled: !canSubmit || awaitingFeedback, value: submitButtonLabel, type: "submit"};
Not a major problem, but worth noting in case it wasn't already known: a student can get around this requirement to submit feedback before re-attempting simply by refreshing the page.
It still definitely helps us get more feedback, and I don't think(?) there's a simple way to prevent it given how we're recording feedback in the database, so I'm just flagging it clearly here.
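For completeness, one partial client-side mitigation (not part of this PR, and defeatable by clearing storage) would be to persist the awaiting-feedback flag per question across reloads. A hypothetical sketch, with all names invented for illustration:

```ts
// Hypothetical sketch: persist the awaiting-feedback flag in sessionStorage
// so a page refresh alone doesn't clear it. Storage can still be wiped
// manually, so this only raises the effort of the bypass, not removes it.
const feedbackKey = (questionId: string) => `llmFeedbackPending-${questionId}`;

export function markAwaitingFeedback(questionId: string): void {
    sessionStorage.setItem(feedbackKey(questionId), "true");
}

export function clearAwaitingFeedback(questionId: string): void {
    sessionStorage.removeItem(feedbackKey(questionId));
}

export function isAwaitingFeedback(questionId: string): boolean {
    return sessionStorage.getItem(feedbackKey(questionId)) === "true";
}
```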
This PR implements the improvements to LLM-marked free-text questions suggested by the RPF design team.
The most consequential change is that users can no longer re-submit an answer to a question until they have responded with their self-assessment of the marks provided by the LLM. The self-assessment is an important part of the pedagogical value of this question type, so we would like to encourage it. It should also help the accuracy of the automatic marking improve over time, by letting us review cases where students disagree with the LLM's marks and adjust the prompt accordingly.
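In outline, the gating behaves like the following simplified sketch; the names are illustrative rather than taken from the actual component, where `awaitingFeedback` is derived from the latest LLM-marked attempt:

```tsx
import React, {useState} from "react";

// Simplified sketch of the re-submission gate described above.
const LLMQuestionSketch = () => {
    const [awaitingFeedback, setAwaitingFeedback] = useState(false);

    const submitAnswer = () => {
        // ... send the answer off for LLM marking ...
        setAwaitingFeedback(true);  // block re-submission until self-assessment
    };

    const submitSelfAssessment = () => {
        // ... record agreement/disagreement with the LLM's marks ...
        setAwaitingFeedback(false); // re-enable the submit button
    };

    return <div>
        <button disabled={awaitingFeedback} onClick={submitAnswer}>Check my answer</button>
        {awaitingFeedback && <button onClick={submitSelfAssessment}>Send feedback on marks</button>}
    </div>;
};
```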