Need to think about the next step after sentence classification, i.e. fact extraction.

- An assessment of how our methods perform with respect to annotation
  - Set up a workflow comparing three methods:
    1. Extract sentences with BioBERT (current production pipeline), then feed them to an LLM (GPT-4o) and ask it to extract an annotation (see the extraction sketch at the end of this issue)
    2. Use the full text and ask an LLM (GPT-4o) to extract an annotation
    3. A curator creates an annotation from the text
  - What are the differences between the three methods?
  - How do we evaluate/score the results? Annotation-based? (see the scoring sketch below)
- An assessment of how our methods perform on literature from other organisms
  - Focus on co-published species?
  - Prioritize SGD > ZFIN > Xenbase
  - How to get the list of co-published papers from Postgres (see the query sketch below)
- A more direct comparison with other methods, e.g. rule-based methods such as Textpresso category searches or RLIMS-P (they have an API)
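
A minimal sketch of method 1, feeding classifier-positive sentences to GPT-4o and asking for an annotation. The prompt wording and the output fields (`gene`, `term`, `evidence_sentence`) are assumptions for illustration, not a settled annotation schema; it uses the `openai` Python client (v1 API).

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical output fields; the real annotation schema is still to be decided.
PROMPT = (
    "You are a biocuration assistant. From the sentences below, extract gene "
    "function annotations. Return a JSON object with key 'annotations' holding "
    "a list of objects with keys 'gene', 'term', and 'evidence_sentence'. "
    "Use an empty list if no annotation is supported.\n\n{sentences}"
)

def extract_annotations(sentences: list[str]) -> list[dict]:
    """Ask GPT-4o to turn classifier-positive sentences into candidate annotations."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        temperature=0,  # keep runs comparable across evaluation rounds
        response_format={"type": "json_object"},
        messages=[{"role": "user", "content": PROMPT.format(sentences="\n".join(sentences))}],
    )
    return json.loads(resp.choices[0].message.content).get("annotations", [])
```

Method 2 would be the same call with the full text in place of the extracted sentences, which also makes the two LLM arms directly comparable.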
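One possible reading of "annotation based" scoring: treat the curator's annotations as the gold standard and compute precision/recall/F1 over exact-match annotation tuples. The `(gene, term)` tuple shape is an assumption; real annotations would need entity normalization (e.g. to gene and GO IDs) before comparison.

```python
def score_annotations(predicted: set[tuple], gold: set[tuple]) -> dict:
    """Precision/recall/F1 over exact-match annotation tuples, e.g. (gene, term).

    Assumes both sides are normalized to the same identifiers; fuzzy or
    partial-credit matching would need a more elaborate comparator.
    """
    tp = len(predicted & gold)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Example with hypothetical IDs:
pred = {("WBGene00006789", "GO:0008340"), ("WBGene00000912", "GO:0006914")}
gold = {("WBGene00006789", "GO:0008340")}
print(score_annotations(pred, gold))  # precision 0.5, recall 1.0
```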
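A sketch of pulling co-published papers from Postgres. The table and column names (`paper_species` with `paper_id`/`taxon_id`) are hypothetical stand-ins for the actual schema, and the C. elegans anchor taxon is an assumption; the query just finds papers linked to more than one species.

```python
import psycopg2

# Hypothetical schema: one row per (paper, species) link.
# Adjust table/column names to the real database before use.
QUERY = """
SELECT paper_id
FROM paper_species
GROUP BY paper_id
HAVING COUNT(DISTINCT taxon_id) > 1
   AND bool_or(taxon_id = 'NCBITaxon:6239');  -- C. elegans plus >=1 other species
"""

def copublished_papers(dsn: str) -> list[str]:
    """Return IDs of papers associated with more than one species."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(QUERY)
        return [row[0] for row in cur.fetchall()]
```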