
Commit

pval threshold
slobentanzer committed Feb 15, 2024
1 parent 32a9909 commit f1d1a88
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion content/20.results.md
@@ -99,7 +99,7 @@ OpenAI's GPT models (gpt-4 and gpt-3.5-turbo) lead by some margin on overall per
Of note, while the newer version (0125) of gpt-3.5-turbo outperforms the previous version (0613) of gpt-4, version 0125 of gpt-4 shows a significant drop in performance.
The performance of open-source models appears to depend on their quantisation level, i.e., the bit-precision used to represent the model's parameters.
For models that offer quantisation options, performance apparently plateaus or even decreases after the 4- or 5-bit mark (Figure @fig:benchmark A).
- There is no apparent correlation between model size and performance (Pearson's r = 0.171, p = 9.59e-05).
+ There is no apparent correlation between model size and performance (Pearson's r = 0.171, p < 0.001).

To evaluate the benefit of BioChatter functionality, we compared the performance of models with and without the use of BioChatter's prompt engine for KG querying.
The models without prompt engine still have access to the BioCypher schema definition, which details the KG structure, but they do not use the multi-step procedure available through BioChatter.
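The one-line change above swaps an exact p-value (9.59e-05) for a thresholded bound (p < 0.001), a common reporting convention for very small p-values. A minimal sketch of such a reporting rule (the `format_pval` helper is hypothetical, not from the paper's codebase):

```python
def format_pval(p: float, threshold: float = 0.001) -> str:
    """Report an exact p-value above the threshold, a bound below it."""
    if p < threshold:
        return f"p < {threshold}"
    # Three significant digits for exactly reported values
    return f"p = {p:.3g}"

print(format_pval(9.59e-05))  # the value replaced in this commit
print(format_pval(0.0423))
```

Applied to the commit's value, 9.59e-05 falls below the 0.001 threshold and is reported as the bound rather than the exact figure.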
