High-accuracy RAG for answering questions from scientific documents, with citations
Python · 7.1k stars · 699 forks
Gymnasium framework for training language model agents on constructive tasks
Python · 154 stars · 18 forks
Agent framework for constructing language model agents and training them on constructive tasks
Python · 68 stars · 10 forks
Evaluation dataset intended to benchmark AI systems on capabilities foundational to scientific research in biology
Python · 41 stars · 2 forks
LitQA Eval: a difficult set of scientific questions that require the context of full-text research papers to answer
Python · 38 stars · 5 forks
An Aviary-based data science agent built on Jupyter notebooks
Benchmark for LLM-based Agents in Computational Biology
FutureHouse fork of trl
Central LLM client for use by Aviary and PaperQA