To properly evaluate a given LM, we require the implementation of a wrapper class subclassing the `lmms_eval.api.model.lmms` class that defines how `lmms_eval` should interface with your model. This guide walks through how to write this `lmms` subclass and add it to the library!
To get started contributing, go ahead and fork the main repo, clone it, create a branch for your model, and install the project requirements in your environment:
```bash
# After forking...
git clone https://github.com/<YOUR-USERNAME>/lmms-eval.git
cd lmms-eval
git checkout -b <model-type>
pip install -e .
```
Now, we'll create a new file where we'll be adding our model:
```bash
touch lmms_eval/models/<my_model_filename>.py
```
As a rule of thumb, we recommend using `lmms_eval/models/qwen_vl.py` and `lmms_eval/models/instructblip.py` as reference implementations for your model. You can copy and paste the contents of one of these files into your new file to get started.
All models must subclass the `lmms_eval.api.model.lmms` class. The `lmms` class enforces a common interface via which we can extract responses from a model:
```python
class MyCustomLM(lmms):
    #...
    def loglikelihood(self, requests: list[Instance]) -> list[tuple[float, bool]]:
        #...

    def generate_until(self, requests: list[Instance]) -> list[str]:
        #...
    #...
```
Where `Instance` is a dataclass defined in `lmms_eval.api.instance` with property `args` of request-dependent type signature, described below.
We support multiple request types, corresponding to different interactions / measurements with an autoregressive LM. Each request type takes as input `requests` of type `list[Instance]` whose `Instance.request_type` matches the method name. Overall, you can check `construct_requests` to see how the arguments are constructed for the different output types.
- `generate_until`
  - Each request contains `Instance.args : Tuple[str, dict]` containing 1. an input string to the LM and 2. a dictionary of keyword arguments used to control generation parameters.
  - In each `Instance.args` there will be 6 elements: `contexts, all_gen_kwargs, doc_to_visual, doc_id, task, split`. `contexts` refers to the formatted question and is the text input for the LMM; sometimes it may contain an image token and needs to be handled differently for different models. `all_gen_kwargs` is the dict that contains all the generation configuration for the model. We use `doc_id`, `task`, and `split` to access the dataset, and `doc_to_visual` is a function reference you can use to process the image. When you implement your own model, you should use these to write your own `generate_until` function (see the sketch after this list).
  - Using this input and these generation parameters, text will be sampled from the language model (typically until a maximum output length or specific stopping string sequences are reached; for example, `{"until": ["\n\n", "."], "max_gen_toks": 128}`).
  - The generated input+output text from the model will then be returned.
- `loglikelihood`
  - Each request contains `Instance.args : Tuple[str, str]` containing 1. an input string to the LM and 2. a target string on which the loglikelihood of the LM producing this target, conditioned on the input, will be returned.
  - In each `Instance.args` there will be 6 elements: `contexts, doc_to_target, doc_to_visual, doc_id, task, split`. `contexts` refers to the formatted question and is the text input for the LMM; sometimes it may contain an image token and needs to be handled differently for different models. `doc_to_target` is a function reference that gets the answer from the doc; this answer is the continuation of the context, and only the tokens belonging to this continuation should be counted toward the loglikelihood.
  - Each request will have, as a result, `(ll, is_greedy): Tuple[float, int]` returned, where `ll` is a floating point number representing the log probability of generating the target string conditioned on the input, and `is_greedy` is either `0` or `1`, with it being `1` if and only if the target string would be generated by greedy sampling from the LM (that is, if the target string is the most likely N-token string to be output by the LM given the input).
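To make the argument layout concrete, here is a minimal sketch of how a model might unpack these elements in its own `generate_until`. It is illustrative only: `self._run_backend` is a placeholder for whatever prompt construction, tokenization, and generation your backend actually performs, the `self.task_dict[task][split][doc_id]` lookup follows the access pattern used in the reference implementations, and `loglikelihood` is omitted. Real models (see `qwen_vl.py` or `instructblip.py`) also handle batching, chat templates, and device placement.

```python
from lmms_eval.api.instance import Instance
from lmms_eval.api.model import lmms


class MyCustomLM(lmms):
    # __init__ would load your backend model / processor here; omitted for brevity.

    def generate_until(self, requests: list[Instance]) -> list[str]:
        responses = []
        for request in requests:
            # The 6-tuple described above.
            contexts, all_gen_kwargs, doc_to_visual, doc_id, task, split = request.args

            # Fetch the raw document and turn it into visual inputs (e.g. PIL images),
            # following the access pattern used by the reference implementations.
            doc = self.task_dict[task][split][doc_id]
            visuals = doc_to_visual(doc)

            # Separate stop sequences from the rest of the generation kwargs,
            # e.g. {"until": ["\n\n", "."], "max_gen_toks": 128}.
            gen_kwargs = dict(all_gen_kwargs)
            until = gen_kwargs.pop("until", None)

            # Placeholder: generate text with your backend given the context and visuals.
            text = self._run_backend(contexts, visuals, stop=until, **gen_kwargs)
            responses.append(text)
        return responses
```

`loglikelihood` follows the same access pattern: unpack `contexts, doc_to_target, doc_to_visual, doc_id, task, split`, obtain the target continuation (e.g. via `doc_to_target`), score only the target tokens, and return one `(ll, is_greedy)` tuple per request.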
Congrats on implementing your model! Now it's time to test it out.
To make your model usable via the command line interface of `lmms_eval`, you'll need to tell `lmms_eval` what your model's name is.

This is done via a decorator, `lmms_eval.api.registry.register_model`. Using `register_model()`, one can both tell the package what name(s) the model should be invoked by, as in `python -m lmms_eval --model <name>`, and alert `lmms_eval` to the model's existence.
```python
from lmms_eval.api.registry import register_model

@register_model("<name1>", "<name2>")
class MyCustomLM(lmms):
    ...
```
The final step is to import your model in `lmms_eval/models/__init__.py`:

```python
from .my_model_filename import MyCustomLM
```
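Once your model is registered and imported, it can be selected by name from the command line. A quick smoke test might look something like the following, where `<name1>` is one of the names passed to `register_model` and the task name is just a placeholder for whichever task you want to run:

```bash
python -m lmms_eval --model <name1> --tasks <task_name>
```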