Hey, just thinking out loud:
Right now, models are fine-tuned on the WebLINX dataset. As models keep improving, getting the desired behaviour will mean fine-tuning each new model on WebLINX all over again, so we stay model-dependent and every upgrade to a newer model costs money. Is there some way this could become a model-agnostic framework?
Example: if GPT-5 or Llama 4 is launched, instead of fine-tuning it on the WebLINX dataset to get the desired behaviour, a framework could orchestrate the same outcome.
This would be a more cost-effective and sustainable approach.
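To make the idea concrete, here is a minimal sketch of what I mean by model-agnostic: a common "predict the next action" contract that any chat-capable LLM (hosted or local) can sit behind, with the WebLINX-style behaviour coming from the prompt (instructions plus in-context examples) rather than from fine-tuned weights. All class and function names below are hypothetical, not part of WebLINX, and the prompt/action format is only an assumption for illustration.

```python
# Hypothetical sketch: a model-agnostic "next action" interface.
# Assumption: behaviour is driven by prompting/in-context examples,
# not by weights fine-tuned on WebLINX.

from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class BrowsingState:
    """Snapshot of the episode: simplified DOM, chat history, past actions."""
    dom_snapshot: str
    chat_history: list[str]
    previous_actions: list[str]


class ActionModel(ABC):
    """Common contract any backend (GPT-5, Llama 4, ...) would satisfy."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Send a prompt to the underlying model and return its raw reply."""

    def predict_action(self, state: BrowsingState, few_shot_examples: str) -> str:
        """Build a WebLINX-style prompt and ask the backend for the next action."""
        prompt = (
            "You are a web navigation assistant. Given the page, the chat, and "
            "past actions, reply with exactly one action such as "
            "click(uid=...), say(utterance=...), or load(url=...).\n\n"
            f"{few_shot_examples}\n\n"
            f"## Page\n{state.dom_snapshot}\n\n"
            f"## Chat\n{chr(10).join(state.chat_history)}\n\n"
            f"## Previous actions\n{chr(10).join(state.previous_actions)}\n\n"
            "## Next action\n"
        )
        return self.complete(prompt)


class EchoBackend(ActionModel):
    """Placeholder backend so the sketch runs; swap in any real model client."""

    def complete(self, prompt: str) -> str:
        return 'say(utterance="(stub reply)")'


if __name__ == "__main__":
    state = BrowsingState(
        dom_snapshot="<button uid='b12'>Search</button>",
        chat_history=["user: find flights to Tokyo"],
        previous_actions=["load(url='https://example.com')"],
    )
    model: ActionModel = EchoBackend()
    print(model.predict_action(state, few_shot_examples="(in-context demos here)"))
```

With something like this, upgrading to a newer model would just mean writing another small backend that implements `complete`, instead of re-running fine-tuning on WebLINX.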
Let me know your thoughts!