Conversation

@yondonfu yondonfu commented Jan 13, 2026

Context

For my use case, I want to specify a local path instead of relying on the default HF hub cache at ~/.cache/huggingface/hub.

Summary

  • Add ae_uri and prompt_encoder_uri parameters to the WorldEngine constructor for flexible model loading
  • Fall back to a local path in InferenceAE.from_pretrained when the HuggingFace download fails
  • Allow overriding the default config URIs at runtime
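The fallback behavior described above can be sketched as follows. Note this is a hedged illustration, not the PR's actual code: `resolve_model_uri` and `download_fn` are hypothetical names standing in for whatever helper `InferenceAE.from_pretrained` uses internally.

```python
import os

def resolve_model_uri(uri, download_fn):
    """Try the hub download first; if it fails, fall back to treating
    `uri` as a local path (sketch only -- the real logic in
    InferenceAE.from_pretrained may differ)."""
    try:
        # e.g. a HuggingFace Hub download that returns a cached path
        return download_fn(uri)
    except Exception:
        # Fallback: accept the URI as an existing local file or directory.
        if os.path.exists(uri):
            return uri
        raise
```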

Test plan

  • Verify loading models from HuggingFace Hub still works
  • Verify loading models from local paths works

🤖 Generated with Claude Code

quant: Optional[str] = None,
model_config_overrides: Optional[Dict] = None,
ae_uri: Optional[str] = None,
prompt_encoder_uri: Optional[str] = None,
Collaborator

Can we make these part of self.model_cfg, which can be updated via model_config_overrides?

The ae and prompt encoder are bound to the model.

Author

This is fixed; the local paths can now be passed with this syntax:

WorldEngine(
    model_config_overrides={
        "ae_uri": ...,
        "prompt_encoder_uri": ...,
    }
)
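A minimal sketch of how such overrides might be merged into the model config. `DEFAULT_MODEL_CFG`, `build_model_cfg`, and the placeholder hub IDs are hypothetical, not the repository's actual names:

```python
# Hypothetical default config; the hub IDs below are placeholders.
DEFAULT_MODEL_CFG = {
    "ae_uri": "some-org/some-ae",
    "prompt_encoder_uri": "some-org/some-encoder",
}

def build_model_cfg(model_config_overrides=None):
    """Return the model config with any runtime overrides applied,
    so ae_uri / prompt_encoder_uri stay bound to the model config."""
    cfg = dict(DEFAULT_MODEL_CFG)
    if model_config_overrides:
        cfg.update(model_config_overrides)
    return cfg
```

Keeping the URIs inside the config (rather than as separate constructor parameters) means one override mechanism covers both hub IDs and local paths.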


Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
