
Conversation

@jannismoore
Contributor

This aims to simplify the entry for others.

@KyleVasulka

I think the repo currently does not work on macOS (even with the changes mentioned here). Even after installing bitsandbytes and torch for macOS on Apple Silicon, it seems to expect triton (which I don't believe is available on macOS).

Additionally, there seems to be an issue in silentcipher/server.py (used in encode_wav): it tries to convert an MPS tensor to float64, but the MPS framework doesn't support float64, so float32 should be used instead.

[screenshot: CleanShot 2025-03-13 at 23 11 33@2x]

I think it is not as easy as swapping out the device type for `mps`.


```diff
 model_path = hf_hub_download(repo_id="sesame/csm-1b", filename="ckpt.pt")
-generator = load_csm_1b(model_path, "cuda")
+generator = load_csm_1b(model_path, "cuda")  # Use "mps" for Apple Silicon or "cpu" for Intel MacBooks
```
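Rather than hard-coding the device string, one option is to detect the best available backend at runtime. A minimal sketch, assuming torch is installed (the `pick_device` helper is an illustration, not part of the repo):

```python
import torch

def pick_device() -> str:
    """Return the best available torch device string."""
    if torch.cuda.is_available():
        return "cuda"   # NVIDIA GPU
    if torch.backends.mps.is_available():
        return "mps"    # Apple Silicon
    return "cpu"        # Intel MacBooks and everything else

print(pick_device())
```

The returned string could then be passed straight to `load_csm_1b(model_path, pick_device())`, though as noted below, the MPS path still needs the triton and float64 issues resolved.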
Contributor


I don't think this will work

Contributor


Hmm, actually I might be wrong. Seems like it should work if we set `NO_TORCH_COMPILE=True` to disable triton.

@jannismoore jannismoore marked this pull request as draft March 14, 2025 04:25
@jannismoore
Contributor Author

> I think the repo currently does not work on macOS (even with the changes mentioned here). Even after installing bitsandbytes and torch for macOS on Apple Silicon, it seems to expect triton (which I don't believe is available on macOS).
>
> Additionally, there seems to be an issue in silentcipher/server.py (used in encode_wav): it tries to convert an MPS tensor to float64, but the MPS framework doesn't support float64, so float32 should be used instead.
>
> I think it is not as easy as swapping out the device type for `mps`.

Thanks, Kyle, you're right. I've been running into the same situation.
I'll aim to spend more time going through our options.

@ZackHodari
Collaborator

Triton can be disabled with this env var

```
NO_TORCH_COMPILE=True
```

Mimi compiles lazily at runtime, which requires triton. Compilation didn't provide any speed-up right now, so it was turned off to simplify requirements.
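For example, the variable can be exported in the shell or set from Python before the model code is imported (the script name in the comment is an assumption):

```python
import os

# Must be set before the csm generator module is imported, so the lazy
# torch.compile path (which pulls in triton) is never taken.
os.environ["NO_TORCH_COMPILE"] = "True"

# Equivalently, from the shell:
#   NO_TORCH_COMPILE=True python your_script.py
```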

@lewiswatson55

Hey, just wanted to hop in here and post my hacky workaround: forcing my MacBook (Apple Silicon) to use cpu. It will still hit the silentcipher/server.py encode_wav error mentioned before, but if you replace line 317 with `msg_enc = torch.tensor(msgs, dtype=torch.float32, device=self.device).unsqueeze(0)` you can get the model running.

Not the most elegant solution but worked for testing things :)

@dw61

dw61 commented Mar 14, 2025

@lewiswatson55 literally we figured out the same hack!

@ZackHodari
Collaborator

Thanks for finding this

```python
msg_enc = torch.tensor(msgs, dtype=torch.float32, device=self.device).unsqueeze(0)
```

It has been merged into our silentcipher fork. I've tested on Linux CPU/GPU, but not on a MacBook.
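For anyone verifying locally, a minimal sketch of the working dtype, assuming torch is installed. Here `msgs` is a placeholder payload and the device fallback is an illustration; in silentcipher the real values come from the server instance:

```python
import torch

msgs = [1, 0, 1, 1]  # placeholder payload; the real `msgs` comes from silentcipher
device = "mps" if torch.backends.mps.is_available() else "cpu"

# MPS does not support float64, so a float64 tensor conversion raises on Apple
# Silicon. Pinning the dtype to float32 (as in the merged fix) avoids this:
msg_enc = torch.tensor(msgs, dtype=torch.float32, device=device).unsqueeze(0)
print(msg_enc.dtype, tuple(msg_enc.shape))
```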



7 participants