WIP feat: Init commit for Rust backend #1180
Conversation
Signed-off-by: GitHub <noreply@github.com>
cc @lu-zero
Co-authored-by: Luca Barbato <luca.barbato@gmail.com>
Signed-off-by: Aisuko <urakiny@gmail.com>
The progress of the Rust backend:
Really appreciate your help @lu-zero, but I still need your help on the "burn" backend. Thank you.
An idea for choosing the default burn backend for the Rust backend: #1219
I get stuck on an issue like the one below (only in debug mode); it may be related to the Rust bindings for the PyTorch C++ API:

dyld[15803]: Library not loaded: @rpath/libtorch_cpu.dylib
Referenced from: <B583CD33-2743-323A-B503-5781B34C078F> /Users/tifa/Downloads/workspace/LocalAI/backend/rust/target/debug/deps/server-bc3eca19368e3b4a
Reason: no LC_RPATH's found

This makes it hard to debug the program. I am going to refactor some code and add IDE settings files, to make sure it is easy for anyone to debug the program.
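A common cause of this kind of dyld failure (a hedged guess, not verified against this repo) is that the tch crate links against libtorch but the dylib is not on the macOS loader path at run time. A minimal sketch, assuming libtorch is unpacked at /opt/libtorch (that path is a placeholder, not from the PR):

```shell
# Sketch only: tell tch where libtorch lives and add its lib directory to
# the macOS dynamic loader path. /opt/libtorch is a placeholder -- use
# wherever libtorch is actually installed.
export LIBTORCH=/opt/libtorch
export DYLD_LIBRARY_PATH="$LIBTORCH/lib:$DYLD_LIBRARY_PATH"
# then build and run as usual, e.g.: cargo run
```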
It seems to look for libtorch and fails to find it. If you use the ndarray backend, does it work?
I will try it and give feedback.
On the M1, the wgpu backend is probably the nicest to use, but ndarray is the one that does not depend on the host system.
Thanks a lot. I have made some changes here and migrated the code that is included. Here I hit an issue with reshaping the Tensor, so we can try to implement a simpler model instead of getting stuck on Llama2.
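For context on the reshape issue, the invariant that usually bites is simple: on a row-major tensor, reshape never moves data, it only swaps the shape, and the element counts must match. A plain-Rust illustration (not burn code; the function name is mine):

```rust
// Illustration only (not burn's API): reshape as a pure shape swap over a
// row-major buffer. It succeeds iff the element counts agree.
fn try_reshape(shape: &[usize], new_shape: &[usize]) -> Result<Vec<usize>, String> {
    let old: usize = shape.iter().product();
    let new: usize = new_shape.iter().product();
    if old == new {
        Ok(new_shape.to_vec())
    } else {
        Err(format!(
            "cannot reshape {:?} ({} elems) into {:?} ({} elems)",
            shape, old, new_shape, new
        ))
    }
}

fn main() {
    assert!(try_reshape(&[2, 3], &[3, 2]).is_ok()); // 6 elems -> 6 elems
    assert!(try_reshape(&[2, 3], &[2, 2]).is_err()); // 6 elems -> 4 elems
    println!("reshape invariant holds");
}
```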
Signed-off-by: Aisuko <urakiny@gmail.com>
Signed-off-by: Aisuko <urakiny@gmail.com>
// And now the nonlinear scale
let min_log_hz = 1000.0; // beginning of log region (Hz)
let min_log_mel = (min_log_hz - f_min) / f_sp;
let logstep = (6.4f64).ln() / 27.0; // step size for log region
Those constants are repeated; since they are always f64 you can just keep them as consts.
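What the suggestion amounts to, sketched in plain Rust: the literals in the diff follow the Slaney/librosa mel-scale convention, so `F_MIN = 0.0` and `F_SP = 200.0 / 3.0` below are assumptions based on that convention, not values taken from this PR.

```rust
// Slaney-style Hz -> mel conversion with the repeated literals lifted into
// consts, as the review suggests. F_MIN and F_SP are assumed from the
// librosa convention; only MIN_LOG_HZ and logstep appear in the diff.
const F_MIN: f64 = 0.0; // assumed lowest frequency (Hz)
const F_SP: f64 = 200.0 / 3.0; // assumed mel spacing in the linear region
const MIN_LOG_HZ: f64 = 1000.0; // beginning of log region (Hz)

fn hz_to_mel(hz: f64) -> f64 {
    let min_log_mel = (MIN_LOG_HZ - F_MIN) / F_SP;
    let logstep = 6.4f64.ln() / 27.0; // step size for log region (not const: ln() isn't const)
    if hz < MIN_LOG_HZ {
        (hz - F_MIN) / F_SP
    } else {
        min_log_mel + (hz / MIN_LOG_HZ).ln() / logstep
    }
}

fn main() {
    // Under these assumed constants, 1000 Hz maps to mel 15 and 6400 Hz to mel 42.
    assert!((hz_to_mel(1000.0) - 15.0).abs() < 1e-9);
    assert!((hz_to_mel(6400.0) - 42.0).abs() < 1e-9);
    println!("mel(6400 Hz) = {}", hz_to_mel(6400.0));
}
```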
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Thank you, will do.
❌ Deploy Preview for localai failed.
let tensor3 = tensor2.transpose();
let tensor41 = tensor3.repeat(2, 2);
@lu-zero Here I am going to use the wgpu backend instead of tch. However, the repeat function here only supports repeating a dimension of size 1 ("Can only repeat dimension with dim=1"): https://github.com/Tracel-AI/burn/blob/b86bc5876149bd73bc59cb5197fd3ee8b92509d4/burn-tensor/src/tensor/ops/tensor.rs#L222C7-L222C7.
I have tried several solutions, such as using the internal Tensor functions swap_dims and flatten, but it is hard to say whether the result is correct, and it also causes other issues. Is there a better example for this?
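For reference, here is what repeating a non-singleton dimension is supposed to produce, sketched over a plain row-major `Vec<f64>` rather than burn's `Tensor` (the function and its names are mine, purely illustrative; it is a semantic sketch of the desired behaviour, not a workaround inside burn):

```rust
// Repeat a (rows x cols) row-major matrix `times` times along `dim`
// (0 = rows, 1 = cols), without the size-1 restriction discussed above.
fn repeat_dim(data: &[f64], rows: usize, cols: usize, dim: usize, times: usize) -> Vec<f64> {
    assert_eq!(data.len(), rows * cols);
    let mut out = Vec::with_capacity(rows * cols * times);
    match dim {
        0 => {
            // Stack the whole matrix `times` times along the row axis.
            for _ in 0..times {
                out.extend_from_slice(data);
            }
        }
        1 => {
            // Repeat each row `times` times along the column axis.
            for r in 0..rows {
                let row = &data[r * cols..(r + 1) * cols];
                for _ in 0..times {
                    out.extend_from_slice(row);
                }
            }
        }
        _ => panic!("only 2-D tensors supported in this sketch"),
    }
    out
}

fn main() {
    // [[1, 2], [3, 4]] repeated twice along dim 1 -> [[1, 2, 1, 2], [3, 4, 3, 4]]
    let m = [1.0, 2.0, 3.0, 4.0];
    assert_eq!(
        repeat_dim(&m, 2, 2, 1, 2),
        vec![1.0, 2.0, 1.0, 2.0, 3.0, 4.0, 3.0, 4.0]
    );
    println!("repeat_dim sketch ok");
}
```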
Asking upstream is probably the best route (sorry for the belated reply; I got very busy and the message got lost in my mailbox).
No worries, thanks for your support. I will continue to work on this one after I have applied for my PhD. I am currently very busy, but I still want to get this PR merged.
Once you are more free, please contact me; a good deal of the issues will probably be ironed out by upstream in the meantime :)
Description
This PR relates to #939
Notes for Reviewers
Signed commits