I would like to know whether vs-mlrt supports FDAT models, or fp16 models in general?
I tried using the following model with TRT at 720x480, but I got some errors, and the output image looked weird: black and white with lots of cracks.
2x_animefilm_light_161k_fp16_static_720x480.onnx
import vapoursynth as vs
core = vs.core

src = core.ffms2.Source(r'C:\anime_720x480.mkv')
clipes = core.resize.Spline64(src, 720, 480, format=vs.RGBS, matrix_in_s="709")
clipes = core.trt.Model(clipes, engine_path=r"C:\2x_animefilm_light_161k_fp16_static_720x480.engine")
clipes = core.resize.Spline64(clipes, 1280, 720, format=vs.YUV420P10, matrix_s="709")
clipes.set_output()
I used the following trtexec flags to generate the engine.
--fp16 --inputIOFormats=fp32:chw --outputIOFormats=fp32:chw --tacticSources=+CUDNN,+CUBLAS --builderOptimizationLevel=2 --skipInference
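For completeness, this is roughly the full trtexec invocation those flags came from (the input/output paths here are placeholders, not my exact command):

```shell
trtexec \
  --onnx=2x_animefilm_light_161k_fp16_static_720x480.onnx \
  --saveEngine=2x_animefilm_light_161k_fp16_static_720x480.engine \
  --fp16 \
  --inputIOFormats=fp32:chw --outputIOFormats=fp32:chw \
  --tacticSources=+CUDNN,+CUBLAS \
  --builderOptimizationLevel=2 \
  --skipInference
```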
If it works directly with ONNX, that would be good too. I tested it with TRT because it technically works in VideoJanai, which uses vs-mlrt under the hood.
I'd appreciate it if someone with more experience could test it and point out where I'm going wrong.
Thank you.