This is great work, and thank you for sharing the code!
I tried running the code with your chair data, and it did show a significant improvement over the L1 result.
Then I edited render_scan.py to read data from Zhou's fountain model: I chose around 30 RGBD frames as key frames and used the code from "Let there be color" to generate the obj and mtl files from those 30 key frames. I ran the code with the default parameters (λ=10.0, iter=4001).
But I don't get results as fine as those shown in the supplemental material.
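For reference, here is roughly how my edited render_scan.py reads the key frames. This is only a sketch of my change: the directory layout, file naming, frame indices, and the load_keyframes helper below are specific to my local copy of the fountain data, not part of the original repo.

```python
import os
import cv2
import numpy as np

DATA_DIR = "data/fountain"      # where I unpacked the fountain RGBD sequence (my layout)
KEY_FRAMES = range(0, 300, 10)  # ~30 key frames, every 10th frame (my choice)

def load_keyframes(data_dir, indices):
    """Load the selected RGB/depth pairs as numpy arrays."""
    frames = []
    for i in indices:
        rgb = cv2.imread(os.path.join(data_dir, "rgb", f"{i:06d}.png"))
        depth = cv2.imread(os.path.join(data_dir, "depth", f"{i:06d}.png"),
                           cv2.IMREAD_UNCHANGED)  # 16-bit depth, millimeters
        if rgb is None or depth is None:
            raise FileNotFoundError(f"missing frame {i}")
        frames.append((cv2.cvtColor(rgb, cv2.COLOR_BGR2RGB),
                       depth.astype(np.float32) / 1000.0))  # mm -> meters
    return frames

frames = load_keyframes(DATA_DIR, KEY_FRAMES)
# The mesh itself (obj + mtl) comes from running "Let there be color"
# on the same 30 key frames; render_scan.py then consumes that mesh.
```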
Here's what I get:

Before (L1):
[screenshot]

After 4001 iterations:
[screenshot]
Am I missing something needed to run this kind of RGBD dataset?
Thanks again for the interesting work!