Uniform buffers for shader props, allow variable number of channels #924
base: main
Conversation
…ean. some tests are failing with this revision, runtime not tried.
… i is clean." This reverts commit c51bffe.
ilan-gold left a comment
I think it's reasonable to ask in the deck.gl Slack or on GitHub what the implications of a custom ShaderAssembler would be. I really couldn't know offhand. As you can probably tell, I did a lot of this by just looking at the underlying code and ballparking what would be needed. The upside is it works. The downside is that it has a bunch of private accesses. At the time, I was under paper pressure :/

I don't feel too strongly about how off-the-beaten-path this goes as long as it works. We advertise breaking changes on the minor version, so I'm also not worried about any breaking changes. I'll try to download the deck.gl Slack on Monday again.
```diff
  const _RENDER = `\
- float intensityArray[6] = float[6](intensityValue0, intensityValue1, intensityValue2, intensityValue3, intensityValue4, intensityValue5);
+ float intensityArray[NUM_CHANNELS] = float[NUM_CHANNELS](intensityValue0, intensityValue1, intensityValue2, intensityValue3, intensityValue4, intensityValue5);
```
Yes, this is great!
Note that there is one other place where I have something more like

```glsl
float array[NUM_CHANNELS] = float[NUM_CHANNELS](
  value<VIV_CHANNEL_INDEX>,
);
```

and that also works...
Now I'm trying to get that to also work for `uniform vec2 contrastLimits<VIV_CHANNEL_INDEX>;` with the new shaderInputs UBO binding and finding it tricky.
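For what it's worth, the line-duplication semantics can be sketched roughly like this (a minimal JavaScript sketch; `expandChannelLines` is a hypothetical name, not Viv's actual implementation):

```javascript
// Hypothetical sketch: any line containing <VIV_CHANNEL_INDEX> is
// duplicated once per channel, with the placeholder replaced by the index.
function expandChannelLines(source, numChannels) {
  const TOKEN = '<VIV_CHANNEL_INDEX>';
  return source
    .split('\n')
    .flatMap(line =>
      line.includes(TOKEN)
        ? Array.from({ length: numChannels }, (_, i) =>
            line.replaceAll(TOKEN, String(i))
          )
        : [line]
    )
    .join('\n');
}

const expanded = expandChannelLines(
  [
    'float array[NUM_CHANNELS] = float[NUM_CHANNELS](',
    'value<VIV_CHANNEL_INDEX>,',
    ');'
  ].join('\n'),
  3
);
// expanded now contains value0, value1, value2, each on its own line
```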
Or rather... the syntax I mentioned works ok, but it's the new

```glsl
uniform xrLayerUniforms {
  vec2 contrastLimits<VIV_CHANNEL_INDEX>;
} xrLayer;
```

that I'm struggling with.
I think the issue may be that I was trying to set shaderInputs in the draw() method; moving this to updateState() makes a significant difference (just about to try this). Edit: also, working in JS rather than TS and having typos.
```js
// does any of this existing logic need to change?
// (like, is there ever more than one model?)
```
I think the way our extensions are structured, no, because the extension is attached to XRLayer, which does its own rendering?
Yes, I don't anticipate a problem there... In some places I'm looping over this.getModels() and it may be somewhat safer to do that a bit more consistently, but I'm pretty sure it isn't likely to matter in the foreseeable future. It's a slight balance between consistency and avoiding changes where there isn't a solid reason for them.
@ilan-gold Ib already expressed some enthusiasm about viv updating deck versions and I'm sure he'll be happy to give some more feedback at some point. I'll add a pointer to this PR over there now.
n.b. I've made all the CI pass and will try to keep commits clean WRT that, but bear in mind that I don't think the configuration is properly working for …
Also, I think the volume minimum-intensity projection is already buggy - it looks similar in this branch to the production avivator build, which is to say, not good.
Yes it is, should eventually fix that...
I am not sure that minimum-intensity projection has bugs; it is just that it is not really useful, conceptually, for most images. I think it was implemented because it is very easy once you implement the max-intensity projection. But why would I want to view the "minimum-intensity" pixels, when they are just going to correspond to the black space that I am not typically interested in? I think the only time it would really make sense is if the image colors have been inverted for some reason. (Maybe I am wrong, but before we assume it has bugs we should verify with another viewer that supports MinIP.)
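As a tiny illustration of the point (plain JavaScript, not Viv code): taking the minimum along a ray through mostly-empty volume data picks out the background rather than the signal, unless the intensities are inverted.

```javascript
// Samples along one ray through a volume: mostly empty space plus one bright voxel.
const raySamples = [0, 0, 12, 240, 3, 0];

const mip = Math.max(...raySamples);   // max-intensity projection keeps the signal: 240
const minip = Math.min(...raySamples); // min-intensity projection keeps background: 0

// With inverted intensities, MinIP recovers the bright structure instead.
const inverted = raySamples.map(v => 255 - v);
const minipInverted = Math.min(...inverted); // corresponds to the bright voxel: 15
```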
You're right @keller-mark - I was mostly speaking from an experience where I actually wanted the inversion and remember it not working properly. But I don't know for sure.
Anyway, for this PR, I think I found an actual concrete example of something that I don't think is currently broken with the changes, but easily could be. Where previously we had

```glsl
useColorValue = get_use_color_float(vTexCoord, 0);
rgb += max(0., min(1., intensity0)) * max(vec3(colors[0]), (1. - useColorValue) * vec3(1., 1., 1.));
useColorValue = get_use_color_float(vTexCoord, 1);
rgb += max(0., min(1., intensity1)) * max(vec3(colors[1]), (1. - useColorValue) * vec3(1., 1., 1.));
```

...in that particular instance, we now have

```glsl
for(int i = 0; i < NUM_CHANNELS; i++) {
  float useColorValue = get_use_color_float(vTexCoord, i);
  rgb += max(0., min(1., intensity[i])) * max(vec3(colors[i]), (1. - useColorValue) * vec3(1., 1., 1.));
}
```

Which is fine - but in other cases where we (or some hypothetical user with a custom extension) aren't able to loop over an array (and even if luma.gl did allow us to have arrays in uniforms, we still wouldn't be able to have arrays of samplers), it might be written like

```glsl
useColorValue = get_use_color_float(vTexCoord, ${I});
rgb += max(0., min(1., intensity${I})) * max(vec3(colors[${I}]), (1. - useColorValue) * vec3(1., 1., 1.));
```

which would then lead to the generated shader being

```glsl
useColorValue = get_use_color_float(vTexCoord, 0);
useColorValue = get_use_color_float(vTexCoord, 1);
rgb += max(0., min(1., intensity0)) * max(vec3(colors[0]), (1. - useColorValue) * vec3(1., 1., 1.));
rgb += max(0., min(1., intensity1)) * max(vec3(colors[1]), (1. - useColorValue) * vec3(1., 1., 1.));
```

So... the current line-duplicating logic is less than ideal. Without over-engineering it too much, we do probably need a way to guard against that particular kind of foot-gun and expose an appropriate abstraction. As alluded to, if I get a chance I may want to have extensions that do more elaborate forms of code generation, in which case I'd probably want a proper parser and more robust processes etc... but adding that to viv itself seems like overkill. I'll want a way of plugging in more processing steps where I can do such things (hopefully also of some use to others), but with a low surface area in terms of the API we expose from viv. Hope that makes sense.
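To make the foot-gun concrete, here's a minimal sketch (hypothetical helper, plain JavaScript, not Viv's actual code) showing how naive per-line duplication reorders the statements:

```javascript
// Hypothetical sketch of naive per-line expansion: each templated line is
// duplicated for every channel independently, so all copies of the first
// line land before any copy of the second - not interleaved per channel.
function expandPerLine(lines, numChannels) {
  return lines.flatMap(line =>
    Array.from({ length: numChannels }, (_, i) =>
      line.replaceAll('${I}', String(i))
    )
  );
}

const templated = [
  'useColorValue = get_use_color_float(vTexCoord, ${I});',
  'rgb += intensity${I} * useColorValue;' // simplified from the real snippet
];

const generated = expandPerLine(templated, 2);
// generated[0..1] are both useColorValue assignments, generated[2..3] both
// rgb accumulations - so each rgb line reads the channel-1 useColorValue.
```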
Whether we ultimately have a subclass of … Still need to get the …
So a different option would be to do our best, be super clear about what Viv does with its pipeline, and then just allow people to provide their own shaders completely (or almost completely, maybe just …).
Maybe - I'm hoping when I get back to this we can have something fairly clear and simple... I'm not actually sure it'll end up easier for anyone trying to figure out how we handle an 'almost complete shader' - I've a feeling the documentation and DX/props handling may not be so easy to figure out... Anyway, fairly confident in coming up with something decent when I get back to it, fingers crossed.
So - it's occurred to me that the semi-hypothetical situation I was worried about, where a duplicated template line expands into shader code with stale per-channel state, can actually be avoided by simply warning extension authors of the risk and being very explicit that they should instead collect things into arrays and process however they please... so we can keep just the simpler line-duplicating extension API rather than needing semantics for defining blocks of code. I don't think that would necessarily have been too disastrous - but there's a definite risk of it hiding some unintended complexity and still not allowing for more complex cases, like where someone might want nested permutations or whatever.

I think if anyone does actually want some complex nest of channel permutations, or just a bit of mutable state, they will be able to have it... so, I'll try to make the documentation as clear as possible, clean up the code etc (and actually implement the actual UBO binding bits, which are still mostly pending). But I think I'm now satisfied that there's a KISS version of this extension API which should allow implementation of whatever arbitrary stuff someone might hypothetically want... and it's likely that hardly anyone (even me) will actually need it anyway - so if it turns out that there are edge-cases that do require more complexity to support, we can cross that bridge when we come to it. Hope that makes sense.
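A sketch of the "collect into arrays" advice (hypothetical names, plain JavaScript driving the expansion): only the collection line is templated, so the duplicated lines are independent of each other and ordering can't go wrong.

```javascript
// Only the array-collection line carries the per-channel template; the
// per-channel logic then lives in an ordinary loop over the array.
const expand = (line, numChannels) =>
  Array.from({ length: numChannels }, (_, i) =>
    line.replaceAll('<VIV_CHANNEL_INDEX>', String(i))
  ).join('\n');

const collect = expand(
  'intensities[<VIV_CHANNEL_INDEX>] = intensity<VIV_CHANNEL_INDEX>;',
  3
);
// Each generated line is self-contained, so duplication order is irrelevant:
// intensities[0] = intensity0;
// intensities[1] = intensity1;
// intensities[2] = intensity2;
```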
…ations, apply to contrast limit processing in xr-layer. Updated expandShaderModule to expand both uniform types and shader code. Adjusted XRLayer to utilize new channel intensity module structure.
move VivShaderAssembler to extensions package, add NUM_PLANES template. faff with extensions build settings in package.json.
Awesome @xinaesthete - my biggest concern is definitely just that everything is 100% clear. If it's a small change or something subtle that most people won't notice, that's ok. Re-request review once you're ready and I'll have a look!
Yeah, it's a bit broken in some ways at the moment - mostly the colormaps in 2D that I'm aware of. Hoping to get that properly fixed soon and do a pass on documentation and cleanup; will let you know. It's a shame that the deck.gl layer-test stuff isn't really working - it may even be worth having a pass on that in a separate PR and merging it in. Once this is in a better state it might make sense to put the actual varying of the number of channels at runtime into a separate PR, but I think I'm reasonably satisfied that the API for that is fairly stable and working.
That seems reasonable!
Also - now that I think it should not be using any APIs that aren't in the latest deck.gl, I will try to do the actual deck.gl update soon and see whether everything does indeed still work. |
I might also be able to make a more detailed example of a moderately complex shader extension in the context of a spatialdata.js blog post with interactive sample. |
I would be in favor of decoupling the … Regarding the shader assembler, I think it will also need some unit tests that check the final assembled shader string.
Yes - this is where having a test runner with an actual GL context will be required. Related: I had bugs with rarely-used (non-viv) extensions in MDV after a deck update. The shader extension code fragments referred to the old uniform names rather than the newly namespaced ones, so any test that actually compiled the full shaders would have surfaced it - but the feature wasn't much used, so it took a while for me to notice (and then it presents with a very glitchy debug UI at runtime - extra glitchy in MDV, but not something you'd want to expose to users at the best of times).

So this is something that needs to be very clear in the documentation (I think it's mentioned, but that documentation isn't in its final form): if you have any custom extensions, as well as changing your own declarations you need to be careful about which things in the surrounding code may have changed. I was obviously conscious of the related issues as I did that update in MDV but managed to miss that particular case. I think we should make another PR to sort out that aspect of the test configuration, and then we can rebase/merge that into here. I don't mind looking into it if you like.
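Along the lines suggested, a string-level unit test might look something like this (hypothetical `assembleShader` standing in for whatever VivShaderAssembler actually exposes; not the real API):

```javascript
// Hypothetical stand-in for the assembler's string-expansion step:
// substitutes NUM_CHANNELS and duplicates <VIV_CHANNEL_INDEX> lines.
function assembleShader(template, numChannels) {
  const TOKEN = '<VIV_CHANNEL_INDEX>';
  return template
    .replaceAll('NUM_CHANNELS', String(numChannels))
    .split('\n')
    .flatMap(line =>
      line.includes(TOKEN)
        ? Array.from({ length: numChannels }, (_, i) =>
            line.replaceAll(TOKEN, String(i))
          )
        : [line]
    )
    .join('\n');
}

// A unit test can assert on the assembled string without any GL context.
const assembled = assembleShader(
  'uniform vec2 contrastLimits<VIV_CHANNEL_INDEX>;',
  2
);
console.assert(assembled.includes('contrastLimits0'));
console.assert(assembled.includes('contrastLimits1'));
console.assert(!assembled.includes('<VIV_CHANNEL_INDEX>'));
```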
Side-note - WebGPU is on the horizon as well, and can potentially be experimented with once the deck update is in place. That of course means porting all shader code, including any extensions - so it's something to bear in mind for the extension API design here. At least WebGPU contexts can be made outside a browser environment, so that aspect will hopefully not be too painful, but we'll need to think more about the general cost in terms of maintenance burden aside from anything else. Anyway - separate discussion really, but thought it worth mentioning.
I think a lower-hanging fruit would be to just check the assembled strings in a normal unit test. There seems to be a lot of string manipulation going on. Of course, beyond this, some kind of integration test that relies on WebGL could also help, but that seems more involved and could be done after the string unit test.

Since you bring up WebGPU, I believe that may obviate the need for the shader assembler logic here altogether, which is another reason we may want to split this into two separate PRs (potentially holding off on a WebGL implementation and waiting for WebGPU to support the variable number of channels). WebGPU supports "runtime sized arrays", so we can do the following:

```wgsl
struct Channel {
  window: vec2<f32>, // (min, max) for contrast adjustment
  color: vec3<f32>,  // RGB color for the channel
};

struct Uniforms {
  // other uniforms here
  // See "runtime sized arrays" info
  // Reference: https://webgpufundamentals.org/webgpu/lessons/webgpu-wgsl.html#runtime-sized-arrays
  channels: array<Channel>,
};
```

The main caveat is this (from the link in the wgsl comment above):
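On the JS side, feeding such a runtime-sized array means packing per-channel data with WGSL's struct layout rules in mind. A sketch under the assumption of the `Channel` struct above (the `packChannels` helper is hypothetical): `vec2<f32>` aligns to 8 bytes and `vec3<f32>` to 16, so each element occupies 32 bytes including padding.

```javascript
// Hypothetical packing of per-channel (window, color) data into a
// Float32Array suitable for an array<Channel> storage buffer.
// Layout per element (32 bytes = 8 floats):
//   floats 0-1: window (vec2<f32>)
//   floats 2-3: padding so color starts on a 16-byte boundary
//   floats 4-6: color (vec3<f32>)
//   float  7:   trailing padding (struct size rounds up to 32 bytes)
function packChannels(channels) {
  const FLOATS_PER_CHANNEL = 8;
  const data = new Float32Array(channels.length * FLOATS_PER_CHANNEL);
  channels.forEach((c, i) => {
    const o = i * FLOATS_PER_CHANNEL;
    data[o + 0] = c.window[0];
    data[o + 1] = c.window[1];
    data[o + 4] = c.color[0];
    data[o + 5] = c.color[1];
    data[o + 6] = c.color[2];
  });
  return data;
}
```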
ok, fair enough, I'll try to have a pass on that. Shouldn't be too problematic to have a unit-test along the lines you suggest, although also don't want to lose sight of integration testing as I think I'd have more trust/value in that - it'd have better capacity to catch real problems like the current issue with colormap here, or things like deck changing naming as mentioned before. |
We might still find ourselves somewhat constrained by how luma/deck handle props; that's the main contention here more so than what can be done with GLSL. It's not a problem that the array sizes are fixed at compile time - we can just compile with a new `NUM_CHANNELS`.
Whether the array-size is compiled in, or varied at runtime, I was hoping that we might have something that looked more like that array-of-structs with the change to UBO. |

Fixes #687
Background
See #687, as well as elsewhere, for discussion of UBO etc.
This is work-in-progress... started out just trying to update to the latest luma.gl/deck.gl to deal with some peer-dependency issues downstream. The latest versions remove the deprecated `setUniform` etc, so this is becoming more pressing. This is still WIP as of this writing, but I think it's starting to take shape as a way of supporting N channels without too much pain for extension authors.
Change List
- `model.shaderInputs` rather than `setUniform`
- uniforms set in `updateState()` rather than `draw()` - I think this is more in line with how things are supposed to be
- `MAX_CHANNELS = 6` is findable in the code etc...
- `NUM_CHANNELS` which should be used consistently where relevant
- `VivShaderAssembler` class for expanding lines with `<VIV_CHANNEL_INDEX>`...

Checklist