Broken buffer detection - contrast method #94
Conversation
Nice! We can examine the dropped frames with the root denoise branch update. It'll be interesting to compare what this and mean_error each drop.
One processing concern is that the comparison unit doesn't match the buffer shapes used in data transfer and redefines the original block shape; I commented more about this inline. My guess is that with these blocks there will be more false positives/false negatives depending on the threshold, but I might be wrong.
Another request is to add some tests and do linting if possible, but since this isn't headed to the main branch, we can also take care of that later.
Main change in the new commit: I needed to change the mean_error buffer_split from 10 to 8, because cutting the buffer into 10 smaller pieces didn't work; it only goes up to 8, since 8 is the number of splits/chunks/buffers for the 200x200 frame. Also included some logging for debugging.
What do you mean when you say buffer_split 10 doesn't work? Does it just not detect errors correctly, or does it raise an error? (If it's an error, what kind?)
mio/process/frame_helper.py (outdated)
continue

mean_intensity = np.mean(block)
std_intensity = np.std(block)
I think what we want here is not the standard deviation of the whole block, but of neighboring pixels. Otherwise it seems like this would be tripped by an uncorrupted buffer that just has a very bright region and a very dark region.
[image] has a standard deviation of 112.7, and this [image] has a standard deviation of 127.5, and I can get the donut image to have the same standard deviation by increasing the size of the donut until half the pixels are 1 and half the pixels are 0.
If we instead use the second derivative (in this case over just the -1th axis, but you could also average the diffs over x and y), they are easily distinguishable.
>>> # the random image
>>> np.mean(np.diff(np.diff(speckle)))
np.float64(95.5089898989899)
>>> # the donut image
>>> np.mean(np.diff(np.diff(donut)))
np.float64(2.87030303030303)
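As a rough, reproducible sketch of this point (my own synthetic stand-ins, so the numbers won't match the ones quoted above, and it uses absolute values and a signed dtype rather than the raw snippet):

```python
import numpy as np

# Sketch only: synthetic stand-ins for the two example images above.
rng = np.random.default_rng(0)
speckle = (rng.integers(0, 2, (100, 100)) * 255).astype(np.int16)  # half 0, half 255

yy, xx = np.mgrid[:100, :100]
r = np.hypot(yy - 50, xx - 50)
donut = (((r > 20) & (r < 40)) * 255).astype(np.int16)  # bright ring on black

# mean absolute second derivative along the last axis:
# large for the speckle field, small for the smooth donut
print(np.mean(np.abs(np.diff(speckle, n=2, axis=-1))))
print(np.mean(np.abs(np.diff(donut, n=2, axis=-1))))
```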
Sort of related note. We just chatted that we'll probably need to combine detection methods because there are two modes of broken buffers now: (a) sandstorm and (b) all black (not showing up here, but this happens if the preamble or header is missed). SD won't be good for detecting the latter, and the mean error comparison needs two almost valid frames, so we'll need a fusion of these methods.
Doesn't have to be this PR, but we'll eventually have to combine these or think of a better detection method to reduce false positives/negatives.
agreed on having several, separable methods rather than one huge complicated one
I agree with your examples and explanation. Thanks for the suggestion @sneakers-the-rat, appreciate your contribution to this method.
I have been improving this. Take a look at the new code for this.
Instead of using the standard deviation of the entire block, the code now calculates the second derivatives of pixel intensities within the block to measure localized contrast.
Blocks with a high average of absolute second derivatives are flagged as noisy.
Using this approach I got somewhat mixed results. I couldn't really find a threshold where I don't get a lot of false positives, i.e. good frames that are falsely detected as noisy, which I didn't have in the earlier approach where I just looked at the standard deviation of one whole block.
Additionally, once we have all merged this, we could work on a method that combines both approaches.
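For concreteness, a minimal sketch of the block-wise second-derivative idea described above (the function name, signature, and threshold handling are my own, not the PR's actual implementation):

```python
import numpy as np

def noisy_blocks_gradient(frame: np.ndarray, buffer_size: int, threshold: float) -> list[bool]:
    """Flag buffer-sized blocks whose mean absolute second derivative is high.

    Sketch only: serialize the frame, cast to a signed dtype so differences
    don't wrap, split into buffer-sized blocks, and compare each block's
    localized-contrast score against the threshold.
    """
    serialized = frame.flatten().astype(np.int16)
    flags = []
    for start in range(0, len(serialized), buffer_size):
        block = serialized[start : start + buffer_size]
        if block.size < 3:  # need at least 3 samples for a second difference
            continue
        contrast = float(np.mean(np.abs(np.diff(block, n=2))))
        flags.append(contrast > threshold)
    return flags
```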
I think we need 'ground-truth' data with all the kinds of buffer errors and neural imaging we can expect, so we can run some tests and evaluate the method. Right now it's more of a visual evaluation.
Yeah, we need a manual annotation session...
Since we're merging this into the preprocessing branch, and I figure we'll need further work there on refactoring these into separable classes, I'm not commenting on the need for that here, but we do need tests for this - two kinds would be ideal:
- naturalistic, with a short video segment where we have "ground truth" labels for buffers/frames known to be corrupted - confirm that we label those and only those as corrupted
- unit-like, where we generate a frame with a normal image in it (like that donut image) and then randomly corrupt some buffer-shaped segment within it (rough sketch below)
I also think we need to not just use plain stdev, as I said in a comment, because it's not very specific to the corruption we're filtering for; I proposed an example alternative in the comments.
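A rough sketch of that unit-like test (detect_noisy_frame is a hypothetical stand-in for whatever the frame_helper method ends up being called):

```python
import numpy as np

def test_detects_corrupted_buffer():
    # Build a clean frame with real contrast (a bright ring, like the donut image)
    yy, xx = np.mgrid[:200, :200]
    r = np.hypot(yy - 100, xx - 100)
    clean = np.where((r > 40) & (r < 70), 200, 0).astype(np.uint8)

    # Corrupt one buffer-shaped segment of the serialized frame with speckle noise
    rng = np.random.default_rng(0)
    corrupted = clean.copy().reshape(-1)
    corrupted[10_000 : 10_000 + 5032] = rng.integers(0, 2, 5032, dtype=np.uint8) * 255
    corrupted = corrupted.reshape(200, 200)

    # detect_noisy_frame is hypothetical; the real check lives in frame_helper
    assert not detect_noisy_frame(clean)
    assert detect_noisy_frame(corrupted)
```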
OK, I linted so the tests would run. @MarcelMB check out https://miniscope-io.readthedocs.io/en/latest/meta/contributing.html#linting - your IDE should be checking this for you (it's way less annoying that way, having the IDE warn you about it as you're writing and do the autofixes), but otherwise just run the linter manually.
Edit: ope, I was thinking of another repo; we don't have tests dependent on code quality checks here, it's the PR not targeting main. I'll fix that, one sec.
Added tests with a sample video (very small, just a 60-frame segment) with a lot of the speckle noise errors of varying sizes. Currently the tests fail because we miss 5 of the frames with smaller patches. I think the more sensitive method described above would let us set a much lower threshold so we could catch those.
Totally agree. I changed the method so it's not just using plain stdev, and for tests I would happily work on a test video that we can pass through every time we develop a new filter to see its efficiency. Not sure how to write this kind of test in a Python package like this yet, but I would like to learn.
Passes all tests after pdm run test. Pushing the commit now; hopefully we can merge it with the preprocess branch soon. Anyway, it's not going to main yet, so we can develop further.
…ine before actually working on instantiating, add target wirefree pipeline config
So we don't merge code that fails tests except in rare circumstances, but I think the problem here is actually a pretty squirrelly one that is also pretty funny (and super easy to make). Just reading a frame from the video, we get something like this:
but then when we do a diff over it:
What the heck??? We're diffing over columns, so the top left value should be 59-62 = -3, but it's 253 instead! Since we're working with an unsigned 8-bit array, the subtraction wraps around. If we cast it to a signed dtype first, the diffs come out as expected (small demo below). So if we look at frame 22 (21 zero-indexed), which has a large corrupted block, and frame 25 (24 zero-indexed), which has a small corrupted block, we get values like these:
So it looks like our threshold can be quite low! None of the other uncorrupted frames really have a mean that goes outside of 3-5, and we should expect that to be pretty generalizable: even for frames that have very high contrast, the regions of real value change are relatively small compared to the solid chunks, so the large values fall out in the average.
Given that we also have lots of high-frequency banding in the horizontal direction, I think we might get even better specificity taking just the diffs over the horizontal direction, but that would make it less general. (This is a good case for separating the gradient-based method from the inter-frame comparison method, so each can have its own parameterization, and we can have something like an axis-selection config for the gradient method that doesn't apply to the inter-frame method.)
This would also give us a finer-grained mask: rather than a fixed chunk size, we can just run the diff over the whole array, find the rows that are noisy, and only reject those. I won't do that for now since we don't have the surrounding code to reject row-wise, and are just dropping whole frames so far, but in the future that would be both a performance and data-loss optimization we should keep in mind.
So anyway, if I make that change and then change the threshold to 20, tests pass.
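A minimal illustration of that wraparound (assuming the frames are plain uint8 arrays, as the cast to np.int16 elsewhere in this PR suggests):

```python
import numpy as np

row = np.array([62, 59], dtype=np.uint8)
print(np.diff(row))                   # [253] : 59 - 62 wraps around in uint8
print(np.diff(row.astype(np.int16)))  # [-3]  : cast to a signed dtype first
```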
I'm also gonna change the name of this method to be "gradient" rather than "sd", but we can work out splitting it into a separate class in the main PR.
… variable errors, corrected types in signatures, use optimized double-diff instead of double-allocating
@@ -61,7 +61,7 @@ class NoisePatchConfig(BaseModel):
        default=True,
        description="Whether to use patch based noise handling.",
    )
-    method: str = Field(
+    method: Literal["mean_error", "gradient"] = Field(
This is how you specify that a value can only take specific values, btw; this way we don't need to re-do this validation everywhere we might use it, and we know for sure it will always be one of these.
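For anyone unfamiliar, a tiny sketch of how a Literal field behaves under pydantic validation (this is a stripped-down stand-in, not the actual NoisePatchConfig definition):

```python
from typing import Literal
from pydantic import BaseModel

class NoisePatchConfig(BaseModel):
    method: Literal["mean_error", "gradient"] = "mean_error"

NoisePatchConfig(method="gradient")   # fine
NoisePatchConfig(method="block_sd")   # raises ValidationError: not a permitted value
```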
I'm good with this except for the one last comment here asking Takuya about that changed value. If that's no problem, I think we can merge to the intermediate branch and finish tidying up there.
mio/process/frame_helper.py (outdated)
buffer_per_frame = len(serialized_current) // noise_patch_config.buffer_size + 1
if config.method == "mean_error":
    if previous_frame is None:
        raise ValueError("mean_error requires a previous frame to compare against")

split_current = self.split_by_length(
    serialized_current,
    noise_patch_config.buffer_size // noise_patch_config.buffer_split + 1,
)
split_previous = self.split_by_length(
    serialized_previous,
    noise_patch_config.buffer_size // noise_patch_config.buffer_split + 1,
)
serialized_current = current_frame.flatten().astype(np.int16)
logger.debug(f"Serialized current frame size: {len(serialized_current)}")

noisy_parts = split_current.copy()
serialized_previous = previous_frame.flatten().astype(np.int16)
logger.debug(f"Serialized previous frame size: {len(serialized_previous)}")

split_previous = self.split_by_length(serialized_previous, config.buffer_size)
split_current = self.split_by_length(serialized_current, config.buffer_size)
One last question before we merge - @t-sasatani the value given to split_by_length was changed from config.buffer_size // config.buffer_split + 1 to just config.buffer_size here. Was this intentional, or will this break your method?
@MarcelMB @sneakers-the-rat Sorry for the incomplete documentation around here. I think this part shouldn't change from the original version. The buffer is usually not entirely broken but often partially broken, so we need some flexibility in splitting the comparison unit into smaller chunks of buffers. This should more accurately detect cases such as when a quarter of the buffer is broken, which should make the frame invalid.
So, if you don't want to split the comparison unit smaller than the buffer unit, I want to handle that with the noise_patch_config.buffer_split = 0 case. The numbering convention is very confusing here, so maybe we'll need to remove that +1 later.
It is true that if this doesn't work with buffer_split = 10, it's a bug, so we'll need to look into that.
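To make the chunking arithmetic concrete, a hypothetical sketch (split_by_length's real implementation isn't shown in this thread; this is just the arithmetic implied by the config values discussed above):

```python
import numpy as np

def split_by_length(data: np.ndarray, length: int) -> list[np.ndarray]:
    # hypothetical equivalent: chop a 1-D serialized frame into consecutive
    # chunks of `length` samples (the last chunk may be shorter)
    return [data[i : i + length] for i in range(0, data.size, length)]

serialized = np.zeros(200 * 200, dtype=np.int16)  # a 200x200 frame, flattened
buffer_size, buffer_split = 5032, 10

whole_buffers = split_by_length(serialized, buffer_size)                   # ~8 chunks per frame
sub_chunks = split_by_length(serialized, buffer_size // buffer_split + 1)  # ~80 smaller chunks
print(len(whole_buffers), len(sub_chunks))
```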
cool! figured it was just an accident, i'll flip it back to the original version and then we can refine it later
or wait, i see you're working on this now, why don't you make that change so i don't merge conflict you :)
Ok, I just committed rather than suggested below. I'm off my PC already, so there's no risk of conflict now. Great that GitHub mobile works nicely.
Basically he takes a buffer and divides it into 10 equal sizes to look at them. But with the unified buffer splitting, the for loop didn't allow me to go higher than the number of buffers; that's why the value changed from 10 to 8. Not a big issue.
Thanks. I didn't catch this.
Oh yes, signed-unsigned stuff is a mess lol.
I think these two parameters shouldn't be coupled that way, so I guess this is a bug in the root denoising branch. I do want to revert back to splitting buffers into smaller chunks in this PR, but the bug causing the error can be solved in the denoising PR if not here.
mio/process/frame_helper.py (outdated)
# buffer_split (splitting each block in smaller segments) cannot be greater than
# number of buffers in a frame
if config.buffer_split > len(split_current):
    logger.warning(
        f"buffer_split ({config.buffer_split}) exceeds total splits "
        f"({len(split_current)}). Adjusting to {len(split_current)}."
    )
    config.buffer_split = len(split_current)
These parameters shouldn't have this relationship. I took these off and changed buffer_split to 10, but it ran fine. Could you double-check if an error really happened? @MarcelMB
Edit: Nevermind, it doesn't cause errors in the original PR branch but causes errors here. Anyway, unsure why yet.
Looks like the test failures here are coming from not detecting frame 24, which is the one with the very small strip of noise above.
We might want to do the squared difference to emphasize large differences, again being mindful of dtype :). That might let us get a threshold that separates broken frames from good frames more cleanly.
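A small sketch of what that could look like (dtype handling is the main point; the function name and axis choice are placeholders, not the PR's code):

```python
import numpy as np

def gradient_score_squared(frame: np.ndarray) -> float:
    # cast to a wide signed type before diffing and squaring, so neither the
    # subtraction nor the square overflows the original uint8 range
    d = np.diff(frame.astype(np.int32), axis=-1)
    return float(np.mean(d.astype(np.float64) ** 2))
```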
Actually, as posted below, this gets detected correctly with the previous threshold and buffer-splitting parameters. It seems fairly tricky to decouple code and parameter defaults, as it's expected to fail with bad parameters; this is primarily due to the delay in adding proper tests to #83. My bad for that.
I modified the test to check with the config below:
```yaml
id: denoise_example
mio_model: mio.models.process.DenoiseConfig
mio_version: 0.6.1
interactive_display:
  enable: true
  start_frame: 40
  end_frame: 140
noise_patch:
  enable: true
  method: mean_error
  threshold: 30
  buffer_size: 5032
  buffer_split: 10
  diff_multiply: 1
  output_result: true
  output_noise_patch: true
  output_diff: true
  output_noisy_frames: true
frequency_masking:
  enable: true
  spatial_LPF_cutoff_radius: 15
  vertical_BEF_cutoff: 2
  horizontal_BEF_cutoff: 0
  display_mask: false
  output_mask: true
  output_result: true
  output_freq_domain: false
minimum_projection:
  enable: true
  normalize: true
  output_result: true
  output_min_proj: true
end_frame: -1 # -1 means all frames
output_result: true
output_dir: user_dir/output
```
Yeah, looks like the problem is that the two methods use the same configuration object, and so share the same threshold. Makes sense that these should be distinct classes and thus be able to have distinct configuration values; putting them in the same class is leading to a lot of crowding in the implementation and in the config. I made two configs, one for gradient and one for mean_error, and now tests pass. We probably want to change that so there are two separate stages and only one sample config, but we can do that in the main PR when we refactor it.
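Roughly what that split could look like (class and field names here are a sketch, not the PR's actual config classes; defaults mirror the values discussed in this thread):

```python
from typing import Literal, Union
from pydantic import BaseModel

class GradientNoiseConfig(BaseModel):
    method: Literal["gradient"] = "gradient"
    threshold: float = 20.0        # mean |diff| per frame

class MeanErrorNoiseConfig(BaseModel):
    method: Literal["mean_error"] = "mean_error"
    threshold: float = 30.0        # mean error between buffer chunks
    buffer_size: int = 5032
    buffer_split: int = 10

# each detection stage gets its own parameterization instead of sharing one object
NoisePatchMethodConfig = Union[GradientNoiseConfig, MeanErrorNoiseConfig]
```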
Oh yeah, of course the threshold should be different. Completely overlooked this. I'm good for merge now.
Takuya's approach was:
This method works fine most of the time, but has an issue with the data we recorded in December, because the previous frame is often also broken.
I added another method: block_contrast
Broken buffers typically have higher contrast. Applying local contrast detection to identify regions with unusually bright or dark pixels could be helpful.
A broken buffer looks like this (black & white pixels) and therefore has a high contrast; buffers with 'real' neural images don't have this very high contrast:
It detects regions with high contrast on a block-by-block basis (not for the entire frame - that didn't work so well when I tried it), where each block represents the size of a buffer:
- The frame is divided into non-overlapping blocks/buffers.
- Each block is analyzed independently for contrast.
- For each block, the standard deviation of pixel intensities is calculated.
- If the standard deviation (contrast) exceeds the threshold, the block is flagged as noisy.
This worked well with the test data and the data from December (a rough sketch of the block-by-block check is below).
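A minimal sketch of that block_contrast check (names and structure are mine, not a copy of the PR code):

```python
import numpy as np

def noisy_blocks_std(frame: np.ndarray, buffer_size: int, threshold: float) -> list[bool]:
    # Split the serialized frame into non-overlapping, buffer-sized blocks and flag
    # any block whose pixel-intensity standard deviation exceeds the threshold.
    serialized = frame.flatten()
    return [
        float(np.std(serialized[i : i + buffer_size])) > threshold
        for i in range(0, serialized.size, buffer_size)
    ]
```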
It's a relatively small PR and I tried to stick strictly to how the code is organized at the moment, only adding one other method for filtering broken buffers, so it could be merged easily.
📚 Documentation preview 📚: https://miniscope-io--94.org.readthedocs.build/en/94/