
Add grib_tree method #399

Merged: 11 commits, Dec 4, 2023
Conversation

@emfdavid (Contributor) commented Nov 24, 2023

Add a method to map the grib2 data model to a zarr hierarchy and xarray datatree.

The method is opinionated about how to do this; there are many possible ways. I hope it is generally useful enough that it is worth adding to the kerchunk library.

See examples in Colab

Here is an example of the hierarchical structure for the eastward wind velocity variable, which has been aggregated by step, time and level (isobaricInhPa).
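As a rough illustration of how such a hierarchy path could be assembled (the helper and attribute values below are hypothetical, not taken from the actual grib_tree implementation):

```python
# Hypothetical sketch: grib_tree groups messages by variable name, stepType
# and typeOfLevel; a zarr-style group path can be built by joining those
# attributes. The attribute values here are illustrative only.

def group_path(attrs: dict) -> str:
    """Join the grouping attributes into a zarr-style path."""
    return "/".join([attrs["name"], attrs["stepType"], attrs["typeOfLevel"]])

print(group_path({"name": "u", "stepType": "instant", "typeOfLevel": "isobaricInhPa"}))
# u/instant/isobaricInhPa
```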

@martindurant (Member) left a comment

Do I understand that:

  • each branch of the tree is combined as usual
  • the branches have their keys copied into the right place in the overall output
  • special renaming of grib keys is going on

# Hard code the filters in the correct order for the group hierarchy
filters = ["stepType", "typeOfLevel"]

zarr_store = {}
@martindurant (Member):

Allow for parquet/bring-your-own output?

@martindurant (Member):

Actually, we need to figure out how parquet works at all for deep nested data like this.

@emfdavid (Contributor, author):

Since the hierarchy in zarr is just a path embedded in the key... it might just work.
If I want to add more tests, should I check in some json output from scan_grib for a few variables?
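A minimal sketch of that point (keys and values below are made up for illustration): because a reference/zarr store is a flat key-value mapping, nesting is expressed purely by key prefixes.

```python
import json

# Illustrative reference store: the zarr hierarchy lives entirely in the
# key prefixes, so deep nesting costs nothing extra in a flat mapping.
zarr_store = {
    ".zgroup": json.dumps({"zarr_format": 2}),
    "u/.zgroup": json.dumps({"zarr_format": 2}),
    "u/instant/.zgroup": json.dumps({"zarr_format": 2}),
    "u/instant/isobaricInhPa/.zgroup": json.dumps({"zarr_format": 2}),
}

# Listing a "directory" is just a prefix filter over the keys.
children = [k for k in zarr_store if k.startswith("u/instant/")]
print(children)
```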


unknown_counter = 0
for msg_ind, group in enumerate(message_groups):
if "version" not in result:
@martindurant (Member):

Have you tested with zarr v3?

@emfdavid (Contributor, author):

Is there a way to run scan_grib using zarr v3? As implemented, grib_tree is pretty tightly coupled to the output of scan_grib. I see you can set an ENV var to allow access to the experimental API...

@martindurant (Member):

Right, so version should always be 2? Or is this not the zarr version?

@emfdavid (Contributor, author):

Ah - you are right - this is the kerchunk version, not the zarr version anyway.

# https://confluence.ecmwf.int/display/UDOC/Creating+your+own+local+definitions+-+ecCodes+GRIB+FAQ
# If you process the groups from a single file in order, you can use the msg# to compare with the
# IDX file.
logger.warning(
@martindurant (Member):

Why not include the message about what to do in the logging?

@emfdavid (Contributor, author):

Happy to put more of the answer in the log message - though I have not been successful in doing it myself... yet.
So far I have used the idx file to confirm that I am unlikely to need these unknown variables in my use case with HRRR and GFS grib2 files.

Example: gs://high-resolution-rapid-refresh/hrrr.20230928/conus/hrrr.t01z.wrfsfcf05.grib2
Msg # indexed from zero

Found unknown variable in msg# 1... it will be dropped
Found unknown variable in msg# 2... it will be dropped
Found unknown variable in msg# 37... it will be dropped
Found unknown variable in msg# 38... it will be dropped
Found unknown variable in msg# 42... it will be dropped
Found unknown variable in msg# 44... it will be dropped
...

IDX file entries (messages indexed from 1)
Example: gs://high-resolution-rapid-refresh/hrrr.20230928/conus/hrrr.t01z.wrfsfcf05.grib2.idx

1:0:d=2023092801:REFC:entire atmosphere:5 hour fcst:
2:375155:d=2023092801:RETOP:cloud top:5 hour fcst:
3:517041:d=2023092801:var discipline=0 center=7 local_table=1 parmcat=16 parm=201:entire atmosphere:5 hour fcst:
4:889615:d=2023092801:VIL:entire atmosphere:5 hour fcst:
5:1157550:d=2023092801:VIS:surface:5 hour fcst:
...
37:24333077:d=2023092801:VGRD:1000 mb:5 hour fcst:
38:24956531:d=2023092801:MAXUVV:100-1000 mb above ground:4-5 hour max fcst:
39:25687250:d=2023092801:MAXDVV:100-1000 mb above ground:4-5 hour max fcst:
40:26410531:d=2023092801:DZDT:0.5-0.8 sigma layer:4-5 hour ave fcst:
41:27042519:d=2023092801:MSLMA:mean sea level:5 hour fcst:
42:27650201:d=2023092801:HGT:1000 mb:5 hour fcst:
43:28342907:d=2023092801:MAXREF:1000 m above ground:4-5 hour max fcst:
44:28588455:d=2023092801:REFD:263 K level:4-5 hour max fcst:
45:28736498:d=2023092801:MXUPHL:5000-2000 m above ground:4-5 hour max fcst:
46:28789333:d=2023092801:MNUPHL:5000-2000 m above ground:4-5 hour min fcst:
...

The metadata in the NOAA EMC-generated idx files suggests we are missing local definitions for a number of variables that represent min, max or avg values over a layer. I have asked NOAA-EMC for help with decoding these properly.
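A small sketch of the off-by-one bookkeeping between zero-indexed scan_grib message numbers and one-indexed idx entries (the parser below is hypothetical, written against the idx format shown above):

```python
# Hypothetical idx parser: NOAA .idx lines are colon-separated, one-indexed.
idx_lines = [
    "1:0:d=2023092801:REFC:entire atmosphere:5 hour fcst:",
    "2:375155:d=2023092801:RETOP:cloud top:5 hour fcst:",
]

def idx_entry_for_msg(lines, msg_ind):
    """Look up the idx entry for a zero-indexed scan_grib message number."""
    num, offset, _, var, level, fcst, _ = lines[msg_ind].split(":")
    return {"idx_num": int(num), "offset": int(offset), "var": var, "level": level}

# scan_grib msg# 1 (zero-indexed) lines up with idx entry 2 (one-indexed).
print(idx_entry_for_msg(idx_lines, 1))
```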

@martindurant (Member):

OK, we can leave this for future improvement

@emfdavid (Contributor, author):

Crickets over at NOAA-EMC... I think we might have a working rust gribberish reader before we get a clear answer on how to build the ecCodes tables for these unknown properties.
🤞 @mpiannucci

@mpiannucci (Contributor):

Yeah, the tables are annoying; we need to figure that out in rust too to be fully comprehensive.

vname,
msg_ind,
)
# Use unknown as a group or drop it?
@martindurant (Member):

Any cases like this?

@emfdavid (Contributor, author):

From the same HRRR SFCF grib2 files discussed above re: unknown variables, here are two examples of unknown levels:
Example logs

Found 'unknown' attribute value for key typeOfLevel in var gh of msg# 161
Found 'unknown' attribute value for key typeOfLevel in var layth of msg# 164

Corresponding idx file entries.

162:129396944:d=2023092801:HGT:level of free convection:5 hour fcst:
165:135154615:d=2023092801:LAYTH:261 K level - 256 K level:5 hour fcst:

@martindurant (Member):

Are you saying that we should be able to decode them? The grib object has many, many possible attributes, so this seems solvable. Not needed for this PR.

@emfdavid (Contributor, author):

Yes - I believe these are just more custom codes, similar to the unknown variables above.
I think they are all documented and verifiable by reading the idx files, but implementing the tables for ecCodes to read them is more than I can take on right now...

Co-authored-by: Martin Durant <martindurant@users.noreply.github.com>
@emfdavid (Contributor, author) left a comment

Thanks for diving in - I hope this can be useful.

  • each branch of the tree is combined as usual

Yes - each variable can have different level values, so in general only the branches can be combined.

  • the branches have their keys copied into the right place in the overall output

Yes - similar to what you do in scan_grib, I am just adding a path for the zarr hierarchy.

  • special renaming of grib keys is going on

I did make copies of some attributes (name, stepType and typeOfLevel) to help users understand the hierarchy. I don't think I actually renamed any existing metadata. The implementation is opinionated about the grib level and grib step attributes as well as the coordinate names.

I would love to have more ECMWF data to test generality with.



if "step/.zarray" not in group["refs"]:
group = copy.deepcopy(group)
logger.warning("Adding missing step variable to group path %s", path)
group["refs"]["step/.zarray"] = (
@emfdavid (Contributor, author):

Can I put this in scan_grib, where the cfgrib WrongStepUnitError is handled? I don't like doing this here. Another alternative would be to use the preprocess hook in MultiZarrToZarr.
We should be able to calculate the correct step value by subtracting the time (reference time) from the valid_time.
That requires decoding those values, though, and then encoding the step correctly...
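The suggested recomputation is straightforward in principle; here is a sketch with illustrative values (not wired into scan_grib):

```python
import numpy as np

# Sketch: recover the step from the decoded reference time and valid time.
time = np.datetime64("2021-09-28T00:00:00")        # reference time
valid_time = np.datetime64("2021-09-28T00:15:00")  # forecast valid time
step_hours = (valid_time - time) / np.timedelta64(1, "h")
print(step_hours)  # 0.25
```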

@emfdavid (Contributor, author):

In the HRRR SubHF dataset, even the step values that are decoded (don't raise an exception) are wrong.

import fsspec
import xarray as xr
import zarr
from kerchunk.grib2 import scan_grib

scans = scan_grib("gs://high-resolution-rapid-refresh/hrrr.20210928/conus/hrrr.t00z.wrfsubhf01.grib2")
dsets = []
zsets = []
vname = "dswrf"
for gg in scans:
    if f"{vname}/.zattrs" in gg["refs"]:
        fs = fsspec.filesystem("reference", fo=gg)
        dsets.append(xr.open_dataset(fs.get_mapper(""), engine="zarr", consolidated=False))
        zsets.append(zarr.open_group(fs.get_mapper("")))
for ds, zs in zip(dsets, zsets):
    print(
        f"{ds[vname].attrs['GRIB_stepType']}, zs units {zs.step.attrs['units'] if 'step' in ds else None}: "
        f"ds step - {ds.step.values[()].astype('timedelta64[h]') if 'step' in ds else None}, zs step - {zs.step[()] if 'step' in zs else None};"
        f" valid_time - {ds.valid_time.values[()]}; time - {ds.time.values[()]}"
    )
avg, zs units None: ds step - None, zs step - None; valid_time - 2021-09-28T00:15:00.000000000; time - 2021-09-28T00:00:00.000000000
instant, zs units hours: ds step - 15 hours, zs step - 15.0; valid_time - 2021-09-28T00:15:00.000000000; time - 2021-09-28T00:00:00.000000000
avg, zs units hours: ds step - 30 hours, zs step - 30.0; valid_time - 2021-09-28T00:30:00.000000000; time - 2021-09-28T00:00:00.000000000
instant, zs units hours: ds step - 30 hours, zs step - 30.0; valid_time - 2021-09-28T00:30:00.000000000; time - 2021-09-28T00:00:00.000000000
avg, zs units hours: ds step - 45 hours, zs step - 45.0; valid_time - 2021-09-28T00:45:00.000000000; time - 2021-09-28T00:00:00.000000000
instant, zs units hours: ds step - 45 hours, zs step - 45.0; valid_time - 2021-09-28T00:45:00.000000000; time - 2021-09-28T00:00:00.000000000
avg, zs units hours: ds step - 60 hours, zs step - 60.0; valid_time - 2021-09-28T01:00:00.000000000; time - 2021-09-28T00:00:00.000000000
instant, zs units hours: ds step - 1 hours, zs step - 1.0; valid_time - 2021-09-28T01:00:00.000000000; time - 2021-09-28T00:00:00.000000000

The values are in floating-point minutes, but the units are specified as hours.
Only the stepType instant value at the end of the forecast is correct - 1 hour.
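In other words, reinterpreting the mislabelled values as minutes recovers steps consistent with valid_time - time; a sketch using the numbers from the log above:

```python
import numpy as np

# The bogus steps above are floating-point minutes carrying an 'hours' label.
bogus_steps = np.array([15.0, 30.0, 45.0, 60.0])  # "hours" per the metadata
actual_hours = bogus_steps / 60.0                  # treat them as minutes
print(actual_hours)  # [0.25 0.5  0.75 1.  ]
```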

@emfdavid (Contributor, author):

Not sure how to fix the Docs check but I think the rest of this is in pretty good shape!

@TomNicholas:

@jthielen you might be interested in this

if "fill_value" not in kw:
if data.dtype.kind == "i":
kw["fill_value"] = None
elif k in z:
@martindurant (Member):

How can this be true when we create the dataset in a couple of lines below?

@emfdavid (Contributor, author):

This is getting the fill_value attribute from the coordinate zarr array instance.
Without the if clause I got errors on some existing tests, I think... so I copied the if clause used for copying the attributes (new line 329).
The change is important because, without it, timedeltas stored as float are given a default fill_value of 0.0, which becomes NaT when read with xarray, meaning you can't have a zero timedelta.
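A minimal sketch of that failure mode (mimicking the CF-style mask-then-decode path, not kerchunk's actual code):

```python
import numpy as np

# A float-encoded step with fill_value=0.0: masking happens before the
# timedelta decode, so a genuine zero-hour step becomes NaT.
encoded = np.array([0.0, 1.0])  # step in hours; 0.0 is a real value here
fill_value = 0.0                # the problematic default for float arrays
masked = np.where(encoded == fill_value, np.nan, encoded)
decoded = np.array(
    [np.timedelta64("NaT", "h") if np.isnan(v) else np.timedelta64(int(v), "h")
     for v in masked]
)
print(decoded)  # the zero step comes back as NaT, not 0 hours
```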


@emfdavid (Contributor, author) left a comment

@martindurant Thanks for taking a pass on this.
Do you want to add xarray_datatree as a dev dependency for kerchunk so we can make some additional tests?
Thank you for your work on datatree, @TomNicholas - any thoughts on how I am using or abusing datatree here?


@martindurant (Member):

> Do you want to add xarray_datatree as a dev dependency for kerchunk so we can make some additional tests?

That's fine, add it to the CI envs. Having it in the optional package requirements is fine too, but not necessary (this is experimental, after all).

@emfdavid (Contributor, author):

> this is experimental, after all

Isn't it all experimental, after all?

@martindurant (Member):

> Isn't it all experimental, after all?

Some bits are more experimental than others :)

@emfdavid (Contributor, author):

@martindurant Still not sure what to do about the docs build failure here?
https://github.com/fsspec/kerchunk/actions/runs/7050805133/job/19192272139?pr=399

@martindurant (Member):

> the docs build failure

I don't know either, except that the install raises a number of incompatibility warnings from pip. ammaraskar/sphinx-action@master doesn't appear to have changed, but it is somewhat dated - perhaps we need a concrete environment and to run sphinx ourselves.

@emfdavid (Contributor, author):

Interesting... if I try to access the valid time coordinates from a non-square aggregation, e.g.

sfcf_scans = scan_grib("testdata/hrrr.t00z.wrfsfcf00.grib2")
sfcf_scans += scan_grib("testdata/hrrr.t00z.wrfsfcf01.grib2")
sfcf_scans += scan_grib("testdata/hrrr.t01z.wrfsfcf00.grib2")

That is, two forecast reference times, one of which has two steps while the other only has one... we get some bad missing values in the valid time.

Whereas if I combine outputs for two steps from each reference time...

sfcf_scans = scan_grib("testdata/hrrr.t00z.wrfsfcf00.grib2")
sfcf_scans += scan_grib("testdata/hrrr.t00z.wrfsfcf01.grib2")
sfcf_scans += scan_grib("testdata/hrrr.t01z.wrfsfcf00.grib2")
sfcf_scans += scan_grib("testdata/hrrr.t01z.wrfsfcf01.grib2")

Maybe this is a behavior we could fix in MultiZarrToZarr, filling missing coordinate values with NaN or NaT? I think the order in which you aggregate can also be important, but I am not sure why.
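For a dense aggregation the missing entries are in principle recoverable, since valid_time is always time + step; a toy sketch with illustrative values:

```python
import numpy as np

# valid_time for a full (time, step) grid can be reconstructed by
# broadcasting, which is one way a combiner could fill the gaps.
times = np.array(["2023-01-01T00", "2023-01-01T01"], dtype="datetime64[h]")
steps = np.array([0, 1], dtype="timedelta64[h]")
valid_time = times[:, None] + steps[None, :]  # shape (2, 2), no missing cells
print(valid_time)
```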

@andreall commented Dec 1, 2023

Hi! I'm just getting started in the kerchunk and fsspec world and wanted to use fsspec with our grib files from the WRF model. I was having a look at the colab notebook but I'm not sure how to apply it to my files... do I need to implement the correct_hrrr_subhf_step for my structure? Are there docs I am missing? Thanks!!!

@emfdavid (Contributor, author) commented Dec 1, 2023

> do I need to implement the correct_hrrr_subhf_step for my structure?

@andreall hopefully that is only required for the quirky bad step values in the HRRR Subhourly 2D Surface data. I don't think you will have this problem.

I suggest you try it for one grib file and see what you get - look at the valid_time, step and time coordinates and see if it is working properly.

something like this...

import datatree
import fsspec
import zarr
from kerchunk.grib2 import grib_tree, scan_grib

scans = scan_grib("myfile.grib2")
ref_store = grib_tree(scans)
fs = fsspec.filesystem("reference", fo=ref_store)
zg = zarr.open_group(fs.get_mapper(""))
dt = datatree.open_datatree(fs.get_mapper(""), engine="zarr", consolidated=False)

The colab and the code comments are all I have for docs right now, but if this PR is merged I will help support it.

@martindurant (Member):

Question: xarray now supports engine="kerchunk" (plugin in this repo) which would combine the equivalent of the last three lines above. Is there some plugin point like that for datatree too?

@emfdavid (Contributor, author) commented Dec 1, 2023

> Is there some plugin point like that for datatree too?

Looks like not yet?

@martindurant (Member):

I'm sure we could suggest it

@emfdavid (Contributor, author) commented Dec 1, 2023

@martindurant pushed one more commit.
I ran into issues using/testing the reference spec stores with byte values inside.
If there is a reason we need bytes I can roll this back, but I think you get strings back once you write/read it as json anyway.
If there is a way to fix this so the values are not created as bytes in the first place, that would be ideal... the store data created by zarr with create_group and the attr setter ends up as binary, consistent with the zarr spec, but in kerchunk we end up with a mix of string/bytes - all string values once you write/read it as json.

@martindurant (Member):

In JSON, the values are indeed strings on-disk, which become bytes when read by the filesystem: either as ascii, or base64 encoded. In memory (dicts), bytes, ascii strings or base64 strings should all work. Parquet, by contrast, stores bytes only.
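A small sketch of that convention as I understand it (the helper below is hypothetical; the "base64:" prefix is the reference-spec marker for binary payloads):

```python
import base64
import json

# Illustrative reference store: inline values are JSON strings on disk,
# either plain ascii or base64 with a "base64:" prefix for binary payloads.
refs = {
    ".zattrs": json.dumps({"note": "ascii value"}),              # plain string
    "x/0": "base64:" + base64.b64encode(b"\x00\x01").decode(),   # binary chunk
}

def decode_value(v: str) -> bytes:
    """Recover the bytes a reference filesystem would hand back."""
    if v.startswith("base64:"):
        return base64.b64decode(v[len("base64:"):])
    return v.encode()

print(decode_value(refs["x/0"]))  # b'\x00\x01'
```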

@keewis (Contributor) commented Dec 4, 2023

> I'm sure we could suggest it

See pydata/xarray#7437 (on the xarray repo because we've been thinking of merging datatree into xarray for quite a while now)

@martindurant (Member):

Merging this now; let's see what use it gets. We can always fix things for gribs that are just a bit different from those trialed.
