Does spreading sea ice freshwater flux over multiple layers reduce sensitivity to resolution? #38
This paper (Sidorenko et al. 2018) is also relevant.
After discussion, we decided to distribute the wfiform array at depth, uniformly over the first 5 meters.
I think I've seen some Arctic plots, so I didn't limit this to 60S. The executable is /home/552/pc5520/access-om2/bin/brine5m_wilton/fms_ACCESS-OM_97e3429-modified_libaccessom2_1bb8904-modified.x. You'll need to include these two lines in your diag_table file, for the 2d and 3d brine rejection arrays: "ocean_model","brine_fwflx","brine_fwflx","ocean_monthly_3d_basal","all",.true.,"none",2
That was fast - thanks! I believe the important regions would be along the coast, where DSW is formed, so 60S would be alright (I imagine FWF on the northern boundary of the SO sea ice would have very low impact here).
I reckon no latitude limit is good, Pedro. So fast!!

Give it a try for a few months; if you see any differences at the Equator (far from sea-ice formation regions), we should make sure we use the exact same code version you are using.
@pedrocol, the model has been looking for the file
They were set to zero, but I forgot to comment out the import of the files; it should work now. Still the same filename for the exe file: /home/552/pc5520/access-om2/bin/brine5m_wilton/fms_ACCESS-OM_97e3429-modified_libaccessom2_1bb8904-modified.x
Thanks! Looking for
Sorry for this. I double-checked the logicals and they work fine now; you just need to add the following lines to your ocean namelist:
No worries - and thanks!
I just tested the new module and it is fine now; there was still a problem reading the namelist in get_ocean_sbc.
As we can see, control pme (-2.6e11) = pme (8.86e11) + brine (-1.14e12), which means the part of the code that splits pme and fills the brine array works correctly. Sanity checks:
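As a side note, the quoted single-time-step budget can be verified with simple arithmetic. This is a generic sketch using the rounded figures quoted above (not model code), so the residual only needs to be small relative to the fluxes:

```python
# Budget closure for the pme/brine split, using the rounded values quoted
# above: control pme should equal pme + brine after the split.
control_pme = -2.6e11   # kg/s, control run freshwater flux
pme = 8.86e11           # kg/s, pme after the brine part is removed
brine = -1.14e12        # kg/s, brine (sea-ice formation) flux

residual = control_pme - (pme + brine)
# The quoted numbers are rounded, so the residual is small but nonzero.
assert abs(residual) < 0.05 * abs(brine)
print(f"residual: {residual:.3g} kg/s")
```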
I tested this for 1 month: sum(brine2d) = sum(wfiform) = -177.17 kg/m2/s. However, sum(basal3d) = 183.2 kg/m2/s (a ~3% difference in magnitude). The reason may be that we are dealing with very small numbers (of order 1e-4) and that the output precision is single precision (~7 significant digits), or that python defaults to single precision. Then I checked brine3d/dz for a specific point: the profile is linear and only concerns the first 3 points. I also checked the brine3d ncfile, and it only has values in the first 3 vertical levels. So I think this is validated and ready to run; @willaguiar, you just have to check SST or SSS far from sea-ice formation regions to verify that the code version is the same.
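To illustrate why single precision alone could plausibly explain a percent-level mismatch, here is a generic numpy sketch (not the model's actual arithmetic): once an accumulator has grown large, a single-precision add can drop an O(1e-4) flux entirely, while double precision keeps it.

```python
import numpy as np

# In float32, the spacing between representable numbers near 2048 is
# about 2.4e-4, so adding a flux of order 1e-4 is rounded away entirely.
acc = np.float32(2048.0)
print(acc + np.float32(1e-4) == acc)   # True: the flux is absorbed

# In float64 the spacing near 2048 is ~4.5e-13, so the add survives.
acc64 = np.float64(2048.0)
print(acc64 + 1e-4 == acc64)           # False
```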
Hi Pedro, thanks for checking that... I did some analysis after running the model for a few months, and the difference between brine2d and brine3d doesn't seem to drift over time, so the precision explanation seems reasonable to me. One thing I notice, though, is that the brine is distributed over the first 3 levels, which makes it reach down to 3.6 m (interface depth) instead of the 5.06 m that we wanted. So the ideal would be to distribute it into the upper 4 levels. (Sorry if I wasn't clear that the limit was 5.06 m rather than a hard 5 m.) Could we change that?
Cool that everything works as expected. I changed the code so it distributes brine over the first 4 levels instead of setting a depth threshold; the executable is in /home/552/pc5520/access-om2/bin/brine_kmax4_wilton
Thanks again @pedrocol. I have now run the brine-redistributed run for over a year. For that run, the difference between brine2d and brine3d is very small from Jan-Jun, but it increases a lot between Jun-Dec. In December especially, the difference is 100%: brine2d has no fluxes while brine3d does. Any idea why that is? The difference seems to be bigger away from the shelf. The plots below are for the sum of the brine fluxes (the plots are similar if I multiply by the surface area to get kg/s). Could that be linked to the non-distributed SI melting, or is it something related to the way brine2d/brine3d are defined?
Hi Wilton, can you please point me to where the outputs are? The Antarctic plot is brine3d - brine2d, right?
By the way, why do you compute the mean and not the sum of the fluxes?
Hi, yes... they should be in And sorry - wrong title: they are actually the sum, not the mean (only the lower plot, along time, is a mean).
Is this brine2d/brine3d variable the equivalent of wfiform? I.e., are we getting more sea ice formation in the new run?
brine2d = wfiform; brine3d is brine2d distributed over depth. brine2d values are already very small, and therefore brine3d values are even smaller. I don't think there is a problem with the new module coded in MOM5; I think it is just a precision issue.
@willaguiar what does the change in SWMT over the shelf look like? We were expecting less dense water formation when we spread the formation fluxes over the top 5 m, but this seems to be making more sea ice. Does wfimelt also change in an opposing way?
@adele-morrison I haven't yet looked at the difference between the runs, as I found this difference during validation of the FWFdist run (between the 2d and 3d variables in the same run). @pedrocol, I tried converting the output to double precision after importing in python, so the calculations are carried out with higher precision. I still get the same plots though. I checked the ncfile, and the brine variables are actually saved as single, not double. I can rerun December saving the brine as double to see if it changes the results - do you think it would be worth it?
Oh ok, I was confused. I thought they were from different runs.
OK, I reran Nov/Dec in the model and saved the brine outputs with double precision, then recalculated the plots above with the double-precision data. The difference stays mostly the same, so perhaps it is not a precision problem? (single: dotted, double: full line) Do you think it is due to the density weighting?
Hi Wil, I'm not sure I'm following: the model runs with double precision by default and the outputs were saved with double precision, so there is no need to re-run the simulation. I feel confident the basal2d/basal3d split is done correctly in the model because the single-time-step diagnostics show this. In the previous comment:
Mass of brine is -1.14689565618032349e+12 (brine2d), and Mass from sources is -1.14689565618032349e+12 (brine3d). This means that the precision used by the model (double) is enough, and the split is done correctly for that single-time-step diagnostic. The issue you are facing now is probably that you still import the data in single precision. A calculation performed in double precision just pads the single-precision imported data with zeros, and therefore makes no difference.
Hi @pedrocol, if the output is in double precision, then this will show up in ncdump as: When you load a single-precision (float) array into python using the netCDF4 package, it automatically converts the numbers to double precision. If you load single-precision data using xarray, it will keep it in single precision. @willaguiar has now done an explicit test where he changed the output flag to double precision in the diag_table. To do that, you have to change the final number of the diagnostic request from 2 to 1. The "template" for doing this is:
where the following denotes the packing index:
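For concreteness, applying that change to the diag_table entry quoted earlier in the thread would give the following (the field and file names are the ones quoted above; the only edit is the final packing index, where 2 requests single precision and 1 requests double precision):

```
"ocean_model","brine_fwflx","brine_fwflx","ocean_monthly_3d_basal","all",.true.,"none",1
```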
I see, thanks for the clarification - the outputs were in single precision. Now we need to verify that we import the data keeping double precision, and then that the computation is performed using that double precision.
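A quick generic check of that point (illustrative numpy only, not the actual netCDF I/O): once a value has passed through float32, casting it back to float64 cannot restore the lost digits, which is why the import path matters.

```python
import numpy as np

# The brine mass quoted earlier in the thread, as a double-precision value.
true_value = np.float64(-1.14689565618032349e+12)

as_single = np.float32(true_value)   # what a single-precision output stores
recast = np.float64(as_single)       # cast back to double after the fact

# The recast value differs from the original: the float32 rounding
# (spacing ~1.3e5 at this magnitude) is permanent.
print(recast == true_value)          # False
print(abs(recast - true_value))      # nonzero, up to ~6.6e4 at this magnitude
```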
By the way, here is the piece of the code that splits brine2d into brine3d. brine3d is then added to mass_source. If, in the single-time-step diagnostics, mass_source equals brine2d, then the split is done correctly - which is what I mentioned before, and the reason why I think this is a precision issue. Let's discuss more tomorrow.
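Since the Fortran itself isn't pasted here, a minimal Python sketch of the splitting logic as described in the thread: distribute the 2-D flux over the top kmax levels, thickness-weighted so the column sum is conserved. The layer thicknesses and flux value are hypothetical, and `thk` stands in for whatever thkocean-style normalisation the real code uses.

```python
import numpy as np

kmax = 4                                    # distribute over the first 4 levels
dz = np.array([1.1, 1.3, 1.2, 1.46, 2.0])   # hypothetical layer thicknesses (m)
brine2d = -1.7e-4                           # hypothetical surface brine flux (kg/m^2/s)

thk = dz[:kmax].sum()                       # thickness of the receiving layers
brine3d = np.zeros_like(dz)
brine3d[:kmax] = brine2d * dz[:kmax] / thk  # uniform per metre of thickness

# Conservation: the column sum of brine3d reproduces brine2d, which is what
# the single-time-step "Mass from sources" = "Mass of brine" diagnostic checks.
assert np.isclose(brine3d.sum(), brine2d)
```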
What's thkocean equal to? Maybe it would also be useful if we could see the code that writes the brine2d and brine3d diagnostics?
Here is the code I used for the calculations. The first plot, in cell 16, uses only single-precision data, and the second plot, on line 23, has single and double data overlapping. Let me know if there is an error in the way I added the fluxes.
Oh, sorry, I meant the Fortran code where the diagnostics are defined. The differences seem too large to me (and have a weird seasonal pattern) to be caused by single/double differences.
I can confirm that there is an issue with the saved data. The issue is not present for the basal and icb fields in my simulations; it is only related to the brine field. I'd also add that there is no issue with brine3d in the simulation itself - the single-time-step diagnostic is fine. The issue seems to be with the saved data only.
@pedrocol thanks for figuring out what the issue was! Any idea why the saved data differs from the applied forcing?
Not really, I'll run a few more tests and let you know if I find something else.
Hi Wil, I've set the single-time-step diagnostic to appear every 10 time steps (via &ocean_tracer_diag_nml). "Mass from sources" always equals "Mass of brine", so I don't see any problem with the code. However, I can't find where the problem with the output is. If you want to test the code as it is, you can deduce brine3d from the brine array in the outputs.
I reran the FWFdist run with the new fix in the code by Pedro.

Validation

DSW response

What do you all think?
It would be interesting to discuss these ideas further in a DSW meeting this week...
Meeting sounds good. I am out of ideas! 10am Friday again? You probably need to send a meeting invite for Andy and others who may no longer have that spot free in their calendars.
Yep, let's meet to discuss (hflxs?). P
Yes, let's run longer! I don't understand how the overflow time series can decrease when the curve seems to show no change. What do the curves look like if you plot just the last year (1993)?
PS: I checked all the other runs; the sigma levels only differ for FWFdist.
Thanks Wilton - do you mean that the transport time series for FWFdist should in fact match the standard 1m case more closely if the histogram bins were calculated the same way?
Yes @dkhutch... or if they were selected for the DSW range in the same way. For example, in the case below I fixed just the selection range by adding 0.005.
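A toy example of that selection-range pitfall (all numbers made up): the same transport field gives a different "DSW" total depending on the sigma threshold used, so the threshold (or bin edges) must be identical across runs before comparing.

```python
import numpy as np

sigma = np.array([27.998, 28.002, 28.006, 28.011])  # hypothetical sigma values
transport = np.array([1.0, 2.0, 3.0, 4.0])          # hypothetical transports (Sv)

dsw_a = transport[sigma >= 28.000].sum()  # one run's selection threshold
dsw_b = transport[sigma >= 28.005].sum()  # threshold shifted by 0.005

print(dsw_a, dsw_b)  # 9.0 7.0 -- same data, different "DSW" transport
```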
@pedrocol has been finding strong sensitivity of his basal freshwater runs to the depth range from which the freshwater flux for sea ice formation is extracted.
Check out this paper, which experiments with doing the sea ice brine rejection in NEMO over a fixed depth instead of just the top layer - Barthélemy et al. 2015: https://www.sciencedirect.com/science/article/pii/S1463500314001966
I think this could be worth testing. @pedrocol has already implemented this scheme to distribute the sea ice freshwater fluxes over depth, so we could rerun the control in ACCESS-OM2-01 with this vertical distribution, and then repeat the 1m vs 5m top-layer vertical thickness experiments. If the DSW is no longer sensitive to the top vertical thickness when sea ice fluxes are spread over depth, then this could be a nice recommended solution for future model configs, avoiding the spurious behaviour we've been seeing.