
Make non-OO consumption-saving solvers #1394

Merged
merged 58 commits into master on Mar 27, 2024
Conversation

mnwhite
Contributor

@mnwhite mnwhite commented Mar 6, 2024

Very early on in HARK's lifetime, we decided to make solve_one_period functions be object-oriented, with an inheritance structure among "parent" and "child" models. The (well meaning) intent was to prevent solver code from being redundantly re-written for "small" model changes, and to ensure that improvements in a parent model propagated to child models. The end result, however, was that the solver code was a confusing maze: a user trying to work their way through the solver for (say) ConsMarkovModel would be bounced around that file from method to method, occasionally needing to go up to one or more predecessor classes to find what they were looking for. Moreover, this approach required solver code in "parent" models to be written in an unusual and cryptic way to account for potential downstream alterations... not all of which could be anticipated-- not by a long shot.

This PR goes through and adds straightforward, non-OO solve_one_period functions for our existing models that use the "spaghetti maze" style. In my opinion, the resulting code is much, much easier to read, follow, and understand, and lends itself better to (external) documentation explaining the method step by step. The code is also much shorter: the solvers for ConsPerfForesight and ConsIndShock are 391 lines (inclusive of docstrings, comments, and whitespace) in this PR vs 887 in the legacy version.

This PR is in progress, as I've only done those two solvers right now. I'm making this PR just to verify that tests are passing, which they should (other than whatever black version discrepancies crop up). I will add a "solver checklist" to this post shortly.

EDIT: The OO-solver idea was really the "big thing" we were going for in HARK early on. The Econ-ARK logo is literally a stylized version of the KinkyPrefConsumerType's consumption function, because the solver class for that is nothing more than a double inheritance from KinkedR and PrefShock.

SOLVER CHECKLIST

  • PerfForesightConsumerType
  • IndShockConsumerType
  • KinkedRconsumerType
  • PrefShockConsumerType
  • KinkyPrefConsumerType
  • MedShockConsumerType
  • GenIncProcessConsumerType
  • MarkovConsumerType
  • BequestWarmGlowConsumerType
  • RiskyAssetConsumerType
  • PortfolioConsumerType (except dependent distributions)
  • WealthPortfolioConsumerType (what is this??)

PR CHECKLIST

  • Tests for new functionality/models or Tests to reproduce the bug-fix in code.
  • Updated documentation of features that add new functionality.
  • Update CHANGELOG.md with major/minor changes.

Adds a non-OO solve_one_period function for the ConsPerfectForesight model. Will do for the other solvers as well.

The value function in the PF model isn't right when there's an artificial borrowing constraint (in current HARK). This commit makes it less wrong, but I don't think it's *right* yet.
Still need to add CubicBool and vFuncBool options.

codecov bot commented Mar 6, 2024

Codecov Report

Attention: Patch coverage is 63.19773%, with 389 lines in your changes missing coverage. Please review.

Project coverage is 68.28%. Comparing base (bbb07a5) to head (ccbdd76).

Files Patch % Lines
HARK/ConsumptionSaving/ConsBequestModel.py 0.00% 247 Missing ⚠️
HARK/ConsumptionSaving/ConsPrefShockModel.py 66.98% 70 Missing ⚠️
HARK/ConsumptionSaving/ConsMarkovModel.py 80.70% 33 Missing ⚠️
HARK/ConsumptionSaving/ConsIndShockModel.py 87.50% 31 Missing ⚠️
HARK/ConsumptionSaving/ConsPortfolioModel.py 95.53% 8 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1394      +/-   ##
==========================================
- Coverage   71.53%   68.28%   -3.26%     
==========================================
  Files          83       83              
  Lines       13938    14969    +1031     
==========================================
+ Hits         9971    10221     +250     
- Misses       3967     4748     +781     


@mnwhite
Contributor Author

mnwhite commented Mar 6, 2024

The only failing test is that the overall codecov percentage fell because new code was added, but the legacy code was not removed. The only two uncovered lines in the new code are an exception that comes up when a degenerate cFunc would be generated.

@alanlujan91
Member

I agree our current OO solvers are a painful mess. However, I also don't like long Python functions that make it impossible to determine which part of your solution method might be wrong or have a bug. You cannot test incrementally: the solver either fully works or it doesn't.

I don't have a solution or proposal for a better design, but wanted to note this opinion.

@alanlujan91
Member

  • WealthPortfolioConsumerType (what is this??)

Non-separable utility of consumption and wealth (savings), Wealth In the Utility Function, as an alternative to separable utility of consumption and wealth.

@Mv77
Contributor

Mv77 commented Mar 7, 2024

Functional solvers also have the big advantage of being much easier to compile using jit or even JAX.

@alanlujan91
Member

Functional solvers also have the big advantage of being much easier to compile using jit or even JAX.

This is true in theory, but we'd have to dispense with using HARK objects like LinearInterp, ValueFuncCRRA, and MargValueFuncCRRA.

@Mv77
Contributor

Mv77 commented Mar 7, 2024

About

I agree our current OO solvers are a painful mess. However, I also don't like long Python functions that make it impossible to determine which part of your solution method might be wrong or have a bug. You cannot test incrementally: the solver either fully works or it doesn't.

I don't have a solution or proposal for a better design, but wanted to note this opinion.

I agree with this and like the way you structure your solvers, with distinct steps in separate functions, very much. Perhaps a good compromise is to encourage the use of sub-functions. That also makes the architecture more modular and allows for other models to import parts of a solver. E.g.

def solve_ind_shock(solution_next, a_grid, crra, disc_fac, ...):
    
    dvda, cont_v = compute_expectations(solution_next, ...)
    c, m = egm_inversion(dvda)
    ...
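A runnable toy version of that decomposition (all names, the degenerate income process, and the one-asset EGM step here are illustrative sketches, not HARK's actual API) might look like:

```python
import numpy as np

# Toy illustration of the sub-function style proposed above; names are
# hypothetical, not HARK's API. Solves one EGM step for CRRA utility,
# assuming a degenerate income draw of 1.0 for simplicity.
def compute_expectations(c_func_next, a_grid, crra, disc_fac, rfree):
    # End-of-period marginal value: beta * R * u'(c_{t+1}(R*a + 1.0))
    m_next = rfree * a_grid + 1.0
    c_next = c_func_next(m_next)
    return disc_fac * rfree * c_next ** (-crra)

def egm_inversion(dvda, a_grid, crra):
    # Invert the Euler equation u'(c_t) = dvda  =>  c_t = dvda^(-1/crra),
    # then recover endogenous market resources m = a + c.
    c = dvda ** (-1.0 / crra)
    m = a_grid + c
    return c, m

def solve_ind_shock_step(c_func_next, a_grid, crra, disc_fac, rfree):
    dvda = compute_expectations(c_func_next, a_grid, crra, disc_fac, rfree)
    c, m = egm_inversion(dvda, a_grid, crra)
    # Return a linearly interpolated consumption function for this period.
    return lambda mq: np.interp(mq, m, c)
```

Each step can then be unit-tested or swapped out independently, and other models can import just the pieces they need.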



@alanlujan91
Member

@Mv77 yes, I think that's a good idea. Some sort of segmentation of long functions, or a hybrid of OO + small functions not built into the object, might be a good solution.

If we do a hybrid approach with OO + small functions not built into the object, you don't have the messy inheritance of the previous OO solvers, the small functions can be jitted, and the OO layer can handle the HARK objects and interpolators.

@mnwhite
Contributor Author

mnwhite commented Mar 7, 2024 via email

Had an off-by-one error in the "coefficient hacking" line, creating a discrepancy in the fourth decimal place. Now fixed.
@alanlujan91
Member

If you are not doing these in any particular order, could I request you do portfolio choice next?

@mnwhite
Contributor Author

mnwhite commented Mar 8, 2024 via email

Matches behavior exactly for the default dictionary. Need to add vFunc functionality, then expand to discretized version.

This commit has the new solver commented out in the type init method so as to not create a ton of new failing tests while it's still a WIP.
Have not tested fixed share functionality, but I expect it works.
@mnwhite
Contributor Author

mnwhite commented Mar 8, 2024

@alanlujan91 These commits have a single-function solve_one_period_ConsPortfolio that seems to match the behavior of the continuous choice model with independent distributions. I'll add the discrete choice and general distribution stuff on Monday, and probably also restructure the function a little more. It looks like there's some redundant evaluations going on. Note that the new solver isn't actually used in these commits; that line is commented out.

@mnwhite
Contributor Author

mnwhite commented Mar 8, 2024

I missed the beginning part of Wednesday's meeting. Did you ask Mridul about these black discrepancies? The current one here is the order of import statements!

@alanlujan91
Member

@alanlujan91 These commits have a single-function solve_one_period_ConsPortfolio that seems to match the behavior of the continuous choice model with independent distributions. I'll add the discrete choice and general distribution stuff on Monday, and probably also restructure the function a little more. It looks like there's some redundant evaluations going on. Note that the new solver isn't actually used in these commits; that line is commented out.

Sounds good, I don't think anyone uses the discrete and/or generalized ones, although this might be a good project for @sidd3888 in the summer.

@alanlujan91
Member

I missed the beginning part of Wednesday's meeting. Did you ask Mridul about these black discrepancies? The current one here is the order of import statements!

I did not get to ask him this... @MridulS?

@mnwhite
Contributor Author

mnwhite commented Mar 9, 2024

Testing and debugging dynamic solution methods is a little bit of an art. Even if we break down long solver functions into smaller steps (and we should, but not too small!), I don't think it's any easier to test or debug. The inputs for the smaller steps often include the outputs of prior steps, which can be complex objects that can't easily be handcrafted in advance for testing-- independent unit testing is very hard.

When I'm writing a solver for a big/difficult model, I do a lot of passing partial solutions. That is, even if there's more code below, I just throw in a return thisthing statement and solve just one non-terminal period, then examine its partial progress. If the complication I'm investigating only comes up further back (e.g. only after solving 20 or 50 periods), then I'll add a boolean input to the solver that returns partial output iff that flag is set (and then pass a list of Falses with one True at the right time period).
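A minimal sketch of that "partial return" pattern (the function names and the toy solution objects here are hypothetical, not HARK code):

```python
# Hypothetical sketch of the debugging pattern described above: a per-period
# solver that can bail out early, plus a backward-induction loop that takes
# a list of booleans with one True at the period you want to inspect.
def compute_end_of_period(solution_next, params):
    # Stand-in for the expensive intermediate step (expectations, etc.).
    return {"end_of_period_value": params["DiscFac"] * solution_next["value"]}

def finish_solution(partial, params):
    return {"value": partial["end_of_period_value"] + params["income"]}

def solve_one_period(solution_next, params, return_partial=False):
    partial = compute_end_of_period(solution_next, params)
    if return_partial:
        return partial  # bail out early to inspect intermediate objects
    return finish_solution(partial, params)

def solve_backward(terminal, params, T, partial_flags=None):
    # partial_flags: one boolean per period; True means stop there.
    flags = partial_flags or [False] * T
    sol = terminal
    for t in range(T):
        sol = solve_one_period(sol, params, return_partial=flags[t])
        if flags[t]:
            return sol  # partial output from the flagged period
    return sol
```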

@MridulS
Member

MridulS commented Mar 9, 2024

I did not get to ask him this... @MridulS?

Ignore the pre-commit bit here. I will make the pre-commit config a bit easier in a new PR.

These updates include previously merged branches that didn't get entries. Whoops.
Had to move one very small agent type into its own file. It looks like it's a PortfolioConsumerType with a slightly different solver.
I said that this was like untangling rope, but it's more like brushing a badly groomed dog.
Redirect to another file.
This is now handled by the same solver: just set boolean flag and let it run.
There used to be a special solver in ConsRiskyAssetModel.py that did the "basic" portfolio model, without discrete choice or adjustment frictions. A simple solver of that kind has now been added.
There is now a "simple solver" for a model in which the risky share is *fixed* (but might vary by age), but is not necessarily 0 or 1. This previously existed, was temporarily exiled to a new file, and now has been restored.
Two very small compatibility changes
@mnwhite
Contributor Author

mnwhite commented Mar 25, 2024

This branch might actually be approaching completion. I made a couple more (very easy) solvers following a conversation with @alanlujan91 this morning. Not right now, but we should have a focused discussion on how many different solvers/models we actually want in HARK. I.e. if model A is just model B with a particular restriction, under what circumstances do we want model A at all?

Member

@alanlujan91 alanlujan91 left a comment


This looks good to me.
I think this is much needed work. The remaining question is now about how we remove redundancies in a not-OO way, which I think involves creating smaller functions that do specific, testable things. I have started some of this in #1395

I would propose we go ahead and merge this, and take care of "functional" solvers in a different PR

Comment on lines +714 to +717
try:
MPCminNow = 1.0 / (1.0 + PatFac / solution_next.MPCmin)
except:
MPCminNow = 0.0
Member

Why do we need this try/except, does a test fail if we take it off?

Contributor Author

IIRC, this is a failsafe for parameterizations where MPCmin would go negative, or where MPCmin_next is already zero. That makes PatFac / MPCmin_next break, so we need an exception here to catch those possibilities and leave something valid-ish.
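For illustration, a stripped-down sketch of that guard (a hypothetical helper, not the PR's code): the recursion MPCminNow = 1/(1 + PatFac/MPCmin_next) divides by MPCmin_next, so a zero there raises ZeroDivisionError and the fallback leaves 0.0.

```python
# Hypothetical sketch of the failsafe discussed above. The recursion is
#   MPCmin_t = 1 / (1 + PatFac / MPCmin_{t+1}),
# which breaks when MPCmin_{t+1} == 0, so we catch that case and return 0.0.
def mpc_min(pat_fac, mpc_min_next):
    try:
        return 1.0 / (1.0 + pat_fac / mpc_min_next)
    except ZeroDivisionError:
        return 0.0
```

Catching only ZeroDivisionError (rather than a bare except) would document which failure mode the fallback is for.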

Member

), "Interest factor on debt less than interest factor on savings!"
# If the kink is in the wrong direction, code should break here. If there's
# no kink at all, then just use the ConsIndShockModel solver.
if Rboro == Rsave:
Member

I think this case should also break and alert the user that they should be using ConsIndShock instead, or to check their parameter inputs

Contributor Author

We can maybe add a warning here.
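Such a warning might look something like this (a hedged sketch, not code from the PR; the helper name is made up):

```python
import warnings

# Hypothetical sketch of the check discussed above: break if the kink goes
# the wrong way, warn if there is no kink at all.
def check_kinked_r(Rboro, Rsave):
    assert Rboro >= Rsave, "Interest factor on debt less than interest factor on savings!"
    if Rboro == Rsave:
        warnings.warn(
            "Rboro == Rsave: there is no kink; consider using the "
            "ConsIndShockModel solver (IndShockConsumerType) instead, "
            "or check your parameter inputs."
        )
```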

Comment on lines 700 to 706
# WHAT IS THIS STUFF FOR??
aGrid=save_points["a"],
Share_adj=save_points["share_adj"],
EndOfPrddvda_adj=save_points["eop_dvda_adj"],
ShareGrid=save_points["share_grid"],
EndOfPrddvda_fxd=save_points["eop_dvda_fxd"],
EndOfPrddvds_fxd=save_points["eop_dvds_fxd"],
Member

I think at some point we were saving some midway calculations for diagnosing issues?

We should probably remove these, unless they are somehow important downstream for @Mv77

Comment on lines +448 to +451
try:
MPCminNow = 1.0 / (1.0 + PatFac / solution_next.MPCmin)
except:
MPCminNow = 0.0
Member

I removed the try/except locally and I think all tests still passed

Contributor Author

Yes, getting to that except statement requires oddball parameter sets.

Comment on lines 584 to 587
vNvrsFuncNow = CubicInterp(
mNrm_temp, vNvrs_temp, vNvrsP_temp, MPCminNvrs * hNrmNow, MPCminNvrs
)
vFuncNow = ValueFuncCRRA(vNvrsFuncNow, CRRA)
Member

vFunc is only ever cubic?

Contributor Author

The pseudo-inverse value function should use cubic spline interpolation, yes. We already have marginal value as u'(c), and it's one arithmetic operation to turn that into marginal pseudo-inverse value.
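Spelled out in standard CRRA notation (a sketch of the arithmetic, not a quote from the code): with u(c) = c^{1-ρ}/(1-ρ),

```latex
\begin{aligned}
v^{\text{Nvrs}}(m) &= u^{-1}\!\big(v(m)\big) = \big((1-\rho)\,v(m)\big)^{1/(1-\rho)},\\
\frac{d}{dm}\,v^{\text{Nvrs}}(m) &= v'(m)\,\big((1-\rho)\,v(m)\big)^{\rho/(1-\rho)}
  = u'\!\big(c(m)\big)\,\big(v^{\text{Nvrs}}(m)\big)^{\rho},
\end{aligned}
```

using the envelope condition v'(m) = u'(c(m)). So the marginal pseudo-inverse value at each gridpoint is one multiplication away from the marginal value we already have, which is what makes the cubic (Hermite) spline cheap to build.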

Member

but this won't support vFuncs with kinks, correct?

@alanlujan91 alanlujan91 merged commit 3ee3979 into master Mar 27, 2024
19 checks passed
@llorracc
Collaborator

llorracc commented Mar 27, 2024 via email

@mnwhite
Contributor Author

mnwhite commented Mar 27, 2024 via email

@sbenthall
Contributor

Just a quick comment...

The term to use for the 'legacy00solvers' is "deprecated". They are deprecated.

Typically this is signaled in the API documentation. After a few releases of a feature being deprecated, it is removed from the library. (Downstream users that need the old functionality can always use a previous version of the software.)

Since you've moved the OO solvers to a new location, you haven't exactly preserved the old API access to them. I guess that makes their functionality especially deprecated, since it will take extra work to adapt downstream code to use them.

That will make it especially easy to remove the OO solvers down the line.
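For reference, the usual way to signal this in Python code (beyond the API docs) is a DeprecationWarning at the call site; a hypothetical sketch, with the function name and message made up for illustration:

```python
import warnings

# Hypothetical pattern for marking the legacy OO solvers as deprecated.
def legacy_solver(*args, **kwargs):
    warnings.warn(
        "The OO solvers are deprecated and will be removed in a future "
        "release; use the non-OO solve_one_period functions instead.",
        DeprecationWarning,
        stacklevel=2,  # point the warning at the caller's line, not this one
    )
    # (the real body would dispatch to the relocated OO solver code)
    return None
```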

@mnwhite
Contributor Author

mnwhite commented Mar 28, 2024 via email

@sbenthall
Contributor

Makes sense. Does the new file show up in the rendered API docs?

@mnwhite
Contributor Author

mnwhite commented Mar 28, 2024 via email

@sbenthall
Contributor

I'll file an issue...
