Replies: 5 comments 1 reply
-
@twiecki I find the Bambi syntax far easier to work with than the component builder. The great thing is that not only would it be easy for people coming from R, but just as importantly you get the support of the educational materials from nlme, brms, etc. on how to work with the syntax generally.
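To illustrate the appeal, here is a tiny, hypothetical sketch of what an R/Bambi-style formula front end could involve: a formula string is split into a response and predictor terms before anything model-related happens. The formula and term names below are illustrative only, not an existing pymc-marketing or Bambi API.

```python
# Hypothetical sketch of a Bambi-style formula front end for an MMM.
# The formula string and term names are illustrative, not a real API.

def parse_formula(formula: str) -> tuple[str, list[str]]:
    """Split an R/Bambi-style formula into response and predictor terms."""
    lhs, rhs = formula.split("~")
    terms = [t.strip() for t in rhs.split("+")]
    return lhs.strip(), terms


response, terms = parse_formula("sales ~ adstock(tv) + saturation(radio) + trend")
print(response)  # -> sales
print(terms)     # -> ['adstock(tv)', 'saturation(radio)', 'trend']
```

The point of the R-formula inheritance is exactly this: users already know how to read `response ~ term + term` from brms/nlme tutorials, so the learning material transfers for free.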
-
Now you have me really thinking! How nifty would it be to write something like this:
or this
-
I would like to hear people's thoughts on the current model config specification. Is this something that people like, are clear on how to use and customize, and would like to keep around?
-
Have a base MMM model where multiple lifecycle hooks are defined. Then, for each component type, define an interface and a concrete class that implements what to do with those hooks. Decouple any additional functionality into its own class/object that operates on the interface (basically following the dependency inversion principle). Not sure how to implement this with the Bambi syntax, though.
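A minimal sketch of the hook-based design described above, assuming hypothetical class and hook names (this is not an existing pymc-marketing interface): the base model depends only on the `Component` interface, never on concrete component classes, which is the dependency inversion the comment refers to.

```python
# Hedged sketch of a hook-based base model; all names are hypothetical.
from abc import ABC, abstractmethod


class Component(ABC):
    """Interface every MMM component type implements."""

    @abstractmethod
    def build(self, data: list[float]) -> list[float]: ...


class AdstockComponent(Component):
    def build(self, data: list[float]) -> list[float]:
        return [x * 0.5 for x in data]  # placeholder transformation


class BaseMMM:
    """Base model that only orchestrates lifecycle hooks."""

    def __init__(self, components: list[Component]):
        # depends on the Component interface, not on concrete classes
        self.components = components

    def fit(self, data: list[float]) -> list[list[float]]:
        self.before_fit(data)                            # lifecycle hook
        outputs = [c.build(data) for c in self.components]
        self.after_fit(outputs)                          # lifecycle hook
        return outputs

    def before_fit(self, data):   # override in subclasses if needed
        pass

    def after_fit(self, outputs):  # override in subclasses if needed
        pass


model = BaseMMM([AdstockComponent()])
print(model.fit([1.0, 2.0]))  # -> [[0.5, 1.0]]
```

New behavior (validation, logging, prior checks) would then plug in via subclassed hooks or additional `Component` implementations rather than edits to `BaseMMM`.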
-
Before going into the details of the API, here are some constraints on how priors affect each other:

- Data is often needed to correctly set priors on marketing.
- The marketing prior affects the baseline prior as well.

```python
# a very granular way to implement this is the following
marketing_model = Marketing(data, m_config)
baseline_model = Baseline(marketing_model, b_config)
full_model = MMM(marketing_model, baseline_model, observation_config)

# some wrappers and common use case classes are needed for sure
full_model = DelayedSaturatedMMM(marketing=m_config, baseline=b_config, observation=observation_config)
```

Under the hood, this coupling can be made. The MMM class that takes these two instances should do nothing but combine the effects:

```python
class MMM:
    def apply(self, data):
        m = self.marketing(data)
        b = self.baseline(data)
        return self.observe(pt.exp(b) + m)
```

Since the parameterization of the baseline class includes dependencies on marketing (to make the prior predictive happy), there should be a way to tell the baseline model what is left for it to model on the log scale:

```python
class Marketing:
    def log_baseline_leftover_mu_sigma(self) -> tuple[TensorVariable, TensorVariable]: ...
```

Same for the baseline:

```python
class Baseline:
    def log_error_leftover_sigma(self) -> TensorVariable: ...
```

These exposed methods can guide the downstream models with the essential information that makes the prior predictive look nice and behave as expected.
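To make the coupling concrete, here is a runnable toy version of the idea, using plain Python floats in place of PyTensor `TensorVariable`s; the class names follow the comment above, but all internals (the spend share, the sigma value) are placeholder assumptions, not proposed defaults.

```python
# Toy sketch of the Marketing -> Baseline prior coupling; internals are
# placeholders, and floats stand in for TensorVariables.
import math


class Marketing:
    def __init__(self, data: list[float], total_spend_share: float = 0.3):
        self.data = data
        self.share = total_spend_share  # assumed share explained by marketing

    def contribution(self) -> float:
        return self.share * sum(self.data)

    def log_baseline_leftover_mu_sigma(self) -> tuple[float, float]:
        # tell the baseline what is left for it to explain, on the log scale
        leftover = sum(self.data) - self.contribution()
        return math.log(leftover), 0.5  # sigma is a placeholder


class Baseline:
    def __init__(self, marketing: Marketing):
        # the baseline prior is parameterized by what marketing leaves over
        self.mu, self.sigma = marketing.log_baseline_leftover_mu_sigma()

    def contribution(self) -> float:
        return math.exp(self.mu)  # prior mean back on the original scale


marketing = Marketing([10.0, 20.0, 30.0])
baseline = Baseline(marketing)
total = baseline.contribution() + marketing.contribution()
print(round(total, 1))  # -> 60.0, i.e. the prior predictive recovers the data scale
```

The design point is that `Baseline` never inspects the data directly; it only consumes the exposed `log_baseline_leftover_mu_sigma` method, so the prior predictive of the combined model stays on the right scale by construction.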
-
What are common models people want to build? (hierarchical models with flexibility, e.g. across regions; time-varying parameters)
What could a good API for such models look like? Bambi syntax #398 or component builder #284
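As a point of reference for those two questions, here is an illustrative NumPy simulation (not pymc-marketing code, and the names are hypothetical) of the two patterns asked about: per-region intercepts partially pooled around a global mean, and a coefficient that drifts over time as a Gaussian random walk.

```python
# Illustrative data-generating sketch of the two requested model features;
# names and hyperparameters are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n_regions, n_time = 3, 50

# hierarchical: region intercepts partially pooled around a global mean
global_mu = 1.0
region_intercepts = global_mu + 0.2 * rng.standard_normal(n_regions)

# time-varying: the media coefficient drifts as a Gaussian random walk
beta_t = 0.5 + np.cumsum(0.05 * rng.standard_normal(n_time))

spend = rng.uniform(0.0, 1.0, size=(n_regions, n_time))
mean_sales = region_intercepts[:, None] + beta_t[None, :] * spend

print(mean_sales.shape)  # -> (3, 50)
```

Whichever API wins (Bambi formulas or the component builder), it needs a natural way to express both the `dims`-style region hierarchy and the random-walk coefficient.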