
Sometimes self.relevance_vectors_.shape[0] == 0, which breaks fit() and predict() #24

Open
wvxvw opened this issue Aug 13, 2022 · 0 comments

wvxvw commented Aug 13, 2022

Sorry, I'm far from machine learning; I'm just helping a friend figure out problems with code left behind by someone else.

After some debugging, I found that:

    def _prune(self):
        """Remove basis functions based on alpha values."""
        keep_alpha = self.alpha_ < self.threshold_alpha

        if not np.any(keep_alpha):
            keep_alpha[0] = True

        if self.bias_used:
            if not keep_alpha[0]:
                self.bias_used = False
            if self.kernel != "precomputed":
                # keep_alpha[1:] skips the bias entry; when only the bias
                # survives pruning it is all False, leaving zero rows
                self.relevance_vectors_ = self.relevance_vectors_[
                    keep_alpha[1:]]
            # Breakpoint added by me
            if not self.relevance_vectors_.shape[0]:
                import pdb
                pdb.set_trace()
            self.relevance_ = self.relevance_[keep_alpha[1:]]
        else:
            if self.kernel != "precomputed":
                self.relevance_vectors_ = self.relevance_vectors_[keep_alpha]
            self.relevance_ = self.relevance_[keep_alpha]

        self.alpha_ = self.alpha_[keep_alpha]
        self._alpha_old = self._alpha_old[keep_alpha]
        self.gamma_ = self.gamma_[keep_alpha]
        self.Phi_ = self.Phi_[:, keep_alpha]
        self.Sigma_ = self.Sigma_[np.ix_(keep_alpha, keep_alpha)]
        self.mu_ = self.mu_[keep_alpha]

taken from em_rvm.py, is what produces the "nonsense" (zero-size) value of self.relevance_vectors_. It happens when keep_alpha is [True, False, ..., False]: only the first (bias) entry survives pruning, so indexing with keep_alpha[1:] selects zero rows. Unfortunately, variable names like "alpha" or other Greek letters don't help me understand why the values come out the way they do, so I have to leave this report in the state it is in. The bottom line: the particular value of self.alpha_ that causes a zero-size self.relevance_vectors_ should be treated as an exceptional case. It is either an error (why?) or, if it is not an error, it should be made to do something sensible instead of silently breaking fit() and predict().
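To show what I mean without the rest of the library, here is a minimal sketch (my own code, not from em_rvm.py) of the slicing that goes wrong: when keep_alpha keeps only the bias entry, the keep_alpha[1:] mask is all False and boolean indexing returns an array with zero rows.

```python
import numpy as np

# Hypothetical stand-ins for the estimator's state after fitting:
# 3 candidate basis functions (rows) over 2 features (columns).
relevance_vectors = np.arange(6.0).reshape(3, 2)

# Pruning kept only the bias term: first entry True, all others False.
keep_alpha = np.array([True, False, False, False])

# This mirrors self.relevance_vectors_[keep_alpha[1:]] in _prune():
# the bias slot is skipped, the remaining mask is all False.
pruned = relevance_vectors[keep_alpha[1:]]

print(pruned.shape)  # (0, 2) -- zero relevance vectors left
```

Any later kernel evaluation against a zero-row relevance-vector array is what I believe breaks fit() and predict().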
