Commit
Replace 'an pytensor' -> 'a pytensor'
Armavica committed Oct 8, 2024
1 parent 4a00b49 commit 3c5364b
Showing 12 changed files with 23 additions and 23 deletions.
10 changes: 5 additions & 5 deletions docs/source/contributing/developer_guide.md
@@ -34,7 +34,7 @@ $$
z \sim \text{Normal}(0, 5)
$$

A call to a {class}`~pymc.Distribution` constructor as shown above returns an PyTensor {class}`~pytensor.tensor.TensorVariable`, which is a symbolic representation of the model variable and the graph of inputs it depends on.
A call to a {class}`~pymc.Distribution` constructor as shown above returns a PyTensor {class}`~pytensor.tensor.TensorVariable`, which is a symbolic representation of the model variable and the graph of inputs it depends on.
Under the hood, the variables are created through the {meth}`~pymc.Distribution.dist` API, which calls the {class}`~pytensor.tensor.random.basic.RandomVariable` {class}`~pytensor.graph.op.Op` corresponding to the distribution.

At a high level of abstraction, the idea behind ``RandomVariable`` ``Op``s is to create symbolic variables (``TensorVariable``s) that can be associated with the properties of a probability distribution.
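As a quick, hedged illustration of what this hunk documents (a sketch against current PyMC; exact reprs vary by version):

```python
import pymc as pm

with pm.Model():
    z = pm.Normal("z", 0, 5)  # returns a symbolic TensorVariable, not a draw

print(type(z))  # <class 'pytensor.tensor.variable.TensorVariable'>

# The same variable can be created outside a model via the .dist() API,
# which dispatches to the distribution's RandomVariable Op:
z_rv = pm.Normal.dist(0, 5)
```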
@@ -134,7 +134,7 @@ model_logp # ==> -6.6973152

## Behind the scenes of the ``logp`` function

The ``logp`` function is straightforward - it is an PyTensor function within each distribution.
The ``logp`` function is straightforward - it is a PyTensor function within each distribution.
It has the following signature:
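(The signature itself sits in the lines folded out of this view. As a hedged sketch of the machinery being described, a distribution's logp graph can be built and evaluated with `pm.logp`, assuming a recent PyMC:)

```python
import pymc as pm

rv = pm.Normal.dist(0, 5)
logp_graph = pm.logp(rv, 2.5)  # a symbolic PyTensor expression, not a number
print(logp_graph.eval())       # compiles and evaluates the graph to a float
```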

:::{warning}
@@ -277,7 +277,7 @@ as for ``FreeRV`` and ``ObservedRV``, they are ``TensorVariable``\s with
``Factor`` basically `enables and assigns the
logp <https://github.com/pymc-devs/pymc/blob/6d07591962a6c135640a3c31903eba66b34e71d8/pymc/model.py#L195-L276>`__
(represented as a tensor also) property to an PyTensor tensor (thus
(represented as a tensor also) property to a PyTensor tensor (thus
making it a random variable). For a ``TransformedRV``, it transforms the
distribution into a ``TransformedDistribution``, and then ``model.Var`` is
called again to add the RV associated with the
@@ -373,7 +373,7 @@ def logpt(self):
    return logp
```

which returns an PyTensor tensor that its value depends on the free parameters in the model (i.e., its parent nodes from the PyTensor graph).
which returns a PyTensor tensor that its value depends on the free parameters in the model (i.e., its parent nodes from the PyTensor graph).
You can evaluate it or compile it into a Python callable (to which you can pass NumPy arrays as input arguments).
Note that the logp tensor depends on its inputs in the PyTensor graph, thus you cannot pass a new tensor to generate a logp function.
For a similar reason, in PyMC we do a lot of graph copying, using pytensor.clone_replace to replace the inputs of a tensor.
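A minimal sketch of that graph-copying pattern (illustrative only, using the public `pytensor.clone_replace`):

```python
import pytensor
import pytensor.tensor as pt

x = pt.scalar("x")
y = x**2  # some graph built on x

# Clone y's graph, swapping its input x for a fresh tensor
x_new = pt.scalar("x_new")
y_new = pytensor.clone_replace(y, replace={x: x_new})
print(y_new.eval({x_new: 3.0}))  # 9.0
```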
@@ -561,7 +561,7 @@ Moreover, transition kernels in TFP do not flatten the tensors, see eg docstring
We love NUTS, or to be more precise Dynamic HMC with complex stopping rules.
This part is actually all done outside of PyTensor; for NUTS, it includes:
the leapfrog, dual averaging, tuning of the mass matrix and step size, the tree building, and sampler-related statistics like divergence and energy checking.
We actually have an PyTensor version of HMC, but it has never been used, and has been removed from the main repository.
We actually have a PyTensor version of HMC, but it has never been used, and has been removed from the main repository.
It can still be found in the [git history](https://github.com/pymc-devs/pymc/pull/3734/commits/0fdae8207fd14f66635f3673ef267b2b8817aa68), though.
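Since those pieces live in plain Python/NumPy rather than PyTensor, here is an illustrative leapfrog sketch (a toy, not PyMC's actual implementation):

```python
import numpy as np

def leapfrog(q, p, grad_logp, step_size, n_steps):
    """Integrate Hamiltonian dynamics for H(q, p) = -logp(q) + p @ p / 2."""
    p = p + 0.5 * step_size * grad_logp(q)  # half step for momentum
    for _ in range(n_steps - 1):
        q = q + step_size * p               # full step for position
        p = p + step_size * grad_logp(q)    # full step for momentum
    q = q + step_size * p
    p = p + 0.5 * step_size * grad_logp(q)  # closing half step
    return q, -p  # flipping momentum keeps the proposal reversible

# e.g. for a standard normal target, grad_logp is simply -q:
q_new, p_new = leapfrog(np.array([1.0]), np.array([0.5]), lambda q: -q, 0.1, 10)
```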

#### Variational Inference (VI)
2 changes: 1 addition & 1 deletion docs/source/guides/Gaussian_Processes.rst
@@ -158,7 +158,7 @@ other type of random variable. The first argument is the name of the random
variable representing the function we are placing the prior over.
The second argument is the inputs to the function that the prior is over,
:code:`X`. The inputs are usually known and present in the data, but they can
also be PyMC random variables. If the inputs are an PyTensor tensor or a
also be PyMC random variables. If the inputs are a PyTensor tensor or a
PyMC random variable, the :code:`shape` needs to be given.
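A hedged sketch of the API this paragraph describes (the covariance function and hyperparameter names here are illustrative):

```python
import numpy as np
import pymc as pm

X = np.linspace(0, 10, 50)[:, None]  # inputs, here known from the data

with pm.Model() as model:
    ell = pm.Gamma("ell", alpha=2, beta=1)
    cov_func = pm.gp.cov.ExpQuad(1, ls=ell)
    gp = pm.gp.Latent(cov_func=cov_func)
    f = gp.prior("f", X=X)  # first argument: name; second: the inputs X
```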

Usually at this point, inference is performed on the model. The
2 changes: 1 addition & 1 deletion docs/source/learn/core_notebooks/Gaussian_Processes.rst
@@ -155,7 +155,7 @@ other type of random variable. The first argument is the name of the random
variable representing the function we are placing the prior over.
The second argument is the inputs to the function that the prior is over,
:code:`X`. The inputs are usually known and present in the data, but they can
also be PyMC random variables. If the inputs are an PyTensor tensor or a
also be PyMC random variables. If the inputs are a PyTensor tensor or a
PyMC random variable, the :code:`shape` needs to be given.

Usually at this point, inference is performed on the model. The
2 changes: 1 addition & 1 deletion docs/source/learn/core_notebooks/pymc_pytensor.ipynb
@@ -415,7 +415,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### What is in an PyTensor graph?\n",
"### What is in a PyTensor graph?\n",
"\n",
"The following diagram shows the basic structure of an `pytensor` graph.\n",
"\n",
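As a rough companion to that notebook cell (a sketch; exact reprs differ across PyTensor versions):

```python
import pytensor.tensor as pt

x = pt.scalar("x")
y = x + 1

# Every derived variable records its provenance in an Apply node,
# which ties an Op to the input variables it was applied to:
print(y.owner.op)      # the Op that produced y (an elementwise add)
print(y.owner.inputs)  # the input variables, including x
```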
10 changes: 5 additions & 5 deletions pymc/distributions/custom.py
@@ -510,9 +510,9 @@ class CustomDist:
A callable that calculates the log probability of some given ``value``
conditioned on certain distribution parameter values. It must have the
following signature: ``logp(value, *dist_params)``, where ``value`` is
an PyTensor tensor that represents the distribution value, and ``dist_params``
a PyTensor tensor that represents the distribution value, and ``dist_params``
are the tensors that hold the values of the distribution parameters.
This function must return an PyTensor tensor.
This function must return a PyTensor tensor.
When the `dist` function is specified, PyMC will try to automatically
infer the `logp` when this is not provided.
@@ -523,9 +523,9 @@
A callable that calculates the log cumulative probability of some given
``value`` conditioned on certain distribution parameter values. It must have the
following signature: ``logcdf(value, *dist_params)``, where ``value`` is
an PyTensor tensor that represents the distribution value, and ``dist_params``
a PyTensor tensor that represents the distribution value, and ``dist_params``
are the tensors that hold the values of the distribution parameters.
This function must return an PyTensor tensor. If ``None``, a ``NotImplementedError``
This function must return a PyTensor tensor. If ``None``, a ``NotImplementedError``
will be raised when trying to compute the distribution's logcdf.
support_point : Optional[Callable]
A callable that can be used to compute the finite logp point of the distribution.
@@ -550,7 +550,7 @@ class CustomDist:
When specified, `ndim_supp` and `ndims_params` are not needed. See examples below.
dtype : str
The dtype of the distribution. All draws and observations passed into the
distribution will be cast onto this dtype. This is not needed if an PyTensor
distribution will be cast onto this dtype. This is not needed if a PyTensor
dist function is provided, which should already return the right dtype!
class_name : str
Name for the class which will wrap the CustomDist methods. When not specified,
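Before moving on, a hedged usage sketch of the `logp` keyword documented above (a unit-variance normal written with PyTensor ops; all names are illustrative):

```python
import numpy as np
import pymc as pm
import pytensor.tensor as pt

def normal_logp(value, mu):
    # log-density of Normal(mu, 1); value and mu arrive as PyTensor tensors
    return -0.5 * (value - mu) ** 2 - 0.5 * pt.log(2 * np.pi)

with pm.Model():
    mu = pm.Normal("mu", 0, 1)
    pm.CustomDist("obs", mu, logp=normal_logp, observed=np.random.randn(100))
```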
2 changes: 1 addition & 1 deletion pymc/distributions/dist_math.py
@@ -236,7 +236,7 @@ def log_normal(x, mean, **kwargs):


class SplineWrapper(Op):
"""Creates an PyTensor operation from scipy.interpolate.UnivariateSpline."""
"""Creates a PyTensor operation from scipy.interpolate.UnivariateSpline."""

__props__ = ("spline",)

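For context, a minimal custom `Op` of this flavor might look like the following (a hypothetical sketch, not the actual `SplineWrapper` code):

```python
import numpy as np
import pytensor.tensor as pt
from pytensor.graph.basic import Apply
from pytensor.graph.op import Op
from scipy.interpolate import UnivariateSpline

class SplineOp(Op):
    """Hypothetical minimal analogue: evaluate a fitted spline inside a graph."""

    __props__ = ("spline",)

    def __init__(self, spline: UnivariateSpline):
        self.spline = spline

    def make_node(self, x):
        x = pt.as_tensor_variable(x)
        return Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        (x,) = inputs
        output_storage[0][0] = np.asarray(self.spline(x), dtype=x.dtype)
```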
2 changes: 1 addition & 1 deletion pymc/distributions/truncated.py
@@ -51,7 +51,7 @@


class TruncatedRV(SymbolicRandomVariable):
"""An `Op` constructed from an PyTensor graph that represents a truncated univariate random variable."""
"""An `Op` constructed from a PyTensor graph that represents a truncated univariate random variable."""

default_output: int = 0
base_rv_op: Op
2 changes: 1 addition & 1 deletion pymc/logprob/rewriting.py
@@ -199,7 +199,7 @@ def construct_ir_fgraph(
A custom IR rewriter can be specified. By default,
`logprob_rewrites_db.query(RewriteDatabaseQuery(include=["basic"]))` is used.
Our measurable IR takes the form of an PyTensor graph that is more-or-less
Our measurable IR takes the form of a PyTensor graph that is more-or-less
equivalent to a given PyTensor graph (i.e. the keys of `rv_values`) but
contains `Op`s that are subclasses of the `MeasurableOp` type in
place of ones that do not inherit from `MeasurableOp` in the original
8 changes: 4 additions & 4 deletions pymc/model/core.py
@@ -216,7 +216,7 @@ def modelcontext(model: Optional["Model"]) -> "Model":


class ValueGradFunction:
"""Create an PyTensor function that computes a value and its gradient.
"""Create a PyTensor function that computes a value and its gradient.
Parameters
----------
@@ -593,7 +593,7 @@ def isroot(self):
return self.parent is None

def logp_dlogp_function(self, grad_vars=None, tempered=False, **kwargs):
"""Compile an PyTensor function that computes logp and gradient.
"""Compile a PyTensor function that computes logp and gradient.
Parameters
----------
@@ -1660,7 +1660,7 @@ def compile_fn(
point_fn: bool = True,
**kwargs,
) -> PointFunc | Function:
"""Compiles an PyTensor function.
"""Compiles a PyTensor function.
Parameters
----------
@@ -2177,7 +2177,7 @@ def compile_fn(
model: Model | None = None,
**kwargs,
) -> PointFunc | Function:
"""Compiles an PyTensor function.
"""Compiles a PyTensor function.
Parameters
----------
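A hedged sketch of the compiled point function in use (assuming current PyMC, where `model.logp()` returns the model's log-density graph):

```python
import pymc as pm

with pm.Model() as model:
    x = pm.Normal("x", 0, 1)

f = model.compile_fn(model.logp())  # a PointFunc that takes a point dict
print(f({"x": 0.5}))                # log-density of the model at x = 0.5
```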
2 changes: 1 addition & 1 deletion pymc/pytensorf.py
@@ -277,7 +277,7 @@ def cont_inputs(a):


def floatX(X):
"""Convert an PyTensor tensor or numpy array to pytensor.config.floatX type."""
"""Convert a PyTensor tensor or numpy array to pytensor.config.floatX type."""
try:
    return X.astype(pytensor.config.floatX)
except AttributeError:
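Usage is as simple as it looks (a sketch; the folded-out `except` branch handles inputs without an `.astype` method):

```python
import numpy as np
from pymc.pytensorf import floatX

x = floatX(np.arange(3))  # cast to pytensor.config.floatX (e.g. float32 if so configured)
```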
2 changes: 1 addition & 1 deletion pymc/sampling/jax.py
@@ -122,7 +122,7 @@ def get_jaxified_graph(
inputs: list[TensorVariable] | None = None,
outputs: list[TensorVariable] | None = None,
) -> list[TensorVariable]:
"""Compile an PyTensor graph into an optimized JAX function."""
"""Compile a PyTensor graph into an optimized JAX function."""
graph = _replace_shared_variables(outputs) if outputs is not None else None

fgraph = FunctionGraph(inputs=inputs, outputs=graph, clone=True)
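A hedged sketch of how this might be called, leaning only on the signature and docstring shown above (and treating the result as a JAX-compatible callable):

```python
import pytensor.tensor as pt
from pymc.sampling.jax import get_jaxified_graph

x = pt.vector("x")
y = pt.exp(x).sum()

jax_fn = get_jaxified_graph(inputs=[x], outputs=[y])
# jax_fn should now be traceable/callable from JAX code on array inputs
```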
2 changes: 1 addition & 1 deletion tests/test_pytensorf.py
@@ -277,7 +277,7 @@ def test_convert_generator_data(input_dtype):
result = convert_generator_data(square_generator)
apply = result.owner
op = apply.op
# Make sure the returned object is an PyTensor TensorVariable
# Make sure the returned object is a PyTensor TensorVariable
assert isinstance(result, TensorVariable)
assert isinstance(op, GeneratorOp), f"It's a {type(apply)}"
# There are no inputs - because it generates...
