Devel #3211
base: master
Conversation
Parameter getters may not be valid when called on a Parameter owned by a type. In this case, use the default value to determine if the Parameter can be associated with a ParameterPort. Do not catch exceptions on instantiated classes.
Numpy 1.25.0 moved warnings and errors to a submodule [0]. Moreover, they are removed from the top-level namespace in Numpy 2.0.0 [1]. Add and use an exception import that points either to the new exceptions submodule or to the global namespace, depending on whether the former is available.
[0] https://numpy.org/doc/2.1/release/1.25.0-notes.html#numpy-now-has-an-np-exceptions-namespace
[1] https://numpy.org/doc/2.1/release/2.0.0-notes.html#numpy-2-0-python-api-removals
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
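For context, a minimal compatibility import along these lines might look as follows (the alias name np_exceptions is illustrative, not necessarily the one used in the PR):

```python
# Sketch of a version-agnostic exception import: NumPy >= 1.25 exposes
# numpy.exceptions, older releases keep the classes in the top-level namespace.
try:
    from numpy import exceptions as np_exceptions
except ImportError:
    import numpy as np_exceptions

# e.g. np_exceptions.VisibleDeprecationWarning resolves on both old and new NumPy
```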
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
This mode is not supposed to be used outside of its combination with PTXRun. Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
…e test instances Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
This mode should not be used outside development or testing. Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
It does not support non-trivial scheduling. Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Explicitly list the composition compilation mode. Make the new mode private; it is only used by developers and tests. Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Use them instead of the COMPILED mask. Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Move compiled execution of output_CIM to the same place as Python execution. Do not report end of trial twice in _LLVMPerNode execution mode. Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Enum members are singletons. Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Make most compiled modes private for test/development use only. Introduce a new _LLVMPerNode execution mode. Do not fall back to per-node mode in automatic fallback. Use ExecutionMode helper methods rather than COMPILED mask to determine compiled mode.
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
• transferfunctions.py:
  * Add bounds to Linear (docs are wrong and code doesn't implement them)
  * Refactor bounds to use a getter, to list dependencies on scale and offset, and to use "natural bounds" or the like
    - rename bounds -> range
    - add DeterministicTransferFunction subclass of TransferFunction:
      - scale and offset Parameters used by all subclasses
      - add _range_setter() that adjusts range based on scale and/or offset (see the sketch below)
---------
Co-authored-by: jdcpni <pniintel55>
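As a rough illustration of how a range can depend on scale and offset, here is a standalone sketch; it is not the PsyNeuLink _range_setter implementation, just the underlying arithmetic:

```python
def scaled_range(natural_range, scale=1.0, offset=0.0):
    """Map a function's natural output range through y = scale * x + offset."""
    lo = natural_range[0] * scale + offset
    hi = natural_range[1] * scale + offset
    # A negative scale flips the endpoints, so keep them ordered.
    return (min(lo, hi), max(lo, hi))

print(scaled_range((0.0, 1.0), scale=2.0, offset=-1.0))  # (-1.0, 1.0)
```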
Fixes: 7cacab6 ("refactor/transferfunction/scale_and_offset") Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Enable MAX_VAL and MAX_INDICATOR in compiled SoftMax function.
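For reference, the two output options behave roughly like this in plain numpy (illustrative of the documented semantics, not the compiled implementation):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
s = np.exp(x) / np.exp(x).sum()               # full softmax distribution
max_val = np.where(s == s.max(), s, 0.0)      # MAX_VAL: softmax value at the argmax, zeros elsewhere
max_indicator = (s == s.max()).astype(float)  # MAX_INDICATOR: 1 at the argmax, zeros elsewhere
```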
…#3187) Use nullcontext for cases that do not emit a warning. Add missing warning messages for cases that do. Turn expected messages into regular expressions and pass them via the "match" parameter to pytest.warns. Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
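The pattern described is roughly the following (do_something and the warning text are placeholders, not taken from the test suite):

```python
from contextlib import nullcontext

import pytest

@pytest.mark.parametrize("value, expected_warning", [
    (1.0, None),                      # valid input: no warning expected
    (-1.0, r"value .* is negative"),  # invalid input: expected message as a regex
])
def test_warnings(value, expected_warning):
    # nullcontext when nothing should be emitted, pytest.warns(match=...) otherwise
    ctx = nullcontext() if expected_warning is None else pytest.warns(UserWarning, match=expected_warning)
    with ctx:
        do_something(value)  # placeholder for the code under test
```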
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
…ments Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
…eIntegrator Increased tolerance is now only needed for fp32 execution. Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Match Python implementations. Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
The "reset" variants of functions now return a value. Add tests for reset behaviour of: * ProcessingMechanism using an integrator function * DDM mechanism using DriftDiffusionIntegrator function * TransferMechanism in integrator mode The reproducer from #3142 was added as a test of mechanism reset in composition. Fixes: #3142 Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
…tion variants (#3188) Return the value of the "previous_value" Parameter in the Function "reset" variant for Functions that don't have other Parameters with initializers. Return the values of "previous_value" and "previous_time" in the DriftDiffusionIntegrator "reset" variant. Update output ports in Mechanism "reset" execution variants, using the value returned from the Function's "reset" variant. Update compiled test helpers to allow the selection of execution variants for Mechanisms and Functions. Add the reproducer from #3142 as a regression test. Closes: #3142
Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
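A hedged sketch of the reset-return behavior described in the two commits above (PsyNeuLink usage is approximate; the key point is that reset() now returns the re-initialized value instead of None):

```python
import psyneulink as pnl

# A mechanism with a stateful (integrator) function.
mech = pnl.ProcessingMechanism(function=pnl.SimpleIntegrator)
mech.execute([1.0])

reset_value = mech.reset()   # previously returned nothing useful; now returns previous_value
assert reset_value is not None
```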
Itertools classes won't support copy or pickle starting in Python 3.14. Filter the associated DeprecationWarning. Fixes ~5000 warnings in a full test run using Python 3.12. Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
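The filter described can be expressed along these lines (the message pattern is an assumption about the deprecation text, not copied from the PR):

```python
import warnings

# Ignore the Python 3.12+ DeprecationWarning about copying/pickling itertools objects.
warnings.filterwarnings(
    "ignore",
    message=r".*itertools.*",   # assumed pattern; the actual filter may match differently
    category=DeprecationWarning,
)
```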
Only explicitly specified FEEDBACK projections are known to Composition.graph. FLEXIBLE projections are determined to be FEEDBACK or NON_FEEDBACK in Composition.graph_processing.
- fix bug in assignment of memory_template when first entry of a field is non-zero
- other Components generally set this in their execute methods
  - setting it only in Composition.run fails when a Composition is nested, because only execute gets called
  - it must still be set in Composition.run in case the Composition is empty
…26 (#3198) Updates the requirements on [pycuda](https://github.com/inducer/pycuda) to permit the latest version. - [Release notes](https://github.com/inducer/pycuda/releases) - [Commits](inducer/pycuda@v2024.1...v2025.1) --- updated-dependencies: - dependency-name: pycuda dependency-type: direct:production ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Provides functionality similar to the skip_file_prefixes argument of warnings.warn in Python 3.12+.
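A self-contained sketch of the idea (the real get_stacklevel_skip_file_prefixes helper may differ in signature and details):

```python
import inspect
import warnings

def stacklevel_skipping_prefixes(prefixes):
    """Return a stacklevel for warnings.warn that attributes the warning to the
    first caller whose source file does not start with one of the prefixes."""
    # inspect.stack()[0] is this helper itself; frames above it map to stacklevels 1, 2, ...
    for level, frame_info in enumerate(inspect.stack()[1:], start=1):
        if not frame_info.filename.startswith(tuple(prefixes)):
            return level
    return 1

def library_function():
    # Warn so the message points at user code, not library internals.
    warnings.warn("deprecated usage", DeprecationWarning,
                  stacklevel=stacklevel_skipping_prefixes(("/path/to/library/",)))
```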
Override more verbose graph-scheduler strings for GraphStructureCondition parent classes:
- _GSCWithOperations: exclude attributes other than nodes
- _GSCReposition: remove str of graph for already before/after warnings
condition: simplify output for GraphStructureConditions
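Illustratively, the terser string form described above can look like this (a standalone stand-in; the real graph-scheduler classes carry more state):

```python
class _GSCWithOperations:
    def __init__(self, *nodes):
        self.nodes = nodes

    def __str__(self):
        # Report only the nodes, excluding other attributes, to keep warnings short.
        return f"{type(self).__name__}({', '.join(map(str, self.nodes))})"

print(_GSCWithOperations("A", "B"))  # _GSCWithOperations(A, B)
```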
It would otherwise just fail less clearly later in init.
…sition-show-graph-jupyter add return value to show_graph
standard_deviation=1.0, \
bias=0.0, \
scale=1.0, \
offset=0.0, \
Code scanning / CodeQL notice: Explicit returns mixed with implicit (fall through) returns
@@ -374,7 +374,7 @@

 from psyneulink._typing import Optional

-from psyneulink.core.components.component import Component, parameter_keywords
+from psyneulink.core.components.component import Component, ComponentsMeta, parameter_keywords
Code scanning / CodeQL notice: Cyclic import (psyneulink.core.components.component)
from psyneulink.core.globals.utilities import (
    get_stacklevel_skip_file_prefixes,
    parse_valid_identifier,
    toposort_key,
)
Code scanning / CodeQL notice: Cyclic import (psyneulink.core.globals.utilities)
This PR causes the following changes to the html docs (ubuntu-latest-3.11):
See CI logs for the full diff.
* Changes to allow passing BatchedImpl tensors. When pytorch vmap is used on a function, the function is called to trace it with tensors passed as inputs that are not normal pytorch tensors. They are in fact BatchedImpl tensors that have no underlying storage. These cannot be converted to double() or numpy arrays, so I needed to remove calls to convert these tensors in this special case.
* Fix bug in _batch_inputs. _batch_inputs was not actually returning batches of size batch_size.
* Testing batch support.
* Fix batch_size=1 case.
* WIP merge for pytorch batching in autodiff.
* Fix for when minibatch_size is ndarray.
* Cleanup.
* Remove handling of batched_results.
* Handle dtype-object in _get_variable_from_input. Not sure this works in general, to be honest.
* Fixes from merging in devel.
* Fixes for ragged things.
* More fixes for ragged things.
* Fix bad torch.stack call in execute_input_ports.
* Add a real pytorch training test with batching. I made a test that implements an equivalent network in PyTorch and PNL, initializes them with the same weights, trains them both, and then compares the losses. I think this is similar to some other tests in the file, but this one tests batch size > 1 and actually trains the pytorch network as well.
* More fixes.
* Disable update_autodiff_all_output_values. Now that all_output_values contains the batch dimension in the first dimension, it causes an error when passed to output_CIM.execute because it has the incorrect number of input ports (unless the batch size happens to match the number of input ports). Not sure what the best fix is, but disabling this line seems to work and pass all tests. Need to ask Jon.
* Handle 3D structure better in autodiff update_results.
* Fix formatting.
* Fix ragged processing. A bunch of fixes for things wrong with ragged processing.
* Fix logging test. We need to add the batch dimension to the logging checks. I also made the test check multiple batch sizes.
* Move variable list initialization outside loop. I am pretty sure this was a bug.
* Fix ugly formatting.
* Default variables need to be forced to atleast_2d. Since we have a batch dimension now, all the default variables need to be forced to 2D.
* Fix store_memory to support batches. Not sure this is the correct fix; I am getting errors in results, but I don't know how else to implement this.
* More fixes for defaults with batch size.
* More fixes for defaults with batch size.
* Fix store_memory.
* Fix store_memory.
* Uncomment some tests in test_emcomposition.py.
* Revert change to _get_variable_from_input. I think I made this change in error; it was before I fixed issues with handling ragged structures. Let's see if tests pass.
* Execute the output_CIM on the last element of the batch to update the output ports.
* Move check for torch to learn method. Since we need torch now to set up inputs (before execute), I moved the check for it to learn.
* Add np.VisibleDeprecationWarning to catch. Looks like old versions of numpy didn't throw ValueError.
* Add pytorch mark to test.
* Override new weights_only=True default in PyTorch 2.6.
* Move the pytorch lower pin up to 1.13. Looks like the weights_only argument wasn't supported until then; that is a pretty ancient version of pytorch anyway. Also pushed the upper pin up.
* Documentation cleanup.
* Add a check for batch_size > 1 in EMSStorage.
* CodeQL cleanup.
* Better testing of batched_results argument.
* Style fixes.
---------
Co-authored-by: David Turner <dmturmer@princeton.edu>
Co-authored-by: jdcpni <jdc@princeton.edu>
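To illustrate the BatchedImpl point above (assumes a recent PyTorch with torch.vmap; purely a demonstration, not code from the PR):

```python
import torch

def per_sample_loss(x):
    # Inside vmap tracing, x is a BatchedTensor with no underlying storage,
    # so conversions like x.numpy() fail; keep everything in torch ops.
    return (x * 2.0).sum()

losses = torch.vmap(per_sample_loss)(torch.ones(4, 3))  # shape (4,), one value per batch element
print(losses)
```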
This PR causes the following changes to the html docs (ubuntu-latest-3.11):
See CI logs for the full diff.