Commit

…Link into devel
jdcpni committed Oct 7, 2024
2 parents c83d03d + 0e5a125 commit cd67070
Showing 16 changed files with 270 additions and 165 deletions.
@@ -99,7 +99,7 @@
 def reward_rate(sim_data):
     """
     Objective function for PEC to optimize. This function takes in the simulation data,
-    a 3D array of shape (num_trials, num_estimates, num_outcome_vars), and returns a
+    a 3d array of shape (num_trials, num_estimates, num_outcome_vars), and returns a
     scalar value that is the reward rate.
     """
     return np.mean(sim_data[:, :, 0][:] / sim_data[:, :, 1][:])
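The changed docstring states the contract concretely enough to exercise with synthetic data. A minimal sketch (the interpretation of outcome column 0 as a reward count and column 1 as a trial duration is an assumption for illustration, not taken from the commit):

```python
import numpy as np

def reward_rate(sim_data):
    """Objective for PEC: mean of outcome 0 over outcome 1 across trials and estimates."""
    return np.mean(sim_data[:, :, 0] / sim_data[:, :, 1])

# Hypothetical simulation data: 10 trials, 5 estimates, 2 outcome variables
rng = np.random.default_rng(0)
rewards = rng.integers(0, 2, size=(10, 5, 1)).astype(float)  # outcome variable 0 (assumed meaning)
durations = rng.uniform(0.5, 1.5, size=(10, 5, 1))           # outcome variable 1 (assumed meaning)
sim_data = np.concatenate([rewards, durations], axis=2)      # shape (10, 5, 2)

print(reward_rate(sim_data))  # a single scalar, as the docstring requires
```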
@@ -7,23 +7,23 @@

 CONSTRUCT_MODEL = True  # THIS MUST BE SET TO True to run the script
 DISPLAY_MODEL = (  # Only one of the following can be uncommented:
-    None  # suppress display of model
-    # {  # show simple visual display of model
+    # None  # suppress display of model
+    {  # show simple visual display of model
     # 'show_pytorch': True,  # show pytorch graph of model
-    # 'show_learning': True
-    # # 'show_projections_not_in_composition': True,
-    # # 'exclude_from_gradient_calc_style': 'dashed'  # show target mechanisms for learning
-    # # {'show_node_structure': True  # show detailed view of node structures and projections
-    # }
+    'show_learning': True
+    # 'show_projections_not_in_composition': True,
+    # 'exclude_from_gradient_calc_style': 'dashed'  # show target mechanisms for learning
+    # {'show_node_structure': True  # show detailed view of node structures and projections
+    }
 )
-# RUN_MODEL = False  # False => don't run the model
-RUN_MODEL = True,  # True => run the model
+RUN_MODEL = False  # False => don't run the model
+# RUN_MODEL = True,  # True => run the model
 # REPORT_OUTPUT = ReportOutput.FULL  # Sets console output during run [ReportOutput.ON, .TERSE OR .FULL]
 REPORT_OUTPUT = ReportOutput.OFF  # Sets console output during run [ReportOutput.ON, .TERSE OR .FULL]
 REPORT_PROGRESS = ReportProgress.OFF  # Sets console progress bar during run
-# PRINT_RESULTS = False  # don't print model.results to console after execution
-PRINT_RESULTS = True  # print model.results to console after execution
+PRINT_RESULTS = False  # don't print model.results to console after execution
+# PRINT_RESULTS = True  # print model.results to console after execution
 SAVE_RESULTS = False  # save model.results to disk
-# PLOT_RESULTS = False  # don't plot results (PREDICTIONS) vs. TARGETS
-PLOT_RESULTS = True  # plot results (PREDICTIONS) vs. TARGETS
+PLOT_RESULTS = False  # don't plot results (PREDICTIONS) vs. TARGETS
+# PLOT_RESULTS = True  # plot results (PREDICTIONS) vs. TARGETS
 ANIMATE = False  # {UNIT:EXECUTION_SET}  # Specifies whether to generate animation of execution
3 changes: 2 additions & 1 deletion docs/source/BasicsAndPrimer.rst
@@ -143,7 +143,8 @@ of the examples further below.
 PsyNeuLink picks sensible defaults when necessary Components are not specified. In the example above no `Projections
 <Projection>` were actually specified, so PsyNeuLink automatically created the appropriate types (in this case,
 `MappingProjections<MappingProjection>`), and sized them appropriately to connect each pair of Mechanisms. Each
-Projection has a `matrix <Projection.matrix>` parameter that weights the connections between the elements of the output
+Projection has a `matrix <Projection_Base.matrix>` parameter that weights the connections between the elements of the
+output
 of its `sender <Projection.sender>` and those of the input to its `receiver <Projection.receiver>`. Here, the
 default is to use a `FULL_CONNECTIVITY_MATRIX`, that connects every element of the sender's array to every element of
 the receiver's array with a weight of 1. However, it is easy to specify a Projection explicitly, including its
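The `FULL_CONNECTIVITY_MATRIX` behavior described in this hunk can be illustrated in plain NumPy (a sketch of the matrix semantics only, not PsyNeuLink's API):

```python
import numpy as np

sender_output = np.array([0.5, 1.0, 2.0])  # 3-element sender value
weights = np.ones((3, 2))                  # full connectivity: every sender element to every receiver element, weight 1
receiver_input = sender_output @ weights   # 2-element input to the receiver

# Each receiver element receives the sum of all sender elements
print(receiver_input)  # [3.5 3.5]
```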
9 changes: 6 additions & 3 deletions docs/source/index.rst
@@ -55,6 +55,8 @@ combine these components to implement published models. As an open source proje
 to be enhanced and extended, and its library is meant to provide an expanding repository of models, written in a
 concise, executable, and easy to interpret form, that can be shared, compared, and extended by the scientific community.

+*(Note: the PsyNeuLink development effort was initiated and named in 2016, entirely independently and without
+awareness of Neuralink, with which it bears no association nor any intentional relationship.)*

 .. _What_PsyNeuLink_IS:

@@ -102,11 +104,12 @@ The longterm goal of PsyNeuLink is to provide an environment that integrates com
 and behavior at all levels of analysis. While it is designed to be fully general, and can in principle be used to
 implement models at any level, it is still under development, and current efficiency considerations make it more
 suitable for some of forms of modeling than others. In its present form, it is well suited to the creation of
-simple to moderately complex models, and for the integration of disparate models into a single environment, while in
+simple to moderately complex models, and for the integration of disparate models into a single environment, and the
+creation of systems-level neuroscientific models, as well as cognitive neuroscientific and modestly scaled machine
+learning-style models, while
 it is presently less well suited to efforts involving massively large computations, such as:

 - extensive model fitting
-- large scale simulations
+- large scale machine learning simulations
 - highly detailed biophysical models of neurons or neuronal populations

 Other packages currently better suited to such applications are:
2 changes: 1 addition & 1 deletion psyneulink/core/components/component.py
@@ -1704,7 +1704,7 @@ def checkAndCastInt(x):
             # CAVEAT: assuming here that object dtype implies there are list objects (i.e. array with
             # different sized arrays/lists inside like [[0, 1], [2, 3, 4]]), even though putting a None
             # value in the array will give object dtype. This case doesn't really make sense in our
-            # context though, so ignoring this case in the interest of quickly fixing 3D variable behavior
+            # context though, so ignoring this case in the interest of quickly fixing 3d variable behavior
             variable = np.atleast_1d(variable)
         else:
             variable = np.atleast_2d(variable)
@@ -113,9 +113,9 @@ def simulation_likelihood(
     Parameters
     ----------
-    sim_data: Data collected over many simulations. This must be either a 2D or 3D numpy array.
+    sim_data: Data collected over many simulations. This must be either a 2d or 3d numpy array.
         If 2D, the first dimension is the simulation number and the second dimension is data points. That is,
-        each row is a simulation. If 3D, the first dimension is the trial, the second dimension is the
+        each row is a simulation. If 3d, the first dimension is the trial, the second dimension is the
         simulation number, and the final dimension is data points.

     exp_data: This must be a numpy array with identical format as the simulation data, with the exception
@@ -275,7 +275,7 @@ class PECOptimizationFunction(OptimizationFunction):
     PEC is trying to solve. The function is used to evaluate the `values <Mechanism_Base.value>` of the
     `outcome_variables <ParameterEstimationComposition.outcome_variables>`, according to which combinations of
     `parameters <ParameterEstimationComposition.parameters>` are assessed; this must be an `Callable`
-    that takes a 3D array as its only argument, the shape of which must be (**num_estimates**, **num_trials**,
+    that takes a 3d array as its only argument, the shape of which must be (**num_estimates**, **num_trials**,
     number of **outcome_variables**). The function should specify how to aggregate the value of each
    **outcome_variable** over **num_estimates** and/or **num_trials** if either is greater than 1.
@@ -2931,8 +2931,8 @@ class SoftMax(TransferFunction):
     *Thresholding and Adaptive Gain*

-    For cases in which SoftMax is used with vector that sparse (e.g., one-hots), the value(s) of the (most( significant
-    entries (e.g., the one's in a one-hot) can be sensitive to (diminished by) the number of other values in the vector
+    For cases in which SoftMax is used with sparse vectors (e.g., one-hots), the value(s) of the most significant
+    entries (e.g., the 1s in a one-hot) can be sensitive to (diminished by) the number of other values in the vector
     (i.e., its length). For example, whereas for ``[1 0]`` the SoftMax is ``[0.73105858 0.26894142]``, for ``[1 0 0 0]``
     it is ``[0.47536689 0.1748777 0.1748777 0.1748777]``. This can be addressed in one of two ways: either by
     thresholding `variable <SoftMax.variable>` before applying the SoftMax function, or by adapting the `gain
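The values quoted in the corrected passage follow directly from the definition of SoftMax and are easy to verify:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))  # subtract the max for numerical stability
    return e / e.sum()

print(softmax(np.array([1.0, 0.0])))            # [0.73105858 0.26894142]
print(softmax(np.array([1.0, 0.0, 0.0, 0.0])))  # [0.47536689 0.1748777  0.1748777  0.1748777 ]
```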
@@ -2955,7 +2955,7 @@
     .. _SoftMax_Derivative:

-    *Derivatve*
+    *Derivative*

     `derivative <SoftMax.derivative>` returns the derivative of the SoftMax. If *OUTPUT_TYPE* for the SoftMax
     is *ALL*, returns Jacobian matrix (derivative for each element of the output array with respect to each of the
@@ -2978,12 +2978,12 @@
         specifies the value by which to multiply `variable <Linear.variable>` before SoftMax transformation,
         which functions as the inverse "temperature" of the function. If it is a scalar, it must be greater
         than zero. If *ADAPTIVE* is specified, the value is determined dynamically based on the `variable
-        <SoftMax.variable>` `SoftMax_AdaptGain` for details).
+        <SoftMax.variable>`; see `Thresholding and Adaptive Gain <SoftMax_AdaptGain>` for details).

     mask_threshold : scalar : default None
         specifies whether to mask_threshold the `variable <SoftMax.variable>` before applying the SoftMax function;
         this only applies if `gain <SoftMax.gain>` is specified as a scalar; otherwise it is ignored
-        (see `SoftMax_AdaptGain` for details).
+        (see `Thresholding and Adaptive Gain <SoftMax_AdaptGain>` for details).

     adapt_scale : scalar : default 1
         specifies the *scale* parameter using by the `adapt_gain <SoftMax.adapt_gain>` method (see method for details).
@@ -3027,14 +3027,14 @@ class SoftMax(TransferFunction):
         determines how `variable <Logistic.variable>` is scaled before the SoftMax transformation, determining the
         "sharpness" of the distribution (it is equivalent to the inverse of the temperature of the SoftMax function);
         if it is 'ADAPTIVE', it is determined dynamically adjusted using the `adapt_gain <SoftMax.adapt_gain>` method
-        (see `SoftMax_AdaptGain` for additional details).
+        (see `Thresholding and Adaptive Gain <SoftMax_AdaptGain>` for additional details).

     mask_threshold : scalar or None
         determines whether the `variable <SoftMax.variable>` is thresholded before applying the SoftMax function;
         if it is a scalar, only elements of `variable <SoftMax.variable>` with an absolute value greater than that
         value are considered when applying the SoftMax function (which are then scaled by the `gain <SoftMax.gain>`
         parameter; all other elements are assigned 0. This only applies if `gain <SoftMax.gain>` is specified as a
-        scalar; otherwise it is ignored (see `SoftMax_AdaptGain` for details).
+        scalar; otherwise it is ignored (see `Thresholding and Adaptive Gain <SoftMax_AdaptGain>` for details).

     adapt_scale : scalar
         determined the *scale* parameter using by the `adapt_gain <SoftMax.adapt_gain>` method (see method for details).
@@ -3049,10 +3049,10 @@
     output : ALL, MAX_VAL, MAX_INDICATOR, or PROB
         determines how the SoftMax-transformed values of the elements in `variable <SoftMax.variable>` are reported
         in the array returned by `function <SoftMax._function>`:

-        * **ALL**: array of all SoftMax-transformed values (the default);
-        * **MAX_VAL**: SoftMax-transformed value for the element with the maximum such value, 0 for all others;
-        * **MAX_INDICATOR**: 1 for the element with the maximum SoftMax-transformed value, 0 for all others;
-        * **PROB**: probabilistically chosen element based on SoftMax-transformed values after setting the
+        * *ALL*: array of all SoftMax-transformed values (the default);
+        * *MAX_VAL*: SoftMax-transformed value for the element with the maximum such value, 0 for all others;
+        * *MAX_INDICATOR*: 1 for the element with the maximum SoftMax-transformed value, 0 for all others;
+        * *PROB*: probabilistically chosen element based on SoftMax-transformed values after setting the
           sum of values to 1 (i.e., their `Luce Ratio <https://en.wikipedia.org/wiki/Luce%27s_choice_axiom>`_),
           0 for all others.
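A simplified sketch of the four *output* options listed in this hunk (an illustration of the documented behavior, not PsyNeuLink's implementation; the value reported for the chosen element under *PROB* is an assumption here):

```python
import numpy as np

def softmax_output(x, output="ALL", rng=None):
    s = np.exp(x - x.max())
    s = s / s.sum()                      # SoftMax-transformed values
    out = np.zeros_like(s)
    if output == "ALL":                  # all transformed values (the default)
        return s
    if output == "MAX_VAL":              # max transformed value, 0 for all others
        out[np.argmax(s)] = s.max()
    elif output == "MAX_INDICATOR":      # 1 at the max, 0 for all others
        out[np.argmax(s)] = 1.0
    elif output == "PROB":               # element i chosen with probability s[i]
        if rng is None:
            rng = np.random.default_rng()
        i = rng.choice(len(s), p=s)
        out[i] = s[i]                    # assumed: report its transformed value
    return out

x = np.array([1.0, 0.0, 0.0, 0.0])
print(softmax_output(x, "MAX_INDICATOR"))  # [1. 0. 0. 0.]
```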
@@ -916,7 +916,7 @@ class ContentAddressableMemory(MemoryFunction): # ------------------------------
         value added to `variable <ContentAddressableMemory.variable>`) before storing in
         `memory <ContentAddressableMemory.memory>` (see `noise <Stateful_Noise>` for additional details).
         If a 2d array (or `Function` that returns one), its shape must be the same as `variable
-        <ContentAddressableMemory.variable>`; that is, each array in the outer dimension (Axis 0) must have the
+        <ContentAddressableMemory.variable>`; that is, each array in the outer dimension (axis 0) must have the
         same length as the corresponding one in `variable <ContentAddressableMemory.variable>`, so that it
         can be added Hadamard style to `variable <ContentAddressableMemory.variable>` before storing it in
         `memory <ContentAddressableMemory.memory>`.
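"Hadamard style" addition here just means elementwise addition of correspondingly shaped entries; for a ragged 2d `variable` like the one this docstring describes, a minimal sketch:

```python
import numpy as np

# Ragged 2d variable: two fields of different lengths (hypothetical values)
variable = [np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0])]
noise    = [np.array([0.1, 0.1, 0.1]), np.array([0.2, 0.2])]  # same shape as variable

# Each noise entry along axis 0 must match the corresponding entry of variable,
# so the two can be added elementwise before storage
noisy = [v + n for v, n in zip(variable, noise)]
print(noisy)  # [array([1.1, 2.1, 3.1]), array([4.2, 5.2])]
```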
2 changes: 1 addition & 1 deletion psyneulink/core/compositions/composition.py
@@ -10481,7 +10481,7 @@ def _instantiate_input_dict(self, inputs):
                 # shapes of entries will be validated in _validate_input_shapes_and_expand_for_all_trials())

             else:
-                # 3D ragged array or 2d array
+                # 3d ragged array or 2d array
                 entry = convert_to_np_array(_inputs)
                 ragged_array = entry.dtype == object
                 if ragged_array:
@@ -128,7 +128,7 @@
     * **objective_function** - specifies a function used to evaluate the `values <Mechanism_Base.value>` of the
       `outcome_variables <ParameterEstimationComposition.outcome_variables>`, according to which combinations of
       `parameters <ParameterEstimationComposition.parameters>` are assessed; this must be an `Callable`
-      that takes a 3D array as its only argument, the shape of which will be (**num_estimates**, **num_trials**,
+      that takes a 3d array as its only argument, the shape of which will be (**num_estimates**, **num_trials**,
       number of **outcome_variables**). The function should specify how to aggregate the value of each
       **outcome_variable** over **num_estimates** and/or **num_trials** if either is greater than 1.
@@ -551,7 +551,7 @@ def convert_to_2d_input(array_like):
     if isinstance(array_like, (np.ndarray, list)):
         if isinstance(array_like[0], (np.ndarray, list)):
             if isinstance(array_like[0][0], (np.ndarray, list)):
-                print("array_like ({}) is at least 3D, which may cause conversion errors".format(array_like))
+                print("array_like ({}) is at least 3d, which may cause conversion errors".format(array_like))
             out = []
             for a in array_like:
                 out.append(np.array(a))
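The visible part of `convert_to_2d_input` can be restated as a self-contained sketch; the fallback branch for flat input is a hypothetical addition, since the remainder of the function lies outside this hunk:

```python
import numpy as np

def convert_to_2d_input(array_like):
    """Wrap each first-level entry of a nested list/array as a numpy array."""
    if isinstance(array_like, (np.ndarray, list)) and isinstance(array_like[0], (np.ndarray, list)):
        if isinstance(array_like[0][0], (np.ndarray, list)):
            # Deeper nesting than 2d is flagged, as in the hunk above
            print("array_like ({}) is at least 3d, which may cause conversion errors".format(array_like))
        return [np.array(a) for a in array_like]
    # Hypothetical fallback for flat input, not shown in the hunk above
    return [np.atleast_1d(np.asarray(array_like))]

out = convert_to_2d_input([[1, 2], [3, 4]])
print(out)  # [array([1, 2]), array([3, 4])]
```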