Commit

Merge pull request #2016 from PrincetonUniversity/devel

Devel

kmantel authored Apr 22, 2021
2 parents 0d56d0f + eb94361 commit 4c60958

Showing 96 changed files with 10,820 additions and 2,509 deletions.
6 changes: 3 additions & 3 deletions .github/workflows/pnl-ci-docs.yml
@@ -32,7 +32,7 @@ jobs:
echo ::set-output name=on_master::$ON_MASTER
- name: Set up Python ${{ matrix.python-version }}
-uses: actions/setup-python@v2.2.1
+uses: actions/setup-python@v2.2.2
with:
python-version: ${{ matrix.python-version }}
architecture: ${{ matrix.python-architecture }}
@@ -46,7 +46,7 @@ jobs:
echo ::set-output name=pip_cache_dir::$(python -m pip cache dir)
- name: Wheels cache
-uses: actions/cache@v2.1.4
+uses: actions/cache@v2.1.5
with:
path: ${{ steps.pip_cache.outputs.pip_cache_dir }}/wheels
key: ${{ runner.os }}-python-${{ matrix.python-version }}-${{ matrix.python-architecture }}-pip-wheels-v2-${{ github.sha }}
@@ -84,7 +84,7 @@ jobs:
run: sphinx-build -b html -aE docs/source pnl-html

- name: Upload Documentation
-uses: actions/upload-artifact@v2.2.2
+uses: actions/upload-artifact@v2.2.3
with:
name: Documentation-${{ matrix.os }}-${{ matrix.python-version }}-${{ matrix.python-architecture }}
retention-days: 1
8 changes: 4 additions & 4 deletions .github/workflows/pnl-ci.yml
@@ -32,7 +32,7 @@ jobs:
fetch-depth: 10

- name: Set up Python ${{ matrix.python-version }}
-uses: actions/setup-python@v2.2.1
+uses: actions/setup-python@v2.2.2
with:
python-version: ${{ matrix.python-version }}
architecture: ${{ matrix.python-architecture }}
@@ -46,7 +46,7 @@ jobs:
echo ::set-output name=pip_cache_dir::$(python -m pip cache dir)
- name: Wheels cache
-uses: actions/cache@v2.1.4
+uses: actions/cache@v2.1.5
with:
path: ${{ steps.pip_cache.outputs.pip_cache_dir }}/wheels
key: ${{ runner.os }}-python-${{ matrix.python-version }}-${{ matrix.python-architecture }}-pip-wheels-v2-${{ github.sha }}
@@ -94,7 +94,7 @@ jobs:
run: pytest --junit-xml=tests_out.xml --verbosity=0 -n auto --maxprocesses=2

- name: Upload test results
-uses: actions/upload-artifact@v2.2.2
+uses: actions/upload-artifact@v2.2.3
with:
name: test-results-${{ matrix.os }}-${{ matrix.python-version }}-${{ matrix.python-architecture }}
path: tests_out.xml
@@ -108,7 +108,7 @@ jobs:
if: contains(github.ref, 'tags')

- name: Upload dist packages
-uses: actions/upload-artifact@v2.2.2
+uses: actions/upload-artifact@v2.2.3
with:
name: dist-${{ matrix.os }}-${{ matrix.python-version }}-${{ matrix.python-architecture }}
path: dist/
6 changes: 3 additions & 3 deletions .github/workflows/pnl-docs.yml
@@ -33,7 +33,7 @@ jobs:
ref: ${{ github.base_ref }}

- name: Set up Python ${{ matrix.python-version }}
-uses: actions/setup-python@v2.2.1
+uses: actions/setup-python@v2.2.2
with:
python-version: ${{ matrix.python-version }}
architecture: ${{ matrix.python-architecture }}
@@ -54,7 +54,7 @@ jobs:
run: sphinx-build -b html -aE docs/source pnl-html

- name: Upload generated docs
-uses: actions/upload-artifact@v2
+uses: actions/upload-artifact@v2.2.3
with:
name: docs-${{ matrix.pnl-version }}-${{ matrix.os }}-${{ matrix.python-version }}
path: pnl-html
@@ -92,7 +92,7 @@ jobs:
(diff -r docs-base docs-merge && echo 'No differences!' || true) | tee result.diff
- name: Post comment
-uses: actions/github-script@v3
+uses: actions/github-script@v4.0.1
# Post comment only if not PR across repos
# if: ${{ github.event.base.full_name }} == ${{ github.event.head.repo.full_name }}
with:
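The three workflow diffs above are mechanical pin bumps of the GitHub Actions used in CI. What such a bump does can be sketched as a small script (a hypothetical helper, not part of this repository; the action names and versions are those shown in the diffs):

```python
import re

# Hypothetical helper mirroring the pin bumps in the workflow diffs above:
# rewrite a "uses: owner/action@vX.Y.Z" workflow line to pin a new version.
def bump_action(line: str, action: str, new_version: str) -> str:
    """Replace the pinned version of one GitHub Action in a workflow line."""
    return re.sub(rf"({re.escape(action)})@v[\d.]+", rf"\1@{new_version}", line)

print(bump_action("      uses: actions/setup-python@v2.2.1",
                  "actions/setup-python", "v2.2.2"))
# → "      uses: actions/setup-python@v2.2.2"
```

In practice this is usually done by Dependabot or by hand; the sketch only illustrates the one-line nature of each change.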
1 change: 1 addition & 0 deletions .gitignore
@@ -188,3 +188,4 @@ tests/*.pdf

# mypy cache
.mypy_cache
+/tests/json/model_backprop.json
7 changes: 7 additions & 0 deletions README.rst
@@ -9,6 +9,7 @@
Welcome to PsyNeuLink
=====================

+(pronounced: /sīnyoolingk - sigh-new-link)
Documentation is available at https://princetonuniversity.github.io/PsyNeuLink/

Purpose
@@ -226,6 +227,12 @@ With substantial and greatly appreciated assistance from:
* **Ben Singer**, Princeton Neuroscience Institute, Princeton University
* **Ted Willke**, Intel Labs, Intel Corporation

+Support for the development of PsyNeuLink has been provided by:
+
+* The National Institute of Mental Health (R21-MH117548)
+* The John Templeton Foundation
+* The Templeton World Charitable Foundation

License
-------

12 changes: 6 additions & 6 deletions Scripts/Debug/Jason_Reward_rate_with_penalty_with_inputs.py
@@ -200,12 +200,12 @@ def get_stroop_model(unit_noise_std=.01, dec_noise_std=.1):
model.add_nodes([reward_rate, punish_rate])

controller = pnl.OptimizationControlMechanism(agent_rep=model,
-features=[inp_clr.input_port,
-          inp_wrd.input_port,
-          inp_task.input_port,
-          reward.input_port,
-          punish.input_port],
-feature_function=pnl.AdaptiveIntegrator(rate=0.1),
+state_features=[inp_clr.input_port,
+                inp_wrd.input_port,
+                inp_task.input_port,
+                reward.input_port,
+                punish.input_port],
+state_feature_function=pnl.AdaptiveIntegrator(rate=0.1),
objective_mechanism=objective_mech,
function=pnl.GridSearch(),
control_signals=[driftrate_control_signal,
4 changes: 2 additions & 2 deletions Scripts/Debug/Predator-Prey Sebastian REDUCED.py
@@ -61,8 +61,8 @@ def get_new_episode_flag():
# ************************************** CONOTROL APPARATUS ***********************************************************

ocm = OptimizationControlMechanism(name='EVC',
-features=[trial_type_input_mech],
-# feature_function=FEATURE_FUNCTION,
+state_features=[trial_type_input_mech],
+# state_feature_function=FEATURE_FUNCTION,
agent_rep=RegressionCFA(
name='RegressionCFA',
update_weights=BayesGLM(mu_0=0.5, sigma_0=0.1),
4 changes: 2 additions & 2 deletions Scripts/Debug/Predator-Prey Sebastian.py
@@ -166,8 +166,8 @@ def get_action(variable=[[0,0],[0,0],[0,0]]):
# ************************************** CONOTROL APPARATUS ***********************************************************

ocm = OptimizationControlMechanism(name='EVC',
-features=[trial_type_input_mech],
-# feature_function=FEATURE_FUNCTION,
+state_features=[trial_type_input_mech],
+# state_feature_function=FEATURE_FUNCTION,
agent_rep=RegressionCFA(
name='RegressionCFA',
update_weights=BayesGLM(mu_0=0.5, sigma_0=0.1),
6 changes: 3 additions & 3 deletions Scripts/Debug/StabilityFlexibility.py
@@ -170,9 +170,9 @@ def computeAccuracy(variable):
function = computeAccuracy)

meta_controller = pnl.OptimizationControlMechanism(agent_rep = stabilityFlexibility,
-features = [inputLayer.input_port,stimulusInfo.input_port],
-# features = {pnl.SHADOW_INPUTS: [inputLayer, stimulusInfo]},
-# features = [(inputLayer, pnl.SHADOW_INPUTS),
+state_features= [inputLayer.input_port, stimulusInfo.input_port],
+# state_features = {pnl.SHADOW_INPUTS: [inputLayer, stimulusInfo]},
+# state_features = [(inputLayer, pnl.SHADOW_INPUTS),
# (stimulusInfo, pnl.SHADOW_INPUTS)],
objective_mechanism = objective_mech,
function = pnl.GridSearch(),
4 changes: 2 additions & 2 deletions Scripts/Debug/Umemoto_Feb.py
@@ -119,8 +119,8 @@
allocation_samples=signalSearchRange)

Umemoto_comp.add_model_based_optimizer(optimizer=pnl.OptimizationControlMechanism(agent_rep=Umemoto_comp,
-features=[Target_Stim.input_port, Distractor_Stim.input_port, Reward.input_port],
-feature_function=pnl.AdaptiveIntegrator(rate=1.0),
+state_features=[Target_Stim.input_port, Distractor_Stim.input_port, Reward.input_port],
+state_feature_function=pnl.AdaptiveIntegrator(rate=1.0),
objective_mechanism=pnl.ObjectiveMechanism(monitor_for_control=[Reward,
(Decision.output_ports[pnl.PROBABILITY_UPPER_THRESHOLD], 1, -1)],
),
8 changes: 4 additions & 4 deletions Scripts/Debug/Umemoto_Feb2.py
@@ -129,10 +129,10 @@

Umemoto_comp.add_model_based_optimizer(optimizer=pnl.OptimizationControlMechanism(
agent_rep=Umemoto_comp,
-features=[Target_Stim.input_port,
-          Distractor_Stim.input_port,
-          Reward.input_port],
-feature_function=pnl.AdaptiveIntegrator(rate=1.0),
+state_features=[Target_Stim.input_port,
+                Distractor_Stim.input_port,
+                Reward.input_port],
+state_feature_function=pnl.AdaptiveIntegrator(rate=1.0),
objective_mechanism=pnl.ObjectiveMechanism(
monitor_for_control=[Reward,
(Decision.output_ports[pnl.PROBABILITY_UPPER_THRESHOLD], 1, -1)],
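The script diffs above (and the comment-only changes in the files that follow) all apply the same OptimizationControlMechanism keyword renames: features → state_features and feature_function → state_feature_function. The mapping can be sketched as a migration helper (hypothetical, not part of PsyNeuLink):

```python
# Keyword renames this commit applies to OptimizationControlMechanism calls.
OCM_RENAMES = {
    "features": "state_features",
    "feature_function": "state_feature_function",
}

def migrate_ocm_kwargs(kwargs: dict) -> dict:
    """Return a copy of the kwargs with any deprecated keyword names renamed."""
    return {OCM_RENAMES.get(name, name): value for name, value in kwargs.items()}

print(migrate_ocm_kwargs({"features": ["inp_clr"], "objective_mechanism": None}))
# → {'state_features': ['inp_clr'], 'objective_mechanism': None}
```

Keywords outside the mapping pass through unchanged, which is why the scripts' other arguments (agent_rep, objective_mechanism, function, control_signals) are untouched in the diffs.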
10 changes: 5 additions & 5 deletions Scripts/Debug/Yotam LCA Model LLVM.py
@@ -87,7 +87,7 @@ def get_all_tasks(env_bipartite_graph):
# LCAMechanism at the end for performance evaluation
# Params:
# bipartite_graph: bipartite graph representing the task environment (NetworkX object)
-# num_features: number of particular features per dimension (e.g. number of colours)
+# num_features: number of particular state_features per dimension (e.g. number of colours)
# num_hidden: number of hidden units in the network
# epochs: number of training iterations
# learning_rate: learning rate for SGD or (however pnl train their networks)
@@ -206,7 +206,7 @@ def get_trained_network(bipartite_graph, num_features=3, num_hidden=200, epochs=
# RecurrentTransferMechanism at the end for performance evaluation
# Params:
# bipartite_graph: bipartite graph representing the task environment (NetworkX object)
-# num_features: number of particular features per dimension (e.g. number of colours)
+# num_features: number of particular state_features per dimension (e.g. number of colours)
# num_hidden: number of hidden units in the network
# epochs: number of training iterations
# learning_rate: learning rate for SGD or (however pnl train their networks)
@@ -352,7 +352,7 @@ def get_trained_network_multLCA(bipartite_graph, num_features=3, num_hidden=200,
# equal ordinal output feature nodes (i.e. 1st feature input maps to 1st feature output).
# Params:
# all_tasks: list containing all tasks in the environment
-# num_features: number of particular features per dimension (e.g. number of colours)
+# num_features: number of particular state_features per dimension (e.g. number of colours)
# num_input_dims: number of input dimensions in the environment
# num_output_dims: number of output dimensions in the environment
# samples_per_feature: how many stimuli will be sampled per feature to be trained within a task
@@ -400,12 +400,12 @@ def generate_training_data(all_tasks, num_features, num_input_dims, num_output_d
return input_examples, output_examples, control_examples

# Generate data for the network to test on. test_tasks is a performance set (i.e. a multitasking set of tasks to execute).
-# As data we generate random features for all input dimensions. To specify a mapping, we use the rule that input
+# As data we generate random state_features for all input dimensions. To specify a mapping, we use the rule that input
# feature nodes map to equal ordinal output feature nodes (i.e. 1st feature input maps to 1st feature output).
# Params:
# test_tasks: list containing set of tasks to multitask
# all_tasks: list containing all tasks in the environment
-# num_features: number of particular features per dimension (e.g. number of colours)
+# num_features: number of particular state_features per dimension (e.g. number of colours)
# num_input_dims: number of input dimensions in the environment
# num_output_dims: number of output dimensions in the environment
# num_test_points: number of test points to generate
10 changes: 5 additions & 5 deletions Scripts/Debug/Yotam LCA Model.py
@@ -80,7 +80,7 @@ def get_all_tasks(env_bipartite_graph):
# LCAMechanism at the end for performance evaluation
# Params:
# bipartite_graph: bipartite graph representing the task environment (NetworkX object)
-# num_features: number of particular features per dimension (e.g. number of colours)
+# num_features: number of particular state_features per dimension (e.g. number of colours)
# num_hidden: number of hidden units in the network
# epochs: number of training iterations
# learning_rate: learning rate for SGD or (however pnl train their networks)
@@ -198,7 +198,7 @@ def get_trained_network(bipartite_graph, num_features=3, num_hidden=200, epochs=
# RecurrentTransferMechanism at the end for performance evaluation
# Params:
# bipartite_graph: bipartite graph representing the task environment (NetworkX object)
-# num_features: number of particular features per dimension (e.g. number of colours)
+# num_features: number of particular state_features per dimension (e.g. number of colours)
# num_hidden: number of hidden units in the network
# epochs: number of training iterations
# learning_rate: learning rate for SGD or (however pnl train their networks)
@@ -335,7 +335,7 @@ def get_trained_network_multLCA(bipartite_graph, num_features=3, num_hidden=200,
# equal ordinal output feature nodes (i.e. 1st feature input maps to 1st feature output).
# Params:
# all_tasks: list containing all tasks in the environment
-# num_features: number of particular features per dimension (e.g. number of colours)
+# num_features: number of particular state_features per dimension (e.g. number of colours)
# num_input_dims: number of input dimensions in the environment
# num_output_dims: number of output dimensions in the environment
# samples_per_feature: how many stimuli will be sampled per feature to be trained within a task
@@ -383,12 +383,12 @@ def generate_training_data(all_tasks, num_features, num_input_dims, num_output_d
return input_examples, output_examples, control_examples

# Generate data for the network to test on. test_tasks is a performance set (i.e. a multitasking set of tasks to execute).
-# As data we generate random features for all input dimensions. To specify a mapping, we use the rule that input
+# As data we generate random state_features for all input dimensions. To specify a mapping, we use the rule that input
# feature nodes map to equal ordinal output feature nodes (i.e. 1st feature input maps to 1st feature output).
# Params:
# test_tasks: list containing set of tasks to multitask
# all_tasks: list containing all tasks in the environment
-# num_features: number of particular features per dimension (e.g. number of colours)
+# num_features: number of particular state_features per dimension (e.g. number of colours)
# num_input_dims: number of input dimensions in the environment
# num_output_dims: number of output dimensions in the environment
# num_test_points: number of test points to generate