
Commit

Merge branch 'development' into 'master'
Tapas 3.1.0

See merge request TNU/tapas!50
tnutapas committed Mar 26, 2019
2 parents f9a9d95 + 6a4ede6 commit 51b41f2
Showing 269 changed files with 10,293 additions and 3,749 deletions.
Binary file added .DS_Store
1 change: 0 additions & 1 deletion .gitignore
@@ -14,4 +14,3 @@ lib
*.so
*.mexw64
*.asv
examples/
2 changes: 1 addition & 1 deletion PhysIO/.gitmodules → .gitmodules
@@ -1,4 +1,4 @@
[submodule "wikidocs"]
path = wikidocs
path = PhysIO/wikidocs
url = [email protected]:physio/physio-public.wiki.git
branch = master
29 changes: 29 additions & 0 deletions CHANGELOG.md
@@ -1,6 +1,35 @@
# Changelog
TAPAS toolbox

## [3.1.0] 2019-03-26

### Added
- Get revision info from Matlab.
- PhysIO R2018.1.2: BioPac txt-file reader (for single file, resp/cardiac/trigger data in different columns); see the configuration sketch at the end of this release entry
- SERIA: Automatic plotting of the seria model.
- SERIA: Example for single subject.

### Fixed
- Huge: Minor bug fixes.

### Changed
- Huge: Improved documentation.
- New version of the HGF toolbox (v5.3). Details in tapas/HGF/README.md
- New version of the rDCM toolbox (v1.1). Details in tapas/rDCM/CHANGELOG.md.
- New version of the PhysIO Toolbox (R2019a-v7.1.0)
    - BIDS and BioPac readers; code sorted in modules (`readin`, `preproc` etc.),
      also reflected in figure names
    - Updated and extended all examples, and introduced unit testing
    - Full details in tapas/PhysIO/CHANGELOG.md
- Improved the documentation of SERIA.
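
Below is a hedged configuration sketch for the BioPac txt reader mentioned above. It is an illustration only: the vendor string `'BioPac_Txt'` and the file names are assumptions, so consult `tapas/PhysIO/CHANGELOG.md` and the shipped examples for the authoritative names.

```matlab
% Sketch only (assumed vendor string and file names): point PhysIO at a
% single BioPac text file holding respiratory, cardiac and trigger data
% in separate columns.
physio = tapas_physio_new();                         % default parameter structure
physio.save_dir              = {'physio_out'};
physio.log_files.vendor      = 'BioPac_Txt';         % assumption: exact reader name may differ
physio.log_files.cardiac     = {'biopac_data.txt'};  % one file for all channels
physio.log_files.respiration = {'biopac_data.txt'};
% ... set scan timing (physio.scan_timing.sqpar) and model options as in the examples ...
physio = tapas_physio_main_create_regressors(physio);
```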

## [3.0.1] 2018-10-17

### Fixed
- PhysIO R2018.1.2: fixed bug for 3D nifti array read-in in tapas_physio_create_noise_rois_regressors (github issue #24, gitlab #52)

### Added

## [3.0.0] 2018-09-09

### Added
8 changes: 5 additions & 3 deletions Contributor-License-Agreement.md
@@ -11,11 +11,13 @@ and
each GitHub User listed in the following table:

Name | Company/Institution/Address | City | Country | E-Mail/GitHub Username
------------------------ | ----------------------------| --------- | ------- | ----------------------
------------------------ | --------------------------- | --------- | ------- | ----------------------
Lars Kasper | TNU, University of Zurich | Zurich | CH | mrikasper
Eduardo Aponte | TNU, University of Zurich | Zurich | CH | tnutapas
| | | |
**-> Add Entry here <-** | | | |
Daniel Hoffmann Ayala | Technical University | Munich | D | DanielHoffmannAyala
Benoît Béranger | CENIR, ICM | Paris | FR | benoitberanger
**- Add Entry here -** | **- Add Entry here -** | **Add** | **Add** | **Add**

(hereinafter referred to as "Contributor")

relating to the Contribution (as defined below) that Contributor makes to the following Project:
12 changes: 11 additions & 1 deletion HGF/README.md
@@ -3,7 +3,7 @@ Release ID: $Format:%h %d$

---

Copyright (C) 2012-2018 Christoph Mathys <[email protected]>
Copyright (C) 2012-2019 Christoph Mathys <[email protected]>
Translational Neuromodeling Unit (TNU)
University of Zurich and ETH Zurich

@@ -46,6 +46,16 @@ hgf_demo.pdf.

## Release notes

### v5.3
- Enabled setting and storing of seed for random number generator in simulations (see the usage sketch after this list)
- Debugged reading of response model configuration in simModel
- Reduced default maxStep from 2 to 1 in quasinewton_optim_config
- Improved readability of sim files for unitsq_sgm and softmax_binary
- Added simulation capability for softmax_wld and softmax_mu3_wld
- Added softmax_wld response model
- Improved readability of softmax_mu3_wld code
- Improved readability of softmax and softmax_mu3 code
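
A minimal usage sketch of the new seed option (mirroring the call in `HGF/hgf_demo.m`; the input vector `u` is assumed to be loaded as in the demo, and omitting the last argument falls back to `rng('shuffle')`):

```matlab
% Simulate responses reproducibly by passing a seed as the optional last
% argument of tapas_simModel.
sim = tapas_simModel(u, ...
    'tapas_hgf_binary', ...
    [NaN 0 1 NaN 1 1 NaN 0 0 1 1 NaN -2.5 -6], ...
    'tapas_unitsq_sgm', ...
    5, ...
    12345);  % seed for the random number generator
```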

### v5.2
- Brought hgf_demo.pdf up to date
- Added gaussian_obs_offset response model
76 changes: 41 additions & 35 deletions HGF/hgf_demo.m
@@ -6,7 +6,7 @@
%%
% The inputs are simply a time series of 320 0s and 1s. This is the input
% sequence used in the task of Iglesias et al. (2013), _Neuron_, *80*(2), 519-530.
%%

scrsz = get(0,'ScreenSize');
outerpos = [0.2*scrsz(3),0.7*scrsz(4),0.8*scrsz(3),0.3*scrsz(4)];
figure('OuterPosition', outerpos)
@@ -31,17 +31,18 @@
%
% * The first argument, which would normally be the observed responses, is empty
% (ie, []) here because the optimal parameter values are independent of any responses.
% * The second argument is the perceptual model, _hgf_binary_ here. We need
% to use the prefix 'tapas_' and the suffix '_config' in order to find the correct
% * The second argument is the inputs _u._
% * The third argument is the perceptual model, _hgf_binary_ here. We need to
% use the prefix 'tapas_' and the suffix '_config' in order to find the correct
% configuration file
% * The third argument is the response model, _bayes_optimal_binary_ here. Again
% we need to use the same prefix and suffix. In fact, bayes_optimal_binary is
% a kind of pseudo-response model because instead of providing response probabilities
% * The fourth argument is the response model, _bayes_optimal_binary_ here.
% Again we need to use the same prefix and suffix. In fact, bayes_optimal_binary
% is a kind of pseudo-response model because instead of providing response probabilities
% it simply calculates the Shannon surprise elicited by each new input given the
% current perceptual state.
% * The fourth argument is the optimization algorithm to be used, _quasinewton_optim_
% * The fifth argument is the optimization algorithm to be used, _quasinewton_optim_
% here, which is a variant of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm.
%%

bopars = tapas_fitModel([],...
u,...
'tapas_hgf_binary_config',...
@@ -74,13 +74,15 @@
% that, we simply choose values for $\omega$. Here, we take $\omega_2=-2.5$ and
% $\omega_3=-6$. But in addition to the perceptual model _hgf_binary_, we now
% need a response model. Here, we take the unit square sigmoid model, _unitsq_sgm_,
% with parameter $\zeta=5$.
%%
% with parameter $\zeta=5$. The last argument is an optional seed for the random
% number generator.

sim = tapas_simModel(u,...
'tapas_hgf_binary',...
[NaN 0 1 NaN 1 1 NaN 0 0 1 1 NaN -2.5 -6],...
'tapas_unitsq_sgm',...
5);
5,...
12345);
%%
% The general meaning of the arguments supplied to simModel is explained
% in the manual and in the file _tapas_simModel.m_. The specific meaning of each
@@ -89,12 +92,12 @@
%% Plot simulated responses
% We can plot our simulated responses $y$ using the plotting function for _hgf_binary_
% models.
%%

tapas_hgf_binary_plotTraj(sim)
%% Recover parameter values from simulated responses
% We can now try to recover the parameters we put into the simulation ($\omega_2=-2.5$
% and $\omega_3=-6$) using fitModel.
%%

est = tapas_fitModel(sim.y,...
sim.u,...
'tapas_hgf_binary_config',...
@@ -121,15 +124,15 @@
% -1). In these cases the two parameters cannot be identified independently and
% one of them needs to be fixed. The other parameter can then be estimated conditional
% on the value of the one that has been fixed.
%%

tapas_fit_plotCorr(est)
%%
% In this case, there is nothing to worry about. Unless their correlation
% is very close to +1 or -1, two parameters are identifiable, meaning that they
% describe distinct aspects of the data.
%
% The posterior parameter correlation matrix is stored in est.optim.Corr,
%%

disp(est.optim.Corr)
%%
% while the posterior parameter covariance matrix is stored in est.optim.Sigma
@@ -139,7 +142,7 @@
% The posterior means of the estimated as well as the fixed parameters can be
% found in est.p_prc for the perceptual model and in est.p_obs for the observation
% model:
%%

disp(est.p_prc)
disp(est.p_obs)
%%
@@ -150,16 +153,16 @@
%% Inferred belief trajectories
% As with the simulated trajectories, we can plot the inferred belief trajectories
% implied by the estimated parameters.
%%

tapas_hgf_binary_plotTraj(est)
%%
% These trajectories can be found in est.traj:
%%

disp(est.traj)
%% Changing the perceptual model
% Next, let's try to fit the same data using a different perceptual model while
% keeping the same response model. We will take the Rescorla-Wagner model _rw_binary_.
%%

est1a = tapas_fitModel(sim.y,...
sim.u,...
'tapas_rw_binary_config',...
@@ -171,38 +174,39 @@
%
% Just as for _hgf_binary_, we can plot posterior correlations and inferred
% trajectories for _rw_binary_.
%%

tapas_fit_plotCorr(est1a)
tapas_rw_binary_plotTraj(est1a)
%% Input on a continuous scale
% Up to now, we've only used binary input - 0 or 1. However, many of the most
% interesting time series are on a continuous scale. As an example, we'll use
% the exchange rate of the US Dollar to the Swiss Franc during much of 2010 and
% 2011.
%%

usdchf = load('example_usdchf.txt');
%%
% As before, we'll first estimate the Bayes optimal parameter values. This
% time, we'll take a 2-level HGF for continuous-scaled inputs.
%%

bopars2 = tapas_fitModel([],...
usdchf,...
'tapas_hgf_config',...
'tapas_bayes_optimal_config',...
'tapas_quasinewton_optim_config');
%%
% And again, let's check the posterior correlation and the trajectories:
%%

tapas_fit_plotCorr(bopars2)
tapas_hgf_plotTraj(bopars2)
%%
% Now, let's simulate an agent and plot the resulting trajectories:
%%

sim2 = tapas_simModel(usdchf,...
'tapas_hgf',...
[1.04 1 0.0001 0.1 0 0 1 -13 -2 1e4],...
'tapas_gaussian_obs',...
0.00002);
0.00002,...
12345);
tapas_hgf_plotTraj(sim2)
%%
% Looking at the volatility (ie, the second) level, we see that there are
@@ -217,12 +221,13 @@
% shows up as another spike in volatility.
%% Adding levels
% Let's see what happens if we add another level:
%%

sim2a = tapas_simModel(usdchf,...
'tapas_hgf',...
[1.04 1 1 0.0001 0.1 0.1 0 0 0 1 1 -13 -2 -2 1e4],...
'tapas_gaussian_obs',...
0.00005);
0.00005,...
12345);
tapas_hgf_plotTraj(sim2a)
%%
% Owing to the presence of the third level, the second level is a lot smoother
@@ -234,7 +239,7 @@
% While the third level is very smooth overall, the two salient events discussed
% above are still visible. Let's see how these events are reflected in the precision
% weighting of the updates at each level:
%%

figure
plot(sim2a.traj.wt)
xlim([1, length(sim2a.traj.wt)])
@@ -252,23 +257,23 @@
%% Parameter recovery
% Now, let's try to recover the parameters we put into the simulation by fitting
% the HGF to our simulated responses:
%%

est2 = tapas_fitModel(sim2.y,...
usdchf,...
'tapas_hgf_config',...
'tapas_gaussian_obs_config',...
'tapas_quasinewton_optim_config');
%%
% Again, we check the posterior correlation and the estimated trajectories:
%%

tapas_fit_plotCorr(est2)
tapas_hgf_plotTraj(est2)
%% Plotting residual diagnostics
% It's often helpful to look at the residuals (ie, the differences between predicted
% and actual responses) of a model. If the residuals show any obvious patterns,
% that's an indication that your model fails to capture aspects of the data that
% should in principle be predictable.
%%

tapas_fit_plotResidualDiagnostics(est2)
%%
% Everything looks fine here - no obvious patterns to be seen.
@@ -283,7 +288,7 @@
%
% $$r^{(k)} =\frac{y^{(k)} - \hat{\mu}_1^{(k)}}{\sqrt{\hat{\mu}_1^{(k)} \left(1-\hat{\mu}_1^{(k)}\right)
% }}$$
%%

tapas_fit_plotResidualDiagnostics(est)
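% As an illustration of the formula above (not part of the original demo,
% and assuming the standard est.traj.muhat field), the standardized
% residuals can also be computed by hand:
muhat1 = est.traj.muhat(:,1);                              % predicted P(y=1)
res    = (est.y - muhat1) ./ sqrt(muhat1 .* (1 - muhat1)); % r^(k) as defined above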
%%
% In the case of our binary response example, we see some patterns in the
@@ -301,12 +306,13 @@
%
% We begin by simulating responses from another fictive agent and estimating
% the parameters behind the simulated responses:
%%

sim2b = tapas_simModel(usdchf,...
'tapas_hgf',...
[1.04 1 0.0001 0.1 0 0 1 -14.5 -2.5 1e4],...
'tapas_gaussian_obs',...
0.00002);
0.00002,...
12345);
tapas_hgf_plotTraj(sim2b)
est2b = tapas_fitModel(sim2b.y,...
usdchf,...
@@ -317,7 +323,7 @@
tapas_hgf_plotTraj(est2b)
%%
% Now we can take the Bayesian parameter average of our two:
%%

bpa = tapas_bayesian_parameter_average(est2, est2b);
tapas_fit_plotCorr(bpa)
tapas_hgf_plotTraj(bpa)
Binary file modified HGF/hgf_demo.mlx
Binary file modified HGF/hgf_demo.pdf
6 changes: 5 additions & 1 deletion HGF/tapas_beta_obs_sim.m
@@ -30,7 +30,11 @@
be = nu - al;

% Initialize random number generator
rng('shuffle');
if isnan(r.c_sim.seed)
rng('shuffle');
else
rng(r.c_sim.seed);
end

% Simulate
y = betarnd(al, be);
6 changes: 5 additions & 1 deletion HGF/tapas_condhalluc_obs2_sim.m
@@ -27,7 +27,11 @@
prob = tapas_sgm(be.*(2.*x-1),1);

% Initialize random number generator
rng('shuffle');
if isnan(r.c_sim.seed)
rng('shuffle');
else
rng(r.c_sim.seed);
end

% Simulate
y = binornd(1, prob);
6 changes: 5 additions & 1 deletion HGF/tapas_condhalluc_obs_sim.m
@@ -28,7 +28,11 @@
prob = tapas_sgm(be.*(2.*x-1),1);

% Initialize random number generator
rng('shuffle');
if isnan(r.c_sim.seed)
rng('shuffle');
else
rng(r.c_sim.seed);
end

% Simulate
y = binornd(1, prob);
6 changes: 5 additions & 1 deletion HGF/tapas_gaussian_obs_offset_sim.m
@@ -20,7 +20,11 @@
n = length(yhat);

% Initialize random number generator
rng('shuffle');
if isnan(r.c_sim.seed)
rng('shuffle');
else
rng(r.c_sim.seed);
end

% Simulate
y = yhat +sqrt(ze)*randn(n, 1);
