
Add sum_to_zero constraint, free, and check #3099

Merged: 8 commits merged into develop on Aug 7, 2024

Conversation

@WardBrian (Member):

Summary

This adds the functions necessary to implement a sum_to_zero constraint: a vector with sum(x) == 0. The constraining transform appends an element equal to -sum(previous elements), and the freeing transform just returns the first N-1 elements.

Because this transform is linear, there is no Jacobian adjustment.
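
For illustration, a minimal double-only sketch of the transform described above (illustrative names, no autodiff, error checking, or templating; the library functions in this PR are more general):

```cpp
#include <Eigen/Dense>

// Constrain: append an element equal to -sum(previous elements)
// so the result sums to zero.
Eigen::VectorXd sum_to_zero_constrain_sketch(const Eigen::VectorXd& y) {
  Eigen::VectorXd x(y.size() + 1);
  x.head(y.size()) = y;
  x(y.size()) = -y.sum();
  return x;
}

// Free: drop the last (determined) element and keep the first N - 1.
Eigen::VectorXd sum_to_zero_free_sketch(const Eigen::VectorXd& x) {
  return x.head(x.size() - 1);
}
```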

Tests

Added tests in the err and constraint folders, based on the existing tests for simplex.

Side Effects

None

Release notes

Added sum_to_zero_constrain, sum_to_zero_free, and check_sum_to_zero.

Checklist

  • Copyright holder: Simons Foundation

    The copyright holder is typically you or your assignee, such as a university or company. By submitting this pull request, the copyright holder is agreeing to license the submitted work under the following licenses:
    - Code: BSD 3-clause (https://opensource.org/licenses/BSD-3-Clause)
    - Documentation: CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)

  • the basic tests are passing

    • unit tests pass (to run, use: ./runTests.py test/unit)
    • header checks pass, (make test-headers)
    • dependencies checks pass, (make test-math-dependencies)
    • docs build, (make doxygen)
    • code passes the built-in C++ standards checks (make cpplint)
  • the code is written in idiomatic C++ and changes are documented in the doxygen

  • the new changes are tested

@spinkney (Collaborator):

@WardBrian would it perhaps be better if we implemented the transform from https://discourse.mc-stan.org/t/test-soft-vs-hard-sum-to-zero-constrain-choosing-the-right-prior-for-soft-constrain/3884/31?u=spinkney? From the post it's unclear whether this is better than what is in this PR, but it does give the math to get the correct uniform marginals.

@WardBrian (Member, Author):

@spinkney -- this just implements the transform @bob-carpenter gave me; I'm not qualified to assess it versus an alternative.

@stan-buildbot (Contributor):


| Name | Old Result | New Result | Ratio | Performance change (1 - new/old) |
| --- | --- | --- | --- | --- |
| arma/arma.stan | 0.38 | 0.32 | 1.21 | 17.07% faster |
| low_dim_corr_gauss/low_dim_corr_gauss.stan | 0.01 | 0.01 | 1.13 | 11.24% faster |
| gp_regr/gen_gp_data.stan | 0.02 | 0.02 | 1.04 | 3.58% faster |
| gp_regr/gp_regr.stan | 0.13 | 0.09 | 1.41 | 29.19% faster |
| sir/sir.stan | 69.24 | 67.7 | 1.02 | 2.22% faster |
| irt_2pl/irt_2pl.stan | 4.23 | 4.39 | 0.96 | -3.79% slower |
| eight_schools/eight_schools.stan | 0.06 | 0.06 | 0.98 | -2.07% slower |
| pkpd/sim_one_comp_mm_elim_abs.stan | 0.25 | 0.25 | 0.99 | -0.75% slower |
| pkpd/one_comp_mm_elim_abs.stan | 19.49 | 19.89 | 0.98 | -2.05% slower |
| garch/garch.stan | 0.43 | 0.41 | 1.06 | 5.78% faster |
| low_dim_gauss_mix/low_dim_gauss_mix.stan | 2.73 | 2.6 | 1.05 | 4.73% faster |
| arK/arK.stan | 1.8 | 1.75 | 1.03 | 2.77% faster |
| gp_pois_regr/gp_pois_regr.stan | 2.86 | 2.76 | 1.03 | 3.34% faster |
| low_dim_gauss_mix_collapse/low_dim_gauss_mix_collapse.stan | 8.79 | 8.4 | 1.05 | 4.42% faster |
| performance.compilation | 184.75 | 181.07 | 1.02 | 1.99% faster |

Mean result: 1.0640541017253144

Jenkins Console Log
Blue Ocean
Commit hash: 2cd95be39b0482d0fedb7fe9a1e8c77b7084592a


Machine information:
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.3 LTS
Release: 20.04
Codename: focal

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 80
On-line CPU(s) list: 0-79
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6148 CPU @ 2.40GHz
Stepping: 4
CPU MHz: 2400.000
CPU max MHz: 3700.0000
CPU min MHz: 1000.0000
BogoMIPS: 4800.00
Virtualization: VT-x
L1d cache: 1.3 MiB
L1i cache: 1.3 MiB
L2 cache: 40 MiB
L3 cache: 55 MiB
NUMA node0 CPU(s): 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78
NUMA node1 CPU(s): 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59,61,63,65,67,69,71,73,75,77,79
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; IBRS, IBPB conditional, STIBP conditional, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT vulnerable
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke md_clear flush_l1d arch_capabilities

G++:
g++ (Ubuntu 9.4.0-1ubuntu1~20.04) 9.4.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Clang:
clang version 10.0.0-4ubuntu1
Target: x86_64-pc-linux-gnu
Thread model: posix
InstalledDir: /usr/bin

@SteveBronder (Collaborator) left a comment:

A few small comments, but generally everything looks good.

Comment on lines 27 to 29
inline plain_type_t<Vec> sum_to_zero_constrain(const Vec& y) {
const auto Km1 = y.size();
plain_type_t<Vec> x(Km1 + 1);

Check for size 0 here

void check_sum_to_zero(const char* function, const char* name, const T& theta) {
using std::fabs;
auto&& theta_ref = to_ref(value_of_rec(theta));
if (!(fabs(theta_ref.sum()) <= CONSTRAINT_TOLERANCE)) {

Suggested change:
- if (!(fabs(theta_ref.sum()) <= CONSTRAINT_TOLERANCE)) {
+ if (unlikely(!(fabs(theta_ref.sum()) <= CONSTRAINT_TOLERANCE))) {

arena_y.adj().array() -= arena_x.adj_op()(N);
arena_y.adj() += arena_x.adj_op().head(N);
});
return ret_type(arena_x);

Suggested change:
- return ret_type(arena_x);
+ return arena_x;


const auto N = y.size();
if (unlikely(N == 0)) {
return ret_type(Eigen::VectorXd{{0}});

Suggested change:
- return ret_type(Eigen::VectorXd{{0}});
+ return arena_t<ret_type>(Eigen::VectorXd{{0}});

* @return Zero-sum vector of dimensionality K.
*/
template <typename T, require_rev_col_vector_t<T>* = nullptr>
auto sum_to_zero_constrain(const T& y, scalar_type_t<T>& lp) {

Suggested change:
- auto sum_to_zero_constrain(const T& y, scalar_type_t<T>& lp) {
+ inline auto sum_to_zero_constrain(const T& y, scalar_type_t<T>& lp) {

@WardBrian (Member, Author):

@SteveBronder all addressed. I'd like to wait on the mathematical discussion above before actually merging.

@bob-carpenter (Contributor):

I commented on the stanc3 issue and will repeat myself here: using the QR or SVD-based decompositions breaks our current abstraction that just treats the constraining and unconstraining functions as standalones that are not data-dependent. It wouldn't be absolutely impossible to shoehorn something with data into our current framework, but it'd be a lot of work.

@spinkney (Collaborator):

> I commented on the stanc3 issue and will repeat myself here: using the QR or SVD-based decompositions breaks our current abstraction that just treats the constraining and unconstraining functions as standalones that are not data-dependent. It wouldn't be absolutely impossible to shoehorn something with data into our current framework, but it'd be a lot of work.

We don't need to have a data input. The only thing that needs to be known is the size of the vector, which we already get. This is similar to what is done in the ilr transform. In fact, that transform is nearly what we want since a log of a simplex sums to zero. The issue is that this is on the positive orthant. I believe a simple modification of the ilr simplex could give us the sum-to-zero vector we want.

Is there an issue with making a linspaced_vector within the transform?

@WardBrian (Member, Author):

Looking at the forum thread again, the thing being computed is relatively cheap. The memory overhead (2*N for autodiff compared to this method) would probably be the bigger concern with such a scheme.

The specific transform used is technically an implementation detail, so there's nothing holding us to one choice going forward if we want to proceed with the feature using this transform for now.

@spinkney (Collaborator) commented Jul 31, 2024:

I believe this is what I'm suggesting. It's a bit concerning that summing all the values isn't exactly 0, but I believe that's a finite-precision thing.

functions {
  vector inv_ilr_sum_to_zero_constrain_lp(vector y) {
    int N = rows(y) + 1;
    vector[N - 1] ns = linspaced_vector(N - 1, 1, N - 1);
    vector[N - 1] w = y ./ sqrt(ns .* (ns + 1));
    vector[N] z = append_row(reverse(cumulative_sum(reverse(w))), 0) - append_row(0, ns .* w);
    return z;
  }
}
data {
  int<lower=0> N;
}
parameters {
  vector[N - 1] y;
}
transformed parameters {
   vector[N] x = inv_ilr_sum_to_zero_constrain_lp(y);
}
model {
  x ~ normal(0, 10);
}
generated quantities {
  real sum_x = sum(x);
  vector[N] alpha = softmax(x);
}
  variable     mean   median    sd   mad    q5   q95  rhat ess_bulk ess_tail
  <chr>       <dbl>    <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>    <dbl>    <dbl>
1 y[1]     -0.0819  -0.0453  10.0  10.1  -16.5  16.3  1.00   59376.   30953.
2 y[2]     -0.00142 -0.0227  10.0   9.95 -16.4  16.5  1.00   60247.   29833.
3 y[3]      0.0466   0.0560   9.94  9.90 -16.3  16.4  1.00   57789.   29357.
4 y[4]      0.0276   0.0259   9.95  9.89 -16.3  16.4  1.00   58617.   30486.
5 y[5]     -0.0619  -0.0333  10.0   9.98 -16.5  16.4  1.00   59373.   30665.
6 y[6]      0.0395  -0.00871  9.98  9.95 -16.4  16.5  1.00   61190.   31503.
7 y[7]     -0.0313   0.0543   9.95  9.97 -16.4  16.4  1.00   57335.   30404.

1 x[1]     -0.0482 -0.0709   9.37  9.34 -15.4  15.3  1.00   56797.   29396.
2 x[2]      0.0676  0.0499   9.33  9.32 -15.3  15.4  1.00   61234.   32163.
3 x[3]      0.0114  0.0165   9.36  9.42 -15.4  15.5  1.00   60462.   31845.
4 x[4]     -0.0436 -0.0135   9.30  9.23 -15.4  15.2  1.00   58376.   30831.
5 x[5]     -0.0340 -0.0618   9.31  9.31 -15.4  15.3  1.00   58947.   29524.
6 x[6]      0.0584  0.0153   9.38  9.33 -15.3  15.5  1.00   59096.   31725.
7 x[7]     -0.0408  0.00275  9.34  9.30 -15.4  15.2  1.00   60257.   31300.
8 x[8]      0.0293 -0.0508   9.31  9.32 -15.3  15.4  1.00   57335.   30404.

1 sum_x    -9.94e-18      0 2.67e-15 2.63e-15 -3.55e-15 3.55e-15  1.00   39253.   36969.

@stan-buildbot (Contributor):


| Name | Old Result | New Result | Ratio | Performance change (1 - new/old) |
| --- | --- | --- | --- | --- |
| arma/arma.stan | 0.35 | 0.34 | 1.03 | 2.54% faster |
| low_dim_corr_gauss/low_dim_corr_gauss.stan | 0.01 | 0.01 | 0.94 | -6.05% slower |
| gp_regr/gen_gp_data.stan | 0.02 | 0.02 | 1.12 | 10.68% faster |
| gp_regr/gp_regr.stan | 0.1 | 0.1 | 1.07 | 6.85% faster |
| sir/sir.stan | 69.48 | 70.37 | 0.99 | -1.28% slower |
| irt_2pl/irt_2pl.stan | 4.12 | 4.3 | 0.96 | -4.56% slower |
| eight_schools/eight_schools.stan | 0.06 | 0.06 | 1.07 | 6.39% faster |
| pkpd/sim_one_comp_mm_elim_abs.stan | 0.25 | 0.24 | 1.06 | 5.7% faster |
| pkpd/one_comp_mm_elim_abs.stan | 19.56 | 18.78 | 1.04 | 4.02% faster |
| garch/garch.stan | 0.44 | 0.41 | 1.06 | 5.65% faster |
| low_dim_gauss_mix/low_dim_gauss_mix.stan | 2.77 | 2.6 | 1.07 | 6.11% faster |
| arK/arK.stan | 1.78 | 1.71 | 1.05 | 4.34% faster |
| gp_pois_regr/gp_pois_regr.stan | 2.91 | 2.72 | 1.07 | 6.45% faster |
| low_dim_gauss_mix_collapse/low_dim_gauss_mix_collapse.stan | 8.87 | 8.44 | 1.05 | 4.84% faster |
| performance.compilation | 183.93 | 188.14 | 0.98 | -2.29% slower |

Mean result: 1.0362876090048088

Jenkins Console Log
Blue Ocean
Commit hash: 2cd95be39b0482d0fedb7fe9a1e8c77b7084592a


(Machine, G++, and Clang information identical to the previous benchmark report.)

@WardBrian (Member, Author):

@spinkney when freeing, we only assume a tolerance of 1e-8, so being around 1e-18 is definitely "close enough" to zero if it's just floating-point error.

Perhaps a naive question, but shouldn't the xs have standard deviation 10, not the ys? The original forum post had a correction (similar to the unit vector, I think).

@bob-carpenter (Contributor):

Thanks so much, @spinkney.

> It's a bit concerning that summing all the values isn't exactly 0.

No worries---you're right at double machine precision there: 1e-18.

Brian was asking about a Jacobian adjustment, but we don't need one here---this is a linear transform on this scale (the ns are constant and cumulative_sum, rearrangement, and subtraction are linear).

We can write out the constraining transform as a function in the math library in C++ so that we can write custom autodiff for it to make it efficient. In C++, it can be written without allocating the linearly spaced vector---it'll be an expression template or loop.
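
For illustration, a double-only sketch of such a loop, mirroring @spinkney's Stan function above while keeping a running suffix sum instead of materializing the linearly spaced vector (the function name, the N = number-of-free-elements convention, and the absence of autodiff and checks are assumptions here, not the PR's implementation):

```cpp
#include <Eigen/Dense>
#include <cmath>

// y holds N unconstrained values; the result has N + 1 entries summing to zero.
Eigen::VectorXd ilr_sum_to_zero_constrain_sketch(const Eigen::VectorXd& y) {
  const int N = y.size();
  Eigen::VectorXd z = Eigen::VectorXd::Zero(N + 1);
  double sum_w = 0;  // running suffix sum w[i] + ... + w[N]
  for (int i = N; i >= 1; --i) {
    double n = i;
    double w = y(i - 1) / std::sqrt(n * (n + 1));
    sum_w += w;
    z(i - 1) += sum_w;  // contributes u[i]
    z(i) -= n * w;      // contributes -v[i] to the next entry
  }
  return z;
}
```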

> since a log of a simplex sums to zero

I initially read that the wrong way. We have log(sum(simplex)) = 0, but sum(log(simplex)) < 0 if the simplex is more than zero-dimensional (i.e., has more than one element).

@spinkney (Collaborator) commented Aug 1, 2024:

> Perhaps a naive question, but shouldn't the xs have standard deviation 10, not the ys? The original forum post had a correction (similar to the unit vector, I think).

Yes! There is one potential issue: in the forum post they put the prior on the unconstrained values. You're right that we can mimic this as in the unit_vector case.

functions {
  vector inv_ilr_sum_to_zero_constrain_lp(vector y) {
    int N = rows(y) + 1;
    vector[N - 1] ns = linspaced_vector(N - 1, 1, N - 1);
    vector[N - 1] w = y .* inv_sqrt(ns .* (ns + 1));
    vector[N] z = append_row(reverse(cumulative_sum(reverse(w))), 0) - append_row(0, ns .* w);

    target += normal_lpdf(y | 0, inv_sqrt(1 - inv(N)));

    return z;
  }
}
data {
  int<lower=0> N;
}
parameters {
  vector[N - 1] y;
  real z;
}
transformed parameters {
   vector[N] x = inv_ilr_sum_to_zero_constrain_lp(y);
}
model {
  z ~ std_normal();
}
generated quantities {
  real sum_x = sum(x);
}

We see that the marginals of x match those of a standard normal (z is included for reference).

  variable        mean     median    sd   mad    q5   q95  rhat ess_bulk ess_tail
   <chr>          <dbl>      <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>    <dbl>    <dbl>
 1 x[1]      0.00214     0.00123   0.998 0.997 -1.64  1.64  1.00  188253.   57424.
 2 x[2]     -0.00227    -0.0000932 1.00  1.00  -1.65  1.64  1.00  193341.   61674.
 3 x[3]     -0.000257   -0.00233   0.994 0.992 -1.63  1.64  1.00  194002.   59248.
 4 x[4]      0.000861    0.00334   1.01  1.01  -1.65  1.65  1.00  198643.   56607.
 5 x[5]      0.00271     0.00272   0.994 0.996 -1.63  1.63  1.00  182120.   59134.
 6 x[6]      0.00328     0.00399   1.00  1.00  -1.65  1.66  1.00  182355.   58314.
 7 x[7]     -0.00254    -0.00450   1.00  1.00  -1.64  1.65  1.00  203890.   60267.
 8 x[8]      0.00000966  0.00172   1.00  1.00  -1.65  1.65  1.00  198791.   58403.
 9 x[9]     -0.000147    0.00395   0.999 0.998 -1.65  1.65  1.00  193653.   60290.
10 x[10]     0.000846    0.00319   0.996 0.995 -1.64  1.65  1.00  195193.   57673.
11 x[11]    -0.000333    0.000160  0.997 0.993 -1.63  1.64  1.00  196003.   60003.
12 x[12]     0.00346     0.00541   1.00  1.01  -1.65  1.65  1.00  193099.   58743.
13 x[13]    -0.00376    -0.00444   0.999 0.999 -1.66  1.63  1.00  199063.   61333.
14 x[14]    -0.00325    -0.0000660 1.00  1.01  -1.66  1.65  1.00  197189.   56415.
15 x[15]    -0.000674   -0.00130   0.999 0.994 -1.64  1.64  1.00  198973.   54759.
16 x[16]    -0.0000623  -0.00202   1.01  1.02  -1.66  1.66  1.00  184838.   59584.
17 z         0.00259     0.00135   1.00  1.01  -1.64  1.65  1.00  188801.   59141.

The issue with the above is that a standard normal is fairly constrained. I think it would be better if we did something like a normal(0, 10), and then the user can put a more restrictive prior on x. However, x will not be marginally distributed exactly as whatever the user puts, because we would have two priors: the normal(0, 10) built into the transform and the one the user specifies.

We can make this more flexible by using the multiplier keyword to widen or narrow the variance of the normal. First we "back out" the inv_sqrt(1 - inv(N)) standard deviation. Because this is a normal distribution with mean 0,

$$ \log f(x \mid 0, \sigma) = -\frac{1}{2} \left( \log 2 + \log \pi + 2 \log \sigma \right) - \frac{x^2}{2 \sigma^2} $$

where $\sigma = \frac{1}{\sqrt{1 - \frac{1}{N}}}$. Since $\sigma$ is just a constant, we can drop the normalizing constant and focus on the second term:

$$ -\frac{x^2}{2 \sigma^2} = -\frac{1}{2} x^2 \left(1 - \frac{1}{N}\right) = -\frac{1}{2} x^2 \, \frac{N - 1}{N}. $$

Since this is just a normal distribution, the multiplier divides the expression above by the variance of the normal that one wants, i.e.,

functions {
  vector inv_ilr_sum_to_zero_constrain_lp(vector y, real val) {
    int N = rows(y) + 1;
    vector[N - 1] ns = linspaced_vector(N - 1, 1, N - 1);
    vector[N - 1] w = y .* inv_sqrt(ns .* (ns + 1));
    vector[N] z = append_row(reverse(cumulative_sum(reverse(w))), 0) - append_row(0, ns .* w);

    // elementwise -0.5 * z^2 * (N - 1) / N / val^2, summed via dot_self
    target += -0.5 * dot_self(z) * (N - 1) * inv(N) / val^2;

    return z;
  }
}
data {
  int<lower=0> N;
  real<lower=0> val;
}
parameters {
  vector[N - 1] y;
  real z;
}
transformed parameters {
   vector[N] x = inv_ilr_sum_to_zero_constrain_lp(y, val);
}
model {
//  x ~ normal(0, val);
  z ~ normal(0, val);
}
generated quantities {
  real sum_x = sum(x);
}

where val is the multiplier value. I believe this should also work with a vector multiplier, so that one could put a different standard deviation on each element of the zero-sum vector.
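
Concretely, writing $c$ for the val multiplier (a symbol introduced here only for exposition), scaling the standard deviation from $\sigma$ to $c\,\sigma$ changes the density term to

$$ -\frac{x^2}{2 (c\,\sigma)^2} = -\frac{1}{2}\, x^2\, \frac{N - 1}{N}\, \frac{1}{c^2}, $$

which is what the target += line in the function above accumulates over the elements of z.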

For example, with N = 16 and a multiplier of val = 4, I get:

   variable      mean    median    sd   mad    q5   q95  rhat ess_bulk ess_tail
   <chr>        <dbl>     <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>    <dbl>    <dbl>
 1 x[1]     -0.00615  -0.0118    4.01  4.00 -6.59  6.60  1.00  505206.  148873.
 2 x[2]      0.00602   0.0153    3.99  3.99 -6.55  6.55  1.00  493809.  150968.
 3 x[3]     -0.00767  -0.0136    4.01  4.01 -6.59  6.59  1.00  489828.  145973.
 4 x[4]      0.00270   0.0117    4.00  4.01 -6.58  6.59  1.00  495086.  146852.
 5 x[5]      0.00980   0.00630   4.00  3.98 -6.58  6.58  1.00  471650.  148247.
 6 x[6]      0.00886   0.00170   4.00  4.00 -6.57  6.58  1.00  482777.  147873.
 7 x[7]     -0.000581 -0.00169   4.00  4.00 -6.59  6.58  1.00  496548.  149560.
 8 x[8]     -0.00186  -0.00476   3.99  4.00 -6.54  6.56  1.00  485656.  147287.
 9 x[9]     -0.00414  -0.00847   4.00  3.99 -6.58  6.57  1.00  493152.  146167.
10 x[10]    -0.00545  -0.0167    3.98  3.97 -6.55  6.57  1.00  490037.  146549.
11 x[11]     0.00150  -0.00995   4.01  4.01 -6.57  6.60  1.00  469046.  145016.
12 x[12]     0.00110  -0.0101    4.00  4.00 -6.58  6.56  1.00  510886.  147406.
13 x[13]    -0.00696  -0.00231   4.00  3.99 -6.60  6.60  1.00  484046.  148396.
14 x[14]    -0.000128 -0.00496   4.00  4.01 -6.57  6.61  1.00  488574.  150718.
15 x[15]    -0.00184   0.000714  4.01  4.01 -6.62  6.62  1.00  493949.  149995.
16 x[16]     0.00480   0.0135    3.99  3.98 -6.57  6.58  1.00  486966.  146357.
17 z        -0.000141  0.00174   4.00  4.00 -6.58  6.57  1.00  471868.  144767.

@bob-carpenter (Contributor):

I find @spinkney's constraining transform easier to understand with N = rows(y) than N = rows(y) + 1. I also unfolded the code to define some intermediates to work through values.

vector inv_ilr_sum_to_zero_constrain_lp(vector y) {
  int N = rows(y);
  vector[N] ns = linspaced_vector(N, 1, N); 
  vector[N] w = y ./ sqrt(ns .* (ns + 1)); 
  vector[N] u = reverse(cumulative_sum(reverse(w)));
  vector[N] v = ns .* w;
  vector[N + 1] z = append_row(u, 0) - append_row(0, v);
  return z; 
}

Then we need to derive the unconstraining transform from z[1:N+1] to y[1:N]. From the above code, we can derive the following relations.

  1. ns[n] = n
  2. w[n] = y[n] / sqrt(n * (n + 1))
  3. u[n] = w[n] + w[n + 1] + ... + w[N]
  4. v[n] = n * y[n] / sqrt(n * (n + 1)) = y[n] * sqrt(n / (n + 1.0))
  5. z[1] = u[1];   z[N + 1] = v[N];   z[n] = u[n] + v[n - 1] for 2 <= n <= N

Going in the reverse direction, we know

A. y[n] = w[n] * sqrt((n + 1.0) / n) and
B. w[n] = v[n] * n, thus
C. y[n] = v[n] * sqrt(n * (n + 1))
D. v[n - 1] = z[n] - u[n]

Starting with n = N, we have the base case

v[N] = z[N + 1]
y[N] = v[N] * sqrt(n * (n + 1))

and thus

y[N] = z[N + 1] * sqrt(n * (n + 1))

For the inductive case,

v[n - 1] = z[n] - u[n]

where u[n] = w[n] + ... + w[N] and w[n] = v[n] * n, so that

u[n] = v[n] * n + ... + v[N] * N

and hence

v[n-1] = z[n] - (v[n] * n + ... + v[N] * N)
y[n - 1] = v[n - 1] * sqrt(n * (n + 1))

Whew. That's still not code, though---the code will have to efficiently keep the running v[n] * n totals.

@WardBrian (Member, Author):

@bob-carpenter I think there's either a mistake in your code or I'm missing something in translation.

> Starting with n = N, we have the base case
>
> v[N] = z[N + 1]
> y[N] = v[N] * sqrt(n * (n + 1))

The final element seems to be too big by a factor of -N, and the rest are then incorrect as a result.

@spinkney (Collaborator) commented Aug 2, 2024:

You need $Y_n = -Z_{n + 1} \sqrt{n(n+1)} / n$
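
For reference, a double-only sketch of the inverse (free) direction that keeps the running suffix sum mentioned above and uses @spinkney's correction for the last element (illustrative name, no autodiff or checks; this is an assumption-laden sketch, not the PR's implementation):

```cpp
#include <Eigen/Dense>
#include <cmath>

// z has N + 1 entries summing to zero; recover the N unconstrained values y.
Eigen::VectorXd ilr_sum_to_zero_free_sketch(const Eigen::VectorXd& z) {
  const int N = static_cast<int>(z.size()) - 1;
  Eigen::VectorXd y(N);
  if (N == 0) {
    return y;
  }
  // Last element, per the correction above: y[N] = -z[N + 1] * sqrt(N (N + 1)) / N.
  double w = -z(N) / N;
  y(N - 1) = w * std::sqrt(N * (N + 1.0));
  double suffix = w;  // running sum w[n] + ... + w[N]
  for (int n = N; n >= 2; --n) {
    double w_prev = (suffix - z(n - 1)) / (n - 1);  // z(n - 1) is z[n] in 1-based indexing
    y(n - 2) = w_prev * std::sqrt((n - 1.0) * n);
    suffix += w_prev;
  }
  return y;
}
```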

@WardBrian marked this pull request as draft on August 5, 2024.
@WardBrian merged commit cf9b012 into develop on Aug 7, 2024.
8 checks passed.