README.md (+1)
@@ -41,6 +41,7 @@ You can also clone the repository and open the notebooks in [docs/src](https://g
 * _(recommended)_ FFTW.jl built with [`JULIA_FFTW_PROVIDER=MKL`](https://juliamath.github.io/FFTW.jl/stable/#Installation-1) for faster CPU FFTs
 * _(recommended)_ Python 3 + matplotlib (used for plotting)
 * _(recommended)_ [pycamb](https://github.com/cmbant/CAMB) to generate $C_\ell$'s
+* _(recommended)_ [JuliaMono](https://github.com/cormullion/juliamono/releases) font to ensure characters like `f̃, ϕ, ∇, ℓ`, etc. are rendered correctly
 * _(optional)_ [healpy](https://github.com/healpy/healpy) for experimental curved sky support
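
For reference, the MKL-backed FFTW recommendation above amounts to selecting the provider before (re)building the package. A minimal sketch, assuming the environment-variable mechanism described in the linked FFTW.jl installation docs (newer FFTW.jl releases may use a preferences-based mechanism instead):

```julia
# Select MKL as the FFT provider and rebuild FFTW.jl so the setting takes effect.
# The exact mechanism depends on your FFTW.jl version; see the link above.
using Pkg
ENV["JULIA_FFTW_PROVIDER"] = "MKL"
Pkg.build("FFTW")
```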
docs/src/01_lense_a_map.md (+4 −2)
@@ -8,7 +8,7 @@ jupyter:
       format_version: '1.2'
       jupytext_version: 1.4.1
   kernelspec:
-    display_name: Julia 1.5.1
+    display_name: Julia 1.5.3
     language: julia
     name: julia-1.5
 ---
@@ -113,8 +113,10 @@ using BenchmarkTools
 @benchmark cache(LenseFlow(ϕ),f)
 ```
 
-Once cached, it's very fast and memory non-intensive to repeatedly apply the operator:
+Once cached, it's faster and less memory intensive to repeatedly apply the operator:
 
 ```julia
 @benchmark Lϕ * f setup=(Lϕ=cache(LenseFlow(ϕ),f))
 ```
+
+Note that this documentation is generated on limited-performance cloud servers. Actual benchmarks are likely much faster locally or on a cluster, and faster still on [GPU](../06_gpu/).
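
The pattern benchmarked above is "cache once, apply many times". A minimal sketch of that workflow, assuming the `load_sim`, `LenseFlow`, and `cache` API used in these docs (the `load_sim` keywords mirror the truncated call further below and may not be the complete set needed):

```julia
using CMBLensing

# simulate an unlensed field f and lensing potential ϕ (illustrative settings)
@unpack f, ϕ = load_sim(θpix=2, Nside=256, T=Float64)

# pay the caching cost once ...
Lϕ = cache(LenseFlow(ϕ), f)

# ... then repeated applications of the cached operator are comparatively cheap
f̃       = Lϕ * f
f̃_again = Lϕ * f
```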
 * $\mathbb{B}$ is an instrumental transfer function or "beam"
 * $\mathbb{M}$ is a user-chosen mask
-* $\mathbb{P}$ is a pixelization operation which allows one to estimate $f$ on a higher resolution than the data
 * $n$ is the instrumental noise.
 
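Read together, the operators listed above describe how the observed data relate to the lensed sky, roughly $d = \mathbb{M}\,\mathbb{B}\,\tilde f + n$ where $\tilde f$ is the lensed field (with the pixelization operator $\mathbb{P}$ being dropped in this change). A hedged sketch in code; `ds.M`, `ds.B`, and `ds.d` are assumed names for the mask, beam, and data stored in the `DataSet` returned by `load_sim`:

```julia
using CMBLensing

# simulated lensed field f̃ plus the DataSet holding the operators and data
@unpack f̃, ds = load_sim(θpix=2, Nside=256, T=Float64)

d_model = ds.M * (ds.B * f̃)   # mask and beam applied to the lensed sky
n_resid = ds.d - d_model      # residual should resemble (masked) noise n
```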
@@ -93,7 +92,7 @@ To evaluate this posterior, we need the arguments of the probability distributio
 First let's load up some simulated data. The function `load_sim` handles constructing a `DataSet` and is the recommended way to create the various fields and covariances needed. In this case, let's use 1$\mu$K-arcmin noise and a border mask:
 
 ```julia
-@unpack f, f̃, ϕ, ds, L = load_sim(
+@unpack f, f̃, ϕ, ds = load_sim(
     θpix = 2,
     Nside = 256,
     T = Float64,
@@ -142,7 +141,7 @@ For the unlensed and lensed parametrizations, pass `0` and `1` as the first argu
 For example, the following is the same point in parameter space that we evaluated above, just in a different parametrization (any differences to the above value are numerical):
 
 ```julia
--2*lnP(1, L(ϕ)*f, ϕ, ds)
+-2*lnP(1, ds.L(ϕ)*f, ϕ, ds)
 ```
 
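As a quick usage check, the two parametrizations should return the same value up to numerical error; a hedged sketch (variable names are illustrative, and the exact numbers depend on the simulation):

```julia
using CMBLensing

@unpack f, f̃, ϕ, ds = load_sim(θpix=2, Nside=256, T=Float64)

χ²_unl = -2 * lnP(0, f, ϕ, ds)             # unlensed parametrization
χ²_len = -2 * lnP(1, ds.L(ϕ) * f, ϕ, ds)   # lensed parametrization, as in the diff above
# the two values should agree up to numerical error
```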
 We expect minus twice the posterior evaluated at the truth to be distributed like a $\chi^2$ distribution where the degrees of freedom equals the number of pixels in $d$, $f$, and $\phi$ (i.e. in each of the three Gaussian terms in the posterior). Since these maps are 256x256 and $d$ and $f$ have both Q and U maps, this is:
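Counting those degrees of freedom explicitly (a small sketch; the Q/U and map counts come directly from the sentence above):

```julia
# d and f each carry Q and U maps (2 each), ϕ is a single map, all 256×256
dof = 256^2 * (2 + 2 + 1)   # = 327680
```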
Here is the difference in terms of the power spectra. Note the best-fit has high-$\ell$ power suppressed, like a Wiener filter solution (in fact what we're doing here is akin to a non-linear Wiener filter). In the high S/N region ($\ell\lesssim1000$), the difference is approximately equal to the noise, which you can see is almost two orders of magnitude below the signal.
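A hedged sketch of how such a spectrum comparison might be produced; `get_Cℓ` is assumed to be the spectrum helper used elsewhere in these docs, and `f_bestfit` is a hypothetical name for the best-fit (MAP) field discussed above:

```julia
using CMBLensing

# power spectrum of the truth vs. of the (best-fit − truth) difference;
# the difference should sit near the noise level for ℓ ≲ 1000 and be
# suppressed at high ℓ, Wiener-filter-like
Cℓ_signal = get_Cℓ(f)
Cℓ_diff   = get_Cℓ(f_bestfit - f)
```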