Complex numbers #321
Conversation
Codecov Report

Attention:

Additional details and impacted files:

```
@@            Coverage Diff             @@
##           master     #321      +/-   ##
==========================================
- Coverage   95.52%   95.48%   -0.05%
==========================================
  Files          47       47
  Lines        3691     3789      +98
==========================================
+ Hits         3526     3618      +92
- Misses        165      171       +6
```

View full report in Codecov by Sentry.
This is how benchmark results would change (along with a 95% confidence interval in relative change) if c97b1de is merged into master:

This is how benchmark results would change (along with a 95% confidence interval in relative change) if f70d6b9 is merged into master:

This is how benchmark results would change (along with a 95% confidence interval in relative change) if a76290c is merged into master:
Does this handle the automatic re-shaping if I pass something that has …
If I understand correctly, you're asking whether it matters where in the order of dimensions the dimension indexing the real and imaginary parts is? I don't think it should make a difference; array slicing should handle that. E.g., here's an rvar where the second dimension indexes the real/imaginary parts:

``` r
x = rvar(array(
  1:96, dim = c(4,4,2,3),
  dimnames = list(NULL, NULL, c("real", "imag"), NULL)
))
x
#> rvar<4>[4,2,3] mean ± sd:
#> , , 1
#>
#>      real       imag
#> [1,]  2.5 ± 1.3 18.5 ± 1.3
#> [2,]  6.5 ± 1.3 22.5 ± 1.3
#> [3,] 10.5 ± 1.3 26.5 ± 1.3
#> [4,] 14.5 ± 1.3 30.5 ± 1.3
#>
#> , , 2
#>
#>      real       imag
#> [1,] 34.5 ± 1.3 50.5 ± 1.3
#> [2,] 38.5 ± 1.3 54.5 ± 1.3
#> [3,] 42.5 ± 1.3 58.5 ± 1.3
#> [4,] 46.5 ± 1.3 62.5 ± 1.3
#>
#> , , 3
#>
#>      real       imag
#> [1,] 66.5 ± 1.3 82.5 ± 1.3
#> [2,] 70.5 ± 1.3 86.5 ± 1.3
#> [3,] 74.5 ± 1.3 90.5 ± 1.3
#> [4,] 78.5 ± 1.3 94.5 ± 1.3
```

Showing it as a `draws_df`:

``` r
as_draws_df(x)
#> # A draws_df: 4 iterations, 1 chains, and 24 variables
#>   x[1,real,1] x[2,real,1] x[3,real,1] x[4,real,1] x[1,imag,1] x[2,imag,1]
#> 1           1           5           9          13          17          21
#> 2           2           6          10          14          18          22
#> 3           3           7          11          15          19          23
#> 4           4           8          12          16          20          24
#>   x[3,imag,1] x[4,imag,1]
#> 1          25          29
#> 2          26          30
#> 3          27          31
#> 4          28          32
#> # ... with 16 more variables
#> # ... hidden reserved variables {'.chain', '.iteration', '.draw'}
```

We can still just slice the two components and sum them. We could also do:

``` r
x[,"real"] + x[,"imag"] * 1i
#> rvar<4>[4,1,3] mean ± sd:
#> , , 1
#>
#>      real
#> [1,]  2+18i ± 1.8
#> [2,]  6+22i ± 1.8
#> [3,] 10+26i ± 1.8
#> [4,] 14+30i ± 1.8
#>
#> , , 2
#>
#>      real
#> [1,] 34+50i ± 1.8
#> [2,] 38+54i ± 1.8
#> [3,] 42+58i ± 1.8
#> [4,] 46+62i ± 1.8
#>
#> , , 3
#>
#>      real
#> [1,] 66+82i ± 1.8
#> [2,] 70+86i ± 1.8
#> [3,] 74+90i ± 1.8
#> [4,] 78+94i ± 1.8
```

If you don't want the (now-useless) "real" dim, you can drop it:

``` r
drop(x[,"real"] + x[,"imag"] * 1i)
#> rvar<4>[4,3] mean ± sd:
#>      [,1]          [,2]          [,3]
#> [1,]  2+18i ± 1.8  34+50i ± 1.8  66+82i ± 1.8
#> [2,]  6+22i ± 1.8  38+54i ± 1.8  70+86i ± 1.8
#> [3,] 10+26i ± 1.8  42+58i ± 1.8  74+90i ± 1.8
#> [4,] 14+30i ± 1.8  46+62i ± 1.8  78+94i ± 1.8
```

Which also looks right if we again convert to a format like `draws_df`:

``` r
as_draws_df(drop(x[,"real"] + x[,"imag"] * 1i))
#> # A draws_df: 4 iterations, 1 chains, and 12 variables
#>   x[1,1] x[2,1] x[3,1] x[4,1] x[1,2] x[2,2] x[3,2] x[4,2]
#> 1  1+17i  5+21i  9+25i 13+29i 33+49i 37+53i 41+57i 45+61i
#> 2  2+18i  6+22i 10+26i 14+30i 34+50i 38+54i 42+58i 46+62i
#> 3  3+19i  7+23i 11+27i 15+31i 35+51i 39+55i 43+59i 47+63i
#> 4  4+20i  8+24i 12+28i 16+32i 36+52i 40+56i 44+60i 48+64i
#> # ... with 4 more variables
#> # ... hidden reserved variables {'.chain', '.iteration', '.draw'}
```
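As an aside, the slice-and-recombine pattern shown above doesn't depend on rvar specifically; the same idea works on a plain base R array. Here's a small sketch (the array shape and values are made up for illustration, not posterior code):

``` r
# Plain-array sketch of the pattern above: slice out the "real" and
# "imag" components along a named dimension, combine them into a
# complex array, and drop() the now-singleton dimension.
x <- array(
  1:24, dim = c(4, 2, 3),
  dimnames = list(NULL, c("real", "imag"), NULL)
)
z <- drop(x[, "real", , drop = FALSE] + x[, "imag", , drop = FALSE] * 1i)
dim(z)   # 4 3
z[1, 1]  # 1+5i: real part from x[1,"real",1], imaginary from x[1,"imag",1]
```

Because the slicing is done by dimension name, the same two lines work no matter which position the real/imag dimension occupies.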
This is how benchmark results would change (along with a 95% confidence interval in relative change) if 8a371ac is merged into master:
Summary
This is a draft PR to support complex numbers in draws and rvar data types. It closes #319. The main changes, plus some points of conversation:

- Implements `sd()`, `variance()`, and `var()` for complex values. This fixes printing issues, especially with `rvar`s. The implementation of `variance()` follows the definition of the variance of a complex random variable as the sum of the variances of its real and imaginary components, and its `sd` is taken as the square root of this.
- For `summarise_draws()`, the base R approach could make sense. Conservatively, I've let this throw an error, but I would be curious to hear thoughts.
- `quantile()` on complex numbers seems (like `median`) to just treat each component separately, which I think is wrong, but again might be okay for us (in `rvar_quantile` and `[rvar_]quantile2`) if people just want the separate point estimates for the two components. I'm not sure.
- Some functions fail on complex input, e.g. `is_constant()`, which requires `min`/`max`; these are not defined on complex numbers. However, some succeed but I think give incorrect answers: e.g. `rhat()` appears to work, but I think returns incorrect values, as it relies on `rank()` in `z_scale()`, and base R's `rank()` ranks complex numbers in lexicographic order, when in reality it should probably return an error. I'm not sure if there's a good definition of any of these convergence metrics for complex numbers (maybe something like the max rhat of both components?) or if these should all just return errors.

The main motivating example for this, inspired by @WardBrian's comment, does work now:
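As a side note on two of the summary points above, both the variance definition and base R's lexicographic `rank()` behavior can be illustrated with a few lines of plain R. The `complex_var`/`complex_sd` helpers below are illustrative names for the stated definition, not functions from the posterior package:

``` r
# Sketch of the variance definition described above: for a complex
# random variable z, Var(z) = Var(Re z) + Var(Im z), and sd(z) is the
# square root of this.
complex_var <- function(z) var(Re(z)) + var(Im(z))
complex_sd  <- function(z) sqrt(complex_var(z))

z <- c(1 + 2i, 3 + 4i, 5 + 6i)
complex_var(z)  # var(c(1,3,5)) + var(c(2,4,6)) = 4 + 4 = 8
complex_sd(z)   # sqrt(8)

# And the rank() concern: base R ranks complex values
# lexicographically (real part, then imaginary part) rather than
# raising an error, which is why rhat() can succeed yet be suspect.
rank(c(1 + 2i, 1 + 1i, 0 + 9i))  # lexicographic ranks, not an error
```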
Copyright and Licensing
By submitting this pull request, the copyright holder is agreeing to
license the submitted work under the following licenses: