Evaluation of etuplized RandomVariables fails for some distributions #1002
Comments
Is your […]?
Yes, if you print `norm_et`:

```python
# e(e(<class '__main__.MakeRVNodeOp'>, e(<class 'aesara.tensor.random.basic.NormalRV'>, normal, 0, (0, 0), floatX, False)), RandomGeneratorSharedVariable(<Generator(PCG64) at 0x7F65BBBE9A80>), TensorConstant{[]}, TensorConstant{11}, TensorConstant{1}, TensorConstant{1})
```

i.e. […]. My understanding is that while we dispatch […], the only way is through dispatching […].
The following works:

```python
from dataclasses import dataclass

from aesara.graph import Op, Variable
import aesara.tensor as at
from aesara.tensor.random.basic import RandomVariable
from cons.core import ConsError, _car, _cdr
from etuples import etuplize, etuple


@dataclass
class MakeRVNodeOp:
    op: Op

    def __call__(self, *args):
        return self.op.make_node(*args)


@etuplize.register(RandomVariable)
def etuplize_random(*args, **kwargs):
    return etuple(MakeRVNodeOp, etuplize.funcs[(object,)](*args, **kwargs))


def car_RVariable(x):
    if x.owner:
        return MakeRVNodeOp(x.owner.op)
    else:
        raise ConsError("Not a cons pair.")


_car.add((Variable,), car_RVariable)


def car_MakeNodeOp(x):
    return type(x)


_car.add((MakeRVNodeOp,), car_MakeNodeOp)


def cdr_MakeNodeOp(x):
    x_e = etuple(_car(x), x.op, evaled_obj=x)
    return x_e[1:]


_cdr.add((MakeRVNodeOp,), cdr_MakeNodeOp)


srng = at.random.RandomStream(0)
Y_rv = srng.normal(1, 1)

Y_et = etuplize(Y_rv)
print(Y_et)
# e(e(<class '__main__.MakeRVNodeOp'>, e(<class 'aesara.tensor.random.basic.NormalRV'>, normal, 0, (0, 0), floatX, False)), RandomGeneratorSharedVariable(<Generator(PCG64) at 0x7F07F31FD2A0>), TensorConstant{[]}, TensorConstant{11}, TensorConstant{1}, TensorConstant{1})

norm_et = etuple(etuplize(at.random.normal), *Y_et[1:])
print(norm_et)
# e(e(<class '__main__.MakeRVNodeOp'>, e(<class 'aesara.tensor.random.basic.NormalRV'>, normal, 0, (0, 0), floatX, False)), RandomGeneratorSharedVariable(<Generator(PCG64) at 0x7F07F31FD2A0>), TensorConstant{[]}, TensorConstant{11}, TensorConstant{1}, TensorConstant{1})
```

which does feel a bit hacky; ideally dispatching […]
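A small follow-up sketch: with this dispatch in place the rebuilt tuple can be evaluated, since `MakeRVNodeOp` routes the call through `make_node`:

```python
# Re-evaluation now goes through `make_node` rather than
# `RandomVariable.__call__`, so it no longer errors
norm_et.evaled_obj
```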
Finally, just making sure this solves the motivating issue (aesara-devs/aemcmc#29), by running:

```python
from kanren import eq, lall, run, var


def normaleo(in_expr, out_expr):
    mu_lv, std_lv = var(), var()
    rng_lv, size_lv, dtype_lv = var(), var(), var()
    norm_in_lv = etuple(etuplize(at.random.normal), rng_lv, size_lv, dtype_lv, mu_lv, std_lv)
    norm_out_lv = etuple(etuplize(at.random.normal), rng_lv, size_lv, dtype_lv, mu_lv, std_lv)
    return lall(eq(in_expr, norm_in_lv), eq(out_expr, norm_out_lv))


srng = at.random.RandomStream(0)
Y_rv = srng.normal(1, 1)
Y_et = etuplize(Y_rv)

y_lv = var()
(y_expr,) = run(1, y_lv, normaleo(Y_et, y_lv))
y_expr.evaled_obj
# normal_rv{0, (0, 0), floatX, False}(RandomGeneratorSharedVariable(<Generator(PCG64) at 0x7FC8E9B784A0>), TensorConstant{[]}, TensorConstant{11}, TensorConstant{1}, TensorConstant{1})

y_lv_rev = var()
(y_expr_rev,) = run(1, y_lv_rev, normaleo(y_lv_rev, Y_et))
y_expr_rev.evaled_obj
# normal_rv{0, (0, 0), floatX, False}(RandomGeneratorSharedVariable(<Generator(PCG64) at 0x7FC8E9B784A0>), TensorConstant{[]}, TensorConstant{11}, TensorConstant{1}, TensorConstant{1})
```
`eval_if_etuple` does not work when naively evaluating etuplized random variables. Let us consider the following model:
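A minimal sketch of such a model (the names `srng` and `Y_rv` follow the snippets in the comments):

```python
import aesara.tensor as at

# A normal random variable drawn from a seeded random stream
srng = at.random.RandomStream(0)
Y_rv = srng.normal(1, 1)
```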
The original object is stored in the `ExpressionTuple` instance, so the following works:
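A sketch continuing from the model above, assuming the `cons`/`etuples` dispatches for Aesara graphs are registered as in the comments:

```python
from etuples import etuplize

# `etuplize` stores the original variable on the resulting ExpressionTuple,
# so `evaled_obj` simply returns it without calling the Op again.
Y_et = etuplize(Y_rv)
Y_et.evaled_obj
```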
We can force the `ExpressionTuple` to re-evaluate instead of returning the saved object; in this case evaluation fails:
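One way to force re-evaluation, as a sketch (the original snippet may have done this differently), is to rebuild the tuple from its elements so that the cached object is discarded:

```python
from etuples import etuple

# The rebuilt tuple has no cached object, so `evaled_obj` must call the
# RandomVariable Op on (rng, size, dtype, mu, sigma) positionally, and this call fails.
Y_et_fresh = etuple(*Y_et)
Y_et_fresh.evaled_obj  # raises
```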
The issue was discussed in aesara-devs/aemcmc#29 and prevents us from writing miniKanren goals that represent two-way relations between expressions that contain random variables:
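As a sketch, using the `normaleo` goal shown in the comments:

```python
from kanren import eq, lall, run, var


def normaleo(in_expr, out_expr):
    mu_lv, std_lv = var(), var()
    rng_lv, size_lv, dtype_lv = var(), var(), var()
    norm_in_lv = etuple(etuplize(at.random.normal), rng_lv, size_lv, dtype_lv, mu_lv, std_lv)
    norm_out_lv = etuple(etuplize(at.random.normal), rng_lv, size_lv, dtype_lv, mu_lv, std_lv)
    return lall(eq(in_expr, norm_in_lv), eq(out_expr, norm_out_lv))


y_lv = var()
(y_expr,) = run(1, y_lv, normaleo(Y_et, y_lv))
y_expr.evaled_obj  # fails when the expression has to be re-evaluated
```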
The reason, explained in this comment, is that the `evaled_obj` method performs `op.__call__` when evaluating. However, `__call__` does not dispatch to `make_node` systematically for `RandomVariable`s, hence the error.

Instead of dispatching `op.__call__` to `op.make_node` for every random variable (which would be restrictive according to @brandonwillard), we can wrap `RandomVariable` `Op`s with a `MakeNodeOp` `Op` that dispatches `__call__` to `make_node`:
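A sketch of such a wrapper, mirroring the `MakeRVNodeOp` used in the comments:

```python
from dataclasses import dataclass

from aesara.graph import Op


@dataclass
class MakeNodeOp:
    op: Op

    def __call__(self, *args):
        # Evaluate through `make_node` instead of `Op.__call__`
        return self.op.make_node(*args)
```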
The downside to this approach is that defining `kanren` goals becomes more cumbersome, as we need to wrap random variables with `MakeNodeOp`, like:
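For instance (a hypothetical sketch, assuming the `MakeNodeOp` wrapper above and `at` imported as before):

```python
from etuples import etuple
from kanren import var

rng_lv, size_lv, dtype_lv = var(), var(), var()
mu_lv, std_lv = var(), var()

# Every random variable that appears in a goal has to be wrapped explicitly
norm_lv = etuple(MakeNodeOp(at.random.normal), rng_lv, size_lv, dtype_lv, mu_lv, std_lv)
```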
I suggest instead to dispatch `etuplize` for `RandomVariable`, which handles both the etuplization of random variables and of `RandomVariable` `Op`s:
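This is the registration from the snippet in the comments, which also adds the matching `_car`/`_cdr` handlers and uses the `MakeRVNodeOp` wrapper defined there:

```python
from aesara.tensor.random.basic import RandomVariable
from etuples import etuple, etuplize


@etuplize.register(RandomVariable)
def etuplize_random(*args, **kwargs):
    # Wrap the etuplized RandomVariable Op so that evaluation goes through
    # `make_node` rather than `RandomVariable.__call__`
    return etuple(MakeRVNodeOp, etuplize.funcs[(object,)](*args, **kwargs))
```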
And now the above `kanren` goal works both ways:
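A short sketch of running the goal in both directions, assuming the `normaleo` goal and `Y_et` from above; the full run, with outputs, appears in the comments:

```python
y_lv = var()
(y_expr,) = run(1, y_lv, normaleo(Y_et, y_lv))
y_expr.evaled_obj

y_lv_rev = var()
(y_expr_rev,) = run(1, y_lv_rev, normaleo(y_lv_rev, Y_et))
y_expr_rev.evaled_obj
```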
Note that the following code will still fail, however:

[…]

So we will need a mix of the two approaches. I will open a PR soon.