### Does the soft-constraint converge to rigid-constraint

title: "Does the soft-constraint converge to rigid-constraint?"

author: "Yuling Yao"

date: "10/21/2018"

output: html_document

tl;dr.

No.

### Motivations

I have often seen recommendations of soft constraints. The main reason is that a Bayesian will rarely prefer a point mass. Apart from that, I am curious whether the current implementations of these two approaches in Stan are essentially equivalent in the limit.

Let me frame the problem more rigorously. Consider the parameter space $\theta \in \Theta = R^d$, and a transformed parameter

$$z = \xi(\theta), \quad z \in R^m$$

where $\xi : R^d \to R^m, d>m$ is a smooth function. We also need some regularization on $\xi$ to make…

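The limiting behavior of the soft constraint can be sketched numerically in a toy case. Everything below is a hypothetical example, not from the post: take $d = 2$, $m = 1$, $\xi(\theta) = \theta_1 + \theta_2$, and a standard normal base prior on $\theta$. The soft constraint adds $\xi(\theta) \sim \mathrm{normal}(0, \sigma)$ to the target, which keeps the posterior Gaussian, so its covariance is available in closed form:

```python
import numpy as np

# Hypothetical toy model (not from the post):
# d = 2, m = 1, xi(theta) = theta_1 + theta_2, base prior theta ~ N(0, I).
# The soft constraint adds xi(theta) ~ normal(0, sigma), so the soft
# posterior is Gaussian with precision I + a a^T / sigma^2,
# where a = (1, 1) is the gradient of xi.

a = np.array([1.0, 1.0])

def soft_cov(sigma):
    """Covariance of the soft-constrained posterior in this toy model."""
    prec = np.eye(2) + np.outer(a, a) / sigma**2
    return np.linalg.inv(prec)

for sigma in [1.0, 0.1, 0.01]:
    # Posterior variance of xi(theta); by Sherman-Morrison it equals
    # 2 sigma^2 / (sigma^2 + 2), so it vanishes like sigma^2 as sigma -> 0.
    var_xi = a @ soft_cov(sigma) @ a
    print(f"sigma = {sigma:5.2f}  Var[xi(theta)] = {var_xi:.6f}")
```

So the soft posterior does concentrate on the manifold $\{\xi(\theta) = 0\}$ as $\sigma \to 0$, but the density it converges to is the *conditional* one, $\pi(\theta \mid \xi(\theta) = 0)$, which need not match the rigid-constraint (change-of-variable) density without a Jacobian adjustment — the distinction the post goes on to examine.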