This document describes a proposed method for constrained Bayesian optimization in high-dimensional parameter spaces. The key ideas are:
1) Decompose the parameters into a variable part and a fixed part, so that known equality constraints are satisfied by construction (see the first sketch after this list).
2) Introduce disentangled representation learning into the nonlinear embedding, so that unknown inequality constraints can be handled by exploring each latent dimension independently (second sketch below).
3) Apply the method to parameter optimization for a powder weighing system subject to constraints such as non-negativity and monotonicity (third sketch below). Experiments demonstrate that the method explores the parameter space efficiently while respecting these constraints.
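
A minimal sketch of the first idea, assuming the known equality constraints are linear (A x = b); the specific matrix, dimensions, and helper names below are illustrative, not taken from the paper. The variable part is a coordinate in the null space of A, and the remaining ("fixed") degrees of freedom are recovered from the constraint itself, so every candidate the optimizer proposes is feasible by construction.

```python
import numpy as np

# Hypothetical known equality constraint A @ x = b (e.g. parameters sum to 1).
A = np.array([[1.0, 1.0, 1.0, 1.0]])
b = np.array([1.0])

# Null-space basis of A: directions that leave A @ x unchanged.
_, _, Vt = np.linalg.svd(A)
N = Vt[A.shape[0]:].T          # shape (4, 3): the "variable" directions

# Particular solution satisfying A @ x_p = b: anchors the "fixed" part.
x_p = np.linalg.lstsq(A, b, rcond=None)[0]

def to_full(z):
    """Map the low-dimensional variable part z to a full parameter vector
    that satisfies the known equality constraint by construction."""
    return x_p + N @ z

# Any z proposed by the optimizer yields a feasible x:
z = np.array([0.2, -0.1, 0.05])
x = to_full(z)
assert np.allclose(A @ x, b)
```

For the second idea, a beta-VAE-style objective is one common way to obtain a disentangled nonlinear embedding; this is a sketch under that assumption, not necessarily the embedding used in the paper. Weighting the KL term with beta > 1 pushes the latent dimensions toward statistical independence, so a Bayesian optimization routine can perturb one latent coordinate at a time and observe a roughly independent effect in parameter space.

```python
import torch
import torch.nn as nn

class BetaVAE(nn.Module):
    """Minimal beta-VAE sketch; dimensions and beta are placeholder values."""
    def __init__(self, x_dim=20, z_dim=4, hidden=64, beta=4.0):
        super().__init__()
        self.beta = beta
        self.enc = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

    def loss(self, x):
        recon, mu, logvar = self(x)
        rec = ((recon - x) ** 2).sum(dim=1).mean()
        kl = (-0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1)).mean()
        return rec + self.beta * kl   # beta > 1 encourages disentanglement

model = BetaVAE()
x = torch.rand(32, 20)   # placeholder batch of high-dimensional parameters
print(model.loss(x).item())
```

For the third idea, the sketch below only illustrates the kinds of feasibility checks named above (non-negativity and monotonicity of, e.g., a dispensed-mass profile); the function name, input shape, and tolerance are assumptions for illustration, not the paper's interface.

```python
import numpy as np

def is_feasible(profile, tol=1e-9):
    """Return True if a candidate response profile is non-negative and
    non-decreasing, the two constraints cited for the weighing system."""
    profile = np.asarray(profile, dtype=float)
    non_negative = np.all(profile >= -tol)
    monotone = np.all(np.diff(profile) >= -tol)
    return bool(non_negative and monotone)

print(is_feasible([0.0, 0.1, 0.3, 0.3, 0.5]))   # True
print(is_feasible([0.0, 0.2, 0.1]))             # False: not monotone
```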