This document summarizes a journal article that proposes an alternative approach to variable selection called the KL adaptive lasso. The KL adaptive lasso replaces the squared error loss used in the traditional adaptive lasso with a Kullback-Leibler divergence loss. The paper shows that the KL adaptive lasso enjoys the oracle properties, meaning it performs asymptotically as well as if the true underlying model were known in advance: it consistently selects the true set of nonzero variables and estimates their coefficients at the optimal rate. The KL adaptive lasso can also be computed with efficient algorithms such as LARS. The approach is extended to generalized linear models, and its theoretical properties are discussed.
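For context, the sketch below illustrates the squared-error-loss adaptive lasso that the article generalizes, and how its weighted penalty reduces to an ordinary lasso problem solvable by LARS. This is a minimal illustration under assumed settings (synthetic data, a plain OLS pilot estimate, an arbitrary penalty level, and scikit-learn's LassoLars solver), not the article's KL-based procedure or its recommended tuning.

```python
# Minimal sketch of the standard adaptive lasso (squared-error baseline),
# using synthetic data; all names and parameter choices here are illustrative.
import numpy as np
from sklearn.linear_model import LassoLars, LinearRegression

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, -2.0, 1.5] + [0.0] * (p - 3))  # only 3 true signals
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Step 1: pilot (OLS) fit used to form adaptive weights w_j = 1 / |beta_ols_j|^gamma.
gamma = 1.0
beta_ols = LinearRegression(fit_intercept=False).fit(X, y).coef_
weights = 1.0 / (np.abs(beta_ols) ** gamma + 1e-8)

# Step 2: rescale columns so the weighted L1 penalty becomes an ordinary one,
# then solve the resulting lasso problem with the LARS algorithm.
X_scaled = X / weights
lasso = LassoLars(alpha=0.05, fit_intercept=False).fit(X_scaled, y)

# Step 3: undo the rescaling to recover the adaptive-lasso coefficients.
beta_adaptive = lasso.coef_ / weights
print(np.round(beta_adaptive, 2))  # coefficients of the noise variables are typically shrunk exactly to zero
```

The column-rescaling trick works because dividing each column x_j by its weight w_j turns the penalty term sum_j w_j |beta_j| into an unweighted L1 norm on the rescaled coefficients; the KL adaptive lasso changes only the data-fit term, so (per the article's claim) the same LARS-style machinery remains applicable.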