Measuring and improving code readability is important, since a substantial amount of human effort during a project's lifetime is spent reading and understanding code. Various models have been proposed to automatically evaluate how easy it is for a human to read or understand a piece of source code. In this thesis, we investigate which models and metrics are sensitive to small changes in readability (i.e., a single commit). We mined open-source code repositories for readability-improving commits and also selected a random sample of non-readability-improving commits. For each changed file, we computed various metrics and readability model scores before and after the commit. We then measured the difference in each metric before and after the commits, as well as between readability-improving and non-readability-improving commits. We also developed a new model that is sensitive to such small changes. To build our candidate models, we employed Support Vector Regression (SVR) with linear, RBF, or polynomial kernels, and used cross-validation for training. To determine the input features, we applied sequential backward selection. We found that most metrics show no statistically significant changes after readability commits, and the remaining metrics had a very small effect size. When comparing changes after readability commits to changes after non-readability commits, the effect size is larger: almost all metrics show a noticeable change, with at least a very small or small effect size. The SVR code readability model we trained uses 9 features and exhibits approximately the same or slightly larger differences after readability commits compared to the existing readability models.
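As a rough illustration of the modelling setup described above, the following sketch combines sequential backward feature selection with an SVR whose kernel is chosen by cross-validation. It assumes a scikit-learn environment; the feature matrix X, the readability scores y, and the hyperparameter grid are placeholders, not the data or settings used in the thesis.

```python
# Minimal sketch: SVR with linear/RBF/polynomial kernels, cross-validated
# hyperparameter selection, and sequential backward selection of input features.
# X, y, and the grid values are illustrative placeholders only.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))       # placeholder: per-file metric values
y = rng.uniform(0, 1, size=200)      # placeholder: readability scores

pipeline = Pipeline([
    ("scale", StandardScaler()),
    # Backward elimination down to 9 features, the size reported for the final model.
    ("select", SequentialFeatureSelector(SVR(kernel="linear"),
                                         n_features_to_select=9,
                                         direction="backward",
                                         cv=5)),
    ("svr", SVR()),
])

# Cross-validated search over the three kernel families mentioned above.
param_grid = {
    "svr__kernel": ["linear", "rbf", "poly"],
    "svr__C": [0.1, 1, 10],
}
search = GridSearchCV(pipeline, param_grid, cv=5,
                      scoring="neg_mean_squared_error")
search.fit(X, y)
print(search.best_params_)
```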