The document provides a step-by-step guide for using large language models (LLMs) to synthesize training data. It begins by explaining why training data matters and what synthetic data offers. It then outlines the process: 1) Choosing the right LLM based on task requirements, data availability, and other factors. 2) Prompting or fine-tuning the chosen LLM to generate the synthetic data. 3) Evaluating the quality of the synthesized data along three axes: fidelity, utility, and privacy. The guide uses generating synthetic sales data for a coffee shop sales prediction app as a running example.
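A minimal sketch of the fidelity check described in step 3, using hypothetical daily coffee-shop sales figures (all data values and the `fidelity_report` helper are illustrative assumptions, not from the guide): it compares basic distributional statistics of real and synthetic samples.

```python
import statistics

# Hypothetical daily cups-sold figures: real observations vs.
# values an LLM might synthesize for the same week.
real_sales = [120, 135, 110, 150, 142, 128, 160]
synthetic_sales = [118, 140, 115, 148, 138, 131, 155]

def fidelity_report(real, synthetic):
    """Compare simple distributional statistics of real vs. synthetic data.

    Smaller gaps suggest the synthetic data better preserves the
    statistical shape of the real data (one aspect of fidelity).
    """
    return {
        "mean_gap": abs(statistics.mean(real) - statistics.mean(synthetic)),
        "stdev_gap": abs(statistics.stdev(real) - statistics.stdev(synthetic)),
    }

report = fidelity_report(real_sales, synthetic_sales)
print(report)
```

In practice, fidelity is usually measured with richer tests (e.g., comparing full distributions or correlations between columns), but even a mean/variance gap like this catches gross mismatches early.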