Building scalable data pipelines involves integrating with legacy systems, implementing data lineage and provenance, managing the data lifecycle, and engineering pipelines that can handle large data volumes. Effective pipeline engineering requires understanding how to extract, transform, and load (ETL) data while addressing privacy, security, and the integration of diverse data sources. Frameworks such as Cascading help assemble pipelines, but thorough testing and careful scaling are also required to build robust solutions.
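To make the extract-transform-load steps concrete, the sketch below reads a small delimited file, normalizes one column, and writes the result back out. It is a minimal, single-machine illustration in plain Java rather than a Cascading flow; the file names (`events_raw.csv`, `events_clean.csv`) and the three-column layout are assumptions made for the example.

```java
// A minimal extract-transform-load (ETL) sketch in plain Java. File names,
// the column layout, and the filtering rule are illustrative assumptions,
// not taken from any particular pipeline or framework.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;

public class SimpleEtlJob {

    public static void main(String[] args) throws IOException {
        Path input = Path.of("events_raw.csv");    // hypothetical source file
        Path output = Path.of("events_clean.csv"); // hypothetical sink file

        // Extract: read every line from the source file.
        List<String> rawLines = Files.readAllLines(input);

        // Transform: drop the header, skip malformed rows, and normalize
        // the user-id column to lower case.
        List<String> cleaned = rawLines.stream()
                .skip(1)                                     // header row
                .map(String::trim)
                .filter(line -> line.split(",").length == 3) // expect 3 columns
                .map(line -> {
                    String[] cols = line.split(",");
                    cols[1] = cols[1].toLowerCase();         // normalize user id
                    return String.join(",", cols);
                })
                .collect(Collectors.toList());

        // Load: write the cleaned rows to the target file.
        Files.write(output, cleaned);
    }
}
```

The same extract, transform, and load structure is what a framework such as Cascading expresses as a flow and distributes across a cluster; the single-machine version above only illustrates the shape of the work each stage performs.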