This dissertation proposes and analyzes distributed algorithms for solving convex optimization problems over networked systems. Specifically, it considers noise-robust algorithms for distributed convex optimization, distributed online convex optimization over time-varying graphs, distributed saddle-point subgradient methods for problems with consensus constraints, and distributed nuclear norm regularization for multi-task learning. It also studies stability properties of stochastic differential equations subject to persistent noise. The analysis establishes convergence and performance guarantees for the distributed optimization algorithms and characterizes different notions of stability for the stochastic systems.