As artificial intelligence systems become more advanced, there is an ongoing debate about who should determine how they are designed to behave and make decisions that benefit humanity. While computer scientists and engineers build AI, many argue that input from fields such as philosophy, ethics, and social science is needed to guide AI development in a way that properly accounts for human values and priorities. Ensuring that AI is developed and applied responsibly will require open discussion and cooperation among experts from different backgrounds to establish shared standards for advanced AI.