Bring AI and serverless together and you create a new world in which what you thought you knew about software may need to be adjusted. Developers transition from “micro-managers”, telling computers what to do step by step, into “teachers”, helping computers learn. Code alone is not enough anymore: if the past decades were spent producing tools to handle code better, now there is something more abstract than code, expressed in a human-unfriendly form, called a “model”, which is entangled with the code. How does this impact developer experience? Is it easy to manage? Can serverless architectures improve it?
This presentation will walk you through a demo AI app built with serverless, composing multiple AI functions into one workflow. The functions will be deployed to a FaaS platform powered by Apache OpenWhisk - the most popular open source serverless platform. You’ll learn about FaaS architectures, open source technologies, and areas where serverless streamlines the experience for developers. We'll try to answer the question: is AI development FaaSter with serverless?
If you want to learn about emerging technologies enhancing developer experience, or if you’re passionate about AI applications, then this presentation is for you.
25. "With AI, we should look at the programmer more as a teacher, rather than a micro-manager."
— Peter Norvig, Director of Research at Google
26. "We spent the last 40 years building up tools to build programs to deal with text (code) in a good way …"
27. "… but right now we are creating models instead of text, and we just don’t have the tools to deal with that. We need to retool the industry."
— Peter Norvig, Director of Research at Google
28. "Neural networks are not just another classifier, they represent the beginning of a fundamental shift in how we write software. They are Software 2.0."
— Andrej Karpathy, Director of AI at Tesla (Nov 2017)
39. Inference matches the FaaS model
Enough code for a function
Each function processes one request at a time
function main(input) {
  // 1. download and cache model
  // 2. return inference(input)
}
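The pattern on this slide can be sketched as a Node.js OpenWhisk-style action. This is a minimal sketch, not the demo app's actual code: `loadModel` and the `predict` call are hypothetical stand-ins for a real framework's model-loading and inference APIs. The key idea is that the model handle lives outside `main`, so it is downloaded once on a cold start and reused across warm invocations of the same container:

```javascript
// Module-level cache: survives across warm invocations of the same container.
let model = null;

// Hypothetical stand-in for downloading weights (e.g. from object storage)
// and deserializing them into a usable model handle.
function loadModel() {
  return { predict: (input) => `prediction for ${input}` };
}

// OpenWhisk action entry point: each invocation handles one request.
function main(params) {
  if (model === null) {
    // 1. download and cache model (cold start only)
    model = loadModel();
  }
  // 2. return inference(input)
  return { result: model.predict(params.input) };
}

exports.main = main;
```

Because each function instance processes one request at a time, there is no locking around the cache; the trade-off is that every cold-started container pays the model-download cost once.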