Igor L. Markov is currently working at Google on search infrastructure, while holding appointments at Michigan and occasionally teaching at Stanford. He received his M.A. in Mathematics and Ph.D. in Computer Science from UCLA. He is an IEEE Fellow and an ACM Distinguished Scientist.
Igor is interested in computers that make computers, including algorithms and mathematical models, as well as software and hardware. Some of his results have led to order-of-magnitude improvements in practice, and many of them are used in commercial tools and open-source software. During the 2011 redesign of the ACM Computing Classification System, he led the effort on the Hardware tree. At the University of Michigan, he chaired the undergraduate program in Computer Engineering (ranked #7 in the US) for a number of years.
Igor has co-authored five books and over 200 refereed publications. He has served on program committees and chaired tracks at top conferences in Electronic Design Automation. Twelve Ph.D. dissertations have been defended under his guidance. Current and former students have interned at or been employed by AMD, Altera, Amazon, Berkeley, Cadence, Calypto, Columbia University, the US Department of Defense, the US Department of Energy, the Howard Hughes Medical Inst., Google, IBM Research, Lockheed Martin, Microsoft, MIT, Qualcomm, Samsung, Synopsys, and Texas Instruments.
Igor led student teams that won first place in multi-month research competitions in optimization software (organized by IBM Research and Intel Labs) in 2007, 2009, 2010, 2012, and 2013.
Dealing with AI as a Dystopian Threat to Humanity
We have seen how emergent AI could attack humanity without warning, either to exterminate or to enslave it. How realistic are such sci-fi scenarios in our future? How can they be avoided? In this presentation, we argue that humanity's evolutionary obsession with survival has deep roots and teaches us to rely on the physical world when addressing grave threats. We propose a set of rules and constraints on AI to limit its destructive potential. We also point out that defending humanity may become more difficult as technology changes who we are.