Gayle McDowell discusses best practices for interview architecture. Consistency, efficiency, high standards and candidate happiness should be priorities. She recommends either a bar raiser model or hiring committee to maintain consistency. Code assessments or homework projects can efficiently evaluate candidates before onsite interviews. A variety of question styles like algorithms, design and pair programming assess different skills. Coding platforms also vary in appropriateness depending on the problem. With the right training and guidelines, many approaches can work well but there is no perfect system.
C* Summit 2013: The World's Next Top Data Model by Patrick McFadin (DataStax Academy)
You know you need Cassandra for its uptime and scaling, but what about that data model? Let's bridge that gap and get you building your game-changing app. We'll break down topics like storing objects and indexing for fast retrieval. You will see that by understanding a few things about Cassandra internals, you can put your data model in the spotlight. The goal of this talk is to get you comfortable working with data in Cassandra throughout the application lifecycle. What are you waiting for? The cameras are waiting!
Introduction and Overview of Apache Kafka, TriHUG, July 23, 2013 (mumrah)
Apache Kafka is a new breed of messaging system built for the "big data" world. Coming out of LinkedIn (and donated to Apache), it is a distributed pub/sub system built in Scala. It has been an Apache TLP now for several months with the first Apache release imminent. Built for speed, scalability, and robustness, Kafka should definitely be one of the data tools you consider when designing distributed data-oriented applications.
The talk will cover a general overview of the project and technology, with some use cases, and a demo.
Everything you ever needed to know about Kafka on Kubernetes but were afraid ... (Hosted by Confluent)
Kubernetes has become the de facto standard for running cloud-native applications, and many users turn to it to run stateful applications such as Apache Kafka as well. You can use different tools to deploy Kafka on Kubernetes - write your own YAML files, use Helm Charts, or go for one of the available operators. But there is one thing all of these have in common: you still need very good knowledge of Kubernetes to make sure your Kafka cluster works properly in all situations. This talk will cover different Kubernetes features such as resources, affinity, tolerations, pod disruption budgets, topology spread constraints and more, and it will explain why they are important for Apache Kafka and how to use them. If you are interested in running Kafka on Kubernetes and do not know all of these, this is the talk for you.
Cassandra Day NY 2014: Apache Cassandra & Python for The New York Times ⨍aбrik (DataStax Academy)
In this session, you’ll learn about how Apache Cassandra is used with Python in the NY Times ⨍aбrik messaging platform. Michael will start his talk off by diving into an overview of the NYT⨍aбrik global message bus platform and its “memory” features, and then discuss their use of the open-source Apache Cassandra Python driver by DataStax. A progressive benchmark to test features/performance will be presented, from naive and synchronous to asynchronous with multiple IO loops; these benchmarks are tailored to usage at the NY Times. Code snippets, followed by beer, for those who survive. All code available on GitHub!
Accelerating Apache Spark Shuffle for Data Analytics on the Cloud with Remote... (Databricks)
The increasing challenge of serving ever-growing data driven by AI and analytics workloads makes disaggregated storage and compute more attractive, as it enables companies to scale their storage and compute capacity independently to match data and compute growth rates. Cloud-based big data services are gaining momentum as they provide simplified management, elasticity, and a pay-as-you-go model.
Geospatial Indexing at Scale: The 15 Million QPS Redis Architecture Powering ... (Daniel Hochman)
Talk given at RedisConf 17 on June 1, 2017 by Daniel Hochman. A video will be published by the conference organizers.
Abstract:
Built-in GEO commands in Redis provide a solid foundation for location-based applications. The scale of Lyft requires a completely different approach to the problem. Learn how to push beyond your constraints to build a highly available, high-throughput, horizontally scalable Redis architecture. The techniques presented in this case study are broadly applicable to scaling any type of application powered by Redis. The talk will cover data modeling, open-source solutions, reliability engineering, and the Lyft platform.
Common Strategies for Improving Performance on Your Delta Lakehouse (Databricks)
The Delta Architecture pattern has made the lives of data engineers much simpler, but what about improving query performance for data analysts? What are some common places to look for tuning query performance? In this session we will cover some common techniques to apply to Delta tables to make them perform better for data analysts' queries. We will look at a few examples of how you can analyze a query and determine what to focus on to deliver better performance results.
Everyday I'm Shuffling - Tips for Writing Better Spark Programs, Strata San J... (Databricks)
Watch video at: http://youtu.be/Wg2boMqLjCg
Want to learn how to write faster and more efficient programs for Apache Spark? Two Spark experts from Databricks, Vida Ha and Holden Karau, provide performance tuning and testing tips for your Spark applications.
Improving Kafka at-least-once performance at Uber (Ying Zheng)
At Uber, we are seeing an increasing demand for Kafka at-least-once delivery (acks=all). So far, we have been running a dedicated at-least-once Kafka cluster with special settings. With a very low workload, the dedicated at-least-once cluster has been working well for more than a year. When trying to allow at-least-once producing on the regular Kafka clusters, producing performance was the main concern. We spent some effort on this issue in recent months, and managed to reduce at-least-once producer latency by about 80% with code changes and configuration tuning. When acks=0, these improvements also help increase Kafka throughput and reduce Kafka end-to-end latency.
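For context, "at-least-once" producing is typically enabled on the client side with a handful of producer settings. A minimal sketch of such a configuration, assuming the librdkafka-based confluent-kafka client (the broker address and exact values are illustrative, not from the talk):

```python
# Producer settings commonly associated with at-least-once delivery.
# These keys are standard librdkafka/Kafka producer config names;
# the values and broker address below are placeholder assumptions.
at_least_once_conf = {
    "bootstrap.servers": "broker:9092",  # placeholder address
    "acks": "all",               # wait for all in-sync replicas to ack
    "retries": 2147483647,       # retry transient failures indefinitely
    "enable.idempotence": True,  # avoid duplicates introduced by retries
    "linger.ms": 5,              # small batching delay to recover throughput
}
```

With confluent-kafka installed, this dict would be passed straight to `confluent_kafka.Producer(at_least_once_conf)`; the trade-off the talk explores is exactly the latency cost that `acks=all` adds over `acks=0`.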
These slides are from a recent meetup at Uber, "Apache Cassandra at Uber and Netflix", on new features in 4.0.
Abstract:
A glimpse of Cassandra 4.0 features:
There are a lot of exciting features coming in 4.0, but this talk covers some of the features that we at Netflix are particularly excited about and looking forward to. In this talk, we present an overview of just some of the many improvements shipping soon in 4.0.
This is the presentation I gave at JavaDay Kiev 2015 on the architecture of Apache Spark. It covers the memory model, the shuffle implementations, DataFrames and some other high-level stuff, and can be used as an introduction to Apache Spark.
HBaseCon 2015: Taming GC Pauses for Large Java Heap in HBase (HBaseCon)
In this presentation, we will introduce HotSpot's Garbage First collector (G1GC) as the most suitable collector for latency-sensitive applications running in large-memory environments. We will first discuss G1GC internal operations and tuning opportunities, and also cover tuning flags that set desired GC pause targets, change adaptive GC thresholds, and adjust GC activities at runtime. We will provide several HBase case studies using Java heaps as large as 100GB that show how to best tune applications to remove unpredictable, protracted GC pauses.
How the new append operation of the Hadoop Distributed File System (HDFS) works: the internals of the processing, and the new states beyond those of the write operation.
Apply Hammer Directly to Thumb; Avoiding Apache Spark and Cassandra AntiPatt... (Databricks)
Learn from someone who has made just about every basic Apache Spark mistake possible so you don’t have to! We’ll go over some of the most common things users end up doing that cause unnecessary pain, and explain how to avoid them.
Confused about serialization? Not sure what is meant by "use a singleton to share connections"? Together we will walk through concrete examples of how to handle these situations. Learn how to do all your work remotely, not break your Catalyst optimizations, use all your resources, and much more! Together let's learn how to make our Spark applications better!
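The "singleton to share connections" idea the abstract alludes to is a common Spark pattern: open one connection per executor process (inside `mapPartitions`/`foreachPartition`) instead of one per record. A minimal sketch in plain Python, assuming a stand-in `DummyConnection` class (the names are illustrative, not from the talk):

```python
# Sketch of the singleton-connection pattern used with Spark's
# mapPartitions/foreachPartition. The module-level singleton lives
# once per worker process, so every partition handled by that
# process reuses the same connection.

_connection = None  # one per worker process

class DummyConnection:
    """Stand-in for a real database/client connection."""
    def write(self, record):
        return f"wrote {record}"

def get_connection():
    """Lazily create the connection once per process, then reuse it."""
    global _connection
    if _connection is None:
        _connection = DummyConnection()
    return _connection

def write_partition(records):
    # Called once per partition; the connection is fetched once here,
    # not once per record.
    conn = get_connection()
    return [conn.write(r) for r in records]
```

In PySpark this would be wired up as `rdd.mapPartitions(write_partition)`; because the connection object never crosses the driver/executor boundary, it also sidesteps the serialization errors the talk mentions.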
There are different dimensions of scalability for a distributed storage system: more data, more stored objects, more nodes, more load, additional data centers, etc. This presentation addresses the geographic scalability of HDFS. It describes unique techniques implemented at WANdisco which allow scaling HDFS over multiple geographically distributed data centers for continuous availability. The distinguishing principle of our approach is that metadata is replicated synchronously between data centers using a coordination engine, while the data is copied over the WAN asynchronously. This allows strict consistency of the namespace on the one hand and fast LAN-speed data ingestion on the other. In this approach, geographically separated parts of the system operate as a single HDFS cluster, where data can be actively accessed and updated from any data center. The presentation also covers advanced features such as selective data replication.
Extended version of presentation at Strata + Hadoop World. November 20, 2014. Barcelona, Spain.
http://strataconf.com/strataeu2014/public/schedule/detail/39174
Using Spark to Load Oracle Data into Cassandra (Jim Hatcher)
This presentation describes how you can use Spark as an ETL tool to get data from a relational database into Cassandra. I go through the concept in general and then talk about some specific issues you might run into and how to fix them.
Data Warehouses in Kubernetes Visualized: the ClickHouse Kubernetes Operator UI (Altinity Ltd)
Graham Mainwaring and Robert Hodges summarize management of ClickHouse on Kubernetes using the ClickHouse Kubernetes Operator and introduce a new UI for it. Presented at the 15 Dec '22 SF Bay Area ClickHouse Meetup.
Prepping Your Engineering Candidates to Reduce Your False Negatives (Gayle McDowell)
Have you ever sourced the perfect software developer, only to have him or her bomb the interview? What should programmers expect when asked to go to the whiteboard? How “buttoned up” should their code be? How do your hiring managers assess problem-solving capabilities? What key behaviors are highly valued at your company? You will learn how to coach your tech candidates effectively, helping more of them survive the interview process and increasing your recruiting ROI.
Product managers are sometimes called the "CEO of a product." But what is a product manager, really, and how do you land this role? How do you crack the PM interview?
As professionals we pride ourselves on adapting our skills to changing technology patterns. However, we might freeze up when we sit on the interviewer's side of the hiring table. How do you ask effective questions? Are you giving the candidate the right impression of your team & the problems you need to solve? Just how many interviews are enough? Is the candidate engaged or feeling unwelcome from the start?
Whether you are a manager or a valued contributor, knowing how to conduct job interviews well is essential to your career. If you find yourself at a loss for ideas or afraid that your interviewer skills are stuck in the past, this session will help! You will leave with practical examples of questions and interview patterns to get the information you need to grow your data technology teams.
Reverse Engineering Engineering Interviewing: How to Be a Great Interviewer (Gayle McDowell)
Why do some great software developers fail interviews? How do you design more effective algorithm/problem-solving interview questions? Interviewers and recruiters can help reduce false negatives, ensuring that more good candidates do well.
Gayle Laakmann McDowell is the founder/CEO of CareerCup.com and the author of Cracking the Coding Interview (Amazon.com's best-selling interview book) and Cracking the Product Manager Interview. Gayle is a former Google, Microsoft, and Apple software engineer and served on Google's hiring committee.
Cracking the Coding Interview (http://www.amazon.com/dp/098478280X) is the #1 best selling interview book on Amazon.
Cracking the Coding Interview gives you the interview preparation you need to get the top software developer jobs. This is a deeply technical book and focuses on the software engineering skills to ace your interview. The book is over 500 pages and includes 150 programming interview questions and answers, as well as other advice.
Conversion Optimization Webinar with Peep Laja (Optimizely)
During this webinar conversion expert Peep Laja shares his 6 step framework for continuous optimization. Learn what simple steps you can take to maximize online conversions and help turn clicks into customers.
Cracking the Coding & PM Interview (Jan 2014) (Gayle McDowell)
CS interviews are a different breed from other interviews and, as such, require specialized skills and techniques. This talk will teach you how to prepare for coding and PM interviews, what top companies like Google, Amazon, and Microsoft really look for, and how to tackle the toughest programming and algorithm problems. This is not a fluffy be-your-best talk; it is deeply technical and will discuss specific algorithm and data structure topics.
Handouts to help you prepare for programming/coding/algorithm interview + behavioral interview questions + product management questions, especially at the top tech companies.
CS interviews are a different breed from other interviews and, as such, require specialized skills and techniques. This talk will teach you how to prepare for coding interviews, what top companies like Google, Amazon, and Microsoft really look for, and how to tackle the toughest programming and algorithm problems. This is not a fluffy be-your-best talk; it is deeply technical and will discuss specific algorithm and data structure topics.
Interviewing Great Developers: Reverse Engineering Interview Coaching to Crea... (Gayle McDowell)
Every engineering department says it wants to hire the very best, but few actually do. Most coding interviews focus on programming language knowledge and trivia. But companies that hire the very best ask questions that go much deeper. In this session, you will discover what hiring managers at “elite” companies look for when hiring developers, architects and program managers. Discover why, in some cases, it’s far more important that engineers exhibit “soft skills” like communication, structured thinking and creativity than exhibit proficiency in a specific language. Gayle, author of three books on interviewing (for devs and PMs), will “reverse engineer” her coaching and advice to hopeful candidates, to help recruiters screen and select the ever-elusive A-players, gurus, rock stars, and ninjas.
3. Gayle Laakmann McDowell (gayle, in/gaylemcd)
They Don’t Know…
How many interviews
Who will be interviewing
If they’ll code? How?
What they need to know
How the decision gets made
WHY?
Lots of myths (and misinformation)!
5. Consistency & Efficiency
Consistency
Outcome
Process
Questions
Efficiency
Speedy process
Able to expedite
Minimal overhead
Minimal false negatives
6. High Bar & Happiness
High Bar
Minimize false positives
Good, adaptable people
Happiness
Enjoyable experience
Makes company look good
Transparency
7. The Process
Resume Selection
Intro Call w/ Recruiter
Email that outlines process
Code Assessment
Phone Interview
~4 onsite interviews
Discussion & Decision
“Sell” Call / Dinner
8. Stuff I’ll Discuss
Bar Raisers vs. Hiring Committees
Offline Work
Homework vs. code assessment tools
Question Style
Knowledge, algorithms, pair programming
Coding Platform
Real code vs. pseudocode
Whiteboard vs. computer
12. Who’s it good for?
Companies that:
See 5 or more dev candidates per week
Want to improve process
Hire for company, not team
Are not very knowledge-focused
Easier to implement early!
13. Hiring Committee: Best Practices
Meet at least 2x per week
Multiple HCs:
Beware of bar creep / inconsistencies
Let interviewers observe HC
Train interviewers to write feedback
Quality of decisions rests on feedback
14. Bar Raisers
Cons
Need consistency across company
Need to scale team
Pros
Many of the HC benefits:
Consistency
High bar
Transparency
But easier to implement
No bottleneck
15. Bar Raisers: Best Practices
Select people who are inherently good
Experienced at interviewing
Nice, empathetic
Smart & can challenge candidate
Train them thoroughly
Empower them
Assign outside of team
Watch out for scale/exhaustion!
18. Homework Projects
Big
Very practical
Some love this
Less cheating
Except: algos
Too immediate
Needs eng time
Disproportionate workload
Scales poorly for candidate
19. Homework: Best Practices
Show candidate interest first
< 4 hours
If > 4, onsite project review
Architecture, not algorithms
Define review criteria
Avoid confusion with company work
20. Homework: Who It’s Good For
Language focused
Low priority on algorithms / thought process
Experienced candidates (maybe)
21. Code Assessment Tools
Fast, cheap eval
More candidates
Non-traditional
Sets expectations for onsite
Consistent data point
Cheating
May turn off senior candidates
23. Who It’s Good For
Small, mid-sized, and big companies
Value algorithms / problem solving
Lots of candidates
Want to look at non-traditional candidates
24. Code Assessment: Best Practices
Show candidate interest first
Beware of cheating
(But no biggie!)
Clear expectations
Pick GREAT questions
Similar to real interviews
Unique questions
1–2 hour test
30. Pair Programming
Many candidates enjoy it
Feels fair & real world
Assesses code style / structure
Shows interpersonal interaction
Less understood
Not great for algos
Interviewer really matters
Biased by tools
31. Pair Programming: Best Practices
Prep/warn candidates
Need GREAT interviewer
Give choice of problems
Okay/good to pick unreasonably big problems
Guide candidates
(Okay to ask questions, not know tools, etc.)
33. Why We Make Them Code
Can they put “thoughts” into “actions”?
Do they show good structure and style?
Do they think about the impact of decisions?
35. A Game with Secret Rules
… and this is for a simple problem
36. Don’t Allow Pseudocode
Unpredictable playing field
Details matter
If “real code” is too hard for them…
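The "details matter" point is easy to see with a concrete example: pseudocode can say "split the range in half," but real code has to commit to the boundary arithmetic where most bugs live. A minimal illustration (the function and comments are mine, not from the slides):

```python
def binary_search(items, target):
    """Return the index of target in sorted items, or -1.

    Every commented line below is a detail that pseudocode
    typically glosses over, and each is a classic bug site.
    """
    lo, hi = 0, len(items) - 1       # inclusive bounds: off-by-one risk
    while lo <= hi:                  # <= not <, or the last item is missed
        mid = lo + (hi - lo) // 2    # avoids overflow in fixed-width languages
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1             # +1, or the loop never terminates
        else:
            hi = mid - 1
    return -1
```

A candidate who writes only the pseudocode version never has to confront any of these choices; the real-code version surfaces them immediately.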
37. How to Code
Big Practical Stuff
Use computer
Pair programming
Small Stuff
Algorithm-focused
Computer or whiteboard
41. Computer: Best Practices
Let candidate bring laptop
Instruct: not every detail
Encourage communication and thinking
Recognize the bias!
42. A Case for Whiteboards
Encourages thinking & communication
More language agnostic
Consistent across candidates
Better laptop/tools don’t matter
It’s “standard”
44. Whiteboard: Best Practices
Encourage shorthand
Be upbeat & encouraging
Reasonable expectations
45. Recommendations
If skill-focused: Computer
If algos-focused: Whiteboard
If a little of each: Either/or
Both can work!
… with proper training
Why not let the candidate choose?