Flexible Authentication Strategies with SASL/OAUTHBEARER (Michael Kaminski, The New York Times; Ron Dagostino, State Street Corp.) Kafka Summit NYC 2019
In order to maximize Kafka accessibility within an organization, Kafka operators must choose an authentication option that balances security with ease of use. Kafka has historically been limited to a small number of authentication options that are difficult to integrate with a Single Sign-On (SSO) strategy, such as mutual TLS, basic auth, and Kerberos. The arrival of SASL/OAUTHBEARER in Kafka 2.0.0 affords system operators a flexible framework for integrating Kafka with their existing authentication infrastructure. Ron Dagostino (State Street Corporation) and Mike Kaminski (The New York Times) team up to discuss SASL/OAUTHBEARER and its real-world applications. Ron, who contributed the feature to core Kafka, explains the origins and intricacies of its development along with additional, related security changes, including client re-authentication (merged and scheduled for release in v2.2.0) and the plans for support of SASL/OAUTHBEARER in librdkafka-based clients. Mike Kaminski, a developer on the Publishing Pipeline team at The New York Times, talks about how his team leverages SASL/OAUTHBEARER to break down silos between teams by making it easy for product owners to get connected to the Publishing Pipeline’s Kafka cluster.
5. KIP-86: Configurable SASL Callback Handlers
1. Define when a client will retrieve credentials
2. Define how a client will retrieve credentials
3. Define the transfer of the client’s credentials from JAAS to SASL
4. Define how a broker will validate the client’s credentials
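The four extension points above map onto client- and broker-side configuration. A minimal sketch of the wiring (the `com.example.*` handler class names are illustrative placeholders, not classes from the talk):

```properties
# Client: the OAUTHBEARER mechanism and its JAAS login module govern
# *when* credentials are retrieved (at login and on token refresh).
sasl.mechanism=OAUTHBEARER
sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required;

# Client: *how* credentials are retrieved (KIP-86 login callback handler).
sasl.login.callback.handler.class=com.example.MySaslLoginCbHandler

# Broker: *how* the client's credentials are validated.
listener.name.sasl_ssl.oauthbearer.sasl.server.callback.handler.class=com.example.MyValidatorCbHandler
```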
9. SASL/OAUTHBEARER: PROD Token Retrieval
public class MySaslLoginCbHandler implements
        org.apache.kafka.common.security.auth.AuthenticateCallbackHandler {
    public void handle(Callback[] callbacks)
            throws IOException, UnsupportedCallbackException {
        for (Callback callback : callbacks) {
            /*
             * For a callback of type OAuthBearerTokenCallback,
             * must retrieve a token and ultimately invoke
             * ((OAuthBearerTokenCallback) callback)
             *     .token(theRetrievedOAuthBearerToken)
             */
        }
    }
}
See OAuthBearerUnsecuredLoginCallbackHandler for guidance
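The retrieved token is handed to Kafka as an org.apache.kafka.common.security.oauthbearer.OAuthBearerToken (raw value, principal name, lifetime). As a minimal, self-contained sketch of just the claim-parsing step a handler needs for that, assuming an unsecured JWS compact serialization (the JwtClaims class and its regex approach are illustrative; a production handler should use a real JOSE library):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: pull a claim out of a JWT payload so a login callback handler
// can populate OAuthBearerToken#principalName() and #lifetimeMs().
public class JwtClaims {

    // Crude extraction of a claim value from the base64url-encoded payload.
    static String claim(String compactJwt, String name) {
        String[] parts = compactJwt.split("\\.");
        String json = new String(Base64.getUrlDecoder().decode(parts[1]),
                StandardCharsets.UTF_8);
        Matcher m = Pattern
                .compile("\"" + name + "\"\\s*:\\s*\"?([^\",}]+)")
                .matcher(json);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        // Build an unsecured JWS ({"alg":"none"}, empty signature) to demo.
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String jwt = enc.encodeToString("{\"alg\":\"none\"}".getBytes(StandardCharsets.UTF_8))
                + "."
                + enc.encodeToString("{\"sub\":\"alice\",\"exp\":1234567890}"
                        .getBytes(StandardCharsets.UTF_8))
                + ".";
        System.out.println(claim(jwt, "sub")); // alice
        System.out.println(claim(jwt, "exp")); // 1234567890
    }
}
```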
11. OAUTHBEARER and KIP-86 (4: Validate)
listener.name.sasl_ssl.oauthbearer.sasl.server.callback.handler.class=...
12. SASL/OAUTHBEARER: PROD Token Validation
public void handle(Callback[] callbacks)
        throws IOException, UnsupportedCallbackException {
    for (Callback callback : callbacks) {
        /*
         * For a callback of type OAuthBearerValidatorCallback,
         * must retrieve the token value via .tokenValue(),
         * validate it, and ultimately invoke
         * ((OAuthBearerValidatorCallback) callback)
         *     .token(theValidatedOAuthBearerToken)
         */
    }
}
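Beyond signature, issuer, and audience checks, a validator typically rejects expired tokens. A minimal, self-contained sketch of only the lifetime check (the class, method, and 30-second skew constant are illustrative assumptions, not Kafka settings):

```java
// Illustrative lifetime check a validator callback handler might apply
// before accepting a token; signature/issuer/audience checks omitted.
public class TokenLifetime {

    // Tolerance for clock drift between token issuer and broker (assumption).
    static final long ALLOWED_CLOCK_SKEW_MS = 30_000;

    // expMs: the token's "exp" claim in ms since epoch; nowMs: current time.
    static boolean stillValid(long expMs, long nowMs) {
        return nowMs < expMs + ALLOWED_CLOCK_SKEW_MS;
    }

    public static void main(String[] args) {
        long now = 1_000_000L;
        System.out.println(stillValid(now + 60_000, now)); // true
        System.out.println(stillValid(now - 60_000, now)); // false
    }
}
```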
13. Non-JVM Clients
• librdkafka: https://github.com/edenhill/librdkafka/pull/2189
• Adds C/C++ support in next release after current v1.0 release (v1.0.1?)
• Go (https://github.com/confluentinc/confluent-kafka-go/pull/300/)
• Python, .NET Support
• Shopify/sarama -- Go client (Mike)
• zendesk/ruby-kafka
14. Long-Lived Kafka Connections
● OAuth tokens have a fixed lifetime
● What if we want to use the token contents for authz?
○ (OAUTHBEARER.token negotiated property)
● Need to also remove ACLs when disabling an identity
15. KIP-368: SASL Client Re-Authentication
● Released in v2.2.0
● Adds connections.max.reauth.ms broker property
(optional prefix: listener.name.sasl_[plaintext|ssl].<mechanism>.)
● Opt-in
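A sketch of the opt-in broker setting described above (the one-hour value is an arbitrary example; the default of 0 leaves re-authentication disabled):

```properties
# Re-authentication deadline in ms; 0 (the default) disables it.
connections.max.reauth.ms=3600000
# Or scoped per listener/mechanism via the optional prefix:
listener.name.sasl_ssl.oauthbearer.connections.max.reauth.ms=3600000
```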
16. KIP-368: SASL Client Re-Authentication
● Broker tells clients the deadline by which they must re-authenticate
● v2.2.0+ Java clients “understand”
○ will transparently re-authenticate
● Broker closes the connection when used if not properly re-authenticated
○ clients that don’t understand: disconnected
○ will re-connect (forcing a new authentication)
17. The KIP Process
● Governance
○ Create the KIP, discuss over email
○ Maybe include a pull request
○ Vote after discussion completes
● Want: “Yes, this is a good feature to add!”
● Stay focused
● Don’t be defensive
○ “Hmm... The PR looks quite a lot different from what I hoped we would do...”
○ “lol. Yeah, I'm not surprised to get this feedback... this feedback is excellent.
Let me try to unpack/address it, and let’s see where we end up.”
24. 1. How we use Kafka.
2. Authentication challenges.
3. Reimagining the on-boarding experience.
4. SASL/OAUTHBEARER to the rescue!
25. Sources of Development Friction
• Existing Kafka auth features—SASL/{GSSAPI, PLAIN, SCRAM}, SSL—did
not fit with the GCP auth model employed by other Pipeline
microservices.
• Onboarding a new team often involved 1:1 assistance.
• Users were discouraged from prototyping apps.
• In general, developers preferred Google Cloud Pub/Sub because it
was easier to set up than a Monolog Kafka consumer.
26. 1. How we use Kafka.
2. Authentication challenges.
3. Reimagining the on-boarding experience.
4. SASL/OAUTHBEARER to the rescue!
27. Setting up authentication is often a developer’s first experience with
Kafka.
It’s important to make client on-boarding simple.
28. Goal: Make creating a Kafka consumer as easy as creating a Cloud
Pub/Sub client.
32. NYT Kafka client landscape
• Primarily Java and Golang (Sarama) clients.
• Only the Java client supported SASL/O.B.
• Keep legacy support for unsupported clients (Node.js, etc).
• Write helper libraries to abstract away client configs
33. try (Consumer<String, Event> consumer = ConsumerBuilder.newInstance()
        .authWithComputeEngine()
        .consumerId("my-service-name")
        .defaultStartAtEnd()
        .environment(Config.Environment.ORIGIN_PRD_CENTRAL)
        .buildMonologConsumer()) {
    while (true) {
        for (ConsumerRecord<String, Event> record : consumer.poll(100)) {
            // ...do something with the message
        }
        consumer.commitAsync();
    }
}
34. Rollout Process
1. Upgrade to Kafka 2.x.x, run in production for a few weeks.
2. Enable SASL/OAUTHBEARER using a custom callback handler that
leverages the same library the other Publishing Pipeline services use
for authentication.
3. Slowly enable SASL/O.B. on core Pipeline services with option to roll
back by flipping a feature flag.
4. Guide teams through migration process.
35. So far, so good...
• We’ve cut lots of red tape by helping developers focus on delivering
great products instead of wrangling credentials.
36. $ pubp consume monolog | jq .
{
  "publish": {
    "movie": {
      "publicationProperties": {
        "uri": "nyt://movie/a57bc6f7-5922-593a-bac5-f3ca780d5121",
        "type": "movie",
        "firstPublished": "2019-03-04T19:46:00.841Z",
        "lastModified": "2019-03-04T19:46:00.841Z",
        "source": "CMS-1",
        "sourceApplication": "pub-app-1",
        "eventId": "pubp://event/90590558-0187-430f-b1b7-edb0a51955f1"
      },
      "imdbId": "tt8268916",
      "title": "90 Ml",
      "year": 2019
    }
  }
}
Individual users can tail the log from a CLI utility using their G-Suite credentials—big win for productivity!
37. Easy to grant ACL access via service account email:
kafka-acls \
  --authorizer-properties zookeeper.connect=localhost:2181 \
  --add \
  --allow-principal User:"000000-compute@developer.gserviceaccount.com" \
  --consumer \
  --topic topic-name \
  --group '*'
38. So far, so good...
• We’ve cut lots of red tape by helping developers focus on delivering
great products instead of wrangling credentials.
• Much work remains on nailing down documentation and helper
libraries for additional languages.
• Confluent Cloud support?