What It Took to Run a Container as a Service on OpenStack in Production
at OpenStack Days / Cloud Native Days 2018
CyberAgent, Inc
adtech studio
Strategic Infrastructure Agency
@amsy810
@makocchi
[Japan Container Days v18.04 Keynote (Production User Stories)]
CyberAgent provides a service that offers a GKE-like container platform on its private cloud. Container-based development has been growing thanks to its many conveniences, and there is demand for Kubernetes as a Service even in on-premises environments. While explaining how to provide LoadBalancer and Ingress, which are not available just by deploying Kubernetes on servers, and how to integrate with OpenStack, this talk presents a container platform that stands up to production use in the ad-tech domain.
by Masaya Aoyama (@amsy810)
20. Architecture
Control Plane: public-api, destination, proxy-api
Data Plane: a conduit-proxy next to each of App a / App b / App c (Deployment a / b / c, one proxy per Pod)
proxy-api: takes requests from the proxy instances and taps into the appropriate controller (conduit-proxy)
All components communicate over gRPC
25. Simple stats
$ conduit stat deployment -n emojivoto
NAME MESHED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99
emoji 1/1 100.00% 2.0rps 1ms 1ms 1ms
voting 1/1 81.36% 1.0rps 1ms 1ms 1ms
web 1/1 90.83% 2.0rps 3ms 4ms 5ms
$ conduit stat deploy/web --to deploy/voting -n emojivoto
NAME MESHED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99
web 1/1 77.97% 1.0rps 1ms 2ms 2ms
(topology: web calls voting and emoji)
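The SUCCESS, RPS, and latency-percentile columns above are aggregates over a window of requests. A minimal sketch of how such columns could be derived (illustrative only; this is not the actual conduit implementation):

```python
# Sketch: deriving stat-style columns (SUCCESS, RPS, LATENCY_Pxx) from
# raw request records. Hypothetical data model, not conduit's code.
from dataclasses import dataclass

@dataclass
class Request:
    status: int        # HTTP status code
    latency_ms: float  # end-to-end latency in milliseconds

def stats(requests, window_s):
    """Aggregate a window of requests into stat-style columns."""
    ok = sum(1 for r in requests if r.status < 500)
    success = 100.0 * ok / len(requests)
    rps = len(requests) / window_s
    lat = sorted(r.latency_ms for r in requests)
    def pct(p):  # nearest-rank percentile
        return lat[min(len(lat) - 1, int(p / 100 * len(lat)))]
    return {"SUCCESS": round(success, 2), "RPS": rps,
            "P50": pct(50), "P95": pct(95), "P99": pct(99)}

# Example: 10 requests over 5 seconds, one of them a 503
reqs = [Request(200, 1.0)] * 9 + [Request(503, 3.0)]
print(stats(reqs, window_s=5))  # SUCCESS: 90.0, RPS: 2.0
```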
26. Realtime monitoring
$ conduit tap deploy/web --namespace emojivoto --path /leaderboard
req id=0:50399 src=10.240.0.23:61020 dst=10.20.5.9:80 :method=GET :authority=35.xxx.xxx.xxx :path=/leaderboard
rsp id=0:50399 src=10.240.0.23:61020 dst=10.20.5.9:80 :status=200 latency=769µs
end id=0:50399 src=10.240.0.23:61020 dst=10.20.5.9:80 duration=117µs response-length=560B
…(requests can be identified in near real time)
Options:
--max-rps float32 Maximum requests per second to tap. (default 1)
--path string Display requests with paths that start with this prefix
--scheme string Display requests with this scheme
--method string Display requests with this HTTP method
--namespace string Namespace of the specified resource (default "default")
--to string Display requests to this resource
--to-namespace string Sets the namespace used to lookup the "--to" resource
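The filtering flags above amount to predicates applied to the live request stream. A rough Python sketch of that filtering logic (illustrative names and data, not conduit's code):

```python
# Sketch: tap-style filtering of a request stream, mirroring the
# --path / --method flags above, plus a cap on the number of events
# (toy stand-in for rate limiting). Illustrative only.
import itertools

def tap(events, path_prefix=None, method=None, max_events=None):
    """Yield request events matching the given predicates."""
    matched = (e for e in events
               if (path_prefix is None or e["path"].startswith(path_prefix))
               and (method is None or e["method"] == method))
    return itertools.islice(matched, max_events)

events = [
    {"method": "GET",  "path": "/leaderboard", "status": 200},
    {"method": "POST", "path": "/api/vote",    "status": 200},
    {"method": "GET",  "path": "/api/list",    "status": 200},
]
print(list(tap(events, path_prefix="/api")))  # the two /api requests
print(list(tap(events, method="GET")))        # the two GET requests
```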
30. Micro Service Architecture
App a / App b / App c run as Deployment a / b / c, with replicas spread across Node A, Node B and Node C
Applications are called in a chain, one after another
31. Architecture (per Host)
Control Plane: namerd
Data Plane: one l5d per host (Node A / Node B / Node C), in front of the App a / App b / App c replicas (Deployment a / b / c)
l5d (Scala): relays all traffic and forms the Service Mesh
32. Architecture (per Host)
(same layout as slide 31)
namerd: manages the list of dtabs; it can also be left out, with static configuration placed in l5d instead
33. namerd traffic control
Request [GET foo.com]
↓ Identification (Identifier): conversion to a logical name
Service Name [/srv/foo.com]
↓ Binding (dtab, Delegation Table): conversion to a concrete name
Client Name [/#/io.l5d.k8s/default/http/world-v1]
↓ Resolution (namer): conversion to physical endpoints
Replicas [1.1.1.1:80, 1.1.1.2:80]
Here the config is written statically into Linkerd
34. namerd traffic control
(same pipeline as slide 33: Identification → Binding → Resolution)
namerd, backed by etcd or ZooKeeper, centrally manages the dtabs, enabling dynamic traffic control
35. namerd traffic control
Request [GET foo.com]
↓ Identification
Service Name [/srv/foo.com]
↓ Binding via the updated dtab
Client Name [/#/io.l5d.k8s/default/http/world-v1] (99%) or [/#/io.l5d.k8s/default/http/world-v2] (1%)
↓ Resolution
Replicas [1.1.1.1:80, 1.1.1.2:80] (99%) or [2.2.2.1:80, 2.2.2.2:80] (1%)

$ namerctl dtab update …
/host/foo.com=>
  99 * /srv/world-v1 &
  1 * /srv/world-v2
…

Updating the centrally managed dtab (namerd backed by etcd or ZooKeeper) dynamically shifts traffic
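The weighted split above can be sketched as a two-table lookup: a dtab binding service names to weighted client names, and a namer resolving client names to replicas. Names and weights are taken from the slide (compressing the delegation steps into one table); the code is an illustration, not Linkerd's or namerd's implementation:

```python
# Sketch: dtab-style weighted binding followed by resolution to
# physical endpoints (illustrative; not Linkerd/namerd code).
import random

# Binding: a dtab maps a service name to weighted client names
dtab = {"/srv/foo.com": [(99, "/#/io.l5d.k8s/default/http/world-v1"),
                         (1,  "/#/io.l5d.k8s/default/http/world-v2")]}

# Resolution: a namer maps a client name to physical endpoints
namer = {"/#/io.l5d.k8s/default/http/world-v1": ["1.1.1.1:80", "1.1.1.2:80"],
         "/#/io.l5d.k8s/default/http/world-v2": ["2.2.2.1:80", "2.2.2.2:80"]}

def route(service_name, rng):
    """Pick a client name by weight, then resolve it to replicas."""
    weights, names = zip(*dtab[service_name])
    client = rng.choices(names, weights=weights)[0]
    return namer[client]

# Over many requests, ~99% resolve to world-v1's replicas
rng = random.Random(0)
hits = sum(route("/srv/foo.com", rng)[0] == "1.1.1.1:80"
           for _ in range(10000))
print(hits)  # roughly 9900
```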
36. Architecture (per Pod)
Control Plane: namerd
Data Plane: one l5d per Pod, next to each of App a / App b / App c (Deployment a / b / c)
Since l5d is a rather heavy process, the per-Host layout is recommended
38. Support (including development?) will reportedly continue
@ KubeCon + CloudNativeCon 2017 North America Keynote
Roadmap and differences (Linkerd vs Conduit):
Multi platform vs Kubernetes specific
Feature-rich vs lightweight
Maximum config vs minimum config
per-Host proxy vs per-Pod proxy
40. Istio Overview
Developers: Google, IBM, Lyft, etc. (178+ contributors)
Release: 2017-05 (0.1.0)
Version: 0.7.1 (2018-03)
Commits: 4300+ commits, 735 ksteps, 7900+ stars
Adoption: Beta - Alpha, not yet for production
Others: maybe the most famous service mesh
41. Micro Service Architecture
App a / App b / App c as Deployment a / b / c (one App per Pod)
Applications are called in a chain, one after another
42. Istio Architecture
Control Plane: Pilot, Mixer, Istio-Auth
Data Plane: an Envoy sidecar next to each of App a / App b / App c (Deployment a / b / c, one Envoy per Pod)
Envoy (C++): relays all traffic and forms the Service Mesh
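The sidecar idea is that every call in and out of the app passes through a co-located proxy, which relays it and records telemetry. A toy Python model of that pattern (not Envoy, and the handler names are made up):

```python
# Toy model of a sidecar-style data-plane proxy: relay every call to
# the local app and record telemetry on the way through. Not Envoy.
import time

class Sidecar:
    def __init__(self, app_handler):
        self.app = app_handler
        self.metrics = []  # (path, status, latency_s) per request

    def handle(self, path):
        start = time.perf_counter()
        try:
            status, body = self.app(path)  # relay to the local app
        except Exception:
            status, body = 500, ""         # surface failures as 500s
        self.metrics.append((path, status, time.perf_counter() - start))
        return status, body

def app_a(path):
    """Hypothetical application handler."""
    if path == "/hello":
        return 200, "world"
    raise KeyError(path)

proxy = Sidecar(app_a)          # traffic enters via the sidecar
print(proxy.handle("/hello"))   # (200, 'world')
print(proxy.handle("/missing")) # (500, '')
print(len(proxy.metrics))       # 2
```

Because the proxy sits on the request path, stats and tap-style views fall out of `metrics` without touching the app, which is the core appeal of the mesh.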
43. Istio Architecture (layout as in slide 42)
Pilot: dynamically updates the Envoy configuration based on the results of Service Discovery
44. Istio Architecture (layout as in slide 42)
Mixer: collects metrics; checks quotas and policies
45. Istio Architecture (layout as in slide 42)
Istio-Auth: provides ServiceAccount-based authentication and mTLS between the Envoy sidecars
49. Istio Roadmap & latest action
Performance up: v0.5.0 → v0.7.1 (about 2 months): throughput +142% (total: 242%), latency (p50) -59%
Istio mesh expansion: join VMs and bare metal to a Kubernetes Istio mesh (controller must be reachable, without NAT or firewall in between)
Fine-grained access control and auditing: attribute- and role-based access control, etc.
Istio multi-cluster expansion: join one K8s Istio mesh to another (Istio controller installed on one side)
51. ServiceMesh friends: Istio & Conduit & Linkerd
Index:
Compare for catalog spec
Compare for feature
Performance test for multi-tier microservice
Conclusions
52. Compare for catalog spec
                      Istio               Conduit               Linkerd
For production        Beta - Alpha        Alpha                 GA
Contributors          178+                25+                   71+
Commits               4300+               400+                  1200+
Stars                 7900+               1600+                 4500+
Codes                 735 ksteps          107 ksteps            172 ksteps
Released at           2017-05             2017-12               2016-01 (GA: 2017-04)
Latest version        0.7.1               0.4.1                 1.4.0
Base technology       Envoy (C++)         ConduitProxy (Rust)   Linkerd (Scala)
Configuration method  CRD (K8s resource)  Conduit CLI?          namerctl (CLI)
55. Tier Performance Test
App a / App b / App c as Deployment a / b / c (one App per Pod)
CPU: Intel Xeon E5 2.2GHz (Broadwell)
Kubernetes: 1.9.7-gke.0
Node: n1-standard-8 (vCPU: 8, Memory 30 GB) * 3
Deployment: nginx:1.13 * 6 pods * 5 deployments (5-tier)
ServiceMesh: latest version
Compared: native vs Istio vs …
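A multi-tier chain is the interesting case for mesh benchmarks because a request crosses the mesh once per tier, so per-hop proxy overhead adds up along the chain. A toy model of that compounding (the latency numbers are made up for illustration, not results from the test above):

```python
# Toy model: per-hop sidecar overhead compounds across a serial
# N-tier call chain (illustrative numbers, not benchmark results).
def chain_latency(tiers, app_ms, proxy_ms):
    """Total latency of a serial N-tier call chain.

    Each hop costs the app's processing time plus, in a meshed
    cluster, the sidecar proxies on both sides of the hop.
    """
    return tiers * (app_ms + 2 * proxy_ms)

# Hypothetical: 1 ms per app, 0.5 ms per proxy traversal, 5 tiers
native = chain_latency(5, app_ms=1.0, proxy_ms=0.0)
meshed = chain_latency(5, app_ms=1.0, proxy_ms=0.5)
print(native, meshed)  # 5.0 10.0
```

This is why the test uses a 5-tier chain rather than a single hop: a proxy overhead that looks negligible per request becomes visible once multiplied by the depth of the call path.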