©2015 AKAMAI | FASTER FORWARD™
Quick QUIC Technical Update
(Translated from the original Japanese version)
Taisuke Yamada (tayamada@akamai.com)
July 4, 2017
Today's Agenda
1. Overview of QUIC
2. TCP, HTTP, and QUIC – A Look Back at Web Protocols
3. Demo!
4. Wrap-up
What is QUIC, anyway?
Remember: it is not "QUICK" with a "K"!
(I have had to say this too many times to Sales/Marketing people...)
QUIC: Important Basics
1. A new UDP-based protocol that combines TCP+TLS+HTTP/2 features.
2. Eliminates overhead caused mainly by the strict layering between
three separately designed protocols.
3. Implemented at the application level, and already shipped and
enabled in Chrome.
4. Google used to be the only one serving it, but now
Akamai has a beta release out for customers!
QUIC Background – So what are we trying to solve?
1. Problem with HTTP
Time lost to an inefficient request-response design.
2. Problem with TLS
Extra connection overhead due to initial handshakes.
3. Problem with TCP
Extra performance degradation under congestion due to
the in-order delivery restriction.
4. Problem with the Internet
Excessive latency/jitter caused by BufferBloat.
HTTP: Inefficient request-response design
[Diagram: client issues GET requests ①②③ to the server; each response
returns one round trip later]
Every request consumes 1 RTT of time.
Rich pages usually have more objects to fetch,
so the impact grows as the page gets richer!
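The cost above can be put into numbers with a toy model (the 100 ms RTT and 30-object page are illustrative assumptions, not figures from the slides):

```python
# Toy model: each HTTP/1.x request on an already-open connection
# costs one round trip (request out, response back).
def serial_fetch_time(n_objects: int, rtt_ms: float) -> float:
    """Total time to fetch n_objects one after another."""
    return n_objects * rtt_ms

# A page with 30 objects over a 100 ms RTT link:
print(serial_fetch_time(30, 100))  # 3000 ms spent purely on round trips
```

Doubling the object count doubles the time, which is exactly why richer pages are hit harder.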
Traditional Workaround: Multiple connections to a server
www.example.com
With multiple connections, we can eliminate the 0.5-RTT
overhead by sending multiple requests in parallel!
The Birth of "Domain Sharding"
www-001.example.com
www-002.example.com
www-003.example.com
Assign multiple names to the same server, and trick the browser
into making even more connections.
NOTE: Each browser has a different limit on the maximum number of
connections it can keep. Also, having too many connections has
negative effects, such as lower throughput.
Inefficient TCP usage when opening multiple connections
[Diagram: client and server exchange SYN, SYN+ACK, ACK before the
first GET/response]
Each TCP session requires an
initial 3-way handshake.
Doing this over and over again
with the same server just to open
multiple sessions is obviously inefficient.
Inefficient TCP usage when using multiple connections
[Graph: usable TCP bandwidth ramping up toward the available
bandwidth over time]
TCP, though the details depend on the implementation, basically cannot
fully use the available bandwidth in its initial phase.
Going through this phase for every connection is also inefficient.
How real-world TCP behaves depends on actual network conditions
and the congestion algorithm in the TCP stack.
TLS weakness: Initial negotiation overhead
[Diagram: TCP SYN, SYN+ACK, ACK; then TLS ClientHello, ServerHello...,
ClientFinished, ServerFinished; only then GET/response]
TLS does its own negotiation
on top of TCP.
This means an HTTPS session is
penalized with 3 RTTs before it
can even start speaking HTTP.
* TCP Fast Open and TLS False Start have been proposed as
improvements. However, time is needed to get both servers
and clients updated.
"Connection Establishment" and "TLS" on QUIC
[Diagram, 1-RTT case: ClientHello and ServerHello, then GET/response.
0-RTT case: the GET is sent immediately, and the response follows]
QUIC can set up a connection and an
encrypted, TLS-equivalent session
in one shot, with 0- or 1-RTT
overhead (0-RTT, meaning zero
overhead, applies to re-connection).
* Feedback flowed from QUIC into the TLS 1.3 design,
and an effort is now under way to integrate
TLS 1.3 back into QUIC.
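The handshake diagrams above can be reduced to simple round-trip accounting (a toy model; the per-phase RTT counts follow the slides, and "first response" means the first HTTP response byte):

```python
# Round trips before the first HTTP response arrives.
def rtts_to_first_response(tcp_rtt: int, tls_rtt: int, http_rtt: int = 1) -> int:
    return tcp_rtt + tls_rtt + http_rtt

tcp_tls   = rtts_to_first_response(tcp_rtt=1, tls_rtt=2)  # classic HTTPS
quic_1rtt = rtts_to_first_response(tcp_rtt=0, tls_rtt=1)  # first QUIC contact
quic_0rtt = rtts_to_first_response(tcp_rtt=0, tls_rtt=0)  # QUIC re-connection

print(tcp_tls, quic_1rtt, quic_0rtt)  # 4 2 1
```

On a 100 ms RTT mobile link, that is 400 ms versus 100 ms before the first byte of content.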
Pre-QUIC Web Protocols – SPDY and HTTP/2
There were several pre-QUIC efforts to improve
HTTP over TCP: SPDY and its successor, HTTP/2.
Key improvements in HTTP/2
* Requests for multiple objects can be pipelined over a single
TCP connection, and the server can respond asynchronously.
* The server can push extra "will-be-needed" objects in a response,
without an explicit client request.
* Header compression (HPACK) reduces overhead when
many small objects are sent.
NOTE: QUIC is feature-compatible with HTTP/2,
and all of these key characteristics are supported.
HTTP/2: Pipelined Requests and Asynchronous Responses
[Diagram: client pipelines GET #1, #2, #3; server returns responses
#1, #2, #3 asynchronously]
With pipelined requests, 0.5 RTT
is saved on every request.
Also, asynchronous responses allow
the server to send back objects in any
order, which lets the server keep
filling the available bandwidth.
HTTP/2: Efficient use of available bandwidth
[Graph: usable TCP bandwidth over time; a single long-lived connection
stays in the high-bandwidth region]
With continuous pipelined transfer over a single TCP connection,
HTTP/2 can stay in the "juicy" phase of TCP, where usable bandwidth
is close to the available limit.
How real-world TCP behaves depends on actual network conditions
and the congestion algorithm in the TCP stack.
TCP: The Bottleneck of HTTP/2
[Diagram: packets for streams #1, #2, #3 interleaved on one TCP session
between client and server]
HTTP/2 manages virtual "streams" over a single TCP session to
handle the transfer of each object.
Let's see what happens when congestion occurs...
TCP: The Bottleneck of HTTP/2
[Diagram: the second packet, carrying stream #1 data, is lost; all
later packets are marked "negatively impacted"]
If the second packet, which contains data for stream #1, is lost,
ALL following streams/packets are negatively impacted by reduced
bandwidth and re-sends (though TCP SACK helps with the latter).
This is because all streams belong to the same TCP session.
Impact of Congestion: HTTP/2 vs. Domain Sharding
[Diagram: one client-server connection for HTTP/2 vs. three
connections to sharded servers]
Assume TCP bandwidth drops by
half when congestion occurs.
With HTTP/2, congestion impacts
50% of total bandwidth, as
there is only one TCP session.
With domain sharding across 3 servers,
the impact is only about 16% (1/6), as the
other established TCP sessions are
unaffected.
Advantage of a QUIC "session" over TCP
[Diagram: the same packet-loss scenario, but the following stream
packets are unaffected]
Streams in a QUIC session are managed independently, each with its
own loss recovery, so a lost packet has no impact on other streams'
packets.
"No impact" on the following streams!
Last, but not least: the problem with the Internet
BufferBloat
What is BufferBloat, anyway?
[Diagram: packets from client to server pass through many
intermediate nodes (N)]
A packet transferred over the
Internet passes through many
nodes and pieces of network equipment.
BufferBloat – Excessive Buffering in the Network
Each node usually has a buffer (memory) to queue
packets awaiting delivery. So every packet is first queued,
and then sent on toward its destination.
[Diagram: node with a packet queue holding #1, #2, #3 and empty slots]
BufferBloat – Excessive Buffering in the Network
In normal times, packets are queued and dequeued
without much delay, so no issue is observed.
[Diagram: node with a mostly empty packet queue]
BufferBloat – Excessive Buffering in the Network
As the network gets busy, the internal queue approaches a fully
buffered state. This means a packet's average residence time in the
queue gets longer as well.
[Diagram: packet queue filling up; packet #2 must wait until all
prior packets are sent out]
BufferBloat – Excessive Buffering in the Network
It was discovered that many devices have excessively large
buffers compared to their packet-processing performance.
Each resulting delay adds up, causing large latency and jitter
issues at global scale. Real-time media communication is
especially impacted by this issue.
[Diagram: node with an excessively large packet queue; packet #2
must wait until all prior packets are sent out!]
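The delay a bloated queue adds is just queued bytes divided by link rate. A small sketch with illustrative numbers (the 64 KB queue and 1 Mbit/s uplink are assumptions, but typical of the home routers that made BufferBloat famous):

```python
# Queueing delay at a single node: drain time of the bytes ahead of you.
def queueing_delay_ms(queued_bytes: int, link_rate_bps: float) -> float:
    return queued_bytes * 8 / link_rate_bps * 1000

# A modest 64 KB of queued data on a 1 Mbit/s uplink:
print(round(queueing_delay_ms(64 * 1024, 1_000_000)))  # ~524 ms added latency
```

Several such nodes in a path, each with a full queue, quickly produce the second-scale latency and jitter described above.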
BufferBloat: The Resulting Issues We Face
* Not only is there an increase in latency that hurts network
performance, it also comes with large jitter, which hurts the
consistency of network performance.
This matters greatly for media-related applications.
* It also defeats TCP's congestion control mechanism: packets
are never dropped, only delayed ever longer, until the last
devastating moment, when it is difficult to recover gracefully.
QUIC to the Rescue!
BBR (Bottleneck Bandwidth and RTprop, i.e. round-trip propagation time)
- A new congestion control algorithm proposed alongside QUIC.
- Instead of waiting for packet drops or ECN signals to arrive,
BBR detects queue buildup through continuous monitoring.
Paced Transmission
- Sends out packets at a pace that matches the estimated
end-to-end network performance.
- Suppressing burst transfers helps keep the receiving end's
queue from filling up.
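Pacing itself is simple to sketch: space packet send times so the rate matches the estimated bottleneck bandwidth, rather than bursting a full window at once (the packet size and bandwidth estimate below are illustrative):

```python
# Paced transmission sketch: one packet every packet_bytes/est_bw seconds.
def send_times(n_packets: int, packet_bytes: int, est_bw_bps: float):
    interval = packet_bytes * 8 / est_bw_bps  # seconds between packets
    return [i * interval for i in range(n_packets)]

# 1500-byte packets paced to an estimated 12 Mbit/s bottleneck:
times = send_times(5, 1500, 12_000_000)
print([round(t * 1000, 1) for t in times])  # [0.0, 1.0, 2.0, 3.0, 4.0] ms
```

A burst would hand all five packets to the network at t=0 and let the bottleneck queue absorb them; pacing delivers the same data without ever building that queue.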
QUIC – More Advantages
Because it can be deployed much faster than an OS network stack,
QUIC is now viewed as a testbed for "future TCP" features.
Advanced "future TCP" features:
Seamless access across LTE/5G <-> LAN
- The connection is managed with a protocol-defined 64-bit ID.
This means a session can continue even if the IP address changes!
Better quality for wireless service
- With FEC (Forward Error Correction), a small packet drop will
not require a re-send. This would improve quality of service
on unreliable networks.
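The FEC idea can be illustrated with simple XOR parity (a minimal sketch of the general technique; Google's experimental QUIC FEC was XOR-based, but this is not its wire format, and equal-length packets are assumed):

```python
# One XOR-parity packet per group lets the receiver rebuild any single
# lost packet from the survivors, with no re-send needed.
def xor_parity(packets: list) -> bytes:
    parity = bytearray(len(packets[0]))
    for p in packets:
        for i, b in enumerate(p):
            parity[i] ^= b
    return bytes(parity)

group = [b"pkt1", b"pkt2", b"pkt3"]
parity = xor_parity(group)

# Packet at index 1 is lost; XOR of the survivors and parity recovers it,
# because each surviving packet cancels itself out of the parity.
recovered = xor_parity([group[0], group[2], parity])
print(recovered)  # b'pkt2'
```

The trade-off is bandwidth: every group carries one extra packet whether or not a loss occurs.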
DEMO!
As part of our "Media Acceleration" package, Akamai already
provides a beta of this QUIC-based delivery for customers.
Let's see how it performs!
Wrap-Up
We have covered the technical aspects of QUIC, one of the core
components of our "Media Acceleration" offering.
* A comprehensive solution to issues across the traditional
HTTP, TLS, and TCP stacks.
* Advanced features like end-to-end network state detection
for congestion control, and paced transfer.
* Active development and standardization effort as a
"future TCP" testbed.
With these, QUIC may well be the key protocol of the future Internet.
So why not start trying it out?