33. context 'when service UP' do
      before { Cache.put('key', 'value') }

      it 'saves the value' do
        expect(Cache.get('key')).to eq('value')
      end
    end

    context 'when service DOWN' do
      it 'raises an error' do
        Toxiproxy[:redis].down do
          expect { Cache.put('key', 'value') }.to raise_error(Redis::CannotConnectError)
        end
      end
    end
37. An application with an average Response Time of 60 ms can process
1,000 Requests Per Minute (RPM) per Thread.
How many Threads do we need to handle 100,000 RPM of Throughput?
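The capacity math behind this slide can be checked with a few lines of Ruby (a sketch of the arithmetic, not application code):

```ruby
# One thread serving requests sequentially:
# 60,000 ms per minute / 60 ms per request = 1,000 requests per minute.
response_time_ms = 60.0
rpm_per_thread = 60_000 / response_time_ms # => 1000.0

# Threads needed to sustain the target throughput.
target_rpm = 100_000
threads = (target_rpm / rpm_per_thread).ceil # => 100
```

So in the healthy case, 100 threads cover 100,000 RPM.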
40. Imagine that 1% of the traffic to a Service times out after 30 seconds:
the average Response Time rises to ~360 ms.
How many Threads do we need to handle 100,000 RPM of Throughput?
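The same math with the 1% of slow requests factored in (a sketch using Rational to keep the arithmetic exact):

```ruby
# 99% of requests take 60 ms; 1% hit a 30 s (30,000 ms) timeout.
avg_response_ms = Rational(99, 100) * 60 + Rational(1, 100) * 30_000
avg_response_ms.to_f # => 359.4, i.e. ~360 ms average

rpm_per_thread = 60_000 / avg_response_ms # ~167 RPM per thread
threads = (100_000 / rpm_per_thread).ceil # => 599
```

A single percent of slow calls pushes the thread count from 100 to roughly 600: a 6x increase.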
52. class Cache
      def self.put(key, value)
        service.set(key, value)
      end

      def self.get(key)
        service.get(key)
      end
    end

    Cache.put('key', 'value')
    Cache.get('key')
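One way to make a cache like this fail gracefully, as the later slides advocate, is to rescue connection errors and treat the cache as a miss. The sketch below is illustrative: `FakeDownService` stands in for a Redis client whose connection is down, so it runs without a real server (with redis-rb you would rescue `Redis::BaseError` instead of `IOError`):

```ruby
# Illustrative stand-in for a Redis client that cannot connect.
class FakeDownService
  def set(*)
    raise IOError, 'connection refused'
  end

  def get(*)
    raise IOError, 'connection refused'
  end
end

class ResilientCache
  def self.service
    @service ||= FakeDownService.new
  end

  def self.put(key, value)
    service.set(key, value)
  rescue IOError
    nil # Fail gracefully: a lost cache write is acceptable.
  end

  def self.get(key)
    service.get(key)
  rescue IOError
    nil # Fallback: treat an unavailable cache as a miss.
  end
end

ResilientCache.put('key', 'value') # => nil instead of raising
ResilientCache.get('key')          # => nil (cache miss)
```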
76. Summary
Know your dependencies
Improve your test suite
Fail Fast
Timeouts
Fail Gracefully
Fallbacks
Don't try if you can't succeed
Circuit Breakers and Bulkheads are friends
Monitor Service Calls
Notice problems
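As a companion to the summary, here is a minimal circuit-breaker sketch (illustrative names and thresholds, not the deck's code): after `threshold` consecutive failures it opens and rejects calls immediately without touching the dependency, which is "don't try if you can't succeed" in practice.

```ruby
class CircuitBreaker
  class OpenError < StandardError; end

  def initialize(threshold: 3, cooldown: 30)
    @threshold = threshold # consecutive failures before opening
    @cooldown = cooldown   # seconds to stay open before a trial call
    @failures = 0
    @opened_at = nil
  end

  def call
    raise OpenError, 'circuit open, failing fast' if open?

    result = yield
    @failures = 0 # a success resets the failure count
    result
  rescue OpenError
    raise
  rescue StandardError
    @failures += 1
    @opened_at = Time.now if @failures >= @threshold
    raise
  end

  private

  def open?
    return false unless @opened_at

    if Time.now - @opened_at >= @cooldown
      # Half-open: let one trial call through after the cooldown.
      @opened_at = nil
      @failures = 0
      false
    else
      true
    end
  end
end
```

Pair it with a bulkhead (a bounded pool per dependency) so one slow service cannot exhaust every thread.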