Writing Server in Python

An overview of strategies for building servers in Python: a single-threaded model, one thread per connection, non-blocking I/O, and async/await.

  1. Echo client and server with raw sockets; the server handles one connection at a time. (A non-blocking variant using the selectors module is sketched after the slides.)

     # client.py
     import socket

     s = socket.create_connection(('localhost', 8000))
     while True:
         data = input('>> ')
         s.send(data.encode())
         data = s.recv(1024)
         print(data)

     # server.py
     import socket

     s = socket.socket()
     s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
     s.bind(('localhost', 8000))
     s.listen(100)
     while True:
         client, __ = s.accept()
         data = client.recv(1024)
         while data:
             client.send(data)
             data = client.recv(1024)
         client.close()
  2. A minimal HTTP server built on BaseHTTPServer, still serving one request at a time. (A Python 3 version is sketched after the slides.)

     # simple_http.py
     from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer

     class Handler(BaseHTTPRequestHandler):
         def do_GET(self):
             self.send_response(200)
             self.end_headers()
             self.wfile.write("Hello World".encode('utf8'))

     HTTPServer(('localhost', 8000), Handler).serve_forever()
  3. Load-test results for the single-threaded HTTP server:

     Thread Stats   Avg       Stdev     Max    +/- Stdev
       Latency      9.94s    21.79s     1.45m    83.49%
       Req/Sec      0.03      1.92    111.00     99.97%
     703 requests in 1.67m, 70.71KB read
     Socket errors: connect 0, read 32, write 0, timeout 4604
     Requests/sec:      7.03
     Transfer/sec:    723.95B
  4. The same HTTP server with one thread per connection via ThreadingMixIn. (A Python 3 version is sketched after the slides.)

     # threaded_http.py
     from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
     from SocketServer import ThreadingMixIn

     class ThreadServer(ThreadingMixIn, HTTPServer):
         pass

     class Handler(BaseHTTPRequestHandler):
         def do_GET(self):
             self.send_response(200)
             self.end_headers()
             self.wfile.write("Hello World".encode('utf8'))

     ThreadServer(('localhost', 8000), Handler).serve_forever()
  5. Single-threaded results (top) against threaded results (bottom): roughly 7 versus 100 requests per second, with far fewer timeouts.

     Thread Stats   Avg       Stdev     Max    +/- Stdev
       Latency      9.94s    21.79s     1.45m    83.49%
       Req/Sec      0.03      1.92    111.00     99.97%
     703 requests in 1.67m, 70.71KB read
     Socket errors: connect 0, read 32, write 0, timeout 4604
     Requests/sec:      7.03
     Transfer/sec:    723.95B

     Thread Stats   Avg       Stdev     Max    +/- Stdev
       Latency    215.55ms  740.19ms    7.41s    92.21%
       Req/Sec      1.01      9.51    666.00     98.71%
     10052 requests in 1.67m, 0.99MB read
     Socket errors: connect 0, read 0, write 0, timeout 344
     Requests/sec:    100.50
     Transfer/sec:     10.11KB
  6. An asyncio echo server written with generator-based coroutines (@asyncio.coroutine and yield from):

     import asyncio

     @asyncio.coroutine
     def handle_echo(reader, writer):
         data = yield from reader.read(100)
         while data:
             writer.write(data)              # write() expects bytes, so echo the data as-is
             yield from writer.drain()
             data = yield from reader.read(100)
         writer.close()

     try:
         loop = asyncio.get_event_loop()
         coro = asyncio.start_server(handle_echo, '127.0.0.1', 8000, loop=loop)
         server = loop.run_until_complete(coro)
         loop.run_forever()
     except:                                 # e.g. KeyboardInterrupt
         server.close()
         loop.run_until_complete(server.wait_closed())
         loop.close()
  7. The same server with the async/await syntax introduced in Python 3.5. (A version using the current asyncio.run() API is sketched after the slides.)

     import asyncio

     async def handle_echo(reader, writer):
         data = await reader.read(100)
         while data:
             writer.write(data)
             await writer.drain()
             data = await reader.read(100)
         writer.close()

     try:
         loop = asyncio.get_event_loop()
         coro = asyncio.start_server(handle_echo, '127.0.0.1', 8000, loop=loop)
         server = loop.run_until_complete(coro)
         loop.run_forever()
     except:
         server.close()
         loop.run_until_complete(server.wait_closed())
         loop.close()
  8. Observations on the asyncio approach (a starvation sketch follows the slides):
     1. Compatibility between libraries
     2. Useful for I/O-bound applications or ones with many open connections but little processing
     3. Can be harder to program
     4. Starvation
  9. gevent. (An echo-server sketch follows the slides.)
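The description mentions non-blocking I/O as one of the strategies, but the slides jump from threads straight to asyncio. For reference, the echo server from slide 1 can also be hand-rolled on top of the standard selectors module. This is only a sketch under that assumption, not code from the slides, and it ignores the case where the outgoing buffer fills up.

    # nonblocking_echo.py - sketch of the slide 1 echo server with non-blocking
    # sockets and the stdlib selectors module (not from the slides)
    import selectors
    import socket

    sel = selectors.DefaultSelector()

    def accept(server):
        client, _ = server.accept()
        client.setblocking(False)
        sel.register(client, selectors.EVENT_READ, echo)

    def echo(client):
        data = client.recv(1024)
        if data:
            client.send(data)          # echo back; assumes the send buffer is not full
        else:
            sel.unregister(client)
            client.close()

    server = socket.socket()
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(('localhost', 8000))
    server.listen(100)
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)

    while True:
        for key, _ in sel.select():
            key.data(key.fileobj)      # call the registered handler (accept or echo)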
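Slides 2 and 4 use the Python 2 module names BaseHTTPServer and SocketServer. On Python 3 those modules were renamed, so the single-threaded server from slide 2 would look like this; only the import changes.

    # simple_http_py3.py - the slide 2 server with Python 3 module names
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.end_headers()
            self.wfile.write("Hello World".encode('utf8'))

    HTTPServer(('localhost', 8000), Handler).serve_forever()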
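Likewise, the threaded server from slide 4 maps to Python 3 as below; since Python 3.7 the standard library also ships http.server.ThreadingHTTPServer, which is this same mix-in combination ready-made.

    # threaded_http_py3.py - the slide 4 server with Python 3 module names
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from socketserver import ThreadingMixIn

    class ThreadServer(ThreadingMixIn, HTTPServer):
        pass

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.end_headers()
            self.wfile.write("Hello World".encode('utf8'))

    ThreadServer(('localhost', 8000), Handler).serve_forever()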
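Slides 6 and 7 target asyncio as it existed around Python 3.4/3.5; @asyncio.coroutine has since been removed and explicit event-loop management is discouraged. On current Python the same echo server is usually written with asyncio.run(), roughly as in this sketch (not from the slides).

    # asyncio_echo_modern.py - slide 7 rewritten for current asyncio (Python 3.7+)
    import asyncio

    async def handle_echo(reader, writer):
        data = await reader.read(100)
        while data:
            writer.write(data)
            await writer.drain()
            data = await reader.read(100)
        writer.close()
        await writer.wait_closed()

    async def main():
        server = await asyncio.start_server(handle_echo, '127.0.0.1', 8000)
        async with server:
            await server.serve_forever()

    asyncio.run(main())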
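The "starvation" point on slide 8 refers to the fact that a coroutine which computes without awaiting blocks the whole event loop, so every other connection waits. A small sketch of the problem; the busy/polite names are made up for illustration.

    # starvation.py - one CPU-bound coroutine starves the rest of the event loop
    import asyncio
    import time

    async def polite():
        print('polite starts')
        await asyncio.sleep(0.1)        # should resume after 0.1 s...
        print('polite done')            # ...but only runs once busy() gives the loop back

    async def busy():
        # CPU-bound work that never awaits, so the loop cannot switch tasks.
        t = time.monotonic()
        while time.monotonic() - t < 3:
            pass
        print('busy done')

    async def main():
        await asyncio.gather(polite(), busy())

    asyncio.run(main())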
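Slide 9 only names gevent, which reaches a similar concurrency model through greenlets and cooperative sockets instead of async/await. A minimal sketch of the slide 1 echo server on top of gevent, assuming gevent is installed; the handler signature is the one gevent.server.StreamServer passes to its callback.

    # gevent_echo.py - the echo server from slide 1 written with gevent
    from gevent.server import StreamServer

    def handle(client, address):
        # Each connection runs in its own greenlet; blocking calls yield to other greenlets.
        data = client.recv(1024)
        while data:
            client.sendall(data)
            data = client.recv(1024)
        client.close()

    StreamServer(('localhost', 8000), handle).serve_forever()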
