gRPC is an open-source framework for building language-agnostic services and clients. This hands-on session covers techniques for building, testing and monitoring gRPC services using Docker and Go. During this session you will build a simple gRPC service and client, as well as an HTTP reverse proxy that lets your service also receive HTTP traffic.
3. Agenda
● Background About Namely
● Why Services?
● Protobufs and gRPC - Defining Interfaces
● JSON
● Docker and Docker Compose
● Questions
4. About Namely
● Mission: Build A Better Workplace
● HR, Benefits and Payroll
● 1200 customers
● ~$1 billion in payroll/month
● ~100 engineers
● ~40 services, more shipping every week
● Polyglot environment: React, C# (.NET Core), Go, Ruby and Python
● Modern infrastructure: Kubernetes, Istio, AWS, Docker, Spinnaker.
● Big believers in open source. We've contributed to the official gRPC C# repo, and we open-source a lot of the tools we build.
6. A service is software that...
● is the source of truth for its data.
● is independently deployable.
● prevents coupling through use of API contracts.
● adds business value and opens up new opportunities.
● has a clear definition of availability (an SLO).
7. Domain Ownership
Services don't mean containers or AWS or Kubernetes. It means pieces of software that own their domain.
Services own the reads and writes for their data. Access to this data should be done through APIs (not a shared DB).
Don't build a distributed monolith or you'll get all of the weaknesses of services and none of the benefits.
8. Why Namely Uses Services
● In a monolith, teams ended up stepping on each other's feet.
○ Accidentally releasing other teams' features.
○ Big changes touching lots of code accidentally break things.
○ Unclear ownership of large parts of the codebase or data.
● Services make teams think in terms of API contracts.
● Teams can use the language and tools of their choice.
● Give teams ownership and mastery of their domain.
10. Companies And Employees
A Company is a collection of Employee objects and has an Office Location. Every Employee has a name, works for a Company and has a badge number. Every Company has a CEO, who is also an Employee.

Company
+ company_uuid: uuid
+ ceo_employee_uuid: uuid
+ office_location: Address

Employee
+ employee_uuid: uuid
+ company_uuid: uuid
+ name: string
+ badge_number: int32
11. A Problem
These models are almost certainly wrong.
Do all companies have a CEO? Do all companies have one CEO? Do all companies have an office location? Do all companies have only one office location? Are all companies based in America?
Do all employees have badge numbers? Is a single name field the best choice?
Of course not.
13. Anticipating Change
There is no perfect domain model, but our model might be good enough for our current customers. Don't design for a future that might not exist. We want to start with this model and iterate. But in doing so, some things to consider:
● What if you can't force your old API clients to update?
● How do you release API clients and API servers separately?
○ Very important when doing slow rollouts of software.
● How do you avoid breaking updated API clients after a rollback?
● What if your data is stored on disk?
○ In a message queue, a file or a database.
14. Protocol Buffers
Use protocol buffers, aka "protobufs"!
A message format invented by Google. It supports forward and backward compatibility: newer servers can read messages from old clients and vice versa.
A .proto file gets compiled into many languages (C#, Java, Go, Ruby, etc.).
Think fancy JSON with a schema.
15. A Simple Proto File
example.proto

Think of a message as a C struct/POJO/POCO - just data.
On each field in the message is the field number (i.e. = 4); this is used when serializing protos. It's not a (default) value.

syntax = "proto3";

package examples;

message Employee {
  string employee_uuid = 1;
  string company_uuid = 2;
  string name = 3;
  int32 badge_number = 4;
}

message Address {
  string address1 = 1;
  string address2 = 2;
  string zip = 3;
  string state = 4;
}

message Company {
  string company_uuid = 1;
  Address office_location = 2;
  string ceo_employee_uuid = 3;
}
github.com/namely/codecamp-2018-go
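Field numbers are what make the compatibility story work. As a hypothetical sketch of how this schema might evolve without breaking old clients (the badge_id field and this v2 are illustrative, not part of the talk's repo): add new fields under fresh numbers, and reserve the numbers and names of removed fields so they can't be reused.

```proto
syntax = "proto3";

package examples;

// Hypothetical v2 of Employee: old clients simply ignore the unknown
// field 5, and reserving 4 prevents accidental reuse of a retired
// field number by a future, incompatible field.
message Employee {
  string employee_uuid = 1;
  string company_uuid = 2;
  string name = 3;
  reserved 4;                  // was: int32 badge_number = 4;
  reserved "badge_number";
  string badge_id = 5;         // new field, new number
}
```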
16. Compiling Protos
The protobuf compiler turns protos into code for your language.
Below, we turn the example.proto from the previous slide into Go code. It can also do C#, JS, Ruby, Python, Java and many other languages.

$ docker run -v `pwd`:/defs namely/protoc-all -f example.proto -l go

The above command runs the docker container namely/protoc-all to compile the example.proto file into Go code and output the results to `pwd` (the current directory).

$ ls
example.proto gen/
$ ls gen/pb-go/
example.pb.go
17. The Generated Code
example.pb.go looks something like the following. This code is generated automatically by the namely/protoc-all container. Try running namely/protoc-all with -l python instead.

// Code generated by protoc-gen-go. DO NOT EDIT.
// source: example.proto

package examples

... snip ...

type Employee struct {
	EmployeeUuid string `protobuf:"bytes,1,opt,name=employee_uuid,json=employeeUuid" json:"employee_uuid,omitempty"`
	CompanyUuid  string `protobuf:"bytes,2,opt,name=company_uuid,json=companyUuid" json:"company_uuid,omitempty"`
	Name         string `protobuf:"bytes,3,opt,name=name" json:"name,omitempty"`
	BadgeNumber  int32  `protobuf:"varint,4,opt,name=badge_number,json=badgeNumber" json:"badge_number,omitempty"`
}

func (m *Employee) Reset()         { *m = Employee{} }
func (m *Employee) String() string { return proto.CompactTextString(m) }
func (*Employee) ProtoMessage()    {}
func (*Employee) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }

... snip ...
19. We need a way for our services to talk to each other.
Remote Procedure Calls (RPCs) are function calls that can be made over the network.
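Before getting to gRPC specifics, the bare RPC idea can be sketched with nothing but Go's standard library net/rpc package (the Arith service and its Multiply method are illustrative, not part of the talk's code):

```go
package main

import (
	"fmt"
	"log"
	"net"
	"net/rpc"
)

// Args is an illustrative request type for this sketch.
type Args struct{ A, B int }

// Arith exposes a remotely callable procedure. net/rpc requires the
// signature func (t *T) Method(args *Args, reply *int) error.
type Arith struct{}

func (t *Arith) Multiply(args *Args, reply *int) error {
	*reply = args.A * args.B
	return nil
}

func main() {
	// Server side: register the service and accept connections.
	srv := rpc.NewServer()
	if err := srv.Register(new(Arith)); err != nil {
		log.Fatal(err)
	}
	lis, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	go srv.Accept(lis)

	// Client side: dial the server and invoke the function remotely.
	client, err := rpc.Dial("tcp", lis.Addr().String())
	if err != nil {
		log.Fatal(err)
	}
	var product int
	if err := client.Call("Arith.Multiply", &Args{A: 6, B: 7}, &product); err != nil {
		log.Fatal(err)
	}
	fmt.Println(product) // prints 42
}
```

gRPC keeps this call shape but swaps the wire format for protocol buffers and the transport for HTTP/2.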
20. gRPC is an open-source RPC framework for building language-agnostic servers and clients that can talk to each other.
This means your Go/Ruby/C# client can talk to your Python/Java/C++ server (and more).
It uses protocol buffers as its message format.
21. Adding Services to example.proto
You can also define services in your proto file. These get compiled to gRPC servers and clients that can speak protocol buffers to each other. You can write your server and client in any supported language.

service EmployeeService {
  rpc CreateEmployee(CreateEmployeeRequest) returns (Employee) {}
  rpc ListEmployees(ListEmployeesRequest) returns (ListEmployeesResponse) {}
}

message CreateEmployeeRequest {
  Employee employee = 1;
}

message ListEmployeesRequest {
  string company_uuid = 1;
}

message ListEmployeesResponse {
  repeated Employee employees = 1;
}
23. Application Structure
The Company Service in company/. The Employee Service in employee/. The protobufs in protos/. gen_protos.sh to compile the protos.

Check out the code!

$ git clone github.com/namely/codecamp-2018-go
$ ls
CODEOWNERS  LICENSE  README.md  docker-compose.yml
example.proto  gen_protos.sh  protos/  company/  employee/
24. Diving Into Employee Service
Diving into employee/main.go. The main() function listens on a TCP port, creates a new gRPC server and registers our server interface to handle gRPC calls.

func main() {
	flag.Parse()
	lis, err := net.Listen("tcp", fmt.Sprintf("0.0.0.0:%d", *port))
	if err != nil {
		log.Fatalf("error listening: %v", err)
	}
	server := grpc.NewServer()
	pb.RegisterEmployeeServiceServer(server, newServer())
	server.Serve(lis)
}
25. The Employee Server
The EmployeeServer stores all of the employees in memory. For a real server you would use a database. It also creates a client that talks to the company service to check that companies exist.

type EmployeeServer struct {
	companies     map[string]*EmployeeCollection
	conn          *grpc.ClientConn
	companyClient company_pb.CompanyServiceClient
}

func newServer() *EmployeeServer {
	s := &EmployeeServer{}
	s.companies = make(map[string]*EmployeeCollection)
	// Dial error ignored for slide brevity; handle it in real code.
	s.conn, _ = grpc.Dial(*companyAddr, grpc.WithInsecure())
	s.companyClient = company_pb.NewCompanyServiceClient(s.conn)
	return s
}
26. Looking at a Handler
Let's look at the CreateEmployee handler. It does three things:
1. Validates the input.
2. Calls the company service to make sure the company exists.
3. Saves the employee.

This is the signature of the CreateEmployee function on the EmployeeServer. Its input parameters are the call's context and a CreateEmployeeRequest proto - the same one we defined in our proto file earlier! Its return type is a pair of values: the created Employee and an error.

func (s *EmployeeServer) CreateEmployee(
	ctx context.Context,
	req *employee_pb.CreateEmployeeRequest) (*employee_pb.Employee, error) {
	....
}
27. Looking at a Handler
Step 1: validate the input. Here we check that the employee's name is set. If not, we return an InvalidArgument error to the client.

func (s *EmployeeServer) CreateEmployee(
	ctx context.Context,
	req *employee_pb.CreateEmployeeRequest) (*employee_pb.Employee, error) {
	// The employee must have a name.
	if req.Employee.Name == "" {
		return nil, status.Error(
			codes.InvalidArgument, "employee must have name")
	}
	....
}
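The validate-early pattern above can be sketched with only stdlib errors; the names here (validateCreateEmployee, ErrInvalidArgument) are illustrative stand-ins, since the real handler returns a gRPC status from the status/codes packages:

```go
package main

import (
	"errors"
	"fmt"
)

// CreateEmployeeRequest and Employee mirror the proto messages
// for this sketch.
type Employee struct{ Name string }
type CreateEmployeeRequest struct{ Employee *Employee }

// ErrInvalidArgument stands in for gRPC's codes.InvalidArgument.
var ErrInvalidArgument = errors.New("invalid argument")

// validateCreateEmployee rejects bad input before the handler
// touches other services or storage.
func validateCreateEmployee(req *CreateEmployeeRequest) error {
	if req == nil || req.Employee == nil {
		return fmt.Errorf("%w: missing employee", ErrInvalidArgument)
	}
	if req.Employee.Name == "" {
		return fmt.Errorf("%w: employee must have name", ErrInvalidArgument)
	}
	return nil
}

func main() {
	err := validateCreateEmployee(&CreateEmployeeRequest{Employee: &Employee{}})
	fmt.Println(errors.Is(err, ErrInvalidArgument)) // prints true
}
```

Unlike the slide's version, this sketch also nil-checks req.Employee, which guards the handler against an empty request.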
28. Looking at a Handler
Step 2: call the company service. Next we call CompanyService.GetCompany with a GetCompanyRequest to check that the employee's company exists.

func (s *EmployeeServer) CreateEmployee(
	ctx context.Context,
	req *employee_pb.CreateEmployeeRequest) (*employee_pb.Employee, error) {
	....
	_, err := s.companyClient.GetCompany(
		ctx, &company_pb.GetCompanyRequest{
			CompanyUuid: req.Employee.CompanyUuid,
		})
	if err != nil {
		return nil, status.Error(
			codes.InvalidArgument, "company does not exist")
	}
	....
}
29. Looking at a
Handler
Finally, we save the employee and return the saved employee to the caller. In our example, we just save it in memory, but in real life you'd want to use some data storage for this (e.g. a database).

func (s *EmployeeServer) CreateEmployee(
    ctx context.Context,
    req *employee_pb.CreateEmployeeRequest)
    (*employee_pb.Employee, error) {
    ....
    // If we're here, we can save the employee.
    return s.SaveEmployee(req.Employee), nil
}
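Putting the three steps together, here is a minimal, self-contained sketch of the in-memory save step the slides describe. The Employee struct, the map-based store, and the UUID formatting are illustrative assumptions, not the repo's actual generated code:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"sync"
)

// Employee is a stand-in for the generated employee_pb.Employee message.
type Employee struct {
	EmployeeUuid string
	CompanyUuid  string
	Name         string
}

// EmployeeServer holds an in-memory store; a real service would use a database.
type EmployeeServer struct {
	mu        sync.Mutex
	employees map[string]*Employee
}

// SaveEmployee assigns a fresh UUID and stores the employee in memory.
func (s *EmployeeServer) SaveEmployee(e *Employee) *Employee {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.employees == nil {
		s.employees = make(map[string]*Employee)
	}
	// Generate a random UUID-shaped identifier from 16 random bytes.
	b := make([]byte, 16)
	rand.Read(b)
	e.EmployeeUuid = fmt.Sprintf("%x-%x-%x-%x-%x",
		b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
	s.employees[e.EmployeeUuid] = e
	return e
}

func main() {
	s := &EmployeeServer{}
	saved := s.SaveEmployee(&Employee{Name: "Martin", CompanyUuid: "3ac4f180"})
	fmt.Println(saved.EmployeeUuid != "", saved.Name)
}
```

Because the store is just a map guarded by a mutex, restarting the container loses all data, which is fine for a workshop but is exactly why the slide recommends a database.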
31. Docker lets you build your applications into containers (which are sort of like lightweight virtual machines). This makes it easy to distribute your software and run it anywhere. You make containers by writing a Dockerfile.
32. Dockerfiles
Package your application in a container that can be run on various cloud infrastructure. This makes it easy to distribute applications.
Here's the Dockerfile for employee. Try building it with
$ docker build -t employee .
The above command builds the Dockerfile in the current directory and tags the resulting image "employee".
FROM golang:alpine AS build
RUN apk add --no-cache git
WORKDIR /go/src/github.com/namely/codecamp-2018-go/employee
COPY . .
RUN go get -d -v ./...
RUN go install -v ./...
FROM alpine
COPY --from=build /go/bin/employee /usr/local/bin/
CMD ["employee"]
34. Docker-Compose lets you run and configure multiple Docker containers. It makes starting and stopping containers easy. It creates DNS names for your containers so they can talk to each other.
35. docker-compose.yml
Defines two services: company and employee. The build field tells docker-compose how to find your Dockerfile to build your services.
version: "3.6"
services:
  company:
    build: ./company
    command: company -port 50051
    ports:
      - 50051:50051
  employee:
    build: ./employee
    command: >
      employee -port=50051
      -company_addr=company:50051
    ports:
      - 50052:50051
    depends_on:
      - company
36. Bringing Everything Up
Build your services with:
$ docker-compose build
And start them up (in the background) with
$ docker-compose up -d
38. Using the gRPC CLI
Namely provides a Docker container that contains the official gRPC CLI for querying gRPC services. Get it with
$ docker pull namely/grpc-cli
Create some aliases to make calling it easier. docker.for.mac.localhost is how the namely/grpc-cli container reaches your local machine where the service is running (use docker.for.win.localhost on Windows!).
$ alias company_call='docker run -v `pwd`/protos/company:/defs --rm -it namely/grpc-cli call docker.for.mac.localhost:50051'
$ alias employee_call='docker run -v `pwd`/protos/employee:/defs --rm -it namely/grpc-cli call docker.for.mac.localhost:50052'
39. Creating a Company
Let's use the grpc_cli to call CompanyService.CreateCompany. We say docker.for.mac.localhost to let the grpc-cli Docker container find localhost on your local machine (where we exposed the port in docker-compose).
$ company_call CompanyService.CreateCompany "" --protofiles=company.proto
company_uuid: "3ac4f180-9410-467f-92b7-06763db0a8f1"
40. Creating an Employee
We'll take the company_uuid from the previous call.
$ employee_call EmployeeService.CreateEmployee "employee: {name:'Martin', company_uuid: '3ac4f180-9410-467f-92b7-06763db0a8f1'}" --protofiles=employee.proto
employee_uuid: "10b286b2-247a-4864-afe5-f56163681af6"
company_uuid: "3ac4f180-9410-467f-92b7-06763db0a8f1"
name: "Martin"
46. Just Kidding, No New Code!
Just run Namely's Docker container to generate a new server:
$ docker run -v `pwd`:/defs namely/gen-grpc-gateway -f protos/company/company.proto -s CompaniesService
This generates a complete server in gen/grpc-gateway. Now build it. (The example repo has this in the docker-compose file as well.)
$ docker build -t companies-gw -f gen/grpc-gateway/Dockerfile gen/grpc-gateway/
47. Using cURL to Try Our HTTP API
Let's wire these together and use cURL to try out our new API. gRPC-Gateway makes it easy to share your services with a front-end application.
Bring up our gateway:
$ docker-compose up -d companies-gw companies
Create a company:
$ curl -X POST -d '{"office_location":{"address1":"foo"}}' localhost:8082/companies
{"company_uuid":"d13ecefd-6b63-4919-9b33-e0006ee676ec","office_location":{"address1":"foo"}}
Get that company:
$ curl localhost:8082/companies/d13ecefd-6b63-4919-9b33-e0006ee676ec
{"company_uuid":"d13ecefd-6b63-4919-9b33-e0006ee676ec","office_location":{"address1":"foo"}}
EASY!
48. Organizing Protos
Namely uses a monorepo of all of our protobufs. Services add this as a git submodule. This lets everyone stay up to date. It also serves as a central point for design and API discussions.
49. What Did We Learn?
● How to build services with gRPC, Docker and Go.
● Thinking about services and how to get value from them.
● The importance of backward compatibility and how protobufs help.
● How to compile protobufs using namely/docker-protoc.
● How to use namely/grpc-cli to call your services.
● Using namely/gen-grpc-gateway to create HTTP services for your APIs.
● Using Docker to build your services into containers.
● Using Docker-Compose to bring up multiple containers.
52. GRPC Interceptors
Interceptors let you catch calls before they get to your handlers, and responses before they're returned to the client.
[Diagram: the Client's RPC passes through the Interceptor to the RPC Handler, then back through the Interceptor to the Client.]
53. GRPC Interceptors
An interceptor is a function with the signature:

func(ctx context.Context,       // Info about the call (i.e. deadline)
    req interface{},            // Request (i.e. CreateCompanyRequest)
    info *grpc.UnaryServerInfo, // Server info (i.e. RPC method name)
    handler grpc.UnaryHandler,  // Your handler for the RPC.
) (resp interface{}, err error) // The response to send, or an error
54. A Typical Interceptor
Interceptors let you do some cool stuff:
1. Transform the request before your handler gets it.
2. Transform the response before the client sees it.
3. Add logging and other tooling around the request without having to copy-paste code in all of your handlers.

func MetricsInterceptor(
    ctx context.Context, req interface{},
    info *grpc.UnaryServerInfo,
    handler grpc.UnaryHandler,
) (resp interface{}, err error) {
    // Get the RPC name (i.e. "CreateCompany").
    name := info.FullMethod
    // Start a timer to see how long things take.
    start := time.Now()
    // Actually call the handler - your function.
    out, err := handler(ctx, req)
    // Check for errors.
    stat, _ := status.FromError(err)
    // Log to our metrics system (maybe statsd).
    LogMethodTime(name, start, time.Now(), stat)
    // Return to the client. We could also change
    // the response, perhaps by stripping out PII or
    // doing error normalization/sanitization.
    return out, err
}
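The wrap-the-handler idea above can be reduced to plain functions, with no grpc dependency. The names here (Handler, WithTiming) are illustrative, not part of grpc-go:

```go
package main

import (
	"fmt"
	"time"
)

// Handler mirrors the shape of grpc.UnaryHandler:
// take a request, return a response and an error.
type Handler func(req interface{}) (interface{}, error)

// WithTiming is an interceptor-style wrapper: it runs code
// before and after the wrapped handler, without touching it.
func WithTiming(name string, h Handler) Handler {
	return func(req interface{}) (interface{}, error) {
		start := time.Now()
		resp, err := h(req)
		fmt.Printf("%s took %v\n", name, time.Since(start))
		return resp, err
	}
}

func main() {
	create := func(req interface{}) (interface{}, error) {
		return "employee-created", nil
	}
	wrapped := WithTiming("CreateEmployee", create)
	resp, err := wrapped("request")
	fmt.Println(resp, err)
}
```

With the real library, you register an interceptor once when constructing the server, e.g. grpc.NewServer(grpc.UnaryInterceptor(MetricsInterceptor)), and it then wraps every unary RPC on that server.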
58. Mock Services
As you grow, you won't want to bring up all of your services. With Go and Mockgen, you can make your tests act like a real service.
59. Combining Unit and Integration Tests
Your tests can be a hybrid of unit-testing techniques (mocks) and integration techniques. Mock out some of the dependent services. This is very powerful when testing gRPC servers, since we can have tighter control over some dependencies. Instead of bringing up everything, just bring up the dependencies in your service's domain. For the employee service, we bring up the database, but not the company service.
60. Hybrid Integration Tests
Bring up actual implementations of the main services. Use mocks for anything out of the main flow that is used for checks.
$ docker-compose run --use-aliases --service-ports employee-tests
[Diagram: the Employee Tests call Employees.CreateEmployee on the Employee Service, which talks to the Employee DB; the service's CompanyService.GetCompany call goes back to your test (instead of the real Company service) so that you can control behavior.]
Docker Compose:

services:
  # ... snip ...
  employee-tests:
    ports:
      - 50052
    environment:
      - COMPANIES_PORT=50052
  employee:
    environment:
      - COMPANIES_HOST=employee-tests
      - COMPANIES_PORT=50052