gRPC can help minimize the barrier of cross-system communication by providing language-agnostic API definitions, backward- and forward-compatible versioning with protocol buffers, and pluggable load balancing and tracing. You will see how to quickly get up and running with the gRPC framework using Node.js, from creating a protocol definition to creating meaningful health checks and securing the endpoint. Additionally, this session will go over best practices and how to take full advantage of what gRPC has to offer.
3. Why gRPC?
From gRPC’s website:
“gRPC is a modern open source high performance RPC framework that can run in
any environment. It can efficiently connect services in and across data centers with
pluggable support for load balancing, tracing, health checking and authentication. It
is also applicable in last mile of distributed computing to connect devices, mobile
applications and browsers to back end services.”
4. Benefits of gRPC
● Low latency
○ Call parameters don't have to be parsed out of paths and query strings, making for faster services
● Full-duplex streaming
○ Both requests and responses can optionally be streamed - gRPC uses HTTP/2 by default
● Supports multiple data formats
○ Protobuf, JSON, XML, FlatBuffers, and Thrift (with varying levels of support)
● Static types & versioned service interface
○ Alleviates headaches where a field may be an object or an array, or is usually an int but under some circumstances a string
○ Fields can be deprecated without negative downstream consequences (when handled properly)
6. gRPC vs REST
gRPC
● Supports streaming APIs over HTTP/2
● Uses Messages
● Strong Typing
● Not as straightforward to call from the browser as REST
● Supports many types of encoding
(protocol buffers by default)
● Fields can be deprecated without causing
breaking changes
REST
● Request/Response model over HTTP/1.1
● Utilizes resources and verbs
● Serialization
● Easy to call from the browser
● Supports a limited number of encodings
(JSON by default)
● Deprecating fields is a breaking change
that requires versioning
8. What is a protocol buffer?
From Google:
Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing
structured data – think XML, but smaller, faster, and simpler. You define how you want your data to be
structured once, then you can use special generated source code to easily write and read your structured
data to and from a variety of data streams and using a variety of languages.
In plain speak:
Protocol buffers are essentially a binary wire format for serializing messages, where fields are
identified by number rather than by name.
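To see what "identified by number" means on the wire, here is a hand-rolled sketch (not the protobuf library) that encodes a single string field the way protobuf does. It assumes a field number of 15 or less and a payload shorter than 128 bytes, so the tag and length each fit in one byte.

```javascript
// Each field starts with a tag byte: (fieldNumber << 3) | wireType,
// where wire type 2 means "length-delimited" (strings, bytes, nested messages).
function encodeStringField (fieldNumber, value) {
  const payload = Buffer.from(value, 'utf8')
  // Assumes fieldNumber <= 15 and payload.length < 128 (single-byte varints).
  const tag = (fieldNumber << 3) | 2
  return Buffer.concat([Buffer.from([tag, payload.length]), payload])
}

// Field 2 is "name" in our Pet message: tag byte 0x12, length 0x04, then UTF-8 bytes.
console.log(encodeStringField(2, 'Fido')) // <Buffer 12 04 46 69 64 6f>
```

Notice that the field's name never appears in the output - only its number - which is why field numbers must stay stable once a message is in use.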
9. Protocol Buffers
gRPC uses Protocol Buffers as its default data format. To get started with gRPC,
you will need to create a .proto file that defines:
1. What services you will implement
2. The message definition for requests
3. The message definition for responses
10. Service Definition
Let’s create a service for running a pet store
(Lovingly borrowed from Swagger’s example
docs).
We’ll create endpoints for:
● Retrieving all pets
● Retrieving a pet by ID
● Creating a new pet
● Updating an existing pet
● Deleting a pet
We’ll define a pet as an entity with the following:
● id: unique identifier for the pet
● name: the pet’s name
● status: one of available, pending, or sold
11. The .proto file
Before we can write any code for our services,
we’ll need to define them in a .proto file.
The file will need to first define the syntax and
package name.
After that we can define what messages we’ll
send/receive and what services are available.
syntax = "proto3";
package petstore;
12. Creating the Pet Message
We’ve already determined a Pet is an entity with
the following:
● id: unique identifier for the pet
● name: the pet’s name
● status: one of available, pending, or sold
We’ll create a message that has three string
fields to represent this data.
By default, all fields are optional. This will
allow us to use the same message for the
request and response for retrieving a pet.
message Pet {
string id = 1;
string name = 2;
string status = 3;
}
13. Defining Messages
Messages can be composed of a mixture of
scalar and custom types.
Scalar types include:
double, float, int32, int64, uint32, uint64, sint32,
sint64, fixed32, fixed64, sfixed32, sfixed64, bool,
string, bytes
Numeric values default to zero, booleans to
false, and strings to an empty string.
message Pet {
string id = 1;
string name = 2;
string status = 3;
}
14. Assigning Field Numbers
You’ll notice that each field in the message definition has a
unique number.
These field numbers identify your fields to the
binary parser and should not be changed once your
message is in use.
Field numbers 1-15 take one byte to encode, and should be
reserved for frequently occurring message elements.
Field numbers 16-2047 take two bytes to encode.
Deprecated fields should still be defined in your message
definition with their original field number.
message Pet {
string id = 1;
string name = 2;
string status = 3;
}
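The 1-15 versus 16-2047 thresholds fall out of varint encoding of the tag. A small sketch makes the boundary visible:

```javascript
// The tag (fieldNumber << 3 | wireType) is varint-encoded with 7 payload bits
// per byte. 15 << 3 = 120 still fits in one byte; 16 << 3 = 128 does not.
function tagBytes (fieldNumber, wireType = 0) {
  let tag = (fieldNumber << 3) | wireType
  let bytes = 1
  while (tag >= 0x80) {
    tag >>>= 7
    bytes++
  }
  return bytes
}

console.log(tagBytes(15))   // 1
console.log(tagBytes(16))   // 2
console.log(tagBytes(2047)) // 2
console.log(tagBytes(2048)) // 3
```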
15. Enums
We wanted to limit status to be one of available,
pending, or sold. We can do this by defining an
enum.
To future-proof our service, we'll define the
default status as UNKNOWN, in case we
decide to deprecate the field in the future.
enum Status {
UNKNOWN = 0;
AVAILABLE = 1;
PENDING = 2;
SOLD = 3;
}
message Pet {
string id = 1;
string name = 2;
Status status = 3;
}
16. Repeated Fields and Empty Messages
What if we want to return multiple pets? We can
do this by using the repeated keyword.
How about instances where we want to send
empty messages? We will still need to define a
message for those instances, but we will not
give it any fields.
message Pet {
string id = 1;
string name = 2;
Status status = 3;
}
message Pets {
repeated Pet pets = 1;
}
message Empty {}
17. Defining Services
Now that we’ve defined our messages, we’ll
need to define what services we’ll make
available.
We’ll create services for:
● Retrieving all pets
● Retrieving a pet by ID
● Creating a new pet
● Updating an existing pet
● Deleting a pet
service PetStore {
rpc GetAll(Empty) returns (Pets) {}
rpc GetPet(Pet) returns (Pet) {}
rpc CreatePet(Pet) returns (Pet) {}
rpc UpdatePet(Pet) returns (Pet) {}
rpc DeletePet(Pet) returns (Empty) {}
}
18. Putting it all Together
syntax = "proto3";
package petstore;
service PetStore {
rpc GetAll(Empty) returns (Pets) {}
rpc GetPet(Pet) returns (Pet) {}
rpc CreatePet(Pet) returns (Pet) {}
rpc UpdatePet(Pet) returns (Pet) {}
rpc DeletePet(Pet) returns (Empty) {}
}
message Empty {}
enum Status {
UNKNOWN = 0;
AVAILABLE = 1;
PENDING = 2;
SOLD = 3;
}
message Pet {
string id = 1;
string name = 2;
Status status = 3;
}
message Pets {
repeated Pet pets = 1;
}
20. Using Protobufs in Node
In Node, there are two options for using Protocol Buffers: static generation or
dynamic loading.
With static generation, you use protoc (via the grpc-tools package) to compile your
protobuf definition into static JavaScript files that you then use for creating your
server and client. This is how protobufs are handled in most other supported languages.
With Node you also have the option to dynamically load your proto definition,
making it function similar to any other dependency you might have.
For our example, we’ll use dynamic loading.
21. Loading the Protobuf
You’ll need two modules to get started
● grpc
● @grpc/proto-loader
When loading the protocol definition, you have several options:
● keepCase: keep field casing instead of converting to camel case
● longs: whether long numbers should be represented as strings or numbers
● enums: whether enums should use the string or numeric value
● defaults: whether default values should be set
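Put together, the load options might look like the sketch below. The object itself is plain JavaScript; passing it to @grpc/proto-loader's loadSync (shown in the comments) requires that package to be installed.

```javascript
// Options object for @grpc/proto-loader's loadSync/load (a sketch).
// longs and enums take constructor functions: String means "represent as strings".
const loaderOptions = {
  keepCase: true,  // keep snake_case field names from the .proto file
  longs: String,   // represent 64-bit values as strings to avoid precision loss
  enums: String,   // represent enum values by name, e.g. 'AVAILABLE' instead of 1
  defaults: true   // populate unset fields with their default values
}

// Usage (requires the @grpc/proto-loader package):
// const protoLoader = require('@grpc/proto-loader')
// const packageDefinition = protoLoader.loadSync('petstore.proto', loaderOptions)
```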
24. Creating the gRPC Server
To run a gRPC server, you will need to create one in Node.
const grpc = require('grpc')
const server = new grpc.Server()
Additionally, you must tell it what security to use and what port to run on. For our
example, we’ll use the simplest type of security: insecure.
server.bind('0.0.0.0:50051', grpc.ServerCredentials.createInsecure())
server.start()
25. Adding the Service Definition
We now have a running server, but it doesn’t actually do anything. Let’s assume
we have a local package that already contains all of the service logic. We’ll also
need to load the proto definition from before.
const services = require('./services')
const proto = require('../proto')
For our server to register as a valid petstore server for our protobuf definition, it
must implement all of the RPC methods we defined.
26. Full Server File
const grpc = require('grpc')
const proto = require('../proto')
const services = require('./services')
const server = new grpc.Server()
server.addService(proto.petstore.PetStore.service, {
getAll: services.getPets,
getPet: services.getPet,
createPet: services.createPet,
updatePet: services.updatePet,
deletePet: services.deletePet
})
server.bind('0.0.0.0:50051', grpc.ServerCredentials.createInsecure())
server.start()
27. Writing Services
A service will receive two arguments, a call and a callback. The call contains the
request along with some other metadata. The callback has a signature of (error,
message).
const getPet = (call, callback) => {
const pet = db.getPet(call.request)
callback(null, pet)
}
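Because a service handler is just a function of (call, callback), it can be exercised without a running gRPC server, which makes unit testing easy. A sketch, using a hypothetical in-memory db standing in for the services package:

```javascript
// A hypothetical in-memory db, standing in for real service logic.
const db = {
  pets: { '23': { id: '23', name: 'Fido', status: 'AVAILABLE' } },
  getPet (request) { return this.pets[request.id] }
}

// The same handler shape gRPC expects: (call, callback).
const getPet = (call, callback) => {
  const pet = db.getPet(call.request)
  callback(null, pet)
}

// Invoke the handler directly with a fake call object, the way gRPC would:
getPet({ request: { id: '23' } }, (err, pet) => {
  console.log(err, pet.name) // prints: null Fido
})
```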
28. Error Handling
We have a service that looks up a pet by ID, but what if we don’t have a record for that pet? Should we
send back a pet message with defaulted fields? While that is an option, gRPC has standardized status
codes, much like REST. We can build standardized status messages using the @grpc/grpc-js package.
const grpc = require('@grpc/grpc-js')
const { status } = grpc
const getPet = (call, callback) => {
const pet = db.getPet(call.request)
if (!pet) {
const err = new grpc.StatusBuilder().withCode(status.NOT_FOUND).withDetails('Pet Not Found').build()
callback(err)
return
}
callback(null, pet)
}
29. Unimplemented Methods
As stated earlier, for our server to be of type petstore, it must implement all five
RPC methods, but what if we didn't want to implement one of them? gRPC provides a
status code for unimplemented methods.
const grpc = require('@grpc/grpc-js')
const { status } = grpc
const deletePet = (call, callback) => {
const err = new grpc.StatusBuilder().withCode(status.UNIMPLEMENTED).withDetails('Service Not Implemented').build()
callback(err)
}
32. Creating the gRPC Client
Creating a gRPC client is fairly straightforward. You will just need to load a
protocol definition and know where the gRPC server is running.
const { credentials } = require('grpc')
const proto = require('../proto')
const client = new proto.petstore.PetStore('localhost:50051', credentials.createInsecure())
From there, you can make calls to your server and handle any errors.
client.getPet({ id:'23' }, (err, pet) => {
if (err) {
console.log(err.details) // Prints the error message, in this case “Pet Not Found”
}
console.log(pet)
})
34. Health Checks
● gRPC provides a standard protobuf definition for health checks.
● This defines two methods, Check and Watch.
○ Check is a standard unary call; Watch is a server-streaming call.
● Returns one of three statuses: UNKNOWN, SERVING, NOT_SERVING
● The client can decide how to handle these statuses
○ For instance, if the health check returns NOT_SERVING you can fail the request, or queue it
and try again with an exponential backoff
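The standard definition lives in grpc/health/v1/health.proto in the gRPC repository; it looks roughly like this (reproduced from memory, so check the canonical file before relying on it):

```proto
syntax = "proto3";
package grpc.health.v1;

message HealthCheckRequest {
  string service = 1;
}

message HealthCheckResponse {
  enum ServingStatus {
    UNKNOWN = 0;
    SERVING = 1;
    NOT_SERVING = 2;
  }
  ServingStatus status = 1;
}

service Health {
  rpc Check(HealthCheckRequest) returns (HealthCheckResponse);
  rpc Watch(HealthCheckRequest) returns (stream HealthCheckResponse);
}
```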
35. Implementing Multiple Server Types
A single server can implement multiple server types. For instance, you can
implement both petstore and a health check server.
const server = new grpc.Server()
server.addService(proto.petstore.PetStore.service, {
// service functions
})
server.addService(proto.health.Health.service, {
// service functions
})
36. Importing Proto Definitions
You can reuse definitions from one proto file by importing them in another.
import "myproject/other_protos.proto";
To embed arbitrary messages, you can use the well-known "Any" type. Its JSON
representation carries a @type URL identifying the embedded message, along with its value:
{
"@type": "type.googleapis.com/google.protobuf.Duration",
"value": "1.212s"
}
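In a .proto file, using Any looks like the sketch below (the ErrorDetail/ErrorStatus messages are illustrative, following the pattern in the protobuf docs):

```proto
import "google/protobuf/any.proto";

message ErrorDetail {
  string reason = 1;
}

message ErrorStatus {
  string message = 1;
  google.protobuf.Any detail = 2;  // can hold any message type, tagged with its type URL
}
```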
37. Security
gRPC supports three security schemes:
1. Insecure
2. SSL/TLS
3. Token-based authentication with Google (should only be used with Google
services)