GraphQL is a popular alternative to REST for front-end applications as it offers flexibility and developer-friendly tooling. In this talk, we will look into the differences between REST and GraphQL, how GraphQL API Management presents a new set of challenges, and finally, how we can address those challenges by leveraging Kong extensibility.
This document summarizes a presentation given by Professor Pekka Abrahamsson on how ChatGPT and AI-assisted coding is profoundly changing software engineering. The presentation covers several key points:
- ChatGPT and AI tools like Copilot are beginning to be adopted in software engineering to provide code snippets, answers to technical questions, and assist with debugging, but issues around code ownership, reliability, and security need to be addressed.
- Early studies show potential benefits of ChatGPT for tasks like software testing education, code quality improvement, and requirements elicitation, but more research is still needed.
- Prompt engineering techniques can help maximize the usefulness of ChatGPT for software engineering tasks. Overall, AI
Callbacks, Promises, and Coroutines (oh my!): Asynchronous Programming Patter..., by Domenic Denicola
This talk takes a deep dive into asynchronous programming patterns and practices, with an emphasis on the promise pattern.
We go through the basics of the event loop, highlighting the drawbacks of asynchronous programming in a naive callback style. Fortunately, we can use the magic of promises to escape from callback hell with a powerful and unified interface for async APIs. Finally, we take a quick look at the possibilities for using coroutines both in current and future (ECMAScript Harmony) JavaScript.
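The move from nested callbacks to a flat promise chain described above can be sketched as follows. `readFileCb` here is a hypothetical stand-in for any Node-style callback API, not a real library function:

```javascript
// Hypothetical Node-style callback API: simulates an async read.
function readFileCb(name, cb) {
  setTimeout(() => cb(null, `contents of ${name}`), 0);
}

// Callback style: each step nests inside the previous one ("callback hell").
readFileCb("a.txt", (err, a) => {
  if (err) throw err;
  readFileCb("b.txt", (err, b) => {
    if (err) throw err;
    console.log(a, b);
  });
});

// Promise style: wrap the callback API once...
function readFileP(name) {
  return new Promise((resolve, reject) => {
    readFileCb(name, (err, data) => (err ? reject(err) : resolve(data)));
  });
}

// ...and the same steps flatten into a single chain with unified error handling.
readFileP("a.txt")
  .then((a) => readFileP("b.txt").then((b) => [a, b]))
  .then(([a, b]) => console.log(a, b))
  .catch((err) => console.error(err));
```

The promise version also composes: any number of further steps attach with `.then` instead of another level of nesting.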
This document discusses AI and ChatGPT. It begins with an introduction to David Cieslak and his company RKL eSolutions, which provides ERP sales and consulting. It then provides definitions for key AI concepts like artificial intelligence, generative AI, large language models, and ChatGPT. The document discusses OpenAI's ChatGPT tool and how it works. It covers prompts, commands, and potential uses and impacts of generative AI technologies. Finally, it discusses concerns regarding generative AI and the Future of Life Institute's call for more oversight of advanced AI.
In this session, you'll get all the answers about how ChatGPT and other GPT-X models can be applied to your current or future project. First, we'll put in order all the terms – OpenAI, GPT-3, ChatGPT, Codex, Dall-E, etc. – and explain why Microsoft and Azure are often mentioned in this context. Then, we'll go through the main capabilities of Azure OpenAI and the respective use cases that might inspire you to either optimize your product or build a completely new one.
An Introduction to Generative AI - May 18, 2023, by CoriFaklaris1
For this plenary talk at the Charlotte AI Institute for Smarter Learning, Dr. Cori Faklaris introduces her fellow college educators to the exciting world of generative AI tools. She gives a high-level overview of the generative AI landscape and how these tools use machine learning algorithms to generate creative content such as music, art, and text. She then shares some examples of generative AI tools and demonstrates how she has used some of these tools to enhance teaching and learning in the classroom and to boost her productivity in other areas of academic life.
Responsible AI in Industry: Practical Challenges and Lessons Learned, by Krishnaram Kenthapadi
How do we develop machine learning models and systems taking fairness, accuracy, explainability, and transparency into account? How do we protect the privacy of users when building large-scale AI based systems? Model fairness and explainability and protection of user privacy are considered prerequisites for building trust and adoption of AI systems in high stakes domains such as hiring, lending, and healthcare. We will first motivate the need for adopting a “fairness, explainability, and privacy by design” approach when developing AI/ML models and systems for different consumer and enterprise applications from the societal, regulatory, customer, end-user, and model developer perspectives. We will then focus on the application of responsible AI techniques in practice through industry case studies. We will discuss the sociotechnical dimensions and practical challenges, and conclude with the key takeaways and open challenges.
ChatGPT (Chat Generative Pre-trained Transformer) is OpenAI's application that performs human-like interactions. GitHub Copilot uses the OpenAI Codex to suggest code and entire functions in real time, right from your editor. The deck contains more details about ChatGPT, AI, AGI, Copilot, the OpenAI API, and use-case scenarios.
The document summarizes Apache Hadoop, an open-source software framework for distributed storage and processing of large datasets across clusters of computers. It describes the key components of Hadoop including the Hadoop Distributed File System (HDFS) which stores data reliably across commodity hardware, and the MapReduce programming model which allows distributed processing of large datasets in parallel. The document provides an overview of HDFS architecture, data flow, fault tolerance, and other aspects to enable reliable storage and access of very large files across clusters.
This document provides an overview of big data and Hadoop. It discusses why Hadoop is useful for extremely large datasets that are difficult to manage in relational databases. It then summarizes what Hadoop is, including its core components like HDFS, MapReduce, HBase, Pig, Hive, Chukwa, and ZooKeeper. The document also outlines Hadoop's design principles and provides examples of how some of its components like MapReduce and Hive work.
This document provides information about a bootcamp to build applications using Large Language Models (LLMs). The bootcamp consists of 11 modules covering topics such as introduction to generative AI, text analytics techniques, neural network models for natural language processing, transformer models, embedding retrieval, semantic search, prompt engineering, fine-tuning LLMs, orchestration frameworks, the LangChain application platform, and a final project to build a custom LLM application. The bootcamp will be held in various locations and dates between September 2023 and January 2024.
Internet of Things - protocols review (MeetUp Wireless & Networks, Poznań 21...., by Marcin Bielak
- The document provides an overview of various Internet of Things (IoT) communication protocols including MQTT, HTTP/REST, and DDS.
- It discusses the key aspects of MQTT including its publish-subscribe model, use of a message broker, lightweight design, and quality of service levels. HTTP/REST is described as using a client-server model with status codes and uniform interfaces.
- The document also compares MQTT and HTTP/REST, noting MQTT is simpler, message-centric, and ideal for low-power IoT devices, while HTTP/REST is more complex, document-centric, and the standard web protocol.
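The publish-subscribe model at the heart of MQTT can be illustrated with a minimal in-memory broker. This is a sketch of the pattern only, not the MQTT protocol itself (real MQTT adds quality-of-service levels, retained messages, wildcard topics, and a wire format):

```javascript
// Minimal in-memory sketch of publish-subscribe: a broker routes messages
// by topic, so publishers and subscribers never reference each other directly.
class Broker {
  constructor() {
    this.subs = new Map(); // topic -> array of handler functions
  }
  subscribe(topic, handler) {
    if (!this.subs.has(topic)) this.subs.set(topic, []);
    this.subs.get(topic).push(handler);
  }
  publish(topic, payload) {
    // Deliver to every subscriber of this topic; drop if there are none.
    for (const handler of this.subs.get(topic) ?? []) handler(payload);
  }
}

const broker = new Broker();
const seen = [];
broker.subscribe("sensors/temperature", (msg) => seen.push(msg));
broker.publish("sensors/temperature", "21.5"); // delivered: seen = ["21.5"]
broker.publish("sensors/humidity", "60");      // no subscriber: dropped
```

The decoupling shown here is why the model suits low-power IoT devices: a sensor publishes and disconnects without knowing who, if anyone, is listening.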
(Big) Data Serialization with Avro and Protobuf, by Guido Schmutz
The document describes the data serialization formats Avro and Protobuf. It provides an overview of their schema definition approaches, data types, code generation capabilities, and usage from Java. Key differences noted are that Protobuf uses field numbers while Avro relies on schemas, and that variable-length encoding is used. The document also shows examples of defining schemas in IDL format and generating/serializing data from Java code.
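The variable-length integer ("varint") encoding mentioned above can be sketched in a few lines: each output byte carries 7 payload bits, with the high bit set on every byte except the last. Small numbers take one byte; larger ones grow as needed (this is the unsigned varint scheme; signed values additionally use zigzag encoding, which is omitted here):

```javascript
// Encode a non-negative integer as a varint (little-endian 7-bit groups).
function encodeVarint(n) {
  const out = [];
  while (n > 0x7f) {
    out.push((n & 0x7f) | 0x80); // low 7 bits, continuation bit set
    n >>>= 7;
  }
  out.push(n); // final byte, continuation bit clear
  return out;
}

encodeVarint(1);   // → [0x01]  (one byte suffices)
encodeVarint(300); // → [0xAC, 0x02]  (300 = 0b10_0101100, split into 7-bit groups)
```

Decoding reverses the process: accumulate 7 bits per byte until a byte with the high bit clear is seen.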
This document provides a 50-hour roadmap for building large language model (LLM) applications. It introduces key concepts like text-based and image-based generative AI models, encoder-decoder models, attention mechanisms, and transformers. It then covers topics like intro to image generation, generative AI applications, embeddings, attention mechanisms, transformers, vector databases, semantic search, prompt engineering, fine-tuning foundation models, orchestration frameworks, autonomous agents, bias and fairness, and recommended LLM application projects. The document recommends several hands-on exercises and lists upcoming bootcamp dates and locations for learning to build LLM applications.
Unlocking the Power of ChatGPT and AI in Testing - NextSteps, presented by Applitools
The document discusses AI tools for software testing such as ChatGPT, GitHub Copilot, and Applitools Visual AI. It provides an overview of each tool and how they can help with testing tasks like test automation, debugging, and handling dynamic content. The document also covers potential challenges with AI, such as data privacy issues and tools having only superficial knowledge. It emphasizes that AI should be used as an assistant to humans rather than a replacement, and that finding the right balance and application of tools is important.
Airflow at Lyft, for the Airflow Summit 2020 conference, by Tao Feng
1) Lyft uses Airflow for ETL workflows to move data from mobile apps and events to data warehouses.
2) Lyft has customized Airflow with features like UI auditing, DAG dependency graphs, and integrating Amundsen for data lineage.
3) Current focuses at Lyft include an ETL expiration system, upgrading DAGs to Python 3, and leveraging new Airflow features in a multi-tenant cluster.
A chatterbot (also known as a talkbot, chatbot, bot, chatterbox, or artificial conversational entity) is a computer program that conducts a conversation via auditory or textual methods.
To find out more, check out these slides. For more info, visit our website, www.appgalleryinc.com
This document discusses a coin sharing structure for translation services using a blockchain. It proposes recording token transactions, translation data leases, and database contribution information on the blockchain. Contributors would receive points based on their database contribution, and profits would be regularly shared. A Mother of Language platform would provide ready-to-use translation data and confirm data through consensus among point holders. The translation data could also be leased to linguistic AI companies to share profits. The performance of AI translators could improve by learning from specialized translation data sets tagged with metadata like author, translator, and language pairs.
This document provides an overview of Microsoft's conversational computing platforms, including the Azure Bot Service and Bot Builder SDK. It describes how bots can be built and connected using these tools, and how cognitive services like LUIS and speech APIs can be integrated to add intelligence. The document also outlines the bot development lifecycle and provides information on new features for conversational AI like integrated language understanding and multi-lingual support.
Ingesting and Processing IoT Data Using MQTT, Kafka Connect and Kafka Streams..., by confluent
(Guido Schmutz, Trivadis) Kafka Summit SF 2018
Internet of Things use cases are a perfect match for processing with a streaming platform such as Kafka and the Confluent Platform. Some of the questions to be answered are: How do we feed the data from our devices into Kafka? Do we directly send data to Kafka? Is Kafka accessible from outside the organization over the internet? What if we want to use a more specific IoT protocol such as MQTT or CoAP in between? How would we integrate it with Kafka? How can we enrich IoT streaming data with static data sitting in a traditional system?
This session will provide answers to these and other questions using a fictitious use case of a trucking company. Trucks are constantly sending data about position and driving habits, which can be used to derive real-time information and actions. A large part of the presentation will be a live demo. The demo will show the implementation of the pipeline incrementally: starting with sending the truck movement events directly to Kafka, then adding MQTT to the sensor data ingestion, followed by using Kafka Streams and KSQL to apply stream processing on the information received. The final pipeline will demonstrate the application of Kafka Connect with MQTT and JDBC source connectors for data ingestion and event stream enrichment, and Kafka Streams and KSQL for stream processing. The key takeaway is the live demonstration of a working end-to-end IoT streaming data ingestion pipeline using Kafka technologies.
Build an LLM-powered application using LangChain.pdf, by AnastasiaSteele10
LangChain is an advanced framework that allows developers to create language model-powered applications. It provides a set of tools, components, and interfaces that make building LLM-based applications easier. With LangChain, managing interactions with language models, chaining together various components, and integrating resources like APIs and databases is a breeze. The platform includes a set of APIs that can be integrated into applications, allowing developers to add language processing capabilities without having to start from scratch.
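The "chaining" idea the abstract describes can be sketched generically: each component transforms an input and hands the result to the next. The names below (`promptTemplate`, `fakeLLM`, `parseOutput`) are illustrative stand-ins, not LangChain's actual API:

```javascript
// Compose components into a pipeline: output of each step feeds the next.
const chain = (...steps) => (input) =>
  steps.reduce((acc, step) => step(acc), input);

// Illustrative components: format a prompt, call a (stand-in) model, parse output.
const promptTemplate = (vars) =>
  `Summarize the following in one sentence: ${vars.text}`;
const fakeLLM = (prompt) => `SUMMARY(${prompt.length} chars)`; // stand-in for a real model call
const parseOutput = (raw) => ({ summary: raw });

const summarize = chain(promptTemplate, fakeLLM, parseOutput);
summarize({ text: "LangChain composes components into pipelines." });
// returns an object of the form { summary: "SUMMARY(... chars)" }
```

A real framework adds what this sketch omits: prompt variable validation, retries, streaming, memory between calls, and connectors to external APIs and databases.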
Details regarding the workings of ChatGPT and basic use cases can be found in this presentation. The presentation also contains details regarding other OpenAI products and their usability. You can also find ways in which ChatGPT can be implemented in existing apps and websites.
A brief introduction to generative models in general is given, followed by a succinct discussion about text generation models and the "Transformer" architecture. Finally, the focus is set on a non-technical discussion about ChatGPT with a selection of recent news articles.
Chatbot and Virtual AI Assistant Implementation in Natural Language Processing, by Shrutika Oswal
In this presentation, I give a short overview of hot recent research topics in artificial intelligence. These topics include Gaming, Expert Systems, Vision Systems, Speech Recognition, Handwriting Recognition, Intelligent Robots, Machine Learning, Deep Learning, Robotics, Reinforcement Learning, the Internet of Things, Neuromorphic Computing, Computer Vision, and, most importantly, NLP (Natural Language Processing). I cover the different fields and components of NLP along with the steps of implementation. Later in the presentation, I describe the general structure of a chatbot in NLP along with its implementation algorithm in Python. I also describe the technologies, usage, and workings of virtual AI assistants, and I demonstrate a virtual assistant for laptops that can perform some interesting tasks.
This document discusses generative AI and its potential transformations and use cases. It outlines how generative AI could enable more low-cost experimentation, blur division boundaries, and allow "talking to data" for innovation and operational excellence. The document also references responsible AI frameworks and a pattern catalogue for developing foundation model-based systems. Potential use cases discussed include automated reporting, digital twins, data integration, operation planning, communication, and innovation applications like surrogate models and cross-discipline synthesis.
This document provides an overview of ChatGPT and how it works. It begins with introductions and then provides examples of deep learning applications. It explains that ChatGPT is a type of neural network called a Generative Pre-Trained Transformer (GPT) that is trained on large amounts of text data to predict the next word. GPTs work using an autoregressive approach where each word prediction depends on the previous words generated. The document concludes by explaining how very large GPT models like GPT-3 are able to generate full sentences and conversations.
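The autoregressive idea described above, where each prediction depends on the words generated so far, can be illustrated with a toy bigram model. GPT models do this with a neural network over whole contexts rather than word-pair counts, so this is a sketch of the generation loop only:

```javascript
// Toy corpus to learn "what word follows what" from.
const corpus = "the cat sat on the mat the cat ran".split(" ");

// Count, for each word, how often each other word followed it.
const next = {};
for (let i = 0; i + 1 < corpus.length; i++) {
  const w = corpus[i], f = corpus[i + 1];
  next[w] = next[w] ?? {};
  next[w][f] = (next[w][f] ?? 0) + 1;
}

// Greedy autoregressive generation: repeatedly append the most frequent
// follower of the last word generated so far.
function generate(start, n) {
  const out = [start];
  for (let i = 0; i < n; i++) {
    const followers = next[out[out.length - 1]];
    if (!followers) break;
    out.push(Object.entries(followers).sort((a, b) => b[1] - a[1])[0][0]);
  }
  return out.join(" ");
}

generate("the", 3); // → "the cat sat on" ("cat" follows "the" most often)
```

Real models also sample from the predicted distribution instead of always taking the most likely word, which is what makes their output varied rather than deterministic.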
Implementing OpenAPI and GraphQL services with gRPC, by Tim Burks
Behind every API there's code. REST and GraphQL are powerful interface abstractions but are not so great for writing code (we’re still looking for the programming language where every command is a GET, POST, PUT, or DELETE). When programmers work, they are usually making function calls, and an RPC framework like gRPC allows those functions to be written in a mixture of languages and distributed among many servers. This means that gRPC can be a great way to implement REST and GraphQL APIs at scale. We’ll share open source projects from Google that can be used to implement OpenAPI and GraphQL services with gRPC and give you hands-on experience with both.
Presented at the 2019 API Specifications Conference.
https://asc2019.sched.com/event/T6u9/workshop-implementing-openapi-and-graphql-services-with-grpc-tim-burks-google
GraphQL across the stack: How everything fits together, by Sashko Stubailo
My talk from GraphQL Summit 2017!
In this talk, I talk about a future for GraphQL which builds on the idea that GraphQL enables lots of tools to work together seamlessly across the stack. I present this through the lens of 3 examples: Caching, performance tracing, and schema stitching.
Stay tuned for the video recording from GraphQL Summit!
apidays LIVE Helsinki - Implementing OpenAPI and GraphQL Services with gRPC b...apidays
apidays LIVE Helsinki - APIs, Platforms, And Ecosystems - Transforming Industries And Experiences
Implementing OpenAPI and GraphQL Services with gRPC
Tim Burks, Software Engineer at Google
GraphQL is a data query and manipulation language for APIs that provides several advantages over REST APIs:
- GraphQL allows clients to define the structure of the data required, and exactly the fields they need from the server. This prevents over-fetching and under-fetching compared to REST.
- GraphQL queries use a typed schema so clients can know the types of data available without having to make a request. It also allows for nested queries to fetch multiple objects in one request.
- GraphQL uses HTTP POST requests with a JSON body to specify the query and variables. This provides security advantages over REST by allowing all requests to go to a single endpoint.
- GraphQL queries support variables which
The document discusses versioning challenges for open source services deployed across multiple cloud platforms. It describes typical REST API versioning that works for single vendors but breaks down when different teams develop and deploy the software. The document introduces the concept of microversions to allow incremental feature updates while maintaining backwards compatibility. It also questions how to manage raising minimum versions and backwards compatibility over time.
The document discusses an introduction to the CloudStack API. It covers topics like API documentation, clients that interface with the API, exploring the API by examining HTTP calls from the UI, making authenticated and unauthenticated API calls, asynchronous calls, error handling, and includes an exercise on building a REST interface to CloudStack using Flask.
Posons-nous et profitons de ce talk pour prendre un peu de hauteur sur l’état de l’industrie tech autour de la création d’API de persistence (CRUD).
D’où venons-nous, ou allons-nous ? Pourquoi le choix entre RPC, SOAP, REST et GraphQL n’est peut-être qu’un sujet de surface qui cache un problème bien plus profond…
Youtube: https://www.youtube.com/watch?v=IskE3m3VjRY
apidays LIVE Australia 2020 - Have your cake and eat it too: GraphQL? REST? W...apidays
apidays LIVE Australia 2020 - Building Business Ecosystems
Have your cake and eat it too: GraphQL? REST? Why not have both!
Roy Mor, Technical Lead at Sisense
The document discusses Prisma and GraphQL. It provides an overview of GraphQL concepts like schema, queries, and resolvers. It then covers the typical architecture of a GraphQL server including the schema definition, resolver functions, and server setup. Finally, it introduces Prisma as a database access layer that can be used to build GraphQL servers.
GraphQL is a query language for APIs that was created by Facebook in 2012. It allows clients to define the structure of the data required, and exactly the data they need from the server. This prevents over- and under-fetching of data. GraphQL has grown in popularity with the release of tools like Apollo and GraphQL code generation. GraphQL can be used to build APIs that integrate with existing backend systems and databases, with libraries like Express GraphQL and GraphQL Yoga making it simple to create GraphQL servers.
GTLAB Installation Tutorial for SciDAC 2009marpierc
GTLAB is a Java Server Faces tag library that wraps Grid and web services to build portal-based and standalone applications. It contains tags for common tasks like job submission, file transfer, credential management. GTLAB applications can be deployed as portlets or converted to Google Gadgets. The document provides instructions for installing GTLAB, examples of tags, and making new custom tags.
Easing offline web application development with GWTArnaud Tournier
At this current time, HTML5 APIs are mature enough so that the web browser can now be a very good platform for applications that were before only implemented as native applications : offline applications with locally stored data, embedded SQL engines, etc. Although there are many good Javascript frameworks out there, the Java language allows to build, maintain, debug and work with ease on really big applications (> 100,000 LOC).
You'll discover in this presentation all the tools we assembled to make an application available with its data 100% of the time, even without internet!
The document discusses the evolution of router architectures away from traditional router designs. It argues that routers should move from being chassis-based systems running proprietary operating systems to being more modular, microservices-based architectures using open standards like Linux. Key points of the new model outlined include using many small independent software and hardware units for increased resilience, running software in containers, and having a database-driven management and control plane. The document suggests this type of architecture could make routers more programmable, scalable, and adaptable to changing technology needs over time.
Jeff Scudder, Eric Bidelman
The number of APIs made available for Google products has exploded from a handful to a slew! Get
the big picture on what is possible with the APIs for everything from YouTube, to Spreadsheets, to
Search, to Translate. We'll go over a few tools to help you get started and the things these APIs share
in common. After this session picking up new Google APIs will be a snap.
The document provides an overview of OGCE (Open Grid Computing Environment), which develops and packages reusable software components for science portals. Key components described include services, gadgets, tags, and how they fit together. Installation and usage of the various OGCE components is discussed at a high level.
GraphQL - A query language to empower your API consumers (NDC Sydney 2017)Rob Crowley
The shift to microservices, cloud native and rich web apps have made it challenging to deliver compelling API experiences. REST, as specified in Roy Fielding’s seminal dissertation, has become the architectural pattern of choice for APIs and when applied correctly allows for clients and servers to evolve in a loosely coupled manner. There are areas however where REST can deliver less than ideal client experiences. Often many HTTP requests are required to render a single view.
While this may be a minor concern for a web app running on a WAN with low latency and high bandwidth, it can yield poor client experiences for mobile clients in particular. GraphQL is Facebook’s response to this challenge and it is quickly proving itself as an exciting alternative to RESTful APIs for a wide range of contexts. GraphQL is a query language that provides a clean and simple syntax for consumers to interrogate your APIs. These queries are strongly types, hierarchical and enable clients to retrieve only the data they need.
In this session, we will take a hands-on look at GraphQL and see how it can be used to build APIs that are a joy to use.
How easy (or hard) it is to monitor your graph ql service performanceRed Hat
- GraphQL performance monitoring can be challenging as queries can vary significantly even when requesting the same data. Traditional endpoint monitoring provides little insight.
- Distributed tracing using OpenTracing allows tracing queries to monitor performance at the resolver level. Tools like Jaeger and plugins for Apollo Server and other GraphQL servers can integrate tracing.
- A demo showed using the Apollo OpenTracing plugin to trace a query through an Apollo server and resolver to an external API. The trace data was sent to Jaeger for analysis to help debug performance issues.
Saving Money by Optimizing Your Cloud Add-On InfrastructureAtlassian
People love the freedom and control that comes with hosting an add-on in the cloud, but financially speaking, that freedom doesn't come for free. You'll end up paying for your servers whether they serve requests or not, and of course, someone is needed to monitor and upgrade these servers.
In this talk, we will cover best practices on how to get a simple and inexpensive cloud add-on going for Hipchat, Bitbucket, Confluence and JIRA without all the overhead of running your own servers. We'll cover serverless technologies like AWS Lambda, static frontends, and how you can use JIRA and Confluence to host your add-on data.
Products covered:
JIRA Software, JIRA Core, Confluence, HipChat
Gohan : YAML-based REST API Service Definition Language
API Definition Generation (including Swagger)
DB Table Generation & OR Mapping
Support Custom Logic using Gohan Script (Javascript, and Go)
Extensible Role-Based Access Control
etcd integration
Similar to Connecting the Dots: Kong for GraphQL Endpoints (20)
4. Agenda
• Quick introduction to GraphQL
• Differences between REST and GraphQL
• API Management for GraphQL
• Kong Plugins (demo)
5. • Developed by Facebook in 2012 / publicly released in 2015 / GraphQL Foundation in 2018
• Server and Client implementations are available for major languages (JS, Java, Python, C#...)
• Supports reading (query), writing (mutation) and subscribing to data changes (subscriptions)
• Solves the Over-Fetching and Under-Fetching problems
(Credits: https://graphql.org/)
10. API Management with REST vs GraphQL
REST
• API has many endpoints
• Resource selection is defined in the route
• HTTP verbs define the operation (GET, POST, DELETE...)
GraphQL
• API has a single endpoint
• Resource selection is defined in the body
• HTTP POST for every operation (query or mutation defined in the request body)
To manage GraphQL endpoints, we have to look into the query
and extract some characteristics to implement policies.
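To make the contrast concrete, here is a minimal sketch of the two request styles; the endpoints and field names are hypothetical, chosen only to mirror the Kong Admin API example used later in the talk. REST spreads resource selection across routes and verbs, while GraphQL sends every operation as a POST to one endpoint with the selection in the body.

```python
# Hypothetical endpoints and field names, for illustration only.

# REST: resource selection lives in the route, the operation in the HTTP verb.
rest_requests = [
    ("GET", "/services"),                # list services
    ("GET", "/services/svc-1/plugins"),  # plugins for one service
    ("DELETE", "/services/svc-2"),       # operation chosen by the verb
]

# GraphQL: a single endpoint; operation and field selection are in the body.
graphql_request = {
    "method": "POST",
    "url": "/graphql",
    "body": {
        "query": """
          query {
            services {
              name
              host
              plugins { name }
            }
          }
        """,
    },
}

# Every GraphQL call shares one route and one verb, so any management
# policy has to look inside body["query"] instead of at the URL.
```

This is why gateway-level policies for GraphQL must parse the request body rather than match on routes and methods.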
11. Query characteristics examples
Nesting
Measure the nesting level of a query.
Query Cost Analysis
Count the amount of resources requested by a query.
Query whitelisting
Verify the query belongs to a group of authorized queries.
12. Query characteristics examples
Query Cost Analysis (GitHub GraphQL API example)
50 repositories
+ 50 x 10 = 500 repository issues
= 550 total nodes
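The arithmetic above can be reproduced with a small helper. This is a sketch of GitHub-style node counting, not GitHub's actual algorithm: each pagination level multiplies its page size by the number of parent nodes, and the per-level counts are summed.

```python
def query_cost(levels):
    """Estimate the node count of a paginated query.

    `levels` is the list of page sizes from the outermost connection
    inward, e.g. [50, 10] for `repositories(first: 50)` each with
    `issues(first: 10)`.
    """
    total = 0
    parents = 1
    for page_size in levels:
        parents *= page_size  # nodes requested at this depth
        total += parents
    return total

# 50 repositories + 50 * 10 repository issues = 550 total nodes
print(query_cost([50, 10]))  # 550
```

A cost-analysis plugin would run this kind of estimate before proxying the request and reject queries whose total exceeds a configured budget.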
14. Existing solutions are language-specific libraries
Without Kong, each API embeds its own policy libraries:
• API-1 (JS): Nesting Limit, Node Count Limit, Query Whitelisting
• API-2 (Java): Nesting Limit, Node Count Limit
• API-3 (Python): (none)
With Kong in front of API-1 (JS), API-2 (Java) and API-3 (Python):
• Plugins: Nesting Limit, Node Count Limit, Query Whitelisting...
• Non-intrusive: no code or configuration change on your GraphQL server.
• Language-agnostic: same features and performance for all GraphQL implementations.
15. Two proof-of-concept Kong plugins developed at Rakuten
1. Depth Limit
Limit the complexity of GraphQL queries based on their depth.
https://github.com/rakutentech/kong-plugin-graphql-depth-limit
2. Operation Whitelist
Whitelist operations that your consumers can send to your GraphQL server.
https://github.com/rakutentech/kong-plugin-graphql-operation-whitelist
16. Operation Whitelist Plugin
Requirements
• Queries and mutations are blocked if not whitelisted
• Equivalent operations are represented as a single entry
PDK features used
• Storing/caching custom entities
• Admin API extension to manage the whitelist
Request flow: Client → Kong → Upstream, with Kong performing query parsing, signature generation, signature hashing, and the whitelist check.
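The signature and whitelist steps of that flow can be sketched in a few lines. The normalization below (collapsing whitespace) is a stand-in for whatever canonicalization the real plugin performs so that equivalent operations map to a single whitelist entry; SHA-256 plays the role of the signature hash.

```python
import hashlib
import re

def signature(query: str) -> str:
    """Signature generation + hashing: canonicalize, then hash.

    Collapsing whitespace is a placeholder canonicalization so that
    formatting-only differences yield the same signature.
    """
    canonical = re.sub(r"\s+", " ", query).strip()
    return hashlib.sha256(canonical.encode()).hexdigest()

# Whitelist check: only signatures registered (e.g. via an Admin API
# extension) are allowed through.
whitelist = {signature("query { services { name host } }")}

def allowed(query: str) -> bool:
    return signature(query) in whitelist

# Same operation, different formatting -> same entry, allowed.
assert allowed("query {\n  services { name host }\n}")
# An operation that was never whitelisted is blocked.
assert not allowed("mutation { deleteService(id: 1) }")
```

Storing signatures rather than raw query text keeps the whitelist compact and makes the membership check a cheap hash lookup per request.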
18. Credits and references
• Securing Your GraphQL API from Malicious Queries (Apollo)
https://blog.apollographql.com/securing-your-graphql-api-from-malicious-queries-16130a324a6b
• GraphQL API Management (IBM)
https://www.ibm.com/blogs/research/2019/02/graphql-api-management/
• GraphQL Lua (@bjornbytes)
https://github.com/bjornbytes/graphql-lua
20. Conclusion and Next Steps
• Kong extensibility is a key factor: look into plugins and the Admin API
• GraphQL is still relatively new, but it’s popular and we need to address the security aspect
• Load and performance testing
• Hardening the code
• Merging all the plugins into a single one
• Implementing a Query Cost Analysis plugin
Editor's Notes
Good afternoon everyone. I'm Julien Bataillé, a software engineer at Rakuten, and I work with the team in charge of developing and maintaining the API Gateway for our entire group of companies.
If you attended the session this morning "Building the Next Era of Software" maybe you heard my colleague Alex talking about the challenges of providing Kong to such a large and diverse organization.
Today, I'd like to talk about one particular use case that came to us earlier this year. We were talking with one of our largest teams here in the US about getting on board and exposing their APIs through our shared instance of Kong.
They were interested, Kong is a great product after all, but they raised one important question:
how can Kong help manage GraphQL APIs?
And this is the question I'd like to try to answer with today’s presentation.
This is the agenda for today’s talk.
First, I will start with a very quick introduction to GraphQL.
Then I will try to highlight the differences between REST and GraphQL and how it’s impacting the rules and policies we use to manage APIs.
Finally, I will show you some examples of Kong plugins we developed with a live demo if we have enough time.
But first, a few words about GraphQL.
It’s a very popular alternative to REST for front end applications.
Since it was open sourced by Facebook in 2015, adoption has been really strong, and nowadays you can find both server and client implementations for almost every stack.
It allows the client to define the structure of the data required and the server will return exactly that and nothing else.
This is why it’s often considered a great solution to solve the so-called Over-fetching and under-fetching problems.
It’s doing much more than that but I’d like to insist on this point because I think this is one of the most relevant to today’s topic.
So to illustrate this I’d like to take an example that is probably very familiar to today’s audience. The Kong Admin REST API.
How many of you used or know about the Kong Admin API?
So let’s say I want to display the list of services configured on my Kong cluster and in the same page I want to see the list of plugins activated on each service.
To achieve this, I first need to call the services endpoint, which returns the name, host, and creation time for each of my services.
Notice that I also receive a lot of fields in the response that are not required to display this page to the user.
This is Over-fetching: I get data in the server’s response that are useless to my application.
But the plugins for each service are missing from this first response, so I need to make another round trip to the server to get this additional piece of information: not just one but two extra calls in this example, because I need to display two services.
At least I can send those two last requests in parallel, but in more complex scenarios it is sometimes not even possible to do so. This, I hope, is a good example of under-fetching.
Now let’s compare it to how we would achieve the same result with GraphQL:
First on the client we would build a query that would contain only the information we need: name, host, creation time, plugins. On this plugins entity we specify only the fields we want, in this example the name of the plugin.
We would POST this query inside the body of an HTTP request to the Kong GraphQL Admin API,
and the response would contain exactly the fields specified in the query. We get all the information we need to display our page in a single round trip to the server.
So from this example you can already notice a few differences between REST and GraphQL that will have an impact on how we implement API Management policies.
First, instead of many endpoints in a typical REST API we now have a single endpoint for GraphQL.
The resource selection with REST is usually defined in the route or path of the request whereas with GraphQL this resource selection is specified by the operation sent in the body.
With REST, we are used to conventions on the HTTP verb to define operations: GET, POST, PATCH, DELETE can be used to implement policies or restrictions on the API usage.
For most common GraphQL implementations only POST operations are necessary.
Finally, as we just saw in the previous example One GraphQL call can replace multiple REST calls.
How do we implement rate limiting in this case, and does it even make sense to use it?
I hope at this point you will agree that to manage GraphQL endpoints, we have to look into the GraphQL operation, extract some characteristics of the query or mutation, and use those characteristics to implement our API Management policies.
To make things more concrete let me share a few examples of what we can look into.
First we could measure the nesting of a query and impose some arbitrary limits to avoid this kind of recursive query.
Next, we can measure the cost of a query by counting the number of entities required by the client.
This example is from the GitHub GraphQL API: the client requested the first 50 repositories from an account and, for each repository, the first 10 issues, for a total of 550 nodes.
This is how GitHub implements rate limiting: instead of a limit of 5,000 requests per hour, they set a limit of points per hour, with each type of node costing an arbitrary number of points.
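A points-per-hour budget like the one just described can be sketched as a simple counter. The limit and costs below are illustrative, not GitHub's actual numbers, and a real gateway would also reset the budget on a rolling window.

```python
class PointBudget:
    """Rate limit by query cost instead of request count (sketch).

    Resetting `used` at the top of each hour is omitted for brevity.
    """

    def __init__(self, points_per_hour: int):
        self.limit = points_per_hour
        self.used = 0

    def try_consume(self, cost: int) -> bool:
        """Admit the query only if its node cost fits the remaining budget."""
        if self.used + cost > self.limit:
            return False
        self.used += cost
        return True

budget = PointBudget(points_per_hour=5000)
assert budget.try_consume(550)       # the 550-node example query fits
assert not budget.try_consume(4500)  # 550 + 4500 > 5000: rejected
```

Counting points rather than requests means one expensive query and a hundred cheap ones are charged fairly against the same budget.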
Query whitelisting is another policy we can implement if we have the capability to compare GraphQL operations and determine when two operations are functionally equivalent or not. I will develop this one in just a moment.
But first I want to mention that you will find libraries that implement the policies I just showed.
Those are language-specific solutions, which means you need to modify or reconfigure your GraphQL server to enable them.
This is where I believe Kong brings a better alternative: as for REST APIs, we want to move the implementation to Kong plugins instead of each individual upstream API.
It gives us the opportunity to enforce the same policies across all our GraphQL servers, whether they are implemented in JavaScript, Python, or Java.
In the past few months we implemented two Kong plugins at Rakuten to validate this approach:
the first one is fairly basic and implements the Depth limit policy I talked about earlier. It allowed us to verify we could parse a GraphQL query in a Kong plugin.
The second one is a little more complex and this is the one I’d like to demo today.