- GraphQL performance monitoring can be challenging as queries can vary significantly even when requesting the same data. Traditional endpoint monitoring provides little insight.
- Distributed tracing with OpenTracing makes it possible to monitor performance at the resolver level rather than per endpoint. Tools like Jaeger, together with plugins for Apollo Server and other GraphQL servers, can integrate tracing.
- A demo showed using the Apollo OpenTracing plugin to trace a query through an Apollo server and resolver to an external API. The trace data was sent to Jaeger for analysis to help debug performance issues.
Modern businesses have data at their core, and this data is changing continuously. How can we harness this torrent of information in real-time? The answer is stream processing, and the technology that has since become the core platform for streaming data is Apache Kafka. Among the thousands of companies that use Kafka to transform and reshape their industries are the likes of Netflix, Uber, PayPal, and AirBnB, but also established players such as Goldman Sachs, Cisco, and Oracle.
Unfortunately, today’s common architectures for real-time data processing at scale suffer from complexity: there are many technologies that need to be stitched and operated together, and each individual technology is often complex by itself. This has led to a strong discrepancy between how we, as engineers, would like to work vs. how we actually end up working in practice.
In this session we talk about how Apache Kafka helps you to radically simplify your data processing architectures. We cover how you can now build normal applications to serve your real-time processing needs — rather than building clusters or similar special-purpose infrastructure — and still benefit from properties such as high scalability, distributed computing, and fault-tolerance, which are typically associated exclusively with cluster technologies. Notably, we introduce Kafka’s Streams API, its abstractions for streams and tables, and its recently introduced Interactive Queries functionality. As we will see, Kafka makes such architectures equally viable for small, medium, and large scale use cases.
The evolution of Apache Calcite and its Community (Julian Hyde)
Apache Calcite is an open source framework for building databases, and includes a SQL parser, relational algebra, and a highly extensible query optimizer.
It has achieved wide adoption, used in many commercial products, open source projects, and as a test bed for computer science research.
But there is a bootstrap problem: If software is written by a community of contributors, and each contributor acts in their own self-interest, how do you get the first working version of the product? The answer is in the story of how the technology evolved, and how the community evolved with it, and in this talk we tell that story.
The RAPIDS suite of software libraries gives you the freedom to execute end-to-end data science and analytics pipelines entirely on GPUs. It relies on NVIDIA® CUDA® primitives for low-level compute optimization, but exposes that GPU parallelism and high-bandwidth memory speed through user-friendly Python interfaces.
Netflix recently changed its data pipeline architecture to use Kafka as the gateway for data collection for all applications, processing hundreds of billions of messages daily. This session will discuss the motivation for moving to Kafka, the architecture, and the improvements we have added to make Kafka work in AWS. We will also share the lessons learned and future plans.
A 2-hour session where I cover what Apache Camel is and the latest news on the upcoming Camel v3; the main topic of the talk is the new Camel K sub-project for running integrations natively on the cloud with Kubernetes. The last part of the talk is about running Camel with GraalVM / Quarkus to achieve natively compiled binaries that have impressive startup times and footprints.
Watch this talk here: https://www.confluent.io/online-talks/how-apache-kafka-works-on-demand
Pick up best practices for developing applications that use Apache Kafka, beginning with a high level code overview for a basic producer and consumer. From there we’ll cover strategies for building powerful stream processing applications, including high availability through replication, data retention policies, producer design and producer guarantees.
We’ll delve into the details of delivery guarantees, including exactly-once semantics, partition strategies and consumer group rebalances. The talk will finish with a discussion of compacted topics, troubleshooting strategies and a security overview.
This session is part 3 of 4 in our Fundamentals for Apache Kafka series.
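As a rough sketch of what those producer guarantees look like in code (not from the webinar; assuming the Node.js kafkajs client, a local broker, and an invented "orders" topic), an idempotent producer that waits for all in-sync replicas might be configured like this:

```ts
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "demo-producer", brokers: ["localhost:9092"] });

// Idempotent producer: the broker de-duplicates retries, so a send that is
// retried after a transient failure is written to the partition only once.
const producer = kafka.producer({ idempotent: true, maxInFlightRequests: 1 });

async function main(): Promise<void> {
  await producer.connect();
  await producer.send({
    topic: "orders",
    acks: -1, // wait for all in-sync replicas before acknowledging
    messages: [
      // The key determines the partition, so all events for one order
      // stay ordered within a single partition.
      { key: "order-42", value: JSON.stringify({ status: "created" }) },
    ],
  });
  await producer.disconnect();
}

main().catch(console.error);
```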
Using ANTLR on real example - convert "string combined" queries into paramete... (Alexey Diyan)
1. Hello ANTLR: ANother Tool for Language Recognition
2. Where can we use ANTLR?
3. Why not just use a regular expression language?
4. Tools under ANTLR umbrella
5. ANTLR basic syntax
6. ANTLR on real example
Keynote presentation for the International Semantic Web Conference in Athens, Greece, on November 9, 2023. The talk addresses the generative AI explosion and its potential impacts on the Semantic Web and Knowledge Graph communities and, in fact, may spark a research Renaissance.
Abstract:
We are living in an age of rapidly advancing technology. History may view this period as one in which generative artificial intelligence is seen as reshaping the landscape and narrative of many technology-based fields of research and application. Times of disruption often present both opportunities and challenges. We will discuss some areas that may be ripe for consideration in the field of Semantic Web research and semantically-enabled applications.
Semantic Web research has historically focused on representation and reasoning and enabling interoperability of data and vocabularies. At the core are ontologies along with ontology-enabled (or ontology-compatible) knowledge stores such as knowledge graphs. Ontologies are often manually constructed using a process that (1) identifies existing best-practice ontologies (and vocabularies) and (2) generates a plan for how to leverage these ontologies by aligning and augmenting them as needed to address requirements. While semi-automated techniques may help, there is typically a significant portion of the work that is often best done by humans with domain and ontology expertise. This is an opportune time to rethink how the field generates, evolves, maintains, and evaluates ontologies. We consider how hybrid approaches, i.e., those that leverage generative AI components along with more traditional knowledge representation and reasoning approaches, can create improved processes. The effort to build a robust ontology that meets a use case can be large. Ontologies are not static, however, and they need to evolve along with knowledge evolution and expanded usage. There is potential for hybrid approaches to help identify gaps in ontologies and/or refine content. Further, ontologies need to be documented with term definitions and their provenance. Opportunities exist to consider semi-automated techniques for some types of documentation, provenance, and decision-rationale capture for annotating ontologies.
The area of human-AI collaboration for population and verification presents a wide range of areas for research collaboration and impact. Ontologies need to be populated with class and relationship content. Knowledge graphs and other knowledge stores need to be populated with instance data in order to be used for question answering and reasoning. Population of large knowledge graphs can be time consuming. Generative AI holds the promise of creating candidate knowledge graphs that are compatible with the ontology schema. The knowledge graph should contain provenance information identifying how the content was populated and its source, and its correctness and currency should be checked. A human-AI assistant approach is presented.
Building distributed systems is challenging. Luckily, Apache Kafka provides a powerful toolkit for putting together big services as a set of scalable, decoupled components. In this talk, I'll describe some of the design tradeoffs when building microservices, and how Kafka's powerful abstractions can help. I'll also talk a little bit about what the community has been up to with Kafka Streams, Kafka Connect, and exactly-once semantics.
Presentation by Colin McCabe, Confluent, Big Data Day LA
KSQL is an open source streaming SQL engine for Apache Kafka. Come hear how KSQL makes it easy to get started with a wide range of stream processing applications such as real-time ETL, sessionization, monitoring and alerting, or fraud detection. We'll cover both how to get started with KSQL and some under-the-hood details of how it all works.
We set up a performance test environment to verify the performance of various Hadoop ecosystem software. We built it using ELK and JMeter and applied it to Kafka.
You can tune various options and run simulations against the performance requirements of your project.
About two years have passed since we first applied it, but it can be put to good use not only for Kafka but also for other Hadoop ecosystem components and custom solutions.
KafkaConsumer - Decoupling Consumption and Processing for Better Resource Uti... (confluent)
When working with KafkaConsumer, we usually employ a single thread for both reading and processing of messages. KafkaConsumer is not thread-safe, so using a single thread fits in well. The downside of this approach is that you are limited to a single thread for processing messages.
By decoupling consumption and processing, we can achieve processing parallelization with a single consumer and get the most out of the multi-core CPU architectures available today. While this can be very useful in certain use-case scenarios, it's not trivial to implement.
How do we use multiple threads with a KafkaConsumer that is not thread-safe? How do we react to consumer group rebalancing? Can we get the desired processing and ordering guarantees? In this talk we'll try to answer these questions and explore the challenges we face on our path.
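The talk targets the Java KafkaConsumer, but the idea translates; as a loose illustration (assuming the Node.js kafkajs client and an invented "orders" topic), one fetched batch can be fanned out over a small pool of concurrent workers, with offsets resolved only after processing:

```ts
import { Kafka, KafkaMessage } from "kafkajs";

const kafka = new Kafka({ clientId: "demo-consumer", brokers: ["localhost:9092"] });
const consumer = kafka.consumer({ groupId: "order-processors" });

// Stand-in for slow, per-message work (I/O calls, enrichment, ...).
async function handle(message: KafkaMessage): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 100));
  console.log("processed offset", message.offset);
}

async function main(): Promise<void> {
  await consumer.connect();
  await consumer.subscribe({ topics: ["orders"] });
  await consumer.run({
    eachBatch: async ({ batch, resolveOffset, heartbeat }) => {
      const queue = [...batch.messages];
      // Four workers drain the shared queue concurrently. Note the trade-off
      // the abstract hints at: offsets may be resolved out of order, so
      // per-partition ordering guarantees are weakened.
      const workers = Array.from({ length: 4 }, async () => {
        for (let msg = queue.shift(); msg !== undefined; msg = queue.shift()) {
          await handle(msg);
          resolveOffset(msg.offset);
          await heartbeat(); // keep the consumer group session alive
        }
      });
      await Promise.all(workers);
    },
  });
}

main().catch(console.error);
```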
Apache Kafka is the de facto standard for data streaming to process data in motion. With its significant adoption growth across all industries, I get a very valid question every week: when NOT to use Apache Kafka? What limitations does the event streaming platform have? When does Kafka simply not provide the needed capabilities? How do you qualify Kafka out when it is not the right tool for the job?
This session explores the DOs and DON'Ts. Separate sections explain when to use Kafka, when NOT to use Kafka, and when to MAYBE use Kafka.
Whether you are considering open source Apache Kafka, a cloud service like Confluent Cloud, or another technology using the Kafka protocol like Redpanda or Pulsar, check out this slide deck.
A detailed article about this topic:
https://www.kai-waehner.de/blog/2022/01/04/when-not-to-use-apache-kafka/
Building a Replicated Logging System with Apache Kafka (Guozhang Wang)
Apache Kafka is a scalable publish-subscribe messaging system with a distributed commit log at the core of its architecture. It was originally built as a centralized event pipelining platform for online data integration tasks. Over the past years of developing and operating Kafka, we have extended its log-structured architecture into a replicated logging backbone for much wider application scopes in the distributed environment. I am going to talk about our design and engineering experience in replicating Kafka logs for various distributed data-driven systems, including source-of-truth data storage and stream processing.
Performance Tuning RocksDB for Kafka Streams' State Stores (Dhruba Borthakur, ...), confluent
RocksDB is the default state store for Kafka Streams. In this talk, we will discuss how to improve single node performance of the state store by tuning RocksDB and how to efficiently identify issues in the setup. We start with a short description of the RocksDB architecture. We discuss how Kafka Streams restores the state stores from Kafka by leveraging RocksDB features for bulk loading of data. We give examples of hand-tuning the RocksDB state stores based on Kafka Streams metrics and RocksDB’s metrics. At the end, we dive into a few RocksDB command line utilities that allow you to debug your setup and dump data from a state store. We illustrate the usage of the utilities with a few real-life use cases. The key takeaway from the session is the ability to understand the internal details of the default state store in Kafka Streams so that engineers can fine-tune their performance for different varieties of workloads and operate the state stores in a more robust manner.
Extending the Apache Kafka® Replication Protocol Across Clusters, Sanjana Kau... (HostedbyConfluent)
Extending the Apache Kafka® Replication Protocol Across Clusters, Sanjana Kaundinya | Current 2022
When Apache Kafka® was first created, one of the hallmarks was its native replication protocol, which provided built-in resiliency in the system. As a business scales, there’s a need to have this fault-tolerance transcend beyond the local data center, and a multi-geographic deployment becomes critical. Traditionally, Kafka Connect based solutions have tried their hand at enabling these types of deployments. However, this presents its own set of operational challenges that can be quite costly.
In this talk, we will go over how you can use the existing replication protocol across clusters. You will learn how to use Cluster Linking to run a multi-region data streaming deployment without the burden and operational overhead of running yet another data system. We will discuss:
* Automation options for creating mirror topics
* Failover processes and caveats to consider
* Handling ACL replication and consumer offset synchronization
* And more!
So, join us on this intergalactic journey to discover how you can use Cluster Linking to decrease your operational overhead, maintain a multi-geographic deployment, and perhaps even reach infinity (and beyond)!
Meet Up - Spark Stream Processing + Kafka (Knoldus Inc.)
Stream processing is the real-time processing of data continuously, concurrently, and in a record-by-record fashion.
It treats data not as static tables or files, but as a continuous infinite stream of data integrated from both live and historical sources.
In these slides we'll be looking into Spark Stream Processing with Kafka.
GraphQL across the stack: How everything fits together (Sashko Stubailo)
My talk from GraphQL Summit 2017!
In this talk, I present a future for GraphQL which builds on the idea that GraphQL enables lots of tools to work together seamlessly across the stack. I explore this through the lens of three examples: caching, performance tracing, and schema stitching.
Stay tuned for the video recording from GraphQL Summit!
This is a basic presentation that can help you understand the core concepts of GraphQL, how it can be used to solve frontend integration in projects, and how it helps reduce data-fetching time.
The presentation also explains the core features of GraphQL and why it is a great alternative to REST APIs, along with the procedure for integrating it into your projects.
Sashko Stubailo - The GraphQL and Apollo Stack: connecting everything together (React Conf Brasil)
Presented at React Conf Brasil, in São Paulo, on October 7, 2017 #reactconfbr
I’ve been exploring the space of declarative developer tools and frameworks for over five years. Most recently, I was the founding member of the Apollo project at Meteor Development Group. My greatest passion is to make software development simpler, and enable more people to create software to bring good to the world.
https://medium.com/@stubailo
@stubailo
- Sponsors: Pipefy, Globo.com, Meteor, Apollo, Taller, Fullcircle, Quanto, Udacity, Cubos, Segware, Entria
- Support: Concrete, Rung, LuizaLabs, Movile, Rivendel, GreenMile, STQ, Hi Platform
- Promotion: InfoQ, DevNaEstrada, CodamosClub, JS Ladies, NodeBR, Training Center, BrazilJS, Tableless, GeekHunter
- Afterparty: An English Thing
GraphQL is a wonderful abstraction for describing and querying data. Apollo is an ambitious project to help you build apps with GraphQL. In this talk, we'll go over how all the parts—Client, Server, Dev Tools, Codegen, and more—create an end-to-end experience for building apps on top of any data.
## Detailed description
In today's development ecosystem, there are tons of options for almost every part of your application development process: UI rendering, styling, server side rendering, build systems, type checking, databases, frontend data management, and more. However, there's one part of the stack that hasn't gotten as much love in the last decade, because it usually falls through the cracks between frontend and backend developers: data fetching.
The most common way to load data in apps today is to use a REST API on the server and manage the data manually on the client. Whether you're using Redux, MobX, or something else, you're usually doing everything yourself—deciding when to load data, how to keep it fresh, updating the store after sending updates to the server, and more. But if you're trying to develop the best user experience for your app, all of that gets in the way; you shouldn't have to become a systems engineer to create a great frontend. The Apollo project is based on the belief that data loading doesn't have to be complicated; instead, you should be able to easily get the data you want, when you want it, and it should be managed for you just like React manages updating your UI.
Because data loading touches both the frontend and backend of your app, GraphQL and Apollo have to include many parts to fulfill that promise of being able to seamlessly connect your data together. First, we need client libraries not only for React and JavaScript, but also for native iOS and Android. Then, we must bring server-side support for GraphQL queries, mutations, and most recently subscriptions to every server technology and make those servers easier to write. And finally, we want not only all of the tools that people are used to with REST APIs, but many more thanks to all of the capabilities enabled by GraphQL.
In this talk, we'll go over all of the parts of a GraphQL-oriented app architecture, and how different GraphQL and Apollo technologies come together to solve all of the parts of data loading and management for React developers.
GraphQL is quickly becoming mainstream as one of the best ways to get data into your React application. When we see people modernize their app architecture and move to React, they often want to migrate their API to GraphQL as part of the same effort. But while React is super easy to adopt in a small part of your app at a time, GraphQL can seem like a much larger investment. In this talk, we’ll go over the fastest and most effective ways for React developers to incrementally migrate their existing APIs and backends to GraphQL, then talk about opportunities for improvement in the space. If you’re using React and are interested in GraphQL, but are looking for an extra push to get it up and running at your company, this is the talk for you!
GraphQL for Frontend Developers: Simplifying Data Fetching (ssuser5583681)
In today’s digital landscape, the demand for efficient and flexible APIs (Application Programming Interfaces) has grown exponentially. Developers are constantly seeking ways to improve data retrieval and manipulation processes while ensuring seamless integration between client applications and server resources. One technology that has gained significant popularity in recent years is GraphQL Server.
GraphQL can be one of the best ways to make your product development more fun and productive. In this presentation I talk about how GraphQL makes your life simpler, and how to write and deploy a GraphQL API with Apollo Server 2.0 and serverless deployment via Netlify Functions.
Implementing OpenAPI and GraphQL services with gRPC (Tim Burks)
Behind every API there's code. REST and GraphQL are powerful interface abstractions but are not so great for writing code (we’re still looking for the programming language where every command is a GET, POST, PUT, or DELETE). When programmers work, they are usually making function calls, and an RPC framework like gRPC allows those functions to be written in a mixture of languages and distributed among many servers. This means that gRPC can be a great way to implement REST and GraphQL APIs at scale. We’ll share open source projects from Google that can be used to implement OpenAPI and GraphQL services with gRPC and give you hands-on experience with both.
Presented at the 2019 API Specifications Conference.
https://asc2019.sched.com/event/T6u9/workshop-implementing-openapi-and-graphql-services-with-grpc-tim-burks-google
apidays LIVE Paris - GraphQL meshes by Jens Neuse (apidays)
apidays LIVE Paris - Responding to the New Normal with APIs for Business, People and Society
December 8, 9 & 10, 2020
GraphQL meshes
Jens Neuse, Founder of Wundergraph
GraphQL - A query language to empower your API consumers (NDC Sydney 2017), Rob Crowley
The shift to microservices, cloud native, and rich web apps has made it challenging to deliver compelling API experiences. REST, as specified in Roy Fielding’s seminal dissertation, has become the architectural pattern of choice for APIs and, when applied correctly, allows clients and servers to evolve in a loosely coupled manner. There are areas, however, where REST can deliver less than ideal client experiences. Often many HTTP requests are required to render a single view.
While this may be a minor concern for a web app running on a WAN with low latency and high bandwidth, it can yield poor client experiences for mobile clients in particular. GraphQL is Facebook’s response to this challenge, and it is quickly proving itself an exciting alternative to RESTful APIs for a wide range of contexts. GraphQL is a query language that provides a clean and simple syntax for consumers to interrogate your APIs. These queries are strongly typed, hierarchical, and enable clients to retrieve only the data they need.
In this session, we will take a hands-on look at GraphQL and see how it can be used to build APIs that are a joy to use.
GraphQL is a query language for APIs and a server-side runtime. It allows fulfilling queries by using a type system you define for your data. Why use GraphQL? What are the pros and cons? We did research and summarised our conclusions.
apidays LIVE Australia 2020 - Have your cake and eat it too: GraphQL? REST? W... (apidays)
apidays LIVE Australia 2020 - Building Business Ecosystems
Have your cake and eat it too: GraphQL? REST? Why not have both!
Roy Mor, Technical Lead at Sisense
How to Deploy a GraphQL API: A Comprehensive Guide (ssuser5583681)
In today’s digital landscape, APIs (Application Programming Interfaces) play a crucial role in connecting and integrating different software systems. GraphQL has emerged as a powerful query language and runtime for APIs, providing efficient and flexible data retrieval. If you’re looking to harness the benefits of GraphQL, this article will guide you through the process of deploying a GraphQL API. From setting up the infrastructure to implementing best practices, we’ll cover it all. Let’s dive in!
GraphQL is a syntax that describes how to ask for data, and is generally used to load data from a server to a client. GraphQL has three main characteristics:
It lets the client specify exactly what data it needs.
It makes it easier to aggregate data from multiple sources.
It uses a type system to describe data.
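To make those three characteristics concrete, here is a small hypothetical example (endpoint and field names are invented) of a client asking for exactly the fields it needs and aggregating related data in one request:

```ts
// All field names below are hypothetical; a real query is validated
// against the server's type system (characteristic 3).
const query = `
  query {
    user(id: "1") {
      name              # exactly the fields the view needs (characteristic 1)
      email
      orders { total }  # related data, fetched in the same round trip (characteristic 2)
    }
  }
`;

async function main(): Promise<void> {
  const res = await fetch("https://example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  console.log((await res.json()).data);
}

main().catch(console.error);
```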
Describing and comparing different protocols when it comes to deploying APIs on edge computing devices.
Five different categories are analyzed and seven protocols are examined.
Globus Compute with IRI Workflows - GlobusWorld 2024 (Globus)
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work the team is investigating ways to speed up the time to solution for many different parts of the DIII-D workflow, including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks, and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and be a tool to connect compute at different facilities.
Field Employee Tracking System | MiTrack App | Best Employee Tracking Solution |... (informapgpstrackings)
Keep tabs on your field staff effortlessly with Informap Technology Centre LLC. Real-time tracking, task assignment, and smart features for efficient management. Request a live demo today!
For more details, visit us : https://informapuae.com/field-staff-tracking/
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart... (Globus)
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data and applying computations on a different system. As a part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on-demand, capable of applying many data reduction and data analysis operations to the large ESGF data archives, transferring only the resultant analysis (e.g., visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Paketo Buildpacks: the best way to build OCI images? DevopsDa... (Anthony Dahanne)
Buildpacks have been around for more than 10 years! At first, they were used to detect and build an application before deploying it to certain PaaS platforms. Then, with their latest generation, the Cloud Native Buildpacks (incubating at the CNCF), we became able to create Docker (OCI) images. Are they a good alternative to the Dockerfile? What are the Paketo buildpacks? Which communities support them, and how?
Come find out during this ignite session.
Accelerate Enterprise Software Engineering with Platformless (WSO2)
Key takeaways:
Challenges of building platforms and the benefits of platformless.
Key principles of platformless, including API-first, cloud-native middleware, platform engineering, and developer experience.
How Choreo enables the platformless experience.
How key concepts like application architecture, domain-driven design, zero trust, and cell-based architecture are inherently a part of Choreo.
Demo of an end-to-end app built and deployed on Choreo.
Large Language Models and the End of Programming (Matt Welsh)
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
TROUBLESHOOTING 9 TYPES OF OUTOFMEMORYERROR (Tier1 app)
Even though at surface level ‘java.lang.OutOfMemoryError’ appears to be one single error, underneath there are 9 types of OutOfMemoryError. Each type of OutOfMemoryError has different causes, diagnosis approaches and solutions. This session equips you with the knowledge, tools, and techniques needed to troubleshoot and conquer OutOfMemoryError in all its forms, ensuring smoother, more efficient Java applications.
Software Engineering, Software Consulting, Tech Lead.
Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security,
Spring Transaction, Spring MVC,
Log4j, REST/SOAP WEB-SERVICES.
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk will chart the course of this pivotal journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
Top Features to Include in Your Winzo Clone App for Business Growth (rickgrimesss22)
Discover the essential features to incorporate in your Winzo clone app to boost business growth, enhance user engagement, and drive revenue. Learn how to create a compelling gaming experience that stands out in the competitive market.
How to Position Your Globus Data Portal for Success: Ten Good Practices (Globus)
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. Science gateways have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, The Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks have evolved with their different strengths and foci and have been taken up by a larger community such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and how to meet the ever-expanding needs of the community they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become a full production service, or if they do, they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that for any gateway there is a need for a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation goes into detail for ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
Quarkus Hidden and Forbidden Extensions (Max Andersen)
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Enterprise Resource Planning System includes various modules that reduce any business's workload. Additionally, it organizes the workflows, which drives enhanced productivity. Here is a detailed explanation of the ERP modules. Going through the points will help you understand how the software is changing work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G... (Globus)
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
Globus Connect Server Deep Dive - GlobusWorld 2024 (Globus)
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Experience our free, in-depth three-part Tendenci Platform Corporate Membership Management workshop series! In Session 1 on May 14th, 2024, we began with an Introduction and Setup, mastering the configuration of your Corporate Membership Module settings to establish membership types, applications, and more. Then, on May 16th, 2024, in Session 2, we focused on binding individual members to a Corporate Membership and Corporate Reps, teaching you how to add individual members and assign Corporate Representatives to manage dues, renewals, and associated members. Finally, on May 28th, 2024, in Session 3, we covered questions and concerns, addressing any queries or issues you may have.
For more Tendenci AMS events, check out www.tendenci.com/events
Corporate Management | Session 3 of 3 | Tendenci AMS
How easy (or hard) it is to monitor your GraphQL service performance
1. How easy (or hard) it is to monitor your GraphQL service performance
by Luca Ferrari, EMEA Solution Architect at Red Hat
2. Agenda
graphql {
what {
challenges
solutions
}
possible {
demo
}
questions(limit: None) {
answers(0:N)
}
}
3. GraphQL boring definition
GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. GraphQL provides a complete and understandable description of the data in your API as well as gives clients the power to ask for exactly what they need and nothing more. It simplifies evolving APIs over time and enables powerful developer tools.
Reference: graphql.org/learn
8. GraphQL advantages
Exact data fetching / client-specific shape of response: with GraphQL, you send a query to your API and get exactly what you need, nothing more and nothing less. GraphQL minimizes the amount of data that is transferred across the wire by being selective about the data depending on the client application’s needs. Thus, a mobile client can fetch less information.
vs
Overfetching or Underfetching
9. GraphQL advantages
One request, many resources / network efficiency: it makes it simple to fetch all required data with one single request. The structure of GraphQL servers makes it possible to declaratively fetch data as it only exposes a single endpoint.
vs
Multiple requests to get a composite result, and using the network as an unlimited resource / network inefficiency
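As a hypothetical sketch of that single-request style (the schema is invented for illustration), one query can walk several related resources that a REST client would typically fetch with separate calls:

```ts
// One request replaces e.g. GET /users/1, GET /users/1/posts and
// GET /posts/:id/comments in a typical REST design.
const query = `
  query {
    user(id: "1") {
      name
      posts(last: 5) {
        title
        comments(first: 3) {
          author { name }
          body
        }
      }
    }
  }
`;
```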
10. GraphQL advantages
Modern compatibility: modern applications are now built in comprehensive ways where a single backend application supplies the data that is needed to run multiple clients. GraphQL embraces these new trends as it can be used to connect the backend application and fulfill each client’s requirements (nested relationships of data, fetching only the required data, network usage requirements, etc.) without dedicating a separate API for each client.
Schema stitching makes it possible to create a single general schema from different schemas. As a result, each microservice can define its own GraphQL schema.
vs
Multiple APIs for omnichannel experience
11. GraphQL advantages
Field-level deprecation: as developers, we are used to calling different versions of an API and oftentimes getting really weird responses. Traditionally, we version APIs when we’ve made changes to the resources or to the structure of the resources we currently have, hence the need to deprecate and evolve a new version.
In GraphQL, it is possible to deprecate APIs on a field level. When a particular field is to be deprecated, a client receives a deprecation warning when querying the field. After a while, the deprecated field may be removed from the schema when not many clients are using it anymore.
vs
Versioning / Deprecation / Outdated documentation
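Field-level deprecation is expressed directly in the schema via the spec's built-in @deprecated directive; a minimal sketch (type and field names invented):

```ts
// Clients querying `name` keep working but see a deprecation warning in
// tooling; once usage drops, the field can be removed from the schema.
const typeDefs = `
  type User {
    id: ID!
    fullName: String
    name: String @deprecated(reason: "Use fullName instead.")
  }
`;
```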
12. Possible issues
1. Caching: with REST you access resources with URLs, and thus you would be able to cache on a resource level. In GraphQL, this becomes complex as each query can be different even though it operates on the same entity.
2. Query performance: GraphQL gives clients the power to execute queries to get exactly what they need. It could also mean that users can ask for as many fields in as many resources as they want and build highly complex queries that slow systems down.
3. Security: OIDC scopes, granular authz or rate limiting might not be as easy to implement as with REST services.
4. Monitoring performance: measuring the response time of a GraphQL endpoint gives us almost no insight into the health of our GraphQL API.
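For issue 2, one common mitigation is to reject overly deep queries before executing them. A minimal sketch, assuming apollo-server, the graphql-depth-limit package, and a schema module of your own:

```ts
import { ApolloServer } from "apollo-server";
import depthLimit from "graphql-depth-limit";
import { typeDefs, resolvers } from "./schema"; // assumed to exist in your project

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // A query like viewer { friends { friends { friends { ... } } } } is
  // rejected during validation once it nests more than five levels deep.
  validationRules: [depthLimit(5)],
});

server.listen().then(({ url }) => console.log(`Server ready at ${url}`));
```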
13. Performance Monitoring
Imagine a simple query:
query {
  viewer {
    name
    bestFriend {
      name
    }
  }
}
Now a new API client starts using our GraphQL API:
query {
  viewer {
    friends(first: 1000) {
      bestFriend {
        name
      }
    }
  }
}
14. Performance Monitoring
● The queries are not so different, but the second one will see a much higher response time
● Responses are slower as we serve more and more complex queries
● But what we really would like to know is the general behaviour of our backend given a comparable workload
● We are not interested in monitoring the endpoint, but the queries.
● If we are running a private GraphQL API with known clients we can control the situation, but otherwise ...
16. Observability
Observability is defined as the ability of the internal states of a system to be determined by its external outputs.
Observability consists of three pillars: metrics, traces, and logs.
Drawing conclusions from any one of these pillars alone is difficult. Observability means bringing together the information from all in a coordinated way toward finding bugs and bottlenecks.
17. OpenTracing
Distributed tracing is a method used to profile and monitor applications, especially those built using a microservices architecture. Distributed tracing helps pinpoint where failures occur and what causes poor performance.
OpenTracing is comprised of an API specification, frameworks and libraries that have implemented the specification, and documentation for the project. OpenTracing allows developers to add instrumentation to their application code using APIs that do not lock them into any one particular product or vendor.
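In JavaScript, the vendor-neutral API looks roughly like this (a sketch; the span name, tag, and lookup function are invented). The same instrumentation code works no matter which tracer implementation is installed:

```ts
import { globalTracer } from "opentracing";

// Hypothetical data-access call used by a resolver.
async function lookupBestFriend(userId: string): Promise<{ name: string }> {
  return { name: "example" };
}

async function resolveBestFriend(userId: string) {
  // globalTracer() returns a no-op tracer until a real one is installed,
  // so instrumented code is safe to run even without a tracing backend.
  const span = globalTracer().startSpan("resolve.bestFriend");
  span.setTag("user.id", userId);
  try {
    return await lookupBestFriend(userId);
  } catch (err) {
    span.setTag("error", true);
    throw err;
  } finally {
    span.finish();
  }
}
```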
18. OpenTracing and OpenCensus have merged to form OpenTelemetry!
OpenTelemetry is a collection of tools, APIs, and SDKs. You use it to instrument, generate, collect, and export telemetry data (metrics, logs, and traces) for analysis in order to understand your software's performance and behavior.
19. Jaeger
● A product built at Uber
● Inspired by Dapper from Google
● Donated to the CNCF
● Supported libraries in Go, Java, Node.js, Python, C++, C#
● Accepts spans in Zipkin format for backward compatibility
● Emits Prometheus metrics
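Installing Jaeger as the OpenTracing global tracer in Node.js might look like this (a sketch assuming the jaeger-client package and a local Jaeger agent; the service name is invented):

```ts
import { initGlobalTracer } from "opentracing";
// jaeger-client ships an initTracer(config, options) helper.
const { initTracer } = require("jaeger-client");

const tracer = initTracer(
  {
    serviceName: "graphql-demo",
    sampler: { type: "const", param: 1 }, // sample every trace (fine for a demo)
    reporter: { logSpans: true },
  },
  {} // options: custom logger, metrics factory, process tags, ...
);

// Every globalTracer() call from here on returns the Jaeger tracer.
initGlobalTracer(tracer);
```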
23. Apollo Tracing extension
Apollo Tracing is a GraphQL extension for performance monitoring that works with most popular GraphQL server libraries, including Node, Ruby, Scala, Java, and Elixir, and it enables you to easily get resolver-level performance information as part of a GraphQL response.
Apollo Tracing works by including data in the extensions field of the GraphQL response, which is reserved by the GraphQL spec for extra information that a server wants to return. That way, you have access to performance traces alongside the data returned by your query.
Reference: https://www.apollographql.com/blog/exposing-trace-data-for-your-graphql-server-with-apollo-tracing-97c5dd391385/
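The payload under extensions.tracing follows the apollo-tracing response format; roughly, per request and per resolver (shape abridged here as a TypeScript type for readability):

```ts
// Abridged from the apollo-tracing response format (version 1).
interface TracingExtension {
  version: number;
  startTime: string; // RFC 3339 timestamp
  endTime: string;
  duration: number;  // whole-request duration in nanoseconds
  execution: {
    resolvers: Array<{
      path: (string | number)[]; // e.g. ["viewer", "friends", 0, "name"]
      parentType: string;
      fieldName: string;
      returnType: string;
      startOffset: number; // nanoseconds since startTime
      duration: number;    // time spent in this resolver
    }>;
  };
}
```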
24. Instana tracing support
Instana offers tracing for GraphQL queries, mutations and subscriptions. GraphQL tracing is currently supported for the Ruby, Node.js and Java runtimes.
For each operation, we capture:
● the operation type (query, mutation or subscription-update),
● the operation name (if provided),
● all involved object types,
● the arguments used for each object type, and
● the selected fields for each object type.
Each time a client receives an update due to one of its active GraphQL subscriptions, we trace this subscription update as a call from the GraphQL server to the client.
Reference: https://www.instana.com/docs/ecosystem/graphql/
25. DataDog tracer plugin
The DataDog JavaScript Tracer provides out-of-the-box instrumentation for many popular frameworks and libraries by using a plugin system. By default all built-in plugins are enabled.
This library is OpenTracing compliant. Use the OpenTracing API and the Datadog Tracer (dd-trace) library to measure execution times for specific pieces of code.
This plugin automatically instruments the graphql module. The graphql integration uses the operation name as the span resource name. If no operation name is set, the resource name will always be just query, mutation or subscription.
Reference: https://datadoghq.dev/dd-trace-js/
26. NewRelic plugin
By using the New Relic Apollo Server plugin to instrument your applications, you can get to the root cause of issues. The plugin records the overall timing of the operations and then parses the payload so you can uncover and diagnose the cause of your slow GraphQL operations.
Distributed tracing goes further and provides the capability to understand if the latency is coming from the application itself or other services.
Reference: https://blog.newrelic.com/product-news/apollo-server-plugin/
27. Apollo OpenTracing plugin
Apollo Opentracing allows you to integrate open source baked performance tracing to your Apollo server based on industry standards for tracing.
➢ Request & field-level resolvers are traced out of the box
➢ Queries and results are logged, to make debugging easier
➢ Select which requests you want to trace
➢ Spans transmitted through the HTTP headers are picked up
➢ Use the opentracing-compatible tracer you like
Reference: https://github.com/DanielMSchmidt/apollo-opentracing
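Wiring the plugin into a server might look roughly like this, following the project's README at the time (the API has changed across versions, so treat this as a sketch and check the repository):

```ts
import { ApolloServer } from "apollo-server";
import ApolloOpentracing from "apollo-opentracing";
import { globalTracer } from "opentracing";
import { typeDefs, resolvers } from "./schema"; // assumed to exist in your project

// Reuse whatever OpenTracing-compatible tracer is installed, e.g. Jaeger.
const tracer = globalTracer();

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // One tracer for the incoming request span, one for local resolver spans;
  // here the same tracer plays both roles.
  extensions: [() => new ApolloOpentracing({ server: tracer, local: tracer })],
});
```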
29. Apollo server
Apollo Server is an open-source, spec-compliant GraphQL server that's compatible with any GraphQL client, including Apollo Client.
You can use Apollo Server as:
● A stand-alone GraphQL server, including in a serverless environment
● An add-on to your application's existing Node.js middleware (such as Express or Fastify)
● A gateway for a federated data graph
Reference: https://www.apollographql.com/
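For reference, a minimal stand-alone Apollo Server, the kind of endpoint the demo traces (schema and resolver invented for the example):

```ts
import { ApolloServer, gql } from "apollo-server";

const typeDefs = gql`
  type Query {
    hello: String
  }
`;

const resolvers = {
  Query: {
    hello: () => "world",
  },
};

const server = new ApolloServer({ typeDefs, resolvers });
server.listen().then(({ url }) => console.log(`GraphQL server ready at ${url}`));
```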