A small attempt to deliver an introductory session on GraphQL.
I have prepared a small demo using the following technologies: Spring Boot, H2 database (server)
Angular 8, Apollo GraphQL client
Oracle 21c: New Features and Enhancements of Data Pump & TTS – Christian Gohmann
At the end of the year 2020, Oracle released 21c on its Cloud infrastructure. The on-premises version will follow later this year. As with every new Oracle version, the Data Pump utility gets new features or enhancements for existing features.
This presentation gives an overview of the enhancements of Data Pump and Transportable Tablespaces. The following list is an excerpt of the points I will talk about:
- Simultaneous use of EXCLUDE and INCLUDE
- Parallelized import of metadata during a TTS import operation
- Checksum support for dump files
- Direct access to Oracle Cloud Object Store for exports and imports
My use case is to provide monitoring, improve overall search data quality, find unusual patterns in users’ search behavior, and report on-site intent back to the respective business stakeholders. To achieve this, I explored various big data processing engines that can process huge volumes of data with complex business logic in real time. Eventually, I chose Flink stream processing. This talk will showcase how I used Flink to accomplish my goal.
This presentation briefly describes key features of Apache Cassandra. It was held at the Apache Cassandra Meetup in Vienna in January 2014. You can access the meetup here: http://www.meetup.com/Vienna-Cassandra-Users/
Watch this talk here: https://www.confluent.io/online-talks/apache-kafka-architecture-and-fundamentals-explained-on-demand
This session explains Apache Kafka’s internal design and architecture. Companies like LinkedIn are now sending more than 1 trillion messages per day to Apache Kafka. Learn about the underlying design in Kafka that leads to such high throughput.
This talk provides a comprehensive overview of Kafka architecture and internal functions, including:
- Topics, partitions and segments
- The commit log and streams
- Brokers and broker replication
- Producer basics
- Consumers, consumer groups and offsets
This session is part 2 of 4 in our Fundamentals for Apache Kafka series.
JSON, or JavaScript Object Notation, is a minimal, readable format for structuring data. It is used primarily to transmit data between a server and web application, as an alternative to XML.
Presto, an open source distributed SQL engine, is widely recognized for its low-latency queries, high concurrency, and native ability to query multiple data sources. Proven at scale in a variety of use cases at Facebook, Airbnb, Netflix, Uber, Twitter, Bloomberg, and FINRA, Presto experienced an unprecedented growth in popularity in both on-premises and cloud deployments in the last few years.
Inspired by the increasingly complex SQL queries run by the Presto user community, engineers at Facebook and Starburst have recently focused on cost-based query optimization. In this talk we will present the initial design and implementation of the CBO, support for connector-provided statistics, estimating selectivity, and choosing efficient query plans. Then, our detailed experimental evaluation will illustrate the performance gains for several classes of queries achieved thanks to the optimizer. Finally, we will discuss our future work enhancing the initial CBO and present the general Presto roadmap for 2018 and beyond.
Speakers
Kamil Bajda-Pawlikowski, Starburst Data, CTO & Co-Founder
Martin Traverso
The talk covers the following topics:
* Fundamental parts of a GraphQL server
* Defining API shape - GraphQL schema
* Resolving object fields
* Mutative APIs
* Making requests to a GraphQL server
* Solving N+1 query problem: dataloader
Thrift vs Protocol Buffers vs Avro - Biased Comparison
Igor Anishchenko
Odessa Java TechTalks
Lohika - May, 2012
Let's take a step back and compare data serialization formats, of which there are plenty. What are the key differences between Apache Thrift, Google Protocol Buffers and Apache Avro? Which is "The Best"? The truth of the matter is that they are all very good and each has its own strong points. Hence, the answer is as much a personal choice as it is an understanding of the historical context for each and a correct identification of your own, individual requirements.
YugaByte DB Internals - Storage Engine and Transactions – Yugabyte
This document introduces YugaByte DB, a high-performance, distributed, transactional database. It is built to scale horizontally on commodity servers across data centers for mission-critical applications. YugaByte DB uses a transactional document store based on RocksDB, Raft-based replication for resilience, and automatic sharding and rebalancing. It supports ACID transactions across documents, provides APIs compatible with Cassandra and Redis, and is open source. The architecture is designed for high performance, strong consistency, and cloud-native deployment.
Over the years there have been countless technical and social presentations doting on 5, 10, 12 ways to improve this, that and the other.
I will go through various performance tweaks (not tweets) for Oracle Application Express without limiting myself to a golden number.
These improvements will vary from simple PL/SQL refactoring; to monitoring for bottlenecks in your application; to cutting down maintenance time - which relates to the performance of you as an Oracle developer with only 24 hours in a day.
We may even visit a little APEX instrumentation on the way.
This document provides an overview of Apache Phoenix, including:
- A brief history of how it originated as an internal project at Salesforce before becoming a top-level Apache project.
- An architectural overview explaining that Phoenix provides a SQL interface for Apache HBase and runs on top of HDFS to enable next-generation data applications on HBase.
- Descriptions of Phoenix's key capabilities like SQL support, transactions, user-defined functions, and secondary indexes to boost query performance.
- Examples of how Phoenix can be used for common scenarios like analyzing server metrics data.
This document provides an introduction and overview of GraphQL, including:
- A brief history of GraphQL and how it was created by Facebook and adopted by other companies.
- How GraphQL provides a more efficient alternative to REST APIs by allowing clients to specify exactly the data they need in a request.
- Some key benefits of GraphQL like its type system, declarative data fetching, schema stitching, introspection, and versioning capabilities.
- Some disadvantages like potential complexity in queries and challenges with rate limiting.
Lambdas and streams are key new features in Java 8. Lambdas allow blocks of code to be passed around as if they were objects. Streams provide an abstraction for processing collections of objects in a declarative way using lambdas. Optional is a new class that represents null-safe references and helps avoid null pointer exceptions. Checked exceptions can cause issues with lambdas, so helper methods are recommended to convert checked exceptions to unchecked exceptions.
MongoDB is an open-source, document-oriented database that provides high performance and horizontal scalability. It uses a document-model where data is organized in flexible, JSON-like documents rather than rigidly defined rows and tables. Documents can contain multiple types of nested objects and arrays. MongoDB is best suited for applications that need to store large amounts of unstructured or semi-structured data and benefit from horizontal scalability and high performance.
This document provides an overview of Postgresql, including its history, capabilities, advantages over other databases, best practices, and references for further learning. Postgresql is an open source relational database management system that has been in development for over 30 years. It offers rich SQL support, high performance, ACID transactions, and extensive extensibility through features like JSON, XML, and programming languages.
This document discusses GraalVM and its key components. GraalVM is a virtual machine that supports multiple languages including Java, JavaScript, Ruby, and Python. It includes Graal, a new just-in-time compiler that aims to replace the C2 compiler in HotSpot VM. GraalVM also features Truffle, a framework for building languages, and Native Image, an ahead-of-time compiler.
This document provides an overview of Spring Boot, a framework for creating stand-alone, production-grade Spring based applications. It discusses how Spring Boot aims to make it easy to create Spring applications with default configurations and minimal code. The key topics covered include: using Maven and Gradle build tools with Spring Boot, common features and conventions like auto-configuration and main application classes, Spring Data and JPA for database access, Spring MVC features for web applications, and testing Spring applications.
This document provides an overview of patterns for scalability, availability, and stability in distributed systems. It discusses general recommendations like immutability and referential transparency. It covers scalability trade-offs around performance vs scalability, latency vs throughput, and availability vs consistency. It then describes various patterns for scalability including managing state through partitioning, caching, sharding databases, and using distributed caching. It also covers patterns for managing behavior through event-driven architecture, compute grids, load balancing, and parallel computing. Availability patterns like fail-over, replication, and fault tolerance are discussed. The document provides examples of popular technologies that implement many of these patterns.
Koalas: Making an Easy Transition from Pandas to Apache Spark – Databricks
In this talk, we present Koalas, a new open-source project that aims at bridging the gap between the big data and small data for data scientists and at simplifying Apache Spark for people who are already familiar with the pandas library in Python.
Pandas is the standard tool for data science in Python, and it is typically the first step data scientists take to explore and manipulate a data set. The problem is that pandas does not scale well to big data. It was designed for small data sets that a single machine can handle.
When data scientists work today with very large data sets, they either have to migrate to PySpark to leverage Spark or downsample their data so that they can use pandas. This presentation will give a deep dive into the conversion between Spark and pandas dataframes.
Through live demonstrations and code samples, you will understand: – how to effectively leverage both pandas and Spark inside the same code base – how to leverage powerful pandas concepts such as lightweight indexing with Spark – technical considerations for unifying the different behaviors of Spark and pandas
Redis is an in-memory key-value store that is often used as a database, cache, and message broker. It supports various data structures like strings, hashes, lists, sets, and sorted sets. While data is stored in memory for fast access, Redis can also persist data to disk. It is widely used by companies like GitHub, Craigslist, and Engine Yard to power applications with high performance needs.
Highly Available Kafka Consumers and Kafka Streams on Kubernetes with Adrian ... – HostedbyConfluent
"Getting started with a Kafka Consumer or Streams application is relatively straight forward, but having those clients be highly available and resilient in a real-world application where compute is becoming more ephemeral is another matter. Kubernetes is a popular place to deploy these workloads, making these challenges more accessible and prevalent than ever.
In this talk I will introduce a simple consumer implementation with a default configuration and discuss the KIPs and features that have been introduced over time to limit how the hostile world of cloud computing can impact your real-time consuming applications.
Once the Kafka Consumer configurations are under our belt, we can see how these same concepts are applied and augmented in Kafka Streams, and then cater for new concepts such as maintaining and restoring local state store data.
Throughout the talk I will show out-of-the box Kubernetes features and deployment configurations that harmonise with the Kafka clients and their configurations to achieve a highly available consumer or streams deployment.
If you have found your real-time streaming applications stopping the world through rebalancing, starting up slower than expected through a routine deployment, taking an age to restore state or over time found them to be less reliable as your platform engineers make your Kubernetes cluster more awesome, you will hopefully find something in this talk you could apply tomorrow."
Maximum Availability Architecture - Best Practices for Oracle Database 19c – Glen Hawkins
Provides the latest updates on high availability (HA) best practices in this well-established technical deep-dive session. Learn how to optimize all aspects of Oracle Active Data Guard 19c. See how to use session draining, transparent application continuity, Oracle RAC, and Oracle GoldenGate to mask outages and planned maintenance from users and to accelerate time to repair for single database or your fleet of databases. Hear about the latest HA best practices with Oracle Multitenant and understand how the new sharded architecture can achieve even higher levels of HA and fault isolation for OLTP applications. Find out how everything you know about Oracle Maximum Availability Architecture (MAA) on-premises can be deployed in the cloud.
Relational databases vs Non-relational databases – James Serra
There is a lot of confusion about the place and purpose of the many recent non-relational database solutions ("NoSQL databases") compared to the relational database solutions that have been around for so many years. In this presentation I will first clarify what exactly these database solutions are, compare them, and discuss the best use cases for each. I'll discuss topics involving OLTP, scaling, data warehousing, polyglot persistence, and the CAP theorem. We will even touch on a new type of database solution called NewSQL. If you are building a new solution it is important to understand all your options so you take the right path to success.
Using Apache Spark and MySQL for Data Analysis – Sveta Smirnova
The document discusses using Apache Spark and MySQL for data analysis. It provides examples of loading Wikipedia usage statistics (Wikistats) data into both MySQL and Spark for analysis. Loading the full 10+ TB of Wikistats data into MySQL took over a month, while Spark was able to scan and analyze the entire dataset in under an hour by leveraging its ability to perform distributed, parallel processing across multiple nodes. The document compares key differences between Spark and MySQL for big data processing, such as Spark's lack of indexes but ability to perform full scans in parallel across nodes.
This presentation covers a number of the ways you can tune PostgreSQL to better handle high write workloads. We will cover both application and database tuning methods, as each type can have substantial benefits but can also interact in unexpected ways when you are operating at scale. On the application side we will look at write batching, use of GUIDs, general index structure, the cost of additional indexes and the impact of working set size. For the database we will see how WAL compression, autovacuum and checkpoint settings, as well as a number of other configuration parameters, can greatly affect the write performance of your database and application.
Introduction to Testing GraphQL Presentation – Knoldus Inc.
Explore strategies and best practices for testing systems built on event-driven architecture. Learn how to ensure the reliability and responsiveness of event-driven applications through comprehensive testing methodologies.
Testing GraphQL Presentation (Test Automation) – Knoldus Inc.
Discover how to test GraphQL effectively! This session dives into checking if your data queries and changes work correctly. You'll learn to ensure your data stays safe and accurate. We'll cover practical examples to help you understand these methods better by the end of our session.
This is a basic presentation that can help you understand the fundamental concepts of GraphQL, how it can be used to simplify the frontend integration of projects, and how it helps reduce data-fetching time.
It also explains the core features of GraphQL and why it is a great alternative to REST APIs, along with the procedure for integrating it into our projects.
Tutorial: Building a GraphQL API in PHP – Andrew Rota
This document discusses building a GraphQL API in PHP. It provides an overview of GraphQL concepts like queries, fields, types and schemas. It then outlines the steps to build a GraphQL API in PHP using the graphql-php library:
1. Define object types and the Query root type in the schema
2. Initialize the GraphQL schema instance
3. Execute GraphQL queries against the schema and return the result
By following these steps, one can build an API for querying a database of PHP conferences and speakers to understand how to build GraphQL APIs in PHP.
GraphQL - A query language to empower your API consumers (NDC Sydney 2017) – Rob Crowley
The shift to microservices, cloud native and rich web apps have made it challenging to deliver compelling API experiences. REST, as specified in Roy Fielding’s seminal dissertation, has become the architectural pattern of choice for APIs and when applied correctly allows for clients and servers to evolve in a loosely coupled manner. There are areas however where REST can deliver less than ideal client experiences. Often many HTTP requests are required to render a single view.
While this may be a minor concern for a web app running on a WAN with low latency and high bandwidth, it can yield poor client experiences for mobile clients in particular. GraphQL is Facebook’s response to this challenge and it is quickly proving itself as an exciting alternative to RESTful APIs for a wide range of contexts. GraphQL is a query language that provides a clean and simple syntax for consumers to interrogate your APIs. These queries are strongly typed, hierarchical and enable clients to retrieve only the data they need.
In this session, we will take a hands-on look at GraphQL and see how it can be used to build APIs that are a joy to use.
GraphQL is an application layer query language from Facebook. With GraphQL, you can define your backend as a well-defined graph-based schema. Then client applications can query your dataset as they are needed. GraphQL’s power comes from a simple idea — instead of defining the structure of responses on the server, the flexibility is given to the client. Will GraphQL do to REST what REST did to SOAP?
GraphQL with .NET Core Microservices – Knoldus Inc.
In this webinar, we will talk about GraphQL with .NET, which provides a modern and flexible approach to building APIs. It empowers developers to create efficient and tailored APIs that meet the specific needs of their applications and clients.
GraphQL is quickly becoming mainstream as one of the best ways to get data into your React application. When we see people modernize their app architecture and move to React, they often want to migrate their API to GraphQL as part of the same effort. But while React is super easy to adopt in a small part of your app at a time, GraphQL can seem like a much larger investment. In this talk, we’ll go over the fastest and most effective ways for React developers to incrementally migrate their existing APIs and backends to GraphQL, then talk about opportunities for improvement in the space. If you’re using React and are interested in GraphQL, but are looking for an extra push to get it up and running at your company, this is the talk for you!
GraphQL is a syntax that describes how to ask for data, and is generally used to load data from a server to a client. GraphQL has three main characteristics:
It lets the client specify exactly what data it needs.
It makes it easier to aggregate data from multiple sources.
It uses a type system to describe data.
GraphQL and its schema as a universal layer for database access – Connected Data World
GraphQL is a query language mostly used to streamline access to REST APIs. It is seeing tremendous growth and adoption, in organizations like Airbnb, Coursera, Docker, GitHub, Twitter, Uber, and Facebook, where it was invented.
As REST APIs are proliferating, the promise of accessing them all through a single query language and hub, which is what GraphQL and GraphQL server implementations bring, is alluring.
A significant recent addition to GraphQL was SDL, its schema definition language. SDL enables developers to define a schema governing interaction with the back-end that GraphQL servers can then implement and enforce.
Prisma is a productized version of the data layer leveraging GraphQL to access any database. Prisma works with MySQL, Postgres, and MongoDB, and is adding to this list.
Prisma sees the GraphQL community really coming together around the idea of schema-first development, and wants to use GraphQL SDL as the foundation for all interfaces between systems.
This document discusses how GraphQL and graph databases are well-suited for each other. GraphQL allows clients to request specific data in a hierarchical format, while graph databases are optimized for traversing relationships. Translating a GraphQL query to a single graph database query could prevent the "N+1 queries problem" and improve performance. The document demonstrates a proof of concept for translating GraphQL queries to queries in Neo4j and OrientDB graph databases.
In this short presentation we will see what GraphQL is and what it allows us to do, and how this new approach to APIs can be integrated with a graph database in an efficient way.
GraphQL has grown out of its baby shoes and is becoming the new standard for client-server communication. When it was introduced 2 years ago, there merely was any tooling that would help developers using it except for Facebook's reference implementation in JavaScript as well as corresponding middleware for Express so you could embed it in your web server. By now, the situation has changed drastically and a plethora of tools, libraries and services have entered the GraphQL ecosystem, providing great improvements to workflows and overall developer experience. In this talk, Nikolas will give an overview of the most relevant tools that exist in the GraphQL ecosystem today, ensuring you can make the best choices when starting your own GraphQL journey.
This document summarizes and compares several GraphQL libraries for Java: graphql-java, graphql-java-kickstart, and dgs-framework. It discusses their features for defining schemas and types, handling data fetching and caching, performing mutations, handling errors, testing functionality, and code generation capabilities. Overall, dgs-framework requires the least amount of boilerplate code, supports testing and code generation more fully, and is designed specifically for use within Spring Boot applications.
In this talk, I go over some of the concerns people initially have when adding GraphQL to their existing frontends and backends, and cover some of the tools that can be used to address them.
This document provides an overview and agenda for a workshop on building a full stack GraphQL application using Neo4j AuraDB, Next.js, and Vercel. The agenda includes introductions to Neo4j AuraDB, building GraphQL APIs, Next.js, and deploying to Vercel. Hands-on exercises will have attendees create a Neo4j AuraDB instance, build GraphQL APIs backed by Neo4j, develop a Next.js frontend application, and deploy the full stack application to Vercel.
apidays LIVE Australia 2020 - Have your cake and eat it too: GraphQL? REST? W... – apidays
apidays LIVE Australia 2020 - Building Business Ecosystems
Have your cake and eat it too: GraphQL? REST? Why not have both!
Roy Mor, Technical Lead at Sisense
How easy (or hard) it is to monitor your GraphQL service performance – Red Hat
- GraphQL performance monitoring can be challenging as queries can vary significantly even when requesting the same data. Traditional endpoint monitoring provides little insight.
- Distributed tracing using OpenTracing allows tracing queries to monitor performance at the resolver level. Tools like Jaeger and plugins for Apollo Server and other GraphQL servers can integrate tracing.
- A demo showed using the Apollo OpenTracing plugin to trace a query through an Apollo server and resolver to an external API. The trace data was sent to Jaeger for analysis to help debug performance issues.
What do a Lego brick and the XZ backdoor have in common? – Speck&Tech
ABSTRACT: At first glance, what a Lego brick and the XZ backdoor have in common might be that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case have much more in common than that.
Join the presentation to dive into a story of interoperability, standards and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several events, migrations and training activities related to LibreOffice. Previously she worked on LibreOffice migrations and training for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not following her passion for computers and for Geeko, she cultivates her curiosity about astronomy (which is where her nickname, deneb_alpha, comes from).
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Programming Foundation Models with DSPy - Meetup Slides – Zilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Communications Mining Series - Zero to Hero - Session 1 – DianaGray10
This session provides an introduction to UiPath Communications Mining, its importance and a platform overview. You will acquire a good understanding of the phases in Communications Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... – Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! – SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Building Production Ready Search Pipelines with Spark and Milvus – Zilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
UiPath Test Automation using UiPath Test Suite series, part 5 – DianaGray10
Welcome to the UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Unlock the Future of Search with MongoDB Atlas: Vector Search Unleashed – Malak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
TrustArc Webinar - 2024 Global Privacy Survey – TrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence – IndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
2. REST – Rest in peace
Long live GraphQL
Hello GraphQL
Reviews from developers: ++
3. Index
History
What is GraphQL
Introduction
Rapidly growing community
Alternative to REST
CRUD in GraphQL
4. History
Invented by Facebook
Presented publicly at the React.js Conf in 2015 and made open source
Specification stable version – 2018
Find releases - https://graphql.github.io/graphql-spec/
Timeline: 2012 – development started; 2015 – open sourced; 2016–2019 – evolving specification
5. What is GraphQL
• GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data.
• GraphQL provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.
• GraphQL is not a database, it’s a specification
6. Introduction
Provides an efficient, powerful and flexible approach to developing web APIs
Enables declarative data fetching
Exposes a single endpoint and responds to queries
Alternative to REST services
GraphQL is not only for React developers
It can be used with any programming language and framework
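To make the single-endpoint, declarative style concrete, here is a small illustrative exchange (the field names and values are invented for this sketch and roughly follow the User/Post schema shown later). Every request is an HTTP POST to the same /graphql endpoint, and the response mirrors the shape of the query:

POST /graphql
{ "query": "{ user(id: 1) { name posts { name } } }" }

Response:
{ "data": { "user": { "name": "Deepak", "posts": [ { "name": "Hello GraphQL" } ] } } }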
7. A rapidly growing community
Find more users – https://graphql.org/users/
8. What we need
• User
• User's posts
• Followers (likes)
• Commenters
14. Problem with advanced requests
REST API: GET /users/1/friends/1/dogs/1?include=user.name,dog.age
GraphQL:
query {
  user(id: 1) {
    name
    friends {
      dogs {
        age
      }
    }
  }
}
15. GraphQL vs REST comparison
Endpoints: REST – multiple; GraphQL – single
Fetching data: REST – over-fetching and under-fetching; GraphQL – fetches exactly the requested data
Dependency on endpoints: REST – highly dependent; GraphQL – not dependent
Iteration speed on the frontend: REST – slower; GraphQL – faster
Advanced requests: REST – complex; GraphQL – easy
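As a sketch of the over-fetching point (the endpoint and the extra fields are invented for illustration): a REST endpoint returns the whole resource even when the client needs only one field, while the equivalent GraphQL query returns exactly what was asked for.

REST: GET /users/1
{ "id": 1, "name": "Deepak", "contactNo": 987654321, "posts": [], "lastLogin": "2019-05-01" }

GraphQL: query { user(id: 1) { name } }
{ "data": { "user": { "name": "Deepak" } } }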
16. GraphQL Type System
Object types
Interfaces
Unions
Enumerations
Fields
Lists
Scalar types
Int
Float
String
Boolean
ID (serialized as a String)
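A minimal SDL sketch of these type kinds, assuming the User and Post object types defined on the next slide; the Node interface, Role enum and SearchResult union are invented names for illustration:

interface Node {
  id: ID!
}

enum Role {
  ADMIN
  USER
}

union SearchResult = User | Post

type User implements Node {
  id: ID!
  name: String!
  role: Role!
  posts: [Post!]
}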
17. Schema Basics
GraphQL has its own type system that’s used to define the schema of an API.
The syntax for writing schemas is called Schema Definition Language (SDL).
The ! following a type means that the field is non-nullable (required).
type User {
  name: String!
  contactNo: Int!
  posts: [Post!]
}
type Post {
  name: String!
  author: User!
}
Relationship: User 1..N Post (one user can have many posts; each post has one author)
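One detail the slide leaves implicit is the root Query type that exposes entry points into this schema. A minimal, assumed version (the field names are illustrative) could look like this:

type Query {
  user(id: ID!): User
  posts: [Post!]!
}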
18. CRUD in GraphQL
GraphQL operations
Query – used to read data
Mutation – used to write data
Subscription – used to listen for data changes
query {
  search(id: "123456") {
    name
    posts(last: 1) {
      title
    }
    followers {
      name
    }
  }
}
mutation {
  createUser(name: "Deepak", age: 28) {
    id
  }
}
subscription {
  onCreate {
    name
    age
  }
}
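Operations can also be named and parameterised with variables, which keeps client code free of string concatenation. A sketch reusing the createUser field from the slide (the operation and variable names are arbitrary):

mutation CreateUser($name: String!, $age: Int!) {
  createUser(name: $name, age: $age) {
    id
  }
}

Variables (sent alongside the query as JSON):
{ "name": "Deepak", "age": 28 }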
19. Self-explanatory
query {
  book(isbn: "SEBT125") {
    name
    author {
      firstName
      lastName
    }
  }
}
Give me the book with isbn = "SEBT125"
Give me the book's name
Give me the book's author
Give me the author's first name
Give me the author's last name
20. GraphQL Execution Engine
Parses the query from the client
Validates the query against the schema
Executes resolvers for each field
Returns a JSON response
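A minimal graphql-java sketch of this pipeline, assuming the graphql-java library is on the classpath; the Book schema and the hard-coded data are invented for the example:

import graphql.ExecutionResult;
import graphql.GraphQL;
import graphql.schema.GraphQLSchema;
import graphql.schema.idl.RuntimeWiring;
import graphql.schema.idl.SchemaGenerator;
import graphql.schema.idl.SchemaParser;
import graphql.schema.idl.TypeDefinitionRegistry;
import java.util.Map;

public class ExecutionEngineDemo {
    public static void main(String[] args) {
        String sdl = "type Query { book(isbn: String!): Book } "
                   + "type Book { name: String author: String }";

        // Parse the SDL and wire a resolver (data fetcher) for Query.book
        TypeDefinitionRegistry registry = new SchemaParser().parse(sdl);
        RuntimeWiring wiring = RuntimeWiring.newRuntimeWiring()
                .type("Query", t -> t.dataFetcher("book",
                        env -> Map.of("name", "Hello GraphQL", "author", "Deepak")))
                .build();
        GraphQLSchema schema = new SchemaGenerator().makeExecutableSchema(registry, wiring);

        // Parse + validate + execute the incoming query, then return the JSON-shaped result
        GraphQL graphQL = GraphQL.newGraphQL(schema).build();
        ExecutionResult result = graphQL.execute("{ book(isbn: \"SEBT125\") { name author } }");
        System.out.println(result.getData());   // {book={name=Hello GraphQL, author=Deepak}}
        System.out.println(result.getErrors()); // [] when the query is valid
    }
}

If the query were invalid (for example asking for a field that does not exist), execute() would not throw; the validation errors would appear in result.getErrors() instead, as the error-handling slide later shows.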
21. Server Architecture
/graphql (single endpoint)
Schema validation layer
Resolvers (Query, Mutation, Subscription)
Business layer
DAO layer
/graphiql (editor)
22. Two ways to access a GraphQL app
GraphiQL desktop application (Windows) – https://electronjs.org/apps/graphiql
GraphiQL server utility – http://localhost:8080/graphiql
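Besides the GraphiQL editor, any HTTP client can talk to the single endpoint. A sketch using the standard Java 11 HttpClient against the local URL above (a GraphQL server running on localhost:8080 is assumed):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GraphQLHttpClientDemo {
    public static void main(String[] args) throws Exception {
        // The GraphQL request is a plain JSON document with a "query" field
        String body = "{\"query\": \"{ book(isbn: \\\"SEBT125\\\") { name } }\"}";

        HttpRequest request = HttpRequest.newBuilder(URI.create("http://localhost:8080/graphql"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // expected shape: {"data":{"book":{"name":...}}}
    }
}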
23. Validation / Error Handling in GraphQL
Error response:
{
  "data": null,
  "errors": [
    {
      "message": "Cannot query field 'names' on type 'Book'. Did you mean 'name'? (line 52, column 5):\n name\n ^",
      "locations": [
        {
          "line": 52,
          "column": 5
        }
      ]
    }
  ]
}
query {
  searchbook(name: "java") {
    names
    author {
      firstName
      lastName
    }
  }
}
24. Demo
Used technologies
Node.js, Express, MySQL database (server configuration)
ReactJS, WebSockets, Apollo Client (using React render props)
Demo covers:
Book creation
Subscription
25. How to do server-side caching
Server-side caching is still a challenge with GraphQL.
26. 1) Should I stop using REST APIs now?
2) Should I migrate my current application to GraphQL?
The answer is no.
27. Authentication and Authorization in GraphQL
Authentication can be done by using OAuth
To implement authorization, it is recommended to delegate any data access logic to the business logic layer and not handle it directly in the GraphQL implementation.
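A small Java sketch of that recommendation (the Caller and Book types and the BookService are hypothetical, and a recent graphql-java version is assumed): the resolver only forwards the caller's identity, and the business layer applies the access rules.

import graphql.GraphQLContext;
import graphql.schema.DataFetcher;

public class AuthorizationSketch {

    // Hypothetical domain types for the sketch
    record Caller(String id, boolean admin) {}
    record Book(String isbn, String name) {}

    // The business layer owns the authorization decision
    static class BookService {
        Book findBook(String isbn, Caller caller) {
            if (!caller.admin()) {
                throw new RuntimeException("not authorized");
            }
            return new Book(isbn, "Hello GraphQL");
        }
    }

    // The resolver contains no access rules; it only forwards the caller identity
    static DataFetcher<Book> bookFetcher(BookService service) {
        return env -> {
            GraphQLContext ctx = env.getGraphQlContext();
            Caller caller = ctx.get("caller"); // assumed to be placed there by an OAuth/authentication filter
            return service.findBook(env.getArgument("isbn"), caller);
        };
    }
}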
28. Disadvantages
You need to learn how to set up GraphQL. The ecosystem is still rapidly evolving, so you have to keep up.
You need to define the schema beforehand => extra work before you get results
You need to have a GraphQL endpoint on your server => new libraries that you don't know yet
The server needs to do more processing to parse the query and verify the parameters
29. Resources
Find libraries for your technology – https://graphql.org/code/
Online editor – https://lucasconstantino.github.io/graphiql-online/
30. References
https://graphql.org/learn/
https://www.howtographql.com/
https://blog.apollographql.com/graphql-vs-rest-5d425123e34b
31. Demo
Used technologies
Spring Boot, H2 database (server configuration)
ReactJS, Apollo Client (using React render props)
Demo covers:
Book creation
Book deletion
Books list
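A rough sketch of what the Spring Boot side of such a demo could look like using the Spring for GraphQL annotations. This is an assumption about the setup, not necessarily the code shown in the session; it presumes the spring-boot-starter-graphql dependency and a schema.graphqls defining Query.books, Mutation.createBook and Mutation.deleteBook with a matching Book type:

import java.util.Collection;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.graphql.data.method.annotation.Argument;
import org.springframework.graphql.data.method.annotation.MutationMapping;
import org.springframework.graphql.data.method.annotation.QueryMapping;
import org.springframework.stereotype.Controller;

@Controller
public class BookController {

    // In-memory store standing in for the H2-backed repository of the demo
    private final Map<String, Book> books = new ConcurrentHashMap<>();

    @QueryMapping
    public Collection<Book> books() {              // resolves Query.books
        return books.values();
    }

    @MutationMapping
    public Book createBook(@Argument String isbn, @Argument String name) {
        Book book = new Book(isbn, name);          // resolves Mutation.createBook
        books.put(isbn, book);
        return book;
    }

    @MutationMapping
    public boolean deleteBook(@Argument String isbn) {
        return books.remove(isbn) != null;         // resolves Mutation.deleteBook
    }

    public record Book(String isbn, String name) {}
}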
Editor's Notes
In simple language, GraphQL is actually a specification on top of HTTP for how you request and receive resources from a server.
GraphQL helps where your client needs a flexible response format to avoid extra queries and/or massive data transformation with the overhead of keeping them in sync.
Using a GraphQL server makes it very easy for a client-side developer to change the response format without any change on the backend.
A single endpoint serves your purpose.
Over-fetching is fetching too much data, meaning there is data in the response you don't use.
Under-fetching is not having enough data with a call to an endpoint, forcing you to call a second endpoint.
With GraphQL, you can describe the required data in a more natural way. It can speed up development, because in application structures like top-down rendering in React, the required data is more similar to your component structure.
One common concern with GraphQL, especially when comparing it to REST, is the difficulty of maintaining a server-side cache. With REST, it’s easy to cache the data for each endpoint, since the structure of the data will not change.
With GraphQL, on the other hand, it’s not clear what a client will request next, so putting a caching layer right behind the API doesn’t make a lot of sense.
Should I abandon REST APIs and always use GraphQL? Is there no use for REST APIs now?
I would say no.
There is no mandate to migrate your application to GraphQL right now; there is no immediate need that forces you to implement GraphQL.
GraphQL handles millions upon millions of queries in a single day.
REST services will still be there.