If you’ve ever argued about the way your JSON responses should be formatted, JSON:API can be your anti-bikeshedding tool. JSON:API is a great way to expose a consistent API in your application.
In this session, we will talk about how JSON:API got to where it is today and how it can help you make Drupal the core of all your online transactions. We will check out the specification, look at the main benefits of JSON:API, and see how Drupal implemented the spec.
Expect to learn the structure and features of the JSON:API specification and why it should be your smart default. You should be able to get started right away with the examples we provide in this session.
Among all the big front-end frameworks, Nuxt.js stands out, as it has a lot of advantages over the others. This presentation covers an overview of Nuxt.js and how server-side rendering helps improve the SEO of a site.
https://www.oxygenxml.com/events/2021/webinar_the_new_json_schema_diagram_editor.html
A webinar in which we will show you how Oxygen now offers even more powerful tools that allow you to design, develop, and edit JSON Schemas. We will be focusing on presenting features ranging from the new intuitive and expressive visual schema Design mode, all the way up to the JSON Schema documentation generator that includes diagram images for each component.
During this live webinar, you will get the chance to take an in-depth look at all of these features, as well as learn:
How to create JSON Schemas from scratch
How to visualize and edit complex JSON Schemas
How to generate JSON Schema documentation
A talk explaining why the Apps team at globo.com adopted React Native as its solution.
I also show React Native code examples and explain how its architecture works.
Version 6 of Adobe Experience Manager (AEM 6) is a major release that introduces significant innovations. Sightly is a new template system to be used in place of (or together with) JSP. Along with Sling Models, Sightly strongly improves the separation between logic and presentation. The development effort is reduced because a Sightly template is an HTML5 document, easily maintainable even by front-end developers.
The presentation provides an overview of the basic features of Sightly and introduces the fundamentals of the new development model with the support of tools released together with AEM 6.
In the premiere of the Node.js series, we talk about the history and main characteristics of the platform, such as V8, the event loop, and the thread pool.
Through several examples, we show how Node.js works and which aspects are important in terms of scalability and performance.
https://www.youtube.com/watch?v=KtDwdoxQL4A
In this session you will learn:
Spring Framework overview and its salient features
Spring concepts (IoC container / DI)
Spring-AOP basics
Spring ORM / Spring DAO overview
Spring Web / MVC overview
For more information, visit: https://www.mindsmapped.com/courses/software-development/java-developer-training-for-beginners/
This slide deck is a basic tutorial on using JavaScript: how JavaScript runs and is used, plus JavaScript statements, variables, loops, and operators.
This text presentation attempts to hit the highlight features and structure of Django and its ecosystem. It is intended as an introduction for those who are curious about what it is.
In this world of microservices, I am building a monolith app. In this world of React and Vue, I am building a server-side rendered app.
However, I need JavaScript. I can’t avoid that. I need some parts of the page updated dynamically, and I need to show/hide certain parts of the page depending on user actions.
I don’t want jQuery for the obvious reasons: it’s slow, heavy, and of course it can easily create spaghetti code.
That is when I came across Stimulus JS, a modest JavaScript framework. It sprinkles JavaScript to add behaviour to your HTML.
It has controllers, actions, and targets (i.e. the HTML elements). Moreover, it pairs well with Turbolinks, so I don’t need to do the circus of converting JSON to DOM.
I’ve been using Stimulus for over a year and it’s been quite good. This talk is about my experiences with Stimulus, with a few examples. I will share recommendations on where it might be useful and where it is not.
Angular 2 has finally hit the shelves, and it is not just an upgrade. The producers of Angular have released Angular 2, and it stands miles apart from the original framework. Angular 2 is a modern and robust framework that is faster, more expressive, and more flexible. Here are a few interesting facts about Angular 2 that you may need to get started with this brilliant framework.
Solutions for bi-directional integration between Oracle RDBMS & Apache Kafka (Guido Schmutz)
A Kafka cluster stores streams of records (messages) in categories called topics. It is the architectural backbone for integrating streaming data with a Data Lake, Microservices and Stream Processing. Today’s enterprises often have their core systems implemented on top of relational databases, such as the Oracle RDBMS. Implementing a new solution supporting the digital strategy using Kafka and its ecosystem cannot always be done completely separately from the traditional legacy solutions. Often streaming data has to be enriched with state data which is held in the RDBMS of a legacy application. It’s important to cache this data in the stream processing solution so that it can be efficiently joined to the data stream. But how do we make sure that the cache is kept up to date if the source data changes? We can either poll for changes from Kafka using Kafka Connect or let the RDBMS push the data changes to Kafka. But what about writing data back to the legacy application, e.g. when an anomaly detected inside the stream processing solution should trigger an action inside the legacy application? Using Kafka Connect we can write to a database table or view, which could trigger the action. But this is not always the best option. If you have an Oracle RDBMS, there are many other ways to integrate the database with Kafka, such as Advanced Queuing (a message broker in the database), CDC through GoldenGate or Debezium, Oracle REST Data Services (ORDS) and more. In this session, we present various blueprints for integrating an Oracle RDBMS with Apache Kafka in both directions and discuss how these blueprints can be implemented using the products mentioned before.
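The polling approach described above is typically done with Kafka Connect's JDBC source connector. Here is a minimal sketch of such a connector configuration; the connector name, host names, table, and credentials are all hypothetical placeholders, and in practice this JSON would be POSTed to the Kafka Connect REST API.

```python
import json

# Hypothetical JDBC source connector config that polls an Oracle table for
# changes; all names, hosts, and credentials are illustrative only.
connector_config = {
    "name": "oracle-customers-source",  # hypothetical connector name
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:oracle:thin:@//db.example.com:1521/ORCLPDB1",
        "connection.user": "kafka_connect",
        "connection.password": "********",
        # Detect new/changed rows via a timestamp plus an incrementing id column
        "mode": "timestamp+incrementing",
        "timestamp.column.name": "LAST_MODIFIED",
        "incrementing.column.name": "CUSTOMER_ID",
        "table.whitelist": "CUSTOMERS",
        "topic.prefix": "oracle-",  # rows land in topic "oracle-CUSTOMERS"
    },
}

# Registering the connector would be an HTTP POST of this payload to the
# Connect REST API, e.g. POST http://connect.example.com:8083/connectors
payload = json.dumps(connector_config, indent=2)
print(payload)
```

Note that polling in this mode only sees inserts and updates, not deletes; that is one reason the abstract contrasts it with log-based CDC tools such as GoldenGate or Debezium.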
Solutions for bi-directional integration between Oracle RDBMS & Apache Kafka (Guido Schmutz)
Apache Kafka is a popular distributed streaming data platform. A Kafka cluster stores streams of records (messages) in categories called topics. It is the architectural backbone for integrating streaming data with a Data Lake, Microservices and Stream Processing. Data sources flowing into Kafka are often native data streams such as social media streams, telemetry data, financial transactions and many others. But these data streams only contain part of the information. A lot of the data necessary in stream processing is stored in traditional systems backed by relational databases. To implement new, modern, real-time solutions, an up-to-date view of that information is needed. So how do we make sure that information can flow between the RDBMS and Kafka, so that changes are available in Kafka as soon as possible, in near real-time? This session will present different approaches for integrating relational databases with Kafka, such as Kafka Connect, Oracle GoldenGate and bridging Kafka with Oracle Advanced Queuing (AQ).
Web APIs have revolutionized all kinds of products and services, and they continue to do so. Nowadays the most relevant architecture is REST, along with the JSON media type. Furthermore, lots of specifications for serializing those media types are appearing. JSON API released its first version last May.
OData: Universal Data Solvent or Clunky Enterprise Goo? (GlueCon 2015) (Pat Patterson)
Why would anyone but the most pedestrian enterprise developer be interested in a data access protocol originally designed by Microsoft, implemented in XML and handed to OASIS for standardization? The Open Data Protocol, or OData for short, has evolved into a clean, RESTful interface for CRUD operations against data services. Alongside the usual enterprise suspects such as Microsoft, Salesforce and IBM, OData has been adopted by government and non-profit agencies to open up their data and make it accessible to the public. For developers wanting to consume data, or create their own OData services, there's no shortage of open source options, from Apache Olingo in Java to node-odata and ODataCpp. Whether you're accessing customer orders in SAP or the Whitehouse visitor book, you're going to need some OData smarts.
Postman is a tool for designing, sharing, and testing APIs between a group of collaborators ranging from the API developers down to the final clients, be they mobile apps or web apps.
This presentation focuses on using Postman's advanced free features, with a special focus on testing.
I have linked an example collection which I refer to several times during the presentation.
Section 1 - Fundamentals
Environments, variables, collections, and workspaces
Roles, VCS
Section 2 - Scripts & Testing
Pre-request scripts and tests
Scopes
Pass data between requests
Section 3 - Integrated testing
Collection runners: read data from files, workflows
Monitors
CI/CD integration with Newman
Section 4 - More!
Documentation
Mock server
Integrations
RESTful web services with Python: Flask and Django solutions (Solution4Future)
The slides contain RESTful solutions based on Python frameworks like Flask and Django. The presentation introduces the REST concept, presents benchmarks and research on the best solutions, analyzes performance problems, and shows how to get better results simply. Finally, it presents source code showing how to make your own RESTful API in Flask and Django in 15 minutes.
Workshop: EmberJS - In Depth
- Ember Data - Adapters & Serializers
- Routing and Navigation
- Templates
- Services
- Components
- Integration with 3rd party libraries
Presented by engineers Mario García and Marc Torrent
Agenda:
MongoDB Overview/History
Workshop
1. How to perform operations in MongoDB – workshop
2. Using MongoDB in your Java application
Advanced usage of MongoDB
1. Performance measurement comparison – real-life use cases
2. Cluster setup
3. Cons of MongoDB compared with other document-oriented DBs
4. Map-reduce / aggregation overview
Workshop prerequisite
1. All participants must bring their laptops.
2. https://github.com/geek007/mongdb-examples
3. Software prerequisite
a. Java version 1.6+
b. Your favorite IDE, Preferred http://www.jetbrains.com/idea/download/
c. MongoDB server version – 2.6.3 (http://www.mongodb.org/downloads - 64 bit version)
d. Participants can install MongoDB client – http://robomongo.org/
About Speaker:
Akbar Gadhiya is working with Ishi Systems as a Programmer Analyst. Previously he worked with PMC, Baroda and HCL Technologies.
"Impact of front-end architecture on development cost", Viktor Turskyi (Fwdays)
I have heard many times that architecture is not important on the front end. I have also seen many times how developers implement features on the front end just by following the standard rules of a framework, thinking that this is enough to launch the project successfully, and then the project fails. How can you prevent this, and which approach should you choose? I have launched dozens of complex projects, and during the talk we will analyze which approaches have worked for me and which have not.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Kubernetes & AI - Beauty and the Beast!?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I was wondering, as an "infrastructure container Kubernetes guy", how does this fancy AI technology get managed from an infrastructure operations point of view? Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could be beneficial for, or limiting, your AI use cases in an enterprise environment. An interactive demo will give you some insights into which approaches I have already gotten working for real.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities, spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Solutions Apricot) (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Speakers:
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
PHP Frameworks: I want to break free (IPC Berlin 2024) (Ralf Eggert)
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes real work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We then held a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
24. History of {JSON:API}
- Ember.js creator
- Rust Core Team
- Retired Ruby on Rails and jQuery core teams.
- ECMAScript's TC39 standards committee member
- W3C's TAG (Technical Architecture Group) member
Yehuda Katz
25. Eliminate the need for ad-hoc code per
application to communicate with servers
that communicate in a well-defined way.
36. Compound documents
To reduce the number of HTTP requests,
servers MAY allow responses that include
related resources along with the requested
primary resources. Such responses are
called “compound documents”.
48. Filtering in Drupal
GET /articles?filter[field_name]=value&filter[field_other]=value HTTP/1.1
Accept: application/vnd.api+json
49. Short and normal - Filtering in Drupal
SHORT:
?filter[field_first_name]=Janis
NORMAL:
?filter[a-label][condition][path]=field_first_name
&filter[a-label][condition][operator]=%3D (an encoded ‘=‘)
&filter[a-label][condition][value]=Janis
61. Examples - Filtering in Drupal
SHORT
filter[uid.name][value]=admin
NORMAL
filter[name-filter][condition][path]=uid.name
filter[name-filter][condition][value]=admin
Get nodes created by user admin
62. Examples - Filtering in Drupal
SHORT
filter[title][operator]=CONTAINS&filter[title][value]=Foo
NORMAL
filter[title-filter][condition][path]=title
filter[title-filter][condition][operator]=CONTAINS
filter[title-filter][condition][value]=Foo
Filter nodes where ‘title’ contains ‘Foo’
63. Examples - Filtering in Drupal
FILTER BY LOCALITY
filter[field_address][condition][path]=field_address.locality
filter[field_address][condition][value]=Mordor
FILTER BY ADDRESS LINE
filter[field_address][condition][path]=field_address.address_line1
filter[field_address][condition][value]=Rings Street
Filter by non-standard complex field
68. Pagination
A server MAY provide links to traverse a
paginated data set (“pagination links”).
The page query parameter is reserved for
pagination. Servers and clients SHOULD use
this key for pagination operations.
84. • Presentation, these links and more
• https://swis.nl/drupaljam
• Leave your email there
• Slack
• #jsonapi and #contenta
• JSON:API specification
• https://jsonapi.org
• Drupal’s great JSON:API documentation
• https://www.drupal.org/docs/8/modules/jsonapi
• API first initiative
• https://www.drupal.org/about/strategic-initiatives/api-first
Questions?
- Before we start, please feel free to ask questions at any point.
- Check out https://swis.nl/drupaljam for this presentation and all mentioned links.
- Hi everyone, my name is Björn Brala, technical director at SWIS.
- I live in Leiden, in the Netherlands, with my wife, our 4-year-old daughter, and my 3 cats.
- I started developing for the web back in 2003, when frames were fine and Flash was cool, and I’ve been involved with the web ever since.
I started using Drupal when 8 was in beta. The modernization of Drupal sounded like a giant leap forward.
- Fast forward to now, I work at SWIS as technical director.
- SWIS creates digital experiences and supports Drupal, Laravel and multiple front-end frameworks. Our client base spans multiple sectors and sizes. Getting the communication between different systems right is essential in creating sustainable architectures.
We create our converting websites in Drupal, most of the time. Nowadays in order to create a complete digital experience you need to connect to multiple systems to make sure online transactions go smoothly.
Integrating systems such as a company’s CRM, ERP, or financial software streamlines the workflow of employees and the customer journey of clients.
We mostly use Laravel and Vue.js to create custom applications or tools that solve specific business needs for our clients; we always try to use the right tool for the job.
Going forward from this architecture we create a platform where we can help our clients create an optimal digital experience.
Experiences aren’t limited to just single websites anymore.
You could have a normal Drupal website as your corporate site, SPAs to help your clients on the go, native applications, or even APIs to let your clients create more value for your business.
We aren’t just using computers anymore. Mobile, voice, IoT devices, and VR are all part of the online landscape.
We combine these tools to create an optimal digital experience for our clients AND their clients.
So I say; Stop the noise!
We should stop talking about how we communicate and talk about what we communicate. This is where JSON:API comes in.
On the homepage of JSON:API they say: If you’ve ever argued with your team about the way your JSON responses should be formatted, JSON:API can be your anti-bikeshedding tool.
You might think, bikeshedding? What?
Back in the 1950s, a guy named Parkinson made the argument that members of an organization give too much weight to trivial issues. Like, when approving plans for a power plant, discussing what color and material the bike shed should be.
This eventually led to the term bikeshedding: spending too much time discussing the wrong things.
I see this happen all the time. Getting interop right between multiple teams and systems is hard. But a lot of time is wasted talking about the way systems should communicate.
So, guys, stop talking about the color of your bikeshed… any color will do… really.
We love our jobs, we love talking about architectures, we love talking about innovative solutions to problems. I mean, connecting multiple systems is fun!
Discussing the structure of an API for the gazillionth time, not so fun.
JSON:API solves this problem by defining a generic format for all your API responses. No need to talk about the structure, or write a new API client for every new service.
And we can go back to focusing on what we love. Solving new problems.
I’m going to start with an introduction on how JSON:API came to be and how it ended up in Drupal core.
JSON:API was originally drafted by Yehuda Katz in May 2013.
Ember.js creator
Rust Core Team
Retired Ruby on Rails and jQuery core teams.
ECMAScript's TC39 standards committee member
W3C's TAG (Technical Architecture Group) member
This first draft was extracted from the JSON transport implicitly defined by Ember Data’s REST adapter.
Eliminate the need for ad-hoc code per application to communicate with servers that communicate in a well-defined way.
In general, the goal of Ember Data was that you should not require ad-hoc code per application to communicate with servers.
Over the next 2 years they worked quite hard to get the spec complete, and in May 2015 the first 1.0 version was released.
What I find so awesome is that the specification was not crafted in some small backroom in an office, but extracted from solving real life problems and iterating on those solutions. I feel this is the best kind of spec.
A specification is only really viable if it is used in real life. You can’t reap many benefits from a specification if you are the sole user.
There are quite a few companies already using the spec publicly, ignoring the fact that every up-to-date Drupal site is a possible implementation.
Netflix has published an extremely fast JSON:API serializer for Ruby, hinting at internal usage.
Fitbit uses it in exposing a Friends Web API, and an ex-employee did a presentation on how JSON:API helped Fitbit smooth out interservice communication internally.
G2 Crowd, a site to compare software solutions, uses it for their Data API.
The App Store Connect API, released by Apple in 2018, uses the JSON:API spec to help developers automate their development cycle.
KFC UK is currently building a new site, where all content will be loaded from Drupal JSON:API.
So even the bigger companies are embracing JSON:API as a way to be more efficient in publishing APIs, and some are even using Drupal to do so!
There are about 150+ client and server implementations listed on jsonapi.org, available in about 20 different languages!
With regard to Drupal, the client implementations can help you fetch your data into a wide array of languages and frameworks.
Of all the server implementations, Drupal’s is the most complete available.
Drupal has the best server implementation of JSON:API available. But Rome wasn’t built in a day, so here’s a short history of the module.
In 2016, a guy named Mateu Aguiló Bosch (e0ipso, pronounced eh-oh-ipso) started working on a JSON:API contrib module, and he released version 1.0 in May 2017.
Around mid 2016 Dries started floating the idea of JSON:API being part of Drupal core. This changed to a recommendation at the end of 2016.
A year later, Wim Leers and Gabe Sullice were assigned to the JSON:API module by Acquia and started devoting most of their time to getting it ready for core.
This work resulted in a 2.0 release at the beginning of this year, which marked the start of the module’s move to Drupal core. Two months ago, the time was nigh: JSON:API shipped in Drupal 8.7!
So from beginning to end, it took 28 months, 450 commits, 32 releases and more than 5,500 test runs. Pretty impressive really.
So let’s get to the good stuff. Let’s talk about the specification.
JSON:API uses basic HTTP for communication. The HTTP method defines the action to take: just basic GET, POST, PATCH and DELETE requests.
The API responds in JSON, with a specific header for the content type.
This means JSON:API is pretty easy to integrate in an existing hosting platform and automatically makes use of any optimizations you might have implemented. Such as caching, load balancing or other basic http optimizations.
This is an example of a simple GET request. It calls /articles with a proper Accept header. This will result in a list of documents.
The response to such a request is a simple JSON document with the information for that entity. You see the article with its attributes.
You see the relationships for this entity in the relationships section. It contains the basic relation and a reference to that entity. Note that no data of that relation is available here.
Using the links in the document you can retrieve information for related resources.
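As a sketch of what such a document looks like on the client side, here is a hypothetical article response parsed with Python. The resource type, ids, field names and URL are made up for illustration, not taken from the slides:

```python
import json

# A minimal JSON:API document, as it might be returned for a single article.
# Resource type, ids and URLs are illustrative.
raw = """
{
  "data": {
    "type": "node--article",
    "id": "1",
    "attributes": {"title": "Hello JSON:API"},
    "relationships": {
      "author": {
        "data": {"type": "user--user", "id": "42"},
        "links": {"related": "https://example.com/jsonapi/node/article/1/author"}
      }
    }
  }
}
"""
doc = json.loads(raw)
article = doc["data"]
print(article["attributes"]["title"])  # Hello JSON:API
# The relationship carries only a type/id reference plus a link to fetch
# the author; the author's own data is not in this document.
print(article["relationships"]["author"]["data"]["id"])  # 42
```

Following the `related` link in the relationship is how you would retrieve the author document itself.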
But this is only scratching the surface. There are quite a few features which make JSON:API such a lovely spec to use.
So, next up are “compound documents”. These are defined as:
To reduce the number of HTTP requests, servers MAY allow responses that include related resources along with the requested primary resources. Such responses are called “compound documents”.
That sounds pretty awesome, right? So, a compound document is a response which includes the data of all included resources. For example, when requesting an article it may include the data of the author and all related comments for that article.
This is awesome for a few reasons. Getting the full data of a resource you want to consume in one go is unheard of unless specific endpoints are programmed. On the server side of things, caching and invalidating requests like this is easy-peasy!
Let’s have a better look at a compound document.
A compound document adds an ‘included’ property to the response. This property contains an array of documents of all the relations. So here we have an article which contains a comment with id 5.
Let’s have a look at the include.
There you have it, the related comment to this article. The structure is exactly the same. Some things to note though.
It might feel weird for the JSON to only refer to the comment by an ID and store the data in the included tag. But imagine this being an author: multiple articles might refer to the same author, and this way JSON:API only sends the data of the author once. Pretty efficient 😊
When you start retrieving large lists of articles, the includes will get out of hand pretty fast. If you load 10 articles, all with references to taxonomy, users, comments or more, your document will get HUGE.
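How a client puts a compound document back together can be sketched like this; the resource types and field names are illustrative:

```python
# Sketch: resolve relationship references against the "included" array
# of a compound document.
def build_index(document):
    """Map (type, id) -> resource for every included resource."""
    return {(r["type"], r["id"]): r for r in document.get("included", [])}

def resolve(ref, index):
    """Turn a {type, id} reference into the full resource, if it was included."""
    return index.get((ref["type"], ref["id"]))

doc = {
    "data": {
        "type": "node--article", "id": "1",
        "relationships": {"comments": {"data": [{"type": "comment", "id": "5"}]}},
    },
    "included": [
        {"type": "comment", "id": "5", "attributes": {"body": "Nice article!"}},
    ],
}
index = build_index(doc)
ref = doc["data"]["relationships"]["comments"]["data"][0]
print(resolve(ref, index)["attributes"]["body"])  # Nice article!
```

Because the index is keyed on (type, id), a resource that several documents reference still only occupies one entry.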
You don’t always want everything. Sometimes you just need the related author and certain taxonomy should be kept on the server. In order to de-bloat your request you can specify what relations you want by using the ‘include’ parameter.
So here you see I am retrieving an article with only the author included. In the document, every other relation is now only referenced by its id with a link.
This parameter is pretty flexible, if you need more than one include you can just add more relations comma separated.
So now we have the author and comments for this article. But that’s not all. What if we need to go even deeper? Comments have related authors too.
You can include nested relations with ease using a dot-notation. So adding `comments.author` will include all the authors of the related comments.
This level of flexibility makes retrieving the correct information in one go a breeze.
As I said earlier, if you are fetching data from a JSON:API, your responses can get large fast. Especially when the included relationships contain a lot of data. Most of the time you do not need every single attribute defined in the resource, but only want things like author names and titles.
JSON:API provides sparse fieldsets for this use case.
You use sparse fieldsets with the fields parameter. The key of the parameter is the resource type and the value lists the fields you want to retrieve. You can specify the fields once for every type of resource you fetch.
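Combining includes and sparse fieldsets, a request URL might be assembled like this sketch. The resource types and field names are illustrative and depend on your content model:

```python
from urllib.parse import urlencode

# Build an include + sparse-fieldsets query string: only title/created of
# articles and the name of users are returned, with author and comment
# authors included in the compound document.
params = {
    "include": "uid,comments.uid",
    "fields[node--article]": "title,created",
    "fields[user--user]": "name",
}
query = urlencode(params)
print(f"/jsonapi/node/article?{query}")
```

`urlencode` percent-encodes the brackets and commas, which any JSON:API server will decode back into the nested parameter structure.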
So we’ve talked about fetching documents. We talked about slimming down the content of the documents.
But what if we need filtering of collections.
The filter query parameter is reserved for filtering data. Servers and clients SHOULD use this key for filtering operations.
Well, JSON:API only reserves the parameter and the server can decide how to filter.
Ok, that is fine I guess. Let’s try that again.
The filter implementation of Drupal is extremely powerful. This does mean there is a slight learning curve. Let’s start simple.
The simplest, most common filter is a key-value filter. Pretty basic stuff. You can add any amount of filters.
Drupal supports 2 ways of writing filters. This is the short syntax, but if you want more control you can use what Drupal defines as the normal syntax.
Here we give the filter the label ‘a-label’ and we split the path, operator and value into their own parameters.
The path is the field we are filtering on. The operator describes how we compare the value and the value is the thing you compare against. The label is essentially needed to group these 3 parameters together. Make sure your identifiers are unique.
For a normal key-value filter this doesn’t do much. But it opens up a plethora of possibilities later down the line. For example, this way you can have full control over the operator.
The filtering system supports multiple operators, grouping of filters to control conjunctions (AND/OR) and paths to define what to filter on.
Let’s have a closer look at these parts.
The operator describes how we compare the value.
We have quite a lot available, looking at the source the current set supports all the operators you might need.
We have the basics, like equals and greater/less than. But also string, range, set and NULL comparison.
So for example if you want to find all users with a first name starting with a “J” you would use STARTS_WITH. So the operator is STARTS_WITH and the value should be “J”.
Every condition that compares to a single value works like this. Just set the operator and value and you are good to go.
Not every operator compares to a single value. We need a way to send multiple values to the system. We do this by making the value parameter an array.
Let’s try finding users with a few different names. For this we use the IN operator and send the names we want to match on as an array.
This would match all users where the first name is either Janice, John or Jeff. You should use this syntax for the ‘value in set’ and `range` operators.
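Generating the normal syntax from code keeps the bracket bookkeeping manageable. Here is a sketch of an IN filter whose value is an array; the label and field name are made up:

```python
from urllib.parse import urlencode

# "Normal"-syntax IN filter: the value is sent as indexed array keys.
label, names = "name-filter", ["Janice", "John", "Jeff"]
params = {
    f"filter[{label}][condition][path]": "field_first_name",
    f"filter[{label}][condition][operator]": "IN",
}
for i, name in enumerate(names):
    params[f"filter[{label}][condition][value][{i}]"] = name

query = urlencode(params)
print(query)
```

The same pattern works for the range operators, which also take an array of values.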
So, that’s about it for the operators. Next up is condition grouping. Consider the following question:
Show me the users that have the job developer and have the first or last name ‘Smith’.
In order to get that working, we need a way to query first name = Smith OR last name = Smith, grouped together.
For this we need to create a condition group. A condition group is a set of conditions with a set conjunction. This can be either AND or OR.
So this is such a group. A group starts with a label. This label is important because we refer to it in our conditions. We tell the API we are defining a group with the conjunction OR.
After defining the group we can add conditions to it. We use the memberOf property for that. This means that every condition that is a member of the group will use the conjunction defined in that group.
So, quick question; Who notices something missing in this example? …
The operator is missing! The thing is, the default operator is equals, so you do not need to write the equals operator. There are a few shortcuts hidden in the Drupal implementation; you’ll find those in the docs on Drupal.org.
Hopefully you now understand how the grouping helps you create filters. We will look at some more real-life examples later on.
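The developer-named-Smith question from a moment ago can be encoded as a condition group like this sketch; the labels and field names are made up:

```python
from urllib.parse import urlencode

# An OR group with two member conditions, plus an ungrouped condition
# (ungrouped conditions are combined with AND by default).
# Note: no operator is given, so the default (equals) applies.
params = {
    "filter[or-group][group][conjunction]": "OR",
    "filter[first][condition][path]": "field_first_name",
    "filter[first][condition][value]": "Smith",
    "filter[first][condition][memberOf]": "or-group",
    "filter[last][condition][path]": "field_last_name",
    "filter[last][condition][value]": "Smith",
    "filter[last][condition][memberOf]": "or-group",
    "filter[job][condition][path]": "field_job",
    "filter[job][condition][value]": "developer",
}
query = urlencode(params)
print(query)
```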
Next up, paths. In our examples we’ve seen path used to communicate the field we need to filter on.
Earlier, when we talked about includes I showed you the dot notation to refer to nested relations. Paths can do the same.
So for example, say a user’s career is stored as a separate resource. This means if we want to filter on the name of the career, we need to refer to it as field_career.name.
This same technique is used for fields that store sub-properties. So if we think about a phone field, it could have a country code attached to it. This extends to other fields too; a URL field, for example, could have a uri sub-property.
This is a lot to process. To give you a bit more feeling for how you would use this in the real world, I’ll quickly run through some common examples from the documentation.
This one is quite simple, just filter on status = 1
A lot of the time you want to filter by a reference to an entity, in this case the username. You can reach the name through uid.
We’ve seen the CONTAINS operator, but here you can compare the short and normal version.
In the short version the field is the key; in the normal version we label the condition so we can group it later.
Some fields contain multiple values, in this example you see an address field. You can filter by locality or address line quite easily using the same syntax as with relations.
By now we are quite capable of selecting the right set of documents, and trimming down the data to what we need.
But you might want to sort your documents in a more sane order. This could be as simple as sorting by created date, but this can be any number of parameters really.
So let’s start simple. Sorting is done by using the sort parameter. Sorting by a field is done by using…
?sort=created
By default the results are sorted ascending, so unless you want the oldest record first, this is rather useless. In order to reverse the order you can just add a minus sign to it.
?sort=-created
This way the newest record will be first, and you can compile a list of latest articles for example. The sort parameter accepts multiple values, separated by a comma. So, to ask the API to order by author, then created date descending we use;
?sort=author,-created
So that is basic sorting. Up until now, pretty much every parameter you can fill with a field also accepts a value inside a relation. This is also true for the sort parameter. For example, in Drupal, when you’d like to sort by the author name you would use:
?sort=uid.name,-created
This fetches the relation and sorts by related data.
Since the Drupal implementation uses the same parameter parsing as with filtering, it is also possible to write a NORMAL version of the sort parameter.
Though I cannot really find much reason to use that, unless it’s easier for your client to generate. For now it does not support any more features than the SHORT version.
Next up, pagination. You really do need to limit your collection results to sensible amounts.
A server MAY provide links to traverse a paginated data set (“pagination links”).
The page query parameter is reserved for pagination. Servers and clients SHOULD use this key for pagination operations.
There is a pattern here…
So the specification says you may use the ‘page’ parameter to paginate collections. It doesn’t care how you implement this, though it does suggest some strategies. Let’s have a look at how Drupal implemented pagination.
Let’s have a look at a basic example of a paginated response. There are a few things of note in this response. There are 3 pagination links in the links property.
Self: current url
Next: next page
Prev: previous page.
There is also a page[limit] set to 3 in the links. It’s important to note that the very existence of the pagination links carries meaning.
Next exists: there are more pages.
Next doesn’t exist: there are no more pages.
If prev doesn’t exist, then you are on the first page.
If neither next nor prev exists, you are on the only page.
So the response does inform you completely about the state of the pagination.
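Those rules can be sketched as a small client that derives state from the links object and walks next links; the fetch function here is faked with an in-memory list instead of real HTTP:

```python
# Derive pagination state purely from which links exist.
def page_state(links):
    if "next" not in links and "prev" not in links:
        return "only page"
    if "prev" not in links:
        return "first page"
    if "next" not in links:
        return "last page"
    return "middle page"

# Two fake pages standing in for real JSON:API responses.
pages = [
    {"data": [1, 2, 3], "links": {"self": "p0", "next": "p1"}},
    {"data": [4, 5], "links": {"self": "p1", "prev": "p0"}},
]

def fetch(url):  # stand-in for an HTTP GET of a collection page
    return next(p for p in pages if p["links"]["self"] == url)

# Follow "next" links until they run out.
items, page = [], fetch("p0")
while True:
    items.extend(page["data"])
    if "next" not in page["links"]:
        break
    page = fetch(page["links"]["next"])

print(items)  # [1, 2, 3, 4, 5]
print(page_state(pages[0]["links"]))  # first page
```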
There are a few gotchas in relation to pagination.
Maximum page limit 50: To limit the possibility of DDoS’ing, the limit is set to 50 per page. When a response is uncached, the module has to do an access check on all resources. This means if someone were to set the limit to something insane, like 200k, it’s quite easy to break the server.
No page count in response: The page count is not available for performance reasons. It would need to be calculated for every request, and considering the access checks, this would result in unwanted load.
When you understand the structure of JSON:API documents, creating and updating resources is a breeze.
It uses the standard HTTP verbs to communicate the desired action of the request. GET for fetching, POST for creating, PATCH for (partial) updating and DELETE for deleting resources.
To create a resource in JSON:API you send a complete document to the server through POST. This document looks the same as you normally receive when retrieving data, but you skip adding the ID.
You should notice the relations to other types are included directly in the payload.
The API communicates whether the creation was a success through the response code. The body of the response contains the resource you just created; this gives you the appropriate ID that is generated server-side.
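A sketch of such a create payload; the resource type, field names and endpoint are illustrative. Note that data carries no id:

```python
import json

# POST body for creating an article; the server generates the id.
# You would send this to something like POST /jsonapi/node/article with
#   Content-Type: application/vnd.api+json
#   Accept: application/vnd.api+json
payload = {
    "data": {
        "type": "node--article",
        "attributes": {"title": "New article"},
        "relationships": {
            "uid": {"data": {"type": "user--user", "id": "42"}},
        },
    }
}
body = json.dumps(payload)
```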
To update resources you send a PATCH request to a single resource. This request overwrites all attributes specified in the payload.
That is quite important to note: the PATCH request does not empty values unless an attribute is explicitly specified as empty.
Patch requests can also be used to update the relations of a resource. There is a difference in how you update the different kind of relations.
Author is a to-one relation. There we send the data with the relation directly.
Tags is a to-many relation. We now send a complete set of new tags to the article. This means it overwrites all other relations that might be present at the moment. I will note that it is possible to add and delete single relations by using that specific relation’s endpoint, but I won’t be going into that today.
Comments is a to-many relationship, and this request clears the relation to all comments for the article.
For the sake of completing all CRUD operations, I’ll show the DELETE operation. It’s nothing more than sending a DELETE to the endpoint of a single resource. The response code communicates whether the operation was successful.
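A sketch of a PATCH payload showing both behaviors — one attribute updated in place, and a to-many relationship replaced wholesale; names are illustrative:

```python
import json

# PATCH body for a single article resource. Attributes not listed here
# are left untouched; the to-many relationship data REPLACES the whole set.
payload = {
    "data": {
        "type": "node--article",
        "id": "1",
        "attributes": {"title": "Updated title"},
        "relationships": {
            "field_tags": {
                "data": [
                    {"type": "taxonomy_term--tags", "id": "7"},
                    {"type": "taxonomy_term--tags", "id": "9"},
                ]
            }
        },
    }
}
body = json.dumps(payload)
```

Sending `"data": []` for a to-many relationship is how you clear it entirely.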
This is just about everything I want to tell you about the specification and its usage today.
We talked about the document structure, compound documents, relations.
We had a good look at how we can limit the data usage by using sparse fieldsets, includes, filters and pagination.
Last but not least we’ve looked at how to manipulate resources through POST, PATCH and DELETE.
You should have a good overview by now how JSON:API can help you communicate efficiently between services.
I love the structure, extreme consistency and built-in flexibility of JSON:API. It really tickles my developer’s heart.
This consistency makes it the perfect fit for Drupal. The way Drupal handles nodes, relations and caching means the data slots into the spec perfectly, reflecting the data model beautifully in JSON.
Since Drupal is now shipping with JSON:API available for every… single... site with zero configuration, I feel this will start a new era where Drupal gains more and more ground as the go-to headless CMS.
Let’s work together to ‘Stop the noise’ and use our combined force to make JSON:API everyone’s smart default.