Consumer Driven Contracts and Your Microservice Architecture (Marcin Grzejszczak)
TDD has brought many improvements to the development process, but in our opinion its biggest impact is on code design. Looking at code from the usage perspective (by writing an acceptance test first) lets us focus on usability rather than on a concrete implementation. Unfortunately, we usually rest on our laurels and don't try to lift this practice up to the architecture level.
This presentation will show you how to use Spring Cloud Contract Verifier to get a fully automated solution for stubbing your HTTP / messaging collaborators. With just the proper configuration, you'll surround the microservices under test with stubs that have been verified against their producers, making your tests much more realistic.
We will build a system using the CDC approach together with Spring Boot, Spring Cloud and Spring Cloud Contract Verifier. I'll show you how easy it is to write applications with a consumer-driven API, allowing developers to deliver better-quality software faster.
The document provides an introduction and overview of APIs, REST, and OpenAPI specification. It discusses key concepts like resources, HTTP verbs, and OpenAPI structure. It also demonstrates OpenAPI syntax using JSON and YAML examples and highlights best practices for documenting APIs with OpenAPI.
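As a sketch of what such a description looks like, here is a minimal, hypothetical OpenAPI 3.0 document (a made-up bookstore API with a single /books resource) built as a Python dict and serialized to JSON; the field names follow the OpenAPI 3.0 structure the abstract refers to:

```python
import json

# A minimal, hypothetical OpenAPI 3.0 document for a single /books resource.
# The skeleton (info, paths, operations, responses) is shared by every
# OpenAPI description; the title, path, and summary here are made up.
spec = {
    "openapi": "3.0.3",
    "info": {"title": "Bookstore API", "version": "1.0.0"},
    "paths": {
        "/books": {
            "get": {
                "summary": "List all books",
                "responses": {
                    "200": {"description": "A JSON array of books"}
                },
            }
        }
    },
}

# OpenAPI documents are plain JSON (or YAML), so serializing the dict
# yields a document any OpenAPI tool can consume.
print(json.dumps(spec, indent=2))
```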
This document provides an introduction to Docker containers. It begins with an overview of Docker and how it uses containerization technology such as Linux containers and namespaces to provide isolation. It describes how Docker images are composed of layers and how containers run from these images. The document then explains the benefits of Docker, such as portability and ease of scaling, and details Docker's architecture and components: images, registries and containers. Finally, it demonstrates how to run a Docker container with a single command.
1. The document summarizes a session that discussed designing and prototyping APIs using an API-first approach with Postman.
2. Attendees learned how to create, edit, and import API schemas, generate and validate API elements against schemas, and version and collaborate on APIs.
3. Resources mentioned include public Postman workspaces, documentation, and a community forum for further exploring API design and development using Postman.
The motivation for why and when to use API-first service design. What are the real-life problems in application development with regard to APIs? How can you solve them using tools like Swagger Editor, Swagger UI and Swagger Codegen? And how can an API manager tool help manage the application lifecycle of your API (publishing, versioning, registration of consumers, quotas and rate limiting)?
The document discusses using Postman to test REST APIs. Postman is an HTTP client that allows users to create and test HTTP requests. It provides a multi-window interface to work on APIs. Users can create requests, view responses, add variables, write test scripts, and view test results in Postman. The document also provides an example of testing the Newbook API, including GET, POST, PATCH, and other requests.
OpenAPI is the emerging standard for creating, managing and consuming REST APIs. Previously named Swagger, in the last year it has been adopted by the Linux Foundation and has gained the support of companies like Google, Microsoft, IBM, PayPal and others to become a de facto standard for APIs. In this talk we will review three use cases that apply OpenAPI to enhance and speed up our development of OpenAPI-compliant APIs.
Irfan Baqui, Senior Engineer at LunchBadger, breaks down the important role of the API Gateway in Microservices. Additionally, Irfan covers how to get started with Express Gateway, an open source API Gateway built entirely on Express.js. Originally presented at the San Francisco Node Meetup.
My presentation from Nordic APIs 2014 in Stockholm, Sweden.
What can the architecture of an API platform look like? How can you break things down to make this challenge easier?
API testing verifies the functionality, usability, security, and performance of application programming interfaces (APIs). Key aspects to test include input parameters, error handling, response times, authentication, and documentation. Automated testing scripts should be created to regularly test APIs for bugs such as unhandled errors, security vulnerabilities, incorrect responses, and reliability issues. Thorough API testing requires considering parameter combinations, output validation across systems, and exception handling.
This document provides an overview of API testing tools and methods. It defines APIs and REST, describes how API testing works, lists common API testing tools like Postman, and outlines different types of API tests including functionality, reliability, load, and security testing. Examples are given of the GET, POST, PUT, and DELETE HTTP methods along with response status codes. A live demo of an API is presented at the end.
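The request/response checks described above can be illustrated with nothing but the Python standard library. The sketch below stands up a throwaway in-process endpoint (a hypothetical /ping route; the server and payload are invented for the example) and checks its status code and JSON body, the same kind of assertion a Postman test script would automate:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# A tiny in-process API so the example needs no network access.
class PingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), PingHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "test": call the endpoint, then check status code and payload.
with urlopen(f"http://127.0.0.1:{server.server_port}/ping") as resp:
    status = resp.status
    payload = json.loads(resp.read())

server.shutdown()
print(status, payload)  # 200 {'status': 'ok'}
```

A real suite would add the negative cases the document mentions: wrong verbs, bad input, and missing authentication, each asserting the expected error status.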
apidays Paris 2022 - Generating APIs from business models, Frederic Fontanet,... (apidays)
apidays Paris 2022 - APIs the next 10 years: Software, Society, Sovereignty, Sustainability
December 14, 15 & 16, 2022
Generating APIs from business models: high productivity and consistency
Frederic Fontanet, Architect / API designer at UMLTech
------
Check out our conferences at https://www.apidays.global/
Do you want to sponsor or talk at one of our conferences?
https://apidays.typeform.com/to/ILJeAaV8
Learn more on APIscene, the global media made by the community for the community:
https://www.apiscene.io
Explore the API ecosystem with the API Landscape:
https://apilandscape.apiscene.io/
Deep dive into the API industry with our reports:
https://www.apidays.global/industry-reports/
Subscribe to our global newsletter:
https://apidays.typeform.com/to/i1MPEW
What is No-Code/Low-Code App Development and Why Should Your Business Care? (kintone)
No-code/low-code aPaaS (application platform as a service) solutions enable line-of-business managers to handle technology needs: automating workflows, developing shared document repositories, building reporting dashboards and processing data without ever having to write a line of code.
Postman. From simple API test to end-to-end scenario (HYS Enterprise)
The document discusses Postman, a tool for testing APIs. It provides an overview of APIs and common API implementation approaches like SOAP and REST. It also demonstrates how Postman can be used to test APIs by creating workflows to send requests and validate responses using features like environments, variables, assertions and data-driven tests.
API as-a-Product with Azure API Management (APIM) (Bishoy Demian)
Transition from a single app or a closed system to an open ecosystem that drives innovation and delivers value-added apps and services for your end users. Monetise your data with minimal hassle and cost. Reach your end users on any platform. Enable your IoT strategy with a strong cloud-based API platform.
Using Azure API Management, you can build a modern interactive developer portal for your APIs. Learn about your API usage patterns with analytics. Secure access, and manage subscriptions with quotas and throttling.
Peeling the Onion: Making Sense of the Layers of API Security (Matt Tesauro)
This document provides an overview of API security from multiple perspectives: API security posture, runtime security, and security testing. It discusses the complex API ecosystem involving various stakeholders. The document also outlines common API attack classes like DDoS, data breaches, and abuse of functionality. Finally, it provides key takeaways that APIs have complex interconnected systems, require coordination across teams, and need to be evaluated from different security perspectives.
Kubernetes is an open-source system for managing containerized applications across multiple hosts. It includes key components like Pods, Services, ReplicationControllers, and a master node for managing the cluster. The master maintains state using etcd and schedules containers on worker nodes, while nodes run the kubelet daemon to manage Pods and their containers. Kubernetes handles tasks like replication, rollouts, and health checking through its API objects.
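As an illustration of the API objects mentioned above, here is a hypothetical Pod and a matching Service expressed as Python dicts. The field names follow the Kubernetes v1 API, but the names, labels, and image are invented for the example:

```python
import json

# A hypothetical Pod manifest: applied to a cluster, the master would
# schedule it onto a worker node, whose kubelet runs the container.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web", "labels": {"app": "web"}},
    "spec": {
        "containers": [
            {
                "name": "nginx",
                "image": "nginx:1.25",
                "ports": [{"containerPort": 80}],
            }
        ]
    },
}

# A Service selects Pods by label and load-balances traffic to them.
service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "web"},
    "spec": {"selector": {"app": "web"}, "ports": [{"port": 80}]},
}

# Manifests are usually written as YAML, but JSON is equally valid.
print(json.dumps(pod, indent=2))
```

The Service finds the Pod because its selector matches the Pod's labels; a ReplicationController (or ReplicaSet) would use the same label mechanism to keep a desired number of such Pods running.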
In this community call, we will discuss the highlights of WSO2 API Manager 4.0 including
- Why we moved from WSO2 API Manager 3.2.0 to 4.0.0.
- New architectural changes
- Overview of the new features with a demo
- Improvements to the existing features and deprecated features
Recording: https://youtu.be/_ks4zEeRFdk
Sign up to get notified of future calls: https://bit.ly/373f4ae
WSO2 API Manager Community Channels:
- Slack: https://apim-slack.wso2.com
- Twitter: https://twitter.com/wso2apimanager
The document discusses demystifying APIs. It begins with an introduction to APIs, including their evolution and benefits. It then discusses RESTful APIs and their key aspects like uniform interface and use of HTTP methods. The document outlines best practices for API design, development, and challenges. It provides examples of designing APIs using Node.js and Hapi.js and discusses challenges like security, authentication, rate limiting, and scalability. Tools mentioned include Express, Swagger, Postman, and Kong.
Postman is a collaboration platform that simplifies each step of building an API and streamlines collaboration to create better APIs faster. It is a popular API client that allows users to send HTTP/HTTPS requests to services and view responses. Postman offers a community forum, integration with CI/CD tools, extensibility and the ability to make any type of API call, and it is freely available and easy to use.
The document discusses using Postman for API testing over 10 days. It covers topics like the Postman UI, creating and organizing API requests and collections, using variables and environments, running collections from the command line and generating HTML reports, and common authentication, authorization, and status codes.
Kong is an open source API gateway that runs in front of RESTful APIs. It provides functionality through plugins such as authentication, security, traffic control, and logging. Kong creates and manages APIs and plugins to add authentication. For example, a key authentication plugin is enabled on an API, and a consumer is created with a key that must be provided in requests to access the API. Without a valid key, requests return an error.
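The key-auth flow described above can be sketched in a few lines of Python. This is an illustrative simulation, not Kong's actual implementation: the consumer registry, the key value, and the status codes for the failure cases are assumptions made for the example.

```python
# Hypothetical consumer registry: API key -> consumer name.
# In Kong this mapping is created via its Admin API.
CONSUMERS = {"s3cr3t-key": "alice"}

def gateway(headers: dict) -> tuple[int, str]:
    """Simulate a key-auth check in front of an upstream API."""
    key = headers.get("apikey")
    if key is None:
        # No credentials at all: reject before reaching the upstream.
        return 401, "No API key found in request"
    consumer = CONSUMERS.get(key)
    if consumer is None:
        # Credentials present but unknown: also rejected.
        return 403, "Invalid authentication credentials"
    # Valid key: the request would be proxied to the upstream API.
    return 200, f"Hello, {consumer}"

print(gateway({}))                        # (401, 'No API key found in request')
print(gateway({"apikey": "s3cr3t-key"}))  # (200, 'Hello, alice')
```

The real gateway does the same lookup per request, which is why adding a consumer and key via configuration is enough to lock down an API without touching the upstream service.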
The document discusses upcoming features and changes in Apache Airflow 2.0. Key points include:
1. Scheduler high availability will use an active-active model with row-level locks to allow killing a scheduler without interrupting tasks.
2. DAG serialization will decouple DAG parsing from scheduling to reduce delays, support lazy loading, and enable features like versioning.
3. Performance improvements include optimizing the DAG file processor and using a profiling tool to identify other bottlenecks.
4. The Kubernetes executor will integrate with KEDA for autoscaling and allow customizing pods through templating.
5. The official Helm chart, functional DAGs, and smaller usability changes are also planned.
This document summarizes the requirements, installation, and new features of Oracle Data Integrator 12c. It outlines that ODI 12c requires Java 7 and Oracle 11g or 12c databases. It also lists several new features including declarative flow-based design, reusable mappings, a step-by-step debugger, knowledge module architecture, and enhanced parallelism capabilities. The document provides high-level information about installing ODI 12c and introduces several new features but does not go into detail about any single topic.
Greenplum for Kubernetes - Greenplum Summit 2019 (VMware Tanzu)
The document discusses Greenplum for Kubernetes, which allows Greenplum databases to be deployed on Kubernetes. It can be deployed on public clouds, private clouds, or bare metal. Greenplum is packaged as containers for portability and managed by Kubernetes for high availability and elasticity. Benefits include speed of deployment, savings from using existing Kubernetes skills and hardware, security, stability, and scalability. Use cases include agile analytics, workbenches with curated tool stacks, and automatic data platforms with day-2 operations automation.
dbt Python models - GoDataFest by Guillermo Sanchez (GoDataDriven)
Guillermo Sanchez presented on the pros and cons of using Python models in dbt. While Python models allow for more advanced analytics and leveraging the Python ecosystem, they also introduce more complexity in setup and divergent APIs across platforms. Additionally, dbt may not be well-suited for certain use cases like ingesting external data or building full MLOps pipelines. In general, Python models are best for the right analytical use cases, but caution is needed, especially for production environments.
Agile Oracle to PostgreSQL migrations (PGConf.EU 2013) (Gabriele Bartolini)
Migrating an Oracle database to Postgres is never an automated operation, and it rarely (never?) involves just the database. Experience has led us to develop an agile methodology for the migration process, covering schema migration, data import, migration of procedures and queries, up to the generation of unit tests for QA.
Pitfalls, technologies and the main migration opportunities will be outlined, focusing on reducing the total cost of ownership and management of a database solution in the medium-to-long term (without reducing quality or business-continuity requirements).
Skaffold is an open-source container tool from Google that provides a toolkit for creating CI/CD pipelines for Kubernetes applications. It simplifies the process of building container images, pushing them to registries, and deploying them to Kubernetes clusters. Skaffold monitors source code changes and automatically builds, pushes, and deploys container images. It supports local development workflows as well as deployments to remote clusters. Skaffold improves developer productivity by separating business logic from platform operations management.
- Docker started as an internal project at dotCloud and was open-sourced in 2013. It allows for standardized packaging of software and isolates applications from each other while sharing the same OS kernel.
- Containers provide benefits over traditional virtual machines by providing an application-level rather than infrastructure-level construct, resulting in better performance and efficiency.
- Kubernetes is an open source container orchestration platform originally developed by Google that provides self-healing and automated scaling of containerized applications. It abstracts away underlying infrastructure to provide a uniform interface for workloads.
Operating PostgreSQL at Scale with KubernetesJonathan Katz
The maturation of containerization platforms has changed how people think about creating development environments and has eliminated many inefficiencies in deploying applications. These concepts and technologies have made their way into the PostgreSQL ecosystem as well, and tools such as Docker and Kubernetes have enabled teams to run their own "database-as-a-service" on the infrastructure of their choosing.
All this sounds great, but if you are new to the world of containers, it can be very overwhelming to find a place to start. In this talk, which centers around demos, we will see how you can get PostgreSQL up and running in a containerized environment with some advanced sidecars in only a few steps! We will also see how it extends to a larger production environment with Kubernetes, and what the future holds for PostgreSQL in a containerized world.
We will cover the following:
* Why containers are important and what they mean for PostgreSQL
* Create a development environment with PostgreSQL, pgadmin4, monitoring, and more
* How to use Kubernetes to create your own "database-as-a-service"-like PostgreSQL environment
* Trends in the container world and how it will affect PostgreSQL
At the conclusion of the talk, you will understand the fundamentals of how to use container technologies with PostgreSQL and be on your way to running a containerized PostgreSQL environment at scale!
At Opendoor, we do a lot of big data processing and use Spark and Dask clusters for the computations. Our machine learning platform is written in Dask, and we are actively moving data ingestion pipelines and geo computations to PySpark. The biggest challenge is that jobs vary in memory and CPU needs, and the load is not evenly distributed over time, which causes our workers and clusters to be over-provisioned. In addition, we need to enable data scientists and engineers to run their code without having to upgrade the cluster for every request or deal with dependency hell.
To solve all of these problems, we introduce a lightweight integration across some popular tools like Kubernetes, Docker, Airflow and Spark. Using a combination of these tools, we are able to spin up on-demand Spark and Dask clusters for our computing jobs, bring down the cost using autoscaling and spot pricing, unify DAGs across many teams with different stacks on the single Airflow instance, and all of it at minimal cost.
Python and GIS: Improving Your Workflow (John Reiser)
A 40 minute talk on using Python with GIS software. Integration with ArcGIS and open source software is demonstrated. Includes links to several Python-based projects on Github. Presented at the Delaware Valley Regional Planning Commission's Information Resource Exchange Group on December 9th, 2015.
1) The document discusses a presentation about Docker given by Quey-Liang Kao at SC14. It provides background on Kao and an overview of Docker's history and basic usage.
2) Details are given on setting up Docker environments for HPC and benchmarking Docker performance versus VMs using HPL.
3) Future work is discussed, including exploring GPU support in Docker and live migration capabilities.
Similar to Lessons learned from running Pega in Kubernetes (20)
8. On Provisioning
Pre-requisites for running Pega in K8s:
• you need a production-ready K8s cluster
• you need a (skilled) team that will manage the cluster(s)
• 2-3 engineers are needed to run a cluster that offers a 24/7 SLA
• starting from scratch ("what is Docker?") will take you a year or more to get to a
production-ready cluster, depending on:
• level of automation
• implementation team skills (K8s, Terraform, etc.)
9. Implementation decisions to be made
• container registry
• K8s implementation (managed or self-provisioned)
• proximity to DB
• ingress controller / traffic routing
• logging / monitoring
• secret management
• developer access
10. Installing Pega in K8s: how
• Helm chart:
https://github.com/pegasystems/pega-helm-charts
• Pega Docker images:
docker pull pega-docker.downloads.pega.com/platform/pega:<version>
11. Installing Pega in K8s: who
• who will do the upgrade/install/deploy?
• how often?
• who handles image vulnerabilities?
• what is their level of knowledge of K8s, Pega, Helm, Docker?
12. Installing Pega in K8s: how
• Helm chart:
https://github.com/pegasystems/pega-helm-charts
But is Helm the right tool for this?
You can render the manifests out of the chart and deploy the YAML directly
(e.g. with Kustomize)
Advantage: get rid of Helm
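The Helm-free route can be sketched as a minimal Kustomize overlay. This is an illustrative fragment, not the chart's actual layout: `rendered/pega.yaml` is a hypothetical file you would first produce with something like `helm template mypega pega/pega --values pega-values.yaml > rendered/pega.yaml`.

```yaml
# kustomization.yaml - minimal sketch over pre-rendered chart output
# (file name, namespace and label below are illustrative assumptions)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: pega
resources:
  - rendered/pega.yaml
commonLabels:
  app.kubernetes.io/managed-by: kustomize
```

From here, `kubectl apply -k .` deploys the manifests with no Helm on the cluster or in the pipeline.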
13. On Database
Are you also changing the database?
• NO:
• a 1-on-1 migration is the simplest choice (e.g. Oracle -> Oracle, PostgreSQL ->
PostgreSQL)
14. On Database
Are you also changing the database?
• YES
• you need to research the migration tools
• you need people who are familiar with both DBs (source/target) and the
business logic
• this can be a major pain
• live or offline migration
15. On customizing the Pega image
• Docker image can be customized
• The base image is not open (the Dockerfile is not available)
• Changes are applied via ConfigMap objects
16. On customizing the Pega config maps
• What can be changed in config maps:
• prconfig.xml
• context.xml
• prlog4j2.xml
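Such an override typically lands in a ConfigMap that gets mounted into the container. A minimal illustrative fragment; the object name and content are hypothetical, since in practice the chart templates generate these objects:

```yaml
# Illustrative ConfigMap carrying an overridden prconfig.xml
apiVersion: v1
kind: ConfigMap
metadata:
  name: pega-web-config   # hypothetical name
data:
  prconfig.xml: |
    <pegarules>
      <!-- overridden prconfig settings go here -->
    </pegarules>
```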
• Pega uses dockerize inside the image to inject real values at run time
into the templated files.
• In the Docker entry-point script, only the above files are templated
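The run-time injection can be sketched without the real dockerize binary. In this stand-alone simulation, sed stands in for dockerize's Go-template rendering, and the variable name and template content are made up for illustration:

```shell
# A templated prconfig fragment, as it might sit inside the image
# ({{ .Env.DB_HOST }} is dockerize/Go-template syntax; content is illustrative).
printf '<env name="database/host" value="{{ .Env.DB_HOST }}" />\n' > /tmp/prconfig.xml.tmpl

# At container start, dockerize would render the template from the environment;
# sed simulates that single substitution here.
DB_HOST="db.example.internal"
sed "s|{{ .Env.DB_HOST }}|$DB_HOST|" /tmp/prconfig.xml.tmpl > /tmp/prconfig.xml
cat /tmp/prconfig.xml
# -> <env name="database/host" value="db.example.internal" />
```

The point of the mechanism is that the image stays generic: the same image runs in every environment, and only the injected environment values differ.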
17. On CI/CD
• Azure DevOps Pipelines is a logical choice for deployments to AKS
• There are predefined tasks for deployments to K8s
• other tools will work as well (Jenkins, ArgoCD, Flux, etc)
19. On CI/CD
• Changing a config map will not redeploy a pod in Kubernetes
• how to fix?
• redeploy when needed: kubectl rollout restart
• Helm annotation: checksum/config
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
[...]
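What the checksum annotation boils down to can be shown outside Helm: hashing the rendered ConfigMap yields a string that changes whenever the config does, which changes the Deployment's pod template and therefore triggers a rollout. A minimal stand-alone sketch with made-up file content:

```shell
# Hash a (fake) rendered ConfigMap, the way the Helm annotation does.
printf 'prconfig: value-a\n' > /tmp/configmap.yaml
checksum=$(sha256sum /tmp/configmap.yaml | cut -d' ' -f1)
echo "checksum/config: $checksum"

# Any edit to the config produces a different hash, so the pod template
# changes and Kubernetes rolls the pods.
printf 'prconfig: value-b\n' > /tmp/configmap.yaml
checksum2=$(sha256sum /tmp/configmap.yaml | cut -d' ' -f1)
test "$checksum" != "$checksum2" && echo "config changed -> pods restart"
```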