1. The document proposes a low-code solution for billing in a private cloud using open-source tools like KillBill and Prometheus.
2. It outlines an initial architecture that would ingest usage metrics from products, aggregate the data, and publish billing events to KillBill for invoicing and payments.
3. Exporters would collect metrics from products like S3 and ingress and expose them in a format readable by Prometheus for long-term storage and analysis by the billing system.
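As an illustrative sketch (not the document's actual implementation), an exporter in such a pipeline might render per-tenant usage counters in the Prometheus text exposition format; the metric and label names below are invented for the example:

```python
# Illustrative sketch only: rendering per-tenant usage counters in the
# Prometheus text exposition format, as a billing exporter might.
# Metric and label names are invented for the example.

def to_prometheus(metric, help_text, samples):
    """Render {labels-tuple: value} samples as Prometheus text format."""
    lines = [f"# HELP {metric} {help_text}", f"# TYPE {metric} counter"]
    for labels, value in sorted(samples.items()):
        label_str = ",".join(f'{k}="{v}"' for k, v in labels)
        lines.append(f"{metric}{{{label_str}}} {value}")
    return "\n".join(lines) + "\n"

usage = {
    (("tenant", "acme"), ("product", "s3")): 1048576,
    (("tenant", "acme"), ("product", "ingress")): 2048,
}
print(to_prometheus("billing_usage_bytes_total",
                    "Cumulative billable usage in bytes.", usage))
```

Prometheus would scrape an endpoint serving this text, and the billing aggregator could then query the stored series.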
Accelerate Cloud Migration to AWS Cloud with Cognizant Cloud Steps – Amazon Web Services
Digital transformation and cloud migration are complex but necessary mandates for modern organizations. Cognizant’s Cloud Steps transformation and migration framework simplifies the process, enabling enterprises to quickly and easily build resilient cloud foundations and confidently migrate applications, infrastructure, security, and DevOps to an AWS environment at speed and scale.
Protecting Agile Transformation through Secure DevOps (DevSecOps) – Eryk Budi Pratama
Representing the Cyber Defense Community (cdef.id) to present and share my view on Secure DevOps / DevSecOps. Through this presentation, I shared several insights about:
1. How to balance the risk and controls in the "great shift left" paradigm (agile)
2. DevOps activities
3. How to seamlessly integrate security into DevOps
4. How to "shift left" security
5. Get started with Secure DevOps / DevSecOps
6. Case Study about DevSecOps implementation
For further discussion, especially how to secure digital and agile transformation in your organization, don't hesitate to contact me :)
Perform a Cloud Readiness Assessment for Your Own Company – Amazon Web Services
In this session you will learn how to evaluate your company's or applications' cloud readiness. We will cover aspects such as workload and data categorisation, automation levels, design for failure and cost-optimised architectures. We will look at typical application evolution paths from tightly coupled physical systems, in some cases through virtualisation, to cloud-native, or cloud-ready, loosely coupled, distributed and automated solutions.
Like any major transformation project, the migration to Cloud requires a compelling business case to justify the move. In this session, we: cover the fundamental commercial levers that AWS provides its customers; work through a framework to help identify the possible benefits of moving to cloud; and, outline the steps required to create a Cloud business case.
Azure Cost Management is a native Azure service that helps you analyze costs, create and manage budgets, export data, and review and act on optimization recommendations to save money.
To use analytics effectively and thus leverage the company's data assets, you need a modern and scalable data platform that can react flexibly to events and was designed with a DataOps mindset from the very beginning.
Mactores Cloud Assessment Suite is built for enterprises that see the cloud as a driving force for their business. We help enterprises identify applications and resources that are well suited for the cloud and quantify the benefits to the enterprise in terms of ROI, scalability and agility.
Leveraging the AWS Sales Methodology and Partner Best Practices aws-partner-s... – Amazon Web Services
The AWS outcome-based approach to sales is customer obsessed and supports the new reality of IT. Learn how to align effectively with AWS sales and help customers accelerate their cloud adoption. AWS and Partners will also share best practices and lessons learned.
"Introduction to FinOps" – Greg VanderWel at Chicago AWS user group – AWS Chicago
Chicago's AWS user group
September 24th 2019
FinOps in AWS
"Introduction to FinOps" – Greg VanderWel Area Director, Apptio
FinOps in AWS - managing cost, spending, and budgets in AWS accounts
"Cost Star Ratings to score team's AWS optimization at Morningstar" – Katelyn ... – AWS Chicago
Chicago AWS user group
FinOps in AWS - managing cost, spending, and budgets in AWS accounts
Sept 24 2019
"Cost Star Ratings to score team's AWS optimization at Morningstar" - Katelyn Decraene, Cloud Finance Manager, Cloud Services at Morningstar
Chicago FinOps first official meet-up with guest speakers J.R. Storment, President @ FinOps Foundation and Katelyn Decraene, Cloud Finance Manager @ Morningstar
See the full recording here: https://vimeo.com/374237773/a14a5eba6a
In this session, we will cover how ISV and SI partners have successfully integrated their products with AWS and developed their sales and marketing strategy to transform their businesses. Learn best practices on how ISVs such as Infor have leveraged the global AWS platform and how to build a consulting practice around cloud enablement, the skills that are required, as well as examples of successful programs that have been delivered by AWS partners at F2000 clients and Public Sector accounts.
Amazon Web Services gives you fast access to flexible and low cost IT resources, so you can rapidly scale and build virtually any big data and analytics application including data warehousing, clickstream analytics, fraud detection, recommendation engines, event-driven ETL, serverless computing, and internet-of-things processing regardless of volume, velocity, and variety of data.
In this one-hour webinar, we will look at the portfolio of AWS Big Data services and how they can be used to build a modern data architecture.
We will cover:
Using different SQL engines to analyze large amounts of structured data
Analysing streaming data in near-real time
Architectures for batch processing
Best practices for Data Lake architectures
This session is suited for:
Solution and enterprise architects
Data architects/ Data warehouse owners
IT & Innovation team members
Cloud adoption requires that fundamental changes are considered across the entire organization, and that stakeholders across all organizational units are engaged in these changes. This session will introduce participants to the AWS Cloud Adoption Framework (AWS CAF) to help organizations take an accelerated path to successful cloud adoption. Participants will be exposed to considerations, guidance, and best practices that can be used to help their organizations develop an efficient and effective plan to realize measurable business benefits from cloud adoption faster and with less risk.
Amazon EC2 Container Service is a new AWS service that makes it easy to run and manage Docker-enabled applications across a cluster of Amazon EC2 instances. Amazon EC2 Container Service lets you define, schedule, and stop sets of containers. You have access to the state of your resources, making it easy to confirm that tasks are running or view the utilization of Amazon EC2 instances in your cluster. This session will describe the benefits of containers, introduce the Amazon EC2 Container Service, and demonstrate how to use Amazon EC2 Container Service for your applications.
Speakers:
Ian Massingham, AWS Technical Evangelist and
Boyan Dimitrov, Platform Automation Lead, Hailo Cabs
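The "sets of containers" the service manages are described as task definitions. A minimal, hypothetical sketch in the shape the RegisterTaskDefinition API accepts (the family, image, and names are made up for illustration):

```python
# Hypothetical sketch of an ECS task definition, in the shape accepted
# by the RegisterTaskDefinition API; family, image and names are made up.
task_def = {
    "family": "web-app",
    "containerDefinitions": [{
        "name": "web",
        "image": "nginx:1.25",
        "cpu": 128,     # CPU units; 1024 = one vCPU
        "memory": 256,  # hard memory limit in MiB
        "portMappings": [{"containerPort": 80, "hostPort": 0}],  # 0 = dynamic host port
        "essential": True,  # the task stops if this container stops
    }],
}
# In real use this would be passed to the ECS API, e.g. via an AWS SDK.
print(task_def["family"])
```

Scheduling then runs N copies of this task across the EC2 instances registered to the cluster.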
Data Warehouse vs. Data Lake vs. Data Streaming – Friends, Enemies, Frenemies? – Kai Wähner
The concepts and architectures of a data warehouse, a data lake, and data streaming are complementary to solving business problems.
Unfortunately, the underlying technologies are often misunderstood, overused for monolithic and inflexible architectures, and pitched for wrong use cases by vendors. Let’s explore this dilemma in a presentation.
The slides cover technologies such as Apache Kafka, Apache Spark, Confluent, Databricks, Snowflake, Elasticsearch, AWS Redshift, GCP with Google BigQuery, and Azure Synapse.
AWS Partner Webcast - Data Center Migration to the AWS Cloud – Amazon Web Services
Migrating your data center to the cloud can reduce operating costs, and improve the service you deliver to your clients. However, preparing for migration takes thoughtful planning, and awareness of proven solutions.
Review this webinar to learn how RISO Inc. used Apps Associates services to move their data center to the AWS cloud. In so doing, RISO Inc. avoided capital intensive hardware purchases and redeployed IT staff to projects that directly supported their business goals. We'll also describe how you can integrate your existing environment with the cloud to create a hybrid architecture.
What you’ll learn:
• Considerations for moving workloads to the AWS Cloud
• An approach to the design and implementation of infrastructure and application migration
• Customer case study, RISO Inc., on successful data center migration with Apps Associates
In this session you will learn how to evaluate your company's or applications' cloud readiness. We will cover aspects such as workload and data categorisation, automation levels, design for failure and cost-optimised architectures. We will look at typical application evolution paths from tightly coupled physical systems, in some cases through virtualisation, to cloud-native, or cloud-ready, loosely coupled, distributed and automated solutions.
This session will also take a look at typical enterprise business processes, from procurement to development and testing, and operations and support. We will introduce known-to-work cloud-ready business processes and new best practices, through customer use cases from companies who are cloud native, or have undergone a cloud transformation to get there.
code talks Commerce: The API Economy as an E-Commerce Operating System – Adelina Todeva
My talk for the CodeTalks Commerce Edition, April 19 and 20 2016 in Berlin.
I explore the possibilities of APIs and API-only products, explaining what APIs are, how one can participate in the API Economy, and what to look out for when selecting API products to power an e-commerce organisation.
Christian's part of the AWS re:Invent 2015 talk shared with Sajee Mathew - ARC304 - Designing for SaaS: Next Generation Software Delivery Models on AWS. Full video of the 60 minute presentation: https://www.youtube.com/watch?v=d16aUztH9hk&list=PLhr1KZpdzukdRxs_pGJm-qSy5LayL6W_Y
Building a real-time analytics solution has never been faster or more cost-efficient. Most organizations are trying to find a way to improve customer experience and respond to business events in real time, and to do so quickly and at a fraction of the price of traditional approaches. In this session we will look at how to use AWS services to best meet your real-time analytics needs.
Combining logs, metrics, and traces for unified observability – Elasticsearch
Learn how Elasticsearch efficiently combines data in a single store and how Kibana is used to analyze it. Also, see how recent developments help you identify and resolve operational problems faster.
Design Microservice Architectures the Right Way – C4Media
Video and slides synchronized, mp3 and slide download available at URL https://bit.ly/2O7BN8T.
Michael Bryzek highlights specific key decisions that very directly impact the quality and maintainability of a microservice architecture, covering infrastructure, continuous deployment, communication, event streaming, language choice and more, all to ensure that teams and systems remain productive and scale. Filmed at qconnewyork.com.
Michael Bryzek is the CTO and co-founder of Flow Commerce, an enterprise SaaS platform that is the world's most advanced solution for global ecommerce. Prior to that, he was the co-founder and CTO of Gilt Groupe, an innovative online shopping destination.
Understand how to monitor different components of your business infrastructure such as application servers, databases, big data stores, web servers, ERP software, middleware and messaging components, as well as virtual and cloud resources. You will also learn how to assign threshold values, configure alerts, automate corrective actions, generate reports, and create custom dashboards.
Learn how to monitor and manage your serverless APIs in production. We show you how to set up Amazon CloudWatch alarms, interpret CloudWatch logs for Amazon API Gateway and AWS Lambda, and automate common maintenance and management tasks on your service.
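A hedged sketch of the alarm setup described above: the parameter shape below matches what boto3's `put_metric_alarm` accepts, but the function name and thresholds are made-up examples, not a prescribed configuration.

```python
# Sketch: parameters for a CloudWatch alarm on Lambda errors, in the
# shape accepted by boto3's put_metric_alarm. The function name
# "checkout-api" and the thresholds are made-up examples.
alarm = {
    "AlarmName": "checkout-api-lambda-errors",
    "Namespace": "AWS/Lambda",
    "MetricName": "Errors",
    "Dimensions": [{"Name": "FunctionName", "Value": "checkout-api"}],
    "Statistic": "Sum",
    "Period": 60,               # evaluate one-minute windows
    "EvaluationPeriods": 5,     # five consecutive breaches before alarming
    "Threshold": 1.0,
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "TreatMissingData": "notBreaching",  # no traffic is not an error
}
# In a real deployment: boto3.client("cloudwatch").put_metric_alarm(**alarm)
print(alarm["AlarmName"])
```

The same shape works for API Gateway metrics such as 5XXError by swapping the namespace and dimensions.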
Hosted by PolarSeven Cloud Consulting - http://polarseven.com
Our monthly AWS User Group Sydney presentation night.
http://www.meetup.com/AWS-Sydney/
Introductions and What's New In AWS - by PolarSeven
Session 1:
Advanced Monitoring for AWS environments
Dynatrace is the first AI assisted monitoring platform, offering a revolutionary approach to managing the operational complexity of microservices and cloud centric applications.
Presenter: Kevin Leng, Senior SE, APAC
Dynatrace
https://www.dynatrace.com/
See video presentation here
https://youtu.be/MUV_-E3nQGM
Session 2:
Cost Optimization and Cost Control - Best Practices
Join CloudHealth as we explore the key challenges organisations face in managing cloud cost. Drawing on our insights from partnering with enterprises around the globe, we will share our blueprint for cost optimisation and for introducing cost controls into your cloud strategy.
Presenter: Richard Economides, Solution Architect
CloudHealth
https://www.cloudhealthtech.com/
Watch the video presentation here
https://youtu.be/rAOfXssTLo8
Determining the Deployment Model that Fits Your Organization's Needs – Celonis
The Celonis Intelligent Business Cloud (IBC) is the first transformation platform developed from the ground up specifically for the cloud. It allows you to utilize all benefits of a full cloud platform on a multi-tenant architecture, but keep the data in your local on-premise systems as well. By default, the IBC provides enterprise-class security without the additional effort, complexity and management that traditional solutions require. And the combination of on-premise extractor technology and the leave-in-place deployment allows you to decide which setup fits best for your IT infrastructure and allows you to maintain full data governance over your IT landscape. This session will introduce you to the different scenarios available with the IBC and help you decide which might be best for your next transformation project.
Presenter:
Manuel Haug, Head of Product Management Core, Celonis
API Gateways are going through an identity crisis – Christian Posta
API Gateways provide functionality like rate limiting, authentication, request routing, reporting, and more. If you've been following the rise in service-mesh technologies, you'll notice there is a lot of overlap with API Gateways when solving some of the challenges of microservices. If service mesh can solve these same problems, you may wonder whether you really need a dedicated API Gateway solution?
The reality is there is some nuance in the problems solved at the edge (API Gateway) compared to service-to-service communication (service mesh) within a cluster. But with the evolution of cluster-deployment patterns, these nuances are becoming less important. What's more important is that the API Gateway is evolving to live at a layer above service mesh and not directly overlapping with it. In other words, API Gateways are evolving to solve application-level concerns like aggregation, transformation, and deeper context and content-based routing as well as fitting into a more self-service, GitOps style workflow.
In this talk we put aside the "API Gateway" infrastructure as we know it today and go back to first principles with the "API Gateway pattern" and revisit the real problems we're trying to solve. Then we'll discuss pros and cons of alternative ways to implement the API Gateway pattern and finally look at open source projects like Envoy, Kubernetes, and GraphQL to see how the "API Gateway pattern" actually becomes the API for our applications while coexisting nicely with a service mesh (if you adopt a service mesh).
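The aggregation role described above can be shown with a toy sketch (all service and field names are invented): one edge endpoint fans out to two backend stand-ins and merges their responses.

```python
# Toy illustration of the "API Gateway pattern" as aggregation: one edge
# call fans out to two backend services and merges the results. The
# service functions are stand-ins for real network calls.

def user_service(user_id):
    return {"id": user_id, "name": "Ada"}

def order_service(user_id):
    return [{"order": 1, "total": 30.0}]

def gateway_get_profile(user_id):
    """Aggregate several backend responses into one edge response."""
    user = user_service(user_id)
    user["orders"] = order_service(user_id)  # aggregation + transformation
    return user

print(gateway_get_profile(7))
```

This application-level concern sits above the service mesh, which handles the transport between the real services.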
Amazon Kinesis is the AWS service for real-time streaming big data ingestion and processing. This talk gives a detailed exploration of Kinesis stream processing. We'll discuss in detail techniques for building and scaling Kinesis processing applications, including data filtration and transformation. Finally, we'll address tips and techniques for emitting data into S3, DynamoDB, and Redshift.
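The filtration/transformation step such an application applies before emitting downstream might look like this sketch (the record shape mimics base64-encoded Kinesis payloads; the event fields are invented):

```python
import base64
import json

# Illustrative sketch (not AWS SDK code): the filter/transform step a
# Kinesis consumer might apply before emitting records downstream to
# S3, DynamoDB, or Redshift. Event fields are invented.

def transform(records):
    """Decode base64 payloads, drop heartbeats, reshape for storage."""
    out = []
    for rec in records:
        event = json.loads(base64.b64decode(rec["Data"]))
        if event.get("type") == "heartbeat":  # filter out keep-alive noise
            continue
        out.append({"user": event["user"], "amount": event["amount"]})
    return out

raw = [{"Data": base64.b64encode(json.dumps(e).encode())}
       for e in ({"type": "purchase", "user": "u1", "amount": 9.99},
                 {"type": "heartbeat"})]
print(transform(raw))  # [{'user': 'u1', 'amount': 9.99}]
```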
MongoDB .local Houston 2019: Building an IoT Streaming Analytics Platform to ... – MongoDB
Corva's analytics platform enables real-time engineering and machine learning predictions and powers faster and safer drilling. The platform uses serverless AWS Lambda and an extensible, data-driven API with MongoDB to handle 100,000+ requests per minute of streaming sensor data.
The presentation explains how to set up rate limits, how to work with the 429 status code, and how rate limits are implemented in Kubernetes, CNI, load balancers, and so on.
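A minimal client-side sketch of working with the 429 code: honor the server's Retry-After header when present, and fall back to exponential backoff otherwise (the call interface here is invented for illustration).

```python
import time

# Minimal client-side sketch for HTTP 429 handling; `call` stands in for
# any HTTP client returning (status, headers, body). Names are invented.

def with_backoff(call, max_retries=3):
    delay = 0.1
    for _ in range(max_retries + 1):
        status, headers, body = call()
        if status != 429:
            return body
        # Honor the server's Retry-After hint when present, otherwise
        # fall back to exponential backoff.
        time.sleep(float(headers.get("Retry-After", delay)))
        delay *= 2
    raise RuntimeError("rate limited: retries exhausted")

# Fake endpoint that rejects the first two calls with 429.
responses = iter([(429, {"Retry-After": "0"}, None),
                  (429, {}, None),
                  (200, {}, "ok")])
print(with_backoff(lambda: next(responses)))  # ok
```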
The presentation is about federated GraphQL in large enterprises. I explain why big enterprises need distributed GraphQL and why the classic approach does not work for them.
The majority of cloud-based DWHs provide a wide range of migration tools from in-house DWHs. However, I believe that cloud migration success rests not only on reducing infrastructure maintenance costs, but also on the additional performance gained from a tailored data model.
I am going to prove that copying star or snowflake schemas as-is will not lead to the maximum performance boost in DWHs such as Amazon Redshift and Google BigQuery. Moreover, this approach may cause additional cloud expenses.
We will discuss why data models should be different for each particular database, and how to get maximum performance from database peculiarities.
Most performance-tuning techniques for cloud-based DWHs amount to adding extra nodes to the cluster, but in some cases this leads to performance degradation, as well as an extra cost burden. A tailored data model, on the other hand, can sometimes extract maximum speed from the current hardware configuration, or even from less expensive servers.
I will show some examples from production projects that achieved better performance on lesser hardware, and edge cases like a huge, wide fact table with fully denormalized dimensions instead of a classical star schema.
The presentation describes many deeply technical details of row-level security and possible security breaches in relational databases such as Oracle and Postgres. Many examples of how to protect data are shown.
The presentation describes various options for implementing row-level security in enterprise applications: database side, application-server side, and mixed approaches. We consider Oracle Virtual Private Database, different encryption options, and possible security breaches and their mitigation paths.
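A conceptual sketch of the database-side (VPD-style) idea: the engine transparently appends a security predicate to every query against a protected table. The function below is illustrative, not Oracle's actual API.

```python
# Conceptual sketch of the VPD-style idea: the database transparently
# appends a security predicate to queries against a protected table.
# This function is illustrative, not Oracle's actual API.

def apply_policy(sql, table, tenant_id):
    """Rewrite a query so it can only see the caller's rows."""
    predicate = f"{table}.tenant_id = {tenant_id}"
    joiner = " AND " if " where " in sql.lower() else " WHERE "
    return sql + joiner + predicate

print(apply_policy("SELECT * FROM invoices", "invoices", 42))
# SELECT * FROM invoices WHERE invoices.tenant_id = 42
```

The application-server-side approach moves this rewriting into the data-access layer instead of the database.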
Oracle JSON treatment evolution - from 12.1 to 18, AOUG-2018 – Alexander Tokarev
The presentation was prepared for the Austria Oracle User Group's 30th anniversary. It covers many challenges that Oracle developers face when implementing high-load JSON processing pipelines.
The presentation describes how to design robust solution for tagging search, how to use tagging for faceted search. Various architecture and data patterns are considered. We discuss relational databases like Oracle, full text search servers like Apache Solr. We will see how Oracle 18c features permit to use embedded faceted search.
The presentation is a deep analysis of Oracle's JSON treatment feature. It draws on real-life experience and workarounds to overcome known JSON errors.
The presentation is an advanced version of the Oracle Result Cache talk, rewritten from HIGLOAD-2017, with many result-cache internals under the cover.
First, the talk covers the basic reasons that gave rise to such a DBMS component as the result cache, and why some DBMSs have one while others do not.
Various options for caching the results of both SQL queries and business logic stored in the database will be considered. Caching approaches (hand-programmed caches versus standard functionality) are compared, with recommendations on when each approach is optimal and when it can be dangerous.
Each recommendation is illustrated with both positive and negative cases from the production operation of real systems that use different kinds of caches.
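A hand-programmed cache of the kind compared in the talk can be sketched with Python's `lru_cache`; the lookup function is a made-up stand-in for an expensive SQL query, and the staleness caveat is exactly the danger the talk warns about.

```python
from functools import lru_cache

# Sketch of a "manually programmed" result cache, using Python's
# lru_cache as a stand-in; the lookup is a made-up proxy for an
# expensive SQL query. Caveat from the talk: cached results go stale
# unless invalidated when the underlying data changes.

@lru_cache(maxsize=256)
def expensive_lookup(region):
    return sum(ord(c) for c in region)  # pretend this hits the database

expensive_lookup("emea")
print(expensive_lookup.cache_info().hits)  # 0 - first call is a miss
expensive_lookup("emea")
print(expensive_lookup.cache_info().hits)  # 1 - second call served from cache
```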
Globus Compute with IRI Workflows - GlobusWorld 2024 – Globus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work, the team is investigating ways to speed up the time to solution for many different parts of the DIII-D workflow, including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks, and we describe a brief proof of concept showing how Globus Compute could help schedule jobs and serve as a tool to connect compute at different facilities.
Globus Connect Server Deep Dive - GlobusWorld 2024 – Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and a lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll know how to organize and improve your code review process.
Do you want software for your business? Visit Deuglo.
Deuglo has top software developers in India who are experts in software development and help design and create custom software solutions.
Deuglo follows a seven-step method for delivering its services to customers, called the software development life cycle (SDLC):
Requirement — collecting the requirements is the first phase of the SDLC process.
Feasibility Study — the requirements are assessed for viability before design begins.
Design — in this phase, they start designing the software.
Coding — when the design is complete, the developers start coding the software.
Testing — once coding is done, the testing team starts testing.
Installation — after testing is complete, the application is deployed to the live server and launched.
Maintenance — after launch, the software is maintained and updated while customers use it.
Zoom is a comprehensive platform designed to connect individuals and teams efficiently. With its user-friendly interface and powerful features, Zoom has become a go-to solution for virtual communication and collaboration. It offers a range of tools, including virtual meetings, team chat, VoIP phone systems, online whiteboards, and AI companions, to streamline workflows and enhance productivity.
AI Genie Review: World’s First Open AI WordPress Website CreatorGoogle
AI Genie Review: World’s First Open AI WordPress Website Creator
👉👉 Click Here To Get More Info 👇👇
https://sumonreview.com/ai-genie-review
AI Genie Review: Key Features
✅Creates Limitless Real-Time Unique Content, auto-publishing Posts, Pages & Images directly from Chat GPT & Open AI on WordPress in any Niche
✅First & Only Google Bard Approved Software That Publishes 100% Original, SEO Friendly Content using Open AI
✅Publish Automated Posts and Pages using AI Genie directly on Your website
✅50 DFY Websites Included Without Adding Any Images, Content Or Doing Anything Yourself
✅Integrated Chat GPT Bot gives Instant Answers on Your Website to Visitors
✅Just Enter the title, and your Content for Pages and Posts will be ready on your website
✅Automatically insert visually appealing images into posts based on keywords and titles.
✅Choose the temperature of the content and control its randomness.
✅Control the length of the content to be generated.
✅Never Worry About Paying Huge Money Monthly To Top Content Creation Platforms
✅100% Easy-to-Use, Newbie-Friendly Technology
✅30-Days Money-Back Guarantee
See My Other Reviews Article:
(1) TubeTrivia AI Review: https://sumonreview.com/tubetrivia-ai-review
(2) SocioWave Review: https://sumonreview.com/sociowave-review
(3) AI Partner & Profit Review: https://sumonreview.com/ai-partner-profit-review
(4) AI Ebook Suite Review: https://sumonreview.com/ai-ebook-suite-review
#AIGenieApp #AIGenieBonus #AIGenieBonuses #AIGenieDemo #AIGenieDownload #AIGenieLegit #AIGenieLiveDemo #AIGenieOTO #AIGeniePreview #AIGenieReview #AIGenieReviewandBonus #AIGenieScamorLegit #AIGenieSoftware #AIGenieUpgrades #AIGenieUpsells #HowDoesAlGenie #HowtoBuyAIGenie #HowtoMakeMoneywithAIGenie #MakeMoneyOnline #MakeMoneywithAIGenie
First Steps with Globus Compute Multi-User EndpointsGlobus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
Mobile App Development Company In Noida | Drona InfotechDrona Infotech
Looking for a reliable mobile app development company in Noida? Look no further than Drona Infotech. We specialize in creating customized apps for your business needs.
Visit Us For : https://www.dronainfotech.com/mobile-application-development/
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Software Engineering, Software Consulting, Tech Lead, Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Transaction, Spring MVC, OpenShift Cloud Platform, Kafka, REST, SOAP, LLD & HLD.
Providing Globus Services to Users of JASMIN for Environmental Data AnalysisGlobus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Quarkus Hidden and Forbidden ExtensionsMax Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
OpenMetadata Community Meeting - 5th June 2024OpenMetadata
The OpenMetadata Community Meeting was held on June 5th, 2024. In this meeting, we discussed about the data quality capabilities that are integrated with the Incident Manager, providing a complete solution to handle your data observability needs. Watch the end-to-end demo of the data quality features.
* How to run your own data quality framework
* What is the performance impact of running data quality frameworks
* How to run the test cases in your own ETL pipelines
* How the Incident Manager is integrated
* Get notified with alerts when test cases fail
Watch the meeting recording here - https://www.youtube.com/watch?v=UbNOje0kf6E
Introducing Crescat - Event Management Software for Venues, Festivals and Eve...Crescat
Crescat is industry-trusted event management software, built by event professionals for event professionals. Founded in 2017, we have three key products tailored for the live event industry.
Crescat Event for concert promoters and event agencies. Crescat Venue for music venues, conference centers, wedding venues, concert halls and more. And Crescat Festival for festivals, conferences and complex events.
With a wide range of popular features such as event scheduling, shift management, volunteer and crew coordination, artist booking and much more, Crescat is designed for customisation and ease-of-use.
Over 125,000 events have been planned in Crescat and with hundreds of customers of all shapes and sizes, from boutique event agencies through to international concert promoters, Crescat is rigged for success. What's more, we highly value feedback from our users and we are constantly improving our software with updates, new features and improvements.
If you plan events, run a venue or produce festivals and you're looking for ways to make your life easier, then we have a solution for you. Try our software for free or schedule a no-obligation demo with one of our product specialists today at crescat.io
6. What is FinOps – management point
“FinOps” is a movement that advocates:
1. A collaborative working relationship between DevOps and Finance, with data-driven management of infrastructure spending
2. Transparency between IT and finance
3. Cost efficiency, profitability and product delivery pace
7. What is FinOps – IT point
Near real-time reporting + just-in-time processes + IT and finance teams working together + a shared cloud dictionary = FinOps
FinOps + trust = a balance between speed of change, availability of services and cloud costs
8. Opensource FinOps for public cloud
For public cloud, open-source FinOps tools exist: they consume the cloud's billing data ($$$), and there is even a mock billing API to test against: https://cutt.us/tKkrA
9. Opensource FinOps for private cloud
For private cloud it simply doesn't exist: neither for IAAS nor for PAAS!
10. IAAS billing
• The best project is dead
• Ceilometer is not the best, and it is deprecated
• The projects that are alive aren't open source: ISP solutions!
11. Private cloud FinOps tools
Searching for them: the 1st relevant link shows up on the 5th results page, the 2nd relevant link on the 2nd page!
16. Exivity features
• Cloud/virtualization/backup/storage software extractor templates: AWS, GCP, OpenStack, VMware, Veeam, NetApp…
• API for custom extractors
• Near real-time extraction/billing tools
• Lookup management: organization hierarchy
• Rate management
• Budget management, chargeback/showback
• Account management with CMDB integration
• Reporting with drilldown
• RBAC + SSO
A very small dev team, but a perfect in-house FinOps platform, from the organization down to the VM.
Not a good option for Sber: no open source!
17. Full billing cycle reference architecture
All IAAS/PAAS products feed billing event ingestion and user/product onboarding; events land in an ingestion database, go through event aggregation, and are published to the billing API, ending up in billing.
18. Billing features
• Extensible product catalog
• Extensible product pricing models
• Extensible product cancellation policies
• Invoicing
• API
• Proven performance
• …
Too hard to implement from scratch!
20. Opensource BSS

Language   License     Community  Feature set  API     Documentation  Huge customers
Java       JBL         dead       rich         Decent  Bad            No
Java       Apache 2.0  alive      huge         Decent  Good           Yes
PHP        AGPL-3      small      rich         Good    Not bad        Yes
22. KillBill deployment architecture
A K8s cluster runs three KillBill instances, KaUI and the analytic plugin. Two databases (PG or MySQL), each on a 4-core / 16 GB host: one holds the OLTP schema, the other the UI and analytic schema.
23. KillBill features
• Flat and hierarchical accounts
• Tiered plans, multi-phase plans
• Plan versioning + deferred execution time
• Prepaid/postpaid billing
• Usage/subscription billing
• Invoicing
• Payment/taxation engine
• Decoupled entitlement and overdue engines
• Every service can be extended by plugins, plus a set of plugins in the box
Very cool for complex enterprises
24. KillBill used features
The same list as above: every feature on slide 23 is used, and more.
42. In-house billing implementation steps
• Plan catalog population
• Usage metrics delivery
• Warnings about unpaid products
• Usage metrics population
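Step one, plan catalog population, means writing the "XML with plans" that appears throughout the architecture slides. Purely as an illustration, a heavily simplified fragment in the spirit of the KillBill catalog schema; the product, plan names and values here are hypothetical, and real catalogs also carry usage sections, so consult the KillBill catalog documentation for the full schema:

```xml
<!-- Illustrative fragment only; names and values are invented. -->
<catalog>
  <effectiveDate>2024-01-01T00:00:00Z</effectiveDate>
  <catalogName>PrivateCloud</catalogName>
  <currencies>
    <currency>USD</currency>
  </currencies>
  <products>
    <product name="s3-storage">
      <category>BASE</category>
    </product>
  </products>
  <plans>
    <plan name="s3-storage-monthly">
      <product>s3-storage</product>
      <finalPhase type="EVERGREEN">
        <duration><unit>UNLIMITED</unit></duration>
        <recurring>
          <billingPeriod>MONTHLY</billingPeriod>
          <recurringPrice>
            <price><currency>USD</currency><value>0.00</value></price>
          </recurringPrice>
        </recurring>
      </finalPhase>
    </plan>
  </plans>
</catalog>
```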
43. IAAS full billing cycle initial architecture
The IAAS/PAAS product sends metrics to billing event ingestion and feeds user/product onboarding; events land in an ingestion database, go through event aggregation, and are published to the billing API. Plans are loaded into billing as XML.
44. IAAS full billing cycle initial architecture
The same flow, but the ingestion database is replaced by a timeseries database.
45. Metrics
1. Timeseries data
2. Value = usage value
3. Mandatory labels:
• The unique private cloud resource name (RN)
• The metric type: counter or gauge
• The unit type from the KillBill plan
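As an illustration, a usage sample carrying the mandatory labels might look like this in Prometheus exposition format; the label names `rn`, `type` and `unit` and the RN value are assumptions, not taken from the deck (strictly, Prometheus metric names use underscores rather than the hyphens shown on later slides):

```text
# HELP outbound_bytes_total Outbound traffic per private cloud resource
# TYPE outbound_bytes_total counter
outbound_bytes_total{rn="prod/team-a/s3/tenant-42",type="counter",unit="bytes"} 1.24e+09
```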
47. IAAS full billing cycle initial architecture (diagram as on slide 44)
48. IAAS full billing cycle initial architecture
Billing event ingestion becomes billing event consumption: the product exposes its metrics in Prometheus Exposition Format, and a VMAgent (configured via a YAML with the scrape endpoint) pulls them into the timeseries database; from there events are aggregated and published to the billing API. Plans are still loaded as XML.
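The "YAML with scrape endpoint" box can be pictured as a minimal Prometheus-compatible scrape configuration for VMAgent; this is a sketch, and the job name, interval and target address are assumptions:

```yaml
scrape_configs:
  - job_name: "billing-ingress-exporter"   # hypothetical exporter name
    scrape_interval: 60s
    static_configs:
      - targets: ["ingress-exporter.billing.svc:9100"]  # assumed endpoint
```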
49. IAAS full billing cycle initial architecture
The same flow, now with two timeseries databases: a short-term one (retention: 1 month) feeding event aggregation, and a long-term one (retention: 2 years) for history and dispute investigation.
50. Billing data population concept
• Pull billing data via a metrics exposition endpoint: easier for IAAS/PAAS products
• Push metrics via billing API interaction: harder for IAAS/PAAS products
52. Billing data population
• Pull billing data via a single metrics exposition endpoint: easier for IAAS/PAAS products
• Push metrics via billing API interaction: harder for IAAS/PAAS products
Developers are not happy with either approach!
The push flow also needs a buffer and a state DB so that the flow consumer survives reboots.
53. Generic connector
What do we want? Billing!
What will we do for it? Nothing!
Who will do the work for us? Billing!
• Inbound traffic
• Outbound traffic
• API call count
56. IAAS full billing cycle initial architecture (diagram as on slide 49)
57. Ingress exporter duties
1. Scan all ingresses labeled for billing
2. Extract the RN from the ingress path
3. Turn nginx_ingress_controller_bytes_sent_sum, nginx_ingress_controller_request_size_sum and nginx_ingress_controller_request into the download-bytes_total, upload-bytes_total and api_call_count_total counter metrics
4. Label the metrics with RN, unit and metric type
5. Write logs
6. Expose the metrics in PEF via an endpoint
Lines of code: 105
(screenshot of the nginx configuration)
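The duties above amount to relabeling nginx ingress controller counters per resource. A minimal sketch of that transform step, not the actual 105-line implementation: the sample format, label names and helper functions are assumptions (note also that strict Prometheus naming would use underscores instead of the deck's hyphenated metric names):

```python
# Hypothetical sketch of the ingress exporter's transform step: map
# nginx ingress controller counters to billing counter metrics labeled
# with the resource name (RN) extracted from the ingress path.

NGINX_TO_BILLING = {
    "nginx_ingress_controller_bytes_sent_sum": "download-bytes_total",
    "nginx_ingress_controller_request_size_sum": "upload-bytes_total",
    "nginx_ingress_controller_request": "api_call_count_total",
}

UNITS = {
    "download-bytes_total": "bytes",
    "upload-bytes_total": "bytes",
    "api_call_count_total": "calls",
}

def to_billing_metrics(nginx_samples, rn):
    """Relabel nginx counters into billing metrics for one ingress (RN)."""
    out = []
    for name, value in nginx_samples.items():
        billing_name = NGINX_TO_BILLING.get(name)
        if billing_name is None:
            continue  # not a billed metric
        out.append({
            "name": billing_name,
            "labels": {"rn": rn, "type": "counter", "unit": UNITS[billing_name]},
            "value": value,
        })
    return out

def render_pef(metrics):
    """Render the metrics as Prometheus exposition format lines."""
    lines = []
    for m in metrics:
        labels = ",".join(f'{k}="{v}"' for k, v in sorted(m["labels"].items()))
        lines.append(f'{m["name"]}{{{labels}}} {m["value"]}')
    return "\n".join(lines)
```

A real exporter would additionally watch the K8s API for labeled ingresses and serve `render_pef` output over HTTP for VMAgent to scrape.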
58. IAAS full billing cycle initial architecture (diagram as on slide 49)
59. Billing importer duties
1. Extract data from VictoriaMetrics starting right after the previous successful KillBill interaction
2. Aggregate events by RN, unit type and metric type over a 24-hour interval
3. Write the deltas for counter and gauge usage to the KillBill API in bulk mode
4. Write self-healing data
5. Write logs
6. Repeat every 60 seconds
Lines of code: 112
Thank you, VictoriaMetrics, for the INCREASE function!
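The aggregation step (duty 2) can be sketched as follows; this is a hedged illustration, assuming each sample already carries its 24-hour delta (e.g. obtained via MetricsQL `increase()`), and the record shape is invented, not taken from the actual importer:

```python
from collections import defaultdict

def aggregate_usage(samples):
    """Aggregate usage deltas by (RN, unit type, metric type), as the
    importer does over each 24-hour interval before a bulk write to the
    KillBill usage API. Each sample is assumed to carry its delta."""
    totals = defaultdict(float)
    for s in samples:
        key = (s["rn"], s["unit"], s["type"])
        totals[key] += s["delta"]  # delta per series, e.g. from increase()
    return [
        {"rn": rn, "unit": unit, "type": mtype, "amount": amount}
        for (rn, unit, mtype), amount in sorted(totals.items())
    ]
```

The real importer would additionally persist the timestamp of the last successful KillBill interaction so that the next 60-second iteration resumes from there.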
60. IAAS full billing cycle initial architecture (diagram as on slide 49)
62. IAAS full billing cycle initial architecture (diagram as on slide 49)
63. Common product billing initial architecture
The product's ingress API calls go through the ingress exporter, which exposes usage metrics (API call count, inbound traffic, outbound traffic) in Prometheus Exposition Format; VMAgent (YAML with the scrape endpoint) pulls them into the timeseries databases (retention: 1 month for billing, 2 years for history and dispute investigation). A self-service portal writes new/remove user/product events to an event log; a portal connector turns them into accounts and subscriptions in billing. The billing importer publishes the aggregated usage metrics to the billing API, and plans are loaded into billing as XML.
64. S3 exporter duties
1. Scan all ingresses labeled for billing
2. Extract the RN from the S3 service tenant namespace name
3. Extract get, head, post and put request information from the S3 /metrics endpoint
4. Merge get+head and put+post into the get-head-requests_total and put-post-requests_total counter metrics
5. Count delete requests, just in case
6. Extract nginx_ingress_controller_bytes_sent_sum into the download-bytes_total counter metric
7. Extract the tenant's currently used space from the S3 CLI into a bytes_sum gauge metric
8. Label the metrics with RN, unit and metric type
9. Write logs
10. Expose the metrics in PEF via an endpoint
Lines of code: 112
(screenshot of the nginx configuration)
No changes to the S3 service source code: a mediation microservice!
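The merge step (duty 4, plus the "just in case" delete counter of duty 5) can be sketched like this; the input shape (per-verb counts parsed from the S3 /metrics endpoint) is an assumption:

```python
def merge_s3_request_counters(counts):
    """Merge per-verb S3 request counts into the two billed counter
    groups; deletes are kept separately, just in case."""
    return {
        "get-head-requests_total": counts.get("get", 0) + counts.get("head", 0),
        "put-post-requests_total": counts.get("put", 0) + counts.get("post", 0),
        "delete-requests_total": counts.get("delete", 0),
    }
```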
65. Common product billing initial architecture (diagram as on slide 63)
66. Advanced product billing initial architecture
(the same diagram as on slide 63, as the starting point)
67. Advanced product billing initial architecture
The S3 product is added: its S3 API calls and S3 CLI logs feed an S3 exporter, which exposes metrics for storage size, API calls per group and outbound traffic alongside the ingress exporter's metrics. The rest of the flow (VMAgent, the two timeseries databases, portal connector and billing importer) is unchanged.
68. Billing product vision
From VictoriaMetrics!!!
• Take some data from a log/endpoint
• Transform it into consumption metrics
• Expose it to Prometheus
This is 90% of the code!
81. Low-Code billing architecture
Now any product can be billed: any API calls, any logs and any product metrics go through ingress and a mediation component (configured via a TOML with the pipeline), which turns them into usage metrics in Prometheus Exposition Format. VMAgent scrapes them into the timeseries databases (retention: 1 month and 2 years), the billing importer publishes the aggregated usage to billing, and the self-service portal, event log and portal connector handle accounts and subscriptions as before. Plans are still XML.
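The "TOML with pipeline" box is not shown in detail. Since the pains slide later mentions VRL, the mediation component is presumably Vector; purely as an illustration, a pipeline in that spirit might look like the sketch below, where the source path, field names and metric mapping are all invented:

```toml
# Hypothetical mediation pipeline; assumes Vector, with made-up names.

[sources.product_log]
type = "file"
include = ["/var/log/product/access.log"]   # assumed log location

[transforms.to_usage]
type = "remap"                              # VRL transform
inputs = ["product_log"]
source = '''
.rn = .tenant        # resource name label (assumed field)
.unit = "bytes"
.type = "counter"
'''

[transforms.usage_metric]
type = "log_to_metric"
inputs = ["to_usage"]

[[transforms.usage_metric.metrics]]
type = "counter"
field = "bytes_sent"                        # assumed field
name = "download-bytes_total"
increment_by_value = true
tags.rn = "{{ rn }}"

[sinks.exposition]
type = "prometheus_exporter"                # scraped by VMAgent
inputs = ["usage_metric"]
address = "0.0.0.0:9598"
```

With a pipeline like this, product teams only edit TOML and VRL; no exporter code is written per product.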
82. Final Low-Code billing architecture
The low-code architecture from slide 81, plus quotas: a YAML Quotas Definition Language feeds a quotas enforcer.
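The deck does not show the YAML Quotas Definition Language itself; purely as an illustration, such a definition might look like the sketch below, with every field name invented:

```yaml
# Hypothetical quota definitions; the actual DSL is not shown in the deck.
quotas:
  - rn: "prod/team-a/s3/tenant-42"
    unit: bytes
    limit: 5368709120        # 5 GiB
    action: block            # what the quotas enforcer does on breach
  - rn: "prod/team-a/ingress"
    unit: calls
    limit: 100000
    action: notify
```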
83. Final Low-Code billing architecture (diagram as on slide 82)
84. Final Low-Code billing architecture (diagram as on slide 82)
Responsibilities:
The billing team writes the code!
The product teams write the Low-Code.
85. Pains
• The generic connector is K8S-tailored
• Analytic reports are too slow without the KillBill analytics plugin
• KillBill is slow for balance requests: a 2-3 second delay
• KillBill is Java-based
• The metrics history has gaps
• VRL is hard to use for dispute investigation
86. Pros
• Basic billing metrics come with no effort from the dev teams
• All SREs are familiar with the software used
• 80% of the solution is YAML + XML + TOML
• The KillBill community is alive