From HashiCorp Korea User Group Meetup
Speaker: Minkyu Kim (Devsisters, infrastructure management, https://github.com/synthdnb)
Speaker: Doyoon Kim (Devsisters, platform API server development, https://github.com/solmonk)
Abstract: As our team grew, secret management problems gradually came to the fore. Secrets committed to code, secrets passed along by word of mouth, SSH key rotation: handling issues like these took a great deal of effort and painful trial and error. This talk introduces how our team solved these problems with Vault.
Cloud DW benchmark using TPC-DS (Snowflake vs Redshift vs EMR Hive), by SANG WON PARK
For the past few years, data architectures have been changing rapidly.
Among these changes, Cloud DW has drawn attention as an alternative to the limitations (performance, cost, operations) of Hadoop-based data lakes, and many companies have already adopted it or are evaluating adoption.
This material explains Cloud DW conceptually and compares the various Cloud DW products on the market from a performance/cost perspective, to help determine which product fits a company's environment.
- Why are companies paying attention to Cloud DW?
- What products are on the market?
- Which product should we adopt for our business environment?
- How do Cloud DW solutions perform?
- How do they perform compared to an existing data lake (EMR)?
- How do similar Cloud DWs (Snowflake vs Redshift) compare?
Going forward, the market around data will rapidly develop a new ecosystem on top of Cloud DW (ELT, Data Mesh, Reverse ETL, and so on), and this will require technical review and consideration from the perspective of data engineers and data architects.
https://blog.naver.com/freepsw/222654809552
Kakao ad platform MSA adoption case, and an introduction to API Gateway and authentication implementation (if kakao)
Minho Hwang (robin.hwang) / kakao corp. DSP development team
---
Recently, more and more services are being built on MSA systems composed with Spring Cloud and Netflix OSS.
At Kakao, we also built a Spring Cloud based MSA environment for Moment, the ad platform we launched last year, and applied an API Gateway; this talk shares roughly a year and a half of operating experience. We will also look at how authentication is handled through the API Gateway in an MSA environment, and talk about authentication using OAuth2-based JWT tokens.
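To make the OAuth2/JWT idea concrete, a gateway essentially verifies the token's signature and expiry before routing the request. A minimal self-contained sketch of HS256 JWT signing and verification using only the standard library (illustrative only; a real gateway would use a vetted JWT library and the provider's keys):

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    # Base64url without padding, as JWT requires.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def verify_jwt(token: str, secret: bytes) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    pad = "=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(payload + pad))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims

secret = b"gateway-shared-secret"  # hypothetical key, for illustration only
token = sign_jwt({"sub": "user-1", "exp": time.time() + 3600}, secret)
print(verify_jwt(token, secret)["sub"])  # -> user-1
```

The gateway would run `verify_jwt` on the `Authorization: Bearer` header and forward the extracted claims to downstream services.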
Understanding metrics for Apache Kafka monitoring and optimization approaches, by SANG WON PARK
As Apache Kafka's role in big data architectures grows larger and more important, concerns about its performance are growing too.
While working on a variety of projects, I studied the metrics needed to monitor Apache Kafka and compiled the configuration settings for optimizing them.
[Understanding metrics for Apache Kafka monitoring and optimization approaches]
Explains the metrics needed for Apache Kafka performance monitoring and summarizes how to optimize performance from four perspectives (throughput, latency, durability, availability), for each of the three modules that make up Kafka (Producer, Broker, Consumer) …
[Understanding metrics for Apache Kafka monitoring]
To monitor the state of Apache Kafka, you need to look at the metrics emitted by four sources: System (OS), Producer, Broker, and Consumer.
This article organizes the producer/broker/consumer indicators around the JMX metrics exposed by the JVM.
It does not cover every metric; it focuses on the indicators I considered meaningful.
[Optimizing Apache Kafka performance configuration]
Divides performance goals into four categories (throughput, latency, durability, availability) and summarizes which Kafka configuration settings to adjust, and how, for each goal.
After applying the tuned parameters, run performance tests and monitor the resulting metrics, optimizing until the configuration fits your workload.
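As an illustration of the throughput-vs-durability trade-off described above, a producer tuned for throughput and one tuned for durability might differ in settings like these (the property names are standard Kafka producer configs; the values are examples of mine, not recommendations from the article):

```properties
# Throughput-leaning producer
acks=1
linger.ms=20
batch.size=131072
compression.type=lz4

# Durability-leaning producer
acks=all
enable.idempotence=true
retries=2147483647
```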
Show Me Kafka Tools That Will Increase My Productivity! (Stephane Maarek, Dat...confluent
In the Apache Kafka world, there is such a great diversity of open source tools available (I counted over 50!) that it’s easy to get lost. Over the years I have dealt with Kafka, I have learned to particularly enjoy a few of them that save me a tremendous amount of time over performing manual tasks. I will be sharing my experience and doing live demos of my favorite Kafka tools, so that you too can hopefully increase your productivity and efficiency when managing and administering Kafka. Come learn about the latest and greatest tools for CLI, UI, Replication, Management, Security, Monitoring, and more!
Devsisters' data lake story: Data Lake architecture case study (Juhong Park, data analytics and infrastructure team...) Amazon Web Services Korea
Devsisters' data lake story: Data Lake architecture case study
This session shares, through Devsisters' case study, what is required to build and use a data lake. We will talk about why we built a data lake on AWS to deliver data for multiple purposes, and what we experienced during the actual build. We will introduce the efficiency and cost benefits compared to our previous infrastructure, and the architecture used when segmenting big data by department.
Introduction to Apache Airflow, its main concepts and features, and an example of a DAG, followed by some lessons and best practices learned from the three years I have been using Airflow to power workflows in production.
Oak, the architecture of Apache Jackrabbit 3Jukka Zitting
Apache Jackrabbit is just about to reach the 3.0 milestone based on a new architecture called Oak. Based on concepts like eventual consistency and multi-version concurrency control, and borrowing ideas from distributed version control systems and cloud-scale databases, the Oak architecture is a major leap ahead for Jackrabbit. This presentation describes the Oak architecture and shows what it means for the scalability and performance of modern content applications. Changes to existing Jackrabbit functionality are described and the migration process is explained.
In previous work, we proposed a new multi-versioning STM, adaptive object metadata (henceforth AOM), that substantially reduces both the memory and the performance overheads associated with transactional locations that are not under contention. AOM is an object-based design that follows the general JVSTM design, but it is adaptive because the metadata used for each transactional object changes over time, depending on how objects are accessed. We have now implemented a new version of the AOM that is based on the lock-free version of the JVSTM, and we eliminated all the overheads of accessing objects in the compact layout during read-only transactions. To make the contention-free execution path free of any STM barrier, we duplicated the accessors of the transactional classes, so that one version accesses the object fields directly and the other uses STM barriers.
The research work that I describe in this dissertation is concerned with the problem of shared-memory synchronization in large-scale programs. The difficulties of developing fine-grained lock-based synchronization are well known, and many researchers have argued for the need for alternative approaches. Simply put, the main goal of my work is to provide an efficient alternative to such approaches. My proposal is based on Software Transactional Memory (STM), and I implemented it in a well-known STM framework for Java, Deuce STM.
To that end I propose a new approach that significantly lowers the overhead caused by an STM in large-scale programs in which only a small fraction of the memory is under contention. My solution combines two novel optimization techniques in a synergistic way, allowing us to get, for the first time, performance with an STM that rivals the performance of the best lock-based approaches in some of the more challenging benchmarks. My approach and experimental results show that STMs may be the first efficient alternative to locks for shared-memory synchronization in real-world-sized applications.
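The multi-version mechanism underlying JVSTM-style designs such as the AOM can be sketched as versioned boxes: each transactional location keeps a history of committed values, and a reader sees the newest value no newer than its start version. A toy illustration in Python (the actual work is in Java on Deuce STM; this only shows the versioning idea, not the commit protocol):

```python
class VBox:
    """A transactional location keeping a history of (version, value) pairs."""

    def __init__(self, value, version=0):
        self.history = [(version, value)]  # newest first

    def write(self, value, version):
        # Record a newly committed value at the given global version.
        self.history.insert(0, (version, value))

    def read(self, start_version):
        # Return the newest value committed at or before start_version.
        for version, value in self.history:
            if version <= start_version:
                return value
        raise RuntimeError("no visible version")

x = VBox("a", version=0)
x.write("b", version=5)
print(x.read(3))  # a transaction that started at version 3 still sees "a"
print(x.read(7))  # -> b
```

Read-only transactions never conflict under this scheme, which is why removing their barriers (as described above) pays off.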
Learn what the Office Graph is and how it relates to Office Delve, the new Search & Discovery application in Office 365. Learn how to query the Office Graph using GQL (Graph Query Language) or the Office 365 Unified API.
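At the time, GQL queries were issued through the SharePoint Online search REST endpoint by passing a GraphQuery property. A minimal sketch (the tenant URL is a placeholder, and the exact property syntax should be checked against the Office Graph documentation):

```http
GET https://{tenant}.sharepoint.com/_api/search/query?Querytext='*'&Properties='GraphQuery:actor(me)'
Accept: application/json;odata=verbose
```

Here `actor(me)` asks for items connected to the current user in the Office Graph.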
AWS CloudFormation Automation, TrafficScript, and Serverless architecture wit...PolarSeven Pty Ltd
Chris Kawchuck has 20 years' experience in the Telecom and Service provider industry. He will demonstrate how easy it is to spin up a Brocade vADC in AWS, enable serverless architectures using S3 buckets, and accomplish real-time traffic rewrites to get you out of sticky situations.
Learn about:
1. Load balancing and scaling options available on AWS
2. Automating the Brocade vADC spin up using Cloudformation Templates
3. Enabling use of "Serverless" web pages in AWS
4. Taking care of tricky situations using TrafficScript
Implementing any 3rd-party load balancer from the Amazon AWS Marketplace can be a daunting task. Not only do you have to learn the vendor's specific interface, you also need to perform quite a few administrative tasks to set up front-end IPs, back-end pools, clustering, and so on.
Brocade has published a CloudFormation Template (CFT) that takes all the hard work out of setting it up and operating it. Using DevOps tools and open source scripts, we automate not only the deployment of the Brocade vADC within AWS, but also all the configuration you need to administer, cluster, and provision your load balancers, including public IPs and your back-end server pools.
We would like you to try it and take advantage of the powerful features of the Brocade vADC.
https://github.com/dkalintsev/Brocade/tree/master/vADC/CloudFormation/Templates/Variants-and-experimental/Configured-by-Puppet
* Presented at the Sydney AWS User Group session 1st February 2017
http://www.meetup.com/AWS-Sydney/
Hosted and organised by PolarSeven - http://polarseven.com
View the full video presentation here:
https://youtu.be/rKTG2zjQS6o
Prioritization by value (DevOps, Scrum)Tommy Quitt
In Scrum and DevOps, it is important to learn how to prioritize by value. This very short presentation shows you how.
Agile coaching can help teams flex the muscle of prioritization and proper backlog grooming.
Presented by Matt Ray, Manager and Solutions Architect for APJ for Chef. He currently resides in Sydney, Australia after relocating from Austin, Texas.
He podcasts at SoftwareDefinedTalk.com and is @mattray on Twitter, IRC, GitHub and too many Slacks.
This session will provide an overview of the Chef Automate solutions and how they come together on AWS.
Ready to give it a try? Get started with this tutorial.
https://learn.chef.io/tutorials/manage-a-node/opsworks/
You might also be interested in our white paper, "DevOps and the Cloud: Chef and Amazon Web Services." This paper is an introduction to how using DevOps patterns with cloud resources can decrease time to market and reduce costs.
https://pages.chef.io/rs/255-VFB-268/images/devops-and-the-cloud-chef-and-aws.pdf
* Presented at the Sydney AWS User Group session 1st February 2017
http://www.meetup.com/AWS-Sydney/
Hosted and organised by PolarSeven - http://polarseven.com
View the full video presentation here:
https://youtu.be/CD_ptwS8k1w
AWS Webcast - Introduction to Amazon RDS: Low Admin, High Performance Databas...Amazon Web Services
Amazon Relational Database Service (Amazon RDS) makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks, freeing you up to focus on your applications and business.
In this webinar we review how to move your existing databases to RDS with minimum disruption. We will also cover how to deploy very high performance databases on the cloud. And finally, we will provide examples of how customers have successfully deployed high performance databases using RDS.
DevOps is changing today's software development world by helping us build better software, faster. However, most of the knowledge and experience with DevOps centers on application software and ignores the database. We will examine how the concepts and principles of DevOps can be applied to database development by looking at both automated comparison analysis and migration script management. Automated building, testing, and deployment of database changes will be shown.
About the Presenter
Steve Jones is a Microsoft SQL Server MVP and has been working with SQL Server since version 4.2 on OS/2. After working as a DBA and developer for a variety of companies, Steve co-founded the community website SQLServerCentral.com in 2001. Since 2004, Steve has been the full-time editor of the site, ensuring it continues to be a great resource for SQL Server professionals. Over the last decade, Steve has written hundreds of articles about SQL Server for SQLServerCentral.com, SQL Server Standard magazine, SQL Server Magazine, and Database Journal.
Patching is Your Friend in the New World Order of EPM and ERP CloudDatavail
Historically, patching was an IT effort to stay on the support path or remove vulnerabilities. Today, in the EPM Cloud market, patching is so much more. This presentation will review several case studies of how clients gained new capabilities for free through their patches. Be a hero and make business change.
DevOps has been an emerging trend in the software development world for the past several years. While the term is relatively new, it is really a convergence of a number of practices that have been evolving for decades. Unfortunately, database development has been left out of much of this movement, but that's starting to change. As database professionals, we all need to understand what this important change is about, how we fit in, and how to best work database development practices into the established DevOps practices.
One of the cornerstones of the DevOps methodology is source control. When most people think of source control, they picture a tool - either a traditional, centralized system like TFS, or a newer, distributed system like Git. Source control is more than a tool, though; human processes and practices also play a critical role in an effective source control (and DevOps) implementation. In this session, we'll talk in depth about both types of source control systems and how you can effectively use source control for your databases.
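As one concrete way to put a database under source control (illustrative only; the Flyway-style naming convention is my choice, not the speaker's), schema changes can live as ordered, append-only migration scripts in the same repository as the application:

```sql
-- V003__add_customer_email.sql
-- Versioned, immutable migration checked into source control alongside
-- application code; the deployment process applies pending scripts in order.
ALTER TABLE customer ADD email VARCHAR(255) NULL;
CREATE INDEX ix_customer_email ON customer (email);
```

The version prefix gives the "what changed, when, in what order" history the session describes, and the commit metadata answers "who and why".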
Beyond DevOps: How Netflix Bridges the Gap?C4Media
Video and slides synchronized, mp3 and slide download available at URL http://bit.ly/1mv6Kpr.
Josh Evans uses the Netflix Operations Engineering as a case study to explore the challenges faced by centralized engineering teams and approaches to addressing those challenges. Filmed at qconsf.com.
Josh Evans is Director of Operations Engineering at Netflix, with experience in e-commerce, playback control services, infrastructure, tools, testing, and operations.
Enabling your DevOps culture with AWS-webinarAaron Walker
This presentation shows how the benefits of AWS technologies can be combined with a new approach to development and operations.
It’s all about delivering new features and functionality faster, without compromising reliability, stability and performance.
* Understand the challenges faced by traditional Development and Operations teams
* Apply Continuous Integration/Delivery processes and tools to enable change
* Appreciate how various AWS technologies can be used to facilitate DevOps
Managing one or two unique machines in an ad-hoc manner is not a story that many people talk about nowadays. Today, small teams need to manage hundreds or thousands of nodes, serving a myriad of purposes, running any number of critical Dev and Ops workloads. And they have to do it in a way that still leaves time for unplanned and strategic work.
Learn how HP ties DevOps automation, monitoring information and ChatOps collaboration together to eliminate manual, error-prone work and keep critical services running.
In our recent webinar hosted by Mike Current, a member of the Hyland Upgrade Council, and Mark Hamilton, DataBank's Infrastructure Engineer, we expanded on how upgrading OnBase offers the ability to not only gain enhancements and fixes, but also radically improve the security, stability and architecture of your entire OnBase environment.
In this presentation you will...
1. Learn the formula for upgrade success with actionable items to work through right away
2. Understand the team needed to get the job done and how DataBank can step in to help
3. Understand the importance of establishing a test environment, and more
You can also watch the full webinar here: http://info.databankimx.com/Upgrade-Webinar-RCD.html
Download the Hyland 3rd Party Compatibility Matrix from slide #25 here: http://info.databankimx.com/rs/167-SSD-475/images/Third%20Party%20Product%20Compatibility%20Matrix.pdf
Graspan: A Big Data System for Big Code AnalysisAftab Hussain
We built a disk-based parallel graph system, Graspan, that uses a novel edge-pair centric computation model to compute dynamic transitive closures on very large program graphs.
We implement context-sensitive pointer/alias and dataflow analyses on Graspan. An evaluation of these analyses on large codebases such as Linux shows that their Graspan implementations scale to millions of lines of code and are much simpler than their original implementations.
These analyses were used to augment the existing checkers; these augmented checkers found 132 new NULL pointer bugs and 1308 unnecessary NULL tests in Linux 4.4.0-rc5, PostgreSQL 8.3.9, and Apache httpd 2.2.18.
- Accepted in ASPLOS ‘17, Xi’an, China.
- Featured in the tutorial, Systemized Program Analyses: A Big Data Perspective on Static Analysis Scalability, ASPLOS ‘17.
- Invited for presentation at SoCal PLS ‘16.
- Invited for poster presentation at PLDI SRC ‘16.
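The core computation Graspan performs, repeatedly adding edges implied by a grammar until a fixpoint, can be illustrated with plain transitive closure (a toy in-memory worklist version of mine; Graspan itself is disk-based and edge-pair centric):

```python
from collections import defaultdict

def transitive_closure(edges):
    """Worklist fixpoint: keep adding (a, c) whenever (a, b) and (b, c) exist."""
    succ = defaultdict(set)
    for a, b in edges:
        succ[a].add(b)
    worklist = list(edges)
    closure = set(edges)
    while worklist:
        a, b = worklist.pop()
        # Every successor of b becomes reachable from a.
        for c in list(succ[b]):
            if (a, c) not in closure:
                closure.add((a, c))
                succ[a].add(c)
                worklist.append((a, c))
    return closure

print(sorted(transitive_closure({(1, 2), (2, 3), (3, 4)})))
# -> [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Pointer/alias and dataflow analyses follow the same pattern, except that the new edge's label is determined by grammar rules rather than plain reachability.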
Prosigns: Transforming Business with Tailored Technology SolutionsProsigns
Unlocking Business Potential: Tailored Technology Solutions by Prosigns
Discover how Prosigns, a leading technology solutions provider, partners with businesses to drive innovation and success. Our presentation showcases our comprehensive range of services, including custom software development, web and mobile app development, AI & ML solutions, blockchain integration, DevOps services, and Microsoft Dynamics 365 support.
Custom Software Development: Prosigns specializes in creating bespoke software solutions that cater to your unique business needs. Our team of experts works closely with you to understand your requirements and deliver tailor-made software that enhances efficiency and drives growth.
Web and Mobile App Development: From responsive websites to intuitive mobile applications, Prosigns develops cutting-edge solutions that engage users and deliver seamless experiences across devices.
AI & ML Solutions: Harnessing the power of Artificial Intelligence and Machine Learning, Prosigns provides smart solutions that automate processes, provide valuable insights, and drive informed decision-making.
Blockchain Integration: Prosigns offers comprehensive blockchain solutions, including development, integration, and consulting services, enabling businesses to leverage blockchain technology for enhanced security, transparency, and efficiency.
DevOps Services: Prosigns' DevOps services streamline development and operations processes, ensuring faster and more reliable software delivery through automation and continuous integration.
Microsoft Dynamics 365 Support: Prosigns provides comprehensive support and maintenance services for Microsoft Dynamics 365, ensuring your system is always up-to-date, secure, and running smoothly.
Learn how our collaborative approach and dedication to excellence help businesses achieve their goals and stay ahead in today's digital landscape. From concept to deployment, Prosigns is your trusted partner for transforming ideas into reality and unlocking the full potential of your business.
Join us on a journey of innovation and growth. Let's partner for success with Prosigns.
Developing Distributed High-performance Computing Capabilities of an Open Sci...Globus
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
Quarkus Hidden and Forbidden ExtensionsMax Andersen
Quarkus has a vast extension ecosystem and is known for its supersonic, subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk for tips and tricks on using Quarkus and some of its lesser-known features, extensions and development techniques.
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ...Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc. I didn't get rich from it, but my extensions had 63K downloads (powering possibly tens of thousands of websites).
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoamtakuyayamamoto1800
In this slide, we show a simulation example and how to compile the solver.
The Helmholtz equation can be solved with helmholtzFoam, and the Helmholtz equation with uniformly dispersed bubbles can be simulated with helmholtzBubbleFoam.
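For reference, the Helmholtz equation these solvers address has the standard form (for a scalar field $u$ with wavenumber $k$ and source term $f$; the notation here is mine, not the slide's):

```latex
\nabla^2 u + k^2 u = f
```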
AI Pilot Review: The World’s First Virtual Assistant Marketing SuiteGoogle
AI Pilot Review: The World’s First Virtual Assistant Marketing Suite
👉👉 Click Here To Get More Info 👇👇
https://sumonreview.com/ai-pilot-review/
AI Pilot Review: Key Features
✅Deploy AI expert bots in Any Niche With Just A Click
✅With one keyword, generate complete funnels, websites, landing pages, and more.
✅More than 85 AI features are included in the AI pilot.
✅No setup or configuration; use your voice (like Siri) to do whatever you want.
✅You Can Use AI Pilot To Create your version of AI Pilot And Charge People For It…
✅ZERO Manual Work With AI Pilot. Never write, Design, Or Code Again.
✅ZERO Limits On Features Or Usages
✅Use Our AI-powered Traffic To Get Hundreds Of Customers
✅No Complicated Setup: Get Up And Running In 2 Minutes
✅99.99% Up-Time Guaranteed
✅30 Days Money-Back Guarantee
✅ZERO Upfront Cost
See My Other Reviews Article:
(1) TubeTrivia AI Review: https://sumonreview.com/tubetrivia-ai-review
(2) SocioWave Review: https://sumonreview.com/sociowave-review
(3) AI Partner & Profit Review: https://sumonreview.com/ai-partner-profit-review
(4) AI Ebook Suite Review: https://sumonreview.com/ai-ebook-suite-review
An Enterprise Resource Planning system includes various modules that reduce any business's workload. It also organizes workflows, which enhances productivity. Here is a detailed explanation of the ERP modules; going through the points will help you understand how the software is changing work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
Top 7 Unique WhatsApp API Benefits | Saudi ArabiaYara Milbes
Discover the transformative power of the WhatsApp API in our latest SlideShare presentation, "Top 7 Unique WhatsApp API Benefits." In today's fast-paced digital era, effective communication is crucial for both personal and professional success. Whether you're a small business looking to enhance customer interactions or an individual seeking seamless communication with loved ones, the WhatsApp API offers robust capabilities that can significantly elevate your experience.
In this presentation, we delve into the top 7 distinctive benefits of the WhatsApp API, provided by the leading WhatsApp API service provider in Saudi Arabia. Learn how to streamline customer support, automate notifications, leverage rich media messaging, run scalable marketing campaigns, integrate secure payments, synchronize with CRM systems, and ensure enhanced security and privacy.
Providing Globus Services to Users of JASMIN for Environmental Data AnalysisGlobus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
Software Engineering, Software Consulting, Tech Lead, Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Transaction, Spring MVC, OpenShift Cloud Platform, Kafka, REST, SOAP, LLD & HLD.
GraphSummit Paris - The art of the possible with Graph TechnologyNeo4j
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Software Engineering, Software Consulting, Tech Lead.
Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security,
Spring Transaction, Spring MVC,
Log4j, REST/SOAP WEB-SERVICES.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
3. DBOPS, DEVOPS & OPS
1. Before Scrum
2. Scrum
3. Kanban
4. Scrum + Kanban
Timeline
4. DBOPS, DEVOPS & OPS
1. Before Scrum
2. Scrum
3. Kanban
4. Scrum + Kanban
Timeline
5. DBOPS, DEVOPS & OPS
Before Scrum
• Before Scrum - The “real agile” method
• Centralized decision mechanism
• Growing phase
• Size (5 to 15)
• Complexity (more and more components)
7. DBOPS, DEVOPS & OPS
1. Before Scrum
2. Scrum
3. Kanban
4. Scrum + Kanban
Timeline
8. DBOPS, DEVOPS & OPS
Scrum
• Scrum - The “magic” scrum
• Scrum for the masses (>20)
• Pure Scrum approach – operations as development team
• 2 week sprint, sprint goal
• The team was “too reactive”
9. DBOPS, DEVOPS & OPS
DbOps – The beginning
(Diagram: Teams #1, #2 and #3 own Applications #1, #2 and #3, all built on shared database(s) owned by Team #T-SQL.)
10. DBOPS, DEVOPS & OPS
DbOps – Building a process
Source Control -> Continuous Integration -> Continuous Delivery
(Database + Application)
11. DBOPS, DEVOPS & OPS
What’s so special about databases?
DbOps – Building a process
12. DBOPS, DEVOPS & OPS
DbOps - Motivation
• Databases are out of pace with application development
• Need for synchronization between development and DBA teams
• No traceability of database changes (change history)
• What changed? Who? When? Why?
• Manual database processes prevent full use of CI and CD
• Your process is only as strong as its weakest step
• Time consuming and error prone
• Releases become less frequent and riskier
13. DBOPS, DEVOPS & OPS
DbOps - Motivation
• Bugs in production environment
• Database related bugs are only discovered after deployment to production
• Manual or nonexistent tests
• Fixes and hotfixes have a time cost, which can delay a release
• Database setup time of a new environment
• Expensive process for new clients
• Databases become a bottleneck in agile delivery processes
• An easy target to blame
14. DBOPS, DEVOPS & OPS
DbOps – Building a process
Source Control → Continuous Integration → Continuous Delivery
(Automation + Change control)
15. DBOPS, DEVOPS & OPS
DbOps - The value of automation
• Enable control over database development
• Increase speed of response to change
• Keep a versioned “history” of database states
• Greater reliability of the release process
• Increase release frequency through repeatability of processes
• Reduce time spent fixing bugs - automated tests
• Remove/reduce human intervention in the release process
• The build step is automatically triggered by a “push” to the source control repository
• The deploy step is automatically triggered by a successful build
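The automated, versioned deploy step can be sketched as a minimal migration runner. The sketch below is my own illustration (not the team's actual tooling): it uses SQLite for simplicity and assumes the timestamped `V...__...sql` script naming used in this deck, applying pending scripts in version order and recording each one in a `schema_version` table.

```python
import sqlite3
from pathlib import Path

def apply_migrations(db_path: str, scripts_dir: str) -> list[str]:
    """Apply pending .sql scripts in version (filename) order and
    record each one in a schema_version table."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version ("
        " script TEXT PRIMARY KEY,"
        " applied_at TEXT DEFAULT CURRENT_TIMESTAMP)"
    )
    done = {row[0] for row in conn.execute("SELECT script FROM schema_version")}
    applied = []
    # Timestamped names sort lexicographically, which gives version order
    for script in sorted(Path(scripts_dir).glob("V*.sql")):
        if script.name in done:
            continue  # already applied -- migrations are append-only
        conn.executescript(script.read_text())
        conn.execute("INSERT INTO schema_version (script) VALUES (?)",
                     (script.name,))
        conn.commit()
        applied.append(script.name)
    conn.close()
    return applied
```

Because applied scripts are recorded, re-running the step applies nothing new, and the `schema_version` table is exactly the versioned “history” of database states mentioned above.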
16. DBOPS, DEVOPS & OPS
DbOps - The value of automation
• Without automation you are working in an amnesic state
• Learn and forget it vs Improve and forget it
• You do not want to depend on the “best of the best”
17. DBOPS, DEVOPS & OPS
DbOps - Communicating through a contract
• Contract – change communication management tool
• Set of rules and expectations
• Define responsibility frontiers
• Sets a common language
Application #1 / Application #2 / Application #3 → Shared database(s)
Team #1 / Team #2 / Team #3 → T-SQL Script → Team #T-SQL
18. DBOPS, DEVOPS & OPS
DbOps - Communicating through a contract
• Contract – change communication management tool
• Rule 1: Script version (timestamp)
• Rule 2: Operation type
• Rule 3: Object type
• Rule 4: Object name
Example:
V20160220.1100__Create_TB_MyTable.sql
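The four rules can be enforced mechanically in the pipeline. Below is a minimal sketch of such a check; the regex and field names are my own illustration of the convention shown in the example, not the team's actual validator.

```python
import re

# V<timestamp>__<Operation>_<ObjectType>_<ObjectName>.sql
# e.g. V20160220.1100__Create_TB_MyTable.sql
CONTRACT = re.compile(
    r"^V(?P<version>\d{8}\.\d{4})"   # Rule 1: script version (timestamp)
    r"__(?P<operation>[A-Z][a-z]+)"  # Rule 2: operation type (Create, Alter, ...)
    r"_(?P<objtype>[A-Z]+)"          # Rule 3: object type (TB = table, ...)
    r"_(?P<name>\w+)\.sql$"          # Rule 4: object name
)

def check_script_name(filename: str) -> dict:
    """Return the parsed contract fields, or raise if the name breaks the contract."""
    m = CONTRACT.match(filename)
    if not m:
        raise ValueError(f"script name violates the contract: {filename}")
    return m.groupdict()
```

A CI step that rejects non-conforming names makes the contract enforceable rather than advisory.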
19. DBOPS, DEVOPS & OPS
DbOps - Communicating through a contract
• Contract – change communication management tool
• Should be reflected in your development pipeline
• The better/clearer your pipeline, the less you need to document (your code is your documentation)
• Everything is negotiable in the contract, except its application
21. DBOPS, DEVOPS & OPS
Kanban
• Kanban 101
• Focus on the development teams' needs
• 4 week iterations
• A bad choice
• Different “language” from other teams
• Desynchronization between operations and development teams
24. DBOPS, DEVOPS & OPS
Scrum + Kanban
• Scrum + Kanban – The best of two worlds
• 2 week sprint, sprint goal
• Task definition was based on the other teams' needs and our own
• Team capacity < 70% (enough bandwidth to react)
• Strong and disciplined team
• Integration in solutions design
25. DBOPS, DEVOPS & OPS
DbOps - Communicating through a contract
• Extending the Contract – change communication management tool
• Applications
• Databases
• Infrastructure
26. DBOPS, DEVOPS & OPS
DbOps - Communicating through a contract
• Extending the Contract – change communication management tool
• Applications
Interaction points between apps and the other components of the system
Behavior definition (configuration)
• Databases
Minimal context definition (data security)
• Infrastructure
Every team should know/contribute to the infrastructure model (Infrastructure as code)
27. DBOPS, DEVOPS & OPS
Why DevOps? (Definition)
• Developing software is not enough, you have to deliver it
• A communication framework for managing change
• You cannot stop change, but you can control it
• Perspectives
• Need for speed (time-to-market) (management people)
• Need for control (error control) (operations people)
28. DBOPS, DEVOPS & OPS
Operations
• “[Operations] is the constellation of your org’s technical skills, practices, and cultural values around designing, building and maintaining systems, shipping software, and solving problems with technology.”
• “It is how you get shit done”
https://charity.wtf/2016/05/31/wtf-is-operations-serverless/
29. DBOPS, DEVOPS & OPS
Operations future
• #Insert_Here# … as Code
• Everything is code (Thank you virtualization!!)
• Automation (cost, speed and risk)
Leave the work to the machines and the thinking to you
• The road to continuous…
• App centered
• The automation focuses on the application
• Automation flies with the application
• Minimal images, immutable instances/behavior
30. DBOPS, DEVOPS & OPS
DevOps – Final thoughts
• Helps to manage your delivery pain
• In order to be fast you need to have control
• It's a role, and one that everyone must have
• Your team is as strong as your weakest player
• Choose whatever devops model/approach you want
• You just need to hire competent people