Presentation by Bob Wise at the inaugural KubeCon 2015. Community update from the Kubernetes scalability SIG (k8scale). Commentary on Kubernetes scaling and its role in the modern software-defined datacenter.
My talk on data center futures, given for Samsung SDS at the Techtonic Summit in NYC. CoreOS has a nicely produced full video of the talk with the slides shown, along with the other talks, which were also very informative.
TLDR: A single management layer providing large shared clusters is the only way to approach Google / Amazon levels of efficiency. There are a limited number of open source options, and I discuss why we chose Kubernetes and a container-centric foundation.
Adopting Kubernetes for production has huge impacts on operations at all levels. We present our pattern for formalizing cluster operations as a separate role from infrastructure and application operations, and explore the impact on the role of the SRE.
This document contains a presentation about operationalizing machine learning. It begins with copyright information and a disclaimer about forward-looking statements. Next, it introduces the presenter Kelly Feagans and provides background on machine learning concepts such as the different types of machine learning. The presentation then discusses use cases for machine learning in IT operations, security, and business analytics. It describes the machine learning process and how Splunk can be used for machine learning. Finally, it promotes an upcoming Splunk conference and machine learning app.
This document contains a disclaimer stating that any forward-looking statements made during the presentation are based on current expectations and estimates and could differ materially. It also states that the information provided about product roadmaps is for informational purposes only and may change. The document provides an overview of machine learning, including definitions of common machine learning techniques like supervised learning, unsupervised learning, and reinforcement learning. It also describes Splunk's machine learning capabilities, including search commands, the Machine Learning Toolkit, and packaged solutions like Splunk IT Service Intelligence that incorporate machine learning.
The document is a presentation on Splunk's enterprise security and user behavior analytics solutions. It discusses Splunk's positioning as a leader in Gartner reports, and describes the key frameworks that make up Splunk Enterprise Security, including notable events, asset and identity management, risk analysis, threat intelligence, and adaptive response. It also provides an overview of Splunk User Behavior Analytics and its ability to detect insider threats and cyber attacks through unsupervised machine learning. The presentation concludes with a planned demo of how Splunk UBA can ingest security data from multiple sources and generate anomalies and threats in real-time.
What's New in Splunk Cloud and Enterprise 6.5 (Splunk)
This document provides an overview and agenda for what's new in Splunk Cloud and Enterprise 6.5. It introduces new features for easier data preparation and analysis through intuitive table views. Extended platform and management capabilities include integrated Hadoop features for storage flexibility and automated management tools. New machine learning analytics allow for predictive analytics through packaged and custom models. Additional developer resources are introduced to simplify app development and certification. The presentation concludes with details on liberalized licensing terms and resources for getting started with Splunk.
GreenHopper for DevOps provides lean, agile, and automated tools to help organizations practice DevOps. It allows teams to redefine what "done" means, visualize workflow, and measure delivery times through tools like JIRA and Rapid boards. These capabilities help restore trust between teams, share code openly, and provide clear visibility into the process to continuously learn, improve, and deploy updates.
I recently presented this two-hour session about the automation model developed at Videobet and the tools used in R&D, QA, and operations:
Issue mgmt.: JIRA/Greenhopper
Build system and repository: Maven & Nexus
Build server: QuickBuild
Code quality: Sonar
Continuous Integration: Selenium Grid
Crash dump analysis: Socorro
Database versioning: Flyway DB
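The "Database versioning" entry above relies on Flyway's well-known convention: migrations are plain SQL files named `V<version>__<description>.sql`, applied in numeric version order. As a small illustration of that ordering rule (not Videobet's actual setup), here is a Python sketch:

```python
# Illustrative only: a minimal model of Flyway-style versioned migrations,
# where files named V<version>__<description>.sql are applied in version order.
import re

MIGRATION_RE = re.compile(r"^V(?P<version>\d+(?:_\d+)*)__(?P<desc>.+)\.sql$")

def migration_order(filenames):
    """Return migration filenames sorted by their numeric version prefix."""
    def key(name):
        m = MIGRATION_RE.match(name)
        if m is None:
            raise ValueError(f"not a versioned migration: {name}")
        # 'V1_2' -> (1, 2), so versions compare numerically, not lexically
        return tuple(int(p) for p in m.group("version").split("_"))
    return sorted(filenames, key=key)

print(migration_order([
    "V2__add_index.sql", "V1_1__seed_data.sql", "V10__drop_col.sql", "V1__init.sql"
]))
# ['V1__init.sql', 'V1_1__seed_data.sql', 'V2__add_index.sql', 'V10__drop_col.sql']
```

Note that V10 sorts after V2: the version prefix is compared numerically, which is exactly why the naming convention matters.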
Azure + DataStax Enterprise (DSE) Powers Office 365 Per User Store (DataStax Academy)
We will present our Office 365 use case scenarios, why we chose Cassandra + Spark, and walk through the architecture we chose for running DSE on Azure.
The presentation will feature demos on how you too can build similar applications.
Is Dynamic Cubes now ready to replace Transformer implementations? The business analytics experts at Senturus take an unbiased look at the pros and cons of switching. View the webinar video recording and download this deck: http://www.senturus.com/resources/cognos-dynamic-cubes-set-to-retire-transformer/.
Topics discussed include the types of Transformer implementations that could benefit by switching to Dynamic Cubes, pre-requisites for replacing a Transformer implementation with Dynamic Cubes and typical pitfalls you may encounter in the process.
Senturus, a business analytics consulting firm, has a resource library with hundreds of free recorded webinars, trainings, demos and unbiased product reviews. Take a look and share them with your colleagues and friends: http://www.senturus.com/resources/.
“Scaling with LeSS” by Kārlis Cinis from Tieto Latvia at Large Scale Agile fo... (DevClub_lv)
Have you ever thought about scaling Scrum? Scrum is a nice and beautiful framework for one team: a team that works closely together, is self-managing and cross-functional with T-shaped competences, and has shared responsibility for the team's work. But what if you want more of this beauty? How should you scale?
I will explain the basics of the Scrum scaling framework called LeSS (Large-Scale Scrum) and share some experience implementing it.
Kārlis has been a Scrum Master at Tieto for almost two years and has participated in implementing LeSS in Tieto Retail Payments and Cards from the start. He also has over 10 years of experience as a project manager. Kārlis is passionate about hiking in the mountains, running, and having a long sleep on Saturday mornings.
Azure + DataStax Enterprise Powers Office 365 Per User Store (DataStax Academy)
We will present our O365 use case scenarios, why we chose Cassandra + Spark, and walk through the architecture we chose for running DataStax Enterprise on Azure.
DevOps and APIs: Great Alone, Better Together (MuleSoft)
DevOps has emerged as a critical enabler of agility in enterprise IT; a DevOps model increases reliability and minimizes disruption, with the added benefit of increasing speed. But that isn’t enough. DevOps must be balanced with a focus on asset consumption and reuse to make sure the organization is extracting maximum value out of all the newly built assets. And that’s where an API strategy comes in. In this session, we'll discuss how organizations use DevOps and API-led connectivity to reduce time to market 3-4x.
How jKool Analyzes Streaming Data in Real Time with DataStax (jKool)
jKool provides an application analytics SaaS for DevOps. These slides illustrate some of the choices we had to make and the architectural decisions behind building a system for both real-time and historical application analytics.
How jKool Analyzes Streaming Data in Real Time with DataStax (DataStax)
In this webinar, Charles Rich, VP of Product Management at jKool will share their journey with DataStax; how jKool knew from the start that traditional relational databases wouldn’t work for the scalability and availability demands of time-series data, and why they turned to DataStax Enterprise for blazing performance and powerful enterprise search and analytics capabilities.
A Reference Architecture to Enable Visibility and Traceability across the Ent... (CollabNet)
Software development should not be a “black box” to the business, customers, or other developers. Instead, collaboration across stakeholders should be the norm: business, development, and operations teams. Forrester recently reported that 13% of organizations doing Agile link “upstream” agile planning with “downstream” development.
As a result, executives continue to have limited or no visibility beyond the initial planning stage into what is in a particular release. It's not their fault: today's tools focus on upfront planning and don't give you visibility into what's happening in development. Often that visibility comes too late, resulting in software that gets delivered but does not meet the customer's needs.
Join CollabNet's most experienced senior solution architects as they explain how you can gain real-time visibility into all stages of the development process, from ideation through deployment into production. Imagine what your teams could get done if all stakeholders were able to collaborate and view real-time feeds of all stages of the delivery pipelines within a single easy-to-use system.
Who Should attend:
Any executive or manager interested in learning how to get traceability and visibility across the enterprise-- particularly, into the build and release management functions of their application lifecycle.
What will be covered:
An enterprise-scalable reference architecture for CI, CD, and DevOps
The importance of build management, release management and application release automation integration
A blueprint for scaling business agility across a large development organization
How CollabNet helps organizations solve these problems
A demonstration of TeamForge’s capabilities using Git/Gerrit, Code Review, Jenkins, Nexus, Artifactory, Chef and Automic
Apache Mesos is a cluster manager that provides efficient resource sharing for distributed applications across a shared pool of nodes. It allows organizations to run applications like Hadoop, Spark, and Storm on large clusters with high utilization. Mesos addresses issues with prior solutions that constrained everything as "jobs" or required static partitioning. It has been adopted by companies like Twitter, Airbnb, and Hubspot to improve efficiency and allow applications to dynamically scale resources.
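The efficiency described above comes from Mesos's two-level scheduling model: the master offers a node's free resources to frameworks, and each framework decides which offers to accept for its tasks. The following toy Python sketch models that offer loop; it is illustrative only and is not the Mesos API (the function and parameter names are invented for this example):

```python
# Toy model of Mesos-style two-level scheduling: the "master" offers each
# node's free resources; the "framework" accepts offers large enough for a task.
def schedule(nodes, tasks):
    """nodes: {name: free_cpus}; tasks: list of cpus needed. Returns placements."""
    placements = {}
    for task_id, cpus_needed in enumerate(tasks):
        for node, free in nodes.items():          # master makes resource offers
            if free >= cpus_needed:               # framework accepts this offer
                nodes[node] -= cpus_needed        # claimed resources are deducted
                placements[task_id] = node
                break                             # unmatched tasks stay pending
    return placements

nodes = {"node-a": 4, "node-b": 2}
print(schedule(nodes, [3, 2, 2]))
# {0: 'node-a', 1: 'node-b'}; task 2 finds no offer big enough and stays pending
```

The point of the real architecture is the same as in this sketch: no static partitioning; whatever a framework declines remains free for others, which is what keeps cluster utilization high.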
An Introduction to Apache Geode (incubating) (Anthony Baker)
Geode is a data management platform that provides real-time, consistent access to data-intensive applications throughout widely distributed cloud architectures.
Geode pools memory (along with CPU, network and optionally local disk) across multiple processes to manage application objects and behavior. It uses dynamic replication and data partitioning techniques for high availability, improved performance, scalability, and fault tolerance. Geode is both a distributed data container and an in-memory data management system providing reliable asynchronous event notifications and guaranteed message delivery.
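The partitioning-plus-replication idea described above can be illustrated generically. The sketch below is not Geode's API (the `owners` function and node names are invented for this example); it just shows the common pattern in which each key hashes to a primary node and backup copies land on the next nodes, so a single node failure loses no data:

```python
# Generic illustration of data partitioning with replication (not Geode's API):
# each key hashes to a primary node; replicas go to the following nodes.
import hashlib

def owners(key, nodes, replicas=2):
    """Return the nodes holding `key`: a primary plus (replicas - 1) backups."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)  # stable hash of key
    start = h % len(nodes)                                 # primary node index
    return [nodes[(start + i) % len(nodes)] for i in range(replicas)]

nodes = ["n0", "n1", "n2", "n3"]
for key in ["order:17", "cart:42"]:
    print(key, "->", owners(key, nodes))
```

Because placement is a pure function of the key, any member can locate data without a central directory; real systems like Geode add rebalancing and redundancy-recovery on top of this basic scheme.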
Pivotal GemFire has had a long and winding journey, starting in 2002, winding through VMware and Pivotal, and finding its way to Apache in 2015. Companies using GemFire have deployed it in some of the most mission-critical, latency-sensitive applications in their enterprises, making sure tickets are purchased in a timely fashion, hotel rooms are booked, trades are made, and credit card transactions are cleared. This presentation discusses:
- A brief history of GemFire
- Architecture and use cases
- Why we are taking GemFire Open Source
- Design philosophy and principles
But most importantly: how you can join this exciting community to work on a bleeding-edge in-memory platform.
An STMicroelectronics IT Service Manager explains how requirements management and software development are managed at ST worldwide: 10,000 active users and 5,000 R&D and embedded software projects.
This document summarizes a MuleSoft meetup event in Sydney. It provides an agenda for presentations and panel discussions on MuleSoft migrations and careers. The first presentation will be from University of Newcastle representatives on their journey migrating APIs from their on-premises environment to MuleSoft's CloudHub platform. This will be followed by a panel discussion on building a career in APIs, integration and MuleSoft. The meetup will conclude with a trivia game and prizes. Attendees are encouraged to introduce themselves and ask questions in the chat.
The presentation was created for Cracow MuleSoft Meetup #1.
It covers the following content:
• How the MuleSoft Forum and Meetup community are helping across the world
• Overview of the Mule Migration Assistant (an open-source CLI tool provided by MuleSoft)
• Drivers for migrating your Mule 3 applications
• How this baseline framework makes your migration from Mule 3 to Mule 4 smooth
• MMA in action: a demo
• Recent product updates, trends, and how to become MuleSoft Certified during the community success month
This event is worth watching if you:
• Have many Mule 3 apps in your organization and want to switch to Mule 4
• Want to increase developer productivity with a semi-automated tool during redevelopment
• Are a Mule developer and want to make your life easier in migration projects
Is OLAP Dead? Can Next Gen Tools Take Over? (Senturus)
Explores pros and cons of current OLAP technologies, new generation visualization tools, in-memory databases and OLAP for big data. We also discuss real-life client scenarios for a pragmatic perspective. View the video recording and download this deck at: http://www.senturus.com/resources/is-olap-dead/
Sydney MuleSoft Meetup #8, 1 August 2019 - all slides (Royston Lobo)
The document summarizes the agenda and key topics from a MuleSoft meetup in Sydney on API community management and troubleshooting Mule applications. The meetup included presentations on introducing an API community manager and troubleshooting Mule application performance. It also provided information on upcoming MuleSoft events and how to become a speaker at future meetups.
Top 10 Tips for an Effective Postgres Deployment (EDB)
This presentation addresses these key questions during your Postgres deployment:
* What is this database going to be used for – a reporting server or data warehouse, or as an operational database supporting an application?
* Which resources should I spend the budget on to ensure optimal database performance – bigger servers, more CPUs/cores, disks, or more memory?
* What are my backup requirements? If I ever need to restore, how far back do I need to go and what will that mean to the business?
* How will I handle any hot fixes, such as security patches?
* What downtime can be afforded and what processes need to be in place to apply critical or maintenance updates?
* What are my replication and failover requirements and what should I do for my high availability configuration?
The answers to these questions will impact how well you prepare, configure, and tune your database environment. The consequences of overlooking the key ingredients of your deployment can result in misallocated resources, limited ability to change, or worse - facing an outage with critical data loss.
With solid Postgres deployment planning, you can reduce risks, spend less time troubleshooting in post-production situations, lower long-term maintenance costs, instill confidence, and be a superstar DBA.
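The backup question above ("how far back do I need to go?") is really a recovery-point-objective calculation. As a hedged sketch (the function and parameter names are invented for this example, not Postgres settings, though Postgres's `archive_timeout` does force periodic WAL segment switches), worst-case data loss with base backups alone is the full backup gap, while WAL archiving shrinks it to the archiving interval:

```python
# Sketch: worst-case data loss (RPO) under two backup strategies.
# With only periodic base backups, you can lose everything since the last one;
# with WAL archiving, the loss window shrinks to the archiving interval.
def worst_case_loss_minutes(base_backup_interval_h, wal_archive_interval_min=None):
    if wal_archive_interval_min is not None:
        return wal_archive_interval_min            # WAL archiving bounds the loss
    return base_backup_interval_h * 60             # otherwise: the full backup gap

print(worst_case_loss_minutes(24))                                  # 1440
print(worst_case_loss_minutes(24, wal_archive_interval_min=5))      # 5
```

Running the numbers this way makes the business conversation concrete: nightly backups alone mean up to a day of lost transactions, while a five-minute archiving interval caps the loss at five minutes.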
****************************************
This presentation is helpful for DBAs, Data Architects, IT Managers, IT Directors, and IT Strategists who are responsible for supporting Postgres-based applications and deployment with ongoing maintenance of Postgres databases. It is equally suitable for organizations using community PostgreSQL as well as EDB’s Postgres Plus product family.
This slide is translated version. Originally it was written in Korean. (http://www.slideshare.net/saltynut/how-do-we-drive-tech-changes )
It describes how we drove technical changes in an organization that had been using an old-fashioned Java stack (Java 1.6 + Spring 3.x + MyBatis) and a monolithic architecture.
The key point is what we need to do to drive change; I'll discuss what we did during Phase 1 and what we are doing in Phase 2 for architecture, frontend, backend, and methodology/process.
Phase 1
- Architecture : Frontend / Backend Separation
- Frontend : Angular.js, Grunt, Bower
- Backend : Java 1.7/Spring4, ORM
- Methodology/Process : Scrum, Git
Phase 2
- Architecture : Micro-Service Architecture(MSA)
- Frontend : Content Router, E2E Test
- Backend : Polyglot, Multi-Framework
- Methodology/Process : Scrum+JIRA, Git Branch Policy, Pair Programming, Code Workshop
The most important new features of Oracle 23c for DBAs and developers. You can get more detail from my YouTube channel video: https://youtu.be/XvL5WtaC20A
Unveiling the Advantages of Agile Software Development.pdf (brainerhub1)
Learn about the advantages of Agile software development, and simplify your workflow to spur quicker innovation. Jump right in!
I recently presented this 2 hours session about the automation model developed in Videobet, the tools used in the R&D, QA and operations:
Issue mgmt.: JIRA/Greenhopper
Build system and repository: Maven & Nexus
Build server: QuickBuild
Code quality: Sonar
Continuous Integration: Selenium Grid
Crash dump analysis: Socorro
Database versioning: Flyway DB
Azure + DataStax Enterprise (DSE) Powers Office365 Per User StoreDataStax Academy
We will present our Office 365 use case scenarios, why we chose Cassandra + Spark, and walk through the architecture we chose for running DSE on Azure.
The presentation will feature demos on how you too can build similar applications.
Is Dynamic Cubes now ready to replace Transformer implementations? The business analytics experts at Senturus take an unbiased look at the pros and cons of switching. View the webinar video recording and download this deck: http://www.senturus.com/resources/cognos-dynamic-cubes-set-to-retire-transformer/.
Topics discussed include the types of Transformer implementations that could benefit by switching to Dynamic Cubes, pre-requisites for replacing a Transformer implementation with Dynamic Cubes and typical pitfalls you may encounter in the process.
Senturus, a business analytics consulting firm, has a resource library with hundreds of free recorded webinars, trainings, demos and unbiased product reviews. Take a look and share them with your colleagues and friends: http://www.senturus.com/resources/.
“Scaling with LeSS” by Kārlis Cinis from Tieto Latvia at Large Scale Agile fo...DevClub_lv
Have you ever thought about scaling Scrum? Scrum is nice and beautiful framework for one team - a team that is working closely together, is self-managing, cross-functional with T-shaped competences and has shared responsibility for team’s work. But what if you want more of this beauty? How should you scale?
I will explain basics behind Scrum scaling framework called LeSS (Large Scale Scrum) and share some experience about implementing it.
Kārlis is Scrum Master at Tieto for almost 2 years now and has been participating in implementing LeSS in Tieto Retail Payments and Cards from the start. He also has over 10 years of experience in the role of project manager. Kārlis is passionate about hiking in mountains, running and having a long sleep on Saturday mornings.
Azure + DataStax Enterprise Powers Office 365 Per User StoreDataStax Academy
We will present our O365 use case scenarios, why we chose Cassandra + Spark, and walk through the architecture we chose for running DataStax Enterprise on azure.
DevOps and APIs: Great Alone, Better Together MuleSoft
DevOps has emerged as a critical enabler of agility in enterprise IT; a DevOps model increases reliability and minimizes disruption, with the added benefit of increasing speed. But that isn’t enough. DevOps must be balanced with a focus on asset consumption and reuse to make sure the organization is extracting maximum value out of all the newly built assets. And that’s where an API strategy comes in. In this session, we'll discuss how organizations use DevOps and API-led connectivity to reduce time to market 3-4x.
How jKool Analyzes Streaming Data in Real Time with DataStaxjKool
jKool provides an application analytics SaaS for DevOps. These slides illustrate some of the choices we had to make and the architectural decisions to build a system for both real-time and historical application analytics.
How jKool Analyzes Streaming Data in Real Time with DataStaxDataStax
In this webinar, Charles Rich, VP of Product Management at jKool will share their journey with DataStax; how jKool knew from the start that traditional relational databases wouldn’t work for the scalability and availability demands of time-series data, and why they turned to DataStax Enterprise for blazing performance and powerful enterprise search and analytics capabilities.
A Reference Architecture to Enable Visibility and Traceability across the Ent...CollabNet
Software development should not be a “black box” to the business, customers or other developers. Instead collaboration across stakeholders should be the norm--business, development and operations teams. Forrester recently reported that 13% of organizations doing Agile link “upstream” agile planning with ‘“downstream” development.
As a result, executives continue to have only limited or no visibility beyond the initial planning stage of what is in a particular release. It’s not their fault, because today’s tools focus on upfront planning and don’t give you visibility into what’s happening in development. Often times that visibility is too late resulting in software that gets delivered and does not meet the customer’s needs.
Join CollabNet’s most experienced senior solution architects as they explain how you can you gain real time visibility into all stages of the development process—from ideation into production through deployment. Imagine what can your teams get done if all stakeholders are able to collaborate together and view real time feeds into all stages of the delivery pipelines within a single easy-to-use system.
Who Should attend:
Any executive or manager interested in learning how to get traceability and visibility across the enterprise-- particularly, into the build and release management functions of their application lifecycle.
What will be covered:
An enterprise-scalable reference architecture for CI, CD, and DevOps
The importance of build management, release management and application release automation integration
A blueprint for scaling business agility across a large development organization How does CollabNet help organizations solve these problems
A demonstration of TeamForge’s capabilities using Git/Gerrit, Code Review, Jenkins, Nexus, Artifactory, Chef and Automic
Apache Mesos is a cluster manager that provides efficient resource sharing for distributed applications across a shared pool of nodes. It allows organizations to run applications like Hadoop, Spark, and Storm on large clusters with high utilization. Mesos addresses issues with prior solutions that constrained everything as "jobs" or required static partitioning. It has been adopted by companies like Twitter, Airbnb, and Hubspot to improve efficiency and allow applications to dynamically scale resources.
An Introduction to Apache Geode (incubating)Anthony Baker
Geode is a data management platform that provides real-time, consistent access to data-intensive applications throughout widely distributed cloud architectures.
Geode pools memory (along with CPU, network and optionally local disk) across multiple processes to manage application objects and behavior. It uses dynamic replication and data partitioning techniques for high availability, improved performance, scalability, and fault tolerance. Geode is both a distributed data container and an in-memory data management system providing reliable asynchronous event notifications and guaranteed message delivery.
Pivotal GemFire has had a long and winding journey, starting in 2002, winding through VMware, Pivotal, and finding it's way to Apache in 2015. Companies using GemFire have deployed it in some of the most mission critical latency sensitive applications in their enterprises, making sure tickets are purchased in a timely fashion, hotel rooms are booked, trades are made, and credit card transactions are cleared. This presentation discusses:
- A brief history of GemFire
- Architecture and use cases
- Why we are taking GemFire Open Source
- Design philosophy and principles
But most importantly: how you can join this exciting community to work on the bleeding edge in-memory platform.
An Introduction to Apache Geode (incubating) - Geode is a data management platform that provides real-time, consistent access to data-intensive applications throughout widely distributed cloud architectures.
STMicroelectronics IT service Manager explains how requirement management and software development are managed at ST Worldwide : 10.000 active users, 5.000 R&D and embedded software projects.
This document summarizes a MuleSoft meetup event in Sydney. It provides an agenda for presentations and panel discussions on MuleSoft migrations and careers. The first presentation will be from University of Newcastle representatives on their journey migrating APIs from their on-premises environment to MuleSoft's CloudHub platform. This will be followed by a panel discussion on building a career in APIs, integration and MuleSoft. The meetup will conclude with a trivia game and prizes. Attendees are encouraged to introduce themselves and ask questions in the chat.
The presentation was created for Cracow Mulesoft Meetup #1.
It covers the following content:
• Let us understand how the MuleSoft Forum and Meetup Community are helping across the World.
• Overview of Mule Migration Assistant (open source CLI tool provided by MuleSoft)
• Drivers to migrate your Mule 3 application
• How this baseline framework make your migration from Mule 3 to Mule 4 smooth?
• MMA in action - Demo
• Recent product updates, get trend and become MuleSoft Certified as a community success month.
This event is worth watching, if you:
• Have many Mule 3 apps in your organization and you want to switch to Mule 4
• Want to increase developers productivity through semi-automatic tool during the re-development
• Are Mule developer and you want to make your life easier in migration projects
Is OLAP Dead?: Can Next Gen Tools Take Over?Senturus
Explores pros and cons of current OLAP technologies, new generation visualization tools, in-memory databases and OLAP for big data. We also discuss real-life client scenarios for a pragmatic perspective. View the video recording and download this deck at: http://www.senturus.com/resources/is-olap-dead/
Senturus, a business analytics consulting firm, has a resource library with hundreds of free recorded webinars, trainings, demos and unbiased product reviews. Take a look and share them with your colleagues and friends: http://www.senturus.com/resources/.
Sydney mule soft meetup #8 1 August 2019 - all slidesRoyston Lobo
The document summarizes the agenda and key topics from a MuleSoft meetup in Sydney on API community management and troubleshooting Mule applications. The meetup included presentations on introducing an API community manager and troubleshooting Mule application performance. It also provided information on upcoming MuleSoft events and how to become a speaker at future meetups.
Top 10 Tips for an Effective Postgres DeploymentEDB
This presentation addresses these key questions during your Postgres deployment:
* What is this database going to be used for – a reporting server or data warehouse, or as an operational database supporting an application?
* Which resources should I spend the budget on to ensure optimal database performance – bigger servers, more CPUs/cores, disks, or more memory?
* What are my backup requirements? If I ever need to restore, how far back do I need to go and what will that mean to the business?
* How will I handle any hot fixes, such as security patches?
* What downtime can be afforded and what processes need to be in place to apply critical or maintenance updates?
* What are my replication and failover requirements and what should I do for my high availability configuration?
The answers to these questions will impact how well you prepare, configure, and tune your database environment. The consequences of overlooking the key ingredients of your deployment can result in misallocated resources, limited ability to change, or worse - facing an outage with critical data loss.
With solid Postgres deployment planning, you can reduce risks, spend less time troubleshooting in post-production situations, lower long-term maintenance costs, instill confidence, and be a superstar DBA.
****************************************
This presentation is helpful for DBAs, Data Architects, IT Managers, IT Directors, and IT Strategists who are responsible for supporting Postgres-based applications and deployment with ongoing maintenance of Postgres databases. It is equally suitable for organizations using community PostgreSQL as well as EDB’s Postgres Plus product family.
This slide deck is a translated version; it was originally written in Korean (http://www.slideshare.net/saltynut/how-do-we-drive-tech-changes).
It describes how we drove technical change in an organization that had been using an old-fashioned Java stack (Java 1.6 + Spring 3.x + MyBatis) and a monolithic architecture.
The key point is what it takes to drive change: I discuss what we did during Phase 1 and what we are doing in Phase 2 for architecture, frontend, backend, and methodology/process.
Phase1
- Architecture : Frontend / Backend Separation
- Frontend : Angular.js, Grunt, Bower
- Backend : Java 1.7/Spring4, ORM
- Methodology/Process : Scrum, Git
Phase2
- Architecture : Micro-Service Architecture(MSA)
- Frontend : Content Router, E2E Test
- Backend : Polyglot, Multi-Framework
- Methodology/Process : Scrum+JIRA, Git Branch Policy, Pair Programming, Code Workshop
The most important new features of Oracle 23c for DBAs and developers. You can get more detail from the video on my YouTube channel: https://youtu.be/XvL5WtaC20A
Unveiling the Advantages of Agile Software Development (brainerhub1)
Learn about the advantages of Agile software development and how it can simplify your workflow to spur faster innovation. Jump right in!
E-commerce Development Services (Hornet Dynamics)
For any business hoping to succeed in the digital age, having a strong online presence is crucial. We offer Ecommerce Development Services that are customized according to your business requirements and client preferences, enabling you to create a dynamic, safe, and user-friendly online store.
Consistent toolbox talks are critical for maintaining workplace safety, as they provide regular opportunities to address specific hazards and reinforce safe practices.
These brief, focused sessions ensure that safety is a continual conversation rather than a one-time event, which helps keep safety protocols fresh in employees' minds. Studies have shown that shorter, more frequent training sessions are more effective for retention and behavior change compared to longer, infrequent sessions.
By engaging workers regularly, toolbox talks promote a culture of safety, empower employees to voice concerns, and ultimately reduce the likelihood of accidents and injuries on site.
The traditional method of conducting safety talks with paper documents and lengthy meetings is not only time-consuming but also less effective. Manual tracking of attendance and compliance is prone to errors and inconsistencies, leading to gaps in safety communication and potential non-compliance with OSHA regulations. Switching to a digital solution like Safelyio offers significant advantages.
Safelyio automates the delivery and documentation of safety talks, ensuring consistency and accessibility. The microlearning approach breaks down complex safety protocols into manageable, bite-sized pieces, making it easier for employees to absorb and retain information.
This method minimizes disruptions to work schedules, eliminates the hassle of paperwork, and ensures that all safety communications are tracked and recorded accurately. Ultimately, using a digital platform like Safelyio enhances engagement, compliance, and overall safety performance on site. https://safelyio.com/
Project Management: The Role of Project Dashboards (Karya Keeper)
Project management is a crucial aspect of any organization, ensuring that projects are completed efficiently and effectively. One of the key tools used in project management is the project dashboard, which provides a comprehensive view of project progress and performance. In this article, we will explore the role of project dashboards in project management, highlighting their key features and benefits.
How Can Hiring a Mobile App Development Company Help Your Business Grow? (ToXSL Technologies)
ToXSL Technologies is an award-winning Mobile App Development Company in Dubai that helps businesses reshape their digital possibilities with custom app services. As a top app development company in Dubai, we offer highly engaging iOS & Android app solutions. https://rb.gy/necdnt
Hand Rolled Applicative User Validation Code Kata (Philip Schwarz)
Could you use a simple piece of Scala validation code (granted, a very simplistic one too!) that you can rewrite, now and again, to refresh your basic understanding of Applicative operators <*>, <*, *>?
The goal is not to write perfect code showcasing validation, but rather to provide a small, rough-and-ready exercise to reinforce your muscle memory.
Despite its grandiose-sounding title, this deck consists of just three slides showing the Scala 3 code to be rewritten whenever the details of the operators begin to fade away.
The code is my rough and ready translation of a Haskell user-validation program found in a book called Finding Success (and Failure) in Haskell - Fall in love with applicative functors.
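As a flavor of what such a kata might look like, here is a minimal hand-rolled sketch in Scala 3. It is not the deck's actual code: the `Validated` type, the `User` fields, and the error messages are all illustrative assumptions — only the three operators `<*>`, `*>`, and `<*` come from the description above.

```scala
// A hand-rolled error-accumulating validation type (illustrative, not from the deck).
enum Validated[+E, +A]:
  case Valid(a: A)
  case Invalid(es: List[E])

import Validated.*

extension [E, A, B](vf: Validated[E, A => B])
  // <*> applies a validated function to a validated argument;
  // unlike Either, it accumulates errors from both sides.
  def <*>[E2 >: E](va: Validated[E2, A]): Validated[E2, B] = (vf, va) match
    case (Valid(f), Valid(a))       => Valid(f(a))
    case (Invalid(e1), Invalid(e2)) => Invalid(e1 ++ e2)
    case (Invalid(e), _)            => Invalid(e)
    case (_, Invalid(e))            => Invalid(e)

extension [E, A](va: Validated[E, A])
  // *> runs both validations but keeps only the right result.
  def *>[E2 >: E, B](vb: Validated[E2, B]): Validated[E2, B] =
    Valid((_: A) => (b: B) => b) <*> va <*> vb
  // <* runs both validations but keeps only the left result.
  def <*[E2 >: E, B](vb: Validated[E2, B]): Validated[E2, A] =
    Valid((a: A) => (_: B) => a) <*> va <*> vb

case class User(name: String, password: String)

def nonEmpty(label: String)(s: String): Validated[String, String] =
  if s.nonEmpty then Valid(s) else Invalid(List(s"$label must not be empty"))

@main def demo(): Unit =
  val mk = User.apply.curried
  // Both fields valid: a User is built.
  println(Valid(mk) <*> nonEmpty("name")("kim") <*> nonEmpty("password")("pw"))
  // Both fields invalid: both error messages are accumulated.
  println(Valid(mk) <*> nonEmpty("name")("") <*> nonEmpty("password")(""))
```

Rewriting this from memory until the `(Invalid, Invalid)` case feels obvious is the whole point of the kata.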
25. Why do we want Kubernetes?
Standardize, Containerize, Deploy
…to Samsung Data Centers.
…to developer systems for agility and productivity.
…to public virtual machine clouds.
…to new and even more efficient public container clouds.
26. Why Focus on Kubernetes?
• Key Technology: Container Management
  – Deployment
  – Repair
  – Scaling
• Clean open source license
• Good design by a vibrant, healthy community
• Rapid pace of improvement
• Right contributors with the right experience
• Best high scale public cloud container option – Google Container Engine – available now
• Supports multiple container specs: Docker and appc