"Containers, DevOps, Microservices and Kafka: Tools used by our Monolith wrecking crew."
Speakers: Jonathan Owens, Senior Site Reliability Engineer and Jose Fernandez, Lead Software Engineer, New Relic
New Relic Infrastructure Intro:
Increasing Mean Time Between Loss of Sleep
Speakers: Jim Stoneham, GM & SVP, SMB Business, New Relic, and Mikhail Panchechenko, Director, Engineering, New Relic
Featuring: Al Kemner, Principal Engineer – Platform as a Service, Gannett
Are you ready to migrate to the cloud? How will you prove success? This presentation covers how to baseline performance before and after your cloud migration.
Thinking About the Full Stack to Create Great Mobile Experiences
Speakers: Devin Cheevers, Product Marketing Manager, and Susie Dirks, Product Manager, Mobile, New Relic
What are the challenges of using Docker? Find out what we learned at New Relic. Presented at Velocity 2016 Santa Clara (at the New Relic booth).
Faster business decisions and collaboration with Elastic Workplace Search (Elasticsearch)
With a dizzying array of productivity and collaboration tools available on the market, learning how to maximize their usefulness — and your investment in them — is paramount. A unified search experience across all those sources of content ensures that your teams consistently see and share the right docs and data. See how less time spent searching means more time spent strategizing.
Advanced correlations for threat detection and more (Elasticsearch)
Learn how to perform correlations and create rules to detect malicious activity and identify and correlate behaviors. Event Query Language (EQL) provides robust data processing and analysis capabilities that are ideal for hunting threats, investigating suspicious activity, and scoping incidents.
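As a hedged illustration of the kind of correlation rule EQL makes possible (the field names follow the Elastic Common Schema, but the ports, process names, and index pattern here are invented for the example):

```python
# A sample EQL sequence rule: flag a host where a connection to an
# unusual port is followed within one minute by a shell process start.
# Field names follow the Elastic Common Schema; the specific values
# are illustrative, not from the talk.
EQL_RULE = """
sequence by host.id with maxspan=1m
  [network where destination.port == 4444]
  [process where process.name in ("bash", "cmd.exe")]
"""

# With the official Elasticsearch Python client (assumed installed and
# pointed at a reachable cluster), the rule could be run like this:
#   from elasticsearch import Elasticsearch
#   client = Elasticsearch("http://localhost:9200")
#   resp = client.eql.search(index="logs-*", query=EQL_RULE)
#   for seq in resp["hits"]["sequences"]:
#       ...  # each match is a correlated pair of events on one host
```

The `sequence by ... with maxspan` form is what distinguishes EQL from a plain filter: it correlates ordered events per entity within a time bound.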
Scalability, fault tolerance, distributed logs… these are terms we hear more and more these days. Making them happen can be quite a challenge, especially when our business needs to be data-intensive, agile, and fast to market.
One way to answer this challenge is microservices: small services that communicate with each other to deliver business value. The key word here is _communication_. Without communication, all the power of microservices falls apart. And communication is not trivial when it involves multiple data systems talking to one another over many channels, each requiring its own protocol and communication methods. This is where communication can become a bottleneck if not handled properly.
One answer to this problem is Kafka, a distributed messaging system that provides fast, highly scalable, and redundant message exchange using a publish-subscribe model. And when we say fast, we mean one of the fastest messaging systems out there.
This presentation will show you an alternative way of doing microservices with event-driven architecture through Kafka.
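The publish-subscribe shape of such an event-driven service can be sketched roughly as follows (the topic name, event schema, and broker address are illustrative assumptions, using the kafka-python client rather than anything from the talk):

```python
import json

# Hypothetical event schema for an "orders" topic; every field here is
# an illustrative assumption, not taken from the presentation.
def make_order_event(order_id, amount):
    """Serialize a domain event for the publish-subscribe bus."""
    return json.dumps({
        "type": "OrderPlaced",
        "order_id": order_id,
        "amount": amount,
    }).encode("utf-8")

# With the kafka-python client (assumed installed, broker assumed on
# localhost:9092), one service publishes without knowing its consumers:
#   from kafka import KafkaProducer, KafkaConsumer
#   producer = KafkaProducer(bootstrap_servers="localhost:9092")
#   producer.send("orders", make_order_event("o-42", 19.99))
#   producer.flush()
#
# and any number of services subscribe independently:
#   consumer = KafkaConsumer("orders",
#                            bootstrap_servers="localhost:9092",
#                            group_id="billing")
#   for record in consumer:
#       event = json.loads(record.value)
#       ...  # each subscriber reacts on its own schedule
```

The point of the pattern is in the comments: the producer publishes to a topic, not to a peer service, so adding a new consumer requires no change to the producer.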
Presenters:
Laszlo-Robert Albert (albertlaszlorobert [at] gmail [dot] com)
Dan Balescu (dfbalescu [at] gmail [dot] com)
Application Architecture Summit - Monitoring the Dynamic Cloud (New Relic)
How do you apply modern application architecture to your digital business? Hear from New Relic's Sr. Director, Strategic Architecture, Lee Atchison, at the Application Architecture Summit. Learn more here: https://newrelic.com/partner/aws
Since the term “DevOps” was coined nearly a decade ago, organizations have strived to embrace the concept as a way to increase agility and speed. Yet, after years of experiments and pilots, DevOps has often failed to live up to grand expectations. For many organizations, the seemingly simple concepts of collaboration and transparency are challenging in practice.
In this webinar, Donnie Berkholz, DevOps Research Director at 451 Research, shared what successful DevOps looks like and how new collaboration models and technologies can aid in your efforts to adopt this software development methodology.
View the full webinar here: https://newrelic.com/resources/webinar/DevOps-101-170315
Apache Camel journey with Microservices, lessons learned and utilisation of Fabric8 to make Docker, Kubernetes and OpenShift easy for developers to use
Build a Cloud Day presentation about Fuse Fabric technology in the cloud and how integration projects and architectures can be designed on top of CloudStack, OpenStack, Amazon, ...
Webinar: iPaaS in the Enterprise - What to Look for in a Cloud Integration Pl... (SnapLogic)
In this webinar, we talk about important features when it comes to evaluating an integration platform as a service (iPaaS) solution, including ease of use, flexibility, functionality and cloud-based architecture. Joining us in this webinar was Bryant Pham of SnapLogic customer Xactly.
With Bryant, we also discussed Xactly’s evaluation process in finding a solution to connect applications in real time to create a single, comprehensive system of systems to run an expanding business, and initial results the Xactly team is seeing with the use of SnapLogic, including automation and cloud analytics.
To learn more, visit: www.snaplogic.com/ipaas
Developing real-time data pipelines with Spring and Kafka (marius_bogoevici)
Talk given at the Apache Kafka NYC Meetup, October 20, 2015.
http://www.meetup.com/Apache-Kafka-NYC/events/225697500/
Kafka has emerged as a clear choice for a high-throughput, low latency messaging system that addresses the needs of high-performance streaming applications. The Spring Framework has been, in the last decade, the de-facto standard for developing enterprise Java applications, providing a simple and powerful programming model that allows developers to focus on the business needs, leaving the boilerplate and middleware integration to the framework itself. In fact, it has evolved into a rich and powerful ecosystem, with projects focusing on specific aspects of enterprise software development - like Spring Boot, Spring Data, Spring Integration, Spring XD, Spring Cloud Stream/Data Flow to name just a few.
In this presentation, Marius Bogoevici from the Spring team will take the perspective of the Kafka user, and show, with live demos, how the various projects in the Spring ecosystem address their needs:
- how to build simple data integration applications using Spring Integration Kafka;
- how to build sophisticated data pipelines with Spring XD and Kafka;
- how to build cloud native message-driven microservices using Spring Cloud Stream and Kafka, and how to orchestrate them using Spring Cloud Data Flow;
Reducing Microservice Complexity with Kafka and Reactive Streams (jimriecken)
My talk from ScalaDays 2016 in New York on May 11, 2016:
Transitioning from a monolithic application to a set of microservices can help increase performance and scalability, but it can also drastically increase complexity. Layers of inter-service network calls add latency and an increasing risk of failure where previously only local function calls existed. In this talk, I'll speak about how to tame this complexity using Apache Kafka and Reactive Streams to:
- Extract non-critical processing from the critical path of your application to reduce request latency
- Provide back-pressure to handle both slow and fast producers/consumers
- Maintain high availability, high performance, and reliable messaging
- Evolve message payloads while maintaining backwards and forwards compatibility.
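The back-pressure idea in the list above can be illustrated without Kafka or a Reactive Streams library: a bounded buffer between stages forces a fast producer to wait for a slow consumer instead of overwhelming it. A minimal sketch, with invented names, follows:

```python
import queue
import threading

def run_pipeline(items, maxsize=4):
    """Toy two-stage pipeline with back-pressure via a bounded queue.

    This is only an analogy for the demand-driven flow control the talk
    attributes to Reactive Streams; it is not the talk's implementation.
    """
    buf = queue.Queue(maxsize=maxsize)  # bounded: put() blocks when full
    out = []

    def producer():
        for item in items:
            buf.put(item)   # blocks here whenever the consumer lags
        buf.put(None)       # sentinel marking end of stream

    def consumer():
        while True:
            item = buf.get()
            if item is None:
                break
            out.append(item * 2)  # stand-in for real processing work

    threads = [threading.Thread(target=producer),
               threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out
```

The bounded `queue.Queue` is the whole trick: the producer's `put()` is the point where demand from downstream propagates upstream.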
MuleSoft, leading open source ESB developer, launched their cloud offering, Mule iON - integration platform as a service, earlier this year. This session will give you a behind the scenes peek into the architecture of Mule iON, and lessons learned in the trenches building and running a 24x7 cloud service.
Once upon a time… integration was all about ESBs, EAI and B2B. Today, many companies wish to integrate beyond firewalls, and typically with SaaS. Hence the rise of API-based integration, using lightweight protocols. The evolution is a fact; so now what is the current state of the Azure Integration Platform?
Glenn dives into its architecture and explains Logic Apps and the Enterprise Integration Pack. Learn to create basic IFTTT (If This Then That) scenarios, or why not: think bigger and create enterprise-level, hybrid integration scenarios using Logic Apps and on premises LOB apps. 'How does it work', 'How is it Made' and 'How does it all fit together’? Just a couple of questions you will find the answer to.
You have data - probably lots of it. There are many systems that promise to store that data durably, retrieve it quickly, and grow with you indefinitely. Not all of them are good at everything though—so which one do you use? I'll help you make the right choice for your application, scale, and requirements, based on real production experience.
Made for Each Other: Microservices + PaaS (VMware Tanzu)
Companies need to build better software faster to compete. But existing monolithic applications, legacy platforms, and lengthy operational deployment cycles are holding innovation back. Microservices are becoming the cloud architecture of choice because they offer the ability to loosely couple applications into discrete services that can be surgically changed without requiring disruptive overhauls. This approach enables the responsiveness and rapid change needed by the business.
Enterprise PaaS is a critical foundation to simplify the operations, governance, and health management of these new architectures. Together with a DevOps culture, microservices and PaaS are the engine that drives innovation at speed.
AWS re:Invent 2016: Cloud Monitoring: Change is the New Normal - New Relic & G... (Amazon Web Services)
Dynamic applications require dynamic resources and dynamic infrastructure. AWS provides many resources for applications to build highly scaled, highly available, dynamic applications, services, and microservices. However, managing and tracking these resources—and making sure they are operating as expected—is a challenge. In this session, we discuss how to monitor and manage the dynamic resources that make up your applications, and learn how to tell when a resource is causing your application problems. Designed for people already acquainted with basic dynamic resource allocation techniques, such as effectively using Auto Scaling, this session helps you take your resource management to the next level. Session sponsored by New Relic.
AWS Competency Partner
Monitoring the Cloud – Understanding the Dynamic Nature of Cloud Computing - ... (Amazon Web Services)
Applications running in a typical data center are static entities. The same is not true in the cloud. Dynamic scaling and resource allocation is the norm in AWS. Technologies such as EC2, Lambda, and AutoScaling make tracking resources and resource utilization a challenge. The days of static server monitoring are over.
In this session, we examine trends we've discovered in dynamic resource allocation and how AWS helps deliver those trends. We will discuss some of the best practices we've learned working with New Relic customers on how you can manage applications running in this environment and take advantage of the dynamic nature of the cloud to give you additional insights into your application performance.
Speaker: Lee Atchison, Principal Cloud Architect and Advocate, New Relic
Monitoring Performance of Enterprise Applications on AWS: Understanding the D... (Amazon Web Services)
Applications running in a typical data center are static entities. But applications aren't static in the cloud. Dynamic scaling and resource allocation is the norm in AWS. Technologies such as Amazon EC2, AWS Lambda, and Auto Scaling provide flexibility but can add complexity to the enterprise application environment. New Relic helps manage that complexity to give the benefits of the cloud without sacrificing simplicity. In this session, we discuss some of the best practices we’ve learned working with New Relic customers on how to manage applications running in this environment and take advantage of the dynamic nature of the cloud to give you additional insights into your application performance.
AWS re:Invent 2016: Cloud Monitoring - Understanding, Preparing, and Troubles... (Amazon Web Services)
Applications running in a typical data center are static entities. Dynamic scaling and resource allocation are the norm in AWS. Technologies such as Amazon EC2, Docker, AWS Lambda, and Auto Scaling make tracking resources and resource utilization a challenge. The days of static server monitoring are over.
In this session, we examine trends we’ve observed across thousands of customers using dynamic resource allocation and discuss why dynamic infrastructure fundamentally changes your monitoring strategy. We discuss some of the best practices we’ve learned by working with New Relic customers to build, manage, and troubleshoot applications and dynamic cloud services. Session sponsored by New Relic.
AWS Competency Partner
Implementing Docker in Production at Scale (Karl Matthias)
Quickly and reliably delivering robust software applications at scale is an important goal of any large organization. Docker is a new tool that can significantly streamline the whole delivery workflow from development through production. With a little planning, a Docker workflow can go a long way to solving some challenging organizational and deployment issues.
However, a fully-functioning production workflow based around Docker requires components that don’t come out of the box. There are a lot of overlapping tools provided by both Docker, Inc. and the Docker community that are intended to cover the gaps in the workflow that are not directly handled by Docker. Selecting the right tools from that ecosystem and adopting the right strategy to implement them in your organization is critical, but the path forward may not always seem very clear.
At New Relic we’ve been shipping production applications via Docker containers for one and a half years. We’re now leveraging Docker to deliver highly scalable applications for multiple teams in a fast-paced, innovative company. We’ll explain the choices we made during our journey to implement a full-scale production Docker workflow, discuss some things we might do differently today, and demonstrate a set of tools that will allow you to easily build Docker clusters in multiple datacenters and deploy and monitor your containers across those environments.
You will leave this talk armed with a clear understanding of how to go back and implement this solution in your organization, using available open source tools.
Engineering and Autonomy in the Age of Microservices - Nic Benders, New Relic (Ambassador Labs)
Nic Benders, New Relic's Chief Architect discusses how New Relic re-organized their engineering teams around microservices in order to achieve greater scale and efficiency
If It Touches Production, It Is Production (New Relic)
In Site Engineering at New Relic we treat operations like software, operationalizing development teams to build, maintain and scale a unified polyglot environment. By following the mantra of "automate everything," we treat all tasks, tools and processes the same way we do our products, with a well-defined lifecycle. In this session you will learn how to integrate your DevOps team with your product development teams, and hear the best practices we have learned that will successfully take your infrastructure platform into the future.
Using Queryable State for Fun and Profit (Flink Forward)
Flink Forward San Francisco 2022.
A particular feature in our system relies on a streaming 90-minute trailing window of 1-minute samples - implemented as a lookaside cache - to speed up a particular query, allowing our customers to rapidly see an overview of their estate. Across our entire customer base, there is a substantial amount of data flowing into this cache - ~1,000,000 entries/second, with the entire cache requiring ~600GB of RAM. The current implementation is simplistic but expensive. In this talk I describe a replacement implementation as a stateful streaming Flink application leveraging Queryable State. This Flink application reduces the net cost by ~90%. In this session, the implementation is described in detail, including windowing considerations, a sliding-window state buffer that avoids the sliding window replication penalty, and a comparison of queryable state and Redis queries. The talk concludes with a frank discussion of when this distinctive approach is, and is not, appropriate.
by Ron Crocker
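The sliding-window state buffer the abstract describes can be sketched in plain Python: keep each one-minute sample exactly once in a per-key ring buffer, rather than replicating it into every one of the ~90 overlapping windows it belongs to. This is only an illustration of the idea, not the talk's Flink implementation; the class name, window size, and key scheme are assumptions.

```python
from collections import deque

class TrailingWindow:
    """In-memory sketch of a trailing window of 1-minute samples.

    Each sample is stored once in a bounded deque per key, so the
    per-element cost is O(1) regardless of how many logical windows
    overlap it (the sliding-window replication penalty this avoids).
    """

    def __init__(self, size=90):
        self.size = size        # window length in 1-minute samples
        self.samples = {}       # key -> deque of (minute, value)

    def add(self, key, minute, value):
        """Ingest one sample; the deque evicts the oldest automatically."""
        buf = self.samples.setdefault(key, deque(maxlen=self.size))
        buf.append((minute, value))

    def query(self, key):
        """The 'queryable state' read path: summarize the trailing window."""
        buf = self.samples.get(key, ())
        return sum(v for _, v in buf)
```

In the real system the read path would be served by Flink's Queryable State API against keyed operator state; the structure of the buffer is the transferable part.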
7 Tips & Tricks to Having Happy Customers at Scale (New Relic)
Customer expectations are at an all-time high, making it more and more difficult for companies to please them. Companies who understand their customers well are the ones who rise to the top over their competitors. New Relic, provider of real-time insights for software-driven businesses has this formula figured out. Roger Scott, New Relic's EVP and Chief Customer Officer shares his 7 tips and tricks for keeping your customers happy— and how to do so at a large scale.
FutureStack Tokyo 19 - [New Relic Technical Session] Monitoring and Visualization Will Save Your Digital Transformation! - ... (New Relic)
One of New Relic's goals is to help drive DevOps adoption and make digital transformation succeed. After examining why monitoring and visualization matter for DevOps and what kinds of data need to be managed, this session introduces examples of what New Relic makes possible, with demos, from both technical and business perspectives.
Speaker: 佐々木 千枝, Solutions Consultant, New Relic K.K.
FutureStack Tokyo 19 - Turning Insights and Data into Organizational Strength - 池田 明啓, DWANGO Co., Ltd. (New Relic)
To keep a service or product going "forever," insights and data must become a source of organizational strength.
Dwango.jp, which we develop and operate, will soon celebrate its 20th anniversary. Tracing the history of a system that was by no means smooth sailing, and how we adopted New Relic, this session shares one of the reasons we have endured: how to turn insights and data into organizational strength.
Three Monitoring Mistakes and How to Avoid Them (New Relic)
The days of parsing log files and building out homebrewed monitoring tools are (thankfully) coming to an end. Yet as those outdated techniques begin to fade, a whole new set of challenges has arisen around employing and running modern monitoring solutions.
Discover how New Relic can help turn monitoring blunders into intelligent problem solving, including how to avoid making common mistakes like:
- Not monitoring the whole system
- Monitoring arbitrary things in your system
- Making your monitoring part of the problem
Intro to Multidimensional Kubernetes Monitoring (New Relic)
As a Kubernetes environment grows and becomes more complex, it gets harder to answer some very basic—but very important—questions. Questions like: What is the health of my cluster? What is the hierarchy and the health of the elements (nodes, pods, containers, and applications) within my cluster? In order to effectively manage the health and performance of your Kubernetes environments—at any scale and any level of complexity—it’s essential you have immediate, useful answers to these questions.
Our Kubernetes cluster explorer was designed to give you a multi-dimensional representation of your clusters—giving you the ability to drill down into Kubernetes data and metadata in a high-fidelity, curated UI.
Understanding Microservice Latency for DevOps Teams: An Introduction to New R... (New Relic)
Distributed tracing is designed to give DevOps teams an easy way to capture, visualize, and analyze traces through complex architectures—including architectures that use both monoliths and microservices. And, by leveraging New Relic Applied Intelligence capabilities, you can easily highlight anomalies within a trace for faster resolution.
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
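The "automated data validation" point above is the most directly implementable of these recommendations. A minimal, hedged sketch of a quality check at the source, with field names and rules invented purely for illustration:

```python
def validate_rows(rows, required=("id", "amount")):
    """Split incoming rows into (clean, errors) before they flow downstream.

    The required fields and the negative-amount rule are illustrative
    assumptions; real pipelines would load such rules from a schema or
    a data-contract definition.
    """
    clean, errors = [], []
    for row in rows:
        if any(k not in row or row[k] is None for k in required):
            errors.append((row, "missing field"))
        elif row["amount"] < 0:
            errors.append((row, "negative amount"))
        else:
            clean.append(row)
    return clean, errors
```

Rejecting bad records at ingestion, and keeping the rejects with a reason attached, is what makes downstream root-cause analysis (the lineage point above) tractable.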
Techniques to optimize the PageRank algorithm usually fall into two categories: reducing the work per iteration, and reducing the number of iterations. These goals are often at odds with one another. Skipping computation on vertices which have already converged can save iteration time. Skipping in-identical vertices, which share the same in-links, helps avoid duplicate computations and thus can also reduce iteration time. Road networks often have chains which can be short-circuited before PageRank computation to improve performance, since the final ranks of chain nodes can be calculated directly; this can reduce both the iteration time and the number of iterations. If a graph has no dangling nodes, the PageRank of each strongly connected component can be computed in topological order, which can reduce the iteration time and the number of iterations, and also enables multi-iteration concurrency in the PageRank computation. The combination of all of the above methods is the STICD algorithm [sticd]. For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
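The first of these optimizations, skipping vertices that have already converged, can be sketched on top of a plain power-iteration PageRank (this is an illustrative baseline, not the STICD algorithm itself; it assumes every vertex appears as a key in `graph` and has at least one out-link, i.e. no dangling nodes):

```python
def pagerank(graph, d=0.85, tol=1e-6, max_iter=100):
    """Power-iteration PageRank with per-vertex convergence skipping.

    graph: dict mapping each vertex to a list of its out-neighbors.
    Assumes no dangling nodes (every vertex has out-links).
    """
    verts = list(graph)
    n = len(verts)
    rank = {v: 1.0 / n for v in verts}

    # Precompute in-links so each vertex pulls rank from its sources.
    in_links = {v: [] for v in verts}
    for u, outs in graph.items():
        for v in outs:
            in_links[v].append(u)

    converged = set()
    for _ in range(max_iter):
        new_rank = dict(rank)
        for v in verts:
            if v in converged:
                continue  # the optimization: skip settled vertices
            r = sum(rank[u] / len(graph[u]) for u in in_links[v])
            new_rank[v] = (1 - d) / n + d * r
            if abs(new_rank[v] - rank[v]) < tol:
                converged.add(v)
        rank = new_rank
        if len(converged) == n:
            break
    return rank
```

Note the trade-off mentioned in the text: freezing converged vertices saves work per iteration but can leave a small residual error if a neighbor's rank later shifts, which is why such pruning is usually paired with a tolerance check.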
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23... (John Andrews)
SlideShare Description for "Chatty Kathy - UNC Bootcamp Final Project Presentation"
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
Opendatabay - Open Data Marketplace.pptx (Opendatabay)
Opendatabay.com unlocks the power of data for everyone. Open Data Marketplace fosters a collaborative hub for data enthusiasts to explore, share, and contribute to a vast collection of datasets.
First ever open hub for data enthusiasts to collaborate and innovate. A platform to explore, share, and contribute to a vast collection of datasets. Through robust quality control and innovative technologies like blockchain verification, opendatabay ensures the authenticity and reliability of datasets, empowering users to make data-driven decisions with confidence. Leverage cutting-edge AI technologies to enhance the data exploration, analysis, and discovery experience.
From intelligent search and recommendations to automated data productisation and quotation, Opendatabay's AI-driven features streamline the data workflow. Finding the data you need shouldn't be complex. Opendatabay simplifies the data acquisition process with an intuitive interface and robust search tools. Effortlessly explore, discover, and access the data you need, allowing you to focus on extracting valuable insights. Opendatabay breaks new ground with dedicated, AI-generated synthetic datasets.
Leverage these privacy-preserving datasets for training and testing AI models without compromising sensitive information. Opendatabay prioritizes transparency by providing detailed metadata, provenance information, and usage guidelines for each dataset, ensuring users have a comprehensive understanding of the data they're working with. By leveraging a powerful combination of distributed ledger technology and rigorous third-party audits, Opendatabay ensures the authenticity and reliability of every dataset. Security is at the core of Opendatabay: the marketplace implements stringent security measures, including encryption, access controls, and regular vulnerability assessments, to safeguard your data and protect your privacy.
"Containers, DevOps, Microservices and Kafka: Tools used by our Monolith wrecking crew." [FutureStack16]
1. Containers, DevOps, Microservices and Kafka: Tools used by our Monolith wrecking crew.
Jonathan Owens, Senior Site Reliability Engineer & Jose Fernandez, Lead Software Engineer
2. This document and the information herein (including any information that may be incorporated by reference) is provided for informational purposes only and should not be construed as an offer, commitment, promise or obligation on behalf of New Relic, Inc. (“New Relic”) to sell securities or deliver any product, material, code, functionality, or other feature. Any information provided hereby is proprietary to New Relic and may not be replicated or disclosed without New Relic’s express written permission.
Such information may contain forward-looking statements within the meaning of federal securities laws. Any statement that is not a historical fact or refers to expectations, projections, future plans, objectives, estimates, goals, or other characterizations of future events is a forward-looking statement. These forward-looking statements can often be identified as such because the context of the statement will include words such as “believes,” “anticipates,” “expects” or words of similar import.
Actual results may differ materially from those expressed in these forward-looking statements, which speak only as of the date hereof, and are subject to change at any time without notice. Existing and prospective investors, customers and other third parties transacting business with New Relic are cautioned not to place undue reliance on this forward-looking information. The achievement or success of the matters covered by such forward-looking statements are based on New Relic’s current assumptions, expectations, and beliefs and are subject to substantial risks, uncertainties, assumptions, and changes in circumstances that may cause the actual results, performance, or achievements to differ materially from those expressed or implied in any forward-looking statement. Further information on factors that could affect such forward-looking statements is included in the filings New Relic makes with the SEC from time to time. Copies of these documents may be obtained by visiting New Relic’s Investor Relations website at ir.newrelic.com or the SEC’s website at www.sec.gov.
New Relic assumes no obligation and does not intend to update these forward-looking statements, except as required by law. New Relic makes no warranties, expressed or implied, in this document or otherwise, with respect to the information provided.
13. Updating last-seen timestamps for metrics
Metric ID resolution
Computing cluster agent metrics
Writing metric metadata
Writing summary records
Applying black / whitelist rules to accounts
Processing and storing environment values
Legacy alerting
Agent connect() calls
Agent run state
Orchestrating thread profiles
Applying rename rules to incoming metrics
Aggregation of 1-minute and 1-hour timeslices from raw harvests
Writing 1-minute and 1-hour timeslices to MySQL
14. Updating last-seen timestamps for metrics
Metric ID resolution
Computing cluster agent metrics
Writing metric metadata
Writing summary records
Applying black / whitelist rules to accounts
Processing and storing environment values
Agent connect() calls
Agent run state
Orchestrating thread profiles
Applying rename rules to incoming metrics
Legacy alerting
Aggregation of 1-minute and 1-hour timeslices from raw harvests
Writing 1-minute and 1-hour timeslices to MySQL
15. Shopping list for replacing a Monolith
New Relic APM + Insights
Kafka
Microservices
DevOps
Containers
30. Shopping list for replacing a Monolith
New Relic APM + Insights
Kafka
Microservices
DevOps
Containers
31.
32. "DevOps means empathy and communication over the wall until there is no more wall."
– Alice Goldfuss, Site Reliability Engineer, New Relic (@alicegoldfuss)
33. Jonathan Owens (Ops) and Jose Fernandez (Dev), with different goals:
Dev: "I need 3 Cassandra servers ASAP!!" (vague request)
… long wait …
Ops: "Here they are. Sorry it took so long. Our queue was backed up."
Dev: "Doh. I needed the newer version of Cassandra…"
Frustration due to lack of visibility… and the cycle repeats.
38. Shopping list for replacing a Monolith
New Relic APM + Insights
Kafka
Microservices
DevOps
Containers
39. "Docker is opinionated about software architecture in a way that encourages more robustly crafted applications."
– Karl Matthias & Sean P. Kane, Docker: Up and Running
40. PUSH and then PULL door
Push plate?
Policy statement
Push on a pull handle
Visual instruction to use badge (with light indicator)