The document discusses how New Relic launched its Metric Explorer initiative in 3 months by adopting an agile approach focused on failing better. It emphasizes incremental development, feature flagging, dark launches, data-driven scoping, and empowering self-reliant teams with shared goals and real-time visibility into product usage and deployments provided by New Relic tools. This allowed for rapid iteration, integration of feedback, and reliability improvements to achieve a successful launch.
Our Evolution to GraphQL: Unifying our API Strategy - New Relic
New Relic adopted GraphQL to address issues from rapid growth including an unwieldy code base, difficulty managing changes, and engineering stress. GraphQL provided a common API language and extensible layer to divide code into microservices and containers while simplifying authorization and allowing client-driven querying and updating across multiple services.
The document is New Relic's 1Q19 investor presentation. It discusses New Relic's platform for monitoring digital systems in real-time. Some key points:
- New Relic provides visibility into application and infrastructure performance, customer experience, and business outcomes.
- The market for application performance monitoring is growing rapidly as software and technology become more complex.
- New Relic has a multi-tenant cloud platform and recurring revenue SaaS business model with high gross margins.
- The company focuses on expanding its platform capabilities and growing within existing customers and internationally.
Sandboxes: The Future of App Development - Dreamforce
Major Releases, Minor Releases. Developers, Testers. Refreshes and Previews. How do you manage all of these demands across your Salesforce environments and sandboxes? Join Farhan Tahir, Platform Product Manager, as he shares how to tackle sandbox-management problems through both processes and tools, along with insight into roadmap features that make development efficient and agile by automating with Salesforce Sandboxes. Watch the video now: https://www.youtube.com/watch?v=FMH77436I2o
Cisco ERP Implementation and related findings on systems integration.
Project Members:
Rohan Kumbhar, Chris Moss, Dhanesh Gandhi, John Hicks and Gouthami Gurram
British Medical Journal: Refine Your Metrics For Digital Success - AppD Summi... - AppDynamics
This document outlines metrics that technology leaders should consider to drive business outcomes. It recommends starting with the desired business outcome and mapping the customer journey. Both lagging and leading metrics should be identified to measure progress towards the outcome. The document provides examples of metrics related to software quality, agility, and customer retention. It emphasizes that metrics must be specific to the business and stresses the importance of involving both technical and business stakeholders when developing metrics.
The document summarizes a MuleSoft meetup event that included presentations on the MuleSoft Solace Connector, Event Mesh, and MuleSoft product updates. The agenda included introductions, a presentation on the Solace Connector and Event Mesh by Jason Abram, and a presentation on MuleSoft updates by Ryan Grondal and Michael Price. The MuleSoft updates presentation covered key market trends, MuleSoft's 3-5 year strategy, ideas submitted to the product portal, and new features launched in Q4 2021 around connectivity, integration, APIs, and the platform. The event concluded with networking time.
Maximizing Salesforce Lightning Experience and Lightning Component Performance - Salesforce Developers
The document discusses various factors that affect the performance of Lightning Experience and Lightning Component pages. It outlines six main factors: geographical and network latency, device and browser capabilities, Salesforce org configuration, page complexity, component architecture, and server processing. For each factor, it provides recommendations for how to measure and optimize performance, such as enabling the Salesforce Edge network, limiting the number of components on a page, using conditional rendering, and leveraging caching features. The overall message is that page load time in Lightning Experience is sensitive to these infrastructure, code, and configuration factors.
How to justify the economic value of your data investment - Splunk
This document discusses methods for calculating the return on investment (ROI) and other metrics to justify data investments. It provides an example of using an interactive value assessment to gather key metrics from a manufacturing customer, such as downtime hours and production units. The assessment then calculates the potential costs avoided and benefits realized, such as reduced downtime and faulty units, to determine the cost-benefit ratio and payback period of investing in data and Splunk technology. The document emphasizes that value assessments are one part of developing an overall data strategy and roadmap to optimize investments for the future.
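The cost-benefit arithmetic behind a value assessment like this can be sketched in a few lines. Everything below is illustrative: the input figures, the investment amount, and the helper name are assumptions for the sketch, not numbers from the Splunk presentation.

```python
# Hypothetical value-assessment sketch; all figures are illustrative.

def payback_metrics(annual_benefit, investment):
    """Return (cost-benefit ratio, payback period in months)."""
    ratio = annual_benefit / investment
    payback_months = 12 * investment / annual_benefit
    return ratio, payback_months

# Example inputs a manufacturing customer might supply:
downtime_hours_avoided = 120      # hours/year of downtime avoided
cost_per_downtime_hour = 8_000    # revenue lost per downtime hour
faulty_units_avoided = 5_000      # defective units avoided per year
cost_per_faulty_unit = 12         # scrap/rework cost per unit

annual_benefit = (downtime_hours_avoided * cost_per_downtime_hour
                  + faulty_units_avoided * cost_per_faulty_unit)
ratio, months = payback_metrics(annual_benefit, investment=400_000)
print(f"benefit/yr=${annual_benefit:,}, ratio={ratio:.2f}, payback={months:.1f} months")
```

With these assumed inputs, the avoided downtime dominates the benefit, which is the usual pattern such assessments surface.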
Our API Evolution: From Metadata to Tooling API for Building Incredible Apps - Dreamforce
This document discusses how Salesforce APIs have evolved to better support building incredible user experiences. It describes how early APIs like SOAP were limited and how newer APIs like Tooling API and Metadata API were developed to address those limitations. It also explains how Salesforce's "API First" approach was used to build the Lightning Experience user interface by replacing describe calls with SOQL queries to the new Metadata Catalog objects to retrieve only necessary entity information.
High Tech Perspective: Overlooked Opportunity from S&OP - Steelwedge
Steelwedge Agility Webinar Series
Featured Presenter - Dennis Omanoff, a well-respected leader, consultant, and lecturer who has led end-to-end global supply chains at major multi-billion-dollar public companies and start-ups
Kick off the New Year with perspective from Dennis Omanoff, whose deep experience as Chief Supply Chain Officer at some of the High Tech industry's largest manufacturers, such as Seagate and McAfee, will illuminate a discussion on the hit-and-miss realities of using S&OP to make a difference in High Tech business. Mr. Omanoff will offer his views and real-world examples of where S&OP strategy, practice, and technology could be better used to sense and respond to the changing dynamics that are a particular challenge in the High Tech industry.
Register for this webinar to learn how to approach the most overlooked potential of S&OP in High Tech customer value networks: Driving Top Side Revenue.
Key topics include:
• the biggest, and most overlooked opportunity of S&OP
• connecting “the other side” of the sales order
• changing focus from 30-day P.O.s to daily/weekly change response
Presenters:
Dennis Omanoff is a well-respected leader, consultant, and lecturer who has led end-to-end global supply chains at major multi-billion-dollar public companies and start-ups in the Information Security, Networking, Storage, Telecom, and Retail sectors.
Nari Viswanathan is the VP of Product Management and Marketing at Steelwedge and was previously the lead Supply Chain analyst at Aberdeen.
For more information about S&OP, please visit: http://www.steelwedge.com/solutions/
Sharing APIs at Scale for a Great Developer Experience - Postman
This document discusses challenges with developing APIs at an enterprise scale and providing a modern developer experience. It outlines strategies for sharing APIs in a scalable way, including starting with basic functionality and authentication options, leveraging community contributions through open source, and maximizing visibility by publishing documentation on API networks and public workspaces. The presentation emphasizes starting small and iterating based on feedback, as well as covering multiple access paths through desktop, web, workspaces and networks.
Creating stellar customer support experiences using search - Elasticsearch
Customers, now more than ever, want to solve support issues on their own using websites and mobile applications. And self-service customer support translates to reduced support costs and higher customer satisfaction. Learn how Elastic Enterprise Search helps you achieve all this and more.
SplunkLive! Stockholm 2015 breakout - Splunk IT Service Intelligence - Splunk
Splunk's new Premium App offering, Splunk IT Service Intelligence, is full of exciting new features and functionality to enable the data-driven enterprise to monitor, alert on, and visualize these services in several new ways, including flexible free-form dashboards called "Glass Tables." Join us in this session to explore the versatility of the Glass Tables feature, discuss best practices around creating valuable and compelling Glass Tables for IT operations and business users, and inspect several examples of purpose-built Glass Tables.
Lightning Flow makes it easier for developers to build dynamic process-driven apps with Process Builder and the new Flow Builder. Join us and learn more about how you can get in the Flow!
Einstein Bots enable your customers to quickly and accurately interact with your company without waiting for a human agent to become available. Join us in this webinar as we talk about and show how Einstein Bots can be used to make your apps smarter without code, and how we can extend the functionality of Einstein Bots using Apex and external integrations.
Developers need data to create great apps, but often find managing lots of data to be a painful process. Big Objects brings the power and scale of big data to the Lightning Platform, all while using the same Salesforce platform tools and APIs you already know.
Forrester Research: How To Organise Your Business For Digital Success - AppD ... - AppDynamics
The document discusses what digital leaders need to know to be effective. It outlines six principles for digital leadership: 1) design holistic experiences, 2) become insights-driven, 3) invest in business agility, 4) redesign organizations from silos to connections, 5) fuel customer-led innovation, and 6) deliver customer outcomes. The principles emphasize understanding customer desires and designing experiences across touchpoints to satisfy those desires through outcomes. Digital leaders must also focus on business agility, data insights, and operational capabilities to smoothly deliver outcomes that meet rising customer expectations.
Insurers Can Now Update ISO Rating Content Digitally - A webinar presentation... - ValueMomentum
Keeping ISO rating data current is a complex and labor-intensive process for many insurance carriers. To react quickly and appropriately to market opportunities with ISO rating content updates, as well as regulatory changes, carriers need to look for solutions that streamline this process - enhancing their go-to-market strategies.
If you are actively seeking to modernize and integrate commercial lines rating with your existing core systems and extend the ability to digitally access rate-quote-bind capabilities for agents and brokers, look to the iFoundry Rating Engine. iFoundry is a modern rating engine that fully leverages the digital delivery of ISO's rating content via the ISO Electronic Rating Content (ERC) product. With ISO ERC and iFoundry, insurers are able to lower costs associated with rating operations by significantly reducing the time and effort needed to analyze and implement ISO updates to advisory loss costs and rules. Together, the ValueMomentum and ISO ERC offerings help companies automate the management of ISO rate plans along with their company deviations.
JK Technosoft is a software solutions and services company operating under the JK Organization conglomerate with over 650 employees across multiple development centers. It has been delivering software solutions since 1994 and has executed projects in 15 countries for clients in various industries. It offers a range of services including SAP consulting, application management, independent testing, and training.
Ensure Every Customer Matters With End User Monitoring at AppD Global Tour Lo... - AppDynamics
Retaining loyal customers is more important than ever, so ensuring an exceptional customer experience should be a top priority. End User Monitoring (EUM) is central to a successful enterprise APM strategy - watch this session and see what AppDynamics EUM can do for you and your business.
Learn why Elastic Cloud is the best place to run everything Elastic. You will hear about our commitment to the cloud, the benefits of using managed services, and the optimizations we’ve made for running in public clouds. Listen as IST Research discusses their success with Elastic and see an end-to-end demo showing how easy it is to get started.
Why you should use Elastic for infrastructure metrics - Elasticsearch
Widely known for full-text search and logging, the Elastic Stack has evolved into a compelling solution for infrastructure metrics use cases. From a fast and efficient time series datastore to integrations for onboarding common service metrics and dedicated UIs for visual exploration, there are many reasons to start using Elastic for your infrastructure metrics today.
Top Tips For AppD Adoption Success - AppD Global Tour Stockholm - AppDynamics
Want to become an AppDynamics expert? In this essential session, you’ll learn best practices for configuring Business Transactions, role-based access control, and other top tips for APM success.
What's next for AppD and Cisco? - AppD Global Tour - AppDynamics
Cisco and AppDynamics are working towards the self-driving enterprise, and application and business performance intelligence is central to this vision. Take a look at the Cisco integrations we have been working on and get the lowdown on future AppDynamics product developments.
CodeLive: Build Lightning Web Components faster with Local Development - Salesforce Developers
GitHub repo: https://github.com/satyasekharcvb/lwc-local-dev.git
With the release of a new beta version of Local Development, you can now build Lightning web components faster than ever before! You can now render changes, iterate rapidly, troubleshoot errors, and even connect with data from your org by spinning up a local development server on your machine.
In this session, we build Lightning web components in real time. The exciting new capabilities we showcase will enable you to be an even more productive developer.
In this CodeLive session we:
- Spin up a local development server from the CLI to rapidly edit and view components
- Observe how a rich error handling experience simplifies testing and debugging
- Learn how to proxy data from an org for more context and fine-tuned development
Customers increasingly rely on your applications in their daily lives. From general information to shopping to travel, they view applications as critical, and they demand that those applications work, correctly and quickly, the first time - every time. Applications have become more and more complex, often cloud-based, and are updated repeatedly. How will you make sure your applications are ready for today’s demanding users? This session covers a typical customer experience, showing how critical modern applications have become, and highlights the growing importance of monitoring dynamic environments in our constantly changing world.
The document discusses the challenges of implementing DevOps without proper measurement. It argues that digital teams often lack aligned measures of success and that application changes are difficult to assess across dynamic architectures without instrumentation. The document then presents New Relic's software measurement framework for aligning DevOps teams around key performance indicators for business success, customer experience, and application/infrastructure performance. It provides examples of how to measure service quality, customer experience, engineering velocity, and business value.
The document discusses the need to rethink cloud migration strategies. It summarizes a presentation by New Relic on moving applications to the cloud. The presentation introduces the concept of "re:thinking" as the 7th "R" in cloud migration strategies. It argues that to successfully migrate applications, teams need to rethink monitoring, tagging policies, auto-scaling, cost management, legacy system integration, and continuous refactoring and rearchitecting of applications for the cloud.
Kubernetes in the Wild: Best Practices for Monitoring (New Relic)
The document discusses the need to rethink cloud migration strategies. It summarizes a presentation by New Relic on moving applications to the cloud. The presentation introduces the concept of "re:thinking" as the 7th "R" in cloud migration strategies. It argues that to successfully migrate applications, teams need to rethink monitoring, tagging policies, auto-scaling, cost management, legacy system integration, and continuous refactoring through a lens of cloud-native practices.
You’re ready to migrate, but how will you prove success? (New Relic)
The document discusses acceptance testing for migrating applications to the cloud. It recommends instrumenting applications both on-premises and in the cloud to establish performance baselines for each environment. A comparison dashboard can then prove whether the cloud migration was successful by comparing the key performance indicators between the two baselines.
Are you ready to migrate to the cloud? How will you prove success? This presentation covers how to baseline before and after your cloud migration to prove success.
The document discusses how modern applications require modern monitoring and processes to stay performing. It notes that modern applications operate on dynamic cloud infrastructures with constant changes, requiring monitoring of business success, application performance, and customer experience. It emphasizes the importance of managing risk through understanding and mitigating risks rather than removing risks. It also discusses how DevOps is a cultural change involving team-level responsibility and ownership. The presentation aims to explain how instrumentation, infrastructure management, risk management, and DevOps culture can help keep modern applications running effectively.
The document is a presentation from New Relic's Analyst and Investor Day on June 4, 2018. It begins with introductions and a safe harbor statement. The CEO then discusses New Relic's vision of being the catalyst for customers' digital transformations. The presentation outlines New Relic's product strategy and innovation, including its platform approach and focus on cloud, DevOps, and digital customer experience. It discusses New Relic's growth strategy of expanding within existing customers and entering new enterprise accounts. The goal is to achieve $1 billion in annual revenue by fiscal year 2022.
Host for the Most: Cloud Cost Optimization (New Relic)
The document discusses the need for workload aware spend optimization when moving workloads to the cloud. It outlines a methodology for defining, refining, and optimizing cloud initiatives by baselining workloads, establishing organization and migration tracking, implementing feedback loops, and achieving business agility. The methodology aims to optimize both cloud spending and end user experience using New Relic's monitoring capabilities.
Cloud Adoption Best Practices with New Relic (New Relic)
The document discusses best practices for cloud adoption, including instrumenting applications early in the migration process to save time and costs. It outlines New Relic products that can be used at different stages of a cloud migration to establish performance baselines, validate improvements, refactor applications, and optimize customer experience. Monitoring with New Relic and Amazon CloudWatch together provides visibility into both application and infrastructure metrics.
Architecting for scale - dynamic infrastructure and the cloud (Lee Atchison)
The document discusses dynamic infrastructure and how cloud technologies enable scaling and availability. It describes how a dynamic infrastructure allows applications to allocate and consume resources on demand. It provides examples of how Docker containers can scale dynamically and how cloud technologies like EC2 auto scaling support this. Finally, it outlines progressive stages companies go through in adopting cloud technologies from initial experimentation to fully mandating cloud usage.
How to Lower or Justify your Cloud Spend (New Relic)
The document discusses optimizing cloud spending and justifying cloud costs. It introduces New Relic's cloud optimization solutions, including integrating AWS budgets with New Relic billing, using tags to track application environments, dashboards to monitor performance and costs, NRQL to query metrics, data apps to analyze usage, and baseline alerts to detect anomalies. It also discusses right-sizing instances, scaling workloads in/out as needed, and New Relic's cloud adoption solution guide to plan, migrate, and optimize applications on cloud services.
How to Lower or Justify your Cloud Spend (Kevin Downs)
Are you responsible for keeping your cloud spend down? Or are you looking for a way to justify your current spend, maybe even prove you need to expand your cloud budget? This presentation shows you how you can use cloud service metrics and KPIs to optimize your cloud spend.
This document discusses New Relic, Inc., a company that provides application performance monitoring and management products. It notes that the document contains forward-looking statements and actual results may differ. It also states that New Relic assumes no obligation to update any forward-looking statements except as required by law.
The document discusses how modern applications require modern monitoring, infrastructure, and processes to keep them running effectively. It emphasizes that managing risk, instrumenting all aspects of an application, using dynamic cloud infrastructure, and embracing a DevOps culture are necessary to maintain high-performing modern applications. Removing risk entirely is impossible, so risk management through understanding and mitigation is key.
This document discusses site reliability engineering (SRE) practices at New Relic. It describes New Relic's transition from a monolithic architecture to microservices, and the establishment of an SRE team with both embedded and dedicated roles. The SRE team aims to continuously improve the reliability of New Relic's platform. Key aspects of SRE success outlined include reliability as a feature, shared understanding, clear guidelines, and community building.
Monitoring is important not just for production but also for pre-production environments. This allows developers to detect issues early, reduce the number of incidents that occur in production, and standardize response processes. New Relic enables monitoring throughout the development lifecycle by allowing custom metrics and attributes to be collected from development and included in alerts across all environments. Scripting alerts and maintaining them as code helps ensure a consistent monitoring configuration.
Microservices Practitioner Summit Jan '15 - Designing APIs with Customers in ... (Ambassador Labs)
Nic Benders from New Relic on designing APIs as products, and always asking how your consumers will think about your data model.
Full video here: http://www.microservices.com/nic-benders-designing-apis-with-customers-in-mind
The document discusses Site Reliability Engineering (SRE) practices at New Relic. It summarizes that New Relic has transitioned from a monolithic architecture run by siloed teams to over 200 microservices run by many engineering teams with embedded SREs. SREs aim to continuously improve reliability by reducing toil, encouraging best practices, automating operations, and supporting engineering teams. SREs focus on stability, reliability engineering, and reducing operations toil. The document provides a template other companies can use to establish SRE roles and focus areas, with further details in the SRE book.
Similar to New Relic After Lift and Shift - FutureStack 2019 (20)
Over 20 years in industry
12 years as a Solutions Architect
Infrastructure & Application Monitoring
Had apps in both MacOS and iOS stores
Specializing in:
SaaS
Cloud Adoption
The cloud is just the beginning
Lift & Shift has benefits
There are a great number of reasons to modernize
Elasticity - Resiliency - deployment - management - flexibility
Options are Rehost, Replatform, and Refactor - and Risk
Dependencies - Automation - customers - Errors - Compatibility - Changes - Outcomes
Continuous Application Modernization - 5 steps
5 steps: Goals - Understanding - Approach - Observability - Repeat
Josh: Final thought? – democratization of data and observability
Javier: Final thought?
Pun intended
Cost
- data center
- faster procurement/provisioning process
- flexible payment models (Capex to Opex)
Infrastructure
- better-purposed instances
- improved customer experience
- better operations management (OS patching, upgrades, etc.)
- disaster recovery
Development
- distributed development team
- new services available
Maybe: On-prem to cloud website migration story
We needed to move some apps to the cloud -- check
We’re going all in with the cloud --- check
We’re out of our data center(s) --- check
Everything is as it should be --- ?
Intro yourself
What is your philosophy on moving to the cloud - lift & shift, replatform, refactor, etc?
Josh: Speed of execution - velocity. Fast as possible
Intro yourself
What is your philosophy on moving to the cloud - lift & shift, replatform, refactor, etc?
Javier: Going to modern. Hybrid - global - fast moving
During your migration journey…
You wanted to make more improvements
You’ve learned a thing or two (or a hundred!)
You want to increase cloud cost savings
Basically, you want to modernize in the cloud
...and you know you need to!
Your Goal:
From the AWS Summit 2017 Chicago
Lift and Shift to EC2, MS SQL & DynamoDB
Modernized to an S3 Data Lake and AWS Lambda
Take full advantage of…
The question is not whether to modernize at all, but what and how to modernize.
Should you move your core applications to the cloud?
Which business areas or applications will deliver the largest impact or highest value for the business when modernized?
Are there applications that should not be modernized because they have reached the end of their useful life?
What are best practices for breaking monoliths apart?
To answer these “what” and “how” questions about your legacy applications, you need two fundamental things:
An understanding of your modernization options and
Deep insight into your applications
Failure to understand application dependencies, both externally and internally, and/or overly aggressive and complex project goals
Lack of cost savings due to automation not being implemented or improved
Negative impact on the customer experience due to poor performance or availability caused by unexpected errors
Project time exceeds original plans because of unexpected errors and/or dependencies
Failure to identify application incompatibility with the new platform.
Extensive changes that require extended testing and debugging with dual infrastructures, temporarily driving up costs
Failure to achieve expected business outcomes
In any project, understanding the risks involved is the first step in minimizing them. While the risks for this type of modernization are relatively low, there are certain ones you may encounter when you replatform applications. These can include:
These risks and others can be avoided or minimized by using best practices that are informed and guided by data derived before, during, and after the replatforming project.
5 Steps:
Step 1: Set goals for the modernization.
Step 2: Make sure you thoroughly understand each of your applications.
Step 3: Choose the optimal modernization approach for each application.
Step 4: Monitor and measure changes against your goals before, during, and after modernization.
Step 5: Start over with step one.
Before the break you went into DevOps...
Modernization allows a company to adopt a high-performing software development environment.
Application modernization enables DevOps success and vice versa. As legacy applications and their environments are modernized, DevOps teams can spend more time on developing and delivering new features and less on overcoming friction in the software lifecycle of existing systems.
Josh: How does automation help your DevOps journey with modernization?
→ Small teams - pipelines - alert conditions
Javier: How has modernizing of your infrastructure accelerated your delivery Cycles?
→ from once-a-month before to once-a-week - get to root cause
Define and refine your goals
Josh: During your journey, how did your focus change and how does Cardinal Health’s modernization impact that focus?
→ Customer Focused – essential to care
Javier: One of your goals is uptime. Can you expand on this goal and how modernization helps that goal?
→ 99% to 99.8%
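The jump from 99% to 99.8% sounds incremental, but it cuts the implied monthly downtime budget roughly fivefold. A quick sketch of the arithmetic (assuming a 30-day month; the function name is ours, for illustration only):

```python
# Back-of-the-envelope downtime budget for an availability target.
# Assumes a 30-day month (43,200 minutes); numbers are illustrative.

def downtime_minutes_per_month(availability_pct, minutes_in_month=30 * 24 * 60):
    """Minutes of allowed downtime per month at a given availability %."""
    return minutes_in_month * (1 - availability_pct / 100)

for target in (99.0, 99.8):
    mins = downtime_minutes_per_month(target)
    print(f"{target}% availability -> {mins:.0f} min of downtime per month")
```

At 99% that is about 432 minutes a month; at 99.8% it shrinks to about 86.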
The next step in the strategy is to get a clear understanding of each application and its interdependencies.
To do this, take a baseline measurement that helps you figure out how each application currently performs.
This gives you the foundation for making data-driven decisions as you create your initial roadmap for modernization.
With this understanding, you can identify and prioritize applications to modernize and determine how you want to approach modernizing each application.
Application Infrastructure
How are resources being used?
Which resources are being used by this application?
What chronic resource issues exist (e.g., over- or underutilization)?
Application Quality and Performance
How is the application performing? What is normal, baseline performance?
What errors does the application have currently?
What are the application dependencies?
How is the end user experience?
Where is your application spending most of its time?
Impact on the Business
How does performance and reliability impact revenue?
How much time do users spend on the application?
How does the application impact conversion rates or order value?
Do errors/downtime impact customer service costs?
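The questions above all reduce to numbers you can baseline. A minimal sketch of what that looks like, assuming raw request samples (duration plus an error flag); the sample data here is invented for illustration:

```python
# Minimal baseline sketch: derive the kinds of KPIs the questions above
# ask about (typical latency, tail latency, error rate) from raw
# request samples. The sample data is invented for illustration.
import statistics

samples = [
    {"duration_ms": 120, "error": False},
    {"duration_ms": 95,  "error": False},
    {"duration_ms": 480, "error": True},
    {"duration_ms": 130, "error": False},
    {"duration_ms": 110, "error": False},
]

durations = sorted(s["duration_ms"] for s in samples)

baseline = {
    "p50_ms": statistics.median(durations),
    # nearest-rank p95; crude for a sample this small, fine at scale
    "p95_ms": durations[min(len(durations) - 1, int(0.95 * len(durations)))],
    "error_rate": sum(s["error"] for s in samples) / len(samples),
}
print(baseline)  # one snapshot to compare against after modernization
```

A monitoring product computes these for you continuously; the point is that the baseline is just a small set of agreed-upon numbers captured before you change anything.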
Now it’s time to use all the data you’ve gathered to make an informed decision about how to begin modernizing or further modernize each application.
Although there are six industry-recognized approaches that you can consider, only the last three—rehost, replatform, and refactor—involve modernizing the application.
We’ve already covered rehosting.
Keep in mind that with an iterative modernization strategy, you can start with one approach, reap some initial benefits, and then continue to modernize through other approaches to obtain more benefits.
For example, you could take a rehosted application and replatform it to swap out the current database for a cloud-based database service.
Or, with a different, higher-priority application, you could decide to go straight to refactoring it to take advantage of additional cloud technologies.
Improve scalability to accommodate business growth
Improve reliability and performance for a better customer experience
Reduce costs for software licensing and resource usage
Reduce the total security surface area of your application
Reduce the management efforts and associated time and costs
Improve ability to make informed decisions
Understand how apps and services in your architecture connect and talk to each other using Service Maps.
Drive new revenue streams and/or optimize existing ones
Create improvements that directly impact future revenue capabilities
Deliver a better customer experience
Enable faster time-to-market with new features
Support a changing/new business model
Comply with changing/new regulations
What parts of the codebase change the most or have the most issues filed against them? These are potentially good candidates for making them a component (i.e. a microservice running in a container or AWS Lambda)
Mitigate performance problems?
Use serverless technologies?
Use self-healing infrastructure and services rather than manage the infrastructure?
Which parts of the codebase have issues regularly reopened (which indicates rework)?
Stop self-managing infrastructure if it does not give you a competitive advantage?
Which parts of the application perform well? Which parts are complex and prone to errors? Which parts take the longest to run?
How well are deployments going? How fast are things provisioned?
What are the key performance indicators (KPIs) that help you measure business impact?
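One cheap way to start answering "what changes the most?" is commit churn. A hypothetical sketch (the function names are ours) that ranks files by how many commits touched them; high-churn files are candidates for closer inspection and possibly extraction into separate components:

```python
# Hypothetical sketch: rank files by commit churn as a first answer to
# "what parts of the codebase change the most?".
import subprocess
from collections import Counter

def parse_churn(git_log_output):
    """Count how many commits touched each file in `git log --name-only` output."""
    return Counter(line for line in git_log_output.splitlines() if line.strip())

def churn_by_file(repo_path):
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_churn(out)

# Usage, against a real repository:
# for path, commits in churn_by_file("/path/to/app").most_common(10):
#     print(f"{commits:4d}  {path}")
```

Churn alone is not proof that a file belongs in a microservice, but combined with issue and error data it narrows the search quickly.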
Going back to its traditional meaning, the first type of refactoring is all about improving an application’s code.
The idea is to identify portions of the application where it makes the most sense to re-architect the code for quality, maintainability, performance, and predictability.
It’s an opportunity to fix existing issues and create less complex and more streamlined code.
After you’ve decided on the parts of the codebase to work on, the next step is to consider a newer deployment model.
Choose the deployment model that makes the most sense for your organization based on your IT philosophy and future direction.
Distributed tracing lets you see the path that a request takes as it travels through a distributed system.
Use it to discover the latency of components along a path or understand which component is creating a bottleneck.
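Conceptually, finding a bottleneck in a trace is just comparing span durations. A toy sketch with invented service names and timings (a real tracer would also subtract child time to get each span's self-time):

```python
# Toy model of a trace: spans with start/end timestamps in ms. The
# bottleneck is the child span where the request spends the most time.
# Service names and timings are invented.

spans = [
    {"service": "web",      "start": 0,  "end": 620},  # root/entry span
    {"service": "auth",     "start": 10, "end": 40},
    {"service": "checkout", "start": 45, "end": 600},
    {"service": "payments", "start": 60, "end": 560},
]

def bottleneck(trace):
    """Longest child span by duration (skips the root entry span)."""
    return max(trace[1:], key=lambda s: s["end"] - s["start"])

slowest = bottleneck(spans)
print(slowest["service"], slowest["end"] - slowest["start"], "ms")  # checkout 555 ms
```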
It should now be clear that you need to fully understand how your applications perform before you can decide how and whether to modernize them.
The baseline picture of application performance gives you data not only to inform your decisions but to serve as a comparison during and after your modernization effort.
It also helps you identify any issues that you need to address before you begin the modernization.
After the initial modernization iteration is complete, you can demonstrate success by comparing your previous baseline against current performance, customer experience, and business outcome data.
Ideally, you’ll see improvements and identify areas where further modernization and optimization can help.
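The before/after comparison itself is simple: percent change per KPI, with the direction of "better" depending on the metric. A sketch with invented numbers:

```python
# Sketch of the before/after comparison: percent change per KPI with a
# direction-aware verdict. All numbers are invented for illustration.

before = {"p95_ms": 480, "error_rate": 0.020, "conversion": 0.031}
after  = {"p95_ms": 310, "error_rate": 0.008, "conversion": 0.034}

LOWER_IS_BETTER = {"p95_ms", "error_rate"}

for kpi in before:
    delta_pct = (after[kpi] - before[kpi]) / before[kpi] * 100
    improved = (delta_pct < 0) == (kpi in LOWER_IS_BETTER)
    print(f"{kpi}: {delta_pct:+.1f}% ({'improved' if improved else 'regressed'})")
```

Tracking the same small KPI set before, during, and after each iteration is what turns "we modernized" into a demonstrable result.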
Josh: Cardinal Health experienced an 11th hour hiccup, can you tell us about it and how New Relic was able to support you?
→ re: pivot to GCP – multi-cloud – co-locate (Dublin)
Javier: How does New Relic keep everyone at Fleet Complete aware of what’s going on?
→ (Single pane of glass, bottlenecks, AWS Cost Explorer and NR)
Forecasted to come in significantly over their cloud budget
Trainline took two weeks to analyze their systems and applications to identify key areas that could be addressed quickly
Ended up coming in 1% under budget
This last step is neither an actual step nor the last activity of the modernization process; rather, it’s a reminder that modernization is continuous, and that even as you gradually modernize your applications, there will always be new technologies and capabilities to incorporate and support.
However, with each cycle of modernization, your organization is making significant strides in enabling and fostering digital transformation and the culture, process, and technology changes that must happen as part of it.