This document summarizes the IBM WebSphere Application Server V8.5 features for classloader memory leak prevention, detection, and remediation. Customers discovered classloader and ThreadLocal memory leaks both in WebSphere Application Server and in their own applications. The new features in V8.5 prevent common leak patterns, detect application-triggered leaks, and automatically fix leaks by leveraging JDK APIs. The feature is configured through JVM custom properties, and administrators can view leak detection messages and run operations to find and fix leaks through dynamic MBeans.
Big Data means big hardware, and the less of it we can use to do the job properly, the better the bottom line. Apache Kafka makes up the core of our data pipelines at many organizations, including LinkedIn, and we are on a perpetual quest to squeeze as much as we can out of our systems, from Zookeeper, to the brokers, to the various client applications. This means we need to know how well the system is running, and only then can we start turning the knobs to optimize it. In this talk, we will explore how best to monitor Kafka and its clients to ensure they are working well. Then we will dive into how to get the best performance from Kafka, including how to pick hardware and the effect of a variety of configurations in both the broker and clients. We’ll also talk about setting up Kafka for no data loss.
(CMP404) Cloud Rendering at Walt Disney Animation Studios - Amazon Web Services
"Each year, the technical complexity of making the next great Walt Disney Animation Studios film increases. Animation and Visual FX studios continue to push the bounds of what is possible in computer graphics. This complexity drives rapid technological growth in both computational resources and storage to the point that it exceeds what we can physically provide with our on-premise compute cluster. As a result, we have started to adopt a hybrid approach with the cloud.
This session addresses the hurdles that animation and VFX studios face and focuses on automation of 'disposable' components (specifically infrastructure, licensing, fleet management, data and dependency management in a large-scale batch workload). We apply these general cloud techniques and utilities to an animation/VFX workload and push the limits with a very large scale cloud renderfarm deployment.
The team from Walt Disney Animation Studios walks through how they use cloud technologies to maximize render capacity. Learn how to leverage high-performance storage (like Amazon EFS), Amazon EC2 networking and the latest EC2 Spot features to provide a fully functional renderfarm at production-quality scale."
Spark + Parquet In Depth: Spark Summit East Talk by Emily Curtin and Robbie S... - Spark Summit
What if you could get the simplicity, convenience, interoperability, and storage niceties of an old-fashioned CSV with the speed of a NoSQL database and the storage requirements of a gzipped file? Enter Parquet.
At The Weather Company, Parquet files are a quietly awesome and deeply integral part of our Spark-driven analytics workflow. Using Spark + Parquet, we’ve built a blazing fast, storage-efficient, query-efficient data lake and a suite of tools to accompany it.
We will give a technical overview of how Parquet works and how recent improvements from Tungsten enable SparkSQL to take advantage of this design to provide fast queries by overcoming two major bottlenecks of distributed analytics: communication costs (IO bound) and data decoding (CPU bound).
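As a rough companion to this abstract (a sketch under assumed file paths and column names, not material from the talk), the basic Spark + Parquet round trip looks roughly like this with the Spark Java API:

    // Minimal sketch (illustrative only): CSV in, Parquet out, then a columnar read back.
    // Input path, output path, and column names are placeholders.
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;

    public class ParquetExample {
        public static void main(String[] args) {
            SparkSession spark = SparkSession.builder()
                    .appName("parquet-example")
                    .master("local[*]")          // local run for illustration only
                    .getOrCreate();

            // Read a CSV file with a header, then persist it as columnar Parquet.
            Dataset<Row> csv = spark.read()
                    .option("header", "true")
                    .option("inferSchema", "true")
                    .csv("/tmp/weather.csv");    // placeholder input path
            csv.write().mode("overwrite").parquet("/tmp/weather.parquet");

            // Read the Parquet file back; only the referenced columns are decoded.
            Dataset<Row> parquet = spark.read().parquet("/tmp/weather.parquet");
            parquet.select("station", "temperature")   // placeholder column names
                   .filter("temperature > 30")
                   .show();

            spark.stop();
        }
    }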
Amazon S3 Best Practice and Tuning for Hadoop/Spark in the Cloud - Noritaka Sekiyama
Amazon S3 Best Practice and Tuning for Hadoop/Spark in the Cloud (Hadoop / Spark Conference Japan 2019)
# English version #
http://hadoop.apache.jp/hcj2019-program/
These are course materials for the NHN NEXT game server programming class. The minimum required theory is organized around questions (students work out the answers individually and then receive feedback), and each topic includes a matching hands-on (implementation) assignment.
Note that server architecture is covered in a separate course, so it is not included in this class.
Replicate Elasticsearch Data with Cross-Cluster Replication (CCR) - Elasticsearch
Launched in Elasticsearch version 6.5, cross-cluster replication (CCR) allows you to replicate an index from one Elasticsearch cluster to another. CCR is perfect for a number of use cases including cross data center replication, data locality (multiple copies of data closer to users), and creating a dedicated, centralized analytics cluster populated by multiple source clusters.
Kafka Streams vs. KSQL for Stream Processing on top of Apache Kafka - Kai Wähner
Spoilt for Choice – Kafka Streams vs. KSQL for Stream Processing on top of Apache Kafka:
Apache Kafka is a de facto standard streaming data processing platform and is widely deployed as an event streaming platform. Part of Kafka is its stream processing API, "Kafka Streams". In addition, the Kafka ecosystem now offers KSQL, a declarative, SQL-like stream processing language that lets you define powerful stream-processing applications easily. What once took some moderately sophisticated Java code can now be done at the command line with a familiar and eminently approachable syntax.
This session discusses and demos the pros and cons of Kafka Streams and KSQL to understand when to use which stream processing alternative for continuous stream processing natively on Apache Kafka infrastructures. The end of the session compares the trade-offs of Kafka Streams and KSQL to separate stream processing frameworks such as Apache Flink or Spark Streaming.
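For a sense of the contrast the session describes, here is a minimal Kafka Streams sketch (illustrative only; the topic names and broker address are placeholders) that routes error records to a separate topic; the KSQL counterpart would be roughly a single CREATE STREAM ... AS SELECT statement.

    // Minimal Kafka Streams sketch (not from the session): copy records containing
    // "ERROR" to a separate topic. Topic names and the broker address are placeholders.
    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    public class ErrorRouter {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "error-router");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> events = builder.stream("events");
            events.filter((key, value) -> value != null && value.contains("ERROR"))
                  .to("error-events");

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }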
Scaling ScyllaDB Storage Engine with State-of-the-Art Compaction - ScyllaDB
Log Structured Merge (LSM) tree storage engines are known for very fast writes. ScyllaDB uses this LSM tree structure to write immutable Sorted Strings Tables (SSTables) to disk. These fast writes come with a tradeoff in terms of read and space amplification. While compaction processes can help mitigate this, the RUM conjecture states that only two amplification factors can be optimized at the expense of a third. Learn how ScyllaDB leverages the RUM conjecture and controller theory to deliver state-of-the-art LSM-tree compaction for its users.
Spark Autotuning: Spark Summit East talk by Lawrence Spracklen - Spark Summit
While the performance delivered by Spark has enabled data scientists to undertake sophisticated analyses on big and complex data in actionable timeframes, too often the process of manually configuring the underlying Spark jobs (including the number and size of the executors) is a significant and time-consuming undertaking. Not only does this configuration process typically rely heavily on repeated trial and error, it also requires that data scientists have a low-level understanding of Spark and detailed cluster sizing information. At Alpine Data we have been working to eliminate this requirement and to develop algorithms that can automatically tune Spark jobs with minimal user involvement.
In this presentation, we discuss the algorithms we have developed and illustrate how they leverage information about the size of the data being analyzed, the analytical operations being used in the flow, the cluster size, configuration and real-time utilization, to automatically determine the optimal Spark job configuration for peak performance.
What is the State of my Kafka Streams Application? Unleashing Metrics. | Neil... - HostedbyConfluent
"Just as the Apache Kafka Brokers provide JMX metrics to monitor your cluster's health, Kafka Streams provides a rich set of metrics for monitoring your application's health and performance. The metrics to observe for a given use-case of Kafka Streams will vary significantly from application to application. Learning how to build and customize monitoring of those applications will help you maintain a healthy Kafka Streams ecosystem.
Takeaways
* An analysis and overview of the provided metrics, including the new end-to-end metrics of Kafka Streams 2.7.
* See how to extract metrics from your application using existing JMX tooling.
* Walkthrough how to build a dashboard for observing those metrics.
* Explore options of how to add additional JMX resources and Kafka Stream metrics to your application.
* How to verify you built your dashboard correctly by creating a data control set to validate your dashboard.
* Go beyond what you can collect from the Kafka Stream metrics."
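As a small, assumed illustration of the JMX point above (not material from the talk), the metrics that Kafka Streams registers under the kafka.streams JMX domain can also be read programmatically from a running KafkaStreams instance:

    // Sketch: dump Kafka Streams metrics from inside the application.
    // The same values are exposed over JMX under the "kafka.streams" domain.
    import java.util.Map;
    import org.apache.kafka.common.Metric;
    import org.apache.kafka.common.MetricName;
    import org.apache.kafka.streams.KafkaStreams;

    public final class MetricsDump {
        // 'streams' is assumed to be an already-started KafkaStreams instance.
        static void dump(KafkaStreams streams) {
            for (Map.Entry<MetricName, ? extends Metric> entry : streams.metrics().entrySet()) {
                MetricName name = entry.getKey();
                System.out.printf("%s / %s = %s%n",
                        name.group(), name.name(), entry.getValue().metricValue());
            }
        }
    }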
Deep Dive into Spark SQL with Advanced Performance Tuning with Xiao Li & Wenc... - Databricks
Spark SQL is a highly scalable and efficient relational processing engine with easy-to-use APIs and mid-query fault tolerance. It is a core module of Apache Spark. Spark SQL can process, integrate and analyze data from diverse data sources (e.g., Hive, Cassandra, Kafka and Oracle) and file formats (e.g., Parquet, ORC, CSV, and JSON). This talk will dive into the technical details of Spark SQL spanning the entire lifecycle of a query execution. The audience will get a deeper understanding of Spark SQL and understand how to tune Spark SQL performance.
In this hands-on tutorial we will present Koalas, a new open source project. Koalas is an open source Python package that implements the pandas API on top of Apache Spark, to make the pandas API scalable to big data. Using Koalas, data scientists can make the transition from a single machine to a distributed environment without needing to learn a new framework.
MongoDB World 2015 - A Technical Introduction to WiredTiger - WiredTiger
MongoDB 3.0 introduces a new pluggable storage engine API and a new storage engine called WiredTiger. The engineering team behind WiredTiger has a long and distinguished career, having architected and built Berkeley DB, now the world's most widely used embedded database. In this talk we will describe our original design goals for WiredTiger, including considerations we made for heavily threaded hardware, large on-chip caches, and SSD storage. We'll also look at some of the latch-free and non-blocking algorithms we've implemented, as well as other techniques that improve scaling, overall throughput and latency. Finally, we'll take a look at some of the features we hope to incorporate into WiredTiger and MongoDB in the future.
At an Apache Pulsar Meetup, Jia Zhai from StreamNative presented KoP (Kafka-on-Pulsar), which brings native Kafka protocol support to the Pulsar broker. He gave a demo showing how Kafka clients and Pulsar clients can work seamlessly on the same data, and how Kafka Connectors can work on a Pulsar cluster.
KafkaConsumer - Decoupling Consumption and Processing for Better Resource Uti... - confluent
When working with KafkaConsumer, we usually employ a single thread for both reading and processing messages. KafkaConsumer is not thread-safe, so using a single thread fits in well. The downside of this approach is that you are limited to a single thread for processing messages.
By decoupling consumption and processing, we can achieve processing parallelization with single consumer and get the most out of multi-core CPU architectures available today. While this can be very useful in certain use-case scenarios, it's not trivial to implement.
How do we use multiple threads with KafkaConsumer, which is not thread-safe? How do we react to consumer group rebalancing? Can we get the desired processing and ordering guarantees? In this talk we'll try to answer these questions and explore the challenges we face on our path.
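A minimal sketch of the decoupling idea follows (simplified, with placeholder topic and broker values; the offset-management and rebalancing concerns the talk covers are deliberately omitted). Only one thread touches the non-thread-safe consumer, while processing runs on a worker pool:

    // Sketch of decoupled consumption and processing (simplified).
    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class DecoupledConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092");
            props.put("group.id", "decoupled-demo");
            props.put("enable.auto.commit", "true");   // simplification; see the talk for manual commits
            props.put("key.deserializer", StringDeserializer.class.getName());
            props.put("value.deserializer", StringDeserializer.class.getName());

            ExecutorService workers = Executors.newFixedThreadPool(8);

            // Only this thread ever touches the (non-thread-safe) KafkaConsumer.
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("orders"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        // Processing happens on the worker pool, not on the polling thread.
                        workers.submit(() -> process(record));
                    }
                }
            }
        }

        private static void process(ConsumerRecord<String, String> record) {
            System.out.println("processed " + record.key() + " -> " + record.value());
        }
    }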
Here is a presentation about WebSphere Application Server.
Take a closer look at the area through these links: Application Infrastructure (http://www-03.ibm.com/software/products/sv/category/SW600) and Connectivity & Integration (http://www-03.ibm.com/software/products/sv/category/SW666).
This presentation was given by Seema Kumar, WebSphere Product Management, and Surya V Duggirala, WebSphere Performance Architect, at IBM Impact 2012 in Mumbai on 1 June. It talks about Innovative Applications and Interactive Experiences.
AD109 - Using the IBM Sametime Proxy SDK: WebSphere Portal, IBM Connections -... - Carl Tyler
From simple lightweight usage to full real-world integration and development, the Sametime Proxy offers an exceptional range of social capabilities. This session will showcase our integration with Portal and Connections, and then move on to illustrate how the openness of the programming model makes it suitable for any environment, by extending SDK objects, managing events and overriding Sametime Proxy widget prototypes. This session will show you real-world examples of how customers transformed regular web and mobile applications into those with a rich social experience using the Sametime Proxy.
Are you tired of java.lang.OutOfMemoryError: PermGen space? Then this talk is for you! We'll begin with a crash course in the Java memory model in order to understand what the error message means. Then we'll look at different causes of the error and how to avoid them. We may glance at a few interesting mistakes from the Open Source world. Last but not least you'll learn how you can get rid of java.lang.OutOfMemoryError: PermGen space once and for all.
News about updates to the IBM Messaging Family, including the most recent MQ fixpack, V8.0.0.3.
Presented at the WebSphere Integration User Group UK, held on 14 July 2015 at IBM Hursley.
WebSphere Application Server Liberty Profile and Docker - David Currie
Presentation from IBM InterConnect 2015 covering a brief introduction to Docker, the relationship between IBM and Docker, and then using WebSphere Application Server Liberty Profile under Docker.
WebSphere Technical University: Top WebSphere Problem Determination Features - Chris Bailey
Problem determination is an important focus area in the IBM WebSphere Application Server. Serviceability improvements have been added that have greatly improved the ability to find root causes of problems in both the full IBM WebSphere Application Server profile and the newer Liberty profile. The session focuses on how to effectively use serviceability improvements added to the application server since V8.0. This includes high performance extensible logging, cross-component trace, IBM Support Assistant data collector, timed operations, memory leak detection/prevention, and IBM Support Assistant 5.
Presented at the WebSphere Technical University 2014, Dusseldorf
WebSphere MQ includes a choice of APIs and supports the Java™ Message Service (JMS) API. WebSphere MQ is the market-leading messaging integration middleware product. Originally introduced in 1993 (under the IBM MQSeries® name), WebSphere MQ provides a reliable, scalable, secure, and high-performance transport mechanism to handle business connectivity requirements.
The objective of this article is to share internal architecture details of the Java Virtual Machine. It focuses on:
- How many components does the JVM have?
- How are these components integrated?
- How does processing take place at run time for classes?
API First or Events First: Is it a Binary Choice? Rohit Kelapure
When do you use API-first or events-first architecture? Is this a binary choice? This is a false dichotomy! A mental model is needed to frame the architecture, packaging, and programming choices for modern applications.
Varying degrees of combination of events and APIs can be used to design a system. Event notifications-based systems require an API callback to the source. CQRS and event-sourcing patterns, on the other hand, are on the complex end of the event-driven spectrum. APIs also have a maturity model, evolving with the adoption of reactive paradigms.
In this session, we’ll look at heuristics such as cost, latency, security, and external integrations that will influence implementation. Architects will learn actionable fitness functions to strike a balance between APIs and events to build sustainable architectures.
Why does application modernization in the form of decomposing monoliths result in so many microservices? Why have microservices become the default deployment model for applications? In this talk we will add some sanity to the process of constructing microservices and provide guidelines and design heuristics for structuring them. We will look at life after running microservices architectures in production and learn from the mistakes committed over the past five years. We will analyze real-life systems against criteria for consolidating microservices into monoliths or moduliths based on technical and business heuristics, as illustrated in [4]. The techniques - a combination of mapping microservices to core technical attributes [2], reduced by affinity mapping and business domain context distillation [3] - have emerged from working with a number of customers where the value of microservices has not been realized despite leveraging Domain-Driven Design.
References:
1 https://hackmd.io/10j-7DfqSIu1C8GQjHa1Bw?view
2 https://content.pivotal.io/blog/should-that-be-a-microservice-keep-these-six-factors-in-mind
3 https://bit.ly/2VFwDr1
4 https://twitter.com/RKela/status/1227188151887843329/photo/1
How do we add some sanity to the process of constructing microservices and provide guidelines and design heuristics for restructuring them? In this talk we will look at life after running microservices architectures in production and learn from the mistakes committed over the past five years. We will analyze real-life systems against criteria for consolidating microservices into monoliths or moduliths based on technical and business heuristics, as illustrated in [4]. The techniques - a combination of mapping microservices to core technical attributes [2], reduced by affinity mapping and business domain context distillation [3] - have emerged from working with a number of customers where the value of microservices has not been realized despite leveraging Domain-Driven Design.
1. Essay on this topic : https://hackmd.io/10j-7DfqSIu1C8GQjHa1Bw?view
2. https://content.pivotal.io/blog/should-that-be-a-microservice-keep-these-six-factors-in-mind
3. https://medium.com/nick-tune-tech-strategy-blog/core-domain-patterns-941f89446af5
4. https://twitter.com/RKela/status/1227188151887843329/photo/1
Travelers 360-degree health assessment of microservices on the Pivotal platform - Rohit Kelapure
Is your system healthy? Are SLOs being met? What are the top performance constraints? What are the high-priority implementation concerns? Is the architecture a right fit? Are the teams leveraging the capabilities of the platform? What are the pain points with platform services? It can be challenging to find root cause among problem symptoms in distributed systems. Just as in real life, it's important for microservices to undergo regular health checks.
In this talk, we'll provide a systems-based approach to execute an app health check along 10 different dimensions: monitoring and metrics, failure mode analysis, technical debt, emergency response, performance optimization, change management, microservices rationalization, platform as a product, balanced team, and path to production. We'll explain how to address issues uncovered during a health check and provide recommendations on how to build a sustainable Day 2 app-ops reliability engineering practice.
Migrate Heroku & OpenShift Applications to IBM BlueMix - Rohit Kelapure
This slide deck describes some of the architectural principles behind the Heroku, OpenShift, Cloud Foundry and BlueMix enterprise PaaS offerings. The commonalities and differences in designing and porting apps across these platforms to Cloud Foundry/BlueMix are explored.
Liberty Buildpack: Designed for Extension - Integrating your services in Blue... - Rohit Kelapure
The Liberty Buildpack aims to remove the hassle of running Java applications on Cloud Foundry, whether it is the simplified setup, auto-configuration of Liberty and Java EE references to cloud resources, reduced droplet size through selective provisioning of the runtime, or the zero-touch configuration and usage of services. There are times, however, when an application needs a feature that the buildpack does not yet provide. This talk will start by showing how to use and configure the Java buildpack and finish by showing how to extend the buildpack to ensure that IBM BlueMix Cloud Foundry is the best place to run your application. To build services and integrate them with BlueMix, you must implement the Service Broker API of Cloud Foundry for your services. This talk will explain how to write plugins for the Liberty Buildpack that auto-wire services your organization has developed and integrated into CF, making it easier for your apps to use those services in Cloud Foundry.
A Deep Dive into the Liberty Buildpack on IBM BlueMix Rohit Kelapure
This talk goes into the details and mechanics of how the Liberty buildpack deploys an application into the IBM BlueMix Cloud Foundry. It also explores how the Cloud Foundry runtime drives the Liberty buildpack code and what the Liberty buildpack code in Cloud Foundry does to run an application in the cloud environment. This talk touches on the restrictions that Cloud Foundry and the Liberty runtime imposes on applications running in Cloud Foundry. Developers attending this talk get deep insight into the why, what, how, and when of the Liberty buildpack ruby code, enabling them to write applications faster and optimized for the Liberty runtime in IBM BlueMix.
The ICAP Integrated Development Environment (IDE) provides a number of standard development tools to ease the design of modern applications.
- Mobile (Worklight): Includes IBM's industry-leading mobile development platform.
- Java (WebSphere Liberty Profile): Rapidly build next-generation, engaging applications for the WebSphere Application Server Liberty Profile.
- JavaScript (Node.js): Easily build applications with the most popular JavaScript runtime for event-driven server-side development.
- Cloud Explorer: Quickly discover shared services to enhance applications. Develop custom services to share with others.
This session compares the Spring and Java EE stacks in terms of Web frameworks. It re-examines the motivations behind the Spring framework and explores the emergence of the Java EE programming model to meet the challenges posed. The presentation provides insight into when Spring and/or Java EE is appropriate for building Web applications and whether they can coexist.
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf - Paige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring & observability to the purview of ops, infra, and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and I will share these foundational concepts to build on.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
zkStudyClub - Reef: Fast Succinct Non-Interactive Zero-Knowledge Regex Proofs - Alex Pruden
This paper presents Reef, a system for generating publicly verifiable succinct non-interactive zero-knowledge proofs that a committed document matches or does not match a regular expression. We describe applications such as proving the strength of passwords, the provenance of email despite redactions, the validity of oblivious DNS queries, and the existence of mutations in DNA. Reef supports the Perl Compatible Regular Expression syntax, including wildcards, alternation, ranges, capture groups, Kleene star, negations, and lookarounds. Reef introduces a new type of automata, Skipping Alternating Finite Automata (SAFA), that skips irrelevant parts of a document when producing proofs without undermining soundness, and instantiates SAFA with a lookup argument. Our experimental evaluation confirms that Reef can generate proofs for documents with 32M characters; the proofs are small and cheap to verify (under a second).
Paper: https://eprint.iacr.org/2023/1886
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
PHP Frameworks: I want to break free (IPC Berlin 2024) - Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating the uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides of the talk given at IEEE International Conference on Software Testing Verification and Validation Workshop, ICSTW 2022.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
The Metaverse and AI: how can decision-makers harness the Metaverse for their... - Jen Stirrup
The Metaverse is popularized in science fiction, and now it is becoming closer to being a part of our daily lives through the use of social media and shopping companies. How can businesses survive in a world where Artificial Intelligence is becoming the present as well as the future of technology, and how does the Metaverse fit into business strategy when futurist ideas are developing into reality at accelerated rates? How do we do this when our data isn't up to scratch? How can we move towards success with our data so we are set up for the Metaverse when it arrives?
How can you help your company evolve, adapt, and succeed using Artificial Intelligence and the Metaverse to stay ahead of the competition? What are the potential issues, complications, and benefits that these technologies could bring to us and our organizations? In this session, Jen Stirrup will explain how to start thinking about these technologies as an organisation.
A tale of scale & speed: How the US Navy is enabling software delivery from l... - sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
This presentation describes the support for memory leak detection, prevention, and remediation included in IBM WebSphere Application Server V8.5. WebSphere Application Server Version 8.5 provides top-down, pattern-based memory leak detection, prevention, and action by watching for suspect patterns in application code at run time. WebSphere Application Server now has some means of protection against memory leaks when stopping or redeploying applications. If leak detection, prevention, and action are enabled, WebSphere Application Server monitors application and module activity and performs diagnostic actions to detect and fix leaks when an application or an individual module stops. This feature helps increase application uptime with frequent application redeployments, without cycling the server.
What will memory leak detection, prevention, and action do for you?
WebSphere Application Server now has some means of protection against memory leaks when stopping or redeploying applications. WebSphere Application Server monitors application and module activity and performs diagnostic actions when an application or an individual module is stopped.
Memory leak detection, prevention, and action is used in the following scenarios.
Class loader memory leaks: Many memory leaks manifest themselves as class loader leaks. A Java class is uniquely identified by its name and the class loader that loaded it; classes with the same name can be loaded multiple times in a single JVM, each in a different class loader. Each web application gets its own class loader, and this is what WebSphere Application Server uses to isolate applications. An object retains a reference to the class it is an instance of, a class retains a reference to the class loader that loaded it, and the class loader retains a reference to every class it loaded. Retaining a reference to a single object from a web application therefore pins every class loaded by that web application. These references often remain after a web application reload; with each reload more classes are pinned, which eventually leads to an out-of-memory error. Class loader memory leaks are normally caused by application code or by JRE-triggered code.

JRE-triggered leaks: Memory leaks occur when Java Runtime Environment (JRE) code uses the context class loader to load an application singleton. These singletons can be threads or other objects that are loaded by the JRE using the context class loader. If the web application code triggers the initialization of a singleton or a static initializer, the following conditions apply: the context class loader is the web application class loader; a reference is created to the web application class loader; that reference is never garbage collected; and it pins the class loader, and all the classes loaded by it, in memory.

Application-triggered leaks: Categories of application-triggered leaks are as follows:
- Custom ThreadLocal class
- Web application class instance as ThreadLocal value
- Web application class instance indirectly held through a ThreadLocal value
- ThreadLocal pseudo-leak
- ContextClassLoader and threads created by web applications
- ContextClassLoader and threads created by classes loaded by the common class loader
- Static class variables
- JDBC driver registration
- RMI targets

For more information on application-triggered leaks, see http://wasdynacache.blogspot.com/2012/01/websphere-classloader-memory-leak.html , http://www.websphereusergroup.org.uk/wug/files/presentations/31/Ian_Partridge_-_WUG_classloader_leaks.pdf , and http://www.ibm.com/support/docview.wss?uid=swg1PM39870 .
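To make the ThreadLocal categories above concrete, here is a minimal, hypothetical illustration (the class names are invented for this sketch and are not taken from the presentation) of the "web application class instance as ThreadLocal value" pattern: a servlet caches an application object in a static ThreadLocal, the value stays attached to the container's pooled worker thread, and that single reference pins the web application class loader across stop/redeploy cycles.

    // Illustration (hypothetical class names) of the "web application class instance as
    // ThreadLocal value" leak pattern. The container reuses its worker threads across
    // applications, so the ThreadLocal value set here outlives the web application and
    // keeps its class -- and therefore the web application class loader and every class
    // it loaded -- reachable after the application is stopped or reloaded.
    import java.io.IOException;
    import javax.servlet.ServletException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class LeakyServlet extends HttpServlet {

        // ExpensiveParser is a class packaged inside the web application (hypothetical).
        static class ExpensiveParser { /* heavyweight, per-thread state */ }

        // The ThreadLocal value lives on the container worker thread, not on the web app.
        private static final ThreadLocal<ExpensiveParser> PARSER =
                ThreadLocal.withInitial(ExpensiveParser::new);

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            ExpensiveParser parser = PARSER.get();   // value is now pinned to the pooled thread
            resp.getWriter().println("used " + parser);
            // Missing PARSER.remove() in a finally block: the reference stays on the
            // thread after the request and after the application is undeployed.
        }
    }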
Detection: issue warnings when a memory leak is detected, through a combination of standard API calls and some reflection tricks, when a web application is stopped, undeployed, or reloaded. Prevention: on by default and applies only to JRE-triggered leaks; JRE-triggered leaks are prevented by initializing singletons at server startup, when the application server class loader is the context class loader. Action: take proactive action to fix memory leaks; these actions have reasonable defaults and are configured on a case-by-case basis.
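The actions above are performed by WebSphere Application Server itself. As a rough application-side analogue (a defensive sketch, not the WebSphere implementation), a ServletContextListener can clean up one of the leak categories listed earlier, JDBC driver registration, when the web application stops:

    // Defensive, application-side cleanup sketch (NOT the WebSphere implementation):
    // deregister JDBC drivers that this web application registered, one of the
    // application-triggered leak categories listed earlier.
    import java.sql.Driver;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import java.util.Enumeration;
    import javax.servlet.ServletContextEvent;
    import javax.servlet.ServletContextListener;

    public class JdbcDriverCleanupListener implements ServletContextListener {

        @Override
        public void contextInitialized(ServletContextEvent sce) {
            // nothing to do at startup
        }

        @Override
        public void contextDestroyed(ServletContextEvent sce) {
            ClassLoader webappLoader = Thread.currentThread().getContextClassLoader();
            Enumeration<Driver> drivers = DriverManager.getDrivers();
            while (drivers.hasMoreElements()) {
                Driver driver = drivers.nextElement();
                // Only touch drivers whose class was loaded by this application;
                // otherwise we would break other applications in the same JVM.
                if (driver.getClass().getClassLoader() == webappLoader) {
                    try {
                        DriverManager.deregisterDriver(driver);
                    } catch (SQLException e) {
                        sce.getServletContext().log("Failed to deregister " + driver, e);
                    }
                }
            }
        }
    }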
The MemoryLeak service and its MBean are active only in an application server that hosts applications and services requests. The service is not active on a deployment manager, node agent, administrative agent, or other server types such as the WebSphere proxy server. The leak detection option is disabled by default. You can use Java virtual machine (JVM) custom properties to adjust the leak policy values, such as enabling and disabling leak detection, action, and prevention. These custom properties apply only to a stand-alone or managed application server, not to a node agent, administrative agent, job manager, or deployment manager. When the application or the server is shutting down, WebSphere Application Server determines which class loaders have leaked and hold references to the associated loaded classes and objects. If a class loader leak is detected, a heapdump or system dump is taken. All persistent configuration of this service is done through JVM custom properties; there are no administrative console panels. At run time, use the MemoryLeakConfig and MemoryLeakAdmin MBeans for configuration and administration respectively; however, the configuration changes are not persisted until the JVM custom properties are configured.
These JVM custom properties are persisted in the WebSphere Application Server configuration model in the server.xml file. The persisted leak policy configuration appears in the server_home/config/cells/nodes/servers/server.xml file of an unmanaged server.
At run time, the memory leak detection, prevention, and action policy configuration can be changed by using the MemoryLeakConfig MBean. Administration of the memory leak policy is carried out by using the MemoryLeakAdmin MBean. The leak policy affects how the application server responds to a class loader memory leak when an application or the server is stopped. You can adjust the memory leak policy settings by using the wsadmin scripting interface. These changes take effect immediately, but they do not persist to the server configuration and are lost when the server is restarted.
MBean operations are exposed to customers to find and fix leaks at run time via the MemoryLeakAdmin MBean.
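For illustration only, the generic JMX pattern for discovering and invoking such operations is sketched below. The ObjectName pattern and the "findLeaks" operation name are assumptions, not documented WebSphere names, and obtaining an MBeanServerConnection to a WebSphere server (for example through wsadmin or the administrative client) is environment-specific and omitted; in practice you would use the wsadmin scripting interface mentioned above.

    // Generic JMX sketch only. The ObjectName pattern and the "findLeaks" operation
    // name are placeholders, NOT documented WebSphere names; query the MBean's
    // MBeanInfo (as below) for the real operations. How you obtain the
    // MBeanServerConnection is environment-specific and not shown here.
    import java.util.Set;
    import javax.management.MBeanOperationInfo;
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;

    public final class MemoryLeakAdminSketch {

        static void listAndInvoke(MBeanServerConnection mbsc) throws Exception {
            // WebSphere MBeans live in the "WebSphere" JMX domain; the key properties
            // used here are assumed for illustration.
            Set<ObjectName> names =
                    mbsc.queryNames(new ObjectName("WebSphere:type=MemoryLeakAdmin,*"), null);

            for (ObjectName name : names) {
                // Discover what the MBean actually offers instead of guessing.
                for (MBeanOperationInfo op : mbsc.getMBeanInfo(name).getOperations()) {
                    System.out.println(name + " -> " + op.getName());
                }
                // Hypothetical operation name; replace with one reported above.
                Object result = mbsc.invoke(name, "findLeaks", new Object[0], new String[0]);
                System.out.println("result: " + result);
            }
        }
    }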
A sampling of the different types of leak warnings that are output when a memory leak is detected.
To recap: this feature has two parts. First, it prevents memory leaks through a life-cycle listener that calls various parts of the Java API. It is common that, if the web application is the first code to call those Java APIs, the web application class loader will be pinned in memory, causing leaks; the listener ensures that WebSphere Application Server makes the call first and therefore prevents the class loader from being pinned. Second, it handles detection by executing code when a web application is stopped, undeployed, or reloaded: it scans for standard causes of memory leaks and, where it can, fixes them. Implemented in the WebSphere application class loader, a series of extensible, standard API calls and some reflection tricks help this detection feature do its job.
See the following references for additional information about memory leak detection, prevention, and action.
You can help improve the quality of IBM Education Assistant content by providing feedback.