The document discusses JAX-RS (JSR-311), which is a Java API for RESTful web services. It aims to make it easy to build RESTful web services and clients using plain Java objects and annotations. Key points covered include:
- JAX-RS uses annotations to map Java methods to HTTP methods and URI paths.
- It supports common features like URI templates, content negotiation, cookies and headers.
- Providers are used to bridge between HTTP requests/responses and Java objects.
- Annotations like @Produces specify the media types a resource can generate.
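The annotation-to-route mapping is the heart of the model. As a language-neutral illustration (JAX-RS itself is Java; this Python decorator sketch only mimics the idea of binding a method to an HTTP verb and a URI template, as @GET and @Path do):

```python
# Toy router mimicking JAX-RS-style annotations (@GET + @Path) with decorators.
routes = {}

def route(method, path):
    """Register a handler for an HTTP method and URI template."""
    def decorator(func):
        routes[(method, path)] = func
        return func
    return decorator

@route("GET", "/users/{id}")
def get_user(id):
    return {"id": id, "name": "user-" + id}

def dispatch(method, url):
    """Match a request against registered URI templates and invoke the handler."""
    for (m, template), handler in routes.items():
        if m != method:
            continue
        t_parts = template.strip("/").split("/")
        u_parts = url.strip("/").split("/")
        if len(t_parts) != len(u_parts):
            continue
        params = {}
        for t, u in zip(t_parts, u_parts):
            if t.startswith("{") and t.endswith("}"):
                params[t[1:-1]] = u      # bind the template variable
            elif t != u:
                break
        else:
            return handler(**params)
    return None

print(dispatch("GET", "/users/42"))   # {'id': '42', 'name': 'user-42'}
```

In real JAX-RS the runtime performs this matching and injects path parameters via @PathParam.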
IO is slow, memory is fast. For many applications, the main performance and scalability bottleneck is disk and network access. This session will cover strategies that can help you utilize your RAM efficiently even in a distributed environment.
Discover how a clustering solution like Terracotta can help you reduce overall application latency.
A real-time architecture using Hadoop & Storm - Nathan Bijnens & Geert Van La...jaxLondonConference
Presented at JAX London 2013
With the proliferation of data sources and growing user bases, the amount of data generated requires new approaches to storage and processing. Hadoop opened new possibilities, yet it falls short of instant delivery. Adding stream processing with Nathan Marz’s Storm can overcome this delay and bridge the gap to real-time aggregation and reporting. In the batch layer, all master data is kept and is immutable. Once the base data is stored, a recurring process indexes it: it reads all master data, parses it, and creates new views from it.
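The batch layer described above — an immutable master dataset periodically re-read in full to derive views — can be sketched in a few lines of plain Python (illustrative only; the talk's actual stack is Hadoop for the batch layer and Storm for the speed layer):

```python
# Immutable master dataset: events are only ever appended, never updated.
master = [
    {"page": "/home", "ts": 1}, {"page": "/about", "ts": 2},
    {"page": "/home", "ts": 3},
]

def rebuild_views(events):
    """Recurring batch job: re-read ALL master data and derive a view from scratch."""
    view = {}
    for e in events:
        view[e["page"]] = view.get(e["page"], 0) + 1
    return view

batch_view = rebuild_views(master)          # {'/home': 2, '/about': 1}
master.append({"page": "/home", "ts": 4})   # new base data arrives...
batch_view = rebuild_views(master)          # ...and the next batch run recomputes the view
print(batch_view)                           # {'/home': 3, '/about': 1}
```

Because views are always recomputed from the full immutable log, a buggy view can simply be rebuilt; the speed layer's only job is to cover the window since the last batch run.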
Ray (https://github.com/ray-project/ray) is a framework developed at UC Berkeley and maintained by Anyscale for building distributed AI applications. Over the last year, the broader machine learning ecosystem has been rapidly adopting Ray as the primary framework for distributed execution. In this talk, we will give an overview of how libraries such as Horovod (https://horovod.ai/), XGBoost, and Hugging Face Transformers have integrated with Ray. We will then showcase how Uber leverages Ray and these ecosystem integrations to simplify critical production workloads at Uber. This is a joint talk between Anyscale and Uber.
How Machine Learning and AI Can Support the Fight Against COVID-19Databricks
In this session, we show how to leverage the CORD dataset, which contains more than 400,000 scientific papers on COVID and related topics, together with recent advances in natural language processing and other AI techniques, to generate new insights in support of the ongoing fight against this infectious disease.
The idea explored in our talk is to apply modern NLP methods, such as named entity recognition (NER) and relation extraction, to articles’ abstracts (and, possibly, full text) to extract meaningful insights and to enable semantically rich search over the paper corpus. We first investigate how to train an NER model using the Medical NER dataset from Kaggle and a specialized version of BERT (PubMedBERT) as a feature extractor, to allow automatic extraction of entities such as medical condition names, medicine names, and pathogens. Entity extraction alone can provide some interesting findings, such as how approaches to COVID treatment evolved over time in terms of the medicines mentioned. We demonstrate how to use Azure Machine Learning to train the model.
To take this investigation one step further, we also investigate the use of pre-trained medical models, available as the Text Analytics for Health service on the Microsoft Azure cloud. In addition to many entity types, it can also extract relations (such as the dosage of a medicine), entity negations, and entity mappings to well-known medical ontologies. We investigate the best way to use Azure ML at scale to score a large paper collection and to store the results.
Integrating Deep Learning Libraries with Apache SparkDatabricks
The combination of deep learning with Apache Spark has the potential to make a huge impact. Joseph Bradley and Xiangrui Meng share best practices for integrating popular deep learning libraries with Apache Spark. Rather than comparing deep learning systems or specific optimizations, Joseph and Xiangrui focus on issues that are common to many deep learning frameworks when running on a Spark cluster, such as optimizing cluster setup (clusters can be configured to avoid task conflicts on GPUs and to allow using multiple GPUs per worker), configuring data ingest (setting up pipelines for efficient data ingest improves job throughput), and monitoring long-running jobs (interactive monitoring facilitates both configuration work and checking the stability of deep learning jobs). Joseph and Xiangrui then demonstrate the techniques using Google’s popular TensorFlow library.
Deep Dive: Memory Management in Apache SparkDatabricks
Memory management is at the heart of any data-intensive system. Spark, in particular, must arbitrate memory allocation between two main use cases: buffering intermediate data for processing (execution) and caching user data (storage). This talk will take a deep dive through the memory management designs adopted in Spark since its inception and discuss their performance and usability implications for the end user.
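A toy model of that arbitration (not Spark's actual implementation — just the unified-pool idea: execution and storage share one region, execution may evict cached blocks when it needs room, and caching never evicts execution memory; the real manager also protects a configurable minimum of storage, which this toy ignores):

```python
class UnifiedPool:
    """Toy unified memory pool: execution requests can evict storage (cached)
    bytes, but storage requests never evict execution memory."""
    def __init__(self, total):
        self.total = total
        self.execution = 0
        self.storage = 0

    def acquire_execution(self, n):
        free = self.total - self.execution - self.storage
        if free < n:                              # evict cached blocks to make room
            evict = min(self.storage, n - free)
            self.storage -= evict
        if self.total - self.execution - self.storage >= n:
            self.execution += n
            return True
        return False

    def acquire_storage(self, n):
        if self.total - self.execution - self.storage >= n:
            self.storage += n
            return True
        return False                              # caching fails instead of evicting execution

pool = UnifiedPool(total=100)
pool.acquire_storage(80)       # cache 80 units of user data
pool.acquire_execution(50)     # execution evicts 30 cached units to fit
print(pool.execution, pool.storage)   # 50 50
```

The asymmetry (execution evicts storage, never the reverse) reflects the intuition that spilled cache can be recomputed, while evicting in-flight execution buffers would fail the task.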
Summary of the lessons we learned with Docker (Dockerfile, storage, distributed networking) during the first iteration of the AdamCloud project (Fall 2014).
The AdamCloud project (part I) was presented here:
http://www.slideshare.net/davidonlaptop/bdm29-adamcloud-planification
Java is one of the most popular languages, and it's very important to understand the performance of Java servers. Modern JVMs compile Java code at runtime using a just-in-time (JIT) compiler, and such JIT-compiled code runs very close to optimized native code in terms of speed.
To understand performance, it's important to know how Java works, and we can measure performance using key metrics such as throughput and latency. After measuring performance, we can use profilers to understand application behavior and find performance bottlenecks.
In this session, we will look at how Java manages the memory and how it optimizes the Java code using JIT compilation. We will also look at how we can use the Java Flight Recorder (JFR) to profile the JVM and find performance bottlenecks.
Finally, we can look at how "Flame Graphs" can be used to identify the most frequent code-paths quickly and accurately.
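The "folding" step behind flame graphs is simple enough to sketch: collapse many sampled call stacks into counts per unique stack, so the widest frames reveal the hottest code paths (toy Python below; real workflows feed JFR or perf data into tools such as flamegraph.pl):

```python
from collections import Counter

# Each sample is the call stack captured at one profiling tick (root -> leaf).
samples = [
    ["main", "handle_request", "parse_json"],
    ["main", "handle_request", "parse_json"],
    ["main", "handle_request", "query_db"],
    ["main", "gc"],
]

# "Folding": count identical stacks; semicolon-joined stacks are the input
# format the classic flamegraph.pl script expects.
folded = Counter(";".join(stack) for stack in samples)
for stack, count in folded.most_common():
    print(stack, count)
# main;handle_request;parse_json 2   <- widest frame = most frequent code path
```

In a rendered flame graph, the width of each frame is proportional to these counts, which is why hot paths jump out visually.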
Writing Continuous Applications with Structured Streaming Python APIs in Apac...Databricks
Description:
We are amidst the Big Data Zeitgeist era, in which data comes at us fast, in myriad forms and formats, at intermittent intervals or in a continuous stream, and we need to respond to streaming data immediately. This need has created the notion of writing a streaming application that is continuous and reacts to and interacts with data in real time. We call this a continuous application, which we will discuss.
Abstract:
We are amidst the Big Data Zeitgeist era, in which data comes at us fast, in myriad forms and formats, at intermittent intervals or in a continuous stream, and we need to respond to streaming data immediately. This need has created the notion of writing a streaming application that is continuous and reacts to and interacts with data in real time. We call this a continuous application.
In this talk we will explore the concepts and motivations behind continuous applications, show how the Structured Streaming Python APIs in Apache Spark 2.x enable writing them, examine the programming model behind Structured Streaming, and look at the APIs that support it.
Through a short demo and code examples, I will demonstrate how to write an end-to-end Structured Streaming application that reacts and interacts with both real-time and historical data to perform advanced analytics using Spark SQL, DataFrames and Datasets APIs.
You’ll walk away with an understanding of what’s a continuous application, appreciate the easy-to-use Structured Streaming APIs, and why Structured Streaming in Apache Spark 2.x is a step forward in developing new kinds of streaming applications.
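The underlying model — the stream as an unbounded input whose aggregate result is updated incrementally at each trigger — can be imitated in plain Python (a conceptual toy only; the talk itself uses the Structured Streaming APIs):

```python
# Running aggregate updated per micro-batch, like a streaming word count.
state = {}

def process_batch(batch):
    """Each micro-batch only updates the running state; nothing is recomputed.
    The returned dict plays the role of the 'result table' after this trigger."""
    for word in batch:
        state[word] = state.get(word, 0) + 1
    return dict(state)

print(process_batch(["spark", "stream"]))   # {'spark': 1, 'stream': 1}
print(process_batch(["spark"]))             # {'spark': 2, 'stream': 1}
```

Structured Streaming generalizes exactly this: you declare the aggregation once, and the engine maintains the state and updates the result table as new data arrives.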
Scaling Spark Workloads on YARN - Boulder/Denver July 2015Mac Moore
Hortonworks Presentation at The Boulder/Denver BigData Meetup on July 22nd, 2015. Topic: Scaling Spark Workloads on YARN. Spark as a workload in a multi-tenant Hadoop infrastructure, scaling, cloud deployment, tuning.
Speeding Up Spark with Data Compression on Xeon+FPGA with David OjikaDatabricks
Data compression is a key aspect of big data processing frameworks such as Apache Hadoop and Spark, because compression reduces the size of input, shuffle, and output data, potentially speeding up overall processing time by orders of magnitude, especially in large-scale systems. However, since many compression algorithms with good compression ratios are also very CPU-intensive, developers are often forced to use less CPU-intensive algorithms at the cost of a reduced compression ratio.
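The ratio-versus-CPU tradeoff is easy to observe with zlib's compression levels (a quick stdlib illustration; Spark's actual codecs are different, e.g. Snappy, LZ4, or Zstd):

```python
import zlib

data = b"abcdefgh" * 10_000           # compressible sample payload (80 kB)

fast = zlib.compress(data, level=1)   # cheap on CPU, typically worse ratio
best = zlib.compress(data, level=9)   # CPU-heavy, typically better ratio

# Higher levels spend more CPU searching for matches to shrink the output.
print(len(data), len(fast), len(best))
```

Offloading the CPU-heavy end of this curve to an FPGA is exactly what lets the approach in this talk keep the good ratio without paying for it in CPU time.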
In this session, you’ll learn about a field-programmable gate array (FPGA)-based approach for accelerating data compression in Spark. By opportunistically offloading compute-heavy, compression tasks to the FPGA, the CPU is freed to perform other tasks, resulting in an improved overall performance for end-user applications. In contrast to existing GPU methods for acceleration, this approach affords more performance/energy efficiency, which can translate to significant savings in power and cooling costs, especially for large datacenters. In addition, this implementation offers the benefit of reconfigurability, allowing for the FPGA to be rapidly reprogrammed with a different algorithm to meet system or user requirements.
Using the Intel Xeon+FPGA platform, Ojika will share how they ported Swif (simplified workload-intuitive framework) to Spark, and the method used to enable an end-to-end, FPGA-aware Spark deployment. Swif is an in-house framework developed to democratize and simplify the deployment of FPGAs in heterogeneous datacenters. Using Swif’s application programming interface (API), he’ll describe how system architects and software developers can seamlessly integrate FPGAs into their Spark workflow, and in particular, deploy FPGA-based compression schemes that achieve improved performance compared to software-only approaches. In general, Swif’s software stack, along with the underlying Xeon+FPGA hardware platform, provides a workload-centric processing environment that streamlines the process of offloading CPU-intensive tasks to shared FPGA resources, while providing improved system throughput and high resource utilization.
Packed Objects: Fast Talking Java Meets Native Code - Steve Poole (IBM)jaxLondonConference
Presented at JAX London 2013
Worried about the future of Java? Want to see it keep moving forward? Don't be concerned. The transformation of Java is already underway. Driven by new technologies and new opportunities Java and the JVM are entering uncharted worlds and challenging old approaches. In this session learn about one such expedition in the form of an introductory talk to technology being developed by IBM. This experimental technology is exploring a new way to share data between the JVM and other runtimes.
Bringing the Semantic Web closer to reality: PostgreSQL as RDF Graph DatabaseJimmy Angelakos
Presentation of an investigation into how Python's RDFLib and SQLAlchemy can be used to leverage PostgreSQL's capabilities to provide a persistent storage back-end for Graphs, and become the elusive practical RDF triple store for the Semantic Web (or simply help you export your data to someone who's expecting RDF)!
Talk presented at FOSDEM 2017 in Brussels on 04-05/02/2017. Practical & hands-on presentation with example code which is certainly not optimal ;)
Video:
MP4: http://video.fosdem.org/2017/H.1309/postgresql_semantic_web.mp4
WebM/VP8: http://ftp.osuosl.org/pub/fosdem/2017/H.1309/postgresql_semantic_web.vp8.webm
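The core trick of a relational triple store — one (subject, predicate, object) table plus indexes matched to the query patterns — fits in a few lines. Sketched here with stdlib sqlite3 rather than the talk's RDFLib + SQLAlchemy + PostgreSQL stack:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE triples (s TEXT, p TEXT, o TEXT)")
conn.execute("CREATE INDEX idx_sp ON triples (s, p)")   # indexes per query pattern

triples = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name", "Alice"),
    ("ex:bob",   "foaf:name", "Bob"),
]
conn.executemany("INSERT INTO triples VALUES (?, ?, ?)", triples)

# A SPARQL-ish pattern (?who foaf:knows ex:bob) becomes a plain SQL filter.
rows = conn.execute(
    "SELECT s FROM triples WHERE p = 'foaf:knows' AND o = 'ex:bob'"
).fetchall()
print(rows)   # [('ex:alice',)]
```

RDFLib's SQL-backed stores do essentially this, with extra machinery for named graphs, datatypes, and blank nodes.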
Improving PySpark performance: Spark Performance Beyond the JVMHolden Karau
This talk covers a number of important topics for making scalable Apache Spark programs - from RDD re-use to considerations for working with Key/Value data, why avoiding groupByKey is important and more. We also include Python specific considerations, like the difference between DataFrames/Datasets and traditional RDDs with Python. We also explore some tricks to intermix Python and JVM code for cases where the performance overhead is too high.
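Why avoiding groupByKey matters can be seen by counting shuffled records in a toy model (plain Python, illustrating map-side combining; reduceByKey performs this combining for you before the shuffle):

```python
from collections import Counter

# Two "partitions" of (word, 1) pairs before the shuffle.
partitions = [
    [("a", 1), ("a", 1), ("b", 1)],
    [("a", 1), ("b", 1), ("b", 1)],
]

# groupByKey: every record crosses the network.
shuffled_group = sum(len(p) for p in partitions)      # 6 records shuffled

# reduceByKey: combine locally first, then shuffle one record per key per partition.
combined = [Counter(k for k, _ in p) for p in partitions]
shuffled_reduce = sum(len(c) for c in combined)       # 4 records shuffled

print(shuffled_group, shuffled_reduce)   # 6 4
```

With skewed keys and millions of records the gap is far larger, and groupByKey additionally has to hold each key's full value list in memory on one executor.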
An investigation of how PostgreSQL and its latest capabilities (JSONB data type, GIN indices, Full Text Search) can be used to store, index and perform queries on structured Bibliographic Data such as MARC21/MARCXML, breaking the dependence on proprietary and arcane or obsolete software products.
Talk presented at FOSDEM 2016 in Brussels on 31/01/2016. This is a very practical & hands-on presentation with example code which is certainly not optimal ;)
A Java Implementer's Guide to Boosting Apache Spark Performance by Tim Ellison.J On The Beach
Apache Spark has rocked the big data landscape, quickly becoming the largest open-source big data community, with over 750 contributors from more than 200 organizations. Spark's core tenets of speed, ease of use, and a unified programming model fit neatly with the high-performance, scalable, and manageable characteristics of modern Java runtimes. In this talk we introduce the Spark programming model and describe some unique Java runtime capabilities in the JIT, fast networking, serialization techniques, and GPU off-loading that deliver the ultimate big data platform for solving business problems. We will show how solutions previously infeasible with regular Java programming become possible with a high-performance Spark core runtime, enabling you to solve problems smarter and faster.
Creating interest lists on Facebook and TwitterŁukasz Dębski
A short tutorial on how to create an interest list on Facebook and Twitter. I also show how these interest lists can help us with marketing activities and at work.
Framing in televised political advertisingTomasz Olczyk
Chapter 5 of my book on political advertising, Politrozrywka i popperswazja: http://www.waip.com.pl/index.php/waip/waip/nowosci/politrozrywka_i_popperswazja_reklama_telewizyjna_w_polskich_kampaniach_wyborczych_xxi_wieku
Fifth chapter of my book about political advertising, titled Politainment and Poppersuasion.
The latest version of the Netflix Cloud Architecture story was given at Gluecon on May 23rd, 2012. Gluecon rocks, and lots of Van Halen references were added for the occasion. The tradeoff between a developer-driven, high-functionality, AWS-based PaaS and an operations-driven, low-cost, portable PaaS is discussed. The three sections cover the developer view, the operator view, and the builder view.
Java Spring MVC Framework with AngularJS by Google and HTML5Tuna Tore
Course Description
#springframework, #spring, #udemy, #discount, #programming, #springmvc, spring, #udemycourse, #education
A new Udemy course on the latest Java Spring MVC Framework 4 for developing web applications with popular and proven technologies such as AngularJS by Google and HTML5. (Lectures are divided into three main sections, so you don't have to learn the AngularJS framework until you start the last section, which teaches AngularJS by Google and its integration with Java Spring MVC Framework 4.)
https://www.udemy.com/java-spring-mvc-framework-with-angularjs-by-google-and-html5
Moreover, this course is designed to teach you the latest web technologies in a short period of time, at a low training cost, with high-quality content including production-quality code examples.
Therefore, after attending this course you will be ready to design and develop commercial Java Spring MVC applications, having learned the main principles, best practices, and most important concepts.
Furthermore, this is a fast-track course covering the most important concepts in the AngularJS framework, HTML5, and the latest Java Spring MVC Framework 4.x, with code examples and sample applications. You will be able to download the source code, slides, and diagrams, and you can reuse those samples in your own applications.
The benefits of attending this Udemy course are listed below:
- You will earn a higher salary, since you will be able to use the latest and most productive technologies; this course will also improve the way you think about programming by teaching you the dependency-injection principle used in Spring MVC and AngularJS.
- You will be more confident about commercial web programming in the years ahead, as well as about general programming concepts.
- We will only use free, open-source software tools during the development of components in this course.
- You will learn the latest Java Spring MVC Framework with hands-on examples.
- You will learn how to use AngularJS by Google to develop structured, rich client-side applications.
- You will understand the most useful basic HTML5 tags, with code examples.
- You will gain experience using CSS (style sheets) in web applications.
- You will learn how to develop, test, run, and debug Java Spring MVC applications.
- You will learn how to integrate AngularJS with the Java Spring MVC framework.
OWASP Top 10 - Checkmarx Presentation at Polytechnic Institute of Cávado and AveCheckmarx
Presented by Paulo Silva, Security Researcher at Checkmarx on October 31, 2018 at Polytechnic Institute of Cávado and Ave.
Learn all about the OWASP Top 10 from his talk:
Part I
Web Application architecture
The HTTP protocol
HTTP Request walk-through
Part II
What is OWASP
What is the OWASP TOP 10
OWASP Top 10 walk-through
Improving performance by changing the rules from fast to SPDYCotendo
SPDY was proposed by Google back in November 2009 to reduce the latency and load time of web pages. It was provided as part of the Chromium open-source project and is enabled in Chrome by default.
We at Cotendo took on the challenge, implemented the server side, and extended our proxies to support SPDY, providing SPDY-to-HTTP “translation”. Guess what? It really speeds things up. But like all good new things, there is still work to do. We will share insights from our implementation and our optimization of SSL-based traffic, and present performance data from both Google’s and our customers’ deployments.
What’s next?
We believe the introduction of SPDY as a new application layer presents a unique opportunity to rethink web design concepts and front-end-optimization (FEO) techniques. We will discuss some optimizations we developed and suggest some guidelines on how you can approach these new types of optimizations.
This talk is a generic but comprehensive overview of security mechanisms, controls, and potential attacks in modern browsers. It also focuses on new technologies, such as HTML5 and related APIs, to highlight new attack scenarios against browsers.
Presentation given at the International PHP conference in Mainz, October 2012, dealing with a bit of history about the HTTP protocol, SPDY and the future (HTTP/2.0).
HTTP is one of the most widely used protocols in the world.
HTTP 1.1, still in use today, was specified 18 years ago, in 1999.
With the increasing complexity of web applications, the capabilities of HTTP 1.1 are no longer sufficient to meet growing demands on performance and responsiveness.
To meet these new requirements, HTTP had to evolve. HTTP/2 is designed to make web applications faster, simpler, and more reliable.
In this talk I will cover:
- the drawbacks of HTTP 1.1 and why we need a new version of HTTP;
- the advantages HTTP/2 offers over the previous version;
- how the new protocol affected the new Servlet 4.0 specification and how we can use it.
Oleg Natalushko. Drupal server anatomy. DrupalCamp Kyiv 2011Vlad Savitsky
Any useful site sooner or later outgrows shared-hosting plans, and its owners start looking toward VPS hosting, dedicated servers, and cloud solutions.
From our talk you will learn:
- whether you should look for another hosting provider, or whether it is time to move to a dedicated server;
- how to choose a facility for renting or colocating your own server;
- how to choose the server's resource characteristics and how to select and configure the software;
- what to do next, once the server is up and running.
Many points of the talk will be illustrated with real examples from the experience of IT Patrol inc.
This talk describes an extension to the Linux TCP/IP stack that allows HTTPS to run in the same stack as TCP and IP. Application-layer DDoS attacks such as HTTP floods are usually mitigated by HTTP accelerators or HTTP load balancers. However, the Linux socket interface used by this software does not deliver the performance required under the extreme loads caused by DDoS attacks. User-space TCP/IP stacks for HTTP servers are gaining popularity thanks to their efficiency, but a TCP/IP stack is a large and complex body of code, so it is unwise to implement and run it twice, in user space and in kernel space. The in-kernel TCP/IP stack is well integrated with many powerful tools, such as IPTables, IPVS, tc, and tcpdump, which are unavailable to a user-space TCP/IP stack or require complex interfaces. The speaker will present Tempesta FW, a solution that hands HTTPS processing to the kernel: HTTPS is built into the Linux TCP/IP stack. Acting as an HTTP firewall, Tempesta FW applies a set of rate limits and heuristic rules to defend against attacks such as HTTPS floods and Slow HTTP.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
4. What ? Who ? Why ?
Architectural overview
[Diagram: an HTTP client sends GET http://weather.yahoo.com/israel/tel-aviv/ramat-gan-1967869/ as an HTTP request to Yahoo's weather web server; the HTTP response comes back with RESPONSE CODE: 200 (OK) + BODY.]
4
5. What ? Who ? Why ?
Architectural overview
GET http://weather.yahoo.com/israel/tel-aviv/ramat-gan-1967869/
Read: the weather in Israel, in the Tel Aviv area (Ramat Gan).
5
10. What ? Who ? Why ?
HTTP request overview
[Diagram: the HTTP client sends an HTTP request to Yahoo's weather web server.]
RESPONSE CODE: 200 (OK) + BODY
HTTP/1.1 200 OK
Connection: close
Content-Type: text/html;charset=utf-8
Cache-Control: private
Content-Length: 69947
Date: Sun, 22 Nov 2009 07:59:11 GMT
Set-Cookie: t=164531234;
10
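The response above can be reproduced with JDK-only classes. This is a minimal sketch, not part of the original deck: the `/weather` path and the response body are invented for illustration; only the headers mirror the slide.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class HeaderDemo {
    public static void main(String[] args) throws Exception {
        // In-process server answering with headers similar to the slide's response.
        HttpServer server = HttpServer.create(new InetSocketAddress("localhost", 0), 0);
        server.createContext("/weather", exchange -> {
            byte[] body = "<html>sunny</html>".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "text/html;charset=utf-8");
            exchange.getResponseHeaders().set("Cache-Control", "private");
            exchange.getResponseHeaders().set("Set-Cookie", "t=164531234;");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();

        // Client side: issue the GET and inspect the pieces of the response.
        URL url = new URL("http://localhost:" + server.getAddress().getPort() + "/weather");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        System.out.println(conn.getResponseCode());
        System.out.println(conn.getHeaderField("Content-Type"));
        System.out.println(conn.getHeaderField("Set-Cookie"));
        conn.disconnect();
        server.stop(0);
    }
}
```
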
14. What ? Who ? Why ?
Everything is a resource …
» A resource is a network-accessible data object or service identified by a URI (IRI) [1]:
• Images,
• Documents (HTML, PDF, …),
• Geo-location,
• Weather
[1] Section 3, Atom Publishing Protocol
14
15. What ? Who ? Why ?
Resources:
» Collections
http://portal/bicycles/
» Members/Items:
http://portal/documents/mydog.doc
15
16. What ? Who ? Why ?
HTTP defines more than just ‘GET’ and ‘POST’:

Method  | REST operation  | Description
--------|-----------------|------------------------------------
POST    | CREATE (INSERT) | Create or update
GET     | READ (QUERY)    | Query about the resource
PUT     | UPDATE (CHANGE) | Update
DELETE  | DELETE          | I want to delete whatever-it-is …
HEAD    |                 | I’m something like ‘GET’ [1]
OPTIONS |                 | JAX-RS mumbles something about me.
TRACE   |                 |
CONNECT |                 |
[1] Unique extension of JAX-RS.
16
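The method-to-CRUD mapping in the table can be sketched without any JAX-RS dependency. This is an illustrative dispatcher, not the deck's code: the `store` map and the reply strings are invented for the example.

```java
import java.util.HashMap;
import java.util.Map;

public class CrudDispatch {
    static final Map<String, String> store = new HashMap<>();

    // Maps an HTTP method name to the corresponding CRUD operation from the table.
    static String dispatch(String method, String id, String body) {
        switch (method) {
            case "POST":   store.put(id, body); return "created " + id;  // CREATE
            case "GET":    return store.getOrDefault(id, "404");         // READ
            case "PUT":    store.put(id, body); return "updated " + id;  // UPDATE
            case "DELETE": store.remove(id);    return "deleted " + id;  // DELETE
            case "HEAD":   return "";   // like GET, but no body
            default:       return "405"; // OPTIONS, TRACE, CONNECT not handled here
        }
    }

    public static void main(String[] args) {
        System.out.println(dispatch("POST", "650", "Kermit"));
        System.out.println(dispatch("GET", "650", null));
        System.out.println(dispatch("DELETE", "650", null));
        System.out.println(dispatch("GET", "650", null));
    }
}
```
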
17. What ? Who ? Why ?
Roy Fielding defines REST as:
» Free of any platform or language,
» Free of any schema or protocol (beyond that of HTTP),
» No ALP (Application- or Presentation-layer Protocol) coercion [1]
» Only a set of recommendations!
[1] Principled Design of the Modern Web Architecture - Roy T. Fielding, Richard N. Taylor - section 4.
17
18. What ? Who ? Why ?
Some important points …
» REST recommends using hierarchical URIs instead of query-based URLs:
Don’t use:
http://host.com/service?type=weather&when=today
Use:
http://host.com/service/weather/today
» Atom Publishing Protocol (APP).
RFC-5023 (text-only)
18
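The two URI styles above can be compared with `java.net.URI` alone; a small sketch (the segment indices are specific to this example URL):

```java
import java.net.URI;

public class UriStyles {
    public static void main(String[] args) {
        // Query-based form the slide advises against: parameters live in the query string.
        URI queryStyle = URI.create("http://host.com/service?type=weather&when=today");
        System.out.println(queryStyle.getQuery());

        // Hierarchical form REST recommends: parameters are path segments,
        // so each resource gets its own cacheable, bookmarkable URI.
        URI pathStyle = URI.create("http://host.com/service/weather/today");
        String[] segments = pathStyle.getPath().split("/");
        System.out.println(segments[2] + " / " + segments[3]);
    }
}
```
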
19. What ? Who ? Why ?
REST is …
» Architectural style, not technology !
Client/server + Request/response approach.
» Everything is a RESOURCE.
» CRUD (Create / Read / Update / Delete) … [1]
» Stateless by nature (excellent for distributed systems),
» Cacheable (naturally supported !)
» A great way to build web services!
[1] Reference to other acronyms at Wikipedia
19
26. JAX-RS (JSR-311)
Reading the catalog
GET …/catalog
CLIENT SERVER
List all items available for sale.
/**
* http://www.disney.com/muppets/catalog
*/
@Path("/catalog")
public class MuppetCatalogController {
@GET
public String findAllCatalogItems() {
String list = ... // Compile a list of all items.
return list;
}
}
26
27. JAX-RS (JSR-311)
Reading the catalog
GET …/catalog?muppetId=650
CLIENT SERVER
Properties of Kermit
public void doGet(HttpServletRequest req,
HttpServletResponse resp) throws ... {
int muppetId;
String stringId = req.getParameter("muppetId");
if (stringId != null) {
// Hoping for no exception to occur!
muppetId = Integer.parseInt(stringId);
} else {
muppetId = ... // Use some default value …
}
Muppet muppet = findMuppet(muppetId);
generateTextualOutput(muppet, resp.getWriter());
}
27
28. JAX-RS (JSR-311)
URI template
GET …/catalog?muppetId=650
CLIENT SERVER
Properties of Kermit
@GET
@Path("/catalog")
public String findItem(@QueryParam("muppetId") int muppetId) {
Muppet muppet = findMuppet(muppetId);
return ...
}
28
29. JAX-RS (JSR-311)
URI template
GET …/catalog?muppetId=650
CLIENT SERVER
Properties of Kermit
@GET
@Path("/catalog")
public String findItem(@DefaultValue("0")
@QueryParam("muppetId") int muppetId) {
Muppet muppet = findMuppet(muppetId);
return ...
}
29
30. JAX-RS (JSR-311)
URI template
GET …/catalog/650
CLIENT SERVER
Properties of Kermit
@GET
@Path("/catalog/{muppetId}")
public String findItem(@PathParam("muppetId") int muppetId) {
// ... Do something
}
30
31. JAX-RS (JSR-311)
URI template
GET …/catalog/650
CLIENT SERVER
Properties of Kermit
@GET
// Regular expression in the URI template:
@Path("/catalog/{muppetId:[0-9]+}")
public String findItem(@PathParam("muppetId") int muppetId) {
// ... Do something
}
31
32. JAX-RS (JSR-311)
URI template
GET …/catalog/650
CLIENT SERVER
Properties of Kermit
@GET
@Path("/catalog/{muppetId:[0-9]+}")
public String findItem(@PathParam("muppetId") int muppetId) {
// ... Do something
}
// ACCEPT: http://.../catalog/-477
@GET
@Path("/catalog/{muppetId:-[0-9]+}")
public String findItem2(@PathParam("muppetId") int muppetId) {
// ... Do something
}
32
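The template constraints above behave like ordinary regular expressions. A JDK-only sketch of the same matching (the pattern strings mirror the slide's templates; everything else is illustrative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PathTemplateRegex {
    public static void main(String[] args) {
        // Same constraints as @Path("/catalog/{muppetId:[0-9]+}") and its negative twin.
        Pattern positive = Pattern.compile("/catalog/([0-9]+)");
        Pattern negative = Pattern.compile("/catalog/(-[0-9]+)");

        Matcher m = positive.matcher("/catalog/650");
        if (m.matches()) {
            // The captured group plays the role of the @PathParam value.
            System.out.println("matched id " + m.group(1));
        }
        System.out.println(positive.matcher("/catalog/-477").matches()); // digits only
        System.out.println(negative.matcher("/catalog/-477").matches()); // negative form
    }
}
```
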
33. JAX-RS (JSR-311)
Cookies, headers and friends …
[Diagram: two clients reach the same server: a public client on the Internet via http://www.disney.com, and an internal CRM client on the intranet via http://crm.intranet.]

GET /muppet/catalog HTTP/1.1
host: crm.intranet
accept: text/plain
User-Agent: Mozilla/4.0 (...)
Cookie: user-type=ADMIN
33
34. JAX-RS (JSR-311)
Cookies, headers and friends …
@PUT
@Path("/catalog/{muppetId}/{propertyName}")
public void updateItem(
@HeaderParam("host") String hostname,
@CookieParam("user-type") UserType type, ...) {
if (!hostname.equals("crm.intranet")) { throw ... }
if (!UserType.CUSTOMER_CARE.equals(type)) { throw ... }
// ... handle the request.
}
enum UserType { ADMIN, CUSTOMER_CARE, TECHNICAL; }
34
37. JAX-RS (JSR-311)
CLIENT SERVER
GET http://..../muppets/muppetOfTheMonth/image
+ content-negotiation precondition
RESPONSE:
• 200 (OK) + body
• 304 (Not Modified)
37
38. JAX-RS (JSR-311)
Content negotiation
@GET
@Path("/muppetOfTheMonth/image")
public Response findMuppetOfTheMonth(
@Context UriInfo uri, @Context Request request) {
File file = locateFile(uri.getRequestUri());
EntityTag tag = calculateTag(file);
Date modified = new Date(file.lastModified());
ResponseBuilder r = request.evaluatePreconditions(modified, tag);
// ...
}
38
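What `evaluatePreconditions` decides for an ETag can be sketched in plain Java; this is an illustrative reduction, not the JAX-RS implementation, and the tag values are made up:

```java
public class ConditionalGet {
    // Minimal sketch of the ETag half of evaluatePreconditions:
    // if the client's If-None-Match equals the current tag, the entity
    // has not changed and the server may answer 304 without a body.
    static int evaluate(String currentTag, String ifNoneMatchHeader) {
        if (ifNoneMatchHeader != null && ifNoneMatchHeader.equals(currentTag)) {
            return 304; // Not Modified: the client's cached copy is still valid
        }
        return 200;     // OK: send the full body (and the current tag)
    }

    public static void main(String[] args) {
        String tag = "\"v42\""; // e.g. derived from file.lastModified()
        System.out.println(evaluate(tag, null));       // first request
        System.out.println(evaluate(tag, "\"v42\"")); // revalidation, tag unchanged
        System.out.println(evaluate(tag, "\"v41\"")); // stale tag, resend body
    }
}
```
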
39. JAX-RS (JSR-311)
The @HEAD method
HEAD …/catalog/650/image
CLIENT SERVER
HTTP/1.1 200 OK
Content-type: text/plain
Content-length: 1024256
39
40. JAX-RS (JSR-311)
The @HEAD method
@GET
@Path("/catalog/{muppetId}/image")
public Response fetchThumbnailHeadAndBody() {
return ...;
}
@HEAD
@Path("/catalog/{muppetId}/image")
public Response fetchThumbnailHeadOnly(@PathParam("muppetId") int muppetId) {
int size = getThumbnailSize(muppetId);
ResponseBuilder builder = Response.noContent();
builder.header("Content-length", size);
builder.header("Is-ReadOnly", true);
return builder.build();
}
40
41. JAX-RS (JSR-311)
Bridging between the two worlds ….
HTTP Java
MessageBodyReader
MessageBodyWriter
The Millau Viaduct bridge, part of the E11 highway connecting Paris and Barcelona; the tallest bridge ever constructed.
41
42. JAX-RS (JSR-311)
Providers:
» Providers adapt between the “HTTP world” and
our own application domain:
MessageBodyReader,
MessageBodyWriter

HTTP Request → MessageBodyReader → Resource class (Java object) → MessageBodyWriter → HTTP Response
42
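The reader/writer bridge can be illustrated without the JAX-RS API: the two static methods below play the roles of a MessageBodyReader<User> and a MessageBodyWriter<User>. The `User` type and the hand-rolled JSON string are invented for the sketch.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.charset.StandardCharsets;

public class ProviderSketch {
    static final class User {
        final String name;
        User(String name) { this.name = name; }
    }

    // Plays the role of a MessageBodyReader<User>: HTTP entity bytes -> Java object.
    static User readFrom(InputStream entityStream) throws Exception {
        String body = new String(entityStream.readAllBytes(), StandardCharsets.UTF_8);
        return new User(body.trim());
    }

    // Plays the role of a MessageBodyWriter<User>: Java object -> HTTP entity bytes.
    static void writeTo(User user, OutputStream entityStream) throws Exception {
        String json = "{\"name\":\"" + user.name + "\"}";
        entityStream.write(json.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        // Request side: bytes in, object out ...
        User u = readFrom(new ByteArrayInputStream("Kermit".getBytes(StandardCharsets.UTF_8)));
        // ... response side: object in, bytes out.
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeTo(u, out);
        System.out.println(out.toString(StandardCharsets.UTF_8));
    }
}
```
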
43. JAX-RS (JSR-311)
@Path("users/{id}/properties")
public class UserPropertiesResource {
@GET
@Produces("application/json")
public User findUserAsJson(@PathParam("id") int userId) {
return userDao.getUser(userId);
}
@GET
@Produces("application/atom+xml")
public User findUserAsAtom(@PathParam("id") int userId) { ... }
@POST
public void createUser(User user) {
userDao.persist(user);
}
}
43
44. JAX-RS (JSR-311)
@Provider
@Produces("application/json")
public class JSONWriter implements MessageBodyWriter<User> {
@Override
public long getSize(User user, ...) {
return JSON.toString(user).length();
}
@Override
public boolean isWriteable(Class<?> type, ...) {
return User.class.equals(type);
}
@Override
public void writeTo(User user, ... OutputStream out) {
JSON.write(user, out);
}
}
44
53. Content delivery
Proprietary (custom-made) solution
» When we have a really simple format:
Short-message strings,
Single result objects
» Specific binary format,
Multimedia (Images, Movies, etc…),
Proprietary protocol.
» Bound to certain technology:
JAXB, DOM-based (JAXP),
Java native (binary) serialization.
53
54. Content delivery
Hessian binary web-service protocol
» Binary, compact format.
» Very lightweight:
Extremely suitable for mobile or other limited devices.
Provides J2ME libraries.
» No external IDL or schema,
» Language independent,
» Support for compression, encryption, signatures (with
partial external support).
54
55. Content delivery
Burlap XML-based web-service protocol
» Minimal XML-based format,
» Very lightweight (considering its XML format),
Provides J2ME libraries.
» No external IDL or schema,
» Language independent (as XML is!),
» Sufficient to drive EJB:
Cell phone -> Burlap -> RESTEasy -> EJB!
55
56. Content delivery
Avro serialization stack
» Part of the Hadoop stack,
» Lightweight, but not as lightweight as the other protocols.
» Requires a schema:
Pluggable architecture to support multiple formats (JSON,
XML, etc.)
» Dynamic typing (very rich support),
» Untagged data.
56
57. Summary
» REST is a simple approach to web services.
» JAX-RS is a reflection of the HTTP world,
using Java 5 annotations only.
» It lacks a concrete security model.
57