My talk at HighLoadStrategy (Lithuania, 2015) about Yandex.Tank, an open-source performance measurement tool born in the advertising technologies department at Yandex.
In this talk, we describe the design and implementation of the Python Streaming API support that has been submitted for inclusion in mainline Flink. Python is one of the most popular programming languages for data analysis. Its readability emphasizes development productivity, and as a scripting language it requires neither compilation nor a complex development environment setup. Flink already supports Python APIs for batch programming; unfortunately, the mechanism used to support batch programs (i.e., the DataSet API) does not work for the Streaming API. We describe the limitations of the batch implementation and provide insights into how we solved this using Jython. We will walk through some example programs using the new Python API and compare programmability and performance with the Java and Scala streaming APIs.
Flink Forward Berlin 2017: Aljoscha Krettek - Talk Python to me: Stream Proce... - Flink Forward
Flink is a great stream processor, Python is a great programming language, Apache Beam is a great programming model and portability layer. Using all three together is a great idea! We will demo and discuss writing Beam Python pipelines and running them on Flink. We will cover Beam's portability vision that led here, what you need to know about how Beam Python pipelines are executed on Flink, and where Beam's portability framework is headed next (hint: Python pipelines reading from non-Python connectors)
Flink Forward Berlin 2017: Matt Zimmer - Custom, Complex Windows at Scale Usi... - Flink Forward
The windowing capabilities offered by most stream processing engines are limited to aligned windows of a fixed duration. However, many real-world event processing use cases don’t fit this rigid structure, resulting in awkward processing pipelines. There haven’t been good alternatives, until recently that is. Apache Flink offers a rich Window API that supports implementing unaligned windows of varying duration. In this talk, Matt Zimmer will discuss using this API at Netflix to aggregate events into windows customized along varying definitions of a session. He will talk about implementation details such as: * Handling out-of-order events * Limiting state build-up while aggregating a subset of events from an event stream * Periodically emitting early results * Creating windows bounded by a type of event Attendees will leave this talk with practical techniques and knowledge to implement their own custom windows in Apache Flink.
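The session-window idea the abstract describes can be sketched in plain Python. This is a hypothetical illustration of the semantics only, not Netflix's or Flink's actual code; the function name `sessionize` and the fixed-gap session definition are assumptions for the example:

```python
def sessionize(timestamps, gap):
    """Group event timestamps into sessions: a new session starts
    whenever the gap since the previous event exceeds `gap`."""
    sessions = []
    for ts in sorted(timestamps):
        if sessions and ts - sessions[-1][1] <= gap:
            sessions[-1] = (sessions[-1][0], ts)  # extend the open session
        else:
            sessions.append((ts, ts))             # start a new session
    return sessions

# Events at t=1,2,3, a quiet period, then t=10,11 -> two sessions
print(sessionize([1, 2, 3, 10, 11], gap=5))  # [(1, 3), (10, 11)]
```

Flink's `EventTimeSessionWindows` implements this merging logic incrementally over an unbounded stream; the sketch above only shows why session windows are unaligned and of varying duration.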
Containers are the future for all microservice-based apps. Where do you deploy them? How do you manage them? At DigitalOcean we went through the growing pains of trying five approaches to scheduling Docker containers: Mesos, Kubernetes, Docker Swarm, Nomad, and even manual scheduling. Let us walk you through how we chose different schedulers for different applications, with tips and tricks for choosing a scheduler yourself.
OSMC 2008 | An Active Check on the Status of the Nagios Plugins PART 2 by Ton... - NETWAYS
Ton will look back over the last year to see what has been achieved in the Nagios Plugins project and discuss some of the changes planned for the future.
Flink Forward San Francisco 2019: Scaling a real-time streaming warehouse wit... - Flink Forward
Scaling a real-time streaming warehouse with Apache Flink, Parquet and Kubernetes
At Branch, we process more than 12 billion events per day, and store and aggregate terabytes of data daily. We use Apache Flink for processing, transforming and aggregating events, and Parquet as the data storage format. This talk covers our challenges with scaling our warehouse, namely:
How did we scale our Flink-Parquet warehouse to handle 3x increase in traffic?
How do we ensure exactly once, event-time based, fault tolerant processing of events?
In this talk, we also provide an overview on deploying and scaling our streaming warehouse. We give an overview on:
How we scaled our Parquet warehouse by tuning memory
Running on a Kubernetes cluster for resource management
How we migrated our streaming jobs with no disruption from Mesos to Kubernetes
Our challenges and learnings along the way
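One ingredient of exactly-once (more precisely, effectively-once) processing can be sketched in plain Python: making the aggregation idempotent so that replayed events after a failure do not double-count. This is an assumption-laden illustration, not Branch's actual implementation; Flink itself achieves the same effect with checkpointed state rather than an explicit id set:

```python
class EffectivelyOnceCounter:
    """Idempotent per-window counting: events carry a unique id, and a
    replayed event (same id) is detected and skipped, so reprocessing
    after a restart leaves the counts unchanged."""

    def __init__(self):
        self.seen_ids = set()
        self.counts = {}

    def process(self, event_id, window):
        if event_id in self.seen_ids:
            return  # duplicate delivery after a replay; ignore it
        self.seen_ids.add(event_id)
        self.counts[window] = self.counts.get(window, 0) + 1

counter = EffectivelyOnceCounter()
for eid, win in [("e1", 0), ("e2", 0), ("e1", 0)]:  # "e1" delivered twice
    counter.process(eid, win)
print(counter.counts)  # {0: 2}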
Flink Forward Berlin 2017: Dominik Bruhn - Deploying Flink Jobs as Docker Con... - Flink Forward
This talk will focus on how to package, distribute and deploy Flink jobs by leveraging existing Docker technology. Previously, deploying Flink jobs was a manual task that led to errors. We present an approach that works well in a CI/CD environment by automating most steps: from the code of a Flink job in a repository to a running job on a YARN cluster.
In file systems, large sequential writes are more beneficial than small random writes, and hence many storage systems implement a log-structured file system. In the same way, the cloud favors large objects over small objects. Cloud providers place throttling limits on PUTs and GETs, so it takes significantly longer to upload a bunch of small objects than a single large object of the aggregate size. Moreover, there are per-PUT costs associated with uploading smaller objects.
At Netflix, a lot of media assets and their related metadata are generated and pushed to the cloud.
We propose a strategy to compact these small objects into larger blobs before uploading them to the cloud. We will discuss how to select the relevant smaller objects, and how to manage the indexing of these objects within the blob, along with the changes to reads, overwrites and deletes.
Finally, we will showcase the potential impact of such a strategy on Netflix assets in terms of cost and performance.
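The core compaction idea can be sketched in a few lines of Python: concatenate the small objects into one blob and keep an index of (offset, length) per key, so individual objects remain addressable with a ranged read. The function names `pack` and `read` are hypothetical; the talk's real system also has to handle overwrites and deletes, which this sketch omits:

```python
import io

def pack(objects):
    """Concatenate small objects into one blob; the index maps each
    key to its (offset, length) within the blob."""
    buf, index = io.BytesIO(), {}
    for key, data in objects.items():
        index[key] = (buf.tell(), len(data))
        buf.write(data)
    return buf.getvalue(), index

def read(blob, index, key):
    """Ranged read of a single object out of the packed blob."""
    offset, length = index[key]
    return blob[offset:offset + length]

blob, idx = pack({"a.json": b'{"x":1}', "b.json": b'{"y":2}'})
print(read(blob, idx, "b.json"))  # b'{"y":2}'
```

One upload (PUT) now covers many objects, which is exactly the trade the abstract describes: fewer, larger PUTs in exchange for maintaining an index.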
Boosting command line experience with python and awk - Kirill Pavlov
It is often necessary to manipulate data as fast as possible, whether it is a column average calculation or a simple join and filter.
Servers often lack convenient tools such as Python (NumPy/Pandas) or R; moreover, the data might not fit into memory.
This talk shows how to make fast but inconvenient command line tools great again.
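The "data does not fit into memory" constraint is the key point: tools like awk stream line by line in constant memory, and the same pattern is easy to write in stdlib Python. A small illustrative sketch (the function name and column convention are ours, not from the talk):

```python
def column_average(lines, col, sep=None):
    """Streaming average of one column; constant memory, so the input
    need not fit in RAM -- the same trick awk's `{s+=$2} END{print s/NR}`
    relies on."""
    total, count = 0.0, 0
    for line in lines:
        fields = line.split(sep)
        if len(fields) > col:
            total += float(fields[col])
            count += 1
    return total / count if count else float("nan")

# e.g. column_average(sys.stdin, col=1) over a pipe; here a small sample:
print(column_average(["a 1", "b 2", "c 3"], col=1))  # 2.0
```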
Flink Forward Berlin 2017: Piotr Wawrzyniak - Extending Apache Flink stream p... - Flink Forward
Many stream processing applications can benefit from or need to rely on predictions made with machine learning (ML) methods. In this presentation, new features of Apache SAMOA are presented with a real data processing scenario. These features make Apache SAMOA fully accessible for Apache Flink users: (1) the data stream processed within Apache Flink is forwarded to the Apache SAMOA stream mining engine to perform predictions with stream-oriented ML models; (2) the ML models evolve after every labelled instance and, at the same time, new predictions are sent back to Apache Flink. In both cases, Apache Kafka is used for data exchange. Hence, Apache SAMOA serves as the stream mining engine, receiving input data from, and sending predictions to, Apache Flink. During the presentation, real-life aspects are illustrated with code examples, such as input and prediction stream integration and monitoring the latency of data processing and stream mining.
Flink Forward Berlin 2017: Andreas Kunft - Efficiently executing R Dataframes... - Flink Forward
While dataflow engines offer scalability, their programming abstractions are often unfamiliar to data scientists, who are used to Python and R. To provide a more convenient interface, dataflow engines like Spark provide an R-like dataframe abstraction. While operations without user-defined code can be executed efficiently, the execution of UDFs is dominated by serialized data exchange between the dataflow engine and an external R process that evaluates the code. We present a new approach to executing user-defined functions using the Truffle/Graal compiler infrastructure, which enables efficient execution of dynamic languages on the JVM. Based on FastR, the R implementation provided by this infrastructure, we demonstrate the execution of R scripts directly inside the data pipelines of Flink, without data serialization and inter-process communication. Furthermore, we discuss future opportunities and problems, and compare our approach to native Flink, Spark, and SparkR.
Flink Forward Berlin 2017: Ruben Casado Tejedor - Flink-Kudu connector: an op... - Flink Forward
Kappa Architecture is a software architecture pattern that makes use of an immutable, append-only log. All event processing is performed on the input streams and persisted as real-time views. Apache Flink is very well suited as the processing engine because it provides support for event-time semantics and stateful exactly-once processing, and achieves high throughput and low latency at the same time. Apache Kudu is a storage system good at both ingesting streaming data and analyzing it using ad-hoc queries (e.g. interactive SQL) and full-scan processes (e.g. Spark/Flink). So Kudu is a good fit to store the real-time views in a Kappa Architecture. We have developed and open-sourced a connector to integrate Apache Kudu and Apache Flink. It allows reading/writing data from/to Kudu using Flink's DataSet and DataStream APIs. The connector has been submitted to the Apache Bahir project and is already available from the Maven Central repository.
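The defining property of a Kappa Architecture, that every view is a pure function of the append-only log and can always be rebuilt by replaying it, fits in a few lines of Python. This is a toy model (the class name `KappaView` is ours; the dict stands in for a Kudu table, and Flink would do the replaying in practice):

```python
class KappaView:
    """Minimal Kappa-style pipeline: an immutable append-only log plus a
    materialized real-time view that can be rebuilt by replaying the log."""

    def __init__(self):
        self.log = []    # append-only event log (never mutated in place)
        self.view = {}   # real-time view, updated as events arrive

    def append(self, key, value):
        self.log.append((key, value))
        self.view[key] = value

    def rebuild(self):
        """Recompute the view from scratch by replaying the whole log;
        the result must equal the incrementally maintained view."""
        view = {}
        for key, value in self.log:
            view[key] = value
        return view

kv = KappaView()
kv.append("user:1", "login")
kv.append("user:1", "logout")   # later event supersedes the earlier one
print(kv.view)                  # {'user:1': 'logout'}
print(kv.rebuild() == kv.view)  # True
```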
Stream Processing Live Traffic Data with Kafka Streams - Tim Ysewyn
In this workshop we will set up a streaming framework that processes real-time data from traffic sensors installed within the Belgian road system.
Starting with the intake of the data, you will learn best practices and the recommended approach to split the information into events in a way that won’t come back to haunt you.
With some basic stream operations (count, filter, … ) you will get to know the data and experience how easy it is to get things done with Spring Boot & Spring Cloud Stream. But since simple data processing is not enough to fulfill all your streaming needs, we will also let you experience the power of windows.
After this workshop, tumbling, sliding and session windows hold no more mysteries and you will be a true streaming wizard.
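Of the three window types the workshop covers, tumbling windows are the simplest to demystify: each event-time timestamp falls into exactly one fixed-size, non-overlapping bucket. A plain-Python sketch of the idea (not Kafka Streams' API; the function name and sample events are ours):

```python
from collections import Counter

def tumbling_counts(events, size):
    """Count (timestamp, payload) events per tumbling window of `size`
    time units; window start = ts // size * size, so windows never overlap."""
    return Counter((ts // size) * size for ts, _ in events)

events = [(1, "car"), (4, "truck"), (6, "car"), (11, "car")]
print(dict(tumbling_counts(events, size=5)))  # {0: 2, 5: 1, 10: 1}
```

Sliding windows differ only in that one event lands in several overlapping buckets, and session windows in that the buckets are derived from gaps in the data rather than from the clock.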
Last year we (TouK) introduced Flink at one of the biggest Polish telcos, in the domain of real-time marketing and fraud detection. One of the most significant problems in adoption was the lack of programming skills at our client - the users were supposed to be analytics/business people. Therefore, we developed a custom platform - TouK Nussknacker - which allows users to design processes in a GUI by drawing diagrams. Our project is going to be open-sourced soon - this will happen before Flink Forward. We believe it can make stream processing with Flink more accessible in many use cases, especially in companies that don't have their own development teams. During the talk I'm going to describe the architecture of our platform and why we made certain design decisions, and talk about our future plans. I'll also describe our experiences - when a GUI is great and when it's better to develop jobs as normal code. If time permits, I'll also show a quick demo of our solution.
Medication reconciliation at admission and discharge in the UGC de Farmacia de Granada. Presentation by ... - UGC Farmacia Granada
Conference on the integration of hospital pharmacy and primary care pharmacy, held in Granada in November 2016. Organized by the Unidad de Gestión Clínica de Farmacia de Granada.
If you like it, mention us on Twitter: @ugcfarmaciagr
Social Media Marketing Solution for Dentists - socialraver
Capture and channel client opinions using social media into powerful word-of-mouth marketing to generate referrals and recommendations to grow your business.
Why is this ASP.NET web app running slowly? - Mark Friedman
This presentation attempts to make assumptions used in popular web performance tools like YSlow and webpagetest explicit. It also looks at the NavTiming API and explores ways to capture RUM measurements and correlate them with server-side metrics.
Monitoring web application response times, a new approach - Mark Friedman
An approach to capturing and integrating web client Real User Measurements from the Navigation Timing object with server-side network and HttpServer diagnostic events.
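The classic RUM metrics fall out of the Navigation Timing marks by simple subtraction; the attribute names below (`navigationStart`, `responseStart`, `domContentLoadedEventEnd`, `loadEventEnd`) are real Navigation Timing API fields, while the derived-metric names are our own shorthand in this sketch:

```python
def rum_metrics(t):
    """Derive common RUM metrics from Navigation Timing marks
    (all inputs are epoch milliseconds, as the browser reports them)."""
    return {
        "ttfb": t["responseStart"] - t["navigationStart"],       # time to first byte
        "dom_ready": t["domContentLoadedEventEnd"] - t["navigationStart"],
        "page_load": t["loadEventEnd"] - t["navigationStart"],
    }

timing = {"navigationStart": 1000, "responseStart": 1200,
          "domContentLoadedEventEnd": 1800, "loadEventEnd": 2500}
print(rum_metrics(timing))  # {'ttfb': 200, 'dom_ready': 800, 'page_load': 1500}
```

Shipping these per-page-view numbers to the server is what makes the correlation with server-side diagnostic events possible.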
Unified Stream Processing at Scale with Apache Samza by Jake Maes at Big Data... - Big Data Spain
The shift to stream processing at LinkedIn has accelerated over the past few years. We now have over 200 Samza applications in production processing more than 260B events per day.
https://www.bigdataspain.org/2017/talk/apache-samza-jake-maes
Big Data Spain 2017
November 16th - 17th Kinépolis Madrid
Systematic Load Testing of Web Applications - Jürg Stuker
Talk held at the conference Coding Serbia in Novi Sad.
Performance of web applications is a crucial dissatisfier for users and thus an important quality criterion - one also used by Google to rank their result lists. As with other quality aspects, performance testing cannot be done at the end of a project; it is an integral part of the development process.
This practice-oriented presentation explains web performance testing along practical examples, in order to better understand and judge cause and effect in the observed behavior. Usually, a few causes have a disproportionate effect on bad performance. In addition, it is important to understand diverse load and test scenarios to optimize application behavior.
The presentation also introduces a methodology to systematically define and assess performance metrics of an application. The content is based on open-source tools, and the presentation includes live testing to illustrate the excellent cost-benefit ratio of systematic white-box performance testing using an HTTP proxy.
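Any such methodology ends up summarizing a load-test run as latency percentiles rather than averages, since averages hide the slow tail. A minimal nearest-rank percentile sketch (the sample latencies are invented for illustration):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile of measured response times, the usual way
    to report load-test results (e.g. "p95 latency")."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [120, 85, 340, 95, 110, 900, 105, 130, 98, 115]
print(percentile(latencies_ms, 50))  # 110 -- the median looks healthy
print(percentile(latencies_ms, 95))  # 900 -- the tail tells another story
```

The gap between p50 and p95 here is exactly the kind of "few causes, disproportionate effect" signal the talk describes.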
Unified Stream Processing at Scale with Apache Samza - BDS2017 - Jacob Maes
The shift to stream processing at LinkedIn has accelerated over the past few years. We now have over 200 Samza applications in production processing more than 260B events per day. Many of these are new applications, but there have also been more migrations from existing online and offline applications. To support the influx of new use cases, we have improved the flexibility, efficiency and reliability of Apache Samza.
In this talk, we will take a brief look at the broader streaming ecosystem at LinkedIn, then we will zoom in on a few representative use cases and explain how they are powered by recent advancements to Apache Samza including a unified high level API, flexible deployment model, batch processing, and more.
Reactive streams and components on OSGi - C. Schneider - mfrancis
OSGi Community Event 2017 Presentation by Christian Schneider [Adobe]
Reactive frameworks make it convenient to implement non-blocking asynchronous processing. This talk explores how to apply reactive patterns to typical use cases on OSGi, like REST services, MQTT message processing, and computations over sliding windows.
We combine messaging and reactor.io streams (as also used in Spring 5) to create services that are highly scalable while not tying the user to the underlying technologies.
The presented reactive components framework abstracts from the messaging technologies using OSGi services. These offer standard Publishers and Subscribers that work nicely with any reactive framework based on the Reactive Streams API.
This allows creating integrations like those of Apache Camel, but in a much leaner and OSGi-compliant way.
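The "computation over a sliding window" use case mentioned above is easy to state precisely with a generator. This is a language-neutral sketch of the semantics (in Python rather than the talk's Java/reactor.io stack), where each incoming element yields a window of the most recent `size` elements:

```python
from collections import deque

def sliding(stream, size):
    """Emit, for each new element, the window of the last `size` elements;
    early windows are partial until the deque fills up."""
    window = deque(maxlen=size)
    for item in stream:
        window.append(item)       # oldest element drops out automatically
        yield tuple(window)

print(list(sliding([1, 2, 3, 4], size=3)))
# [(1,), (1, 2), (1, 2, 3), (2, 3, 4)]
```

A reactive Publisher/Subscriber pair does the same thing push-based and with backpressure, but the windowing logic itself is no more than this.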
What's New in Apache Spark 2.3 & Why Should You Care - Databricks
The Apache Spark 2.3 release marks a big step forward in speed, unification, and API support.
This talk will quickly walk through what’s new and how you can benefit from the upcoming improvements:
* Continuous Processing in Structured Streaming.
* PySpark support for vectorization, giving Python developers the ability to run native Python code fast.
* Native Kubernetes support, marrying the best of container orchestration and distributed data processing.
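The PySpark vectorization point can be made concrete without Spark: the speedup comes from crossing the Python-call boundary once per batch instead of once per row. A stdlib-only sketch of the two execution models (deliberately omitting pandas/Arrow, which Spark 2.3 actually uses):

```python
def row_at_a_time(rows, f):
    """Old PySpark UDF model: one Python function call per row."""
    return [f(x) for x in rows]

def vectorized(rows, f_batch, batch_size=4):
    """Spark 2.3's vectorized (Pandas UDF) model, sketched: one Python
    call per batch, with the per-call overhead amortized over the batch."""
    out = []
    for i in range(0, len(rows), batch_size):
        out.extend(f_batch(rows[i:i + batch_size]))
    return out

rows = [1, 2, 3, 4, 5]
# Both models compute the same result; only the call granularity differs.
assert row_at_a_time(rows, lambda x: x + 1) == \
       vectorized(rows, lambda batch: [x + 1 for x in batch])
```

In real PySpark the batch is a pandas Series and the arithmetic runs in native code, which is where the advertised speedup comes from.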
JMeter is an open-source tool. It can load- and performance-test many different server types: Web (HTTP, HTTPS, SOAP), databases via JDBC, LDAP, JMS, and mail (POP3(S) and IMAP(S)).
It has a user-friendly GUI design compared to other tools.
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient... - Mind IT Systems
Healthcare providers often struggle with the complexities of chronic conditions and remote patient monitoring, as each patient requires personalized care and ongoing monitoring. Off-the-shelf solutions may not meet these diverse needs, leading to inefficiencies and gaps in care. This is where custom healthcare software offers a tailored solution, ensuring improved care and effectiveness.
Field Employee Tracking System | MiTrack App | Best Employee Tracking Solution... - informapgpstrackings
Keep tabs on your field staff effortlessly with Informap Technology Centre LLC. Real-time tracking, task assignment, and smart features for efficient management. Request a live demo today!
For more details, visit us : https://informapuae.com/field-staff-tracking/
Understanding Globus Data Transfers with NetSage - Globus
NetSage is an open, privacy-aware network measurement, analysis, and visualization service designed to help end users visualize and reason about large data transfers. NetSage has traditionally used a combination of passive measurements, including SNMP and flow data, as well as active measurements, mainly perfSONAR, to provide longitudinal network performance data visualization. It has been deployed by dozens of networks worldwide, and is supported domestically by the Engagement and Performance Operations Center (EPOC), NSF #2328479. We have recently expanded the NetSage data sources to include logs for Globus data transfers, following the same privacy-preserving approach as for flow data. Using the logs for the Texas Advanced Computing Center (TACC) as an example, this talk will walk through several example use cases that NetSage can answer, including: Who is using Globus to share data with my institution, and what kind of performance are they able to achieve? How many transfers has Globus supported for us? Which sites are we sharing the most data with, and how is that changing over time? How is my site using Globus to move data internally, and what kind of performance do we see for those transfers? What percentage of data transfers at my institution used Globus, and how did their overall performance compare to that of the Globus users?
Launch Your Streaming Platforms in Minutes - Roshan Dwivedi
The claim of launching a streaming platform in minutes might be a bit of an exaggeration, but there are services that can significantly streamline the process. Here's a breakdown:
Pros of Speedy Streaming Platform Launch Services:
No coding required: These services often use drag-and-drop interfaces or pre-built templates, eliminating the need for programming knowledge.
Faster setup: Compared to building from scratch, these platforms can get you up and running much quicker.
All-in-one solutions: Many services offer features like content management systems (CMS), video players, and monetization tools, reducing the need for multiple integrations.
Things to Consider:
Limited customization: These platforms may offer less flexibility in design and functionality compared to custom-built solutions.
Scalability: As your audience grows, you might need to upgrade to a more robust platform or encounter limitations with the "quick launch" option.
Features: Carefully evaluate which features are included and if they meet your specific needs (e.g., live streaming, subscription options).
Examples of Services for Launching Streaming Platforms:
Muvi [muvi com]
Uscreen [uscreen tv]
Alternatives to Consider:
Existing Streaming platforms: Platforms like YouTube or Twitch might be suitable for basic streaming needs, though monetization options might be limited.
Custom Development: While more time-consuming, custom development offers the most control and flexibility for your platform.
Overall, launching a streaming platform in minutes might not be entirely realistic, but these services can significantly speed up the process compared to building from scratch. Carefully consider your needs and budget when choosing the best option for you.
May Marketo Masterclass, London MUG May 22 2024.pdf - Adele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoam - takuyayamamoto1800
In this slide, we show the simulation example and the way to compile this solver.
In this solver, the Helmholtz equation can be solved by helmholtzFoam. Also, the Helmholtz equation with uniformly dispersed bubbles can be simulated by helmholtzBubbleFoam.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
Large Language Models and the End of ProgrammingMatt Welsh
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
Check out the webinar slides to learn more about how XfilesPro transforms Salesforce document management by leveraging its world-class applications. For more details, please connect with sales@xfilespro.com
If you want to watch the on-demand webinar, please click here: https://www.xfilespro.com/webinars/salesforce-document-management-2-0-smarter-faster-better/
Developing Distributed High-performance Computing Capabilities of an Open Sci...Globus
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
How to Position Your Globus Data Portal for Success Ten Good PracticesGlobus
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. Science gateways have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, The Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks have evolved with their different strengths and foci and have been taken up by a larger community such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and how to meet the ever-expanding needs of the community they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become a full production service, or if they do, they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that for any gateway there is a need for a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation goes into detail for ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
Enterprise Resource Planning System includes various modules that reduce any business's workload. Additionally, it organizes the workflows, which drives towards enhancing productivity. Here are a detailed explanation of the ERP modules. Going through the points will help you understand how the software is changing the work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
Utilocate offers a comprehensive solution for locate ticket management by automating and streamlining the entire process. By integrating with Geospatial Information Systems (GIS), it provides accurate mapping and visualization of utility locations, enhancing decision-making and reducing the risk of errors. The system's advanced data analytics tools help identify trends, predict potential issues, and optimize resource allocation, making the locate ticket management process smarter and more efficient. Additionally, automated ticket management ensures consistency and reduces human error, while real-time notifications keep all relevant personnel informed and ready to respond promptly.
The system's ability to streamline workflows and automate ticket routing significantly reduces the time taken to process each ticket, making the process faster and more efficient. Mobile access allows field technicians to update ticket information on the go, ensuring that the latest information is always available and accelerating the locate process. Overall, Utilocate not only enhances the efficiency and accuracy of locate ticket management but also improves safety by minimizing the risk of utility damage through precise and timely locates.
Software Engineering, Software Consulting, Tech Lead, Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Transaction, Spring MVC, OpenShift Cloud Platform, Kafka, REST, SOAP, LLD & HLD.
Quarkus Hidden and Forbidden ExtensionsMax Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
5. A Story of a Programmer and his Creature
Phantom: a fast web server
Coroutine-based
There was no tool to test it
6. The Beast Beats Itself
OK then, let’s use Phantom to test Phantom
Phantom-benchmark was written
7. Phantom and Friends
Scripts in Bash and Perl to simplify testers’ daily routine
A web service that keeps the data together
8. Tank grows big
Unified toolset for common tasks
Modular structure to be flexible and extensible
Tank became a dedicated project
9. Yandex.Tank today
An open-source project under the LGPL license
Mostly Python, some JS
The default load generator is in C++ (an external project)
10. Meta tool
Yandex.Tank is a meta tool
Tank: a common workflow for different load generators
Load generator: generate load and measure response characteristics accurately
12. Features list
Monitoring tool included
HTML reports with interactive charts
Automatically stop your test on various conditions
High speed
Convenient configuration system
Extensibility
13. The fast, the furious
Phantom: 100,000 requests per second
That is roughly 8.6 billion requests per day
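The daily figure follows from simple arithmetic; a quick sanity check of the slide's claim:

```python
# Sanity-check the claim: 100,000 requests/second sustained for a full day.
rps = 100_000
seconds_per_day = 60 * 60 * 24  # 86,400 seconds in a day
requests_per_day = rps * seconds_per_day
print(requests_per_day)  # 8,640,000,000 — roughly the 8 billion on the slide
```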
14. Configure that
Configuration in .ini files
Good defaults and you can redefine them
Redefine on multiple levels
16. How to configure, load.ini
Add the server’s address:
[phantom]
address = my.service.com
17. How to configure, load.ini
Add a URI list and HTTP headers:
[phantom]
address = my.service.com
uris = /
  /mypage.html
  /my/query?param=1
headers = [Host: my.service.com]
  [Connection: Keep-Alive]
18. How to configure, load.ini
Last one, the schedule. Put it all into ‘load.ini’:
[phantom]
address = my.service.com
uris = /
  /mypage.html
  /my/query?param=1
headers = [Host: my.service.com]
  [Connection: Keep-Alive]
rps_schedule = line(1, 30000, 40s) const(30000, 5m)
Here line(1, 30000, 40s) ramps the load linearly from 1 to 30,000 rps over 40 seconds, then const(30000, 5m) holds 30,000 rps for five minutes.
26. How to configure, command line
Vary the load schedule by adding a command-line option:
yandex-tank -c ./load.ini -o "phantom.rps_schedule=line(1, 30, 40s) const(30, 5m)"
27. How to configure defaults
〉Machine defaults at /etc/yandex-tank
〉User defaults at ~/.yandex-tank
[autostop]
autostop = http(4xx,25%,10)
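Putting the levels together, a user-level defaults file could look like the sketch below (the values are illustrative; the http(4xx,25%,10) criterion stops a test when 4xx responses exceed 25% of answers within a 10-second window, and a per-test load.ini or a -o command-line option can still override anything set here):

```ini
; ~/.yandex-tank — user-level defaults applied to every test (a sketch)
[autostop]
; stop automatically when 4xx responses exceed 25% of answers
; within a 10-second window
autostop = http(4xx,25%,10)
```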
29. A fun part =)
A drummers’ contest at an IT conference: who makes more hits in a minute?
One hit = one request to the server
More than a hundred competitors in one day
36. Yandex.Tank
10 years of development
Full of features
Proven practical usefulness
〉And one more thing…
37. Try it at home!
Install from PPA:
sudo add-apt-repository ppa:yandex-load/main
sudo apt-get update
sudo apt-get install yandex-load-tank-base
Use a Docker container with Yandex.Tank + TankAPI:
sudo docker run -d -p 8080:8888 "direvius/yandex-tank-api:v0.0.1"
39. Some useful links
About Yandex.Tank project: http://yandex.github.io/yandex-tank/
Yandex.Tank on github: https://github.com/yandex/yandex-tank
Yandex Tank API on github: https://github.com/yandex-load/yandex-tank-api
phantom on github: https://github.com/mamchits/phantom
Read the docs on ReadTheDocs: https://yandextank.readthedocs.org/
Ask questions in a gitter chat room: https://gitter.im/yandex/yandex-tank
My twitter is @direvius. Yandex.Tank’s tag is #yandextank