What's that you say? That I can separate the action side from the views of my application? That I can also make it scalable using Akka Persistence? Event-based?
In this talk, the Scalera team hopes to show you how the CQRS approach works (recently adopted in systems such as Lightbend's Lagom) for building highly scalable and resilient applications, using Akka Persistence as the tool for storing your application's state in a distributed, event-based fashion.
If that didn't sound like a passive-aggressive teleshopping ad and you still want to find out what all these strange acronyms mean, this is your talk.
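For readers new to these acronyms, here is a minimal, language-agnostic sketch (a toy domain, not the talk's code) of how CQRS and event sourcing fit together: commands are validated into events, the ordered event journal is the source of truth, and a separate read model is projected from it.

```python
# Minimal CQRS/event-sourcing sketch (hypothetical bank-account domain):
# the write side validates commands and emits events; the journal stores
# events; the read side is an independent projection folded from the journal.

from dataclasses import dataclass

@dataclass(frozen=True)
class Deposited:          # event: a fact that happened
    amount: int

@dataclass(frozen=True)
class Withdrawn:          # event
    amount: int

def handle_command(balance, command, amount):
    """Write side: validate a command against current state, emit events."""
    if command == "withdraw" and amount > balance:
        raise ValueError("insufficient funds")
    return [Withdrawn(amount)] if command == "withdraw" else [Deposited(amount)]

def apply_event(balance, event):
    """Fold one event into the write-side state."""
    return balance + event.amount if isinstance(event, Deposited) else balance - event.amount

# The journal is just the ordered list of events (conceptually what
# Akka Persistence stores for a persistent actor).
journal = []
balance = 0
for cmd, amt in [("deposit", 100), ("withdraw", 30), ("deposit", 5)]:
    events = handle_command(balance, cmd, amt)
    journal.extend(events)                    # persist first...
    for e in events:
        balance = apply_event(balance, e)     # ...then update state

# Read side: a separate projection over the same journal.
read_model = {"transactions": len(journal),
              "balance": sum(e.amount if isinstance(e, Deposited) else -e.amount
                             for e in journal)}

print(balance)        # 75
print(read_model)
```

Because the journal is append-only, the read model can be rebuilt at any time by replaying events, which is what makes the approach distributable.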
Design and Implementation of the Security Graph Language - Asankhaya Sharma
Today software is built in fundamentally different ways from how it was a decade ago. It is increasingly common for applications to be assembled out of open-source components, resulting in the use of large amounts of third-party code. This third-party code is a means for vulnerabilities to make their way downstream into applications. Recent vulnerabilities such as Heartbleed, FREAK SSL/TLS, GHOST, and the Equifax data breach (due to a flaw in Apache Struts) were ultimately caused by third-party components. We argue that an automated way to audit the open-source ecosystem, catalog existing vulnerabilities, and discover new flaws is essential to using open source safely. To this end, we describe the Security Graph Language (SGL), a domain-specific language for analysing graph-structured datasets of open-source code and cataloguing vulnerabilities. SGL allows users to express complex queries on relations between libraries and vulnerabilities in the style of a program analysis language. SGL queries double as an executable representation for vulnerabilities, allowing vulnerabilities to be automatically checked against a database and deduplicated using a canonical representation. We outline a novel optimisation for SGL queries based on regular path query containment, improving query performance by up to 3 orders of magnitude. We also demonstrate the effectiveness of SGL in practice in finding zero-day vulnerabilities.
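As a rough illustration of the kind of query the abstract describes (this is a toy Python traversal, not SGL; the library names and CVE identifier are invented):

```python
# Toy graph of library dependencies and known vulnerabilities, plus a
# reachability query that finds flaws an application is transitively
# exposed to -- the style of relation SGL queries express declaratively.

edges = {
    ("app", "depends_on"): ["web-framework"],
    ("web-framework", "depends_on"): ["http-parser"],
    ("http-parser", "has_vulnerability"): ["CVE-XXXX-1234"],  # hypothetical id
}

def out(node, label):
    """Outgoing edges of a node with a given label."""
    return edges.get((node, label), [])

def transitively_vulnerable(root):
    """Follow depends_on* edges and collect every reachable vulnerability."""
    seen, stack, vulns = set(), [root], []
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        vulns += out(node, "has_vulnerability")
        stack += out(node, "depends_on")
    return vulns

print(transitively_vulnerable("app"))   # ['CVE-XXXX-1234']
```

The regular-path-query optimisation mentioned above concerns exactly such `depends_on*` path expressions: if one query's paths are contained in another's, the smaller result can be reused.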
Anatomy of Data Source API: A deep dive into the Spark Data Source API - datamantra
In this presentation, we discuss how to build a data source from scratch using the Spark Data Source API. All the code discussed in this presentation is available at https://github.com/phatak-dev/anatomy_of_spark_datasource_api
OSDC 2016 - Chronix - A fast and efficient time series storage based on Apach... - NETWAYS
How to store billions of time series points and access them within a few milliseconds? Chronix!
Chronix is a young but mature open-source project that allows one, for example, to store about 15 GB (CSV) of time series in 238 MB, with average query times of 21 ms. Chronix is built on top of Apache Solr, a bulletproof distributed NoSQL database with impressive search capabilities. In this code-intense session we show how Chronix achieves its efficiency in both respects: by means of an ideal chunking, by selecting the best compression technique, by enhancing the stored data with (pre-computed) attributes, and by specialized query functions.
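The chunking-plus-compression idea named above can be sketched in a few lines (toy data and encodings; the real Chronix storage format is far more sophisticated):

```python
# Sketch: split a time series into fixed-size chunks and compress each one.
# Regular, repetitive series compress extremely well, which is one source
# of the space savings the abstract quotes.

import json
import zlib

# 1000 (timestamp, value) points with a repeating pattern
points = [(1000 + i * 10, 20.0 + (i % 5) * 0.1) for i in range(1000)]

def chunk(series, size):
    """Fixed-size chunking; real systems also align chunks to time windows."""
    return [series[i:i + size] for i in range(0, len(series), size)]

chunks = chunk(points, 128)
raw = sum(len(json.dumps(c).encode()) for c in chunks)
compressed = sum(len(zlib.compress(json.dumps(c).encode())) for c in chunks)

print(len(chunks), raw, compressed)
assert compressed < raw   # repetitive series shrink dramatically
```

Per-chunk (pre-computed) attributes such as min/max or start/end timestamps can then be indexed in Solr so queries touch only relevant chunks.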
Laskar: High-Velocity GraphQL & Lambda-based Software Development Model - Garindra Prahandono
Sale Stock Engineering, represented by Garindra Prahandono, presents "High-Velocity GraphQL & Lambda-based Software Development Model" at the BandungJS event on May 14th, 2018.
Extending Spark for Qbeast's SQL Data Source with Paola Pardo and Cesare Cug... - Qbeast
Slides of the Barcelona Spark meetup of the 24th of October 2019. The recording is available at https://www.youtube.com/watch?v=eCoCcBH4hIU.
Abstract
One of the key strengths of Spark is its flexibility, as it integrates with dozens of different storage systems and file formats. However, reading from a CSV file is not the same as reading from a SQL database or an exotic stratified-sampled multidimensional database, and finding the right balance between modularity and flexibility is not easy!
In this presentation, we will talk about the evolution of Spark's DataSource API and how it integrates with the SQL optimizer, highlighting how we can run much faster queries with logical and physical plans that integrate better with the storage. Moving from theory to practice, we will then discuss how we extended Spark's internals and built a new source integration that allows the push-down of both sampling and multidimensional filtering.
About the speakers:
Paola Pardo is a computer engineer from Barcelona. She graduated in Computer Engineering last summer from the Technical University of Catalunya, with a thesis focused on data storage push-down optimization based on Apache Spark. She is currently working at the Barcelona Supercomputing Center and at its spin-off Qbeast, developing the Qbeast-Spark connector.
Cesare Cugnasco holds a PhD in Computer Architecture and is a researcher at the Barcelona Supercomputing Center. His research focuses on NoSQL databases, distributed computing and high-performance storage. He invented and patented a new database architecture for Big Data, and he is building a spin-off for its commercialization.
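The filter push-down discussed in the Qbeast abstract rests on a simple principle that can be sketched independently of Spark (hypothetical toy data source with per-partition statistics):

```python
# Sketch of predicate push-down: instead of reading every row and filtering
# afterwards, the source receives the predicate and skips whole partitions
# whose min/max statistics cannot possibly match.

rows = list(range(100))
partitions = [rows[i:i + 25] for i in range(0, 100, 25)]  # sorted, so the
                                                          # ends act as min/max

def scan_with_pushdown(parts, lo, hi):
    """Read only partitions whose [min, max] range overlaps [lo, hi]."""
    scanned, out = 0, []
    for p in parts:
        if p[-1] < lo or p[0] > hi:     # prune via partition statistics
            continue
        scanned += len(p)               # count rows actually read
        out += [x for x in p if lo <= x <= hi]
    return out, scanned

result, rows_read = scan_with_pushdown(partitions, 30, 40)
print(result, rows_read)   # rows 30..40 found after reading just one partition
```

Pushing sampling down works the same way: the source, not the engine, decides which fraction of each partition to materialize.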
Slides for my talk on event-sourced architectures with Akka, discussing Akka Persistence as a mechanism for event sourcing. Presented at JavaOne 2014 and Jfokus 2015.
Scala + Akka + ning/async-http-client - Vancouver Scala meetup February 2015 - Yanik Berube
A presentation I gave at Vancouver's Scala Meetup in February 2015 documenting a modernization effort from PHP/Gearman to Scala/Akka/async-http-client.
Abstract: Hootsuite issues millions of requests to social networks on behalf of its users on a daily basis. These requests incur network latency costs which can accumulate rapidly and affect user experience and operations in general. This talk will review the modernization of Hootsuite's infrastructure for automated publishing via RSS/Atom feeds and how Scala, Akka, and Ning's asynchronous HTTP client were combined to effectively handle network latency. It will explore the motivations behind this effort, the approaches used, the benefits, and the lessons learned from developing and deploying this reactive service.
"'Capture all changes to an application state as a sequence of events' is what Martin Fowler said about Event Sourcing in 2005, and it is this talk's starting point into the topic.
I will demonstrate how you can store events using Akka Persistence and then distribute them via AWS to be consumed by your other services.
An event based architecture has lots of technical and organisational benefits for your development team. It can be a huge gain for your development process, but can also be difficult to implement as there are lots of challenges.
I will discuss the good as well as the bad things and provide solutions to overcome common pitfalls and aforementioned challenges."
JavaOne: A tour of (advanced) Akka features in 60 minutes [CON1706] - Johan Janssen
Akka is a very interesting and powerful framework that can be used to build high-performance applications. But what can you do with Akka? This session starts with the basics and then covers some more-advanced topics such as finite-state machines, Akka HTTP, remote actors, clustering, routing, sharding, and persistence. The presentation includes a demo done on a Raspberry Pi Akka cluster. After this session, you'll know what is possible with Akka and will be able to start using those features yourself.
Event sourcing and Domain-Driven Design are techniques that allow you to model your business more truthfully, by expressing it via commands, events, aggregates, and so on. The new akka-persistence module, included in Akka since the 2.3 release, is aimed at easing the implementation of event-sourced applications. It turns out the actor model and events-as-messages fit in here perfectly. During this session we'll discover how to build reactive, event-sourcing-based apps using the new abstractions provided, and investigate how to implement your own journals to back these persistent event-sourced actors.
This module has full support for server- and client-side HTTP, backed by Akka actors and Akka Streams. Akka HTTP is a very flexible toolkit, commonly used to build REST APIs by defining routes with its built-in routing directives.
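As a language-neutral sketch of what route definition and matching amount to (Akka HTTP's actual routes are a Scala DSL composed of directives, not a lookup table; the paths and handlers here are invented):

```python
# Toy router: patterns use ":name" segments as path parameters, and a
# request is dispatched to the first (method, pattern) entry that matches.

def match(pattern, path):
    """Return extracted path params if pattern matches path, else None."""
    pat = [s for s in pattern.split("/") if s]
    seg = [s for s in path.split("/") if s]
    if len(pat) != len(seg):
        return None
    params = {}
    for p, s in zip(pat, seg):
        if p.startswith(":"):
            params[p[1:]] = s     # capture the segment as a parameter
        elif p != s:
            return None
    return params

routes = {
    ("GET", "/users/:id"): lambda id: (200, f"user {id}"),
    ("GET", "/health"):    lambda: (200, "ok"),
}

def route(method, path):
    for (m, pattern), handler in routes.items():
        if m == method:
            params = match(pattern, path)
            if params is not None:
                return handler(**params)
    return (404, "not found")

print(route("GET", "/users/42"))   # (200, 'user 42')
```

Akka HTTP's directives add the same kind of composition for headers, content negotiation, and marshalling on top of this basic path/method matching.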
Structuring and maintaining Rails applications without dying in the attempt - Moisés Maciá
Does your project have more than 500 gems? Do you have nightmares about cascades of ActiveRecord callbacks? Are your fat models fighting to stay alive? Do you dread every new Rails release?
If you feel identified reading these lines, you may be interested in what I have to say. Rails is a great framework for starting a project and delivering an MVP quickly, but from that point on it degrades with practically the same ease.
In this talk we will review a series of techniques and tips "from the trenches" for maintaining and growing Rails projects.
SF Big Analytics 20191112: How to performance-tune Spark applications in larg... - Chester Chen
Uber developed a new Spark ingestion system, Marmaray, for data ingestion from various sources. It's designed to ingest billions of Kafka messages every 30 minutes, and the amount of data handled by the pipeline is on the order of hundreds of TBs. Omkar details how to tackle such scale and shares insights into the optimization techniques used. Some key highlights: how to understand bottlenecks in Spark applications; whether to cache your Spark DAG to avoid rereading your input data; how to effectively use accumulators to avoid unnecessary Spark actions; how to inspect your heap and non-heap memory usage across hundreds of executors; how you can change the layout of data to save long-term storage cost; how to effectively use serializers and compression to save network and disk traffic; and how to amortize the cost of your application by multiplexing your jobs. With different techniques for reducing memory footprint, runtime, and on-disk usage, the team was able to reduce all three significantly (~10%-40%).
Speaker: Omkar Joshi (Uber)
Omkar Joshi is a senior software engineer on Uber’s Hadoop platform team, where he’s architecting Marmaray. Previously, he led object store and NFS solutions at Hedvig and was an initial contributor to Hadoop’s YARN scheduler.
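One of the tips listed above, caching to avoid rereading input, can be shown with a toy lazy pipeline (pure illustration, not Spark code):

```python
# Why "to cache or not to cache" matters: in a lazy DAG, every action
# re-executes the whole lineage unless an intermediate result is cached.

reads = {"count": 0}

def read_input():
    reads["count"] += 1            # stands in for an expensive input scan
    return list(range(10))

def transform(xs):
    return [x * x for x in xs]

# Uncached: two "actions" trigger two full reads of the input.
a = sum(transform(read_input()))
b = max(transform(read_input()))
assert reads["count"] == 2

# Cached: materialize once, then both actions reuse the result.
cached = transform(read_input())
a2, b2 = sum(cached), max(cached)
assert reads["count"] == 3 and (a, b) == (a2, b2)
```

The trade-off, as in Spark, is memory pressure: caching saves recomputation only while the cached data fits in the executors' memory budget.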
How to separate frontend from a highload python project with no problems - Py... - Oleksandr Tarasenko
Everybody knows that it is hard to scale old high-load monolithic projects that use Python templates for the frontend. I will tell how we transformed our product using trending, proper technologies like GraphQL, Apollo and Node.js, with limited developer resources and in a short period of time.
Data processing platforms architectures with Spark, Mesos, Akka, Cassandra an... - Anton Kirillov
This talk is about architecture designs for data processing platforms based on SMACK stack which stands for Spark, Mesos, Akka, Cassandra and Kafka. The main topics of the talk are:
- SMACK stack overview
- storage layer layout
- fixing NoSQL limitations (joins and group by)
- cluster resource management and dynamic allocation
- reliable scheduling and execution at scale
- different options for getting the data into your system
- preparing for failures with proper backup and patching strategies
Unifying Frontend and Backend Development with Scala - ScalaCon 2021 - Taro L. Saito
Scala can be used for developing both frontend (Scala.js) and backend (Scala JVM) applications. A missing piece has been bridging these two worlds using Scala. We built Airframe RPC, a framework that uses Scala traits as a unified RPC interface between servers and clients. With Airframe RPC, you can build HTTP/1 (Finagle) and HTTP/2 (gRPC) services just by defining Scala traits and case classes. It simplifies web application design, as you only need to care about Scala interfaces, without using existing web standards like REST, Protocol Buffers, OpenAPI, etc. The Scala.js support of Airframe also enables building interactive web applications that can dynamically render DOM elements while talking to Scala-based RPC servers. With Airframe RPC, Scala developers can deliver much more value in both frontend and backend areas.
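The core trick, deriving RPC plumbing from a plain interface, can be sketched outside Scala (all names and the wire format here are hypothetical; Airframe's actual encoding differs):

```python
# Sketch: a plain class acts as the "trait"; the server dispatches decoded
# requests to it by method name, and the client turns attribute access into
# encoded calls, so callers only ever see the interface.

import json

class GreeterService:                    # the interface + implementation
    def hello(self, name: str) -> str:
        return f"Hello, {name}!"

def serve(service, request_bytes):
    """Server side: decode {method, args} and dispatch to the implementation."""
    req = json.loads(request_bytes)
    result = getattr(service, req["method"])(*req["args"])
    return json.dumps(result).encode()

class RpcClient:
    """Client side: turn attribute access into encoded remote calls."""
    def __init__(self, transport):
        self._transport = transport      # any bytes -> bytes channel
    def __getattr__(self, method):
        def call(*args):
            payload = json.dumps({"method": method, "args": args}).encode()
            return json.loads(self._transport(payload))
        return call

server = GreeterService()
client = RpcClient(lambda b: serve(server, b))   # in-process "network"
print(client.hello("Scala"))   # Hello, Scala!
```

In Airframe the same idea is done with static types: the Scala trait gives both sides a compile-checked contract instead of this dynamic dispatch.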
Using Spark 1.2 with Java 8 and Cassandra - Denis Dus
A brief introduction to Spark's data processing ideology and a comparison of Java 7 and Java 8 usage with Spark, with examples of loading and processing data with the Spark Cassandra Loader.
This short text will get you up to speed in no time on creating visualizations using R's ggplot2 package. It was developed as part of a training for those who had no prior experience in R and limited knowledge of general programming concepts. It's a must-have initial guide for those exploring the field of data science.
Mobility insights at Swisscom - Understanding collective mobility in Switzerland - François Garillot
Swisscom is the leading mobile-service provider in Switzerland, with a market share high enough to enable us to model and understand the collective mobility in every area of the country. To accomplish that, we built an urban planning tool that helps cities better manage their infrastructure based on data-based insights, produced with Apache Spark, YARN, Kafka and a good dose of machine learning. In this talk, we will explain how building such a tool involves mining a massive amount of raw data (1.5E9 records/day) to extract fine-grained mobility features from raw network traces. These features are obtained using different machine learning algorithms. For example, we built an algorithm that segments a trajectory into mobile and static periods and trained classifiers that enable us to distinguish between different means of transport. As we sketch the different algorithmic components, we will present our approach to continuously run and test them, which involves complex pipelines managed with Oozie and fuelled with ground truth data. Finally, we will delve into the streaming part of our analytics and see how network events allow Swisscom to understand the characteristics of the flow of people on roads and paths of interest. This requires making a link between network coverage information and geographical positioning in the space of milliseconds and using Spark streaming with libraries that were originally designed for batch processing. We will conclude on the advantages and pitfalls of Spark involved in running this kind of pipeline on a multi-tenant cluster. Audiences should come back from this talk with an overall picture of the use of Apache Spark and related components of its ecosystem in the field of trajectory mining.
Reactive app using actor model & Apache Spark - Rahul Kumar
Developing applications with Big Data is really challenging work; scaling, fault tolerance and responsiveness are some of the biggest challenges. A real-time big data application with self-healing features is a dream these days. Apache Spark is a fast in-memory data processing system that makes a good backend for real-time applications. In this talk I will show how to use a reactive platform, the actor model and the Apache Spark stack to develop a system that is responsive, resilient, fault-tolerant and message-driven.
Spark Training Institutes: Kelly Technologies is the best Spark classroom training institute in Bangalore, providing Spark training by real-time faculty.
As the de facto standard for large-scale data processing in the Java world, Apache Spark is the logical choice when you want to investigate big data processing. As a matter of fact, most resources online refer to the Scala API that is exposed by Spark. What to do if you and your company are much more comfortable with Java than the Scala language? These slides give pointers whether it makes sense to learn and introduce an entirely new language just for your big data processing.
Your API on Steroids - Retrofitting GraphQL by Code, Cloud Native or Serverless - QAware GmbH
OOP 2023, Online, Februar 2023, Sonja Wegner (Lead Software Architect @QAware) & Stefan Schmöller (Senior Software Engineer @QAware).
== Please download slides if blurred! ==
With GraphQL a modern and flexible way of providing APIs for our data is emerging.
The clients specify which data they need, the provisioning of data becomes more flexible and dynamic. Over-fetching or under-fetching are history.
But does this mean we have to rewrite all APIs to benefit? How can we retrofit a GraphQL API onto our existing API landscape?
In this talk we explore three different alternatives:
- The Developer Way: Writing a GraphQL API layer by hand
- The Cloud-native Way: Using lightweight API gateways such as Gloo or Tyk
- The Serverless Way: Using Cloud Provider native services
We will look at all three approaches conceptually and justify when and why each makes sense. Additionally, we will show in a live demo how GraphQL APIs can be added to an existing REST API.
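The over-/under-fetching point above can be made concrete with a toy resolver that returns exactly the fields the client asks for (the record contents are invented):

```python
# Miniature version of GraphQL's core promise: the server holds a full
# record, the query names the wanted fields, and the response contains
# exactly those fields -- no over-fetching, no follow-up requests.

record = {"id": 7, "name": "Ada", "email": "ada@example.org", "bio": "..."}

def resolve(query_fields, data):
    """Project only the requested fields; reject unknown ones like a schema would."""
    unknown = [f for f in query_fields if f not in data]
    if unknown:
        raise KeyError(f"unknown fields: {unknown}")
    return {f: data[f] for f in query_fields}

print(resolve(["id", "name"], record))   # {'id': 7, 'name': 'Ada'}
```

A retrofitted GraphQL layer, whichever of the three ways is chosen, is essentially this projection step placed in front of existing REST responses.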
Similar to Codemotion akka persistence, CQRS/ES y otras siglas del montón (20)
May Marketo Masterclass, London MUG May 22 2024.pdf - Adele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Quarkus Hidden and Forbidden Extensions - Max Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Code reviews are vital for ensuring good code quality. They serve as one of our last lines of defense against bugs and subpar code reaching production.
Yet, they often turn into annoying tasks riddled with frustration, hostility, unclear feedback and lack of standards. How can we improve this crucial process?
In this session we will cover:
- The Art of Effective Code Reviews
- Streamlining the Review Process
- Elevating Reviews with Automated Tools
By the end of this presentation, you'll have the knowledge to organize and improve your code review process.
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G... - Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
Prosigns: Transforming Business with Tailored Technology Solutions - Prosigns
Unlocking Business Potential: Tailored Technology Solutions by Prosigns
Discover how Prosigns, a leading technology solutions provider, partners with businesses to drive innovation and success. Our presentation showcases our comprehensive range of services, including custom software development, web and mobile app development, AI & ML solutions, blockchain integration, DevOps services, and Microsoft Dynamics 365 support.
Custom Software Development: Prosigns specializes in creating bespoke software solutions that cater to your unique business needs. Our team of experts works closely with you to understand your requirements and deliver tailor-made software that enhances efficiency and drives growth.
Web and Mobile App Development: From responsive websites to intuitive mobile applications, Prosigns develops cutting-edge solutions that engage users and deliver seamless experiences across devices.
AI & ML Solutions: Harnessing the power of Artificial Intelligence and Machine Learning, Prosigns provides smart solutions that automate processes, provide valuable insights, and drive informed decision-making.
Blockchain Integration: Prosigns offers comprehensive blockchain solutions, including development, integration, and consulting services, enabling businesses to leverage blockchain technology for enhanced security, transparency, and efficiency.
DevOps Services: Prosigns' DevOps services streamline development and operations processes, ensuring faster and more reliable software delivery through automation and continuous integration.
Microsoft Dynamics 365 Support: Prosigns provides comprehensive support and maintenance services for Microsoft Dynamics 365, ensuring your system is always up-to-date, secure, and running smoothly.
Learn how our collaborative approach and dedication to excellence help businesses achieve their goals and stay ahead in today's digital landscape. From concept to deployment, Prosigns is your trusted partner for transforming ideas into reality and unlocking the full potential of your business.
Join us on a journey of innovation and growth. Let's partner for success with Prosigns.
Cyaniclab: Software Development Agency Portfolio.pdf - Cyanic lab
CyanicLab, an offshore custom software development company based in Sweden, India and Finland, is your go-to partner for startup development and innovative web design solutions. Our expert team specializes in crafting cutting-edge software tailored to meet the unique needs of startups and established enterprises alike. From conceptualization to execution, we offer comprehensive services including web and mobile app development, UI/UX design, and ongoing software maintenance. Ready to elevate your business? Contact CyanicLab today and let us propel your vision to success with our top-notch IT solutions.
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart... - Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data, and applying computations on a different system. As a part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on-demand, capable of applying many data reduction and data analysis to the large ESGF data archives, transferring only the resultant analysis (ex. visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
Launch Your Streaming Platforms in Minutes - Roshan Dwivedi
The claim of launching a streaming platform in minutes might be a bit of an exaggeration, but there are services that can significantly streamline the process. Here's a breakdown:
Pros of Speedy Streaming Platform Launch Services:
No coding required: These services often use drag-and-drop interfaces or pre-built templates, eliminating the need for programming knowledge.
Faster setup: Compared to building from scratch, these platforms can get you up and running much quicker.
All-in-one solutions: Many services offer features like content management systems (CMS), video players, and monetization tools, reducing the need for multiple integrations.
Things to Consider:
Limited customization: These platforms may offer less flexibility in design and functionality compared to custom-built solutions.
Scalability: As your audience grows, you might need to upgrade to a more robust platform or encounter limitations with the "quick launch" option.
Features: Carefully evaluate which features are included and if they meet your specific needs (e.g., live streaming, subscription options).
Examples of Services for Launching Streaming Platforms:
Muvi [muvi.com]
Uscreen [uscreen.tv]
Alternatives to Consider:
Existing Streaming platforms: Platforms like YouTube or Twitch might be suitable for basic streaming needs, though monetization options might be limited.
Custom Development: While more time-consuming, custom development offers the most control and flexibility for your platform.
Overall, launching a streaming platform in minutes might not be entirely realistic, but these services can significantly speed up the process compared to building from scratch. Carefully consider your needs and budget when choosing the best option for you.
Globus Connect Server Deep Dive - GlobusWorld 2024 - Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
Providing Globus Services to Users of JASMIN for Environmental Data Analysis - Globus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Software Engineering, Software Consulting, Tech Lead.
Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security,
Spring Transaction, Spring MVC,
Log4j, REST/SOAP WEB-SERVICES.
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
Unleash Unlimited Potential with One-Time Purchase
BoxLang is more than just a language; it's a community. By choosing a Visionary License, you're not just investing in your success, you're actively contributing to the ongoing development and support of BoxLang.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoamtakuyayamamoto1800
In this slide, we show the simulation example and the way to compile this solver.
In this solver, the Helmholtz equation can be solved by helmholtzFoam. Also, the Helmholtz equation with uniformly dispersed bubbles can be simulated by helmholtzBubbleFoam.
In the ever-evolving landscape of technology, enterprise software development is undergoing a significant transformation. Traditional coding methods are being challenged by innovative no-code solutions, which promise to streamline and democratize the software development process.
This shift is particularly impactful for enterprises, which require robust, scalable, and efficient software to manage their operations. In this article, we will explore the various facets of enterprise software development with no-code solutions, examining their benefits, challenges, and the future potential they hold.
Large Language Models and the End of ProgrammingMatt Welsh
Talk by Matt Welsh at Craft Conference 2024 on the impact that Large Language Models will have on the future of software development. In this talk, I discuss the ways in which LLMs will impact the software industry, from replacing human software developers with AI, to replacing conventional software with models that perform reasoning, computation, and problem-solving.
Codemotion: Akka persistence, CQRS/ES y otras siglas del montón
1. MADRID · NOV 18-19 · 2016
Scala Programming @ Madrid
Akka persistence, CQRS/ES y otras
siglas del montón
Javier Santos
2.
About us
Javier Santos @jpaniego
«There are two ways to write error-free programs;
only the third one works»
Alan J. Perlis
3.
scalera.es
@scalerablog
About us
4.
Problem to solve
5.
Problem to solve
6.
Problem to solve
7.
DDD - Domain Driven Design
● Context - The setting in which a word or statement appears that
determines its meaning
● Domain (Ontology) - The subject area to which the user applies a
program is the domain of the software
● Model - A system of abstractions that describes selected aspects of a
domain and can be used to solve problems related to that domain
● Ubiquitous language - A language structured around the domain model
and used by all team members to connect all the activities of the team
with the software
10.
Our domain model
case class User(
name: String,
address: Address,
credit: Double,
currentBike: Option[Id[Bike]])
case class Bike(
model: String,
battery: Bike.Battery.Status,
bikeStation: Option[Id[Station]])
case class Address(
street: String,
number: String,
zipCode: String)
case class Station(
address: Address,
maxBikeCapacity: Int,
currentCapacity: Int)
11.
Anemic Domain Model
12.
Anemic Domain Model
● Think about a domain class that only contains fields and no
methods.
● An anti-pattern that goes against the main idea of object-oriented
programming: combining data and behaviour together.
● Why not use C structs then? ¬¬
13.
Anemic Domain Model
case class User(
name: String,
address: Address,
credit: Double,
currentBike: Option[Id[Bike]]) {
def startRental(bike: Id[Bike]): User = {
require(
currentBike.isEmpty,"There's another rental on course.")
require(
credit > 0, "Not enough credit to start a rental.")
this.copy(currentBike = Some(bike))
}
//...
14.
Anemic Domain Model
//...
def finishRental(creditExpense: Double): User = {
require(
currentBike.isDefined, "There is not a rental on course.")
this.copy(
currentBike = None,
credit = credit - creditExpense)
}
}
15.
Aggregate
● Cluster of domain objects
● The whole cluster of domain objects behaves as a single entity.
User Bike
Address
16.
Aggregate
trait Aggregate extends ...{
def id: Id[This]
}
case class Station(
address: Address,
maxBikeCapacity: Int,
currentCapacity: Int)
17.
Event Sourcing
● “Every change to the state of an application is captured in an event
object, and these event objects are themselves stored in the
sequence they were applied, for the same lifetime as the application state
itself” - Martin Fowler
18.
Event Sourcing
● The system state is the sum of the events
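The idea that the state is the sum of the events can be sketched without any library: the current state is a left fold of the journal over an initial state. The event names and fields below are illustrative, not the deck's exact model.

```scala
// Hypothetical events for the bike-rental domain
sealed trait Event
case class AddedCredit(amount: Double) extends Event
case class StartedRental(bikeId: String) extends Event
case class FinishedRental(creditExpense: Double) extends Event

case class UserState(credit: Double, currentBike: Option[String])

// One pure function applies a single event to the state
def applyEvent(s: UserState, e: Event): UserState = e match {
  case AddedCredit(a)       => s.copy(credit = s.credit + a)
  case StartedRental(id)    => s.copy(currentBike = Some(id))
  case FinishedRental(cost) => s.copy(credit = s.credit - cost, currentBike = None)
}

// Replaying the journal rebuilds the state: a plain foldLeft
val journal = List(AddedCredit(10.0), StartedRental("bike-1"), FinishedRental(2.5))
val state   = journal.foldLeft(UserState(0.0, None))(applyEvent)
// state == UserState(7.5, None)
```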
19.
Stateful[S]
trait Stateful[S] {
  type This <: Stateful[S]
  type Action = scalaz.State[S, Event]
  val state: S
  def apply(s: S): This
  def update(f: Action): (This, Event) = {
    val (newState, e) = f.run(state)
    (apply(newState), e)
  }
}
20.
scalaz.State
21.
scalaz.State
● Represents a state mutation that may generate some effect.
● type State[S, Effect] = StateT[Id, S, Effect]
● Monad = composable!
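The slides use scalaz.State; a minimal dependency-free version shows the same idea, a state mutation that also yields an effect, and why being a monad makes it composable. The `State` class here is a hand-rolled stand-in, and `addCredit`/`charge` are hypothetical mutations.

```scala
// A minimal stand-in for scalaz.State: run threads the state through
// and also produces an effect A
case class State[S, A](run: S => (S, A)) {
  def map[B](f: A => B): State[S, B] =
    State { s => val (s2, a) = run(s); (s2, f(a)) }
  def flatMap[B](f: A => State[S, B]): State[S, B] =
    State { s => val (s2, a) = run(s); f(a).run(s2) }
}

case class User(name: String, credit: Double)

// Two composable state mutations, each returning the new credit as its effect
def addCredit(amount: Double): State[User, Double] =
  State(u => (u.copy(credit = u.credit + amount), u.credit + amount))
def charge(amount: Double): State[User, Double] =
  State(u => (u.copy(credit = u.credit - amount), u.credit - amount))

// Monad = composable: a for-comprehension chains the mutations
val topUpAndCharge: State[User, Double] =
  for {
    _     <- addCredit(10)
    after <- charge(2.5)
  } yield after

val (ann, newCredit) = topUpAndCharge.run(User("Ann", 0.0))
// ann.credit == 7.5
```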
23.
scalaz.State
val topUpAndRent: Action[(Double, Long)] =
  for {
    newCredit <- addCredit(10)
    timesUsed <- rentBike(Id("bike-1"))
  } yield (newCredit, timesUsed)
val (relaxingAnn, (newCredit, timesUsed)) =
  topUpAndRent.run(User("Ann Bottle", Address(...), 0.0, None))
24.
Immutability perversion
25.
StateAgg[S]
trait StateAgg[S] extends Stateful[S] {
  _: PersistentActor =>
  var aggState: S
  def updateState(f: Action): Unit = {
    val (newAgg, _) = update(f)
    aggState = newAgg.state
  }
}
27.
28.
CQRS
● Command Query Responsibility Segregation
● Main idea: “You can use a different model to update the
information than the model you use to read information”
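The two-model idea can be sketched as follows: one model validates commands and emits events (write side), and a completely different model consumes those events to serve reads. All names here (`UserAggregate`, `UserView`, `project`) are illustrative, not part of the deck.

```scala
sealed trait Command
case class AddCredit(userId: String, amount: Double) extends Command

sealed trait Event
case class CreditAdded(userId: String, amount: Double) extends Event

// Write side: full domain state, validation, event emission
case class UserAggregate(id: String, credit: Double) {
  def handle(c: Command): Either[String, Event] = c match {
    case AddCredit(_, amount) if amount <= 0 => Left("amount must be positive")
    case AddCredit(uid, amount)              => Right(CreditAdded(uid, amount))
  }
}

// Read side: a denormalised view, updated only from events
case class UserView(totalCredit: Double, operations: Int)
def project(view: UserView, e: Event): UserView = e match {
  case CreditAdded(_, amount) => UserView(view.totalCredit + amount, view.operations + 1)
}

val agg    = UserAggregate("user-1", 0.0)
val events = List(AddCredit("user-1", 10), AddCredit("user-1", 5))
  .flatMap(c => agg.handle(c).toOption)
val view   = events.foldLeft(UserView(0.0, 0))(project)
// view == UserView(15.0, 2)
```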
29.
CQRS
30.
CQRS
trait Command[Agg] {
val to: Id[Agg]
}
trait Event
31.
Akka persistence
● End of 2013
● Martin Krasser
● Main idea: store the internal state of an actor
● ...BUT not directly, only through the changes the actor has
undergone.
32.
Akka persistence
33.
Akka persistence
34.
Akka persistence - Failure recovery
35.
Akka persistence - Failure recovery
36.
Akka persistence - Failure recovery
37.
Akka persistence - Failure recovery
38.
Snapshotting
39.
Snapshotting
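A library-free sketch of why snapshots speed up recovery: restore the latest snapshot, then replay only the events that came after it instead of the whole journal. Numbers and names below are illustrative.

```scala
case class Account(balance: Double)
sealed trait Event
case class Deposited(amount: Double) extends Event

def applyEvent(a: Account, e: Event): Account = e match {
  case Deposited(x) => Account(a.balance + x)
}

// A snapshot is a checkpoint of the state plus the sequence number it covers
case class Snapshot(state: Account, upToSeqNr: Long)

val journal  = (1L to 1000L).map(i => (i, Deposited(1.0)))
val snapshot = Snapshot(Account(900.0), upToSeqNr = 900L) // taken earlier

// Recovery replays only the 100 events after the snapshot, not all 1000
val tail      = journal.collect { case (seq, e) if seq > snapshot.upToSeqNr => e }
val recovered = tail.foldLeft(snapshot.state)(applyEvent)
// recovered == Account(1000.0), after replaying tail.size == 100 events
```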
40.
Persistent actors
trait PersistenceStuff extends PersistentActor{
def persistenceId: String
def receiveCommand: Receive
def receiveRecover: Receive
}
41.
trait Reactive[S] { _: Stateful[S] =>
  val onEvent: Event => Action
  val onSnapshot: Any => Action = {
    case snapshot => State(s => (s, snapshot))
  }
  def asEvent(c: Command[This]): Event
}
Persistent actors
42.
trait Aggregate[S] extends StateAgg[S] with Reactive[S] with PersistentActor {
  def persistenceId: String = id.id
  def receiveRecover: Receive = {
    case event: Event => update(onEvent(event))
    case snapshot     => update(onSnapshot(snapshot))
  }
  def receiveCommand: Receive = {
    case cmd: Command[This @unchecked] =>
      if (checkState(cmd))
        persistAll(List(asEvent(cmd))) { event =>
          update(onEvent(event))
          sender ! event
        }
      else sender ! Nope
  }
}
Persistent actors
43.
case class UserAgg(state: User) extends Aggregate[User] {
  type This = UserAgg
  def apply(s: User) = UserAgg(s)
  override val onEvent = {
    case e @ AddedCredit(amount) =>
      State(s => (s.copy(credit = s.credit + amount), e))
  }
  def asEvent(c: Command[This]): Event = c match {
    case AddCredit(_, amount) => AddedCredit(amount)
  }
}
Persistent actors
boilerplate
44.
Persistence Queries
● Like a PersistentActor but with no Command part
● Subscription by persistenceId or by tag
● refreshInterval configurable in most plugins
45.
Persistence Queries
val readJournal =
  PersistenceQuery(system).readJournalFor[MyScaladslReadJournal](
    "akka.persistence.query.my-read-journal")
// issue query to journal
val source: Source[EventEnvelope, NotUsed] =
  readJournal.eventsByPersistenceId("user-1337", 0L, Long.MaxValue)
// materialize stream, consuming events
implicit val mat = ActorMaterializer()
source.runForeach { event => println("Event: " + event) }
46.
getOrCreate
● A new command arrives for user1... how do we deliver it?
● Managers
class UserManager extends Actor {
  override def receive = {
    case cmd: Command[User @unchecked] =>
      getOrCreate(cmd.to) forward cmd
  }
}
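What getOrCreate amounts to can be sketched with a plain mutable map standing in for the actor context (in Akka it would be along the lines of `context.child(name).getOrElse(context.actorOf(...))`); the names below are illustrative.

```scala
import scala.collection.mutable

case class UserAggregate(id: String)

// The map plays the role of the actor context's children registry
val children = mutable.Map.empty[String, UserAggregate]
var created  = 0

def getOrCreate(id: String): UserAggregate =
  children.getOrElseUpdate(id, { created += 1; UserAggregate(id) })

val a = getOrCreate("user-1") // created on first delivery
val b = getOrCreate("user-1") // reused: same instance, nothing new created
// (a eq b) and only one instance was ever created
```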
47.
getOrCreate
● Imagine there are 500k registered users…
● ...you create a persistent actor and deliver the command that just
arrived…
● ...and the actor stays alive….
48.
getOrCreate
49.
Passivation
● Kill the actor if it’s idle…
trait Passivation extends PersistentActor {
import ShardRegion.Passivate
context.setReceiveTimeout(60.seconds)
override def receiveCommand = {
//…
case ReceiveTimeout =>
context.parent ! Passivate(stopMessage = Stop)
}
}
50.
51.
Journal
● Different supported pluggable storages:
○ AndroidSQLite
○ BDB
○ Cassandra
○ Kafka
○ DynamoDB
○ HBase
○ MongoDB
○ …
● Why choose a distributed database as the Journal? High availability.
52.
HA : Clustering
53.
HA : Clustering
● Send semantics
○ Send: the message is delivered to a single cluster actor matching the
path. If several match, the actor is chosen randomly.
clusterClient ! Send("/user/handler", "hello",
localAffinity = true)
○ SendToAll: the message is delivered to all cluster actors matching the
path.
clusterClient ! SendToAll("/user/handler", "hi")
○ Publish: the message is delivered to all topic subscribers
clusterClient ! Publish("myTopic", "hello")
54.
Partitioning : Sharding
● Case study:
○ A command is sent (in a cluster)
to aggregate1, the actor is
created in node1, the command is
processed.
○ A second command is sent (in the
same cluster) to aggregate1, but
this time the command is
received by node2, so the actor is
created there.
55.
Partitioning : Sharding
● It provides
○ Load balancing: if a new node joins the cluster or an existing
one crashes, the actors are rebalanced across the nodes.
○ Unique actor identification: messages sent to the same actor are
forwarded to the node that currently owns that shard id (so the
previous case cannot happen)
56.
Partitioning : Sharding
● Semantics
○ Entry: represents a uniquely identified entity with
persistent state (an Aggregate).
○ Shard: a group of Entries. Experts recommend
ShardAmount = 10 x MaxNodeAmount
○ ShardRegion: the group of shards of the same Entry type hosted on a node.
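The Entry-to-Shard mapping can be sketched as a hash of the entity id modulo the shard amount; this mirrors the extractShardId function one gives to Cluster Sharding, though the code below is a standalone illustration using the "10 x MaxNodeAmount" rule of thumb.

```scala
val maxNodeAmount = 3
val shardAmount   = 10 * maxNodeAmount // recommended: 10 x MaxNodeAmount

// Deterministic mapping: entity id -> shard id
def extractShardId(entityId: String): String =
  (math.abs(entityId.hashCode) % shardAmount).toString

// The same entity always lands on the same shard, no matter
// which node receives the command
val s1 = extractShardId("user-1")
val s2 = extractShardId("user-1")
// s1 == s2, and the shard id is always within [0, shardAmount)
```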
57.
Partitioning : Sharding
58.
#graciasCarmena
#llevameEnTuBiciceta
59.
Problem: eventual consistency
● Changes may take
some time to reach
the read side
● If the user requires an
immediate update, the
user views can be
refreshed once the
command side has
acknowledged that the
event was persisted
(handle it carefully...)
60.
Problem: Distributed transactions
● Let’s suppose credit transfer is allowed among users
● Saga Pattern: A persistent actor that represents a
transaction among aggregates
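The Saga idea, sketched without actors: each step is a local transaction, and a failing step triggers a compensating action on the steps already applied, rather than an atomic rollback. Names and business rules below are hypothetical.

```scala
case class User(id: String, credit: Double)

// Step 1: a local transaction on the sender
def withdraw(u: User, amount: Double): Either[String, User] =
  if (u.credit >= amount) Right(u.copy(credit = u.credit - amount))
  else Left(s"${u.id}: not enough credit")

// Step 2: a local transaction on the receiver
def deposit(u: User, amount: Double): Either[String, User] =
  if (amount > 0) Right(u.copy(credit = u.credit + amount))
  else Left("invalid amount")

// The saga: run the steps in order; on failure of step 2,
// compensate step 1 by putting the credit back
def transfer(from: User, to: User, amount: Double): (User, User) =
  withdraw(from, amount) match {
    case Left(_) => (from, to) // nothing happened
    case Right(fromAfter) =>
      deposit(to, amount) match {
        case Right(toAfter) => (fromAfter, toAfter)
        case Left(_)        => // compensate the withdrawal
          (fromAfter.copy(credit = fromAfter.credit + amount), to)
      }
  }

val (ann, bob) = transfer(User("ann", 10), User("bob", 0), 4)
// ann.credit == 6.0, bob.credit == 4.0
```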
61.
Problem: Distributed transactions
62.
Problem: cluster initialization
63.
Problem: cluster initialization
64.
Problem: cluster initialization
65.
Problem: cluster initialization
66.
Problem: Testing a PersistentActor
TestActorRef + PersistentActor =
68.
Problem : Schema evolution
● Imagine the following:
case class User(name: String, address: Address, amount: Double)
case class User(cardId: String, amount: Double, usualBike: Bike)
● Choosing proper serializers is key (Protobuf, Avro; Java serialization, no
way)
● Event migration (EventAdapter)
● How to:
○ add an attribute
○ remove an attribute
○ rename an attribute
○ ...
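The EventAdapter idea can be sketched as a pure upcast function applied to each stored event during replay: old versions are migrated to the current one on the fly. The V1/V2 names and the currency default are illustrative, not Akka's actual API.

```scala
case class CreditAddedV1(amount: Double)                   // old shape, still in the journal
case class CreditAddedV2(amount: Double, currency: String) // current shape

// The adapter fills added attributes with defaults; removed or
// renamed attributes would be handled in the same pattern match
def fromJournal(stored: Any): CreditAddedV2 = stored match {
  case CreditAddedV1(amount) => CreditAddedV2(amount, currency = "EUR") // default for the new field
  case e: CreditAddedV2      => e
}

val replayed = List[Any](CreditAddedV1(10.0), CreditAddedV2(5.0, "USD")).map(fromJournal)
// replayed == List(CreditAddedV2(10.0, "EUR"), CreditAddedV2(5.0, "USD"))
```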
69.
Lightbend approach
● Runs on JRE 1.8
● Multiple service control
● Event stream abstraction
● Easy scalability
● Java SDK ...
71.
scalera.es
@scalerablog
72.
73.
Akka persistence, CQRS/ES y otras
siglas del montón
Javier Santos