Airframe Meetup #3: 2019 Updates & AirSpec - Taro L. Saito
Presentation slides of Airframe Meetup #3 https://airframe.connpass.com/event/148169/
- Airframe 19 Milestone
- AirSpec: A new testing library for Scala
Airframe RPC is a framework for building RPC services by using Scala as a unified RPC interface between servers and clients. It supports Finagle (HTTP/1) and gRPC (HTTP/2) backends, and even Scala.js for web application development.
Talk video: https://www.youtube.com/watch?v=qf8wOc2YHmQ&feature=youtu.be
Documentation: https://wvlet.org/airframe/docs/airframe-rpc
Demo source code: https://github.com/wvlet/airframe/tree/master/examples/rpc-examples
Measuring the time spent on small individual fractions of program code is a common technique for analysing performance behaviour and detecting performance bottlenecks. The benefits of the approach include a detailed individual attribution of performance and understandable feedback loops when experimenting with different code versions. There are, however, severe pitfalls when following this approach that can lead to vastly misleading results. Modern dynamic compilers use complex optimisation techniques that take a large part of the program into account. There can therefore be unexpected side effects when combining different code snippets, or even when running a presumably unrelated part of the code. This talk will present performance paradoxes with examples from the domain of dynamic compilation of Java programs. Furthermore, it will discuss an alternative approach to modelling code performance characteristics that takes the challenges of complex optimising compilers into account.
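The talk's subject is dynamic compilation of Java programs, but the basic measurement pattern it critiques can be sketched generically; this Python fragment (function and data are illustrative) times one small code fraction in isolation, which is exactly the setup where surrounding-code effects can mislead:

```python
import timeit

def small_fraction(xs):
    # The code fraction under measurement: sum of squares.
    return sum(x * x for x in xs)

data = list(range(1000))

# Measure the isolated snippet many times and take the best run,
# which filters out scheduler noise but not compiler effects.
best = min(timeit.repeat(lambda: small_fraction(data), repeat=5, number=1000))
per_call_us = best / 1000 * 1e6
print(f"~{per_call_us:.1f} us per call")
```

On a JIT-compiled runtime, the number obtained this way can differ substantially from the cost of the same fraction inside a larger program, because the optimizer sees different surrounding code.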
Introducing Arc: A Common Intermediate Language for Unified Batch and Stream... - Flink Forward
Today's end-to-end data pipelines need to combine many diverse workloads such as machine learning, relational operations, stream dataflows, tensor transformations, and graphs. For each of these workload types, several frontends exist (e.g., DataFrames/SQL, Beam, Keras) based on different programming languages, as well as different runtimes (e.g., Spark, Flink, Tensorflow) that target a particular frontend and possibly a hardware architecture (e.g., GPUs). Putting all the pieces of a data pipeline together simply leads to excessive data materialisation, type conversions and hardware utilisation, as well as mismatches of processing guarantees.
Our research group at RISE and KTH in Sweden has developed Arc, an intermediate language that bridges the gap between any frontend and a dataflow runtime (e.g., Flink) through a set of fundamental building blocks for expressing data pipelines. Arc incorporates Flink- and Beam-inspired stream semantics such as windows, state, and out-of-order processing, as well as concepts found in batch computation models. With Arc, we can cross-compile and optimise diverse tasks written in any programming language into a unified dataflow program. Arc programs can run on various hardware backends efficiently, as well as allowing seamless, distributed execution on dataflow runtimes. To that end, we showcase Arcon, a concept runtime built in Rust that can execute Arc programs natively, and present a minimal set of extensions to make Flink an Arc-ready runtime.
Talk I gave at FLIP conference on Jul 23rd:
Absinthe is a GraphQL toolkit for the Elixir-based Phoenix web framework. This talk will discuss Absinthe itself, as well as various patterns for designing GraphQL APIs with it.
Nowadays the Kappa Architecture is surely one of the best architectural patterns for implementing a streaming system. While the choice for the log/journal side is usually straightforward thanks to engines like Apache Kafka, DistributedLog, and Pravega, which fit the write side of this architecture perfectly, we did not find an open source counterpart able to fully satisfy all the requirements we believe are essential for a time series database: high availability, partition tolerance, optimized time series management, security, out-of-the-box Apache Flink integration, ad-hoc front-end streaming features based on the WebSocket protocol, and natural real-time analytics readiness. For this reason we decided to start the development of NSDB (Natural Series DB). During this talk we will introduce the main concepts behind NSDB, focusing on our initial goals and its architecture, and give an overview of its first draft implementation. We will also explain how it leverages Akka Cluster and how it partitions data on a time basis.
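The time-based partitioning mentioned at the end can be sketched generically; the following Python fragment is illustrative only (it is not NSDB's actual scheme) and buckets incoming (timestamp, value) points into shards by a fixed time interval:

```python
from collections import defaultdict

def partition_by_time(points, interval_seconds):
    """Group (timestamp, value) points into shards keyed by the start
    of the fixed-width time interval each timestamp falls into."""
    shards = defaultdict(list)
    for ts, value in points:
        bucket = (ts // interval_seconds) * interval_seconds
        shards[bucket].append((ts, value))
    return dict(shards)

points = [(0, 1.0), (30, 2.0), (65, 3.0), (125, 4.0)]
print(partition_by_time(points, 60))  # buckets starting at 0, 60, 120
```

Keying shards by interval start makes time-range queries cheap, since only the shards overlapping the queried range need to be scanned.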
Apache Hivemall is a scalable machine learning library for Apache Hive, Apache Spark, and Apache Pig.
Hivemall provides a number of machine learning functionalities across classification, regression, ensemble learning, and feature engineering through UDFs/UDAFs/UDTFs of Hive.
We released the first Apache release (v0.5.0-incubating) on Mar 5, 2018, and the project plans to release v0.5.2 in Q2 2018.
We will first give a quick walk-through of features, usages, what's new in v0.5.0, and future roadmaps of Apache Hivemall. Next, we will introduce Hivemall on Apache Spark in depth, covering topics such as DataFrame integration and Spark 2.3 support in Hivemall.
Grokking Techtalk #38: Escape Analysis in Go compiler - Grokking VN
In performance analysis, understanding and mastering the programming language as well as its design is very helpful. Go is one of the languages widely used in high-performance distributed systems. To better understand how the Go compiler analyzes memory allocation when compiling a program, listen to Cường's talk on Escape Analysis in the Go compiler.
About the speaker:
Lê Mạnh Cường is a software engineer with 8 years of in-depth experience in backend development and Linux system administration. An active OSS contributor, Cường has made many contributions to the open source community, especially to Go and its ecosystem.
Fast partial access to objects from very large files in the SDSC Storage Resource Broker (SRB5) can be extremely challenging, even when those objects are small. The HDF-SRB project integrates the SRB and the NCSA Hierarchical Data Format (HDF5) to create an access mechanism within the SRB that can be orders of magnitude more efficient than current methods for accessing object-based file formats. The project provides interactive and efficient access to datasets, or subsets of datasets, in large files without bringing entire files onto local machines. A new set of data structures and APIs has been implemented in the SRB to support such object-level data access. A working prototype of the HDF5-SRB data system has been developed and tested. The SRB support is implemented in HDFView as a client application.
Quickstart for the installation of Python and other supporting libraries through Anaconda, KNIME, and Orange.
For deployment, Git, GitHub Desktop, Heroku, and Streamlit could be installed.
Flink Forward San Francisco 2019: Managing Flink on Kubernetes - FlinkK8sOper... - Flink Forward
Managing Flink on Kubernetes - FlinkK8sOperator
The goal of Lyft is to “Improve people’s lives with the world’s best transportation”. Our product is fundamentally real-time, and building a reliable platform that consumes and processes massive amounts of streaming data empowers us to achieve our mission. The advent of containers and Kubernetes has completely changed how we deploy and manage stateless services. At Lyft we have doubled down on Docker containers and Kubernetes for all the services in production. To achieve a homogeneous infrastructure we decided to extend Kubernetes to manage stateful streaming services like Flink. We developed the FlinkK8sOperator, which leverages the Kubernetes CustomResourceDefinition to enable native management of Flink applications on Kubernetes. FlinkK8sOperator employs a state machine that transitions the application through a series of states until a stable state is attained. Each Flink application on Kubernetes spins up a separate Flink cluster, with its own UI, providing clear isolation for monitoring and debugging. This talk provides an overview of running Flink applications on Kubernetes using FlinkK8sOperator, showcasing the entire lifecycle of the application from creation to execution, with a focus on transitions during deployments and stateful updates, concluding with a demo.
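The state-machine idea behind the operator can be sketched abstractly; the states and transitions below are illustrative placeholders (not FlinkK8sOperator's actual state names), showing how a reconcile loop drives an application toward a stable state:

```python
# A minimal state machine in the spirit of a Kubernetes operator's
# reconcile loop: each step advances the application one state, and a
# stable state maps to itself.
TRANSITIONS = {
    "New": "ClusterStarting",
    "ClusterStarting": "SubmittingJob",
    "SubmittingJob": "Running",  # "Running" is the stable state
}

def reconcile(state):
    """Advance one step; an unknown (stable) state maps to itself."""
    return TRANSITIONS.get(state, state)

def run_to_stable(state, max_steps=10):
    for _ in range(max_steps):
        nxt = reconcile(state)
        if nxt == state:
            return state
        state = nxt
    return state

print(run_to_stable("New"))  # -> Running
```

In a real operator, each transition would be triggered by observed cluster conditions (pods ready, job submitted) rather than advancing unconditionally.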
Flink Forward San Francisco 2018 keynote: Anand Iyer - "Apache Flink + Apach... - Flink Forward
Over the past few months, the Apache Flink and Apache Beam communities have been busy developing an industry-leading solution to author batch and streaming pipelines with Python. This was made possible by a significant effort to revamp Beam’s portability framework, build the corresponding Flink Runner, and simplify Flink’s artifact distribution & deployment mechanisms.
What is the “killer big-data app” enabled by this integration? Production TensorFlow pipelines. Building production machine learning pipelines that process large distributed data sets can get complex. In this talk, we will describe a set of open source libraries developed at Google that simplify and unify the pre- and post-processing stages of a production TensorFlow pipeline. These libraries are authored on Beam’s Python SDK, and can be run on Apache Flink at scale.
Last, but not least, we will describe how Beam & Flink aim to bring the power of big data to newer audiences, in particular, developers of the Go programming language.
Unifying Frontend and Backend Development with Scala - ScalaCon 2021 - Taro L. Saito
Scala can be used for developing both frontend (Scala.js) and backend (Scala JVM) applications. A missing piece has been bridging these two worlds using Scala. We built Airframe RPC, a framework that uses Scala traits as a unified RPC interface between servers and clients. With Airframe RPC, you can build HTTP/1 (Finagle) and HTTP/2 (gRPC) services just by defining Scala traits and case classes. It simplifies web application design as you only need to care about Scala interfaces without using existing web standards like REST, ProtocolBuffers, OpenAPI, etc. Scala.js support of Airframe also enables building interactive Web applications that can dynamically render DOM elements while talking with Scala-based RPC servers. With Airframe RPC, the value of Scala developers will be much higher both for frontend and backend areas.
Kubernetes is hard! Lessons learned taking our apps to Kubernetes - Eldad Ass... - Cloud Native Day Tel Aviv
You might think taking your application to Kubernetes is easy. Just pack it in a Docker container, deploy, and you're done!
In reality, the challenges of taking your existing application to the cloud native environment of Kubernetes are huge! They require changes in the way your applications behave and the way you administer them.
Do you really know how to get up and running with your existing applications in Kubernetes?
In this talk I will share my lessons learned taking JFrog's existing applications, prepping and deploying them to Kubernetes.
I'll go over some best practices of preparing your application for Kubernetes with some examples for what we did.
Intercepting functions (hooks) on Windows in applications using C/C++ - corehard_by
I will talk about function interception in applications written in various languages and technologies. The talk covers the basic concepts, areas of application, the different interception methods and their technical details, and ready-made libraries.
Developer insight into why applications run amazingly fast in CF 2018 - Pavan Kumar
One of the release goals of ColdFusion 2018 is to improve the out-of-the-box performance of the server to the extent that it becomes the best-performing CFML engine available. This talk delves into the overall strategy adopted for measuring and improving performance, the design challenges we confronted and resolved, and the optimizations we made across various CFML constructs. We shall also delve into how a developer can leverage server features and configuration to further improve application performance, and discuss and share the performance metrics we collected across various applications. Even with a high-performing server, issues can still arise, so we shall also demonstrate how developers can track down performance bottlenecks in their applications using the available tools.
Presto At Arm Treasure Data - 2019 Updates - Taro L. Saito
Presentation at Presto Conference Tokyo 2019
- Arm Treasure Data
- Plazma DB Indexes
- Real-time, Archive Storages
- Schema-on-read data processing
- Physical partition maintenance via presto-stella plugin
These are the slides for the talk I did together with John Sullivan on how to use various open source technologies, like JFR and OpenTracing, together to facilitate deep tracing of microservices written in Java. We also showed how these technologies are used in the Oracle Management Cloud APM solution.
Five cool ways the JVM can run Apache Spark faster - Tim Ellison
The IBM JVM runs Apache Spark fast! This talk explains some of the findings and optimizations from our experience of running Spark workloads.
The talk was originally presented at Spark Summit Europe 2015 in Amsterdam.
In the big data world, it's not always easy for Python users to move huge amounts of data around. Apache Arrow defines a common format for data interchange, while Arrow Flight, introduced in version 0.11.0, provides a means to move that data efficiently between systems. Arrow Flight is a framework for Arrow-based messaging built with gRPC. It enables data microservices where clients can produce and consume streams of Arrow data to share it over the wire. In this session, I'll give a brief overview of Arrow Flight from a Python perspective, and show that it's easy to build high-performance connections when systems can talk Arrow. I'll also cover some ongoing work in using Arrow Flight to connect PySpark with TensorFlow - two systems with great Python APIs but very different underlying internal data.
Journey of Migrating 1 Million Presto Queries - Presto Webinar 2020 - Taro L. Saito
Arm Treasure Data utilizes Presto as the query engine processing over 1 million queries per day to support the data business of 500+ companies in three regions: US, EU, and Asia. Arm Treasure Data had been using Presto 0.205 and in 2019 started a big migration project to Presto 317. Although we performed extensive query simulations to check for incompatibilities, we faced many unexpected challenges during the migration in production.
BUD17-310: Introducing LLDB for Linux on Arm and AArch64 - Linaro
Session ID: BUD17-310
Session Name: Introducing LLDB for Linux on Arm and AArch64 - BUD17-310
Speaker: Omair Javaid
Track: Toolchain
★ Session Summary ★
This session provides an introduction to LLDB, the debugger from the LLVM project, and its status on Arm and AArch64 Linux. A brief overview of various components of LLDB will be presented, with a focus on the LLDB command line and how LLDB can provide a debugging experience similar to or different from GDB.
---------------------------------------------------
★ Resources ★
Event Page: http://connect.linaro.org/resource/bud17/bud17-310/
Presentation: https://www.slideshare.net/linaroorg/bud17310-introducing-lldb-for-linux-on-arm-and-aarch64
Video: https://youtu.be/6q1KfQPX4zs
---------------------------------------------------
★ Event Details ★
Linaro Connect Budapest 2017 (BUD17)
6-10 March 2017
Corinthia Hotel, Budapest,
Erzsébet krt. 43-49,
1073 Hungary
---------------------------------------------------
Keyword: toolchain, AArch64, LLDB, ARM
http://www.linaro.org
http://connect.linaro.org
---------------------------------------------------
Scala for Everything: From Frontend to Backend Applications - Scala Matsuri 2020 - Taro L. Saito
Scala is a powerful language: you can build front-end applications with Scala.js, and efficient backend application servers for the JVM. In this session, we will learn how to build everything with Scala by using the Airframe OSS framework.
Airframe is a library designed for maximizing the advantages of Scala as a hybrid object-oriented and functional programming language. In this session, we will learn how to use Airframe to build REST APIs and RPC services (with Finagle or gRPC), and how to create frontend applications in Scala.js that interact with the servers using functional interfaces for dynamically updating web pages.
Presto @ Treasure Data - Presto Meetup Boston 2015 - Taro L. Saito
Treasure Data simplifies event analytics for the complex digital world. Our customers send us 1,000,000 events per second and issue 30,000+ Presto queries every day to understand their customers better. One of the challenges is designing a cloud database with zero downtime to support a global customer base. We have achieved this goal by developing several open-source technologies: Fluentd and Embulk enable seamless log collection from stream/batch sources, and with MessagePack we can provide an extensible columnar store that accommodates future schema changes. Finally, Presto allows us to serve the wide variety of data processing our customers perform on our service. In this talk, I will present an overview of our system, and how our customers keep using Presto while collecting and extending their data set.
Weaving Dataflows with Silk - ScalaMatsuri 2014, Tokyo - Taro L. Saito
Silk is a framework for building dataflows in Scala. In Silk, users write data-processing code with collection operators (e.g., map, filter, reduce, join). Silk uses Scala macros to construct a DAG of dataflows, whose nodes are annotated with variable names in the program. By using these variable names as markers in the DAG, Silk can support interruption and resumption of dataflows, as well as querying the intermediate data. By separating dataflow descriptions from their computation, Silk enables us to switch executors, called weavers, for in-memory or cluster computing without modifying the code. In this talk, we will show how Silk helps you run data-processing pipelines as you write the code.
HEAP SORT ILLUSTRATED WITH HEAPIFY, BUILD HEAP FOR DYNAMIC ARRAYS.
Heap sort is a comparison-based sorting technique based on the Binary Heap data structure. It is similar to selection sort: we first find the minimum element and place it at the beginning, then repeat the same process for the remaining elements; the heap, however, lets each extraction run in O(log n) time instead of O(n).
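A minimal Python sketch of heapify, build-heap, and heap sort (this version uses a max-heap and places the maximum at the end of the array, which is the equivalent mirror of the min-heap description above):

```python
def heapify(a, n, i):
    """Sift the element at index i down so the subtree rooted at i
    satisfies the max-heap property (children <= parent)."""
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < n and a[left] > a[largest]:
        largest = left
    if right < n and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        heapify(a, n, largest)

def heap_sort(a):
    n = len(a)
    # Build a max-heap: heapify all internal nodes, bottom-up.
    for i in range(n // 2 - 1, -1, -1):
        heapify(a, n, i)
    # Repeatedly move the maximum (root) to the end and shrink the heap.
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        heapify(a, end, 0)
    return a

print(heap_sort([5, 1, 4, 2, 8]))  # -> [1, 2, 4, 5, 8]
```

Build-heap runs in O(n) overall, and each of the n extractions costs O(log n), giving the O(n log n) total for heap sort.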
NUMERICAL SIMULATIONS OF HEAT AND MASS TRANSFER IN CONDENSING HEAT EXCHANGERS... - ssuser7dcef0
Power plants release a large amount of water vapor into the atmosphere through the stack. The flue gas can be a potential source for obtaining much needed cooling water for a power plant. If a power plant could recover and reuse a portion of this moisture, it could reduce its total cooling water intake requirement. One of the most practical ways to recover water from flue gas is to use a condensing heat exchanger. The power plant could also recover latent heat due to condensation as well as sensible heat due to lowering the flue gas exit temperature. Additionally, harmful acids released from the stack can be reduced in a condensing heat exchanger by acid condensation.

Condensation of vapors in flue gas is a complicated phenomenon, since heat and mass transfer of water vapor and various acids occur simultaneously in the presence of non-condensable gases such as nitrogen and oxygen. Design of a condenser depends on knowledge and understanding of the heat and mass transfer processes. A computer program for numerical simulations of water (H2O) and sulfuric acid (H2SO4) condensation in a flue gas condensing heat exchanger was developed using MATLAB. Governing equations based on mass and energy balances for the system were derived to predict variables such as flue gas exit temperature, cooling water outlet temperature, mole fractions, and condensation rates of water and sulfuric acid vapors. The equations were solved using an iterative solution technique with calculations of heat and mass transfer coefficients and physical properties.
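The iterative solution technique mentioned above can be sketched generically. The Python fragment below is not the paper's MATLAB model; it is an illustrative fixed-point iteration on a single simplified energy balance (all parameters and the mean-temperature-difference approximation are placeholders), updating the flue gas exit temperature until successive iterates converge:

```python
def solve_exit_temperature(t_gas_in, t_water_in, ua, m_cp_gas, m_cp_water,
                           tol=1e-6, max_iter=1000):
    """Fixed-point iteration on a simple energy balance: heat lost by the
    gas equals heat gained by the cooling water, with the exchanger duty
    modeled as q = ua * dt_mean (an arithmetic-mean temperature difference
    is used here for simplicity instead of the LMTD)."""
    t_gas_out = t_gas_in  # initial guess: no cooling
    for _ in range(max_iter):
        # Water outlet temperature from the energy balance on the current guess.
        q = m_cp_gas * (t_gas_in - t_gas_out)
        t_water_out = t_water_in + q / m_cp_water
        # Mean temperature difference driving the heat transfer.
        dt_mean = ((t_gas_in - t_water_out) + (t_gas_out - t_water_in)) / 2
        # Updated duty and exit temperature.
        q_new = ua * dt_mean
        t_new = t_gas_in - q_new / m_cp_gas
        if abs(t_new - t_gas_out) < tol:
            return t_new
        t_gas_out = t_new
    return t_gas_out
```

The real model couples such balances with mass transfer of H2O and H2SO4 and with property calculations at each iteration, but the convergence loop has this same shape.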
Using recycled concrete aggregates (RCA) for pavements is crucial to achieving sustainability. Implementing RCA for new pavement can minimize carbon footprint, conserve natural resources, reduce harmful emissions, and lower life cycle costs. Compared to natural aggregate (NA) pavement, RCA pavement has been the subject of fewer comprehensive studies and sustainability assessments.
Literature Review Basics and Understanding Reference Management.pptx - Dr Ramhari Poudyal
A three-day training on academic research, focusing on analytical tools, at United Technical College, supported by the University Grants Commission, Nepal, 24-26 May 2024.
We have compiled the most important slides from each speaker's presentation. This year’s compilation, available for free, captures the key insights and contributions shared during the DfMAy 2024 conference.
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...Amil Baba Dawood bangali
Contact with Dawood Bhai Just call on +92322-6382012 and we'll help you. We'll solve all your problems within 12 to 24 hours and with 101% guarantee and with astrology systematic. If you want to take any personal or professional advice then also you can call us on +92322-6382012 , ONLINE LOVE PROBLEM & Other all types of Daily Life Problem's.Then CALL or WHATSAPP us on +92322-6382012 and Get all these problems solutions here by Amil Baba DAWOOD BANGALI
#vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore#blackmagicformarriage #aamilbaba #kalajadu #kalailam #taweez #wazifaexpert #jadumantar #vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore #blackmagicforlove #blackmagicformarriage #aamilbaba #kalajadu #kalailam #taweez #wazifaexpert #jadumantar #vashikaranspecialist #astrologer #palmistry #amliyaat #taweez #manpasandshadi #horoscope #spiritual #lovelife #lovespell #marriagespell#aamilbabainpakistan #amilbabainkarachi #powerfullblackmagicspell #kalajadumantarspecialist #realamilbaba #AmilbabainPakistan #astrologerincanada #astrologerindubai #lovespellsmaster #kalajaduspecialist #lovespellsthatwork #aamilbabainlahore #Amilbabainuk #amilbabainspain #amilbabaindubai #Amilbabainnorway #amilbabainkrachi #amilbabainlahore #amilbabaingujranwalan #amilbabainislamabad
KuberTENes Birthday Bash Guadalajara - K8sGPT first impressionsVictor Morales
K8sGPT is a tool that analyzes and diagnoses Kubernetes clusters. This presentation was used to share the requirements and dependencies to deploy K8sGPT in a local environment.
Hierarchical Digital Twin of a Naval Power SystemKerry Sado
A hierarchical digital twin of a Naval DC power system has been developed and experimentally verified. Similar to other state-of-the-art digital twins, this technology creates a digital replica of the physical system executed in real-time or faster, which can modify hardware controls. However, its advantage stems from distributing computational efforts by utilizing a hierarchical structure composed of lower-level digital twin blocks and a higher-level system digital twin. Each digital twin block is associated with a physical subsystem of the hardware and communicates with a singular system digital twin, which creates a system-level response. By extracting information from each level of the hierarchy, power system controls of the hardware were reconfigured autonomously. This hierarchical digital twin development offers several advantages over other digital twins, particularly in the field of naval power systems. The hierarchical structure allows for greater computational efficiency and scalability while the ability to autonomously reconfigure hardware controls offers increased flexibility and responsiveness. The hierarchical decomposition and models utilized were well aligned with the physical twin, as indicated by the maximum deviations between the developed digital twin hierarchy and the hardware.
Airframe: Lightweight Building Blocks for Scala - Scale By The Bay 2018
1. Copyright 1995-2018 Arm Limited (or its affiliates). All rights reserved.
Taro L. Saito, Ph.D.
GitHub: @xerial
Arm Treasure Data
Airframe: Lightweight Building
Blocks for Scala
November 16th, 2018
Scale By The Bay 2018 - Unconference
About Me: Taro L. Saito (Leo)
● An Engineer with Research Background
● Ph.D., University of Tokyo
● DBMS & Genome Science
● Leading Query Engine Team
● Active OSS Developer
● airframe
● sqlite-jdbc
■ More than 1000 GitHub stars
● snappy-java
■ Compression library used in
Spark, Parquet
● sbt-sonatype
■ Used in 2000+ Scala projects
● ...
Airframe
● Lightweight Building Blocks for Scala
● Essential libraries for building any applications
● Used in production for 2+ years
● Based on my code collection since 2009
● Initially written in Java
● Gradually migrated to Scala
● Repackaged into wvlet.airframe in 2016
● For maintainability
● 18 Modules
● Simplifying your daily programming in Scala
18 Airframe Modules
● Bootstrap
● airframe-config Configuration loader
● airframe-launcher Command-line program launcher
● Object Serialization
● airframe-codec encoder/decoder SPI + standard codecs
● airframe-msgpack pure-Scala MessagePack implementation
● airframe-tablet CSV/TSV/JSON/JDBC ResultSet <-> Object
● Monitoring & Debugging
● airframe-log Logging
● airframe-metrics Human-readable metrics for time, date, data size, etc
● airframe-jmx Object metrics provider through JMX
● Building Service Objects
● airframe Dependency injection
● airframe-surface Object type inspector
● Misc:
● airframe-control, airframe-jdbc, airframe-json, airframe-http, etc.
Utilities for Debugging Applications
Debugging Applications: Airframe Log
● Airframe Log: A Modern Logging Library for Scala (Medium Blog)
● ANSI color, source code location support
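A minimal sketch of using airframe-log: mixing in the `LogSupport` trait provides `info`/`debug`/`warn` methods that record the source code location. The `MyApp` class and its messages are illustrative:

```scala
import wvlet.log.{LogLevel, LogSupport, Logger}

class MyApp extends LogSupport {
  def run(): Unit = {
    info("Starting the application") // rendered with ANSI colors and source location
    debug("A debug-level message")
  }
}

// Show debug-level logs as well
Logger.setDefaultLogLevel(LogLevel.DEBUG)
new MyApp().run()
```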
Airframe Metrics
● Human Readable Data Format (Duration, DataSize, etc.)
● Handy Time Window String Support
Example time window strings: “-1d”, “-1w”, “-7d”
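A sketch of parsing such time window strings with airframe-metrics; `TimeWindow.withTimeZone` and `parse` follow the airframe-metrics documentation, but treat the exact method names as assumptions to check against your version:

```scala
import wvlet.airframe.metrics.TimeWindow

val t = TimeWindow.withTimeZone("UTC")

// Each string yields a concrete [start, end) time window relative to now
val yesterday = t.parse("-1d") // window for "-1d"
val lastWeek  = t.parse("-1w") // window for "-1w"
val last7Days = t.parse("-7d") // window for "-7d"
```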
Airframe JMX
● Checking the internal states of remote JVM processes
● JMX clients
● jconsole has JMX metric monitor
● Airframe JMX -> DataDog -> Dashboard
Airframe Launcher
● Turning classes &amp; functions into command-line programs
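For example, a class with annotated methods can be launched as a CLI. The `MyApp` class and its options are illustrative; the `@command`/`@option` annotations and `Launcher.execute` come from airframe-launcher:

```scala
import wvlet.airframe.launcher.{Launcher, command, option}

class MyApp {
  @command(isDefault = true, description = "say hello")
  def hello(
      @option(prefix = "-n,--name", description = "name to greet")
      name: String = "world"
  ): Unit = {
    println(s"Hello $name!")
  }
}

// Parses the command line and calls MyApp.hello("Scala")
Launcher.execute[MyApp]("-n Scala")
```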
Dependency Injection with Airframe
Today’s Goals
● Learn How to Use Airframe DI (Dependency Injection)
● Understand what can be simplified with DI
● Learn 5 Airframe DI Design Patterns
● That improve the thought processes in programming
What is Dependency Injection (DI)?
● Many Articles…
● Inversion of Control Containers and the Dependency Injection pattern. Martin
Fowler (2004)
● StackOverflow, Wikipedia, …
● Many Frameworks...
● Spring, Google Guice, Scaldi, Macwire, Grafter, Weld, etc.
● No-framework approaches also exist (pure-Scala DI)
● Recent Definition:
● Dependency Injection is the process of creating the static, stateless graph of
service objects, where each service is parameterised by its dependencies.
■ What is Dependency Injection? by Adam Warski (2018)
● However, it is still difficult to understand what DI is
Simplifying DI with Airframe
● Airframe Usage
● import wvlet.airframe._
● Simple 3 Step DI
● bind
● design
● build
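The three steps can be sketched as follows; `DB`, `PostgresDB`, and `MyService` are illustrative names, while `newDesign`, `bind[X].to[Y]`, and `design.build` are Airframe's core API:

```scala
import wvlet.airframe._

trait DB { def query(sql: String): Unit }
class PostgresDB extends DB {
  def query(sql: String): Unit = println(s"running: $sql")
}
// The dependency is injected through the constructor
class MyService(db: DB) {
  def run(): Unit = db.query("select 1")
}

// design: bind the DB interface to a concrete implementation
val design = newDesign
  .bind[DB].to[PostgresDB]

// build: Airframe instantiates MyService, wiring DB automatically
design.build[MyService] { service => service.run() }
```

In-trait injection instead declares dependencies with `bind[X]` inside a trait.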
Design Patterns In Airframe
● Pattern 0 (basic): Building Service Objects with Auto-Wiring
● Pattern 1: Configuring Modules
● Pattern 2: Switching Bindings for Tests
● Pattern 3: Managing Lifecycle in FILO order
● Pattern 4: Bundling Service Traits
● Flower-bundle pattern
● Pattern 5: Binding Factory
Constructor vs In-Trait Injections
● Constructor Injection
● In-Trait Injection
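A sketch contrasting the two injection styles; `Printer` and the service names are illustrative:

```scala
import wvlet.airframe._

class Printer { def show(s: String): Unit = println(s) }

// Constructor injection: dependencies are constructor arguments
class ReportWriter(printer: Printer) {
  def write(): Unit = printer.show("report")
}

// In-trait injection: dependencies are declared with bind[X]
trait ReportService {
  private val printer = bind[Printer]
  def write(): Unit = printer.show("report")
}

val design = newDesign
design.build[ReportWriter] { w => w.write() }  // constructor injection
design.build[ReportService] { s => s.write() } // in-trait injection
```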
Building Service Objects
● When coding A and B
● You can focus only on direct dependencies
● You can forget about indirect dependencies
● Airframe DI builds A, B, and direct/indirect dependencies on your behalf.
[Diagram: dependency graph — A and B use DBClient and FluentdLogger directly; the indirect dependencies (ConnectionPool, DB, DBMonitor, HttpClient) are wired by Airframe: “You can forget this part”]
Code Example
● Focus on Logic
[Diagram: A and B with only their direct dependencies, DBClient and FluentdLogger]
Hand Wiring vs Auto Wiring
Configuring Modules
● How to pass configuration objects to corresponding modules?
● new A(new DBClient(new ConnectionPool(new DB(new DBConfig(...)), new
ConnectionPoolConfig(...), new DBClientConfig(...)))
● Things You Need to Remember
● Argument orders (or argument names in Scala) of individual modules
● How to instantiate modules
[Diagram: the same dependency graph with config objects — DBConfig, ConnectionPoolConfig, HttpClientConfig — attached to the modules that need them]
Pattern 1: Adding Config
● Put config to the closest place in the code
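Pattern 1 can be sketched as binding a config instance in the design, right next to the module that consumes it; `DBConfig` and `DBClient` are illustrative names, while `toInstance` is Airframe's binding API:

```scala
import wvlet.airframe._

case class DBConfig(host: String = "localhost", port: Int = 5432)
class DBClient(config: DBConfig) {
  def connect(): Unit = println(s"connecting to ${config.host}:${config.port}")
}

// The config lives next to its binding, not buried in nested `new` calls
val design = newDesign
  .bind[DBConfig].toInstance(DBConfig(host = "db.example.com"))

design.build[DBClient] { client => client.connect() }
```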
Pattern 2: Switching Bindings For Testing
● In Airframe Design
● You can replace DB and FluentdLogger with in-memory implementations
● How A and B are built differs, but the same application code can be reused
[Diagram: the test design replaces DB with an in-memory DB and FluentdLogger with an in-memory logger — overriding the Design for testing]
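A sketch of the override: start from the production design and rebind only what should differ in tests (the class names are illustrative):

```scala
import wvlet.airframe._

trait DB
class PostgresDB extends DB
class InMemoryDB extends DB

val production = newDesign
  .bind[DB].to[PostgresDB]

// Later bindings override earlier ones, so only DB changes;
// the application code built from either design is identical
val forTesting = production
  .bind[DB].to[InMemoryDB]
```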
Pattern 2: Switching Bindings For Testing
Pattern 3: Managing Lifecycle
● onStart, onShutdown hooks
● JSR-250 annotations
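A sketch of registering lifecycle hooks on a binding; `Server` is illustrative, while `onStart`/`onShutdown` are Airframe's lifecycle hooks (JSR-250 `@PostConstruct`/`@PreDestroy` annotations can serve the same purpose):

```scala
import wvlet.airframe._

class Server {
  def start(): Unit = println("server started")
  def stop(): Unit  = println("server stopped")
}

val design = newDesign
  .bind[Server]
  .onStart(_.start())   // runs when the session starts
  .onShutdown(_.stop()) // runs when the session closes

design.build[Server] { server =>
  // server is running here; stop() is invoked automatically afterwards
}
```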
Complex FILO Order Resource Management
● FILO := First-In Last-Out
● Airframe registers onStart and onShutdown lifecycle hooks when creating
instances
● When closing sessions, onShutdown will be called in the reverse order
● Dependencies form a DAG
● Dependencies will be generated when creating new service objects
[Diagram: dependency graph annotated with instantiation order (1–8); onShutdown hooks run in the reverse (FILO) order; ConnectionPool is managed as a shared resource]
Pattern 4: Bundling Service Traits
● Flower-Bundle Pattern
● Create composable services
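A sketch of the flower-bundle pattern: each small service trait binds one dependency (a “petal”), and an application trait bundles them into a “flower” (all names are illustrative):

```scala
import wvlet.airframe._

class DBClient
class FluentdLogger

// Each "petal": a service trait binding a single dependency
trait DBService     { val db = bind[DBClient] }
trait LoggerService { val logger = bind[FluentdLogger] }

// The "flower": bundle the service traits into one application trait
trait MyApp extends DBService with LoggerService {
  def run(): Unit = println(s"using $db and $logger")
}

newDesign.build[MyApp] { app => app.run() }
```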
Pattern 5: Bind Factory
● Create a factory that changes part of the dependencies
● e.g., creating differently configured database instances
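A sketch of `bindFactory`, which produces instances whose remaining dependencies come from the design while one argument is supplied at call time (the names are illustrative):

```scala
import wvlet.airframe._

case class DBConfig(name: String)
class DBClient(config: DBConfig) {
  def describe: String = s"client for ${config.name}"
}

trait DBClientFactory {
  // DBConfig is supplied per call; other dependencies of DBClient,
  // if any, would still be resolved from the design
  private val factory = bindFactory[DBConfig => DBClient]
  def clientFor(name: String): DBClient = factory(DBConfig(name))
}

newDesign.build[DBClientFactory] { f =>
  val users  = f.clientFor("users_db")
  val events = f.clientFor("events_db")
}
```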
3 Things You Can Forget With Airframe DI
● 1. How to Build Service Objects
● config, auto-wiring, flower bundle pattern
● 2. How to Manage Resource Lifecycle
● FILO order
● 3. How to Use DI Itself (!!)
● Only need to understand bind, design, and build.
Summary: Reducing Code Complexity with Airframe DI
● You can effectively forget about:
● How to build service objects
● How to manage resources in FILO order
● How to use DI itself
[Diagram: the same dependency graph, annotated with its FILO start/shutdown order]
Implementation Details
Airframe Internals
Details
● bind[X]
● X becomes singleton
● Usually there is no need to declare bind[X].toSingleton
■ Unless you want to initialize X eagerly in production mode
● bindInstance[X]
● When you need to call new X(d1, d2, …) every time
● bindFactory[A1 => X]
● For customizing A1 in X
● Design
● Immutable & Serializable
● design1 + design2 + ….
■ Later designs override earlier ones (the order of addition matters)
● Session
● withLazyMode/withProductionMode
● noLifeCycleLogging
Injecting Airframe Session with Scala Macros
Airframe Surface: Object Shape Inspector
● Reading Type Signatures From ScalaSig
● The Scala compiler embeds Scala type signatures (ScalaSig) into class files
● Surface supports
● type alias, tagged type
● higher-kinded types
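A sketch of inspecting a type with Surface (the case class is illustrative; `Surface.of`, `name`, and `params` are airframe-surface's API):

```scala
import wvlet.airframe.surface.Surface

case class A(data: List[String], count: Int)

val s = Surface.of[A]
println(s.name) // the type name
// Full generic parameters survive, despite JVM type erasure
s.params.foreach { p =>
  println(s"${p.name}: ${p.surface}")
}
```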
[Diagram: for class A(data: List[B]), both javac and scalac emit data: List[java.lang.Object] due to type erasure; scalac additionally embeds ScalaSig (data: List[B]), which Surface.of[A] reads to recover data: List[B], where scala.reflect.runtime.universe.TypeTag cannot]
Future Work & Summary
Current State of Airframe
● Version 0.73 (As of November 2018)
● We already had 40+ releases in 2018
● Automated Release
● Cross building libraries for Scala 2.11, 2.12, 2.13, and Scala.js
● ‘sbt release’ command took 3 hours
■ Sequential steps:
○ compile -> test -> package -> upload x 18 modules x 4 Scala versions
● Now a new version can be released in 10 minutes on Travis CI
● Blog
● 3 Tips for Maintaining Scala Projects
● Future Work
● Adding child session support
● Support constructors with multiple parameter lists
Philosophy: Simplicity By Design
● “Simplicity” by Philippe Dufour
● A watch made by a legendary watchmaker in Switzerland
● He builds every part of the watch himself
● Airframe
● Provides simplicity for
application developers
Summary
● To understand DI, think about what you can
simplify
● How to build objects
● How to manage resources (FILO)
● Learning DI framework itself
● 5 Airframe Design Patterns
● Pattern 1: Configuring Modules
● Pattern 2: Overriding Bindings for Tests
● Pattern 3: Managing Lifecycle in FILO order
● Pattern 4: Bundling Service Traits
■ Flower-bundle pattern
● Pattern 5: Binding Factory
Don’t Forget to Add a GitHub Star!
wvlet/airframe