Crude-Oil Blend Scheduling Optimization: An Application with Multi-Million D...Alkis Vazacopoulos
The economic and operability benefits associated with better crude-oil blend scheduling are numerous and significant. The crude-oils that arrive at the oil-refinery to be processed into the various refined-oils must be carefully handled and mixed before they are charged to the atmospheric and vacuum distillation unit or pipestill. The intent of this article is to highlight the importance and details of optimizing the scheduling of an oil-refinery’s crude-oil feedstocks from the receipt to the charging of the pipestills.
The power of observation: 5 ways to ensure teacher evaluations lead to teache...Learning Forward
The teacher’s ability to assess student learning, analyze outcomes, and adapt instruction to meet student needs may not always show up on a state standardized exam. However, principals who pay attention to a range of measures of teacher effectiveness can provide more meaningful teacher evaluations that promote teacher growth. Discover several areas in which principals can concentrate to ensure growth-oriented evaluations.
Article by Van de Rijt & Witteveen: performance-based procurement (prestatie-inkoop) at Rijkswaterstaat, deal maar...Jeroen Van de Rijt
In this article, Jeroen van de Rijt and Wiebe Witteveen describe how performance-based procurement was applied in several projects from Rijkswaterstaat's Spoedaanpak programme.
Follow-Up Hearing Test in Long Island NY Recommended After National Hearing TestEast End Hearing
The audiologists at East End Hearing – Long Island Hearing Test Professionals agree that getting the word out about the National Hearing Test is important.
Full service audiologist with the best selection of hearing aids in Long Island NY. See us for hearing tests, custom ear protection, tinnitus treatment, ear wax removal, hearing aid repair.
Advanced Production Accounting of an Olefins Plant Industrial Modeling Framew...Alkis Vazacopoulos
Presented in this short document is a description of what we call "Advanced" Production Accounting (APA) applied to a small Olefins Plant found in Sanchez and Romagnoli (1996). APA is the term given to the technique of vetting, screening or cleaning the past production data using statistical data reconciliation and regression (DRR) when continuous-processes are assumed to be at steady-state (Kelly and Hedengren, 2013), i.e., there is no significant material accumulation. For this case, the model and data define a simultaneous mass or volume linear DRR problem. Figure 1a shows the Olefins Plant using simple number indices for both the nodes and streams, while Figure 1b depicts the same problem configured in our unit-operation-port-state superstructure (UOPSS) (Kelly, 2004, 2005; Zyngier and Kelly, 2012).
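As a minimal sketch of the linear DRR step described above (this is not IMPL itself; the single mixing node, meter readings and tolerances below are hypothetical), mass-balance data reconciliation can be posed as a weighted least-squares projection of the raw measurements onto the balance constraints:

```python
import numpy as np

def reconcile(m, sigma, A):
    """Linear data reconciliation: minimize sum(((x - m)/sigma)**2)
    subject to A @ x = 0, via the closed-form weighted least-squares
    projection onto the balance constraints."""
    S = np.diag(np.asarray(sigma, float) ** 2)   # inverse weight matrix
    r = A @ m                                    # balance residuals of raw data
    lam = np.linalg.solve(A @ S @ A.T, r)        # Lagrange multipliers
    return m - S @ A.T @ lam                     # reconciled flows

# One mixing node: stream 1 in, streams 2 and 3 out (x1 - x2 - x3 = 0).
m = np.array([100.2, 60.5, 38.1])    # raw meter readings (do not balance)
A = np.array([[1.0, -1.0, -1.0]])    # node incidence (balance) matrix
xhat = reconcile(m, [1.0, 1.0, 1.0], A)
print(xhat, A @ xhat)                # reconciled flows now close the balance
```

The same projection extends to many nodes by stacking one balance row per node into `A`.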
Presented in this short document is a description of what we call "Partitioning" and "Positioning". Partitioning is the notion of decomposing the problem into smaller sub-problems along its “hierarchical” (Kelly and Zyngier, 2008), “structural” (Kelly and Mann, 2004), “operational” (Kelly, 2006), “temporal” (Kelly, 2002) and now “phenomenological” (Kelly, 2003, Kelly and Mann, 2003, Kelly and Zyngier, 2014 and Menezes, 2014) dimensions. Positioning is the ability to configure the lower and upper hard bounds and target soft bounds for any time-period over the future time-horizon within the problem or sub-problem, and is especially useful to fix variables (i.e., their lower and upper bounds are set equal), which ultimately removes or excludes these variables from the solver’s model or matrix.
Presented in this short document is a description of how to model and solve multi-utility scheduling optimization (MUSO) problems in IMPL. Multi-utility systems (co/tri-generation) are typically found in petroleum refineries and petrochemical plants (multi-commodity systems), especially when fuel-gas (i.e., off-gases of methane and ethane) is a co- or by-product of the production from which multi-pressure heating-, motive- and process-steam are generated on-site. Other utilities include hydrogen, electricity, water, cooling media, air, nitrogen, chemicals, etc. A multi-utility system is shown in Figure 1 with an intermediate or integrated utility (both produced and consumed) such as fuel-gas, steam or electricity. Itemized benefit areas for better management of an integrated steam network alone can be found in Pelham (2013), whose sample multi-pressure steam utility flowsheet is shown in Figure 2.
Unit-Operation Nonlinear Modeling for Planning and Scheduling ApplicationsAlkis Vazacopoulos
The focus of this chapter is to detail the quantity and quality modeling aspects of production flowsheets found in all process industries. Production flowsheets are typically at a higher-level than process flowsheets given that in many cases more direct business or economic related decisions are being made such as maximizing profit and performance for the overall plant and/or for several integrated plants together with shared resources. These decisions are usually planning and scheduling related, often referred to as production control, which require a larger spatial and temporal scope compared to more myopic process flowsheets which detail the steady or unsteady-state material, energy and momentum balances of a particular process unit-operation over a relatively short time horizon. This implies that simpler but still representative mathematical models of the individual processes are necessary in order to solve the multi time-period nonlinear system using nonlinear optimizers such as successive linear programming (SLP) and sequential quadratic programming (SQP). In this chapter we describe six types of unit-operation models which can be used as fundamental building blocks or objects to formulate large production flowsheets. In addition, we articulate the differences between continuous and batch processes while also discussing several other important implementation issues regarding the use of these unit-operation models within a decision-making system. It is useful to also note that the quantity and quality modeling system described in this chapter complements the quantity and logic modeling used to describe production and inventory systems outlined in Zyngier and Kelly (2009).
Presented in this short document is a description of what is called a “Pipeline Scheduling Optimization Problem” and was first described in Rejowski and Pinto (2003) where they modeled the first-in-first-out (FIFO) and multi-product nature of the segregated pipeline using both discretized space (multi-batches, packs or pipes) and time (multi-intervals, slots or periods). The same MILP model can also be found in Zyngier and Kelly (2009) along with other related production/process objects.
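The first-in-first-out displacement at the heart of the discretized-space pipeline model can be sketched with a simple pack simulation (this is not the Rejowski and Pinto MILP itself; the products and volumes below are hypothetical): injecting a volume at the head pushes an equal volume of the oldest packs out of the tail.

```python
from collections import deque

def pump(line, product, vol):
    """Inject `vol` of `product` at the pipeline head (left end of the deque);
    an equal volume of the oldest packs is displaced out of the tail
    (right end), i.e., first-in-first-out."""
    line.appendleft([product, vol])
    out = []
    while vol > 1e-9:
        p, v = line[-1]            # pack nearest the tail leaves first
        take = min(v, vol)
        out.append((p, take))
        if take == v:
            line.pop()             # pack fully displaced
        else:
            line[-1][1] -= take    # pack partially displaced
        vol -= take
    return out

# Pipeline initially holds 30 of product A (head side) and 70 of B (tail side).
line = deque([["A", 30.0], ["B", 70.0]])
out = pump(line, "C", 50.0)
print(out, list(line))   # 50 of B leaves the tail; C now occupies the head
```

A full scheduling model would additionally decide *when* and *how much* to pump each product, which is where the MILP time discretization comes in.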
Advanced Parameter Estimation (APE) for Motor Gasoline Blending (MGB) Indust...Alkis Vazacopoulos
Presented in this short document is a description of how to model and solve advanced parameter estimation (APE) problems in IMPL. APE is the term given to the application of estimating, fitting or calibrating parameters in models involving a network, topology, superstructure or flowsheet. When estimating parameters with multiple linear regression (MLR), ordinary least squares (OLS), ridge regression (RR), principal component regression (PCR) and partial least squares (PLS) there is no explicit model but simply an X-block and Y-block of data. Hence, these methods are referred to as “non-parametric” or “data-based” methods as opposed to the “parametric” or “model-based” method used here. To solve these types of problems we use what is commonly referred to as “error-in-variables” (EIV) regression which is conveniently implemented as nonlinear data reconciliation and regression (NDRR) using the technology found in Kelly (1998a; 1998b; 1999) and Kelly and Zyngier (2008a). The primary benefit of using EIV (NDRR) over the other regression methods is that we can easily handle the inclusion of conservation laws and constitutive relations, explicitly, a must for any industrial estimation problem (IEP).
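To make the error-in-variables idea concrete (a deliberately tiny sketch, not IMPL's constrained NDRR: the classic unconstrained linear EIV estimator is total least squares, and the data below are synthetic), note that when the inputs are noisy too, ordinary least squares is biased, whereas total least squares adjusts both X and Y:

```python
import numpy as np

rng = np.random.default_rng(0)
x_true = np.linspace(0.0, 10.0, 200)
y_true = 2.0 * x_true                       # true model: y = 2 x (no intercept)
x = x_true + rng.normal(0.0, 0.1, 200)      # errors in the inputs as well
y = y_true + rng.normal(0.0, 0.1, 200)      # errors in the outputs

# Total least squares: the right singular vector of [x  y] belonging to the
# smallest singular value gives the direction of minimal joint adjustment.
Z = np.column_stack([x, y])
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
v = Vt[-1]                                  # smallest-singular-value vector
slope = -v[0] / v[1]                        # from x*v0 + y*v1 ~ 0
print(round(slope, 3))                      # close to the true slope of 2
```

The NDRR approach referenced above generalizes this by adding conservation laws and constitutive relations as explicit equality constraints, which plain TLS cannot do.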
Advanced Process Monitoring for Startups, Shutdowns & Switchovers Industrial ...Alkis Vazacopoulos
Presented in this short document is a description of what is called “Advanced” Process Monitoring as described by Hedengren (2013) but related to Startups, Shutdowns and Switchovers-to-Others (APM-SUSDSO). APM is the term given to the technique of estimating or fitting unmeasured but observable variables or "states" using statistical data reconciliation and regression (DRR) in an off-line or real-time environment. It is also referred to as Moving Horizon Estimation (MHE) (Robertson et. al., 1996) in Advanced Process Control (APC) which goes beyond simply updating a bias to implement some form of measurement or parameter feedback (Kelly and Zyngier, 2008b). Essentially, the model and data define a simultaneous nonlinear and dynamic DRR problem where the model is either engineering-based (first-principles, fundamental, mechanistic, causal, rigorous) or empirical-based (correlation, statistical data-based, observational, regressed) or some combination of both (hybrid) (Pantelides and Renfro, 2012).
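A minimal sketch of the estimation idea behind MHE (not the APM-SUSDSO formulation itself; the scalar model, noise levels and horizon below are hypothetical): stack the measurement residuals and the model residuals over a window into one weighted least-squares problem and solve for all the states at once.

```python
import numpy as np

a, q, r, N = 0.9, 0.01, 0.25, 20            # model gain, noise variances, horizon
rng = np.random.default_rng(3)
x = np.zeros(N + 1); x[0] = 5.0
for k in range(N):
    x[k + 1] = a * x[k] + rng.normal(scale=q ** 0.5)   # true (hidden) states
y = x + rng.normal(scale=r ** 0.5, size=N + 1)         # noisy measurements

# Weighted least squares over the whole window: unknowns are x[0..N].
rows, rhs = [], []
for k in range(N + 1):                      # measurement residual: y[k] - x[k]
    e = np.zeros(N + 1); e[k] = 1.0
    rows.append(e / r ** 0.5); rhs.append(y[k] / r ** 0.5)
for k in range(N):                          # model residual: x[k+1] - a*x[k]
    e = np.zeros(N + 1); e[k + 1] = 1.0; e[k] = -a
    rows.append(e / q ** 0.5); rhs.append(0.0)
xhat, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(np.abs(xhat - x).mean(), np.abs(y - x).mean())   # estimate beats raw data
```

In a real-time setting the window slides forward each cycle and only the most recent state estimate is used, which is what distinguishes MHE from off-line smoothing.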
Generalized Capital Investment Planning w/ Sequence-Dependent Setups Industri...Alkis Vazacopoulos
Presented in this short document is a description of what we call the “Generalized” Capital Investment Planning (GCIP) problem where conventional capital investment planning (CIP), and specifically for the “retrofit” problem, is discussed in Sahinidis and Grossmann (1989) and Liu and Sahinidis (1996). CIP is the optimization problem where it is desired to expand the capacity and/or extend the capability (conversion) of either the “expansion” of an existing unit or the “installation” of a new unit (Jackson and Grossmann, 2002).
Figure 1 shows the three types of CIP problems as defined in Vazacopoulos et al. (2014) and Menezes (2014) with their capital cost and time scales.
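At its core, choosing among capacity expansions and new installations is a combinatorial selection under a capital budget. A toy sketch (the project names, costs and NPVs below are entirely hypothetical, and real GCIP formulations are multi-period MILPs, not enumerations) illustrates the trade-off:

```python
from itertools import combinations

# Hypothetical candidate investments: (name, capital cost, net present value)
projects = [("expand-CDU", 40, 55), ("new-HDS", 30, 38),
            ("expand-FCC", 25, 30), ("new-ALKY", 35, 36)]
budget = 70

# Brute-force: best NPV subset whose total capital cost fits the budget.
best = max(
    (subset for n in range(len(projects) + 1)
            for subset in combinations(projects, n)
            if sum(p[1] for p in subset) <= budget),
    key=lambda s: sum(p[2] for p in s))
print([p[0] for p in best])   # highest-NPV mix within the capital budget
```

A production formulation replaces the enumeration with binary investment variables inside the planning MILP so that timing, sequencing and operating decisions are optimized together.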
Presented in this short document is a description of what we call "Phasing" and "Planuling". Phasing is a variation of the sequence-dependent changeover problem (Kelly and Zyngier, 2007, Balas et. al., 2008) except that the sequencing, cycling or phasing is fixed as opposed to being variable or free. Planuling is a portmanteau of planning and scheduling where we "schedule" slow processes and we "plan" fast processes together inside the same time-horizon and can also be considered as "hybrid" planning and scheduling.
All phosphoric acid concentration units suffer from fouling, requiring frequent shutdown, cleaning and start-up cycles. These cycles are time consuming, so any process and control improvement that facilitates operations can lead to a significant increase in strong acid production. This study shows how dynamic simulation can be used to conduct engineering studies, operational studies and training simulators to optimize the operability of a greenfield phosphoric acid concentration unit. To perform this optimization, a first-principles model predicting how the process and its associated control will respond as a function of time was created from all of the plant's engineering information. The model can be further combined with DCS graphics and field-operated device schematics to facilitate procedure testing. Constant-pressure and constant-temperature start-up procedures were tested to estimate which procedure minimizes the time to concentrate acid from 25% to 50% P2O5. It was found that the two procedures were equivalent in terms of time and energy consumption, but the constant-pressure strategy is simpler and safer, potentially leading to fewer human-related losses. These procedures were performed by executing step-by-step actions, making it possible to determine the most frequent mistakes, identify any missing actions and improve the existing written procedure. Furthermore, the simulator allowed verification of equipment design, interlocks and control logic, and identified new control enhancement opportunities. In addition, many tools available with the dynamic simulator can be used for operator training, leading to potential operability gains.
Dynamic operator training simulators for sulphuric acid, phosphoric acid, and...Sergio Joao
Dynamic process simulators are widely used in the chemical and petrochemical industries for operator training, plant design, and optimization, but there is a lack of rigorous simulators in the phosphate fertilizer industry. Some of the many difficulties encountered in phosphate fertilizer simulation include: lack of knowledge of thermodynamic properties, presence of many phases (gas, liquid, and solids), high levels and variation of impurities in phosphate rock producing unknown effects, complexity in modeling particle size distribution, etc. Dynamic training simulators were successfully developed for sulphuric acid, phosphoric acid, and DAP production units of OCP Group's Jorf Lasfar complex using a commercial simulation platform. A new thermodynamic property package was developed for sulphuric acid and oleum to correctly predict vapor pressure, density, enthalpy, and SO2 solubility. Also, a rotary drum granulator model was developed to account for the reaction chemistry of DAP production and the stochastic nature of the solids created. The granulator can accurately predict particle size distribution, moisture content, ammonia and dust losses, and gas/solid temperatures. It was shown that the simulators could precisely reproduce control room and field operations to model plant start-ups, emergency or normal shutdowns, process upsets, and normal operations.
Similar to Developing the next generation of Real Time Optimization Technologies (Blend Optimization)
We tested ODH|CPLEX 4.24 on the MIPLIB Open-v7 models, a public collection of 286 models for which an optimal solution has not been proven. 257 of these are known to have a feasible solution.
ODH|CPLEX proved optimality on 6 models and, within 2 hours, found better solutions for 40% of the models with 12 threads and 35% with 8 threads. ODH|CPLEX matched the best known solutions on 21% of the models.
CPLEX Optimization Studio solves large-scale optimization problems and enables better business decisions and resulting financial benefits in areas such as supply chain management, operations, healthcare, retail, transportation, logistics and asset management. It has been applied in sectors as diverse as manufacturing, processing, distribution, retailing, transport, finance and investment. CPLEX Optimization Studio is an analytical decision support toolkit for rapid development and deployment of optimization models using mathematical and constraint programming. It combines an integrated development environment (IDE) with the powerful Optimization Programming Language (OPL) and high-performance ILOG CPLEX optimizer solvers. CPLEX Optimization Studio enables clients to: optimize business decisions with high-performance optimization engines; develop and deploy optimization models quickly by using flexible interfaces and prebuilt deployment scenarios; and create real-world applications that can significantly improve business outcomes. Optimization Direct has partnered with and entered into a technology licensing and distribution agreement with IBM. By combining the founders' industry and software experience with IBM's CPLEX Optimization Studio product and its arsenal of optimization modeling and solving tools, Optimization Direct provides customers with the most powerful capabilities in the industry.
Missing-Value Handling in Dynamic Model Estimation using IMPL Alkis Vazacopoulos
Presented in this short document is a description of how IMPL handles missing-values or missing-data when estimating dynamic models, which inherently involve time-lagged or time-shifted input and output variables. Missing-values in a data set imply that for some reason the data is not available, most likely due to a malfunctioning instrument or even a lack of proper accounting. Missing-data handling is relatively well-studied, especially for time-series or dynamic data, given that it is not as easy as removing, ignoring or deleting bad sections of data as it is when static or steady-state models are calibrated (Honaker and King, 2010; Smits and Baggelaar, 2010; Fisher and Waclawski, 2015). Unfortunately, all of these methods involve what is known as "imputation", i.e., replacing or substituting missing-data with some reasonably assumed value, which is at the very least a biased estimate. When regression techniques such as PLS and PCR are used (Nelson et al., 2006), missing-data can be handled without imputation by computing the input-output covariance matrices excluding the contribution from the missing-values, given the temporal and structural redundancy in the system. However, it is shown in Dayal (1996) that using PLS and other regression techniques such as Canonical Correlation Regression (CCR) and Reduced Rank Regression (RRR) to fit non-parsimonious and non-parametric finite impulse/step response models (FIR/FSR) is not as reliable as fitting lower-ordered transfer functions, especially considering the robust stability of the resulting model predictive controller if that is its intended use.
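The row-exclusion alternative to imputation can be sketched as follows (a simplified illustration, not IMPL's implementation; the series and lag order are made up): when building the time-lagged regressor matrix, any row whose lag window touches a missing value is simply dropped.

```python
import numpy as np

def lagged_rows(u, y, nlags):
    """Build (X, Y) for y[t] = f(u[t], u[t-1], ..., u[t-nlags]); any row whose
    lag window touches a missing value (NaN) is excluded rather than imputed."""
    X, Y = [], []
    for t in range(nlags, len(y)):
        row = u[t - nlags:t + 1][::-1]        # [u[t], u[t-1], ..., u[t-nlags]]
        if np.isnan(row).any() or np.isnan(y[t]):
            continue                          # skip this time step, no imputation
        X.append(row)
        Y.append(y[t])
    return np.array(X), np.array(Y)

u = np.array([1.0, 2.0, np.nan, 4.0, 5.0, 6.0, 7.0])   # input with a gap
y = np.array([0.0, 1.0, 2.0, 3.0, np.nan, 5.0, 6.0])   # output with a gap
X, Y = lagged_rows(u, y, nlags=1)
print(X.shape, Y)   # only the rows untouched by NaNs survive
```

Because each missing value knocks out `nlags + 1` candidate rows, this is cheap when gaps are sparse but wasteful for long lag structures, which is exactly the trade-off the document's discussion is about.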
Finite Impulse Response Estimation of Gas Furnace Data in IMPL Industrial Mod...Alkis Vazacopoulos
Presented in this short document is a description of how to estimate deterministic and stochastic non-parametric finite impulse response (FIR) models in IMPL, applied to industrial gas furnace data identical to that found in TSE-GFD-IMF using parametric transfer-functions. The methodology of time-series analysis or system identification involves essentially three (3) stages (Box and Jenkins, 1976): (1) model structure identification, (2) model parameter estimation and (3) model checking and diagnostics. We do not address (1), which requires stationarity and seasonality assessment/adjustment, auto-, cross- and partial-correlation, etc. to establish the parametric transfer function polynomial degrees, especially since we are using non-parametric FIR estimation. Instead we focus only on the parameter estimation and diagnostics. These types of parameter estimation problems involve the dynamic and nonlinear relationships shown below, and we solve them using IMPL's Sequential Equality-Constrained QP Engine (SECQPE) and Supplemental Observability, Redundancy and Variability Estimator (SORVE). Another type of non-parametric identification, known as subspace identification (Qin, 2006), can be used to estimate state-space models.
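In its simplest deterministic form, FIR estimation is a linear least-squares fit of the impulse-response weights to lagged inputs. A self-contained sketch on synthetic data (not the gas furnace series, and not the SECQPE/SORVE machinery — the true weights 0.5 and 0.3 below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, nlags = 500, 4
u = rng.normal(size=n)                      # input sequence
y = np.zeros(n)
for t in range(2, n):                       # true system: two-tap FIR + noise
    y[t] = 0.5 * u[t - 1] + 0.3 * u[t - 2] + rng.normal(scale=0.01)

# Stack the lagged inputs column by column and solve ordinary least squares.
X = np.column_stack([u[nlags - k:n - k] for k in range(1, nlags + 1)])
b, *_ = np.linalg.lstsq(X, y[nlags:], rcond=None)
print(np.round(b, 2))   # estimated impulse-response weights per lag
```

The stochastic version adds a noise model, and the diagnostics stage then checks the whiteness of the residuals before the model is accepted.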
Our Industrial Modeling Service (IMS) involves several important (but rarely implemented) methods to significantly improve and advance your existing models and data. Since it is well-known that good decision-making requires good models and data, IMS is ideally suited to support this continuous-improvement endeavour. IMS is specifically designed to either co-exist with your existing design, planning, scheduling, etc. applications or these same models and data can be used seamlessly into our Industrial Modeling and Programming Language (IMPL) to create new value-added applications. The following techniques form the basis of our IMS offering.
This short note describes a relatively simple methodology, procedure or approach to increase the performance of already installed industrial models used for optimization, control, simulation and/or monitoring purposes. The method is called Excess or X-Model Regression (XMR) where the concept of “excess modeling” or an X-model is taken from the field of thermodynamics to describe the departure or residual behaviour of real (non-ideal) gases and liquids from their ideal state (Kyle, 1999; Poling et. al., 2001; Smith et. al., 2001). It has also been applied to model the non-ideal or nonlinear behaviour of blending motor gasoline octanes with its synergistic and antagonistic interactional effects (Muller, 1992).
The fundamental idea of XMR is to calibrate, train, fit or estimate, using actual data and multiple linear regression (MLR) or ordinary least squares (OLS), the deviations of the measured responses from the existing model responses. The existing model may be a glass-, grey- or black-box model (known or unknown, linear or nonlinear, implicit/open or explicit/closed) depending on the use of the model. That is, for optimization and control the model structure and parameters are available, given that derivative information is required, whereas for simulation and monitoring the model may only be observed through the dependent output variables given the necessary independent input variables.
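The XMR idea above can be sketched in a few lines (a hypothetical scalar example with invented gains and noise, treating the installed model as a black box): fit OLS to the departure of the measurements from the existing model, then add that fitted excess back onto the model's predictions.

```python
import numpy as np

def existing_model(x):
    """Installed (possibly black-box) model: only its outputs are observed."""
    return 1.8 * x                                    # slightly stale gain

rng = np.random.default_rng(2)
x = np.linspace(0.0, 10.0, 100)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.05, 100)        # actual plant behaviour

# X-model regression: fit the *departure* of the data from the existing model.
d = y - existing_model(x)                             # excess / residual response
A = np.column_stack([x, np.ones_like(x)])             # linear excess model
coef, *_ = np.linalg.lstsq(A, d, rcond=None)

def corrected(xnew):
    """Existing model plus the regressed excess correction."""
    return existing_model(xnew) + coef[0] * xnew + coef[1]

print(round(corrected(5.0), 2))   # tracks the plant better than the stale model
```

The installed model is never modified, which is the practical appeal: the excess term can be re-regressed periodically as the plant drifts.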
Presented in this short document is a description of modeling and solving partial differential equations (PDE’s) in both the temporal and spatial dimensions using IMPL. The sample PDE problem is taken from Cutlip and Shacham (1999 and 2014) and models the process of unsteady-state heat transfer or conduction in a one dimensional (1D) slab with one face insulated and constant thermal conductivity as discussed by Geankoplis (1993).
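The 1D slab problem described above can be sketched with explicit finite differences (a minimal method-of-lines illustration, not the IMPL formulation; the diffusivity, grid and boundary temperatures are hypothetical): discretize space, march in time, and impose a zero-gradient condition at the insulated face.

```python
import numpy as np

alpha, L, nx = 2.0e-5, 1.0, 21          # diffusivity, slab thickness, grid points
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha                # explicit stability: dt <= dx^2/(2*alpha)
T = np.full(nx, 100.0)                  # initially uniform slab temperature
T[0] = 0.0                              # exposed face held at 0

for _ in range(2000):
    Tn = T.copy()
    # Central difference in space, forward difference in time (FTCS scheme)
    T[1:-1] = Tn[1:-1] + alpha * dt / dx**2 * (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2])
    T[-1] = T[-2]                       # insulated face: zero temperature gradient
    T[0] = 0.0                          # constant-temperature face

print(np.round(T[::5], 1))              # profile cools toward the exposed face
```

An implicit scheme (or a DAE formulation, as used for such problems in IMPL-style tools) removes the stability restriction on `dt` at the cost of solving a linear system per step.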
Presented in this short document is a description of what is well-known as Advanced Process Control (APC) applied to a small linear three (3) manipulated variable (MV) by two (2) controlled variable (CV) problem. These problems are also known as Model Predictive Control (MPC) (Grimm et. al., 1989) and Moving Horizon Control (MHC). Figure 1 shows the 3 x 2 APC problem configured in our unit-operation-port-state superstructure (UOPSS) (Kelly, 2004, 2005; Zyngier and Kelly, 2012) as an Advanced Planning and Scheduling (APS) problem as opposed to a traditional APC problem.
Although there is a tremendous amount of stability, performance and robustness theory associated with APC which can be directly applied to APS problems (Mastragostino et al., 2014), our approach is to show that APC can equally be set into an APS framework, except that APS has far less sensitivity technology due to its inherent discrete and nonlinear modeling complexities, i.e., especially non-convexities. In order to eliminate the steady-state offset between the actual value and its target, it is well-known to apply bias-updating, though other forms of "parameter-feedback" are possible. Typically, APS applications only employ "variable-feedback", i.e., opening or initial inventories, properties, etc., but this alone will not alleviate the steady-state offset as demonstrated by Kelly and Zyngier (2008).
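The bias-updating argument can be demonstrated with a toy loop (a deliberately scalar sketch with an invented +3 disturbance, not a full APC/APS application): each cycle, the additive bias absorbs the plant-model mismatch, so the re-solved input drives the *plant*, not just the model, to its target.

```python
def plant(u):
    return 1.0 * u + 3.0        # true plant has an unmodeled +3 disturbance

def model(u):
    return 1.0 * u              # nominal model (no disturbance term)

target, u, bias = 10.0, 0.0, 0.0
for _ in range(5):
    y = plant(u)                # measurement feedback from the "real" process
    bias = y - model(u)         # parameter feedback: update the additive bias
    u = target - bias           # re-solve so that model(u) + bias == target

print(u, plant(u))              # steady-state offset eliminated: plant hits 10
```

With variable-feedback only (i.e., never updating `bias`), the loop above would settle with the plant permanently 3 units above target, which is exactly the offset the abstract refers to.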
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Speakers:
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Key Trends Shaping the Future of Infrastructure.pdf
Developing the next generation of Real Time Optimization Technologies (Blend Optimization)
Real-Time Blend Optimization Industrial Modeling Framework (RTBO-IMF)

Industrial Algorithms LLC. (IAL)
www.industrialgorithms.com
June 2013
Introduction to Real-Time Blend Optimization, UOPSS and QLQP
Presented in this short document is a description of what is typically known as on-line or real-time "multi-process", "multi-pool", "multi-product", "multi-property" and "multi-period" blend optimization. This kind of processing is found in all petroleum refineries, where the blending process mixes diverse refinery rundown streams or components into various types and grades of gasoline, jet fuel, diesel and heating oil. Figure 1 depicts these four types of blended products with shared component resources, including their inventory, such as cracked naphtha, kerosene, etc., configured in our unit-operation-port-state superstructure (UOPSS) (Kelly, 2004b, 2005; Zyngier and Kelly, 2012).

Figure 1. Gasoline, Jet Fuel, Diesel & Heating Oil Blending Flowsheet Example.
The CTanks and PTanks (triangle shapes) in Figure 1 represent component and product tanks or pools, where the small circle shapes define what we call inlet and outlet (with "x") ports and are only found in our UOPSS. The Blenders (rectangle shapes with "x") are controlled mixers in the sense that component flows into the blenders can be regulated; they are sometimes referred to as pools with no inventory and may be in either continuous or semi-continuous operation. The diamond shapes are called perimeters and are the usual source and sink nodes found in other types of network flow representations. On-line analyzers or instruments are usually available to measure the intensive property specifications of the material, such as octane, cetane, sulfur, viscosity, density, vapor pressure, distillation temperature, flash point, cloud point, etc., just to name a few. The other type of continuous process configured is a Hydrotreater, which reacts hydrogen with the virgin diesel stream (VDiesel) in the presence of a catalyst at high pressure to reduce its sulfur content, i.e., HDiesel will have a very low sulfur concentration.
The "severity" (i.e., its process/operating condition) of the hydrotreater is also modeled in order to be able to manipulate or optimize the degree or extent to which the virgin diesel is desulfurized.
The quantity (flows and inventories) and quality (properties and conditions) aspects of the problem, as well as its logic attributes (Kelly, 2006), define what we call the quantity-logic-quality phenomena (QLQP); more details on the blending process modeling and its planning and scheduling can be found in Kelly (2004a) and Castillo, Kelly and Mahalec (2013).
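As a minimal sketch of the simplest "quality" relation in a QLQP model, consider linear (volume-weighted) blending of an intensive property. The component volumes, sulfur values and product specification below are invented for illustration; many real properties (e.g., viscosity, vapor pressure) blend nonlinearly on an index rather than linearly.

```python
def blend_property(volumes, props):
    """Volume-weighted average of a linearly blending property."""
    return sum(v * p for v, p in zip(volumes, props)) / sum(volumes)

# Hypothetical diesel blend: virgin diesel, hydrotreated diesel, kerosene.
volumes = [40.0, 50.0, 10.0]     # m3 charged per period (made up)
sulfur  = [500.0, 10.0, 120.0]   # ppm for each component (made up)
spec_max = 250.0                 # ppm, illustrative product specification

blended = blend_property(volumes, sulfur)
print(blended, blended <= spec_max)   # 217.0 ppm, on-spec
```

The optimizer's job is then to choose the volumes so that such property constraints hold for every product, blender and time-period simultaneously at minimum cost.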
Another important issue is the handling of feedback, especially when controlling flows, inventories and properties in real-time or closed-loop. This is addressed using our state-of-the-art dynamic and nonlinear data reconciliation and regression technology (Kelly, 1998 and 2004c) implemented inside a "moving horizon estimation" (MHE) framework (Kelly and Zyngier, 2008).
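To illustrate the core of this feedback idea, here is a toy steady-state data reconciliation step: adjust measured flows minimally (in the least-squares sense) so that they satisfy the mass balance exactly. The article's technology is dynamic and nonlinear; this linear, single-node example with made-up meter readings only shows the principle.

```python
import numpy as np

def reconcile(measured, A):
    """Project measurements m onto the balance constraints A x = 0."""
    m = np.asarray(measured, dtype=float)
    A = np.atleast_2d(np.asarray(A, dtype=float))
    # Closed form of: minimize ||x - m||^2 subject to A x = 0.
    return m - A.T @ np.linalg.solve(A @ A.T, A @ m)

# One mixing node with balance in1 + in2 - out = 0 and slightly
# inconsistent meter readings (imbalance of 0.3 units).
A = [[1.0, 1.0, -1.0]]
measured = [10.2, 5.1, 15.0]

x = reconcile(measured, A)
print(np.round(x, 2))   # reconciled flows now balance exactly
```

Reconciled values of this kind, rather than raw measurements, are what get fed back to update the model's inventories, flows and properties each cycle.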
What makes this blending configuration interesting is the modeling of all four products together in a single blending optimization problem. Due to the sharing of rundown components between one or more blenders at different times, there is tremendous opportunity to produce on-specification product using the lowest-cost and most-available components.
Existing blend control and optimization software only manages one blender at a time, with no other pools such as tanks included, and looks out no further than the current blend (mono-period). In our formulation we look out over multiple blends of product over multiple blenders, considering multiple periods or time-intervals into the future, where these time-periods can be of either equal or unequal duration.
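The flavor of such a multi-blend formulation can be sketched as a small LP: two blends drawing on two shared components, with a minimum-octane specification per blend and a shared availability limit on the expensive component. All numbers (costs, octanes, demands, specs) are invented for illustration; real RTBO models are far larger and also carry logic (MILP) and nonlinear quality constraints.

```python
import numpy as np
from scipy.optimize import linprog

# Variables: x = [a1, b1, a2, b2], volumes of components A and B
# charged to blends 1 and 2 respectively.
cost   = [60.0, 80.0, 60.0, 80.0]   # $/m3: A is cheap, B is expensive
oct_a, oct_b = 88.0, 95.0           # component octanes (made up)

# Equalities: each blend must meet its demand of 100 m3.
A_eq = [[1, 1, 0, 0],
        [0, 0, 1, 1]]
b_eq = [100.0, 100.0]

# Inequalities: minimum octane per blend (negated into <= form) and a
# shared availability limit of 90 m3 on component B across both blends.
A_ub = [[-oct_a, -oct_b, 0, 0],     # blend 1 octane >= 91
        [0, 0, -oct_a, -oct_b],     # blend 2 octane >= 90
        [0, 1, 0, 1]]               # b1 + b2 <= 90
b_ub = [-91.0 * 100.0, -90.0 * 100.0, 90.0]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print(np.round(res.x, 2), round(res.fun, 2))
```

Because component B is shared, tightening one blend's specification changes how much B the other blend can use, which is exactly the coupling a mono-period, single-blender optimizer cannot see.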
In addition, and unique to our formulation, we also allow the integration of other types of processes (not only hydrotreaters) such as crude distillation units, catalytic reformers, fluidized catalytic converters, hydrocrackers and alkylation units. This allows for upstream manipulation of process/operating conditions to produce more appropriate component rundown properties before they even enter the blending area. This alleviates possible quantity and/or quality bottlenecks (longs and shorts of material) that may arise during the blending operation, avoiding off-specification events as well as minimizing over- and under-use of high-octane, high-cetane, low-sulfur and/or low-viscosity component rundowns.
Benefits for such an RTBO application can be in the millions of dollars and are comparable to the benefits defined by Kelly and Mann (2003) for crude-oil blend optimization. More specifically, a similar installation of this technology and approach at a major oil company's refinery in Europe quoted a payback period of only two weeks!
Industrial Modeling Framework (IMF), IMPRESS and SIIMPLE
To implement the mathematical formulation of this and other systems, IAL offers a unique approach which is incorporated into our Industrial Modeling and Pre-Solving System we call IMPRESS. IMPRESS has its own modeling language called IML (short for Industrial Modeling Language), which is a flat or text-file interface, as well as a set of APIs, called IPL (short for Industrial Programming Language), which can be called from any computer programming language such as C, C++, Fortran, Java (SWIG), C# or Python (CTYPES) to both build the model and view the solution.
Models can be a mix of linear, mixed-integer and nonlinear variables and constraints and are solved using a combination of LP, QP, MILP and NLP solvers such as COINMP, GLPK, LPSOLVE, SCIP, CPLEX, GUROBI, LINDO, XPRESS, CONOPT, IPOPT and KNITRO, as well as our own implementation of SLP called SLPQPE (Successive Linear & Quadratic Programming Engine), which is a very competitive alternative to the other nonlinear solvers and embeds all available LP and QP solvers.
The underlying system architecture of IMPRESS is called SIIMPLE (we hope literally), which is short for Server, Interacter (IPL), Interfacer (IML), Modeler, Presolver Libraries and Executable. The Server, Presolver and Executable are primarily model- or problem-independent, whereas the Interacter, Interfacer and Modeler are typically domain-specific, i.e., model- or problem-dependent.
Fortunately, for most industrial planning, scheduling, optimization, control and monitoring problems found in the process industries, IMPRESS's standard Interacter, Interfacer and Modeler are well-suited and comprehensive enough to model the most difficult production and process complexities, allowing for the formulation of straightforward coefficient equations, ubiquitous conservation laws, rigorous constitutive relations, empirical correlative expressions and other necessary side constraints.
User, custom, ad hoc or external constraints can be augmented or appended to IMPRESS when necessary in several ways. For MILP or logistics problems we offer user-defined constraints configurable from the IML file or the IPL code, where the variables and constraints are referenced using unit-operation-port-state names and the quantity-logic variable types. It is also possible to import a foreign LP file (row-based MPS file), which can be generated by any algebraic modeling language or matrix generator. This file is read just prior to generating the matrix and before exporting to the LP, QP or MILP solver.
For NLP or quality problems we offer user-defined formula configuration in the IML file and single-value and multi-value function blocks writable in C, C++ or Fortran. The nonlinear formulas may include intrinsic functions such as EXP, LN, LOG, SIN, COS, TAN, MIN, MAX, IF, LE, GE and KIP, LIP, SIP (constant, linear and monotonic spline interpolation), as well as user-written extrinsic functions.
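The three interpolation flavors named above (constant, linear and monotonic-spline lookups) can be sketched as follows. The blending-index data are made up for illustration, and SciPy here merely stands in for the corresponding IMPRESS intrinsics.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

x = np.array([0.0, 25.0, 50.0, 75.0, 100.0])   # % component in blend
y = np.array([0.0, 18.0, 30.0, 38.0, 42.0])    # measured property (made up)

kip = lambda q: y[np.searchsorted(x, q, side="right") - 1]  # piecewise constant
lip = lambda q: np.interp(q, x, y)                          # piecewise linear
sip = PchipInterpolator(x, y)                               # monotone cubic spline

q = 60.0
print(float(kip(q)), float(lip(q)), float(sip(q)))
```

Monotone (shape-preserving) splines matter here because an interpolated blending curve that overshoots its data points can make an optimizer chase property values no component mixture can deliver.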
Industrial modeling frameworks, or IMFs, are intended to provide a jump-start to an industrial project implementation, i.e., a pre-project if you will, whereby pre-configured IML files and/or IPL code are available specific to your problem at hand. The IML files and/or IPL code can be easily enhanced, extended, customized, modified, etc. to meet the diverse needs of your project as it evolves over time and use.
IMFs also provide graphical user interface prototypes for drawing the flowsheet as in Figure 1, and typical Gantt charts and trend plots to view the solution of quantity, logic and quality time-profiles. Current developments use Python 2.3 and 2.7 integrated with the open-source Dia and Matplotlib modules respectively, but other prototypes, embedded within Microsoft Excel/VBA for example, can be created in a straightforward manner.
However, the primary purpose of the IMFs is to provide a timely, cost-effective, manageable and maintainable deployment of IMPRESS to formulate and optimize complex industrial manufacturing systems in either off-line or on-line environments.
Using IMPRESS alone would be somewhat similar (but not as bad) to learning the syntax and semantics of an algebraic modeling language (AML) as well as having to code all of the necessary mathematical representations of the problem, including the details of digitizing your data into time-points and periods; demarcating past, present and future time-horizons; defining sets, index-sets and compound-sets to traverse the network or topology; calculating independent and dependent parameters to be used as coefficients and bounds; and finally creating all of the necessary variables and constraints to model the complex details of logistics and quality industrial optimization problems.
Instead, IMFs and IMPRESS provide, in our opinion, a more elegant and structured approach to industrial modeling and solving, so that you can capture the benefits of advanced decision-making faster, better and cheaper.
References
Kelly, J.D., "A regularization approach to the reconciliation of constrained data sets", Computers & Chemical Engineering, 1771, (1998).

Kelly, J.D., Mann, J.M., "Crude-oil blend scheduling optimization: an application with multi-million dollar benefits", Hydrocarbon Processing, June, 47 and July, 72, (2003).

Kelly, J.D., "Formulating production planning models", Chemical Engineering Progress, January, 43, (2004a).

Kelly, J.D., "Production modeling for multimodal operations", Chemical Engineering Progress, February, 44, (2004b).

Kelly, J.D., "Techniques for solving industrial nonlinear data reconciliation problems", Computers & Chemical Engineering, 2837, (2004c).

Kelly, J.D., "The unit-operation-stock superstructure (UOSS) and the quantity-logic-quality paradigm (QLQP) for production scheduling in the process industries", In: MISTA 2005 Conference Proceedings, 327, (2005).

Kelly, J.D., "Logistics: the missing link in blend scheduling optimization", Hydrocarbon Processing, June, 45, (2006).

Kelly, J.D., Zyngier, D., "Continuously improve planning and scheduling models with parameter feedback", FOCAPO 2008, July, (2008).

Zyngier, D., Kelly, J.D., "UOPSS: a new paradigm for modeling production planning and scheduling systems", ESCAPE 22, June, (2012).

Castillo, P.A., Kelly, J.D., Mahalec, V., "Inventory pinch analysis for gasoline blend planning", AIChE J., June, (2013).