The document discusses the history and capabilities of CP Optimizer, a constraint programming engine developed by IBM for scheduling problems. It was first released in 2007 and is now on version 12.8. CP Optimizer is used for scheduling applications across various industries and is part of CPLEX Optimization Studio. The document outlines key features like simplified modeling, automatic search, and interfaces in multiple programming languages.
An Update on the Comparison of MIP, CP and Hybrid Approaches for Mixed Resour... (Philippe Laborie)
We consider a well-known resource allocation and scheduling problem for which different approaches such as mixed-integer programming (MIP), constraint programming (CP), constraint integer programming (CIP), logic-based Benders decomposition (LBBD) and SAT modulo theories (SMT) have been proposed and experimentally compared over the last decade. Thanks to recent improvements in CP Optimizer, a commercial CP solver for generic scheduling problems, we show that a standalone tiny CP model can outperform all previous approaches and close all 335 instances of the benchmark. The article explains which components of the automatic search of CP Optimizer are responsible for this success. We finally propose an extension of the original benchmark with larger and more challenging instances.
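To make the "tiny CP model" claim concrete, here is a minimal sketch, written with the docplex.cp Python API of CP Optimizer, of the kind of compact formulation such resource allocation and scheduling problems admit. This is an illustration under invented data, not the paper's actual model:

```python
# Sketch: each job must be allocated to one facility, where it consumes
# capacity while running; minimize the makespan. All data are invented.
from docplex.cp.model import *

n_jobs, n_fac = 4, 2
dur = [[3, 4], [2, 2], [5, 6], [4, 3]]  # duration of job j on facility f
dem = [2, 1, 3, 2]                      # capacity demanded by each job
cap = [4, 3]                            # capacity of each facility

mdl = CpoModel()
jobs = [interval_var(name=f"job{j}") for j in range(n_jobs)]
# One optional interval per (job, facility) pair; exactly one is selected.
modes = [[interval_var(size=dur[j][f], optional=True, name=f"j{j}_f{f}")
          for f in range(n_fac)] for j in range(n_jobs)]
mdl.add(alternative(jobs[j], modes[j]) for j in range(n_jobs))
# Cumulative capacity constraint on each facility.
mdl.add(sum(pulse(modes[j][f], dem[j]) for j in range(n_jobs)) <= cap[f]
        for f in range(n_fac))
mdl.add(minimize(max(end_of(j) for j in jobs)))

msol = mdl.solve(TimeLimit=10)  # requires a local CP Optimizer installation
```

The whole model is three constraint groups: alternative constraints for allocation, sums of pulse expressions for capacities, and the makespan objective; everything else is delegated to the automatic search.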
Classical scheduling problems (like job-shop or RCPSP) are among the most difficult problems studied in combinatorial optimization. Still, they are far from accounting for all the complexity of industrial scheduling applications. For more than 20 years, our team at ILOG (now IBM) has been developing and integrating a large panel of techniques from AI (constraint programming, temporal reasoning, learning, ...) and OR (mathematical programming, graph algorithms, local search, ...) to solve our customers' most complex scheduling problems. This work has led to the design of CP Optimizer, a generic solver based on a very expressive (but still quite concise) mathematical modeling language for formulating complex scheduling problems. The models are solved with an automatic search algorithm that is exact, efficient, robust and continuously improving. This talk gives a short overview of CP Optimizer.
A (Not So Short) Introduction to CP Optimizer for Scheduling (Philippe Laborie)
This document introduces a tutorial on CP Optimizer for scheduling. It discusses CP Optimizer's modeling concepts for scheduling problems, which are based on intervals of time, sequences of intervals, and functions of time. These concepts provide a more natural way to model scheduling problems than traditional mathematical programming and constraint programming approaches. The document gives examples of how piecewise-linear functions, stepwise functions, and transition matrices can be used as constant structures in CP Optimizer models. It also introduces the concept of interval variables, which represent intervals of time, and optional interval variables, which allow intervals to be present or absent in solutions.
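As a concrete illustration of the interval concepts described above, here is a small sketch using the docplex.cp Python API of CP Optimizer (the tutorial itself uses OPL-style notation; names, sizes and the time bound are invented):

```python
# Sketch of interval and optional interval variables in docplex.cp.
from docplex.cp.model import *

mdl = CpoModel()
# An interval variable: a task of size (duration) 5.
setup = interval_var(size=5, name="setup")
# An optional interval variable: it may be absent from the solution.
inspection = interval_var(size=2, optional=True, name="inspection")
# Precedence: if present, the inspection starts after the setup ends.
mdl.add(end_before_start(setup, inspection))
# Presence logic: perform the inspection only if it can end by time 20.
mdl.add(if_then(presence_of(inspection), end_of(inspection) <= 20))

msol = mdl.solve(TimeLimit=5)
if msol:
    print(msol.get_var_solution("setup"))
```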
CP Optimizer is a generic system, largely based on CP, to model and solve real-world combinatorial optimization problems with a particular focus on planning and scheduling applications. It provides an algebraic language with simple mathematical concepts (such as intervals, sequences or functions) to capture the temporal dimension of planning and scheduling problems in a combinatorial optimization framework. CP Optimizer implements a model-and-run paradigm that vastly reduces the burden on the user to understand CP or scheduling algorithms: modeling is by far the most important task. The automatic search integrates a large panel of techniques from Artificial Intelligence (constraint programming, temporal reasoning, learning, ...) and Operations Research (mathematical programming, graph algorithms, local search, ...) into an exact algorithm that provides good performance out of the box and is continuously improving. This tutorial gives an overview of CP Optimizer for planning and scheduling: typical applications, modeling concepts with examples, ingredients of the automatic search, tools and performance.
CP Optimizer for Planning and Scheduling (Philippe Laborie)
CP Optimizer is a tool for modeling and solving complex scheduling problems. It uses an interval-based approach that integrates temporal reasoning directly into the optimization model, which makes it easier to express problems involving time, durations, precedence constraints and resource allocation over time. CP Optimizer combines constraint programming techniques, such as temporal constraint networks, with artificial intelligence search methods to efficiently solve large, real-world scheduling instances that are difficult for MIP solvers. It has been applied successfully to problems in areas like semiconductor manufacturing.
Modeling and Solving Scheduling Problems with CP Optimizer (Philippe Laborie)
This presentation focuses on using CP Optimizer to address scheduling problems. We will initially cover modeling concepts related to scheduling in CP Optimizer. Using examples, we will then provide details on tools, functionalities and tips for speeding up the development of your scheduling models and improving their efficiency.
Solving Industrial Scheduling Problems with Constraint Programming (Philippe Laborie)
Scheduling problems represent an important class of applications for Constraint Programming (CP) in industry. For more than 20 years, our team at ILOG (now IBM) has been developing CP technology and applying it to solve our customers' scheduling problems. In the first part of the talk, we will present some extensions of CP (interval variables, functions, sequences) that capture the temporal dimension of scheduling and make it possible, or at least easier, to design efficient models for complex problems. But a powerful modeling language is not sufficient: in an industrial context, one must also simplify the complete process of model design (connection to data, data processing and consistency checking, model debugging, etc.) and problem resolution (robust automatic search algorithm, warm start, model chaining, etc.); this will be the topic of the second part of the talk. The presentation will be illustrated with examples using IBM ILOG CP Optimizer.
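For instance, the sequence extension mentioned above can be sketched as follows in the docplex.cp Python API; the jobs, their types, and the transition-time matrix are invented, and we assume a docplex version that accepts a plain list of lists as the transition matrix:

```python
# Sketch: three jobs on one machine with sequence-dependent transition
# (setup) times between job types.
from docplex.cp.model import *

mdl = CpoModel()
jobs = [interval_var(size=s, name=f"job{i}") for i, s in enumerate([4, 3, 5])]
trans = [[0, 2, 1],   # minimal delay between consecutive jobs,
         [2, 0, 3],   # indexed by the type of each job
         [1, 3, 0]]
machine = sequence_var(jobs, types=[0, 1, 2], name="machine")
mdl.add(no_overlap(machine, trans))
mdl.add(minimize(max(end_of(j) for j in jobs)))

msol = mdl.solve(TimeLimit=5)
```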
New Results for the GEO-CAPE Observation Scheduling Problem (Philippe Laborie)
A challenging Earth-observing satellite scheduling problem was recently studied in (Frank, Do and Tran 2016), for which the best approach so far on the proposed benchmark is a time-indexed Mixed Integer Linear Program (MILP) formulation. This MILP formulation produces feasible solutions but is not able to prove optimality or to provide tight optimality gaps, making it difficult to assess the quality of existing solutions. In this paper, we first introduce an alternative disjunctive MILP formulation that manages to close more than half of the instances of the benchmark. This MILP formulation is then relaxed to provide good bounds on optimal values for the unsolved instances. We then propose a CP Optimizer model that consistently outperforms the original time-indexed MILP formulation, reducing the optimality gap by more than a factor of four. This Constraint Programming (CP) formulation is very concise: we give its complete OPL implementation in the presentation. Some improvements of this CP model are reported, resulting in an approach that produces optimal or near-optimal solutions (optimality gap smaller than 1%) for about 80% of the instances. Unlike the MILP formulations, it is able to quickly produce good-quality schedules, and it is expected to be flexible enough to handle the changing requirements of the application.
Reference: Philippe Laborie and Bilal Messaoudi. New Results for the GEO-CAPE Observation Scheduling Problem. In Proceedings of ICAPS 2017.
Recent advances on large scheduling problems in CP Optimizer (Philippe Laborie)
1) CP Optimizer is a modeling language that extends ILP and CP with interval variables and functions, allowing compact formulations for scheduling problems. It uses an automatic search algorithm that is complete, anytime, efficient, and scalable.
2) Performance improvements in the upcoming CP Optimizer 12.9 release allow it to solve classical RCPSP benchmarks quickly and to address much larger instances with up to 500,000 tasks (a minimal RCPSP model sketch follows this list).
3) The automatic search employs techniques like failure-directed search, large neighborhood search, and iterative diving that cooperate to solve problems faster.
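As a point of reference for item 2, here is a minimal RCPSP sketch in the docplex.cp Python API with invented data; real benchmark instances simply scale up the same three ingredients (interval variables, precedences, cumulative resources):

```python
# Minimal RCPSP sketch: 4 tasks, precedence constraints, and one renewable
# resource of capacity 3. All data are invented.
from docplex.cp.model import *

durations = [3, 2, 4, 2]
demands = [2, 1, 2, 1]                   # resource demand of each task
capacity = 3
precedences = [(0, 2), (1, 2), (2, 3)]   # (before, after) pairs

mdl = CpoModel()
tasks = [interval_var(size=durations[i], name=f"T{i}") for i in range(4)]
mdl.add(end_before_start(tasks[a], tasks[b]) for a, b in precedences)
# Total demand of overlapping tasks never exceeds the resource capacity.
mdl.add(sum(pulse(tasks[i], demands[i]) for i in range(4)) <= capacity)
mdl.add(minimize(max(end_of(t) for t in tasks)))

msol = mdl.solve(TimeLimit=5)
```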
This paper presents the concept of an objective landscape in the context of Constraint Programming. An objective landscape is a lightweight structure providing information on the relation between decision variables and objective values; it can be computed quickly, once and for all, at the beginning of the search, and is then used to guide it. It is particularly useful for decision variables with large domains and a continuous semantics, which is typically the case for time or resource-quantity variables in scheduling problems. This concept was recently implemented in the automatic search of CP Optimizer and resulted in an average speed-up of about 50% on scheduling problems, and of up to almost two orders of magnitude for some applications.
Industrial project and machine scheduling with Constraint Programming (Philippe Laborie)
More often than not, project and machine scheduling problems are addressed either by generic mathematical programming techniques (like MILP) or by problem-specific exact or heuristic approaches. MILP formulations are commonly used to describe the problem in mathematical terms and to provide optimal solutions or bounds for small problem instances. As these formulations usually do not scale well, one typically resorts to heuristics for handling large and complex industrial problems.
Though constraint programming (CP) techniques represent the state of the art in several classical project and machine scheduling benchmarks and have been used for almost 30 years for solving industrial problems, they are still seldom considered as an alternative approach in the scheduling community. A possible reason is that, for years, in the absence of efficient and robust automatic search algorithms, CP techniques have been difficult to use for non-CP experts.
We will explain why we think those days are over and illustrate our arguments with CP Optimizer, a generic system, largely based on CP, for modeling and solving real-world scheduling problems.
CP Optimizer extends linear programming with an algebraic language using simple mathematical concepts (such as intervals, sequences and functions) to capture the temporal dimension of scheduling problems in a combinatorial optimization framework. CP Optimizer implements a model-and-run paradigm that vastly reduces the burden on the user to understand CP or scheduling algorithms: modeling is by far the most important task. The automatic search combines a wide variety of techniques from Artificial Intelligence (constraint programming, temporal reasoning, learning, etc.) and Operations Research (mathematical programming, graph algorithms, local search, etc.) in an exact algorithm that provides good performance out of the box and is continuously improving.
Optimization: from mathematical tools to real applications (Philippe Laborie)
This document discusses the gap between mathematical optimization tools and real-world applications. It covers several key areas in bridging this gap, including reconciling different stakeholders with different objectives, using appropriate language and terminology, properly defining the problem, dealing with data issues, determining what makes a good solution, and facilitating solution display and user interaction. The goal is to provide approximate answers to the right real-world problems, rather than perfectly solving simplified mathematical abstractions.
Multiclass classification using Massively Threaded Multiprocessors (sergherrero)
This document presents a method for parallelizing multiclass support vector machine (SVM) classification on massively threaded multiprocessors like graphics processing units (GPUs). It begins with background on data growth, multiclass classification, SVMs, related work, and GPU architecture. It then introduces Parallel-Parallel SMO (P2SMO), which parallelizes the sequential minimal optimization (SMO) algorithm used to train SVMs by assigning each thread to optimize two Lagrange multipliers. Performance results demonstrating reduced training and classification time on GPUs are also discussed.
The document provides a summary of Pramod Govind Baravkar's work experience and qualifications. It summarizes his experience as a Project Engineer at Honeywell Automation India Ltd, where he has led teams and worked on projects in various industries. It also lists his education and certifications in computer science and instrumentation and control systems.
Business Optimizer is an optimization engine (or “constraint satisfaction solver”) platform provided with Red Hat Decision Manager. It enables regular Java developers to create solvers for complex planning problems using a variety of out-of-the-box provided algorithms.
Enterprise COBOL for z/OS, V5.1 at Innovate 2014 (Dwayne Moore)
This document contains information about several sessions at an IBM conference focusing on the IBM z/OS compilers. It provides details on session topics like how compilers can help reduce operating costs for business applications on System z and techniques for revitalizing COBOL applications. It also lists times and locations for sessions on optimizing performance with the latest System z compilers. Attendees can register separately for some sessions and meet with product managers for one-on-one discussions.
The document discusses the history and development of the Eclipse software platform. It describes how Eclipse was created by IBM in 1998 as a Java IDE and was open-sourced in 2001 to accelerate adoption. The Eclipse Foundation was formed in 2004 to oversee the project. The document outlines Eclipse's modular architecture based on OSGi bundles and its use of extension points and stable APIs to enable growth. It also discusses Eclipse's planning and release cycles, which involve continuous integration and input from the Eclipse community.
Deploying End-to-End Deep Learning Pipelines with ONNX (Databricks)
A deep learning model is often viewed as fully self-contained, freeing practitioners from the burden of data processing and feature engineering. However, in most real-world applications of AI, these models have similarly complex requirements for data pre-processing, feature extraction and transformation as more traditional ML models. Any non-trivial use case requires care to ensure no model skew exists between the training-time data pipeline and the inference-time data pipeline.
This is not simply theoretical – small differences or errors can be difficult to detect but can have dramatic impact on the performance and efficacy of the deployed solution. Despite this, there are currently few widely accepted, standard solutions for enabling simple deployment of end-to-end deep learning pipelines to production. Recently, the Open Neural Network Exchange (ONNX) standard has emerged for representing deep learning models in a standardized format.
While this is useful for representing the core model inference phase, we need to go further to encompass deployment of the end-to-end pipeline. In this talk I will introduce ONNX for exporting deep learning computation graphs, as well as the ONNX-ML component of the specification, for exporting both ‘traditional’ ML models as well as common feature extraction, data transformation and post-processing steps.
I will cover how to use ONNX and the growing ecosystem of exporter libraries for common frameworks (including TensorFlow, PyTorch, Keras, scikit-learn and now Apache SparkML) to deploy complete deep learning pipelines.
Finally, I will explore best practices for working with and combining these disparate exporter toolkits, as well as highlight the gaps, issues and missing pieces to be taken into account and still to be addressed.
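As an illustration of the export-then-serve flow described above, here is a hedged sketch (not from the talk) that exports a toy PyTorch model to ONNX and runs it with ONNX Runtime; the model shape, file name, and tensor names are arbitrary, and a real pipeline would also export its pre- and post-processing steps:

```python
# Export a small PyTorch model to ONNX, then serve it with ONNX Runtime.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()
dummy = torch.randn(1, 4)  # example input used to trace the graph
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["output"],
                  dynamic_axes={"input": {0: "batch"}})

# Inference-time side: load the exported graph and run it on new data.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
x = np.random.randn(3, 4).astype(np.float32)
(scores,) = sess.run(None, {"input": x})
print(scores.shape)  # (3, 2)
```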
This document provides an overview and introduction to DevOps. It discusses that DevOps involves people, processes, and tools working together. It describes how DevOps extends lean and agile principles across the entire application development lifecycle from development to operations. Specifically, it promotes a culture of collaboration between development and operations teams. It also explains how DevOps aims to address bottlenecks in the software delivery process through practices like continuous integration, deployment, and testing.
This document discusses modular training for IBM software. It proposes creating self-contained training modules that can be combined as needed by global training partners. Each module should start with students unable to perform a task and end with them able to do so. Examples of potential modules for Apache HTTP Server are provided, along with suggestions for using virtual machines and lab exercises across modules. The challenges of product enhancements and maintaining narrative structure with modular content are also addressed.
Continuous integration and deployment has become an increasingly standard and common practice in software development. However, doing this for machine learning models and applications introduces many challenges. Not only do we need to account for standard code quality and integration testing, but how do we best account for changes in model performance metrics coming from changes to code, deployment framework or mechanism, pre- and post-processing steps, changes in data, not to mention the core deep learning model itself?
In addition, deep learning presents particular challenges:
* model sizes are often extremely large and take significant time and resources to train
* models are often more difficult to understand and interpret, making it more difficult to debug issues
* inputs to deep learning are often very different from the tabular data involved in most ‘traditional machine learning’ models
* model formats, frameworks and the state-of-the-art models and architectures themselves are changing extremely rapidly
* usually many disparate tools are combined to create the full end-to-end pipeline for training and deployment, making it trickier to plug together these components and track down issues.
We also need to take into account the impact of changes on wider aspects such as model bias, fairness, robustness and explainability. And we need to track all of this over time and in a standard, repeatable manner. This talk explores best practices for handling these myriad challenges to create a standardized, automated, repeatable pipeline for continuous deployment of deep learning models and pipelines. I will illustrate this through the work we are undertaking within the free and open-source IBM Model Asset eXchange.
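One concrete instance of such a standardized, repeatable check is a CI quality gate that compares a freshly trained model against a pinned baseline. The sketch below is hypothetical: the file paths, metric format, and threshold are assumptions for illustration, not part of the Model Asset eXchange:

```python
# Pytest-style CI gate: fail the pipeline if accuracy regresses.
import json

BASELINE_FILE = "metrics/baseline.json"    # committed with the repo (assumed)
CANDIDATE_FILE = "metrics/candidate.json"  # written by the training job (assumed)
TOLERANCE = 0.01  # allow at most one point of accuracy drop

def test_model_accuracy_does_not_regress():
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)["accuracy"]
    with open(CANDIDATE_FILE) as f:
        candidate = json.load(f)["accuracy"]
    assert candidate >= baseline - TOLERANCE, (
        f"accuracy regressed: {candidate:.4f} < {baseline:.4f} - {TOLERANCE}")
```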
NBCUniversal is implementing DevOps practices like continuous integration, delivery, and testing using tools from IBM like UrbanCode Deploy, IBM Dev-Test Environment as a Service (IDTES), and IBM Cloud Orchestrator. This allows them to continuously test code, deploy applications across hybrid clouds, and improve collaboration between development and operations teams. NBCUniversal's DevOps practices aim to address issues like slow release processes and lack of integration between development stages.
Unicorns on an Aircraft Carrier: CDSummit London and Stockholm Keynote (Sanjeev Sharma)
The document discusses achieving business value through innovation and optimization using DevOps practices. It describes how DevOps works well for small isolated teams but greater collaboration is needed across larger organizations. The author advocates a multi-speed approach to IT that balances innovation using newer technologies with stability from more traditional systems. Standardizing tools and practices can help scale DevOps across the enterprise by breaking down silos.
S106: Using IBM UrbanCode Deploy to deliver your apps to CICS (nick_garrod)
GSE Nordic 2015 Using IBM UrbanCode Deploy to deliver your apps to CICS. Deploying applications to CICS can be tricky, and you may be struggling to figure out how to handle the many new zFS artifacts such as cloud, bundles, Java, and web services. This could even be slowing down the adoption of new technologies that could deliver the solutions your business needs. This session will introduce IBM UrbanCode Deploy as a tool to automate many types of application deployments through your environments. It can provide rapid feedback and continuous delivery in agile development while providing the audit trails, versioning and approvals needed in production. See the new z/OS and CICS TS plug-ins for UrbanCode Deploy in action to deploy COBOL, web services, and Java applications to CICS in a single action.
This document provides an overview of BlueMix DevOps Services. It begins by stating that BlueMix DevOps Services is a fully hosted, cloud-based software development tool to enable quicker startup and time to value. It then introduces IBM BlueMix DevOps Services, highlighting that it promotes incremental and frictionless adoption of DevOps services for BlueMix. The document outlines some key DevOps Services like Git hosting, mobile quality, an integrated development environment, agile planning and tracking, performance monitoring, and deployment automation. It provides steps to get started with BlueMix DevOps Services in minutes and notes that agile development in the cloud is easy with built-in agile process support, work items to track planning, and agile tools.
Decision Optimization - CPLEX Optimization Studio - Product Overview (SanjayKPrasad2)
This document provides an overview of IBM's prescriptive analytics software IBM ILOG CPLEX Optimization Studio (COS). It discusses how prescriptive analytics fits within the analytics portfolio including descriptive, predictive, and prescriptive analytics. It then describes COS's target audiences, key features like optimization engines and modeling tools, example applications that have achieved significant cost savings, and how optimization models are created and deployed.
The document provides instructions for installing OpenERP on Ubuntu, including installing PostgreSQL, installing and configuring the OpenERP server and modules, and installing the OpenERP client and web interfaces to access the system. It also briefly outlines installing OpenERP on Windows using an all-in-one installer or independent installation of each component. The goal is to get the reader started with a basic OpenERP installation and understand the core components and architecture.
The document discusses the Eclipse Way, including how Eclipse was created and developed as an open source project. It describes IBM's Krakow Software Lab and their practices for planning releases, continuous integration, code reviews, and using tools like Mylyn. The presentation emphasizes community involvement, stable APIs, modularity, and following practices that allow Eclipse to be industry-leading, extendable, and constantly evolving over 10+ years.
CampDevOps keynote - DevOps: Using 'Lean' to eliminate Bottlenecks (Sanjeev Sharma)
This document summarizes Sanjeev Sharma's presentation on adopting DevOps practices to eliminate bottlenecks using Lean principles. The presentation covers: 1) viewing DevOps through a "Lean" lens to reduce waste and improve flow; 2) addressing bottlenecks with techniques like shift-left testing and full-stack deployment, with an emphasis on culture and people; and 3) resources for DevOps assessments and further information. The overall message is that DevOps can help optimize software delivery through collaboration, automation, and continuous feedback.
Webcast: Automating Application Deployment (DevOps) (Felipe Freire)
The document discusses DevOps and application deployment automation using IBM UrbanCode Deploy. It begins with an introduction to DevOps and the challenges of traditional software delivery approaches. It then outlines the principles and values of DevOps in integrating development and operations. The remainder of the document demonstrates the key capabilities of IBM UrbanCode Deploy for modeling applications and components, managing environments, designing automated deployment processes, and integrating with other tools. It concludes with a demonstration of the basic functionality.
Scaling your application efficiently is key to achieving a good rate of return, and performance monitoring is an important tool to ensure you scale as expected.
Performance monitoring of a single Node.js application is relatively straightforward, with a variety of techniques and tooling options available to a developer. In this presentation, we will follow the journey of how to apply these techniques when scaling up to a clustered Node.js deployment in the cloud. We will show how to use freely available monitoring tooling and open source solutions like appmetrics, Elasticsearch and Kibana to provide real-time monitoring and performance tracking for enterprise solutions. Come and learn how to keep on top of how your application is performing and find out about problems before they occur.
Extracting Business Rules from COBOL: A Model-Based Framework (Valerio Cosentino)
This document presents a model-based framework for extracting business rules from COBOL programs. The framework includes steps for discovering the model from COBOL code, identifying business terms and rules, and representing the rules both textually and visually. An early validation on a sample IBM COBOL application found that the framework could identify complete business rules, though some included technical details. Future work includes further validation and expanding the framework to extract rules across all layers of an application.
The document discusses the new features in Eclipse Juno, including a redesigned workbench with a new programming model, global search bar, more flexible part layout, support for Git, and tools to support Java 7 language features like diamond operator, multi-catch, and try-with-resources. It also outlines plans for Eclipse 4.3 including support for Java 8 language features and code recommenders.
The document discusses an IBM tool for managing and enforcing API contracts. It provides reports on incorrect API usage, missing documentation, and binary compatibility issues. The tool integrates with Eclipse and the build process to validate APIs during development and builds. Future work may include support for additional project types and package versioning.
Similar to Some "Do"s and "Dont'"s of Benchmarking (20)
Phenomics-assisted breeding in crop improvement (IshaGoswami9)
As the global population grows toward roughly 9 billion by 2050, and as the climate changes, it is becoming difficult to meet the food requirements of such a large population. Facing the challenges presented by resource shortages, climate change, and an increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding the complex characteristics governed by multiple genes, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data linkable to genomic information at all growth stages have become as important as genotyping; thus, high-throughput phenotyping has become the major bottleneck restricting crop breeding.
Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology, and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
ESR spectroscopy in liquid food and beverages (PRIYANKA PATEL)
With an increasing population, people must rely more on packaged foodstuffs, and packaging requires food preservation. Various methods exist for treating food to preserve it, and irradiation is one of them. It is among the most common and most harmless methods of food preservation, as it does not alter the essential micronutrients of food materials. Although irradiated food does not harm human health, quality assessment of the food is still required to provide consumers with the necessary information. ESR spectroscopy is a sophisticated way to investigate the quality of food and the free radicals induced during its processing. The ESR spin-trapping technique is useful for detecting highly unstable radicals in food, and the antioxidant capability of liquid food and beverages is mainly assessed with this technique.
The thematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk at the Journées Nationales du GDR GPL 2024.
Immersive Learning That Works: Research Grounding and Paths Forward (Leonel Morgado)
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences and spotlighting research frontiers, along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
Authoring a personal GPT for your research and practice: How we created the Q...Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done in teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process, but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop ideas for their own qualitative coding ChatGPT. Participants who have a paid ChatGPT Plus subscription can create a draft of their own assistant. The organizers will provide course materials and a slide deck that participants can use to continue developing their custom GPT. A paid ChatGPT Plus subscription is not required to participate in the workshop, only to try out personal GPTs during it.
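For readers who prefer an API to the no-code GPT builder, the sketch below shows the same idea programmatically with the OpenAI Python client; the codebook, the prompt wording, and the model name are my own illustrative assumptions, and the workshop itself builds a custom ChatGPT (QUAL-E) rather than writing code.

```python
# pip install openai  (requires an OPENAI_API_KEY environment variable)
from openai import OpenAI

client = OpenAI()

# Hypothetical codebook: in practice this encodes the team's agreed coding
# criteria, grounded in shared definitions of the major concepts.
CODEBOOK = """
You are a qualitative-coding assistant for immersive learning research.
Assign each excerpt one or more of these codes, citing the exact phrase
that justifies each code:
- SYSTEM: immersion produced by the technical system
- NARRATIVE: immersion produced by story or content
- AGENCY: immersion produced by learner choice and action
"""

def code_excerpt(excerpt: str, model: str = "gpt-4o") -> str:
    """Ask the model to apply the codebook to one excerpt. A human coder
    still reviews the output: the aim is triangulation, not replacement."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": CODEBOOK},
            {"role": "user", "content": f"Excerpt: {excerpt}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(code_excerpt("Students said they forgot the classroom existed "
                       "once they could steer the simulation themselves."))
```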
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptxMAGOTI ERNEST
Although Artemia has been known to man for centuries, its use as a food for the culture of larval organisms apparently began only in the 1930s, when several investigators found that it made an excellent food for newly hatched fish larvae (Litvinenko et al., 2023). As aquaculture developed in the 1960s and 1970s, the use of Artemia also became more widespread, due both to its convenience and to its nutritional value for larval organisms (Arenas-Pardo et al., 2024). The fact that Artemia dormant cysts can be stored for long periods in cans and then used as an off-the-shelf food requiring only 24 h of incubation makes them the most convenient, least labor-intensive live food available for aquaculture (Sorgeloos & Roubach, 2021). The nutritional value of Artemia, especially for marine organisms, is not constant but varies both geographically and temporally. During the last decade, however, both the causes of Artemia's nutritional variability and methods to improve poor-quality Artemia have been identified (Loufi et al., 2024).
Brine shrimp (Artemia spp.) are used in marine aquaculture worldwide. Annually, more than 2,000 metric tons of dry cysts are used for the cultivation of fish, crustacean, and shellfish larvae. Brine shrimp are important to aquaculture because newly hatched brine shrimp nauplii (larvae) provide a food source for many fish fry (Mozanzadeh et al., 2021). The culture and harvesting of brine shrimp eggs represent another aspect of the aquaculture industry. Nauplii and metanauplii of Artemia, commonly known as brine shrimp, play a crucial role in aquaculture due to their nutritional value and suitability as live feed for many aquatic species, particularly in larval stages (Sorgeloos & Roubach, 2021).
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/
When I was asked to give a companion lecture in support of 'The Philosophy of Science' (https://shorturl.at/4pUXz), I decided not to walk through the details of the many methodologies in order of use. Instead, I chose to employ a long-standing, and ongoing, scientific development as an exemplar: the ever-evolving story of Thermodynamics, a scientific investigation at its best.
Conducted over a period of more than 200 years, Thermodynamics R&D and its applications have benefited from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology, which in turn has continually improved the accuracy of measurement and modelling at both the micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science, engineering, and technology, spanning micro-tech to aerospace and cosmology. I can think of no better story to illustrate the breadth of scientific methodologies and applications at their best.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational field capable of driving flat rotation (i.e. circular orbits at a constant speed for all radii) of test masses on a thin spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or a modified gravity theory is mitigated, at least in part.
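As a textbook reminder of what 'flat rotation' demands (my addition, not part of the paper's abstract): a constant circular speed at all radii forces a logarithmic potential, i.e. the potential of an isothermal sphere, which is why such a structure also deflects light like one.

```latex
% Circular-orbit condition in a spherically symmetric potential \Phi(r):
% v(r)^2 / r = d\Phi/dr. Imposing a flat curve, v(r) = v_0, gives
\[
  \frac{v_0^{2}}{r} \;=\; \frac{\mathrm{d}\Phi}{\mathrm{d}r}
  \quad\Longrightarrow\quad
  \Phi(r) \;=\; v_0^{2}\,\ln\!\frac{r}{r_0},
\]
% a logarithmic potential: exactly the equipotential structure of an
% isothermal sphere, matching the abstract's light-deflection remark.
```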
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the ‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space, because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia DR3 have positive caustic velocities, making them fundamentally different from the phase-mixed chevrons found in simulations at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based on a simple phase-mixing model, the observed number of caustics is consistent with a merger that occurred 1–2 Gyr ago. We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data 1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’ did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within the last few Gyr, consistent with the body of work surrounding the VRM.
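A rough sense of the dating argument (my hedged paraphrase; the paper's phase-mixing model is more careful): debris wraps in phase space roughly once per radial period, so counting caustics bounds the time since collision.

```latex
% If debris with radial period T_r has mixed for a time t, the number of
% phase-space wraps (caustics) grows roughly linearly with time:
\[
  N_{\mathrm{caustics}} \;\sim\; \frac{t}{T_r}.
\]
% With inner-halo radial periods of a few hundred Myr, a handful of
% observed caustics points to t of order 1-2 Gyr; an 8-11 Gyr-old merger
% would instead have produced many more, fully phase-mixed wraps.
```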