Managing large (and small) R based solutions with R Suite – Wit Jakuczun
The presentation I gave at DataMass Gdańsk Summit in 2017:
R is a great tool for data scientists. Being very dynamic and popular, it is now one of the most important technologies on the market. Unfortunately, out-of-the-box R is not suited for large-scale applications. I will present R Suite, an open-source solution developed by us, for us, to manage the R development process.
There is a multitude of organisations in Australia and New Zealand pursuing spatial data supply chain initiatives. There is little to no co-ordination of these developments, leading to duplication of effort, wasted investment and missed opportunities. This presentation summarises the results of the CRC-SI “Alignment Study”: an inventory of these initiatives, the gaps and overlaps among them, and the research opportunities that arise.
In Information and Communication Technology (ICT) a ‘deliverable’ may be either software (perceived as an ‘output’) or a service (perceived as an ‘outcome’). The differences between software and service have led to the design of parallel models and lifecycles with more commonalities than differences, which discourages the joint adoption of different frameworks. For instance, a software project could be managed by applying best practices for services (e.g. ITIL), while some processes (e.g. Verification & Validation) are better defined in models from the Software Management domain. This paper therefore aims at reconciling these differences and provides suggestions for a better joint usage of models/frameworks. To unify existing models we use the LEGO approach, which takes the element of interest from any candidate model/framework and inserts it into the process architecture of the target Business Process Model (BPM) of an organization, strengthening the organizational way of working. An example of a LEGO application is presented to show the benefit of viewing the ‘software + service’ sides as a whole across the project lifecycle, increasing the number of sources available for this type of improvement task.
How to Streamline Incident Response with InfluxDB, PagerDuty and Rundeck – InfluxData
Mean Time to Resolution (MTTR) is a foundational KPI for most organizations. DevOps and SRE teams are under intense pressure to reduce MTTR when resolving incidents. Often parts of incident response processes are manual, bringing together alerts, runbooks, ad-hoc scripts, and people to form a response.
In this webinar, we will show you how to improve resolution time by configuring InfluxDB notification endpoints to PagerDuty and triggering auto-remediations with Rundeck. Using Rundeck’s automated runbooks, customers have experienced up to 50% reduction in incident response time, greatly improving team productivity and reducing unnecessary outage time.
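The InfluxDB-to-PagerDuty hookup described above ultimately posts an event to PagerDuty's Events API v2. As a rough illustration (not taken from the webinar; the routing key and alert fields below are placeholders), a minimal payload builder might look like:

```python
import json

def pagerduty_event(routing_key, summary, source, severity="critical"):
    """Build a PagerDuty Events API v2 'trigger' payload.

    Minimal sketch: routing_key is the integration key of a PagerDuty
    service, and the alerting pipeline would POST this payload to
    https://events.pagerduty.com/v2/enqueue.
    """
    return {
        "routing_key": routing_key,
        "event_action": "trigger",
        "payload": {
            "summary": summary,    # short human-readable alert text
            "source": source,      # e.g. the host or check that fired
            "severity": severity,  # critical / error / warning / info
        },
    }

# Hypothetical alert, as the notification endpoint might emit it:
event = pagerduty_event("PLACEHOLDER_KEY", "CPU usage above threshold",
                        "influxdb-check-cpu")
print(json.dumps(event, indent=2))
```

Rundeck auto-remediation would then be triggered from the PagerDuty side (or via a webhook), outside the scope of this sketch.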
Crossing the low-code and pro-code chasm: a platform approach – Asanka Abeysinghe
Organizations are now using low-code and pro-code tools to build digital experiences internally and externally. However, not having the right alignment between these two approaches slows down delivery.
Different developer personas that work in silos, no connection between low-code and pro-code applications, low-code creating unmanageable shadow IT applications, no single codebase or build pipeline, and interruptions to the professional developer flow are some significant drawbacks.
In this session, Asanka will look at a platform approach to bridge the low-code and pro-code chasm.
INTERFACE, by apidays - Crossing the low-code and pro-code chasm: a platform... – apidays
INTERFACE, by apidays 2021 - It’s APIs all the way down
June 30, July 1 & 2, 2021
Crossing the low-code and pro-code chasm: a platform approach
Asanka Abeysinghe, Chief Technology Evangelist at WSO2
Measure and Increase Developer Productivity with Help of Serverless at JCON 2... – Vadym Kazulkin
The goal of Serverless is to focus on writing the code that delivers business value and offload everything else to your trusted partners (like cloud providers or SaaS vendors). You want to iterate quickly, and today’s code quickly becomes tomorrow’s technical debt. In this talk we will show why Serverless adoption increases developer productivity and how to measure it. We will also walk through AWS Serverless architectures where you only glue together different Serverless managed services, relying solely on configuration and minimizing the amount of code written.
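The "write only the business logic" point from the abstract is easiest to see in a handler function. A minimal sketch (illustrative only, not from the talk; in the glue-services style described, routing, scaling and retries would all live in configuration around this function, e.g. behind an API Gateway route):

```python
import json

def handler(event, context=None):
    """Minimal AWS Lambda-style handler (Python runtime shape).

    The event dict mimics an API Gateway proxy event; everything else
    (deployment, routing, scaling) is configuration, not code.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a hand-built event:
resp = handler({"queryStringParameters": {"name": "serverless"}})
```

The business value lives in the few lines inside `handler`; that small surface area is also what makes productivity easier to measure.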
This talk, given at the VA Smalltalk Forum Europe 2010 in Stuttgart, gives an overview of techniques and tools to get existing Smalltalk projects back to speed and productivity.
The talk included some demos of tools we created for some of our customers to make their project life much easier.
CodeValue Architecture Next 2018 - Executive track dilemmas and solutions in... – Erez Pedro
Modern software projects are challenging to develop. Eran Stiller, Ronen Rubinfeld, and Erez Pedro from CodeValue show a method for conducting multidisciplinary product discovery.
A talk about the OSGeo Live project, covering 43 projects that are available in a live DVD format (for you to run without installing). The project is much improved, with OGC documentation and descriptions of many of the projects. New this year (thanks to some sponsorship) are quickstarts for several of the projects.
An amazing E-Degree that will help you learn full-stack web development, whatever your background, and build a career in the web development world! Whether you are just starting out in web development or already have a few years of experience under your belt, this unique E-Degree program will be perfect for you to master the entire JavaScript ecosystem.
So, what are you waiting for? Enroll now and get started!
Embedded Projects in GlobalLogic: News from the Front Line – GlobalLogic Ukraine
This report provides a look inside the ongoing successful Embedded projects in GlobalLogic. You will learn some interesting details: Who are the company's main clients in the embedded field? What technologies are used? How is the communication process built? And more.
This presentation by Oleksandr Shevchenko (Engineering Consultant, GlobalLogic Lviv) was delivered at GlobalLogic Lviv Embedded TechTalk on November 23, 2017.
Measure and Increase Developer Productivity with Help of Serverless at AWS Co... – Vadym Kazulkin
The goal of Serverless is to focus on writing the code that delivers business value and offload everything else to your trusted partners (like Cloud providers or SaaS vendors). You want to iterate quickly and today’s code quickly becomes tomorrow’s technical debt. In this talk we will show why Serverless adoption increases the developer productivity and how to measure it. We will also go through AWS Serverless architectures where you only glue together different Serverless managed services relying solely on configuration, minimizing the amount of the code written.
IBM Bluemix OpenWhisk: Interconnect 2016, Las Vegas: CCD-1088: The Future of ... – OpenWhisk
Learn more about IBM Bluemix OpenWhisk, a serverless, event-driven compute platform that quickly executes application logic in response to events or direct invocations from web/mobile apps or other endpoints.
In many projects, the learning curve for new project members is simply too steep. Following a high-level systems introduction (frequently laden with slews of somewhat meaningless presentation pictures), a new developer is assigned to a team and exposed to a large and unknown legacy code base.
The next, often frustrating, phase taxes the patience of managers, colleagues, and newcomers alike: everyone wants to reduce the time before the newcomer can become productive. How can the code structure help achieve this?
This session presents some battle-proven recommendations for structuring projects and code to increase visibility and reduce the learning curve for old and new project members alike.
Lennart Jörelid, jGuru
Presentation delivered during Data Science Rzeszow meetup:
I will present reasons why optimization is superior to predictive algorithms in practical data science applications. I will cover example case studies, tools, and hints from my experience delivering hybrid solutions that exploit both prediction and optimization.
Always Be Deploying. How to make R great for machine learning in (not only) E... – Wit Jakuczun
The presentation I delivered at WhyR 2019.
Abstract:
For many years software engineers have put enormous effort into developing best practices for delivering stable and maintainable software. How can R users benefit from this experience? I will try to answer this question by going through several concepts and tools that are natural for software engineers but are often undervalued by R users.
I will start with a description of the deployment process, because this is the ultimate step that exposes all weaknesses. You will learn about structuring an R project, using abstractions to manage a model’s features, automating the model-building process, optimizing the performance of the solution, and the challenges of the deployment process itself.
More Related Content
Similar to Bringing the Power of LocalSolver to R: a Real-Life Case-Study
Driving your marketing automation with multi-armed bandits in real time – Wit Jakuczun
Presentation delivered at Big Data Tech Warsaw 2019 by me and Maciej Próchniak from TouK.
Multi-armed bandits vs simple A/B testing; the architecture of the solution – how to connect Flink, Nussknacker and R; and other use cases – what else is a good fit for a similar architecture.
We observe that many of our customers are actively adopting various marketing automation solutions. While most of them offer basic A/B testing modules, these are often too simple for highly dynamic conditions. Better outcomes can be achieved using e.g. multi-armed bandit algorithms; however, it is not so straightforward to deploy them in a real-time production environment.
In our presentation, we will use a platform based on Apache Flink, Nussknacker (our custom GUI) and RStudio + R Suite, everything deployed on Kubernetes. The main goal of our talk is to show how, using the proposed tools, we can create a complete flow – from model creation, through deployment, to reinforcement learning – that helps automate marketing communication without the need for custom code development.
The talk is partially based on our former deployments of similar solutions; many ideas, however, are new.
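The contrast the abstract draws between static A/B splits and bandits can be sketched in a few lines. This is a toy epsilon-greedy policy, not the Flink/Nussknacker pipeline from the talk, and the per-variant conversion rates are made up:

```python
import random

def epsilon_greedy(true_rates, steps=10000, eps=0.1, seed=42):
    """Toy epsilon-greedy bandit: explore with probability eps, else
    send traffic to the variant with the best observed rate so far."""
    rng = random.Random(seed)
    n_arms = len(true_rates)
    pulls = [0] * n_arms   # how often each variant was shown
    wins = [0] * n_arms    # how often it converted
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n_arms)  # explore a random variant
        else:
            rates = [wins[i] / pulls[i] if pulls[i] else 0.0
                     for i in range(n_arms)]
            arm = max(range(n_arms), key=rates.__getitem__)  # exploit
        pulls[arm] += 1
        wins[arm] += rng.random() < true_rates[arm]  # simulated outcome
    return pulls

pulls = epsilon_greedy([0.02, 0.05, 0.04])  # hypothetical conversion rates
```

Unlike a fixed 50/50 A/B split, the traffic distribution shifts toward the best-performing variant while the experiment is still running.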
Know your R usage workflow to handle reproducibility challenges – Wit Jakuczun
R is used in vastly different ways, from pure ad hoc analysis by hobbyists to organized and structured workflows in an enterprise. Each way of using R brings different reproducibility challenges. Going through a range of typical workflows, we will show that understanding reproducibility must start with understanding your workflow. While presenting the workflows, we will show how we deal with reproducibility challenges using the open-source R Suite (http://rsuite.io) solution developed by us to support our large-scale R development.
Large scale machine learning projects with R Suite – Wit Jakuczun
Agenda for the workshop I conducted at the ML@Enterprise conference that took place on 14th of December 2017 in Warsaw.
Machine learning is not only about algorithms. Machine learning is about value, and value can be achieved only after proper deployment of machine learning solutions. I will present best practices for managing R based ML projects, using our open-source tool R Suite (http://rsuite.io/). During the workshop I will talk about:
– project structure
– development cycle
– deployment
– testing
20170928 WhyR: R as the main platform for advanced analytics in the enterprise – Wit Jakuczun
Presentation (in Polish) I gave at the WhyR conference in Warsaw. The abstract:
The world of hermetic analytical platforms is slowly becoming history. Today, advanced analytics is being pushed forward by the open-source world, supported by the biggest players. In various discussions, R's maturity is questioned from an enterprise point of view. Based on an R deployment in a large telecom, I will explain why I claim R can be number one in advanced analytics in any large corporation, and I will show the virtues and vices of migrating to R.
Presentation from Data Science Summit 2017:
R has turned the world of analytics upside down. Big players such as Microsoft and Oracle can see this. But the question arises: how to translate R's modernity and pace of change into value in the stable enterprise world? How much time and money does it cost? And how to do it safely? I will answer these questions based on an R deployment in a large telecom.
Case Studies in advanced analytics with R – Wit Jakuczun
A talk I gave at SQLDay 2017:
About 1.5 years ago Microsoft finalised its acquisition of Revolution Analytics, a provider of software and services for R. In my opinion this was one of the most important events for the R community. It is now crucial to present R's capabilities to the SQL Server community; it will be beneficial for both parties. I will present three case studies: cash optimisation at Deutsche Bank, a midterm model for energy price forecasting, and workforce demand optimisation. The case studies were implemented with our analytical workflow, R Suite, which will also be briefly presented.
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23... – John Andrews
SlideShare Description for "Chatty Kathy - UNC Bootcamp Final Project Presentation"
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
As Europe's leading economic powerhouse and the fourth-largest economy globally, Germany stands at the forefront of innovation and industrial might. Renowned for its precision engineering and high-tech sectors, Germany's economic structure is heavily supported by a robust service industry, accounting for approximately 68% of its GDP. This economic clout and strategic geopolitical stance position Germany as a focal point in the global cyber threat landscape.
In the face of escalating global tensions, particularly those emanating from geopolitical disputes with nations like Russia and China, Germany has witnessed a significant uptick in targeted cyber operations. Our analysis indicates a marked increase in the sophistication of cyberattacks aimed at critical infrastructure and key industrial sectors. These attacks range from ransomware campaigns to Advanced Persistent Threats (APTs), threatening national security and business integrity.
🔑 Key findings include:
🔍 Increased frequency and complexity of cyber threats.
🔍 Escalation of state-sponsored and criminally motivated cyber operations.
🔍 Active dark web exchanges of malicious tools and tactics.
Our comprehensive report delves into these challenges, using a blend of open-source and proprietary data collection techniques. By monitoring activity on critical networks and analyzing attack patterns, our team provides a detailed overview of the threats facing German entities.
This report aims to equip stakeholders across public and private sectors with the knowledge to enhance their defensive strategies, reduce exposure to cyber risks, and reinforce Germany's resilience against cyber threats.
Opendatabay - Open Data Marketplace.pptx – Opendatabay
Opendatabay.com unlocks the power of data for everyone. The Open Data Marketplace fosters a collaborative hub for data enthusiasts to explore, share, and contribute to a vast collection of datasets.
It is the first open hub for data enthusiasts to collaborate and innovate: a platform to explore, share, and contribute to a vast collection of datasets. Through robust quality control and innovative technologies like blockchain verification, Opendatabay ensures the authenticity and reliability of datasets, empowering users to make data-driven decisions with confidence, and it leverages cutting-edge AI technologies to enhance the data exploration, analysis, and discovery experience.
From intelligent search and recommendations to automated data productisation and quotation, Opendatabay's AI-driven features streamline the data workflow. Finding the data you need shouldn't be complex: Opendatabay simplifies the data acquisition process with an intuitive interface and robust search tools, letting you effortlessly explore, discover, and access the data you need so you can focus on extracting valuable insights. Opendatabay also breaks new ground with dedicated, AI-generated synthetic datasets.
Leverage these privacy-preserving datasets for training and testing AI models without compromising sensitive information. Opendatabay prioritizes transparency by providing detailed metadata, provenance information, and usage guidelines for each dataset, so users have a comprehensive understanding of the data they are working with. A combination of distributed ledger technology and rigorous third-party audits backs the authenticity and reliability of every dataset. Security is at the core of Opendatabay: the marketplace implements stringent security measures, including encryption, access controls, and regular vulnerability assessments, to safeguard your data and protect your privacy.
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ... – Subhajit Sahu
Abstract — Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components, and processes them in topological order, one level at a time. This enables calculation for ranks in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It however comes with a precondition of the absence of dead ends in the input graph. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. Slowdown on the GPU is likely caused by a large submission of small workloads, and expected to be non-issue when the computation is performed on massive graphs.
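For reference alongside the abstract above, here is a plain (monolithic) PageRank power iteration in Python with the usual dead-end handling, where the rank of sink vertices is redistributed uniformly. The levelwise variant discussed in the report instead decomposes the graph into strongly connected components and processes them in topological order; the tiny graph and tolerances below are arbitrary toy choices:

```python
def pagerank(edges, n, damping=0.85, tol=1e-10, max_iter=200):
    """Monolithic PageRank on a graph given as a list of (u, v) edges.

    Dead ends (vertices with no out-edges) donate their rank uniformly
    to all vertices -- exactly the case Levelwise PageRank excludes by
    requiring dead-end-free input.
    """
    out = [[] for _ in range(n)]
    for u, v in edges:
        out[u].append(v)
    rank = [1.0 / n] * n
    for _ in range(max_iter):
        nxt = [(1.0 - damping) / n] * n       # teleport term
        dead = sum(rank[u] for u in range(n) if not out[u])
        for u in range(n):
            if out[u]:
                share = damping * rank[u] / len(out[u])
                for v in out[u]:
                    nxt[v] += share
        for v in range(n):
            nxt[v] += damping * dead / n      # redistribute dead-end rank
        done = sum(abs(a - b) for a, b in zip(nxt, rank)) < tol
        rank = nxt
        if done:
            break
    return rank

r = pagerank([(0, 1), (1, 2), (2, 0), (2, 3)], 4)  # vertex 3 is a dead end
```

Because the dead-end mass is re-injected each iteration, the ranks remain a probability distribution (they sum to 1).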
Adjusting primitives for graph : SHORT REPORT / NOTES – Subhajit Sahu
Notes on adjusting primitives for graph algorithms, like PageRank. Compressed Sparse Row (CSR) is an adjacency-list based graph representation.
Multiply with different modes (map)
1. Performance of sequential execution based vs OpenMP based vector multiply.
2. Comparing various launch configs for CUDA based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential execution based vs OpenMP based vector element sum.
2. Performance of memcpy vs in-place based CUDA based vector element sum.
3. Comparing various launch configs for CUDA based vector element sum (memcpy).
4. Comparing various launch configs for CUDA based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA based vector element sum (in-place).
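The storage-type experiment in the notes above (float vs bfloat16 accumulators) comes down to how much precision the reduction keeps. Pure Python has no bfloat16, so as a stdlib-only analogue of the same effect, here is naive float accumulation versus `math.fsum`'s exact summation, using the classic example from the `math.fsum` documentation:

```python
import math

# Small terms are lost when naively accumulated next to huge
# intermediate values; fsum tracks partial sums exactly.
values = [1.0, 1e100, 1.0, -1e100] * 10000

naive = 0.0
for v in values:
    naive += v            # each 1.0 vanishes next to 1e100

exact = math.fsum(values) # error-free summation of the same list
```

Here `naive` collapses to 0.0 while `exact` recovers the true sum of 20000.0, mirroring how a low-precision accumulator distorts a large reduction.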
Bringing the Power of LocalSolver to R: a Real-Life Case-Study
1. Wit Jakuczun, PhD – CEO at WLOG Solutions
wit.jakuczun@wlogsolutions.pl
Optimization with R
Bringing the Power of LocalSolver to R: a Real-Life Case-Study
2. Diversity challenges at WLOG Solutions
• Industries: finance, logistics, production, telecoms, public
• Delivery models: on-site, near shore, off shore
• Contract types: consulting, solution implementation, training
• Analytical problems: data fusion, prediction, visualization, simulation, optimization
4. One size does not fit all
Software ecosystems: R, SAS, Python, JavaScript, PHP, Java, C++, …
Optimization: LocalSolver, Gurobi, IBM (ILOG), Sicstus, ECLiPSe, COIN-OR, GLPK
5. Optimization tool heaven for R
• Seamless to wrap into an R processing workflow
• Separation of model and data specification
• High-level definition of the optimization task
• Swiss army knife (we would love to get a free lunch)
• Reliable support and continued development
• Free and open-source
6. LOCALSOLVER PACKAGE
7. localsolver package architecture
• Data preparation – GNU R
• Model building – LSP language
• Solving – LocalSolver engine
• Solution presentation – GNU R (e.g. a shiny app)
8. LocalSolver engine
Innovative math modeling language
New generation hybrid solver
9. Why localsolver package?
Current optimization packages:
• Many tools for one task
• Low-level API
• Low performance
• Restricted modelling approach
localsolver package:
• One tool for many tasks
• High-level API
• High performance
• Wide range of applications
What we got:
• Shorter projects
• Simpler to debug
• Lower delivery costs
10. Solving k-medoids: a comparison
Rglpk:
• 45 LOC
• 1 hour including „stupid” bugs
• Need to populate the constraint matrix from R’s data structures
• No high-level math modelling language
localsolver:
• 18 LOC
• 15 minutes, spent mostly on inventing the model
• Staying with R’s data structures
• Flexible high-level math modelling language
http://rsnippets.blogspot.com/2014/07/comparing-localsolver-with-rglpk-on-k.html
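For scale, the task behind this comparison can be stated compactly: pick k of the n points as medoids so that the total distance from every point to its nearest medoid is minimal. A brute-force reference in Python (my own toy illustration, viable only for tiny instances; the point of Rglpk or localsolver is modelling and solving this at scale):

```python
from itertools import combinations

def k_medoids_bruteforce(dist, k):
    """Exact k-medoids by enumerating all k-subsets of candidate medoids.

    dist is a full n x n distance matrix; returns (best_cost, medoids).
    This pins down the objective that the MIP/LSP models on the slide
    are encoding, without any modelling language at all.
    """
    n = len(dist)
    best = (float("inf"), None)
    for medoids in combinations(range(n), k):
        # Each point is assigned to its nearest chosen medoid.
        cost = sum(min(dist[i][m] for m in medoids) for i in range(n))
        best = min(best, (cost, medoids))
    return best

# Toy instance: points on a line at positions 0, 1, 10, 11.
pos = [0, 1, 10, 11]
dist = [[abs(a - b) for b in pos] for a in pos]
cost, medoids = k_medoids_bruteforce(dist, k=2)
```

For this instance one medoid per cluster {0, 1} and {10, 11} is optimal, with a total cost of 2.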
11. LOGISTIC NETWORK PLANNING FOR POULTRY MEAT PRODUCER
12. Poultry meat logistic network planning problem
• Factories: 3 locations
• Warehouses: 18 locations, 3 different sizes
• Customers: 3000 locations
• Products: 2 types (fresh, processed), thousands of SKUs
Choose optimal locations and capacities of warehouses.
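The warehouse-selection decision on this slide has the shape of a facility-location problem. Ignoring capacities and shrinking the numbers drastically (3 candidate warehouses, 5 customers, made-up costs), a brute-force sketch of the decision looks like this:

```python
from itertools import combinations

def pick_warehouses(open_cost, ship, n_open):
    """Choose n_open warehouses minimizing opening + shipping cost.

    open_cost[w] is the fixed cost of opening warehouse w; ship[w][c] is
    the cost of serving customer c from w; each customer is served by its
    cheapest open warehouse. Brute force over subsets -- a toy stand-in
    for the real 18-warehouse / 3000-customer model on the slide.
    """
    n_c = len(ship[0])
    best = (float("inf"), None)
    for subset in combinations(range(len(open_cost)), n_open):
        cost = sum(open_cost[w] for w in subset)
        cost += sum(min(ship[w][c] for w in subset) for c in range(n_c))
        best = min(best, (cost, subset))
    return best

open_cost = [100, 120, 90]        # hypothetical fixed opening costs
ship = [[5, 9, 2, 8, 4],          # warehouse 0 -> customers
        [3, 2, 7, 6, 5],          # warehouse 1
        [8, 4, 3, 2, 9]]          # warehouse 2
cost, chosen = pick_warehouses(open_cost, ship, n_open=2)
```

The real case adds capacities, warehouse sizes and product types, which is where a solver like LocalSolver replaces enumeration.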
13. THANK YOU!
Editor's Notes
I don't get this one – a toolbox?
What is the story behind this slide?
I would still show that analytics without optimization is not complete.
The last one is not entirely right, because LocalSolver is proprietary.