Financial institutions today are under intense pressure to deliver more value to their customers, reduce IT costs, and grow year over year. This challenge is compounded by the huge amounts of data being generated and by mandatory federal compliance requirements.
Similarly, the manufacturing industry faces the challenge of processing huge amounts of data in real time and predicting failures as early as possible to reduce costs and increase production efficiency.
The session will cover some high-level Big Data use cases applicable to the financial and manufacturing domains, and how big data technologies are being used successfully to solve these challenges, with examples from the credit card/banking industry on the financial side and semiconductor production on the manufacturing side.
DataWorks Summit 2017 - Sydney Keynote
Scott Gnau, Chief Technology Officer, Hortonworks
Data has become the most valuable asset for every enterprise. As businesses undergo data transformation, leading organizations are turning to data science and machine learning to drive more business value out of their data. In this talk, Scott will examine the trends and the key requirements needed to evolve to next-generation analytics and operations.
How Starbucks Forecasts Demand at Scale with Facebook Prophet and Databricks
Navin Albert
Performing fine-grained forecasts on day-store-SKU is beyond the ability of legacy, data warehousing based forecasting tools. Demand for products varies by product, store and day, and yet traditional demand forecasting solutions perform their forecasts at the aggregate market, week and promo group levels.
With the introduction of the Databricks Unified Data Analytics Platform, retailers are able to see double-digit improvements in their forecast accuracy. They can perform fine-grained forecasts at the SKU, store and day as well as include hundreds of additional features to improve the accuracy of models. They can further enhance their forecasts with localization and the easy inclusion of additional data sets. And they’re running these forecasts daily, providing their planners and retail operations team with timely data for better execution.
In this webinar, we reviewed:
How to perform fine-grained demand forecasts on a day/store/SKU level with Databricks
How to forecast time series data precisely using Facebook’s Prophet
How Starbucks performs custom forecasting with relative ease
How to train a large number of models using the de facto distributed data processing engine, Apache Spark™
Finally, how we presented this data to analysts and managers using BI tools to enable the decision making that drives business outcomes
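A minimal sketch of the fine-grained pattern described above: fitting one independent forecast per (store, SKU) series rather than at an aggregate level. The data, names, and the naive seasonal forecaster below are illustrative stand-ins only — the webinar itself distributes Facebook Prophet fits with Apache Spark:

```python
from collections import defaultdict
from statistics import mean

def forecast_per_group(rows, horizon=7, season=7):
    """Fit one independent forecast per (store, sku) group.

    `rows` is an iterable of (store, sku, day_index, units_sold).
    In the webinar, this per-group fit is Facebook Prophet distributed
    with Spark; here a naive same-weekday average stands in so the
    grouping pattern stays self-contained and runnable.
    """
    # Partition the sales history by store/SKU -- the "fine grain".
    history = defaultdict(list)
    for store, sku, day, units in rows:
        history[(store, sku)].append((day, units))

    forecasts = {}
    for key, series in history.items():
        series.sort()  # order by day index
        values = [u for _, u in series]
        preds = []
        for h in range(1, horizon + 1):
            # Naive weekly seasonality: average of the same weekday.
            weekday = (len(values) + h - 1) % season
            same_day = [v for i, v in enumerate(values) if i % season == weekday]
            preds.append(mean(same_day) if same_day else mean(values))
        forecasts[key] = preds
    return forecasts

# Hypothetical data: two stores, one SKU each, 14 days of history.
rows = [("s1", "latte", d, 10 + (d % 7)) for d in range(14)]
rows += [("s2", "latte", d, 20.0) for d in range(14)]
fc = forecast_per_group(rows, horizon=3)
print(fc[("s2", "latte")])  # → [20.0, 20.0, 20.0]
```

In a Spark deployment, each (store, SKU) group would be handed to a worker that fits its own Prophet model in parallel, which is what makes the day/store/SKU grain tractable at retail scale.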
"Empower Developers with HPE Machine Learning and Augmented Intelligence", Dr. Abdourahmane Faye, Big Data SME Lead DACH at HPE
Watch more from Data Natives Berlin 2016 here: http://bit.ly/2fE1sEo
Visit the conference website to learn more: www.datanatives.io
Follow Data Natives:
https://www.facebook.com/DataNatives
https://twitter.com/DataNativesConf
Stay Connected to Data Natives by Email: Subscribe to our newsletter to get the news first about Data Natives 2017: http://bit.ly/1WMJAqS
About the Author:
Abdou Faye is a Subject Matter Expert in Big Data, Predictive Analytics/Machine Learning, and Business Intelligence, with more than 19 years of experience in the area in various leadership and executive roles, from technical, architecture, and sales perspectives. He recently joined HPE from SAP, where since 2010 he led the Predictive Analysis & Big Data CoE (Center of Excellence) business for the DACH, CEE, and CIS regions, in charge of business development and sales support. Prior to SAP, he worked four years at Microsoft as a Senior BI & SQL Server Consultant in Switzerland, after 10 years spent at Philip Morris (CH), Orange Telco (CH), and SEMA Group (FR). Abdou graduated from Paris 11 University in 2000, where he completed a PhD on Data Mining/Predictive Analytics after completing a Master in Computer Science.
When you look at traditional ERP or management systems, they are usually used to manage the supply chain originating from either the point of origin or the point of destination, which are primarily physical locations. For these, you have several processes such as order to cash, source to pay, physical distribution, and production.
MongoDB IoT City Tour STUTTGART: Hadoop and future data management. By Cloudera
MongoDB
Bernard Doering, Senior Sales Director DACH, Cloudera.
Hadoop and the Future of Data Management. As Hadoop takes the data management market by storm, organisations are evolving the role it plays in the modern data centre. Explore how this disruptive technology is quickly transforming an industry and how you can leverage it today, in combination with MongoDB, to drive meaningful change in your business.
Real-time Microservices and In-Memory Data Grids
Ali Hodroj
How in-memory data grids enable a real-time microservices architecture while diminishing the accidental complexity of persistence, orchestration, and fragmentation of scale.
Unlocking data science in the enterprise - with Oracle and Cloudera
Cloudera, Inc.
Today, leading organizations struggle to make their data scientists productive in their modern data platforms. Data scientists find it difficult to use their existing open source languages (e.g. Python, R) and libraries with Hadoop, especially when the clusters are secured with Kerberos. At the same time, IT doesn't want to give special access to these users, who require very diverse and specific environment configurations to run their experiments. As a result, most data science teams work away from the big data cluster, often on their laptops or in other data silos. The negative business impacts are a lack of insight and agility for the most advanced users, and the security, governance, and cost issues that arise from data silos.
How a Media Data Platform Drives Real-time Insights & Analytics using Apache ...
Databricks
Roularta is a leading publishing company in Belgium. As digital news and channels move at a rapid pace and contain massive volumes of data, Roularta decided in 2019 to invest in a Spark-based data platform to drive true real-time website analytics and unlock insights on previously untouched (big) data sources. In this talk we’ll first explain why and how Roularta embarked from a classical data warehouse to a Spark-based Lakehouse using Delta. We’ll outline the series of publishing & marketing use-cases done in the last 12 months and highlight for each use-case the advantages of Spark and how the team further tuned performance to truly deliver insights with high velocity.
Strategizing Big Data in Telco
Big data is a very hot topic nowadays. Some industries depend on it completely, some have opportunities to roll out their strategies and execute, and some are just considering when the right time is to hop in.
To my mind, Big Data is not about technology. Big data is about people generating data and data being used for the benefit of people.
Big data is a pool of activities aimed at processing the data a company owns (internal and external) so as to open new revenue opportunities, minimize costs, and enhance UX.
I had some ideas and thoughts on what telecommunication companies may start from in formulating the Big Data Strategy and so packed some of the most important pieces of thoughts into a small presentation.
What is the difference between Small Data and Big Data?
What kind of data is used currently and which is to be relied on a new paradigm?
What kind of products are expected from telcos?
My personal ranking of operators in terms of their Big Data execution
What are the stages telcos should pass through to become a Big Data operator?
Prerequisites for Big Data transformation
Please take a look at the presentation to find answers to these questions and feel free to share your opinion.
Thanks!
Big data expert and Infochimps CEO, Jim Kaskade presents the Infinite Monkey Theorem at CloudCon Expo. He provides an energetic, inspiring, and practical perspective on why Big Data is disrupting. It’s more than historic data analyzed on Hadoop. It’s also more than real-time streaming data stored and queried using NoSQL. Learn more at www.Infochimps.com
Big Data, IoT, data lake, unstructured data, Hadoop, cloud, and massively parallel processing (MPP) are all just fancy words unless you can find use cases for all this technology. Join me as I talk about the many use cases I have seen, from streaming data to advanced analytics, broken down by industry. I’ll show you how all this technology fits together by discussing various architectures and the most common approaches to solving data problems, and hopefully set off light bulbs in your head on how big data can help your organization make better business decisions.
Use dependency injection to get Hadoop *out* of your application code
DataWorks Summit
Hadoop MapReduce provides transparent parallelization but often results in specialized code bases that interact with low-level data formats. We present a means of using dependency injection to manage data flows in MapReduce which in turn supports reusable, Hadoop-agnostic application code that interacts with high-level business domain objects. An example is provided that applies Dependency Injection to the Hadoop WordCount example and shows how the same code invoked from the WordCount MapReduce job can be reused in a real-time context. We then discuss Opower’s application of this pattern to employ the same core calculations in both batch processing and in servicing real-time requests from end users. This topic will be of interest to those interested in reusing core batch calculations in real-time contexts. It also provides a means forward for organizations moving to Hadoop that have existing code components that they would like to employ in batch MapReduce computations.
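The pattern the abstract describes can be sketched in a few lines. This is a hypothetical, framework-free illustration (in Python rather than the talk's Java/Hadoop code): the word-count core depends only on an injected emitter, so the same class serves both a batch job and a real-time request path without importing anything from Hadoop.

```python
from collections import Counter

class WordCounter:
    """Core domain logic: knows nothing about Hadoop, MapReduce,
    or any I/O framework. The emit target is injected."""

    def __init__(self, emit):
        self.emit = emit  # injected dependency: callable(word, count)

    def count(self, line):
        for word in line.lower().split():
            self.emit(word, 1)

# Batch context: in a real MapReduce deployment the injected emitter
# would wrap the framework's output collector; here an in-memory
# Counter stands in as the sink.
batch_totals = Counter()
batch_counter = WordCounter(lambda w, n: batch_totals.update({w: n}))
for line in ["to be or not to be"]:
    batch_counter.count(line)

# Real-time context: the *same* class is reused with a different
# injected emitter, e.g. one that feeds a live request handler.
events = []
rt_counter = WordCounter(lambda w, n: events.append((w, n)))
rt_counter.count("hello hello")

print(batch_totals["to"], len(events))  # → 2 2
```

The design choice is the one the talk advocates: because the Hadoop-specific types live only in the injected adapter, the same core calculation runs unchanged in batch MapReduce and in real-time serving.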
Modernizing to a Cloud Data Architecture
Databricks
Organizations with on-premises Hadoop infrastructure are bogged down by system complexity, unscalable infrastructure, and the increasing burden on DevOps to manage legacy architectures. Costs and resource utilization continue to go up while innovation has flatlined. In this session, you will learn why, now more than ever, enterprises are looking for cloud alternatives to Hadoop and are migrating off of the architecture in large numbers. You will also learn how elastic compute models’ benefits help one customer scale their analytics and AI workloads and best practices from their experience on a successful migration of their data and workloads to the cloud.
Overview of analytics and big data in practice
Vivek Murugesan
Intended to give an overview of analytics and big data in practice, with a set of industry use cases from different domains. Would be useful for someone trying to understand Analytics and Big Data.
Scaling Production Machine Learning Pipelines with Databricks
Databricks
Conde Nast is a global leader in the media production space housing iconic brands such as The New Yorker, Wired, Vanity Fair, and Epicurious, among many others. Along with our content production, Conde Nast invests heavily in companion products to improve and enhance our audience’s experience.
The strategic relationship between Hortonworks and SAP enables SAP to resell Hortonworks Data Platform (HDP) and provide enterprise support for their global customer base. This means SAP customers can incorporate enterprise Hadoop as a complement within a data architecture that includes SAP HANA, Sybase and SAP BusinessObjects enabling a broad range of new analytic applications.
Bringing Big Data Analytics to Network Monitoring
Savvius, Inc.
The first things that typically come to mind with big data are Internet search indexing, supercomputing scientific studies, and social media data analysis. But did you ever stop and consider the monitoring and performance data on your enterprise network? As 10G networking becomes the norm, and the demand for actionable network performance data increases, network monitoring and reporting solutions are facing the same big data challenges: capturing, storing, analyzing, and displaying huge quantities of data.
WildPackets, an industry leader in network analysis and reporting, faced these same challenges. By partnering with HP Vertica, an industry leader in the big data revolution, WildPackets addressed these big data challenges with WatchPoint, a network monitoring and reporting solution that provides mid-sized and large enterprises with a centralized, comprehensive view of their networks to support capacity planning, operations management, and network and application troubleshooting. Come join us for a 30-minute presentation and demonstration to see how you can apply WildPackets’ best-in-class analytics to your high-speed network, without compromising precision through sampling or polling, providing a single view of your network and its historical performance in unprecedented detail and scope.
In this webinar, we will cover:
Big data and its application to network monitoring and reporting
The unique capabilities of the HP Vertica solution
A 15-minute demo of WildPackets’ WatchPoint Network Monitor Solution
You will learn:
Why data precision must be retained throughout history
How precise data feeds capacity planning, day to day operations management, and detailed network troubleshooting
This accelerated development has produced vehicles with advanced capabilities but few protections. Estimates suggest that 104 million cars will have some form of connectivity by 2025.
The issue is now a top priority on the agendas of all the leading manufacturers, industry associations, and governments alike.
Qrious about Insights -- Big Data in the Real World
Guy K. Kloss
Presentation for the Data Science Research Group Workshop on 7 February 2017 at AUT. The talk centres around the problems in Big Data analytics, tools for overcoming them, and the way the company Qrious leverages these to build solutions.
Accelerating the Value of Big Data Analytics for P&C Insurers with Hortonwork...
Hortonworks
As Big Data analytics and the Apache Hadoop ecosystem have matured and gained increasing traction in established industries, with faster adoption in the insurance market than originally anticipated, it is clear that the potential benefits for data management and business intelligence are staggering. At the same time, many big data programs have stalled or failed to deliver on their aspirational value proposition, resulting in a substantial gap between the expectations of analytics consumers and the ability of big data analytics programs to deliver. Join Hortonworks and Clarity as we review the common needs of Property and Casualty (P&C) Insurers and how to unlock the true value of big data analytics:
Information agility – Centralization of data and decentralization of analysis
Expanded capability – Conventional analysis combined with real-time analytics demands
Reduced expense – Lower costs through cheaper storage while maintaining scalability
We will discuss a modern data architecture that constitutes a mature, enterprise strength Hadoop framework for P&C Insurers that answers the need for governance processes across the enterprise stack. We will cover how a modern data architecture allows organizations to collect, store, analyze and manipulate massive quantities of data on their own terms—regardless of the source of that data - accelerating the real lifetime value of big data and Hadoop analytics for claims, customer sentiment and telematics.
An Amsterdam perspective on Design Thinking
icemobile
Dutch people are very direct in their communication. In the same way, Dutch designers are very direct in their design. We presented an ‘Amsterdam perspective on Design Thinking’ at UX/D Toronto, sharing insights from our design adventures here in Canada together with LoyaltyOne, and distilled what we as designers can all learn from a clash of cultures.
Did Hillary even have a chance? See a comparison of the typologies of Trump and Clinton; How to protect your energy in winter?; Tao - on the wave of life; Living: interiors in winter; Energy of the days for the week from 14 November 2016
Gain New Insights by Analyzing Machine Logs using Machine Data Analytics and BigInsights.
Half of Fortune 500 companies experience more than 80 hours of system downtime annually. Spread evenly over a year, that amounts to approximately 13 minutes every day. As a consumer, the thought of online bank operations being inaccessible so frequently is disturbing. As a business owner, when systems go down, all processes come to a stop. Work in progress is destroyed, and failure to meet SLAs and contractual obligations can result in expensive fees, adverse publicity, and loss of current and potential future customers. Ultimately, the inability to provide a reliable and stable system results in lost revenue. While the failure of these systems is inevitable, the ability to predict failures in time and intercept them before they occur is now a requirement.
A possible solution to the problem can be found in the huge volumes of diagnostic big data generated at the hardware, firmware, middleware, application, storage, and management layers indicating failures or errors. Machine analysis and understanding of this data is becoming an important part of debugging, performance analysis, root cause analysis, and business analysis. In addition to preventing outages, machine data analysis can also provide insights for fraud detection, customer retention, and other important use cases.
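As a toy illustration of the kind of machine-log analysis described above, the sketch below computes per-component error rates from recent log lines as a naive failure-risk signal. The log format, component names, and sample data are assumptions for the example, not any specific product's format:

```python
import re
from collections import Counter

# Assumed log format: "<timestamp> <LEVEL> <component>: <message>"
LOG_LINE = re.compile(r"^\S+ (?P<level>[A-Z]+) (?P<component>\w+):")

def error_rates(lines, window=100):
    """Naive failure-risk signal: per-component share of ERROR lines
    within the most recent window. Real systems would layer anomaly
    detection and cross-layer correlation on top of counts like these."""
    totals, errors = Counter(), Counter()
    for line in lines[-window:]:  # only the most recent lines
        m = LOG_LINE.match(line)
        if not m:
            continue  # skip lines that don't parse
        comp = m.group("component")
        totals[comp] += 1
        if m.group("level") == "ERROR":
            errors[comp] += 1
    return {c: errors[c] / totals[c] for c in totals}

# Hypothetical sample: the storage layer is emitting repeated errors.
logs = [
    "2024-01-01T00:00:01 INFO storage: write ok",
    "2024-01-01T00:00:02 ERROR storage: checksum mismatch",
    "2024-01-01T00:00:03 INFO app: request served",
    "2024-01-01T00:00:04 ERROR storage: checksum mismatch",
]
rates = error_rates(logs)
print(rates["storage"])  # → 0.6666666666666666
```

A rising error rate for one component relative to its own baseline is the sort of early indicator that, at scale, lets operators intercept a failure before it becomes an outage.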
Expert data analytics prove to be highly transformative when applied in context to corporate business strategies.
This webinar covers various approaches and strategies that will give you a detailed insight into planning and executing your Data Analytics projects.
How I managed to develop an analytics story for services about 4 years back. Contains:
Maturity model, business potential, service structures, and areas that analytics can be applied to
Richard Vermillion, CEO of After, Inc. and Fulcrum Analytics, Inc., discusses data lakes and their value in supporting the warranty and extended service plan chain.
In today’s extremely challenging business environment, many telecommunication companies measure their success by revenue size, cost reduction, and the growth of their profit margins. As a result, they are under intense pressure to reduce or eliminate the major threats to these slim margins.
DataOps - Big Data and AI World London - March 2020
Harvinder Atwal
Title
DataOps, the secret weapon for delivering AI, data science, and business intelligence value at speed.
Synopsis
● According to recent research, just 7.3% of organisations say the state of their data and analytics is excellent, and only 22% of companies are currently seeing a significant return from data science expenditure.
● Poor returns on data & analytics investment are often the result of applying 20th-century thinking to 21st-century challenges and opportunities.
● Modern data science and analytics require secure, efficient processes to turn raw data from multiple sources and in numerous formats into useful inputs to a data product.
● Developing, orchestrating and iterating modern data pipelines is an extremely complex process requiring multiple technologies and skills.
● Other domains have successfully overcome the challenge of delivering high-quality products at speed in complex environments. DataOps applies proven agile principles, lean thinking, and DevOps practices to the development of data products.
● A DataOps approach aligns data producers, analytical data consumers, processes and technology with the rest of the organisation and its goals.
Future-Proof Your Streaming Analytics Architecture - StreamAnalytix Webinar
Impetus Technologies
View the webcast on http://bit.ly/1HFD8YR
The speakers from Forrester and Impetus talk about the options and the optimal architecture for incorporating real-time insights into your apps, while also positioning you to benefit from future innovation.
ADV Slides: How to Improve Your Analytic Data Architecture Maturity
DATAVERSITY
Many organizations are immature when it comes to data use. The answer to this challenge lies in delivering a greater level of insight from data, straight to the point of need. Enter: machine learning.
In this webinar, William will look at categories of organizational response to the challenge across strategy, architecture, modeling, processes, and ethics. Machine learning maturity levels tend to move in harmony across these categories. As a general principle of maturity models, you can’t skip levels in any category, nor can you advance in one category well beyond the others.
Vis-à-vis ML, attaining and retaining momentum up the model is paramount for success. You will ascend the model through concerted efforts delivering business wins utilizing progressive elements of the model, and thereby increasing your machine learning maturity. The model will evolve. No plateaus are comfortable for long.
With ML maturity markers, sequencing, and tactics, this webinar provides a plan for how to build analytic Data Architecture maturity in your organization.
Entry Points – How to Get Rolling with Big Data Analytics – Inside Analysis
The Briefing Room with Robin Bloor and IBM
Live Webcast Sept. 24, 2013
Watch the archive: https://bloorgroup.webex.com/bloorgroup/lsr.php?AT=pb&SP=EC&rID=7501927&rKey=664935ceb7de1aec
Where to begin? That question remains prominent for many organizations that are trying to leverage the value of big data analytics. Most sources of big data are quite different from traditional enterprise data systems, which requires new skill sets, both for the granular integration work and for the strategic business perspective needed to design useful solutions.
Register for this episode of The Briefing Room to hear veteran Analyst Dr. Robin Bloor as he explains the pain points associated with modern data volumes and types. He will be briefed by Rick Clements of IBM, who will tout IBM's big data platform, specifically InfoSphere BigInsights, InfoSphere Streams and InfoSphere Data Explorer. He will also present specific use cases that demonstrate how IT and the line of business can springboard over existing challenges, gain insight and improve operational performance.
Visit InsideAnalysis.com for more information
2. Impetus Big Data Services
● Big Data Platform Implementation, Operations and Visualization
● Business Analytics and Data Science
● Solution Architecture, POC and Production Planning
● Technology Strategy, Use Case Development & Validation
Business Process Management: Assessment; Solution Modeling; Solution Analysis; People, Process, Technology Impact; Analyze & Optimize; Objectives & Strategy; Model
Fraud Detection: You may not be surprised that banks and credit card companies monitor your spending habits in real time. One large credit-card-issuing bank has implemented a fraud detection system that disables your card when it sees suspicious activity relative to your past spending patterns and trends. In addition to the transaction records used for authorization and approval, banks and credit card companies collect much more information: location, lifestyle, and spending patterns. Credit card companies manage huge volumes of data, from individual Social Security numbers and income, to account balances and employment details, to credit and transaction history. Taken together, this helps credit card companies fight fraud in real time. A Big Data architecture provides the scalability to score each incoming transaction against the individual's history, approve or decline it, and alert the account owner.

Fraud Detection and Analysis – Hadoop provides a scalable way to detect many types of fraud or loss, and to perform effective risk management. Hadoop is also being used to develop models that predict future fraud events.

Credit Risk – Valuations for credit risk are very computationally intensive. The number of risk factors that need to be modeled runs into the thousands, with each risk factor taking thousands of stochastic paths. This problem is traditionally solved using very large grids of compute nodes.
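The real-time scoring idea above can be sketched as a simple statistical check. This is a minimal sketch only: the z-score threshold and the transaction history are hypothetical, and a production system would run far richer models at scale against each cardholder's full history.

```python
# Minimal sketch (illustrative, hypothetical threshold): flag a transaction
# as suspicious when it deviates sharply from the cardholder's history.
from statistics import mean, stdev

def is_suspicious(history, amount, z_threshold=3.0):
    """Flag amounts more than z_threshold standard deviations above the mean."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount > mu * 2
    return (amount - mu) / sigma > z_threshold

past = [42.0, 55.0, 38.0, 61.0, 47.0]
print(is_suspicious(past, 50.0))   # typical purchase -> False
print(is_suspicious(past, 900.0))  # large outlier -> True
```

A real deployment would combine many such signals (merchant category, location, velocity) rather than a single amount check.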
Big Data technologies offer a significantly cheaper alternative: these computations can be offloaded onto a large number of loosely coupled Hadoop nodes.

Risk Mitigation – Hadoop is used to analyze potential market trends and understand future possibilities in order to mitigate the risk of financial positions, total portfolio assets, and capital returns.

Customer Insights – Large financial service providers have adopted Hadoop to improve customer profile analysis and help determine eligibility for equity capital, insurance, mortgages or credit.

Customer Segmentation applies in every industry that deals with end customers who consume its products and services, from banking to retail to aviation to utilities. In banking and financial services, customer segmentation is a key tool in risk-scoring analysis and in sales, promotion and marketing campaigns. In addition to the information banks and financial institutions collect from day-to-day customer transactions, they are also buying external data such as home values and merchant records from hotels, airlines, retailers and others. The 360-degree view of the customer is still a work in progress, and Big Data is helping fill in the gaps by providing the processing power needed to mine the underlying data for intelligence.

The major objectives of segmentation are:
● Customized product offerings
● Customized and priority service
● Improved relationships with profitable customers, and reduced resources spent on loss-making customers
● Better offerings to new customers, based on intelligence gained from the existing customer segment they belong to
● New product development and bundling tailored to the customer segment profile

Customer Sentiment Analysis – A bank can respond to negative (or positive) brand perception by focusing its communication strategies on particular Internet sites, countering – or backing up – the most outspoken authors on Twitter, boards and blogs.
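As a toy illustration of the segmentation idea, a minimal k-means over two hypothetical features (annual spend and average balance) might look like the sketch below. Real segmentation runs over far richer data on distributed engines; this stdlib-only version just shows the mechanics.

```python
# Minimal sketch (hypothetical features and values): cluster customers into
# segments with a tiny k-means. Each point is (annual_spend, avg_balance).
import math
import random

def kmeans(points, k=2, iters=20, seed=7):
    rng = random.Random(seed)
    centers = rng.sample(points, k)          # pick k initial centers
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                     # assign each point to nearest center
            i = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[i].append(p)
        centers = [                          # recompute centers (keep old if empty)
            tuple(sum(x) / len(c) for x in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

customers = [(1200, 300), (1100, 250), (9800, 4000), (10200, 3800)]
centers, segments = kmeans(customers)
print(segments)  # low-spend and high-spend customers end up in separate segments
```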
When a company releases a new product that is causing problems, analyzing comments on social media or product review sites enables it to remediate quickly.

Next Best Offer – Banks can use predictive analytics on a combination of data to create a series of targeted offers for customers, and make these offers available in real time at the next point of customer interaction.

Micro Targeting – Banks have numerous disparate data systems (e.g. loans, mortgages, investments) that need to be aggregated in order to provide an up-to-date view of customer profitability, consistent CRM, and customized product recommendations and offerings.

Payments Analytics – Banks can offer value-added services to their merchant customers by analysing the payments they clear and alerting merchants to opportunities when certain patterns emerge.

Trade Analysis – Financial service firms use Hadoop to analyze daily streams of transaction data in conjunction with unstructured news or social media feeds, or to back-test their trading algorithms.

Pricing Management – Banks, capital markets institutions and insurance firms can use information from both types of data source to price products for individual customers, taking into account risk, capital, cost allocations, transfer pricing and a multitude of additional dimensions.

Long-Term Storage & Analytics – Hadoop provides a drastic cost reduction for the long-term storage and analytics of transaction data.

New Opportunities – Many large firms are using Hadoop to identify cross-sell and upsell opportunities by cross-referencing sentiment analysis with internal customer profile data.

Reputational Risk – All institutions worry about their reputation and want feedback on newly adopted policies and newly launched products.
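A next-best-offer engine of the kind described can be sketched as a weighted scoring over an aggregated customer profile. The offer names, features and weights below are entirely hypothetical; a real system would learn them from historical response data.

```python
# Minimal sketch (all offers, features and weights are hypothetical):
# rank "next best offers" for a customer by a simple weighted score.
OFFERS = {
    "travel_card":   {"travel_spend": 0.6, "income": 0.2},
    "mortgage_refi": {"has_mortgage": 0.7, "rate_gap": 0.5},
    "savings_plus":  {"balance": 0.5, "income": 0.1},
}

def next_best_offer(profile):
    """Return offer names ranked by the dot product of profile and weights."""
    scores = {
        name: sum(w * profile.get(feat, 0.0) for feat, w in weights.items())
        for name, weights in OFFERS.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

customer = {"travel_spend": 0.9, "income": 0.4, "balance": 0.2, "has_mortgage": 0.0}
print(next_best_offer(customer))  # heavy traveller: travel_card ranks first
```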
Most of the ’noise’ around a brand comes from new data sources, and Big Data technologies can be very effective at quickly gathering this information for analysis.

Sales and Marketing Campaigns – On the customer experience side, every time a bank gets closer to delighting a customer by showing that it understands their real needs, rather than blindly sending emails and credit card offers, the customer comes to view the institution as caring about and understanding those needs.

Fraud, AML, Trader and Broker Compliance – All categories of fraud detection suffer from over-zealous software that generates a high number of false positives. These create a significant operational problem because they must be analysed manually. Tuning the software to reduce the number of alerts produces the opposite problem, with real fraud going unreported. Using both sources of seemingly unrelated data offers the potential to catch fraudulent activity earlier than current methods allow. For internal fraud monitoring, trader and broker compliance software can combine trading activity with additional data points from sources such as social media, SMS and email, and build a graph analysis – which traditional tools cannot provide – in order to detect patterns.

Web-Scale Analysis – Hadoop is used to analyze what is being said on the web at large, and on social networks in particular. Sentiment can apply to individual companies or products, or reflect general customer satisfaction. This has the potential to improve marketing to existing customers through better targeting.

Crowdsourcing – Some of the larger institutions have realized they can use analytics to learn about new lines of business and products, to ask customers what they think, and to get ideas.
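The false-positive trade-off behind alert tuning is easy to see in a toy precision/recall calculation over synthetic fraud scores (all numbers below are made up for illustration): raising the threshold cuts the manual-review load but lets real fraud through.

```python
# Minimal sketch (synthetic scores and labels): the precision/recall
# trade-off behind fraud-alert threshold tuning.
def precision_recall(scores, labels, threshold):
    """scores: model fraud scores in [0, 1]; labels: 1 = confirmed fraud."""
    flagged = [l for s, l in zip(scores, labels) if s >= threshold]
    tp = sum(flagged)                    # real fraud caught
    fp = len(flagged) - tp               # false positives for manual review
    fn = sum(labels) - tp                # real fraud missed
    precision = tp / (tp + fp) if flagged else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    return precision, recall

scores = [0.95, 0.90, 0.80, 0.60, 0.55, 0.30, 0.20]
labels = [1,    1,    0,    1,    0,    0,    0]
for t in (0.5, 0.7, 0.92):
    p, r = precision_recall(scores, labels, t)
    print(f"threshold={t}: precision={p:.2f} recall={r:.2f}")
```

At the low threshold every fraud is caught but analysts wade through false alarms; at the high threshold every alert is real but two frauds go unreported.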
In a move to expand its utility beyond simply finding better answers to known statistical problems, data-science startup Kaggle is now letting its stable of expert data scientists compete to tell companies how they can improve their businesses using machine learning.

Call Center Analysis – For decades, companies have analyzed call center data for staffing, agent performance and network management. In the Big Data age, much new and interesting software is being deployed to take unstructured voice recordings and analyze them for content and sentiment. Banks are applying text and sentiment analysis to this unstructured data and looking for patterns and trends. Many banks are integrating call center data with their transactional data warehouse to reduce customer churn and to drive up-sell, cross-sell, customer monitoring alerts and fraud detection.

These are just a few of the use cases highlighted here to give a fair idea of how Big Data is being leveraged in this industry. If you are working with a use case of your own, please add it in the comments section.
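A first cut at transcript sentiment of the kind described can be sketched with a tiny keyword lexicon. The word lists here are illustrative only; production systems apply far more sophisticated NLP to the transcribed recordings before joining results with the data warehouse.

```python
# Minimal sketch (toy lexicon, illustrative only): lexicon-based sentiment
# scoring of call-center transcript snippets.
POSITIVE = {"great", "helpful", "resolved", "thanks"}
NEGATIVE = {"cancel", "frustrated", "fee", "complaint", "waiting"}

def sentiment(text):
    """Score a transcript snippet by counting positive vs negative words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(sentiment("thanks the agent was helpful and resolved my issue"))  # positive
print(sentiment("frustrated with this fee i want to cancel"))           # negative
```

Aggregated over thousands of calls, even a crude score like this can surface churn-risk trends worth routing to the warehouse for follow-up.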