It's a cliché that modern enterprise applications are simply web applications. But is that the whole truth? And if it isn't, what are all the pieces of an enterprise application and how do they fit together? How can we continue to use older technologies within these applications and how might we exploit new technologies in the future? What new challenges do enterprises face in the 21st Century and how might they affect the design of applications and programming systems?
Big Data Whitepaper - Streams and Big Insights Integration Patterns, by Mauricio Godoy
This document discusses designing integrated applications across IBM InfoSphere Streams and IBM InfoSphere BigInsights to address challenges posed by big data. It describes three main application scenarios for the integration: 1) scalable data ingest from Streams to BigInsights, 2) using historical context from BigInsights to bootstrap and enrich real-time analytics on Streams, and 3) generating adaptive analytics models on BigInsights to analyze incoming data on Streams and updating models based on real-time observations.
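The second scenario, using historical context to enrich real-time analytics, can be sketched in miniature. This is an illustrative Python sketch, not Streams or BigInsights code; the profile table, field names, and `enrich` function are assumptions standing in for batch-computed context and a streaming operator.

```python
# Historical aggregates, as might be precomputed on a batch platform
# (illustrative values, not real data).
HISTORICAL_PROFILES = {
    "sensor-1": {"mean_reading": 20.5},
    "sensor-2": {"mean_reading": 75.0},
}

def enrich(event, profiles):
    """Attach historical context to a live event and flag deviations."""
    profile = profiles.get(event["sensor"], {"mean_reading": None})
    baseline = profile["mean_reading"]
    enriched = dict(event, baseline=baseline)
    # Flag readings that deviate more than 50% from the historical mean.
    enriched["anomalous"] = (
        baseline is not None and abs(event["value"] - baseline) > 0.5 * baseline
    )
    return enriched

live_event = {"sensor": "sensor-1", "value": 42.0}
print(enrich(live_event, HISTORICAL_PROFILES))
```

The same shape generalizes to scenario 3: the batch side periodically recomputes the profile table, and the streaming side picks up the refreshed model.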
The document discusses the importance of systems of record for businesses. It notes that systems of record are highly structured, transactional, reliable, and core to the business. In contrast, systems of engagement are loosely structured, quick to adapt, conversational, and at the edge of the business. The document advocates developing a strategy to modernize applications and transition to newer architectures like cloud, while ensuring systems of record still meet business needs as engagement systems evolve.
Vectorwise is an extremely fast database that enables quick decision making through real-time analytics. It is multiple times faster than other databases, with some customer queries seeing speed increases of 70x. This speed is due to its innovative vector processing approach. Customers report being able to reduce BI project timelines by 50% using Vectorwise due to its ease of use and lack of need for tuning. It also reduces infrastructure costs through requiring less hardware and IT resources.
The document provides an overview of IBM's Big Data platform vision. The platform addresses big data use cases involving high volume, velocity and variety of data. It integrates with existing data warehouse and master data management systems. The platform handles different data types and formats, provides real-time and batch analytics, and has tools to make it easy for developers and users to work with. It is designed with enterprise-grade security, scalability and failure tolerance. The platform allows organizations to analyze big data from various sources to gain insights.
The document discusses breakthroughs in information technology that can make cities smarter. It describes how sensors, networks, and data analytics can provide insights that improve outcomes across various city systems, including transportation, energy, water, and public safety. The core idea is that digital and physical systems are converging, allowing cities to leverage data to develop insight and wisdom. Examples are provided of cities using these technologies to monitor infrastructure in real-time, predict problems, and better coordinate resources.
This document proposes a big data infrastructure and analytics solution using Hadoop. It discusses (1) constructing a Hadoop cluster on two physical machines, (2) transmitting both structured and unstructured data to HDFS, and (3) performing reporting, analysis, monitoring, and prediction using Hive, HBase, and Mahout. Experimental results show the Hadoop components running and sample queries executing successfully. Future work involves validating the infrastructure with real-world data and further predictive analytics research.
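For readers new to the stack: Hive queries and classic Mahout jobs are executed as MapReduce jobs under the hood. A minimal, framework-free Python sketch of that model, for illustration only:

```python
from collections import defaultdict

def map_phase(lines):
    # Emit (word, 1) pairs, as a Hadoop mapper would.
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    # Sum counts per key, as a Hadoop reducer would after the shuffle.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data big insights", "data pipelines"]
print(reduce_phase(map_phase(docs)))
```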
Steve Mills - Dispelling the Vapor Around Cloud Computing, by Mauricio Godoy
The document discusses IBM's perspective on cloud computing. It defines cloud computing, outlines various cloud service and delivery models, and summarizes IBM's cloud computing offerings including consulting services, infrastructure, platforms, and applications.
Big Data World Forum (BDWF http://www.bigdatawf.com/) is specially designed for data-driven decision makers, managers, and data practitioners who are shaping the future of big data.
This document discusses integrating Supermicro, Greenplum, and SAS to enable big data analytics platforms and infrastructure. It provides an agenda that includes discussing big data analytics platforms and infrastructure as well as a 1,000 node Hadoop cluster using EMC and Supermicro.
Moving from Records to Engagement to Insight, by John Mancini
Open innovation (OI) is used widely by many organizations and has yielded major changes to internal processes and external offerings for many. However, OI capabilities like idea voting are underutilized, and OI is not yet tightly integrated with most company cultures. While OI appears widespread internally, very few organizations open participation to outsiders.
These slides—based on the research webinar from leading IT research firm EMA and Pluribus Networks—dive into how software defined packet brokers can close the visibility gap on your network.
The Briefing Room with Colin White and Composite Software
Live Webcast Feb. 26, 2013
The modern business analyst needs data from all over the place: yes, the data warehouse, but also the Web, big data, production systems, as well as via partners and vendors. In fact, the typical analyst spends more than 50% of the time chasing data, which slows delivery of analytic insights and limits the time available for thorough analysis. Some practitioners refer to this conundrum as "the data problem."
Check out the slides from this episode of The Briefing Room to hear veteran Analyst Colin White of BI Research as he explains why analytical sandboxes and data hubs can be an analyst's best friend. He'll be briefed by Bob Eve of Composite Software who will discuss his company's mature data virtualization platform, which includes a number of capabilities that help organizations leverage agile analytics. He will discuss why time-to-insight is fast becoming the battle cry of analysis-driven organizations.
Visit: http://www.insideanalysis.com
The document discusses top storage trends that will reshape datacenters in 2012 according to IDC predictions. It finds that data is exploding due to more connected devices and digital content creation. Survey results show organizations prioritizing IT security and cost reduction. IDC predicts that in 2012, storage virtualization will go mainstream, SSDs will be integrated into ROI strategies, unified storage will be standard, and cloud storage services will provide more sophisticated features to help organizations manage big data.
Sustainable IT for Energy Management: Approaches, Challenges, and Trends, by Edward Curry
An invited talk to the Galway-Mayo Institute of Technology on the current state of the art in Sustainable IT for energy management, the challenges, and the emerging trends.
Robert LeBlanc - Cloud Forum Presentation, by Mauricio Godoy
The document discusses cloud computing and IBM's role in innovating cloud technologies over several decades. It outlines IBM's comprehensive cloud offerings, including infrastructure as a service, platform as a service, and business process as a service. The key capabilities of IBM's cloud services are around application and data integration, workload deployment patterns, image management, and security controls.
Dale Vile, CEO of Freeform Dynamics Ltd, gave a presentation on cloud computing trends and perspectives. He discussed how cloud computing has evolved from hype to emerging clarity, with confusion persisting over definitions. Vile outlined different views of cloud, including technology vs services and the service stack. He noted that cloud will have a significant impact on IT delivery and management, but that a hybrid model is emerging. Looking ahead, Vile argued organizations should focus on business services rather than aiming to "move to the cloud," and that cloud represents a shift to a service-centric view of IT.
The document discusses technology trends for 2012, including Gartner's top 10 strategic technologies and trends according to their annual report. It then outlines the top 8 trends for Thailand in 2012, with speakers covering each one. These include business continuity planning in response to recent flooding, cloud computing and opportunities it provides, the growth of tablets and mobile applications, and predictive analytics. The presentation concludes with an overview of Thailand's top technology trends for 2012.
Enterprise Energy Management using a Linked Dataspace for Energy Intelligence, by Edward Curry
Energy Intelligence platforms can help organizations manage power consumption more efficiently by providing a functional view of the entire organization so that the energy consumption of business activities can be understood, changed, and reinvented to better support sustainable practices. Significant technical challenges exist in terms of information management, cross-domain data integration, leveraging real-time data, and assisting users to interpret the information to optimize energy usage. This paper presents an architectural approach to overcome these challenges using a Dataspace, Linked Data, and Complex Event Processing. The paper describes the fundamentals of the approach and demonstrates it within an Enterprise Energy Observatory.
E. Curry, S. Hasan, and S. O’Riáin, “Enterprise Energy Management using a Linked Dataspace for Energy Intelligence,” in The Second IFIP Conference on Sustainable Internet and ICT for Sustainability (SustainIT 2012), 2012.
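One building block of such a platform, complex event processing, can be illustrated with a toy rule: raise a composite "sustained overload" event when power readings stay above a threshold for several consecutive intervals. This is a minimal sketch in the spirit of the paper's approach; the threshold, window size, and function name are illustrative assumptions, not the paper's implementation.

```python
def detect_sustained_overload(readings, threshold=100.0, window=3):
    """Return start indices where `window` consecutive readings exceed threshold."""
    alerts = []
    run = 0
    for i, value in enumerate(readings):
        run = run + 1 if value > threshold else 0
        if run == window:
            alerts.append(i - window + 1)
            run = 0  # reset so each completed run raises exactly one alert
    return alerts

# Two sustained overloads: intervals 1-3 and 5-7.
print(detect_sustained_overload([90, 120, 130, 140, 80, 110, 120, 130]))
```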
This document discusses how information sprawl is a major problem for IT organizations. As more information and applications are created, it leads to uncontrolled growth in storage costs, backup costs, and application costs. This information sprawl consumes most of an IT budget, leaving little funding for innovation. The document proposes using a structured records management solution to help manage information lifecycles and automatically dispose of data after retention periods end. This can help reduce storage costs by 50%, backup costs by 70%, and application costs by 60%, while ensuring compliance and reducing risk.
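The disposal mechanism described above can be sketched as a small retention check. A hypothetical Python sketch, assuming a simple policy table keyed by record type; the policy values, field names, and function are illustrative and do not come from any real product.

```python
from datetime import date, timedelta

# Illustrative retention policy: 7 years for invoices, 90 days for logs.
RETENTION_POLICY = {"invoice": timedelta(days=7 * 365),
                    "log": timedelta(days=90)}

def due_for_disposal(records, today):
    """Return ids of records whose retention window has expired."""
    expired = []
    for rec in records:
        keep_until = rec["created"] + RETENTION_POLICY[rec["kind"]]
        if today >= keep_until:
            expired.append(rec["id"])
    return expired

records = [
    {"id": "r1", "kind": "log", "created": date(2024, 1, 1)},
    {"id": "r2", "kind": "invoice", "created": date(2024, 1, 1)},
]
print(due_for_disposal(records, today=date(2024, 6, 1)))  # only the log has expired
```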
AIOps: Anomalies Detection of Distributed Traces, by Jorge Cardoso
Introduction to the field of AIOps, large-scale monitoring, and observability. Provides an example illustrating how Deep Learning can be used to analyze distributed traces and reveal exactly which component is causing a problem in microservice applications.
Presentation given at the National University of Ireland, Galway (NUI Galway) on 2019.08.20.
Thanks to Prof. John Breslin
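The talk applies Deep Learning to distributed traces; as a much simpler stand-in for the same idea, the sketch below scores each service's span duration against its historical distribution and names the most deviant component. This is a statistical baseline, not the model from the talk; all service names and figures are illustrative.

```python
import statistics

def slowest_outlier(trace, history):
    """Return the service whose span duration deviates most from its history."""
    scores = {}
    for span in trace:
        past = history[span["service"]]
        mean = statistics.mean(past)
        stdev = statistics.pstdev(past) or 1.0  # guard against zero variance
        scores[span["service"]] = (span["duration_ms"] - mean) / stdev
    return max(scores, key=scores.get)

history = {"auth": [10, 11, 9, 10], "db": [50, 52, 48, 50], "api": [20, 21, 19, 20]}
trace = [{"service": "auth", "duration_ms": 10},
         {"service": "db", "duration_ms": 200},   # anomalous span
         {"service": "api", "duration_ms": 21}]
print(slowest_outlier(trace, history))  # points at the database service
```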
An Environmental Chargeback for Data Center and Cloud Computing Consumers, by Edward Curry
Government, business, and the general public increasingly agree that the polluter should pay. Carbon dioxide and environmental damage are considered viable chargeable commodities. The net effect of this for data center and cloud computing operators is that they should look to “chargeback” the environmental impacts of their services to the consuming end-users. An environmental chargeback model can have a positive effect on environmental impacts by linking consumers to the indirect impacts of their usage, facilitating clearer understanding of the impact of their actions. In this paper we motivate the need for environmental chargeback mechanisms. The environmental chargeback model is described including requirements, methodology for definition, and environmental impact allocation strategies. The paper details a proof-of-concept within an operational data center together with discussion on experiences gained and future research directions.
Curry, E.; Hasan, S.; White, M.; and Melvin, H. 2012. An Environmental Chargeback for Data Center and Cloud Computing Consumers. In Huusko, J.; de Meer, H.; Klingert, S.; and Somov, A., eds., First International Workshop on Energy-Efficient Data Centers. Madrid, Spain: Springer Berlin / Heidelberg.
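One allocation strategy in this spirit is to apportion a data center's total emissions to consumers in proportion to their measured usage. A minimal sketch, assuming a single usage metric such as CPU-hours; the figures and tenant names are illustrative, not the paper's model.

```python
def allocate_emissions(total_kg_co2, usage_by_tenant):
    """Split total emissions proportionally to each tenant's usage."""
    total_usage = sum(usage_by_tenant.values())
    return {tenant: total_kg_co2 * usage / total_usage
            for tenant, usage in usage_by_tenant.items()}

usage = {"team-a": 600, "team-b": 300, "team-c": 100}  # e.g. CPU-hours
print(allocate_emissions(1000.0, usage))
```

The allocated shares always sum back to the total, which is what makes the chargeback auditable.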
This document discusses different analytics tools for marketing and advertising requirements. It compares paid vs free tools and outlines key factors to consider such as business type, legal risks, integration capabilities, service and support offerings. The panel then provides examples from Budget Direct's experience using Omniture tools for cross-channel campaign measurement and leveraging customer data insights. Integration of tools and a focus on innovation is highlighted as important for maximizing ROI and marketing effectiveness.
These slides, based on the webinar, provide key results from the EMA “Advanced Network Analytics: Applying Machine Learning and More to Network Engineering and Operations” research report.
Topics covered include technology strategies, data collection priorities, organizational benefits, and challenges of cutting edge network analytics strategies.
System of Systems Information Interoperability using a Linked Dataspace, by Edward Curry
Systems of Systems pose significant technical challenges in terms of information interoperability that require overcoming both conceptual barriers (syntactic and semantic) and technological barriers. This paper presents an approach to System of Systems information interoperability based on the Dataspace data management abstraction and the Linked Data approach to sharing information on the web. The paper describes the fundamentals of the approach and demonstrates the concept with a System of Systems for enterprise energy management.
Curry E. System of Systems Information Interoperability using a Linked Dataspace. In: IEEE 7th International Conference on System of Systems Engineering (SOSE 2012)
Further Reading:
http://www.edwardcurry.org/publications/Curry_LinkedDataspaceForSOS_SOSE.pdf
CloudBrew 2016 - Building IoT solution with Service Fabric, by Teemu Tapanila
When building an IoT solution you face a series of steps, normally including device registration, data ingestion, data processing, and data analysis. Come hear how to model this process as a microservices architecture and host the whole thing on premises or in the cloud with Azure Service Fabric.
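The four steps named above can be sketched as small composable stages. An illustrative Python sketch in which in-process functions stand in for the individual microservices; none of these names come from the Service Fabric API.

```python
REGISTRY = {}

def register_device(device_id):
    # Step 1: device registration.
    REGISTRY[device_id] = {"status": "active"}

def ingest(device_id, reading):
    # Step 2: data ingestion; reject unregistered devices at the front door.
    if device_id not in REGISTRY:
        raise ValueError(f"unknown device {device_id}")
    return {"device": device_id, "value": reading}

def process(event):
    # Step 3: data processing, e.g. normalizing Celsius to Kelvin.
    return dict(event, value_kelvin=event["value"] + 273.15)

def analyze(events):
    # Step 4: data analysis, here a simple average.
    return sum(e["value_kelvin"] for e in events) / len(events)

register_device("thermo-1")
events = [process(ingest("thermo-1", t)) for t in (20.0, 22.0)]
print(round(analyze(events), 2))
```

In a real deployment each stage would be a separately deployable service behind its own endpoint; the pipeline shape stays the same.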
To create a software architecture that exhibits structural modularity, is truly agile, and can adapt to change, it’s not enough to put everything in the same project, even if the source code is logically structured. Within ING we aim for a modular approach to developing software: formal modules, versioning, explicit contracts based on requirements and capabilities, (micro)services-based collaboration between them, run-time dynamism, and so on.
This document discusses microservices on Azure. It provides an overview of microservice patterns including the benefits of microservices like increased autonomy, scalability and team allocation. It also discusses challenges like discoverability. The document introduces Azure Service Fabric for building microservices and related cloud patterns like proxy, shared data and load leveling microservice patterns. It recommends microservices for highly skilled developers working on complex projects.
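The load-leveling pattern mentioned above can be shown with an in-process queue: producers burst, while the consumer drains work at its own steady rate. A minimal sketch with a deque standing in for a real message broker; all names are illustrative.

```python
from collections import deque

class LoadLeveler:
    def __init__(self):
        self.queue = deque()

    def submit(self, task):
        # Producers may burst; the queue absorbs the spike.
        self.queue.append(task)

    def drain(self, batch_size):
        # The consumer pulls at its own pace, batch by batch.
        batch = []
        while self.queue and len(batch) < batch_size:
            batch.append(self.queue.popleft())
        return batch

leveler = LoadLeveler()
for i in range(10):                  # burst of 10 requests
    leveler.submit(f"req-{i}")
print(leveler.drain(batch_size=3))   # consumer takes only 3 at a time
```

The design choice is that backpressure lives in the queue, not in the services, so the consumer never sees more load than it can handle.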
This is the slide deck from my keynote at the EA User Event in Brussels, September 2015. Micro-services and micro-services architecture are the next hype in software development. Websites and blogs are full of introducing posts, the first books are being written and the first conferences organized. There’s big promises of scalability, flexibility and replaceability of individual elements in your landscape. However, when you are knee deep in the mud as a software architect at an insurance, it is very hard to find help on how to design applications and components in a micro-services architecture. During this talk Sander will show how he used Enterprise Architect to model the micro services architecture, and will explain the difficulties and the lessons learned, using many real-life examples.
Adopting Azure, Cloud Foundry and Microservice Architecture at Merrill Corporation, by VMware Tanzu
SpringOne Platform 2016
Speakers: Thomas Fredell; Chief Product Officer, Merrill & Ashish Pagey; Architecture Team Lead, Merrill
Come learn how Merrill Corporation is solving real business challenges and transforming their business, directly from Merrill's product and architecture leaders. By partnering with Pivotal and Microsoft, Merrill can rapidly deliver software as Java microservices deployed to Pivotal Cloud Foundry running on Microsoft Azure.
WSO2 is a global open source software company that provides a middleware platform for enterprise integration. This document discusses integration patterns that can be implemented using WSO2's middleware products, including the Enterprise Service Bus (ESB), which supports all enterprise integration patterns and can integrate disparate systems. Specific patterns covered include service orchestration, RESTful integration, SAP integration, guaranteed delivery, API facades, cloud-to-cloud and cloud-to-on-premise integration, high availability, and security patterns. Real-world use cases demonstrate how to achieve integration for connected businesses.
The Past, Present and Future of Enterprise Integration, by Kasun Indrasiri
The document discusses the past, present, and future of enterprise integration. It describes how integration has evolved from homegrown and proprietary solutions to standards-based approaches like enterprise application integration (EAI) using hub-and-spoke or bus architectures. Service-oriented architecture (SOA) and enterprise service buses (ESBs) became popular integration approaches, but have limitations for today's API-driven landscape. The future of integration is hybrid and cloud-based, combining on-premise, cloud, mobile, and social integration using approaches like integration platform as a service (iPaaS). The document also discusses how WSO2's integration platform can be used to develop and manage hybrid integration scenarios and templates.
Building a Bank out of Microservices (NDC Sydney, August 2016), by Graham Lea
From April 2014, Tyro Payments assigned more than half of its Engineering team to developing and deploying a bespoke core banking system. Over the course of 18 months we shipped 21 new services and a new mobile app, as well as integrating with new external partners and Tyro's existing systems.
In this talk I presented a case study of the project, covering:
• the core tenets and some of the more interesting aspects of our architecture;
• why we were well positioned to use microservices for this greenfield work;
• the decisions we made that turned out well and the ones that didn't;
• security (we know a bit about that);
• testing (we do lots of it);
• deployment;
• how the system and the team are evolving.
Microservices and the Cloud-based Future of Integration – BizTalk360
The software integration market is heating up with dozens of new cloud-based vendors and a sea-change in customer expectations. What does this mean for traditional Enterprise Application Integration? What do modern integration tools give us, and where is this all heading? The answer is cloud-based microservices PaaS, and Microsoft is leading the charge forward. What are microservices, what is the next-generation Azure PaaS platform all about, and how will this transform the world of application and service integration in the future?
Open Bank Project workshop at API Days, Open Banking and Fintech, London 2015 – TESOBE
Slides of the OBP workshop. OBP is an open-source RESTful API for banks that connects to and abstracts the core banking systems underneath. A bit more technical than the slides from the previous day. Contains notes on API versioning, the catalog, and a system diagram.
A microservice approach for legacy modernisation – luisw19
A very large portion of the world's business-critical systems are considered 'legacy', and so is the code underpinning them (i.e. COBOL, PASCAL, C, to name a few). Although in many cases these systems are robust, stable and fit for the main purpose they were originally built for, they aren't flexible and scalable enough to support emerging requirements, mainly driven by a more demanding 'always on the move' and 'always connected' user.
These systems struggle to meet these demands mainly because of the 'monolithic' approach on which they were built, and because of the complexity hidden in millions of lines of code that is understood by only a handful of people who still remain active from the teams that developed these systems years ago.
There have also been thousands of failed attempts to modernise these legacy systems. The 'eating the elephant in one go' approach certainly didn't work, and although the traditional SOA approach alone worked to a certain extent, it too fell short when it came to addressing specific requirements around scalability and platform/service inter-dependencies.
In this presentation I will talk about how a legacy modernisation framework based on Microservice Architecture (MSA), in conjunction with some other known SOA patterns (i.e. ESB, API Gateway), can be applied to 'eat the elephant one piece at a time', but most importantly 'without getting indigestion'.
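The 'one piece at a time' idea described above is often realised as a strangler-style facade: a routing layer sends the few migrated paths to new microservices while everything else still falls through to the legacy system. A minimal sketch, with entirely hypothetical route and backend names:

```python
# Strangler-style facade: migrated paths go to new microservices,
# everything else falls through to the legacy backend.
# All URLs and paths here are invented for illustration.

MIGRATED_ROUTES = {
    "/accounts": "http://accounts-service.internal",  # new microservice
    "/payments": "http://payments-service.internal",  # new microservice
}

LEGACY_BACKEND = "http://legacy-gateway.internal"

def route(path: str) -> str:
    """Return the backend that should handle this request path."""
    for prefix, backend in MIGRATED_ROUTES.items():
        if path.startswith(prefix):
            return backend
    return LEGACY_BACKEND
```

As more pieces of the elephant are eaten, entries move into `MIGRATED_ROUTES` until the legacy fallback is no longer reached.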
Microservices are all the rage. They are the silver bullet of architectural styles. But what does it take to implement them and make them work? What are the foundations for building with this architectural style? In this session, you will learn about microservices from a pragmatic standpoint, based on about 2 years of experience consulting on the architectural style. Rather than take the purist approach, as outlined by Martin Fowler and others, you will learn what works and what doesn't, based on experience in the field. Session includes:
* Foundational topics necessary to implement microservices
* Basics on the architectural style as they apply to real world problems
* Necessary IT and organisational shifts to implement microservices in the Enterprise
API Adoption Patterns in Banking & The Promise of Microservices – Akana
Akana VP of Product Marketing, Sachin Agarwal, explains API adoption patterns that are specific to banking, and how microservices can be used to help develop financial applications.
This document discusses moving from traditional monolithic and SOA architectures to microservices architectures. It covers principles of microservices like high cohesion, low coupling, independent deployability and scaling of services. It also discusses organizational implications, noting that teams are typically organized around business capabilities rather than technical layers in a microservices structure. Key challenges of microservices like increased complexity and performance overhead are also outlined.
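The principles listed above, high cohesion, low coupling and independent deployability, can be illustrated with a toy sketch: each service owns its own data and depends only on a narrow message contract, so either side can be redeployed on its own. The service names and JSON fields are invented for illustration:

```python
# Two "services" coupled only through a JSON contract, not through
# shared code or a shared database. Names and fields are hypothetical.

import json

def orders_service(request: str) -> str:
    """Order service: owns order data, knows nothing about billing internals."""
    order_id = json.loads(request)["order_id"]
    return json.dumps({"order_id": order_id, "total": 99.0})

def billing_service(order_response: str) -> str:
    """Billing service: depends only on the contract fields it reads."""
    order = json.loads(order_response)
    return json.dumps({"invoice_for": order["order_id"], "amount": order["total"]})

invoice = billing_service(orders_service(json.dumps({"order_id": 7})))
```

In a real deployment the JSON would travel over HTTP or a message broker, but the coupling property is the same: as long as the contract holds, each service scales and deploys independently.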
1. The document discusses how organizations can leverage data, analytics, and insights to fundamentally change and pioneer new business models.
2. It emphasizes that data analytics cannot be accomplished in a silo and must involve the entire organization. Modern cloud platforms, software methodologies, and data tools are needed.
3. Examples are provided of how various organizations have used tools like Pivotal Greenplum to gain insights from data to solve problems in areas like predictive maintenance, risk management, and national security.
The challenges of managing an IT estate that is ever more dynamic and pervasive cannot be constrained by an approach that must know in advance which objects, metrics and situations to observe in order to intercept and resolve service "incidents". Today it is possible to collect, store and analyse in real time ALL the information dynamically produced by infrastructure, applications, IT services and users – the BIG DATA of IT – and derive from it new knowledge and actions to prevent or quickly resolve anomalies: this is IT Operations Analytics according to HP.
Mauro Ferrami, HP Software Business Consultant
Watch full webinar here: https://bit.ly/3mdj9i7
You will often hear that "data is the new gold". In this context, data management is one of the areas that has received the most attention from the software community in recent years. From Artificial Intelligence and Machine Learning to new ways to store and process data, the landscape for data management is in constant evolution. From the privileged perspective of an enterprise middleware platform, we at Denodo have the advantage of seeing many of these changes happen.
In this webinar, we will discuss the technology trends that will drive the enterprise data strategies in the years to come. Don't miss it if you want to keep yourself informed about how to convert your data to strategic assets in order to complete the data-driven transformation in your company.
Watch this on-demand webinar as we cover:
- The most interesting trends in data management
- How to build a data fabric architecture?
- How to manage your data integration strategy in the new hybrid world
- Our predictions on how those trends will change the data management world
- How can companies monetize the data through data-as-a-service infrastructure?
- What is the role of voice computing in future data analytics?
The document discusses analyzing data from the Internet of Things (IoT) to gain actionable intelligence. It describes how deriving value from IoT data requires collecting data from devices and sensors, performing analytics on the device, at the network edge and in the cloud, and having capabilities for streaming, real-time, and historical analytics as well as data integration and event management. Challenges of analyzing IoT data include a lack of standardization, need for real-time analysis of fast data, inconsistent security practices, and lack of integration platforms.
Why Infrastructure Matters for Big Data & Analytics – Rick Perret
This document discusses how infrastructure is important for big data and analytics. It provides examples of how access, speed, and availability of infrastructure impact organizations' ability to gain insights from data. Specifically, it discusses how IBM's infrastructure capabilities such as data optimization, parallel processing, low latency, and scalability help companies like Bank of Quanzhou, Coca Cola Bottling, and Sui Southern Gas Company optimize access to data, accelerate insights, and maximize availability of information.
Enterprise Information Management (EIM) involves managing and governing all types of data and information throughout its lifecycle from creation to retirement. EIM covers both structured and unstructured data, including documents, emails, and multimedia content. SAP's EIM solutions are designed to manage information as it moves through its natural lifecycle. EIM impacts SAP's strategy by supporting its applications and software portfolio through services that integrate, cleanse, and govern data to ensure high quality information is available across the enterprise.
Stream Computing is an advanced analytic platform that allows user-developed applications to quickly ingest, analyze and correlate information as it arrives from thousands of real-time sources. The solution can handle very high data throughput rates, up to millions of events or messages per second.
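The ingest-analyze-correlate pattern described above can be sketched minimally: events are processed one at a time as they arrive, with a small sliding window keeping a running aggregate, so results are available while the data flows rather than after a batch completes. The readings and the anomaly threshold are invented for illustration:

```python
# Minimal streaming sketch: a sliding window keeps a running average,
# and each incoming value is correlated against it as it arrives.
# The sensor readings and 1.5x threshold are hypothetical.

from collections import deque

class SlidingAverage:
    """Running average over the last `size` observations."""
    def __init__(self, size: int):
        self.window = deque(maxlen=size)

    def push(self, value: float) -> float:
        self.window.append(value)
        return sum(self.window) / len(self.window)

# Usage: flag readings far above their short-term average.
avg = SlidingAverage(size=3)
readings = [10.0, 12.0, 14.0, 40.0]
anomalies = [r for r in readings if r > 1.5 * avg.push(r)]
```

A production engine like Streams distributes this kind of operator across a cluster to reach millions of events per second, but the per-event processing model is the same.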
FIA Dublin presentations: Overcoming Enterprise API challenges by Mícheál Ó F... – openi_ict
This document discusses the growing enterprise API ecosystem. It notes that 61% of enterprises plan to enhance their mobility capabilities in the next two years. By 2015, 55% of mobile devices used for business will be employee-owned rather than corporate-owned. The document outlines how APIs have become the key enabler for integrating legacy systems and enabling new applications. It provides examples of large companies using mobile and cloud technologies to create applications that integrate with their backend systems.
This document discusses Hadoop and big data. It notes that digital data doubles every two years and that 85% of data is unstructured. Hadoop provides a cheaper way to store large amounts of both structured and unstructured data compared to traditional storage options. Hadoop also allows data to be stored first before defining what questions will be asked of the data.
Data Pioneers - Roland Haeve (Atos Nederland) - Big data in organisaties – Multiscope
This document discusses big data and its growth. It notes that in 2000, 2 exabytes of new data were produced, while in 2011 1.8 zettabytes of new data were produced. By 2020, data production is expected to grow 40 times to 35 zettabytes. The traditional 3-4 V's of big data (volume, velocity, variety, veracity) are expanding to 5-7 V's with the addition of viscosity, virality, and value. Examples of big data use cases include sensor data from CERN and jet engines, social media data from Twitter, and transactional data from Walmart. Atos provides big data analytics solutions and has implemented projects for smart metering,
This document describes a training course on the Federation Business Data Lake. The FBDL allows organizations to ingest diverse data sources, perform various types of analytics including real-time, interactive, and exploratory analytics, and develop applications using insights from big data. The document provides a use case of a restaurant chain that uses the FBDL to analyze social media data and inform menu decisions. It details how the company ingests Twitter data, analyzes it using Hadoop and NoSQL, and uses a dashboard to aid management decisions. The FBDL provides an integrated solution for the full analytics lifecycle from data ingestion to application development.
This document discusses enterprise asset management for aviation. It provides an overview of IBM's vision, which includes leveraging condition monitoring, visualization, mobility, analytics and intelligence to optimize asset management. Some key goals are improving reliability, reducing costs, recovering lost revenue, and assuring safety. The aviation industry faces challenges from factors like economic growth, passenger growth and globalization, which are driving new technology solutions to better predict demand, improve operations and customer experience, and increase efficiency and security.
The document provides an overview of analyzing big data using IBM technologies. It discusses how big data is growing rapidly from various sources and the challenges of handling large volumes, varieties, velocities, and veracities of data. It then summarizes IBM's approach to big data analytics using their software stack and platforms like Hadoop and Power Systems. The future of analytics is discussed with the OpenPOWER Foundation and POWER8's Coherent Accelerator Processor Interface (CAPI) which allows custom hardware to participate directly in application memory spaces.
Real Time Business Platform by Ivan Novick from Pivotal – VMware Tanzu Korea
This document discusses Pivotal's real time business platform for maximizing the value of data investments. It recommends identifying business problems with high ROI potential, then focusing data solutions on high-speed ingestion, consolidation, real-time queries, and analytics to drive real-time insights. The platform combines Gemfire for fast transactions with Greenplum for analytics. Use cases discussed include predictive maintenance, fraud detection, and recommendation engines. The platform provides a complete solution from data capture and analytics to application integration.
Creating the Foundations for the Internet of Things – Capgemini
The document discusses the challenges and opportunities presented by the Internet of Things (IoT) for companies. It outlines four challenges for organizations to address to be ready for the IoT: storing large data volumes, handling high data streams from devices, predictive analytics based on historical data, and using machine learning to drive adaptive analytics in real time. The value comes from applying analytics to gain operational efficiencies. The biggest challenge is creating an infrastructure to deliver that value by ingesting and storing data cost-effectively and extracting insights through data science.
Smarter Analytics and Big Data: Building the Next Generation of Analytical Insights
Joel Waterman, Regional Director of Business Analytics for the Middle East and Africa, discusses how IBM is making significant investments in smarter analytics and big data through acquisitions, technical expertise, and research. IBM's big data platform moves analytics closer to data through technologies like Hadoop, stream computing, and data warehousing. The platform is designed for analytic application development and integration using accelerators, user interfaces, and IBM's ecosystem of business partners.
This document provides an introduction to Apache Druid, describing it as a real-time OLAP database for data-driven applications. It outlines the evolution from first generation on-premises data warehouses to modern data lakes and data rivers. Druid is presented as a high performance analytics database designed for event-driven data at large scales with low latency. Example use cases include digital advertising, user analytics, and IoT. The document encourages readers to learn more and get involved with the Druid open source community.
The document discusses how utilities are increasingly collecting and generating large amounts of data from smart meters and other sensors. It notes that utilities must learn to leverage this "big data" by acquiring, organizing, and analyzing different types of structured and unstructured data from various sources in order to make more informed operational and business decisions. Effective use of big data can help utilities optimize operations, improve customer experience, and increase business performance. However, most utilities currently underutilize data analytics capabilities and face challenges in integrating diverse data sources and systems. The document advocates for a well-designed data management platform that can consolidate utility data to facilitate deeper analysis and more valuable insights.
Similar to BCS APSG The landscape of enterprise applications (20)
The document discusses systems theory and provides definitions and principles about systems. It defines a system as a collection of components bound more strongly to each other than their environment. Systems can exist because of stable components and binding forces. Complex systems can exhibit emergent behaviors from simple local rules operating at a large scale. All complex adaptive systems use some form of computation, and the theory of evolution describes how selective pressure favors replication of better adapted systems in large ecosystems of variable systems.
The document summarizes the evolution of enterprise systems from 1965 to 2005. In 1965, IBM introduced the first online transaction processing (OLTP) system for airline reservations, marking a shift from batch processing to real-time systems. Early OLTP systems faced challenges from slow hardware and software that was not designed for concurrent transactions. This led to the development of database management systems, data communication systems, and OLTP monitors to support the new paradigm. By 2005, thanks to exponential improvements from Moore's law, the internet, and new application servers, OLTP had become the dominant form of enterprise computing, processing billions of transactions daily on a global scale.
Enterprise systems have evolved significantly from 1965 to 2005 due to technological advances like Moore's Law and the emergence of the internet. Early enterprise systems in 1965 used batch processing on mainframes for applications like sales, distribution and billing. The development of online transaction processing (OLTP) in 1965 allowed real-time processing of transactions. While hardware improved due to Moore's Law, new software was also needed to efficiently handle concurrent transactions, leading to the creation of OLTP monitors. Competition emerged for mainframe OLTP from minicomputers and Unix systems in later decades. The rise of the internet in the 1990s revolutionized enterprise systems by enabling much larger markets through web technologies and increasing demands for scalability.
Geoff Sharman gives a tutorial on the foundations of computing from billiard balls to quantum computing. He discusses early pioneers like Turing, Landauer, Bennett, Feynman, and Deutsch and their key contributions. Turing showed computing is a physical process subject to thermodynamics. Landauer established the minimum energy required to erase a bit of information. Bennett showed computation can be reversible with no energy loss if all information is retained. Feynman introduced nanotechnology and the idea that any two-state system like an atom or electron could represent a bit. Deutsch showed quantum computers could simulate any physical process. Practical progress has been made but large-scale quantum computing still faces challenges like maintaining quantum coherence long enough
This document introduces coarrays in Fortran 2008, which allow parallel programming using a single program running across multiple images or processes. Coarrays allow variables to be accessed across images using additional subscripts and provide intrinsic functions and statements for synchronization and image control. The additions enable easier development of parallel programs compared to MPI and allow optimizations between synchronization points.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Full-RAG: A modern architecture for hyper-personalization – Zilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Communications Mining Series - Zero to Hero - Session 1 – DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How it can help today's business, and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
“An Outlook of the Ongoing and Future Relationship between Blockchain Technologies and Process-aware Information Systems.” Invited talk at the joint workshop on Blockchain for Information Systems (BC4IS) and Blockchain for Trusted Data Sharing (B4TDS), co-located with with the 36th International Conference on Advanced Information Systems Engineering (CAiSE), 3 June 2024, Limassol, Cyprus.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
UiPath Test Automation using UiPath Test Suite series, part 5 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series, part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of the CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Essentials of Automations: The Art of Triggers and Actions in FME – Safe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer's life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! – SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you... – Zilliz
Join us to introduce Milvus Lite, a vector database that can run on notebooks and laptops, share the same API with Milvus, and integrate with every popular GenAI framework. This webinar is perfect for developers seeking easy-to-use, well-integrated vector databases for their GenAI apps.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ... – James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
2. Typical Web Application?
[Diagram: requests from web browsers reach web servers, which pass work to application servers, which in turn query databases such as a Member DB and an Auction DB]
Static pages served from web server/content management system
Dynamic pages assembled by applications on application servers
May 12th 2011 APSG Enterprise Applications 2
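The dynamic half of this picture, an application server assembling a page from database rows, can be sketched minimally. The table, columns and page markup below are hypothetical stand-ins for the auction DB on the slide:

```python
# Application-server role: assemble a dynamic page from database rows.
# Table and column names are invented for illustration; an in-memory
# SQLite database stands in for the real auction DB.

import sqlite3

def build_auction_page(conn: sqlite3.Connection, auction_id: int) -> str:
    """Query the auction DB and render a simple dynamic page."""
    row = conn.execute(
        "SELECT item, top_bid FROM auctions WHERE id = ?", (auction_id,)
    ).fetchone()
    return f"<h1>{row[0]}</h1><p>Top bid: {row[1]}</p>"

# Usage: populate the stand-in DB, then assemble a page on request.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE auctions (id INTEGER, item TEXT, top_bid REAL)")
conn.execute("INSERT INTO auctions VALUES (1, 'Clock', 42.5)")
page = build_auction_page(conn, 1)
```

Static pages, by contrast, would be served directly from the web tier without ever touching the database.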
3. Did you do any of these today?
Buy something in a supermarket?
Buy a ticket for travel or entertainment?
Make a telephone call (mobile or fixed)?
Use a cash machine or debit card?
Pay for something with a credit card?
Use electricity, gas or water?
The chances are you used a traditional online transaction system (running on a mainframe?)
5. What's an Enterprise, or an Enterprise System?
6. What is a System?
[Diagram: inputs (information, energy, material) flow through a control function and an internal organisation, possibly including sub-systems, to produce outputs (information, energy, material)]
Physical, chemical, biological, social systems - real-time dynamic behaviour
7. An Enterprise is a System for delivering economic & social outputs
[Diagram: inputs (information, finance; labour, energy; materials, services) flow through a control function and an internal organisation, possibly including sub-systems, to produce outputs (products, services; information; employment, financial returns)]
Related pairs of inputs & outputs are often referred to as transactions
8. Interesting: What's an Enterprise?
Large numbers of customers
Large numbers of transactions
Large financial throughputs
Complex behaviour/operations
Sustainable operation
➔ High scale for an extended period
9. An Enterprise System is the automated part of an enterprise = a real-time model of the enterprise
[Diagram: the same enterprise inputs (information, finance; labour, energy; materials, services) and outputs (products, services; information; employment, financial returns), with the control function implemented in hardware/software as the Enterprise System]
May 12th 2011 APSG Enterprise Applications 9
10. Where would you find an ES?
Probably not here: (primary industry, 5% of economy)
Agriculture, fisheries, forestry, water extraction
Mining, oil & gas extraction
Possibly here: (secondary industry, 40% of economy)
Construction, utilities
Transport, distribution, communications
Manufacturing (discrete & continuous)
Probably here: (tertiary industry, 55% of the economy)
Financial & business services, media, retail
Education, healthcare, tourism, entertainment
Public admin
11. Brief History of Enterprise Systems
12. Pioneers of Enterprise Systems
1952 – J. Lyons & Co. LEO system - batch accounting & payroll operations
1965 – American Airlines Sabre system - online flight reservations & check-in
1994 – Amazon.com - direct customer service, just-in-time delivery
2001 – Google.com - customised search using large amounts of data
2008 – Apple iPhone - mobile applications
13. Getting Closer to the Customer
[Chart: dominant system type by decade, 1960s-2000s; System Type: 1=Batch, 2=Online, 3=Network, 4=Web, 5=Social/mobile]
14. What were the Key Innovations?
1950s – main storage (delay lines, ferrite cores), secondary storage (magnetic tape), batch job scheduler
1960s – operating system, direct access storage, database management, time sharing terminals
1970s – TP monitor, relational database
1980s – personal computer, networking
1990s – World Wide Web
2000s – search engine, social networking, mobile
16. Time Sharing/Conversational
● At logon time, the operating system allocates:
  ● Memory address space for the application
  ● Operating system process
  ● Files, communications channels, etc.
● These remain dedicated to the user until logoff
● Paradigm is widely used, but:
  ● No sharing of resources
  ● Not scalable beyond a few hundred users
17. TP Monitor Pseudo-Conversations
● TP Monitor acquires & retains shared resources:
  ● Applications, memory, processes, threads, files, databases, communication channels, etc.
● On receipt of a user transaction request, provides concurrent access to resources for the application
● Frees resources as soon as the output message is sent
● Highly scalable to 10s of thousands of users
● Requires stateless application programming
● Conversation state held in "scratchpad" files
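The pseudo-conversational pattern above can be sketched in a few lines. This is a minimal illustration, not a real TP monitor API; all names (`scratchpad`, `handle_request`, the terminal ids and message shapes) are hypothetical. The point is that the handler itself is stateless: conversation state is re-read from a scratchpad store at the start of each interaction and written back at the end, so nothing stays dedicated to the user between messages.

```python
# Sketch of pseudo-conversational processing (hypothetical names throughout).
# Conversation state lives in a "scratchpad" store keyed by terminal id,
# not in the handler, so no process or memory stays tied to the user
# between interactions.

scratchpad = {}  # stands in for the TP monitor's scratchpad files

def handle_request(terminal_id, message):
    # Re-acquire conversation state at the start of each interaction
    state = scratchpad.get(terminal_id, {"items": []})
    if message["action"] == "add":
        state["items"].append(message["item"])
        reply = f"{len(state['items'])} item(s) so far"
    else:  # "checkout" ends the pseudo-conversation
        reply = f"committing {state['items']}"
        state = {"items": []}
    # Save state and return; resources are freed as the reply is sent
    scratchpad[terminal_id] = state
    return reply

print(handle_request("T001", {"action": "add", "item": "book"}))  # 1 item(s) so far
print(handle_request("T001", {"action": "checkout"}))             # committing ['book']
```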
18. Representational State Transfer
● Underlying paradigm for Web hypertext transfers
● Commonly abbreviated as REST
● Web servers manage the network & provide concurrent access
● Defines stateless clients for rendering data
● Highly scalable to 10s of thousands of users
● Does not define how to build update applications on the Web
● Disallows "cookies" - no scratchpad
● Does not define a server application model
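The statelessness constraint can be shown with a toy handler (the resource names and routing rule here are hypothetical, not from any real framework). Because each request is self-describing and no per-client session is kept, any server replica can answer any request; only shared resource state exists.

```python
# Toy illustration of REST statelessness (hypothetical resource names).
# Each request carries everything needed to process it, so there is no
# per-client session and any server replica could handle any request.

resources = {"/items/1": "widget"}  # shared resource state, not session state

def handle(method, uri, body=None):
    if method == "GET":
        return resources.get(uri, "404 Not Found")
    if method == "PUT":
        resources[uri] = body  # idempotent: repeating the PUT changes nothing
        return "200 OK"
    return "405 Method Not Allowed"

print(handle("GET", "/items/1"))            # widget
print(handle("PUT", "/items/2", "gadget"))  # 200 OK
print(handle("GET", "/items/2"))            # gadget
```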
19. Google Applications
● Underlying paradigm for Google search and other applications
● Uses GAE (Google App Engine)
● Requires stateless clients
● Concurrent access to "scratchpad" storage via GFS/BigTable
● Highly scalable to 10s of thousands of users
● Especially suitable for applications using read-only data, e.g. search data, maps, etc.
20. Why do these Paradigms Work?
All these paradigms embody the many-to-one relationship between customers and the enterprise
The TP, REST, & Google paradigms provide scalable concurrency & enable the enterprise to exploit economies of scale
None of them is a complete description of what modern enterprise systems need
21. What Paradigm is Needed?
● Stateless applications provide the highest scalability and work well for read-only requests
● But commercial applications, e.g. web shopping, need conversation state & concurrent update
● Use HTTP because it supports any-client-to-any-server, unlike object-based protocols
● Hold state on the client or a replicated server file system
● Collect updates that form part of a transaction
● Permanently save data at the end of the conversation
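These last three points can be sketched as a toy shopping conversation (all names hypothetical): the server step is stateless, the conversation state travels with the client, updates are collected as the conversation proceeds, and the data is saved permanently only at the end, as one transaction.

```python
# Sketch of the paradigm above (hypothetical names): state held on the
# client, updates collected during the conversation, permanent save only
# at the end.

database = {}  # permanent store, written only when the conversation ends

def add_item(state, item):
    # Stateless server step: state comes in with the request, goes back out
    return state + [item]

def end_conversation(order_id, state):
    # All collected updates are saved together at conversation end
    database[order_id] = state
    return "committed"

cart = []                      # conversation state held on the client
cart = add_item(cart, "book")  # each request carries the state back in
cart = add_item(cart, "pen")
print(end_conversation("order-42", cart))  # committed
print(database["order-42"])                # ['book', 'pen']
```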
23. Enterprise Business Challenges
Enterprise business people care about two primary objectives:
● Reducing costs:
  ● automating/eliminating internal processes
  ● reducing operating costs for enterprise systems
  ● reducing ownership costs for enterprise systems
● Increasing revenue:
  ● Winning new customers
  ● Retaining existing customers
  ● Getting more business from existing customers
24. Enterprise System Challenges
1) Multi-channel applications
- acting consistently to the customer
2) Multi-business service
- providing multiple offers consistently
3) Effective customer knowledge
- acting more intelligently to the customer
4) Effective market knowledge
- foreseeing what customers will want next
25. Multi-Channel Applications
Many enterprise systems are designed to support particular sales channels, e.g.:
Store checkout systems
Kiosk/ticketing systems
Call centre systems
Web-based systems
Mobile systems
The business offer may depend on the channel, but applications should treat the customer consistently, whichever channel he/she uses
26. Typical M² Architecture
[Diagram: Presentation/Channel server(s) (static/dynamic web pages, portals, channel-specific) exchange sync msgs with an Integration server (tight coupling, loose coupling, stand-in processing, flow ctrl/compensation), which exchanges sync and async msgs with Line-of-business Application and Data server(s)]
M² = Multi-Channel, Multi-Business
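The M² message flow can be sketched as three tiers (the server names and the sync-vs-async routing rule below are invented for illustration): the channel tier sends synchronous messages to the integration tier, which either calls the line-of-business tier synchronously (tight coupling) or queues an asynchronous message (loose coupling) and returns immediately.

```python
# Toy sketch of the M2 tiers above; names and routing rule are hypothetical.
import queue

async_msgs = queue.Queue()  # integration -> LOB asynchronous channel

def lob_server(msg):
    return f"processed {msg}"  # line-of-business application & data tier

def integration_server(msg, sync):
    if sync:
        return lob_server(msg)  # tight coupling: call and wait for the reply
    async_msgs.put(msg)         # loose coupling: queue the msg and return
    return "accepted"

def channel_server(request):
    # Presentation tier: payments go sync, everything else async (toy rule)
    return integration_server(request, sync=request.startswith("pay"))

print(channel_server("pay invoice 7"))  # processed pay invoice 7
print(channel_server("ship order 9"))   # accepted
```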
27. Customer Knowledge
● Many web systems allow the customer to explore options before & after a transaction:
  ● high "browse to buy" ratio in web shopping
  ● evaluations of product, service, etc.
● If we identify the customer, we can study:
  ● search patterns
  ● history of actual transactions
  ● customer likes & dislikes
● May enable better offers to the customer:
  ● needs more data & may require real-time parallel computation
28. Market Knowledge
● Many enterprise systems collect data about a mass of customer transactions:
  ● Collected/refined in a data warehouse
  ● Linked with tools for analytics / Business Intelligence
  ● Used to produce periodic reports & analyses
● This process may be ineffective:
  ● Too slow/costly for business needs
  ● Covers only structured data – much data is unstructured
● New methods use very large data sets
● Best practice uses highly parallel processing
29. "Highly Parallel" Processing
● Google is the best known exponent:
  ● Many processes crawling the Web in parallel
  ● Combine results using the MapReduce technique
  ● Store results in the Google File System
  ● Effectively substitutes concurrency for parallelism
● Also widely used in scientific applications:
  ● e.g. SETI@Home used subscriber PCs
  ● IBM "Blue Gene" protein modelling project
  ● 4K processors generated 10 μsec simulation
  ● Uses hardware cluster plus GPRS
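The MapReduce technique named above can be shown as a sequential word-count sketch: map emits (key, 1) pairs, a shuffle groups the pairs by key, and reduce sums each group. In a real system the map and reduce tasks run in parallel across a cluster; this toy version just makes the three phases explicit.

```python
# Minimal word-count sketch of MapReduce: map -> shuffle -> reduce.
from collections import defaultdict

def map_phase(doc):
    # Map: emit a (word, 1) pair for every word in the document
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group all emitted values by key
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values into a single count
    return {key: sum(values) for key, values in groups.items()}

docs = ["the web the crawl", "crawl the web"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
print(reduce_phase(shuffle(pairs)))  # {'the': 3, 'web': 2, 'crawl': 2}
```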
30. Summary
● When building an Enterprise System, we are building a model of (part of) the enterprise:
  ● Model must be real-time and scalable
  ● Customer can use it anywhere, anytime, on any device
  ● Access any business offering consistently
  ● Know and respond intelligently to each customer
● A Meta-Enterprise System should analyse the system & the aggregate behaviour of customers:
  ● Detect trends and respond to them