See the video here: https://vimeo.com/131631801
IoT projects are really integration projects. This talk introduces Sesam and Data-Oriented Architecture (DOA), which are useful for IoT, microservices, and master data management. It explains why DOA is better than SOA, and offers a new way of thinking. The talk will become available on Vimeo soon. I realize that the "SOA is dead" commandment is provocative. SOA is not dead, but it falls short when implemented with a canonical data model. Perhaps we should write a Sesam manifesto instead, in the form "we value X over Y".
The Hive Think Tank: Translating IoT into Innovation at Every Level by Prith ... (The Hive)
In this presentation Prith Banerjee discusses how a sustainable future must become radically more efficient in the way we use energy. He shares how the Internet of Things (IoT) and the convergence of Operational Technology (OT) and Information Technology (IT) are enabling Schneider Electric's innovation at every level, redefining power and automation for a new world of energy that is more electric, decarbonized, decentralized, and digitized. Prith shares how, in this new world of energy, Schneider ensures that Life Is On everywhere, for everyone, and at every moment. He also shares a set of IoT predictions for the future, based on findings of the company's recent IoT survey of 2,500 top business executives.
A presentation on integrating real-time data with the cloud, with significant potential in the areas of industrial IT, real-time sensor information processing, and smart grids applied to various vertical industries. This is related to my blog post at www.cloudshoring.in
JavaZone 2015: Semantic integration at Hafslund AMS (Simen Sommerfeldt)
How "data oriented architecture" is used in Norway's largest IoT project. You get an intro to the project, to semantic integration, how it is set up, and how it was used in the project. Recording here: https://vimeo.com/album/3556815/video/138849272
We're all distributed systems devs now: a crash course in distributed program... (Petabridge)
Going forward, every developer who works in server-side development will be expected to understand the fundamental concepts that drive the design of distributed systems. It's a matter of when, not if.
In this talk we'll dive into concepts such as the CAP theorem, eventual consistency, microservices, event-driven architectures – and how to apply each of these tools to build effective, resilient, distributed systems.
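The concepts this talk names can be made concrete with a small sketch. The following is a minimal illustration of eventual consistency using last-write-wins replication; it is not code from the talk, and all class and variable names are invented for this example:

```python
import time

class Replica:
    """One replica storing (value, timestamp) per key.
    Last-write-wins: on conflict, the newer timestamp prevails."""
    def __init__(self):
        self.store = {}  # key -> (value, timestamp)

    def write(self, key, value, ts=None):
        ts = ts if ts is not None else time.time()
        current = self.store.get(key)
        if current is None or ts >= current[1]:
            self.store[key] = (value, ts)

    def read(self, key):
        entry = self.store.get(key)
        return entry[0] if entry else None

    def merge(self, other):
        """Anti-entropy pass: pull every entry from the other replica."""
        for key, (value, ts) in other.store.items():
            self.write(key, value, ts)

# Two replicas accept writes independently (availability under partition)...
a, b = Replica(), Replica()
a.write("user:1", "alice", ts=1)
b.write("user:1", "alicia", ts=2)  # later write lands on the other replica

# ...then converge once they can talk again (eventual consistency).
a.merge(b)
b.merge(a)
assert a.read("user:1") == b.read("user:1") == "alicia"
```

This trades the "C" in CAP for availability: both replicas accepted writes while separated, and agreement is restored only after merging.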
Big Data Warehousing Meetup: Real-time Trade Data Monitoring with Storm & Cas... (Caserta)
Caserta Concepts' implementation team presented a solution that performs big data analytics on active trade data in real-time. They presented the core components – Storm for the real-time ingest, Cassandra, a NoSQL database, and others. For more information on future events, please check out http://www.casertaconcepts.com/.
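As a rough illustration of this kind of pipeline (not Caserta's actual implementation), here is a toy flow in the style of a Storm topology, with a plain dictionary standing in for the Cassandra write path; all names are invented:

```python
from collections import defaultdict

def spout(trades):
    """Storm-style 'spout': emits raw trade events one at a time."""
    for trade in trades:
        yield trade

def monitor_bolt(stream, threshold):
    """Storm-style 'bolt': keeps a running total per symbol and flags
    any symbol whose traded volume reaches a threshold."""
    totals = defaultdict(int)
    alerts = []
    for trade in stream:
        totals[trade["symbol"]] += trade["qty"]
        if totals[trade["symbol"]] >= threshold:
            alerts.append(trade["symbol"])
    return dict(totals), alerts

trades = [
    {"symbol": "ACME", "qty": 400},
    {"symbol": "INIT", "qty": 100},
    {"symbol": "ACME", "qty": 700},
]
totals, alerts = monitor_bolt(spout(trades), threshold=1000)
# In a real deployment, totals would be written to a wide-column
# store such as Cassandra rather than kept in memory.
assert totals["ACME"] == 1100
assert alerts == ["ACME"]
```

In a real topology the spout would read from a message bus and the bolt would run distributed across workers; the shape of the computation is the same.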
This is a quick overview of the challenges that Big Data and flexible-schema databases like MongoDB pose for data treatment, and strategies to overcome them.
Nov 2014 talk to SW Data Meetup by Mike Olson, co-founder and chairman of Cloudera.
In business, we often deal with hype around trends in society, politics, economy and technology. We know we need to take claims of the next big thing with a grain of salt and that we should be careful not to set expectations too high. However, with Big Data analytics, the opposite is true. The hype that accompanies it actually conceals the enormity of its impact on the way we do business. In this talk I’ll discuss how new 'Data Driven' economies are emerging through relentless innovation across the public and private sectors.
Mike co-founded Cloudera in 2008 and served as its CEO until 2013, when he took on his current role of Chief Strategy Officer (CSO). As CSO, Mike is responsible for Cloudera's product strategy, open source leadership, engineering alignment, and direct engagement with customers. Prior to Cloudera, Mike was CEO of Sleepycat Software, makers of Berkeley DB, the open source embedded database engine. Mike spent two years at Oracle Corporation as vice president for Embedded Technologies after Oracle's acquisition of Sleepycat in 2006. Prior to joining Sleepycat, Mike held technical and business positions at database vendors Britton Lee, Illustra Information Technologies, and Informix Software. Mike has a Bachelor's and a Master's Degree in Computer Science from the University of California, Berkeley.
Amazon Web Services provides a broad range of services that help you build and deploy big data analytics applications quickly and easily. AWS offers fast access to flexible, low-cost IT resources, which lets you rapidly scale virtually any big data application, including data warehousing, clickstream analytics, fraud detection, recommendation engines, event-driven ETL, serverless computing, and Internet of Things processing. With AWS you do not need to make large upfront investments of time or money to build and maintain infrastructure. Instead, you can provision exactly the type and size of resources you need to power your big data analytics applications. You can access as many resources as you need, almost instantly, and pay only for what you use.
Data warehousing in the era of Big Data: Deep Dive into Amazon Redshift (Amazon Web Services)
Analyzing big data quickly and efficiently requires a data warehouse optimized to handle and scale for large datasets. Amazon Redshift is a fast, petabyte-scale data warehouse that makes it simple and cost-effective to analyze all of your data for a fraction of the cost of traditional data warehouses. In this session, we take an in-depth look at data warehousing with Amazon Redshift for big data analytics. We cover best practices to take advantage of Amazon Redshift's columnar technology and parallel processing capabilities to deliver high throughput and query performance. We also discuss how to design optimal schemas, load data efficiently, and use workload management.
FSI201 FINRA's Managed Data Lake – Next Gen Analytics in the Cloud (Amazon Web Services)
FINRA's Data Lake unlocks the value in its data to accelerate analytics and machine learning at scale. FINRA's Technology group has changed its customers' relationship with data by creating a Managed Data Lake that enables discovery on petabytes of capital markets data, while saving time and money over traditional analytics solutions. FINRA's Managed Data Lake includes a centralized data catalog and separates storage from compute, allowing users to query petabytes of data in seconds. Learn how FINRA uses Spot Instances and services such as Amazon S3, Amazon EMR, Amazon Redshift, and AWS Lambda to provide the "right tool for the right job" at each step in the data processing pipeline. All of this is done while meeting FINRA's security and compliance responsibilities as a financial regulator.
Fueling AI & Machine Learning: Legacy Data as a Competitive Advantage (Precisely)
The data fueling your AI or machine learning initiatives plays a critical role. Different data sources provide different outcomes. The most important thing a business can do to prepare for success with AI and machine learning is to understand and provide access to all of the data that you can possibly get to. In addition to newer data sources, like IoT and Social Media, what will set your results apart – and give your business a competitive advantage – is powering AI and machine learning with your historical and proprietary data: the data sitting in your mainframe, legacy, and other traditional systems.
View this on-demand webcast with Wikibon Analyst James Kobielus as we discuss:
• Using your historical customer data to train predictive AI/ML models for effective target marketing
• Leveraging social, mobile, and IoT data to give your marketing an extra level of personalization
• Making the most of your legacy and proprietary data while protecting customer privacy and ensuring regulatory compliance
Engineering Machine Learning Data Pipelines Series: Streaming New Data as It ... (Precisely)
Tackling the challenge of designing a machine learning model and putting it into production is the key to getting value back – and the roadblock that stops many promising machine learning projects. After the data scientists have done their part, engineering robust production data pipelines has its own set of challenges. Syncsort software helps the data engineer every step of the way.
Building on the process of finding and matching duplicates to resolve entities, the next step is to set up a continuous streaming flow of data from data sources so that as the sources change, new data automatically gets pushed through the same transformation and cleansing data flow – into the arms of machine learning models.
Some of your sources may already be streaming, but the rest are sitting in transactional databases that change hundreds or thousands of times a day. The challenge is that you can't afford to affect the performance of data sources that run key applications, so putting something like database triggers in place is not the best idea. Using Apache Kafka or similar technologies as the backbone for moving data around doesn't by itself solve the problem of grabbing changes from the source, pushing them into Kafka, and consuming the data from Kafka for processing. If something unexpected happens, like connectivity being lost on either the source or the target side, you don't want to have to fix it manually or start over because the data is out of sync.
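The flow described here, capturing changes without touching the source's critical path, appending them to a log, and consuming with a resumable offset, can be sketched in a few lines. This is a simplified stand-in with an in-memory list in place of a real Kafka topic; every name is illustrative and none of this is Syncsort's API:

```python
class ChangeLog:
    """In-memory stand-in for a Kafka topic: an append-only list."""
    def __init__(self):
        self.entries = []

    def append(self, change):
        self.entries.append(change)

    def read_from(self, offset):
        """Return (offset, change) pairs starting at a committed offset."""
        return list(enumerate(self.entries[offset:], start=offset))

def capture_changes(rows, last_seen_id):
    """Non-intrusive CDC stand-in: poll for rows newer than a watermark
    instead of installing triggers on the source database."""
    return [r for r in rows if r["id"] > last_seen_id]

# Producer side: push newly captured source rows into the log.
log = ChangeLog()
source = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}, {"id": 3, "v": "c"}]
for row in capture_changes(source, last_seen_id=0):
    log.append(row)

# Consumer side: process with a checkpointed offset, so a crash
# resumes where it left off instead of starting over out of sync.
checkpoint = 0
processed = []
for offset, change in log.read_from(checkpoint):
    processed.append(change["v"])
    checkpoint = offset + 1  # commit only after successful processing

# Simulated restart: nothing is reprocessed, the offset survived.
assert log.read_from(checkpoint) == []
assert processed == ["a", "b", "c"]
```

Real deployments replace the watermark poll with log-based change data capture and the list with a durable, partitioned topic, but the checkpoint-and-resume discipline is the same.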
View this 15-minute webcast on-demand to learn how to tackle these challenges in large scale production implementations.
Modernising the data warehouse - January 2019 (Phil Watt)
I was invited to present on Modernising the Data Warehouse to post-graduate students at the University of Melbourne in January 2019. These slides describe my experience and perspective on this topic that many, if not most, large organisations face. At Escient, we can help organisations navigate this area, and drive better outcomes from data.
Evolving From Monolithic to Distributed Architecture Patterns in the Cloud (Denodo)
Watch full webinar here: https://goo.gl/rSfYKV
Gartner states in its report "Predicts 2018: Data Management Strategies Continue to Shift Toward Distributed":
“As data management activities are becoming more widespread in both distributed processing use cases, like IoT, and demands for new types of data, emerging roles such as data scientists or data engineers are expected to be driving the new data management requirements in the coming two years. These trends indicate that both the collection of data as well as the need to connect to data are rapidly becoming the new normal, and that the days of a single data store with all the data of interest — the enterprise data warehouse — are long gone.”
Data management solutions are becoming distributed, heterogeneous and extremely diverse.
Attend this session to learn:
• How to evolve architecture patterns in the cloud using data virtualization.
• How data virtualization accelerates cloud migration and modernization.
• Successful cloud implementation case studies.
This seminar is about data warehousing: what data warehousing is, a comparison between databases and data warehouses, different data warehouse models, data marts, and the disadvantages of data warehousing.
MapR on Azure: Getting Value from Big Data in the Cloud (MapR Technologies)
Public cloud adoption is exploding and big data technologies are rapidly becoming an important driver of this growth. According to Wikibon, big data public cloud revenue will grow from 4.4% in 2016 to 24% of all big data spend by 2026. Digital transformation initiatives are now a priority for most organizations, with data and advanced analytics at the heart of enabling this change. This is key to driving competitive advantage in every industry.
There is nothing better than a real-world customer use case to help you understand how to get value from big data in the cloud and apply the learnings to your business. Join Microsoft, MapR, and Sullexis on November 10th to:
• Hear from Sullexis on the business use case and technical implementation details of one of their oil & gas customers
• Understand the integration points of the MapR Platform with other Azure services and why they matter
• Know how to deploy the MapR Platform on the Azure cloud and get started easily
You will also get to hear about customer use cases of the MapR Converged Data Platform on Azure in other verticals such as real estate and retail.
Speakers
Rafael Godinho
Technical Evangelist
Microsoft Azure
Tim Morgan
Managing Director
Sullexis
While many enterprises consider cloud computing the savior of their data strategy, there is a process they should follow when looking to leverage database-as-a-service. This includes understanding their own data requirements, selecting the right cloud computing candidate, and then planning for the migration and operations. A huge number of issues and obstacles will inevitably arise, but fortunately best practices are emerging. This presentation will take you through the process of moving data to cloud computing providers.
The adoption of NoSQL databases by large enterprises for mission-critical applications is accelerating. It started with Internet-age companies like Google, Amazon, Facebook, and LinkedIn. Today, enterprises in virtually every industry are deploying NoSQL databases to power customer-facing, revenue-driving web and mobile applications with millions of consumers and customers.
Faced with new business goals, increased customer expectations, and an immediate need to innovate in order to remain competitive, large enterprises are looking to NoSQL databases to overcome the limitations of legacy relational databases.
In this webinar, we’ll highlight the business goals and technical challenges faced by the top 10 enterprise use cases for NoSQL databases.
Personalization
Profile Management
Real-Time Big Data
Content Management
Product and Service Catalogs
Customer 360° Views
Mobile Applications
Internet of Things
Digital Communication
Fraud Detection
How Enterprises are Using NoSQL for Mission-Critical Applications (DATAVERSITY)
NoSQL databases including Couchbase are increasingly being selected as the backend technology for web and mobile apps. Document databases in particular are well suited for a large number of different use cases as an operational datastore.
In this webinar, Perry Krug, Principal Solutions Architect at Couchbase, will give a brief overview of Couchbase Server, a document database and its underlying distributed architecture. In addition, Perry will share how some of the biggest brands in the world use Couchbase, including:
PayPal: a scalable NoSQL and big data architecture with real-time analytics
Concur: a highly available cache solution that supports 1B operations/day
Amadeus: a backend data store that supports 1.6B transactions/day
GoForIT tackles the systemic challenges we face: the higher education sector and working life must move in step to reach the goals of the Paris Agreement. In addition, we need tens of thousands of digital minds with the right sustainability competence to replace the 200,000 jobs that must be created after the wind-down in the North Sea. Mali Hole Skogen from IKT-Norge and I were invited by the Directorate for Higher Education and Skills to give a talk about GoForIT at "DigiNorden", a conference on digital competence and lifelong learning for sustainable transition in the Nordic countries.
What can modern software projects learn from an old fighter pilot? (Simen Sommerfeldt)
NB: This is from 2010... An oral presentation of a paper by Steve Adolph in which we discuss how the OODA loop and blitzkrieg principles can be used to understand the target groups in a project, work proactively, and maintain strong team cohesion. You can find the paper here: https://www.agileleanhouse.com/lib/lib/Topics/OODALoop/15670827-John-Boyd-Lessons-from-a-fighter-pilot.pdf
More Related Content
Similar to Hafslund AMS - Drinking from the fire hose at a large IoT project
Briefing to the Privacy Commission on privacy in schools (Simen Sommerfeldt)
We were invited to give a briefing on our report on privacy in schools, which was launched at Arendalsuka. The commission may draw elements from it in its official report (NOU) to the government. We go through the issues we addressed, the reactions in the debate at Arendalsuka, and the media coverage. There are clickable links to all articles and recordings.
Personvernkommisjonen er oppnevnt av Regjeringen for å belyse de viktigste utfordringene og utviklingstrekkene av betydning for personvern og for å foreslå tiltak som kan forbedre personvernets stilling i Norge. Jeg ble invitert til å gi innspill til dem om teknologier som kan true personvernet, og hvilke strategier samfunnet kan ha
Gjesteforelesning om strategisk bærekraft og GoForIT til UiASimen Sommerfeldt
GoForiT består av mange av de største aktørene innenfor bransjen, med både TEKNA, NITO, Accenture, Microsoft, UiA, NTNU, Sopra Steria, CGI, Bouvet, Itera og flere. Her kan du se hvordan vi tenker rundt strategisk bærekraft, og skal samarbeide for å sørge for at vi utdanner folk i takt med hvordan vi benytter bærekraft i arbeidslivet. Si fra hvis du ønsker link til opptak av foredraget
Jeg ble oppfordret til å dele inntrykkene og tankene til oss som sto bak oppropet om en mer personvern-vennlig Smittestopp-løsning til Normen-konferansen. Jeg opplevde at det harmonerte bra med uttalelsene fra Datatilsynet og FHI
Om GoForIT - samarbeid om bærekraft mellom Akademia og arbeidslivet Simen Sommerfeldt
Beskrivelse av Grønn Utvikling for IT - et samarbeid mellom akademia og arbeidslivet for å koordinere bærekraft-satsing. Med en liten brannfakkel om FNs bærekraftmål. Fra dagens innspillmøte til kunnskapsdepartementet. De som er med i GoForIT er Bouvet, Sopra Steria, IKT-Norge, NTNU og UiA. Microsoft og UiO er på vei inn
Innledning til teknologi og rettstatsprinsipper i krisetiderSimen Sommerfeldt
Min innleding dannet bakteppet for Tekna og Juristforbundets nett-debatt 15. juni 2020: om smittestopp og veien videre. NB Slide 2 og 28 ble lagt til etterpå for å gi mer info. Slide 28 mangler litt på layout! Se opptak her https://www.tekna.no/fag-og-nettverk/IKT/ikt-bloggen/teknologi-i-krisetider/
GDPR gjør Europa til en foregangsverdensdel. Er UH-sektoren klar til å gripe ...Simen Sommerfeldt
Keynote til OsloMET sin personverndag: Litt om hvordan manglende tillit til aktørene hindrer oss i å oppnå bærekraft gjennom smarte byer. Selvfølgelig noe om GDPR, og hvordan personvern og sikkerhet gjør Europa til en foregangsverdensdel. Mine betraktninger om GDPR og endringsledelse. Selvfølgelig også noe om hvordan OsloMet kan spise videre på elefanten
Digtialiseringskompetanse for ledere tli teknologidagen 2019Simen Sommerfeldt
Alle snakker om digitalisering, men kan de ordene de trenger for å kommunisere? Har de god nok kompetanse til å ta de informerte risikoene forbundet med innovasjon? Jeg tar utgangspunkt i hva vi gjorde i Digital21, og hvilke begreper en bør forstå for å kunne delta i samtaler
GDPR - et vannskille. Hva nå? Til fagpressedagen 2018Simen Sommerfeldt
Mange gjorde mye forarbeid, og så skjedde det ikke noe? I stedet for å repetere GDPR går jeg litt inn på driverne, historien, hva som skjer i Europa nå, og hva vi kan forvente. Til slutt gir jeg noen råd på veien
Performance-marketing bransjen står overfor et vannskille med GDPR, og sammen med Tore Tomasgaard i Ko&Co har jeg sett på hvilke hovedutfordringer og muligheter som kommer.
Konferansiér-presentasjonen jeg laget til Yggdrasil 2018. Det er en del videoer og animasjoner som selvfølgelig ikke kommer med her. Designmanualen jeg baserte meg på er laget av Kristin Kokkersvold fra Studio Netting
Røverhistorie om GDPR til "Fredag morgen hos dataforeningen"Simen Sommerfeldt
Et "worst case" scenario vi bruker i kurset vårt - og litt om hvordan vi bruker Service Design / Kundereiser for å oppdage hva man bør endre på.
Link til video: https://youtu.be/1Z78o1IkZpg?t=1h38m39s
Trender som påvirker Sosiale medier - til Social media days 2018Simen Sommerfeldt
Foredraget mitt til #somed2018 : Hvordan kundekommunikasjon i USA og Europa får helt forskjellige vilkår - spesielt med tanke på #GDPR og Kunstig Intelligens.
Til "Digital 2017" konferansen. Vi forteller om hvordan RPA har hjulpet Bergen kommune, og peker litt fremover om hvordan en kan bruke Kunstig Intelligens og Machine Learning i offentlig sektor
Jeg har laget en info-pakke til deg som er arkitekt eller utvikler, og skal jobbe med personopplysninger.
- Kort intro til grunnbegrepene i personvern, og hva GDPR medfører av plikter og rettigheter
- En gjennomgang av hva de viktigste artiklene i forordningen vil medføre av krav til funksjonalitet
- En introduksjon til mekanismene for å anonymisere og pseudonymisere data
- Hva slags kompetanse du må ha, og viktige elementer i utviklingsprosessen
- Nyttige design- og arkitekturføringer
- Hva du bør spørre etter for å kunne jobbe bra med personvern
- Noen relevante verktøy som du kan bruke
Om du er arkitekt eller utvikler, vil du ha stor nytte av denne introen til GDPR: Lovverket trår i kraft 25.mai 2018, og setter helt absolutte krav til sikkerhet og personvern. Med bøter på opp til 4% av global konsernomsetning, blir vi tvunget til å lage nye systemer helt annerledes, og mange eksisterende må skrives om.
Hafslund AMS - Drinking from the fire hose at a large IoT project
1. Hafslund AMS
Drinking from the fire hose
at a large IoT project
Jon Andreas Pretorius, Hafslund Nett
Axel Borge, Sesam
Simen Sommerfeldt, Bouvet
NDC 2015
4. • Hafslund Nett owns and operates Norway's largest
electricity grid and has long had one of the lowest grid
tariffs
• Hafslund Nett owns and operates the regional grid in
Oslo, Akershus county and Østfold county
• Hafslund Nett owns and operates the distribution network
in Oslo and most of Akershus and Østfold counties
• The distribution network has 675,000 customers
• The network consists of 37,000 km of overhead lines and
underground cables
• Hafslund Driftssentral is one of Europe's most advanced
operating centres: it controls, monitors and optimizes
power delivery to 1.4 million people, Hafslund Varme's district
heating plants in the Oslo area and Hafslund Produksjon's
power plants in the Glomma river
Business Area Network
6. There will be more changes to power grid
operation in the next five years than in the last
100 years
1899 · 1911 · 2011 · 2020
7. Changes in regulations will significantly increase
complexity and increase the demand for automation
Today:
• 35,000 simple remotely read meters
• Annual operating cost per meter: approx. NOK 950
• There is a buffer for error correction; the schedule is only
tight around meter readings
• IT is only a minor threat to our reputation
• Complexity is high, but we are saved by quiet periods
• Customer expectations are as in 1990
2020 (with AMS and Elhub):
• 700,000 complex remotely read meters
• Annual operating cost per meter: NOK 150
• Everything is online, with no buffer or service window.
Everything must always be available. Think of it as meter
readings every day, around the clock
• Hacking, viruses etc. will be a much bigger threat in
general, and meters can be hacked
• Complexity will be considerably higher, and constant
• How will customer expectations change?
9. Elhub and the supplier centric model
Statnett has been commissioned by NVE to establish
Elhub.
Elhub shall collect all metering values for Norway and
make these values available to power suppliers and
their end customers. Furthermore, Elhub will support
processes for customers moving or switching suppliers,
and compile data for clearing between participants in
the electricity market.
For Hafslund Nett this means that collected and verified
hourly values from all AMS meters shall be transferred
to Elhub once a day.
When the supplier centric model is established,
customers will only deal with their electricity supplier
(compare the service and infrastructure providers of
mobile telephony).
The supplier centric model creates major changes in
business processes and data exchange in the industry.
Drawing from elhub.no
10. [Diagram: Systems A, B, C, D … N connected through a central
DataNAV hub]
Hafslund investigated two
alternative solutions for an
integration architecture that will
support the demands of the new
AMS solution:
- Service bus
- Data hub (semantic/RDF)
Hafslund has experience with
both solutions, but the project
considered a data hub based
solution most appropriate in this
context:
- Increased stability
(asynchronous data
exchange)
- Fewer integration points
- Similar architecture chosen
for the central Elhub
Choice of integration solution
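The "fewer integration points" argument above can be made concrete with a little arithmetic. This is my own illustration, not from the slides: with point-to-point integration every pair of systems may need its own connection, while a hub needs one connection per system.

```python
def point_to_point_links(n_systems: int) -> int:
    """Worst case: every pair of systems needs its own integration."""
    return n_systems * (n_systems - 1) // 2

def hub_links(n_systems: int) -> int:
    """With a central hub, each system integrates once: with the hub."""
    return n_systems

# The gap widens quickly as systems are added.
for n in (5, 10, 20):
    print(n, point_to_point_links(n), hub_links(n))
```

With 20 systems, the worst-case point-to-point count is 190 integrations against 20 for the hub, which is why adding or replacing a system is so much cheaper in the hub model.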
11. [Diagram: established application solution design. IFS ERP
(warehousing & logistics, project module, WO module, 360º scheduling,
economy, installation registry, documentation, HR/resource, reception
and withdrawal of goods) and a new field system support the AMS
rollout and AMS operations stages: planning and monitoring, assign,
start, perform, report. A central Data Hub connects the data sources
for the rollout (GeoNIS #installation, Quant #AMS, Generis #old
meters, CAB #customer) with the data recipients, and feeds
consolidated customer and installation data to a data warehouse /
archive for historical data and analysis.]
Integration engine:
• All master data is consolidated in the Data Hub
• The Data Hub is the only source for all business applications
• In the semantic database, all data are connected
• The Data Hub provides great potential for management of the
information model and for analysis
Established application solution design
30. Convert data to triples (RDF)
Source table:
ID   Name             Position    Born      E-mail             Manager
101  Tim Berners-Lee  Programmer  08061955  timbl@w3c.org      958
958  Vint Cerf        Inventor    23061940  vint@stanford.edu  999
765  Pål Spilling     Professor   04111940  pspilling@uio.no   765
Becomes triples:
Subject  Predicate  Object
101      Type       Person
101      Name       Tim Berners-Lee
101      Position   Programmer
101      Born       08061955
101      E-mail     timbl@w3c.org
101      Manager    958
31. Universally unique identifiers
Subject                          Predicate  Object
www.org.no/data/system/person/1  Type       Person
www.org.no/data/system/person/1  Name       Tim Berners-Lee
www.org.no/data/system/person/1  Position   Programmer
www.org.no/data/system/person/1  Manager    www.org.no/data/system/person/2
www.org.no/data/system/person/2  Name       Vint Cerf
www.org.no = unique organisation on the internet
www.org.no/data/system/person/1 = unique id of the information element
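The two slides above can be sketched in a few lines of code. This is my own illustration, not Sesam code: a table row becomes a set of triples, and the subject (and any foreign keys) are minted as globally unique URIs so that graphs from different systems can be merged without id collisions. The base URI and predicate names are assumptions for the example.

```python
# Hypothetical base URI for the example, echoing the slide.
BASE = "http://www.org.no/data/system/person/"

def row_to_triples(row: dict) -> list:
    """Turn one table row into (subject, predicate, object) triples."""
    subject = BASE + str(row["ID"])
    triples = [(subject, "Type", "Person")]
    for predicate in ("Name", "Position", "Born", "E-mail"):
        triples.append((subject, predicate, str(row[predicate])))
    # Foreign keys become URIs too, so the Manager reference points
    # at another globally unique resource rather than a local id.
    triples.append((subject, "Manager", BASE + str(row["Manager"])))
    return triples

row = {"ID": 101, "Name": "Tim Berners-Lee", "Position": "Programmer",
       "Born": "08061955", "E-mail": "timbl@w3c.org", "Manager": 958}
for s, p, o in row_to_triples(row):
    print(s, p, o)
```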
37. SDShare
[Diagram: a source system publishes changes via SDShare through the
hub to a destination system (HR shown as an example)]
• Based on Atom: pull data, don't push
• Asynchronous
• Subscribers ask for data that has changed
since the last time they asked
• Update frequencies are adjustable
• You can ask for the changes or for the whole dataset
• Data formats are transformed in transit.
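The pull model above can be sketched with the standard library. This is an illustration under assumptions (an invented sample feed, parsed from a string): a real subscriber would fetch the Atom feed over HTTP at its own pace and remember the timestamp of its last poll.

```python
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

# Invented sample feed standing in for an SDShare changes feed.
SAMPLE_FEED = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <entry><id>person/1</id><updated>2015-06-01T10:00:00Z</updated></entry>
  <entry><id>person/2</id><updated>2015-06-03T09:30:00Z</updated></entry>
</feed>"""

def changes_since(feed_xml: str, last_poll: str) -> list:
    """Return ids of entries updated after the last poll. ISO-8601
    timestamps in the same format compare correctly as strings."""
    root = ET.fromstring(feed_xml)
    return [entry.find(ATOM + "id").text
            for entry in root.findall(ATOM + "entry")
            if entry.find(ATOM + "updated").text > last_poll]

print(changes_since(SAMPLE_FEED, "2015-06-02T00:00:00Z"))
```

Because the subscriber drives the exchange, it can slow down, catch up after downtime, or re-request the whole dataset, which is what makes the approach robust in a rollout like this.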
43. Kafka for extra throughput
[Diagram: Kafka queue → Kafka provider → SDShare server, with lookups
against the RDF store]
The Kafka provider pulls items off the queue and can add
extra data from the RDF store before exposing them via SDShare. It can
also apply filters based on data in the hub or on the item from the queue.
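The enrich-and-filter step just described can be sketched as a pure function. This is my own illustration, not the actual Sesam provider: the queue and the RDF store are stubbed as plain Python structures, and the `valid` flag and `grid_area` attribute are invented for the example.

```python
# Stub "RDF store": subject -> extra attributes to merge in.
RDF_STORE = {"meter/42": {"grid_area": "Oslo"}}

def provide(items):
    """Filter queue items, enrich survivors from the store, yield them."""
    for item in items:
        # Filter based on data on the item itself.
        if not item.get("valid", True):
            continue
        # Enrich with extra data looked up in the (stubbed) RDF store.
        extra = RDF_STORE.get(item["subject"], {})
        yield {**item, **extra}

queue = [{"subject": "meter/42", "reading": 17.3},
         {"subject": "meter/99", "reading": 0.0, "valid": False}]
print(list(provide(queue)))
```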
44. • SQL databases via JDBC
• CSV files
• RDF triple stores
• SharePoint
• Kafka
• XML files
• LDAP providers
• Excel files
• MS Exchange server (mail and calendar)
• SDSHARE – anything! (MongoDB, etc.)
Data sources and sinks
46. [Diagram: existing systems contribute uniformly structured data
from heterogeneous sources into the hub, where all data is indexed,
giving complete views of all systems and processes]
Using the data:
• Processes: drive a process through state changes; models in data,
constraints in data; act on all data
• Data analytics & enhancement: analytics results are just more data
• Search and reporting: all people can ask all questions
• Existing systems are improved
47. Other systems can keep running
even if one is down. And you
can upgrade a system, or install
a new one, with less impact.
48. The customer controls the
information model and
becomes more independent
of vendors.
50. 1. Thou shalt only get data from other domains through Sesam
2. SOA is dead, long live DOA. Processes advance through state changes
3. There can never be a common data model in the company
4. Thou shalt never query Sesam directly, but through SDSHARE
5. Thou shalt be comfortable with eventual consistency
6. Thou wilt always get the same answer when you ask Sesam the same
question. And Sesam can say the same things many times
7. The world is asynchronous, as is Sesam. Don't try to shoehorn in synchronicity
8. Thou shalt embrace that data can have different sources/masters and values
9. The systems need not know about Sesam
10. Sesam is not a backup.
The Commandments of Sesam
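Commandment 2, "processes advance through state changes", can be sketched as a toy. This is my own illustration under assumptions (the order ids, state names and a dict standing in for the hub are invented): instead of calling the next system via RPC, a system writes a new state to the hub, and downstream systems poll for the states they are responsible for.

```python
hub = {}  # stub data hub: order id -> current state

def set_state(order_id: str, state: str) -> None:
    """Publishing the state change to the hub *is* the integration."""
    hub[order_id] = state

def installer_poll() -> list:
    """The field-work system picks up every order that has reached
    the state it is responsible for acting on."""
    return [oid for oid, state in hub.items() if state == "meter-assigned"]

set_state("order-1", "meter-assigned")
set_state("order-2", "completed")
print(installer_poll())
```

Because the producer never addresses the consumer directly (commandment 9), systems can be added, upgraded or taken down without the others noticing.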
51. • Runs in Docker containers
• GitHub and Saltstack are used to keep all
installations up to date
• At the core: the Virtuoso triple store
• Includes a data browser
• Indexed with Solr to provide universal search
• All communication happens with SDSHARE
• Configuration over coding
Sesam tech
52. A paradigm shift for developers
• Eventual consistency
• "Pilfering" of data
• RDF and SDShare
• SPARQL is not SQL
• Idempotence: Sesam
can send duplicates
• No RPC calls or
message passing
• You need an information
architect in the project
• Don't add more queues.
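The idempotence point above is worth a concrete sketch. This is an illustration only, with invented customer data: because Sesam/SDShare may deliver the same change more than once, a consumer should apply updates so that replays are harmless, for example by upserting on a stable id instead of appending.

```python
customers = {}  # local copy of customer data, keyed by stable id

def apply_change(change: dict) -> None:
    """Upsert: applying the same change twice gives the same result,
    so duplicate deliveries are harmless."""
    customers[change["id"]] = change["name"]

for change in [{"id": 1, "name": "Kari"},
               {"id": 1, "name": "Kari"},     # duplicate delivery
               {"id": 1, "name": "Kari N."}]: # genuine update
    apply_change(change)
print(customers)
```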
55. A recap of the requirements
• Massive amounts of data
• Many systems must be coordinated
• Many stages in the deployment, with changing
needs
• Systems will be upgraded and changed
• The systems were not designed to cooperate with
each other
• Bugs and errors happen – in systems and human
actions.
68. When to use Sesam
• When all else has been tried – you are f***ed
• If you have many domains in the company
• If your integration work involves a lot of
data transformation, lookup and
conversion
• If the logic in the ESB rivals that of the
systems
• For Internet of Things projects
• As a collector for big data projects
70. Want to know more?
• contact us at info@sesam.no and we will
help you get started
• www.sesam.no
• www.sdshare.org
71. • Anders Volle
• Ståle Heitmann
• Steinar Rudsar
• Axel Borge
• Øystein Isaksen
• Graham Moore
• Lars Marius Garshol
• Steinar Rune Eriksen
Thanks to...