This document discusses DataOps, an agile methodology for developing and deploying data-intensive applications. DataOps supports cross-functional collaboration and fast time to value, expanding DevOps practices to include data-related roles such as data engineers and data scientists. Its key goals are continuous model deployment, repeatability, productivity, agility, self-service, and making data central to applications, bringing flexibility and focus to data-driven organizations.
Microservices at Scale: How to Reduce Overhead and Increase Developer Product... (DevOps.com)
As a cloud native application grows in size—more microservices, more dependencies, more teams—there’s a corresponding increase in:
Complexity: Over time, the application becomes a lot harder for a single developer to reason about and contribute to. Staying on top of READMEs and managing cross-team communication is practically a full-time job.
Scaling challenges: The reality of building, deploying, and testing a 100+ service distributed application means developers are going to spend a lot of time sitting around waiting.
But it doesn’t have to end up this way, and there are concrete steps that DevOps engineers can take to keep their developers moving quickly even as an application grows. In this webinar, we’ll show you how to use open source products to:
Make it easy for your developers to code and run on-demand tests against a production-like environment—without having to constantly deal with the complexity that comes with a large application
Codify the relationship between all your services and tests, making your system self-documented and easy to understand
Keep your integration tests running fast so that devs can more easily write and debug their tests and get the quick feedback loops they need
Facilitate remote, in-cluster development and give every developer their own isolated namespace—and never again ask a developer to deploy the application on their laptop
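The per-developer isolation mentioned in the last point can be sketched in a few lines: generate one Kubernetes Namespace manifest per developer and apply it to the cluster. The manifest fields below are standard Kubernetes, but the `dev-<name>` naming scheme and the labels are illustrative assumptions, not details from the webinar.

```python
def dev_namespace_manifest(developer: str) -> dict:
    """Build a Kubernetes Namespace manifest that gives one developer
    an isolated sandbox in a shared cluster."""
    name = f"dev-{developer.lower()}"
    return {
        "apiVersion": "v1",
        "kind": "Namespace",
        "metadata": {
            "name": name,
            # Labels make sandboxes easy to list and garbage-collect later.
            "labels": {"purpose": "developer-sandbox", "owner": developer.lower()},
        },
    }
```

Serialized to YAML, the result can be applied with `kubectl apply -f -`; a real setup would also attach resource quotas and network policies to each namespace.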
Here is a short presentation on the DevOps to DevSecOps journey:
- What DevOps is and its best practices.
- Practical scenarios of DevOps practices.
- The DevOps transformation journey.
- The transition to DevSecOps and why we need it.
- Enterprise CI/CD pipelines.
Hardening Your CI/CD Pipelines with GitOps and Continuous Security (Weaveworks)
Join us for a webinar on how to secure your CI/CD pipeline for Kubernetes with GitOps best practices and continuous runtime protection. As modern developers and DevOps teams are embarking on a quest for speed and reliability through automated CI/CD pipelines for Kubernetes, enterprises still need to ensure security and regulatory compliance.
Together with Deepfence, the Weaveworks team will explain and demonstrate how GitOps continuous delivery pipelines, combined with continuous security observability, improve the overall security of your development workflow, from Git to production.
In this webinar we will demonstrate:
Deepfence container scanning
Git-to-Kubernetes using FluxCD
Deepfence continuous runtime security
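The Git-to-Kubernetes delivery shown with FluxCD follows the core GitOps pattern: continuously reconcile the live cluster toward the state declared in Git. A minimal sketch of that reconciliation step, with plain dictionaries standing in for Git-declared and live resources (the function and field names are illustrative, not FluxCD's API):

```python
def reconcile(desired: dict, live: dict) -> dict:
    """Return the actions a GitOps controller would take so the live
    state converges to the desired (Git-declared) state."""
    actions = {}
    for name, spec in desired.items():
        if name not in live:
            actions[name] = ("create", spec)   # declared in Git, missing live
        elif live[name] != spec:
            actions[name] = ("update", spec)   # live state drifted from Git
    for name in live:
        if name not in desired:
            actions[name] = ("delete", None)   # removed from Git
    return actions
```

A controller like Flux runs this loop continuously, which is also why manual `kubectl` edits get reverted: they show up as drift on the next pass.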
Comparing Microsoft SQL Server 2019 Performance Across Various Kubernetes Pla... (DevOps.com)
With the growing adoption of Kubernetes, organizations want to take advantage of containerized Microsoft SQL Server 2019 to optimize transactional performance and accelerate time-to-insights from their business-critical data. However, as enterprises embrace a hybrid cloud strategy, they need to weigh the performance, cost, and data protection requirements for running enterprise-grade SQL Server databases.
In this webinar, we will compare and contrast various cloud-native platforms for SQL Server to help CIOs, DevOps engineers, database administrators, and application architects determine the most suitable platform for their business needs.
Join us as we explore some exciting results from a recent performance benchmark study conducted by McKnight Consulting Group, an independent consulting firm, to compare the performance of Microsoft SQL Server 2019 on the best possible configurations of the following Kubernetes platforms:
Diamanti Enterprise Kubernetes Platform
Amazon Web Services Elastic Kubernetes Service (AWS EKS)
Azure Kubernetes Service (AKS)
Topics will include:
Platform considerations and requirements for running Microsoft SQL Server 2019
Performance comparison and analysis of running SQL Server on various platforms
Best practices for running containerized SQL Server databases in Kubernetes environments
Next Generation Vulnerability Assessment Using Datadog and Snyk (DevOps.com)
Vulnerability assessment can often be overwhelming for teams. Depending on the application, the dependency graph can contain thousands of packages. Triaging vulnerability data and prioritizing actions has historically been a very manual process, until now. With Datadog and Snyk, learn how to trace security and performance issues by leveraging continuous profiling capabilities for actionable insights that help developers remediate problems.
Join us on Thursday, January 21 for a unique opportunity to learn more about continuous profiling, vulnerability management, and the benefit to customers from using both of these products. In this webinar, you will:
Bust some myths around continuous profiling and learn how Datadog differentiates itself
See decorated traces in action for sample Java applications and understand how Snyk + Datadog reduce time to triage supply chain vulnerabilities
Learn roadmap information for upcoming public announcements from both partners
OPENING KEYNOTE:
The Cloud Native Computing Foundation (CNCF) is an open source software foundation dedicated to making cloud native computing universal and sustainable, with over 300 members including the world’s largest public cloud and enterprise software companies. Alexis Richardson, CEO of Weaveworks and chair of the CNCF Technical Oversight Committee, will walk you through some success stories and explain why cloud native is the way forward. You’ll learn why Kubernetes and other CNCF projects have some of the fastest adoption rates in the history of open source, and how this is only the beginning.
Alexis will then show how you can increase speed and reliability in your development workflows even further by using the GitOps model, which has been developed at Weaveworks. You’ll learn about the core concepts of GitOps, including customer success stories, and how you can benefit from using this model.
Kubernetes Administration Certification Cost - Register Now (7262008866) (Novel Vista)
Kubernetes was originally designed by Google and is now maintained by the Cloud Native Computing Foundation; it is the most popular container orchestration tool on the market. The course offers classroom training during weekends, practice tests to make you certification-ready, and virtual, interactive training sessions. There is no particular prerequisite for Kubernetes Administrator training, although a solid understanding of containers, and Docker in particular, is beneficial.
Monitoring Serverless Applications with Datadog (DevOps.com)
Join Datadog for a webinar on monitoring serverless applications with AWS Lambda. You'll learn how to get the most out of Datadog's platform, with the following key takeaways:
Learn how to set up a Twitter bot that makes API calls with Node.js
Deploying Serverless Applications
What does observability look like with less infrastructure?
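The demo above uses Node.js; the same handler shape in a Python sketch looks like this. The event fields follow the common API Gateway proxy format, but the greeting logic is purely illustrative.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda-style handler: read a query parameter and
    return an API Gateway proxy response."""
    # queryStringParameters is None (not a missing key) when no query string is sent.
    params = (event or {}).get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }
```

A monitoring layer such as Datadog's would instrument or wrap a handler like this; the wrapper itself is beyond this sketch.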
Journey Through Four Stages of Kubernetes Deployment Maturity (Altoros)
In this webinar we will discuss a crawl, walk, run approach to continuous delivery (CD) for applications, point by point:
Where to start, how to advance, and how to reach the level of maximum automation.
How to orchestrate CI/CD processes along with routing and business continuity.
When the automation level is sufficient.
GitOps principles and their benefits.
What tools should be used to automate CI, CD, GitOps, container registries, secrets management, etc.
This presentation reflects on the remarkable advancement of the open source community in the field of cloud computing and how it now allows us to build reliable software components quickly on truly agile infrastructure.
A session on how to use Azure DevOps best practices for developing and publishing applications and infrastructure to Azure, whether you use PaaS, FaaS, or IaaS.
Data-Driven DevOps: Improve Velocity and Quality of Software Delivery with Me... (Splunk)
Much of the value of DevOps comes from a (renewed) focus on measurement, sharing, and continuous feedback loops. In increasingly complex DevOps workflows and environments, and especially in larger, regulated, or more crystallized organizations, these core concepts become even more critical.
This session will show how, by focusing on 'metrics that matter,' you can provide objective, transparent, and meaningful feedback on DevOps processes to all stakeholders. Learn from real-life examples how to use the data generated throughout application delivery to continuously identify, measure, and improve deployment speed, code quality, process efficiency, outsourcing value, security coverage, audit success, customer satisfaction, and business alignment.
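One concrete example of a "metric that matters" is lead time for changes: how long a commit takes to reach production. Given (commit, deploy) timestamp pairs harvested from delivery tooling, the median lead time is a one-liner; the data shape here is an assumption for illustration, not Splunk's schema.

```python
from statistics import median

def median_lead_time_hours(changes):
    """changes: iterable of (commit_epoch_s, deploy_epoch_s) pairs.
    Returns the median commit-to-deploy lead time in hours."""
    return median(deploy - commit for commit, deploy in changes) / 3600.0
```

The median is usually preferred over the mean here, since a single stalled change would otherwise dominate the metric.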
InfoSec: Evolve Thyself to Keep Pace in the Age of DevOps (VMware Tanzu)
Companies going through digital transformation initiatives need their IT organizations to support an increased business tempo. While DevOps practices have helped IT increase their pace to keep up with market dynamics, security teams still need to follow suit.
InfoSec practitioners must modernize their practices to realize efficiencies in some of their most burdensome processes, like patching, credential management, and compliance.
By embracing a ‘secure by default’ posture, security teams can position themselves as enabling innovation rather than hindering it.
Join Pivotal’s Justin Smith and guest speaker, Fernando Montenegro from 451 Research, in a conversation about how security can enable innovation while maintaining best security practices. They will examine best practices and cultural shifts that are required to be secure by default, as well as the role processes and platforms play in this transition.
SPEAKERS:
Guest Speaker: Fernando Montenegro, Senior Analyst, Information Security, 451 Research
Justin Smith, Chief Security Officer for Product, Pivotal
Jared Ruckle, Product Marketing Manager, Pivotal
How a Semantic Layer Makes Data Mesh Work at Scale (DATAVERSITY)
Data Mesh is a trending approach to building a decentralized data architecture by leveraging a domain-oriented, self-service design. However, the pure definition of Data Mesh lacks a center of excellence or central data team and doesn’t address the need for a common approach for sharing data products across teams. The semantic layer is emerging as a key component to supporting a Hub and Spoke style of organizing data teams by introducing data model sharing, collaboration, and distributed ownership controls.
This session will explain how data teams can define common models and definitions with a semantic layer to decentralize analytics product creation using a Hub and Spoke architecture.
Attend this session to learn about:
- The role of a Data Mesh in the modern cloud architecture.
- How a semantic layer can serve as the binding agent to support decentralization.
- How to drive self service with consistency and control.
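The "consistency and control" point above is the crux of a semantic layer: each metric is defined once, centrally, and every spoke team computes it through the shared definition instead of re-deriving it. A toy sketch of the idea, where the registry shape and the `revenue` rule are hypothetical:

```python
# Central (hub-owned) metric definitions shared by all spoke teams.
METRICS = {
    "revenue": lambda rows: sum(r["amount"] for r in rows if r["status"] == "paid"),
    "order_count": lambda rows: len(rows),
}

def compute(metric: str, rows: list):
    """Every team computes a metric through the shared definition,
    so 'revenue' means the same thing in every team's dashboard."""
    return METRICS[metric](rows)
```

A real semantic layer expresses these definitions declaratively over governed data models rather than as Python lambdas, but the contract is the same: one definition, many consumers.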
The catalyst for the success of automobiles came not through the invention of the car but rather through the establishment of an innovative assembly line. History shows us that the ability to mass produce and distribute a product is the key to driving adoption of any innovation, and machine learning is no different. MLOps is the assembly line of Machine Learning and in this presentation we will discuss the core capabilities your organization should be focused on to implement a successful MLOps system.
It is a fascinating, explosive time for enterprise analytics.
It is from the position of analytics leadership that the mission will be executed and company leadership will emerge. The data professional is absolutely sitting on the performance of the company in this information economy and has an obligation to demonstrate the possibilities and originate the architecture, data, and projects that will deliver analytics. After all, no matter what business you’re in, you’re in the business of analytics.
The coming years will be full of big changes in enterprise analytics and Data Architecture. William will kick off the fourth year of the Advanced Analytics series with a discussion of the trends winning organizations should build into their plans, expectations, vision, and awareness now.
Software engineering practices for the data science and machine learning life... (DataWorks Summit)
With the advent of newer frameworks and toolkits, data scientists are now more productive than ever and starting to prove indispensable to enterprises. Typical organizations have large teams of data scientists who build out key analytics assets that are used on a daily basis and are an integral part of live transactions. However, quite a lot of chaos and complexity gets introduced because of the state of the industry. Many packages used by data scientists come from open source, and even if they are well curated, there is a growing tendency to pick cutting-edge or unstable packages and frameworks to accelerate analytics. Different data scientists may use different versions of runtimes, different Python or R versions, or even different versions of the same packages. Data scientists predominantly work on their laptops, and it becomes difficult to reproduce their environments for use by others. Since data science is now a team sport across multiple personas, involving non-practitioners, traditional application developers, execs, and IT operators, how does an enterprise create a platform for productive cross-role collaboration?
Enterprises need a very reliable and repeatable process, especially when it results in something that affects their production environments. They also require a well managed approach that enables the graduation of an asset from development through a testing and staging process to production. Given the pace of businesses nowadays, the process needs to be quite agile and flexible too—even enabling an easy path to reversing a change. Compliance and audit processes require clear lineage and history as well as approval chains.
In the traditional software engineering world, this lifecycle has been well understood and best practices have been followed for ages. But what does it mean when you have non-programmers, or users who are not really trained in software engineering philosophies, or who perceive all of this as "big process" roadblocks in their daily work? How do we engage them in a productive manner and yet support enterprise requirements for reliability, tracking, and a clear continuous integration and delivery practice? In this session, the presenters will share interesting techniques based on their user research, real-life customer interviews, and productized best practices. The presenters also invite the audience to share their stories and best practices to make this a lively conversation.
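A small first step toward the reproducibility this section describes is simply recording the interpreter and package versions an analysis ran with. A stdlib-only sketch follows; the `requirements.txt`-style output format is an illustrative choice, not a prescribed practice from the talk.

```python
import sys
import importlib.metadata as md

def environment_snapshot(packages):
    """Return requirements.txt-style lines pinning the Python version
    and the installed versions of the given packages."""
    v = sys.version_info
    lines = [f"# python {v.major}.{v.minor}.{v.micro}"]
    for pkg in sorted(packages):
        try:
            lines.append(f"{pkg}=={md.version(pkg)}")
        except md.PackageNotFoundError:
            # Record the gap rather than failing: missing packages are
            # exactly what a teammate needs to know about.
            lines.append(f"# {pkg}: not installed")
    return lines
```

Committing such a snapshot alongside a notebook lets a colleague, or a CI job, rebuild a close match of the original laptop environment.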
Speaker
Sriram Srinivasan, Senior Technical Staff Member, Analytics Platform Architect, IBM
AUSOUG - NZOUG-GroundBreakers-Jun 2019 - AI and Machine Learning (Sandesh Rao)
Autonomous Database is one of the hottest Oracle products, and one where we have applied machine learning to several aspects of the service. This presentation looks at the current state of diagnostic methodology in the Autonomous Database Cloud services and at how we process this data to find anomalies, troubleshoot them at a scale of several petabytes a year, and conduct AIOps. One use case is a log anomaly timeline, in which we use semi-supervised machine learning techniques to reduce significant volumes of logs and match them in near real time. We will cover techniques for analyzing database issues using machine learning methods such as k-means, TF-IDF, random forests, and z-scores to predict whether a spike in CPU is normal or abnormal. We will also talk about RNNs with LSTM/GRU cells as applications for predicting faults before they happen. Other use cases include using convolution filters to determine maintenance windows within database workloads, determining the best times to do database backups, building security anomaly timelines, and many others. This is a production service, and it can be used if you have a customer SR/defect today; the service is much more extensive inside the Oracle Autonomous Database Cloud. The presentation will include several examples of how to apply these techniques. Machine learning knowledge is preferred but not a prerequisite.
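Of the techniques listed, the z-score check for CPU spikes is simple enough to sketch directly: score a new sample against recent history and flag it when it sits too many standard deviations from the mean. The threshold of 3 below is a common convention, not Oracle's actual setting.

```python
from statistics import mean, stdev

def is_abnormal_spike(history, value, threshold=3.0):
    """Flag `value` as abnormal if its z-score against `history`
    exceeds `threshold` standard deviations."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return value != mu  # flat history: any deviation is abnormal
    return abs(value - mu) / sigma > threshold
```

In practice the history window slides, and seasonal workloads need a baseline per time-of-day, which is where the talk's more advanced techniques come in.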
Advanced Analytics and Machine Learning with Data Virtualization (India) (Denodo)
Watch full webinar here: https://bit.ly/3dMN503
Advanced data science techniques, like machine learning, have proven to be an extremely useful tool for deriving valuable insights from existing data. Platforms like Spark and complex libraries for R, Python, and Scala put advanced techniques at the fingertips of data scientists. However, these data scientists spend most of their time looking for the right data and massaging it into a usable format. Data virtualization offers a new alternative to address these issues in a more efficient and agile way.
Watch this session to learn how companies can use data virtualization to:
- Create a logical architecture to make all enterprise data available for advanced analytics exercises
- Accelerate data acquisition and massaging, providing the data scientist with a powerful tool to complement their practice
- Integrate popular tools from the data science ecosystem: Spark, Python, Zeppelin, Jupyter, etc.
Cloud and Analytics - From Platforms to an Ecosystem (Databricks)
Zurich North America is one of the largest providers of insurance solutions and services in the world with customers representing a wide range of industries from agriculture to construction and more than 90 percent of the Fortune 500.
CSC - Presentation at Hortonworks Booth - Strata 2014 (Hortonworks)
Come hear about how companies are kick-starting their big data projects without having to find good people, hire them, and get IT to prioritize the work to get the project off the ground. Remove risk from your project, ensure scalability, and pay for just the nodes you use in a monthly utility pricing model. Worried about data governance or security? Want it in the cloud, or can't have it in the cloud? Eliminate the hurdles with a fully managed service backed by CSC. Get your modern data architecture up and running in as little as 30 days with the Big Data Platform as a Service offering from CSC. Computer Sciences Corporation is a Certified Technology Partner of Hortonworks and a global system integrator with over 80,000 employees worldwide.
Using standards, open-source and advances in technology to bring down soft co... (Infiswift Solutions)
A look at some of the standards and open source tech that can help solar PV plants bring down their soft costs. This presentation also looks at how the internet of things (IoT) can utilize these tools to offer next generation services to the solar PV industry.
Quicker Insights and Sustainable Business Agility Powered By Data Virtualizat... (Denodo)
Watch full webinar here: https://bit.ly/3xj6fnm
Presented at Chief Data Officer Live 2021 A/NZ
The world is changing faster than ever, and for companies to compete and succeed, they need to be agile in order to respond quickly to market changes and emerging opportunities. Data plays an integral role in achieving this business agility. However, given the complex nature of enterprise data architecture, finding and analysing data is an increasingly challenging task. Data virtualization is a modern data integration technique that integrates data in real-time, without having to physically replicate it.
Watch on-demand this session to understand what data virtualization is and how it:
- Delivers data in real-time, and without replication
- Creates a logical architecture to provide a single view of truth
- Centralises the data governance and security framework
- Democratises data for faster decision making and business agility
Big Data Made Easy: A Simple, Scalable Solution for Getting Started with Hadoop (Precisely)
With so many new, evolving frameworks, tools, and languages, a new big data project can lead to confusion and unwarranted risk.
Many organizations have found Data Warehouse Optimization with Hadoop to be a good starting point on their Big Data journey. Offloading ETL workloads from the enterprise data warehouse (EDW) into Hadoop is a well-defined use case that produces tangible results for driving more insights while lowering costs. You gain significant business agility, avoid costly EDW upgrades, and free up EDW capacity for faster queries. This quick win builds credibility and generates savings to reinvest in more Big Data projects.
A proven reference architecture that includes everything you need in a turnkey solution – the Hadoop distribution, data integration software, servers, networking and services – makes it even easier to get started.
Think of big data as all data, no matter what the volume, velocity, or variety. The simple truth is a traditional on-prem data warehouse will not handle big data. So what is Microsoft’s strategy for building a big data solution? And why is it best to have this solution in the cloud? That is what this presentation will cover. Be prepared to discover all the various Microsoft technologies and products from collecting data, transforming it, storing it, to visualizing it. My goal is to help you not only understand each product but understand how they all fit together, so you can be the hero who builds your companies big data solution.
How Data Virtualization Puts Enterprise Machine Learning Programs into Produc...Denodo
Watch full webinar here: https://bit.ly/3offv7G
Presented at AI Live APAC
Advanced data science techniques, like machine learning, have proven an extremely useful tool to derive valuable insights from existing data. Platforms like Spark, and complex libraries for R, Python and Scala put advanced techniques at the fingertips of the data scientists. However, these data scientists spend most of their time looking for the right data and massaging it into a usable format. Data virtualization offers a new alternative to address these issues in a more efficient and agile way.
Watch this on-demand session to learn how companies can use data virtualization to:
- Create a logical architecture to make all enterprise data available for advanced analytics exercise
- Accelerate data acquisition and massaging, providing the data scientist with a powerful tool to complement their practice
- Integrate popular tools from the data science ecosystem: Spark, Python, Zeppelin, Jupyter, etc.
DevOps Spain 2019 - Olivier Perard, Oracle
2. DataOps
Definitions

VP Technology Strategy, MapR:
DataOps is an agile methodology for developing and deploying data-intensive applications, including data science and machine learning. A DataOps workflow supports cross-functional collaboration and fast time to value.

Gartner (http://www.gartner.com/it-glossary/data-ops/):
A hub for collecting and distributing data, with a mandate to provide controlled access to systems of record for customer and marketing performance data, while protecting privacy, usage restrictions, and data integrity.

Tamr CEO Andy Palmer:
DataOps is an enterprise collaboration framework that aligns data-management objectives with data-consumption ideals to maximize data-derived value.

Nexla CEO:
DataOps is the function within an organization that controls the data journey from source to value.
3. DataOps
Gartner
Data & Analytics Summit 2018
DataOps, the private-cloud database platform as a service (dbPaaS), and machine-learning-enabled data management.
"DataOps is a new practice without standards or frameworks."
Nick Heudecker, VP of Research, Gartner
5. DataOps
Brings Flexibility & Focus
Expands DevOps to include data-heavy roles
Organized around data-related goals
Better collaboration and communication between roles
6. DataOps
An Agile Methodology for Data-Driven Organizations

AXIOMS:
• Continuous model deployment
• Promote repeatability
• Promote productivity -- focus on core competencies
• Promote agility
• Promote self-service

DataOps Goals:
• Data is central to disruptive enterprise applications
  • Lightweight, stateless functions do not represent the majority of workloads
• Data science and machine learning are an important paradigm
  • Scientists become active users -- no longer just application developers
  • Iterative workflow with different data usage patterns
• Data volumes continue to grow
• Moving data is a performance bottleneck
7. DataOps
[Pipeline diagram: Connect and Integrate → Store and Process → Analyze and Visualize, spanning data platform, data stream, and data analytics]
• Connect and Integrate: data available from structured and unstructured sources
• Store and Process: data lakes, data marts / warehouses, sandboxes, varying data types
• Analyze and Visualize: quick and actionable business insights; focus on algorithms, not infrastructure
8. Data Science Platforms
[Ecosystem diagram] Data science platforms sit alongside cloud providers, ETL & data engineering, vertical applications, BI & visualization tools, security, infrastructure, libraries, tools, and data platforms.
9. DataOps
Approach Advantages
• Data self-service: data scientists need to develop use cases quickly using the enterprise's data, without restrictions from IT.
• Improved efficiency and better use of the team's time: deploy an analytics platform in one click.
• Faster time to value.
• Improved productivity: implement use cases in parallel using the same data, but with a dedicated platform for each analytics team.
[Diagram: shared storage and compute underneath per-team data science platforms, libraries, and tools]
10. DataOps
Continuous Model Deployment
Key building blocks for agility:
• Unified data platform
• Data governance
• Self-service data and compute access
• Multitenancy and resource management
Lifecycle: Data Engineering → Model Development → Model Management → Model Deployment → Model Monitoring & Rescoring
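As a minimal sketch of the last lifecycle stage above (model monitoring & rescoring), the core logic is: compare each deployed model's live metric against the baseline recorded at deployment time and flag degraded models so they re-enter the pipeline. The function names, registry shape, and threshold here are illustrative assumptions, not part of any specific DataOps product.

```python
# Minimal sketch of the "model monitoring & rescoring" stage: compare the
# live accuracy of each deployed model against its deployment-time baseline
# and flag it for retraining when it degrades. All names are illustrative.

def needs_rescoring(baseline_accuracy, live_accuracy, tolerance=0.05):
    """Flag a model whose live accuracy drops more than `tolerance`
    below the baseline measured at deployment time."""
    return (baseline_accuracy - live_accuracy) > tolerance

def monitor(model_registry, live_metrics, tolerance=0.05):
    """Return names of registered models that should re-enter the
    lifecycle (retrain -> manage -> redeploy)."""
    flagged = []
    for name, baseline in model_registry.items():
        live = live_metrics.get(name, baseline)
        if needs_rescoring(baseline, live, tolerance):
            flagged.append(name)
    return flagged

registry = {"churn": 0.91, "fraud": 0.97}
live = {"churn": 0.84, "fraud": 0.96}
print(monitor(registry, live))  # "churn" degraded by 0.07 > 0.05
```

In a real deployment the registry and live metrics would come from a model catalog and a monitoring service rather than in-memory dicts.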
13. DataOps
Data-Driven Architecture (source: Oracle Insight)
[Architecture diagram] Traditional and modern data sources (legacy, custom, mainframe, SaaS, microservices, …) feed a modern data platform with security & compliance, which serves data applications:
• Analytics: advanced analytics, self-service, predictive
• Data science: machine learning, deep learning
• Real-time analytics and services: real-time marketing, fraud detection, executive dashboarding
The platform (including Big Data SQL and SparklineData) accesses multiple sources of data (technologies, silos/locations, clouds) with high performance, enabling broader cross- and multi-model queries and algorithms on real-time as well as historical data.
14. DataOps
Cloud Native & Open Source Community
[Slide graphic] Cloud-native and community-driven innovation: artificial intelligence, blockchain, Internet of Things; container-native microservices, open serverless computing, DevOps; open source cloud-native innovation and development with projects such as Prometheus and Istio; open source, managed, and autonomous cloud-native services.
15. DataOps
Data Stream
[Pipeline diagram on Oracle Cloud Infrastructure] Data preparation, data replication, ETL, and logs flow through data integration (CDC / ETL) into the data platform, serving analytics consumers (BI, NL / AI).
Data preparation steps: Discovering → Structuring → Cleaning → Enriching → Validating → Deploying
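The preparation steps above (structuring, cleaning, enriching, validating) can be sketched as a small composable pipeline over plain records. The field names and rules below are hypothetical examples, not taken from the Oracle slide.

```python
# Illustrative sketch of a data preparation pipeline: structure raw input
# into a schema, clean values, enrich with a derived field, then validate.
# Field names ("id", "amt", "amount", "high_value") are made up for the demo.

def structure(record):
    # Structuring: map raw fields onto a fixed schema.
    return {"id": record.get("id"), "amount": record.get("amt")}

def clean(record):
    # Cleaning: coerce types, turning unparseable values into None.
    try:
        record["amount"] = float(record["amount"])
    except (TypeError, ValueError):
        record["amount"] = None
    return record

def enrich(record):
    # Enriching: derive an extra attribute from existing data.
    record["high_value"] = record["amount"] is not None and record["amount"] > 100
    return record

def validate(record):
    # Validating: keep only records satisfying the schema contract.
    return record["id"] is not None and record["amount"] is not None

def prepare(raw_records):
    prepared = []
    for raw in raw_records:
        rec = enrich(clean(structure(raw)))
        if validate(rec):
            prepared.append(rec)
    return prepared

raw = [{"id": 1, "amt": "250.0"}, {"id": 2, "amt": "n/a"}, {"amt": "10"}]
print(prepare(raw))  # only the first record survives cleaning + validation
```

A production pipeline would typically express the same stages in a data integration tool or a dataframe library, but the stage boundaries stay the same.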
17. Oracle Data Science
Data science requires a comprehensive platform to simplify operations and deliver value at scale:
• Accelerate use of proper tools, frameworks, and infrastructure
• Overcome restricted skill sets with a simple, collaborative platform
• Quickly leverage predictive analytics to drive positive business outcomes
A robust, easy-to-use data science platform removes barriers to deploying valuable machine learning models in production: collaborate securely, work in standardized environments, manage data and tools, and power the business.
18. Oracle Data Science
Project Lifecycle
• Data exploration: collaborative data analysis / feature engineering
• Model build and train: with open source frameworks
• Reproducibility: data versioning, code versioning, model versioning, environment management
• Model deployment: operationalize models as scalable APIs
• Model management: monitor and optimize model performance

Collaborators: data scientists, business stakeholders, app developers, IT admins
• Business analyst/leader: defines the business problem and the objective of the analyses
• Data engineer: prepares data, builds pipelines, and provides data access for analytical or operational uses
• IT admin: oversees the underlying process, architecture, operations, and resource constraints
• Data scientist: analyzes data using statistical methods and coding languages like Python, R, and Scala
• Application developer: deploys data science models into applications; builds data products
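The reproducibility bullet above (data, code, model, and environment versioning) boils down to being able to identify exactly which data, code, and environment produced a given model. A minimal sketch, assuming nothing about Oracle's actual implementation, is to fingerprint all three together:

```python
# Sketch of reproducibility via versioning: identify each training run by a
# fingerprint of its data, code, and environment, so a model version can be
# traced back to its exact inputs. Purely illustrative, not an Oracle API.
import hashlib
import json

def fingerprint(data_bytes, code_text, environment):
    """Combine data, code, and environment into one stable version id."""
    h = hashlib.sha256()
    h.update(data_bytes)
    h.update(code_text.encode("utf-8"))
    # Sort keys so the same environment always hashes identically.
    h.update(json.dumps(environment, sort_keys=True).encode("utf-8"))
    return h.hexdigest()[:12]

env = {"python": "3.11", "sklearn": "1.4"}
v1 = fingerprint(b"training-set-v1", "model = fit(X, y)", env)
v2 = fingerprint(b"training-set-v2", "model = fit(X, y)", env)
# Changing the data (or the code, or the environment) changes the version;
# identical inputs always reproduce the same version id.
print(v1, v2)
```

Real platforms store these identifiers in a model catalog alongside the artifacts, so any deployed model can be rebuilt from its recorded inputs.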
19. Oracle Data Science
Modules
[Platform diagram] Oracle Data Science Cloud, on Oracle PaaS & IaaS: projects, notebooks, open source languages & libraries, version control, use case templates, model build & train, model deployment, model monitoring, and access controls & security, running on self-service scalable compute (OCI) with object store, catalog, data lake, streaming, and Autonomous Data Warehouse.
• Collaborative: a project-driven UI enables teams to easily work together on end-to-end modeling workflows, with self-service access to data and resources
• Integrated: support for the latest open source tools, version control, and tight integration with OCI and the Oracle Big Data Platform
• Enterprise-grade: a fully managed platform built to meet the needs of the modern enterprise
21. Oracle Data Science
Configure, Train & Deploy (Oracle PaaS)
[Workflow diagram] Three steps, each mapped to a persona:
1. Configure (IT persona / DevOps): easy, autonomous setup; auto scale and updates; HS network and storage; easy data access to object stores, database cloud services, and Spark
2. Train (data scientist): easy development; data select and definition, code in notebooks, model train and test; frameworks, AI libraries, samples, GPU clusters, connect to data
3. Deploy (data scientist): easy deployment; publish the model as an API; model sharing, model library, APIs, model analytics
Use cases: language, image, video, HR, emotion
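Step 3 above, publishing a model as an API, can be sketched with nothing but the Python standard library: wrap a predict function in a WSGI app that accepts JSON features and returns a JSON score. The stand-in model and field names are assumptions for illustration; a real deployment would load a trained model artifact and run behind a managed serving endpoint.

```python
# Minimal sketch of "publish the model as an API": a WSGI app wrapping a
# predict function, using only the standard library. The linear scorer is
# a stand-in for a real trained model.
import json

def predict(features):
    # Stand-in model: a hand-written linear scorer over one feature "x".
    return {"score": 0.5 * features.get("x", 0.0) + 0.1}

def app(environ, start_response):
    # Read the JSON request body and return the prediction as JSON.
    size = int(environ.get("CONTENT_LENGTH") or 0)
    features = json.loads(environ["wsgi.input"].read(size) or b"{}")
    body = json.dumps(predict(features)).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

# To serve locally:
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, app).serve_forever()
```

Because the handler is a plain WSGI callable, the same app can be mounted on any WSGI server without code changes.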
24. DataOps
Conclusions
Oracle provides, all in one, a next-generation platform for all data: complete, integrated, and open, with AI and machine learning.
• Multi-model data access
• Interoperability
• Data preparation and pipelines
• Automation
• Elasticity
• Multidimensional agility
• Automated governance