Although the term "DevOps" was coined only recently, I've been one of the lucky ones who has been able to work the way this culture suggests for 20 years. Back then there was no PowerShell, internet connectivity was poor (at least in Italy), and there were no automation tools. Even so, my team understood that mindset before it became mainstream. Over my career I have worked through many scenarios in different businesses and learned many lessons. Straight to the point: the real problem is changing ourselves. In this session we will try to answer the following questions:
As a legacy DBA, how do we change the way we work? How do we unlearn our bad habits? How do we take advantage of our experience and awareness? Just my two cents; hopefully interesting.
Microsoft's Principal Cloud Advocate & DevOps Lead Abel Wang and Redgate's Steve Jones cover:
- What is DevOps?
- How to explain the value of DevOps to both leadership and engineers
- Tips for advocating for DevOps as part of your 2020 planning
- How other organizations have had success implementing DevOps
- Lessons learned from Microsoft's DevOps transformation
Managing Databases In A DevOps Environment (2016), Robert Treat
Given at #pgdayphilly2016, this talk covers how configuration management, monitoring, and rapid deployments are impacting how we think about database management.
As BI professionals we are saddled with multiple issues pertaining to Copy Data.
We will discuss what 'Copy Data' is along with various terminologies that go with it.
The common issues are:
1. Space
2. Network Bandwidth
3. Time
4. Security
5. Obfuscation/Masking
6. Which Server does this go to?
7. Onward protection of Copy Data
In this session we will study the issues above and see how we can avoid them, and we will examine which technologies and products are available to help us mitigate such a massive problem.
Peter Marshall, Technology Evangelist at Imply
Abstract: Apache Druid® can revolutionise business decision-making with a view of the freshest of fresh data in web, mobile, desktop, and data science notebooks. In this talk, we look at key activities to integrate into Apache Druid POCs, discussing common hurdles and signposting to important information.
Bio: Peter Marshall (https://petermarshall.io) is an Apache Druid Technology Evangelist at Imply (http://imply.io/), a company founded by original developers of Apache Druid. He has 20 years architecture experience in CRM, EDRM, ERP, EIP, Digital Services, Security, BI, Analytics, and MDM. He is TOGAF certified and has a BA degree in Theology and Computer Studies from the University of Birmingham in the United Kingdom.
Optimizing Your Database Performance | Embarcadero Technologies, Michael Findling
In complex enterprise environments standards to keep databases running at peak performance fall short, especially when multiple types of databases are present. Greg Keller, chief evangelist for DatabaseGear Products at Embarcadero Technologies explains why database performance is important to the business, and describes new solutions that keep data environments running at peak performance.
Wouldn’t it be great to remove the “it works on my machine” scenario? Don’t you have better things to do with your time than manually configure systems? In this live, hands-on demonstration Matt will introduce you to the concepts of Infrastructure as Code and automation; show you how to use Chef to develop and test system configurations locally, and then deploy them to a production environment in Microsoft Azure.
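The demo uses Chef, whose recipes are written in a Ruby DSL; as a rough, language-neutral sketch of the core idea behind Infrastructure as Code (declare desired state, converge idempotently), here is a hypothetical Python illustration, not Chef's actual API:

```python
import os
import tempfile

def converge(desired_files):
    """Bring files to the declared desired state; return what changed.

    Running the same converge twice changes nothing the second time:
    the idempotency that configuration management tools rely on.
    """
    changed = []
    for path, content in desired_files.items():
        current = None
        if os.path.exists(path):
            with open(path) as f:
                current = f.read()
        if current != content:  # only act when actual state differs from desired
            with open(path, "w") as f:
                f.write(content)
            changed.append(path)
    return changed

# Demo against a throwaway directory
root = tempfile.mkdtemp()
state = {os.path.join(root, "app.conf"): "port=8080\n"}
print(converge(state))  # first run creates the file
print(converge(state))  # second run: already converged, prints []
```

The same declared state can then be pointed at a local test VM or a production host; only the drift between actual and desired state triggers work.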
The web is now visible everywhere. It's high time to learn web development! Find out why it's a great branch of IT, what it's made of, and what tasks await today's web developers.
Learn HTML, JS, and CSS from the basics. Don't rely on HTML courses written 10 years ago.
Want to do backend work but still wondering whether to choose PHP, Ruby, Python, or Node.js? No fear! We'll show the pros and cons of each language and also give a short guide on how to learn them quickly.
Original presentation: http://akai.org.pl/slides/webstarter/
Database Architechs is a database-focused consulting firm employing the world's top database experts and providing a wide variety of database and data services.
www.dbarchitechs.com
A database performance monitoring tool for Microsoft SQL Server 2005 & 2008 (featured in the best-selling book "SQL Server 2008 R2 Unleashed"), Sybase ASE 11.5 to 15.5, and Oracle 8i to 11g.
The Key to Effective Analytics: Fast-Returning Queries, Eric Kavanagh
The best business analysts understand the value of having a "conversation" with their data. The idea is that they can pose queries, examine results, then quickly modify their questions to home in on a desired answer. This kind of iterative process creates a fluid environment that is highly conducive for identifying meaningful patterns in data. Register for this episode of Hot Technologies to hear Bloor Group Chief Analyst Dr. Robin Bloor and Data Scientist Dez Blanchfield as they outline why fluid analytics should be the norm and which hurdles still stand in the way. They'll be briefed by Bullett Manale of IDERA who will demonstrate his company's diagnostic platform for analytics. He'll provide context, and also deliver a demo that shows real-world solutions that enable iterative analytics.
Horses for Courses: Database Roundtable, Eric Kavanagh
The blessing and curse of today's database market? So many choices! While relational databases still dominate the day-to-day business, a host of alternatives has evolved around very specific use cases: graph, document, NoSQL, hybrid (HTAP), column store, the list goes on. And the database tools market is teeming with activity as well. Register for this special Research Webcast to hear Dr. Robin Bloor share his early findings about the evolving database market. He'll be joined by Steve Sarsfield of HPE Vertica, and Robert Reeves of Datical in a roundtable discussion with Bloor Group CEO Eric Kavanagh. Send any questions to info@insideanalysis.com, or tweet with #DBSurvival.
Best Practices for Building and Deploying Data Pipelines in Apache Spark, Databricks
Many data pipelines share common characteristics and are often built in similar but bespoke ways, even within a single organisation. In this talk, we will outline the key considerations which need to be applied when building data pipelines, such as performance, idempotency, reproducibility, and tackling the small file problem. We’ll work towards describing a common Data Engineering toolkit which separates these concerns from business logic code, allowing non-Data-Engineers (e.g. Business Analysts and Data Scientists) to define data pipelines without worrying about the nitty-gritty production considerations.
We’ll then introduce an implementation of such a toolkit in the form of Waimak, our open-source library for Apache Spark (https://github.com/CoxAutomotiveDataSolutions/waimak), which has massively shortened our route from prototype to production. Finally, we’ll define new approaches and best practices about what we believe is the most overlooked aspect of Data Engineering: deploying data pipelines.
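The idempotency the abstract calls out can be sketched independently of Spark. In this hypothetical Python sketch (my illustration, not Waimak's actual API), a deterministic run key lets a pipeline skip work it has already completed, so re-running after a partial failure cannot double-process data:

```python
import hashlib
import json

def run_key(source, date, logic_version):
    """Deterministic id for one pipeline run: same inputs give the same key."""
    raw = json.dumps([source, date, logic_version], sort_keys=True)
    return hashlib.sha256(raw.encode()).hexdigest()[:12]

def run_pipeline(batch, completed, transform):
    """Run transform over a batch unless its key is already recorded."""
    key = run_key(batch["source"], batch["date"], batch["logic_version"])
    if key in completed:
        return None  # already processed: idempotent no-op
    result = transform(batch["rows"])
    completed.add(key)  # in production this would be a durable store
    return result

completed = set()
batch = {"source": "s3://sales", "date": "2016-05-01",
         "logic_version": "v1", "rows": [1, 2, 3]}
print(run_pipeline(batch, completed, lambda rows: [r * 2 for r in rows]))  # [2, 4, 6]
print(run_pipeline(batch, completed, lambda rows: [r * 2 for r in rows]))  # None
```

Including a logic version in the key also gives reproducibility a handle: changing the business logic produces a new key and a clean re-run rather than silently mixing outputs.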
Confoo-Montreal-2016: Controlling Your Environments using Infrastructure as Code, Steve Mercier
Slides from my talk at ConFoo Montreal, February 2016. A presentation on how to apply configuration management (CM) principles to your various environments, to control the changes made to them. You apply CM to your code, so why not to your environments' content? This presentation covers infrastructure-as-code principles using Chef and/or Ansible. Topics discussed include Continuous Integration, Continuous Delivery/Deployment principles, Infrastructure as Code, and DevOps.
Do you know what Copy Data is? Do you know how it consumes your life? You should know how 1 TB of database can translate into almost 2 PB. What if you had to restore these databases at the drop of a hat? This chat helps you do it.
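The jump from 1 TB to almost 2 PB comes from multiplication, not growth. With illustrative multipliers (mine, not the speaker's), the arithmetic might look like:

```python
tb = 1                     # one production database
environments = 8           # dev, test, QA, staging, training, DR, ...
copies_per_env = 4         # refreshes, ad-hoc restores, developer copies
backup_generations = 30    # daily fulls retained, per copy
replicas = 2               # HA / DR mirrors of each backup store

total_tb = tb * environments * copies_per_env * backup_generations * replicas
print(total_tb, "TB, i.e. about", round(total_tb / 1024, 2), "PB")  # 1920 TB, about 1.88 PB
```

Change any one multiplier and the total swings by whole petabytes, which is why deduplicated or virtualized copy-data provisioning pays off so quickly.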
Azure Weekly - 2015.01.20 - Marco Parenzan - Data Opportunities with Azure
The Azure Weekly runs every Tuesday from 12:30 – 14:00 (UK timezone) and is aimed at the techie who has not yet had much exposure to Azure but wants a leg-up to get started. This is a practically focused session rather than a theoretical/architectural one, mostly based around the following demos:
• Creating a Microsoft Azure WordPress website
• Creating a Microsoft Azure ASP.NET website
• Creating a Microsoft Azure Virtual Machine
• Creating a Microsoft Azure Mobile Service (with Android client)
• Creating a Microsoft Azure Cloud Service
• How to sign up for a free Microsoft Azure trial subscription
Azure is an opportunity because it offers programmers many possibilities beyond "just" scaffolding a SQL relational database. In our guest presentation we will show how an MVC Web App that starts out with a SQL backend can be enhanced with Azure.
In a world where the term "smart" is everywhere and the pairing "smart working" is overused, it is better to focus on the true meaning of the term: autonomy, responsibility, trust, and flexibility, together with supporting technological tools.
Sql Wars - SQL the attack of the Clones and the rebellion of the Containers, Alessandro Alpi
How to solve three of the most tricky problems:
- isolating and repeating tests on production data without affecting production databases
- debugging and resolving bugs using production-like databases
- reviewing deploy scripts before executing them against production databases
It's a matter of provisioning.
Similar to Mvp4 croatia - Being a dba in a devops world
A presentation on HomeGen (CloudGen): using clones and containers to automate SQL Server DevOps processes, with VS Code and Azure DevOps.
Doaw2020 - From production to QA: provisioning on SQL Server, Alessandro Alpi
In this session we will see how to bring data into QA environments directly from production, avoiding every problem known today: space, time, number of copies, isolation, and so on. A long-standing problem that can finally be solved with a few clicks.
Wpc2019 - Destroying DevOps, the story of a real team, Alessandro Alpi
Advice on avoiding the destruction of the cultural migration towards DevOps. We will watch a team with important "actors" attempt to migrate towards good practices, and we will see how hard it is to get there, and how easy it is to destroy everything.
Sql start! 2019 - Improving productivity for SQL Server development, Alessandro Alpi
SQL Server is not just a world, it's a universe, rich in features, architectures, and technologies. It can be intimidating, and it can be hard to move around inside it with the necessary fluidity. In this session we will look at which plugins and tools are available to speed up development on SQL Server, from Visual Studio Code to Management Studio, from SQL Operations Studio to the Redgate tools. Improving the management and writing of code, with particular attention to sharing and team working, in a world where DevOps reigns supreme.
Configuration and change management with the Disciplined Agile Framework, Alessandro Alpi
How to manage changes and configuration management using the Disciplined Agile Framework for DevOps (classic and prescriptive vs automated and iterative solutions). Software Configuration Management (SCM) summit: http://www.snescm.org/Common/Italian-chapter/Summits/2018/index.html
In this session we will cover the worst practices to follow if you want to shatter every dream of achieving DevOps. They are the worst and yet very common practices and, for that very reason, extremely dangerous: from disregarding the principles, to excessive customization, to taking every good practice to extremes. It doesn't take much, and temptation is always around the corner.
Automating the database linking process with Redgate Source Control, Alessandro Alpi
For those used to working in highly distributed environments, where databases are numerous, and for those who use Red Gate Source Control with VSTS (git or TFS), the many manual linking operations can become burdensome. With PowerShell, it is possible to consume the APIs of the RedGate DLM Automation tool to speed up the process of linking our databases. After all, repeatability and automation lie at the heart of DevOps. This is what led our team to cut human error to the bone and to speed up the provisioning of our distributed solutions.
Sql saturday parma 2017 (#sqlsat675) - Deep Space Cosmos DB, Alessandro Alpi
Azure Cosmos DB is a globally distributed database service designed to let you elastically and independently scale throughput and storage across any number of geographical regions, with a comprehensive SLA. In this session we will discover how Cosmos DB works and what key features enable you to become polyglot in persistence. A single "database" for multiple models.
Sql Saturday Pordenone - SQL Server journey, from dev to ops, Alessandro Alpi
DevOps and SQL Server: the importance of automating repeatable processes, and of collaborating, sharing, and integrating, to make our database deployment processes faster and more reliable.
PASS Virtual Chapter - SQL Server Continuous Integration, Alessandro Alpi
Automated builds, unit test execution, creation of a NuGet package: here is what it takes to be ready for continuous integration with SQL Server.
PASS Virtual Chapter - SQL Server Continuous Deployment, Alessandro Alpi
Having seen how to do continuous integration, we complete our database's lifecycle by deploying it, bringing in concepts such as DevOps and process automation.
DevOpsHeroes 2016 - Implementing Continuous Integration with SQL Server and Visua..., Alessandro Alpi
In this slide deck we will see how to create build steps on Visual Studio Team Services using the add-ons provided by Red Gate, such as DLM Automation 2: Build.
PASS Virtual Chapter - Unit Testing on SQL Server, Alessandro Alpi
In what follows we will dig into the concept of unit testing and, specifically, into testing with the free tSQLt framework, using T-SQL and SQL Server Management Studio.
#DOAW16 - DevOps@work Roma 2016 - Testing your databases, Alessandro Alpi
In these slides we will speak about how to unit test our programmability in SQL Server and how to move from a manual process to an automated one in order to achieve the goals of DevOps
#DOAW16 - DevOps@work Roma 2016 - Databases under source control, Alessandro Alpi
In these slides we will discuss how we can put our databases under source control: what the types of source control models and links are and, last but not least, how to move from a manual process to an automated one, in order to achieve the goals of DevOps.
SQL Server 2016 will bring a great many new features including, on the programmability side, support for the JSON format. We will see how the results of our queries can be serialized with the FOR JSON clause, included in the latest releases of the platform.
In this presentation we'll learn about the native JSON support in SQL Server 2016. We will speak about Import/Export features, storage considerations and advantages/limitations on using this format in SQL Server.
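As a hedged illustration of the output shape (produced here with Python's json module, not by SQL Server), a query such as `SELECT OrderId, Customer, Total FROM dbo.Orders FOR JSON PATH` returns its rows as a single JSON string along these lines:

```python
import json

# The rows a query would return, as a client sees them
rows = [
    {"OrderId": 1, "Customer": "Contoso", "Total": 250.0},
    {"OrderId": 2, "Customer": "Fabrikam", "Total": 99.9},
]

# FOR JSON PATH wraps each row in an object and the result set in an array
for_json_path = json.dumps(rows, separators=(",", ":"))
print(for_json_path)
# [{"OrderId":1,"Customer":"Contoso","Total":250.0},{"OrderId":2,"Customer":"Fabrikam","Total":99.9}]
```

This round-trips cleanly: a client can parse the string straight back into row objects, which is what makes the feature handy for import/export scenarios.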
JMeter webinar - integration with InfluxDB and Grafana, RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
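JMeter typically ships these metrics to InfluxDB through its Backend Listener, and the wire format involved is InfluxDB's line protocol. A simplified sketch of that formatting (illustrative names; escaping of spaces and commas is omitted):

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Format one metric point as InfluxDB line protocol:
    measurement,tag=value field=value timestamp"""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

# One aggregated JMeter sample: average response time and count for a transaction
line = to_line_protocol(
    "jmeter", {"transaction": "login"}, {"avg": 42.0, "count": 10},
    1700000000000000000,
)
print(line)  # jmeter,transaction=login avg=42.0,count=10 1700000000000000000
```

Lines in this shape are what Grafana queries out of InfluxDB to draw the real-time dashboards shown in the webinar.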
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
GraphRAG is All You Need? LLM & Knowledge Graph, Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Elevating Tactical DDD Patterns Through Object Calisthenics, Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
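As a concrete taste of one such constraint, the Object Calisthenics rule "wrap all primitives and strings" maps naturally onto DDD value objects. This hypothetical Python sketch (my example, not from the talk) moves a price out of a bare float and into a value object that owns its invariants:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Price:
    """Value object: immutable, equality by value, invariants enforced here."""
    amount: float
    currency: str

    def __post_init__(self):
        if self.amount < 0:
            raise ValueError("a price cannot be negative")

    def add(self, other: "Price") -> "Price":
        # The domain rule lives with the concept, not scattered across callers
        if other.currency != self.currency:
            raise ValueError("cannot add prices in different currencies")
        return Price(self.amount + other.amount, self.currency)

total = Price(9.5, "EUR").add(Price(0.5, "EUR"))
print(total)  # Price(amount=10.0, currency='EUR')
```

The constraint is "mechanical" in exactly the sense the abstract describes: once primitives are wrapped, invalid states become hard to construct and the domain model explains itself.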
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf, 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
The Art of the Pitch: WordPress Relationships and Sales, Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024, Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti..., Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Generating a custom Ruby SDK for your web service or Rails API using Smithy, g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview, Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
UiPath Test Automation using UiPath Test Suite series, part 3, DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
- UI automation introduction
- UI automation sample
- Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: Optimizing FME Workflows with Parameters, Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Key Trends Shaping the Future of Infrastructure, by Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Neuro-symbolic is not enough, we need neuro-*semantic*, by Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
2. My story so far
Started out in 1999, yes I’m old
Developer with FoxPro, Java, ASP, .Net until 2003 and SQL Server
Many years with SQL Server, DBA
Worked also with other DBMS, like Oracle, IBM DB2, NoSQL and so on
Mindset: always “DevOps”
I’ve changed many times.
3. The (former) DBA
Hardware knowledge
Networking knowledge
O.S./software knowledge
Oh, I almost forgot this one: Database management systems knowledge!
Ok, what is a DBA?
5. From administrators…
Working often on production systems
Acting as an Operation guy
Ignoring the dev’s work (how to)
Ignoring what happens “before us”
Keeping distances from other depts
6. …to engineers
Working together with operations and development teams
Acting as an engineer, with system thinking early in the project
Breaking down the barriers with development
Automating the manual tasks
Being committed from the start of the project
No “one man band” and no “hero syndrome”
Database reliability engineering (DBRE)
7. From gatekeepers…
Everything is mission-critical, because it’s data
Stop any release on mission-critical targets (everything)
Let’s centralize on us all the rules and validations
As a result: release flow slowed down
8. …to facilitators
Try to make everything easier and clear the obstacles
Delegate work to trusted people and build trust within the team
Let’s release more frequently, so the releases will be less risky
As a result: continuous delivery, and automated tasks you can simply forget about
10. My two cents
Change your way of working from the beginning
Be involved in development and team management
Design the deployment with developers and operations
Be proactive in monitoring and committed to the solution you deliver
Share your thoughts; they can be useful for everyone
Take advantage of collaboration tools
Participate in the software lifecycle
11. My two cents
Understand the business value
Understand customer satisfaction
Make a trusted workflow
Reduce time wasted on data provisioning
Consider a set of tools for generating new instances (dbatools rulez!)
Consider the right metrics to measure in monitoring
Thank you for participating in this event. Donations will be used to help rebuild schools, homes, and lives of people who were badly affected by the earthquakes in Croatia.
28.-29.12.2020.
Updates on donations: https://gogetfunding.com/sisakpetrinjastrasnik-earthquake-relief/
More info & organizer contact: https://mvps4croatia.com/
Editor's Notes
Hi everyone, I’m Alessandro Alpi, a Microsoft Data Platform MVP from Italy since 2008. In this “small talk”, I’m gonna tell you how the role of the DBA has changed and is still changing in a DevOps world, from my perspective. I’ll show you my professional development in a nutshell; then I’ll try to describe the cultural and operational changes. Finally, I’d like to share with you some advice to make the change less painful.
A DBA should know many things about networking, hardware, operating systems, and software in general. We can’t manage a SQL Server setup without understanding how the storage subsystem will react to our settings. At the same time, we can’t make improvements if we don’t know how the installed software works against our SQL Servers. By software I mean both home-made solutions and components and the tools installed server-side. Then, since SQL Server sends and receives data (which is actually a server-to-server request/response), we must understand protocols and measure the bandwidth we’re dealing with.
Finally, and most importantly, we must know everything about persistence, modelling, and in general how a DBMS works, especially under the hood. Keeping our platform always up and running is one of our main objectives.
A former DBA can be considered a specialist who acts as the «committer» of any delivery to production that involves data. The DBA should be more than this, but in my experience, many of them just check for updates, validate and execute scripts, grant permissions, and configure SQL Server instances. When required, they set up new instances, but in some cases this task is delegated to external vendors. Something should change.
DBAs used to work mostly on production systems, acting as “operation guys”. What does that mean?
First, many DBAs get the packages and then release them without knowing anything about their content.
This means they’re ignoring the “how”, focusing on the “what” and “when”. They must release with no trouble.
Unfortunately, silos get created around the DBAs. A silo leads to greater distance between development and database administrators.
An engineer is slightly different. An engineer benefits from working with both DBAs and operations and is focused on systems thinking, early in the project.
These professionals are always involved and share their ideas and thoughts with all the teams and tech departments.
They must follow the whole pipeline, from development to deployment, and after the deployment with monitoring tools (which is something DevOps focuses on, too). This is commitment.
No “heroes”, no people who can’t sleep or leave work for even a single day. No silos, no barriers between teams, strong commitment.
Another important thing is to transform every manual task into an automated one. Thus, we can reduce wasted time and the human error rate.
This professional is called a Database Reliability Engineer, and I think every DBA should move toward (or borrow from) this approach.
A former DBA is often a gatekeeper. This means that everything related to “data” is a mission-critical item.
Thus, no release can be done against databases, because they’re mission-critical, so the gatekeeper must stop the process and check the shape of the packages. Unfortunately, most of the time the DBA is not free for these reviews and validations, so stopping the release to avoid any error leads to a bottleneck.
The result is obvious: despite the good intentions, we’re slowing down the deployment. We’re breaking it; no deploy will happen soon. Additionally, the more time passes between two releases, the higher the probability of breaking the production databases, because overly large packages modify too many objects. More locks, more errors, more regressions.
A DBA should be a facilitator, since we would like to release more frequently. The less time between releases, the smaller the release packages. Quicker deployments, fewer problems, lower risk.
A facilitator doesn’t stop anything; they clear the obstacles instead, making everything easier. I’m not saying no checks or rules should be applied, but releasing the “locks” can reduce concurrency problems, to put it in database terms.
DBAs should gather trusted people and delegate many activities to them. This helps everyone be more “DevOps” and builds confidence and trust between people in a team and with other teams. Also, automation is not a problem anymore.
The target is to forget about the automated deployments: they just work! When you reach this goal, you can scream out loud “I’m DevOps”.
Changing ourselves is not simple; we all know this. Still, changing our habits step by step is something we can handle better. When I say “change your way of working from the beginning”, I mean that we can start rethinking our job. Every day, for many years, I’ve asked myself: “what can I do to improve our solution?”. This allows us to leave our legacy activities behind and lets us be proactive from the beginning of the project we’re working on. So we start speaking about “projects”, not just “deployments”; we start speaking in business concepts and working with a business glossary to understand better and better WHY we are doing something.
While dealing with developers, we can be proactive in helping them model objects and tune their queries (or the queries generated by ORMs), share our thoughts with them, and help them model the database avoiding regressions and applying backward-compatibility patterns. At the same time, we can learn from the way they work, which is no less important. Also, with developers and operations, we can start thinking about how we will deploy our entire solution to production (not just the database itself).
While dealing with continuous integration, testing, and deployments, we can take advantage of collaboration tools: Azure DevOps for build and release tasks; Slack, Zoom, or Teams for meetings and screen sharing; and so on. This helps us stay involved in the project at any time, both for requirements and for the pipelines (the operations part). Then, we should start using the DevOps glossary, with terms like “artifacts”, “packages”, “pipelines”, and so on.
Our job is not based on a “take a package and release it” pattern. So it’s important to understand the business value of everything we do, from tech stuff to features. Thanks to this, we can get an insight into customer satisfaction, as well as customer frustration when we make a change that breaks the environment. This metric is one of the most important, on both the business and the tech side. Working in enterprises should push us toward this approach, because nowadays we can’t just execute scripts, check policies, and handle permission tasks.
While working with development and operations teams, we can build a trusted workflow: a set of processes with trusted people, focused on creating a pipeline we can forget about, execution after execution. As we’ve already said, trust in automated tasks is gained step by step, not all at once.
When you need to help developers gather data, or you’d like to execute integration tests, consider building a solution that allows fast data provisioning, taking advantage of tools like Docker, Spawn, or similar. With these, you can quickly get data and environments for both developer sandboxes and test servers, especially in automation. For new customers, instances, or anything “brand new”, consider using open-source tools like dbatools.io, with which you can migrate, manage, create, and configure instances with a few lines of PowerShell.
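As a minimal sketch of the fast-provisioning idea, a throwaway SQL Server container can be scripted for a developer sandbox. The image tag, password, and port below are illustrative assumptions, not something prescribed in the talk:

```python
import subprocess

def sqlserver_sandbox_command(name, sa_password, host_port=14333):
    """Build a `docker run` command line for a throwaway SQL Server dev instance.

    ACCEPT_EULA and MSSQL_SA_PASSWORD follow the conventions of Microsoft's
    public mssql/server container image; adjust the tag and port as needed.
    """
    return [
        "docker", "run", "-d",
        "--name", name,
        "-e", "ACCEPT_EULA=Y",
        "-e", f"MSSQL_SA_PASSWORD={sa_password}",
        "-p", f"{host_port}:1433",                      # host:container port mapping
        "mcr.microsoft.com/mssql/server:2019-latest",   # illustrative image tag
    ]

def create_sandbox(name, sa_password, host_port=14333):
    """Actually start the container (requires Docker installed locally)."""
    return subprocess.run(sqlserver_sandbox_command(name, sa_password, host_port),
                          check=True)

if __name__ == "__main__":
    # Print the command instead of running it, for illustration.
    print(" ".join(sqlserver_sandbox_command("dev-sandbox", "S3cr3t!Passw0rd")))
```

The point is the pattern, not the tool: once spinning up an instance is one scripted call, the same call can run in a CI pipeline as easily as on a developer laptop.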
Finally, the commitment after releasing to production: monitoring. This is the simplest task for a DBA to consider. Monitoring is part of the foundation of the classic DBA role, so, hopefully, we will be ready as soon as we start to set up a monitoring tool. The difference is in the monitoring style. Installing, configuring, and checking a monitoring tool aren’t the only things we should do. With a DevOps approach, we want to make monitoring proactive. This means that something well designed will alert us before the problem occurs. How? By integrating the alerts with our collaboration tools, like Slack or Teams, with bots, for instance.
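One way to make that proactive alerting concrete is a tiny check-and-notify loop. This is a hedged sketch: the metric, threshold, server name, and webhook URL are assumptions for illustration, and the post uses the generic incoming-webhook pattern that both Slack and Teams support:

```python
import json
import urllib.request

def should_alert(free_space_pct, warn_threshold=15.0):
    """Proactive check: fire BEFORE the disk fills up, not after it has."""
    return free_space_pct < warn_threshold

def format_alert(server, free_space_pct):
    """Build the message the collaboration-tool bot will post."""
    return f":warning: {server}: only {free_space_pct:.1f}% free space left on the data drive"

def post_webhook(webhook_url, text):
    """POST the message as JSON to an incoming webhook endpoint."""
    body = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(webhook_url, data=body,
                                 headers={"Content-Type": "application/json"})
    return urllib.request.urlopen(req)

if __name__ == "__main__":
    # In practice these values would come from your monitoring tool's API.
    server, free_pct = "SQLPROD01", 12.3
    if should_alert(free_pct):
        msg = format_alert(server, free_pct)
        # post_webhook("https://hooks.slack.com/services/...", msg)  # hypothetical URL
        print(msg)
```

Keeping the threshold logic separate from the webhook call makes the alert rules unit-testable, which fits the "trusted workflow" idea above: you can prove the alert fires before trusting it in production.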