The shift to cloud computing means that organizations are developing scale-out infrastructure that can respond to the pace of business change faster than ever before. Opscode Chef® is an open-source systems integration framework built specifically for
automating the cloud by making it easy to deploy and scale servers and applications throughout your infrastructure. Join us for this session
containing an introduction to Chef including:
An Overview of Chef
The Chef Architecture
Cookbook Components
System Integration
Live demo launching a Java Stack on Amazon EC2, Rackspace, Ubuntu, and
CentOS
[Presented as part of the Open Source Build a Cloud program on 2/29/2012 - http://cloudstack.org/about-cloudstack/cloudstack-events.html?categoryid=6]
So, you know how to deploy your code, but what about your database? This talk will go through deploying your database with LiquiBase and DBDeploy, a non-framework-based approach to handling migrations of DDL and DML.
CONNECT is a storage engine for MariaDB. It allows you to use external, possibly remote data sources of several types. We can then query them as if they were local relational tables. In this presentation, Federico Razzoli demonstrates a couple of interesting things we can do with it. The talk took place at MariaDB Server Fest 2020.
Just about anyone can write a basic SQL query for a table. Not everyone can write a good query though - that takes practice and knowing how to understand what the optimizer is doing with the query. Learn the basics of query optimization so you keep your application engaging the user rather than showing the progress bar as they wait on the database.
Optimizing the queries you send to the database can greatly increase the database's performance, but what do you know about all those strange MySQL variables that can be played with to get even more power from the database? Join me as we go over some of the basics of the various MySQL settings you can tweak and massage to get the most out of your MySQL server.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... (UiPathCommunity)
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Key Trends Shaping the Future of Infrastructure.pdf (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud, and open source: exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I found myself wondering, as an “infrastructure container Kubernetes guy”, how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and provide a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it working from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working for real.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work, along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Table Filters 1 (following the DB options): Are there any replicate-*-table options? No: execute the UPDATE and exit. Yes: which logging format is in use? Statement: the remaining filters run for each statement that performs an update. Row: they run for each update of a table row.
Table Filters 2 (do/ignore): Are there any replicate-do-table options? If yes and the table matches any of them: execute the UPDATE and exit. Are there any replicate-ignore-table options? If yes and the table matches any of them: ignore the UPDATE and exit. Otherwise continue.
Table Filters 3 (wild do/wild ignore): Are there any replicate-wild-do-table options? If yes and the table matches any of them: execute the UPDATE and exit. Are there any replicate-wild-ignore-table options? If yes and the table matches any of them: ignore the UPDATE and exit. Otherwise continue.
Table Filters 4: Is there another table to be tested? If yes, start over with it. If no: are there any replicate-do-table or replicate-wild-do-table options? Yes: ignore the UPDATE and exit. No: execute the UPDATE and exit.
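The four filter stages above can be sketched in Python. This is a hypothetical illustration of the decision flow for a single table, not MySQL's actual implementation; `fnmatch` stands in for the server's wildcard matching.

```python
from fnmatch import fnmatch

def should_replicate(table, do=(), ignore=(), wild_do=(), wild_ignore=()):
    """Sketch of the replicate-*-table filter flow for one table name."""
    # Filters 1: no replicate-*-table options at all -> execute the update
    if not (do or ignore or wild_do or wild_ignore):
        return True
    # Filters 2: exact do/ignore lists
    if table in do:
        return True       # execute UPDATE and exit
    if table in ignore:
        return False      # ignore UPDATE and exit
    # Filters 3: wildcard do/ignore lists
    if any(fnmatch(table, pat) for pat in wild_do):
        return True
    if any(fnmatch(table, pat) for pat in wild_ignore):
        return False
    # Filters 4: nothing matched; if any "do" rules exist, ignore by default
    return not (do or wild_do)

# e.g. a replicate-wild-do-table=sales.% style rule
print(should_replicate("sales.orders", wild_do=("sales.*",)))  # True
print(should_replicate("hr.people", wild_do=("sales.*",)))     # False
```

The default at the end mirrors slide "Table Filters 4": once no table matched, the presence of any do-rule means the update is ignored.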
- Replication is the concept of taking data from one machine and copying it over to one or more separate machines. - Why would we want that? It can be used for a multitude of tasks, including as part of a foundation for building larger high-performance systems, keeping a “hot” spare of your server, providing a place to generate backups away from the production system, or providing a development area with real data.
Who knows what the binary log is? Who knows what the relay log is? Replication is based on: - the master server keeping track of all changes in its binary log. - the binary log serving as a written record of all events that modify database structure or content (data). - the relay log, a log kept on the slave that consists of the events read from the binary log of the master. - a one-way, asynchronous implementation. * Slaves pull the information from the master. * They do not have to be connected to the master all the time, so updates can occur over long-distance connections and even over temporary or intermittent connections such as a dial-up service. Not too bad, pretty easy to understand. But each step is actually multiple complex steps.
bullet 1, sub-bullet 1: Right before a transaction on the master that alters data commits... bullet 1, sub-bullet 1, sub-sub-bullet 2: even if the transactions are interwoven on the master during execution. - Can you see any problems with this? Potentially you could have a binary log entry written but never run on the master... How? (Answer: a server crash between the write to the binary log and the commit of the transaction. When the server comes back up, the transaction will be rolled back, even though it is already in the binary log. Potential to get the master and slave out of sync.)
The slave pulls the data from the master, rather than the master pushing the data to the slave. This will happen for each slave. bullet 1: The state of this thread is shown as Slave_IO_running in the output of SHOW SLAVE STATUS, or as Slave_running in the output of SHOW STATUS. bullet 3: the thread identified in the output of SHOW PROCESSLIST on the master as the Binlog Dump thread. It acquires a lock on the master's binary log for reading each event that is to be sent to the slave. As soon as the event has been read, the lock is released, even before the event is sent to the slave.
bullet 4: you need to know about this for security concerns. If it makes it into the relay log, it will happen. bullet 5: the master server can be writing to the binary log with N threads (parallel), but the slave has only the one thread to repeat all the commands done on the master (serial). The slave should be more powerful than the master: it will be doing everything the master does *and* its own workload.
also called logical replication bullet 1: replicates entire SQL statements bullet 2, sub-bullet 2: the SQL is written to the log, not all the rows changed and how they are changed bullet 2, sub-bullet 3: contains all statements that made any changes bullet 3, sub-bullet 1: Definition of deterministic: guaranteed output with a given input. Unfortunately there are quite a few nondeterministic statements. Examples: 1) DELETE and UPDATE statements that use a LIMIT clause without an ORDER BY 2) using any of the following functions: UUID(), UUID_SHORT(), USER(), FOUND_ROWS(), LOAD_FILE(), MASTER_POS_WAIT(), SLEEP(), VERSION(), etc. bullet 3, sub-bullet 2: Examples: INSERT ... SELECT requires a greater number of row-level locks; UPDATE statements that require a table scan (because no index is used in the WHERE clause) must lock a greater number of rows
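Why is DELETE ... LIMIT without an ORDER BY nondeterministic? A small hypothetical Python simulation (not MySQL itself) makes the point: master and slave hold the same rows, but in a different physical order, so "delete whichever row the engine visits first" removes different rows on each server.

```python
# Hypothetical simulation: same data, different physical row order.
master = [("alice", 1), ("bob", 2)]
slave = [("bob", 2), ("alice", 1)]

def delete_limit_1(rows):
    # Mimics DELETE FROM t LIMIT 1 with no ORDER BY: the engine removes
    # whichever row it happens to visit first.
    return rows[1:]

master_after = delete_limit_1(master)
slave_after = delete_limit_1(slave)

# Replaying the same statement left different rows behind on each server.
assert set(master_after) != set(slave_after)
```

Row-based logging avoids this class of problem because it records which rows changed, not the statement that changed them.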
bullet 1: Row-based binary logging logs changes to individual table rows. The master writes events to the binary log that indicate how individual table rows are changed. bullet 2, sub-bullet 4: - On the master: INSERT ... SELECT; INSERT statements with AUTO_INCREMENT; UPDATE or DELETE statements with WHERE clauses that do not use keys or do not change most of the examined rows. - On the slave: INSERT, UPDATE, or DELETE statements. bullet 3, sub-bullet 1: SBR logs just the UPDATE statement; RBR logs each row changed by that UPDATE. - More data means it may take longer to use the binary logs to recover the server, and the binary log will be locked for the writing of the data to it. bullet 3, sub-bullet 2: - Examples: - Until 5.1.29 you couldn't read the actual statements that caused changes. After that you can use --base64-output=DECODE-ROWS and --verbose with mysqlbinlog. - Prior to 5.1.24, it was possible to get different results on the slave than on the master, caused by a bug in how locking of rows was handled as they were accessed. Corrected now.
bullet 3: Some examples: UUID(); one or more tables with AUTO_INCREMENT columns are updated and a trigger or stored function is invoked; any INSERT DELAYED is executed; a call to a UDF is involved. Individual engines can also determine the logging format used when information in a table is updated.
- slaves should be more powerful than the master, since they have to do all the work from the master plus all the reads sent to the slave - the master can only expand so much: for each slave it has, it will have to handle the connection and the sending of the binlog - multiple layouts: Master/Slave, Master/Master (not recommended unless Hot Master/Cold Master), Pyramid, etc.
Having a copy of the data from the master: bullet 2: you can stop the slave to get a clean backup of the master without interfering with the availability of the public-facing system. bullet 3: using MMM you can handle failover to a “hot swap” system that has been updating to keep up with the original. No single point of failure. bullet 4: allows you to test with real-world data, to get a better idea of your application's interaction with it. bullet 5: use different storage engines between the master and the slave on tables, to take advantage of a specific storage engine's abilities (full-text searching, support for transactions).
Reporting queries tend to be very different from the queries that are run by the application. This also gives the DBA an area to query the data to learn about it: it helps with query tuning, or with learning about trends in the data (data mining). All separate from the master production server, so it doesn't interfere with its work.
- take into account latency on the network, so it will not be able to be completely “up-to-date”, but something may be better than nothing. - offices/branches/developers/contractors can have a local copy without having access to the master
bullet 1: If this has not already been done, this part of master setup requires a server restart. bullet 2: If this has not already been done, this part of slave setup requires a server restart. bullet 3: Each slave must connect to the master using a MySQL user name and password, so there must be a user account on the master that the slave can use to connect. This does not require a dedicated replication account, but be aware that the user name and password will be stored in plain text within the master.info file - best to use a SQL account solely for the purposes of replication.
bullet 1, sub-bullet 1: Look for File and Position in the output of SHOW MASTER STATUS. bullet 1, sub-bullet 2: Pick your poison for how you want to do this; both methods have manual pages on how to do it. Maybe you want to test your backup procedures to see if they work... bullet 2, step 4: mysql> UNLOCK TABLES;
Bold is all that is really required. [] are optional configs if you need them; not all options are shown.
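As a rough sketch, the truly required settings boil down to a server ID on each machine and binary logging on the master. The values below are made-up placeholders, not a recommendation for any particular environment:

```
# master my.cnf (restart required if log-bin was previously off)
[mysqld]
server-id = 1
log-bin   = mysql-bin

# slave my.cnf (restart required if server-id was previously unset)
[mysqld]
server-id = 2
```

On the slave you would then run CHANGE MASTER TO with the master's host, the replication account credentials, and the File/Position values taken from SHOW MASTER STATUS, followed by START SLAVE.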
Known gotcha: the default database and qualified table names (database.table) can cause a query to not be replicated when you think it should be.
Slave_IO_State: a copy of the State field of the SHOW PROCESSLIST output for the slave I/O thread. Master_Log_File: the master binlog file from which the I/O thread is currently reading. Read_Master_Log_Pos: the position in the current master binlog file that the I/O thread has read to. Relay_Log_File: the relay log file from which the SQL thread is currently *reading* and executing. Relay_Log_Pos: the position in the relay log file up to which the SQL thread has read and executed. Relay_Master_Log_File: the name of the master binlog containing the most recent event executed by the SQL thread.
Exec_Master_Log_Pos: the position in the binlog up to which the SQL thread has read and executed. - The coordinates given by (Relay_Master_Log_File, Exec_Master_Log_Pos) in the master's binary log correspond to the coordinates given by (Relay_Log_File, Relay_Log_Pos) in the relay log. Relay_Log_Space: the total combined size of all existing relay log files. Seconds_Behind_Master: In essence, this field measures the time difference in seconds between the slave SQL thread and the slave I/O thread. This field is an indication of how “late” the slave is: - When the slave SQL thread is actively processing updates, this field is the number of seconds that have elapsed since the timestamp of the most recent event on the master executed by that thread. - When the SQL thread has caught up to the slave I/O thread and is idle waiting for more events from the I/O thread, this field is zero. Gotcha: If the network is slow, this is not a good approximation; the slave SQL thread may quite often be caught up with the slow-reading slave I/O thread, so Seconds_Behind_Master often shows a value of 0, even if the I/O thread is late compared to the master. In other words, this column is useful only for fast networks. Last_IO_Errno/Last_IO_Error: the error number and error message of the last error that caused the I/O thread to stop. Last_SQL_Errno/Last_SQL_Error: the error number and error message of the last error that caused the SQL thread to stop.