The document discusses the challenges of managing changes and versions for PeopleSoft environments. It describes how traditional version control tools only manage files and not PeopleSoft database objects. It introduces Stat ACM as a solution that can version both files and PeopleSoft objects natively. It highlights key Stat ACM capabilities like enforcing change control policies, providing audit trails of changes, facilitating rollbacks, and increasing efficiency through automation.
Top 10 DBA Mistakes on Microsoft SQL Server - Kevin Kline
From the noted author of SQL in a Nutshell - Microsoft SQL Server is easier to administer than any other mainstream relational database on the market. But “easier than everyone else” doesn’t mean it’s easy. And it doesn’t mean that database administration on SQL Server is problem-free. Since SQL Server frequently grows up from small, home-grown applications, many IT professionals end up encountering issues that others tackled and solved years ago. Why not learn from those who first blazed the trails of database administration, so that we don’t make the same mistakes over and over again? In fact, wouldn’t you like to learn about those mistakes before they ever happen?
There is a short list of mistakes that, if you know of them in advance, will make your life much easier. These mistakes are the “low hanging fruit” of application design, development, and administration. Once you apply the lessons learned from this session, you’ll find yourself performing at a higher level of efficiency and effectiveness than before.
No reuse without permission. Follow me on social media at kekline and blog at kevinekline.com.
BPMN, BPEL, ESB or maybe Java? What should I use to implement my project? - Guido Schmutz
Have you already asked yourself, at the beginning of a SOA or integration project, which technology to use? Is it feasible to implement the integration layer entirely in Java, or do modern integration platforms such as Oracle Service Bus or Oracle SOA Suite provide benefits that bring you closer to the often-promised IT flexibility and agility?
2016 Mastering SAP Tech - 2 Speed IT and lessons from an Agile Waterfall eCom... - Eneko Jon Bilbao
A recent clash of worlds occurred when a local client asked us to deliver their Hybris eCommerce portal on top of their global template SAP system. The backend SAP team jogged along at the traditional waterfall pace, whilst the frontend Hybris team sought to sprint along in agile fashion. This is the story of how we managed the different worlds, the skills required, and the lessons learned by both teams.
The eBay Architecture: Striking a Balance between Site Stability, Feature Ve... - Randy Shoup
eBay architects Randy Shoup and Dan Pritchett give a guided tour of the eBay architecture. They cover the evolution of the technology stack from Perl to C++ to Java. And they discuss scaling strategies for the data tier, application tier, search, and operations.
Lessons from Large-Scale Cloud Software at Databricks - Matei Zaharia
Keynote by Matei Zaharia at SOCC 2019
Abstract:
The cloud has become one of the most attractive ways for enterprises to purchase software, but it requires building products in a very different way from traditional software, which has not been heavily studied in research. I will explain some of these challenges based on my experience at Databricks, a startup that provides a data analytics platform as a service on AWS and Azure. Databricks manages millions of VMs per day to run data engineering and machine learning workloads using Apache Spark, TensorFlow, Python and other software for thousands of customers. Two main challenges arise in this context: (1) building a reliable, scalable control plane that can manage thousands of customers at once and (2) adapting the data processing software itself (e.g. Apache Spark) for an elastic cloud environment (for instance, autoscaling instead of assuming static clusters). These challenges are especially significant for data analytics workloads whose users constantly push boundaries in terms of scale (e.g. number of VMs used, data size, metadata size, number of concurrent users, etc). I’ll describe some of the common challenges that our new services face and some of the main ways that Databricks has extended and modified open source analytics software for the cloud environment (e.g., designing an autoscaling engine for Apache Spark and creating a transactional storage layer on top of S3 in the Delta Lake open source project).
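The autoscaling challenge in (2) can be illustrated with a toy policy: grow or shrink a worker pool based on queued work instead of assuming a static cluster. This is a minimal sketch of the idea only; the thresholds and function names are invented for illustration and are not Databricks' actual autoscaling engine.

```python
# Toy autoscaling policy: size a worker pool from the pending-task backlog.
# Thresholds and names are illustrative, not Databricks' actual algorithm.

def desired_workers(pending_tasks: int,
                    tasks_per_worker: int = 10,
                    min_workers: int = 1,
                    max_workers: int = 100) -> int:
    """Return the worker count needed to drain the queue at a target rate."""
    needed = -(-pending_tasks // tasks_per_worker)  # ceiling division
    return max(min_workers, min(max_workers, needed))

# Scale up under load, back down to the floor when the queue drains.
print(desired_workers(pending_tasks=250))  # 25
print(desired_workers(pending_tasks=0))    # 1
```

A real engine would add hysteresis and scale-down delays so the cluster does not thrash between sizes, but the core decision is this kind of clamped, demand-driven function.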
Bio:
Matei Zaharia is an Assistant Professor of Computer Science at Stanford University and Chief Technologist at Databricks. He started the Apache Spark project during his PhD at UC Berkeley in 2009, and has worked broadly on datacenter systems, co-starting the Apache Mesos project and contributing as a committer on Apache Hadoop. Today, Matei tech-leads the MLflow open source machine learning platform at Databricks and is a PI in the DAWN Lab focusing on systems for ML at Stanford. Matei’s research was recognized through the 2014 ACM Doctoral Dissertation Award for the best PhD dissertation in computer science, an NSF CAREER Award, and the US Presidential Early Career Award for Scientists and Engineers (PECASE).
From Obvious to Ingenius: Incrementally Scaling Web Apps on PostgreSQL - Konstantin Gredeskoul
In this exciting and informative talk, presented at PgConf Silicon Valley 2015, Konstantin cuts through the theory to deliver a clear set of practical solutions for scaling applications atop PostgreSQL, eventually supporting millions of active users, tens of thousands of them concurrent, with an application stack that responds to requests in 100ms on average. He shares how his team solved one of the biggest challenges they faced: effectively storing and retrieving over 3B rows of "saves" (a Wanelo equivalent of Instagram's "like" or Pinterest's "pin"), all in PostgreSQL, with highly concurrent random access.
Over the last three years, the team at Wanelo optimized the hell out of their application and database stacks. Using PostgreSQL 9 as their primary data store and the Joyent Public Cloud as a hosting environment, the team re-architected their backend for rapid expansion several times over as the unrelenting traffic kept climbing. This ultimately resulted in a highly efficient, horizontally scalable, fault-tolerant application infrastructure. Unimpressed? Now try getting there without OPS or DBA teams, all while deploying to production seven times per day, with an application measuring 99.999% uptime over the last six months.
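The "saves" workload described above boils down to an append-heavy table with a uniqueness constraint plus highly concurrent random reads. A minimal sketch of that shape, using SQLite as a stand-in for PostgreSQL (the table and column names are invented for illustration, not Wanelo's actual schema):

```python
import sqlite3

# Stand-in for the "saves" workload: append-heavy writes with a uniqueness
# constraint, plus random reads by user. SQLite replaces PostgreSQL here;
# table/column names are invented for illustration.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE saves (
        user_id    INTEGER NOT NULL,
        product_id INTEGER NOT NULL,
        created_at TEXT DEFAULT CURRENT_TIMESTAMP,
        PRIMARY KEY (user_id, product_id)   -- one save per (user, product)
    )
""")

def save(user_id: int, product_id: int) -> None:
    # INSERT OR IGNORE makes repeated saves harmless under concurrency
    # (PostgreSQL's equivalent is INSERT ... ON CONFLICT DO NOTHING).
    db.execute("INSERT OR IGNORE INTO saves (user_id, product_id) VALUES (?, ?)",
               (user_id, product_id))

save(1, 42); save(1, 42); save(2, 42)
count = db.execute("SELECT COUNT(*) FROM saves").fetchone()[0]
print(count)  # 2 -- the duplicate save was ignored
```

At billions of rows, the talk's real lessons are about partitioning and index design on top of exactly this access pattern; the sketch only shows the idempotent-write shape that makes such a table safe to hammer concurrently.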
Bridging Oracle Database and Hadoop by Alex Gorbachev, Pythian from Oracle Op... - Alex Gorbachev
Modern big data solutions often incorporate Hadoop as one of their components and require integrating Hadoop with other components, including Oracle Database. This presentation explains how Hadoop integrates with Oracle products, focusing specifically on Oracle Database. Explore the various methods and tools available to move data between Oracle Database and Hadoop, learn how to transparently access data in Hadoop from Oracle Database, and review how other products, such as Oracle Business Intelligence Enterprise Edition and Oracle Data Integrator, integrate with Hadoop.
Vladimir Bacvanski and Rafael Coss
Common demands for Web 2.0 applications are rich interactivity and the ability to handle large volumes of data. Join us to see how to integrate IBM DB2, IBM WebSphere sMash, and high-performance data access through IBM Optim pureQuery. This combination provides a way to rapidly develop data-centric, scalable, dynamic Web applications. See how we begin with a DB2 database, access the data with Optim pureQuery, use business objects in Groovy, and expose them to users through the Dojo AJAX framework.
CloverDX for IBM Infosphere MDM (for 11.4 and later) - CloverDX
For users of the IBM InfoSphere MDM product, the data transformation/loading component (CloverETL) has been removed as of version 11.4. However, if you wish to continue using it, you can obtain a complimentary license for CloverDX (the new brand name for CloverETL) by contacting IBM support.
Microservices, Events, and Breaking the Data Monolith with Kafka - VMware Tanzu
One of the trickiest problems with microservices is dealing with data as it becomes spread across many different bounded contexts. An event architecture and an event-streaming platform like Kafka provide a respite from this problem. Event-first thinking has a plethora of other advantages too, pulling in concepts from event sourcing, stream processing, and domain-driven design.
In this talk, Ben and Cornelia will tackle how to do the following:
● Transform the data monolith to microservices
● Manage bounded contexts for data fields that overlap
● Use event architectures that apply streaming technologies like Kafka to address the challenges of distributed data
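The event-first pattern the talk describes can be sketched without a broker: producers append immutable events to a shared log, and each bounded context builds its own view by consuming them rather than querying another service's database. The in-memory list below stands in for a Kafka topic; event and field names are invented for illustration.

```python
from collections import defaultdict

# In-memory stand-in for a Kafka topic: an append-only event log that
# several bounded contexts consume independently. Names are illustrative.
event_log = []

def publish(event_type: str, payload: dict) -> None:
    """Append an immutable fact to the log (a producer's send)."""
    event_log.append({"type": event_type, **payload})

def build_order_counts(log):
    """One bounded context derives its own local view from the shared log,
    instead of reaching into another service's database."""
    counts = defaultdict(int)
    for event in log:
        if event["type"] == "order_placed":
            counts[event["customer"]] += 1
    return dict(counts)

publish("order_placed", {"customer": "alice", "total": 30})
publish("order_placed", {"customer": "alice", "total": 12})
publish("payment_failed", {"customer": "bob"})
print(build_order_counts(event_log))  # {'alice': 2}
```

The key property Kafka adds to this sketch is durability and replayability: a new service can rebuild its view from the beginning of the topic, which is what makes events a workable alternative to a shared data monolith.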
Speakers:
Cornelia Davis, Author & VP, Technology, Pivotal
Ben Stopford, Author & Technologist, Office of CTO, Confluent
Whitepaper: Volume Testing Thick Clients and Databases - RTTS
Even in the current age of cloud computing, there are still many benefits to developing thick client software: independence from browser versions, offline support, low hosting fees, and use of existing end-user hardware, to name a few.
It's more than likely that your organization is using at least a few thick client applications. Now consider this: as your user base grows, does your thick client's back-end server need to grow as well? How quickly? How do you ensure that you provide the correct amount of additional capacity without overstepping and unnecessarily eating into your profits? The answer is volume testing.
Read how RTTS does this with IBM Rational Performance Tester.
Slides from Oracle's ADF Architecture TV series covering the Design phase of ADF projects, investigating the transaction options on ADF task flows.
Like to know more? Check out:
- Subscribe to the YouTube channel - http://bit.ly/adftvsub
- Design Playlist - http://www.youtube.com/playlist?list=PLJz3HAsCPVaSemIjFk4lfokNynzp5Euet
- Read the episode index on the ADF Architecture Square - http://bit.ly/adfarchsquare
Oracle ADF Architecture TV - Design - Task Flow Data Control Scope Options - Chris Muir
Slides from Oracle's ADF Architecture TV series covering the Design phase of ADF projects, investigating the task flow data control scope options.
Like to know more? Check out:
- Subscribe to the YouTube channel - http://bit.ly/adftvsub
- Design Playlist - http://www.youtube.com/playlist?list=PLJz3HAsCPVaSemIjFk4lfokNynzp5Euet
- Read the episode index on the ADF Architecture Square - http://bit.ly/adfarchsquare
Introducing domain driven design - dogfood con 2018 - Steven Smith
DDD provides a set of patterns and practices for tackling complex business problems with software models. Learn the basics of DDD in this session, including several principles and patterns you can start using immediately even if your project hasn't otherwise embraced DDD. Examples will primarily use C#/.NET.
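The session's examples use C#/.NET, but one of the starter patterns it covers, the Value Object (immutable, compared by value rather than identity), translates directly. A minimal sketch in Python; the `Money` type and its fields are invented for illustration:

```python
from dataclasses import dataclass

# A DDD Value Object: immutable, and equal when its values are equal,
# regardless of object identity. (The talk uses C#/.NET; this Money
# type is invented here for illustration.)
@dataclass(frozen=True)
class Money:
    amount: int       # store cents to avoid float rounding issues
    currency: str

    def add(self, other: "Money") -> "Money":
        # Operations return new instances instead of mutating state.
        if self.currency != other.currency:
            raise ValueError("cannot add different currencies")
        return Money(self.amount + other.amount, self.currency)

a = Money(500, "USD")
b = Money(250, "USD")
print(a.add(b) == Money(750, "USD"))  # True -- equality is by value
```

Because the object is frozen, it can be shared freely across aggregates without defensive copying, which is exactly the property that makes Value Objects one of the easiest DDD patterns to adopt incrementally.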
SQL Server 2012 is the most crucial release of SQL Server to date. In this slideshow, you'll see how SQL Server 2012 supports mission-critical applications 24x7 and gives significant insight into business operations. Presented by Subhash Jawahrani of Microsoft to the Silicon Valley SQL Server User Group in March 2012.
You'll learn about:
* Mission Critical Apps
* New Business Intelligence features
* Improving business agility with Cloud computing
Memory Heap Analysis with AppDynamics - AppSphere16 - AppDynamics
Learn the internal workings of the Java memory heap, how generational memory heaps work, and the different heap optimization parameters. Discover how to monitor and diagnose memory issues with AppDynamics Automatic Leak Detection and Object Instance Tracking.
A full overview of Team Foundation Server 2010 (not just what's new).
Includes 4 main areas:
- Manage & Plan your Project
- Understand Parallel Development
- No More "No Repro" Bugs
- Reporting on your Entire Portfolio.
Screenshots are included.
Developing and delivering applications in a repeatable way, with the expected quality, is a great challenge these days. In order to maximize business value at the speed of business, initiatives are being driven both by development and delivery teams and by operations. Each has its own focus and specifics, but in essence they are both centered around collaboration and integration, automation, standardization, and governance.
Practical, team-focused operability techniques for distributed systems - DevO... - Matthew Skelton
In this talk, we explore five practical, tried-and-tested, real world techniques for improving operability with many kinds of software systems, including cloud, Serverless, Microservices, on-premise, and IoT. Based on our work in many industry sectors, we will share our experience of helping teams to improve the operability of their software systems through these straightforward, team-friendly techniques.
From a talk given at DevOpsCon Munich 2017 https://devopsconference.de/microservices/practical-team-focused-operability-techniques-for-distributed-systems/
Presenters: Matthew Skelton and Rob Thatcher, Skelton Thatcher Consulting
Webinar: Operability is all about making software work well in Production. In this webinar, we explore practical, tried-and-tested, real world techniques for improving operability with many kinds of software systems, including cloud, Serverless, on-premise, and IoT: logging with Event IDs, Run Book dialogue sheets, endpoint healthchecks, correlation IDs, and lightweight User Personas.
Target audience: Software Developer, Tester, Software Architect, DevOps Engineer, Delivery Manager, Head of Delivery, Head of IT.
Benefits: Attendees will gain insights into operability and why this is important for modern software systems, along with practical experience of techniques to enhance operability in almost any software system they encounter.
LUXproject is a distributed web-based project management system created on the basis of specific commercial and non-commercial modules developed by third-party vendors and open-source communities (Atlassian JIRA/GreenHopper, Atlassian Confluence, Atlassian FishEye, Subversion/Perforce, Cruise Control, WebDav etc.) as well as Luxoft modules.
It enables company management to arrange transparent project management and to always have current and reliable information.
Modern software systems now increasingly span cloud, on-premise, and remote embedded devices & sensors. These distributed systems bring challenges with data, connectivity, performance, and systems management, so for business success we need to design and build with operability as a first class property.
In this talk, we explore five practical, tried-and-tested, real world techniques for improving operability with many kinds of software systems, including cloud, Serverless, on-premise, and IoT:
- Logging as a live diagnostics vector with sparse Event IDs
- Operational checklists and 'Run Book dialogue sheets' as a discovery mechanism for teams
- Endpoint healthchecks as a way to assess runtime dependencies and complexity
- Correlation IDs beyond simple HTTP calls
- Lightweight 'User Personas' as drivers for operational dashboards
These techniques work very differently with different technologies. For instance, an IoT device has limited storage, processing, and I/O, so generation and shipping of logs and metrics looks very different from the cloud or Serverless case. However, the principles - logging as a live diagnostics vector, Event IDs for discovery, etc. - work remarkably well across very different technologies.
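Two of the techniques above, sparse Event IDs and correlation IDs carried across calls, fit in a short sketch. The enum values, logger name, and message format below are illustrative choices, not a prescribed scheme from the talk:

```python
import logging
from enum import Enum

# Sparse Event IDs: a stable, human-searchable identity for each
# significant state transition. Values and names are illustrative.
class EventID(Enum):
    ORDER_RECEIVED = 1001
    PAYMENT_OK     = 1201
    PAYMENT_FAILED = 1202

logging.basicConfig(format="%(message)s", level=logging.INFO)
log = logging.getLogger("shop")

def format_event(event: EventID, correlation_id: str, msg: str) -> str:
    # The correlation ID ties together every log line for one request,
    # even across service boundaries (pass it along on outgoing calls).
    return f"[{correlation_id}] event={event.name}({event.value}) {msg}"

def log_event(event: EventID, correlation_id: str, msg: str) -> None:
    log.info(format_event(event, correlation_id, msg))

log_event(EventID.ORDER_RECEIVED, "req-7f3a", "order accepted")
log_event(EventID.PAYMENT_FAILED, "req-7f3a", "card declined")
```

With sparse, stable IDs, operators can grep or alert on `event=PAYMENT_FAILED(1202)` without depending on free-text messages that change between releases, and the shared `req-7f3a` correlation ID reconstructs the whole request path.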
"Impact of front-end architecture on development cost", Viktor Turskyi - Fwdays
I have heard many times that architecture is not important for the front-end. I have also seen, many times, how developers implement features on the front-end just by following the standard rules of a framework, thinking that this is enough to launch the project successfully, and then the project fails. How can you prevent this, and which approach should you choose? I have launched dozens of complex projects, and during the talk we will analyze which approaches have worked for me and which have not.
Key Trends Shaping the Future of Infrastructure.pdf - Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This talk covers the key trends across hardware, cloud, and open source; explores how these areas are likely to mature and develop over the short and long term; and considers how organisations can position themselves to adapt and thrive.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
- See how to accelerate model training and optimize model performance with active learning
- Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
- Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply applying machine learning to just any symbolic structure is not sufficient to really harvest the gains of NeSy. These gains will only be realized when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
- UI automation introduction
- UI automation sample
- Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
- State of global ICS asset and network exposure
- Sectoral targets and attacks, as well as the cost of ransom
- Global APT activity, AI usage, actor and tactic profiles, and implications
- Rise in volumes of AI-powered cyberattacks
- Major cyber events in 2024
- Malware and malicious payload trends
- Cyberattack types and targets
- Vulnerability exploit attempts on CVEs
- Attacks on countries – USA
- Expansion of bot farms – how, where, and why
- In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
- Why are attacks on smart factories rising?
- Cyber risk predictions
- Axis of attacks – Europe
- Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology is pushing into IT I was wondering myself, as an “infrastructure container kubernetes guy”, how get this fancy AI technology get managed from an infrastructure operational view? Is it possible to apply our lovely cloud native principals as well? What benefit’s both technologies could bring to each other?
Let me take this questions and provide you a short journey through existing deployment models and use cases for AI software. On practical examples, we discuss what cloud/on-premise strategy we may need for applying it to our own infrastructure to get it to work from an enterprise perspective. I want to give an overview about infrastructure requirements and technologies, what could be beneficial or limiting your AI use cases in an enterprise environment. An interactive Demo will give you some insides, what approaches I got already working for real.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Connector Corner: Automate dynamic content and events by pushing a button
Stat 5.4 Pre Sales Demo Master
1. Change Management and Version Control for PeopleSoft: Stat® ACM. Tim Steward, Senior Systems Consultant, ServiceQuest Consulting
2. Visualizing a Typical PeopleSoft Development Lifecycle
[diagram: Development, Testing, and Production environments, each with Windows and Unix servers plus a database. 10% of changed items are files (SQR, COBOL, SQC, Crystal, scripts, etc.) on both Unix & Windows servers; 90% are proprietary database objects (Record, Page, Index, PeopleCode, Activity, Component, etc.)]
3. Most Version Control Tools Only Version Files
PVCS, SourceSafe, Harvest, etc. version and manage flat files, but they offer only a partial solution: files are only 10% of the object types in a PeopleSoft development lifecycle. They do not handle the database objects at all.
4. Only Stat Versions & Migrates Both PeopleSoft Objects & Flat Files Natively
Stat covers both the 10% flat files (SQR, COBOL, SQC, Crystal, scripts, etc.) on Unix & Windows servers and the 90% proprietary database objects (Record, Page, Index, PeopleCode, Activity, Component, etc.) across Development, Testing, and Production: 100% full object support.
9. Common Change Management Goals
Here is an outline of what we will be covering:
Providing an Audit Trail: Do I know who changed what?
Establishing Controls: Can I enforce my policies and procedures today?
Supporting Compliance: Can I satisfy the auditors and management?
Enhancing Visibility: What impact is change having, and can I report on those changes?
Enabling Communication: Are we all on the same page and being proactively notified?
Reducing Downtime & Risk: Can I roll back? Can I fix production?
Increasing Efficiency: Are we leveraging technology & automation?
10. Let’s Start With Establishing Controls: Can I enforce my policies and procedures today?
11. Establishing Controls. Stat is uniquely able to:
1. Physically lock down tools using object security in PeopleSoft
2. Prevent changes without a proper change request ticket
3. Require task completion before advancing in the workflow
4. Ensure read-only access for developers who have not obtained a lock in Stat
5. Require an approval before a change or migration can occur (if required)
6. Use role-based security to enforce separation of duties
12.
13. Workflow Enables Control of Policies & Procedures
Workflow allows you to assign and configure business steps to ensure that:
The proper person has the change request at the proper time
Approvals have been met before migrations or transfers are made to the next person
Tasks have been accomplished
Issues and documentation have been logged
Migrations and post-migrations have been performed (Build, DMS, COBOL, custom)
14. Workflow Enables Control of Policies & Procedures
[workflow diagram: databases and people connected by status rules and transfer rules that require tasks & approvals]
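Stat configures status and transfer rules in its GUI, but the idea on the slides above can be sketched as a small state machine. This is a hypothetical illustration only: the class names, statuses, and rule table are invented for this sketch and are not Stat's API or schema.

```python
# Hypothetical sketch of a change-request workflow with status rules
# (legal transitions) and transfer rules (tasks/approvals required
# before the CSR moves to the next person). All names are invented.

class WorkflowError(Exception):
    pass

class ChangeRequest:
    def __init__(self, csr_id, assignee):
        self.csr_id = csr_id
        self.assignee = assignee
        self.status = "Open"
        self.approvals = set()    # approvals granted so far
        self.tasks_done = set()   # completed task names

# Status rules: which transitions are legal, and what each one requires.
STATUS_RULES = {
    ("Open", "In Development"):    {"tasks": set(),         "approvals": set()},
    ("In Development", "Testing"): {"tasks": {"unit-test"}, "approvals": set()},
    ("Testing", "Production"):     {"tasks": {"sign-off"},  "approvals": {"manager"}},
}

def transfer(csr, new_status, new_assignee):
    """Transfer rule: advance the CSR only if the required tasks and
    approvals for this transition have been met."""
    rule = STATUS_RULES.get((csr.status, new_status))
    if rule is None:
        raise WorkflowError(f"illegal transition {csr.status} -> {new_status}")
    missing_tasks = rule["tasks"] - csr.tasks_done
    missing_approvals = rule["approvals"] - csr.approvals
    if missing_tasks or missing_approvals:
        raise WorkflowError(
            f"blocked: tasks {sorted(missing_tasks)}, "
            f"approvals {sorted(missing_approvals)}")
    csr.status = new_status
    csr.assignee = new_assignee   # the proper person at the proper time

csr = ChangeRequest("CSR-101", "dev1")
transfer(csr, "In Development", "dev1")
csr.tasks_done.add("unit-test")
transfer(csr, "Testing", "qa1")
csr.tasks_done.add("sign-off")
csr.approvals.add("manager")
transfer(csr, "Production", "migrator")
print(csr.status)  # Production
```

The point of the rule table is the same as the slide's: a CSR physically cannot advance past a status until its tasks and approvals are satisfied.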
15.
16. Common Change Management Goals (recap): Providing an Audit Trail; Establishing Controls.
17. Providing an Audit Trail Through CSRs
An electronic change request (CSR) tracks all changes made from development to production, whether enhancements, customizations, or patches.
The CSR audit trail captures: ticket open and close, migrations (objects & files), approvals, documentation, tasks & issues, final version, manager review, notifications, and sign-off.
19. Automatic & Enforced Audit Trail
[audit table columns: Change, From Value, To Value, Last Update, Updated By]
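The audit table above records what changed, the old and new values, when, and by whom. As a hypothetical sketch (the field names mirror the slide's columns, not Stat's actual schema, and the `record_change` helper is invented):

```python
# Hypothetical sketch of an enforced audit-trail row: every change
# appends a record with the field changed, old/new values, a UTC
# timestamp, and the user. Column names follow the slide, not Stat.
import datetime

AUDIT_LOG = []

def record_change(field, from_value, to_value, user):
    AUDIT_LOG.append({
        "Change": field,
        "From Value": from_value,
        "To Value": to_value,
        "Last Update": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "Updated By": user,
    })

record_change("Status", "Testing", "Production", "tsteward")
print(AUDIT_LOG[0]["Change"], AUDIT_LOG[0]["Updated By"])
```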
20. Common Change Management Goals (recap): Providing an Audit Trail; Establishing Controls; Reducing Downtime & Risk.
21. Can I Roll Back My Changes?
The Stat repository versions both PeopleSoft project objects (records, pages, indexes, components, PeopleCode, etc.) and flat files (SQR, COBOL, nVision, Crystal Reports, etc.), separated by tool and versioned inside Dev, Test, and Prod.
Each CSR keeps baseline, interim, and final archive sets, enabling quick drag-and-drop rollback of both files and objects.
22. Rollback Examples
Quick drag-and-drop rollback of files & objects from the Stat repository, for example when:
1. Something fails in production
2. An emergency fix migrates over a developer's work
3. A database refresh to Test wipes in-flight changes
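The baseline/interim/final archive-set idea behind these rollbacks can be sketched as snapshots of an environment's object definitions. This is an invented illustration, not Stat's internals: the `Repository` and `ArchiveSet` names and the dict-of-objects model are assumptions for the sketch.

```python
# Hypothetical sketch of archive sets and rollback: a "baseline"
# snapshot is taken before a change, so a failed migration can be
# undone by restoring the environment from that snapshot.
import copy

class ArchiveSet:
    def __init__(self, kind, objects):
        self.kind = kind                       # "baseline", "interim", "final"
        self.objects = copy.deepcopy(objects)  # frozen copy of definitions

class Repository:
    def __init__(self):
        self.archives = []   # archive sets captured for a CSR

    def snapshot(self, kind, environment_objects):
        self.archives.append(ArchiveSet(kind, environment_objects))

    def rollback(self, environment_objects, kind="baseline"):
        """Restore an environment's objects from the chosen archive set."""
        archive = next(a for a in self.archives if a.kind == kind)
        environment_objects.clear()
        environment_objects.update(copy.deepcopy(archive.objects))

prod = {"PAGE_A": "v1", "RECORD_B": "v1"}
repo = Repository()
repo.snapshot("baseline", prod)   # capture before the change
prod["PAGE_A"] = "v2"             # migration goes wrong in production
repo.rollback(prod, "baseline")   # quick rollback
print(prod["PAGE_A"])  # v1
```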
23. Common Change Management Goals (recap): Providing an Audit Trail; Establishing Controls; Reducing Downtime & Risk; Increasing Efficiency.
24. What Types of Automation Does Stat Provide?
Stat automation, compared with the manual process it replaces:
Automatic email notifications (instead of reactive manual communication or paper forms)
Drag & drop migrations & rollback of files & objects (instead of manual project copy and FTP to all source file locations, with manual recreation)
Scheduled reports providing seamless visibility of all changes (instead of manually compiling documentation from several report-unfriendly sources)
PeopleSoft-centric wizards for impact analysis, mass migration, customization history, recovery, and release management (instead of time-consuming manual impact analysis, one-off migrations, stacks of compare reports, object recreation, and non-migration- or object-based release management)
An independent repository providing a central location for the audit trail of documentation and forms (instead of paper forms or multiple disparate applications that are difficult to audit and report against)
26. Stat Example #1: Automated Migrations
[diagram: drag-and-drop migrations of objects & files, under version control, across Windows, Unix, and database servers]
27. Example #2: Mass Migrations & Release Management
Mass-migrate multiple change requests (CSRs), multiple PeopleSoft projects, and multiple file types & locations, grouped by release (8.42, 8.44, 8.45, 8.47, 8.49), by target environment (Dev, Test, Stage, Prod, Demo), or by ready-for-environment status.
28. Common Change Management Goals (recap): Providing an Audit Trail; Establishing Controls; Enabling Communication; Reducing Downtime & Risk; Increasing Efficiency.
29. Difficult Scenarios Which Require Communication: "What we've got here is failure to communicate."
30. Environment-Wide Object & File Locking
[diagram: a lock grants exclusive rights to an object per environment (Dev, Test, Prod); later requesters receive a reservation and wait for the lock]
31.
32. Locks & Reservations. If someone already has a lock on your object, you will get a reservation and can see who holds the lock and what stage they are in.
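The lock/reservation behavior described above is essentially a per-object, per-environment exclusive lock with a wait queue. As a hypothetical sketch (the `LockManager` class and its methods are invented for illustration, not Stat's API):

```python
# Hypothetical sketch of environment-wide locking: the first requester
# gets an exclusive lock on (object, environment); later requesters are
# queued as reservations and can see who currently holds the lock.
from collections import deque

class LockManager:
    def __init__(self):
        self.locks = {}          # (object, environment) -> lock holder
        self.reservations = {}   # (object, environment) -> waiting users

    def acquire(self, obj, env, user):
        key = (obj, env)
        if key not in self.locks:
            self.locks[key] = user
            return "locked"
        # Someone already has the lock: queue a reservation instead.
        self.reservations.setdefault(key, deque()).append(user)
        return f"reserved (locked by {self.locks[key]})"

    def release(self, obj, env, user):
        key = (obj, env)
        if self.locks.get(key) != user:
            raise PermissionError("only the lock holder can release")
        waiters = self.reservations.get(key)
        if waiters:
            self.locks[key] = waiters.popleft()  # next reservation wins
        else:
            del self.locks[key]

mgr = LockManager()
print(mgr.acquire("PAGE_A", "Dev", "dev1"))   # locked
print(mgr.acquire("PAGE_A", "Dev", "fix1"))   # reserved (locked by dev1)
mgr.release("PAGE_A", "Dev", "dev1")
print(mgr.locks[("PAGE_A", "Dev")])           # fix1
```

This mirrors the emergency-fix scenario on the next slide: the fix cannot migrate over the developer's lock until the developer releases it, at which point the reservation is promoted.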
33. Example #1: An Emergency Fix
1. A developer has a page locked (CSR #101) that will soon be needed for an emergency fix (CSR #102); without Stat, the fix might overwrite the developer's work.
2. The emergency fix team must communicate with the developer and cannot migrate over the developer's lock without proper permission.
3. The developer can back up their work in Stat and unlock the objects, allowing the fix to go through, with the option to later restore their copy if they had done more work than the fix.
34. Example #2: Automated Email Notifications
[email triggers: due date approaching, CSR assignment, approval pending]
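Stat configures these notification triggers in its GUI; as a hypothetical sketch of the trigger-to-email idea (the `on_event` function, trigger names, and addresses below are all invented for illustration):

```python
# Hypothetical sketch of trigger-based notifications: an event on a
# CSR (due date, assignment, pending approval) produces an email to
# the relevant person. notify() records messages instead of sending.
notifications = []

def notify(recipient, subject):
    notifications.append((recipient, subject))

def on_event(event, csr_id, user):
    triggers = {
        "due_date":         f"{csr_id}: due date is approaching",
        "assignment":       f"{csr_id}: you have been assigned this CSR",
        "approval_pending": f"{csr_id}: an approval is pending",
    }
    notify(user, triggers[event])

on_event("assignment", "CSR-101", "dev1@example.com")
on_event("approval_pending", "CSR-101", "manager@example.com")
print(len(notifications))  # 2
```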
35. Common Change Management Goals (recap): Providing an Audit Trail; Establishing Controls; Enhancing Visibility; Enabling Communication; Reducing Downtime & Risk; Increasing Efficiency.
36. Increasing Visibility Through Impact Analysis
1. Patches, fixes, and enhancements often impact existing customizations, causing overwrites.
2. Visibility is needed to determine which objects or files will potentially be impacted, and which customizations will be impacted as well.
3. Because Stat tracks object history and customization history in its database, it can warn you of any objects and/or files that may be impacted by introducing a new change.
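The warning step can be sketched as a simple intersection of an incoming patch's object list with recorded customization history. This is an invented illustration; the data shapes and the `impacted_customizations` helper are assumptions for the sketch, not Stat's internals.

```python
# Hypothetical sketch of impact analysis: flag every object in an
# incoming patch that overlaps with customization history, so the
# team is warned before the patch overwrites a customization.

# Customization history: object name -> CSRs that customized it.
customization_history = {
    "PAGE_A": ["CSR-101"],
    "RECORD_B": ["CSR-102", "CSR-107"],
    "PEOPLECODE_C": [],
}

def impacted_customizations(patch_objects):
    """Return patch objects that would overwrite existing customizations,
    mapped to the CSRs that made those customizations."""
    warnings = {}
    for obj in patch_objects:
        csrs = customization_history.get(obj, [])
        if csrs:
            warnings[obj] = csrs
    return warnings

patch = ["PAGE_A", "RECORD_B", "PAGE_Z"]
print(impacted_customizations(patch))
# {'PAGE_A': ['CSR-101'], 'RECORD_B': ['CSR-102', 'CSR-107']}
```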
37. Increasing Visibility Through Reports
Because of Stat's central repository, auditors and managers have deeper visibility into object & file history, migration history, approval history, and documentation, providing quick access to critical reports and information that might otherwise take days to produce manually.
40. Common Change Management Goals (recap): Providing an Audit Trail; Establishing Controls; Supporting Compliance; Enhancing Visibility; Enabling Communication; Reducing Downtime & Risk; Increasing Efficiency.
41. Supporting Compliance: Already Established Points. We provide what most auditors & managers are looking for.
42. Supporting Compliance & Best Business Practices
Common requirements mapped to the Stat solution: CMDB (Stat repository); separation of duties (role-based security); approvals (workflow).