This document provides an overview of how to integrate Red Hat CloudForms 4 with OpenShift 3 to retrieve container metrics. It covers demonstrating the integration, installing and configuring CloudForms 4 to add OpenShift 3.1 as a container provider, configuring Hawkular on OpenShift to expose metrics via the Hawkular Metrics API, and modifying the Hawkular configuration to match how CloudForms expects to retrieve metrics. The presentation walks through configuring CloudForms and OpenShift, creating service accounts, adding roles, and configuring routes so that the Hawkular Metrics API is exposed on a port CloudForms can access.
Puppet is an important part of Satellite 6. In this presentation, I introduce Puppet, show how to quickly set up a Puppet server and a Puppet client, and finally explain how to write Puppet recipes with the goal of importing them into Satellite 6.
Source - https://www.openmaru.io/?p=3228
An essential concept for understanding Kubernetes is immutable infrastructure.
We explain the concept and its advantages by comparing how servers are operated under immutable versus mutable infrastructure.
We then look at why IT environments are shifting from being machine-centric to application-centric.
Immutable infrastructure can be compared to a fine porcelain teacup.
A disposable paper cup is thrown away after a single use, and buying a new one is no great burden.
But what about a fine porcelain teacup? You look after it carefully, and if it breaks, everything is lost.
Extending OpenShift Origin: Build Your Own Cartridge with Bill DeCoste of Red... (OpenShift Origin)
Extending OpenShift Origin: Build Your Own Cartridge
Presenters: Bill DeCoste
Cartridges allow developers to provide services running on top of the Red Hat OpenShift Platform-as-a-Service (PaaS). OpenShift already provides cartridges for numerous web application frameworks and databases. Writing your own cartridges allows you to customize or enhance an existing service, or provide new services. In this session, the presenter will discuss best practices for cartridge development and the latest changes in the OpenShift cartridge support.
* Latest changes made in the platform to ease cartridge development
* OpenShift Cartridges vs. plugins
* Outline for development of a new cartridge
* Customization of existing cartridges
* Quickstarts: leveraging a cartridge or cartridges to provide a complete application
In addition to authorization policies that control what a user can do, OpenShift Container Platform gives its administrators the ability to manage a set of security context constraints (SCCs) for limiting pods and securing their cluster.
The default security context may be too restrictive for containers pulled down from Docker Hub. Through this talk we'll explore the steps required to enable the needed permissions on selected OpenShift pods.
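As a minimal sketch of what such a permission change can look like (the project name myproject and the default service account are illustrative assumptions, not details from the talk):
# oc adm policy add-scc-to-user anyuid -z default -n myproject
(lets pods in myproject that run under the default service account start with any UID, which many Docker Hub images expect)
# oc get scc anyuid -o yaml
(inspect what the anyuid SCC actually permits before granting it)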
Helm, the de facto package manager for Kubernetes, is a powerful tool going through a period of breaking development. Join us as Matt Farina, a Helm maintainer and co-chair of sig-apps, explores some advanced and often overlooked techniques, along with the future direction of the project and the major changes in store for Helm v3.
Comparison of control plane deployment architectures in the scope of hypercon... (Miroslav Halas)
The OpenStack control plane can be implemented using one of three infrastructure types: bare metal, virtual machines, or containers. Comprehensive comparisons of these approaches are not available. In the first section of our talk, we present a reference architecture for building a virtualized control plane which supports OpenStack controller HA, file system HA, and networking HA, plus enhanced performance through CPU pinning and SR-IOV. All the building blocks are based on established open source tools and their corresponding products: Red Hat Enterprise Virtualization, Red Hat Gluster Storage, and Red Hat OpenStack. Any OpenStack deployment tool that can be used on bare metal works to build this virtualized control plane. We will focus on comparisons of different OpenStack control plane infrastructures. We compare the deployment and operational aspects of the different control plane implementations on the same hardware environment. We also evaluate the different control plane deployments using benchmarking tools (such as Rally), and provide a quantitative comparison.
OpenStack clusters are most often built with servers for Nova compute VMs and servers for storage, with Ceph storage requiring 3 or more nodes. It can be more cost-effective to "hyperconverge" Nova and Ceph on to the same servers, and rising processor core counts and RAM density have made this feasible. But it is important to understand the resource demand patterns of each and protect against corner cases where one starves the other. In the second section of our talk we will present our empirical approach to:
- Generating realistic system & storage loads using open source test suites
- Collecting and analyzing results quantitatively
- Optimizing hardware configuration and resource partitioning
We will present data, analysis, and lessons learned from our hyperconverged infrastructure work.
Compute node HA - current upstream development (Adam Spiers)
Short presentation made for the OpenStack London "Tokyo Aftermath" meetup on current upstream activity in the OpenStack HA developer community around high availability for compute nodes.
How to integrate_custom_openstack_services_with_devstack (Sławomir Kapłoński)
Presentation about how to configure and use DevStack to deploy an all-in-one OpenStack cluster for development and testing purposes.
It also shows how to integrate your own OpenStack service with DevStack using DevStack's plugin system.
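As a rough sketch of that plugin system, a service is hooked into DevStack with one enable_plugin line in local.conf; the plugin name and repository URL below are placeholders, not from the talk:
# cat >> local.conf <<'EOF'
[[local|localrc]]
# enable_plugin <name> <git-url> [branch]: DevStack clones the repo and
# runs its devstack/plugin.sh hooks during stack.sh
enable_plugin my-service https://example.org/my-service.git master
EOF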
Dockerizing OpenStack for High Availability (Daniel Krook)
Presentation at the OpenStack Summit in Paris, France on November 4, 2014.
High availability in OpenStack can be achieved in many ways. In this session we will describe how Docker can be used to provide an active-active, highly available OpenStack environment. We will focus on the real-world work that we have done to "Dockerize" OpenStack services, detail the advantages of this type of deployment (rapid deployment, rapid scale-out, versioning, etc.), and walk through our design - from requirements, limitations, and obstacles to, especially, our decisions. We will use our experiences as examples to provide real-world best practices, as well as show a demonstration of the environment in action.
Manuel Silveyra - Senior Cloud Solutions Architect
Daniel Krook - Senior Certified IT Specialist
Shaun Murakami - Senior Cloud Solution Architect
Kalonji Bankole - Cloud Architect
Thanks to tools like Vagrant, Puppet/Chef, and Platform-as-a-Service offerings like Heroku, developers are used to being able to spin up a development environment that is the same every time. What if we could go a step further and make sure our development environment is not only using the same software, but is 100% configured and set up like production? Docker will let us do that, and so much more. We’ll look at what Docker is, why you should look into using it, and all of the features that developers can take advantage of.
Material presented on Friday, October 19, 2018, at the OpenStack Korea community regular seminar.
- Event info: http://festa.io/events/118
- Presenter: Kim Yong-gi (김용기)
> Sr. Solution Architect, Red Hat
> Administrator, Ansible Facebook Usergroup
Introduces the basic concept of load balancing, common load-balancing implementations, and the details of the Kubernetes Service. Finally, it demonstrates how to modify the Linux iptables kernel module to achieve layer-7 load balancing for Kubernetes.
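For reference, a minimal sketch of the kind of Kubernetes Service the talk starts from - a standard Service that kube-proxy implements with iptables rules, spreading traffic across the pods matched by the selector (the name my-app and the ports are illustrative assumptions):
# kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app           # traffic is balanced across pods carrying this label
  ports:
  - port: 80              # port exposed by the Service
    targetPort: 8080      # port the pods actually listen on
EOF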
Moving to Nova Cells without Destroying the World (Mike Dorman)
Note: Video recording of this presentation at the OpenStack Liberty Summit in Vancouver is available here: https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/moving-to-nova-cells-without-destroying-the-world
Your cloud has been growing for a while and you've realized you need Nova Cells to scale further. But you've already got thousands of VMs and hundreds of active users. What to do?
This talk describes Go Daddy's experience with live-converting the production cloud to Nova Cells, including tips and recommendations to help you do it, too.
- Brief overview of Nova Cells' theory of operation and basic configuration (see the sketch after this list)
- Environment preparation to get ready for the conversion
- Specific steps to complete the conversion with minimal service interruption
- Caveats and lessons learned
- Introduction to Cells v2, and why you might want to wait for Kilo to convert.
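As a hedged illustration of the "basic configuration" bullet above, cells v1 is enabled through a [cells] section in nova.conf on the API cell (the cell name is a placeholder, not Go Daddy's actual value):
# cat >> /etc/nova/nova.conf <<'EOF'
[cells]
# run this node as the top-level API cell; child cells use cell_type = compute
enable = true
name = api
cell_type = api
EOF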
OpenNebula Conf 2014: CentOS, QA and OpenNebula - Christoph Galuschka (NETWAYS)
CentOS, the Community Enterprise OS, uses OpenNebula as the virtualization platform for its automated QA process. The OpenNebula setup consists of 3 nodes, all running CentOS-6, which handle the following tasks:
– Sunstone as cloud controller
– a local mirror/DNS server/HTTP server for the VMs to pull in packages
– one VM to run a Jenkins instance to launch the various tests (ci.de.centos.org)
– nginx on the cloud controller to forward HTTP traffic to the Jenkins VM
A public git repository (http://www.gitorious.org/testautomation) is used to allow whoever wants to contribute to pull the current test suite – t_functional, a series of bash scripts used to do functional tests of various applications, binaries, configuration files, and trademark issues. As new tests are added to the repo via personal clones and merge requests, those tests first need to complete a test run via Jenkins. Each test run currently consists of 4 VMs (one for each arch for C5 and C6 – C7 to come), which run the complete test suite. All VMs used for these tests are instantiated and torn down on demand, whenever the call to test-run a personal clone is issued (via IRC).
Once a test run has completed successfully, the request is merged into the main repo. The Jenkins node monitors this repository and automatically triggers another complete test run.
Besides these triggered test runs, the test suite is automatically triggered to run daily. This is used to verify the functionality of published updates – a handful of faulty updates have already been discovered this way.
Besides t_functional, the Linux Test Project suite of tests is run on a daily basis as well, likewise to verify functionality of the OS and all updates.
A third setup is used to test the availability and functional integrity of published Docker images for CentOS.
All these tests are later – during the QA phase of a point release – used to verify functionality of new packages inside the CentOS QA setup.
Compute 101 - OpenStack Summit Vancouver 2015 (Stephen Gordon)
OpenStack Compute (Nova) has been a core component of OpenStack since the original Austin release in 2010. In the intervening years, development has proceeded at a rapid pace, adding support for new virtualization technologies and exposing additional features. Learn how Compute fits into the OpenStack architecture, and how it interacts with other OpenStack components and the hypervisors it manages.
Kuma Meshes Part I - The basics. Explains the basics of how meshes and Kuma meshes work. It goes through how to get a cluster ready to start running tests with Kuma, diving into Kubernetes concepts and quick installation commands.
Campus Party Brasil 2014, FI-WARE Cloud presentation where you can learn how to deploy servers and blueprints in the FI-Lab Cloud, as well as how to upload content into the Object Storage service.
Evolving to serverless
How the applications are transforming
A note on CI/CD
Architecture of Docker
Setting up a docker environment
Deep dive into Dockerfile and containers
Tagging and publishing an image to Docker Hub
A glimpse from session one
Services: scale our application and enable load-balancing
Swarm: deploying the application onto a cluster, running it on multiple machines
Stack: a stack is a group of interrelated services that share dependencies, and can be orchestrated and scaled together (see the sketch after this list)
Deploy your app: the Compose file works just as well in production as it does on your machine.
Extras: Containers and VMs together
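A minimal sketch tying the Services/Swarm/Stack items above together (image, service, and stack names are illustrative, not from the session):
# cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine   # placeholder image
    deploy:
      replicas: 3         # Swarm spreads these replicas over the cluster and load-balances them
    ports:
      - "80:80"
EOF
# docker swarm init
# docker stack deploy -c docker-compose.yml demo
# docker stack services demo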
TYPO3 Flow 2.0 in the field - webtech Conference 2013 (die.agilen GmbH)
Slides of the talk: "TYPO3 Flow 2.0 in the field" / webtech Conference 2013 by Patrick Lobacher (CEO typovision GmbH) / http://webtechcon.de / 29.10.2013
Similar to Integrate Openshift with Cloudforms (20)
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview (Prayukth K V)
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova... (Ramesh Iyer)
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation takes real work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips and strategies for successful relationship building that leads to closing the deal.
Kubernetes & AI - Beauty and the Beast!?! @ KCD Istanbul 2024 (Tobias Schneck)
As AI technology pushes into IT, I asked myself, as an “infrastructure container Kubernetes guy”, how does this fancy AI technology get managed from an infrastructure operations view? Is it possible to apply our beloved cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could be beneficial or limiting for your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
"Impact of front-end architecture on development cost", Viktor TurskyiFwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis at the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We also held a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Key Trends Shaping the Future of Infrastructure.pdf (Cheryl Hung)
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud, and open source: exploring how these areas are likely to mature and develop over the short and long term, and then considering how organisations can position themselves to adapt and thrive.
8. Cloudforms 4 :: Openshift 3
Configure Cloudforms 4 - Part 1
1. Launch a new VM, using an existing image (4 GB of RAM is sufficient)
2. Log in as root:smartvm
3. Configure the CloudForms appliance (# appliance_console)
a. Assign an IP and hostname
b. Configure the timezone
c. Configure the database; there is no need to use an external partition
4. Connect to the web interface. Username: admin, password: smartvm
5. Configure - Configuration, activate all roles except database synchronization and RHN mirror + configure the timezone (again)
6. Reboot the appliance
9. Cloudforms 4 :: Openshift 3
Configure Cloudforms 4 - Part 2
Add Openshift as a container provider
ON YOUR OPENSHIFT VM, RETRIEVE AN ADMIN KEY
# oc login -u system:admin -n default
# oc get -n management-infra sa/management-admin --template='{{range .secrets}}{{printf "%s\n" .name}}{{end}}'
management-admin-token-2g4iv
management-admin-dockercfg-02kl4
management-admin-token-5xqyo
# oc get -n management-infra secrets management-admin-token-2g4iv --template='{{.data.token}}' | base64 -d > key.txt
Copy the key
https://access.redhat.com/documentation/en/red-hat-cloudforms/version-4.0/managing-providers/#configuring_service_accounts
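Before pasting the key into CloudForms, it can help to confirm the token is valid; a minimal check, assuming the master API listens on os3.mlc.dom:8443 as in the later slides:
# curl -k -H "Authorization: Bearer $(cat key.txt)" https://os3.mlc.dom:8443/api
(a valid token returns an HTTP 200 API listing; a bad one returns 401 Unauthorized)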
10. Cloudforms 4 :: Openshift 3
Configure Cloudforms 4 - Part 3
1. Log into the CF4 interface
2. Containers - Providers - Configuration - Add a New Containers Provider
3. Enter a name and select OpenShift as the type
4. Enter the hostname and port 8443
5. Paste the key, then click Validate
6. Et voilà :)
Add Openshift as a container provider
14. Cloudforms 4 :: Openshift 3
Configure Hawkular - Part 1
Create the service account
# oc project openshift-infra (should be there by default)
# oc create -f - <<API
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-deployer
secrets:
- name: metrics-deployer
API
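A quick, hedged verification that the service account and its secret exist (not part of the original slides):
# oc get sa metrics-deployer -n openshift-infra
# oc get secrets -n openshift-infra | grep metrics-deployer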
15. Cloudforms 4 :: Openshift 3
Configure Hawkular - Part 2
ADD ROLES TO SOME SERVICE ACCOUNT
# oadm policy add-role-to-user edit system:serviceaccount:openshift-infra:metrics-deployer
# oadm policy add-cluster-role-to-user cluster-reader system:serviceaccount:openshift-infra:heapster
CREATE A SUPER SECURE SECRET
# oc secrets new metrics-deployer nothing=/dev/null
COPY THE TEMPLATE
# cp /usr/share/ansible/openshift-ansible/roles/openshift_examples/files/examples/v1.1/infrastructure-templates/enterprise/metrics-deployer.yaml /root/metrics.yaml
16. Cloudforms 4 :: Openshift 3
Configure Hawkular - Part 3
# cd /root/
# oc process -f metrics.yaml -v HAWKULAR_METRICS_HOSTNAME=metrics.app.os3.mlc.dom,USE_PERSISTENT_STORAGE=false,IMAGE_PREFIX=openshift3/,IMAGE_VERSION=latest | oc create -f -
*** a reboot might be required … and wait … oc get pods is your friend
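For example, you can watch the pods come up (a small usage sketch, not from the slides):
# oc get pods -n openshift-infra -w
(wait until the hawkular-metrics, hawkular-cassandra, and heapster pods report Running)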
# vi /etc/origin/master/master-config.yaml
assetConfig:
  .....
  metricsPublicURL: https://metrics.app.os3.mlc.dom/hawkular/metrics
# systemctl restart atomic-openshift-master
Using a web browser, validate that Hawkular is started
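Alternatively, from the command line; this uses the Hawkular Metrics status endpoint, with the hostname taken from the template parameters above:
# curl -k https://metrics.app.os3.mlc.dom/hawkular/metrics/status
(expect a small JSON document reporting the MetricsService as STARTED)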
20. Cloudforms 4 :: Openshift 3
Router for CF - map port 5000
At the moment, a limitation in CloudForms Management Engine assumes that the provider hostname is also used to collect the metrics.
Create an OpenShift router to give CloudForms access to the metrics information.
# oadm router management-metrics -n default --credentials=/etc/origin/master/openshift-router.kubeconfig --service-account=router --ports='443:5000' --selector='kubernetes.io/hostname=os3.mlc.dom' --stats-port=1937 --host-network=false
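Once the router is up, a hedged smoke test of the path CloudForms will use (hostname, route host, and token reuse are assumptions based on the earlier slides):
# IP=$(getent hosts os3.mlc.dom | awk '{print $1}')
# curl -k --resolve metrics.app.os3.mlc.dom:5000:$IP https://metrics.app.os3.mlc.dom:5000/hawkular/metrics/status
(--resolve forces the route hostname onto port 5000 of the node, confirming the metrics API is reachable on the port CloudForms uses)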