ERP System Implementation on Kubernetes Cluster with Sticky Sessions:
01. Security Features Enabled in Kubernetes Cluster.
02. SNMP, Syslog and audit logs enabled.
03. Enabled a no-login ERP service user.
04. Auto-scaling enabled for both ESB and JBoss pods.
05. Reduced power consumption via scale-in during off-peak periods.
06. NFS enabled as usual with the ERP service user.
07. External ingress (load balancing enabled).
08. Cluster load balancer enabled by default.
09. SSH enabled via both PuTTY and the Kubernetes management console.
10. Network Monitoring enabled on Kubernetes dashboard.
11. Isolated Private and external network ranges to protect backend servers (pods).
12. OS of the pods is updated with the latest kernel version.
13. A minimal core Linux OS reduces the attack surface.
14. Lightweight OS with a small disk footprint.
15. Reduced RAM usage.
16. AWS ready.
17. Can be exported into a public cloud environment.
18. L7 and L4 Heavy Load Balancing Enabled.
19. Snapshot Versioning Control Enabled.
20. And more.
1. Docker to Kube Cluster
pg. 1 By: chanaka.lasantha@gmail.com
ERP SYSTEM IMPLEMENTATION KUBERNETES CLUSTER
WITH AUTO-SCALING (AWS READY).
Wednesday, April 15, 2020
CREATING NFS SERVER:
apt -y install nfs-kernel-server
vim /etc/exports
/opt/bkpdata *(rw,async,no_wdelay,insecure_locks,no_root_squash)
root@master:/var/sheared# showmount -e 192.168.2.28
Export list for 192.168.2.28:
/opt/bkpdata *
MOUNT NFS CLIENT ON ALL NODES AND MASTER:
apt -y install nfs-common
vim /etc/fstab
192.168.2.28:/opt/bkpdata /var/sheared nfs rw 0 0
mount /var/sheared
df -hT
192.168.2.28:/opt/bkpdata nfs4 49G 9.0G 38G 20% /var/sheared
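For reference, the same export can also be handed to Pods as a PersistentVolume instead of a plain fstab mount; a minimal sketch assuming the server and path above (the PV name and capacity here are illustrative, not from the original setup):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: bkpdata-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.2.28
    path: /opt/bkpdata
```

Pods would then bind to it through a PersistentVolumeClaim rather than mounting the share on every node.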
DOCKERFILE OF ESB:
# Base system is the latest LTS version of Ubuntu.
FROM ubuntu
# Make sure we don't get notifications we can't answer during building.
ENV DEBIAN_FRONTEND=noninteractive
# Prepare scripts and configs
ADD supervisor.conf /etc/supervisor.conf
# Download and install everything from the repos.
RUN apt-get -q -y update && apt-get -q -y upgrade && \
    apt-get -q -y install sudo openssh-server supervisor vim iputils-ping net-tools curl htop tcpdump unzip alien && \
    apt-get clean && \
    mkdir /var/run/sshd
# Create script folder
RUN mkdir -p /app/scripts
# Set working dir
WORKDIR /app
# Adding Jboss PID kill script into the docker container with permission.
#RUN chmod 775 -R /app/scripts/*
# Convert the JDK RPM to a deb package and install it via alien.
COPY jdk-7u76-linux-x64.rpm /app
RUN alien --scripts -i /app/jdk-7u76-linux-x64.rpm
# Adding Jboss application into the /app folder.
COPY wso2esb-4.8.0.zip /app
RUN unzip /app/wso2esb-4.8.0.zip
RUN chmod 775 -R /app/wso2esb-4.8.0
# Set custom ENV for the node (JAVA_HOME should point to the JDK root, not the java binary)
ENV JAVA_HOME=/usr/java/jdk1.7.0_76
# Note: an exec-form CMD cannot run the shell builtin "source", and only the last
# CMD in a Dockerfile takes effect; load /etc/profile from the shell or supervisord instead.
# Set root password
RUN echo 'root:z80cpu' >> /root/passwdfile
# Create a user and its password
RUN useradd -m -G sudo chanakan
RUN echo 'chanakan:z80cpu' >> /root/passwdfile
# Apply root password
RUN chpasswd -c SHA512 < /root/passwdfile
RUN rm -rf /root/passwdfile
# Enable ROOT access for the root user (Optional)
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
# Port 22 is used for ssh
EXPOSE 22 8280 8243 9443 11111 35399 9999 9763
# Assign /data as static volume.
VOLUME ["/data"]
# Starting sshd
CMD ["supervisord", "-c", "/etc/supervisor.conf"]
USER root
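Both Dockerfiles in this document ADD a supervisor.conf that is not shown in these pages. A minimal sketch that just keeps sshd in the foreground might look like the following (the program name and paths are assumptions, not the original file):

```
[supervisord]
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D
autostart=true
autorestart=true
```

A real config for these images would add further [program:...] sections for the ESB or JBoss start scripts.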
DOCKERFILE OF JBOSS:
# Base system is the latest LTS version of Ubuntu.
FROM ubuntu
# Make sure we don't get notifications we can't answer during building.
ENV DEBIAN_FRONTEND=noninteractive
# Prepare scripts and configs
ADD supervisor.conf /etc/supervisor.conf
# Download and install everything from the repos.
RUN apt-get -q -y update && apt-get -q -y upgrade && \
    apt-get -q -y install sudo openssh-server supervisor vim iputils-ping net-tools curl unzip tcpdump alien && \
    apt-get clean && \
    mkdir /var/run/sshd
# Create script, Java, log and image directories in a single layer.
RUN mkdir -p /app/scripts /app/JAVADIR /app/logs \
    /opt/images/temp/daily \
    /opt/images/approval \
    /opt/images/documents \
    /opt/images/signatures \
    /opt/images/documents/insurance/renewal \
    /opt/images/documents/officerupload \
    /opt/images/documents/cheque/statementUpload \
    /opt/images/documents/budget \
    /opt/images/documents/finance/jrnlUpload \
    /opt/images/documents/bulkReceipt \
    /opt/images/documents/recovery/bulkInteract \
    /opt/images/documents/borrow/scheduleUpload
# Set working dir
WORKDIR /app
# Adding Jboss PID kill script into the docker container with permission.
COPY JBOSS_STOP.sh /app/scripts
RUN chmod 775 -R /app/scripts/*
# Convert the JDK RPM to a deb package and install it via alien.
COPY jdk-7u76-linux-x64.rpm /app
RUN alien --scripts -i /app/jdk-7u76-linux-x64.rpm
# Adding Jboss application into the /app folder.
COPY jboss-as-7.1.3.Final.zip /app
RUN unzip /app/jboss-as-7.1.3.Final.zip
RUN chmod 775 -R /app/jboss-as-7.1.3.Final
#ADD cc-erp-ear-4.0.0.ear /app/jboss-as-7.1.3.Final/standalone/deployments/
#RUN chown root:root /app/jboss-as-7.1.3.Final/standalone/deployments/cc-erp-ear-4.0.0.ear
# Set custom ENV for the node (JAVA_HOME should point to the JDK root, not the java binary)
ENV JAVA_HOME=/usr/java/jdk1.7.0_76
RUN echo "export JBOSS_HOME=/app/jboss-as-7.1.3.Final" >> /etc/profile
# Note: an exec-form CMD cannot run the shell builtin "source", and only the last
# CMD in a Dockerfile takes effect; load /etc/profile from the shell or supervisord instead.
# Set root password
RUN echo 'root:z80cpu' >> /root/passwdfile
# Create a user and its password
RUN useradd -m -G sudo chanakan
RUN echo 'chanakan:z80cpu' >> /root/passwdfile
# Apply root password
RUN chpasswd -c SHA512 < /root/passwdfile
RUN rm -rf /root/passwdfile
# Enable ROOT access for the root user (Optional)
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
# Port 22 is used for ssh
EXPOSE 22 9191
# Assign /data as static volume.
VOLUME ["/data"]
# Starting sshd
CMD ["supervisord", "-c", "/etc/supervisor.conf"]
USER root
readOnly: false
# Required for sticky sessions: traffic can only be routed consistently to the same
# node, not the same pod, so schedule at most one pod of this app per node.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: esb-ssh
      topologyKey: kubernetes.io/hostname
TO APPLY SERVICE AND DEPLOYMENT:
root@master:~# kubectl apply -f esb-ssh.yaml
root@master:~# watch -n 0.2 'kubectl get pods --all-namespaces -o wide'
root@master:~# kubectl describe service esb-ssh
RESTART A POD (THE DEPLOYMENT RECREATES IT):
root@master:~/ESB# kubectl delete pod esb-ssh-675995598d-szwp7
You can clean up unused Docker components with:
root@master:~/ESB# docker system prune
The following warning will be shown:
WARNING! This will remove:
- all stopped containers
- all volumes not used by at least one container
- all networks not used by at least one container
- all dangling images
RESOURCE REQUESTS AND LIMITS OF POD AND CONTAINER:
Each Container of a Pod can specify one or more of the following:
spec.containers[].resources.limits.cpu
spec.containers[].resources.limits.memory
spec.containers[].resources.limits.hugepages-<size>
spec.containers[].resources.requests.cpu
spec.containers[].resources.requests.memory
spec.containers[].resources.requests.hugepages-<size>
Although requests and limits can only be specified on individual Containers, it is convenient to talk about Pod resource requests and limits. A Pod
resource request/limit for a particular resource type is the sum of the resource requests/limits of that type for each Container in the Pod.
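The summing rule above can be sketched with a small shell helper; the manifest written below mirrors the two-container frontend Pod example later in this document, and the helper assumes every CPU value uses the millicore ("Nm") form:

```shell
# Sum per-container CPU requests from a Pod manifest (assumes "Nm" millicore form).
# The Pod-level CPU request is this sum over its containers.
sum_cpu_requests() {
  awk '/requests:/ { in_req = 1 }
       /limits:/   { in_req = 0 }
       in_req && /cpu:/ { gsub(/[^0-9]/, "", $2); total += $2 }
       END { print total "m" }' "$1"
}

# Two containers requesting 250m each, as in the frontend example.
cat > /tmp/frontend-requests.yaml <<'EOF'
spec:
  containers:
  - name: db
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: wp
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
EOF

sum_cpu_requests /tmp/frontend-requests.yaml   # prints 500m
```

The same pattern (swap /cpu:/ for /memory:/) applies to memory, though mixed suffixes would need real unit parsing.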
MEANING OF CPU:
Limits and requests for CPU resources are measured in cpu units. One cpu, in Kubernetes, is equivalent to 1 vCPU/Core for cloud providers and 1
hyperthread on bare-metal Intel processors.
Fractional requests are allowed. A Container with spec.containers[].resources.requests.cpu of 0.5 is guaranteed half as much CPU as one that asks for 1
CPU. The expression 0.1 is equivalent to the expression 100m, which can be read as “one hundred millicpu”. Some people say “one hundred millicores”,
and this is understood to mean the same thing. A request with a decimal point, like 0.1, is converted to 100m by the API, and precision finer than 1m is
not allowed. For this reason, the form 100m might be preferred.
CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core
machine.
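The decimal-to-millicore conversion described above can be sketched as:

```shell
# Convert a fractional CPU quantity to the "m" (millicpu) form used by the API.
to_millicores() {
  awk -v q="$1" 'BEGIN { printf "%dm\n", q * 1000 }'
}

to_millicores 0.1   # prints 100m
to_millicores 0.5   # prints 500m
```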
MEANING OF MEMORY:
Limits and requests for memory are measured in bytes. You can express memory as a plain integer or as a fixed-point integer using one of these suffixes:
E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value:
128974848, 129e6, 129M, 123Mi
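These equivalences are easy to check with shell arithmetic; a small helper covering the decimal and power-of-two suffixes (the exponent form 129e6 is left out for brevity):

```shell
# Convert a Kubernetes memory quantity to bytes.
# Handles decimal (K, M, G) and binary (Ki, Mi, Gi) suffixes only.
to_bytes() {
  case "$1" in
    *Ki) echo $(( ${1%Ki} * 1024 )) ;;
    *Mi) echo $(( ${1%Mi} * 1024 * 1024 )) ;;
    *Gi) echo $(( ${1%Gi} * 1024 * 1024 * 1024 )) ;;
    *K)  echo $(( ${1%K} * 1000 )) ;;
    *M)  echo $(( ${1%M} * 1000 * 1000 )) ;;
    *G)  echo $(( ${1%G} * 1000 * 1000 * 1000 )) ;;
    *)   echo "$1" ;;  # plain integer: already bytes
  esac
}

to_bytes 129M    # prints 129000000
to_bytes 123Mi   # prints 128974848
```

Note that 123Mi and 129M land within a fraction of a percent of each other, which is why the document calls them "roughly the same value".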
Here’s an example. The following Pod has two Containers. Each Container has a request of 0.25 cpu and 64MiB (2^26 bytes) of memory. Each Container has a limit of 0.5 cpu and 128MiB of memory. You can say the Pod has a request of 0.5 cpu and 128MiB of memory, and a limit of 1 cpu and 256MiB of memory.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: wp
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
TO SET OR REVOKE THE RESOURCE REQUESTS AND LIMITS OF A DEPLOYMENT:
root@master:~# kubectl set resources deployment test-ssh --limits cpu=200m,memory=512Mi --requests cpu=100m,memory=256Mi
root@master:~# kubectl set resources deployment nginx --limits cpu=0,memory=0 --requests cpu=0,memory=0
root@master:~# watch -n 0.2 'kubectl get pods -o wide'
TO SCALE UP:
root@master:~# kubectl scale deployment test-ssh --replicas=3
root@master:~# kubectl scale deployment esb-ssh --replicas=3
root@master:~# watch -n 0.2 'kubectl get pods -o wide'
CREATE HORIZONTAL POD AUTOSCALER:
The following commands create a Horizontal Pod Autoscaler that maintains between 1 and 10 replicas of the Pods controlled by the test-ssh and esb-ssh deployments created earlier. Roughly speaking, the HPA increases and decreases the number of replicas (via the deployment) to maintain an average CPU utilization across all Pods of 50%; since each pod requests 200 millicores, this means an average CPU usage of 100 millicores. See the Kubernetes HPA documentation for details on the algorithm.
root@master:~# kubectl autoscale deployment test-ssh --cpu-percent=50 --min=1 --max=10
root@master:~# kubectl autoscale deployment esb-ssh --cpu-percent=50 --min=1 --max=10
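The autoscaler's replica calculation can be sketched as follows; this is a simplified model of the real algorithm, which also applies tolerances and readiness rules:

```shell
# desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization)
desired_replicas() {
  awk -v cur="$1" -v util="$2" -v target="$3" 'BEGIN {
    d = cur * util / target
    if (d == int(d)) print d; else print int(d) + 1
  }'
}

desired_replicas 2 90 50   # prints 4  (2 pods at 90% avg CPU vs a 50% target)
desired_replicas 3 50 50   # prints 3  (already at target: no change)
```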
TO EXPOSE PORT 2202 FOR EXTERNAL ACCESS (OPTIONAL):
root@master:~# kubectl expose deployment test-ssh --port=2202 --target-port=22
root@master:~# kubectl expose deployment test-ssh --port=9191 --target-port=9191
CREATE SELF-SIGNED SSL CERTIFICATES FOR HAPROXY:
root@master# apt -y install haproxy
root@master# mkdir -p /etc/pki/tls/certs
root@master# openssl req -x509 -nodes -newkey rsa:2048 -keyout /etc/pki/tls/certs/haproxy.pem -out /etc/pki/tls/certs/haproxy.pem -days 3650
root@master# chmod 600 /etc/pki/tls/certs/haproxy.pem
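This certificate would then be referenced from an HAProxy frontend. A minimal sticky-session sketch, where the frontend/backend names, server IPs, and ports are assumptions for illustration (not from the original configuration):

```
frontend erp-https
    bind *:443 ssl crt /etc/pki/tls/certs/haproxy.pem
    default_backend erp_nodes

backend erp_nodes
    balance roundrobin
    cookie SRVID insert indirect nocache
    server node1 192.168.2.29:9191 check cookie node1
    server node2 192.168.2.30:9191 check cookie node2
```

The cookie directive pins each client to the node it first reached, which pairs with the pod anti-affinity rule shown earlier (one pod per node).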