The document outlines steps to implement an enterprise log management architecture using AWS services:
1. Install and configure the CloudWatch Agent on EC2 instances to collect and send logs and metrics to CloudWatch.
2. Use a Lambda function triggered by EventBridge to copy logs from CloudWatch to S3 for long-term storage in a data lake.
3. Analyze the log data stored in S3 using Athena, Glue and Redshift before creating reports and dashboards with QuickSight.
This is a presentation for TechNet 2015 in Korea.
The format has been changed to PPTX.
The table of contents is as follows:
Building OpenStack infrastructure (4-node configuration) [30 min]
Creating VMs on OpenStack [20 min]
Docker basics [30 min]
Connecting Docker to OpenStack [30 min]
Building a web service with Docker [15 min]
Building a web service with Docker on OpenStack [15 min]
Implementing Jenkins with Docker [30 min]
The shift to cloud computing means that organizations are undergoing a major transformation as they develop scale-out infrastructure that can respond to the pace of business change faster than ever before. Opscode Chef® is an open-source systems integration framework built specifically for
automating the cloud by making it easy to deploy and scale servers and applications throughout your infrastructure. Join us for this session,
an introduction to Chef covering:
An Overview of Chef
The Chef Architecture
Cookbook Components
System Integration
Live demo launching a Java Stack on Amazon EC2, Rackspace, Ubuntu, and
CentOS
[Presented as part of the Open Source Build a Cloud program on 2/29/2012 - http://cloudstack.org/about-cloudstack/cloudstack-events.html?categoryid=6]
Prometheus and Docker (Docker Galway, November 2015), by Brian Brazil
Brian Brazil is an engineer passionate about reliable systems who has worked at Google SRE and Boxever. He discusses Prometheus, an open source monitoring system he helped create. Prometheus offers inclusive monitoring of services, is manageable and reliable, integrates easily with other tools, and provides powerful querying and dashboards. It is efficient, scalable, and helps provide visibility into systems through its data model and labeling.
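The data model and labeling mentioned here can be illustrated with the text exposition format that Prometheus scrapes. A minimal sketch in Python (the metric and label names are my own examples, not from the talk):

```python
def expose(name, samples):
    """Render counter samples in the Prometheus text exposition format."""
    lines = [f"# TYPE {name} counter"]
    for labels, value in samples:
        # Each unique label combination identifies a separate time series
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

# Two time series under one metric name, distinguished only by labels
exposition = expose("http_requests_total", [
    ({"method": "GET", "code": "200"}, 1027),
    ({"method": "POST", "code": "500"}, 3),
])
print(exposition)
```

Queries can then aggregate across labels (for example, summing request rates over all methods), which is what makes the label-based model powerful.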
Install Oracle 12c Golden Gate On Oracle Linux, by Arun Sharma
In this article, we will look at the steps to install Oracle 12c Golden Gate on Oracle Enterprise Linux 6.5. The steps involved are:
Virtual Machine Setup
Install Oracle 12c Database
Install Oracle 12c Golden Gate
Prepare Golden Gate for Replication
Here is the full link of article: https://www.support.dbagenesis.com/post/install-oracle-12c-golden-gate-on-oracle-linux
- The sprint review covered work done in Sprint 130, including 38 PRs for the UI, core provider work, Automate improvements, and platform enhancements.
- Key UI work included new fields for groups and credential validation. Provider work focused on DDF schema updates and smart state collection.
- Automate addressed field properties and log messages. Platform enhancements used fewer queries and removed dependencies.
- Testing work included new EC2 configuration tests and timezone report automation. Release 17.68.0 was delivered.
This document summarizes the analysis of Windows event log files. It discusses how to view event logs using the Event Viewer and export logs. It also describes using log parsing tools like Log Parser Lizard and Log Parser 2.2 to query error, warning and other event types from system logs. Specific event IDs are analyzed, like DCOM errors, service failures, DNS issues and hard disk errors. Methods to resolve issues causing these events are provided.
Tracing and profiling MySQL (Percona Live Europe 2019) draft_1, by Valerii Kravchuk
The document discusses various tools that can be used for tracing and profiling MySQL, including Linux tools like strace, gdb, ftrace, bpftrace, perf, and dynamic probes. It focuses on perf as one of the best and easiest tools to use for tracing and profiling MySQL in production on Linux. Examples are provided of using perf to add probes to MySQL dynamically to capture SQL queries.
Practical Operation Automation with StackStorm, by Shu Sugimoto
Automation is getting more and more important these days, but it is not always easy to achieve, because it requires tremendous effort to make existing procedures machine-friendly. That often means you need to change almost everything!
StackStorm (aka st2, https://stackstorm.com/) is an open-source, IFTTT-like middleware that ships with a powerful workflow engine and a unique feature called "inquiries".
I'll focus on the workflow engine functionality of st2 and show how it can ease the automation of day-to-day tasks. The example I'll show in this presentation is an actual workflow that we use at JPNAP, a real-world IXP operation.
Ceph Day Beijing: CeTune: A Framework to Profile and Tune Ceph Performance, by Ceph Community
CeTune is a toolkit that helps deploy, benchmark, profile and tune Ceph cluster performance. It contains modules to deploy Ceph, run benchmarks, analyze collected data, tune Ceph configuration, and visualize results. CeTune automates the process of testing Ceph performance under different configurations and workloads to help identify optimal tuning parameters.
Using and Customizing the Android Framework / part 4 of Embedded Android Work..., by Opersys inc.
1) The document provides an overview of using and customizing the Android framework, covering topics like kickstarting the framework, utilities and commands, system services internals, and creating custom services.
2) It describes the core building blocks of the framework, like services, Dalvik, and the boot process. It also covers utilities like am, pm, and dumpsys.
3) The document discusses native daemons like servicemanager and installd. It explains how to observe the system server and interact with services programmatically.
Managing the logs of your (Rails) applications - Arrrrcamp 2011, by lennartkoopmann
1) The document discusses different levels of log management maturity from simply collecting logs to advanced correlation and visualization.
2) It provides examples of how to collect and send application logs from Rails applications using syslog, GELF, and AMQP and recommends sending structured logs.
3) The document also discusses open source log management tools like Logstash and Graylog2 that can collect, parse, store, and provide analytics and alerting for logs.
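The "structured logs" recommendation can be made concrete with GELF itself. Below is a rough sketch of a GELF 1.1 payload built by hand; the field values are invented, and a real Rails application would use a GELF client library and ship this JSON over UDP or TCP to Graylog:

```python
import json
import time

def gelf_message(host, short_message, level=6, **extra):
    """Build a GELF 1.1 payload; extra fields get the required '_' prefix."""
    msg = {
        "version": "1.1",
        "host": host,
        "short_message": short_message,
        "timestamp": time.time(),
        "level": level,  # syslog severity: 6 = informational
    }
    for key, value in extra.items():
        msg["_" + key] = value  # additional fields must start with an underscore
    return msg

# A structured "user signed in" event instead of a free-form log line
payload = json.dumps(gelf_message("web-1", "User signed in",
                                  user_id=42, controller="SessionsController"))
```

Because `user_id` and `controller` arrive as separate fields rather than text to be grepped, tools like Graylog2 can filter and aggregate on them directly.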
How to build a feedback loop in software, by Sandeep Joshi
The document discusses how to build a feedback loop using a PID controller in software systems. It begins with an overview of why PID controllers are useful when the system to be controlled can be modeled as a "black box" and the goal is to maintain an output value. It then covers how to implement a PID controller by defining the setpoint, sensor output, control input, and PID calculation. The document provides examples of PID controllers in software systems like Golang garbage collection, Apache Spark, and Linux. It also discusses best practices like tuning parameters and avoiding issues like windup.
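To make the PID recipe concrete, here is a minimal, self-contained sketch. The gains, the toy plant, and the anti-windup scheme are my own illustrative choices, not taken from the document:

```python
class PID:
    """Textbook discrete PID controller with a simple anti-windup clamp."""
    def __init__(self, kp, ki, kd, setpoint, out_min=None, out_max=None):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None
        self.out_min, self.out_max = out_min, out_max

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Anti-windup: if the output saturates, undo this step's integral
        # accumulation so the integral term cannot grow without bound.
        if self.out_max is not None and out > self.out_max:
            self.integral -= error * dt
            out = self.out_max
        elif self.out_min is not None and out < self.out_min:
            self.integral -= error * dt
            out = self.out_min
        return out

# Toy "black box" plant: a first-order lag that drifts toward the control input.
pid = PID(kp=0.8, ki=0.2, kd=0.05, setpoint=10.0, out_min=0.0, out_max=100.0)
value = 0.0
for _ in range(1000):
    control = pid.update(value, dt=0.1)
    value += (control - value) * 0.1  # plant response
```

After enough iterations the loop settles at the setpoint; this is the same shape of control loop the document attributes to Golang's GC pacer and similar systems.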
15 Troubleshooting tips and Tricks for Database 21c - KSAOUG, by Sandesh Rao
The document discusses analyzing Oracle database logs using the Trace File Analyzer (TFA) tool. It provides examples of TFA commands to search and analyze logs for specific errors or time periods. The output includes summaries of matching errors, including the number of occurrences and server names. Investigating the Attention log and using TFA can help identify and troubleshoot database issues.
15 Troubleshooting Tips and Tricks for database 21c - OGBEMEA KSAOUG, by Sandesh Rao
This session will focus on 15 troubleshooting tips and tricks for DBAs, covering tools from the Oracle Autonomous Health Framework (AHF): Trace File Analyzer (TFA) to collect, organize, and analyze log data; Exachk and Orachk to perform mass best-practices analysis and automation; Cluster Health Advisor to debug node evictions and calibrate the framework; OSWatcher and its analysis engine; oratop for pinpointing performance issues; and many others to make one feel like a rockstar DBA.
Many Scala developers nowadays consider using Dependency Injection frameworks an anti-pattern incompatible with modern FP settings. We argue that it's just a consequence of a bad experience with legacy Java runtime reflection-based implementations that lack features important for modern functional programming, such as a first-class support for higher-kinded types. We argue that as a paradigm for structuring purely functional programs, DI with automatic wiring compares favorably against implicits, monad transformers, free monads, algebraic effects, cake pattern et al, enabling scaling and a degree of modularity unachievable by any manual wiring approach. This talk covers DIStage – a transparent, flexible and efficient DI framework for Scala that enables late binding, testability, effect separation and modular resource management at scale, working with, instead of compromising the Scala type system.
Documentation: https://izumi.7mind.io/latest/release/doc/distage/
This document summarizes Lighttpd & Modcache, an event-driven web server and caching module. Lighttpd is lightweight and has a simple module structure. Modcache caches files locally and in memory to improve performance. It has advantages over Squid like being Lighttpd-based and keeping the code simple. The document provides configuration examples for Modcache and "cook books" for caching images, downloads, forums, and video sites.
This session will be particularly interesting to beginning developers, taking their first steps into the wide world of software development.
Normally you're always taught how to write code and how to keep it clean. But when the business grows, performance issues arise, and it's not always obvious how to solve them. This talk will teach you how to profile your code, how to detect issues, how to read the results, and how to partially automate those tests.
The monitoring system our company previously used was Zabbix.
As we moved into container monitoring, a change was needed, and we naturally began to consider a monitoring approach based on Prometheus.
Lee Young-joo presented a tech session on this topic, and we are sharing the presentation materials here.
It consists of five parts and includes instructions on how to set everything up.
01. Prometheus?
02. Usage
03. Alertmanager
04. Cluster
05. Performance
Joget Workflow v6 Training Slides - 20 - Basic System Administration, by Joget Workflow
This document provides an overview of basic system administration for Joget Workflow v6. It discusses the typical stack including Apache Tomcat, MariaDB database, and JDK. It covers managing the MariaDB database, including inspecting datasources and profiles. It also covers managing the Apache Tomcat application server, including Joget application files, updating Joget, log files, stack traces, and changing the HTTP port. The document provides exercises for setting up a new database and enabling SSL on Tomcat.
This document provides information about running NDISTest, including:
1) How to run NDISTest in standalone mode by opening the NDISTest.Net directory and running NDISTest.exe with administrator privileges.
2) How to run the NDISTest server by choosing "Server" from the file menu, selecting the message and support devices, and clicking "start".
3) How to run the NDISTest client by choosing "Client" from the file menu, selecting the test, message, and support devices, and clicking "start" to run selected tests.
This document provides an overview of various Google Cloud Platform services including Compute Engine, Networking, Load Balancing, Cloud Launcher, Cloud Storage, Cloud SQL, Cloud Monitoring, Cloud DNS, and Deployment Manager. It includes descriptions of the basic concepts and functionality for each service. It also outlines several hands-on labs demonstrating how to use specific GCP services like backing up instances to Cloud Storage snapshots, exporting Cloud SQL databases to Cloud Storage, enabling Cloud Logging, and deploying a VM instance using Deployment Manager.
RHEL 7 will use systemd as its init system, replacing upstart. Systemd is more than just an init system replacement - it is a system and service manager that provides features like dependency tracking, process supervision, on-demand starting of services, and lightweight boot process. It introduces new unit file types to define system components and their relationships. Customizing services can be done by editing unit files and using systemctl commands.
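As a sketch of the unit-file idea, a hypothetical service (the `myapp` name and paths are invented, not from the document) could be defined like this:

```ini
# /etc/systemd/system/myapp.service -- hypothetical example unit
[Unit]
Description=My example application
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp --config /etc/myapp.conf
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

After editing unit files, `sudo systemctl daemon-reload` picks up the change, and `sudo systemctl enable --now myapp.service` starts the service immediately and at boot.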
This is a talk on how you can monitor your microservices architecture using Prometheus and Grafana. This has easy to execute steps to get a local monitoring stack running on your local machine using docker.
Amazon DataZone is a data management service that allows users to catalog, discover, share, and govern data stored across AWS, on-premises, and third-party sources. It provides administrators fine-grained access controls to manage data assets and ensure the right level of access for users. Amazon DataZone also makes it easy for various roles like engineers, data scientists, and analysts to collaborate by sharing and accessing organizational data to derive insights.
This document provides step-by-step instructions for implementing AWS Transfer Family for SFTP, including:
1. Setting up prerequisites like an S3 bucket and EC2 instances
2. Creating an IAM role and policies to allow access to the S3 bucket
3. Setting up the SFTP server, creating users, and assigning public keys
4. Testing file transfers from Linux and Windows SFTP clients to verify the process works end-to-end
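The IAM policy from step 2 can be sketched as a small builder function. The bucket name is a placeholder and the statements follow the common scoped-to-one-bucket pattern; verify against the AWS Transfer Family documentation before using it:

```python
import json

def sftp_user_policy(bucket):
    """Build an IAM policy document granting an SFTP user access to one bucket.
    The bucket name is a placeholder; adjust actions to your security needs."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowListingOfBucket",
                "Effect": "Allow",
                "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
                "Resource": f"arn:aws:s3:::{bucket}",  # bucket-level actions
            },
            {
                "Sid": "AllowObjectAccess",
                "Effect": "Allow",
                "Action": ["s3:PutObject", "s3:GetObject",
                           "s3:DeleteObject", "s3:GetObjectVersion"],
                "Resource": f"arn:aws:s3:::{bucket}/*",  # object-level actions
            },
        ],
    }

policy_json = json.dumps(sftp_user_policy("my-sftp-bucket"), indent=2)
```

Note the split between bucket-level actions (on the bucket ARN) and object-level actions (on `bucket/*`); attaching both to the role from step 2 is what lets SFTP users list directories as well as read and write files.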
Amazon AppFlow allows administrators to automate data flows between AWS services and SaaS applications like Salesforce, Zendesk, and ServiceNow without months of waiting for IT integration projects. It can transfer data both to and from SaaS applications and AWS services, encrypting the data in transit. Some SaaS applications integrate with AWS PrivateLink for an extra layer of security by keeping traffic on the Amazon network. AppFlow also simplifies capturing and analyzing SaaS data using AWS services like Glue, Athena, and Redshift.
The document outlines steps to load log data from an AWS S3 bucket into a DynamoDB table using AWS Lambda. It involves: 1) Creating an IAM role and policy for Lambda to access S3 and DynamoDB, 2) Creating an S3 bucket to hold log files, 3) Creating a Lambda function with an S3 trigger to parse log files and load data into DynamoDB, 4) Creating a DynamoDB table to store the log data, 5) Testing the process by uploading log files to S3 and verifying the data loads into DynamoDB.
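The parsing step of such a Lambda might look like the following sketch. The log format, regex, and DynamoDB attribute names are illustrative assumptions of mine, not the document's actual code:

```python
import re

# Apache common/combined log format (simplified, illustrative)
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) (?P<size>\S+)'
)

def log_line_to_item(line):
    """Turn one access-log line into a DynamoDB PutItem payload.
    Returns None for lines that do not match the expected format."""
    m = LOG_RE.match(line)
    if m is None:
        return None
    return {
        "ip": {"S": m["ip"]},
        "timestamp": {"S": m["ts"]},
        "method": {"S": m["method"]},
        "path": {"S": m["path"]},
        "status": {"N": m["status"]},
    }

# In the real Lambda, each item would be written with
# boto3.client("dynamodb").put_item(TableName=..., Item=item)
sample = '73.192.163.126 - - [17/May/2022:19:52:14 +0000] "GET / HTTP/1.1" 200 31 "-" "Mozilla/5.0"'
item = log_line_to_item(sample)
```

The S3 trigger would hand the Lambda the bucket and key of each uploaded log file; the function would read the object, run every line through a parser like this, and batch the resulting items into DynamoDB.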
This document outlines the steps to create a data analytic solution using Incorta, including connecting data sources, building schemas to load and analyze data, creating business schemas, building reports and dashboards, using a scheduler, and implementing security measures.
This document provides a step-by-step guide to replicating data from on-premise to cloud using AWS DataSync. It outlines setting up the necessary infrastructure components like creating a VPC, S3 bucket, security groups, and deploying the DataSync agent on an EC2 instance. It then walks through creating a DataSync task to copy files from S3 to an EFS file system, mounting the EFS on another EC2 instance to verify the files were successfully copied.
The document discusses using AWS Application Migration Service (AWS MGN) to migrate source servers from one AWS region to another AWS region. It involves the following key steps:
1. Initialize AWS MGN in the target region and create a launch template.
2. Install the AWS replication agent on the source servers in the original region by downloading and running an installer script.
3. Configure the launch settings for the migrated servers in the target region, including modifying the EC2 launch template to specify the correct subnets and security groups.
4. Monitor the replication process from the source servers to the target region through the AWS MGN console. Once the initial sync is complete, the migrated servers will be ready for testing.
EnterpriseLogManagement.pdf
How to capture logs; store them in a data lake and a data warehouse; analyze them; and publish
reports and dashboards.
A complete end-to-end solution and a step-by-step implementation process
Enterprise Log Management Architecture
Steps to implement the solution:
1. Create the appropriate IAM role
2. Launch an EC2 instance (the server)
3. Install httpd
4. Start httpd
5. Access the two types of logs – access logs and error logs
6. Get the amazon-cloudwatch-agent.rpm
7. Run the agent wizard to install the CloudWatch agent on each server
8. Once complete, verify the CloudWatch log groups
9. Create a Lambda function to copy the data into the data lake (S3)
10. Use EventBridge to schedule the copy of data into the data lake (S3)
11. Using Athena and Glue, create the DB and tables to query and analyze the log data
12. Copy data from S3 into Redshift
13. Create reports and dashboards from Redshift using AWS QuickSight or Tableau
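Steps 9 and 10 can be sketched as an EventBridge-scheduled Lambda that exports the previous day's log events to S3. Only the export-window computation is runnable below; the actual API call is shown as a comment because it needs AWS credentials, and the log group, bucket, and prefix names are placeholders of mine:

```python
from datetime import datetime, timedelta, timezone

def previous_day_window(now=None):
    """Return (fromTime, to) in epoch milliseconds covering the previous
    full UTC day -- the unit CloudWatch Logs' create_export_task expects."""
    now = now or datetime.now(timezone.utc)
    end = now.replace(hour=0, minute=0, second=0, microsecond=0)
    start = end - timedelta(days=1)
    to_ms = lambda d: int(d.timestamp() * 1000)
    return to_ms(start), to_ms(end)

# The EventBridge-scheduled Lambda handler would then do roughly:
# boto3.client("logs").create_export_task(
#     taskName="httpd-daily-export",
#     logGroupName="/aws/httpd/access_log",     # placeholder log group
#     fromTime=start_ms, to=end_ms,
#     destination="my-log-datalake-bucket",     # placeholder S3 bucket
#     destinationPrefix="httpd/access",
# )
start_ms, end_ms = previous_day_window(datetime(2022, 5, 18, 3, 0, tzinfo=timezone.utc))
```

With a daily EventBridge rule triggering this Lambda, each run lands one day of logs under a dated S3 prefix, which Athena and Glue can then query in step 11.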
Create an IAM Role
Create mm-cloudwatchagent-role with the CloudWatchAgentServerPolicy and CloudWatchAgentAdminPolicy policies attached.
Install and start httpd
$ sudo yum install httpd
[ec2-user@ip-10-0-45-76 html]$ ls -ltr
total 4
-rwxrwxrwx 1 root root 31 May 17 19:47 index.html
[ec2-user@ip-10-0-45-76 html]$ pwd
/var/www/html
[ec2-user@ip-10-0-45-76 html]$
[ec2-user@ip-10-0-45-76 html]$ sudo systemctl start httpd
Accessing the log files from /var/log/httpd/
[ec2-user@ip-10-0-45-76 log]$ sudo cat /var/log/httpd/access_log
73.192.163.126 - - [17/May/2022:19:52:14 +0000] "GET / HTTP/1.1" 200 31 "-" "Mozilla/5.0
(Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.54
Safari/537.36"
73.192.163.126 - - [17/May/2022:19:52:15 +0000] "GET /favicon.ico HTTP/1.1" 404 196
"http://ec2-54-75-110-66.eu-west-1.compute.amazonaws.com/" "Mozilla/5.0 (Windows NT 10.0;
WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/101.0.4951.54 Safari/537.36"
[ec2-user@ip-10-0-45-76 log]$ sudo cat /var/log/httpd/error_log
[Tue May 17 19:51:48.684227 2022] [suexec:notice] [pid 3500] AH01232: suEXEC mechanism
enabled (wrapper: /usr/sbin/suexec)
[Tue May 17 19:51:48.699295 2022] [lbmethod_heartbeat:notice] [pid 3500] AH02282: No
slotmem from mod_heartmonitor
[Tue May 17 19:51:48.699335 2022] [http2:warn] [pid 3500] AH10034: The mpm module
(prefork.c) is not supported by mod_http2. The mpm determines how things are processed in
your server. HTTP/2 has more demands in this regard and the currently selected mpm will just
not do. This is an advisory warning. Your server will continue to work, but the HTTP/2 protocol
will be inactive.
[Tue May 17 19:51:48.702387 2022] [mpm_prefork:notice] [pid 3500] AH00163: Apache/2.4.53
() configured -- resuming normal operations
[Tue May 17 19:51:48.702412 2022] [core:notice] [pid 3500] AH00094: Command line:
'/usr/sbin/httpd -D FOREGROUND'
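Before shipping these logs anywhere, it helps to be explicit about their structure, since the Athena table in step 11 will need a matching schema. The access log above is in Apache combined format; below is a small sketch of how one line breaks into fields (the regex and field names are written for this illustration, not taken from any AWS tooling):

```python
import re

# Apache combined log format:
# ip identd user [time] "request" status size "referer" "agent"
COMBINED = re.compile(
    r'^(\S+) (\S+) (\S+) \[([^\]]+)\] "([^"]*)" (\d{3}) (\S+) "([^"]*)" "([^"]*)"$'
)

def parse_line(line):
    """Return a dict of named fields for one combined-format line, or None."""
    m = COMBINED.match(line.strip())
    if not m:
        return None
    keys = ("ip", "identd", "user", "time", "request",
            "status", "size", "referer", "agent")
    return dict(zip(keys, m.groups()))

# Shortened version of the first access_log line shown above.
sample = ('73.192.163.126 - - [17/May/2022:19:52:14 +0000] "GET / HTTP/1.1" '
          '200 31 "-" "Mozilla/5.0"')
fields = parse_line(sample)
# fields["status"] is "200"; fields["request"] is "GET / HTTP/1.1"
```

The same nine fields are what a Glue/Athena regex SerDe would carve out of each line later in the pipeline.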
#!/bin/bash
# Install the agent
wget https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm
[ec2-user@ip-10-0-45-76 ~]$ wget https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm
--2022-05-17 21:32:22-- https://s3.amazonaws.com/amazoncloudwatch-agent/amazon_linux/amd64/latest/amazon-cloudwatch-agent.rpm
Resolving s3.amazonaws.com (s3.amazonaws.com)... 52.217.229.128
Connecting to s3.amazonaws.com (s3.amazonaws.com)|52.217.229.128|:443... Connected.
HTTP request sent, awaiting response... 200 OK
Length: 46945036 (45M) [application/octet-stream]
Saving to: ‘amazon-cloudwatch-agent.rpm’
100%[======================================>] 46,945,036 19.3MB/s in 2.3s
2022-05-17 21:32:24 (19.3 MB/s) - ‘amazon-cloudwatch-agent.rpm’ saved [46945036/46945036]
[ec2-user@ip-10-0-45-76 ~]$
Install the package. If you downloaded an RPM package on a Linux server, change to the
directory containing the package and enter the following:
sudo rpm -U ./amazon-cloudwatch-agent.rpm
[ec2-user@ip-10-0-45-76 ~]$ sudo rpm -U ./amazon-cloudwatch-agent.rpm
create group cwagent, result: 0
create user cwagent, result: 0
create group aoc, result: 0
create user aoc, result: 0
[ec2-user@ip-10-0-45-76 ~]$
# Run the wizard
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
[ec2-user@ip-10-0-45-76 bin]$ sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard
================================================================
= Welcome to the Amazon CloudWatch Agent Configuration Manager =
= =
= CloudWatch Agent allows you to collect metrics and logs from =
= your host and send them to CloudWatch. Additional CloudWatch =
= charges may apply. =
================================================================
On which OS are you planning to use the agent?
1. linux
2. windows
3. darwin
default choice: [1]:
1
Trying to fetch the default region based on ec2 metadata...
Are you using EC2 or On-Premises hosts?
1. EC2
2. On-Premises
default choice: [1]:
1
Which user are you planning to run the agent?
1. root
2. cwagent
3. others
default choice: [1]:
1
Do you want to turn on StatsD daemon?
1. yes
2. no
default choice: [1]:
1
Which port do you want StatsD daemon to listen to?
default choice: [8125]
What is the collect interval for StatsD daemon?
1. 10s
2. 30s
3. 60s
default choice: [1]:
3
What is the aggregation interval for metrics collected by StatsD daemon?
1. Do not aggregate
2. 10s
3. 30s
4. 60s
default choice: [4]:
4
Do you want to monitor metrics from CollectD? WARNING: CollectD must be installed or the
Agent will fail to start
1. yes
2. no
default choice: [1]:
1
Do you want to monitor any host metrics? e.g. CPU, memory, etc.
1. yes
2. no
default choice: [1]:
1
Do you want to monitor cpu metrics per core?
1. yes
2. no
default choice: [1]:
1
Do you want to add ec2 dimensions (ImageId, InstanceId, InstanceType,
AutoScalingGroupName) into all of your metrics if the info is available?
1. yes
2. no
default choice: [1]:
1
Do you want to aggregate ec2 dimensions (InstanceId)?
1. yes
2. no
default choice: [1]:
Are you satisfied with the above config? Note: it can be manually customized after the wizard
completes to add additional items.
1. yes
2. no
default choice: [1]:
1
Do you have any existing CloudWatch Log Agent
(http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/AgentReference.html)
configuration file to import for migration?
1. yes
2. no
default choice: [2]:
2
Do you want to monitor any log files?
1. yes
2. no
default choice: [1]:
1
Log file path:
/var/log/httpd/access_log
Log group name:
default choice: [access_log]
Log group retention in days
16. 731
17. 1827
18. 3653
default choice: [1]:
Do you want to specify any additional log files to monitor?
1. yes
2. no
default choice: [1]:
/var/log/httpd/error_log
The value /var/log/httpd/error_log is not valid to this question.
Please retry to answer:
Do you want to specify any additional log files to monitor?
1. yes
2. no
default choice: [1]:
1
Log file path:
/var/log/httpd/error_log
Log group name:
"*"
]
},
"mem": {
"measurement": [
"mem_used_percent"
],
"metrics_collection_interval": 60
},
"statsd": {
"metrics_aggregation_interval": 60,
"metrics_collection_interval": 60,
"service_address": ":8125"
}
}
}
}
Please check the above content of the config.
The config file is also located at /opt/aws/amazon-cloudwatch-agent/bin/config.json.
Edit it manually if needed.
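The fragment shown above is only the tail of the generated file (the metrics section). For orientation, the logs section that the wizard generates for the two file paths entered earlier has roughly this shape (field names follow the published CloudWatch agent configuration schema; the exact stream name in your file may differ):

```json
"logs": {
  "logs_collected": {
    "files": {
      "collect_list": [
        {
          "file_path": "/var/log/httpd/access_log",
          "log_group_name": "access_log",
          "log_stream_name": "{instance_id}"
        },
        {
          "file_path": "/var/log/httpd/error_log",
          "log_group_name": "error_log",
          "log_stream_name": "{instance_id}"
        }
      ]
    }
  }
}
```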
Do you want to store the config in the SSM parameter store?
1. yes
2. no
default choice: [1]:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c ssm:configuration-parameter-store-name -s
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c ssm:AmazonCloudWatch-linux -s
OR
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:configuration-file-path -s
Create the types.db file
[ec2-user@ip-10-0-45-76 share]$ sudo mkdir -p /usr/share/collectd
[ec2-user@ip-10-0-45-76 share]$ sudo touch /usr/share/collectd/types.db
[ec2-user@ip-10-0-45-76 bin]$ sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c file:/opt/aws/amazon-cloudwatch-agent/bin/config.json -s
****** processing amazon-cloudwatch-agent ******
/opt/aws/amazon-cloudwatch-agent/bin/config-downloader --output-dir /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d --download-source file:/opt/aws/amazon-cloudwatch-agent/bin/config.json --mode ec2 --config /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml --multi-config default
2022/05/17 22:50:11 D! [EC2] Found active network interface
Successfully fetched the config and saved in /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/file_config.json.tmp
Start configuration validation...
/opt/aws/amazon-cloudwatch-agent/bin/config-translator --input /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json --input-dir /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d --output /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml --mode ec2 --config /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml --multi-config default
2022/05/17 22:50:11 Reading json config file path: /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d/file_config.json.tmp ...
2022/05/17 22:50:11 I! Valid Json input schema.
I! Detecting run_as_user...
2022/05/17 22:50:11 D! [EC2] Found active network interface
No csm configuration found.
Configuration validation first phase succeeded
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent -schematest -config /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.toml
Configuration validation second phase succeeded
Configuration validation succeeded
amazon-cloudwatch-agent has already been stopped
Created symlink from /etc/systemd/system/multi-user.target.wants/amazon-cloudwatch-agent.service to /etc/systemd/system/amazon-cloudwatch-agent.service.
Redirecting to /bin/systemctl restart amazon-cloudwatch-agent.service