7. 1st Prep a Base Image
Save yourself some headache and prep a base
image that already maps the puppet master in
/etc/hosts
[ec2-user@ip-172-30-0-118 ~]$ cat /etc/hosts
127.0.0.1 localhost localhost.localdomain
172.30.0.41 puppet
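One way to bake that entry in before imaging (a sketch; 172.30.0.41 is the example master IP from the transcript above, and it is shown here against a scratch copy rather than the live /etc/hosts, which needs sudo):

```shell
# Sketch: bake the puppet master entry into /etc/hosts before creating the AMI.
# Run against a scratch copy here; on the real box, target /etc/hosts with sudo.
hosts=./hosts.new
cp /etc/hosts "$hosts"
# Only append if the entry isn't already there, so re-running is harmless.
grep -q '172.30.0.41 puppet' "$hosts" || echo '172.30.0.41 puppet' >> "$hosts"
```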
12. package_url => 'https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.4.3.noarch.rpm',
java_install => true,
config => {
'cluster.name' => 'Frederick-Von-Clusterberg',
'cloud.aws.access_key' => 'SDFDSGGSDSDGFSRSGsgfse',
'cloud.aws.secret_key' => 'WhaTEVerUrKEYHaPp3n5t0B3ItWoodG0h3R3',
'cloud.aws.region' => 'us-east',
'cloud.node.auto_attributes' => true,
'discovery.type' => 'ec2',
'discovery.ec2.tag.name' => 'elasticsearch',
'discovery.ec2.groups' => 'sg-0d6aaa69',
'http.port' => '9200',
'http.enabled' => true,
'http.cors.enabled' => true,
'http.cors.allow-origin' => 'http://54.152.82.147',
'path.data' => '/opt/elasticsearch/data',
'discovery.zen.ping.multicast.enabled' => false,
'discovery.zen.ping.unicast.hosts' => ["172.30.0.189", "172.30.0.190","172.30.0.159","172.30.0.160","172.30.0.4"],
}
}
exec{'export ES_HEAP_SIZE=2g':}
The ElasticSearch package you want to use
Give your cluster a name
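One caveat on the heap-size exec above: `export` in an exec only affects that exec's own shell, so it won't survive to the service start. A sketch of making it stick, assuming the RPM packaging where the init script reads /etc/sysconfig/elasticsearch (shown against a scratch file rather than the real sysconfig path):

```shell
# Sketch: persist ES_HEAP_SIZE where the init script reads it
# (/etc/sysconfig/elasticsearch on RPM-based systems).
# Written to a scratch file here; target the real path on the node.
sysconfig=./elasticsearch.sysconfig
echo 'ES_HEAP_SIZE=2g' >> "$sysconfig"
```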
13. 'cloud.node.auto_attributes' => true,
'discovery.type' => 'ec2',
'discovery.ec2.tag.name' => 'elasticsearch',
'discovery.ec2.groups' => 'sg-0d6aaa69',
tag all of your elasticsearch instances the SAME
groups are your security group IDs
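Since EC2 discovery filters on that tag, every node needs it. One way to apply it (a sketch; the instance ID is a placeholder, and the command is echoed rather than executed, so drop the `echo` to run it for real):

```shell
# Sketch: give each ES node the tag that discovery.ec2.tag.name matches
# (tag key "name", value "elasticsearch"). Placeholder instance ID.
cmd="aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=name,Value=elasticsearch"
echo "$cmd"
```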
24. the Logstash config file
input {
lumberjack {
# The port to listen on
port => 1234
# The paths to your ssl cert and key
ssl_certificate => "/etc/pki/logstash-forwarder.crt"
ssl_key => "/etc/pki/logstash-forwarder.key"
# Set this to whatever you want.
type => "apache-access"
}
}
this tool is called logstash-forwarder now, but in
the logstash config it's still called lumberjack... just
so you know.
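The cert/key pair the input references has to exist first. A sketch of generating a self-signed pair with openssl (written to the current directory here; the slide's config expects them under /etc/pki, and the CN is a placeholder):

```shell
# Sketch: self-signed cert/key for the lumberjack input.
# The slide's config points at /etc/pki; move the files there on the server.
openssl req -x509 -newkey rsa:2048 -nodes -batch \
  -subj '/CN=logstash' \
  -keyout logstash-forwarder.key \
  -out logstash-forwarder.crt \
  -days 365
```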
25. the Logstash config file
input {
lumberjack {
# The port to listen on
port => 1234
# The paths to your ssl cert and key
ssl_certificate => "/etc/pki/logstash-forwarder.crt"
ssl_key => "/etc/pki/logstash-forwarder.key"
# Set this to whatever you want.
type => "apache-access"
}
}
These need to be placed on the
servers sending the logs!
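On those sending servers, logstash-forwarder reads a JSON config pointing at the Logstash host and the same cert. A sketch (the hostname and log path are placeholders; the port, cert path, and type match the input config above):

```json
{
  "network": {
    "servers": [ "logstash.example.com:1234" ],
    "ssl ca": "/etc/pki/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [ "/var/log/httpd/access_log" ],
      "fields": { "type": "apache-access" }
    }
  ]
}
```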
26. Filters….
filter {
grok {
type => "apache-access"
match => { "message" => "%{COMBINEDAPACHELOG}" }
}
date {
match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
}
geoip {
source => "clientip"
}
}
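For reference, a combined-format line of the shape %{COMBINEDAPACHELOG} expects (IP and URLs made up): grok pulls out fields like clientip, timestamp, verb, and response; the date filter then parses the dd/MMM/yyyy:HH:mm:ss Z timestamp; and geoip looks up the extracted clientip.

```
203.0.113.7 - - [10/Feb/2015:13:55:36 -0500] "GET /index.html HTTP/1.1" 200 2326 "http://example.com/" "Mozilla/5.0"
```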
27. Outputs...
output {
elasticsearch { host => 'LoadBalancer.us-east-1.elb.amazonaws.com'
cluster => 'Frederick-Von-Clusterberg'
protocol => 'http'
}
}
Send logs to your Load Balancer
make sure to give it the cluster name...or don’t, you
didn’t really need those logs anyway.
Set The “elasticsearch” output