Flux and InfluxDB 2.0
Paul Dix

@pauldix

paul@influxdata.com
• Data-scripting language

• Functional

• MIT Licensed

• Language & Runtime/Engine (see the sketch below)
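A minimal sketch of the functional, pipe-forward style (illustrative only; the deck's real examples follow later):

// average the last hour of data from a bucket
from(bucket: "telegraf/autogen")
|> range(start: -1h)
|> mean()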
Language + Query Engine
2.0
Biggest Change Since 0.9
Clean Migration Path
Compatibility Layer
• MIT Licensed

• Multi-tenanted

• Telegraf, InfluxDB, Chronograf, Kapacitor rolled into 1

• OSS single server

• Cloud usage based pricing

• Dedicated Cloud 

• Enterprise on-premise
TICK is dead
Long Live InfluxDB 2.0
(and Telegraf)
Consistent Documented API
Collection, Write/Query, Streaming & Batch Processing, Dashboards
Officially Supported Client Libraries
Go, Node.js, Ruby, Python, PHP, Java, C#, C, Kotlin
Visualization Libraries
Ways to run Flux (interpreter, InfluxDB 1.7 & 2.0)
Flux Language Elements
// get all data from the telegraf db
from(bucket: "telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Comments
// get all data from the telegraf db
from(bucket: "telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Named Arguments
// get all data from the telegraf db
from(bucket: "telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
String Literals
// get all data from the telegraf db
from(bucket: "telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Buckets, not DBs
// get all data from the telegraf db
from(bucket: "telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Duration Literal
// get all data from the telegraf db
from(bucket: "telegraf/autogen")
// filter by a specific start time
|> range(start:2018-11-07T00:00:00Z)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Time Literal
// get all data from the telegraf db
from(bucket: "telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Pipe forward operator
// get all data from the telegraf db
from(bucket: "telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
Anonymous Function
// get all data from the telegraf db
from(bucket: "telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by measurement and host
|> filter(fn: (r) => (r._measurement == "cpu" or r._measurement == "mem")
and r.host == "serverA")
Predicate Function
// variables
some_int = 23
// variables
some_int = 23
some_float = 23.2
// variables
some_int = 23
some_float = 23.2
some_string = "cpu"
// variables
some_int = 23
some_float = 23.2
some_string = "cpu"
some_duration = 1h
// variables
some_int = 23
some_float = 23.2
some_string = "cpu"
some_duration = 1h
some_time = 2018-10-10T19:00:00Z
// variables
some_int = 23
some_float = 23.2
some_string = "cpu"
some_duration = 1h
some_time = 2018-10-10T19:00:00Z
some_array = [1, 6, 20, 22]
// variables
some_int = 23
some_float = 23.2
some_string = "cpu"
some_duration = 1h
some_time = 2018-10-10T19:00:00Z
some_array = [1, 6, 20, 22]
some_object = {foo: "hello", bar: 22}
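A minimal sketch of how these variables might be used in a query (reusing the names above; purely illustrative):

// variables can be passed as named arguments
from(bucket: "telegraf/autogen")
|> range(start: some_time)
|> filter(fn: (r) => r._measurement == some_string)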
Data Model & Working with Tables
Example Series
_measurement=mem,host=A,region=west,_field=free
_measurement=mem,host=B,region=west,_field=free
_measurement=cpu,host=A,region=west,_field=usage_system
_measurement=cpu,host=A,region=west,_field=usage_user
Example Series
_measurement=mem,host=A,region=west,_field=free
_measurement=mem,host=B,region=west,_field=free
_measurement=cpu,host=A,region=west,_field=usage_system
_measurement=cpu,host=A,region=west,_field=usage_user
Measurement
Example Series
_measurement=mem,host=A,region=west,_field=free
_measurement=mem,host=B,region=west,_field=free
_measurement=cpu,host=A,region=west,_field=usage_system
_measurement=cpu,host=A,region=west,_field=usage_user
Field
Table
_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 10
mem A west free 2018-06-14T09:14:50 10
_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 10
mem A west free 2018-06-14T09:14:50 10
Column
_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 10
mem A west free 2018-06-14T09:14:50 10
Record
_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 10
mem A west free 2018-06-14T09:14:50 10
Group Key
_measurement=mem,host=A,region=west,_field=free
_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 10
mem A west free 2018-06-14T09:14:50 10
Every record has the same values for the group key columns!
_measurement=mem,host=A,region=west,_field=free
Table Per Series
_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 10
mem A west free 2018-06-14T09:14:50 11
_measurement host region _field _time _value
mem B west free 2018-06-14T09:15:00 20
mem B west free 2018-06-14T09:14:50 22
_measurement host region _field _time _value
cpu A west usage_user 2018-06-14T09:15:00 45
cpu A west usage_user 2018-06-14T09:14:50 49
_measurement host region _field _time _value
cpu A west usage_system 2018-06-14T09:15:00 35
cpu A west usage_system 2018-06-14T09:14:50 38
input tables -> function -> output tables
// example query
from(bucket: "telegraf")
|> range(start: 2018-06-14T09:14:50Z, end: 2018-06-14T09:15:01Z)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> sum()
input tables -> function -> output tables
What to sum on?
// example query
from(bucket: "telegraf")
|> range(start: 2018-06-14T09:14:50Z, end: 2018-06-14T09:15:01Z)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> sum()
input tables -> function -> output tables
Default columns argument
// example query
from(bucket: "telegraf")
|> range(start: 2018-06-14T09:14:50Z, end: 2018-06-14T09:15:01Z)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> sum(columns: ["_value"])
input tables -> function -> output tables
_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 10
mem A west free 2018-06-14T09:14:50 11

_measurement host region _field _time _value
mem B west free 2018-06-14T09:15:00 20
mem B west free 2018-06-14T09:14:50 22
Input in table form
// example query
from(bucket: "telegraf")
|> range(start: 2018-06-14T09:14:50Z, end: 2018-06-14T09:15:01Z)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> sum()
input tables -> function -> output tables
_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 10
mem A west free 2018-06-14T09:14:50 11

_measurement host region _field _time _value
mem B west free 2018-06-14T09:15:00 20
mem B west free 2018-06-14T09:14:50 22
sum()
// example query
from(bucket: "telegraf")
|> range(start: 2018-06-14T09:14:50Z, end: 2018-06-14T09:15:01Z)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> sum()
input tables -> function -> output tables
// example query
from(bucket: "telegraf")
|> range(start: 2018-06-14T09:14:50Z, end: 2018-06-14T09:15:01Z)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> sum()
_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 10
mem A west free 2018-06-14T09:14:50 11

_measurement host region _field _time _value
mem B west free 2018-06-14T09:15:00 20
mem B west free 2018-06-14T09:14:50 22

sum()

_measurement host region _field _time _value
mem A west free 2018-06-14T09:15:00 21

_measurement host region _field _time _value
mem B west free 2018-06-14T09:15:00 42
N to N table mapping
(1 to 1 mapping)
N to M table mapping
window
// example query
from(bucket: "telegraf")
|> range(start: 2018-06-14T09:14:30Z, end: 2018-06-14T09:15:01Z)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> window(every: 20s)
30s of data (4 samples)
window
// example query
from(bucket: "telegraf")
|> range(start: 2018-06-14T09:14:30Z, end: 2018-06-14T09:15:01Z)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> window(every: 20s)
split into 20s windows
window
_measurement host region _field _time _value
mem A west free …14:30 10
mem A west free …14:40 11
mem A west free …14:50 12
mem A west free …15:00 13

_measurement host region _field _time _value
mem B west free …14:30 20
mem B west free …14:40 22
mem B west free …14:50 23
mem B west free …15:00 24
// example query
from(bucket: "telegraf")
|> range(start: 2018-06-14T09:14:30Z, end: 2018-06-14T09:15:01Z)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> window(every: 20s)
Input
window
_measurement host region _field _time _value
mem A west free …14:30 10
mem A west free …14:40 11
mem A west free …14:50 12
mem A west free …15:00 13

_measurement host region _field _time _value
mem B west free …14:30 20
mem B west free …14:40 22
mem B west free …14:50 23
mem B west free …15:00 24
window(every: 20s)
// example query
from(bucket: "telegraf")
|> range(start: 2018-06-14T09:14:30Z, end: 2018-06-14T09:15:01Z)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> window(every: 20s)
window
_measurement host region _field _time _value
mem A west free …14:30 10
mem A west free …14:40 11
mem A west free …14:50 12
mem A west free …15:00 13

_measurement host region _field _time _value
mem B west free …14:30 20
mem B west free …14:40 22
mem B west free …14:50 23
mem B west free …15:00 24
window(every: 20s)
// example query
from(bucket: "telegraf")
|> range(start: 2018-06-14T09:14:30Z, end: 2018-06-14T09:15:01Z)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> window(every: 20s)

_measurement host region _field _time _value
mem A west free …14:30 10
mem A west free …14:40 11

_measurement host region _field _time _value
mem B west free …14:50 23
mem B west free …15:00 24

_measurement host region _field _time _value
mem B west free …14:30 20
mem B west free …14:40 22

_measurement host region _field _time _value
mem A west free …14:50 12
mem A west free …15:00 13
window
_measurement host region _field _time _value
mem A west free …14:30 10
mem A west free …14:40 11
mem A west free …14:50 12
mem A west free …15:00 13

_measurement host region _field _time _value
mem B west free …14:30 20
mem B west free …14:40 22
mem B west free …14:50 23
mem B west free …15:00 24
window(every: 20s)
// example query
from(bucket: "telegraf")
|> range(start: 2018-06-14T09:14:30Z, end: 2018-06-14T09:15:01Z)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> window(every: 20s)

_measurement host region _field _time _value
mem A west free …14:30 10
mem A west free …14:40 11

_measurement host region _field _time _value
mem B west free …14:50 23
mem B west free …15:00 24

_measurement host region _field _time _value
mem B west free …14:30 20
mem B west free …14:40 22

_measurement host region _field _time _value
mem A west free …14:50 12
mem A west free …15:00 13
N to M tables
Window based on time
_start and _stop columns
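A minimal sketch of windowed aggregation, assuming the window() semantics above: each 20s window becomes its own table, bounded by _start and _stop, and an aggregate like sum() then collapses each window table to a single row:

// window + aggregate: N series tables -> M window tables -> M single-row tables
from(bucket: "telegraf")
|> range(start: 2018-06-14T09:14:30Z, end: 2018-06-14T09:15:01Z)
|> filter(fn: (r) => r._measurement == "mem" and r._field == "free")
|> window(every: 20s)
|> sum()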
group
// example query
from(bucket: "telegraf")
|> range(start: 2018-06-14T09:14:30Z, end: 2018-06-14T09:15:01Z)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> group(keys: ["region"])
group
// example query
from(bucket: "telegraf")
|> range(start: 2018-06-14T09:14:30Z, end: 2018-06-14T09:15:01Z)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> group(keys: ["region"])
new group key
group
_measurement host region _field _time _value
mem A west free …14:30 10
mem A west free …14:40 11
mem A west free …14:50 12
mem A west free …15:00 13

_measurement host region _field _time _value
mem B west free …14:30 20
mem B west free …14:40 22
mem B west free …14:50 23
mem B west free …15:00 24
// example query
from(bucket: "telegraf")
|> range(start: 2018-06-14T09:14:30Z, end: 2018-06-14T09:15:01Z)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> group(keys: ["region"])
group
_measurement host region _field _time _value
mem A west free …14:30 10
mem A west free …14:40 11
mem A west free …14:50 12
mem A west free …15:00 13

_measurement host region _field _time _value
mem B west free …14:30 20
mem B west free …14:40 22
mem B west free …14:50 23
mem B west free …15:00 24
group(keys: ["region"])
// example query
from(bucket: "telegraf")
|> range(start: 2018-06-14T09:14:30Z, end: 2018-06-14T09:15:01Z)
|> filter(fn: (r) => r._measurement == "mem" and
r._field == "free")
|> group(keys: ["region"])
_measurement host region _field _time _value
mem A west free …14:30 10
mem B west free …14:30 20
mem A west free …14:40 11
mem B west free …14:40 22
mem A west free …14:50 12
mem B west free …14:50 23
mem A west free …15:00 13
mem B west free …15:00 24
N to M tables
M == cardinality(group keys)
Group based on columns
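A minimal sketch combining group() with an aggregate, assuming the regrouping semantics above: one table per region, each collapsed to a single summed row:

// group + aggregate: one summed row per region
from(bucket: "telegraf")
|> range(start: 2018-06-14T09:14:30Z, end: 2018-06-14T09:15:01Z)
|> filter(fn: (r) => r._measurement == "mem" and r._field == "free")
|> group(keys: ["region"])
|> sum()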
Flux Design Principles
Useable
Make Everyone a Data Programmer!
Readable
Flexible
Composable
Testable
Contributable
Shareable
Functions Overview
Inputs
from, fromKafka, fromFile, fromS3, fromPrometheus, fromMySQL, etc.
Flux != InfluxDB
Follow Telegraf Model
import "mysql"
customers = mysql.from(connect: loadSecret(name: "mysql_prod"),
query: "select id, name from customers")
data = from(bucket: "my_data")
|> range(start: -4h)
|> filter(fn: (r) => r._measurement == "write_requests")
|> rename(columns: {customer_id: "id"})
join(tables: {customers, data}, on: ["id"])
|> yield(name: "results")
import "mysql"
customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
query: "select id, name from customers")
data = from(bucket: "my_data")
|> range(start: -4h)
|> filter(fn: (r) => r._measurement == "write_requests")
|> rename(columns: {customer_id: "id"})
join(tables: {customers, data}, on: ["id"])
|> yield(name: "results")
Imports for sharing code!
import "mysql"
customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
query: "select id, name from customers")
data = from(bucket: "my_data")
|> range(start: -4h)
|> filter(fn: (r) => r._measurement == "write_requests")
|> rename(columns: {customer_id: "id"})
join(tables: {customers, data}, on: ["id"])
|> yield(name: "results")
Pulling data from a non-InfluxDB source
import "mysql"
customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
query: "select id, name from customers")
data = from(bucket: "my_data")
|> range(start: -4h)
|> filter(fn: (r) => r._measurement == "write_requests")
|> rename(columns: {customer_id: "id"})
join(tables: {customers, data}, on: ["id"])
|> yield(name: "results")
Raw query (for now)
import "mysql"
customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
query: "select id, name from customers")
data = from(bucket: "my_data")
|> range(start: -4h)
|> filter(fn: (r) => r._measurement == "write_requests")
|> rename(columns: {customer_id: "id"})
join(tables: {customers, data}, on: ["id"])
|> yield(name: "results")
Loading Secret
import "mysql"
customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
query: "select id, name from customers")
data = from(bucket: "my_data")
|> range(start: -4h)
|> filter(fn: (r) => r._measurement == "write_requests")
|> rename(columns: {customer_id: "id"})
join(tables: {customers, data}, on: ["id"])
|> yield(name: "results")
Renaming & Shaping Data
import "mysql"
customers = mysql.from(connect: loadSecret(name:"mysql_prod"),
query: "select id, name from customers")
data = from(bucket: "my_data")
|> range(start: -4h)
|> filter(fn: (r) => r._measurement == "write_requests")
|> rename(columns: {customer_id: "id"})
join(tables: {customers, data}, on: ["id"])
|> yield(name: "results")
Join on any column
Outputs
to, toKafka, toFile, toS3, toPrometheus, toMySQL, etc.
Outputs are for Tasks
option task = {
name: "Alert on disk",
every: 5m,
}
crit = 90 // alert at this percentage
warn = 80 // warn at this percentage
data = from(bucket: "telegraf/autogen")
|> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent")
|> last()
data |> filter(fn: (r) => r._value > crit)
|> addColumn(key: "level", value: "critical")
|> addColumn(key: "alert", value: task.name)
|> to(bucket: "alerts")
data |> filter(fn: (r) => r._value > warn && r._value < crit)
|> addColumn(key: "level", value: "warn")
|> to(bucket: "alerts")
option task = {
name: "Alert on disk",
every: 5m,
}
crit = 90 // alert at this percentage
warn = 80 // warn at this percentage
data = from(bucket: "telegraf/autogen")
|> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent")
|> last()
data |> filter(fn: (r) => r._value > crit)
|> addColumn(key: "level", value: "critical")
|> addColumn(key: "alert", value: task.name)
|> to(bucket: "alerts")
data |> filter(fn: (r) => r._value > warn && r._value < crit)
|> addColumn(key: "level", value: "warn")
|> to(bucket: "alerts")
Option syntax for tasks
option task = {
name: "Alert on disk",
every: 5m,
}
crit = 90 // alert at this percentage
warn = 80 // warn at this percentage
data = from(bucket: "telegraf/autogen")
|> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent")
|> last()
data |> filter(fn: (r) => r._value > crit)
|> addColumn(key: "level", value: "critical")
|> addColumn(key: "alert", value: task.name)
|> to(bucket: "alerts")
data |> filter(fn: (r) => r._value > warn && r._value < crit)
|> addColumn(key: "level", value: "warn")
|> to(bucket: "alerts")
Get at the last value without specifying time range
option task = {
name: "Alert on disk",
every: 5m,
}
crit = 90 // alert at this percentage
warn = 80 // warn at this percentage
data = from(bucket: "telegraf/autogen")
|> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent")
|> last()
data |> filter(fn: (r) => r._value > crit)
|> addColumn(key: "level", value: “critical")
|> addColumn(key: "alert", value: task.name)
|> to(bucket: "alerts")
data |> filter(fn: (r) => r._value > warn && r._value < crit)
|> addColumn(key: "level", value: "warn")
|> to(bucket: "alerts")
Adding a column to decorate the data
option task = {
name: "Alert on disk",
every: 5m,
}
crit = 90 // alert at this percentage
warn = 80 // warn at this percentage
data = from(bucket: "telegraf/autogen")
|> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent")
|> last()
data |> filter(fn: (r) => r._value > crit)
|> addColumn(key: "level", value: "critical")
|> addColumn(key: "alert", value: task.name)
|> to(bucket: "alerts")
data |> filter(fn: (r) => r._value > warn && r._value < crit)
|> addColumn(key: "level", value: "warn")
|> to(bucket: "alerts") To writes to the local InfluxDB
Separate Alerts From Notifications!
option task = {name: "slack critical alerts", every: 1m}
import "slack"
lastNotificationTime = from(bucket: "notifications")
|> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
|> group(none:true)
|> last()
|> recordValue(column:"_value")
from(bucket: "alerts")
|> range(start: lastNotificationTime)
|> filter(fn: (r) => r.level == "critical")
// shape the alert data to what we care about in notifications
|> renameColumn(from: "_time", to: "alert_time")
|> renameColumn(from: "_value", to: "used_percent")
// set the time the notification is being sent
|> addColumn(key: "_time", value: now())
// get rid of unneeded columns
|> drop(columns: ["_start", "_stop"])
// write the message
|> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
|> slack.to(config: loadSecret(name: "slack_alert_config"), message: "_value")
|> to(bucket: "notifications")
option task = {name: "slack critical alerts", every: 1m}
import "slack"
lastNotificationTime = from(bucket: "notifications")
|> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
|> group(none:true)
|> last()
|> recordValue(column:"_value")
from(bucket: "alerts")
|> range(start: lastNotificationTime)
|> filter(fn: (r) => r.level == "critical")
// shape the alert data to what we care about in notifications
|> renameColumn(from: "_time", to: "alert_time")
|> renameColumn(from: "_value", to: "used_percent")
// set the time the notification is being sent
|> addColumn(key: "_time", value: now())
// get rid of unneeded columns
|> drop(columns: ["_start", "_stop"])
// write the message
|> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
|> slack.to(config: loadSecret(name: "slack_alert"))
|> to(bucket: "notifications")
We have state so we don’t resend
option task = {name: "slack critical alerts", every: 1m}
import "slack"
lastNotificationTime = from(bucket: "notifications")
|> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
|> group(none:true)
|> last()
|> recordValue(column:"_value")
from(bucket: "alerts")
|> range(start: lastNotificationTime)
|> filter(fn: (r) => r.level == "critical")
// shape the alert data to what we care about in notifications
|> renameColumn(from: "_time", to: "alert_time")
|> renameColumn(from: "_value", to: "used_percent")
// set the time the notification is being sent
|> addColumn(key: "_time", value: now())
// get rid of unneeded columns
|> drop(columns: ["_start", "_stop"])
// write the message
|> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
|> slack.to(config: loadSecret(name: "slack_alert"))
|> to(bucket: "notifications")
Use last time as argument to range
option task = {name: "slack critical alerts", every: 1m}
import "slack"
lastNotificationTime = from(bucket: "notifications")
|> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
|> group(none:true)
|> last()
|> recordValue(column:"_value")
from(bucket: "alerts")
|> range(start: lastNotificationTime)
|> filter(fn: (r) => r.level == "critical")
// shape the alert data to what we care about in notifications
|> renameColumn(from: "_time", to: "alert_time")
|> renameColumn(from: "_value", to: "used_percent")
// set the time the notification is being sent
|> addColumn(key: "_time", value: now())
// get rid of unneeded columns
|> drop(columns: ["_start", "_stop"])
// write the message
|> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
|> slack.to(config: loadSecret(name: "slack_alert"))
|> to(bucket: "notifications")
now() function for current time
option task = {name: "slack critical alerts", every: 1m}
import "slack"
lastNotificationTime = from(bucket: "notifications")
|> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
|> group(none:true)
|> last()
|> recordValue(column:"_value")
from(bucket: "alerts")
|> range(start: lastNotificationTime)
|> filter(fn: (r) => r.level == "critical")
// shape the alert data to what we care about in notifications
|> renameColumn(from: "_time", to: "alert_time")
|> renameColumn(from: "_value", to: "used_percent")
// set the time the notification is being sent
|> addColumn(key: "_time", value: now())
// get rid of unneeded columns
|> drop(columns: ["_start", "_stop"])
// write the message
|> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
|> slack.to(config: loadSecret(name: "slack_alert"))
|> to(bucket: "notifications")
Map function to iterate over values
option task = {name: "slack critical alerts", every: 1m}
import "slack"
lastNotificationTime = from(bucket: "notifications")
|> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
|> group(none:true)
|> last()
|> recordValue(column:"_value")
from(bucket: "alerts")
|> range(start: lastNotificationTime)
|> filter(fn: (r) => r.level == "critical")
// shape the alert data to what we care about in notifications
|> renameColumn(from: "_time", to: "alert_time")
|> renameColumn(from: "_value", to: "used_percent")
// set the time the notification is being sent
|> addColumn(key: "_time", value: now())
// get rid of unneeded columns
|> drop(columns: ["_start", "_stop"])
// write the message
|> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
|> slack.to(config: loadSecret(name: "slack_alert"))
|> to(bucket: "notifications")
String interpolation
option task = {name: "slack critical alerts", every: 1m}
import "slack"
lastNotificationTime = from(bucket: "notificatons")
|> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
|> group(none:true)
|> last()
|> recordValue(column:"_value")
from(bucket: "alerts")
|> range(start: lastNotificationTime)
|> filter(fn: (r) => r.level == "critical")
// shape the alert data to what we care about in notifications
|> renameColumn(from: "_time", to: "alert_time")
|> renameColumn(from: "_value", to: "used_percent")
// set the time the notification is being sent
|> addColumn(key: "_time", value: now())
// get rid of unneeded columns
|> drop(columns: ["_start", "_stop"])
// write the message
|> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
|> slack.to(config: loadSecret(name: "slack_alert"))
|> to(bucket: “notifications")
Send to Slack and record in InfluxDB
option task = {
name: "email alert digest",
cron: "0 5 * * 0"
}
import "smtp"
body = ""
from(bucket: "alerts")
|> range(start: -24h)
|> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
|> group(by: ["alert"])
|> count()
|> group(none: true)
|> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")
smtp.to(
config: loadSecret(name: "smtp_digest"),
to: "alerts@influxdata.com",
title: "Alert digest for {now()}",
body: body)
option task = {
name: "email alert digest",
cron: "0 5 * * 0"
}
import "smtp"
body = ""
from(bucket: "alerts")
|> range(start: -24h)
|> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
|> group(by: ["alert"])
|> count()
|> group(none: true)
|> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")
smtp.to(
config: loadSecret(name: "smtp_digest"),
to: "alerts@influxdata.com",
title: "Alert digest for {now()}",
body: body)
Cron syntax
option task = {
name: "email alert digest",
cron: "0 5 * * 0"
}
import "smtp"
body = ""
from(bucket: "alerts")
|> range(start: -24h)
|> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
|> group(by: ["alert"])
|> count()
|> group(none: true)
|> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")
smtp.to(
config: loadSecret(name: "smtp_digest"),
to: "alerts@influxdata.com",
title: "Alert digest for {now()}",
body: body)
Closures
Task run logs (just another time series)
UI will hide complexity
Built on top of primitives
API for Defining Dashboards
Bulk Import & Export
Specify bucket, range, predicate
Same API in OSS, Cloud, and Enterprise
CLI & UI
2.0
Thank you.
Paul Dix

@pauldix

paul@influxdata.com

 
Online aptitude test management system project report.pdf
Online aptitude test management system project report.pdfOnline aptitude test management system project report.pdf
Online aptitude test management system project report.pdf
 
[JPP-1] - (JEE 3.0) - Kinematics 1D - 14th May..pdf
[JPP-1] - (JEE 3.0) - Kinematics 1D - 14th May..pdf[JPP-1] - (JEE 3.0) - Kinematics 1D - 14th May..pdf
[JPP-1] - (JEE 3.0) - Kinematics 1D - 14th May..pdf
 
Hierarchical Digital Twin of a Naval Power System
Hierarchical Digital Twin of a Naval Power SystemHierarchical Digital Twin of a Naval Power System
Hierarchical Digital Twin of a Naval Power System
 
6th International Conference on Machine Learning & Applications (CMLA 2024)
6th International Conference on Machine Learning & Applications (CMLA 2024)6th International Conference on Machine Learning & Applications (CMLA 2024)
6th International Conference on Machine Learning & Applications (CMLA 2024)
 
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdf
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdfGoverning Equations for Fundamental Aerodynamics_Anderson2010.pdf
Governing Equations for Fundamental Aerodynamics_Anderson2010.pdf
 
PROJECT FORMAT FOR EVS AMITY UNIVERSITY GWALIOR.ppt
PROJECT FORMAT FOR EVS AMITY UNIVERSITY GWALIOR.pptPROJECT FORMAT FOR EVS AMITY UNIVERSITY GWALIOR.ppt
PROJECT FORMAT FOR EVS AMITY UNIVERSITY GWALIOR.ppt
 
Modelagem de um CSTR com reação endotermica.pdf
Modelagem de um CSTR com reação endotermica.pdfModelagem de um CSTR com reação endotermica.pdf
Modelagem de um CSTR com reação endotermica.pdf
 
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
NO1 Uk best vashikaran specialist in delhi vashikaran baba near me online vas...
 
TOP 10 B TECH COLLEGES IN JAIPUR 2024.pptx
TOP 10 B TECH COLLEGES IN JAIPUR 2024.pptxTOP 10 B TECH COLLEGES IN JAIPUR 2024.pptx
TOP 10 B TECH COLLEGES IN JAIPUR 2024.pptx
 
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
一比一原版(IIT毕业证)伊利诺伊理工大学毕业证成绩单专业办理
 
Swimming pool mechanical components design.pptx
Swimming pool  mechanical components design.pptxSwimming pool  mechanical components design.pptx
Swimming pool mechanical components design.pptx
 

Flux and InfluxDB 2.0

  • 1. Flux and InfluxDB 2.0 Paul Dix @pauldix paul@influxdata.com
  • 2.
  • 3.
    • Data-scripting language
    • Functional
    • MIT Licensed
    • Language & Runtime/Engine
  • 5.
  • 6.
  • 7. 2.0
  • 11–12.
    • MIT Licensed
    • Multi-tenanted
    • Telegraf, InfluxDB, Chronograf, Kapacitor rolled into 1
    • OSS single server
    • Cloud usage-based pricing
    • Dedicated Cloud
    • Enterprise on-premise
  • 14. Long Live InfluxDB 2.0 (and Telegraf)
  • 15. Consistent Documented API: Collection, Write/Query, Streaming & Batch Processing, Dashboards
  • 16.
  • 17. Officially Supported Client Libraries: Go, Node.js, Ruby, Python, PHP, Java, C#, C, Kotlin
  • 19.
  • 20. Ways to run Flux (interpreter, InfluxDB 1.7 & 2.0)
  • 21.
  • 22.
  • 24.
    // get all data from the telegraf db
    from(bucket: "telegraf/autogen")
        // filter that by the last hour
        |> range(start: -1h)
        // filter further by series with a specific measurement and field
        |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
  • 25–32. The same query, annotated element by element:
    • 25. Comments: the // lines
    • 26. Named Arguments: bucket:, start:, fn:
    • 27. String Literals: "telegraf/autogen", "cpu", "usage_system"
    • 28. Buckets, not DBs: from() reads from a bucket
    • 29. Duration Literal: -1h
    • 30. Time Literal: range also accepts an absolute time, range(start: 2018-11-07T00:00:00Z)
    • 31. Pipe forward operator: |> chains functions together
    • 32. Anonymous Function: (r) => ...
  • 33. Predicate Function
    from(bucket: "telegraf/autogen")
        |> range(start: -1h)
        |> filter(fn: (r) => (r._measurement == "cpu" or r._measurement == "mem") and r.host == "serverA")
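    Predicates need not be written inline. Flux functions are first-class values, so a predicate can be bound to a name and reused; a minimal sketch (the name isServerCPU is ours, not from the deck):

    // a named predicate, reusable across queries
    isServerCPU = (r) => r._measurement == "cpu" and r.host == "serverA"

    from(bucket: "telegraf/autogen")
        |> range(start: -1h)
        |> filter(fn: isServerCPU)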
  • 35–40. Variables (built up one slide at a time):
    // variables
    some_int = 23
    some_float = 23.2
    some_string = "cpu"
    some_duration = 1h
    some_time = 2018-10-10T19:00:00Z
    some_array = [1, 6, 20, 22]
    some_object = {foo: "hello", bar: 22}
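    These values are used as you would expect: records by member access, arrays by index, and durations and times passed directly as arguments. A short sketch using the variables above:

    foo_value = some_object.foo    // "hello"
    first = some_array[0]          // 1

    from(bucket: "telegraf/autogen")
        |> range(start: some_time)
        |> window(every: some_duration)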
  • 41. Data Model & Working with Tables
  • 45–49. Table

    _measurement  host  region  _field  _time                _value
    mem           A     west    free    2018-06-14T09:15:00  10
    mem           A     west    free    2018-06-14T09:14:50  10

    • 46. Column: a named column of the table (host, _time, _value, ...)
    • 47. Record: one row of the table
    • 48. Group Key: _measurement=mem,host=A,region=west,_field=free
    • 49. Every record in the table has the same values for the group-key columns!
  • 50. Table Per Series

    _measurement  host  region  _field  _time                _value
    mem           A     west    free    2018-06-14T09:15:00  10
    mem           A     west    free    2018-06-14T09:14:50  11

    _measurement  host  region  _field  _time                _value
    mem           B     west    free    2018-06-14T09:15:00  20
    mem           B     west    free    2018-06-14T09:14:50  22

    _measurement  host  region  _field      _time                _value
    cpu           A     west    usage_user  2018-06-14T09:15:00  45
    cpu           A     west    usage_user  2018-06-14T09:14:50  49

    _measurement  host  region  _field        _time                _value
    cpu           A     west    usage_system  2018-06-14T09:15:00  35
    cpu           A     west    usage_system  2018-06-14T09:14:50  38
  • 51. input tables -> function -> output tables
  • 52–57. input tables -> function -> output tables

    // example query
    from(bucket: "telegraf")
        |> range(start: 2018-06-14T09:14:50Z, stop: 2018-06-14T09:15:01Z)
        |> filter(fn: (r) => r._measurement == "mem" and r._field == "free")
        |> sum()

    What to sum on? The columns argument, which defaults to ["_value"]:
    sum(columns: ["_value"])

    Input in table form:

    _measurement  host  region  _field  _time                _value
    mem           A     west    free    2018-06-14T09:15:00  10
    mem           A     west    free    2018-06-14T09:14:50  11

    _measurement  host  region  _field  _time                _value
    mem           B     west    free    2018-06-14T09:15:00  20
    mem           B     west    free    2018-06-14T09:14:50  22

    sum() collapses each input table to a single record:

    _measurement  host  region  _field  _time                _value
    mem           A     west    free    2018-06-14T09:15:00  21

    _measurement  host  region  _field  _time                _value
    mem           B     west    free    2018-06-14T09:15:00  42
  • 58. N to N table mapping (1 to 1 mapping)
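    Aggregates behave this way in general: each input table collapses to one output table, so the table count is preserved. A sketch with mean() instead of sum() (function choice ours):

    from(bucket: "telegraf")
        |> range(start: -1h)
        |> filter(fn: (r) => r._measurement == "mem" and r._field == "free")
        |> mean()    // one output table per input table: N in, N out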
  • 59. N to M table mapping
  • 60–65. window

    // example query
    from(bucket: "telegraf")
        |> range(start: 2018-06-14T09:14:30Z, stop: 2018-06-14T09:15:01Z)
        |> filter(fn: (r) => r._measurement == "mem" and r._field == "free")
        |> window(every: 20s)

    30s of data (4 samples per series), split into 20s windows.

    Input:

    _measurement  host  region  _field  _time   _value
    mem           A     west    free    …14:30  10
    mem           A     west    free    …14:40  11
    mem           A     west    free    …14:50  12
    mem           A     west    free    …15:00  13

    _measurement  host  region  _field  _time   _value
    mem           B     west    free    …14:30  20
    mem           B     west    free    …14:40  22
    mem           B     west    free    …14:50  23
    mem           B     west    free    …15:00  24

    window(every: 20s) splits each series table by time, N to M tables:

    _measurement  host  region  _field  _time   _value
    mem           A     west    free    …14:30  10
    mem           A     west    free    …14:40  11

    _measurement  host  region  _field  _time   _value
    mem           A     west    free    …14:50  12
    mem           A     west    free    …15:00  13

    _measurement  host  region  _field  _time   _value
    mem           B     west    free    …14:30  20
    mem           B     west    free    …14:40  22

    _measurement  host  region  _field  _time   _value
    mem           B     west    free    …14:50  23
    mem           B     west    free    …15:00  24
  • 66. Windows are based on time; each output table carries its bounds in the _start and _stop columns
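    Those _start and _stop columns are what make the classic downsampling pattern work: window, aggregate each window to one record, then merge the windows back. A sketch as the pattern looked in Flux builds of this era (aggregate choice ours):

    from(bucket: "telegraf")
        |> range(start: -1h)
        |> filter(fn: (r) => r._measurement == "mem" and r._field == "free")
        |> window(every: 20s)    // N series tables -> M window tables
        |> mean()                // one record per window
        |> window(every: inf)    // merge back to one table per series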
  • 67–70. group

    // example query
    from(bucket: "telegraf")
        |> range(start: 2018-06-14T09:14:30Z, stop: 2018-06-14T09:15:01Z)
        |> filter(fn: (r) => r._measurement == "mem" and r._field == "free")
        |> group(keys: ["region"])    // new group key

    Input: the same two per-series tables as above. Grouping by region merges them into one table per group-key value; N to M tables, where M == cardinality(group keys):

    _measurement  host  region  _field  _time   _value
    mem           A     west    free    …14:30  10
    mem           B     west    free    …14:30  20
    mem           A     west    free    …14:40  11
    mem           B     west    free    …14:40  22
    mem           A     west    free    …14:50  12
    mem           B     west    free    …14:50  23
    mem           A     west    free    …15:00  13
    mem           B     west    free    …15:00  24
  • 71. Group based on columns
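    Any column can serve as the key: grouping by host instead of region would yield one table per host. A quick variant (column choice ours):

    from(bucket: "telegraf")
        |> range(start: -1h)
        |> filter(fn: (r) => r._measurement == "mem" and r._field == "free")
        |> group(keys: ["host"])    // one output table per host value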
  • 74. Make Everyone a Data Programmer!
  • 75.
  • 76.
  • 77.
  • 85. Inputs: from, fromKafka, fromFile, fromS3, fromPrometheus, fromMySQL, etc.
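    Of these inputs, from() is the one that exists today; the others were presented as planned sources. A hypothetical sketch of mixing sources (fromFile and its path parameter are illustrative only, not a shipped API):

    // from() reads from InfluxDB storage
    metrics = from(bucket: "telegraf/autogen")
        |> range(start: -1h)

    // hypothetical file source, per the roadmap list above
    reference = fromFile(path: "/tmp/reference.csv")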
  • 87.
  • 88.
  • 89.
  • 90.
  • 92–98.

    import "mysql"    // imports for sharing code!

    // pulling data from a non-InfluxDB source with a raw query (for now),
    // connection credentials loaded from a secret
    customers = mysql.from(connect: loadSecret(name: "mysql_prod"),
                           query: "select id, name from customers")

    data = from(bucket: "my_data")
        |> range(start: -4h)
        |> filter(fn: (r) => r._measurement == "write_requests")
        // renaming & shaping the data
        |> rename(columns: {customer_id: "id"})

    // join on any column
    join(tables: {customers, data}, on: ["id"])
        |> yield(name: "results")
  • 99. Outputs: to, toKafka, toFile, toS3, toPrometheus, toMySQL, etc.
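    Of these outputs, to() ships with InfluxDB 2.0; the rest are planned sinks. A minimal downsample-and-store sketch using to() (bucket name ours):

    from(bucket: "telegraf/autogen")
        |> range(start: -1h)
        |> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
        |> window(every: 5m)
        |> mean()
        |> to(bucket: "downsampled")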
  • 100. Outputs are for Tasks
  • 101–105.

    // option syntax for tasks
    option task = {name: "Alert on disk", every: 5m}

    crit = 90 // alert at this percentage
    warn = 80 // warn at this percentage

    // last() gets the latest value without specifying a time range
    data = from(bucket: "telegraf/autogen")
        |> filter(fn: (r) => r._measurement == "disk" and r._field == "used_percent")
        |> last()

    data
        |> filter(fn: (r) => r._value > crit)
        // add columns to decorate the data
        |> addColumn(key: "level", value: "critical")
        |> addColumn(key: "alert", value: task.name)
        // to() writes to the local InfluxDB
        |> to(bucket: "alerts")

    data
        |> filter(fn: (r) => r._value > warn and r._value < crit)
        |> addColumn(key: "level", value: "warn")
        |> to(bucket: "alerts")
  • 107–113.

    option task = {name: "slack critical alerts", every: 1m}

    import "slack"

    // we have state, so we don't resend: read the last notification time
    lastNotificationTime = from(bucket: "notifications")
        |> filter(fn: (r) => r.level == "critical" and r._field == "alert_time")
        |> group(none: true)
        |> last()
        |> recordValue(column: "_value")

    from(bucket: "alerts")
        // use the last notification time as the argument to range
        |> range(start: lastNotificationTime)
        |> filter(fn: (r) => r.level == "critical")
        // shape the alert data to what we care about in notifications
        |> renameColumn(from: "_time", to: "alert_time")
        |> renameColumn(from: "_value", to: "used_percent")
        // now() gives the current time: set the time the notification is sent
        |> addColumn(key: "_time", value: now())
        // get rid of unneeded columns
        |> drop(columns: ["_start", "_stop"])
        // map() iterates over records; string interpolation builds the message
        |> map(fn: (r) => r._value = "{r.host} disk usage is at {r.used_percent}%")
        // send to Slack and record the notification in InfluxDB
        |> slack.to(config: loadSecret(name: "slack_alert_config"), message: "_value")
        |> to(bucket: "notifications")
  • 114–116.

    // cron syntax for scheduling
    option task = {name: "email alert digest", cron: "0 5 * * 0"}

    import "smtp"

    body = ""

    from(bucket: "alerts")
        |> range(start: -24h)
        |> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
        |> group(by: ["alert"])
        |> count()
        |> group(none: true)
        // the anonymous function is a closure: it appends to body from the enclosing scope
        |> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")

    smtp.to(
        config: loadSecret(name: "smtp_digest"),
        to: "alerts@influxdata.com",
        title: "Alert digest for {now()}",
        body: body)
  • 117. Task run logs (just another time series)
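    Because run logs are stored as ordinary time series, they can be queried with the same Flux as anything else. A hypothetical sketch (the bucket, measurement, and field names here are illustrative, not a documented schema):

    from(bucket: "task_logs")
        |> range(start: -1d)
        |> filter(fn: (r) => r._measurement == "runs" and r.task_name == "Alert on disk")
        |> filter(fn: (r) => r._field == "status")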
  • 118. UI will hide complexity
  • 119. Built on top of primitives
  • 120. API for Defining Dashboards
  • 121. Bulk Import & Export: specify a bucket, range, and predicate
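    The selection an export makes mirrors the first stages of a Flux query, so it can be read as a from/range/filter pipeline. A sketch of an equivalent selection (the export tooling itself is not shown in the deck):

    from(bucket: "telegraf/autogen")
        |> range(start: 2018-01-01T00:00:00Z, stop: 2018-11-07T00:00:00Z)
        |> filter(fn: (r) => r._measurement == "cpu")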
  • 122. Same API in OSS, Cloud, and Enterprise
  • 124. 2.0