Flux and InfluxDB 2.0
Paul Dix

@pauldix

paul@influxdata.com
• Data-scripting language

• Functional

• MIT Licensed

• Language, VM, engine, planner, optimizer
Language + Query Engine
2.0
Biggest Change Since 0.9
Clean Migration Path
Compatibility Layer
• MIT Licensed

• Multi-tenanted

• Telegraf, InfluxDB, Chronograf, Kapacitor rolled into 1

• OSS single server

• Cloud usage based pricing

• Dedicated Cloud 

• Enterprise on-premise
Consistent Documented API
Collection, Write/Query, Streaming & Batch Processing, Dashboards
Officially Supported Client Libraries
Go, Node.js, Ruby, Python, PHP, Java, C#, C, Kotlin
Visualization Libraries
Multi-tenant roles
• Operator

• Organization Administrator

• User
Data Model
• Organizations

• Buckets (retention)

• Time series data

• Tasks

• Runs

• Logs

• Dashboards

• Users

• Tokens

• Authorizations

• Protos (templates)

• Scrapers

• Telegrafs

• Labels
Ways to run Flux (interpreter, InfluxDB 1.7 & 2.0)
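Because the language is separate from the storage engine, a pure computation runs the same way in the standalone interpreter as it does inside InfluxDB. A minimal sketch with no data source involved:

// plain values and functions: works in the Flux interpreter/REPL
// as well as inside InfluxDB
celsius = (f) => (f - 32.0) * 5.0 / 9.0
celsius(f: 212.0)
// returns 100.0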
Flux Basics
// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
The same query illustrates several pieces of the language:

• Comments

• Named Arguments

• String Literals

• Buckets, not DBs

• Duration Literals

• The pipe forward operator

• Anonymous Functions

Time Literal

// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter to everything since 2018-11-07
|> range(start:2018-11-07T00:00:00Z)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")

Predicate Function

// get all data from the telegraf db
from(bucket:"telegraf/autogen")
// filter that by the last hour
|> range(start:-1h)
// filter further by series with a specific measurement and field
|> filter(fn: (r) => (r._measurement == "cpu" or r._measurement == "cpu")
and r.host == "serverA")
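These building blocks compose directly; a hedged sketch extending the query above with a grouping and an aggregate, using the group() and mean() functions:

// group the filtered series by host and take the mean over the hour
from(bucket:"telegraf/autogen")
|> range(start:-1h)
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
|> group(columns: ["host"])
|> mean()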
// variables
some_int = 23
some_float = 23.2
some_string = "cpu"
some_duration = 1h
some_time = 2018-10-10T19:00:00Z
some_array = [1, 6, 20, 22]
some_object = {foo: "hello", bar: 22}
// defining a pipe forwardable function
square = (tables=<-) =>
tables
|> map(fn: (r) => ({r with _value: r._value * r._value}))

This is potentially new

from(bucket:"foo")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "samples")
|> square()
|> filter(fn: (r) => r._value > 23.2)
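Because square declares tables=<-, the stream can also be passed as a named argument instead of piping; a small equivalent sketch:

// equivalent to from(...) |> range(...) |> square()
square(tables: from(bucket:"foo") |> range(start: -1h))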
Data Sources (inputs)
Data Sinks (outputs)
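from() is the usual input and to() is the matching output. A minimal sketch that copies recent data into another bucket (the destination bucket name here is hypothetical):

// read from one bucket (source) and write the rows to another (sink)
from(bucket: "telegraf/autogen")
|> range(start: -1h)
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
|> to(bucket: "cpu_copy")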
Tasks
option task = {
name: "email alert digest",
cron: "0 5 * * 0"
}

import "smtp"

body = ""

from(bucket: "alerts")
|> range(start: -24h)
|> filter(fn: (r) => (r.level == "warn" or r.level == "critical") and r._field == "message")
|> group(columns: ["alert"])
|> count()
|> group()
|> map(fn: (r) => body = body + "Alert {r.alert} triggered {r._value} times\n")

smtp.to(
config: loadSecret(name: "smtp_digest"),
to: "alerts@influxdata.com",
title: "Alert digest for {now()}",
body: body)
This one example shows several pieces of the platform at once:

• Tasks

• Cron scheduling

• Packages & imports

• map

• String interpolation

• Ship data elsewhere

• Store secrets in a store like Vault
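Another common task shape is periodic downsampling. A hedged sketch, using an every interval rather than cron, the aggregateWindow() helper from later Flux builds, and hypothetical bucket names:

// downsample CPU data every hour into 5 minute means
option task = {name: "downsample cpu", every: 1h}

from(bucket: "telegraf/autogen")
|> range(start: -task.every)
|> filter(fn: (r) => r._measurement == "cpu" and r._field == "usage_system")
|> aggregateWindow(every: 5m, fn: mean)
|> to(bucket: "telegraf_downsampled")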
Open Questions
User Packages & Dependencies
// in a file called package.flux
package "paul"
option version = "0.1.1"

// define square here or…

// import the other package files
// they must have package "paul" declaration at the top
// only package.flux has the version
import "packages"
packages.load(files: ["square.flux", "utils.flux"])

// or this
packages.load(glob: "*.flux")
import "myorg/paul" // latest, will load package.flux
data |> paul.square()
import "myorg/paul", "0.1.0" // specific version
// 1. look in $fluxhome/myorg/paul/package.flux
// 2. look in $fluxhome/myorg/paul/0.1.0/package.flux
// 3. look in cloud2.influxdata.com/api/v1/packages/myorg/paul
data |> paul.square()
import "myorg/paul", ">=0.1.0" // at least this version
data |> paul.square()
Error Handling?
import "slack"
// what if this returns an error?
ret = slack.send(room: "foo", message: "testing this", token: "...")
Option Types?
match ret {
// on match ret gets mapped as the new type
Error => {
// do something with ret
},
Else => {
// do something with ret
}
}
Loops?
records = [
{name: "foo", value: 23},
{name: "bar", value: 23},
{name: "asdf", value: 56}
]

// simple loop over each
records
|> map(fn: (r) => {name: r.name, value: r.value + 1})

// compute the sum
sum = records
|> reduce(
fn: (r, accumulator) => r.value + accumulator,
i: 0
)

// get matching records
foos = records
|> filter(fn: (r) => r.name == "foo")
while(fn: () => {
// do stuff
})

while = (fn) =>
  if fn()
    while(fn)
// or loop some number of times
loop(fn: (i) => {
// do stuff here
},
times: 10)

loop = (fn, times) =>
  loopUntil(fn, 0, times)

loopUntil = (fn, iteration, times) =>
  if iteration < times {
    fn(iteration)
    loopUntil(fn, iteration + 1, times)
  }
Syntactic Sugar
// <stream object>[<predicate>,<time>:<time>,<list of strings>]
// and here's an example
from(bucket:"foo")[_measurement == "cpu" and _field == "usage_user",
2018-11-07:2018-11-08,
["_measurement", "_time", "_value", "_field"]]

// which is shorthand for
from(bucket:"foo")
|> filter(fn: (row) => row._measurement == "cpu" and row._field == "usage_user")
|> range(start: 2018-11-07, stop: 2018-11-08)
|> keep(columns: ["_measurement", "_time", "_value", "_field"])
from(bucket:"foo")[_measurement == "cpu"]
// notice the trailing commas can be left off
from(bucket: "foo")
|> filter(fn: (row) => row._measurement == "cpu")
|> last()
from(bucket:"foo")["some tag" == "asdf",,]
from(bucket: "foo")
|> filter(fn: (row) => row["some tag"] == "asdf")
|> last()
from(bucket:"foo")[foo=="bar",-1h]
from(bucket: "foo")
|> filter(fn: (row) => row.foo == "bar")
|> range(start: -1h)
bucket = "foo"
start = -3
from(bucket: bucket)
|> range(start: start, end: -1)
// shortcut if the variable name is the same as the argument
from(bucket)
|> range(start, end: -1)
Flux Office Hours Tomorrow
InfluxDB 2.0 Status
Thank you
Paul Dix

paul@influxdata.com

@pauldix
