
Extending Flux to Support Other Databases and Data Stores | Adam Anthony | InfluxData


Flux was designed to work across databases and data stores. In this talk, Adam will walk through the steps necessary for you to add your own database or custom data source to Flux.



  1. Adam Anthony, Software Engineer, Flux team – Extending Flux: Writing Custom Data Sources
  2. © InfluxData. All rights reserved. Why Custom Data Sources
     import "sql"

     phBalances = from(bucket: "PH_Balances")
       |> range(start: now- )
       |> filter(fn: (r) => r._measurement == "water_sensor")

     deviceIds = sql.from(driverName: "mysql",
       dataSourceName: "username:password@tcp(host)/dbname",
       query: "SELECT customer_name, customer_location, device_id AS device " +
              "FROM DeviceTable WHERE customer_name = 'InfluxData'")

     join(tables: {t1: phBalances, t2: deviceIds}, on: ["device"])
       |> aggregateWindow(every: v.windowPeriod, fn: mean)
       |> yield(name: "mean")
  3. © InfluxData. All rights reserved. Overview ● Pre-Demo ● Flux Extension Landscape ● Flux Data Model ● Creating tables from scratch ● Source Decoder Interface ● Row Reader Interface ● Example: PromQL Ephemeral Scraper
  4. © InfluxData. All rights reserved. Pre-Demo ● Counting Table Generator ● 2 columns, N rows ● Each row has value += 1
  5. © InfluxData. All rights reserved. Flux Extension Landscape ● Write pure Flux functions ● Port scalar constants and functions from a Go library (see the sketch after this slide) ● Stream transformations – Go API for processing streams of tables – Used when pure Flux cannot work ● Data Sources/Sinks – If there's a Go library for it, you can connect it to Flux
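     The "port scalar constants" case above can be shown in a few lines of Go. This is a sketch, not code from the talk: the "mymath" package name is hypothetical, but flux.RegisterPackageValue and values.NewFloat are the same hooks used to install the demo source later in this deck.

      package mymath

      import (
          "math"

          "github.com/influxdata/flux"
          "github.com/influxdata/flux/values"
      )

      func init() {
          // Expose math.Pi to Flux scripts as mymath.pi.
          flux.RegisterPackageValue("mymath", "pi", values.NewFloat(math.Pi))
      }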
  6. © InfluxData. All rights reserved. Query Pipeline (SOURCE → SINK): from(bucket: "my-bucket") → range(start: -5m) → filter(fn: (r) => r._measurement == "m" and r._field == "f1") → window(every: 5m) → mean() → to(bucket: "my-output"). Sinks may be ● custom writes to persistent storage ● client output (e.g. CSV encoding, JSON encoding, charts, http, etc.)
  7. © InfluxData. All rights reserved. Columnar Data Stores – Data Requirements: Logical – joins vs. aggregates (e.g. sales leads vs. avg host CPU), illustrated with a sample table of _time, Region, Host, CPU, and Mem columns; Physical – disk + memory locality, cache locality
  8. © InfluxData. All rights reserved. Flux Data Model
     An infinite stream… of finite tables… identified by the GroupKey.

     GroupKey[Host=H1A, _field=cpu]
       Time   Host  _field  Value
       10:45  H1A   cpu     25.9
       11:00  H1A   cpu     20.0

     GroupKey[Host=H1A, _field=mem]
       Time   Host  _field  Value
       11:45  H1A   mem     18.0
       12:00  H1A   mem     34.8

     ...
  9. © InfluxData. All rights reserved. Stream Transformation Process
     |> aggregateWindow(every: 60m, fn: sum)

     Input tables:
       GroupKey[Host=H1A]
         Time   Host  Value
         10:30  H1A   25.9
         10:45  H1A   20.0
       GroupKey[Host=H2B]
         Time   Host  Value
         11:45  H2B   18.0
         11:50  H2B   34.8

     Output tables:
       GroupKey[Host=H1A]
         Time   Host  Value
         10:00  H1A   45.9
       GroupKey[Host=H2B]
         Time   Host  Value
         11:00  H2B   52.8

     For each table:
       1. Convert incoming rows into 0 or more outgoing rows
       2. Sort outgoing rows into tables
  10. © InfluxData. All rights reserved. CODE ALERT – Counting Table Demo: ● Written in Go ● Will learn: ○ how to create a table ○ how to install the source as a Flux function
  11. © InfluxData. All rights reserved. Building Tables
      func BuildStaticTable(keyColumn, valueColumn, key string, nrows int64, a execute.Administration) (flux.Table, error) {
          // 1. Group keys help identify a table.
          groupKey := execute.NewGroupKeyBuilder(nil)
          groupKey.AddKeyValue(keyColumn, values.NewString(key))
          gk, err := groupKey.Build()
          if err != nil {
              return nil, err
          }
          // 2. Create a new table builder indexed by the group key.
          builder := execute.NewColListTableBuilder(gk, a.Allocator())
          if _, err = builder.AddCol(flux.ColMeta{Label: keyColumn, Type: flux.TString}); err != nil {
              return nil, err
          }
          if _, err = builder.AddCol(flux.ColMeta{Label: valueColumn, Type: flux.TFloat}); err != nil {
              return nil, err
          }
          // 3. Add nrows rows of data by appending one value to each column per row.
          for i := 0; i < int(nrows); i++ {
              if err = builder.AppendString(0, key); err != nil {
                  return nil, err
              }
              if err = builder.AppendFloat(1, float64(i)); err != nil {
                  return nil, err
              }
          }
          return builder.Table()
      }
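     To tie this back to the Flux Data Model slide: a stream is just many such tables, each identified by a different group key. A minimal sketch, assuming a hypothetical buildTablesPerHost helper that reuses BuildStaticTable above:

      // buildTablesPerHost builds one counting table per host, each with its
      // own group-key value, mirroring the "stream of finite tables" picture.
      func buildTablesPerHost(hosts []string, nrows int64, a execute.Administration) ([]flux.Table, error) {
          tables := make([]flux.Table, 0, len(hosts))
          for _, h := range hosts {
              tbl, err := BuildStaticTable("Host", "Value", h, nrows, a)
              if err != nil {
                  return nil, err
              }
              tables = append(tables, tbl)
          }
          return tables, nil
      }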
  12. © InfluxData. All rights reserved. Implementing Custom Data Sources
      type SourceDecoder interface {
          // Create a connection to a data source.
          Connect(ctx context.Context) error
          // Fetch all data for a single, complete table.
          Fetch(ctx context.Context) (bool, error)
          // Given the data fetched above, decode it into a flux table.
          Decode(ctx context.Context) (flux.Table, error)
          Close() error
      }
  13. © InfluxData. All rights reserved. A First Decoder
      type StaticDecoder struct {
          administration execute.Administration
          keyColumn      string
          valueColumn    string
          key            string
          nrows          int64
      }

      func (s *StaticDecoder) Connect(ctx context.Context) error {
          return nil
      }

      // Nothing to fetch: the single table is produced directly in Decode.
      func (s *StaticDecoder) Fetch(ctx context.Context) (bool, error) {
          return false, nil
      }

      func (s *StaticDecoder) Decode(ctx context.Context) (flux.Table, error) {
          return BuildStaticTable(s.keyColumn, s.valueColumn, s.key, s.nrows, s.administration)
      }

      func (s *StaticDecoder) Close() error {
          return nil
      }
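     A second, hedged example sketched under the same pattern as StaticDecoder (Fetch returns false because all of the data fits in one table): a hypothetical MapDecoder that turns an in-memory Go map into a single Flux table, using only the group-key and builder calls already shown on the Building Tables slide.

      // MapDecoder decodes a map[string]float64 into one Flux table. The names
      // MapDecoder, "source", "name", and "value" are made up for illustration.
      type MapDecoder struct {
          administration execute.Administration
          data           map[string]float64
      }

      func (m *MapDecoder) Connect(ctx context.Context) error { return nil }

      // Everything fits in a single table, so there is never more to fetch.
      func (m *MapDecoder) Fetch(ctx context.Context) (bool, error) { return false, nil }

      func (m *MapDecoder) Decode(ctx context.Context) (flux.Table, error) {
          // One group key for the whole table, mirroring BuildStaticTable.
          gkb := execute.NewGroupKeyBuilder(nil)
          gkb.AddKeyValue("source", values.NewString("map"))
          gk, err := gkb.Build()
          if err != nil {
              return nil, err
          }
          builder := execute.NewColListTableBuilder(gk, m.administration.Allocator())
          if _, err := builder.AddCol(flux.ColMeta{Label: "source", Type: flux.TString}); err != nil {
              return nil, err
          }
          if _, err := builder.AddCol(flux.ColMeta{Label: "name", Type: flux.TString}); err != nil {
              return nil, err
          }
          if _, err := builder.AddCol(flux.ColMeta{Label: "value", Type: flux.TFloat}); err != nil {
              return nil, err
          }
          for k, v := range m.data {
              if err := builder.AppendString(0, "map"); err != nil {
                  return nil, err
              }
              if err := builder.AppendString(1, k); err != nil {
                  return nil, err
              }
              if err := builder.AppendFloat(2, v); err != nil {
                  return nil, err
              }
          }
          return builder.Table()
      }

      func (m *MapDecoder) Close() error { return nil }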
  14. © InfluxData. All rights reserved. Install a Source Decoder: OpSpec
      // Unique name for mapping.
      const FromStaticKind = "fromStatic"

      // Storing user params that are declared elsewhere: the op spec
      // represents what the user has told us.
      type FromStaticOpSpec struct {
          nrows int64
      }

      func createFromStaticOpSpec(args flux.Arguments, administration *flux.Administration) (flux.OperationSpec, error) {
          spec := new(FromStaticOpSpec)
          // Reading flux.Arguments and extracting params.
          var err error
          if spec.nrows, err = args.GetRequiredInt("nrows"); err != nil {
              return nil, err
          }
          return spec, nil
      }

      func newFromStaticOp() flux.OperationSpec {
          return new(FromStaticOpSpec)
      }

      func (s *FromStaticOpSpec) Kind() flux.OperationKind {
          return FromStaticKind
      }

      ● OpSpec: collect user parameters and store them for execution
  15. © InfluxData. All rights reserved. Install a Source Decoder: Procedure Spec
      type FromStaticProcedureSpec struct {
          plan.DefaultCost
          nrows int64
      }

      // Use the op spec to initialize the procedure spec.
      func newFromStaticProcedure(qs flux.OperationSpec, pa plan.Administration) (plan.ProcedureSpec, error) {
          spec, ok := qs.(*FromStaticOpSpec)
          if !ok {
              return nil, fmt.Errorf("invalid spec type %T", qs)
          }
          return &FromStaticProcedureSpec{nrows: spec.nrows}, nil
      }

      func (s *FromStaticProcedureSpec) Kind() plan.ProcedureKind {
          return FromStaticKind
      }

      func (s *FromStaticProcedureSpec) Copy() plan.ProcedureSpec {
          ns := new(FromStaticProcedureSpec)
          ns.nrows = s.nrows
          return ns
      }

      ● The procedure spec is the internal representation of the entire file used by the planner
  16. © InfluxData. All rights reserved. Install a Source Decoder: Create Source
      // Uses a procedure spec to create a source object for the flux runtime.
      func createFromStaticSource(prSpec plan.ProcedureSpec, dsid execute.DatasetID, a execute.Administration) (execute.Source, error) {
          spec, ok := prSpec.(*FromStaticProcedureSpec)
          if !ok {
              return nil, fmt.Errorf("invalid spec type %T", prSpec)
          }
          decoder := StaticDecoder{
              administration: a,
              keyColumn:      "T1",
              valueColumn:    "V1",
              key:            "tag1",
              nrows:          spec.nrows,
          }
          return execute.CreateSourceFromDecoder(&decoder, dsid, a)
      }
  17. © InfluxData. All rights reserved. Install a Source Decoder: Register Constructors
      func init() {
          fromStaticSignature := semantic.FunctionPolySignature{
              // user params
              Parameters: map[string]semantic.PolyType{
                  "nrows": semantic.Int,
              },
              Required: semantic.LabelSet{"nrows"},
              Return:   flux.TableObjectType,
          }
          // Tell the flux runtime about the objects that we're creating.
          flux.RegisterPackageValue("static", "from",
              flux.FunctionValue(FromStaticKind, createFromStaticOpSpec, fromStaticSignature))
          flux.RegisterOpSpec(FromStaticKind, newFromStaticOp)
          plan.RegisterProcedureSpec(FromStaticKind, newFromStaticProcedure, FromStaticKind)
          execute.RegisterSource(FromStaticKind, createFromStaticSource)
      }
  18. © InfluxData. All rights reserved. Final Steps 1. Flux package file: flux/stdlib/static/static.flux 2. Put `from.go` in the same dir 3. From the Flux root dir, run 'make' 4. Build the Flux/InfluxDB binary
  19. © InfluxData. All rights reserved. Data!
  20. © InfluxData. All rights reserved. Demo Code ● static.from: https://github.com/influxdata/flux/tree/demo/influxdaysSFO2019 ● prometheus.scrape: https://github.com/influxdata/flux/blob/cb2d438ac7881c794c8a2982618e81503b4f2781/stdlib/experimental/prometheus/scrape.go
  21. © InfluxData. All rights reserved. Not Covered ● Writing data out (see the sketch after this slide) ○ Modeled after a normal Flux function that sends data out as a side effect ○ For each table: i. Collect rows from a flux.Table into a write request for your data destination ii. Send the request ● Alternate interface: RowIterator
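     A hedged Go sketch of that per-table write-out pattern (not the actual implementation behind flux's to functions): it walks one table's rows with flux.Table.Do and execute.ValueForRow, renders each row as a string, and hands the batch to a caller-supplied send function. The writeTable name, the "label=value" payload, and the send callback are illustrative assumptions; a real sink would switch on the column types and build its destination's wire format.

      import (
          "fmt"
          "strings"

          "github.com/influxdata/flux"
          "github.com/influxdata/flux/execute"
      )

      // writeTable collects every row of tbl into a simple "label=value" string
      // and then sends the whole batch once, matching steps i and ii above.
      func writeTable(tbl flux.Table, send func(rows []string) error) error {
          cols := tbl.Cols()
          var rows []string
          if err := tbl.Do(func(cr flux.ColReader) error {
              for i := 0; i < cr.Len(); i++ {
                  fields := make([]string, 0, len(cols))
                  for j := range cols {
                      // ValueForRow reads the value at row i, column j of the buffer.
                      v := execute.ValueForRow(cr, i, j)
                      fields = append(fields, fmt.Sprintf("%s=%v", cols[j].Label, v))
                  }
                  rows = append(rows, strings.Join(fields, ","))
              }
              return nil
          }); err != nil {
              return err
          }
          return send(rows)
      }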
  22. © InfluxData. All rights reserved. If you can connect to a data source in Go, you can write a source/sink for it in Flux. ● Currently, we can read from: ○ InfluxDB ○ MySQL ○ Postgres ○ Bigtable ○ CSV ○ Prometheus ● Currently, we can write to: ○ InfluxDB ○ MySQL ○ Postgres ○ MQTT ○ Kafka Contributions are welcome!
  23. Thank You!
