re:dash is a tool for sharing SQL queries, visualizing results, and scheduling automated refreshes. It supports connecting to various data sources, provides a low-cost option on AWS, and enables caching of query results for improved performance. Key features include sharing queries with team members, running queries on a schedule, connecting to backends like PostgreSQL, and programming visualizations and parameters through the HTTP API. It also focuses on security features such as authentication, authorization, auditing, and SSL encryption.
Using Redash for SQL Analytics on Databricks (Databricks)
This talk gives a brief overview, with a demo, of performing SQL analytics with Redash and Databricks. We will introduce some of the new features coming as part of our integration with Databricks following the acquisition earlier this year, along with a demo of the other Redash features that enable a productive SQL experience on top of Delta Lake.
Quick iteration and reusability of metric calculations for powerful data exploration.
At Looker, we want to make it easier for data analysts to service the needs of the data-hungry users in their organizations. We believe too much of their time is spent responding to ad hoc data requests and not enough time is spent building, experimenting, and embellishing a robust model of the business. Worse yet, business users are starving for data, but are forced to make important decisions without access to data that could guide them in the right direction. Looker addresses both of these problems with a YAML-based modeling language called LookML.
This paper walks through a number of data modeling examples, demonstrating how to use LookML to generate, alter, and update reports—without the need to rewrite any SQL. With LookML, you build your business logic, defining your important metrics once and then reusing them throughout a model—allowing rapid iteration of data exploration while also ensuring the accuracy of the generated SQL. Small updates are quick and can be made immediately available to business users to manipulate, iterate, and transform in any way they see fit.
Architecting for the Cloud using NetflixOSS - Codemash Workshop (Sudhir Tonse)
Cloud development is inherently different from data center development. Understanding those differences, and architecting for them, is critical to successful cloud solutions. In this workshop, we will describe Netflix OSS platform components and show you how you can piece them together to build your own fault-tolerant REST services. These include Hystrix, Ribbon, Eureka, and Archaius. In this hands-on lab, you will both learn the benefits of each of these services and use them in a sample application (in a test account). If you want to get things running in your own account, you may want to attend the afternoon session (Setting up your environment for the AWS cloud).
DAX and Power BI Training - 004 Power Query (Will Harvey)
In this session we introduce Power Query for Excel, the data sources you can connect to, and the transformations you can apply. We also introduce the more advanced topic of writing your own M functions.
Optimize the performance, cost, and value of databases (IDERA Software)
Today’s businesses run on data, making it essential for them to access data quickly and easily. This requirement means that databases must run efficiently at all times, but keeping a database performing at its best remains a challenging task. Fortunately, database administrators (DBAs) can adopt many practices to achieve this goal, saving time and money.
Apache Iceberg: An Architectural Look Under the Covers (ScyllaDB)
Data lakes have been built with a desire to democratize data - to allow more and more people, tools, and applications to make use of data. A key capability needed to achieve this is hiding the complexity of underlying data structures and physical data storage from users. The de facto standard, the Hive table format, addresses some of these problems but falls short at data, user, and application scale. So what is the answer? Apache Iceberg.
The Apache Iceberg table format is now in use and contributed to by many leading tech companies, including Netflix, Apple, Airbnb, LinkedIn, Dremio, Expedia, and AWS.
Watch Alex Merced, Developer Advocate at Dremio, as he describes the open architecture and performance-oriented capabilities of Apache Iceberg.
You will learn:
• The issues that arise when using the Hive table format at scale, and why we need a new table format
• How a straightforward, elegant change in table format structure has enormous positive effects
• The underlying architecture of an Apache Iceberg table, how a query against an Iceberg table works, and how the table’s underlying structure changes as CRUD operations are done on it
• The resulting benefits of this architectural design
In this session, we discussed the end-to-end working of Apache Airflow, focusing mainly on the why, the what, and the how. It covers DAG creation and implementation, the architecture, and the pros and cons. It also covers how a DAG is created to schedule a job and the steps required to build that DAG with a Python script, finishing with a working demo.
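As a rough illustration of that last point, here is a minimal sketch of a DAG defined in a Python script. The DAG id, schedule, and task body are placeholders invented for this example, and the imports follow the classic Airflow operator style, so treat it as a sketch rather than the session's exact code.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    # Placeholder for the real job logic (extract, transform, load).
    print("run your ETL step here")


with DAG(
    dag_id="example_daily_job",      # hypothetical name
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",      # run the job once per day
    catchup=False,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)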
Organizations need to gain insight and knowledge from a growing number of Internet of Things (IoT) devices, APIs, and clickstreams comprising unstructured and log data sources. However, organizations are often limited by legacy data warehouses and ETL processes that were designed for transactional data. In this session, we’ll introduce the key ETL features of AWS Glue through use cases ranging from scheduled nightly data warehouse loads to near real-time, event-driven ETL flows for your data lake. We’ll also discuss how to build scalable, efficient and serverless ETL pipelines using AWS Glue.
OpenStack is an open source cloud project and community with broad commercial and developer support. OpenStack is currently developing two interrelated technologies: OpenStack Compute and OpenStack Object Storage. OpenStack Compute is the internal fabric of the cloud, creating and managing large groups of virtual private servers; OpenStack Object Storage is software for creating redundant, scalable object storage using clusters of commodity servers to store terabytes or even petabytes of data. In this tutorial, Bret Piatt will explain how to deploy OpenStack Compute and Object Storage, including an overview of the architecture and technology requirements.
The ELK Stack workshop covers real-world use cases and works with the participants to implement them. This includes an Elastic overview, Logstash configuration, creation of dashboards in Kibana, guidelines and tips on processing custom log formats, designing a system to scale, choosing hardware, and managing the lifecycle of your logs.
Building Data Lakes with Apache Airflow (Gary Stafford)
Build a simple Data Lake on AWS using a combination of services, including Amazon Managed Workflows for Apache Airflow (Amazon MWAA), AWS Glue, AWS Glue Studio, Amazon Athena, and Amazon S3.
Blog post and link to the video: https://garystafford.medium.com/building-a-data-lake-with-apache-airflow-b48bd953c2b
First introduced with the Analytics Platform System (APS), PolyBase simplifies management and querying of both relational and non-relational data using T-SQL. It is now available in both Azure SQL Data Warehouse and SQL Server 2016. The major features of PolyBase include the ability to do ad-hoc queries on Hadoop data and the ability to import data from Hadoop and Azure blob storage to SQL Server for persistent storage. A major part of the presentation will be a demo on querying and creating data on HDFS (using Azure Blobs). Come see why PolyBase is the “glue” to creating federated data warehouse solutions where you can query data as it sits instead of having to move it all to one data platform.
Dynamic DDL: Adding Structure to Streaming Data on the Fly with David Winters... (Databricks)
At the end of the day, the only thing that data scientists want is tabular data for their analysis. They do not want to spend hours or days preparing data. How does a data engineer handle the massive amount of data that is being streamed at them from IoT devices and apps, and at the same time add structure to it so that data scientists can focus on finding insights and not on preparing data? By the way, you need to do this within minutes (sometimes seconds). Oh… and there are a lot of other data sources that you need to ingest, and the current providers of data are changing their structure.
GoPro has massive amounts of heterogeneous data being streamed from their consumer devices and applications, and they have developed the concept of “dynamic DDL” to structure their streamed data on the fly using Spark Streaming, Kafka, HBase, Hive and S3. The idea is simple: Add structure (schema) to the data as soon as possible; allow the providers of the data to dictate the structure; and automatically create event-based and state-based tables (DDL) for all data sources to allow data scientists to access the data via their lingua franca, SQL, within minutes.
Delivering data and analytics to your customers should be straightforward. The Looker Data Platform allows for easy access to data through a robust API and embeddable charts, tables and dashboards.
Learn how Looker can help you:
- Embed charts, tables and dashboards into applications
- Use the Looker API to deliver data to applications, including Slack
- Build a customer portal to deliver more value to your customers
- Design the above with version control through Git, along with clustering and multi-server setup as needed
Databricks is a Software-as-a-Service-like experience (or Spark-as-a-service): a tool for curating and processing massive amounts of data, developing, training, and deploying models on that data, and managing the whole workflow process throughout the project. It is for those who are comfortable with Apache Spark, as it is 100% based on Spark and is extensible with support for Scala, Java, R, and Python alongside Spark SQL, GraphX, Streaming, and the Machine Learning Library (MLlib). It has built-in integration with many data sources, has a workflow scheduler, allows for real-time workspace collaboration, and has performance improvements over traditional Apache Spark.
Getting Maximum Performance from Amazon Redshift (DAT305) | AWS re:Invent 2013 (Amazon Web Services)
Get the most out of Amazon Redshift by learning about cutting-edge data warehousing implementations. Desk.com, a Salesforce.com company, discusses how they maintain a large concurrent user base on their customer-facing business intelligence portal powered by Amazon Redshift. HasOffers shares how they load 60 million events per day into Amazon Redshift with a 3-minute end-to-end load latency to support ad performance tracking for thousands of affiliate networks. Finally, Aggregate Knowledge discusses how they perform complex queries at scale with Amazon Redshift to support their media intelligence platform.
To create a project with Node.js, either for mobile applications that access data or for various client-facing websites that require access to data, you need to build a basic API. These projects are mostly built with Express.js and a MongoDB database. In this article we will cover the basics of Node.js, Express middleware, and API creation (RESTful web services) using Node.js, with one basic example.
Strategies and Tips for Building Enterprise Drupal Applications - PNWDS 2013 (Mack Hardy)
Mack Hardy, Dave Tarc, and Damien Norris of Affinity Bridge presenting at the Pacific Northwest Drupal Summit in Vancouver, October 5th, 2013. The presentation walks through management of releases, deployment strategies, and build strategies with Drupal Features, Git, and make files. Performance and caching are also covered, as well as specific tips and tricks for configuring Apache and managing private files.
Learn the best practices and advanced techniques.
* Passing data to client libs, use the data attribute
* Expression contexts, choose wisely
* Use statement best practices, what fits best your needs
* Template & Call statements advanced usage
* Parameters for sub-resources, featuring resource attributes and synthetic resources
Spring Boot is a pre-configured, pre-sugared suite of frameworks and technologies that reduces boilerplate configuration, giving you the shortest path to a running Spring web application with the smallest amount of code and configuration out of the box.
Experiences using CouchDB inside Microsoft's Azure team (Brian Benz)
Co-presented with Will Perry (@willpe). Real-world experiences using CouchDB inside Microsoft, and also how to get started with CouchDB on Microsoft Azure.
TYPO3 v8 is one of the most important LTS version releases in TYPO3 history. You may call it the #NextGenerationCMS (Content Management System). It gives TYPO3 the long-awaited major boost in functionality and features. In this blog, you will find details about the new improvements & features. We hope this will help #Developers, #Integrators, #Editors & #Administrators to understand #TYPO3 8 in depth. Check out the AtoZ details at http://www.nitsan.in/blog/post/atoz-about-typo3-v8-cms/
Mastering Test Automation: How To Use Selenium Successfully (SpringPeople)
This slide deck covers identifying what to test and choosing the best language for automation. Learn to write maintainable and reusable Selenium tests and to add UI layout tests to your automation using the Galen framework. The deck also covers reporting structure using external plugins, an illustration of cross-browser testing (running Selenium Grid with Docker), and the code repository (Git) and Jenkins CI tool.
PHP owes its appeal and popularity to its low barriers to entry. Anyone with access to a basic LAMP stack can get started in just a few hours, but if you want to write a production-level application, you need the right tools. The PHP community today relies heavily on Composer and PHPUnit as tools and PSRs as the common dialect. npm is the unavoidable front-end counterpart to Composer. Git, though not specific to PHP, is critical to developing a maintainable project. This talk will guide you through these topics so you have a basic understanding of the modern PHP developer’s toolbox.
PVS-Studio: analyzing pull requests in Azure DevOps using self-hosted agents (Andrey Karpov)
Static code analysis is most effective when changing a project, as errors are always more difficult to fix in the future than at an early stage. We continue expanding the options for using PVS-Studio in continuous development systems. This time, we'll show you how to configure pull request analysis using self-hosted agents in Microsoft Azure DevOps, using the example of the Minetest game.
GraphRAG is All You need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality (Inflectra)
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
JMeter webinar - integration with InfluxDB and Grafana (RTTS)
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Neuro-symbolic is not enough, we need neuro-*semantic* (Frank van Harmelen)
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
The Art of the Pitch: WordPress Relationships and Sales (Laura Byrne)
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Smart TV Buyer Insights Survey 2024 by 91mobiles (91mobiles)
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... (Jeffrey Haguewood)
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
6. Sharing queries
You can share SQL between members.
We use Presto, but ANSI SQL syntax is extremely hard, so
it helps that members can review each other's SQL.
You can reuse another member's query:
forking a query lets you edit its SQL.
9. Pluggable backends
re:dash supports a great variety of data sources.
If you write your own query runner, you can connect to any kind of data source.
Query runners are written in Python.
See:
https://github.com/EverythingMe/redash/tree/master/redash
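For illustration, a minimal sketch of what a custom query runner might look like. The class is hypothetical, and the base-class interface varies across re:dash versions, so check redash/query_runner/__init__.py in the repository above rather than treating this as drop-in code.

import json

from redash.query_runner import BaseQueryRunner, register


class Echo(BaseQueryRunner):
    # Hypothetical runner that echoes the query text back as a one-row result.

    @classmethod
    def configuration_schema(cls):
        # JSON Schema for the connection settings shown in the admin UI.
        return {
            "type": "object",
            "properties": {"greeting": {"type": "string"}},
        }

    def run_query(self, query):
        # A query runner returns a (json_data, error) pair; json_data uses
        # the same columns/rows results format that re:dash visualizes.
        data = {
            "columns": [{"name": "query", "type": "string", "friendly_name": "Query"}],
            "rows": [{"query": query}],
        }
        return json.dumps(data), None


register(Echo)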
11. Low cost
Business intelligence tools are usually very expensive.
If you use AWS, a t2.micro should be enough
for small deployments.
We have registered 250 queries, and the instance type we use is an m3.medium:
3.75 GB of memory,
1 vCPU.
13. Easy updates
Download the release.
The deploy runs the DB migration.
It is a one-command update:
https://gist.githubusercontent.com/arikfr/440d1403b4aeb76e
fab -Hredash01 -uubuntu deploy_latest_release
Clearing the cache after an upgrade is recommended.
14. Caching
Query results are cached in PostgreSQL,
so unnecessary SQL is not re-run.
Kick off a query, come into the office in the morning, and the results are there;
no need to wait!
And since Google BigQuery bills per query, one query's cached results
can be shared instead of re-running it.
18. We love OSS ♥
We can read the code.
We can trust the code.
If we do not understand a behavior, we read the code.
There are many contributions on GitHub:
330 closed pull requests.
21. GET parameters
You can write SQL with Mustache templates:
select * from user where id = {{id}}
The parameter is filled in from the URL, e.g. p_id=1:
http://demo.redash.io/queries/146?p_id=1
Strings that start with http become links.
22. HTTP API
re:dash has a REST API.
API keys can be issued per user and per query.
curl 'https://redash/api/queries/194/results.json?api_key=XXXXX'
Users can change their own API key (ver 0.8).
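The same call from Python, as a sketch: the host, query id, and key are the placeholders from the curl line above, and the exact response shape may differ between re:dash versions.

import requests

REDASH_URL = "https://redash"   # placeholder host
QUERY_ID = 194
API_KEY = "XXXXX"               # per-user or per-query key

resp = requests.get(
    "{}/api/queries/{}/results.json".format(REDASH_URL, QUERY_ID),
    params={"api_key": API_KEY},
)
resp.raise_for_status()
# In the versions we have seen, rows live under query_result -> data -> rows.
for row in resp.json()["query_result"]["data"]["rows"]:
    print(row)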
23. Visualize your API
You can visualize your own JSON API.
See the results format:
http://docs.redash.io/en/latest/dev/results_format.html
You can also visualize the output of your Python code by printing the same format.
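A small sketch of that format, following the results_format document above; the column names and values here are invented for illustration.

import json

# The results format is a JSON object with "columns" and "rows".
result = {
    "columns": [
        {"name": "date", "type": "date", "friendly_name": "Date"},
        {"name": "users", "type": "integer", "friendly_name": "Users"},
    ],
    "rows": [
        {"date": "2015-10-01", "users": 42},
        {"date": "2015-10-02", "users": 57},
    ],
}
print(json.dumps(result))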
28. Permission management
You can set permissions for each member.
Permissions are granted to groups,
and members belong to groups.
A member can also belong to multiple groups.
30. Google Apps authentication
You can sign in with your Google account.
It is better to delete the default admin/admin account.
You can disable password login as below:
export REDASH_PASSWORD_LOGIN_ENABLED=false
31. SSL (https) connections
You can connect over SSL.
You need to modify the nginx configuration.
See: http://docs.redash.io/en/latest/misc/ssl.html
35. Our use case
Action history and log data are stored in Redshift;
user and master data are in MySQL.
We complete data analytics with SQL alone as far as possible;
I want to avoid writing code for analytics wherever I can.
We often write complicated SQL.
38. Presto
An open-source distributed SQL query engine.
Presto can join multiple data sources;
we join MySQL and Redshift with Presto.
ANSI SQL syntax.
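A sketch of what such a cross-source join can look like, issued here through the pyhive client (one of several Presto clients). The coordinator host and the catalog/schema/table names are placeholders: Presto exposes MySQL and Redshift as separate catalogs, so a single ANSI SQL statement can join across them.

from pyhive import presto

# Connect to the Presto coordinator (hypothetical host and user).
conn = presto.connect(host="presto-coordinator", port=8080, username="redash")
cursor = conn.cursor()

# Join a Redshift table against a MySQL table in one statement.
cursor.execute("""
    SELECT u.name, count(*) AS actions
    FROM redshift.logs.action_history AS a
    JOIN mysql.app.users AS u ON u.id = a.user_id
    GROUP BY u.name
""")
for row in cursor.fetchall():
    print(row)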
39. Prestogres
A PostgreSQL protocol gateway for Presto.
It rewrites queries on the PostgreSQL side before sending them on to Presto.
re:dash connects to Presto over the PostgreSQL protocol.
But the current version can also connect to Presto directly.