The document discusses making advanced analytics more accessible to companies. It proposes using BigQuery, Google's serverless data warehouse, which allows storing and querying large datasets cost-effectively without needing to manage infrastructure. Data is ingested into BigQuery from various sources via pipelines, and business analysts can then build custom reports and dashboards in tools like Tableau without requiring developers. BigQuery shortens the time to insight from days to minutes by handling the infrastructure challenges of large-scale analytics.
GDG DevFest Ukraine - Powering Interactive Data Analysis with Google BigQuery, by Márton Kodok
Every scientist who needs big data analytics to save millions of lives should have that power. Powering interactive data analysis requires a massive architecture and the know-how to build a fast real-time computing system. You will learn how Google BigQuery solves this problem by enabling super-fast SQL queries against petabytes of data using the processing power of Google’s infrastructure. After this session you will be able to work with BigQuery, do streaming inserts, write User Defined Functions in Javascript, and cover several use cases for the everyday developer: funnel analytics, behavioral analytics, exploring unstructured data. You will be able to run arbitrary queries on open data such as historical data about GitHub commits, Stack Overflow Q&A data, or analyzing Reddit comments to find out which books the community talks about.
Voxxed Days Cluj - Powering interactive data analysis with Google BigQuery, by Márton Kodok
Every company, no matter how far from tech it may seem, is evolving into a software company, and by extension a data company. For a small company it’s important to have access to modern big data tools without running a dedicated team for them.
VoxxedDays Bucharest 2017 - Powering interactive data analysis with Google Bi..., by Márton Kodok
Every scientist who needs big data analytics to save millions of lives should have that power. Complex interactive Big Data analytics solutions require a massive architecture and the know-how to build a fast real-time computing system. BigQuery solves this problem by enabling super-fast, SQL-like queries against petabytes of data using the processing power of Google’s infrastructure. We will cover its core features, working with BigQuery, streaming inserts, User Defined Functions in Javascript, and several use cases for the everyday developer: funnel analytics, behavioral analytics, exploring unstructured data.
Google BigQuery for Everyday Developer, by Márton Kodok
IV. IT&C Innovation Conference - October 2016 - Sovata, Romania
A. Every scientist who needs big data analytics to save millions of lives should have that power
Legacy systems don’t provide the power.
B. The simple fact is that you are brilliant but your brilliant ideas require complex analytics.
Traditional solutions are not applicable.
The Plan: have oversight over developments as they happen.
Goal: Store everything accessible by SQL immediately.
What is BigQuery?
Analytics-as-a-Service - Data Warehouse in the Cloud
Fully-Managed by Google (US or EU zone)
Scales into Petabytes
Ridiculously fast
Decent pricing (queries $5/TB, storage: $20/TB) *October 2016 pricing
100,000 rows/sec Streaming API
Open Interfaces (Web UI, BQ command line tool, REST, ODBC)
Familiar DB Structure (table, views, record, nested, JSON)
Convenience of SQL + Javascript UDFs (User Defined Functions; see the sketch below)
Integrates with Google Sheets + Google Cloud Storage + Pub/Sub connectors
Client libraries available in YFL (your favorite languages)
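A minimal sketch of the SQL + Javascript UDF combination listed above, in BigQuery standard SQL; the function name and the inline sample URLs are made up for illustration:

  -- Inline Javascript UDF: keep only the host part of a URL.
  CREATE TEMP FUNCTION cleanDomain(url STRING)
  RETURNS STRING
  LANGUAGE js AS r"""
    if (url === null) return null;
    return url.replace(/^https?:\/\//, '').split('/')[0];
  """;

  SELECT cleanDomain(url) AS domain, COUNT(*) AS hits
  FROM UNNEST([
    'https://example.com/a',
    'https://example.com/b',
    'http://other.org'
  ]) AS url
  GROUP BY domain
  ORDER BY hits DESC;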
Our benefits
no provisioning/deploy
no running out of resources
no more focus on large-scale execution plans
no need to re-implement tricky concepts (time windows / joining streams)
pay only for the columns referenced in your queries
run raw ad-hoc queries (whether by analysts, sales, or devs; see the example below)
no more throwing away, expiring, or aggregating old data.
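A minimal sketch of such a raw ad-hoc query, run against the public GitHub commits dataset mentioned in the abstracts; because BigQuery bills queries by the columns scanned, selecting a single field keeps the cost low (the bigquery-public-data table name is the published one, but verify it before running):

  -- Top committers in the public GitHub dataset.
  -- Only the author.name column is scanned, so only it is billed.
  SELECT
    author.name AS author_name,
    COUNT(*) AS commit_count
  FROM `bigquery-public-data.github_repos.commits`
  GROUP BY author_name
  ORDER BY commit_count DESC
  LIMIT 10;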
Fireside Chat with Bloor Research: State of the Graph Database Market 2020, by Cambridge Semantics
Sean Martin, CTO of Cambridge Semantics, Philip Howard, Research Director at Bloor Research and co-author of “Graph Database Market Update 2020”, and Steve Sarsfield, VP of Product at Cambridge Semantics, hold a fireside chat on the State of the Graph Database Market.
How to migrate to GraphDB in 10 easy-to-follow steps, by Ontotext
GraphDB Migration Service helps you institute Ontotext GraphDB™ as your new semantic graph database.
Designed to make your transition to GraphDB frictionless and resource-effective, GraphDB Migration Service provides the technical support and expertise you and your team of developers need to build a highly efficient architecture for semantic annotation, indexing, and retrieval of digital assets.
With GraphDB Migration Services you will:
* Optimize the cost of managing the RDF database;
* Improve the performance of your system;
* Get the maximum value from your semantic solution.
Risk Analytics Using Knowledge Graphs / FIBO with Deep Learning, by Cambridge Semantics
This EDM Council webinar, sponsored by Cambridge Semantics Inc. and featuring FI Consulting, explores the challenges common to a risk analytics pipeline, application of graph analytics to mortgage loan data and use cases in adjacent areas including customer service, collections, fraud and AML.
Big Query - Utilizing Google Data Warehouse for Media Analytics, by hafeeznazri
This talk covers an intermediate understanding of Google BigQuery and how Media Prima Digital utilizes BigQuery as its production data warehouse.
Connecta Event: BigQuery and Data Analysis with Google Cloud Platform, by ConnectaDigital
Advanced data analysis and “big data” have climbed the trend lists in recent years and are now among the most prioritized areas in the development of new services and products for leading companies in the digital landscape.
The information that accumulates in these systems as customer interactions are digitized has proven to be worth its weight in gold. It contains everything we need to know to make our business more effective.
Since the summer of 2013, Connecta has had an established partnership with Google to help our customers transition to cloud services, including for advanced data analysis. To get ready to help our customers, we have spent several years building up knowledge of and experience with Google’s various cloud products, such as BigQuery.
BigQuery is a cloud-based analytics tool and part of Google Cloud Platform. It makes it possible to run fast queries against enormous datasets in just seconds. BigQuery and Google Cloud Platform offer ready-made solutions for setting up and maintaining the infrastructure that makes all of this possible with simple means.
At Connecta Digital Consulting’s third event of the spring, we introduced our customers and partners to the concepts of data analysis and BigQuery.
The event covered the following points:
- Big Data and Business Intelligence (BI)
- “The Google Big Data tools” - success factors and how to get started
- Google Cloud Platform and how to carry out a successful cloud initiative
We presented cases and shared important lessons learned from our collaboration with Google and our customers.
Knowledge Graph Discussion: Foundational Capability for Data Fabric, Data Int..., by Cambridge Semantics
Knowledge graphs are on the rise at businesses hungry for greater automation and intelligence with use cases spreading across industries, from fraud detection and chatbots, to risk analysis and recommendation engines. In this webinar we dive into key technical and business considerations, use cases and best practices in leveraging knowledge graphs for better knowledge management.
Using a Semantic and Graph-based Data Catalog in a Modern Data Fabric, by Cambridge Semantics
Watch this webinar to learn about the benefits of using semantic and graph database technology to create a Data Catalog of all of an enterprise's data, regardless of source or format, as part of a modern IT or data management stack and an important step toward building an Enterprise Data Fabric.
Google Analytics and BigQuery, by Javier Ramirez, from datawaki
Google Analytics is great, but having access to your raw data and being able to query it any way you want is much more powerful. Learn how you can integrate Analytics and BigQuery to unleash all your data potential. Talk delivered at Conversion Thursday London
Webinar: Introducing the MongoDB Connector for BI 2.0 with Tableau, by MongoDB
Pairing your real-time operational data stored in a modern database like MongoDB with first-class business intelligence platforms like Tableau enables new insights to be discovered faster than ever before.
Many leading organizations already use MongoDB in conjunction with Tableau including a top American investment bank and the world’s largest airline. With the Connector for BI 2.0, it’s never been easier to streamline the connection process between these two systems.
In this webinar, we will create a live connection from Tableau Desktop to a MongoDB cluster using the Connector for BI. Once we have Tableau Desktop and MongoDB connected, we will demonstrate the visual power of Tableau to explore the agile data storage of MongoDB.
You’ll walk away knowing:
- How to configure MongoDB with Tableau using the updated connector
- Best practices for working with documents in a BI environment
- How leading companies are using big data visualization strategies to transform their businesses
Using Cloud Automation Technologies to Deliver an Enterprise Data Fabric, by Cambridge Semantics
The world of database management is changing. Cloud adoption is accelerating, offering a path for companies to increase their database capabilities while keeping costs in line. To help IT decision-makers survive and thrive in the cloud era, DBTA hosted this special roundtable webinar.
The 'macro view' on Big Query:
We started with an overview and some typical uses, then moved on to project hierarchy, access control, and security.
In the end we touch on tools and demos.
Webinar: Live Data Visualisation with Tableau and MongoDB, by MongoDB
MongoDB 3.2 introduces a new way for familiar Business Intelligence (BI) tools to access your real-time operational data – opening it up to data analysts and data scientists, enabling new insights to be discovered faster than ever before. Tableau accesses the JSON document data stored in MongoDB via this new BI connector. We will cover how the BI connector works by creating a relational view definition of a JSON data set that is then used to present a tabular SQL/ODBC interface to Tableau. Then we will set up a live connection from Tableau Desktop to the MongoDB Connector for BI. Once we have Tableau Desktop and MongoDB connected, we will demonstrate the visual power of Tableau to explore the agile data storage of MongoDB. This webinar will cover:
What is the MongoDB BI Connector?
Setting up a connection from Tableau to the MongoDB BI Connector.
How to perform data discovery with Tableau connected to live MongoDB data.
Publishing a Tableau Dashboard for sharing insights.
Data Lineage with Apache Airflow using Marquez, by Willy Lulciuc
The term data quality is used to describe the dependability, reliability, and usability of datasets. Data scientists and business analysts often determine the quality of a dataset by its trustworthiness and completeness. But what information might be needed to differentiate between useful vs noisy data? How quickly can data quality issues be identified and explored? More importantly, how can metadata enable data scientists to make better sense of the high volume of data within their organization from a variety of data sources?
With Airflow now ubiquitous for DAG orchestration, organizations increasingly depend on Airflow to manage complex inter-DAG dependencies and provide up-to-date runtime visibility into DAG execution. At WeWork, Airflow has quickly become an important component of our Data Platform, powering billing, space inventory, etc. But what effects (if any) would upstream DAGs have on downstream DAGs if dataset consumption was delayed? What alerting rules should be in place to notify downstream DAGs of possible upstream processing issues or failures?
At WeWork, we feel it’s critical that DAG metadata is collected, maintained, and shared across the organization. This investment in metadata enables:
● Data lineage
● Data governance
● Data discovery
In this talk, we introduce Marquez: an open source metadata service for the collection, aggregation, and visualization of a data ecosystem’s metadata. We will demonstrate how metadata management with Marquez helps maintain inter-DAG dependencies, catalog historical runs of DAGs, and minimize data quality issues.
Denodo DataFest 2016: Comparing and Contrasting Data Virtualization With Data..., by Denodo
Watch the full session: Denodo DataFest 2016 sessions: https://goo.gl/Bvmvc9
Data prep and data blending are terms that have come to prominence over the last year or two. On the surface, they appear to offer functionality similar to data virtualization…but there are important differences!
In this session, you will learn:
• How data virtualization complements or contrasts technologies such as data prep and data blending
• Pros and cons of functionality provided by data prep, data catalog and data blending tools
• When and how to use these different technologies to be most effective
This session is part of the Denodo DataFest 2016 event. You can also watch more Denodo DataFest sessions on demand here: https://goo.gl/VXb6M6
Graph-driven Data Integration: Accelerating and Automating Data Delivery for ..., by Cambridge Semantics
In our webinar "A Data Fabric Market Update with Guest Speaker, VP, Principal Analyst Noel Yuhanna" Ben Szekely, Cambridge Semantics’ Co-founder and SVP of Field Operations, and guest speaker, Noel Yuhanna, VP and Principal Analyst at Forrester and author of the “The Forrester Wave™: Enterprise Data Fabric, Q2 2020”, discuss the state of the Data Fabric Market. These are Ben's slides from that webinar.
Enterprise Reporting with MongoDB and JasperSoft, by MongoDB
Presented by Daniel Roberts, Senior Solutions Architect at MongoDB, at the recent JasperWorld event during the London leg of their current European city tour.
About the Speaker, Daniel Roberts:
Prior to MongoDB, Daniel worked at Oracle for 11 years in a number of different positions, including on Oracle's middleware technologies and strategy. Prior roles include consulting, product management, business development and, more recently, solution architecture for financial services. Daniel has also worked for Novell, ICL, and as a freelance contractor. He has a degree in Computer Science from Nottingham Trent University in the UK.
Graph Data: a New Data Management Frontier, by Demai Ni
Graph Data: a New Data Management Frontier -- Huawei’s view and Call for Collaboration by Demai Ni:
Huawei provides enterprise databases and is actively exploring the latest technology to provide an end-to-end data management solution in the cloud. We are looking to bridge classic RDBMS to graph databases on a distributed platform.
BigQuery ML - Machine learning at scale using SQL, by Márton Kodok
With BigQuery ML, you can build machine learning models without leaving the data warehouse environment and train them on massive datasets. We are going to demonstrate how to build, train, evaluate, and predict with your own scalable machine learning models using standard SQL in Google BigQuery.
We will see how we can use the CREATE MODEL SQL syntax to build different models, such as:
Linear regression
Multiclass logistic regression for classification
K-means clustering
Import TensorFlow models for prediction in BigQuery
We will see how we can apply these models to tabular data in retail and marketing use cases.
Models are trained and accessed in BigQuery using SQL — a language data analysts know. This enables business decision making through predictive analytics across the organization without leaving the query editor.
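A minimal sketch of one such CREATE MODEL statement, here for the logistic regression case; the dataset, table, and column names are hypothetical:

  -- Train a binary classifier predicting whether a session converts.
  CREATE OR REPLACE MODEL `mydataset.purchase_model`
  OPTIONS (
    model_type = 'logistic_reg',      -- logistic regression for classification
    input_label_cols = ['purchased']  -- the column to predict
  ) AS
  SELECT device_type, pageviews, time_on_site, purchased
  FROM `mydataset.sessions`;

  -- Score new sessions with the trained model.
  SELECT *
  FROM ML.PREDICT(MODEL `mydataset.purchase_model`,
                  (SELECT device_type, pageviews, time_on_site
                   FROM `mydataset.new_sessions`));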
CodeCamp Iasi - Creating serverless data analytics system on GCP using BigQuery, by Márton Kodok
Teaser: provide developers a new way of understanding advanced analytics and choosing the right cloud architecture
The new buzzword is #serverless, as there are many great services that help us abstract away the complexity associated with managing servers. In this session we will see how serverless helps with large data analytics backends.
We will see how to architect for the cloud and add to an existing project the components that take us into a #serverless architecture that ingests our streaming data and runs advanced analytics on petabytes of data using BigQuery on Google Cloud Platform - all this next to an existing stack, without being forced to re-engineer our app.
BigQuery enables super-fast SQL/Javascript queries against petabytes of data using the processing power of Google’s infrastructure. We will cover its core features, the SQL 2011 standard, working with streaming inserts, User Defined Functions written in Javascript, referencing external JS libraries (see the sketch below), and several use cases for the everyday backend developer: funnel analytics, email heatmaps, custom data processing, building dashboards, extracting data using JS functions, and emitting rows based on business logic.
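For the "referencing external JS libraries" item, a hedged sketch of a UDF that loads a Javascript library from Cloud Storage; the bucket path, library, and function names are hypothetical:

  -- UDF delegating to a Javascript library stored in Cloud Storage.
  CREATE TEMP FUNCTION applyBusinessLogic(payload STRING)
  RETURNS STRING
  LANGUAGE js
  OPTIONS (library = ['gs://my-bucket/lib/business_rules.js'])  -- hypothetical path
  AS r"""
    // businessRules is assumed to be defined by the referenced library.
    return businessRules.process(payload);
  """;

  SELECT applyBusinessLogic(payload) AS result
  FROM `mydataset.raw_events`;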
Supercharge your data analytics with BigQuery, by Márton Kodok
Powering interactive data analysis requires a massive architecture and the know-how to build a fast real-time computing system. BigQuery solves this problem by enabling super-fast, SQL-like queries against petabytes of data using the processing power of Google’s infrastructure. We will cover its core features: creating tables, columns, and views, working with partitions and clustering for cost optimizations (see the sketch below), streaming inserts, User Defined Functions, and several use cases for the everyday developer: funnel analytics, behavioral analytics, exploring unstructured data.
The other part will be about BigQuery ML, which enables users to create and execute machine learning models in BigQuery using standard SQL queries. BigQuery ML democratizes machine learning by enabling SQL practitioners to build models using existing SQL tools and skills. BigQuery ML increases development speed by eliminating the need to move data.
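A minimal sketch of the partitioning and clustering options behind those cost optimizations; the table and column names are hypothetical:

  -- Partition by day and cluster by user so queries filtering on
  -- event_ts and user_id scan (and bill) only the matching blocks.
  CREATE TABLE `mydataset.events`
  (
    event_ts TIMESTAMP,
    user_id  STRING,
    action   STRING
  )
  PARTITION BY DATE(event_ts)
  CLUSTER BY user_id;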
How Data Virtualization Puts Enterprise Machine Learning Programs into Produc..., by Denodo
Watch full webinar here: https://bit.ly/3offv7G
Presented at AI Live APAC
Advanced data science techniques, like machine learning, have proven an extremely useful tool to derive valuable insights from existing data. Platforms like Spark and complex libraries for R, Python, and Scala put advanced techniques at the fingertips of data scientists. However, these data scientists spend most of their time looking for the right data and massaging it into a usable format. Data virtualization offers a new alternative to address these issues in a more efficient and agile way.
Watch this on-demand session to learn how companies can use data virtualization to:
- Create a logical architecture to make all enterprise data available for advanced analytics exercise
- Accelerate data acquisition and massaging, providing the data scientist with a powerful tool to complement their practice
- Integrate popular tools from the data science ecosystem: Spark, Python, Zeppelin, Jupyter, etc.
Applying BigQuery ML on e-commerce data analytics, by Márton Kodok
With BigQuery ML, you can build machine learning models without leaving the database environment and train them on massive datasets. We are going to demonstrate common marketing Machine Learning use cases we work on at REEA.net: how to build, train, evaluate, and predict with your own scalable machine learning models using SQL in Google BigQuery, addressing the following use cases:
Customer Segmentation
Customer Lifetime Value (LTV) prediction
Conversion/Purchase prediction
The audience will get first-hand experience of how to write CREATE MODEL SQL syntax to build machine learning models such as:
Multiclass logistic regression for classification
K-means clustering
Import TensorFlow models for prediction in BigQuery
Models are trained and accessed in BigQuery using SQL — a language data analysts know. This enables business decision making through predictive analytics across the organization without leaving the query editor.
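A hedged sketch of the customer segmentation use case with K-means clustering; the feature and table names are hypothetical:

  -- Cluster customers into 5 segments based on behavioral features.
  CREATE OR REPLACE MODEL `mydataset.customer_segments`
  OPTIONS (
    model_type = 'kmeans',
    num_clusters = 5
  ) AS
  SELECT total_spend, order_count, days_since_last_order
  FROM `mydataset.customer_features`;

  -- Assign each customer to its nearest segment (centroid).
  SELECT customer_id, centroid_id
  FROM ML.PREDICT(MODEL `mydataset.customer_segments`,
                  (SELECT customer_id, total_spend, order_count, days_since_last_order
                   FROM `mydataset.customer_features`));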
Data Virtualization: Challenges, Uses & Benefits, by Denodo
Watch full webinar here: https://bit.ly/3oah4ng
Gartner recently described data virtualization as a centerpiece of data integration architectures.
Discover:
- The benefits of a data virtualization platform
- The multiplication of use cases: Lakehouse, Data Science, Big Data, Data Services & IoT
- How to create a unified view of your data assets without compromising on performance
- How to build an agile data integration architecture: on-premise, in the cloud, or hybrid
Webinar: Faster Big Data Analytics with MongoDB, by MongoDB
Learn how to leverage MongoDB and Big Data technologies to derive rich business insight and build high performance business intelligence platforms. This presentation includes:
- Uncovering Opportunities with Big Data analytics
- Challenges of real-time data processing
- Best practices for performance optimization
- Real world case study
This presentation was given in partnership with CIGNEX Datamatics.
Product Keynote: Denodo 8.0 - A Logical Data Fabric for the Intelligent Enter..., by Denodo
Watch full webinar here: https://bit.ly/2O9gcBT
Denodo 8 expands data integration and management to data fabric with advanced data virtualization capabilities. What are they? Denodo CTO Alberto Pan will touch upon the key Denodo 8 capabilities.
Enabling Next Gen Analytics with Azure Data Lake and StreamSets, by StreamSets Inc.
Big data and the cloud are perfect partners for companies who want to unlock maximum value from all of their unstructured, semi-structured, and structured data. The challenge has been how to create and manage a reliable end-to-end solution that spans data ingestion, storage and analysis in the face of the volume, velocity and variety of big data sources.
In this webinar, we will show you how to achieve big data bliss by combining StreamSets Data Collector, which specializes in creating and running complex any-to-any dataflows, with Microsoft's Azure Data Lake and Azure analytic solutions.
We will walk through an example of how a major bank is using StreamSets to transport their on-premise data to the Azure Cloud Computing Platform and Azure Data Lake to take advantage of analytics tools with unprecedented scale and performance.
Watch full webinar here: https://bit.ly/3mdj9i7
You will often hear that "data is the new gold". In this context, data management is one of the areas that has received more attention from the software community in recent years. From Artificial Intelligence and Machine Learning to new ways to store and process data, the landscape for data management is in constant evolution. From the privileged perspective of an enterprise middleware platform, we at Denodo have the advantage of seeing many of these changes happen.
In this webinar, we will discuss the technology trends that will drive the enterprise data strategies in the years to come. Don't miss it if you want to keep yourself informed about how to convert your data to strategic assets in order to complete the data-driven transformation in your company.
Watch this on-demand webinar as we cover:
- The most interesting trends in data management
- How to build a data fabric architecture?
- How to manage your data integration strategy in the new hybrid world
- Our predictions on how those trends will change the data management world
- How can companies monetize the data through data-as-a-service infrastructure?
- What is the role of voice computing in future data analytics?
Rethink Your 2021 Data Management Strategy with Data Virtualization (ASEAN), by Denodo
Watch full webinar here: https://bit.ly/2O2r3NP
In the last several decades, BI has evolved from large, monolithic implementations controlled by IT to orchestrated sets of smaller, more agile capabilities that include visual-based data discovery and governance. These new capabilities provide more democratic analytics accessibility that is increasingly being controlled by business users. However, given the rapid advancements in emerging technologies such as cloud and big data systems and the fast-changing business requirements, creating a future-proof data management strategy is an incredibly complex task.
Catch this on demand session to understand:
- BI program modernization challenges
- What is data virtualization and why is its adoption growing so quickly?
- How data virtualization works and how it compares to alternative approaches to data integration
- How modern data virtualization can significantly increase agility while reducing costs
Watch full webinar here: https://bit.ly/2SaBj5l
You will often hear that "data is the new gold". In this context, data management is one of the areas that has received more attention from the software community in recent years. From Artificial Intelligence and Machine Learning to new ways to store and process data, the landscape for data management is in constant evolution. From the privileged perspective of an enterprise middleware platform, we at Denodo have the advantage of seeing many of these changes happen.
In this webinar we will discuss the technology trends that will drive the enterprise data strategies in the years to come. Don't miss it if you want to keep yourself informed about how to convert your data to strategic assets in order to complete the data-driven transformation in your company.
Join us for an exciting session that will cover:
- The most interesting trends in data management
- How to build a logical data fabric architecture?
- How to manage your data integration strategy in the new hybrid world?
- Our predictions on how those trends will change the data management world
- How can companies monetize the data through data-as-a-service infrastructure?
- What is the role of voice computing in the future of data analytics?
Data Ingestion in Big Data and IoT platforms, by Guido Schmutz
Many of the Big Data and IoT use cases are based on combining data from multiple data sources and making it available on a Big Data platform for analysis. The data sources are often very heterogeneous, from simple files and databases to high-volume event streams from sensors (IoT devices). It’s important to retrieve this data in a secure and reliable manner and integrate it with the Big Data platform so that it is available for analysis in real time (stream processing) as well as in batch (typical big data processing). In the past few years some new tools have emerged which are especially capable of handling the process of integrating data from outside, often called Data Ingestion. From an outside perspective, they are very similar to traditional Enterprise Service Bus infrastructures, which in larger organizations are often used to handle message-driven and service-oriented systems. But there are also important differences: they are typically easier to scale in a horizontal fashion, offer a more distributed setup, are capable of handling high volumes of data/messages, provide very detailed monitoring at the message level, and integrate very well with the Hadoop ecosystem. This session will present and compare Apache NiFi, StreamSets, and the Kafka ecosystem and show how they handle data ingestion in a Big Data solution architecture.
How to Swiftly Operationalize the Data Lake for Advanced Analytics Using a Lo..., by Denodo
Watch full webinar here: https://bit.ly/3mfFJqb
Presented at Chief Data Officer Live Series 2021, ASEAN (August Edition)
While big data initiatives have become necessary for any business to generate actionable insights, big data fabric has become a necessity for any successful big data initiative. The best-of-breed big data fabrics should deliver actionable insights to the business users with minimal effort, provide end-to-end security to the entire enterprise data platform, and provide real-time data integration while delivering a self-service data platform to business users.
Watch this on-demand session to learn how big data fabric enabled by Data Virtualization:
- Provides lightning fast self-service data access to business users
- Centralizes data security, governance, and data privacy
- Fulfills the promise of data lakes to provide actionable insights
Gen Apps on Google Cloud PaLM2 and Codey APIs in Action, by Márton Kodok
Build applications with generative AI on Google Cloud! We are going to see in action what Gen App Builder offers developers for building and deploying AI-driven applications. We will explore Model Garden-powered experiences, then learn more about the integration of these generative AI APIs. Vertex AI includes a suite of models that work with code; together these code models are referred to as the PaLM and Codey APIs. The Vertex AI Codey APIs include the code generation API, which supports generating code from a natural language description. We will show strategies for creating prompts that work with the model to generate code. At the end of the session, developers will understand how to innovate with generative AI and develop apps in line with generative AI industry trends.
DevBCN Vertex AI - Pipelines for your MLOps workflows, by Márton Kodok
In recent years, one of the biggest trends in applications development has been the rise of Machine Learning solutions, tools, and managed platforms. Vertex AI is a managed unified ML platform for all your AI workloads. On the MLOps side, Vertex AI Pipelines lets you adopt experiment pipelining beyond the classic build, train, eval, and deploy cycle of a model. It is engineered for data scientists and data engineers, and it’s a tremendous help for teams who don’t have DevOps or sysadmin engineers, as infrastructure management overhead has been almost completely eliminated. Based on practical examples we will demonstrate how Vertex AI Pipelines scores high in terms of developer experience, how it fits custom ML needs, and how to analyze results. It’s a toolset for a fully-fledged machine learning workflow: a sequence of steps in the model development and deployment cycle, such as data preparation/validation, model training, hyperparameter tuning, model validation, and model deployment. Vertex AI comes with all classic resources plus an ML metadata store, a fully managed feature store, and a fully managed pipelines runner. Vertex AI Pipelines is a managed serverless toolkit, which means you don't have to fiddle with infrastructure or back-end resources to run workflows.
Discover BigQuery ML, build your own CREATE MODEL statement, by Márton Kodok
With BigQuery ML, you can build machine learning models without leaving the database environment and train them on massive datasets. In this demo session we are going to demonstrate common marketing Machine Learning use cases: how to build, train, evaluate, and predict with your own scalable machine learning models using SQL in Google BigQuery, addressing the following use cases:
- Customer segmentation + product cross-sale recommendation
- Conversion/purchase prediction
- Inference with the other 20+ built-in models
The audience will get first-hand experience of how to write CREATE MODEL SQL syntax to build machine learning models such as:
- Multiclass logistic regression for classification
- K-means clustering
- Matrix factorization
- ARIMA time series predictions (see the sketch below)
...and more. Models are trained and accessed in BigQuery using SQL — a language data analysts know. This enables business decision-making through predictive analytics across the organization without leaving the query editor. In the end, the audience will learn how everyday developers can build/train/run their own machine-learning models straight from the database query editor by issuing CREATE MODEL statements.
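For the ARIMA item flagged above, a minimal training sketch with hypothetical table and column names:

  -- Train a time-series model on daily sales.
  CREATE OR REPLACE MODEL `mydataset.sales_forecast`
  OPTIONS (
    model_type = 'ARIMA_PLUS',
    time_series_timestamp_col = 'order_date',
    time_series_data_col = 'daily_sales'
  ) AS
  SELECT order_date, daily_sales
  FROM `mydataset.daily_sales`;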
Cloud Run - the rise of serverless and containerization, by Márton Kodok
Two of the biggest trends in applications development in recent years have been the rise of serverless and containerization. And Cloud Run has become a de facto container runtime service, taking you to production in seconds. Based on practical examples we will demonstrate how Cloud Run scores high in terms of developer experience. It differs from a functions runtime as you can bring your own container, your own code, a folder, or binaries, and it pairs great with the container ecosystem: Cloud Build, Cloud Code, Artifact Registry, and Docker. Each Cloud Run service gets an out-of-the-box stable HTTPS endpoint, with TLS termination handled for you. Map your services to your own domains and use them for websites, backend APIs, or workflows; invoke and connect services with the newest protocols of HTTP/2, WebSockets, or gRPC (unary and streaming). Cloud Run is serverless containers, which means you don't have to fiddle with infrastructure or back-end resources to run applications.
BigQuery best practices and recommendations to reduce costs with BI Engine, S..., by Márton Kodok
Best practices and recommendations for tuning BI Engine for your existing BigQuery workloads for cheaper and faster queries. Learn how we at REEA are orchestrating BI Engine reservations on a 5TB dataset, considered small for BigQuery but with big cost savings and accelerated queries. We have seen many presentations aimed at big enterprises, but here we showcase how our queries perform better with lower costs. We are going to address the top considerations for when to turn on BI Engine, and how to use cloud orchestration to make this an automatic process, which, combined with BigQuery and Data Studio query complexity, can save precious development time, lower bills, and speed up queries.
Vertex AI - Unified ML Platform for the entire AI workflow on Google Cloud, by Márton Kodok
Vertex AI is a managed ML platform for practitioners to accelerate experiments and deploy AI models.
Enhanced developer experience
- Build with the groundbreaking ML tools that power Google
- Approachable from the non-ML developer perspective (AutoML, managed models, training)
- Ease the life of a data scientist/ML (has feature store, managed datasets, endpoints, notebooks)
- Infrastructure management overhead has been almost completely eliminated
- Unified UI for the entire ML workflow
- End-to-end integration for data and AI with build pipelines that outperform and solve complex ML tasks
- Explainable AI and TensorBoard to visualize and track ML experiments
Vertex AI: Pipelines for your MLOps workflows, by Márton Kodok
In recent years, one of the biggest trends in applications development has been the rise of Machine Learning solutions, tools, and managed platforms. Vertex AI is a managed unified ML platform for all your AI workloads. On the MLOps side, Vertex AI Pipelines solutions let you adopt experiment pipelining beyond the classic build, train, eval, and deploy a model. It is engineered for data scientists and data engineers, and it’s a tremendous help for those teams who don’t have DevOps or sysadmin engineers, as infrastructure management overhead has been almost completely eliminated.
Based on practical examples we will demonstrate how Vertex AI Pipelines scores high in terms of developer experience, how it fits custom ML needs, and how to analyze results. It’s a toolset for a fully-fledged machine learning workflow: a sequence of steps in the model development and deployment cycle, such as data preparation/validation, model training, hyperparameter tuning, model validation, and model deployment. Vertex AI comes with all standard resources plus an ML metadata store, a fully managed feature store, and a fully managed pipelines runner.
Vertex AI Pipelines is a managed serverless toolkit, which means you don't have to fiddle with infrastructure or back-end resources to run workflows.
Cloud Workflows: What's new in serverless orchestration and automation, by Márton Kodok
Understand how Cloud Workflows resolves challenges in connecting services, HTTP-based service orchestration, and automation. We are going to dive deep into how serverless HTTP service automation works to automate step engines. Based on practical examples we will demonstrate the newest features that let you automate the cloud and integrate with any Google Cloud product without worrying about authentication.
Serverless orchestration and automation with Cloud Workflows, by Márton Kodok
Join this session to understand how Cloud Workflows resolves challenges in connecting services, HTTP-based service orchestration, and automation. We are going to dive deep into how serverless HTTP service automation works to automate step engines. Based on practical examples we will demonstrate the built-in decision and conditional executions, subworkflows, support for external built-in API calls, and integration with any Google Cloud product without worrying about authentication. We are going to cover Marketing, Retail, Industrial, and Developer possibilities, such as event-driven marketing workflow execution, inventory chain operations, generating automated state machines, orchestrating DevOps workflows, and automating the cloud.
BigdataConference Europe - BigQuery MLMárton Kodok
One of the hottest topics in database land these days is BigQuery ML: a new way to use machine learning on top of tabular data, straight on your tables, without leaving the query editor.
With BigQuery ML you can build machine learning models without leaving the database environment and train them on massive datasets.
In this demo session we are going to walk through common marketing machine learning use cases: how to build, train, evaluate, and predict with your own scalable machine learning models using SQL.
The audience will get first-hand experience writing the CREATE MODEL SQL syntax to build machine learning models such as:
– Multiclass logistic regression for classification
– K-means clustering
– Matrix factorization
– ARIMA time series predictions
– Import TensorFlow models for prediction in BigQuery
Models are trained and accessed in BigQuery using SQL, a language data analysts know. This enables business decision making through predictive analytics across the organization without leaving the query editor (a minimal sketch follows).
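For instance, a minimal sketch of that loop from Python, assuming the google-cloud-bigquery client and a hypothetical `mydataset.purchases` table with a `purchased` label column:

```python
# Train and evaluate a BigQuery ML model with plain SQL; the table and
# column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

client.query("""
    CREATE OR REPLACE MODEL `mydataset.purchase_model`
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['purchased']) AS
    SELECT country, device, pageviews, purchased
    FROM `mydataset.purchases`
""").result()  # training runs inside BigQuery; no data leaves the warehouse

for row in client.query(
    "SELECT * FROM ML.EVALUATE(MODEL `mydataset.purchase_model`)"
).result():
    print(dict(row))  # precision, recall, and other evaluation metrics
```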
DevFest Romania 2020 Keynote: Bringing the Cloud to you.Márton Kodok
Next OnAir '20 in review:
1. Real-time AI solutions such as anomaly detection, pattern recognition, and predictive forecasting
2. Recommendations AI: a rich path to personalized product recommendations
3. Media Translation API: real-time speech translation from streaming audio
4. Lending DocAI: a solution powered by Document AI for the mortgage industry
5. Contact Center AI: support over chat/voice calls by identifying intent and providing assistance
Confidential VMs: a breakthrough technology that allows customers to encrypt their most sensitive data in the cloud while it is being processed.
Cloud Run:
- Minimum idle instances
- Allocate 4 vCPUs and 4GiB memory
- Requests up to 60 minutes
- Server-side HTTP + gRPC streaming
- VPC access support
- External Load Balancing
Serverless orchestration and automation with Cloud Workflows (beta; a sketch of triggering a workflow follows this list)
- Steps defined in YAML
- Built-in decision and conditional exec
- Subworkflows
- Support for external API calls
- Custom predicate for retries
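The workflow definition itself lives in YAML as noted above; the sketch below only shows triggering an already-deployed workflow from Python with the google-cloud-workflows client, where the project, location, workflow name, and argument are placeholder assumptions.

```python
# Trigger one execution of a deployed Cloud Workflow; IDs are placeholders.
import json
from google.cloud.workflows import executions_v1

client = executions_v1.ExecutionsClient()
parent = "projects/my-project/locations/us-central1/workflows/order-flow"

execution = client.create_execution(
    parent=parent,
    execution=executions_v1.Execution(argument=json.dumps({"orderId": 42})),
)
print(execution.name)  # handle for polling the execution's state and result
```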
Predict, recommend and forecast with BigQuery ML
CREATE MODEL syntax in BigQuery to run Machine Learning tasks
Supported models:
- K-means clustering for data segmentation
- Recommend with Matrix Factorization
- Perform time-series forecasts (see the sketch after this list)
- Import TensorFlow models
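As a sketch of the forecasting item above, assuming a hypothetical `mydataset.daily_sales` table; this uses the current ARIMA_PLUS model type (the keynote-era name was simply ARIMA) and the ML.FORECAST table function:

```python
# Time-series forecasting in BigQuery ML; table and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

client.query("""
    CREATE OR REPLACE MODEL `mydataset.sales_forecast`
    OPTIONS (model_type = 'ARIMA_PLUS',
             time_series_timestamp_col = 'day',
             time_series_data_col = 'sales') AS
    SELECT day, sales FROM `mydataset.daily_sales`
""").result()

# Forecast the next 30 days, again with nothing but SQL.
for row in client.query("""
    SELECT forecast_timestamp, forecast_value
    FROM ML.FORECAST(MODEL `mydataset.sales_forecast`, STRUCT(30 AS horizon))
""").result():
    print(row["forecast_timestamp"], row["forecast_value"])
```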
Single interface for multiple services with API Gateway
Find Your Topic and Skill Level
Qwiklabs + New Tutorials Center
Vibe Koli 2019 - A journey from the university desks to Google Developer ExpertMárton Kodok
VIBE Koli 2019 - Vibe Garázs - Go-kart.
After completing his studies at Sapientia, Márton Kodok built an IT career for himself, and today he is a member of the Google Developer Experts (GDE) community, which places him among the country's outstanding professionals. At VIBE Koli he helps you find your own path, proving that all it takes is willpower to do something different, something more, than your peers.
Google Cloud Platform Solutions for DevOps EngineersMárton Kodok
Learn the DevOps essentials about cloud components and the FaaS and PaaS architectural patterns that make use of Cloud Functions, Pub/Sub, Dataflow, and Kubernetes, and how we develop and deploy cloud software. You will get hands-on information on how to build, run, and monitor highly scalable and flexible applications optimized to run on GCP. We will discuss cloud concepts and highlight various design patterns and best practices.
GDG DevFest Romania - Architecting for the Google Cloud PlatformMárton Kodok
Learn about FaaS and PaaS architectural patterns that make use of Cloud Functions, Pub/Sub, Dataflow, and Kubernetes, and about platforms that hide the management of servers from the user and have changed how we develop and deploy future software.
We discuss the difference between an event-driven approach, meaning you can trigger a function whenever something interesting happens within the cloud environment, and the simpler HTTP approach; the per-invocation quota and pricing; and the advantages and disadvantages of serverless systems.
6. DISZ - Web application scalability on the Google Cloud PlatformMárton Kodok
The talk covers how to build a flexible, well-scaling service on the cloud providers' platforms. How can a service that at launch has to serve at most a few dozen or a few hundred users elastically scale to serve thousands of users, or orders of magnitude more? Sit back and admire the autoscaling feature on Black Friday. We will talk about virtualization, platform-level virtualization, super-lightweight application containers, and near-real-time "shuffling" of workloads. Many components of the Google Cloud Platform will be presented. Banks, insurers, webshops, and so on all see the cloud as their breakout point.
GDG Heraklion - Architecting for the Google Cloud PlatformMárton Kodok
Learn about cloud components and architecture overviews for building an app out of GCP components.
You will get hands-on information on how to build highly scalable and flexible applications optimized to run on GCP, on the same infrastructure that powers Google. We will discuss cloud concepts and highlight various design patterns and best practices.
By the end of the session you will have hands-on experience building a basic cloud application: a simple web tier powered by a highly distributed database, with background tasks executed on a pub/sub system, plus pointers on how to go to the next level with advanced concepts like an analytics warehouse, recommendation engines, and ML.
Making advanced analytics accessible to more companies
1. Making advanced analytics accessible to more companies
Márton Kodok / @martonkodok
Google Developer Expert at REEA.net, Targu Mures, Romania
24 May 2017 Tirgu Mures, Romania
Issue 59 - May 2017
2. ● Geek. Hiker. Do-er.
● Crafting Web/Mobile backends at REEA.net Targu Mures
● Among the top 3 Romanians on Stackoverflow.com
● Google Developer Expert on Cloud technologies
● BigQuery and database engine expert
● Active in mentoring
Twitter: @martonkodok
StackOverflow: pentium10
Slideshare: martonkodok
GitHub: pentium10
Making advanced analytics accessible to more companies @martonkodok
About me
3. Making advanced analytics accessible to more companies @martonkodok
Agenda:
● The Challenge: making advanced analytics accessible to more companies
● Architecture Overview
● Strategy & Tricks
● Winning Solution
4. Companies:
❏ must be able to identify, combine, and manage multiple sources of data
❏ should have the ability to obtain advanced analytics using concepts they are familiar with
❏ must have a deployment of the right technology architecture matching their capabilities.
Making advanced analytics accessible to more companies @martonkodok
3 principles to get from small data to BigData
5. Making advanced analytics accessible to more companies @martonkodok
Legacy Business Reporting System (diagram): Web and Mobile clients → Web Server (CMS/Framework, Platform Services) → SQL Database (cached) → Scheduled Tasks → Batch Processing (Compute Engine, multiple instances) → Report & Share for Business Analysis
6. Making advanced analytics accessible to more companies @martonkodok
The same legacy reporting diagram, annotated: Behind the Scenes - Days To Insights
7. Making advanced analytics accessible to more companies @martonkodok
Legacy Business Reporting System (diagram with latency callouts): scheduled tasks take minutes to kick in, batch processing runs for hours, cleaning and aggregation take hours more - DAYS TO INSIGHTS
8. ❏ Need backend/database to STORE, QUERY, EXTRACT data
❏ Deep analytics - large, multi-source, complex, unstructured
❏ Be real time
❏ Terabyte scale - Cost effective
❏ Run ad-hoc reports - without a developer - interactive
❏ Minimal engineering effort - no dedicated BigData team
❏ Simple query language (preferred: SQL / Javascript)
Making advanced analytics accessible to more companies @martonkodok
Desired system
10. ● Analytics-as-a-Service - Data Warehouse in the Cloud
● Fully-Managed by Google (US or EU zone)
● Scales into Petabytes
● Ridiculously fast (see the sample ad-hoc query below)
● SQL 2011 Standard + Javascript UDF (User Defined Functions)
● Familiar DB Structure (table, views, record, nested, JSON)
● Integrates with Tableau, Google Sheets + Cloud Storage + Pub/Sub connectors
● Decent pricing (queries: $5/TB, storage: $20/TB, cold: $10/TB) *May 2017
Making advanced analytics accessible to more companies @martonkodok
What is BigQuery?
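To make the ad-hoc story tangible, here is a small sketch querying one of Google's public datasets with the Python client; nothing is provisioned up front, and billing is per TB scanned rather than per server:

```python
# Ad-hoc query on a public dataset, assuming the google-cloud-bigquery client.
from google.cloud import bigquery

client = bigquery.Client()

query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    WHERE state = 'TX'
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""
for row in client.query(query).result():  # runs on Google's infrastructure
    print(row["name"], row["total"])
```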
11. Making advanced analytics accessible to more companies @martonkodok
Architecting for the Cloud (diagram): On-Premises Servers, Frontend, and Platform Services emit Metrics / Logs / Streaming into Pipelines and an ETL Engine that feed BigQuery as the event-sourcing store
12. Making advanced analytics accessible to more companies @martonkodok
Data Pipeline Integration - Analytics Backend (diagram): standard devices talk HTTPS to the application servers (SQL database); On-Premises Servers, Frontend, and Platform Services stream Metrics / Logs via FluentD pipelines into BigQuery (event sourcing); Cloud Storage acts as archive with Load / Export / Replay; the Development Team and Data Analysts query BigQuery, and Report & Share tools (Tableau, QlikView, Data Studio, internal dashboards) drive Business Analysis (a streaming-insert sketch follows)
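A minimal sketch of the streaming edge of this diagram: pushing events into BigQuery as they happen via the streaming insert API, assuming a hypothetical `mydataset.events` table (the pipeline above uses FluentD for the same job):

```python
# Stream event rows into BigQuery; project, dataset, and schema are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

rows = [
    {"user_id": "u1", "action": "signup", "ts": "2017-05-24T10:00:00"},
    {"user_id": "u2", "action": "purchase", "ts": "2017-05-24T10:00:03"},
]
errors = client.insert_rows_json("my-project.mydataset.events", rows)
if errors:
    raise RuntimeError(errors)  # per-row insert errors are returned, not raised
```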
14. ● On data that is difficult to process/analyze using traditional databases
● On exploring unstructured data
● Not a replacement for traditional DBs; it complements the system
● Applying Javascript UDFs on columnar storage to resolve complex tasks (e.g. JS for natural language processing; see the sketch below)
● On streams (form wizard ...)
● On IoT streams
● Major strength is handling large datasets
Making advanced analytics accessible to more companies @martonkodok
Where to use BigQuery?
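A sketch of the JavaScript UDF idea from the list above, against a hypothetical `mydataset.comments` table: the function body is plain JS declared inline in the query and applied row by row over columnar storage.

```python
# JavaScript UDF inside a BigQuery standard SQL query; the table is hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

query = r"""
    CREATE TEMP FUNCTION word_count(s STRING)
    RETURNS INT64
    LANGUAGE js AS '''
      return s ? s.split(/\s+/).length : 0;
    ''';
    SELECT id, word_count(body) AS words
    FROM `mydataset.comments`
    LIMIT 10
"""
for row in client.query(query).result():
    print(row["id"], row["words"])
```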
15. ● no manual sharding
● no capacity guessing
● no idle resources
● no manual scaling
● no provisioning/deploy/running out of resources
● run raw ad-hoc queries (either by analysts/sales)
● no more throwing away, expiring, or aggregating old data.
Making advanced analytics accessible to more companies @martonkodok
BigQuery Benefits: Serverless Data Warehouse
16. Making advanced analytics accessible to more companies @martonkodok
Easily Build Custom Reports and Dashboards
17. Thank you.
Slides available on: slideshare.net/martonkodok
Making advanced analytics accessible to more companies @martonkodok