This document provides an overview and user manual for Alasql, an open source JavaScript SQL database library. Alasql allows users to execute SQL statements on JavaScript data and interface with external databases using a familiar SQL syntax. The document covers key features like SQL data querying and manipulation, database definition/management, integration with JavaScript frameworks and Node.js, and processing of file-based data formats. Usage examples demonstrate both synchronous and asynchronous execution of SQL on in-memory and indexed database data sources.
This document discusses using SQL with NoSQL databases. It describes how the Alasql.js library allows JSON objects to be used within SQL queries, including creating JSON tables, finding, inserting, updating, and removing JSON data. It also outlines several proposed features, such as a MongoDB-style query interface and deep cloning of JSON objects.
Alasql: fast JavaScript in-memory SQL database (Andrey Gershun)
Alasql.js is a fast in-memory SQL database for JavaScript that allows users to run SQL queries directly in the browser or Node.js. It supports standard SQL functions and operators, uses compilation and optimization to provide fast performance, and has a small minimized file size of around 100kb. Alasql.js aims to provide an alternative to other SQL libraries by offering full SQL functionality while being faster and more compact.
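As a loose illustration of this in-memory SQL workflow, here is a sketch using Python's standard-library sqlite3 in place of Alasql's JavaScript API; the table and data are invented, and Alasql itself is of course called from JavaScript rather than Python:

```python
import sqlite3

# An in-memory database, analogous to Alasql's default in-memory store.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE cities (name TEXT, population INTEGER)")
con.executemany("INSERT INTO cities VALUES (?, ?)",
                [("Oslo", 700000), ("Bergen", 280000), ("Paris", 2100000)])

# Standard SQL runs directly against the in-memory data.
big = con.execute(
    "SELECT name FROM cities WHERE population > 500000 ORDER BY name"
).fetchall()
print(big)  # [('Oslo',), ('Paris',)]
```

The same create/insert/select cycle is what Alasql exposes to browser and Node.js code, with the data living in JavaScript objects instead of a C library.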
The document discusses using JSON in MySQL. It begins by introducing the speaker and outlining topics to be covered, including why JSON is useful, loading JSON data into MySQL, performance considerations when querying JSON data, using generated columns with JSON, and searching multi-valued attributes in JSON. The document then dives into examples demonstrating loading sample data from XML to JSON in MySQL, issues that can arise, and techniques for optimizing JSON queries using generated columns and indexes.
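The generated-column technique mentioned above (materializing a JSON attribute into its own indexed column) can be sketched with Python's stdlib sqlite3 and json modules; the table, attribute names, and the manual extraction step are illustrative stand-ins for MySQL's native generated columns:

```python
import json
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, body TEXT, name TEXT)")

def insert_doc(doc):
    # Materialize the attribute we query on, mimicking a generated column.
    con.execute("INSERT INTO docs (body, name) VALUES (?, ?)",
                (json.dumps(doc), doc.get("name")))

insert_doc({"name": "widget", "price": 10})
insert_doc({"name": "gadget", "price": 25})

# Index the extracted column so lookups avoid parsing every JSON body.
con.execute("CREATE INDEX idx_docs_name ON docs (name)")

row = con.execute("SELECT body FROM docs WHERE name = ?", ("gadget",)).fetchone()
print(json.loads(row[0])["price"])  # 25
```

In MySQL 5.7+ the extracted column would be declared as `GENERATED ALWAYS AS (...)` and maintained by the server rather than by application code.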
This document summarizes information about state transfers in Galera, an open source replication plugin for MySQL. It discusses when state transfers are needed, such as during network partitions or node failures. It describes the two types of state transfers - incremental state transfers (IST), which sync nodes incrementally, and state snapshot transfers (SST), which transfer a full data copy. The document compares different methods for SST, such as mysqldump, RSYNC, Xtrabackup, and Clone, and discusses their speeds, impact on donor nodes, and other characteristics. Overall, the document provides an overview of state transfers in Galera synchronous replication.
The document discusses MongoDB concepts including:
- MongoDB uses a document-oriented data model with dynamic schemas and supports embedding and linking of related data.
- Replication allows for high availability and data redundancy across multiple nodes.
- Sharding provides horizontal scalability by distributing data across nodes in a cluster.
- MongoDB supports both eventual and immediate consistency models.
This document discusses tuning MongoDB performance. It covers tuning queries using the database profiler and explain commands to analyze slow queries. It also covers tuning system configurations like Linux settings, disk I/O, and memory to optimize MongoDB performance. Topics include setting ulimits, IO scheduler, filesystem options, and more. References to MongoDB and Linux tuning documentation are also provided.
This is an introduction to PostgreSQL that provides a brief overview of PostgreSQL's architecture, features, and ecosystem. It was delivered at NYLUG on Nov 24, 2014.
http://www.meetup.com/nylug-meetings/events/180533472/
The document provides an overview of PostgreSQL performance tuning. It discusses caching, query processing internals, and optimization of storage and memory usage. Specific topics covered include the PostgreSQL configuration parameters for tuning shared buffers, work memory, and free space map settings.
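As a rough illustration of the parameters the overview mentions, a postgresql.conf fragment might look like the following; the values are placeholders rather than recommendations, and the free space map settings only exist on pre-8.4 servers:

```ini
# Illustrative postgresql.conf fragment -- tune to your RAM and workload
shared_buffers = 2GB    # PostgreSQL's own data cache; ~25% of RAM is a common start
work_mem = 32MB         # memory per sort/hash operation, per backend
# On pre-8.4 releases the free space map was sized explicitly:
# max_fsm_pages = 2000000
# max_fsm_relations = 10000
```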
C* Summit 2013: The World's Next Top Data Model by Patrick McFadin (DataStax Academy)
The document provides an overview and examples of data modeling techniques for Cassandra. It discusses four use cases - shopping cart data, user activity tracking, log collection/aggregation, and user form versioning. For each use case, it describes the business needs, issues with a relational database approach, and provides the Cassandra data model solution with examples in CQL. The models showcase techniques like de-normalizing data, partitioning, clustering, counters, maps and setting TTL for expiration. The presentation aims to help attendees properly model their data for Cassandra use cases.
This document discusses how to achieve scale with MongoDB. It covers optimization tips like schema design, indexing, and monitoring. Vertical scaling involves upgrading hardware like RAM and SSDs. Horizontal scaling involves adding shards to distribute load. The document also discusses how MongoDB scales for large customers through examples of deployments handling high throughput and large datasets.
MongoDB World 2019: Tips and Tricks++ for Querying and Indexing MongoDB (MongoDB)
Query performance can be either a constant headache or the unsung hero of an application. MongoDB provides extremely powerful querying capabilities when used properly. As a senior member of the support team, I will share some of the more common mistakes observed, along with tips and tricks for avoiding them.
MongoDB World 2019: The Sights (and Smells) of a Bad Query (MongoDB)
“Why is MongoDB so slow?” you may ask yourself on occasion. You’ve created indexes, you’ve learned how to use the aggregation pipeline. What the heck? Could it be your queries? This talk will outline what tools are at your disposal (both in MongoDB Atlas and in MongoDB server) to identify inefficient queries.
Lightweight locks (LWLocks) in PostgreSQL provide mutually exclusive access to shared memory structures. They support both shared and exclusive locking modes. The LWLocks framework uses wait queues, semaphores, and spinlocks to efficiently manage acquiring and releasing locks. Dynamic monitoring of LWLock events is possible through special builds that incorporate statistics collection.
Intro to MongoDB
Get a jumpstart on MongoDB, use cases, and next steps for building your first app with Buzz Moschetti, MongoDB Enterprise Architect.
@BuzzMoschetti
Indexes are references to documents that are efficiently ordered by key and maintained in a tree structure for fast lookup. They improve the speed of document retrieval, range scanning, ordering, and other operations by enabling the use of the index instead of a collection scan. While indexes improve query performance, they can slow down document inserts and updates since the indexes also need to be maintained. The query optimizer aims to select the best index for each query but can sometimes be overridden.
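The index-versus-collection-scan trade-off described above applies equally to SQL engines; here is a small sketch with Python's sqlite3, using EXPLAIN QUERY PLAN to show the planner switching from a full scan to an index search (the table and data are hypothetical, and the exact plan wording varies by SQLite version):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (id INTEGER, kind TEXT)")
con.executemany("INSERT INTO events VALUES (?, ?)",
                [(i, "click" if i % 2 else "view") for i in range(1000)])

def plan(sql):
    # The last column of each EXPLAIN QUERY PLAN row is the detail string.
    return " ".join(r[-1] for r in con.execute("EXPLAIN QUERY PLAN " + sql))

# Without an index the query scans the whole table...
before = plan("SELECT * FROM events WHERE kind = 'click'")
con.execute("CREATE INDEX idx_kind ON events (kind)")
# ...with one, the planner searches the index instead.
after = plan("SELECT * FROM events WHERE kind = 'click'")
print(before)  # e.g. SCAN events
print(after)   # e.g. SEARCH events USING INDEX idx_kind (kind=?)
```

MongoDB's `explain()` plays the same role, reporting COLLSCAN versus IXSCAN stages.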
The MySQL Query Optimizer Explained Through Optimizer Trace (oysteing)
The document discusses the MySQL query optimizer. It begins by explaining how the optimizer works, including analyzing statistics, determining optimal join orders and access methods. It then describes how the optimizer trace can provide insight into why a particular execution plan was selected. The remainder of the document provides details on the various phases the optimizer goes through, including logical transformations, cost-based optimizations like range analysis and join order selection.
Webscale PostgreSQL - JSONB and Horizontal Scaling Strategies (Jonathan Katz)
All data is relational and can be represented through relational algebra, right? Perhaps, but there are other ways to represent data, and the PostgreSQL team continues to work on making it easier and more efficient to do so!
With the upcoming 9.4 release, PostgreSQL is introducing the "JSONB" data type, which allows for fast, compressed storage of JSON-formatted data and quick retrieval. And JSONB comes with all the benefits of PostgreSQL, like its data durability, MVCC, and, of course, access to all the other data types and features in PostgreSQL.
How fast is JSONB? How do we access data stored with this type? What can it do with the rest of PostgreSQL? What can't it do? How can we leverage this new data type and make PostgreSQL scale horizontally? Follow along with our presentation as we try to answer these questions.
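For a rough feel for querying inside stored JSON documents, here is a sketch with Python's sqlite3, where SQLite's json_extract function stands in for PostgreSQL's JSONB operators (this assumes the JSON1 functions are available in your SQLite build; the data is made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE products (doc TEXT)")
con.executemany("INSERT INTO products VALUES (?)", [
    ('{"name": "widget", "tags": ["new", "sale"], "price": 10}',),
    ('{"name": "gadget", "tags": ["sale"], "price": 25}',),
])

# json_extract pulls a value out of the stored document, much as
# PostgreSQL's -> / ->> operators do for JSONB columns.
rows = con.execute(
    "SELECT json_extract(doc, '$.name') FROM products "
    "WHERE json_extract(doc, '$.price') > 15"
).fetchall()
print(rows)  # [('gadget',)]
```

In PostgreSQL the equivalent predicate would be `WHERE (doc->>'price')::numeric > 15`, optionally backed by a GIN index on the JSONB column.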
This talk discusses how we structure our analytics information at Adjust. The analytics environment consists of more than twenty 20 TB databases and many smaller systems, for a total of more than 400 TB of data. See how we make it work, from structuring and modelling the data to moving data around between systems.
Storm is a distributed and fault-tolerant realtime computation system. It was created at BackType/Twitter to analyze tweets, links, and users on Twitter in realtime. Storm provides scalability, reliability, and ease of programming. It uses components like Zookeeper, ØMQ, and Thrift. A Storm topology defines the flow of data between spouts that read data and bolts that process data. Storm guarantees processing of all data through its reliability APIs and guarantees no data loss even during failures.
Top 10 Mistakes When Migrating From Oracle to PostgreSQL (Jim Mlodgenski)
As more and more people move to PostgreSQL from Oracle, a pattern of mistakes is emerging. They can be caused by the tools being used, or by simply not understanding how PostgreSQL differs from Oracle. In this talk we will discuss the top mistakes people generally make when moving to PostgreSQL from Oracle, and what the correct course of action is.
EXPLAIN ANALYZE is a new query profiling tool first released in MySQL 8.0.18. This presentation covers how this new feature works, both on the surface and on the inside, and how you can use it to better understand your queries, to improve them and make them go faster.
This presentation is for everyone who has ever had to understand why a query is executed slower than anticipated, and for everyone who wants to learn more about query plans and query execution in MySQL.
ProxySQL and the Tricks Up Its Sleeve - Percona Live 2022 (Jesmar Cannao')
ProxySQL is a MySQL protocol proxy that provides high availability, scalability, and security for MySQL database systems. It allows clients to connect to ProxySQL, which then evaluates requests and performs actions like routing queries to backend databases, caching reads, connection pooling, and load balancing across servers. ProxySQL's main features include query routing, firewalling, real-time statistics, monitoring, and management of large numbers of backend servers. The presentation discusses using ProxySQL's query routing and rewriting capabilities to mask sensitive data when replicating databases for development environments. It also covers using the REST API and Prometheus integration to configure ProxySQL and monitor metrics without direct SQL access.
MySQL database redundancy for 24/7/365 service.
A look at MySQL redundancy approaches and a discussion of the operational concerns encountered while running them.
Table of contents
1. Why database redundancy is needed
2. Redundancy approaches
- Hardware-level redundancy
- MySQL Replication-based redundancy
3. Operational failures under redundancy
4. DNS and VIP
5. Comparison of MySQL redundancy solutions
Intended audience
- Infrastructure engineers operating MySQL in production
- Developers interested in MySQL redundancy
Sharding in MongoDB allows for horizontal scaling of data and operations across multiple servers. When determining if sharding is needed, factors like available storage, query throughput, and response latency on a single server are considered. The number of shards can be calculated based on total required storage, working memory size, and input/output operations per second across servers. Different types of sharding include range, tag-aware, and hashed sharding. Choosing a high cardinality shard key that matches query patterns is important for performance. Reasons to shard include scaling to large data volumes and query loads, enabling local writes in a globally distributed deployment, and improving backup and restore times.
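The shard-count calculation described above (take the largest requirement across storage, working set, and IOPS) can be sketched as a small Python function; all figures below are hypothetical:

```python
import math

def shards_needed(total_storage_gb, working_set_gb, required_iops,
                  per_shard_storage_gb, per_shard_ram_gb, per_shard_iops):
    """Each resource dimension demands some number of shards;
    the deployment needs the largest of the three."""
    return max(
        math.ceil(total_storage_gb / per_shard_storage_gb),
        math.ceil(working_set_gb / per_shard_ram_gb),
        math.ceil(required_iops / per_shard_iops),
    )

# 9 TB of data, a 1.2 TB working set, and 60k IOPS, on shards that
# each offer a 2 TB disk, 128 GB of RAM, and 10k IOPS:
print(shards_needed(9000, 1200, 60000, 2000, 128, 10000))  # 10
```

Here the working set is the binding constraint: RAM, not disk, dictates ten shards.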
How, as part of the migration to Google Cloud Platform, MeilleursAgents reworked its Python application deployment techniques to guarantee reliable, testable, and reproducible releases.
AlaSQL: an SQL library in JavaScript (talk at PiterJS) (Andrey Gershun)
AlaSQL is a library for processing data with the SQL language, written in JavaScript, that can run in the browser (including in WebWorker mode) or in Node.js. The library can be used in data-processing applications, as well as for ETL (extract-transform-load) tasks such as business-intelligence applications.
AlaSQL makes it possible to perform complex manipulations on data arrays (such as grouping, sorting, selection, and merging) using familiar SQL expressions. Built-in procedures for importing and exporting data in various formats (including TXT, JSON, CSV, TSV, Microsoft Excel, and Google Spreadsheets) provide a convenient interface for import and export directly from SQL statements. The library works well with modern frameworks such as Angular.js, d3.js, and Google Charts.
AlaSQL is compatible across many operators with standard SQL and its various dialects, making it possible to port procedures previously developed for other databases. Special SQL syntax extensions offer a simple and convenient way to use all the capabilities JavaScript provides, for example processing JSON objects from SQL expressions.
To achieve high performance, AlaSQL is written in heavily optimized JavaScript and includes several heuristics to reduce the time spent processing SQL statements.
First steps with Neo4j. Based on Andreas Kolleger's presentation "Getting started with Neo4j". It covers the fundamentals of graph databases and how to start using Neo4j.
An immersive workshop at General Assembly, SF. I typically teach this workshop at General Assembly, San Francisco. To see a list of my upcoming classes, visit https://generalassemb.ly/instructors/seth-familian/4813
I also teach this workshop as a private lunch-and-learn or half-day immersive session for corporate clients. To learn more about pricing and availability, please contact me at http://familian1.com
3 Things Every Sales Team Needs to Be Thinking About in 2017 (Drift)
Thinking about your sales team's goals for 2017? Drift's VP of Sales shares 3 things you can do to improve conversion rates and drive more revenue.
Read the full story on the Drift blog here: http://blog.drift.com/sales-team-tips
U-SQL combines SQL and C# to allow for querying and analyzing large amounts of structured and unstructured data stored in Azure Data Lake Store. U-SQL queries can access data across various Azure data services and provide analytics capabilities like window functions and ranking functions. The language also allows for extensibility through user-defined functions, aggregates, and operators written in C#. U-SQL queries are compiled and executed on Azure Data Lake Analytics, which provides a scalable analytics service based on Apache YARN.
Postgres vs Mongo / Олег Бартунов (Postgres Professional) (Ontico)
The document compares Postgres and MongoDB, discussing their different data models. It notes that Postgres supports semi-structured data through extensions like hstore and JSON, allowing flexible schemas like NoSQL databases while retaining ACID properties. JSON support has improved over time with the addition of the JSON and JSONB data types in Postgres.
Hive is a data warehouse infrastructure built on top of Hadoop for querying and managing large datasets stored in the Hadoop Distributed File System (HDFS). It provides a SQL-like interface to query data and uses MapReduce to parallelize the execution of queries across clusters. The document discusses Hive architecture, how it works, HiveQL syntax, data types, storage formats, DDL commands, data loading, functions, optimizations, and getting started with Hive.
This document provides an overview of in-memory databases, summarizing different types including row stores, column stores, compressed column stores, and how specific databases like SQLite, Excel, Tableau, Qlik, MonetDB, SQL Server, Oracle, SAP Hana, MemSQL, and others approach in-memory storage. It also discusses hardware considerations like GPUs, FPGAs, and new memory technologies that could enhance in-memory database performance.
MySQL Query Anti-Patterns That Can Be Moved to Sphinx (Pythian)
This document provides an overview and summary of MySQL and Sphinx search capabilities. It discusses some limitations of MySQL for certain queries and how Sphinx can help address those limitations by offloading search queries and enabling features like full-text search and geospatial search. The document also covers how to install, configure, and query Sphinx including indexing data from MySQL, running the Sphinx daemon, and connecting to it via SphinxQL or APIs.
This document provides an overview of SQL queries and functions. It discusses the SELECT statement for building queries, aggregate functions like COUNT and MIN/MAX, and scalar functions. It also covers queries for adding, updating, deleting records and creating tables. Examples are provided for each SQL statement type. The document concludes with an exercise to recreate the queries in your own database and questions to test understanding of SQL standards, query creation, and query types in Microsoft Access.
Introduction to Azure Data Lake and U-SQL for SQL users (SQL Saturday 635) (Michael Rys)
Data Lakes have become a new tool in building modern data warehouse architectures. In this presentation we will introduce Microsoft's Azure Data Lake offering and its new big data processing language called U-SQL that makes Big Data Processing easy by combining the declarativity of SQL with the extensibility of C#. We will give you an initial introduction to U-SQL by explaining why we introduced U-SQL and showing with an example of how to analyze some tweet data with U-SQL and its extensibility capabilities and take you on an introductory tour of U-SQL that is geared towards existing SQL users.
slides for SQL Saturday 635, Vancouver BC, Aug 2017
This document discusses different types of SQL functions including string, numeric, conversion, group, date/time, and user-defined functions. It provides examples of common string functions like UPPER, LENGTH, SUBSTR. Numeric functions covered include ABS, ROUND, POWER. Group functions include AVG, COUNT, MAX, MIN, SUM. Date functions allow conversion and calculation involving dates. The document demonstrates how to create scalar and table-valued user-defined functions in SQL.
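The function categories listed can be tried directly in most SQL engines; here is a small sketch with Python's sqlite3 (the table and values are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE staff (name TEXT, salary REAL)")
con.executemany("INSERT INTO staff VALUES (?, ?)",
                [("ada", 5200.0), ("grace", 6100.0), ("alan", 4800.0)])

# Scalar string/numeric functions operate row by row...
scalars = con.execute(
    "SELECT UPPER(name), LENGTH(name) FROM staff ORDER BY name"
).fetchall()
print(scalars)  # [('ADA', 3), ('ALAN', 4), ('GRACE', 5)]

# ...while group (aggregate) functions summarize the whole table.
agg = con.execute(
    "SELECT COUNT(*), MIN(salary), MAX(salary), ROUND(AVG(salary), 2) FROM staff"
).fetchone()
print(agg)  # (3, 4800.0, 6100.0, 5366.67)
```

Date/time and user-defined functions follow the same pattern, though their syntax varies more between SQL dialects.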
SQL Server 2016: System Databases, data types, DML, JSON, and built-in functions (Seyed Ibrahim)
SQL Server 2016 slides for newcomers, prepared for a session. Covers SQL Server 2016 JSON support, built-in functions, data types, and the pre-built system databases.
Beyond SQL: Speeding up Spark with DataFrames (Databricks)
2. Content
I. About Alasql
II. SQL data language
III. JavaScript API
IV. Persistence and external databases
V. JSON, TXT, CSV, TSV, and Excel data processing
VI. JavaScript frameworks: Angular.js, d3.js
VII. Command-line utilities: Alacon, Alaserver
4. Alasql
• JavaScript SQL database library designed for:
• Client-side SQL database with persistence
• Fast data processing for BI and ERP applications
• JS data manipulation and advanced filtering, grouping and joining
• Easy ETL (extract, transform, and load) of data in CSV and XLSX
formats
• Works in browser, Node.js, mobile applications
5. Alasql in Internet
• GitHub
• http://github.com/agershun/alasql
• Official site
• http://alasql.org
6. Installation and Usage
• Installation:
• In the browser
• Copy file (production)
• dist/alasql.min.js
• Or (debug)
• dist/alasql.js
• dist/alasql.js.map
• In Node.js
• npm install alasql
• Usage:
• In the browser
• <script src="alasql.js"></script>
• AMD module
• require(['alasql'],
function(alasql){ /* body */ });
• In Node.js
• var alasql = require('alasql');
7. Quick Start
// Advanced JavaScript data processing (sync, with parameters)
var data = [{a:1,b:1,c:1},{a:1,b:2,c:1},{a:1,b:3,c:1},{a:2,b:1,c:1}];
var res = alasql('SELECT a, COUNT(*) AS b FROM ? GROUP BY a',[data]);
console.log(res);
// Work with an IndexedDB database via SQL (async, multiple SQL statements)
alasql('ATTACH INDEXEDDB DATABASE MyBase; \
USE MyBase; \
SELECT City.* \
FROM City \
JOIN Country USING CountryCode \
WHERE Country.Continent = "Asia"', [], function (res) {
console.log(res.pop());
});
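The GROUP BY query above can be pictured as a plain-JavaScript reduction. This is only a sketch of the result Alasql computes, not its implementation:

```javascript
// Emulate: SELECT a, COUNT(*) AS b FROM ? GROUP BY a
var data = [{a:1,b:1,c:1},{a:1,b:2,c:1},{a:1,b:3,c:1},{a:2,b:1,c:1}];
var counts = {};
data.forEach(function (row) {
  counts[row.a] = (counts[row.a] || 0) + 1; // COUNT(*) per group key
});
var res = Object.keys(counts).map(function (k) {
  return { a: Number(k), b: counts[k] };    // b aliases COUNT(*)
});
console.log(res); // [ { a: 1, b: 3 }, { a: 2, b: 1 } ]
```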
9. Alasql SQL statements
• Data query
• SELECT
• Data manipulation
• INSERT
• UPDATE
• DELETE
• Data definition
• CREATE TABLE
• ALTER TABLE
• DROP TABLE
• Database
• USE DATABASE
• CREATE DATABASE
• DROP DATABASE
• External database
• ATTACH DATABASE
• DETACH DATABASE
• Transactions
• BEGIN
• COMMIT
• ROLLBACK
• Show
• SHOW DATABASES
• SHOW TABLES
• SHOW CREATE TABLE
• Program
• SET
• SOURCE
• Debug
• ASSERT
• Information
• HELP
10. Statements
• Single statement
• Return value
• Query result
• [{a:1},{a:2}]
• Number of rows processed
• 362
• Number of database objects
processed (e.g. tables
dropped)
• 1 / 0
• Multiple statements
• Separated by semicolon
• "CREATE DATABASE test;
USE test1"
• Return value
• Array of return values of
each of statements
• [1,0, [{a:1},{a:2}]]
11. Case-Sensitive
• Case insensitive
• SQL Keywords (SELECT)
• Standard functions (LEN)
• Aggregators (SUM)
• Engines (INDEXEDDB)
• FROM-functions (TXT)
• INTO-functions (XLSX)
• Same:
• SELECT * FROM city
• select * from city
• Case sensitive
• Database names
• Table names
• Columns
• User-defined functions
• JSON properties and
functions
• JavaScript classes
• Different:
• SELECT * FROM city
• SELECT * FROM City
• SELECT * FROM CITY
12. SELECT
• SELECT
• TOP / LIMIT FETCH
• DISTINCT
• INTO
• FROM
• JOIN ON / USING
• GROUP BY
• HAVING
• WHERE
• ORDER BY
• UNION / INTERSECT /
EXCEPT
• Value modifiers
• VALUE, COLUMN, ROW,
MATRIX …
• Columns
• City.Name, City.*, Population
AS p
• Operators
• w*h+20
• Aggregators
• SUM(), COUNT(),..
• Function
• LCASE(), LEN(), ..
13. Return Value Modifier
• SELECT By default
• returns array of objects
• [{a:1,b:10},{a:2,b:20}]
• SELECT VALUE
• returns first value of first row
• 1
• SELECT COLUMN
• returns first column from all rows
• [1,2]
• SELECT ROW
• returns values of all columns of first
row
• [1,10]
• SELECT MATRIX
• returns array of arrays
• [[1,10],[2,20]]
• Return number of all lines in
README.md
• SELECT VALUE COUNT(*) FROM
TXT('README.md')
• Return array of arrays:
• SELECT MATRIX * FROM one
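What each modifier returns can be sketched in plain JavaScript, assuming the column order follows the object key order shown (a sketch of the described behaviour, not Alasql internals):

```javascript
var rows = [{a:1,b:10},{a:2,b:20}];                  // default SELECT result
var value = rows[0].a;                               // SELECT VALUE  => 1
var column = rows.map(function (r) { return r.a; }); // SELECT COLUMN => [1,2]
var row = Object.keys(rows[0]).map(function (k) {    // SELECT ROW    => [1,10]
  return rows[0][k];
});
var matrix = rows.map(function (r) {                 // SELECT MATRIX => [[1,10],[2,20]]
  return Object.keys(r).map(function (k) { return r[k]; });
});
```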
14. SELECT columns
• Columns
• SELECT size
• SELECT City.Name,
City.Population
• Expressions
• SELECT LCASE(City), 2+2
• Aggregators
• SELECT COUNT(*),
SUM(Population)
• Alias
• SELECT City + " " + Country
AS LongName
• All columns from table
• SELECT *, City.*
• Columns of arrays
• SELECT [0],[1]
• Column names with
spaces, etc
• [My Column]
• `My Column`
15. Operators
• Number
• +,-,*,/
• String
• +
• Logic
• AND, OR, NOT
• =, !=, >, >=, <, <=
• Complex
• v BETWEEN a AND b
• v NOT BETWEEN a AND b
• v IN (10,20,30)
• v NOT IN (SELECT *
FROM Ages)
• v >= ANY (20,30,40)
16. Aggregators
• SQL Standard
• SUM()
• AVG()
• COUNT()
• MAX()
• MIN()
• FIRST()
• LAST()
• Non-standard
• AGGR()
• COUNT
• COUNT(one)
• COUNT(*)
• AGGR – operations on
aggregated values
• SELECT SUM(a) AS sm,
COUNT(*) AS cnt,
AGGR(sm/cnt) AS avg1,
AVG(a) AS avg2
FROM data
• Here: avg1 = avg2
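The avg1 = avg2 claim is easy to check with the same arithmetic in plain JavaScript (a sketch of the computation, not Alasql's implementation):

```javascript
var data = [{a:2},{a:4},{a:6}];
var sm  = data.reduce(function (s, r) { return s + r.a; }, 0); // SUM(a)   = 12
var cnt = data.length;                                         // COUNT(*) = 3
var avg1 = sm / cnt;       // AGGR(sm/cnt): an operation on the aggregated values
var avg2 = data.reduce(function (s, r) { return s + r.a; }, 0) / data.length; // AVG(a)
console.log(avg1 === avg2); // true
```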
18. TOP / LIMIT FETCH
// Select top 10 records
SELECT TOP 10 * FROM Cities ORDER BY Name
// Select 20 records starting from record number 5
SELECT * FROM Cities ORDER BY Name LIMIT 20 FETCH 5
20. INTO
• Into table
• SELECT * INTO City
FROM Capital WHERE
• SELECT * INTO
• Into external file (into-
functions)
• SELECT * INTO
CSV('city.csv') FROM City
• Into stdout (for Node.js)
• SELECT * INTO TXT()
FROM City
• Into-functions
• TXT()
• JSON()
• CSV()
• TSV() / TAB()
• XLSX()
21. FROM
• From table
• SELECT * FROM albums
• SELECT * FROM mydb.test
• From parameter
• alasql('SELECT * FROM
?',[singers]);
• From file (from function)
• SELECT * FROM
XLSX("medals.xlsx")
• From stdin (for Node.js)
• SELECT * FROM TXT()
• FROM table alias
• SELECT * FROM ? City
• SELECT * FROM album AS a
• From SELECT
• SELECT * FROM
(SELECT * FROM
(SELECT * FROM City))
• From functions
• TXT()
• JSON()
• CSV()
• TSV() / TAB()
• XLSX() / XLS()
22. From Parameter
• Array of objects
• alasql('SELECT
city.population FROM ? AS
city',[city]);
• Array of arrays
• alasql('SELECT [0]+[1]*[2]
FROM ?', [data]);
• Object
• alasql("SELECT [1] FROM
? WHERE [0] =
'one'",[{one:1,two:2}])
• String
• alasql("SELECT LEN([0])
FROM ?",["Multi \n line \n text"])
• Parameter data type
conversion
• String => array of lines
• SELECT * FROM ? WHERE
LEN([0]) > 10
• "abc\ncde" =>
[["abc"],["cde"]]
• Objects => array of pairs
key-value
• {a:1,b:2} => [["a",1],["b",2]]
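These conversions can be reproduced in plain JavaScript; a sketch of the behaviour described above:

```javascript
// String parameter: split into lines, one single-column row per line
var text = "abc\ncde";
var lines = text.split("\n").map(function (l) { return [l]; });
// => [["abc"],["cde"]]

// Object parameter: turned into an array of [key, value] pairs
var obj = {a:1, b:2};
var pairs = Object.keys(obj).map(function (k) { return [k, obj[k]]; });
// => [["a",1],["b",2]]
```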
23. JOIN
• Joins
• [INNER] JOIN
• LEFT JOIN
• RIGHT JOIN
• [FULL] OUTER JOIN
• ANTI JOIN
• SEMI JOIN
• CROSS JOIN
• NATURAL JOIN
• USING
• SELECT city.*, country.*
FROM city
JOIN country
USING countryid
• ON
• SELECT city.*, country.*
FROM city
JOIN country
ON city.countryid =
country.countryid
24. WHERE
• Expression
• SELECT * FROM City
WHERE Population >
1000000
• EXIST() / NOT EXIST()
• SELECT * FROM City
WHERE EXIST(SELECT
* FROM Capital WHERE
City.Name =
Capital.Name)
25. GROUP BY
• Grouping
• SELECT * FROM City
GROUP BY Continent,
Country
• Grouping functions
• CUBE()
• ROLLUP()
• GROUPING SETS()
• SELECT * FROM City
GROUP BY
ROLLUP(Continent,
Country)
27. ORDER BY
• Ascending
• SELECT * FROM City
ORDER BY Population
• SELECT * FROM City
ORDER BY Population
ASC
• Descending
• SELECT * FROM City
ORDER BY Name
DESC
28. UNION / INTERSECT / EXCEPT
• SELECT 10
UNION ALL
SELECT 20
• UNION
• UNION ALL
• INTERSECT
• EXCEPT / MINUS
29. INSERT values
• VALUES
• INSERT INTO city (name, population) VALUES
("Moscow",11500000), ("Kyiv",5000000)
• INSERT INTO city VALUES ("Paris",3500000)
• INSERT INTO city VALUES {name:"Berlin", population:4000000}
• DEFAULT VALUES
• INSERT INTO city DEFAULT VALUES
• SELECT (= SELECT INTO)
• INSERT INTO city SELECT capital AS name FROM country
GROUP BY capital;
32. CREATE TABLE
DROP TABLE
• CREATE TABLE star (
one INT DEFAULT 100,
two STRING,
three BOOL PRIMARY KEY
);
• DROP TABLE star;
33. ALTER TABLE
• ADD COLUMN
• ALTER TABLE City ADD COLUMN Continent STRING
• RENAME COLUMN
• ALTER TABLE City RENAME COLUMN Continent TO WorldPart
• DROP COLUMN
• ALTER TABLE City DROP COLUMN Continent
• RENAME TO
• ALTER TABLE City RENAME TO Capital
34. CREATE DATABASE /
DROP DATABASE / USE DATABASE
• Create database
• CREATE DATABASE mydb
• Select default database
• USE DATABASE mydb
• USE mydb
• Drop database
• DROP DATABASE mydb
35. Transaction
• Begin
• BEGIN
• Commit
• COMMIT
• Rollback
• ROLLBACK
• As of version 0.0.35, Alasql supports transactions only for the
Local Storage and DOM-storage databases. Full support
for other databases will be available in future versions
36. SHOW
• SHOW DATABASES – list of all databases in memory
• SHOW DATABASES LIKE 'A%'
• SHOW TABLES – list of tables in database
• SHOW TABLES FROM mydb
• SHOW CREATE TABLE table – show CREATE TABLE
statement from the table
• SHOW CREATE TABLE City
37. SET, SOURCE, ASSERT, HELP
• SET - now used only for one option:
• SET AUTOCOMMIT ON / OFF
• SOURCE "file.sql" – read and execute all SQL statements from a
file
• SOURCE 'world.sql'
• ASSERT value – throws an error if the result of the last operation is not
equal to value (Alasql uses the equalDeep() function for
comparison)
• ASSERT 1
• ASSERT "Wrong Value", [{a:1,b:"Odessa"}]
• HELP
• Show list of available commands
39. SQL and JavaScript:
Better Together!
SQL way
alasql('CREATE DATABASE test01');
alasql('USE test01');
alasql('CREATE TABLE one (a INT)');
alasql('INSERT INTO one VALUES (10)');
var res = alasql('SELECT * FROM one');
JavaScript way
data = [{a:1}, {a:2}, {a:3}];
alasql('SELECT * FROM ? WHERE a >=
?', [data, 2]);
or
var db = new alasql.Database();
db.exec('select * from one', function(data)
{
console.log(data.length);
});
40. alasql - main library object and function
• alasql(sql,params,callback) – execute sql
• alasql.exec(sql,params,callback) – execute sql
• alasql.parse(sql) – parse to AST (abstract syntax tree)
• ast.compile(databaseid) – compile statement and cache it
in database cache
• alasql.exec(sql) – execute statement
• alasql.use(databaseid) – use database
• alasql.pretty(sql) – pretty SQL output in HTML and TXT
• alasql.options - options
41. alasql() - main function
• alasql(sql,[params],[callback])
• sql – one or several SQL statements separated by ';'
• If one statement – alasql() returns one value
• USE test12 => 1
• SELECT * FROM one => [{a:1}, {a:2}]
• If several statements – alasql() returns an array of values, one for each
statement
• USE test12; SELECT * FROM one => [1,[{a:1}, {a:2}]]
• params – an array of parameters for the SQL statement
• You can use ? in the SQL statement
• alasql('SELECT a FROM ? WHERE b = ?',[[{a:1,b:1}, {a:2,b:2}],2])
• callback – a callback function
• Without a callback alasql() runs synchronously
• With a callback alasql() runs asynchronously
42. alasql(): sync and async
• Sync version
• var result = alasql(sql,
params)
• Async version
• alasql(sql, params,
function(result) {
// do something
//with result
});
• It is impossible to use
sync version with
async operations like:
• IndexedDB functions
• INTO- and FROM-
functions
44. Alasql options
• alasql.options
• alasql.options.valueof (true/false) – convert all values with
.valueOf() function before comparing
• alasql.options.angularjs (true/false) – remove $$hashKey from
result arrays if angular.js library loaded
45. How Alasql stores data?
• alasql.databases – list of all current databases in memory
• alasql.engines – list of all alasql available engines (like
localStorage, IndexedDB)
46. Database class
• var db = new alasql.Database('mydb')
• db.databaseid – database name
• db.tables – list of tables
• db.engineid – engine (Local Storage, IndexedDB, etc.)
• db.exec(sql) – execute sql in mydb database
48. User-defined functions
and compiled statements
Custom functions:
alasql.fn.myfn = function(a,b) {
return a*b+1;
}
alasql('SELECT myfn(a,b) FROM
one');
Compiled statements:
var ins = alasql.compile('INSERT
INTO one VALUES (?,?)');
ins(1,10);
ins(2,20);
Compiled functions:
var bigSum =
alasql.compile('SELECT SUM(a)
FROM one WHERE a>3', 'value');
var res = bigSum([10]) + 10;
49. JavaScript classes as SQL data types
• alasql.fn.Date = Date;
• alasql('CREATE TABLE order (
orderno INT,
orderdate Date
)');
• Classes are case-sensitive
50. NEW (like JavaScript ‘new’ operator)
• Register class as alasql type
• alasql.fn.Date = Date;
• Use NEW
• alasql('SELECT NEW Date(yr,mn-1,dy) FROM orderdates');
51. Property
• Property operator ->
• INSERT INTO one VALUES @{a:5, b:{c:[4,5]}}
• SELECT * FROM one WHERE a->b->0 = 4
• Expression
• SELECT * FROM one WHERE a->(LCASE("B"))->(1-1) = 4
52. Call JavaScript object function
• Arrow function ->
• object -> function(parameters)
• Select lengths of all lines from text file
• alasql('SELECT [0]->length FROM TXT("mytext.txt")');
• alasql('SELECT LEN([0]) FROM TXT("mytext.txt")');
53. JavaScript object properties
• Arrow function -> property
• var data = [{a:{b:1,c:1}}, {a:{b:2}}]
• alasql('SELECT a->b FROM ?',[data]);
• Array members
• SELECT a->(0) FROM data
• Calculated property names
• SELECT a->("mon"+moid), b->(2+2) FROM data
54. Object Properties & Functions
• Object property
• a -> b
• a -> b -> c
• Array member
• a -> 1
• a -> 1 -> 2
• Calculated property
name
• a -> (1+2)
• a -> ("text" + " " + "more")
• Functions
• myTime -> getFullYear()
• s -> substr(1,2)
• JavaScript string
functions
• "ABCDE"->length
• SELECT s->length
FROM mytext
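Each arrow form corresponds to ordinary JavaScript property and method access; a sketch of the equivalences (the variable names here are illustrative):

```javascript
var a = { b: { c: [4, 5] } };
var first = a.b.c[0];             // a->b->c->0 : nested property + array member => 4
var calc  = a["b"]["c"][1 + 0];   // a->("b")->(1+0) : calculated property names => 5
var len   = "ABCDE".length;       // "ABCDE"->length                             => 5
var sub   = "ABCDE".substr(1, 2); // s->substr(1,2) : method call                => "BC"
```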
55. JSON objects
• @ prefixes (like Objective-C NSObjects)
• @1
• @"string"
• @{a:1,b:2} or {a:1,b:2}
• @[1,2,3] – conflicts with column names with spaces [My Column]
• Three equality operators
• a = b – like == in JavaScript
• a == b – compares a.valueOf() and b.valueOf() – for dates
• a === b – uses equalDeep() – for JSON objects
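A minimal deep-equality helper illustrates what the === level checks. This is a sketch only; Alasql uses its own internal equalDeep() function:

```javascript
// Structural (deep) equality: same keys, same values, recursively
function deepEqual(x, y) {
  if (x === y) return true;
  if (typeof x !== "object" || x === null ||
      typeof y !== "object" || y === null) return false;
  var kx = Object.keys(x), ky = Object.keys(y);
  if (kx.length !== ky.length) return false;
  return kx.every(function (k) { return deepEqual(x[k], y[k]); });
}
deepEqual({a:1,b:[2,3]}, {a:1,b:[2,3]}); // true
deepEqual({a:1}, {a:2});                 // false
```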
56. JSON with expressions
• CREATE TABLE one;
• INSERT INTO one VALUES @{b:1}, @{b:2}
• SELECT @{a:@[2014,(2014+1),(2014+b)]} FROM one
• [{a:[2014,2015,2015]}, {a:[2014,2015,2016]}]
57. CREATE TABLE AND INSERT JSON
VALUES
• JSON table
• CREATE TABLE one;
• INSERT INTO one VALUES @{a:1}, @{b:2}, @{a:1,b:2}, @1,
@"String"
• JSON object
• CREATE TABLE two (a JSON);
• INSERT INTO two VALUES (1), ('two'), (@{b:'three'}),
@['F','O','U','R']
58. SELECT JSON
• SELECT * FROM one
• [{a:1}, {b:2}, {a:1,b:2}, 1, "String"]
• SELECT a FROM one
• [{a:1}, {a:undefined}, {a:1}, {a:undefined},{a:undefined}]
• SELECT * FROM one WHERE a=1
• [{a:1},{a:1,b:2}]
59. Deep equal (==, ===)
• SELECT @{a:1} == @{a:1}
• True
• SELECT * FROM one WHERE a=1
• INSERT INTO one VALUES {a:[5,6]}
• SELECT * FROM one WHERE a==@[5,6]
60. Deep Clone JSON object
• SELECT a FROM one
• SELECT deepClone(a) FROM one
61. ? parameter value
• ? operator
• alasql('INSERT INTO one VALUES @{year:?, b:1}',[2014]);
• alasql("select * from sales where dt == @{year:?}", [2014])
• Parameter object property by name
• alasql('SELECT $a FROM ?', [{a:1}])
• alasql('SELECT :b FROM ?', [{b:1}])
• Array member
• alasql('SELECT $2 FROM ?', [[0,1,2,3,4]])
62. Date and Time in Alasql
• Usual realization of date and
time types in different SQL
databases:
• DATE
• DATETIME
• TIMEDIFF
• TIME
• Constants
• "YYYY-MM-DD"
• "YYYY-MM-DD hh:mm:ss"
• "YYYY-MM-DD hh:mm:ss.SSS"
• Definition
• CREATE TABLE orders (
ordered INT,
orderdate DATE
);
• SELECT DAY(orderdate),
MONTH(orderdate),
YEAR(orderdate)
FROM orders
WHERE orderdate
BETWEEN
"2014-01-01" AND
"2014-06-30"
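The DAY/MONTH/YEAR functions shown above correspond to the JavaScript Date accessors (a sketch; note that an ISO "YYYY-MM-DD" string is parsed as UTC):

```javascript
var orderdate = new Date("2014-03-15");  // DATE constant "YYYY-MM-DD"
var day   = orderdate.getUTCDate();      // DAY(orderdate)   => 15
var month = orderdate.getUTCMonth() + 1; // MONTH(orderdate) => 3 (getUTCMonth is 0-based)
var year  = orderdate.getUTCFullYear();  // YEAR(orderdate)  => 2014
```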
63. JavaScript Date object
• Definition:
• var now = new Date();
• Constants
• No, only new object:
• new Date(2014,0,1)
• How to compare
• new Date(2014,0,1) != new
Date(2014,0,1)
• BUT!
• new Date(2014,0,1).valueOf() ==
new Date(2014,0,1).valueOf()
• getTime() = valueOf()
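The comparison rules above can be verified directly in JavaScript:

```javascript
var d1 = new Date(2014, 0, 1);
var d2 = new Date(2014, 0, 1);
console.log(d1 != d2);                      // true: two distinct objects
console.log(d1.valueOf() === d2.valueOf()); // true: same millisecond timestamp
console.log(d1.getTime() === d1.valueOf()); // true: getTime() equals valueOf()
```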
64. Alasql approach
• DATE
• "2014-12-01"
• DATETIME
• "2014-12-01"
• "2014-12-01 23:01:23"
• "2014-12-01 12:34:56.123"
• Define class
• alasql.fn.Number = Number;
• alasql.fn.Date = Date;
• Table
• CREATE TABLE orders (
orderid Number,
orderdate Date
);
• Date
• new Date("2014-12-01")
• Compare
• new Date(a) == new Date(b)
66. Supported external databases
• Browser
• Local Storage (LOCALSTORAGE)
• IndexedDB (INDEXEDDB)
• Node.js
• DOM-storage (analog of Local Storage) – (LOCALSTORAGE)
• Browser and Node.js
• SQLite (SQLITE)
67. CREATE DATABASE
DROP DATABASE / SHOW DATABASES
• Engines
• CREATE INDEXEDDB DATABASE MyBase
• DROP INDEXEDDB DATABASE MyBase
• Created databases are not attached automatically
• Show databases in Local Storage
• SHOW LOCALSTORAGE DATABASES
68. ATTACH DATABASE
DETACH DATABASE
• Attach database
• ATTACH INDEXEDDB DATABASE Stars
• Attach database as alias
• ATTACH INDEXEDDB DATABASE Stars AS Astras
• Attach database from file (with parameters)
• ATTACH SQLITE DATABASE Stars("stars.sqlite")
• Detach database
• DETACH DATABASE Astras
• An attached database is not set as default (use the USE
DATABASE statement)
• It is not necessary to USE a database to query it (use the
database prefix)
• SELECT * FROM Sky.Stars
69. AUTOCOMMIT option
• Local Storage can work in two modes
• SET AUTOCOMMIT ON (default)
• Alasql stores the result of each SQL statement to Local Storage
• SET AUTOCOMMIT OFF
• Use BEGIN, COMMIT, and ROLLBACK to copy data between
memory and Local Storage
74. Angular.js and Alasql
// Export data to an Excel file from an Angular.js array
function MyCtrl($scope) {
$scope.items = [{City: 'Moscow', Population: 11500000},
{City: 'New York', Population: 16000000}];
function exportToExcel() {
alasql('SELECT * INTO XLSX("mydata.xlsx", {headers:true})
FROM ?', [$scope.items]);
}
}
75. d3.js and Alasql
// Load data from the cities.csv file and create a list with city names.
alasql('SELECT * FROM CSV("cities.csv",{headers:true})',[],function(cities){
d3.select('#cities')
.append('ul')
.selectAll('li')
.data(cities)
.enter()
.append('li')
.text(function(city){ return city.name; });
});
77. Alacon – command-line SQL
for data file processing
• Purpose
• Complex text processing
• Batch file format conversion
• Join data files on keys
• Usage:
• node alacon sql param1 param2…
• node alacon -f file param1 param2…
78. Alacon samples
• Convert Excel file
• node alacon "select [0], [1] from xls('mytext.txt')"
• Count number of lines from stdin
• node alacon 'SELECT VALUE COUNT(*) FROM TXT()' <a.txt
• Select long lines
• node alacon 'SELECT * FROM TXT() WHERE LEN([0])>60' <a.txt
• Grep
• node alacon "SELECT * FROM TXT() WHERE [0] LIKE 'M%'" <a.txt
• Filter lines with 'one' word:
• alacon "select line into txt() from txt() where line like '%one%'" <a.a
>b.b
• Calculator
• node alacon '2*2'
79. Alaserver - very simple SQL server
• Run
• node alaserver -p 8081
• Enter in the browser address line:
• localhost:8081?SELECT * FROM TXT('README.md')
• or GET
• $.get("localhost:8081?SELECT * FROM TXT('README.md')")
• Warning: Alaserver is not multithreaded, not secured, not
protected