This document is a tutorial for the ScalikeJDBC library. It introduces ScalikeJDBC as a tidy SQL-based database access library for Scala developers. It covers key topics like the connection pool, implicit sessions, SQL syntax, the query DSL, testing support, and integration with Play. The tutorial aims to help beginners get started with ScalikeJDBC's main features.
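To try the examples on the slides below, you only need ScalikeJDBC and a JDBC driver on the classpath. The following build.sbt entry is a minimal sketch: the version strings are placeholders, and in older 1.x releases the SQL interpolation syntax used on the slides may require the separate scalikejdbc-interpolation module, so check scalikejdbc.org for the artifacts that match your version.
// build.sbt (a sketch; version strings are placeholders)
libraryDependencies ++= Seq(
  "org.scalikejdbc" %% "scalikejdbc" % "<latest>",
  "com.h2database" % "h2" % "<latest>" // or any other JDBC driver
)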
5. SQL-based
import scalikejdbc._, SQLInterpolation._
val id = 123
// Anorm-like API for Scala 2.9
val name = SQL("select name from company where id = {id}")
.bindByName('id -> id).map(rs => rs.string("name")).single.apply()
// SQL Interpolation since 2.10
val name = sql"select name from company where id = ${id}"
.map(rs => rs.string("name")).single.apply()
// Query DSL
val name = withSQL {
select(c.name).from(Company as c).where.eq(c.id, id)
}.map(rs => rs.string(c.resultName.name)).single.apply()
6. Everybody knows SQL
We don't need something new and different from SQL.
You could probably write the code on the previous slide right away, because you already know SQL.
8. Easy-to-use CP
A DB block such as DB.readOnly { ... } borrows a connection from ConnectionPool.
Since ConnectionPool is a singleton object, you can easily borrow a connection anywhere.
9. ConnectionPool
import scalikejdbc._
Class.forName("org.h2.Driver")
// default connection pool
val (url, user, password) = ("jdbc:h2:mem:db", "sa", "")
ConnectionPool.singleton(url, user, password)
val conn: java.sql.Connection = ConnectionPool.borrow()
val names: List[String] = DB readOnly { implicit session =>
sql"select name from company".map(_.string("name")).list.apply()
}
// named connection pool
val poolSettings = new ConnectionPoolSettings(maxSize = 50)
ConnectionPool.add('secondary, url, user, password, poolSettings)
NamedDB('secondary) readOnly { implicit session =>
sql"select name from company".map(_.string("name")).list.apply()
}
10. DBCP by default
By default, only a Commons DBCP implementation is provided.
It's also possible to plug in your preferred connection pool implementation
(e.g. C3P0, BoneCP).
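One way to do that, sketched under the assumption that your ScalikeJDBC version ships the DataSourceConnectionPool wrapper: build whatever pool you like as a javax.sql.DataSource and hand it over. The buildMyDataSource helper below is hypothetical and stands in for your own C3P0/BoneCP setup.
import scalikejdbc._
import javax.sql.DataSource
// hypothetical helper that configures a C3P0/BoneCP/etc. pool
val ds: DataSource = buildMyDataSource()
// let ScalikeJDBC borrow connections from the externally managed pool
ConnectionPool.singleton(new DataSourceConnectionPool(ds))
DB readOnly { implicit session =>
sql"select name from company".map(_.string("name")).list.apply()
}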
13. Implicit DBSession
import scalikejdbc._, SQLInterpolation._
val id = 123
// DBSession usage
val name: Option[String] = DB readOnly { session: DBSession =>
session.single("select name from company where id = ?", id) { rs =>
rs.string("name")
}
}
// SQL API requires implicit DBSession
val name: Option[String] = DB readOnly { implicit session =>
sql"select name from company where id = ${id}"
.map(_.string("name")).single.apply()
}
15. Flexible Transaction
import scalikejdbc._, SQLInterpolation._
implicit val session = AutoSession
sql"select name from company".map(_.string("name")).list.apply()
def addCompany(name: String)(implicit s: DBSession = AutoSession) {
sql"insert into company values (${name})".update.apply()
}
def getAllNames()(implicit s: DBSession = AutoSession): List[String] = {
sql"select name from company".map(_.string("name")).list.apply()
}
val names: List[String] = getAllNames() // read-only session
DB localTx { implicit session =>
addCompany("Typesafe") // within a transaction
getAllNames() // within a transaction, includes "Typesafe"
}
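One behavior worth spelling out here: DB.localTx commits when the block finishes normally and rolls back when it throws. A minimal sketch, reusing addCompany and getAllNames from above (the exception is only for illustration):
try {
DB localTx { implicit session =>
addCompany("Oracle")
throw new RuntimeException("boom") // the insert above is rolled back
}
} catch { case _: RuntimeException => () }
getAllNames() // does not contain "Oracle"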
17. SQLSyntax?
// SQLSyntax = a part of a SQL object, will be embedded as-is
val c: SQLSyntax = sqls"count(*)"
val bobby: String = "Bob%"
val query = sql"select ${c} from members where name like ${bobby}"
// -> "select count(*) from members where name like ?"
A part of a SQL object which will be embedded as-is.
18. SQLSyntaxSupport
SQLSyntaxSupport provides DRY and type-safe SQL.
- Convention over Configuration (CoC), but configurable
- Lets you write complex join queries
- Uses Dynamic (similar to Ruby's method_missing) but adds compile-time checks via a macro
19. CoC Rules
// Scala object
case class GroupMember(id: Long, fullName: Option[String] = None)
object GroupMember extends SQLSyntaxSupport[GroupMember]
// DDL
create table group_member (
id bigint not null primary key,
full_name varchar(255)
)
Simply the snake_case'd name
20. Customize it
// Scala object
case class GroupMember(id: Long, fullName: Option[String] = None)
object GroupMember extends SQLSyntaxSupport[GroupMember] {
override val tableName = "group_members"
override val nameConverters = Map("fullName" -> "fullname")
}
// DDL
create table group_members (
id bigint not null primary key,
fullname varchar(255)
)
21. Query & Extracting
import scalikejdbc._, SQLInterpolation._
// entity class (Plain Old Scala Object)
case class Member(id: Long, fullName: Option[String] = None)
// companion object for entity
object Member extends SQLSyntaxSupport[Member] {
def apply(m: ResultName[Member])(rs: WrappedResultSet) = new Member(
id = rs.long(m.id), fullName = rs.stringOpt(m.fullName)
)
}
val m = Member.syntax("m")
val id = 123
val member: Option[Member] = DB readOnly { implicit s =>
sql"select ${m.result.*} from ${Member as m} where ${m.id} = ${id}"
.map(Member(m.resultName)).single.apply()
}
22. result? resultName?
val m = Member.syntax("m")
m.fullName == "m.full_name"
m.result.fullName == "m.full_name as fn_on_m"
m.resultName.fullName == "fn_on_m"
Scala:
sql"select ${m.result.fullName} from ${Member as m} where ${m.id} = 123"
SQL:
"select m.full_name as fn_on_m from member m where m.id = 123"
ResultSet extractor:
val name = sql"...".map(rs => rs.string(m.resultName.fullName)).single.apply()
// extracting "fn_on_m" from ResultSet
23. Type safe dynamic
import scalikejdbc._, SQLInterpolation._
val m = Member.syntax("m")
val id = 123
val member: Option[Member] = DB readOnly { implicit s =>
sql"select ${m.result.fullNamee} from ${Member as m} where ${m.id} = ${id}"
.map(Member(m.resultName)).single.apply()
}
<console>:28: error: Member#fullNamee not found. Expected fields are #id,
#fullName, #createdAt, #deletedAt.
m.fullNamee
^
25. Uniqueness
- DBSession inspired by Querulous
- SQL(String) inspired by Anorm
- sql"..." inspired by Slick
The Query DSL, however, is ScalikeJDBC's own original approach!
26. What's Query DSL
- Just appends SQL parts
- Type safe and pragmatic
- Easy to understand
- Not composable but reusable
- Parts from the sqls object
- Append sqls"..." when needed
27. Query DSL examples
import scalikejdbc._, SQLInterpolation._
implicit val session = AutoSession
val c = Company.syntax("c")
val id = 123
val company: Option[Company] = withSQL {
select.from(Company as c).where.eq(c.id, id)
}.map(Company(c.resultName)).single.apply()
// DML builders are executed via withSQL { ... }.update.apply()
withSQL { insert.into(Company).values(123, "Typesafe") }.update.apply()
val column = Company.column
withSQL { update(Company).set(column.name -> "Oracle").where.eq(column.id, 123) }.update.apply()
withSQL { delete.from(Company).where.eq(column.id, 123) }.update.apply()
28. Joins, one-to-x API
val programmerWithSkills = withSQL {
select
.from(Programmer as p)
.leftJoin(Company as c).on(p.companyId, c.id)
.leftJoin(ProgrammerSkill as ps).on(ps.programmerId, p.id)
.leftJoin(Skill as s).on(ps.skillId, s.id)
.where
.eq(p.id, id)
.and
.isNull(p.deletedAt)
}
.one(Programmer(p, c))
.toMany(Skill.opt(s))
.map { (pg, skills) => pg.copy(skills = skills) }
.single.apply()
29. Sub query, Paging
val noSkillProgrammers = withSQL {
select
.from(Programmer as p)
.leftJoin(Company as c).on(p.companyId, c.id)
.where
.notIn(p.id,
select(sqls.distinct(ps.programmerId)).from(ProgrammerSkill as ps))
.and
.isNull(p.deletedAt)
.orderBy(p.id).desc
.limit(10).offset(0)
}
.map(Programmer(p, c))
.list.apply()
30. sqls.xxx, sqls"..."
val userId = 123
val orderCount: Long = withSQL {
select(sqls.count(sqls.distinct(o.id))) // sqls object
.from(Order as o)
.innerJoin(Product as p).on(p.id, o.productId)
.where
.append(sqls"${o.userId} = ${userId}") // direct SQL embedding
}
.map(rs => rs.long(1))
.single.apply().get
31. More and More
- insert select
- groupBy, having
- in, exists, like, between
- union, unionAll
- withRoundBracket { ... }
- dynamic(And|Or)Conditions { ... }
In detail:
QueryDSLFeature.scala
QueryInterfaceSpec.scala
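A rough sketch of two of the features listed above, groupBy/having combined with an in predicate; the Company table and its columns are just the ones from earlier slides, and the exact clause names should be confirmed against QueryDSLFeature.scala:
val c = Company.syntax("c")
val duplicatedNames: List[String] = DB readOnly { implicit s =>
withSQL {
select(c.name)
.from(Company as c)
.where.in(c.id, Seq(1, 2, 3))
.groupBy(c.name)
.having(sqls"count(${c.id}) > 1")
}.map(_.string(1)).list.apply()
}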
33. Code Generator
- sbt plugin
- Code from existing tables
- project/scalikejdbc.properties
- Play2 style models and tests
In detail:
Wiki Page
sbt "scalikejdbc-gen [table-name]"
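For orientation, a sketch of what project/scalikejdbc.properties usually contains; the key names below are written from memory and may differ between plugin versions, so treat them as assumptions and confirm them on the Wiki Page:
# assumed key names; verify against the scalikejdbc-gen wiki page
jdbc.driver=org.h2.Driver
jdbc.url=jdbc:h2:file:db/sample
jdbc.username=sa
jdbc.password=
generator.packageName=models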
34. Testing with ScalaTest
import scalikejdbc._, scalatest.AutoRollback
import org.scalatest.fixture.FlatSpec
class MemberSpec extends FlatSpec with AutoRollback {
override def fixture(implicit s: DBSession) {
sql"delete from members".update.apply()
Member.create(1, "Alice")
}
behavior of "Member"
it should "create a new record" in { implicit s =>
val beforeCount = Member.count
Member.create(123, "Brian")
Member.count should equal(beforeCount + 1)
}
}
35. Testing with specs2 (1)
import scalikejdbc._, specs2.mutable.AutoRollback
import org.specs2.mutable.Specification
object MemberSpec extends Specification {
"Member should create a new record" in new MyAutoRollback {
val beforeCount = Member.count
Member.create(123, "Brian")
Member.count should equal(beforeCount + 1)
}
}
trait MyAutoRollback extends AutoRollback {
override def fixture(implicit s: DBSession) {
sql"delete from members".update.apply()
Member.create(1, "Alice")
}
}
36. Testing with specs2 (2)
import scalikejdbc._, specs2.AutoRollback
import org.specs2.Specification
object MemberSpec extends Specification { def is =
"Member should create a new record" ! autoRollback().create
end
}
case class autoRollback() extends AutoRollback {
override def fixture(implicit s: DBSession) {
sql"delete from members".update.apply()
Member.create(1, "Alice")
}
def create = this {
val beforeCount = Member.count
Member.create(123, "Brian")
Member.count should equal(beforeCount + 1)
}
}
37. Play2 Support
- Integrates with Play2 seamlessly
- Reads conf/application.conf
- Add the plugin to conf/play.plugins
- FixturePlugin is also available
- A great contribution by @tototoshi
In detail:
Wiki Page
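As a sketch of the configuration side: the db.default.* keys below are standard Play 2 settings that the plugin reads from conf/application.conf; the registration line for conf/play.plugins (priority:class format) is omitted here, so take the exact class name and priority from the Wiki Page.
# conf/application.conf (illustrative values)
db.default.driver=org.h2.Driver
db.default.url="jdbc:h2:mem:play"
db.default.user=sa
db.default.password=""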