DynamoDB powers many cloud-scale applications, with its robust horizontal scalability and uptime. Yet interacting with DynamoDB through the Java SDK is error-prone and tedious. In this presentation, Avinder Bahra presents ZIO DynamoDB, a new library by Avinder Bahra and Adam Johnson designed to make interacting with DynamoDB easy, type-safe, testable, and productive.
2. contributors
- Avinder Bahra
  - used Scala in production for the last 5 years
  - used ZIO in production for the last 2 years
  - been programming professionally for the last 30 years
- Adam Johnson
2 / 42
7. Some context
DynamoDB
fully managed, serverless, key-value NoSQL database designed to run
high-performance applications at any scale
great for scalability and throughput, with features like auto-scaling
and global table replication
downside of NoSQL DBs
they do less work than relational databases for tasks such as
querying and maintaining consistency/integrity and offer less
tooling
hence the code burden for the developer is significantly
increased
7 / 42
8. problem: using DDB Java SDK is painful
we are going to walk you through the process of creating a
production-grade Scala application using the Java SDK.
Let's see how many LOC it takes to perform simple write and read
operations.
Here are the steps we are going to follow:
create a type-safe model (case class)
serialise/deserialise the Scala model to the API's model
create API requests for writing and reading the model to the database
execute the requests and process the results
improve performance by using batching of requests
as this is production grade we are going to use ZIO to manage effects,
concurrency, Java interop ...
9. problem: using DDB Java SDK is painful
final case class Student(email: String, // partition key
subject: String, // sort key
enrollmentDate: Option[Instant], payment: Payment)
sealed trait Payment
object Payment {
final case object DebitCard extends Payment
final case object CreditCard extends Payment
final case object PayPal extends Payment
}
10. problem: using DDB Java SDK is painful
we have to build a PutItemRequest
import scala.jdk.CollectionConverters._
final case class Student(email: String, subject: String,
enrollmentDate: Option[Instant], payment: Payment)
sealed trait Payment
object Payment {
final case object DebitCard extends Payment
final case object CreditCard extends Payment
final case object PayPal extends Payment
}
def putItemRequest(student: Student): PutItemRequest =
PutItemRequest.builder
.tableName("student")
.item(toAttributeValueMap(student).asJava) //Item: Map[String, AttributeValue]
.build
11. problem: using DDB Java SDK is painful
we have to serialise the Student case class to AttributeValue's
final case class Student(email: String, subject: String,
enrollmentDate: Option[Instant], payment: Payment)
sealed trait Payment
object Payment {
final case object DebitCard extends Payment
final case object CreditCard extends Payment
final case object PayPal extends Payment
}
def putItemRequest(student: Student): PutItemRequest =
PutItemRequest.builder
.tableName("student")
.item(toAttributeValueMap(student).asJava)
.build
def toAttributeValueMap(student: Student): Map[String, AttributeValue] = {
Map(
"email" -> AttributeValue.builder.s(student.email).build,
"subject" -> AttributeValue.builder.s(student.subject).build
)
}
12. problem: using DDB Java SDK is painful
we have to deal with optional and sum types
def toAttributeValueMap(student: Student): Map[String, AttributeValue] = {
val mandatoryFields = Map(
"email" -> AttributeValue.builder.s(student.email).build,
"subject" -> AttributeValue.builder.s(student.subject).build,
"payment" -> AttributeValue.builder.s { // serialise sum type
student.payment match {
case DebitCard => "DebitCard"
case CreditCard => "CreditCard"
case PayPal => "PayPal"
}
}.build
)
val nonEmptyOptionalFields: Map[String, AttributeValue] = Map(
"enrollmentDate" -> student.enrollmentDate.map(instant =>
AttributeValue.builder.s(instant.toString).build)
).filter(_._2.nonEmpty).view.mapValues(_.get).toMap
mandatoryFields ++ nonEmptyOptionalFields
}
13. problem: using DDB Java SDK is painful
Now that we have saved a Student as an Item in the DynamoDB database we
next need to create a GetItemRequest to retrieve it.
GetItemRequest.builder
.tableName("student")
.key(
Map(
"email" -> AttributeValue.builder.s("avi@gmail.com").build,
"subject" -> AttributeValue.builder.s("maths").build
).asJava
)
.build()
14. problem: using DDB Java SDK is painful
Next we have to de-serialise a GetItemResponse - a Map[String, AttributeValue] -
and deal with concerns such as:
mandatory fields - if not present we want an error
optional fields
conversion to standard types, e.g. Instant
Helper functions that return an Either[String, _] for managing errors
def getString(map: Map[String, AttributeValue],
name: String): Either[String, String] =
map.get(name).toRight(s"mandatory field $name not found").map(_.s)
def getStringOpt(map: Map[String, AttributeValue],
name: String): Either[Nothing, Option[String]] =
Right(map.get(name).map(_.s))
def parseInstant(s: String): Either[String, Instant] =
Try(Instant.parse(s)).toEither.left.map(_.getMessage)
15. problem: using DDB Java SDK is painful
we create a deserialise function that uses the previous helpers
note we have to de-serialise the Payment sum type
def deserialise(item: Map[String, AttributeValue]): Either[String, Student] =
for {
email <- getString(item, "email")
subject <- getString(item, "subject")
maybeEnrollmentDateAV <- getStringOpt(item, "enrollmentDate")
maybeEnrollmentDate <-
maybeEnrollmentDateAV.fold[Either[String, Option[Instant]]](Right(None))(s =>
parseInstant(s).map(i => Some(i))
)
payment <- getString(item, "payment")
paymentType = payment match {
case "DebitCard" => DebitCard
case "CreditCard" => CreditCard
case "PayPal" => PayPal
}
} yield Student(email, subject, maybeEnrollmentDate, paymentType)
16. problem: using DDB Java SDK is painful
stitching together all the functions using ZIO
val program =
for {
client <- ZIO.service[DynamoDbAsyncClient]
student = Student("avi@gmail.com", "maths", Some(Instant.now),
Payment.DebitCard)
putRequest = putItemRequest(student)
_ <- ZIO.fromCompletionStage(client.putItem(putRequest))
getRequest = getItemRequest(student)
getItemResponse <- ZIO.fromCompletionStage(client.getItem(getRequest))
studentItem = getItemResponse.item.asScala.toMap
foundStudent = deserialise(studentItem)
} yield foundStudent
17. problem: using DDB Java SDK is painful
But that's not fast enough - we want to use DDB batching for our Puts and Gets
So we first have to create a BatchWriteItemRequest
def batchWriteItemRequest(students: List[Student]): BatchWriteItemRequest = {
val putRequests = students.map { student =>
val request = PutRequest
.builder()
.item(toAttributeValueMap(student).asJava)
.build()
WriteRequest.builder().putRequest(request).build()
}
BatchWriteItemRequest
.builder()
.requestItems(Map("student" -> putRequests.asJava).asJava)
.build()
}
input is a List of Student which we map to WriteRequests
for each student we use our toAttributeValueMap function to serialise
to an Item
finally we create a BatchWriteItemRequest
18. problem: using DDB Java SDK is painful
We then have to execute the batch and process the response
def batchWriteAndRetryUnprocessed(
batchRequest: BatchWriteItemRequest
): ZIO[Has[DynamoDbAsyncClient], Throwable, BatchWriteItemResponse] = {
val result = for {
client <- ZIO.service[DynamoDbAsyncClient]
response <- ZIO.fromCompletionStage(client.batchWriteItem(batchRequest))
} yield response
result.flatMap {
case response if response.unprocessedItems().isEmpty => ZIO.succeed(response)
case response =>
// very simple recursive retry of unprocessed requests
// in Production we would have exponential back offs and a timeout
batchWriteAndRetryUnprocessed(batchRequest =
BatchWriteItemRequest
.builder()
.requestItems(response.unprocessedItems())
.build()
)
}
}
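The comments above note that production code would use exponential back-offs and a timeout rather than a bare recursive retry. As an illustration only (plain Scala, not part of the talk or the zio-dynamodb library), the delay calculation behind a capped exponential back-off might look like:

```scala
import scala.concurrent.duration._

// Capped exponential back-off: base * 2^attempt, never exceeding `cap`.
// A production retry loop would sleep for backoff(n) before retry n,
// and abandon the batch once an overall timeout is exceeded.
def backoff(attempt: Int,
            base: FiniteDuration = 100.millis,
            cap: FiniteDuration = 5.seconds): FiniteDuration = {
  val delay = base * math.pow(2, attempt).toLong
  if (delay < cap) delay else cap
}
```

In a real ZIO application this policy would more idiomatically be expressed with `Schedule.exponential` composed with a timeout, rather than computed by hand.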
19. problem: using DDB Java SDK is painful
We then have to create a BatchGetItemRequest
def batchGetItemReq(studentPks: Seq[(String, String)]): BatchGetItemRequest = {
val keysAndAttributes = KeysAndAttributes.builder
.keys(
studentPks.map { // for all students we extract the partition and sort keys
case (email, subject) =>
Map(
"email" -> AttributeValue.builder().s(email).build(),
"subject" -> AttributeValue.builder().s(subject).build()
).asJava
}.asJava
)
.build()
BatchGetItemRequest.builder
.requestItems(Map("student" -> keysAndAttributes).asJava)
.build()
}
20. problem: using DDB Java SDK is painful
We then have to execute the batch and process the response
def batchGetItemAndRetryUnprocessed(batchRequest: BatchGetItemRequest)
: ZIO[Has[DynamoDbAsyncClient], Throwable, BatchGetItemResponse] = {
val result = for {
client <- ZIO.service[DynamoDbAsyncClient]
response <- ZIO.fromCompletionStage(client.batchGetItem(batchRequest))
} yield response
result.flatMap {
case response if response.unprocessedKeys.isEmpty => ZIO.succeed(response)
case response =>
// very simple recursive retry of failed requests
// in Production we would have exponential back offs and a timeout
batchGetItemAndRetryUnprocessed(batchRequest =
BatchGetItemRequest.builder
.requestItems(response.unprocessedKeys)
.build
)
}
}
21. problem: using DDB Java SDK is painful
putting it all together - full batching program
val program =
for {
client <- ZIO.service[DynamoDbAsyncClient]
avi = Student("avi@gmail.com", ...)
adam = Student("adam@gmail.com", ...)
students = List(avi, adam)
batchPutRequest = batchWriteItemRequest(students)
_ <- batchWriteAndRetryUnprocessed(batchPutRequest)
batchGetItemResponse <-
batchGetItemAndRetryUnprocessed(batchGetItemReq(
students.map(st => (st.email, st.subject))))
responseMap = batchGetItemResponse.responses.asScala
listOfErrorOrStudent =
responseMap.get("student")
.fold[List[Either[String, Student]]](List.empty) {
javaList =>
val listOfErrorOrStudent: List[Either[String, Student]] =
javaList.asScala.map(m =>
deserialise(m.asScala.toMap)).toList
listOfErrorOrStudent
} // traverse!
errorOrStudents = foreach(listOfErrorOrStudent)(identity)
} yield errorOrStudents
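The `foreach` used in the final step above (flagged with the "// traverse!" comment) is a helper the deck never defines. A minimal self-contained sketch of such an Either-traversing function (my assumption of its shape, not the talk's actual code) is:

```scala
// Traverse a list with a fallible function, keeping the first (leftmost) error.
// foreach(xs)(identity) therefore turns List[Either[E, A]] into Either[E, List[A]],
// which is exactly how it is used in the program above.
def foreach[E, A, B](xs: List[A])(f: A => Either[E, B]): Either[E, List[B]] =
  xs.foldRight(Right(Nil): Either[E, List[B]]) { (a, acc) =>
    for {
      b  <- f(a)
      bs <- acc
    } yield b :: bs
  }
```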
22. problem: using DDB Java SDK is painful
That's not all, we want to speed up our updates
however there is no batching support in DDB for updates
So we want to parallelise our updates
def updateItemRequest(student: Student): UpdateItemRequest = {
val values: Map[String, AttributeValue] =
Map(":paymentType" ->
AttributeValue.builder.s(student.payment.toString).build)
UpdateItemRequest.builder
.tableName("student")
.key(
Map(
"email" -> AttributeValue.builder.s(student.email).build,
"subject" -> AttributeValue.builder.s(student.subject).build
).asJava
)
.updateExpression("set payment = :paymentType")
.expressionAttributeValues(values.asJava)
.build
}
...
// we execute two queries in parallel using a ZIO zipPar
ZIO.fromCompletionStage(client.updateItem(updateItemRequest(updatedAvi))) zipPar
ZIO.fromCompletionStage(client.updateItem(updateItemRequest(updatedAdam)))
23. problem: using DDB Java SDK is painful
That's a lot of boilerplate!
...and this is just the tip of the iceberg - we have not covered
scanning and queries with key condition and filter expressions
pagination of scan and query results
complex projection expressions
error handling
23 / 42
24. solution: zio-dynamodb
Simple, type-safe, and efficient access to DynamoDB
We are now going to write the equivalent application using zio-dynamodb and
show you how much less boilerplate code there is.
24 / 42
25. solution: zio-dynamodb
val program = (for {
avi = Student("avi@gmail.com", "maths", ...)
adam = Student("adam@gmail.com", "english", ...)
_ <- (DynamoDBQuery.put("student", avi) zip
DynamoDBQuery.put("student", adam)).execute
listErrorOrStudent <- DynamoDBQuery
.forEach(List(avi, adam)) { st =>
DynamoDBQuery.get[Student](
"student",
PrimaryKey("email" -> st.email, "subject" -> st.subject)
)
}
.execute
} yield EitherUtil.collectAll(listErrorOrStudent))
.provideCustomLayer(DynamoDBExecutor.live)
25 / 42
26. solution: zio-dynamodb
offers a type-safe API with auto serialisation
auto batching and parallelisation of queries
testable using a fake in-memory DB
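As a toy illustration of why an in-memory fake makes tests cheap (my own sketch; the library's actual fake executor is richer), a table is essentially a mutable map keyed by primary key:

```scala
// put/get/delete become map operations, so tests need no network
// connection and no local DynamoDB instance.
final class FakeTable[K, V] {
  private var store = Map.empty[K, V]
  def put(key: K, item: V): Unit = store += (key -> item)
  def get(key: K): Option[V]     = store.get(key)
  def delete(key: K): Unit       = store -= key
  def size: Int                  = store.size
}
```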
26 / 42
27. zio-dynamodb API 101
DynamoDBQuery
its type and its combinators
auto batching and parallelisation
how to create and execute a query
Serialisation
low level API
built in type classes
Expressions
usages
mutation operations
query operations
Serialisation
high level type safe API
note there is a 1:1 correspondence to the AWS API to aid discoverability
27 / 42
29. Simple Put for multiple tables
val program = (for {
_ <- (DynamoDBQuery.put("student", avi) zip
DynamoDBQuery.put("student", adam) zip
DynamoDBQuery.put("course", french) zip
DynamoDBQuery.put("course", art)
).execute
} yield ())
.provideCustomLayer(DynamoDBExecutor.live)
puts for multiple tables are batched together, grouped by table in the request
29 / 42
30. Serialisation - Low level API
AttributeValue
sealed trait AttributeValue
final case class Binary(value: Iterable[Byte]) extends AttributeValue
final case class BinarySet(value: Iterable[Iterable[Byte]]) extends AttributeValue
final case class Bool(value: Boolean) extends AttributeValue
// ... etc etc
final case class String(value: ScalaString) extends AttributeValue
final case class Number(value: BigDecimal) extends AttributeValue
This corresponds 1:1 with the AWS API AttributeValue
30 / 42
31. Serialisation - Low level API
AttrMap
Top level container type for an Item with type aliases
final case class AttrMap(map: Map[String, AttributeValue])
type Item = AttrMap
type PrimaryKey = AttrMap
Internal type classes take care of AttributeValue conversions
val aviItem = Item("email" -> "avi@gmail.com", "age" -> 21)
is equivalent to
val aviItem = AttrMap(Map("email" -> AttributeValue.String("avi@gmail.com"),
"age" -> AttributeValue.Number(BigDecimal(21))))
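As a rough sketch of how such conversion type classes might work (all names here are illustrative, not zio-dynamodb's internals):

```scala
import scala.language.implicitConversions

sealed trait AV
final case class StringAV(value: String)     extends AV
final case class NumberAV(value: BigDecimal) extends AV

// The type class: how to turn a Scala value into an attribute value.
trait ToAV[A] { def toAV(a: A): AV }
object ToAV {
  implicit val stringToAV: ToAV[String] = (s: String) => StringAV(s)
  implicit val intToAV: ToAV[Int]       = (i: Int) => NumberAV(BigDecimal(i))
}

// Implicitly lift ("age", 21) to ("age", NumberAV(21)) at the call site.
implicit def liftPair[A](kv: (String, A))(implicit tc: ToAV[A]): (String, AV) =
  (kv._1, tc.toAV(kv._2))

def Item(pairs: (String, AV)*): Map[String, AV] = pairs.toMap
```

With this in scope, `Item("email" -> "avi@gmail.com", "age" -> 21)` compiles, and the conversions happen silently.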
31 / 42
32. Projection Expression Parser
$("cost") // simple
$("address.line1") // map
$("addresses[1]") // list
$("addresses[work].line1") // list with map
The $ projection expression parser function takes a string field expression and
turns it into the internal representation
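As a rough sketch of what such a parser could produce (the segment names here are my own, not the library's internal representation, and there is no error handling):

```scala
sealed trait Segment
final case class MapElem(name: String) extends Segment
final case class ListElem(index: Int)  extends Segment

// Split "addresses[1].line1" into
// MapElem("addresses"), ListElem(1), MapElem("line1").
def parsePath(expr: String): List[Segment] =
  expr.split('.').toList.flatMap { part =>
    part.indexOf('[') match {
      case -1 => List(MapElem(part))
      case i  =>
        val name = part.substring(0, i)
        val key  = part.substring(i + 1, part.length - 1)
        val elem = key.toIntOption.fold[Segment](MapElem(key))(ListElem(_))
        List(MapElem(name), elem)
    }
  }
```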
32 / 42
33. updates
updateItem("course", PrimaryKey("name" -> "art"))(
// UpdateExpression
$("cost").set(500.0) + $("code").set("123")
)
we can specify ProjectionExpressions
$("cost") ... $("code")
a ProjectionExpression has many update actions, which can be combined using +
$("count").set(1)
$("field1").set($("field2")) // replaces field1 with the value of field2
$("count").setIfNotExists($("two"), 42)
$("numberList").appendList(List("1"))
$("numberList").prependList(List("1"))
$("count").add(1) // updating Numbers and Sets
$("count").remove // Removes this field from an item
$("numberSet").deleteFromSet(1)
$("person.address").set(Item("line1" -> "1 high street"))
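As a toy model of how `+` could accumulate actions into one update expression (illustrative names only, covering just `set` actions):

```scala
// An update expression is a list of actions; `+` concatenates them, and
// render produces a DynamoDB-style "set a = :x, b = :y" string.
final case class SetAction(path: String, placeholder: String)
final case class UpdateExpr(actions: List[SetAction]) {
  def +(other: UpdateExpr): UpdateExpr = UpdateExpr(actions ++ other.actions)
  def render: String =
    "set " + actions.map(a => s"${a.path} = ${a.placeholder}").mkString(", ")
}
def set(path: String, placeholder: String): UpdateExpr =
  UpdateExpr(List(SetAction(path, placeholder)))
```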
33 / 42
34. delete
deleteItem("course", PrimaryKey("name" -> "art"))
Both Update and Delete can have ConditionExpressions that must be met for
the operation to succeed. For this we use the where method.
where $("code") > 1 && $("code") < 5
applied to the queries
deleteItem("course", PrimaryKey("name" -> "art")) where
$("code") > 1 && $("code") < 5
updateItem("course", PrimaryKey("name" -> "art")) {
// UpdateExpression
$("cost").set(500.0) + $("code").set("123")
} where $("code") > 1 && $("code") < 5
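A toy model of the condition tree that `where` might build from `$("code") > 1 && $("code") < 5` (names illustrative, not the library's):

```scala
sealed trait Cond { def &&(that: Cond): Cond = And(this, that) }
final case class Gt(path: String, n: Int)     extends Cond
final case class Lt(path: String, n: Int)     extends Cond
final case class And(left: Cond, right: Cond) extends Cond

// Render the tree to a DynamoDB-style condition expression string.
def render(c: Cond): String = c match {
  case Gt(p, n)  => s"$p > $n"
  case Lt(p, n)  => s"$p < $n"
  case And(l, r) => s"(${render(l)}) AND (${render(r)})"
}
```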
34 / 42
35. queryAll
val zio: ZIO[Has[DynamoDBExecutor], Exception, Stream[Exception, Person]] =
queryAll("person", $("name"), $("address[1].line1"))
.whereKey(
PartitionKey("partitionKey1") === "x" && SortKey("sortKey1") > 10
)
.execute
Note that we use a list of ProjectionExpressions again
$("name"), $("address[1].line1")
The whereKey method specifies a KeyConditionExpression
.whereKey(
PartitionKey("partitionKey1") === "x" && SortKey("sortKey1") > 10
)
... and we get back a ZStream that the library lazily paginates for us
val queryAll: ZIO[Has[DynamoDBExecutor], Exception, Stream[Exception, Person]]
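The lazy pagination can be pictured with a plain `Iterator` standing in for `ZStream` (a sketch; `fetchPage` is an assumed stand-in for one call to the AWS query API):

```scala
// fetchPage takes an optional continuation key and returns one page plus the
// next key (None = no more pages); pages are only fetched as they are consumed.
def paginate[A, K](fetchPage: Option[K] => (List[A], Option[K])): Iterator[A] = {
  def pages(start: Option[K]): Iterator[A] = {
    val (page, next) = fetchPage(start)
    page.iterator ++ (next match { // Iterator.++ is lazy, so recursion is deferred
      case some @ Some(_) => pages(some)
      case None           => Iterator.empty
    })
  }
  pages(None)
}
```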
35 / 42
36. querySome
val zio =
querySomeItem("person", limit = 5, $("name"), $("address[1].line1"))
.whereKey(PartitionKey("partitionKey1") === "x" && SortKey("sortKey1") > 10)
.execute
... and we get back a ZIO of (Chunk[Person], LastEvaluatedKey)
type LastEvaluatedKey = Option[PrimaryKey]
val q: ZIO[Has[DynamoDBExecutor], Exception, (Chunk[Item], LastEvaluatedKey)]
and we use the startKey method to feed back the LastEvaluatedKey to get the
next page of data
val zio =
querySomeItem("person", limit = 5, $("name"), $("address[1].line1"))
.whereKey(PartitionKey("partitionKey1") === "x" && SortKey("sortKey1") > 10)
.startKey(startKey)
.execute
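Driving that startKey feedback loop to completion is a simple tail-recursive loop (sketch; here `fetchPage` is an assumed stand-in for one `querySomeItem(...).startKey(...).execute` call):

```scala
import scala.annotation.tailrec

// Keep calling fetchPage, feeding each LastEvaluatedKey back as the next
// startKey, until the key comes back as None.
@tailrec
def fetchAll[A, K](start: Option[K], acc: List[A])(
    fetchPage: Option[K] => (List[A], Option[K])
): List[A] = {
  val (page, last) = fetchPage(start)
  last match {
    case Some(_) => fetchAll(last, acc ++ page)(fetchPage)
    case None    => acc ++ page
  }
}
```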
36 / 42
37. scanAll, scanSome
scanAll
These are similar to the previous queryAll/querySome queries
val zio: ZIO[Has[DynamoDBExecutor], Exception, Stream[Exception, Person]] =
scanAll("person", $("name"), $("address[1].line1")).execute
scanSome
val zio =
scanSome("person", limit = 5, $("name"), $("address[1].line1")).execute
...
val zio =
scanSome("person", limit = 5, $("name"), $("address[1].line1"))
.startKey(startKey)
.execute
37 / 42