This document describes how to create a custom DataMapper adapter for MongoDB. It discusses initializing the adapter, connecting to MongoDB, and implementing CRUD methods like create, read, update and delete. Methods are provided to parse DataMapper query conditions to MongoDB query formats, handle associations, and apply field and collection naming conventions. The adapter subclasses DataMapper::Adapters::AbstractAdapter and implements adapter-specific behavior while retaining compatibility with DataMapper APIs.
8. Examples
class Comment
  include DataMapper::Resource
  property :id, Serial
  property :body, String
  belongs_to :post
end

class Post
  include DataMapper::Resource
  property :id, Serial
  property :title, String
  has n, :comments
end
10. Terms
term        meaning
Resource    a model
field       a property on a model
Repository  DataMapper's term for the storage engine
12. Preamble
Just do it:

require 'dm-core'

module DataMapper
  module Adapters
    class TestAdapter < AbstractAdapter
      ...
    end

    const_added(:TestAdapter)
  end
end
18. Connect to your adapter

def initialize(name, options)
  super
  @conn = Mongo::Connection.new(
    options[:host], options[:port])
  @adapter = @conn[options[:database]]
end
19.

def initialize(name, options)
  super
  @conn = Mongo::Connection.new(
    options[:host], options[:port])
  @adapter = @conn[options[:database]]
  # field: a subclass of DataMapper::Property; the block must return a String
  @field_naming_convention = Proc.new do |field|
    field.model.storage_name + '_' + field.name.to_s
  end
end
20.

def initialize(name, options)
  super
  @conn = Mongo::Connection.new(
    options[:host], options[:port])
  @adapter = @conn[options[:database]]
  @field_naming_convention = Proc.new do |field|
    # field.model.storage_name is a String (class.to_s)
    field.model.storage_name + '_' + field.name.to_s
  end
  # resource: a String; the block must return a String
  @resource_naming_convention = Proc.new do |resource|
    resource.downcase
  end
end
21.

def initialize(name, options)
  super
  @conn = Mongo::Connection.new(
    options[:host], options[:port])
  @adapter = @conn[options[:database]]
  @field_naming_convention = Proc.new do |field|
    field.model.storage_name + '_' + field.name.to_s
  end
  @resource_naming_convention = Proc.new do |resource|
    resource.downcase
  end
end
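The two naming conventions can be exercised without a database. Below is a minimal plain-Ruby sketch; `FakeModel` and `FakeProperty` are stand-ins (not DataMapper classes) that expose just the methods the procs call:

```ruby
# Stand-ins exposing only what the convention procs touch.
FakeModel    = Struct.new(:storage_name)
FakeProperty = Struct.new(:model, :name)

field_naming_convention = Proc.new do |field|
  field.model.storage_name + '_' + field.name.to_s
end

resource_naming_convention = Proc.new do |resource|
  resource.downcase
end

prop = FakeProperty.new(FakeModel.new('post'), :title)
field_naming_convention.call(prop)       # => "post_title"
resource_naming_convention.call('Post')  # => "post"
```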
25.

def create(resources)
  resources.collect do |resource|
    initialize_serial(resource,
      @adapter[resource.class.storage_name].find.count)
    fields = attributes_as_fields(
      resource.attributes(:property))
    @adapter[resource.class.storage_name].insert(fields)
  end.size
end
27. attributes_as_fields

• Accepts: Hash
  • key: a subclass of DataMapper::Property
  • value: non-marshaled data
  • example: {#<DataMapper::Property::String(title)> => "hasdf"}

• Returns: Hash
  • key: the @field_naming_convention result
  • value: marshaled data
  • only values that are set
  • example: {"post_title" => "hasdf"}

def create(resources)
  resources.collect do |resource|
    initialize_serial(resource,
      @adapter[resource.class.storage_name].find.count)
    fields = attributes_as_fields(
      resource.attributes(:property))
    @adapter[resource.class.storage_name].insert(fields)
  end.size
end
29.

def create(resources)
  resources.collect do |resource|
    initialize_serial(resource,
      @adapter[resource.class.storage_name].find.count)
    fields = attributes_as_fields(
      resource.attributes(:property))
    @adapter[resource.class.storage_name].insert(fields)
  end.size
end

Unless an exception is raised, the resource will be considered saved.
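The conversion performed by attributes_as_fields can be sketched in plain Ruby. This is a hypothetical re-implementation for illustration only, reusing the field-naming convention from the initialize example; the real method is inherited from AbstractAdapter and also marshals each value:

```ruby
FakeModel    = Struct.new(:storage_name)
FakeProperty = Struct.new(:model, :name)

# Hypothetical sketch: map {Property => value} to {"storage_field" => value}.
# The real AbstractAdapter#attributes_as_fields also marshals each value.
def attributes_as_fields(attributes)
  attributes.each_with_object({}) do |(property, value), fields|
    name = property.model.storage_name + '_' + property.name.to_s
    fields[name] = value
  end
end

title = FakeProperty.new(FakeModel.new('post'), :title)
attributes_as_fields(title => 'hasdf')  # => {"post_title" => "hasdf"}
```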
40.

def parse_query_conditions(query)
  mongo_conditions = {}
  query.conditions.operands.each do |condition|
    case condition.class.to_s
    when 'DataMapper::Query::Conditions::GreaterThanComparison'
      mongo_conditions[condition.subject.field] =
        { "$gt" => condition.value }
    when 'DataMapper::Query::Conditions::LessThanComparison'
      mongo_conditions[condition.subject.field] =
        { "$lt" => condition.value }
    else
      mongo_conditions[condition.subject.field] =
        condition.value
    end
  end
  mongo_conditions
end
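The parser can be driven without a real DataMapper query by defining stub condition classes under the names the case statement matches on. Everything below except the method body is test scaffolding, not DataMapper API:

```ruby
# Stub condition classes whose fully-qualified names match the strings
# in the case statement. Stand-ins, not the real DataMapper classes.
module DataMapper
  module Query
    module Conditions
      GreaterThanComparison = Struct.new(:subject, :value)
      LessThanComparison    = Struct.new(:subject, :value)
      EqualToComparison     = Struct.new(:subject, :value)
    end
  end
end

FakeSubject    = Struct.new(:field)      # stands in for a Property
FakeConditions = Struct.new(:operands)
FakeQuery      = Struct.new(:conditions)

def parse_query_conditions(query)
  mongo_conditions = {}
  query.conditions.operands.each do |condition|
    case condition.class.to_s
    when 'DataMapper::Query::Conditions::GreaterThanComparison'
      mongo_conditions[condition.subject.field] = { "$gt" => condition.value }
    when 'DataMapper::Query::Conditions::LessThanComparison'
      mongo_conditions[condition.subject.field] = { "$lt" => condition.value }
    else
      mongo_conditions[condition.subject.field] = condition.value
    end
  end
  mongo_conditions
end

query = FakeQuery.new(FakeConditions.new([
  DataMapper::Query::Conditions::GreaterThanComparison.new(
    FakeSubject.new('comment_votes'), 10),
  DataMapper::Query::Conditions::EqualToComparison.new(
    FakeSubject.new('post_title'), 'hasdf')
]))

parse_query_conditions(query)
# => {"comment_votes"=>{"$gt"=>10}, "post_title"=>"hasdf"}
```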
45. conditions.operands.each do |condition|
      ...
      case condition.class.to_s
      when '...InclusionComparison'
        if condition.subject.instance_of?(
             DataMapper::Associations::OneToMany::Relationship)
          pk = condition.subject.parent_key.first.field
          ck = condition.subject.child_key.first.name
          mongo_conditions[pk] = {"$in" =>
            condition.value.collect {|r| r.send(ck)}}
        else
          ...

* parent_key / child_key - Array of properties
  * property - subclass of DataMapper::Property
  * ex. Post#id, Comment#post_id
* condition.value - Array of resources
  * ex. [#<Comment..>, #<Comment..>, ...]
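The association case above boils down to: collect the child-key values off every resource in `condition.value` and match the parent key with `$in`. A minimal runnable sketch, with a hypothetical `Comment`/`post_id` pair standing in for a real DataMapper one-to-many relationship:

```ruby
# Stand-in resource: a real adapter would receive DataMapper resources.
Comment = Struct.new(:post_id)

# Collapse "parent is included in these children's foreign keys"
# into a Mongo-style "$in" clause.
def inclusion_to_mongo(parent_field, child_key_name, resources)
  { parent_field => { "$in" => resources.collect { |r| r.send(child_key_name) } } }
end

comments = [Comment.new(1), Comment.new(2), Comment.new(2)]
p inclusion_to_mongo("_id", :post_id, comments)
# => {"_id"=>{"$in"=>[1, 2, 2]}}
```

The `r.send(ck)` on the slide is exactly this `send(child_key_name)`: the child key's *name* is used to read the foreign-key value off each resource, while the parent key's *field* names the column being matched.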
54. If your backend does not
have a query language
An Array of Hashes
key: field name
value: unmarshaled value
[{field_name => value}]
def read(query)
  query.filter_records(records)
end
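In that setup, filtering happens in Ruby against the Array of Hashes. DataMapper's `Query#filter_records` does this for you; the sketch below is a hypothetical, simplified equality-only stand-in to show the shape of the work:

```ruby
# Simplified stand-in for Query#filter_records: keep the records whose
# fields equal every condition value (equality only, for illustration).
def filter_records(records, conditions)
  records.select { |r| conditions.all? { |field, value| r[field] == value } }
end

records = [
  { "title" => "one", "draft" => false },
  { "title" => "two", "draft" => true },
]
p filter_records(records, "draft" => false)
# => [{"title"=>"one", "draft"=>false}]
```

This is the easy path for backends with no query language: load (or cache) the records, then let DataMapper's own query machinery match, sort, and limit them in memory.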
57. def delete(resources)  # resources: DataMapper::Collection
      conditions = parse_query_conditions(resources.query)
      record_count = read(resources.query).count
      @adapter[resources.storage_name].remove(conditions)
      record_count  # Number of resources deleted (int)
    end
58. def delete(resources)
      conditions = parse_query_conditions(resources.query)
      record_count = read(resources.query).count
      @adapter[resources.storage_name].remove(conditions)
      record_count
    end
Unless an Exception is raised the
resources will be considered deleted
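The delete contract, in miniature: count what matches first (the driver's `remove` doesn't tell you), remove it, return the count, and raise only on failure. A runnable sketch with an in-memory Array standing in for the Mongo collection:

```ruby
# In-memory stand-in for the Mongo collection on the slide.
# Count the matching records, remove them, return the count;
# a non-raising return is treated as success.
def delete_matching(collection, conditions)
  matching = collection.select { |r| conditions.all? { |k, v| r[k] == v } }
  collection.reject! { |r| conditions.all? { |k, v| r[k] == v } }
  matching.count
end

posts = [{ "id" => 1, "draft" => true }, { "id" => 2, "draft" => false }]
p delete_matching(posts, "draft" => true)  # => 1
p posts  # => [{"id"=>2, "draft"=>false}]
```

Counting before removing is why the slide reads the collection first: once `remove` has run, the matching records are gone and the count is unrecoverable.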
61. def update(changes, resources)
      # changes: unmarshaled hash of changes
      #   ex. {<DataMapper::Property::String(title)> => "hasdf"}
      # resources: DataMapper::Collection
      conditions = parse_query_conditions(resources.query)
      @adapter[resources.storage_name].update(conditions,
        {"$set" => attributes_as_fields(changes)},
        {:multi => true})
      read(resources.query).count  # Number of resources updated (int)
    end
62. def update(changes, resources)
      conditions = parse_query_conditions(resources.query)
      @adapter[resources.storage_name].update(conditions,
        {"$set" => attributes_as_fields(changes)},
        {:multi => true})
      read(resources.query).count
    end
Unless an Exception is raised the
resource will be considered saved
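The update contract mirrors delete: apply the changes to every matching record and return the number touched. A runnable sketch with an in-memory stand-in, where a plain `merge!` plays the role of Mongo's `"$set" => {...}` document (and `attributes_as_fields`, which maps DataMapper properties to field names, is assumed to have run already):

```ruby
# In-memory sketch of the update contract: apply the changes hash
# (already converted to field names) to every matching record and
# return the number updated.
def update_matching(collection, conditions, changes)
  matching = collection.select { |r| conditions.all? { |k, v| r[k] == v } }
  matching.each { |r| r.merge!(changes) }  # the "$set" => {...} part
  matching.count
end

posts = [{ "id" => 1, "title" => "old" }, { "id" => 2, "title" => "old" }]
p update_matching(posts, { "id" => 1 }, { "title" => "new" })  # => 1
```

The `:multi => true` option on the slide matters for the same reason `matching.each` does here: without it, the Mongo driver updates only the first matching document, and the returned count would lie.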
Speaker notes:

* Work mostly with backend data transformations from a transactional database to a reporting datastore
* When you have a child the blogging slows down
* I talk quickly when I am nervous; let me know if I'm too fast
* About the little Ruby code in the middle
* Will not focus on either the DB or DM
* Last commit was almost a month ago; it is true; ActiveRecord has seen lots of improvements
* Associations across multiple repositories
* 王建興 talked about picking different things from different languages... why not learn different ORMs too?
* xdite will think it is stupid
* With the addition of the initialization, mostly just CRUD operations
* Raw access to the connection through the adapter; it persists for the entire instance, so if it times out you need to reconnect
* Association - subclass of DataMapper::Associations::*::Relationship
* Field - subclass of DataMapper::Property
* Only the top-level Operation; you are on your own for a recursive solution
* .field - the name in the repository
* .name - the name on the resource
* the value IS UNMARSHALED
* info - used for logging what you are sending to the repository
* The first part of the presentation is the 80 percent that makes the most difference; the last (more repository-specific) part is the 20 percent
* Mongo-specific example, but it demonstrates the process of modifying a model
* Be kind: at the info level, log the query parameters that are being passed to the repository
* Mongo supports Array and Hash natively