Ruby plays to many programming paradigms. It's an object-oriented language that can be used in a functional or an imperative/procedural style. But Ruby is not often used as a logic programming language. In this talk I'll explore logic programming using Ruby: what is it, and is it a tool you want to add to your toolbox? We'll touch on several libraries, but we'll primarily look at an implementation of miniKanren (http://minikanren.org/) for Ruby.
3. Ruby is many things
Ruby is a dynamic, reflective, object-oriented, general-purpose programming language. [...] Ruby was influenced by Perl, Smalltalk, Eiffel, Ada, and Lisp. It supports multiple programming paradigms, including functional, object-oriented, and imperative. It also has a dynamic type system and automatic memory management.
~Wikipedia
6. This may not be very pragmatic
I'm going to talk about something Ruby isn't good at...
I'm going to show you some libraries that are half baked...
But hopefully, I'll encourage you to explore logic programming more....
7. real world Logic programming
ThreatGRID uses logic programming (core.logic in Clojure) to process observations of malware execution, looking for behavioral indicators of compromise.
10. (defobs process-modified-path
      [pid path]
      :doc "A pathname modified by a process, associated by the PID."
      :tags ["process" "file" "directory" "path"])
assert observations
Malware analysis generates analysis data, which in turn generates observation data that can be queried by core.logic. Some observations are exported to the database.
11. (defioc autoexec-bat-modified
:title "Process Modified AUTOEXEC.BAT"
:description "A process modified the AUTOEXEC.BAT file. ..."
:category ["persistence" "weakening"]
:tags ["process" "autorun" "removal"]
:severity 80
:confidence 70
:variables [Path Process_Name Process_ID]
:query [(process-modified-path Process_ID Path)
(matches "(?i).*AUTOEXEC.BAT" Path)
(process-name Process_ID Process_Name)])
Logic programs are queries
Security researchers write core.logic queries over the observations. Their declarative nature, combined with abstraction, makes queries small and high level.
12. (defioc sinkholed-domain-detected
:title "Domain Resolves to a Known DNS Sinkhole"
:description "..."
:category ["research" "defending"]
:tags ["network" "dns" "sinkhole" "botnet"]
:severity 100
:confidence 100
:variables [Answer_Data Answer_Type Query_Data
Query_Type Network_Stream]
:query
[(fresh [qid]
(dns-query Network_Stream qid (lvar)
Query_Type Query_Data)
(dns-answer Network_Stream qid (lvar)
Answer_Type Answer_Data (lvar)))
(sinkhole-servers Answer_Data)])
Logic programs are queries
We combine rules with internal knowledge bases. Declarative queries combined with abstraction make queries small and high level.
13. Indicators produce data
{:ioc autoexec-bat-modified
:hits 1
:data ({Process_ID 1200
Process_Name "smss.exe"
Path "AUTOEXEC.BAT"})
:confidence 70
:truncated false
:title "Process Modified AUTOEXEC.BAT"
:description "A process modified the AUTOEXEC.BAT ..."
:severity 80
:category ["persistence" "weakening"]
:tags ["process" "autorun" "removal"]}
Queries generate data that is used in reports.
14. Sample reports
Reports show observations and matched indicators and their data. We also correlate this data and mine the relationships between samples to create data feeds that customers can take action on.
15. minikanren
miniKanren is a relational programming environment, originally written in Scheme but ported to many other languages. It is described in the book The Reasoned Schemer. The language is powerful but deceptively simple, with only a few core language concepts.
http://minikanren.org/
16. ruby minikanren
One of two implementations, neither of which is currently being developed. (I would love to help someone fix this.) It doesn't have any of the advanced features you need for real-world use, but it can be used for most of the examples in The Reasoned Schemer.
https://github.com/spariev/mini_kanren
require 'mini_kanren'
include MiniKanren::Extras
result = MiniKanren.exec do
  # your logic program goes here
end
17. run
run([], succeed)
This is the simplest possible miniKanren program. There are no query variables, and the query always succeeds.
run says "give me all the results", and in Ruby miniKanren the result is an array. This query returns one result, which matches the empty query.
[[]]
19. FRESH
q = fresh
run(q, succeed)
fresh introduces logic variables. Logic variables are the things we want to find the values of. miniKanren programs often use q to represent the query.
_.0 represents an unbound logic variable in the results. We are saying: the query succeeded, and the result is anything.
["_.0"]
20. FRESH
a, b = fresh 2
run([a, b], eq(a, b))
This query has two logic variables, and we find one result in which both remain unbound. Because eq unified them with each other, they are constrained to be the same, which is why both reify as _.0.
[["_.0", "_.0"]]
21. unification
run(q, eq(q, :hello))
The most fundamental operation on a logic variable is to unify it. Unification is eq. There is only one value of q that satisfies the relation.
[:hello]
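To make unification less magical, here is a toy sketch in plain Ruby of what eq does under the hood: walking a variable through a substitution hash and extending that hash on success. This is an illustration only, not the mini_kanren gem's implementation, and the convention of spelling logic variables as symbols starting with "_" is my own.

```ruby
# Toy unification sketch (not the mini_kanren gem's internals).
# Logic variables are represented here as symbols starting with "_".
def lvar?(t)
  t.is_a?(Symbol) && t.to_s.start_with?("_")
end

# Follow a variable through the substitution until it reaches
# a value or an unbound variable.
def walk(t, subst)
  t = subst[t] while lvar?(t) && subst.key?(t)
  t
end

# Returns an extended substitution on success, nil on failure.
def unify(a, b, subst)
  a = walk(a, subst)
  b = walk(b, subst)
  return subst if a == b
  return subst.merge(a => b) if lvar?(a)
  return subst.merge(b => a) if lvar?(b)
  if a.is_a?(Array) && b.is_a?(Array) && a.size == b.size
    a.zip(b).reduce(subst) { |s, (x, y)| s && unify(x, y, s) }
  end
end
```

With this sketch, unify(:_q, :hello, {}) succeeds and records the binding, while a later attempt to unify :_q with :world against that substitution fails, which is exactly the behavior slides 21 and 24 demonstrate.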
22. unification
run(q, eq(q, [:hello, :world]))
Logic variables can also be unified over non-primitive values. There is still only one value of q that satisfies the relation.
[[:hello, :world]]
23. all
run(q, all(eq(q, :helloworld),
           eq(:helloworld, q)))
all expresses that all conditions must be true. A logic variable can unify with the same value multiple times, but the overall goal only succeeds once, so there is only one value of q that satisfies the relation.
[:helloworld]
24. all
run(q, all(eq(q, :hello),
           eq(q, :world)))
A logic variable cannot unify with two different values at the same time. There are no values of q that satisfy the relation.
[]
25. conde
run(q,
conde(eq(q, :hello),
eq(q, :world)))
You can introduce alternative
values with conde. Every conde
clause that succeeds produces
possible alternative values.
There are 2 values of q that
satisfy the relation. [:hello, :world]
26. Ordering clauses
run(q,
fresh {|a, b|
all(eq([a, :and, b], q),
eq(a, :something),
eq(:somethingelse, b))})
fresh can be used inside of a
query.
Order does not matter for unification, nor does the order of clauses. [[:something, :and, :somethingelse]]
27. rock paper scissors
def beats(move1, move2)
conde(all(eq(move1, :rock),
eq(move2, :scissors)),
all(eq(move1, :scissors),
eq(move2, :paper)),
all(eq(move1, :paper),
eq(move2, :rock)))
end
beats is a custom relation between two terms. It succeeds when the first player's move beats the second player's move.
More advanced implementations
might have a prolog-style fact
database, but we'll do this the
hard way.
28. rock paper scissors
run(q, beats(:rock, :paper))
beats fails because :rock does
not beat :paper. No value of q
makes this succeed.
[]
29. rock paper scissors
run(q, beats(:paper, :rock))
beats succeeds because :paper
beats :rock. q remains fresh
because no questions were
asked of it.
["_.0"]
31. rock paper scissors
winner, loser = fresh 2
run([winner, loser],
beats(winner, loser))
This query asks for all the pairs where winner beats loser.
[[:rock, :scissors],
[:scissors, :paper],
[:paper, :rock]]
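As a sanity check outside the logic engine, the same three pairs fall out of a plain-Ruby enumeration of the beats facts (a brute-force sketch, not part of the talk's code):

```ruby
# The three rock-paper-scissors facts as a plain hash: winner => loser.
BEATS = { rock: :scissors, scissors: :paper, paper: :rock }

moves = BEATS.keys
# Enumerate every (winner, loser) pair for which the relation holds.
pairs = moves.product(moves).select { |w, l| BEATS[w] == l }
# => [[:rock, :scissors], [:scissors, :paper], [:paper, :rock]]
```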
33. SPOCK CHAINS
run(q,
fresh{|m1, m2|
all(eq(q, [:spock, m1, m2, :spock]),
rpsls_beats(:spock, m1),
rpsls_beats(m1, m2),
rpsls_beats(m2, :spock))})
We can ask questions like: give
me a 4-chain of dominated
moves starting and ending
with :spock. There are three
solutions.
[[:spock, :rock, :lizard, :spock],
[:spock, :scissors, :paper, :spock],
[:spock, :scissors, :lizard, :spock]]
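The three solutions can be cross-checked with a brute-force sketch in plain Ruby. The RPSLS_BEATS hash below encodes the standard rock-paper-scissors-lizard-spock rules; it stands in for the talk's rpsls_beats relation:

```ruby
# winner => the moves it beats, per the standard RPSLS rules.
RPSLS_BEATS = {
  rock:     [:scissors, :lizard],
  paper:    [:rock, :spock],
  scissors: [:paper, :lizard],
  lizard:   [:spock, :paper],
  spock:    [:scissors, :rock]
}

# All 4-chains :spock -> m1 -> m2 -> :spock where each move beats the next.
chains = RPSLS_BEATS[:spock].flat_map do |m1|
  RPSLS_BEATS[m1]
    .select { |m2| RPSLS_BEATS[m2].include?(:spock) }
    .map { |m2| [:spock, m1, m2, :spock] }
end
chains.size  # => 3
```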
34. spock chains
def chain(moves)
fresh {|first, rest|
all(caro(moves, first),
cdro(moves, rest),
rpsls(first),
conde(nullo(rest),
fresh {|second|
all(caro(rest, second),
rpsls_beats(first, second),
defer(method(:chain), rest))}))}
end
A winning chain is a single rpsls
move either by itself or followed
by a winning chain whose first
move is beaten by the original
move.
This example uses LISP-style list conventions. caro (the first element) and cdro (the rest of the items) are relations on those lists.
35. how many chains?
run(q,
all(eq(q, build_list([:spock] + fresh(10) + [:spock])),
chain(q))).length
How many winning chains are
there from :spock to :spock with
10 steps?
385
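The 385 can be confirmed without the logic engine: it is the number of 11-step walks from :spock back to :spock in the RPSLS beats graph, countable with a small dynamic program (a plain-Ruby sketch, reusing the standard RPSLS rules):

```ruby
# winner => the moves it beats, per the standard RPSLS rules.
RPSLS_BEATS = {
  rock:     [:scissors, :lizard],
  paper:    [:rock, :spock],
  scissors: [:paper, :lizard],
  lizard:   [:spock, :paper],
  spock:    [:scissors, :rock]
}

# counts[m] = number of valid chains of the current length ending at m.
counts = { spock: 1 }
# 12 moves total (:spock, 10 unknowns, :spock) means 11 "beats" steps.
11.times do
  nxt = Hash.new(0)
  counts.each do |move, n|
    RPSLS_BEATS[move].each { |beaten| nxt[beaten] += n }
  end
  counts = nxt
end
counts[:spock]  # => 385
```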
36. def edge(x,y)
edgefact = -> (x1, y1) {
all(eq(x, x1), eq(y, y1))
}
conde(edgefact[:g, :d],
edgefact[:g, :h],
edgefact[:e, :d],
edgefact[:h, :f],
edgefact[:e, :f],
edgefact[:a, :e],
edgefact[:a, :b],
edgefact[:b, :f],
edgefact[:b, :c],
edgefact[:f, :c])
end
Path finding
(graph diagram: nodes A–H connected by the edges above)
37. def path(x, y)
z = fresh
conde(eq(x, y),
all(edge(x, z),
defer(method(:path), z, y)))
end
def ispath(nodes)
fresh {|first, second, rest|
all(caro(nodes, first),
cdro(nodes, rest),
conde(nullo(rest),
all(edge(first, second),
caro(rest, second),
defer(method(:ispath), rest))))}
end
Path finding
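The same paths the path relation finds can be enumerated with a plain recursive search over the edge facts (a sketch; this particular graph is acyclic, so no visited set is needed):

```ruby
# The edge facts from slide 36 as an adjacency hash.
EDGES = {
  g: [:d, :h], e: [:d, :f], h: [:f],
  a: [:e, :b], b: [:f, :c], f: [:c]
}

# All paths from `from` to `to`, depth-first. The graph is a DAG,
# so the recursion terminates without cycle checking.
def paths(from, to, edges, trail = [from])
  return [trail] if from == to
  (edges[from] || []).flat_map { |nxt| paths(nxt, to, edges, trail + [nxt]) }
end

paths(:a, :c, EDGES)
# => [[:a, :e, :f, :c], [:a, :b, :f, :c], [:a, :b, :c]]
```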
42. Map coloring
core.logic
http://pragprog.com/book/btlang/seven-languages-in-seven-weeks
(run 1 [q]
(fresh [tn ms al ga fl]
(everyg #(membero % [:red :blue :green])
[tn ms al ga fl])
(!= ms tn) (!= ms al) (!= al tn)
(!= al ga) (!= al fl) (!= ga fl) (!= ga tn)
(== q {:tennesse tn
:mississipi ms
:alabama al
:georgia ga
:florida fl})))
({:tennesse :blue,
:mississipi :red,
:alabama :green,
:georgia :red,
:florida :blue})
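The same answer falls out of a brute-force check in plain Ruby over all 3^5 colorings (a sketch of the constraints, not of core.logic's search):

```ruby
COLORS = [:red, :blue, :green]

# Enumerate every assignment of a color to [tn, ms, al, ga, fl] and
# keep those satisfying the adjacency (disequality) constraints.
solutions = COLORS.repeated_permutation(5).select do |tn, ms, al, ga, fl|
  ms != tn && ms != al && al != tn &&
    al != ga && al != fl && ga != fl && ga != tn
end
solutions.size  # => 6 valid colorings ((run 1 ...) returns just the first)
```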
43. FINITE DOMAINS
core.logic
fd/interval declares a finite integer interval and fd/in constrains logic variables to a domain.
(defn two-plus-two-is-four [q]
(fresh [t w o f u r TWO FOUR]
(fd/in t w o f u r (fd/interval 0 9))
(fd/distinct [t w o f u r])
(fd/in TWO (fd/interval 100 999))
(fd/in FOUR (fd/interval 1000 9999))
...
(== q [TWO TWO FOUR])))
T W O
+ T W O
-------
F O U R
http://www.amazon.com/Crypt-arithmetic-Puzzles-in-PROLOG-ebook/dp/B006X9LY8O
44. FINITE DOMAINS
core.logic
fd/eq translates simple math to
constraints over finite domain
logic variables.
(fd/eq (= TWO
(+ (* 100 t)
(* 10 w)
o)))
(fd/eq (= FOUR
(+ (* 1000 f)
(* 100 o)
(* 10 u)
r)))
(fd/eq (= (+ TWO TWO) FOUR))
T W O
+ T W O
-------
F O U R
45. FINITE DOMAINS
core.logic
There are 7 unique solutions to
the problem.
(run* [q]
(two-plus-two-is-four q))
T W O
+ T W O
-------
F O U R
([734 734 1468]
[765 765 1530]
[836 836 1672]
[846 846 1692]
[867 867 1734]
[928 928 1856]
[938 938 1876])
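The seven solutions are cheap to confirm by brute force in plain Ruby: try every assignment of distinct digits to t, w, o, f, u, r under the same bounds (a sketch, not core.logic's constraint propagation):

```ruby
# Assign distinct digits to t, w, o, f, u, r; keep assignments where
# TWO is a 3-digit number, FOUR a 4-digit number, and TWO + TWO == FOUR.
solutions = (0..9).to_a.permutation(6).filter_map do |t, w, o, f, u, r|
  two  = 100 * t + 10 * w + o
  four = 1000 * f + 100 * o + 10 * u + r
  [two, two, four] if two >= 100 && four >= 1000 && two + two == four
end
solutions.size  # => 7
```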
46. USEless logic puzzle
core.logic
‣ Petey pig did not hand out the popcorn
‣ Pippin pig does not live in the wood house
‣ The pig that lives in the straw house handed out popcorn
‣ Petunia pig handed out apples
‣ The pig who handed out chocolate does not live in the brick house
Three little pigs, who each
lived in a different type of
house, handed out treats for
Halloween. Use the clues to
figure out which pig lived in
each house, and what type of
treat each pig handed out.
http://holidays.hobbyloco.com/halloween/logic1.html
47. USEless logic puzzle
core.logic
(defn pigso [q]
(fresh [h1 h2 h3 t1 t2 t3]
(== q [[:petey h1 t1]
[:pippin h2 t2]
[:petunia h3 t3]])
(permuteo [t1 t2 t3]
[:chocolate :popcorn :apple])
(permuteo [h1 h2 h3]
[:wood :straw :brick])
... ))
pigso starts by defining the
solution space.
permuteo succeeds when the first list is a permutation of the second.
48. USEless logic puzzle
core.logic
(fresh [notpopcorn _]
(!= notpopcorn :popcorn)
(membero [:petey _ notpopcorn] q))
(fresh [notwood _]
(!= notwood :wood)
(membero [:pippin notwood _] q))
(fresh [_]
(membero [_ :straw :popcorn] q))
(fresh [_]
(membero [:petunia _ :apple] q))
(fresh [notbrick _]
(!= notbrick :brick)
(membero [_ notbrick :chocolate] q))
The clues translate cleanly to
goals constraining the solution
space.
membero succeeds when its first argument is a member of its second argument (a list).
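Those goals can be cross-checked by brute force in plain Ruby over the 36 possible house/treat assignments (a sketch of the clues, not of core.logic):

```ruby
PIGS = [:petey, :pippin, :petunia]

solutions = [:wood, :straw, :brick].permutation.flat_map do |hs|
  [:chocolate, :popcorn, :apple].permutation.filter_map do |ts|
    house = PIGS.zip(hs).to_h   # pig => house
    treat = PIGS.zip(ts).to_h   # pig => treat
    straw_pig = house.key(:straw)
    choc_pig  = treat.key(:chocolate)
    next if treat[:petey] == :popcorn       # clue 1
    next if house[:pippin] == :wood         # clue 2
    next if treat[straw_pig] != :popcorn    # clue 3
    next if treat[:petunia] != :apple       # clue 4
    next if house[choc_pig] == :brick       # clue 5
    PIGS.map { |p| [p, house[p], treat[p]] }
  end
end
solutions
# => [[[:petey, :wood, :chocolate],
#      [:pippin, :straw, :popcorn],
#      [:petunia, :brick, :apple]]]
```

The clues pin down a unique assignment, which is why the logic program also returns a single answer.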