The document discusses several "outrageous ideas" for improving graph databases, such as using a column-oriented storage model inspired by relational databases, employing worst-case optimal join algorithms, adopting a semantic query optimizer informed by mathematical concepts, and leveraging recursion to enable queries over paths in graph structures. The presentation argues that current graph database implementations are flawed and lessons from relational databases have not been adequately applied.
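The worst-case optimal join idea mentioned above can be made concrete. Below is a minimal, illustrative Python sketch of a "generic join" that enumerates triangles attribute-at-a-time, intersecting candidate sets instead of first materializing a pairwise join; all function and variable names are mine, not from the talk:

```python
# A minimal sketch of a worst-case optimal ("generic") join that
# enumerates triangles a-b-c attribute-at-a-time, rather than joining
# two edge lists first. Names here are illustrative only.

def triangles(edges):
    """Enumerate triangles (a, b, c) with edges a-b, b-c, a-c, a < b < c."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    out = []
    for a in sorted(adj):                 # bind attribute A
        for b in sorted(adj[a]):          # bind B from A's neighbors
            if b <= a:
                continue
            # bind C by intersecting the candidate sets of BOTH remaining
            # edge constraints -- the hallmark of a generic join
            for c in sorted(adj[a] & adj[b]):
                if c > b:
                    out.append((a, b, c))
    return out

edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(triangles(edges))  # [(1, 2, 3)]
```

The intersection `adj[a] & adj[b]` is what bounds the running time by the worst-case output size, instead of by the size of an intermediate two-way join.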
Deep learning has come a long way over the past few years. With advances in cloud computing, frameworks, and open-source tooling, working with images has gotten simpler over time. Delta Lake has been amazing at creating a tabular, transactional structured layer on object storage, but what about images? Would you like to know how to gain a 45x improvement in your image processing pipeline? Join Jason and Rohit to find out how!
Analytics with Apache Superset and ClickHouse - DoK Talks #151 (DoKC)
Link: https://youtu.be/Y-1uFVKDfgY
https://go.dok.community/slack
https://dok.community/
ABSTRACT OF THE TALK
This talk covers performing analytical tasks with Apache Superset, using ClickHouse as the data backend. ClickHouse is a very fast database for analytical tasks, and Apache Superset is an Apache Software Foundation project for data visualization and exploration. Performing analytical tasks with this combination is very fast, since both systems are designed to be scalable and capable of handling data at petabyte scale.
Neo4j is a powerful and expressive tool for storing, querying and manipulating data. However, modeling data as graphs is quite different from modeling data in a relational database. In this talk, Michael Hunger will cover modeling business domains using graphs and show how they can be persisted and queried in Neo4j. We'll contrast this approach with the relational model, and discuss the impact on complexity, flexibility and performance.
Nowadays the traditional layered monolithic architecture in the Java world is not as popular as it was 5-10 years ago. I remember how we wrote tons of code for each layer, repeating almost the same parts for every application. Add unit and integration testing to understand how much time and effort was spent on repeatable work. All the cool ideas around DDD (domain-driven design) and Hexagonal Architecture were just nice theory, because reality didn’t allow us to implement them easily. Even Dependency Injection with the Spring framework was completely focused on the traditional layered approach, not to mention the JavaEE platform.
Today we have the Spring Boot ecosystem covering most of our needs for integration with almost all possible technologies, and the microservices architectural trend, enabling a completely new approach to building Java applications around the domain model. It is so natural to build domain-oriented Java services and connect them with the external world using ports and adapters that Hexagonal Architecture is almost enabled by default. You just need to switch your way of thinking…
Data Lineage with Apache Airflow using Marquez (Willy Lulciuc)
The term data quality is used to describe the dependability, reliability, and usability of datasets. Data scientists and business analysts often determine the quality of a dataset by its trustworthiness and completeness. But what information might be needed to differentiate between useful vs noisy data? How quickly can data quality issues be identified and explored? More importantly, how can metadata enable data scientists to make better sense of the high volume of data within their organization from a variety of data sources?
With Airflow now ubiquitous for DAG orchestration, organizations increasingly depend on Airflow to manage complex inter-DAG dependencies and provide up-to-date runtime visibility into DAG execution. At WeWork, Airflow has quickly become an important component of our Data Platform, powering billing, space inventory, and more. But what effects (if any) would upstream DAGs have on downstream DAGs if dataset consumption was delayed? What alerting rules should be in place to notify downstream DAGs of possible upstream processing issues or failures?
At WeWork, we feel it’s critical that DAG metadata is collected, maintained, and shared across the organization. This investment in metadata enables:
● Data lineage
● Data governance
● Data discovery
In this talk, we introduce Marquez: an open source metadata service for the collection, aggregation, and visualization of a data ecosystem’s metadata. We will demonstrate how metadata management with Marquez helps maintain inter-DAG dependencies, catalog historical runs of DAGs, and minimize data quality issues.
Mark and Wes will talk about Cypher optimization techniques based on real queries as well as the theoretical underlying processes. They'll start from the basics of "what not to do", and how to take advantage of indexes, and continue to the subtle ways of ordering MATCH/WHERE/WITH clauses for optimal performance as of the 2.0.0 release.
Optimizing Your Supply Chain with the Neo4j Graph (Neo4j)
With the world’s supply chain system in crisis, it’s clear that better solutions are needed. Digital twins built on knowledge graph technology allow you to achieve an end-to-end view of the process, supporting real-time monitoring of critical assets.
Everyday I'm Shuffling - Tips for Writing Better Spark Programs, Strata San J... (Databricks)
Watch video at: http://youtu.be/Wg2boMqLjCg
Want to learn how to write faster and more efficient programs for Apache Spark? Two Spark experts from Databricks, Vida Ha and Holden Karau, provide some performance tuning and testing tips for your Spark applications.
Incremental Processing on Large Analytical Datasets with Prasanna Rajaperumal... (Databricks)
Prasanna Rajaperumal and Vinoth Chandar will explore a specific problem of ingesting petabytes of data in Uber and why they ended up building an analytical datastore from scratch using Spark. Prasanna will discuss design choices and implementation approaches in building Hoodie to provide near-real-time data ingestion and querying using Spark and HDFS.
Join operations are often the biggest source of performance problems and even full-blown exceptions in Apache Spark. After this talk, you will understand the two most basic methods Spark employs for joining DataFrames, down to the level of detail of how Spark distributes the data within the cluster. You’ll also find out how to work around common errors and even handle the trickiest corner cases we’ve encountered! After this talk, you should be able to write performant joins in Spark SQL that scale and are zippy fast!
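As a plain-Python illustration of the two join strategies involved (this is a sketch, not Spark code; all names are mine): a broadcast hash join ships the small side whole and builds a hash table on it, while a shuffle join first hash-partitions both sides by key.

```python
# Illustrative sketch of two join strategies, with no Spark dependency.

def broadcast_hash_join(big, small):
    """big, small: lists of (key, value); the small side fits in memory."""
    table = {}
    for k, v in small:                 # build a hash table on the small side
        table.setdefault(k, []).append(v)
    # stream the big side, probing the broadcast table
    return [(k, bv, sv) for k, bv in big for sv in table.get(k, [])]

def shuffle_partitions(rows, n):
    """Hash-partition rows by key, as the shuffle stage of a join would."""
    parts = [[] for _ in range(n)]
    for k, v in rows:
        parts[hash(k) % n].append((k, v))
    return parts

left  = [(1, "a"), (2, "b"), (2, "c")]
right = [(2, "x"), (3, "y")]
print(broadcast_hash_join(left, right))  # [(2, 'b', 'x'), (2, 'c', 'x')]
```

The trade-off the sketch captures: broadcasting avoids moving the big side at all, but only works when one side is small; shuffling works for any sizes but moves both sides across the cluster.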
This session will cover different ways of joining tables in Apache Spark.
Speaker: Vida Ha
This talk was originally presented at Spark Summit East 2017.
026 Neo4j Data Loading (ETL_ELT) Best Practices - NODES2022 AMERICAS Advanced... (Neo4j)
What patterns are most appropriate for building ETLs using Neo4j? In this session, we share how we built the Google Cloud DataFlow flex template using the Neo4j Java API. You can then apply the same approach to building read and write operators in any framework, including AWS Lambda and Google Cloud Functions.
Domain Driven Design main concepts
This presentation is a summary of the book "Domain Driven Design" from InfoQ.
Here is the link: http://www.infoq.com/minibooks/domain-driven-design-quickly
Event Sourcing, Domain-Driven Design, and Command Query Responsibility Segregation – we hear all of these technologies used together frequently, but how do they actually work together? How do you manage complex coordination in a CQRS system?
In this talk, we will discuss a real-world example of DDD with ES and CQRS written in F# - a functional-first language on the .NET Framework. We’ll take a deep dive into the F# algebraic type system that constructs the domain model. We’ll also compare the abstract notion of a DDD aggregate root with the CQRS implementation of an aggregate root, finishing with how sagas and triggers facilitate poly-aggregate communication.
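Since the talk's code is in F#, here is a hedged Python analogue of the core ES/CQRS mechanics it describes (all type and function names are illustrative): an aggregate's current state is a left fold over its event stream, and a command handler validates against that state before emitting new events.

```python
# Event-sourcing sketch: state = fold(apply, events); commands emit events.
from dataclasses import dataclass
from functools import reduce

@dataclass(frozen=True)
class Deposited: amount: int

@dataclass(frozen=True)
class Withdrawn: amount: int

def apply(balance, event):
    """State transition: fold one event into the aggregate's state."""
    if isinstance(event, Deposited):
        return balance + event.amount
    if isinstance(event, Withdrawn):
        return balance - event.amount
    return balance

def decide(balance, withdrawal):
    """Command handler: validate against current state, emit new events."""
    if withdrawal > balance:
        raise ValueError("insufficient funds")
    return [Withdrawn(withdrawal)]

events = [Deposited(100), Withdrawn(30)]
balance = reduce(apply, events, 0)   # replay the stream from scratch
print(balance)  # 70
```

The separation mirrors CQRS: `decide` is the write side (commands in, events out), while any number of read models can be built by folding the same event stream with different `apply` functions.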
Spark Streaming | Twitter Sentiment Analysis Example | Apache Spark Training ... (Edureka!)
This Edureka Spark Streaming Tutorial will help you understand how to use Spark Streaming to stream data from Twitter in real time and then process it for sentiment analysis. This Spark Streaming tutorial is ideal both for beginners and for professionals who want to learn or brush up on their Apache Spark concepts. Below are the topics covered in this tutorial:
1) What is Streaming?
2) Spark Ecosystem
3) Why Spark Streaming?
4) Spark Streaming Overview
5) DStreams
6) DStream Transformations
7) Caching/ Persistence
8) Accumulators, Broadcast Variables and Checkpoints
9) Use Case – Twitter Sentiment Analysis
Unix and Shell Programming,
Q P Code: 60305.
Additional Mathematics I
Q P Code: 60306
Computer Organization and Architecture
Q P Code: 62303
Data Structures Using C
Q P Code: 60303
Discrete Mathematical Structures
Q P Code: 60304
Engineering Mathematics - III
Q P Code: 60301
Soft Skill Development
Q P Code: 60307
Rdio's Alex Gaynor at Heroku's Waza 2013: Why Python, Ruby and Javascript are... (Heroku)
Rdio Software Engineer Alex Gaynor (@alex_gaynor) took to the #Waza 2013 stage (Heroku's Developer Conference) to talk about "Why Python, Ruby and Javascript are Slow". Gaynor argues that developers should aim to make performance beautiful. For more from Gaynor or to contact him, ping him at @Alex_Gaynor.
For more on Waza visit http://waza.heroku.com/2013.
For Waza videos stay tuned at http://blog.heroku.com or visit http://vimeo.com/herokuwaza
Here we describe how to "think" MapReduce, not just "code" MapReduce. We solve some interesting problems using MapReduce (e.g. how to compute similarity between all pairs of documents on the web, how to do k-means clustering, and how to find cliques in a graph). These solutions are simple and elegant, and they open up new ways for people to use MapReduce for more than simple number crunching.
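The all-pairs document-similarity problem above can be sketched as a toy, in-memory MapReduce (illustrative names and data; a real job would run distributed): instead of looping over document pairs, map each document to (term, doc) pairs, group by term, and emit co-occurring pairs from each term's postings list in the reduce.

```python
# Toy MapReduce-style all-pairs similarity: group by term, not by pair.
from collections import defaultdict
from itertools import combinations

docs = {"d1": "graph map reduce", "d2": "map reduce spark", "d3": "graph spark"}

# map: one (term, doc) record per word occurrence
mapped = [(term, doc) for doc, text in docs.items() for term in text.split()]

# shuffle: group records by term (the framework does this for you)
postings = defaultdict(set)
for term, doc in mapped:
    postings[term].add(doc)

# reduce: each shared term adds 1 to the overlap of every co-occurring pair
overlap = defaultdict(int)
for term, ds in postings.items():
    for a, b in combinations(sorted(ds), 2):
        overlap[(a, b)] += 1

# overlap counts: (d1, d2) -> 2, (d1, d3) -> 1, (d2, d3) -> 1
print(dict(overlap))
```

The point of "thinking MapReduce" here is that work is proportional to postings lists, so pairs that share no terms cost nothing, unlike a naive nested loop over all document pairs.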
Mid-Term Exam
Name___________________________________
MULTIPLE CHOICE. Choose the one alternative that best completes the statement or answers the question.
Fill in the blank with one of the words or phrases listed below.
distributive, real, reciprocals, absolute value, opposite, associative, inequality, commutative, whole, algebraic expression, exponent, variable
1) The ____ of a number is the distance between the number and 0 on the number line.
A) opposite B) whole C) absolute value D) exponent
Find an equation of the line. Write the equation using function notation.
2) Through (1, -3); perpendicular to f(x) = -4x - 3
A) f(x) = (1/4)x - 13/4 B) f(x) = -(1/4)x - 13/4 C) f(x) = -4x - 13/4 D) f(x) = 4x - 13/4
Multiply or divide as indicated.
3) 60 ÷ (-5)
A) -22 B) 12 C) -1/12 D) -12
Write the sentence using mathematical symbols.
4) Two subtracted from x is 55.
A) 2 + x = 55 B) 2 - x = 55 C) x - 2 = 55 D) 55 - 2 = x
Name the property illustrated by the statement.
5) (-10) + 10 = 0
A) associative property of addition B) additive identity property C) commutative property of addition D) additive inverse property
Tell whether the statement is true or false.
6) Every rational number is an integer.
A) True B) False
Add or subtract as indicated.
7) -5 - 12
A) 7 B) -17 C) 17 D) -7
Name the property illustrated by the statement.
8) (1 + 8) + 6 = 1 + (8 + 6)
A) distributive property B) associative property of addition C) commutative property of multiplication D) associative property of multiplication
Simplify the expression.
9) -(10v - 6) + 10(2v + 10)
A) 30v + 16 B) -10v + 94 C) 10v + 106 D) 30v + 4
Solve the equation.
10) 5(x + 3) = 3[14 - 2(3 - x) + 10]
A) -39 B) 3 C) -13 D) 39
List the elements of the set.
11) If A = {x | x is an odd integer} and B = {35, 37, 38, 40}, list the elements of A ∩ B.
A) {35, 37} B) {x | x is an odd integer} C) {x | x is an odd integer or x = 38 or x = 40} D) { }
Solve the inequality. Graph the solution set.
12) |x| ≥ 4
A) (-∞, -4] ∪ [4, ∞) B) [-4, 4] C) [4, ∞) D) (-∞, -4) ∪ (4, ∞)
(Number-line graphs omitted.)
Solve.
13) The sum of three consecutive even integers is 336. Find the integers.
A) 108, 110, 112 B) 110, 112, 114 C) 112, 114, 116 D) 111, 112, 113
Solve the inequality. Write your solution in interval notation.
14) x ≥ 4 or x ≥ -2
A) (-∞, ∞) B) [4, ∞) C) [-2, ∞) D) (-∞, -2] ∪ [4, ∞)
Use the formula A = P(1 + r/n)^(nt) to find the amount requested.
15) A principal of $12,000 is invested in an account paying an annual interest rate of 4%. Find the amount in the account after 3 years if the account is compounded quarterly.
A) $1,521.90 B) $13,388.02 C) $13,498.37 D) $13,521.90
Graph the solution set ...
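The compound interest arithmetic in question 15 can be checked directly with the formula A = P(1 + r/n)^(nt); a minimal sketch (variable names are mine):

```python
# Numeric check of the compound interest formula A = P(1 + r/n)**(n*t)
P, r, n, t = 12_000, 0.04, 4, 3      # principal, annual rate, quarterly, 3 years
A = P * (1 + r / n) ** (n * t)
print(round(A, 2))  # 13521.9, i.e. $13,521.90 (choice D)
```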
Monads and Monoids: from daily java to Big Data analytics in Scala
Finally, after two decades of evolution, Java 8 made a step towards functional programming. What can Java learn from other mature functional languages? How can you leverage obscure mathematical abstractions such as Monad or Monoid in practice? People usually find these concepts scary and difficult to understand. Oleksiy will explain them in simple words, giving a feel for a powerful tool applicable in many domains, from daily Java and Scala routines to Big Data analytics with Storm or Hadoop.
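To make the monoid idea concrete, here is a minimal sketch (names are illustrative, not from the talk): a (sum, count) pair with componentwise addition has an associative combine and an identity element, which is exactly the structure that lets Storm/Hadoop-style systems merge partial aggregates from independent partitions in any order.

```python
# A (sum, count) monoid: associative combine + identity element.
from functools import reduce

def combine(a, b):
    """Associative operation on (sum, count) pairs."""
    return (a[0] + b[0], a[1] + b[1])

identity = (0, 0)   # combine(x, identity) == x for all x

# Each partition folds its own rows into a partial aggregate...
partition1 = [(4, 1), (6, 1)]
partition2 = [(5, 1)]
partial1 = reduce(combine, partition1, identity)   # (10, 2)
partial2 = reduce(combine, partition2, identity)   # (5, 1)

# ...and partials combine into the final answer regardless of grouping.
total = combine(partial1, partial2)
print(total[0] / total[1])  # 5.0  (the mean of 4, 6, 5)
```

Note the trick: the mean itself is not a monoid (you cannot combine two means), but (sum, count) is, and the mean falls out at the end.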
Algorithm Analysis
Computational Complexity
Introduction to Basic Data
Structures
Graph Theory
Graph Algorithms
Greedy Algorithms
Divide and Conquer
Dynamic Programming
Introduction to Linear Programming
Flow Network
VARIOUS FUZZY NUMBERS AND THEIR VARIOUS RANKING APPROACHES (IAEME Publication)
This brief survey identifies ranking formulas for various fuzzy numbers drawn from research papers published over the past few years. It presents the latest results on fuzzy ranking applications clearly and simply, highlighting key points in the use of fuzzy numbers, their underlying concepts, and their ranking formulas.
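As one concrete example of the kind of ranking approach such surveys cover (chosen for illustration, not necessarily the paper's own method): triangular fuzzy numbers (a, b, c) are often ranked by the x-coordinate of their centroid, (a + b + c)/3.

```python
# Centroid-based ranking of triangular fuzzy numbers (a, b, c).

def centroid(tfn):
    """x-coordinate of the centroid of a triangular fuzzy number."""
    a, b, c = tfn
    return (a + b + c) / 3

fuzzy_numbers = {"A": (1, 2, 3), "B": (2, 3, 4), "C": (0, 3, 5)}

# Rank from largest centroid to smallest.
ranking = sorted(fuzzy_numbers,
                 key=lambda k: centroid(fuzzy_numbers[k]),
                 reverse=True)
print(ranking)  # ['B', 'C', 'A']  (centroids 3.0, 2.67, 2.0)
```

Other approaches in the literature weight the peak differently or use area-based measures; the centroid method is simply the easiest to state and compute.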
Outrageous ideas for Graph Databases
Almost every graph database vendor raised money in 2021. I am glad they did, because they are going to need the money. Our current graph databases are terrible and need a lot of work. There, I said it. It's the ugly truth in our little niche industry. That's why, despite waiting over a decade for the "Year of the Graph" to come, we still haven't set the world on fire. Graph databases can be painfully slow, they can't handle non-graph workloads, their APIs are clunky, and their query languages are either hard to learn or hard to scale. Most graph projects require expert shepherding to succeed. 80% of the work takes 20% of the time, but that last 20% takes forever. The graph database vendors optimize for new users, not grizzled veterans. They optimize for sales, not solutions. Come listen to a rant by an industry OG on where we could go from here if we took the time to listen to the users that haven't given up on us yet.
Fraudsters are now using more sophisticated and dynamic methods involving credit cards, money laundering, and other types of fraud. Leveraging graph technology will allow you to see beyond individual data points and uncover hard-to-detect patterns.
What Finance can learn from Dating Sites (Max De Marzi)
Dating, as is often said, is a numbers game. Organizations such as Match.com and Zoosk rely on very sophisticated technology as they sift through vast customer bases to create the most compatible couples. Specifically, they rely on data to build the most nuanced portraits of their members that they can, so they can find the best matches. This is a business-critical activity for dating sites — the more successful the matching, the better revenues will be. One of the ways they do this is through graph databases. These differ from relational databases in that they specialize in identifying the relationships between multiple data points. This means they can query and display connections between people, preferences and interests very quickly.
In this session you will see how in many ways dating sites are getting better performance and more value out of their data than financial institutions by using Neo4j.
Instructions for Submissions through Google Classroom (Jheel Barad)
This presentation provides a briefing on how to upload submissions and documents in Google Classroom. It was prepared as part of an orientation for new Sainik School in-service teacher trainees. As a training officer, my goal is to ensure that you are comfortable and proficient with this essential tool for managing assignments and fostering student engagement.
Introduction to AI for Nonprofits with Tapp Network (TechSoup)
Dive into the world of AI! Experts Jon Hill and Tareq Monaur will guide you through AI's role in enhancing nonprofit websites and basic marketing strategies, making it easy to understand and apply.
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit www.vavaclasses.com
2024.06.01 Introducing a competency framework for language learning materials ... (Sandy Millin)
http://sandymillin.wordpress.com/iateflwebinar2024
Published classroom materials form the basis of syllabuses, drive teacher professional development, and have a potentially huge influence on learners, teachers and education systems. All teachers also create their own materials, whether a few sentences on a blackboard, a highly-structured fully-realised online course, or anything in between. Despite this, the knowledge and skills needed to create effective language learning materials are rarely part of teacher training, and are mostly learnt by trial and error.
Knowledge and skills frameworks, generally called competency frameworks, for ELT teachers, trainers and managers have existed for a few years now. However, until I created one for my MA dissertation, there wasn’t one drawing together what we need to know and do to be able to effectively produce language learning materials.
This webinar will introduce you to my framework, highlighting the key competencies I identified from my research. It will also show how anybody involved in language teaching (any language, not just English!), teacher training, managing schools or developing language learning materials can benefit from using the framework.
Francesca Gottschalk - How can education support child empowerment (EduSkills OECD)
Francesca Gottschalk from the OECD’s Centre for Educational Research and Innovation presents at the Ask an Expert Webinar: How can education support child empowerment?
Model Attribute Check Company Auto Property (Celine George)
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
Biological screening of herbal drugs: introduction and need for phyto-pharmacological screening; new strategies for evaluating natural products; in vitro evaluation techniques for antioxidant, antimicrobial and anticancer drugs; in vivo evaluation techniques for anti-inflammatory, antiulcer, anticancer, wound healing, antidiabetic, hepatoprotective, cardioprotective, diuretic and antifertility activity; toxicity studies as per OECD guidelines.
Operation “Blue Star” is the only event in the history of independent India where the state went to war with its own people. Even after about 40 years, it is not clear whether it was the culmination of the state's anger toward the people of the region, a political game of power, or the start of a dictatorial chapter in the democratic setup.
The people of Punjab felt alienated from the mainstream due to the denial of their just demands during a long democratic struggle since independence. As happens all over the world, this led to a militant struggle with great loss of lives among military, police and civilian personnel. The killing of Indira Gandhi and the massacre of innocent Sikhs in Delhi and other Indian cities were also associated with this movement.
Welcome to TechSoup New Member Orientation and Q&A (May 2024) (TechSoup)
In this webinar you will learn how your organization can access TechSoup's wide variety of product discount and donation programs. From hardware to software, we'll give you a tour of the tools available to help your nonprofit with productivity, collaboration, financial management, donor tracking, security, and more.
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor... (Levi Shapiro)
Letter from the Congress of the United States regarding Anti-Semitism sent June 3rd to MIT President Sally Kornbluth, MIT Corp Chair, Mark Gorenberg
Dear Dr. Kornbluth and Mr. Gorenberg,
The US House of Representatives is deeply concerned by ongoing and pervasive acts of antisemitic harassment and intimidation at the Massachusetts Institute of Technology (MIT). Failing to act decisively to ensure a safe learning environment for all students would be a grave dereliction of your responsibilities as President of MIT and Chair of the MIT Corporation.
This Congress will not stand idly by and allow an environment hostile to Jewish students to persist. The House believes that your institution is in violation of Title VI of the Civil Rights Act, and the inability or unwillingness to rectify this violation through action requires accountability.
Postsecondary education is a unique opportunity for students to learn and have their ideas and beliefs challenged. However, universities receiving hundreds of millions of federal funds annually have denied students that opportunity and have been hijacked to become venues for the promotion of terrorism, antisemitic harassment and intimidation, unlawful encampments, and in some cases, assaults and riots.
The House of Representatives will not countenance the use of federal funds to indoctrinate students into hateful, antisemitic, anti-American supporters of terrorism. Investigations into campus antisemitism by the Committee on Education and the Workforce and the Committee on Ways and Means have been expanded into a Congress-wide probe across all relevant jurisdictions to address this national crisis. The undersigned Committees will conduct oversight into the use of federal funds at MIT and its learning environment under authorities granted to each Committee.
• The Committee on Education and the Workforce has been investigating your institution since December 7, 2023. The Committee has broad jurisdiction over postsecondary education, including its compliance with Title VI of the Civil Rights Act, campus safety concerns over disruptions to the learning environment, and the awarding of federal student aid under the Higher Education Act.
• The Committee on Oversight and Accountability is investigating the sources of funding and other support flowing to groups espousing pro-Hamas propaganda and engaged in antisemitic harassment and intimidation of students. The Committee on Oversight and Accountability is the principal oversight committee of the US House of Representatives and has broad authority to investigate “any matter” at “any time” under House Rule X.
• The Committee on Ways and Means has been investigating several universities since November 15, 2023, when the Committee held a hearing entitled From Ivory Towers to Dark Corners: Investigating the Nexus Between Antisemitism, Tax-Exempt Universities, and Terror Financing. The Committee followed the hearing with letters to those institutions on January 10, 202
The Roman Empire A Historical Colossus.pdfkaushalkr1407
The Roman Empire, a vast and enduring power, stands as one of history's most remarkable civilizations, leaving an indelible imprint on the world. It emerged from the Roman Republic, transitioning into an imperial powerhouse under the leadership of Augustus Caesar in 27 BCE. This transformation marked the beginning of an era defined by unprecedented territorial expansion, architectural marvels, and profound cultural influence.
The empire's roots lie in the city of Rome, founded, according to legend, by Romulus in 753 BCE. Over centuries, Rome evolved from a small settlement to a formidable republic, characterized by a complex political system with elected officials and checks on power. However, internal strife, class conflicts, and military ambitions paved the way for the end of the Republic. Julius Caesar’s dictatorship and subsequent assassination in 44 BCE created a power vacuum, leading to a civil war. Octavian, later Augustus, emerged victorious, heralding the Roman Empire’s birth.
Under Augustus, the empire experienced the Pax Romana, a 200-year period of relative peace and stability. Augustus reformed the military, established efficient administrative systems, and initiated grand construction projects. The empire's borders expanded, encompassing territories from Britain to Egypt and from Spain to the Euphrates. Roman legions, renowned for their discipline and engineering prowess, secured and maintained these vast territories, building roads, fortifications, and cities that facilitated control and integration.
The Roman Empire’s society was hierarchical, with a rigid class system. At the top were the patricians, wealthy elites who held significant political power. Below them were the plebeians, free citizens with limited political influence, and the vast numbers of slaves who formed the backbone of the economy. The family unit was central, governed by the paterfamilias, the male head who held absolute authority.
Culturally, the Romans were eclectic, absorbing and adapting elements from the civilizations they encountered, particularly the Greeks. Roman art, literature, and philosophy reflected this synthesis, creating a rich cultural tapestry. Latin, the Roman language, became the lingua franca of the Western world, influencing numerous modern languages.
Roman architecture and engineering achievements were monumental. They perfected the arch, vault, and dome, constructing enduring structures like the Colosseum, Pantheon, and aqueducts. These engineering marvels not only showcased Roman ingenuity but also served practical purposes, from public entertainment to water supply.
Embracing GenAI - A Strategic ImperativePeter Windle
Artificial Intelligence (AI) technologies such as Generative AI, Image Generators and Large Language Models have had a dramatic impact on teaching, learning and assessment over the past 18 months. The most immediate threat AI posed was to Academic Integrity with Higher Education Institutes (HEIs) focusing their efforts on combating the use of GenAI in assessment. Guidelines were developed for staff and students, policies put in place too. Innovative educators have forged paths in the use of Generative AI for teaching, learning and assessments leading to pockets of transformation springing up across HEIs, often with little or no top-down guidance, support or direction.
This Gasta posits a strategic approach to integrating AI into HEIs to prepare staff, students and the curriculum for an evolving world and workplace. We will highlight the advantages of working with these technologies beyond the realm of teaching, learning and assessment by considering prompt engineering skills, industry impact, curriculum changes, and the need for staff upskilling. In contrast, not engaging strategically with Generative AI poses risks, including falling behind peers, missed opportunities and failing to ensure our graduates remain employable. The rapid evolution of AI technologies necessitates a proactive and strategic approach if we are to remain relevant.
Read| The latest issue of The Challenger is here! We are thrilled to announce that our school paper has qualified for the NATIONAL SCHOOLS PRESS CONFERENCE (NSPC) 2024. Thank you for your unwavering support and trust. Dive into the stories that made us stand out!
17. Ideas are Wrong
• Too Many Back-ends (aka Tinkerpop is wrong)
• No lessons applied from Relational Databases
• API is incomplete (bulk)
• Query Languages are Incompetent
18. Implementations are Wrong
• Nodes as Objects sucks
• No internal algebras
• Incompetent Query Optimizers
• Incompetent Query Executors
• Incompetent Engineering
31. Peter Suggests:
https://homepages.cwi.nl/~boncz/edbt2022.pdf
1. Row Storage for Properties of Nodes/Relationships
2. Less Indexing
3. Less Joins
4. Be more Relational, then add Graph Functionality
5. Don’t rely on the query optimizer
6. Don’t allow generic recursive queries
7. Limit the query language
51. Problem with Joins
Table 1 (ID): 0, 1, 3, 4, 5, 6, 7, 8, 9, 11
Table 2 (ID): 0, 2, 6, 7, 8, 9
Table 3 (ID): 2, 4, 5, 8, 10
Final Result (Table 1 ⋈ Table 2 ⋈ Table 3): 8
Intermediate Result (Table 1 ⋈ Table 2): 0, 6, 7, 8, 9
A pairwise join plan materializes five intermediate rows even though only a single row survives the full join.
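To make the intermediate-result problem concrete, here is a minimal Python sketch of the three ID columns above (the variable names t1/t2/t3 are this sketch's own): the pairwise plan materializes five rows that contribute nothing to the final answer.

```python
# The three ID columns from slide 51, modeled as Python sets.
t1 = {0, 1, 3, 4, 5, 6, 7, 8, 9, 11}
t2 = {0, 2, 6, 7, 8, 9}
t3 = {2, 4, 5, 8, 10}

# A pairwise (binary) join plan: (Table 1 JOIN Table 2) JOIN Table 3.
intermediate = t1 & t2   # materializes rows the final answer discards
final = intermediate & t3

print(sorted(intermediate))  # [0, 6, 7, 8, 9]
print(sorted(final))         # [8]
```

Of the five intermediate rows, only ID 8 survives; a multiway join that consults all three columns at once never creates the other four.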
52. Worst Case Optimal Joins
● Worst-Case Optimal Join Algorithms: Techniques, Results, and Open Problems. Ngo. (Gems of PODS 2018)
● Worst-Case Optimal Join Algorithms: Techniques, Results, and Open Problems. Ngo, Porat, Re, Rudra. (Journal of the ACM 2018)
● What do Shannon-type inequalities, submodular width, and disjunctive datalog have to do with one another? Abo Khamis, Ngo, Suciu. (PODS 2017 - Invited to Journal of ACM)
● Computing Join Queries with Functional Dependencies. Abo Khamis, Ngo, Suciu. (PODS 2017)
● Joins via Geometric Resolutions: Worst-case and Beyond. Abo Khamis, Ngo, Re, Rudra. (PODS 2015, Invited to TODS 2015)
● Beyond Worst-Case Analysis for Joins with Minesweeper. Abo Khamis, Ngo, Re, Rudra. (PODS 2014)
● Leapfrog Triejoin: A Simple Worst-Case Optimal Join Algorithm. Veldhuizen. (ICDT 2014 - Best Newcomer)
● Skew Strikes Back: New Developments in the Theory of Join Algorithms. Ngo, Re, Rudra. (Invited to SIGMOD Record 2013)
● Worst Case Optimal Join Algorithms. Ngo, Porat, Re, Rudra. (PODS 2012 - Best Paper)
54. More than 3 Tables
[Figure: a trie over the join attributes Brand, Category, Retailer, and Rating, annotated with the sequence of seek operations (1-7) the join performs as it descends sorted keys and skips over gaps.]
Worst-Case Optimal Joins take advantage of sorted keys and gaps in the data to eliminate intermediate results, speed up queries, and get rid of the Join problem.
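The seek-and-skip idea can be sketched for a single join attribute. This is an illustrative simplification in the spirit of Leapfrog Triejoin, not the algorithm from the papers above; the function name and structure are this sketch's own.

```python
from bisect import bisect_left

def leapfrog_intersect(*columns):
    """Intersect sorted, duplicate-free columns by 'leapfrogging':
    repeatedly seek every lagging column up to the current maximum key,
    skipping gaps instead of materializing pairwise intermediate rows."""
    cols = [sorted(set(c)) for c in columns]
    if any(not c for c in cols):
        return []
    pos = [0] * len(cols)
    out = []
    while True:
        vals = [c[p] for c, p in zip(cols, pos)]
        hi = max(vals)
        if min(vals) == hi:          # all columns agree: emit a match
            out.append(hi)
            pos[0] += 1
            if pos[0] == len(cols[0]):
                return out
        else:                        # seek lagging columns forward to hi
            for i, (c, p) in enumerate(zip(cols, pos)):
                if c[p] < hi:
                    pos[i] = bisect_left(c, hi, p)
                    if pos[i] == len(c):
                        return out

# The three ID columns from slide 51: only 8 appears in all three.
print(leapfrog_intersect([0, 1, 3, 4, 5, 6, 7, 8, 9, 11],
                         [0, 2, 6, 7, 8, 9],
                         [2, 4, 5, 8, 10]))  # [8]
```

Each seek is a binary search that can jump over an arbitrarily large gap, which is exactly how the trie diagram above avoids work.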
59. Reduce the Search Space
[Figure: the same trie-seek diagram, now over the attributes Airport, Day, Flight, and Destination.]
What if you wanted to earn miles on your frequent flyer program and filter by Airline? No problem here: the more joins, the merrier.
64. What’s wrong with NULL?
SELECT *
FROM parts
WHERE (price <= 99) OR (price > 99)

SELECT *
FROM parts
WHERE (price <= 99) OR (price > 99) OR isNull(price)

SELECT AVG(height)
FROM parts

SELECT orders.id, parts.id
FROM orders LEFT OUTER JOIN parts ON parts.id = orders.part_id

SELECT orders.id, parts.id
FROM parts LEFT OUTER JOIN orders ON parts.id = orders.part_id

● (a OR NOT(a)) != True
● Aggregation requires special cases
● Outer Joins are not commutative: a x b != b x a
Query Optimizers hate NULLs. The 3-valued logic causes major headaches.
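The failure of the excluded middle is easy to reproduce. A minimal sketch using Python's built-in sqlite3 module, with a hypothetical parts table of our own (we use the standard `price IS NULL` in place of the slide's isNull(price)):

```python
import sqlite3

# Hypothetical parts table; one row has a NULL price.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE parts (id INTEGER, price REAL)")
con.executemany("INSERT INTO parts VALUES (?, ?)",
                [(1, 50.0), (2, 150.0), (3, None)])

# (price <= 99) OR (price > 99) looks like a tautology, but under
# three-valued logic it evaluates to UNKNOWN for the NULL row.
without_null = con.execute(
    "SELECT COUNT(*) FROM parts WHERE price <= 99 OR price > 99"
).fetchone()[0]
with_null = con.execute(
    "SELECT COUNT(*) FROM parts WHERE price <= 99 OR price > 99 "
    "OR price IS NULL"
).fetchone()[0]
print(without_null, with_null)  # 2 3

# Aggregation needs special cases too: AVG silently skips the NULL,
# so this is (50 + 150) / 2, not a sum over 3 rows.
avg = con.execute("SELECT AVG(price) FROM parts").fetchone()[0]
print(avg)  # 100.0
```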
67. Sets vs Bags
Set: {1,2,3}, {8,3,4}
Bags: {1,2,2,3}, {3,3,3,3}
Sets have Unique Values; Bags allow Duplicate Values.
● Queries that use only ANDs (no ORs) are called “conjunctive queries”
● Conjunctive Queries under Set Semantics are Much Easier to Optimize
Query Optimizers hate Bags. Duplicates cause major headaches.
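One concrete example of why duplicates hurt, sketched in Python (the tables and the rewrite are our own illustration): under set semantics the conjunctive query Q(x) = R(x) AND R(x) is just R, so an optimizer may delete the redundant self-join; under bag semantics the same rewrite is wrong, because a self-join multiplies duplicate counts.

```python
from collections import Counter

# Set semantics: a table is a set of rows; bag semantics: a multiset.
set_R = {1, 2, 3}
bag_R = Counter([1, 2, 2, 3])

# Sets: the self-join R(x) AND R(x) collapses back to R.
assert {x for x in set_R if x in set_R} == set_R

# Bags: the self-join squares each multiplicity, so Q != R.
bag_Q = Counter({x: n * n for x, n in bag_R.items()})
print(bag_Q)  # Counter({2: 4, 1: 1, 3: 1})
```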
73. Math
You learned this in middle school
• 1 + (2 + 3) = (1 + 2) + 3
• 3 + 4 = 4 + 3
• 3 + 0 = 3
• 1 + (-1) = 0
• 2 x (3 x 4) = (2 x 3) x 4
• 2 x 5 = 5 x 2
• 2 x 1 = 2
• 2 x 0.5 = 1
• 2 x (3 + 4) = (2 x 3) + (2 x 4)
• (3 + 4) x 2 = (3 x 2) + (4 x 2)
74. Math
You learned this in high school
• a + (b + c) = (a + b) + c
• a + b = b + a
• a + 0 = a
• a + (-a) = 0
• a x (b x c) = (a x b) x c
• a x b = b x a
• a x 1 = a
• a x a^-1 = 1, a != 0
• a x (b + c) = (a x b) + (a x c)
• (a + b) x c = (a x c) + (b x c)
75. Math
You forgot this in high school
• Addition:
• Associativity:
• a ⊕ (b ⊕ c) = (a ⊕ b) ⊕ c
• Commutativity:
• a ⊕ b = b ⊕ a
• Identity: a ⊕ ō = a
• Inverse: a ⊕ (-a) = ō
• Multiplication
• Associativity:
• a ⊗ (b ⊗ c) = (a ⊗ b) ⊗ c
• Commutativity:
• a ⊗ b = b ⊗ a
• Identity: a ⊗ ī = a
• Inverse: a ⊗ a^-1 = ī
• Distribution of Multiplication over Addition:
• a ⊗ (b ⊕ c) = (a ⊗ b) ⊕ (a ⊗ c)
• (a ⊕ b) ⊗ c = (a ⊗ c) ⊕ (b ⊗ c)
76. Example 1
Query: find the count of the combined rows a, b, c in tables R, S and T
def result = count[a,b,c: R(a) and S(b) and T(c)]
Mathematical Representation: result = Σ_{a,b,c} R(a) · S(b) · T(c), treating membership in R, S, T as a 0/1 indicator.
80. Example 1
Query: count the number of combined rows a, b, c in tables R, S and T
def result = count[a,b,c: R(a) and S(b) and T(c)]
Optimized Query:
def result = count[R] * count[S] * count[T]
n^3 is much slower than 3n
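The rewrite is just distributivity of ⊗ over ⊕, and it is easy to check on tiny tables (the values below are our own illustration, not from the talk):

```python
from itertools import product

# Hypothetical single-column tables standing in for R, S, T.
R, S, T = [1, 2, 3], [4, 5], [6, 7, 8, 9]

# Naive plan: materialize the full cross product, then count: O(|R|·|S|·|T|).
naive = sum(1 for _ in product(R, S, T))

# Optimized plan the rewrite justifies: count each table once: O(n) each.
optimized = len(R) * len(S) * len(T)

print(naive, optimized)  # 24 24
```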
81. Example 2
Query: find the minimum sum of rows a, b, c in tables R, S and T:
def result = min[a,b,c,v: v = R[a] + S[b] + T[c]]
Mathematical Representation: result = min_{a,b,c} (R[a] + S[b] + T[c])
83. Example 2
Query: find the minimum sum of rows a, b, c in tables R, S and T:
def result = min[a,b,c,v: v = R[a] + S[b] + T[c]]
Optimized Query:
def result = min[R] + min[S] + min[T]
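The same distributivity argument in the (min, +) semiring: because a, b, c range independently, the minimum of the sums is the sum of the minimums. A quick check on hypothetical tables of our own:

```python
from itertools import product

R, S, T = [3, 1, 4], [1, 5, 9], [2, 6, 5]

# Naive: enumerate all combinations and take the minimum of the sums.
naive = min(r + s + t for r, s, t in product(R, S, T))

# Optimized: min distributes over the independent sums.
optimized = min(R) + min(S) + min(T)

print(naive, optimized)  # 4 4
```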
84. Shortest Path from A to F
[Figure: a graph on nodes A-F with edge weights A-B = 1, B-C = 2, C-D = 3, B-D = 6, D-F = 5, A-E = 9, E-F = 4.]
AEF = 9 + 4 = 13
ABDF = 1 + 6 + 5 = 12
ABCDF = 1 + 2 + 3 + 5 = 11
min{13, 12, 11} = 11
85. Maximum Reliability from A to F
[Figure: the same graph with edge reliabilities A-B = 0.9, B-C = 0.9, C-D = 1.0, B-D = 0.2, D-F = 0.7, A-E = 0.4, E-F = 0.8.]
AEF = 0.4 x 0.8 = 0.32
ABDF = 0.9 x 0.2 x 0.7 = 0.126
ABCDF = 0.9 x 0.9 x 1.0 x 0.7 = 0.567
max{0.32, 0.126, 0.567} = 0.567
86. Words from A to F
[Figure: the same graph with edge labels A-B = T, B-C = I, C-D = M, B-D = H, D-F = E, A-E = A, E-F = T.]
AEF = A · T = AT
ABDF = T · H · E = THE
ABCDF = T · I · M · E = TIME
union{at, the, time} = at the time
87. Math
You skipped this in college
• min { (9 + 4), (1 + 6 + 5), ( 1 + 2 + 3 + 5 ) }
• max { (0.4 x 0.8), (0.9 x 0.2 x 0.7), (0.9 x 0.9 x 1.0 x 0.7) }
• union { (A · T), (T · H · E), (T · I · M · E) }
88. Math
You skipped this in college
• ⊕ { (9 ⊗ 4), (1 ⊗ 6 ⊗ 5), ( 1 ⊗ 2 ⊗ 3 ⊗ 5 ) }
• ⊕ { (0.4 ⊗ 0.8), (0.9 ⊗ 0.2 ⊗ 0.7), (0.9 ⊗ 0.9 ⊗ 1.0 ⊗ 0.7) }
• ⊕ { (A ⊗ T), (T ⊗ H ⊗ E), (T ⊗ I ⊗ M ⊗ E) }
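The three slides above are one algorithm instantiated with three semirings: (⊕, ⊗) = (min, +), (max, x), and (union, concatenation). A minimal Python sketch over the slide's own path labels (the helper name is ours):

```python
from functools import reduce

def aggregate_paths(paths, plus, times, one):
    """⊕ over all paths of (⊗ over each path's edge labels)."""
    return reduce(plus, (reduce(times, p, one) for p in paths))

# Edge labels along the three A-to-F paths: AEF, ABDF, ABCDF.
weights = [[9, 4], [1, 6, 5], [1, 2, 3, 5]]
probs = [[0.4, 0.8], [0.9, 0.2, 0.7], [0.9, 0.9, 1.0, 0.7]]
letters = [["A", "T"], ["T", "H", "E"], ["T", "I", "M", "E"]]

# (min, +): shortest path.
shortest = aggregate_paths(weights, min, lambda a, b: a + b, 0)
# (max, x): most reliable path.
reliable = aggregate_paths(probs, max, lambda a, b: a * b, 1.0)
# (union, concatenation) over sets of strings: all path words.
concat = lambda X, Y: {x + y for x in X for y in Y}
words = aggregate_paths([[{c} for c in p] for p in letters],
                        lambda X, Y: X | Y, concat, {""})

print(shortest, round(reliable, 3), sorted(words))
```

A semantic optimizer that recognizes the semiring axioms can reuse one optimization for all three problems.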
89. Example 3
Query: count the number of 3-hop paths per node in a graph
def path3(a, b, c, d) = edge(a,b) and edge(b,c) and edge(c,d)
def result[a] = count[path3[a]]
Mathematical Representation: result[a] = Σ_{b,c,d} edge(a,b) · edge(b,c) · edge(c,d)
[Figure: a 3-hop path A → B → C → D.]
91. Example 3
Query: count the number of 3-hop paths per node in a graph
def path3(a, b, c, d) = edge(a,b) and edge(b,c) and edge(c,d)
def result[a] = count[path3[a]]
Optimized Query:
def path1[c] = count[edge[c]]
def path2[b] = sum[path1[c] for c in edge[b]]
def result[a] = sum[path2[b] for b in edge[a]]
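The pushdown in the optimized query can be checked directly. A minimal Python sketch on a hypothetical edge list of our own: the naive plan enumerates every 3-hop path, while the optimized plan sums counts one hop at a time, mirroring path1/path2/result.

```python
# Hypothetical edge list for a small directed graph.
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("B", "D"), ("A", "C")]
nodes = {n for e in edges for n in e}
out = {n: [] for n in nodes}
for u, v in edges:
    out[u].append(v)

# Naive: enumerate every 3-hop path (a, b, c, d), count per start node.
naive = {n: 0 for n in nodes}
for a in nodes:
    for b in out[a]:
        for c in out[b]:
            for d in out[c]:
                naive[a] += 1

# Optimized: push the count through the joins one hop at a time.
path1 = {v: len(out[v]) for v in nodes}                     # 1-hop counts
path2 = {v: sum(path1[w] for w in out[v]) for v in nodes}   # 2-hop counts
result = {v: sum(path2[w] for w in out[v]) for v in nodes}  # 3-hop counts

assert naive == result
print(result["A"])  # 1
```

The naive plan is cubic in the out-degree; the optimized plan does linear work per edge, per hop.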
92. Semantic Query Optimizer
It knows math!
• Compute the Discrete Fourier Transform in Fast Fourier Transform time
• Junction Tree Algorithm for inference in Probabilistic Graphical Models
• Message passing, belief propagation
• Viterbi Algorithm, forward/backward most-probable paths for Hidden Markov Models
• Counting sub-graph patterns (motifs)
• Yannakakis Algorithm for acyclic conjunctive queries in Polynomial Time
• Fractional hypertree-width time algorithm for Constraint Satisfaction Problems
• Best known results for Conjunctive Queries and Quantified Conjunctive Queries
93. Semantic Query Optimizer
It knows math!
• This optimizer produces much better code than the average developer because it knows a ton more math than the average developer.
• Maryam Mirzakhani
• Terence Tao
• Ramanujan
• Katherine Goble
• Good Will Hunting
104. Betweenness Centrality
Graph Algorithms
One of many graph centrality measures useful for assessing the importance of a node.
High-Level Definition: the number of times a node appears on shortest paths within a network.
Why it’s Useful: identifies which nodes control information flow between different areas of the graph; also called “Bridge Nodes”.
Business Use-Cases:
Communication Analysis: identify important people who communicate across different groups.
Retail Purchase Analysis: which products introduce customers to new categories.
105. Betweenness Centrality Computation
Brandes' Algorithm is applied as follows:
1. For each pair of nodes, compute all shortest paths and capture the nodes (excluding endpoints) on said path(s).
2. For each pair of nodes, assign each node along a path a value of one if there is only one shortest path, or the fractional contribution (1/n) if there are n shortest paths.
3. Sum the values from step 2 for each node; this is the Betweenness Centrality.
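The steps above can be sketched in plain Python for an unweighted, undirected graph. This is a minimal illustration of Brandes' algorithm, not the Rel implementation on the next slide; the names and the example graph are our own.

```python
from collections import deque

def brandes(adj):
    """Betweenness centrality via Brandes' algorithm:
    BFS from every source, count shortest paths (sigma), then
    accumulate each node's fractional dependency in reverse BFS order.
    Divide by 2 at the end since each undirected pair is seen twice."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # Step 1: BFS shortest-path counts and distances from s.
        sigma = {v: 0 for v in adj}
        dist = {v: -1 for v in adj}
        preds = {v: [] for v in adj}
        sigma[s], dist[s] = 1, 0
        order, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Steps 2-3: accumulate fractional dependencies, deepest first.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return {v: b / 2 for v, b in bc.items()}

# Path graph A-B-C-D: the interior nodes bridge the endpoints.
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(brandes(adj))  # {'A': 0.0, 'B': 2.0, 'C': 2.0, 'D': 0.0}
```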
106. Betweenness Centrality Implementation
// Shortest path between s and t when they are the same is 0.
def shortest_path[s, t] = Min[
    v, w:
        (shortest_path(s, t, w) and v = 1) or
        (w = shortest_path[s, v] + 1 and E(v, t))
]
// When s and t are the same, there is only one shortest path between
// them, namely the one with length 0.
def nb_shortest(s, t, n) = V(s) and V(t) and s = t and n = 1
// When s and t are *not* the same, it is the sum of the number of shortest
// paths between s and v for all the v's adjacent to t and on the shortest
// path between s and t.
def nb_shortest(s, t, n) =
    s != t and
    n = sum[v, m:
        shortest_path[s, v] + 1 = shortest_path[s, t] and E(v, t) and
        nb_shortest(s, v, m)
    ]
// Sum over all t's such that there is an edge between v and t,
// and v is on the shortest path between s and t.
def C[s, v] = sum[t, r:
    E(v, t) and shortest_path[s, t] = shortest_path[s, v] + 1 and
    (
        a = C[s, t] or
        not C(s, t, _) and a = 0.0
    ) and
    r = (nb_shortest[s, v] / nb_shortest[s, t]) * (1 + a)
] from a
// Note that below we divide by 2 because we are double counting every edge.
def betweenness_centrality_brandes[v] =
    sum[s, p: s != v and C[s, v] = p] / 2
107. Betweenness Centrality ReComputation
Incremental updates to data and recomputation of Betweenness Centrality take only a few seconds, whereas other systems must re-compute the entire graph.
110. Incremental Maintenance
1. Dependency tracking to figure out which views are affected by a change.
2. Demand-driven execution to only compute what users are actively interested in.
3. Differential computation to incrementally maintain even general recursion.
4. Semantic optimization to recover better maintenance algorithms where possible.