"Consider the following setting. Suppose we are given as input a ""corrupted"" truth-table of a polynomial f(x1,..,xm) of degree r = o(√m), with a random set of 1/2 - o(1) fraction of the evaluations flipped. Can the polynomial f be recovered? Turns out we can, and we can do it efficiently! The above question is a demonstration of the reliability of Reed-Muller codes under the random error model. For Reed-Muller codes over F_2, the message is thought of as an m-variate polynomial of degree r, and its encoding is just the evaluation over all of F_2^m.
In this talk, we shall study the resilience of RM codes in the *random error* model. We shall see that under random errors, RM codes can efficiently decode many more errors than its minimum distance. (For example, in the above toy example, minimum distance is about 1/2^{√m} but we can correct close to 1/2-fraction of random errors). This builds on a recent work of [Abbe-Shpilka-Wigderson-2015] who established strong connections between decoding erasures and decoding errors. The main result in this talk would be constructive versions of those connections that yield efficient decoding algorithms. This is joint work with Amir Shpilka and Ben Lee Volk."
2-13. A Puzzle

Your friend picks a polynomial f(x) ∈ F_2[x1, ..., xm] of degree r ≈ m^{1/3}. She gives you the entire truth table of f, i.e., the value of f(v) for every v ∈ {0,1}^m.

[Figure: a row of 16 bits indexed by the points 0000, 0001, ..., 1111 of {0,1}^4.]

Can you recover f?
Of course! Just interpolate.

Now suppose she corrupts 49% of the bits before handing you the table. Can you recover f?
Impossible if the errors are adversarial...

But what if she corrupts 49% of the bits randomly? Can you recover f then?

This talk: Yes, and we can do it efficiently!
("Efficiently decoding Reed-Muller codes from random errors")
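To make the "just interpolate" step concrete, here is a tiny brute-force sketch in Python (our own illustration; the slides contain no code). It uses the standard fact that over F_2 every function on {0,1}^m equals a unique multilinear polynomial, whose coefficient on ∏_{i∈S} xi is the mod-2 sum of f over all points supported inside S.

```python
from itertools import combinations, product

def interpolate(truth_table, m):
    """truth_table: dict mapping each point of {0,1}^m (a 0/1 tuple) to the value of f there.
    Returns the set of monomials (tuples of variable indices) that have coefficient 1."""
    coefficients = set()
    for k in range(m + 1):
        for S in combinations(range(m), k):
            # Coefficient of prod_{i in S} x_i = sum of f over points supported inside S (mod 2).
            total = sum(value for point, value in truth_table.items()
                        if all(point[i] == 0 for i in range(m) if i not in S)) % 2
            if total:
                coefficients.add(S)
    return coefficients

# Recover x1*x2 + x3*x4 from its truth table on {0,1}^4.
points = list(product((0, 1), repeat=4))
table = {v: (v[0] & v[1]) ^ (v[2] & v[3]) for v in points}
print(sorted(interpolate(table, 4)))   # [(0, 1), (2, 3)]
```

This brute force spends 2^m time per coefficient; it is only meant to show that the noiseless version of the puzzle is easy.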
18-19. Typical Setting

Alicia Machado sends: "Dont vote for Trump"
Channel corrupts the message...
Bobby Jindal receives: "Dude vote for Trump"

We want to encode messages with redundancy to make them resilient to errors.
20-25. Reed-Muller Codes: RM(m, r)

Message: A polynomial f ∈ F_2[x1, ..., xm] of degree at most r.
Encoding: The evaluation of f on all points in {0,1}^m.

f(x1, x2, x3, x4) = x1x2 + x3x4  →  0 0 0 1 0 0 0 1 0 0 0 1 1 1 1 0
(coordinates indexed by the points 0000, 0001, ..., 1111)

A linear code, with
▶ Block Length: 2^m =: n
▶ Distance: 2^{m−r} (lightest codeword: x1x2···xr)
▶ Dimension: C(m,0) + C(m,1) + ··· + C(m,r) =: C(m, ≤r)
▶ Rate: dimension/block length = C(m, ≤r)/2^m

(Here C(m, ≤r) denotes the number of monomials of degree at most r, i.e., the sum of binomial coefficients above.)
Generator Matrix: evaluation matrix of deg ≤ r monomials:
v ∈ m
2
M
:= E(m, r)M(v)
27. Reed-Muller Codes: RM(m, r)
Generator Matrix: evaluation matrix of deg ≤ r monomials:
v ∈ m
2
M
:= E(m, r)M(v)
(Every codeword is spanned by the rows.)
28. Reed-Muller Codes: RM(m, r)
Generator Matrix: evaluation matrix of deg ≤ r monomials:
v ∈ m
2
M
:= E(m, r)M(v)
(Every codeword is spanned by the rows.)
Also called the inclusion matrix
(M(v) = 1 if and only if “M ⊂ v”).
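As a concrete illustration of these definitions, the following Python sketch (ours, not from the slides' source) builds E(m, r) as the inclusion matrix and reproduces the encoding of f = x1x2 + x3x4 shown above; the only assumption is that coordinates are listed in the lexicographic order 0000, 0001, ..., 1111 with the leftmost bit being x1.

```python
from itertools import combinations, product

def eval_matrix(m, r):
    """E(m, r): rows indexed by monomials (subsets of variables) of degree <= r,
    columns by points v in {0,1}^m, entry = the monomial evaluated at v."""
    points = list(product((0, 1), repeat=m))
    monomials = [S for k in range(r + 1) for S in combinations(range(m), k)]
    E = [[int(all(v[i] for i in S)) for v in points] for S in monomials]
    return monomials, points, E

monomials, points, E = eval_matrix(4, 2)
print(len(points), len(monomials))          # block length 16, dimension C(4, <=2) = 11

# Encode f = x1*x2 + x3*x4: add (over F_2) the rows of the monomials {x1,x2} and {x3,x4}.
row = dict(zip(monomials, E))
codeword = [a ^ b for a, b in zip(row[(0, 1)], row[(2, 3)])]
print("".join(map(str, codeword)))          # 0001000100011110, as on the slide
```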
31-33. Decoding RM(m, r)

Worst Case Errors: Up to d/2, where d = 2^{m−r} is the minimum distance (algorithm by [Reed54]).

List Decoding: the maximum radius within which only a constant number of codewords lie.
[Gopalan-Klivans-Zuckerman08, Bhowmick-Lovett15]: List decoding radius = d.
34-35. Two schools of study

▶ Hamming Model, a.k.a. worst-case errors
  ▶ Generally the model of interest for complexity theorists,
  ▶ Reed-Muller codes are not the best for these (far from optimal rate-distance tradeoffs).
▶ Shannon Model, a.k.a. random errors
  ▶ The standard model for coding theorists,
  ▶ Recent breakthroughs (e.g. Arıkan's polar codes),
▶ An ongoing research endeavor:
  How do Reed-Muller codes perform in the Shannon model?
36-42. Models for random corruptions (channels)

Binary Erasure Channel BEC(p): each bit is independently replaced by '?' with probability p.
[Figure: a word 0 0 1 1 0 ... with some positions erased to '?'.]

Binary Symmetric Channel BSC(p): each bit is independently flipped with probability p.
[Figure: the same word with some positions flipped.]

(almost) equivalent: a fixed number t ≈ pn of random errors
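A few lines of Python (our own toy illustration) make the two channels and the "t ≈ pn" remark concrete.

```python
import random

def bec(bits, p):
    """Binary erasure channel: each bit independently becomes '?' with probability p."""
    return ['?' if random.random() < p else b for b in bits]

def bsc(bits, p):
    """Binary symmetric channel: each bit is independently flipped with probability p."""
    return [b ^ 1 if random.random() < p else b for b in bits]

word = [0, 0, 1, 1, 0, 1, 0, 1]
print(bec(word, 0.3))
print(bsc(word, 0.3))
# Over n transmitted bits, about p*n positions are affected, which is why the channel is
# (almost) equivalent to planting a fixed number t ~ p*n of random corruptions.
```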
44-50. Channel Capacity

Question: Given a channel, what is the best rate we can hope for?

If X^n is transmitted through the channel and received as Y^n, how many bits of information about X^n do we get from Y^n?

For BEC(p) (each bit independently replaced by '?' with probability p): intuitively, (1 − p)n.

For BSC(p) (each bit independently flipped with probability p): intuitively, (1 − H(p))n, since C(n, pn) ≈ 2^{H(p)·n}.

[Shannon48] The maximum rate that enables decoding (w.h.p.) is:
1 − p for BEC(p),
1 − H(p) for BSC(p).

Codes achieving this bound are called capacity achieving.
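A quick numerical check of the two capacity formulas (our own arithmetic, not from the slides):

```python
from math import log2

def binary_entropy(p):
    """H(p) in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

for p in (0.1, 0.25, 0.49):
    print(f"p = {p}:  BEC capacity = {1 - p:.3f},  BSC capacity = {1 - binary_entropy(p):.5f}")
# At p = 0.49 the BSC capacity is tiny but still positive, which is what makes the opening
# puzzle (49% of the bits flipped at random) meaningful at all.
```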
52-53. Motivating questions for this talk

How well do Reed-Muller codes perform in the Shannon model?
In BEC(p)? In BSC(p)?
Are they as good as polar codes?
60-62. Dual of a code

Any linear space C can be specified by a generating basis, or as the solution set of a system of linear constraints.

C⊥ = {u : ⟨v, u⟩ = 0 for every v ∈ C}

Parity Check Matrix (a basis for C⊥, stacked as rows):
PCM · v = 0 ⟺ v ∈ C
63-71. Linear codes and erasures

[Figure: a received word 0 0 1 1 0 with three positions erased to '?'.]

Question: When can we decode from a pattern of erasures?

Two codewords are confusable under an erasure pattern exactly when their difference is a codeword supported entirely on the erased positions. So the pattern is decodable if and only if no non-zero codeword is supported on the erasures. In particular, decodability depends only on the erasure pattern, not on the transmitted codeword.

Observation: A pattern of erasures is decodable if and only if the corresponding columns of the Parity Check Matrix are linearly independent.

In order for a code to be good for BEC(p), the Parity Check Matrix of the code must be "robustly high-rank".
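The observation above is easy to turn into a check. Below is a small Python sketch (our own illustration, with the [3,1] repetition code as a made-up example) that tests whether an erasure pattern is decodable by computing the F_2 rank of the erased columns of the parity check matrix.

```python
def f2_rank(vectors):
    """Rank over F_2 of a list of 0/1 vectors."""
    pivots = {}                                    # leading-bit position -> reduced vector
    for vec in vectors:
        v = sum(bit << j for j, bit in enumerate(vec))
        while v:
            h = v.bit_length() - 1
            if h not in pivots:
                pivots[h] = v
                break
            v ^= pivots[h]
    return len(pivots)

def erasures_decodable(parity_check, erased_positions):
    """True iff the erased columns of the parity check matrix are linearly independent."""
    columns = [[row[j] for row in parity_check] for j in erased_positions]
    return f2_rank(columns) == len(erased_positions)

# The [3,1] repetition code {000, 111} has parity checks x1 + x2 = 0 and x2 + x3 = 0.
H = [[1, 1, 0],
     [0, 1, 1]]
print(erasures_decodable(H, [0, 1]))     # True: any two erased bits can be filled back in
print(erasures_decodable(H, [0, 1, 2]))  # False: erasing everything cannot be decoded
```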
72-74. Reed-Muller codes under erasures

Cool Fact
The dual of RM(m, r) is RM(m, r′) where r′ = m − r − 1.

Hence, the Parity Check Matrix of RM(m, r) is the generator matrix of RM(m, r′).
[Figure: the matrix E(m, r′), with rows indexed by monomials M of degree ≤ r′ (subsets of [m] of size ≤ r′), columns indexed by points v ∈ {0,1}^m, and entry M(v).]
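The "Cool Fact" can be verified by brute force for small parameters. The sketch below (our own check, with m = 4 and r = 1 chosen arbitrarily) confirms that every row of E(m, r) is orthogonal over F_2 to every row of E(m, m − r − 1), and that the two dimensions add up to 2^m.

```python
from itertools import combinations, product

def eval_matrix(m, r):
    """Rows = evaluation vectors of all monomials of degree <= r on {0,1}^m."""
    points = list(product((0, 1), repeat=m))
    monomials = [S for k in range(r + 1) for S in combinations(range(m), k)]
    return [[int(all(v[i] for i in S)) for v in points] for S in monomials]

m, r = 4, 1
E_low, E_dual = eval_matrix(m, r), eval_matrix(m, m - r - 1)
orthogonal = all(sum(a * b for a, b in zip(u, w)) % 2 == 0 for u in E_low for w in E_dual)
print(orthogonal, len(E_low) + len(E_dual) == 2 ** m)   # True True
```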
76-78. Reed-Muller codes under erasures

Question: Let R = C(m, ≤r′). Suppose you pick 0.99·R of the 2^m columns of E(m, r′) at random. Are they linearly independent with high probability?

[Figure: the range of r′, from 0 to m. Regimes where the answer is known to be yes: o(√(m/log m)) and o(m) [ASW-15], O(√m) [KMSU+KP-16]; the remaining regimes are OPEN.]
79-84. From erasures to errors

Theorem [ASW]: Any pattern correctable from erasures in RM(m, m − r − 1) is correctable from errors in RM(m, m − 2r − 2).

Remark: If r falls in the green zone of the regimes figure above, then RM(m, m − r − 1) can correct ≈ C(m, ≤r) random erasures.

Corollary #1 (high-rate): Decodable from (1 − o(1))·C(m, ≤r) random errors in RM(m, m − 2r) if r = o(√(m/log m)).
(The minimum distance of RM(m, m − 2r) is only 2^{2r}.)

(If r = m/2 − o(√m), then m − 2r − 2 = o(√m).)
Corollary #2 (low-rate): Decodable from (1/2 − o(1))·2^m random errors in RM(m, o(√m)).
(The minimum distance of RM(m, √m) is 2^{m−√m}.)

[S-Shpilka-Volk]: Efficient decoding from errors.
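To see how far beyond the minimum distance Corollary #1 goes, here is a back-of-the-envelope comparison (our own arithmetic, with m = 100 and r = 5 picked only for illustration): half the minimum distance versus the number of random errors handled.

```python
from math import comb

m, r = 100, 5
random_errors = sum(comb(m, k) for k in range(r + 1))   # ~ (1 - o(1)) * C(m, <= r)
half_min_distance = 2 ** (2 * r) // 2                   # unique-decoding radius from d = 2^(2r)
print(random_errors, half_min_distance)                 # 79375496 versus 512
```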
87. What we want to prove

Theorem [S-Shpilka-Volk]
There exists an efficient algorithm with the following guarantee:
Given a corrupted codeword w = v + err_S of RM(m, m − 2r − 2), if S happens to be a correctable erasure pattern in RM(m, m − r − 1), then the algorithm correctly decodes v from w.
88-94. What we have access to

Received word is w := v + err_S for some v ∈ RM(m, m − 2r − 2) and S = {u1, ..., ut}.

The Parity Check Matrix for RM(m, m − 2r − 2) is the generator matrix for RM(m, 2r + 1), so E(m, 2r + 1)·v = 0 and hence

E(m, 2r + 1)·w = E(m, 2r + 1)·err_S, the vector whose M-th entry is M(u1) + ··· + M(ut).

So we have access to ∑_{i∈S} M(ui) for every monomial M with deg(M) ≤ 2r + 1, and therefore, by linearity, to ∑_{i∈S} f(ui) for every polynomial f with deg(f) ≤ 2r + 1.
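The following short Python check (our own illustration, with made-up small parameters m = 4, r = 0 and two planted errors) confirms the point of this slide: multiplying the received word by E(m, 2r + 1) annihilates the codeword part, leaving exactly ∑_{u∈S} M(u) for each monomial M.

```python
from itertools import combinations, product

m, r = 4, 0                                   # codewords live in RM(4, m - 2r - 2) = RM(4, 2)
points = list(product((0, 1), repeat=m))
monomials = [S for k in range(2 * r + 2) for S in combinations(range(m), k)]   # deg <= 2r + 1

def ev(S, v):
    """Evaluate the monomial prod_{i in S} x_i at the point v."""
    return int(all(v[i] for i in S))

codeword = [(v[0] & v[1]) ^ (v[2] & v[3]) for v in points]    # x1*x2 + x3*x4, degree 2
S = [3, 12]                                                   # the planted error positions
w = [bit ^ (1 if j in S else 0) for j, bit in enumerate(codeword)]

syndrome = [sum(ev(M, v) * w[j] for j, v in enumerate(points)) % 2 for M in monomials]
direct   = [sum(ev(M, points[j]) for j in S) % 2 for M in monomials]
print(syndrome == direct)   # True: the syndrome only "sees" the error positions
```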
96-98. Erasure Correctable Patterns

A pattern of erasures is correctable if and only if the corresponding columns in the parity check matrix are linearly independent.

The parity check matrix for RM(m, m − r − 1) is the generator matrix for RM(m, r).

Corollary
A set of points S = {u1, ..., ut} is erasure-correctable in RM(m, m − r − 1) if and only if u1^r, ..., ut^r are linearly independent,
where ui^r is the vector of all monomials of degree ≤ r evaluated at ui.
100-102. The Decoding Algorithm

Input: Received word w (= v + err_S)

Lemma
Assume that S is a pattern of erasures correctable in RM(m, m − r − 1). For any arbitrary u ∈ {0,1}^m, we have u ∈ S if and only if there exists a polynomial g with deg(g) ≤ r such that
∑_{ui∈S} (f · g)(ui) = f(u)  for every f with deg(f) ≤ r + 1.

This condition can be checked by solving a system of linear equations (the unknowns are the coefficients of g, and every sum on the left is computable since deg(f · g) ≤ 2r + 1). The algorithm is then straightforward.
103-105. Proof of Lemma

Claim 1
Let u1, ..., ut ∈ {0,1}^m be such that u1^r, ..., ut^r are linearly independent. Then, for each i ∈ [t], there is a polynomial hi such that:
▶ deg(hi) ≤ r,
▶ hi(uj) = 1 if i = j, and 0 otherwise.

Proof.
Stack u1^r, ..., ut^r as the columns of a matrix (rows indexed by monomials of degree ≤ r). Since the columns are linearly independent, row operations bring this matrix to the form [I on top of 0]. Each row operation is a linear combination of monomial rows, i.e., a polynomial of degree ≤ r; the polynomial producing the i-th row of the identity block is the desired hi.
Claim 1
Let u1,..., ut ∈ {0,1}m
such that ur
1 ,..., ur
t are linearly independent.
Then, for each i ∈ [t], there is a polynomial hi such that:
▶ deg(hi ) ≤ r,
▶ hi (uj ) = 1 if and only if i = j, and 0 otherwise.
107. Proof of Lemma
Claim 1
Let u1,..., ut ∈ {0,1}m
such that ur
1 ,..., ur
t are linearly independent.
Then, for each i ∈ [t], there is a polynomial hi such that:
▶ deg(hi ) ≤ r,
▶ hi (uj ) = 1 if and only if i = j, and 0 otherwise.
Main Lemma (⇒): If u ∈ S, then there is a polynomial g with
deg(g) ≤ r such that
∑
ui ∈S
(f · g)(ui ) = f (u) for every f with deg(f ) ≤ r + 1.
108. Proof of Lemma
Claim 1
Let u1,..., ut ∈ {0,1}m
such that ur
1 ,..., ur
t are linearly independent.
Then, for each i ∈ [t], there is a polynomial hi such that:
▶ deg(hi ) ≤ r,
▶ hi (uj ) = 1 if and only if i = j, and 0 otherwise.
Main Lemma (⇒): If u ∈ S, then there is a polynomial g with
deg(g) ≤ r such that
∑
ui ∈S
(f · g)(ui ) = f (u) for every f with deg(f ) ≤ r + 1.
If u = ui , then g = hi satisfies the conditions.
109-122. Proof of Lemma

Claim 2
If u ∉ {u1, ..., ut}, then there is a polynomial f such that deg(f) ≤ r + 1 and f(u) = 1, but f(ui) = 0 for all i = 1, ..., t.

Main Lemma (⇐): If u ∉ S, then there is no polynomial g with deg(g) ≤ r such that
∑_{ui∈S} (f · g)(ui) = f(u)  for every f with deg(f) ≤ r + 1.
Indeed, for the f given by Claim 2 the left-hand side is 0 for every g (since f vanishes on all of S), while the right-hand side is f(u) = 1.

Proof of Claim 2.
Case 1: Suppose hi(u) = 1 for some i. Since u ≠ ui, we have u(ℓ) ≠ (ui)(ℓ) for some coordinate ℓ. Then f(x) = hi(x) · (xℓ − (ui)(ℓ)) works: it has degree ≤ r + 1, it vanishes at every uj (for j ≠ i because hi does, and at ui because the second factor does), and f(u) = 1.

Case 2: Suppose hi(u) = 0 for all i. Look at ∑_i hi(x): it is 0 at u and 1 at every uj. So f(x) = 1 − ∑_i hi(x) works.

Therefore, if u1^r, ..., ut^r are linearly independent, then there exists a polynomial g with deg(g) ≤ r satisfying
∑_{ui∈S} (f · g)(ui) = f(u)  for every polynomial f with deg(f) ≤ r + 1,
if and only if u = ui for some i.
124-126. Decoding Algorithm

▶ Input: Received word w (= v + err_S)
▶ Compute E(m, 2r + 1)·w to get the value of ∑_{ui∈S} f(ui) for every polynomial f with deg(f) ≤ 2r + 1.
▶ For each u ∈ {0,1}^m, solve for a polynomial g with deg(g) ≤ r satisfying
  ∑_{ui∈S} (f · g)(ui) = f(u)  for every f satisfying deg(f) ≤ r + 1.
  (This is a linear system in the coefficients of g, since deg(f · g) ≤ 2r + 1 and all such sums are known.) If there is a solution, then add u to Corruptions.
▶ Flip the coordinates in Corruptions and interpolate.
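Below is a minimal brute-force Python sketch of this algorithm (our own code, not the authors'; the parameter choices and the tiny demo at the end are made up, and everything iterates over all of {0,1}^m, so it is only sensible for very small m).

```python
from itertools import combinations, product

def monomials(m, d):
    """All multilinear monomials of degree <= d, each as a tuple of variable indices."""
    return [S for k in range(d + 1) for S in combinations(range(m), k)]

def ev(S, v):
    """Evaluate the monomial prod_{i in S} x_i at the point v in {0,1}^m."""
    return int(all(v[i] for i in S))

def f2_solve(A, b):
    """Solve A x = b over F_2; return one 0/1 solution or None if inconsistent."""
    n = len(A[0])
    rows = [sum(bit << j for j, bit in enumerate(row)) | (rhs << n) for row, rhs in zip(A, b)]
    pivots = {}
    for v in rows:
        for c in sorted(pivots):
            if (v >> c) & 1:
                v ^= pivots[c]
        if v & ((1 << n) - 1):
            pivots[min(j for j in range(n) if (v >> j) & 1)] = v
        elif v >> n:
            return None
    x = [0] * n
    for c in sorted(pivots, reverse=True):
        v = pivots[c]
        x[c] = ((v >> n) ^ sum((v >> j) & x[j] for j in range(c + 1, n))) & 1
    return x

def decode(w, m, r):
    """Return the error positions of a corrupted codeword w of RM(m, m - 2r - 2), assuming the
    error pattern is erasure-correctable in RM(m, m - r - 1). w is indexed by the points of
    {0,1}^m in lexicographic order."""
    points = list(product((0, 1), repeat=m))
    # Step 1: the syndrome gives sum_{u in S} M(u) for every monomial M of degree <= 2r + 1.
    sums = {M: sum(ev(M, points[j]) * w[j] for j in range(len(points))) % 2
            for M in monomials(m, 2 * r + 1)}
    fs, gs = monomials(m, r + 1), monomials(m, r)
    # The unknowns are the coefficients of g; (f*g)(u) is the monomial on the union of the
    # variable sets of f and g, of degree at most 2r + 1, so every entry below is known.
    A = [[sums[tuple(sorted(set(f) | set(g)))] for g in gs] for f in fs]
    corruptions = []
    for j, u in enumerate(points):
        b = [ev(f, u) for f in fs]
        if f2_solve(A, b) is not None:      # a suitable g exists  <=>  u is a corrupted position
            corruptions.append(j)
    # Flipping these coordinates leaves the evaluation table of the original polynomial,
    # which can then be interpolated as in the noiseless puzzle.
    return corruptions

# Tiny demo: an RM(5, 1) codeword (the evaluations of x1 + x3) with three planted errors whose
# degree-<=1 evaluation vectors are linearly independent, i.e. erasure-correctable in RM(5, 3).
m, r = 5, 1
points = list(product((0, 1), repeat=m))
codeword = [v[0] ^ v[2] for v in points]
errors = [0, 1, 2]
w = [bit ^ (1 if j in errors else 0) for j, bit in enumerate(codeword)]
print(decode(w, m, r) == errors)   # True
```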
129-132. ICYMI

Remark: If r falls in the green zone of the regimes figure, then RM(m, m − r − 1) can correct ≈ C(m, ≤r) random erasures.

Theorem [S-Shpilka-Volk]: Any pattern that is erasure-correctable in RM(m, m − r − 1) is efficiently error-correctable in RM(m, m − 2r − 2).

Corollary #1 (high-rate): Efficiently decodable from (1 − o(1))·C(m, ≤r) random errors in RM(m, m − 2r) if r = o(√(m/log m)).
(The minimum distance of RM(m, m − 2r) is 2^{2r}.)

Corollary #2 (low-rate): Efficiently decodable from (1/2 − o(1))·2^m random errors in RM(m, o(√m)).
(The minimum distance of RM(m, √m) is 2^{m−√m}.)
133-136. The obvious open question

Question: Let R = C(m, ≤r). Suppose you pick 0.99·R of the 2^m columns of E(m, r) at random. Are they linearly independent with high probability?

[Figure: the range of r, from 0 to m. Regimes where the answer is known to be yes: o(√(m/log m)) and o(m) [ASW-15], O(√m) [KMSU+KP-16]; the remaining regimes are OPEN.]