This document summarizes a paper that proposes using Markov Chain Monte Carlo (MCMC) methods to estimate parameters for Markov random field (MRF) models. Specifically:
- MRF models are popular for pattern analysis but estimating their parameters is important and challenging. Existing methods like maximum likelihood and least squares fitting have drawbacks.
- The paper proposes using MCMC to estimate MRF parameters by deriving the posterior distribution and using the Metropolis-Hastings algorithm to sample from it. Pseudo-likelihood is used instead of the true likelihood to make computations feasible.
- Experiments apply the MCMC method to estimate parameters for textures generated from MRF models with known parameters. Visual inspection of the resulting Markov chains is used to judge whether they have converged.
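The Metropolis-Hastings step mentioned above can be sketched in a few lines. This is a minimal illustration on a toy one-dimensional log-density, not the paper's MRF pseudo-posterior; all names and settings are made up for the example.

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: sample from a density known only
    up to a normalizing constant, via its log-density `log_post`."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)           # symmetric proposal
        log_alpha = log_post(proposal) - log_post(x)  # acceptance log-ratio
        if math.log(rng.random()) < log_alpha:
            x = proposal                              # accept; else stay put
        samples.append(x)
    return samples

# Toy target: standard normal log-density (up to an additive constant).
samples = metropolis_hastings(lambda t: -0.5 * t * t, x0=0.0, n_steps=20000)
mean = sum(samples) / len(samples)
```

For the paper's setting, `log_post` would instead be the log pseudo-likelihood of the MRF parameters given the observed texture.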
Demand-Driven Context-Sensitive Alias Analysis for Java - Dacong (Tony) Yan
This document describes a demand-driven context-sensitive alias analysis for Java. It introduces a symbolic points-to graph representation that enables efficient demand-driven analysis without computing full points-to sets. The analysis uses method summaries to improve precision and reduce redundancy. Experimental results show the analysis has higher precision than a state-of-the-art points-to analysis and summaries provide up to 24% speedup.
Finite-difference modeling, accuracy, and boundary conditions - Arthur Weglein
This short report gives a brief review of the finite-difference modeling method used in MOSRP and its boundary conditions, as preparation for the Green's theorem RTM. The first part presents the finite-difference formulae used; the second part describes the implemented boundary conditions. The last part uses two examples to point out how the accuracy of the source fields affects the modeling results.
Rolle's theorem states that if a function is continuous on a closed interval and differentiable on the open interval with equal values at the endpoints, then the derivative is 0 for at least one value in the interval. The mean value theorems - Lagrange's and Cauchy's - generalize this idea, relating the average rate of change over an interval to the instantaneous rate at a point within the interval. Examples are provided to illustrate the theorems and exceptions that can occur when their conditions are not fully met.
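For reference, the three theorems summarized above can be stated compactly, assuming f and g are continuous on [a, b] and differentiable on (a, b):

```latex
% Rolle's theorem (additionally f(a) = f(b)):
\exists\, c \in (a,b):\quad f'(c) = 0.
% Lagrange's mean value theorem:
\exists\, c \in (a,b):\quad f'(c) = \frac{f(b)-f(a)}{b-a}.
% Cauchy's mean value theorem (with g'(x) \neq 0 on (a,b)):
\exists\, c \in (a,b):\quad \frac{f'(c)}{g'(c)} = \frac{f(b)-f(a)}{g(b)-g(a)}.
```

Rolle's theorem is the special case of Lagrange's with f(a) = f(b), and Lagrange's is the special case of Cauchy's with g(x) = x.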
GROUPOIDS, LOCAL SYSTEMS AND DIFFERENTIAL EQUATIONS - Heinrich Hartmann
This document discusses groupoids, local systems, and their relationships to differential equations on manifolds. Some key points:
1) Groupoids generalize groups by allowing multiple objects and isomorphisms between them. Representations of groupoids correspond to local systems on manifolds.
2) Local systems on a manifold X are sheaves of vector spaces that are locally isomorphic to a constant sheaf. They correspond to representations of the fundamental groupoid of X.
3) Vector bundles with connections on a Riemann surface B are equivalent to local systems on B. Global sections of bundles generate differential equations, whose solutions can be studied via the bundle's local system or groupoid representation.
The document discusses conditional random fields (CRFs), which are probabilistic models used for structured prediction problems. CRFs define a conditional probability distribution p(y|x) via an exponential family form using feature functions. Maximum likelihood, maximum entropy, and MAP estimation techniques can be used to learn the parameters of a CRF by minimizing the negative conditional log-likelihood of labeled training data. Gradient descent or other numerical optimization methods are then required to perform the actual minimization. CRFs provide a principled probabilistic approach to learning the relationships between inputs x and structured outputs y.
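As an illustration of minimizing the negative conditional log-likelihood by gradient descent, here is a deliberately degenerate CRF with a single binary output and one feature function f(x, y) = y * x, which reduces to logistic regression; the data and learning rate are made up for the sketch.

```python
import math

def crf_nll_and_grad(w, data):
    """Negative conditional log-likelihood and its gradient for a degenerate
    CRF: one binary output y in {0, 1}, one feature f(x, y) = y * x."""
    nll, grad = 0.0, 0.0
    for x, y in data:
        z = 1.0 + math.exp(w * x)        # partition function over y in {0, 1}
        nll += math.log(z) - w * x * y   # -log p(y | x)
        grad += x * math.exp(w * x) / z - x * y  # E[f] minus observed f
    return nll, grad

# Plain gradient descent on the NLL (toy, separable data).
data = [(1.0, 1), (2.0, 1), (-1.0, 0), (-2.0, 0)]
w = 0.0
for _ in range(200):
    nll, g = crf_nll_and_grad(w, data)
    w -= 0.1 * g
```

Real CRFs have structured outputs, so the partition function runs over exponentially many labelings and is computed with dynamic programming rather than a two-term sum.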
This document contains exercises related to limits and continuity. It provides examples of functions and asks the reader to (a) evaluate the functions at given x-values to determine apparent behavior, and (b) find the indicated limit. It also contains exercises asking the reader to find vertical asymptotes and sketch graphs of given functions.
The Power of Graphs in Immersive Communications - tonizza82
This document discusses graph signal processing and its applications in immersive communications. It begins with an introduction to graphs and how they can represent network-structured data. It then discusses how machine learning can be applied to graph-structured data through tasks like graph classification, node classification, and graph clustering. The document outlines challenges with 360-degree video streaming like delivering large volumes of data under low-delay constraints. It proposes that graph signal processing approaches may help address these challenges by accounting for both the data and relationships in the network.
Iterative deepening A* (IDA*) is an informed search algorithm similar to iterative deepening depth-first search, but it uses an f-limit instead of a depth limit. It expands nodes in best-first order up to the f-limit, which is increased each iteration to the minimum f-value of any node pruned in the previous iteration. IDA* is complete and optimal and requires less space than A*, but it can expand many more nodes on problems where f-values are mostly unique, since each iteration repeats the work of earlier ones. Local search methods like hill climbing iteratively improve the current state by moving to a neighboring state with better value until no improvement is possible.
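A compact sketch of IDA* as summarized above; the graph and heuristic values are toy data chosen for illustration.

```python
import math

def ida_star(start, goal, neighbors, h):
    """Iterative deepening A*: depth-first search bounded by an f-limit,
    which grows to the smallest pruned f-value after each iteration."""
    def dfs(node, g, bound, path):
        f = g + h(node)
        if f > bound:
            return f, None              # pruned: report f for the next bound
        if node == goal:
            return f, list(path)
        minimum = math.inf
        for nxt, cost in neighbors(node):
            if nxt in path:
                continue                # avoid cycles on the current path
            path.append(nxt)
            t, found = dfs(nxt, g + cost, bound, path)
            path.pop()
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None

    bound = h(start)
    while True:
        bound, found = dfs(start, 0, bound, [start])
        if found is not None:
            return found
        if bound == math.inf:
            return None                 # exhausted: no solution exists

# Toy graph: a chain A-B-C-D with unit costs; h = remaining steps (admissible).
graph = {"A": [("B", 1)], "B": [("C", 1)], "C": [("D", 1)], "D": []}
h = {"A": 3, "B": 2, "C": 1, "D": 0}
path = ida_star("A", "D", lambda n: graph[n], lambda n: h[n])
```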
This document provides an overview of the challenges and steps involved in 3D visual reconstruction from stereo images. It discusses:
1) Camera modeling and calibration to determine intrinsic and extrinsic parameters
2) Epipolar geometry which defines the relationship between corresponding 2D image points from different camera views
3) Computing feature correspondences between images and triangulating 3D points
4) Building a physical stereo testbed and presenting some initial reconstruction results.
A function is a rule that maps each element in one set (the domain) to exactly one element in another set (the co-domain or range). Diagrams 1-4 show examples of mappings, with Diagrams 1, 2, and 4 representing functions and Diagram 3 not representing a function because element a in the domain is not mapped to anything in the co-domain. Functions can also be written as sets of ordered pairs, with the property that no two ordered pairs have the same first element but different second elements.
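The ordered-pair criterion can be checked mechanically: every domain element must appear exactly once as a first coordinate. The element names below only loosely mirror the diagrams described.

```python
def is_function(pairs, domain):
    """A set of ordered pairs is a function on `domain` iff each domain
    element appears exactly once as a first coordinate."""
    firsts = [a for a, _ in pairs]
    return set(firsts) == set(domain) and len(firsts) == len(set(firsts))

ok = is_function({("a", 1), ("b", 2)}, {"a", "b"})    # a function
bad = is_function({("b", 2)}, {"a", "b"})             # "a" is unmapped
multi = is_function({("a", 1), ("a", 2)}, {"a"})      # "a" mapped twice
```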
Light Scattering by Nonspherical Particles - avinokurov
This document discusses light scattering by non-spherical particles. It compares three methods for solving this problem: the separation of variables method, the extended boundary condition method, and the generalized point matching method. All three methods use the same field expansions, but differ in important details of how the boundary conditions are applied. The accuracy of the extended boundary condition method depends on the particle shape, working well for some shapes but not others. Near-field behavior and convergence are more complex for non-spherical particles compared to spheres. The concept of the Rayleigh hypothesis is also discussed.
The document describes an algorithm for enumerating 2-level polytopes in fixed dimensions. A 2-level polytope has vertices that are contained in two parallel hyperplanes. The algorithm takes as input a list of (d-1)-dimensional 2-level polytopes and extends each one to d dimensions, computing the closed sets of vertices to obtain new d-dimensional 2-level polytopes. Experimental results show the numbers of 2-level polytopes enumerated for dimensions up to 6. Open questions ask for a more output-sensitive enumeration algorithm and whether the number of d-dimensional 2-level polytopes is exponential in d.
The document summarizes informed search strategies, including best-first search algorithms like greedy search, uniform-cost search (UCS), and A* search. It provides an overview of how heuristics can be used to guide search toward more promising solutions. A* search is described as using both path cost g(n) and heuristic estimate h(n) to determine the best order of node expansion. The properties of A*, including admissibility, completeness, and optimality, are proven assuming h(n) underestimates cost to the goal. Performance depends on heuristic accuracy, with exponential growth possible if errors are large.
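The f(n) = g(n) + h(n) ordering can be sketched with a priority queue; the graph and heuristic below are toy values chosen so that h never overestimates the remaining cost.

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A*: expand nodes in order of f(n) = g(n) + h(n). With an admissible
    h (never overestimating), the first expansion of the goal is optimal."""
    open_heap = [(h(start), 0, start, [start])]   # entries: (f, g, node, path)
    best_g = {start: 0}
    while open_heap:
        f, g, node, path = heapq.heappop(open_heap)
        if node == goal:
            return g, path
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2      # better route to nxt found
                heapq.heappush(open_heap, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None

# Two routes from S to G; the cheaper one (S-B-G, cost 5) should win.
graph = {"S": [("A", 1), ("B", 4)], "A": [("G", 5)], "B": [("G", 1)], "G": []}
h = {"S": 2, "A": 5, "B": 1, "G": 0}
cost, path = a_star("S", "G", lambda n: graph[n], lambda n: h[n])
```

Setting h to zero everywhere turns this into uniform-cost search, which matches the relationship between UCS and A* described above.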
The document contains 7 questions related to probability and statistics. Question 1 asks about computing a 95% confidence interval for a population mean and reducing the error in estimating the population mean. Question 2 asks about the number of amplifiers needed to achieve 95% reliability for a concert lasting 2 hours. Question 3 asks about the probability of a system meeting certain tolerance limits on diameters and the probability of none among randomly selected systems violating tolerances.
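For the flavor of Question 1, a minimal known-sigma 95% interval can be sketched as follows; the exam's actual numbers are not given in the excerpt, so the sample and sigma below are invented.

```python
import math

def confidence_interval_95(xs, sigma):
    """Two-sided 95% CI for a population mean with known sigma:
    x_bar +/- z * sigma / sqrt(n), with z = 1.96."""
    n = len(xs)
    mean = sum(xs) / n
    half = 1.96 * sigma / math.sqrt(n)
    return mean - half, mean + half

# Since the half-width shrinks as 1/sqrt(n), halving the estimation
# error requires roughly quadrupling the sample size.
lo, hi = confidence_interval_95([10.0, 12.0, 11.0, 9.0], sigma=2.0)
```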
The computation of automorphic forms for a group Gamma is a major problem in number theory. The only known way to approach the higher-rank cases is to compute the action of Hecke operators on the cohomology. We therefore consider the explicit computation of the cohomology using cellular complexes. We then explain how the rational elements can be made to act on the complex when it originates from perfect forms, and we illustrate the results obtained for the symplectic group Sp4(Z).
This document provides an introduction to concepts in differential geometry including manifolds, tangent spaces, vector fields, differential forms, and operations on differential forms such as the exterior product and integration. It outlines key definitions and properties for differential geometry, Riemannian geometry, and applications to probability and statistics. The document is divided into three main sections on differential geometry, Riemannian geometry, and settings without Riemannian geometry.
This document discusses the implementation of digital filters in fixed-point arithmetic on embedded systems. It presents the need for methodology and tools to design fixed-point embedded filter systems. The key steps are: 1) choosing a filter algorithm, 2) rounding coefficients to fixed-point, and 3) implementing the algorithm. Optimal implementations minimize degradation from quantization errors while meeting resource constraints. The document outlines a global flow from filter design to code generation and optimization.
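Step 2 above (rounding coefficients to fixed-point) can be illustrated with a small sketch; the tap values are hypothetical, and the error bound halves with each extra fractional bit.

```python
def quantize(coeffs, frac_bits):
    """Round filter coefficients to a fixed-point grid with `frac_bits`
    fractional bits (Q-format); return the quantized values and the
    worst-case coefficient error."""
    scale = 1 << frac_bits
    q = [round(c * scale) / scale for c in coeffs]
    err = max(abs(a - b) for a, b in zip(coeffs, q))
    return q, err

# A hypothetical 3-tap low-pass whose taps are exactly representable in Q8,
# and a second set that is not representable in Q3.
q8, e8 = quantize([0.25, 0.5, 0.25], 8)
q3, e3 = quantize([0.1, 0.3, 0.1], 3)
```

In a full design flow, the quantized coefficients would then be re-checked against the frequency-response specification, since the quantization error perturbs the filter's poles and zeros.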
This document is the final exam for ENGR 371 - Probability and Statistics given on April 29, 2010 at Concordia University. It contains 6 questions testing concepts like probability, confidence intervals, hypothesis testing, and distributions. Formulas relevant to the exam questions are also provided.
Parallel Evaluation of Multi-Semi-Joins - Jonny Daenen
Presentation given on VLDB 2016: 42nd International Conference on Very Large Data Bases.
Paper: http://dx.doi.org/10.14778/2977797.2977800
ArXiv: https://arxiv.org/abs/1605.05219
Poster: https://zenodo.org/record/61653 (doi 10.5281/zenodo.61653)
Gumbo Software: https://github.com/JonnyDaenen/Gumbo
Abstract
While services such as Amazon AWS make computing power abundantly available, adding more computing nodes can incur high costs in, for instance, pay-as-you-go plans while not always significantly improving the net running time (aka wall-clock time) of queries. In this work, we provide algorithms for parallel evaluation of SGF queries in MapReduce that optimize total time, while retaining low net time. Not only can SGF queries specify all semi-join reducers, but also more expressive queries involving disjunction and negation. Since SGF queries can be seen as Boolean combinations of (potentially nested) semi-joins, we introduce a novel multi-semi-join (MSJ) MapReduce operator that enables the evaluation of a set of semi-joins in one job. We use this operator to obtain parallel query plans for SGF queries that outvalue sequential plans w.r.t. net time and provide additional optimizations aimed at minimizing total time without severely affecting net time. Even though the latter optimizations are NP-hard, we present effective greedy algorithms. Our experiments, conducted using our own implementation Gumbo on top of Hadoop, confirm the usefulness of parallel query plans, and the effectiveness and scalability of our optimizations, all with a significant improvement over Pig and Hive.
The document discusses Venn diagrams and set operations. It provides examples of how to represent different set operations using Venn diagrams, such as (A ∪ B) ∩ C and (A ∩ C) ∪ (B ∩ C). It also discusses set notations and how to represent finite sets, intervals, and inequalities.
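The two expressions mentioned are in fact equal by distributivity of intersection over union, which is easy to check on concrete sets:

```python
# Distributivity: (A ∪ B) ∩ C = (A ∩ C) ∪ (B ∩ C), checked on small sets.
A, B, C = {1, 2, 3}, {3, 4, 5}, {2, 3, 4}
lhs = (A | B) & C        # (A ∪ B) ∩ C
rhs = (A & C) | (B & C)  # (A ∩ C) ∪ (B ∩ C)
```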
The Mean Value Theorem is the Most Important Theorem in Calculus. It allows us to relate information about the derivative of a function to information about the function itself.
This document presents a method for estimating the eigenvalues of a covariance matrix when there are few samples. It involves shifting the sampled eigenvalues toward the population values based on theoretical distributions, and balancing the energy across eigenvalues. This simple 3-matrix approach improves estimation and detection performance compared to using the sampled eigenvalues alone. Simulations and hyperspectral data experiments demonstrate the effectiveness of the method.
Lecture 21: Problem Reduction Search, AO* Search - Hema Kashyap
The AO* search algorithm is used to find optimal solutions for AND/OR search problems. It uses two arrays (OPEN and CLOSE) and a heuristic function h(n) to estimate the cost to reach the goal. The algorithm selects the most promising node from OPEN, expands it to find successors, and calculates their h(n) values, adding them to OPEN. It continues until the start node is marked as solved or unsolvable. AO* finds optimal solutions but can be inefficient for unsolvable problems compared to other algorithms.
11. Quadrature Radon transform for smoother tomographic reconstruction - Alexander Decker
This document discusses a technique called quadrature Radon transform for tomographic reconstruction. The quadrature Radon transform uses projections from two angles (θ and θ+π/2) rather than just one angle as in conventional Radon transform. This provides additional information that can yield smoother reconstructions. Two approaches are proposed - treating the two sets of projections as real and imaginary parts of a complex number, or averaging the individual back projections. Experimental results show the quadrature Radon transform produces numerically and visually better reconstructions compared to using projections from a single angle.
This document discusses macrocanonical models for texture synthesis. It begins by introducing the goal of texture synthesis and providing a brief history. It then describes the parametric question of combining randomness and structure in images. Specifically, it discusses maximizing entropy under geometric constraints. The document goes on to discuss links to statistical physics, defining microcanonical and macrocanonical models. It focuses on studying the macrocanonical model, describing how to find optimal parameters through gradient descent and how to sample from the model using Langevin dynamics. The document provides examples of texture synthesis and compares results to other methods.
Iterative deepening A* (IDA*) is an informed search algorithm similar to iterative deepening depth-first search but uses an f-limit instead of depth limit. It expands nodes in best-first order up to the f-limit. The f-limit is increased each iteration by the minimum f-value of any node pruned in the previous iteration. IDA* is complete, optimal, and requires less space than A* but can expand more nodes on problems where heuristic values are unique. Local search methods like hill climbing iteratively improve the current state by moving to a neighboring state with better value until no improvement is possible.
This document provides an overview of the challenges and steps involved in 3D visual reconstruction from stereo images. It discusses:
1) Camera modeling and calibration to determine intrinsic and extrinsic parameters
2) Epipolar geometry which defines the relationship between corresponding 2D image points from different camera views
3) Computing feature correspondences between images and triangulating 3D points
4) Building a physical stereo testbed and presenting some initial reconstruction results.
I am Christopher Hemmingway. I am a Computer Science Assignment Expert at programminghomeworkhelp.com. I hold a Master's in Computer Science, Princeton University, Princeton. I have been helping students with their homework for the past 10 years. I solve assignments related to Computer Science.
Visit programminghomeworkhelp.com or email support@programminghomeworkhelp.com.You can also call on +1 678 648 4277 for any assistance with Computer Science assignments.
A function is a rule that maps each element in one set (the domain) to exactly one element in another set (the co-domain or range). Diagrams 1-4 show examples of mappings, with Diagrams 1, 2, and 4 representing functions and Diagram 3 not representing a function because element a in the domain is not mapped to anything in the co-domain. Functions can also be written as sets of ordered pairs, with the property that no two ordered pairs have the same first element but different second elements.
I am Blake H. I am a Software Construction Assignment Expert at programminghomeworkhelp.com. I hold a PhD. in Programming, Curtin University, Australia. I have been helping students with their homework for the past 10 years. I solve assignments related to Software Construction.
Visit programminghomeworkhelp.com or email support@programminghomeworkhelp.com. You can also call on +1 678 648 4277 for any assistance with Software Construction Assignments.
Light Scattering by Nonspherical Particlesavinokurov
This document discusses light scattering by non-spherical particles. It compares three methods for solving this problem: the separation of variables method, the extended boundary condition method, and the generalized point matching method. All three methods use the same field expansions, but differ in important details of how the boundary conditions are applied. The accuracy of the extended boundary condition method depends on the particle shape, working well for some shapes but not others. Near-field behavior and convergence are more complex for non-spherical particles compared to spheres. The concept of the Rayleigh hypothesis is also discussed.
The document describes an algorithm for enumerating 2-level polytopes in fixed dimensions. A 2-level polytope has vertices that are contained in two parallel hyperplanes. The algorithm takes as input a list of (d-1)-dimensional 2-level polytopes and extends each one to d dimensions, computing the closed sets of vertices to obtain new d-dimensional 2-level polytopes. Experimental results show the numbers of 2-level polytopes enumerated for dimensions up to 6. Open questions ask for a more output-sensitive enumeration algorithm and whether the number of d-dimensional 2-level polytopes is exponential in d.
The document summarizes informed search strategies, including best-first search algorithms like greedy search, uniform-cost search (UCS), and A* search. It provides an overview of how heuristics can be used to guide search toward more promising solutions. A* search is described as using both path cost g(n) and heuristic estimate h(n) to determine the best order of node expansion. The properties of A*, including admissibility, completeness, and optimality, are proven assuming h(n) underestimates cost to the goal. Performance depends on heuristic accuracy, with exponential growth possible if errors are large.
The document contains 7 questions related to probability and statistics. Question 1 asks about computing a 95% confidence interval for a population mean and reducing the error in estimating the population mean. Question 2 asks about the number of amplifiers needed to achieve 95% reliability for a concert lasting 2 hours. Question 3 asks about the probability of a system meeting certain tolerance limits on diameters and the probability of none among randomly selected systems violating tolerances.
The computation of automorphic forms for a group Gamma is
a major problem in number theory. The only known way to approach the higher rank cases is by computing the action of Hecke operators on the cohomology.
Henceforth, we consider the explicit computation of the cohomology by using cellular complexes. We then explain how the rational elements can be made to act on the complex when it originate from perfect forms. We illustrate the results obtained for the symplectic Sp4(Z) group.
This document provides an introduction to concepts in differential geometry including manifolds, tangent spaces, vector fields, differential forms, and operations on differential forms such as the exterior product and integration. It outlines key definitions and properties for differential geometry, Riemannian geometry, and applications to probability and statistics. The document is divided into three main sections on differential geometry, Riemannian geometry, and settings without Riemannian geometry.
This document discusses the implementation of digital filters in fixed-point arithmetic on embedded systems. It presents the need for methodology and tools to design fixed-point embedded filter systems. The key steps are: 1) choosing a filter algorithm, 2) rounding coefficients to fixed-point, and 3) implementing the algorithm. Optimal implementations minimize degradation from quantization errors while meeting resource constraints. The document outlines a global flow from filter design to code generation and optimization.
This document is the final exam for ENGR 371 - Probability and Statistics given on April 29, 2010 at Concordia University. It contains 6 questions testing concepts like probability, confidence intervals, hypothesis testing, and distributions. Formulas relevant to the exam questions are also provided.
Parallel Evaluation of Multi-Semi-JoinsJonny Daenen
Presentation given on VLDB 2016: 42nd International Conference on Very Large Data Bases.
Paper: http://dx.doi.org/10.14778/2977797.2977800
ArXiv: https://arxiv.org/abs/1605.05219
Poster: https://zenodo.org/record/61653 (doi 10.5281/zenodo.61653)
Gumbo Software: https://github.com/JonnyDaenen/Gumbo
Abstract
While services such as Amazon AWS make computing power abundantly available, adding more computing nodes can incur high costs in, for instance, pay-as-you-go plans while not always significantly improving the net running time (aka wall-clock time) of queries. In this work, we provide algorithms for parallel evaluation of SGF queries in MapReduce that optimize total time, while retaining low net time. Not only can SGF queries specify all semi-join reducers, but also more expressive queries involving disjunction and negation. Since SGF queries can be seen as Boolean combinations of (potentially nested) semi-joins, we introduce a novel multi-semi-join (MSJ) MapReduce operator that enables the evaluation of a set of semi-joins in one job. We use this operator to obtain parallel query plans for SGF queries that outvalue sequential plans w.r.t. net time and provide additional optimizations aimed at minimizing total time without severely affecting net time. Even though the latter optimizations are NP-hard, we present effective greedy algorithms. Our experiments, conducted using our own implementation Gumbo on top of Hadoop, confirm the usefulness of parallel query plans, and the effectiveness and scalability of our optimizations, all with a significant improvement over Pig and Hive.
The document discusses Venn diagrams and set operations. It provides examples of how to represent different set operations using Venn diagrams, such as (A ∪ B) ∩ C and (A ∩ C) ∪ (B ∩ C). It also discusses set notations and how to represent finite sets, intervals, and inequalities.
The Mean Value Theorem is the Most Important Theorem in Calculus. It allows us to relate information about the derivative of a function to information about the function itself.
This document presents a method for estimating the eigenvalues of a covariance matrix when there are few samples. It involves shifting the sampled eigenvalues toward the population values based on theoretical distributions, and balancing the energy across eigenvalues. This simple 3-matrix approach improves estimation and detection performance compared to using the sampled eigenvalues alone. Simulations and hyperspectral data experiments demonstrate the effectiveness of the method.
Lecture 21 problem reduction search ao star searchHema Kashyap
The AO* search algorithm is used to find optimal solutions for AND/OR search problems. It uses two arrays (OPEN and CLOSE) and a heuristic function h(n) to estimate the cost to reach the goal. The algorithm selects the most promising node from OPEN, expands it to find successors, and calculates their h(n) values, adding them to OPEN. It continues until the start node is marked as solved or unsolvable. AO* finds optimal solutions but can be inefficient for unsolvable problems compared to other algorithms.
11.quadrature radon transform for smoother tomographic reconstructionAlexander Decker
This document discusses a technique called quadrature Radon transform for tomographic reconstruction. The quadrature Radon transform uses projections from two angles (θ and θ+π/2) rather than just one angle as in conventional Radon transform. This provides additional information that can yield smoother reconstructions. Two approaches are proposed - treating the two sets of projections as real and imaginary parts of a complex number, or averaging the individual back projections. Experimental results show the quadrature Radon transform produces numerically and visually better reconstructions compared to using projections from a single angle.
This document discusses macrocanonical models for texture synthesis. It begins by introducing the goal of texture synthesis and providing a brief history. It then describes the parametric question of combining randomness and structure in images. Specifically, it discusses maximizing entropy under geometric constraints. The document goes on to discuss links to statistical physics, defining microcanonical and macrocanonical models. It focuses on studying the macrocanonical model, describing how to find optimal parameters through gradient descent and how to sample from the model using Langevin dynamics. The document provides examples of texture synthesis and compares results to other methods.
The document provides an overview of concepts in functional analysis that will be covered in a math camp, including: function spaces, metric spaces, dense subsets, linear spaces, linear functionals, norms, Euclidean spaces, orthogonality, separable spaces, complete metric spaces, Hilbert spaces, and convex functions. Examples are given for each concept to illustrate the definitions.
EARTHSC 5642 (Spring 2015, Dr. von Frese) – jacksnathalie
EARTHSC 5642
Spring 2015, Dr. von Frese
Homework 5.2
A) Compute and plot the 17 gravity-effect (gz) values of the buried horizontal cylinder with radius R = 3 km, centered on the cylinder, at a station interval of 1 km.
B) Compute the Fast Fourier Transform (FFT) of the (gz) signal using the attached description of the FFT in the summary of the Jenkins and Watts (1968) procedure (see the attached Appendix A7.3). Some information about the assignment can be found below in the solution of Exercise 1.1, which I have already done. I have provided two solutions; the first was obtained using Matlab and the second using Excel, but they are both the same thing. (Note: the assignment is Homework 5.2 only.)
1) Partition the (gz) observations successively into halves and use an appropriate version of eq. (A7.3.5) in APPENDIX A7.3 from Jenkins and Watts (1968) to construct the transform. Show all details of the partitioning and calculations of the transform coefficients.
2) Describe in no more than a single, half-page paragraph how the FFT was taken.
3) List and plot the coefficients of the cosine and sine transforms for (gz).
4) List and plot the coefficients of the amplitude and phase spectra for (gz).
C) Inverse transform the FFT to estimate the original (gz) observations.
1) Compute the synthesis of the signal coefficients showing all calculations.
2) Describe in no more than a single, half-page paragraph how the IFFT was taken.
3) Plot up and analyze the differences between the FFT-estimates and original observations.
D) Determine the second horizontal derivative ∂²gz/∂d² from the FFT of (gz).
1) What are the transfer function coefficients that take the second horizontal derivative in the f-frequency domain?
2) Apply the second derivative coefficients to the FFT of (gz) and inverse transform and plot the results.
3) How do the results in D.2 compare with the analytical horizontal second-derivative gravity effects of the buried horizontal cylinder?
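For part D.1, differentiation is multiplication in the frequency domain: the transfer function for a second horizontal derivative is −(2πf)². A small numpy sketch with a made-up periodic test signal (not the assignment's gz profile):

```python
import numpy as np

# Hypothetical illustration of taking a second horizontal derivative in the
# frequency domain. The signal and grid here are made up; the transfer
# function for d^2/dx^2 is -(2*pi*f)^2.
n = 256
dx = 1.0 / n
x = np.arange(n) * dx
g = np.sin(2 * np.pi * 3 * x)          # periodic test signal

f = np.fft.fftfreq(n, d=dx)            # cycles per unit distance
G = np.fft.fft(g)
d2g = np.fft.ifft(-(2 * np.pi * f) ** 2 * G).real

analytic = -(2 * np.pi * 3) ** 2 * np.sin(2 * np.pi * 3 * x)
print(np.max(np.abs(d2g - analytic)))  # small (machine-noise level)
```

For a pure sinusoid at an integer grid frequency the frequency-domain derivative is exact, which makes this a convenient self-check before applying the same coefficients to the (gz) spectrum.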
Exercise 1.1
You have taken a job at the Johnson Space Flight Center in Houston (TX). In the desk that you were assigned, you find papers with a list of raw travel-time data for the free falls of a feather and a rock hammer. The intriguing thing about the two lists of numbers is that they are exactly the same:
i     ti (s)   zi (ft)
1     0.0      25.0
2     0.5      25.7
3     1.0      27.7
4     1.5      31.0
5     2.0      35.6
6     2.5      41.6
7     3.0      48.9
8     3.5      57.5
9     4.0      67.5
10    4.5      78.8
11    5.0      91.4
12    5.5      105.3
13    6.0      120.6
14    6.5      137.2
15    7.0      155.1
16    7.5      174.4
17    8.0      194.6
Explore the inverse properties of numerical differentiation and integration for the above profile of travel-time data – i.e.,
A) Plot the travel-time data profile using appropriate units.
B) Compute and list the 15 horizontal derivative values that may be defined from the successive 3-point data sequences.
C) Find the derivative values for i = 1 and 17 using the 2nd Fundamental Theorem of Calculus (i.e., a ...
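Part B's 15 interior derivative estimates can be sketched with 3-point central differences over the tabulated data:

```python
import numpy as np

# Central (3-point) differences for part B of Exercise 1.1, using the
# travel-time data listed above (t in s, z in ft).
t = np.arange(0.0, 8.5, 0.5)                      # 17 stations
z = np.array([25.0, 25.7, 27.7, 31.0, 35.6, 41.6, 48.9, 57.5,
              67.5, 78.8, 91.4, 105.3, 120.6, 137.2, 155.1, 174.4, 194.6])

dt = t[1] - t[0]
dzdt = (z[2:] - z[:-2]) / (2 * dt)                # 15 interior derivatives
print(len(dzdt), dzdt[0])                         # 15 values; first is 2.7 ft/s
```

Each estimate (z[i+1] − z[i−1]) / (2Δt) is second-order accurate, which is why the endpoints i = 1 and 17 need the separate treatment asked for in part C.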
Fixed point result in Menger space with EA property – Alexander Decker
This document presents a fixed point theorem for four self-maps in a Menger space. It begins by defining key concepts related to Menger spaces including probabilistic metric spaces, t-norms, neighborhoods, convergence, Cauchy sequences, and completeness. It then introduces properties like weakly compatible maps, property (EA), and JSR mappings. The main result, Theorem 3.1, proves the existence of a common fixed point for four self-maps under conditions that the map pairs satisfy a common property (EA) and are closed, JSR mappings satisfying an inequality involving the probabilistic distance functions.
This document summarizes an academic paper that proposes modifying well-known local linear models for system identification by replacing their original recursive learning rules with outlier-robust variants based on M-estimation. It describes three existing local linear models - local linear map (LLM), radial basis function network (RBFN), and local model network (LMN) - and then introduces the concept of M-estimation as a way to make the learning rules of these models more robust to outliers. The performance of the proposed outlier-robust variants is evaluated on three benchmark datasets and is found to provide considerable improvement in the presence of outliers compared to the original models.
IJCER (www.ijceronline.com) International Journal of Computational Engineerin... – ijceronline
The document discusses dynamical behaviors in the Lorenz model, a system of nonlinear differential equations. It begins with introductions to concepts like limit cycles, Hopf bifurcations, and how periodic solutions can emerge from equilibria. The main investigation analyzes the Lorenz model, which exhibits chaotic behavior. Parameters are fixed, and three equilibrium points are identified. The focus is on one equilibrium point that is suitable for investigating the system's dynamic behaviors near a Hopf bifurcation.
An investigation of inference of the generalized extreme value distribution b... – Alexander Decker
This document presents an investigation of parameter estimation for the generalized extreme value distribution based on record values. Maximum likelihood estimation is used to estimate the parameters β (scale parameter) and ξ (shape parameter). Likelihood equations are derived and solved numerically. Bootstrap and Markov chain Monte Carlo methods are proposed to construct confidence intervals for the parameters since intervals based on asymptotic normality may not perform well due to small sample sizes of records. Bayesian estimation of the parameters using MCMC is also investigated. An illustrative example involving simulated records is provided.
IJERA (International journal of Engineering Research and Applications) is International online, ... peer reviewed journal. For more detail or submit your article, please visit www.ijera.com
This document presents a new method for denoising spectral radar data using complex wavelets.
The existing denoising method works well at lower altitudes but fails at higher altitudes where noise is dominant. The proposed method applies complex wavelet transform with a custom thresholding function to denoise the data in the frequency domain before estimating the Doppler spectrum.
Results show the new method can accurately detect wind speeds up to 18km in altitude, unlike the existing method which fails above 11km. Validation with GPS sonde data also supports the improved performance of the proposed complex wavelet denoising approach.
IJCER (www.ijceronline.com) International Journal of Computational Engineerin... – ijceronline
This document discusses the implementation of Elliptic Curve Digital Signature Algorithm (ECDSA) using variable text message encryption methods. It begins with an abstract that outlines ECDSA, its advantages over other digital signature algorithms like smaller key size, and implementation of ECDSA over elliptic curves P-192 and P-256 with variable size text message, fixed size text message, and text based message encryption. It then provides details on elliptic curve cryptography, the elliptic curve discrete logarithm problem, finite fields, and domain parameters for ECDSA.
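The ECDSA sign/verify flow summarized above can be illustrated on a toy curve: y² = x³ + 2x + 2 over F₁₇ with base point (5, 1) of prime order 19 (a standard textbook example, not the P-192/P-256 curves used in the paper, and with fixed values where real code needs random nonces):

```python
# Toy ECDSA over y^2 = x^3 + 2x + 2 mod 17, base point G = (5, 1) of prime
# order n = 19. For illustration of the sign/verify algebra only; real code
# must use a secure curve and a fresh random nonce k per signature.
p, a, n = 17, 2, 19
G = (5, 1)

def inv(x, m):
    return pow(x, m - 2, m)             # modular inverse (m prime)

def add(P, Q):                          # elliptic-curve point addition
    if P is None: return Q
    if Q is None: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return None                     # P + (-P) = point at infinity
    if P == Q:
        s = (3 * P[0] ** 2 + a) * inv(2 * P[1], p) % p
    else:
        s = (Q[1] - P[1]) * inv(Q[0] - P[0], p) % p
    x = (s * s - P[0] - Q[0]) % p
    return (x, (s * (P[0] - x) - P[1]) % p)

def mul(k, P):                          # double-and-add scalar multiplication
    R = None
    while k:
        if k & 1: R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

d = 5                                   # private key
Q = mul(d, G)                           # public key
h, k = 4, 3                             # message hash and nonce (fixed for demo)
r = mul(k, G)[0] % n
s = inv(k, n) * (h + d * r) % n         # signature (r, s)

w = inv(s, n)                           # verification
u1, u2 = h * w % n, r * w % n
X = add(mul(u1, G), mul(u2, Q))
print(X[0] % n == r)                    # True: signature verifies
```

The verification step recovers kG as u1·G + u2·Q, which is why its x-coordinate reduced mod n must equal r.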
This document summarizes finite difference modeling methods used at M-OSRP. It discusses:
1) The second order time and fourth order space finite difference schemes used to model acoustic wave propagation.
2) How boundary conditions like Dirichlet/Neumann generate strong spurious reflections that can mask true events.
3) The importance of accurate source fields for modeling - better source fields lead to more accurate linear inversions and the ability to observe phenomena like polarity reversals in modeled data.
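The "fourth order space" part of point 1 usually refers to the five-point stencil (−1, 16, −30, 16, −1)/(12Δx²) for the second spatial derivative. A quick numpy check of its accuracy on a smooth field (an illustrative sketch, not M-OSRP's code):

```python
import numpy as np

# Fourth-order central stencil for d2/dx2, applied to sin(x) on a periodic
# grid; the exact answer is -sin(x). Illustrative only.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u = np.sin(x)

d2u = (-np.roll(u, 2) + 16 * np.roll(u, 1) - 30 * u
       + 16 * np.roll(u, -1) - np.roll(u, -2)) / (12 * dx ** 2)

print(np.max(np.abs(d2u + np.sin(x))))   # small: error shrinks like dx^4
```

In a time-stepping wave-equation code this spatial operator is combined with a second-order update in time, which is the "second order time and fourth order space" scheme mentioned above.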
Slides: A glance at information-geometric signal processing – Frank Nielsen
This document discusses information geometry and its applications in statistical signal processing. It introduces several key concepts:
1) Statistical signal processing models data with probability distributions like Gaussians and histograms. Information geometry provides a geometric framework for intuitive reasoning about these statistical models.
2) Exponential family mixture models generalize Gaussian and Rayleigh mixtures and are algorithmically useful in dually flat spaces.
3) Distances between statistical models, like Kullback-Leibler divergence and Bregman divergences, can be interpreted geometrically in terms of convex conjugates and Legendre transformations.
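Point 3 can be made concrete: a Bregman divergence is D_F(x, y) = F(x) − F(y) − ⟨∇F(y), x − y⟩ for a convex generator F. With F(x) = ½‖x‖² it reduces to half the squared Euclidean distance, and with the negative entropy F(x) = Σ xᵢ log xᵢ it gives the generalized Kullback–Leibler divergence. A small numeric sketch (helper names are our own):

```python
import numpy as np

# Bregman divergence D_F(x, y) = F(x) - F(y) - <grad F(y), x - y>.
# (Generic helper written for this note; not from the slides.)
def bregman(F, gradF, x, y):
    return F(x) - F(y) - np.dot(gradF(y), x - y)

x = np.array([0.2, 0.5, 0.3])
y = np.array([0.4, 0.4, 0.2])

# F = 0.5 ||x||^2  ->  half the squared Euclidean distance
d_euc = bregman(lambda v: 0.5 * np.dot(v, v), lambda v: v, x, y)
print(np.isclose(d_euc, 0.5 * np.sum((x - y) ** 2)))        # True

# F = sum x log x (negative entropy) -> generalized KL divergence
F = lambda v: np.sum(v * np.log(v))
gF = lambda v: np.log(v) + 1
d_kl = bregman(F, gF, x, y)
print(np.isclose(d_kl, np.sum(x * np.log(x / y) - x + y)))  # True
```

This is the sense in which divergences between statistical models correspond to convex conjugates: each generator F induces its own geometry.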
Analytic construction of points on modular elliptic curves – mmasdeu
1. The document discusses analytic constructions of points on modular elliptic curves over number fields.
2. It introduces Heegner points, which provide a tool for verifying the Birch and Swinnerton-Dyer conjecture when the number field is an imaginary quadratic field.
3. Later work has generalized these constructions to some real quadratic fields and cubic fields of signature (1,1) by using Hilbert modular forms and automorphic forms on hyperbolic 3-space.
Interferogram Filtering Using Gaussians Scale Mixtures in Steerable Wavelet D... – CSCJournals
An interferogram filtering method is presented in this paper. The main concern of the proposed scheme is to lower the residue count while preserving the location and jump height of the lines of phase discontinuity. The proposed method is based on a statistical model of the coefficients of a multi-scale oriented basis. Neighborhoods of coefficients at adjacent positions and scales are modeled as the product of two independent random variables: a Gaussian vector and a hidden positive scalar multiplier. Under this model, the Bayesian least-squares estimate of each coefficient reduces to a weighted average of the local linear estimates over all possible values of the hidden multiplier variable. This method substantially reduces the number of residues without affecting the lines of height discontinuity.
Cubic convolution interpolation is a new technique for resampling discrete data that has several desirable features for image processing. It can be performed efficiently on a digital computer. The cubic convolution interpolation function converges uniformly to the function being interpolated as the sampling increment approaches zero, achieving third-order accuracy. The paper derives the one-dimensional cubic convolution interpolation function and shows how it can be extended separably to two dimensions for interpolating image data.
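The one-dimensional kernel described here is the Keys cubic convolution kernel with parameter a = −1/2, which gives the third-order accuracy mentioned above. A minimal sketch for unit-spaced samples (interior points only; boundary handling is omitted):

```python
import numpy as np

def keys_kernel(x, a=-0.5):
    """Cubic convolution kernel of Keys (a = -0.5 gives third-order accuracy)."""
    x = np.abs(x)
    w = np.zeros_like(x)
    m1 = x <= 1
    m2 = (x > 1) & (x < 2)
    w[m1] = (a + 2) * x[m1] ** 3 - (a + 3) * x[m1] ** 2 + 1
    w[m2] = a * (x[m2] ** 3 - 5 * x[m2] ** 2 + 8 * x[m2] - 4)
    return w

def cubic_interp(samples, xq):
    """Interpolate unit-spaced samples at xq using the 4 nearest samples."""
    i = int(np.floor(xq))
    idx = np.arange(i - 1, i + 3)                 # 4 nearest sample indices
    return float(np.dot(samples[idx], keys_kernel(xq - idx)))

nodes = np.arange(10.0)
vals = 2 * nodes + 1                              # a linear profile
print(cubic_interp(vals, 2.5))                    # 6.0: exact on linear data
```

Because the kernel is 1 at the origin and 0 at the other integers, it interpolates (passes through the samples exactly); the separable 2-D extension applies the same weights along rows and then columns.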
MRF parameter estimation by MCMC method – Indira Bala Giri MV
Pattern Recognition 33 (2000) 1919–1925
MRF parameter estimation by MCMC method
Lei Wang, Jun Liu*, Stan Z. Li
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore
Received 13 January 1999; received in revised form 28 July 1999; accepted 28 July 1999
Abstract
Markov random field (MRF) modeling is a popular pattern analysis method and MRF parameter estimation plays an important role in MRF modeling. In this paper, a method based on Markov Chain Monte Carlo (MCMC) is proposed to estimate MRF parameters. Pseudo-likelihood is used to represent the likelihood function and it gives a good estimation result. © 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.
Keywords: MRF; MCMC; Least-squares fit; Parameter estimation; Pseudo-likelihood
1. Introduction

The objective of mathematical modeling in pattern analysis is to extract the intrinsic characteristics of the pattern in a few parameters so as to represent the pattern effectively. Markov random field modeling is a very popular pattern analysis method and it plays an important role in pattern recognition and computer vision. Markov random field models were popularized by Besag to model spatial interactions on lattice systems [1]. They can be used in texture classification and segmentation as well as image restoration [2]. The most important characteristic of MRF modeling is that global patterns can be formed via stochastic propagation of local interactions. MRF parameter estimation is necessary in MRF modeling after the form of the model is given. Over the past years, many authors have presented methods to estimate MRF parameters: simulated annealing [3], maximum likelihood [4], the coding method [1], mean field approximations [5], Bayesian estimation [6] and least-squares (LS) fit [7] have been discussed for estimating MRF parameters.

Least-squares (LS) methods and maximum likelihood methods are often used. However, LS is not accurate in estimation and the maximum likelihood method is time-consuming. Here a method based on MCMC is used to estimate the parameters, which can give a good solution to the estimation problem.

The general parameter estimation principle is as follows. Let F denote any finite set which comprises a random field, and let f ∈ F be an observation of F. On F a family of distributions

{P(F; θ): θ ∈ Θ}

is considered, where Θ ⊂ R^K is a set of parameters. The 'true' parameter θ* ∈ Θ is not known and needs to be determined or at least approximated. The only available information is hidden in the observation f, which is a realization of F. Now, the problem is how to choose θ̂ as a substitute for θ* if f is picked at random from P(F; θ*).

In this paper, the estimation of parameters is based on deriving the posterior distribution, calculated using the Metropolis–Hastings algorithm. This is a Markov chain Monte Carlo (MCMC) technique [8].

The paper is arranged as follows. The MRF image model is discussed in Section 2. MCMC parameter estimation is proposed in Section 3. The experiments are shown in Section 4 and the conclusion is given in Section 5.

* Corresponding author.
E-mail addresses: elwang@263.net (L. Wang), ejliu@ntu.edu.sg (J. Liu), szli@szli.eee.ntu.edu.sg (S.Z. Li).
0031-3203/00/$20.00 © 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.
PII: S0031-3203(99)00178-8

2. MRF modeling

This section introduces some notations related to MRF modeling which will be used in the following sections of the paper.
A lattice is a square array of pixels, or sites, {(j, k): 0 ≤ j ≤ N−1, 0 ≤ k ≤ N−1}. We adopt a simple numbering of sites by assigning sequence number i = k + Nj to site (j, k). Letting M = N² denote the number of sites,

S = {0, 1, …, M−1}

indexes the set of sites. A random field model is a distribution for the M-tuple random vector F, which contains a random variable F(i) for the value of site i. The sites in S are related to one another via a neighborhood system. A neighborhood system for S is defined as

N = {N_i: ∀ i ∈ S},

where N_i is the set of sites neighboring i. The neighboring relationship has the following properties:

(1) a site is not neighboring to itself;
(2) the neighboring relationship is mutual.

A clique c is a set of sites in which all pairs of sites are mutual neighbors. The set of all cliques in a neighborhood system is denoted as Q. Suppose F is an MRF. Let f ∈ F be a realization of F. A clique function, or potential function, V_c(f), is associated with each clique c, and the energy function U(f) of the MRF can be expressed as the sum of clique functions:

U(f) = ∑_{c ∈ Q} V_c(f).

For a homogeneous MRF, the potential function is independent of location. Thus, the number of clique potentials can be reduced to the number of clique types, that is, each potential corresponds to a clique type. Consider a multi-level logistic (MLL) model [4]. Let L = {1, …, m} be the label set and θ = (θ_1, …, θ_n) be the parameter vector for clique potentials, where each component corresponds to a clique type. Consider the distribution of Gibbs form

P(F = f | θ) = Z(θ)⁻¹ exp(−U(f, θ)),   (1)

where U(f, θ) is the energy function and depends linearly on θ. Suppose H(f) = (H_1, …, H_n) is the histogram of cliques of f, where n denotes the number of clique types. Let

δ(z) = 1 if z = 0, and 0 otherwise,

H_i = ∑_{j ∈ S} ∑_{j′ ∈ c_i(j)} [2δ(f_j − f_{j′}) − 1],   i = 1, …, n,

where c_i(j) denotes the neighbors of site j in the ith clique type c_i. Then the distributions have the form

P(f | θ) = Z(θ)⁻¹ exp(⟨θ, H(f)⟩),

where ⟨θ, H(f)⟩ = ∑_i θ_i H_i is the inner product of θ and H, and Z(θ) = ∑_f exp(⟨θ, H(f)⟩) is the normalizing partition function. The conditional probability is as follows:

P(f_j | f_{N_j}) = exp(⟨θ, H_j(f)⟩) / ∑_{z_j ∈ L} exp(⟨θ, H_j(z_j)⟩),   (2)

where H_j(f) is the local histogram calculated only in the neighborhood of site j, and H_j(z_j) denotes the local histogram with f_j replaced by z_j while the neighborhood of j is kept fixed. The computation of Z(θ) is infeasible because there are a combinatorial number of elements in the configuration space. In order to avoid using the partition function Z(θ), the pseudo-likelihood function

PL(f) = log ∏_{j ∈ S} P(f_j | f_{N_j}) = ∑_{j ∈ S} [⟨θ, H_j(f)⟩ − log ∑_{z_j ∈ L} exp(⟨θ, H_j(z_j)⟩)]   (3)

can be used to replace the likelihood function. The pseudo-likelihood does not involve the partition function Z(θ); hence it is much easier to calculate.

3. MCMC estimation of MRF parameters

According to Bayes' theorem, the posterior distribution of θ conditional on f is

P(θ | f) = P(θ)P(f | θ) / ∫ P(θ)P(f | θ) dθ ∝ P(θ)P(f | θ).   (4)

According to Gilks et al. [8], any features of the posterior distribution are legitimate for Bayesian inference: moments, quantiles, highest posterior density regions, etc. All these quantities can be expressed in terms of posterior expectations of functions of θ. The posterior expectation of a function g(θ) is

E[g(θ) | f] = ∫ g(θ)P(θ)P(f | θ) dθ / ∫ P(θ)P(f | θ) dθ.   (5)

The integrations in this expression are difficult to solve in Bayesian inference. Monte Carlo integration, including the Markov chain Monte Carlo (MCMC) approach [6], can be used to deal with this difficulty [8]. The task is to evaluate the expectation

E(g(θ)) = ∫ g(θ)P(θ) dθ / ∫ P(θ) dθ.   (6)

A Markov chain can be adopted for the purpose of evaluation. Suppose we generate a sequence of random variables {θ⁰, θ¹, θ², …}. At each time t ≥ 0, the next state θ^{t+1} is sampled from a distribution P(θ^{t+1} | θ^t) which depends only on the current state θ^t of the chain. This Markov chain is assumed to be time-homogeneous. Thus, the sequence will gradually converge to a unique stationary distribution φ(·). After a sufficiently long burn-in of, say, m iterations, {θ^t, t = m+1, …, n} will be dependent samples approximately from φ(·). Let

θ̄ = (1/(n−m)) ∑_{t=m+1}^{n} θ^t.   (7)

This is an ergodic average. Convergence to the required expectation is ensured by the ergodic theorem. Eq. (7) shows how a Markov chain can be used to estimate E(θ | f). Such a Markov chain can be constructed by the Metropolis–Hastings algorithm [8]. At each time t, the next state θ^{t+1} is chosen by first sampling a candidate point θ′ from a proposal distribution q(· | θ^t). The choice of proposal distribution is almost arbitrary; here a multivariate normal distribution centered on the current value θ^t is adopted. The candidate θ′ is accepted
Fig. 1. Textures used in the experiment. (a) Number of graylevels M = 2, (b) M = 2, (c) M = 4, (d) M = 4.
with probability

α(θ^t, θ′) = min{1, [P(θ′ | f) q(θ^t | θ′)] / [P(θ^t | f) q(θ′ | θ^t)]}.

The transition kernel for the Metropolis–Hastings algorithm is

P(θ^{t+1} | θ^t) = q(θ^{t+1} | θ^t) α(θ^t, θ^{t+1}) + I(θ^{t+1} = θ^t) [1 − ∫ q(θ′ | θ^t) α(θ^t, θ′) dθ′],

where I(·) denotes the indicator function (taking 1 when its argument is true, and 0 otherwise). If the candidate θ′ is accepted, the next state becomes θ^{t+1} = θ′; otherwise θ^{t+1} = θ^t. Since P(θ | f) ∝ P(θ)P(f | θ) and the prior P(θ) can be assumed to be flat when prior information is totally unavailable,

α(θ^t, θ′) = min{1, [P(θ′ | f) q(θ^t | θ′)] / [P(θ^t | f) q(θ′ | θ^t)]}
           = min{1, [P(f | θ′) q(θ^t | θ′)] / [P(f | θ^t) q(θ′ | θ^t)]}.   (8)

Since the choice of proposal distribution here is a normal centered on the current value, q(θ′ | θ^t) = q(θ^t | θ′) due to the symmetry of the proposal distribution. Thus, the acceptance probability formula can be reduced to

α(θ^t, θ′) = min{1, P(f | θ′) / P(f | θ^t)}.   (9)

Thus, the Metropolis–Hastings algorithm reduces to the Metropolis algorithm. When we use the pseudo-likelihood to represent the likelihood function, we get

α(θ^t, θ′) = min(1, exp(PL(f | θ′) − PL(f | θ^t)))
           = min{1, exp(∑_{j ∈ S} [⟨θ′, H_j(f)⟩ − ⟨θ^t, H_j(f)⟩ − log ∑_{z_j ∈ L} exp(⟨θ′, H_j(z_j)⟩) + log ∑_{z_j ∈ L} exp(⟨θ^t, H_j(z_j)⟩)])}.   (10)

With this acceptance probability, θ can be approximated effectively.

The Metropolis–Hastings algorithm can be summarized in the following procedure:

Initialize θ⁰; set t = 0 and T = maximum number of iterations
While t < T
BEGIN
    Sample a point θ′ from q(· | θ^t)
    Sample a uniform (0, 1) random variable v
    If v ≤ α(θ^t, θ′), set θ^{t+1} = θ′; otherwise set θ^{t+1} = θ^t
    Increment t
END

4. Experiments

In order to inspect the performance of the method proposed in this paper, a Gibbs sampler [4] is used to sample textures with the specified parameters. Here a second-order neighborhood system is used and four double-site cliques {θ_1, θ_2, θ_3, θ_4}, corresponding to 0°, 90°, 45° and 135° respectively, are adopted as non-zero parameters. Fig. 1 shows four 128 × 128 textures generated from the Gibbs sampler. The first two textures are sampled with two graylevels and the next two textures are sampled with four graylevels. The parameters of the four textures are listed in Table 1. In order to get acceptable parameters, the MCMC procedure described in the previous section should be repeated until stability of the Markov chain is reached. The choice of starting values will not affect the stationary distribution if the chain is irreducible. In our experiments, θ⁰ is chosen randomly. The usual informal approach to detecting convergence is visual inspection of plots of the Monte Carlo output {θ^t, t = 1, …, n}. In Figs. 2–5, three independent samples of Markov chains for texture 4 are given.

Fig. 2. 1000 iterations with different starting values for estimating θ for texture 4.
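The sampler summarized above can be sketched on a toy problem. The code below uses a two-graylevel (Ising-type) MRF with a single interaction parameter θ, first-order neighbors, a flat prior, and a pseudo-likelihood acceptance ratio in the spirit of Eq. (10); the toy configuration, chain length, and proposal scale are our own choices, not the paper's settings:

```python
import numpy as np

# Minimal sketch of the estimator on a two-graylevel (Ising-type) MRF with
# one interaction parameter theta, first-order neighbors, and a flat prior.
# The image below is a fixed toy configuration, not one of the paper's
# textures; the chain settings are illustrative only.
rng = np.random.default_rng(0)

f = np.array([[ 1,  1, -1, -1],
              [ 1,  1, -1, -1],
              [ 1,  1, -1, -1],
              [ 1,  1,  1, -1]])        # toy 4x4 configuration, labels {-1, +1}

def neighbor_sum(f):
    m = np.zeros_like(f)
    m[1:, :] += f[:-1, :]; m[:-1, :] += f[1:, :]   # up/down neighbors
    m[:, 1:] += f[:, :-1]; m[:, :-1] += f[:, 1:]   # left/right neighbors
    return m

def pseudo_loglik(theta, f):
    m = neighbor_sum(f)
    # log P(f_j | neighbors) = theta*f_j*m_j - log(2*cosh(theta*m_j))
    return np.sum(theta * f * m - np.logaddexp(theta * m, -theta * m))

# Metropolis sampling of theta: normal proposal, acceptance
# min(1, exp(PL(theta') - PL(theta))).
theta, chain = 0.0, []
for t in range(3000):
    cand = theta + 0.2 * rng.standard_normal()
    if np.log(rng.uniform()) <= pseudo_loglik(cand, f) - pseudo_loglik(theta, f):
        theta = cand
    chain.append(theta)

est = np.mean(chain[500:])              # ergodic average after burn-in
print(est)                              # posterior-mean estimate of theta
```

Because the proposal is symmetric, the q terms cancel exactly as in Eq. (9), leaving only the pseudo-likelihood difference in the acceptance test.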
Fig. 3. 1000 iterations with different starting values for estimating θ for texture 4.

Fig. 4. 1000 iterations with different starting values for estimating θ for texture 4.
From the figures, we observe that the length of burn-in depends on θ⁰. The Markov chains converge in less than 300 iterations in most examples according to visual inspection of the monitoring statistics. Here we set the burn-in m = 500. More formal methods for convergence diagnostics can be found in Refs. [9,10]. The decision about the iteration number is an important and practical matter. The aim is to run the chain long enough to obtain adequate precision in the estimator. Here three chains are run in parallel with different starting values, and θ̄ is computed from Eq. (7). If they do not agree adequately, the iteration number n must be increased. Initially, we set n = 1000. If the estimates θ̄ do not agree adequately, we increase n by 500 iterations each time until the estimates are similar. We only need to inspect the mean θ̄ and variance of the Monte Carlo output. In our experiments in Table 1, n = 1000 is enough. The results of the MCMC approach in Table 1 are acceptable, where σ̄ denotes the average standard deviation of the Markov chains after burn-in. In order to verify the performance of this method, the least-squares (LS) fit method proposed by Derin and Elliott [7] is also used in our experiments. From Table 1, it can be seen that the LS
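The informal multi-chain agreement check described here can be quantified in the spirit of the Gelman–Rubin diagnostic (one of the formal methods surveyed in Ref. [9]): compare between-chain and within-chain variance. A sketch with synthetic chains standing in for the Monte Carlo output:

```python
import numpy as np

# Potential scale reduction factor: R-hat near 1 indicates that parallel
# chains agree. The chains here are synthetic stand-ins for the paper's
# Monte Carlo output.
def r_hat(chains):
    chains = np.asarray(chains)            # shape (m, n): m chains, n draws
    m, n = chains.shape
    means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()  # within-chain variance
    B = n * means.var(ddof=1)              # between-chain variance
    var_plus = (n - 1) / n * W + B / n     # pooled variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(1)
good = rng.normal(0.7, 0.05, size=(3, 1000))     # chains that agree
bad = good + np.array([[0.0], [0.5], [1.0]])     # chains stuck apart
print(r_hat(good) < 1.1, r_hat(bad) > 1.5)       # True True
```

When the chains have mixed, the between-chain term vanishes and R-hat falls toward 1, matching the visual "do the plots overlap" check used in the paper.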
Table 1
MRF parameter estimation

Textures    Method      θ_1       θ_2       θ_3       θ_4
Texture 1   Specified   1         1         −0.5      −0.5
            LS          0.8448    0.8734    −0.4332   −0.4382
            MCMC        0.9884    0.9899    −0.5076   −0.5078
            σ̄           0.0436    0.0323    0.0278    0.0400
Texture 2   Specified   1         −0.8      0.5       −0.5
            LS          0.9949    −0.8157   0.4960    −0.3244
            MCMC        1.0093    −0.8586   0.5569    −0.4522
            σ̄           0.0147    0.0235    0.0168    0.0245
Texture 3   Specified   0.3       0.3       0.3       0.3
            LS          0.1152    0.1520    0.1867    0.1444
            MCMC        0.3478    0.2762    0.2960    0.2877
            σ̄           0.0266    0.0165    0.0130    0.0086
Texture 4   Specified   0.5       1         −0.5      0.7
            LS          0.0415    0.4364    0         0.4201
            MCMC        0.5525    1.0394    −0.5951   0.6810
            σ̄           0.0321    0.0364    0.0490    0.0156
method is effective only for the textures with two graylevels, while the MCMC method is effective for all examples in the experiments even when more graylevels are adopted in the model. From the comparison, the MCMC method proposed in this paper is much better than the LS method. The MCMC routines are run on a Sun Ultra 2 workstation; each analysis takes less than 3 min to perform 1000 iterations.

Fig. 5. 1000 iterations with different starting values for estimating θ for texture 4.

5. Conclusion

Markov random field (MRF) modeling is a popular pattern analysis method. It can be used in texture classification and segmentation as well as image restoration. MRF parameter estimation plays an important role in MRF modeling. In order to estimate MRF parameters effectively and efficiently, an MRF parameter estimation method based on MCMC is proposed in this paper. A Markov chain is constructed to sample the MRF parameters via the Monte Carlo method. The MLL model is used as the image model. In order to avoid calculating the normalizing partition function, the pseudo-likelihood function is used to represent the likelihood function. Compared to the least-squares fit method, the proposed method is more accurate and can be used effectively for multi-graylevel texture parameter estimation, as seen from the experiments in the paper. This method can be extended to multiresolution analysis of texture modeling and segmentation of textured images.

6. Summary

Markov random field (MRF) modeling is a popular pattern analysis method. It can be used in texture classification and segmentation as well as image restoration. MRF parameter estimation plays an important role in MRF modeling. In order to estimate MRF parameters effectively and efficiently, an MRF parameter estimation method based on MCMC is proposed in this paper. A Markov chain is constructed to sample the MRF parameters via the Monte Carlo method. The MLL model is used as the image model. In order to avoid calculating the normalizing partition function, the pseudo-likelihood function is used to represent the likelihood function. Compared to the least-squares fit method, our method is more accurate and can be used effectively for multi-graylevel texture parameter estimation, as seen from the experiments in the paper. This method can be extended to multiresolution analysis of texture modeling and segmentation of textured images.

Acknowledgements

We wish to thank the reviewers for their constructive comments and suggestions.

References

[1] J. Besag, Spatial interaction and the statistical analysis of lattice systems, J. Roy. Statist. Soc. Ser. B 36 (1974) 192–236.
[2] S. Barker, Image segmentation using Markov random field models, Ph.D. Thesis, University of Cambridge, 1998.
[3] S. Geman, D. Geman, Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images, IEEE Trans. PAMI 6 (6) (1984) 721–741.
[4] S. Li, Markov Random Field Modeling in Computer Vision, Springer, New York, 1995.
[5] D. Chandler, Introduction to Modern Statistical Mechanics, Oxford University Press, Oxford, 1987.
[6] R. Aykroyd, Bayesian estimation for homogeneous and inhomogeneous Gaussian random fields, IEEE Trans. Pattern Anal. Mach. Intell. 20 (5) (1998) 533–539.
[7] H. Derin, H. Elliott, Modeling and segmentation of noisy and textured images using Gibbs random fields, IEEE Trans. Pattern Anal. Mach. Intell. 9 (1) (1987) 39–55.
[8] W. Gilks, S. Richardson, D. Spiegelhalter, Markov Chain Monte Carlo in Practice, Chapman & Hall, London, 1996.
[9] M. Cowles, B. Carlin, Markov chain Monte Carlo convergence diagnostics: a comparative review, Technical Report, Division of Biostatistics, School of Public Health, University of Minnesota, 1994.
[10] S. Brooks, P. Dellaportas, G. Roberts, An approach to diagnosing total variation convergence of MCMC algorithms, University of Cambridge, http://www.stats.bris.ac.uk/~maspb/mypapers/brodr96.html, 1996.
About the Author: LEI WANG received his B.Eng and M.Eng degrees from Harbin Institute of Technology, China, in 1992 and 1995, respectively. He is currently a Ph.D. candidate in the School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore. His research interests include pattern recognition, image compression, image processing, and image retrieval.
About the Author: JUN LIU received his B.S. and M.S. degrees from Jiao Tong University, Xi'an, China, in 1982 and 1984, respectively. He obtained his Ph.D. degree from Oakland University, MI, USA in 1989. He is currently an associate professor with the Division of Information Engineering, School of Electrical and Electronic Engineering, Nanyang Technological University. His research interests include pattern recognition, image processing and multimedia databases.
About the Author: STAN Z. LI received the B.Sc degree from Hunan University, China, in 1982, the M.Sc degree from the National University of Defense Technology, China, in 1985, and the Ph.D. degree from the University of Surrey, UK, in 1991. All degrees are in EEE. He is currently a senior lecturer at Nanyang Technological University, Singapore. His research interests include computer vision, pattern recognition, image processing and optimization methods.