The document discusses various techniques for constructing shadows and lighting effects in 3D computer graphics, including using projection matrices to generate shadow polygons and accounting for factors like light source positioning, radial intensity attenuation, and surface reflectance properties. It also examines methods for animating camera movement and introducing texture mapping to surfaces.
This document summarizes Ja-Keoung Koo's presentation on structure from motion. It discusses image formation, the structure from motion pipeline with calibrated cameras, and the 8-point algorithm. The key points are:
1. Image formation maps 3D world points to 2D image points using a camera's intrinsic and extrinsic parameters.
2. Structure from motion with calibrated cameras recovers 3D structure and camera motion from 2D correspondences using the essential matrix and 8-point algorithm.
3. The 8-point algorithm finds the essential matrix from point correspondences and decomposes it to recover the rotation and translation between views.
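The epipolar constraint underlying the 8-point algorithm can be checked numerically. The sketch below is illustrative, not code from the presentation: the camera pose, the 3D point, and the hand-rolled 3x3 linear algebra are all my own assumptions. It verifies that normalized image points x1, x2 of the same 3D point satisfy x2^T E x1 = 0 with E = [t]_x R, for a second camera related by X2 = R X1 + t.

```python
# Sketch: for calibrated cameras, normalized image points x1, x2 of one
# 3D point satisfy the epipolar constraint x2^T E x1 = 0, with E = [t]_x R.
# The pose and point below are made up for illustration.
import math

def skew(t):
    """3x3 cross-product matrix [t]_x so that [t]_x @ v = t x v."""
    tx, ty, tz = t
    return [[0, -tz, ty],
            [tz, 0, -tx],
            [-ty, tx, 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def rot_y(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

# Second camera: rotated 10 degrees about y, translated along x.
R = rot_y(math.radians(10))
t = [1.0, 0.0, 0.0]
E = matmul(skew(t), R)                          # essential matrix

# A 3D point in the first camera's frame and its normalized projections.
X = [0.3, -0.2, 4.0]
x1 = [X[0] / X[2], X[1] / X[2], 1.0]            # first view
Xc2 = [a + b for a, b in zip(matvec(R, X), t)]  # point in second camera frame
x2 = [Xc2[0] / Xc2[2], Xc2[1] / Xc2[2], 1.0]    # second view

residual = sum(x2[i] * matvec(E, x1)[i] for i in range(3))
print(abs(residual))   # ~0 up to floating-point error
```

In the real algorithm the direction is reversed: eight or more such correspondences give linear constraints from which E itself is estimated, then decomposed into R and t.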
The document discusses using structured support vector machines to predict structured outputs by learning a scoring function F(x, y) = w·φ(x, y) that is maximized over y to make predictions. It provides an example of using this approach for category-level object localization in images by representing image-box pairs as features and learning to localize objects.
This document provides information on various mathematical topics including:
1. Graphs of polynomial functions in factorized form such as quadratics, cubics, and quartics.
2. Transformations of functions including translations, reflections, dilations, and their effects on graphs.
3. Exponential, logarithmic, and trigonometric functions and their graphs.
4. Relations, functions, and tests to determine if a relation is a function and if a function is one-to-one or many-to-one.
This document provides a calculus cheat sheet covering key topics in limits, derivatives, and integrals. It defines limits, including one-sided limits and limits at infinity. Properties of limits are listed. Derivatives are defined and basic rules like the power, constant multiple, sum, difference, and chain rules are covered. Common derivatives are provided. Higher order derivatives and the second derivative are defined. Evaluation techniques like L'Hospital's rule, polynomials at infinity, and piecewise functions are summarized.
The document discusses calculating volumes of solids generated when a region is rotated about an axis. It gives the disc-method formulas for the volume when the region under y = f(x) is rotated about the x-axis (V = π∫y² dx) or a region is rotated about the y-axis (V = π∫x² dy). Examples calculate the volume generated when the area between the parabola y = 4 - x² and the x-axis is rotated about the x-axis, and the volume enclosed between y = 4 - x² and y = 3 rotated about the line y = 3.
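The first example can be checked numerically with the disc method. The midpoint-rule integrator below is my own sketch, not the document's working; the exact value π∫(4 - x²)² dx over [-2, 2] works out to 512π/15.

```python
# Numeric check of the disc method for rotating the region between
# y = 4 - x^2 and the x-axis about the x-axis:
#   V = pi * integral of y^2 dx over [-2, 2] = 512*pi/15.
import math

def disk_volume(f, a, b, n=100_000):
    """Midpoint-rule approximation of V = pi * integral of f(x)^2 dx."""
    h = (b - a) / n
    return math.pi * h * sum(f(a + (i + 0.5) * h) ** 2 for i in range(n))

f = lambda x: 4 - x * x
V = disk_volume(f, -2.0, 2.0)
print(V)                    # approx 107.23
print(512 * math.pi / 15)   # exact value for comparison
```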
This document provides summaries of common derivatives and integrals, including:
- Basic properties and formulas for derivatives and integrals of functions like polynomials, trig functions, inverse trig functions, exponentials/logarithms, and more.
- Standard integration techniques like u-substitution, integration by parts, and trig substitutions.
- How to evaluate integrals of products and quotients of trig functions using properties like angle addition formulas and half-angle identities.
- How to use partial fractions to decompose rational functions for the purpose of integration.
So in summary, this document outlines essential derivatives and integrals for many common functions, along with standard integration strategies and techniques.
CVPR2010: higher order models in computer vision: Part 1, 2 (zukun)
This document discusses tractable higher order models in computer vision using random field models. It introduces Markov random fields (MRFs) and factor graphs as graphical models for computer vision problems. Higher order models that include factors over cliques of more than two variables can model problems more accurately but are generally intractable. The document discusses various inference techniques for higher order models such as relaxation, message passing, and decomposition methods. It provides examples of how higher order and global models can be used in problems like segmentation, stereo matching, reconstruction, and denoising.
IVR - Chapter 2 - Basics of filtering I: Spatial filters (Charles Deledalle)
Moving averages. Finite differences and edge detectors. Gradient, Sobel and Laplacian. Linear translation-invariant filters, cross-correlation and convolution. Adaptive and non-linear filters. Median filters. Morphological filters. Local versus global filters. Sigma filter. Bilateral filter. Patches and non-local means. Applications to image denoising.
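The simplest item in that outline, the moving average, can be sketched as a discrete convolution with a box kernel. The 1-D signal and the border-replication policy below are illustrative choices of mine, not from the slides.

```python
# Sketch of a moving average as a 1-D convolution with a box kernel.
# Borders are handled by clamping (replicating the edge samples).

def convolve1d(signal, kernel):
    """Discrete convolution with border replication, same-size output."""
    n, k = len(signal), len(kernel)
    r = k // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j in range(k):
            idx = min(max(i + j - r, 0), n - 1)   # clamp at the borders
            acc += signal[idx] * kernel[k - 1 - j]  # flipped kernel = convolution
        out.append(acc)
    return out

signal = [0, 0, 0, 9, 0, 0, 0]   # a single impulse
box = [1 / 3, 1 / 3, 1 / 3]      # 3-tap moving average
smoothed = convolve1d(signal, box)
print(smoothed)                  # impulse spread evenly over three samples
```

The same structure generalizes to 2-D spatial filters (Sobel, Laplacian) by replacing the 1-D kernel with a small 2-D mask.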
Arithmetic coding is an entropy encoding technique that maps a sequence of symbols to a numeric interval between 0 and 1. Each symbol maps to a sub-interval of the current interval based on the symbol probabilities. As symbols are processed, the interval boundaries are updated according to the cumulative distribution function of the symbol probabilities. Arithmetic coding achieves better compression than Huffman coding by allowing coding of variable-length blocks without pre-specifying code lengths. It also handles conditional probability models more efficiently by updating interval boundaries based on context without needing pre-specified codebooks for all contexts.
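The interval-narrowing step described above can be sketched in a few lines. The symbol model and the message below are illustrative; a real coder would also renormalize with finite-precision integers rather than floats.

```python
# Sketch of arithmetic coding's interval narrowing: each symbol shrinks
# [low, high) to the sub-interval given by the cumulative distribution.

def encode_interval(message, probs):
    """Return the final [low, high) interval identifying the message."""
    symbols = sorted(probs)          # fixed symbol order for cumulative starts
    cum, start = {}, 0.0
    for s in symbols:
        cum[s] = start
        start += probs[s]
    low, high = 0.0, 1.0
    for s in message:
        width = high - low
        high = low + width * (cum[s] + probs[s])   # uses the old low
        low = low + width * cum[s]
    return low, high

probs = {"a": 0.5, "b": 0.3, "c": 0.2}
low, high = encode_interval("aba", probs)
print(low, high)   # any number in [low, high) decodes back to "aba"
```

Note the final width equals the product of the symbol probabilities (0.5 x 0.3 x 0.5 = 0.075), which is why the code length approaches the message's information content.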
This document discusses arithmetic coding, an entropy encoding technique. It begins with an introduction comparing arithmetic coding to Huffman coding. The document then provides pseudocode for the basic encoding and decoding algorithms. It describes how scaling techniques like E1 and E2 scaling allow for incremental encoding and decoding as well as achieving infinite precision with finite-precision integers. The document outlines applications of arithmetic coding in areas like JBIG, H.264, and JPEG 2000.
Lecture on differentiable functions and differentials of functions of several variables (Nhan Nguyen)
This document introduces differentials in functions of several variables. It begins with a review of differentials in two variables using differentials dx and dy. It then extends the concept to functions of several variables, where the total differential dz is defined as the sum of its partial derivatives with respect to each variable times the differentials of those variables. Examples are provided to demonstrate calculating total differentials and comparing them to actual changes. The relationship between differentiability and continuity is also discussed.
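As a worked instance of the total differential defined above (the function and the evaluation point are chosen for illustration, not taken from the lecture):

```latex
% Total differential of z = f(x, y):
%   dz = (df/dx) dx + (df/dy) dy
% Example with f(x, y) = x^2 y:
\[
  dz = \frac{\partial}{\partial x}\!\left(x^2 y\right) dx
     + \frac{\partial}{\partial y}\!\left(x^2 y\right) dy
     = 2xy\, dx + x^2\, dy .
\]
% At (x, y) = (1, 2) with dx = dy = 0.1:
%   dz = 2(1)(2)(0.1) + (1)^2(0.1) = 0.5,
% while the actual change is
%   \Delta z = (1.1)^2 (2.1) - (1)^2 (2) = 0.541,
% illustrating the comparison of dz to the actual change mentioned above.
```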
Camera calibration involves determining the internal camera parameters like focal length, image center, distortion, and scaling factors that affect the imaging process. These parameters are important for applications like 3D reconstruction and robotics that require understanding the relationship between 3D world points and their 2D projections in an image. The document describes estimating internal parameters by taking images of a calibration target with known 3D positions and solving for the camera projection matrix P that relates 3D scene points to their 2D image coordinates.
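The mapping that calibration estimates can be sketched directly: a 3D point projects to pixels through P = K[R | t]. The intrinsics and pose below are illustrative placeholders, not values from any real calibration target.

```python
# Sketch of pinhole projection: camera-frame coordinates Xc = R X + t,
# then intrinsics u = fx*x/z + cx, v = fy*y/z + cy. All numbers illustrative.

def project(K, R, t, X):
    """Project 3D point X through extrinsics (R, t) and intrinsics K."""
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]   # perspective division
    fx, fy, cx, cy = K
    return fx * x + cx, fy * y + cy

K = (800.0, 800.0, 320.0, 240.0)          # (fx, fy, cx, cy), illustrative
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]     # identity rotation
t = [0.0, 0.0, 0.0]

u, v = project(K, R, t, [0.1, -0.05, 2.0])
print(u, v)   # (800*0.05 + 320, 800*(-0.025) + 240) = (360.0, 220.0)
```

Calibration inverts this picture: given many known 3D points and their measured (u, v), it solves for the entries of P.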
Lesson 27: Integration by Substitution (Section 4 version)Matthew Leingang
The document outlines a calculus lecture on integration by substitution. It provides examples of using u-substitution to find antiderivatives of expressions like √(x^2+1) and tan(x). The key ideas are that if u is a function of x, its derivative du/dx can be used to rewrite the integrand and perform a u-substitution integration.
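The tan(x) example mentioned above works out as follows with the substitution u = cos x (a standard derivation, reconstructed here rather than copied from the lecture):

```latex
% u-substitution for the antiderivative of tan x, with
% u = \cos x,  du = -\sin x \, dx:
\[
  \int \tan x \, dx
  = \int \frac{\sin x}{\cos x} \, dx
  = -\int \frac{du}{u}
  = -\ln\lvert u \rvert + C
  = -\ln\lvert \cos x \rvert + C .
\]
```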
Structured regression for efficient object detectionzukun
This document summarizes research on structured regression for efficient object detection. It proposes framing object localization as a structured output regression problem rather than a classification problem. This involves learning a function that maps images directly to object bounding boxes. It describes using a structured support vector machine with joint image/box kernels and box overlap loss to learn this mapping from training data. The document also outlines techniques for efficiently solving the resulting argmax problem using branch-and-bound optimization and discusses extensions to other tasks like image segmentation.
The document discusses differential processing on triangular meshes, including defining functions on meshes, local averaging operators, gradient and Laplacian operators, and proving that the normalized Laplacian is symmetric and positive definite using the properties of the gradient and local connectivity of the mesh. Operators like the Laplacian can be used to smooth functions defined on meshes through diffusion.
The document discusses various machine learning algorithms including polynomial regression, quadratic regression, radial basis functions, and robust regression. It provides mathematical formulas and visual examples to explain how each algorithm works. The key ideas are that polynomial regression fits nonlinear functions of inputs, quadratic regression extends linear regression by including quadratic terms, radial basis functions use kernel functions centered at data points to perform nonlinear regression, and robust regression aims to fit data robustly by down-weighting outliers.
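The quadratic-regression idea above, extending linear regression with squared terms, can be sketched via the normal equations. The synthetic noise-free data and the tiny hand-rolled solver are my own choices for illustration, so the fit should recover the generating coefficients almost exactly.

```python
# Sketch of quadratic regression: fit y = a + b*x + c*x^2 by least squares
# using the normal equations (X^T X) beta = X^T y, solved with a small
# Gaussian elimination. Data are synthetic and noise-free.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

xs = [-2, -1, 0, 1, 2, 3]
ys = [1 + 2 * x - 0.5 * x * x for x in xs]   # generated by (a, b, c) = (1, 2, -0.5)

X = [[1.0, x, x * x] for x in xs]            # design rows [1, x, x^2]
XtX = [[sum(X[k][i] * X[k][j] for k in range(len(xs))) for j in range(3)]
       for i in range(3)]
Xty = [sum(X[k][i] * ys[k] for k in range(len(xs))) for i in range(3)]
a, b, c = solve(XtX, Xty)
print(a, b, c)   # close to 1, 2, -0.5
```

Radial basis function regression has the same linear-solve structure, with the design columns replaced by kernels centered at the data points.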
The inverse of a function "undoes" the effect of the function. We look at the implications of that property in the derivative, as well as logarithmic functions, which are inverses of exponential functions.
The document presents a Green's function-based method for transient analysis of multiconductor transmission lines. It begins with an introduction to existing time-domain modeling techniques and their issues. It then describes modeling transmission lines as a vector Sturm-Liouville problem and using the spectral representation of the Green's function to solve it. Numerical results are presented for lines with both frequency-independent and dependent parameters. The method provides a rational model representation of transmission line behavior.
1. Geodesic sampling and meshing techniques can be used to generate adaptive triangulations and meshes on Riemannian manifolds based on a metric tensor.
2. Anisotropic metrics can be defined to generate meshes adapted to features like edges in images or curvature on surfaces. Triangles will be elongated along strong features to better approximate functions.
3. Farthest point sampling can be used to generate well-spaced point distributions over manifolds according to a metric, which can then be triangulated using geodesic Delaunay refinement.
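The farthest point sampling step can be sketched greedily. Here the Euclidean metric stands in for geodesic distance (on a flat domain the two agree), and the candidate points and seed index are illustrative.

```python
# Sketch of farthest point sampling: repeatedly pick the point farthest
# from the set already chosen, maintaining distances incrementally.

def farthest_point_sampling(points, k, seed=0):
    """Greedy well-spaced sampling of k points from the candidate set."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    chosen = [seed]
    dist = [d2(p, points[seed]) for p in points]   # distance to nearest sample
    for _ in range(k - 1):
        nxt = max(range(len(points)), key=lambda i: dist[i])
        chosen.append(nxt)
        dist = [min(dist[i], d2(points[i], points[nxt]))
                for i in range(len(points))]
    return chosen

# Points on a line: starting from one end, FPS picks the far end, then the middle.
pts = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0), (4.0, 0.0)]
picked = farthest_point_sampling(pts, 3)
print(picked)   # [0, 4, 2]
```

On a manifold, d2 would be replaced by geodesic distance under the metric tensor, and the resulting samples fed to geodesic Delaunay refinement.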
The document contains C++ code that defines functions for displaying 3D objects using OpenGL. It initializes materials, lighting, and transforms for 10 objects. Keyboard inputs control animation and properties. Mouse clicks select an object and set rotation axes. The display function renders each object with the appropriate material and transform based on user input.
The document describes research on using symbolic regression to infer mathematical models from experimental data. Symbolic regression evolves computer programs that best fit the data, such as equations composed of basic arithmetic operations and functions. The approach is able to recover known models of various physical systems from sample data alone. It can also infer novel models of biological networks and other complex systems directly from experimental measurements. The ability to distill natural laws from data has applications in scientific discovery, engineering design, and other fields.
Gazr is a Flickr browser application built with Java and Qt Jambi that provides a Cover Flow-like visual experience for quickly browsing large photo collections. It uses kinetic scrolling along a Bézier curve controlled by velocity to smoothly animate between photos. Sound effects are triggered during scrolling movements. The application leverages various libraries like WebKit for image display and caching, and uses a model-view-controller framework to retrieve and display photo data from the Flickr API in a tree structure.
This document provides an overview of light and optics topics including:
- Light travels in straight lines and can be reflected, absorbed, or pass through materials
- Shadows are formed when light is blocked and change size based on the position of the light source
- Refraction causes light to change direction when passing from one material to another, as seen through lenses and prisms
- Natural light sources like the sun can be investigated using shadows, while mirrors, lenses, and telescopes demonstrate optical principles.
The document discusses light sources and shadows. It explains that the moon, water, and mirrors are not light sources themselves, but rather reflect light from other sources. It describes how cast shadows are created by opaque objects blocking light and how their shape depends on the object and light source. Specifically, it notes that a sphere casts a round or elliptical shadow depending on the light angle and that lower light sources create longer shadows. The document also provides tips for highlighting and shading drawings, such as using different pencil pressures to create values from light to dark.
Texture mapping is a process that maps a 2D texture image onto a 3D object's surface. This allows the 3D object to take on the visual characteristics of the 2D texture. The document discusses key aspects of texture mapping like how textures are represented as arrays of texels, how texture coordinates are assigned to map textures onto object surfaces, and techniques like mipmapping, filtering and wrapping that are used to render textures properly at different distances and orientations. OpenGL functions like glTexImage2D and glTexCoord are used to specify textures and texture coordinates for 3D rendering with texture mapping.
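The texel lookup described above can be sketched without any GL at all: map a texture coordinate to a texel with repeat-style wrapping and nearest-neighbour filtering. The 2x2 checkerboard texture is an illustrative stand-in, not data from the document.

```python
# Sketch of a texel lookup: GL_REPEAT-style wrapping of (s, t) into [0, 1),
# then nearest-neighbour selection of a texel in a tiny texture.

def sample_nearest(texture, s, t):
    """Nearest-texel lookup with repeat wrapping of texture coordinates."""
    h, w = len(texture), len(texture[0])
    s, t = s % 1.0, t % 1.0          # repeat wrap into [0, 1)
    x = min(int(s * w), w - 1)       # nearest texel column
    y = min(int(t * h), h - 1)       # nearest texel row
    return texture[y][x]

checker = [[0, 255],
           [255, 0]]                 # 2x2 checkerboard, one byte per texel

t1 = sample_nearest(checker, 0.25, 0.25)   # top-left quadrant -> 0
t2 = sample_nearest(checker, 0.75, 0.25)   # top-right quadrant -> 255
t3 = sample_nearest(checker, 1.25, 0.25)   # wraps: same as s = 0.25 -> 0
print(t1, t2, t3)
```

Mipmapping and linear filtering refine this basic lookup: linear filtering blends the four nearest texels, and mipmapping selects a prefiltered texture level by screen-space footprint.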
Discusses secret inventions -> patentability
The principle of secrecy -> build working models -> introduce how to review patentability -> including the 4 legal requirements -> statutory class (focus on class 1: process)
1) Social TV began in 2001 with prototypes allowing friends to connect while viewing, and has evolved to include content sharing and recommendations, communication features, and status updates.
2) Major interactive Social TV activities include content selection and sharing with friends, direct synchronous and asynchronous communication around TV viewing, community building through comments and ratings, and updating friends on what one is currently watching.
3) Challenges include determining the right devices, modality of interaction, strength of social ties, and handling synchronization between solitary and shared viewing experiences.
Terrestrial network: Weather, News, ...
Social TV
• Facebook, Twitter, ...
Hybrid TV
• Integrate broadcast & internet
• Enrich the TV experience
• Personalized service
Ref: III project
Hybrid features on TV
- Integrate broadcast & internet content
- Personalized recommendation
- Social TV
- Second-screen experience
- Voice/gesture control
- Apps
- Advertisement
- Payment service
Display content: diverse video services
- Live TV
- Video on Demand (VoD)
- Time-shift TV
- Electronic Program Guide
How to separate related inventions or combine them in one
Case study: radio met car
Filing application tips - the "KISS rule"
Trademark note
Software & biz methods notes including source code & flowchart
First sketches skills (Official Gazette, OG)
Drafting the specification hints
OpenGL - point & line design
Introduces the construction of displays (CRT, flat-panel, LCD, PDP, projector, ...)
Their rendering is based on graphics primitives (points & lines)
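The note above says display rendering reduces to points and lines. The classic way to rasterize a line into discrete points is Bresenham's algorithm; the integer-only sketch below is the standard textbook version, added for illustration rather than taken from the slides.

```python
# Sketch of Bresenham line rasterization: the set of pixels approximating
# the segment from (x0, y0) to (x1, y1), using only integer arithmetic.

def bresenham(x0, y0, x1, y1):
    """Return the pixels approximating the segment, endpoints included."""
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy                    # running error term
    while True:
        points.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 >= dy:                 # step in x
            err += dy
            x0 += sx
        if e2 <= dx:                 # step in y
            err += dx
            y0 += sy
    return points

pts_line = bresenham(0, 0, 4, 2)
print(pts_line)   # five pixels from (0, 0) to (4, 2)
```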
Intellectual Property Guide
patent requirements
industry approaches
patent strength -> patent map
International property offices
Scenario extends to more patents - case study
This very short document appears to be about traveling westward. It does not provide many details to summarize in 3 sentences or less. The document simply states "Westward Ho !" and "The End", without any other context or content to extract a meaningful multi-sentence summary from.
Smart TV content converged service & social media (fungfung Chen)
This document discusses the convergence of smart TV content and social media. It begins with an outline covering the topics of what TV is in a globally connected world, content issues like apps and distribution/monetization, and social media case studies. The document then covers hybrid TV platforms, the development of HbbTV standards, bandwidth needs with more applications, and how CE manufacturers' value chains may change. It also discusses content issues around apps, distribution, synchronization and monetization solutions. Finally, it explores social media history and case studies involving Facebook, mobile apps, and gamification solutions.
Start from patentability - the 4 legal requirements
req. 1: a statutory class - lawsuits & 5 classes covering processes, machines, manufactures, compositions, and new uses
req. 2: utility - including software, a non-patentable case from a drug issue & how to turn back a whimsical case
req. 3: novelty - prior art (reduction to practice)
Cg is a high-level shading language that is used to program the vertex and fragment shaders in Unity. Cg shaders allow for complex pixel processing and GPU calculations. In Unity, shaders are written using the Cg language and compiled for the target graphics API. Shaders have inputs like vertex attributes and uniforms, and outputs like varying values passed between stages. Common shader types include surface, vertex, and fragment shaders. Advanced techniques like multi-pass rendering are also possible in Unity using Cg shaders.
The document describes a graphics editor program that simulates the MS Paint application. It uses OpenGL for graphics rendering and GLUT for creating windows and rendering scenes. Key features implemented include tools for drawing shapes, images, and text. OpenGL functions are used for rendering while GLUT functions handle window creation and events. The design section covers header files, OpenGL/GLUT functions, and user-defined functions for tasks like drawing, erasing, and filling shapes. Implementation details are provided for various drawing algorithms and user interface elements.
- The document introduces real-time shaders for artists working in Softimage and discusses three key hurdles for understanding shaders: 1) dot products and shading calculations, 2) normal mapping and environment mapping, and 3) shader blending techniques.
- It provides examples of shader code to illustrate dot products, normal mapping, and blurring textures.
- The goal is to help artists understand and use shaders through a tutorial on basic shader concepts and translating shader logic into Softimage.
CS 354 Object Viewing and RepresentationMark Kilgard
- The document summarizes a lecture on viewing and representing 3D objects in computer graphics. It discusses representing objects as triangle meshes and storing vertex data in arrays indexed by triangle lists. It also covers transforms like glFrustum and gluLookAt for viewing, and examples of modeling transforms.
- Common ways to represent 3D objects include procedural, explicit polygon meshes, and implicit surfaces. Triangle meshes stored with unique vertex positions and triangle indices are popular due to efficiency and compatibility with OpenGL/GPU rendering.
- The lecture also covered projection transforms, modeling transforms, lighting, and "look at" camera positioning for 3D viewing. Next lecture will discuss mesh properties and OpenGL rendering details.
OpenGL ES 1.1 is the 3D graphics API used by the iPhone and while it is extremely powerful it can often be very intimidating to the beginner. One of the main issues is that while there is a great deal of documentation and tutorials for OpenGL like the “Red Book” and other sources online there seem to be very few available resources for Open GL ES. This session will introduce the concepts of developing with OpenGL ES 1.1 and demonstrate them via sample code.
The document discusses building 2D and 3D games with Ruby using low-level APIs. It covers topics like building 2D games with Ruby SDL, building 3D games with Ruby OpenGL, and whether Ruby is a legitimate player in the gaming space. Examples provided include sample code for sprites, events, and extensions to interface with C libraries for graphics and game development.
The document discusses building 2D and 3D games with Ruby using low-level APIs. It covers topics like building 2D games with Ruby SDL, building 3D games with Ruby OpenGL, and whether Ruby is a legitimate player in the gaming space. Examples provided include sample code for sprites, events, and extensions to interface with C libraries for graphics and game development.
This document discusses various techniques and optimizations from Apple's GameplayKit framework. It begins by introducing GameplayKit and explaining that it is used to develop gameplay mechanics rather than rendering. Several techniques are then presented as "Gems" including using GKRandomSource for shuffling arrays, GKRTree for performant visual searches, GKPerlinNoiseSource for natural randomness, and using GKObstacleGraph for pathfinding around obstacles. Links are provided at the end for further information on GameplayKit and related algorithms.
This document discusses image enhancement techniques in the spatial domain. It introduces image enhancement and the spatial domain, which refers to direct pixel-level manipulation of the image plane. Various spatial domain operators and transformations are described, including gray-level transformations like negatives, log transformations, and power laws. Piecewise-linear transformations like thresholding, slicing, and bitplane slicing are also covered. The document discusses arithmetic, logic, and other operations that can be applied on sets of images and pixels. Histogram processing techniques like equalization are explained.
The document discusses several OpenGL functions and concepts related to setting up the coordinate system and rendering 3D objects. It explains how functions like glOrtho, glMatrixMode, glTranslate, glRotate, and glScale are used to apply transformations to the modelview and projection matrices. It also covers setting the viewport and world window. Finally, it provides details on functions like glutSolidSphere and glutWireCube that can be used to render basic 3D shapes.
The Day You Finally Use Algebra: A 3D Math PrimerJanie Clayton
This document provides an overview of various math and programming concepts used for graphics. It begins with an introduction to linear algebra and how it allows performing actions on multiple values simultaneously through matrices. It then discusses trigonometry and how triangles are used as a foundation for 3D graphics. Finally, it shares code for a fragment shader that simulates refraction through a sphere to demonstrate these concepts in action.
The document describes algorithms for 2D transformations in computer graphics using C programming language. It discusses functions for translation, rotation, and scaling of 2D objects.
The algorithms take input for the type of transformation, parameters for the transformation, and coordinates of the 2D object. Translation simply adds the translation offsets to the x and y coordinates. Rotation rotates the object around the x-axis or y-axis based on the angle input. Scaling multiplies the x and y coordinates by respective scaling factors.
The transformed object is displayed on the graphics screen. Thus basic 2D transformations like translation, rotation, scaling are implemented through these algorithms in C.
This document describes implementing various attributes of lines, circles, and ellipses using the C programming language. It includes the following:
1. The aim is to implement different attributes of lines, circles, and ellipses using C.
2. The algorithm involves including necessary packages, declaring line and fill types as character variables, using setlinestyle and line functions to draw lines with different styles, and using ellipse functions to draw circles and ellipses with different fill styles.
3. The program implements the algorithm by initializing graphics, clearing the screen, outputting text to describe the attributes, using for loops to set line styles, colors, and draw lines/ellipses with varying attributes, and terminating the graphics window.
The document discusses different concepts related to clipping in computer graphics including 2D and 3D clipping. It describes how clipping is used to eliminate portions of objects that fall outside the viewing frustum or clip window. Various clipping techniques are covered such as point clipping, line clipping, polygon clipping, and the Cohen-Sutherland algorithm for 2D region clipping. The key purposes of clipping are to avoid drawing objects that are not visible, improve efficiency by culling invisible geometry, and prevent degenerate cases.
The Ring programming language version 1.10 book - Part 64 of 212Mahmoud Samir Fayed
This document discusses using RingOpenGL and RingFreeGLUT for 3D graphics development in Ring.
RingOpenGL provides bindings to the OpenGL graphics library and supports versions from OpenGL 1.1 to 4.6. RingFreeGLUT contains bindings to the FreeGLUT library for creating windows, handling events, etc.
The document provides a simple example of creating a window using RingFreeGLUT. It loads the FreeGLUT library, initializes GLUT, sets the display mode and window size/position, creates a window with a title, and enters the main loop to process events.
This document discusses feature extraction in computer vision systems. It focuses on edge and corner detection methods. Edge detection aims to locate boundaries between objects and background in images. Common approaches discussed include Sobel and Canny edge detectors, which apply first and second derivative filters to detect edges. Corner detection aims to find stable points of interest across images for tracking objects. It involves computing the eigenvalues of a matrix formed from the image gradient to identify corners.
This document discusses feature extraction in computer vision systems. It focuses on edge and corner detection methods. Edge detection techniques discussed include Sobel operators, Laplacian operators, and the Canny edge detector. Corner detection looks at the eigenvalues of the gradient matrix at each point to find locations where both eigenvalues are large and distinct. Edge and corner features extracted from images can be used for tasks like object detection and tracking.
This document discusses animations in SwiftUI. It provides an overview of basic animation properties like position, size, angle, shape, and color that can be animated. It also covers transformation animations using translation, scaling, rotation, and opacity. Different timing functions for animations like linear, easeIn, easeOut, easeInOut, and spring are demonstrated. The key concepts of Animatable and VectorArithmetic protocols that enable property animation are explained. Examples are provided to illustrate animating a star shape by modifying its edges property over time. Custom animatable modifiers like PercentageModifier are also demonstrated to create progress indicators. The document concludes by emphasizing the effective way to implement animation in SwiftUI.
This document provides instructions for drawing graphs in Pascal using the Graph unit. It discusses initializing graphics mode, calculating appropriate coordinates for the screen, and algorithms for drawing normal and polar functions. Examples are given to illustrate drawing the linear function f(x)=x and polar function f(u)=1+sin(u). The document concludes with reminding students what they need to do in exams and providing contact information for the programming group.
The document discusses image processing techniques including image derivatives, integral images, convolution, morphology operations, and image pyramids.
It explains that image derivatives detect edges by capturing changes in pixel intensity, and provides an example calculation. Integral images allow fast computation of box filters by precomputing pixel sums. Convolution is used to calculate probabilities as the sliding overlap of distributions. Morphology operations like erosion and dilation modify images based on pixel neighborhoods. Image pyramids create multiple resolution layers that aid in object detection across scales.
Similar to CG OpenGL Shadows + Light + Texture -course 10 (20)
Integrating the journey on IP protections to discuss the strategies
cope pirate issue & how to handle a non-patentability case (ex. Trademark, copyright, trade secret…)
Give some suggestions for lay inventors to prepare a patent application
Some online patent application forms & the template of RPA & PTO’s response after you sent a application
Start to the procedure on classification search (patents)
Point out reading skill & how to define the scope of a patent
Use some case studies to illustrations
Design around lawsuits for Apple against Samsung
Development online DB approaches (cloud)
Finally, show other useful information (online search) such as official gazette, trademarks …
The document discusses the Patent Prosecution Highway (PPH) program which allows applicants to use examination results from one patent office to accelerate examination in another office, and describes PPH routes including the Paris route, PCT-PPH, and expanded PPH MOTTAINAI program. It also provides an example of using search results for a mobile phone multimedia controller patent to determine relevant patent classification codes between the USPTO and JPO technical fields.
Start from overseas patent protection including
PCT (patent cooperation treaty), PPH (patent prosecution highway), CAF/CCD, new route pilot (work-sharing), tripway pilot (search sharing) …
How to audit work of patent agent/lawyer
Keep document & bill
Pass a Bar exam to show your ability
The document discusses conducting a patentability search, including why they are important, how to search patent classifications, and examples of patent infringement cases. It describes searching major patent databases like Espacenet and USPTO to find prior art and check for patent applications. Classification searches involve searching patents within a particular class and subclass. The Cooperative Patent Classification (CPC) system aims to develop a joint classification for the European and US patent offices.
Introduce the requirement #3: Novelty & the date issue on “first to file” & 1 yr law
Talk the satisfy condition on the novelty requirement
Finally, share the law recognizes 3 types of novelty (Section 102) on the part of Physical (hardware or method)
CG OpenGL 3D object representations-course 8fungfung Chen
This document discusses 3D object representations in computer graphics. It describes how regular polyhedrons like cubes can be used to represent simple 3D objects. It also discusses representing curved surfaces like spheres, ellipsoids, and tori through parametric equations in spherical or Cartesian coordinates and approximating them with polygon meshes. OpenGL and GLUT functions for drawing common 3D primitives like spheres and cones are also covered.
The document discusses 3D viewing frameworks and how to generate 3D views of objects and scenes by setting up a camera position and orientation, projecting object descriptions onto a view plane using different projection types like parallel, perspective, and oblique projections, and transforming the view for output. It also covers topics like depth cueing, aspect ratios, and the steps involved in the 3D viewing process using computer graphics.
The document discusses 2D viewing and simple animation techniques in computer graphics, including how to define a viewing region, perform viewing transformations, construct basic animations using techniques like double buffering and periodic motion, and manage frame rates for smooth animation playback. It also provides OpenGL code examples for tasks like setting the viewport and scaling images.
This document discusses vectors, coordinate systems, and geometric transformations that are fundamental concepts in computer graphics. It provides examples of different coordinate systems and how to project points from one system to another. It also explains various 2D affine transformations like translation, scaling, rotation, shearing, and reflection through transformation matrices. Homogeneous coordinates are introduced as a technique to represent 2D points as 3D homogeneous coordinates to allow for general linear transformations.
This document discusses various topics related to computer graphics and input devices, including:
1. It provides an overview of polar coordinates and how to convert between polar and Cartesian coordinates.
2. It describes different input device modes including request, sample, and event mode and provides examples of each.
3. It covers color information and graphics functions in OpenGL related to color, including color tables, pixel arrays, and color functions.
4. It discusses additional graphics functions in OpenGL related to points, lines, polygons and filling algorithms.
This document discusses various topics related to computer graphics and video processing, including coordinate systems, line drawing algorithms, circle generation, polygon filling, and 3D modeling. It provides explanations of techniques like Bresenham's line algorithm, the midpoint circle algorithm, scanline polygon filling, and boundary representation for 3D objects. Examples are given for 2D and 3D coordinate systems, parallel line drawing, concave polygon splitting, inside-outside testing, and representing adjacent polygon surfaces.
User stories are estimated in story points to plan project timelines. Story points are a relative unit used to estimate complexity rather than time. The team estimates stories together by first independently assigning points, then discussing to converge on a shared estimate. Velocity is calculated based on the number of points completed in an iteration to predict future capacity. Pair programming may impact velocity but not the story point estimates themselves. Estimates should consider the story complexity and effort from the team perspective rather than individuals.
Introduce gathering stories technologies as user interview, questionnaires, observation & story-writing workshops.
Share some guidelines and real cases about to make good stories as size the story to the horizon & including user roles in the stories.
The document discusses six attributes for writing good user stories for agile software development: independent, negotiable, valuable, estimatable, small, and testable. It provides examples and methods for ensuring stories have these attributes, such as splitting large compound stories into smaller individual stories, adding notes for negotiation, and making stories focused on functionality that can be tested.
Best Digital Marketing Strategy Build Your Online Presence 2024.pptxpavankumarpayexelsol
This presentation provides a comprehensive guide to the best digital marketing strategies for 2024, focusing on enhancing your online presence. Key topics include understanding and targeting your audience, building a user-friendly and mobile-responsive website, leveraging the power of social media platforms, optimizing content for search engines, and using email marketing to foster direct engagement. By adopting these strategies, you can increase brand visibility, drive traffic, generate leads, and ultimately boost sales, ensuring your business thrives in the competitive digital landscape.
Practical eLearning Makeovers for EveryoneBianca Woods
Welcome to Practical eLearning Makeovers for Everyone. In this presentation, we’ll take a look at a bunch of easy-to-use visual design tips and tricks. And we’ll do this by using them to spruce up some eLearning screens that are in dire need of a new look.
Explore the essential graphic design tools and software that can elevate your creative projects. Discover industry favorites and innovative solutions for stunning design results.
1. Shadows + Light
+Texture
Chen Jing-Fung (2006/12/15)
Assistant Research Fellow,
Digital Media Center,
National Taiwan Normal University
Ch10: Computer Graphics with OpenGL, 3rd ed., Hearn & Baker
Ch6: Interactive Computer Graphics, 3rd ed., Addison-Wesley
Ch7: Interactive Computer Graphics, 3rd ed., Addison-Wesley
2. Outline
• How to construct an object's shadow in a scene
• Walking a camera through a scene
• Several kinds of light sources
3. Shadows
• Creating simple shadows is an interesting application of projection matrices
– Shadows are not geometric objects in OpenGL
– Shadows make images more realistic and give many visual cues to the spatial relationships among objects in a scene
4. How to create an object's shadow
• Start from a view point
• A light source is also required (possibly an infinitely distant light)
– If the light source is at the center of projection, there are no visible shadows (the shadows fall behind the objects)
5. Polygon's shadow
• Consider the shadow generated by a point source at (xl, yl, zl)
– Assume the shadow falls on the surface y = 0
– Then the shadow polygon is related to the original polygon
• each shadow vertex corresponds to a vertex of the original polygon
6.
• Find a suitable projection matrix and use OpenGL to compute the vertices of the shadow polygon
– (x, y, z) in space is projected to (xp, yp, zp) in the projection plane
– Characteristics:
• all projectors pass through the light position (the origin, once the light has been translated there), and the projection plane is perpendicular to the y-axis
7.
• With the light at the origin, shadow points are projected onto the plane y = −d
• Viewing the x–y plane, similar triangles give
  xp = x / (−y/d),  yp = −d
• Likewise in the z–y plane,
  zp = z / (−y/d),  yp = −d
8. Homogeneous coordinates
• In homogeneous coordinates the projection is a single matrix multiply:

  [x']   [1    0   0  0] [x]
  [y'] = [0    1   0  0] [y]
  [z']   [0    0   1  0] [z]
  [w']   [0 −1/d   0  0] [1]

  so after the perspective division, xp = x / (−y/d), yp = −d, zp = z / (−y/d).

Perspective projection matrix = shadow projection matrix; the light can be moved by design:

GLfloat light[3] = {0.0, 10.0, 0.0};
GLfloat m[16];
light[0] = 10.0*sin((6.28/180.0)*theta);
light[2] = 10.0*cos((6.28/180.0)*theta);
for(i = 0; i < 16; i++) m[i] = 0.0;
m[0] = m[5] = m[10] = 1.0;
m[7] = -1.0/light[1];
9. Orthogonal view with
clipping box
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
/* set up standard orthogonal view with clipping */
/* box as cube of side 2 centered at origin */
glMatrixMode (GL_PROJECTION);
glLoadIdentity ();
glOrtho(-2.0, 2.0, -2.0, 2.0, -5.0, 5.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(1.0,1.0,1.0,0.0,0.0,0.0,0.0,1.0,0.0);
// view-up vector along the y-axis: (0.0, 1.0, 0.0)
10. Polygon & its shadow
/* define unit square polygon */
glColor3f(1.0, 0.0, 0.0);/* set drawing/fill color to red*/
glBegin(GL_POLYGON);
glVertex3f(…); …
glEnd();
glPushMatrix(); //save state
glTranslatef(light[0], light[1],light[2]); //translate back
glMultMatrixf(m); //project
glTranslatef(-light[0], -light[1],-light[2]); //return origin
//shadow object
glColor3f(0.0,0.0,0.0);
glBegin(GL_POLYGON);
glVertex3f(…);…
glEnd();
glPopMatrix(); //restore state
Question: what determines the size difference between the original polygon and its shadow?
11. Special key parameter
void SpecialKeys(int key, int x, int y){
  if(key == GLUT_KEY_UP){
    theta += 2.0;
    if( theta > 360.0 ) theta -= 360.0; // keep theta within [0, 360)
  }
  if(key == GLUT_KEY_DOWN){
    theta -= 2.0;
    if( theta < 0.0 ) theta += 360.0;
  }
  glutPostRedisplay();
}
demo
12. How to design a walking object?
• Walking direction?
• Viewer (camera) parameters
• Reshape/projection callback
13. Viewer (camera) moving (1)
• The viewer moves the camera in a scene by pressing the x, X, y, Y, z, Z keys on the keyboard
void keys(unsigned char key, int x, int y){
if(key == ‘x’) viewer[0] -= 1.0;
if(key == ‘X’) viewer[0] += 1.0;
if(key == ‘y’) viewer[1] -= 1.0;
if(key == ‘Y’) viewer[1] += 1.0;
if(key == ‘z’) viewer[2] -= 1.0;
if(key == ‘Z’) viewer[2] += 1.0;
glutPostRedisplay(); }
Walking in a scene.
What problem occurs if the object walks far away?
14. Viewer (camera) moving (2)
• The gluLookAt function provides a
simple way to reposition and reorient
the camera
void display(void){
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glLoadIdentity();
gluLookAt(viewer[0],viewer[1],viewer[2],0.0,0.0,0.0,0.0,1.0,0.0);
/* rotate cube */
glRotatef(theta[0], 1.0, 0.0, 0.0);
glRotatef(theta[1], 0.0, 1.0, 0.0);
glRotatef(theta[2], 0.0, 0.0, 1.0);
colorcube();
glFlush();
glutSwapBuffers(); }
14
15. Viewer (camera) moving (3)
• Invoke glFrustum in the reshape
callback to specify the camera lens
void myReshape(int w, int h){
glViewport(0, 0, w, h);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
if(w <= h) glFrustum(-2.0, 2.0, -2.0 * (GLfloat) h / (GLfloat) w,
    2.0 * (GLfloat) h / (GLfloat) w, 2.0, 20.0);
else glFrustum(-2.0 * (GLfloat) w / (GLfloat) h,
    2.0 * (GLfloat) w / (GLfloat) h, -2.0, 2.0, 2.0, 20.0);
glMatrixMode(GL_MODELVIEW); }
demo
15
16. Light & surface
• The light reflection model is very complicated
– It describes how light from a source is reflected by an actual surface
– It may depend on many factors
• the light source's direction, the observer's eye position, and the normal to the surface
– Surface characteristics also matter, such as roughness or color … (surface texture)
17. Why do shading?
• Set a sphere model to cyan
– Result
• the sphere looks like a flat circle
• We want to see a sphere
– Material + light + viewer + surface orientation
• the material's color can then be designed to change gradually across the surface
19. Point light sources
• The simplest model for an object and its light source
– Light rays are generated along radially diverging paths from the single-color source position
• the light source emits a single color
– The light source is small compared to the object
– We can use an illumination model to calculate the light direction to a selected position on the object's surface
20. Infinitely distant light sources
• A large light source (the sun) that is very far from a scene behaves like a point light source
• Such a large distant source differs only slightly from a point light source
– when a point light source is remote, the object is illuminated from only one direction
– in contrast, the sun is so far away that it illuminates everything in the scene from that single direction
21. Radial intensity attenuation (1)
• As radiant energy from a light source travels outward through space, its amplitude at any distance dl from the source is decreased by the factor 1/dl²
– Energy at the source = 1; Energy_light = 1/dl² at distance dl, and 1/dl'² at a farther distance dl'
22. Radial intensity attenuation (2)
• General form covering both the infinitely distant light source and the point light source:

  f_l,radatten = 1.0                         if source is at infinity
  f_l,radatten = 1 / (a0 + a1·dl + a2·dl²)   if source is local
23. Directional light sources and spotlight effects
• A local light source can easily be modified to produce a directional, or spotlight, beam of light
– the unit light-direction vector Vlight defines the axis of a light cone, and the angle θl defines the angular extent of the circular cone
24. Spotlight effects
• Let Vlight denote the unit vector in the light-source direction and Vobj the unit vector from the light position toward an object position
– then Vobj · Vlight = cos α, where α is the angle between the object direction and the cone axis
– if Vobj · Vlight < cos θl, the object vertex is outside the light cone
– the vertex is inside the cone when cos α ≥ cos θl, with 0° < θl ≤ 90°
25. Angular intensity attenuation
• For a directional light source, we can attenuate the light intensity angularly about the source as well as radially out from the point-source position
– light intensity decreases as we move farther from the cone axis
– a common angular intensity-attenuation function for a directional light source is
  f_angatten(φ) = cos^al φ
26. Attenuation function
f_angatten(φ) = cos^al φ
– al (the attenuation exponent) is assigned a positive value
• the greater al is, the faster the intensity falls off
– the angle φ is measured from the cone axis
• along the cone axis, φ = 0° and f_angatten(φ) = 1.0
27. Combining the different light sources
• To determine the angular attenuation factor along a line from the light position to a surface position in a scene:

  f_l,angatten = 1.0                   if the source is not a spotlight
  f_l,angatten = 0.0                   if Vobj · Vlight = cos α < cos θl
  f_l,angatten = (Vobj · Vlight)^al    otherwise
28. Surface lighting effects
• Besides receiving light from a source, an object can also reflect light
– surfaces that are rough tend to scatter the reflected light
• the more surface facets an object has, the more directions the reflected light can take
29. Specular reflection
• Some of the reflected light is concentrated into a highlight called a specular reflection
– the lighting effect is most noticeable on shiny surfaces
30. Summary: light & surface
• Surface lighting effects are produced by a combination of illumination from light sources and reflections from other surfaces
– a surface that is not directly exposed to a light source may still be visible due to light reflected from nearby objects
– ambient light is the illumination effect produced by light reflected from the various surfaces in the scene
31. Homework
• Walking in a scene
– Hint: object walking, or walking above a floor
– Example: color cube

void polygon(int a, int b, int c, int d){
  glBegin(GL_POLYGON);
  glColor3fv(colors[a]);
  glNormal3fv(normals[a]);
  glVertex3fv(vertices[a]);
  glColor3fv(colors[b]);
  glNormal3fv(normals[b]);
  glVertex3fv(vertices[b]);
  glColor3fv(colors[c]);
  glNormal3fv(normals[c]);
  glVertex3fv(vertices[c]);
  glColor3fv(colors[d]);
  glNormal3fv(normals[d]);
  glVertex3fv(vertices[d]);
  glEnd(); }
demo
34. Simple buffer mapping
• How do we design a program that can both write into and read from buffers?
• Generally, two factors make these operations different from reading and writing ordinary computer memory:
– we rarely read or write a single pixel or bit
– rather, we read and write rectangular blocks of pixels (called bit blocks)
35. Example: read & write
(figure: a monitor displaying "I love OpenGL")
• The program follows the user's control: the user may fill a polygon, type some text, or clear the window
• Therefore, both the hardware and software support a set of operations
– the set of operations works on rectangular blocks of pixels
• this procedure is called bit-block transfer
• these operations are raster operations (raster-ops)
36. Bit-block transfer (bitblt)
• Take an n×m block from the source buffer and copy it into another buffer (the destination buffer):
  Write_block(source, n, m, x, y, destination, u, v);
• source and destination name the buffers
• the n×m source block whose lower-left corner is at (x, y) is copied to location (u, v) in the destination buffer
• with bitblt, a single function call alters the destination block
(figure: an n×m source block copied into the destination frame buffer)
37. Raster operations (raster-ops)
• A useful mode is exclusive OR (XOR): d' = d ⊕ s
– the source pixel s is combined with the destination pixel d read from the color buffer, and the result d' is written back

Truth table:
s d | d'
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0

glEnable(GL_COLOR_LOGIC_OP);
glLogicOp(GL_XOR);
39. Drawing erasable lines
• Why the line is erasable
– in XOR mode, the line color and background color are combined reversibly
• How to do it
– first, use the mouse to get the first endpoint and store it:
  xm = x/500.; ym = (500-y)/500.;
– then get the second point and draw the line segment in XOR mode:
  xmm = x/500.; ymm = (500-y)/500.;
  glColor3f(1.0, 0.0, 0.0);
  glLogicOp(GL_XOR);
  glBegin(GL_LINES);
  glVertex2f(xm, ym);
  glVertex2f(xmm, ymm);
  glEnd();
  glLogicOp(GL_COPY);
  glFlush();
40. Texture mapping
• Texture mapping describes how a pattern is mapped onto a surface
• Ways to describe a texture: parametric textures, computed textures, or a regular pattern
Ch7: Interactive Computer Graphics, 3rd ed., Addison-Wesley
41. Texture elements
• Texture elements (texels) can be stored in an array T(s,t)
– this array represents a continuous rectangular 2D texture pattern
– the texture coordinates (s, t) are the independent variables
• with no loss of generality, scale (s, t) to the interval (0, 1)
42. Texture maps (1)
• The texture is mapped onto a geometric object, which is then mapped to screen coordinates for display
– the object lives in spatial coordinates [(x,y,z) or (x,y,z,w)], the texture elements in (s,t)
• The mapping functions:
  x = x(s,t), y = y(s,t), z = z(s,t), w = w(s,t)
• The inverse functions:
  s = s(x,y,z,w), t = t(x,y,z,w)
43. Texture maps (2)
• If the geometric object is a (u,v) parametric surface (e.g. a sphere)
– the object's coordinates (x,y,z) correspond to parameters (u,v)
– the parametric coordinates (u,v) can in turn be mapped to texture coordinates
– also consider the projection from world coordinates to screen coordinates:
  xs = xs(s,t), ys = ys(s,t)
44. Texture maps (3)
First, determine the map from texture coordinates to geometric coordinates: the mapping from the texture rectangle to an arbitrary region in 3D space.
Second, account for the nature of the rendering process, which works pixel by pixel.
Third, we can use texture maps to vary the object's shape.
Ch7: Interactive Computer Graphics, 3rd ed., Addison-Wesley
45. Linear mapping function (1)
• 2D coordinate map from texture coordinates (s,t) over [smin, smax] × [tmin, tmax] to parametric coordinates (u,v) over [umin, umax] × [vmin, vmax]:

  u = umin + (s − smin) / (smax − smin) · (umax − umin)
  v = vmin + (t − tmin) / (tmax − tmin) · (vmax − vmin)
46. Linear mapping function (2)
• Cylindrical coordinates, with u and v ranging over (0,1):

  x = r cos(2πu)
  y = r sin(2πu)
  z = v/h

  ⇒ s = u, t = v
47. Linear mapping function (3)
• Texture mapping with a box: the six faces (Front, Back, Left, Right, Top, Bottom) are unfolded as a cross in the (s,t) texture plane
48. Pixel and geometric pipelines
• OpenGL's texture maps rely on its pipeline architecture
– vertices flow through geometric processing into rasterization and on to the display
– pixels flow through pixel operations into the same rasterization stage
49. Texture mapping in OpenGL (1)
• OpenGL contains the functionality to map 1D and 2D textures onto one- through four-dimensional graphical objects
• The key issue in texture mapping:
– the output of the pixel pipeline must be mapped onto geometric primitives
(vertices → geometric processing; pixels → pixel operations)
50. Texture mapping in OpenGL (2)
• In particular, texture mapping is done as primitives are rasterized
• This process maps 3D points to locations (pixels) on the display
• Each fragment that is generated is tested for visibility (with the z-buffer)
51. 2D texture mapping (1)
• Suppose we have a 512×512 image my_texels:
  GLubyte my_texels[512][512];
• Specify that this array is to be used as a 2D texture:
  glTexImage2D(GL_TEXTURE_2D, level, components, width, height, border, format, type, tarray);
– tarray's size matches width × height
– components is the number (1-4) of color components, e.g. 4 (RGBA) or 3 (RGB)
– format is GL_RGBA or GL_RGB accordingly
– in processor memory, tarray's pixels are moved through the pixel pipeline (not into the frame buffer)
– the parameters level and border give us fine control
Ex: glTexImage2D(GL_TEXTURE_2D, 0, 3, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, my_texels);
52. 2D texture mapping (2)
• Enable texture mapping:
  glEnable(GL_TEXTURE_2D);
• Specify how the texture is mapped onto a geometric object: texture coordinates (s,t) in [0,1]² address the 512×512 texel array
  glTexCoord2f(s, t); glVertex3f(x, y, z);
  glBegin(GL_QUADS);
  glTexCoord2f(0.0, 0.0); glVertex3f(x1, y1, z1);
  …
  glEnd();
53. 2D texture mapping (3)
• Mapping texels to pixels
– magnification: one texel covers many screen pixels; minification: many texels map to one pixel
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
54. Texture objects
• Texture generation in the frame buffer
– a fragment passes through texture unit 0, texture unit 1, texture unit 2, … before reaching the frame buffer
55. Environmental maps
• Mapping of the environment: an object in the environment is projected onto an intermediate surface, which supplies the texture T(s,t)

glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
56. The complex domain's figure
• The Mandelbrot set
– complex arithmetic:
  z1 = x1 + i·y1, z2 = x2 + i·y2
  z1 + z2 = (x1 + x2) + i(y1 + y2)
  z1·z2 = (x1x2 − y1y2) + i(x1y2 + x2y1)
  |z|² = x² + y²
– a complex recurrence z_{k+1} = F(z_k) traces a path z0 → z1 = F(z0) → z2 = F(z1) → … in the complex plane
– attractors: z_{k+1} = z_k²; the general form is z_{k+1} = z_k² + c
57. Pixels & display
• Plot the area of the complex plane centered at −0.75 + i·0.0
• For each pixel, iterate the recurrence; if |z_k| > 4, break
• Map the iteration count to the range 0~255 and store it in Rarray
demo