This document provides an introduction to OpenGL with code samples. It discusses the history of OpenGL, its versions, philosophy, functionality, usage, and conventions, along with basic concepts such as the rendering pipeline and primitives, and covers environment setup for using OpenGL with the Windows SDK and GLUT. The document serves as a high-level overview of OpenGL for developers.
The document provides an introduction and overview of OpenGL graphics programming. It discusses that OpenGL is a 3D graphics rendering API that is hardware independent and portable. The document outlines the OpenGL rendering pipeline and libraries. It describes that OpenGL is not a language itself but makes calls to functions from libraries like GLUT, GLU, and OpenGL in a programming language like C/C++. The basic framework of an OpenGL program and functions for initializing OpenGL state, registering callbacks, and the event processing loop are also covered.
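The callback registration and event-loop model described above can be sketched in a few lines. This is an illustrative mock of the GLUT style, not real GLUT calls; the `register`/`dispatch` names are invented for the sketch:

```python
# Illustrative sketch of the GLUT-style callback model (not actual GLUT calls):
# the application registers handlers, then hands control to an event loop.

callbacks = {}

def register(event, handler):
    """Mimics glutDisplayFunc / glutKeyboardFunc: store one handler per event."""
    callbacks[event] = handler

def dispatch(event, *args):
    """The event loop invokes the registered handler when the event occurs."""
    handler = callbacks.get(event)
    return handler(*args) if handler else None

log = []
register("display", lambda: log.append("redraw"))
register("keyboard", lambda key: log.append(f"key:{key}"))

# A real glutMainLoop() never returns; here we simply dispatch two sample events.
dispatch("display")
dispatch("keyboard", "q")
```

The essential idea carries over directly: the application never calls its drawing code itself, it only tells the toolkit which function to run when a redraw or input event arrives.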
Mrinmoy Dalal is giving a presentation about computer graphics and OpenGL. OpenGL is a cross-platform API for rendering 2D and 3D graphics. It can interface with the graphics hardware of various platforms and has good support for modern graphics hardware. OpenGL uses a graphics pipeline that processes vertex and fragment data to render 3D scenes to the framebuffer.
OpenGL is a cross-platform API for rendering 3D graphics. It consists of a pipeline that processes vertices, primitives, fragments and pixels. Key stages include vertex processing, tessellation, primitive processing, rasterization, fragment processing and pixel processing. OpenGL uses libraries like GLUT and GLEW and works across Windows, Linux and Mac operating systems.
OpenGL is a cross-language API for 2D and 3D graphics rendering on the GPU. It was created by Silicon Graphics in 1992 and is now maintained by the Khronos Group. OpenGL provides an interface between software and graphics hardware to perform tasks like rendering, texture mapping, and shading. Developers write OpenGL code that gets translated into GPU commands by a driver for the specific graphics card. This allows hardware-accelerated graphics to be used across many platforms and programming languages.
OpenGL is a standard graphics library used to render 2D and 3D graphics. It provides basic drawing primitives like points, lines, and polygons. OpenGL follows a graphics pipeline where commands are processed through various stages including transformations, rasterization, and finally writing to the framebuffer. Programmers use OpenGL by issuing commands to specify geometry and settings, which are then rendered to the screen independent of hardware.
Lecture 6: Introduction to OpenGL and GLUT
The document discusses OpenGL and its supporting libraries GLUT and GLU. It explains that OpenGL was developed by Silicon Graphics to provide 3D graphics rendering and is now an industry standard. It focuses on device-independent 3D rendering but additional libraries are needed for window management, user interaction, and more complex graphics objects. The document also outlines how OpenGL programs work in an event-driven model using callbacks.
The document provides an overview of OpenGL and computer graphics concepts. It discusses the basics of computer graphics including applications, the graphics pipeline, primitives like vertices and polygons, attributes like color, and an example of drawing a shaded triangle. The graphics pipeline involves steps like vertex operations, primitive assembly, rasterization, and fragment operations. Primitives are specified using vertices and attributes remain in effect until changed. The OpenGL API is used to program 3D graphics and interfaces with the graphics driver.
Open Graphics Library (OpenGL) is a cross-language, cross-platform application programming interface (API) for rendering 2D and 3D vector graphics. The API is typically used to interact with a graphics processing unit (GPU), to achieve hardware-accelerated rendering.
OpenGL is a cross-language, cross-platform API for rendering 2D and 3D graphics via hardware acceleration. It uses shaders and programmable pipelines to process vertices and fragments. The rendering pipeline involves transforming vertices, assembling triangles, rasterization, applying textures, testing fragments, and writing pixels to the framebuffer. Key concepts include transformation matrices, lighting, and the vertex and fragment shaders that operate on data at each pipeline stage.
Takes the reader through the various components of windowing systems, and how to develop and benchmark various Graphics applications using OpenGL and other toolsets. Also includes a Cheatsheet that covers various terminologies used in the Graphics world.
OpenGL is a cross-language, cross-platform API for rendering 2D and 3D graphics. It has been in development for over 16 years and is overseen by the Khronos Group. OpenGL's most recent release introduced some radical changes, moving to a fully programmable pipeline in which developers must process vertex attributes in shaders rather than relying on fixed-function features. Related libraries like GLUT and SDL provide windowing, input, and multimedia capabilities to complement the 3D rendering features in OpenGL.
The document provides an introduction to OpenGL programming. It discusses that OpenGL is a hardware-independent API for 3D graphics. It originated from SGI's GL library and was developed as the cross-platform OpenGL standard. The document outlines OpenGL's core functionality and architecture, as well as common libraries like GLUT and GLU. It provides examples of basic OpenGL programs and concepts like the rendering pipeline, coordinate systems, and event handling.
Color CRT Monitors in Computer Graphics
1. Color CRT displays use phosphors and one of two methods - beam penetration or shadow mask - to generate colors.
2. The beam penetration method uses red and green phosphors and electron beam speed to produce four colors, while the shadow mask method uses three color phosphors and electron beam deflection through a shadow mask to generate millions of colors.
3. Flat panel displays like LCDs and plasma panels provide alternatives to CRTs with reduced size and power use, though early types had limitations in features like color capability.
Graphics software acts as an intermediary between application programs and graphics hardware, supporting output primitives and interaction devices. There are two main types of graphics software: general programming packages that provide extensive graphics functions for use in languages like C and FORTRAN, including functions for shapes, colors, and transformations; and special-purpose applications packages that are designed for non-programmers to generate displays without programming knowledge, such as painting and CAD programs.
Gouraud shading and Phong shading are two common techniques for interpolating shading across polygon surfaces in 3D graphics. Gouraud shading linearly interpolates intensities across polygon surfaces, improving on constant shading but still resulting in Mach bands or streaks. Phong shading interpolates normal vectors and applies lighting models at each surface point, producing more realistic highlights but requiring more computation than Gouraud shading. Fast Phong shading approximates calculations to speed up rendering with Phong shading at the cost of some accuracy.
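The difference between the two techniques can be shown numerically along one polygon edge. This is a minimal 2D sketch (invented vectors, diffuse term only): Gouraud lights the vertices and interpolates the resulting intensities, while Phong interpolates the normal and lights the midpoint, which lets it catch a highlight Gouraud misses.

```python
import math

def lambert(normal, light):
    # Diffuse intensity by Lambert's cosine law: max(0, N.L) with normalization.
    nx, ny = normal
    lx, ly = light
    n = math.hypot(nx, ny)
    l = math.hypot(lx, ly)
    return max(0.0, (nx * lx + ny * ly) / (n * l))

def lerp(a, b, t):
    return a + (b - a) * t

light = (0.0, 1.0)
n0, n1 = (0.6, 0.8), (-0.6, 0.8)   # vertex normals at the two edge endpoints

# Gouraud: light the vertices, then interpolate the resulting intensities.
gouraud_mid = lerp(lambert(n0, light), lambert(n1, light), 0.5)

# Phong: interpolate the normal first, then apply the lighting model there.
nm = (lerp(n0[0], n1[0], 0.5), lerp(n0[1], n1[1], 0.5))
phong_mid = lambert(nm, light)
```

With these normals the interpolated normal at the midpoint points straight at the light, so Phong yields the full intensity 1.0 while Gouraud interpolates to only 0.8: exactly the "missed highlight" behavior described above.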
Texture mapping is a method for defining high frequency detail, surface texture, or color information on a computer-generated graphic or 3D model. Its application to 3D graphics was pioneered by Edwin Catmull in 1974.
This presentation was made by me and my partner Syed Maisam Ali Naqvi for our Graphics course, under the supervision of Sir Irfan Kandhro of Sindh Madressatul Islam University. It covers OpenGL and its basics. We hope you find it worthwhile and that it helps you understand OpenGL.
1. THE USER DIALOGUE
2. INPUT OF GRAPHICS DATA
3. INTERACTIVE PICTURE CONSTRUCTION TECHNIQUES
4. THREE-DIMENSIONAL CONCEPTS
5. 3D DISPLAY METHODS
6. 3D PACKAGES
Comprehensive coverage of fundamentals of computer graphics.
3D Transformations
Reflections
3D Display methods
3D Object Representation
Polygon surfaces
Quadric Surfaces
This document discusses algorithms for hidden surface removal in 3D computer graphics. It describes two main classifications of algorithms - object space and image space. It then provides details on various algorithms including Painter's algorithm (object space), Z-buffer algorithm (image space), and Warnock's area subdivision algorithm. The key aspects and approaches of each algorithm are summarized.
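The core of the Z-buffer algorithm fits in a few lines: keep, per pixel, the depth of the nearest surface seen so far, and overwrite the color only when a closer fragment arrives. A minimal sketch (the `paint` helper and its inputs are invented for illustration; `surfaces` stands in for rasterized fragments):

```python
def paint(width, height, surfaces):
    """Z-buffer sketch: keep the nearest depth per pixel (smaller z = closer)."""
    INF = float("inf")
    zbuf = [[INF] * width for _ in range(height)]     # depth buffer
    frame = [[None] * width for _ in range(height)]   # color buffer
    for color, pixels in surfaces:                    # pixels: (x, y, z) fragments
        for x, y, z in pixels:
            if z < zbuf[y][x]:                        # the depth test
                zbuf[y][x] = z
                frame[y][x] = color
    return frame

# Two overlapping surfaces; "red" is closer where they overlap.
frame = paint(2, 1, [("blue", [(0, 0, 0.8), (1, 0, 0.8)]),
                     ("red",  [(0, 0, 0.3)])])
```

Note that, unlike Painter's algorithm, no sorting of surfaces is needed; the per-pixel depth comparison resolves visibility in any submission order.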
There are two main types of projections: perspective and parallel. In perspective projection, lines converge to a single point called the center of projection, creating the illusion of depth. In parallel projection, lines remain parallel as they are projected onto the view plane. Perspective projection is more realistic but parallel projection preserves proportions. Perspective projections can be one-point, two-point, or three-point depending on the number of principal vanishing points. Orthographic projections use perpendicular lines while oblique projections are at an angle. Common parallel projections include isometric, dimetric, trimetric, cavalier and cabinet views.
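The two projection families reduce to simple formulas for a center of projection at the origin and a view plane at distance d (a sketch under those assumptions; function names are ours):

```python
def orthographic(x, y, z):
    # Parallel (orthographic) projection onto the view plane: depth is dropped,
    # so proportions are preserved regardless of distance.
    return (x, y)

def perspective(x, y, z, d=1.0):
    # Similar triangles: project along lines converging at the origin onto the
    # plane z = d. Farther points shrink, which produces foreshortening.
    return (d * x / z, d * y / z)
```

For example, a point with x = 2 projects to x = 1 at depth 2 but only x = 0.5 at depth 4 under perspective, while orthographic projection leaves it at x = 2 in both cases.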
This document contains questions and answers about computer graphics. It begins by defining computer graphics as pictures and movies created using computers, usually referring to image data created with specialized graphics hardware and software. Applications of computer graphics mentioned include computer-aided design, presentation graphics, computer art, entertainment, education and training, visualization, image processing, and graphical user interfaces. Key terms like pixel, resolution, aspect ratio, and persistence are also defined. The document then discusses video display devices and CRTs, and explains raster scan and random scan display systems. Color CRTs using beam penetration and shadow mask techniques are also covered.
Computer graphics involves the creation and manipulation of images on a computer using geometric objects and their representations. It has many applications including computer-aided design, presentation graphics, computer art, entertainment, education and training, scientific visualization, image processing, and graphical user interfaces. Graphics packages provide standard functions and tools for working with geometric objects and images.
Use a game engine to create a video game. Its reusable components provide the general functionality. Define resources and building blocks.
What are the key elements of a game engine?
1. The presentation discusses different types of projections including parallel and perspective projections. Parallel projection involves projectors that are parallel, while perspective projection involves projectors that converge at a point.
2. Within parallel projection, there are orthographic and oblique projections. Orthographic projection uses perpendicular projectors, while oblique projection uses projectors that are not perpendicular. Specific types of oblique projection include cavalier and cabinet.
3. The presentation also derives the equations for parallel and oblique projections. It compares parallel and perspective projections, noting differences in properties like size preservation and foreshortening.
This document discusses lighting and shading models in computer graphics. It explains that lighting has two main components - the lighting model which calculates intensity at surface points, and surface rendering methods like ray tracing. Common lighting models include ambient, diffuse, and specular components. The diffuse component follows Lambert's cosine law, while the specular component follows the law of reflection and is modeled with the Phong reflection model. Together these components make up the lighting equation, which is approximated using shading techniques like constant, Gouraud, and Phong shading to assign colors to pixels.
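The ambient + diffuse + specular equation can be written out directly. A minimal sketch for a single unit-intensity light (the coefficient values ka, kd, ks and the shininess exponent are arbitrary example choices):

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(N, L, V, ka=0.1, kd=0.6, ks=0.3, shininess=16):
    """Ambient + Lambert diffuse + Phong specular for one light of intensity 1.
    N = surface normal, L = direction to light, V = direction to viewer."""
    N, L, V = normalize(N), normalize(L), normalize(V)
    diffuse = max(0.0, dot(N, L))                      # Lambert's cosine law
    # Mirror reflection of L about N: R = 2(N.L)N - L
    R = tuple(2 * dot(N, L) * n - l for n, l in zip(N, L))
    specular = max(0.0, dot(R, V)) ** shininess        # Phong specular term
    return ka + kd * diffuse + ks * specular
```

With the light and viewer both directly above the surface the terms sum to ka + kd + ks = 1.0; at grazing light incidence only the ambient floor ka remains, which is what keeps unlit surfaces from going fully black.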
This document provides an introduction to OpenGL concepts including state machines, primitives, buffers, shaders, attributes and uniforms, and drawing. It explains key OpenGL functions and terms like rendering settings, buffer objects, vertex and fragment shaders, and clearing and drawing. Examples of rendering a simple 3D scene with vertices, colors, and textures are provided.
The document provides an introduction to OpenGL graphics and drawing primitives. It discusses that OpenGL is a cross-language, cross-platform API for 2D and 3D graphics. It then demonstrates how to draw various primitives such as points, lines, triangles, and quads using the functions glBegin, glVertex, and glEnd. Specific primitive types include GL_POINTS, GL_LINES, GL_TRIANGLES, GL_QUADS, and GL_LINE_STRIP, and their usage is explained.
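The practical difference between these modes is how vertices are grouped into primitives. A small sketch of the grouping rules (plain Python bookkeeping, not actual OpenGL calls):

```python
def primitive_count(mode, n):
    """How many primitives n vertices between glBegin/glEnd would produce."""
    if mode == "GL_POINTS":
        return n                  # every vertex is its own point
    if mode == "GL_LINES":
        return n // 2             # independent segments: pairs of vertices
    if mode == "GL_LINE_STRIP":
        return max(0, n - 1)      # connected polyline: each new vertex adds a segment
    if mode == "GL_TRIANGLES":
        return n // 3             # independent triangles: triples of vertices
    if mode == "GL_QUADS":
        return n // 4             # independent quads: groups of four
    raise ValueError(f"unknown mode: {mode}")
```

For instance, six vertices yield three separate segments as GL_LINES but five connected segments as GL_LINE_STRIP; choosing the strip form reuses vertices and reduces the data submitted.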
Presented as a pre-conference tutorial at the GPU Technology Conference in San Jose on September 20, 2010.
Learn about NVIDIA's OpenGL 4.1 functionality available now on Fermi-based GPUs.
Video replay: http://nvidia.fullviewmedia.com/siggraph2012/ondemand/SS104.html
Date: Wednesday, August 8, 2012
Time: 11:50 AM - 12:50 PM
Location: SIGGRAPH 2012, Los Angeles
Attend this session to get the most out of OpenGL on NVIDIA Quadro and GeForce GPUs. Learn about the new features in OpenGL 4.3, particularly Compute Shaders. Other topics include bindless graphics; Linux improvements; and how to best use the modern OpenGL graphics pipeline. Learn how your application can benefit from NVIDIA's leadership driving OpenGL as a cross-platform, open industry standard.
Get OpenGL 4.3 beta drivers for NVIDIA GPUs from http://www.nvidia.com/content/devzone/opengl-driver-4.3.html
This document discusses OpenGL transformations. It describes how geometric objects are represented and transformed using matrices. Transformation matrices for modeling, viewing and projection are manipulated separately in OpenGL. The viewing transformation places the camera in the 3D world. Projection can be parallel (orthographic) or perspective, with perspective being more realistic. OpenGL provides functions for common viewing and projection setups. The full transformation pipeline involves clipping, perspective division, viewport mapping and rasterization to render the 3D scene to a 2D framebuffer.
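The matrix manipulation described above comes down to 4x4 homogeneous matrices composed by multiplication. A self-contained sketch (row-major layout; helper names are ours, mirroring the effect of glTranslate/glScale):

```python
def mat_mul(A, B):
    """Compose two 4x4 matrices: applying mat_mul(A, B) means B first, then A."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transform(M, p):
    """Apply M to a 3D point via homogeneous coordinates, with the final divide."""
    x, y, z = p
    v = [x, y, z, 1.0]
    out = [sum(M[i][k] * v[k] for k in range(4)) for i in range(4)]
    return tuple(c / out[3] for c in out[:3])   # perspective division by w

def translate(tx, ty, tz):
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

def scale(sx, sy, sz):
    return [[sx, 0, 0, 0], [0, sy, 0, 0], [0, 0, sz, 0], [0, 0, 0, 1]]

# Scale by 2, then translate by (1, 0, 0) - the rightmost matrix acts first,
# matching how OpenGL post-multiplies the current modelview matrix.
M = mat_mul(translate(1, 0, 0), scale(2, 2, 2))
```

Applying M to the point (1, 1, 1) gives (3, 2, 2): the scale doubles it to (2, 2, 2), then the translation shifts x by 1.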
This document contains slides from an introductory OpenGL course. It begins with an overview of OpenGL and 3D graphics concepts. It then demonstrates how to draw basic polygons like triangles and quads using OpenGL functions. It also discusses projection and camera concepts in OpenGL. The document contains code for a simple function that draws a triangle and quad as an example of drawing the first polygons in OpenGL.
The document provides an introduction to how the web works, including a brief history of the internet and protocols like IP and TCP. It describes the client-server model with browsers as clients and web servers as servers. It discusses how host name resolution works, with computers querying root servers and top-level domain servers to find the IP address for a given domain name. The document then introduces some common web technologies like HTML, CSS, JavaScript, PHP and others. It provides an overview of HTML fundamentals including tags, attributes, elements and comments.
The document provides instructions on basic animation in Maya. It discusses how to animate objects by setting keys on the timeline and editing motion curves in the graph editor. It also covers using driven keys to link the animation of one object to another and attaching objects to motion paths along curved paths to create natural arcing movement. The document is a tutorial with step-by-step guidance on animation tools and techniques in Maya.
This document discusses techniques for creating skyboxes and terrain in OpenGL graphics. It explains how to create a skydome using skybox textures and then generate terrain by mapping height data from a heightmap texture onto a grid and rendering it with additional textures. Methods are provided for scaling, translating and drawing the terrain as well as querying the heightmap to position the camera above the terrain. Advanced techniques discussed include using quad trees for efficient searching, multitexturing, adding detail textures, and raycasting against the terrain.
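Querying the heightmap to keep the camera above the terrain usually means interpolating between the four surrounding grid samples. A minimal bilinear sketch (the `terrain_height` helper is an invented name; a real implementation would also scale grid units to world units):

```python
def terrain_height(heightmap, x, z):
    """Bilinearly interpolate a height grid at fractional coordinates (x, z),
    so the camera can follow the terrain smoothly between samples."""
    x0, z0 = int(x), int(z)          # the grid cell containing the point
    fx, fz = x - x0, z - z0          # fractional position within the cell
    h00 = heightmap[z0][x0]
    h10 = heightmap[z0][x0 + 1]
    h01 = heightmap[z0 + 1][x0]
    h11 = heightmap[z0 + 1][x0 + 1]
    top = h00 * (1 - fx) + h10 * fx          # interpolate along x at z0
    bottom = h01 * (1 - fx) + h11 * fx       # interpolate along x at z0 + 1
    return top * (1 - fz) + bottom * fz      # then along z
```

Without the interpolation the camera would snap between sample heights as it crosses cell boundaries; the bilinear blend gives continuous motion over the grid.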
This document discusses 3D transformations and projections. It describes two main projection methods: parallel projection and perspective projection. Parallel projection preserves proportions but does not provide a realistic 3D representation. Perspective projection maps 3D points along converging lines to a vanishing point, resulting in foreshortening effects where objects appear smaller the farther they are from the viewing plane. The document outlines different types of parallel and perspective projections.
An illumination model, also called a lighting model and sometimes referred to as a shading model, is used to calculate the intensity of light that we should see at a given point on the surface of an object.
Surface rendering is the procedure of applying a lighting model to obtain pixel intensities for all the projected surface positions in a scene. A surface-rendering algorithm uses the intensity calculations from an illumination model to determine the light intensity at every projected pixel position for the various surfaces in the scene, and can be performed by applying the illumination model at every visible surface point.
The document discusses perspective projection in computer graphics. It begins with an overview of orthographic projection and viewing transformations. It then covers perspective viewing, including the perspective viewing volume defined by clipping planes, field of view, and different parameters used to specify a camera such as focal distance and projection matrices. It also discusses how parallel lines appear to converge in perspective projections and how perspective is used in OpenGL.
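The camera parameters mentioned above (field of view, aspect ratio, near and far clipping planes) determine a projection matrix. A sketch in the style of gluPerspective, written row-major for readability (the helper names are ours):

```python
import math

def perspective_matrix(fovy_deg, aspect, near, far):
    """gluPerspective-style projection matrix (right-handed, camera looks
    down -z; near and far are positive distances to the clipping planes)."""
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2)
    a = (far + near) / (near - far)
    b = 2 * far * near / (near - far)
    return [[f / aspect, 0,  0, 0],
            [0,          f,  0, 0],
            [0,          0,  a, b],
            [0,          0, -1, 0]]   # w = -z drives the perspective divide

def project(M, x, y, z):
    """Multiply by M and divide by w, yielding normalized device coordinates."""
    v = (x, y, z, 1.0)
    cx, cy, cz, cw = (sum(M[i][k] * v[k] for k in range(4)) for i in range(4))
    return (cx / cw, cy / cw, cz / cw)
```

A quick sanity check: points on the near plane land at NDC depth -1 and points on the far plane at +1, which is the depth range the rasterizer's z-buffer consumes.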
Instancing is a technique that allows rendering multiple instances of an object with a single draw call. It works by passing an instance ID to the vertex shader, which can then apply a unique transform per instance. Compared with traditional per-object rendering, it is faster because the shared geometry is stored once in vertex buffer objects, and it requires less memory because only the instance-specific transforms are passed per instance. An example is provided rendering many instanced spheres, with performance improving over traditional methods as the number of instances increases.
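The idea can be sketched without a GPU: one shared mesh, a per-instance attribute array, and a loop in which the instance ID selects the transform - the role the vertex shader's gl_InstanceID plays in real instanced rendering. All names here are invented for the sketch:

```python
# One shared mesh stored once, plus small per-instance data - instead of a
# full geometry copy and a separate draw call per object.
shared_triangle = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
instance_offsets = [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0)]  # per-instance attribute

def draw_instanced(mesh, offsets):
    """Mimics one instanced draw call: the loop index plays gl_InstanceID."""
    out = []
    for instance_id, (ox, oy) in enumerate(offsets):
        # In a real vertex shader this offset lookup is indexed by gl_InstanceID.
        out.append([(x + ox, y + oy) for x, y in mesh])
    return out

triangles = draw_instanced(shared_triangle, instance_offsets)
```

Three triangles are produced from a single three-vertex mesh; scaling to thousands of instances only grows the small offset array, not the geometry.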
DirectX is a collection of APIs that provides low-level access to multimedia hardware on Windows systems. It enhances graphics and audio capabilities by taking advantage of capabilities like 3D acceleration and hardware processing. DirectX includes APIs that support 3D graphics through Direct3D, audio playback and mixing through DirectSound, input from devices like controllers through DirectInput, network multiplayer games through DirectPlay, and multimedia playback through DirectShow. By providing standardized access, DirectX allows programs to take advantage of capabilities without needing device-specific code.
This document provides an introduction to DirectX, including:
- A brief history of DirectX and overview of its capabilities for 2D/3D graphics, sound, input handling, and as an alternative to OpenGL.
- Explanations of core DirectX concepts like devices, device contexts, and swap chains for buffer management.
- An overview of the initialization process including setting up the swap chain and render targets.
- Details on rendering basics like viewports, presenting results, and using shaders.
- Code snippets demonstrating window creation, message processing, and DirectX object handling.
A brief presentation on Microsoft's Graphics platform - DirectX. The presentation explains the need for DirectX and how it came into being. It explains the Architecture in detail including the components which make up DirectX. The presentation then gives a short version history of DirectX and makes some points on the latest version, known as DirectX 12.
OpenGL extensions allow graphics card vendors to provide access to new features without waiting for updates to the OpenGL specification. Extensions add new functions, tokens, and capabilities. Functions are prefixed with "gl" while tokens use all capital letters prefixed by "GL_". Extensions are supported through platform-specific header files. Functions in extensions must be dynamically queried at runtime. Vertex and fragment shaders replace fixed-function OpenGL by implementing vertex transformations, lighting, texturing, and pixel operations through the OpenGL Shading Language (GLSL). Shaders are compiled and linked into programs for use. Uniforms and attributes are used to pass data between GLSL shaders and OpenGL.
The 3D graphics rendering pipeline consists of 4 main stages: 1) vertex processing, 2) rasterization, 3) fragment processing, and 4) output merging. In vertex processing, transformations are applied to position objects in the scene and camera. Rasterization converts vertex data into fragments and performs operations like clipping and scan conversion. Fragment processing handles texture mapping and lighting. Output merging combines fragments and uses the z-buffer to remove hidden surfaces when displaying the final pixels.
openGL basics for sample program (1).pptHIMANKMISHRA2
The document outlines the course learning objectives and outcomes for a Computer Graphics & Visualization laboratory course using OpenGL. The course aims to teach simple algorithms and implementation of line drawing, clipping, and 2D/3D modeling and rendering using OpenGL functions. Students will complete programming assignments to demonstrate techniques like Bézier curves, polygon filling, and develop a mini-project using OpenGL APIs.
The document outlines the course learning objectives and outcomes for a Computer Graphics & Visualization laboratory course using OpenGL. The course aims to teach simple algorithms and implementation of line drawing, clipping, and 2D/3D modeling and rendering using OpenGL functions. Students will complete programming assignments to demonstrate techniques like Bézier curves, polygon filling, and develop a mini-project using OpenGL APIs.
1. The document is a lab manual for a computer graphics and visualization course that provides instructions and programs for students to implement.
2. It includes an introduction to OpenGL that describes it as a software interface for graphics hardware consisting of 150 commands. It also discusses OpenGL's state machine functionality and related libraries like GLUT and GLU.
3. The lab manual then provides 9 programs for students to complete related to topics like line drawing algorithms, 3D modeling, and animations. It concludes with questions for a viva exam.
The document introduces OpenGL and GLUT (OpenGL Utility Toolkit). It discusses that OpenGL is a graphics library for rendering 2D and 3D graphics, while GLUT provides a windowing and input framework. It then covers OpenGL fundamentals like rendering primitives, transformations, lighting and texture mapping. The goals are to demonstrate enough OpenGL to create interactive 3D graphics and introduce advanced topics. Sample code shows basic GLUT and OpenGL usage.
This document provides an introduction to OpenGL. It discusses the history and evolution of OpenGL, its core components and libraries, basic programming concepts like coordinate systems and transformations, and how to set up an OpenGL development environment in both Unix and Windows systems. It also covers OpenGL programming concepts like the display callback, event handling, and GLUT functions. Finally, it provides information on submitting assignments and office hours for questions.
The document discusses graphics programming and OpenGL. It introduces OpenGL, describing it as a hardware-independent interface consisting of over 700 commands. It outlines the OpenGL API, including primitive functions, attribute functions, and viewing functions. It also covers OpenGL primitives like points, lines, and polygons, as well as attributes like color. It explains orthographic and two-dimensional viewing in OpenGL.
This document provides an introduction to computer graphics. It defines computer graphics as using a computer to produce and manipulate images on a screen by creating and manipulating models and images. It discusses modeling, rendering, imaging, animation, and hardware aspects of computer graphics. It provides examples of applications such as entertainment, design, scientific visualization, and more. It introduces the OpenGL graphics library and GLUT toolkit for window management in OpenGL programs. It provides conventions for OpenGL and GLUT functions and constants. Finally, it provides two short code examples of simple OpenGL programs to draw shapes and animate a rotating cube.
The document outlines the agenda for an Advanced Graphics Workshop being held by Texas Instruments. The workshop will include an introduction to graphics hardware architectures and the OpenGL rendering pipeline. It will provide a detailed walkthrough of the OpenGL ES 2.0 specification and APIs. Participants will work through several hands-on labs covering texturing, transformations, shaders and more. The goal is to help developers optimize graphics performance on embedded platforms.
OpenGL is a software interface for graphics hardware that provides a portable low-level 3D graphics library. It consists of three main libraries - OpenGL (GL) for modeling objects with primitives, OpenGL Utility Library (GLU) for utilities like camera and projection as well as additional modeling functions, and OpenGL Utilities Toolkit (GLUT) for window creation and input handling along with more modeling functions. OpenGL originated from Silicon Graphics' efforts to improve the portability of their IRIS GL graphics API and has evolved through multiple generations to support both fixed and programmable graphics pipelines. It works procedurally by describing graphic rendering steps rather than describing a scene descriptively. Popular programs that use OpenGL include games, 3D modeling software, and virtual
Presented September 30, 2009 in San Jose, California at GPU Technology Conference.
Describes the new features of OpenGL 3.2 and NVIDIA's extensions beyond 3.2 such as bindless graphics, direct state access, separate shader objects, copy image, texture barrier, and Cg 2.2.
Lab Practices and Works Documentation / Report on Computer GraphicsRup Chowdhury
This is a report that I have prepared during my Computer Graphics Lab course. This contains the theoretical information that we learned in our introduction class. It also contains information on different computer graphics tools and software. It contains codes to create different and also the procedure.
1. Information on GLUT
2. Flag drawing with GLUT
3. DDA Algorithm
4. Midpoint Line Drawing Algorithm
5. Tansformation
This presentation made at TI Developer Conference 2008, introduces the options available for developers to create User Interfaces on TI SGX based platforms.
OpenGL is a software interface that allows programmers to create 2D and 3D graphics. It consists of over 150 commands to specify graphics objects and operations. OpenGL is designed to be hardware-independent and supported across different platforms. The OpenGL architecture uses a pipeline that processes commands from display lists, evaluates polynomials, performs per-vertex and per-fragment operations, and renders to the framebuffer. Key operations include geometric primitives, attributes, transformations, viewing, clipping, blending, and using a z-buffer for hidden surface removal. OpenGL programs utilize various functions to define graphics, attributes, views, inputs, and window controls.
This document discusses drawing 2D and 3D graphics with OpenGL ES 1.x and 2.x APIs in Android NDK. It covers drawing 2D shapes and applying transforms using OpenGL ES 1.x, including translating, scaling and rotating shapes. It also discusses the graphics rendering pipeline in OpenGL and differences between the fixed pipeline in OpenGL ES 1.x versus the programmable pipeline in OpenGL ES 2.0. The document provides code samples for drawing 2D triangles and squares in Android NDK and applying common transforms.
OpenGL Fixed Function to Shaders - Porting a fixed function application to “m...ICS
Watch the video here: http://bit.ly/1TA24fU
OpenGL Fixed Function to Shaders - Porting a fixed function application to “modern” OpenGL - Webinar Mar 2016
This document discusses 3D graphics and OpenGL. It begins with an introduction to 3D concepts in OpenGL like using glVertex3f to specify 3D coordinates and constructing 3D objects from triangles. It then discusses several built-in OpenGL functions for drawing common 3D objects like cubes, spheres and cylinders. The document also covers key 3D transformations like projection, viewing and modeling transformations. It includes code examples for a basic 3D cube program and another drawing a cone. It ends with an example program demonstrating different projection transformations across four windows.
This document provides information about OpenGL and EGL. It discusses OpenGL concepts like the rendering pipeline, shaders, and GLSL. It explains how to use OpenGL and GLSL on Android, including initializing, creating shaders, and rendering. It also covers EGL and how it acts as an interface between OpenGL and the native platform. It describes EGL objects like displays, surfaces, and contexts. It provides examples of basic EGL usage and lists EGL APIs for initializing, configuring, and managing graphics resources.
Mini Project final report on " LEAKY BUCKET ALGORITHM "Nikhil Jain
The project “Leaky Bucket Algorithm” is based on computer networks.The leaky bucket algorithm is a general algorithm that can be effectively used to police real time traffic. Both Frame Relay and Aysnchronous Transfer Mode (ATM) networks use a form of the leaky bucket algorithm for traffic management.
For designing this graphical project we require the knowledge of both computer graphics and the language in which it is to be coded.The use of the language helps in designing a package more user friendly since for a general user high end languages creates complexities in understanding and usage.OpenGL provides us with all the inbuilt functions which makes us easy to understand graphics.
Data structures are used to organize graphics data and allow individual portions of an image to be referenced and modified independently. Common data structures include those based on vertices, edges, and surfaces. A database implements these data structures for storage. Relational and hierarchical models are two common database models. OpenGL is a widely used graphics library that provides a standardized interface for 3D graphics across platforms through functions that output graphics primitives.
4. Part01 : About OpenGL
History
OpenGL is relatively new (1992), based on GL from Silicon Graphics
IrisGL - a 3D API for high-end IRIS graphics workstations
OpenGL attempts to be more portable
The OpenGL Architecture Review Board (ARB) decides on all enhancements
What it is…
Software interface to graphics hardware
About 120 C-callable routines for 3D graphics
Platform (OS/Hardware) independent graphics library
What it is not…
Not a windowing system (no window creation)
Not a UI system (no keyboard and mouse routines)
Not a 3D modeling system (Open Inventor, VRML, Java3D)
http://en.wikipedia.org/wiki/OpenGL
4 Part 01 - Introduction February 13, 2013
5. Part01 : OpenGL Versions
Version Release Year
OpenGL 1.0 January, 1992
OpenGL 1.1 January, 1997
OpenGL 1.2 March 16, 1998
OpenGL 1.2.1 October 14, 1998
OpenGL 1.3 August 14, 2001
OpenGL 1.4 July 24, 2002
OpenGL 1.5 July 29, 2003
OpenGL 2.0 September 7, 2004
OpenGL 2.1 July 2, 2006
OpenGL 3.0 July 11, 2008
OpenGL 3.1 March 24, 2009 and updated May 28, 2009
OpenGL 3.2 August 3, 2009 and updated December 7, 2009
OpenGL 3.3 March 11, 2010
OpenGL 4.0 March 11, 2010
OpenGL 4.1 July 26, 2010
6. Part01 : Overview
OpenGL is a procedural graphics language
the programmer describes the steps involved to achieve a certain display
"steps" involve C-style function calls to a highly portable API
fairly direct control over fundamental operations of two- and three-dimensional graphics
an API, not a language
What can it do?
Display primitives
Coordinate transformations (transformation matrix manipulation)
Lighting calculations
Antialiasing
Pixel update operations
Display-List mode
7. Part01 : Philosophy
Platform independent
Window system independent
Rendering only
Aims to be real-time
Takes advantage of graphics hardware where it exists
State system
Client-server system
Standard supported by major companies
8. Part01 : Functionality
Simple geometric objects (e.g. lines, polygons, rectangles, etc.)
Transformations, viewing, clipping
Hidden line & hidden surface removal
Color, lighting, texture
Bitmaps, fonts, and images
Immediate- & retained-mode graphics
An immediate-mode API is procedural: each time a new frame is drawn, the application directly issues the drawing commands.
A retained-mode API is declarative: the application constructs a scene from graphics primitives, such as shapes and lines.
9. Part01 : Usage
Scientific Visualization
Information Visualization
Medical Visualization
CAD
Games
Movies
Virtual Reality
Architectural Walkthrough
10. Part01 : Convention
Constants:
prefix GL_ + all capitals (e.g. GL_COLOR_BUFFER_BIT)
Functions:
prefix gl + capital first letter (e.g. glClearColor)
returnType glCommand[234][sifd] (type value, ...);
returnType glCommand[234][sifd]v (type *value);
Many variations of the same functions
glColor[2,3,4][b,s,i,f,d,ub,us,ui](v)
[2,3,4]: dimension
[b,s,i,f,d,ub,us,ui]: data type
(v): optional pointer (vector) representation
Example:
glColor3i(1, 0, 0)
or
glColor3f(1.0, 1.0, 1.0)
or
GLfloat color_array[] = {1.0, 1.0, 1.0};
glColor3fv(color_array)
11. Part01 : Basic Concepts
OpenGL as a state machine (once the value of a property is set, the value persists until a new value is given).
Graphics primitives go through a "pipeline" of rendering operations.
OpenGL controls the state of the pipeline with many state variables (fg & bg colors, line thickness, texture pattern, eye, lights, surface material, etc.)
Binary state: glEnable & glDisable
Query: glGet[Boolean,Integer,Float,Double]v
Coordinates:
XYZ axes follow the Cartesian system.
13. Part01 : Primitives (Points, Lines…) - I
All geometric objects in OpenGL are created from a set of basic primitives.
Certain primitives are provided to allow optimization of geometry for improved rendering speed.
Primitives are specified by vertex calls (glVertex*) bracketed by glBegin(type) and glEnd()
Specified by a set of vertices:
glVertex[2,3,4][s,i,f,d](v) (TYPE coords)
Grouped together by glBegin() & glEnd():
glBegin(GLenum mode), where mode includes
GL_POINTS
GL_LINES, GL_LINE_STRIP, GL_LINE_LOOP
GL_POLYGON
GL_TRIANGLES, GL_TRIANGLE_STRIP
GL_QUADS, GL_QUAD_STRIP
Example:
glBegin(GL_POLYGON)
glVertex3f(…)
glVertex3f(…)
glVertex3f(…)
glEnd()
14. Part01 : Primitives (Points, Lines…) - II
Point Type
GL_POINTS
Line Type
GL_LINES
GL_LINE_STRIP
GL_LINE_LOOP
Triangle Type
GL_TRIANGLES
GL_TRIANGLE_STRIP
GL_TRIANGLE_FAN
Quad Type
GL_QUADS
GL_QUAD_STRIP
Polygon Type
GL_POLYGON
Ref : Drawing Primitives in OpenGL
15. Part01 : Environment Setup
Using Windows SDK
OpenGL and the OpenGL Utility library (GLU) ship with the Microsoft SDK.
Add the SDK path to the IDE project directories.
Add headers: gl.h, glu.h
Found @ <SDKDIR>\Windows\v6.0A\include\gl
Add libs for linking: opengl32.lib, glu32.lib
Found @ <SDKDIR>\Windows\v6.0A\lib
Required DLLs: opengl32.dll, glu32.dll
Found @ <WINDIR>\System32
Using GLUT (www.xmission.com/~nate/glut.html or http://freeglut.sourceforge.net)
Store the binaries at an appropriate location and reference them properly
Add header: glut.h
Found @ <GLUTPATH>\include
Add lib for linking: glut32.lib
Found @ <GLUTPATH>\lib
Required DLL: glut32.dll
Found @ <GLUTPATH>\bin
16. Part01 : Code Samples
Using Windows SDK
Create Basic Window from the Windows Base Code.
Add Headers & Libs.
Modify the Windows Class Registration.
Modify the Window Creation Code.
Setup PixelFormat.
Create Rendering Context and set it current.
Add cleanup code that removes the rendering context.
Add Event Handlers
Add Display function handler for rendering OpenGL stuff.
Add Resize function handler for window resizing.
Using GLUT
Add Headers and Libs.
Initialize the GLUT system and create basic window.
Add Event Handlers
Add Display, Resize, Idle, Keyboard, Mouse handlers.
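The GLUT path above can be sketched as a minimal program (assuming classic GLUT or freeglut is set up as in the Environment Setup slide; the window title and the triangle drawn are illustrative, not from the original deck):

```c
/* Minimal GLUT sketch: init, create a basic window,
   register Display/Resize handlers, enter the event loop. */
#include <GL/glut.h>

void display(void) {
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_TRIANGLES);           /* one primitive, three vertices */
        glColor3f(1.0f, 0.0f, 0.0f); glVertex2f(-0.5f, -0.5f);
        glColor3f(0.0f, 1.0f, 0.0f); glVertex2f( 0.5f, -0.5f);
        glColor3f(0.0f, 0.0f, 1.0f); glVertex2f( 0.0f,  0.5f);
    glEnd();
    glFlush();
}

void reshape(int w, int h) {
    glViewport(0, 0, w, h);          /* track the window size */
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);                       /* initialize the GLUT system */
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB);
    glutInitWindowSize(400, 400);
    glutCreateWindow("Part01 Sample");           /* create basic window */
    glutDisplayFunc(display);                    /* add event handlers */
    glutReshapeFunc(reshape);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glutMainLoop();                              /* event processing loop */
    return 0;
}
```

Compile by linking against opengl32.lib and glut32.lib on Windows, or -lGL -lglut elsewhere.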
17. Part02: Basics
Introduction to OpenGL with code samples.
18. Part02 : Topics
Transformations
Modeling
Concept of Matrices.
Scaling, Rotation, Translation
Viewing
Camera
Projection: Ortho/Perspective
Code Samples (Win32/glut)
19. Part02 : Transformations-Modeling I
Concept of Matrices:
All affine operations are matrix multiplications.
A 3D vertex is represented by a 4-tuple (column) vector.
A vertex is transformed by 4 x 4 matrices.
All matrices are stored column-major in OpenGL.
Matrices are always post-multiplied: the product of matrix and vector is Mv. OpenGL only multiplies a matrix on the right, so the programmer must remember that the last matrix specified is the first applied.

v = (x, y, z, w)^T   M (column-major indices m0..m15):
m0  m4  m8   m12
m1  m5  m9   m13
m2  m6  m10  m14
m3  m7  m11  m15
20. Part02 : Transformations-Modeling II
OpenGL uses stacks to maintain transformation matrices (the MODELVIEW stack is the most important).
You can load, push and pop the stack.
The current transform is applied to all graphics primitives until it is changed.
Two ways of specifying transformation matrices:
Using raw matrices:
Specify the current matrix: glMatrixMode(GLenum mode)
Initialize the current matrix: glLoadIdentity(void), glLoadMatrix[f,d](const TYPE *m)
Concatenate onto the current matrix: glMultMatrix[f,d](const TYPE *m)
Using built-in routines:
glTranslate[f,d](x,y,z)
glRotate[f,d](angle,x,y,z)
glScale[f,d](x,y,z)
Order is important.
21. Part02 : Transformations-Viewing I
Camera
Default: eye at the origin, looking along -z
Important parameters:
Where is the observer (camera)? At the origin.
What is the look-at direction? The -z direction.
What is the head-up direction? The y direction.
gluLookAt( eyex, eyey, eyez, aimx, aimy, aimz, upx, upy, upz )
gluLookAt() multiplies itself onto the current matrix, so it usually comes after glMatrixMode(GL_MODELVIEW) and glLoadIdentity().
22. Part02 : Transformations-Viewing II
Projection
Perspective projection:
gluPerspective( fovy, aspect, zNear, zFar )
glFrustum( left, right, bottom, top, zNear, zFar )
Orthographic parallel projection:
glOrtho( left, right, bottom, top, zNear, zFar )
gluOrtho2D( left, right, bottom, top )
Projection transformations (gluPerspective, glOrtho) are left-handed.
Everything else is right-handed, including the vertexes to be rendered.
[Diagram: left-handed axes (z+ into the screen) vs. right-handed axes (z+ out of the screen)]
24. Part02 : Code Samples
[Screenshots: orthographic vs. perspective projection]
Editor's Notes
Why is a 4-tuple vector used for a 3D (x, y, z) vertex? To ensure that all matrix operations are multiplications. w is usually 1.0. If w is changed from 1.0, we can recover x, y and z by dividing by w. Generally, only perspective transformations change w and require this perspective division in the pipeline.
For perspective projections, the viewing volume is shaped like a truncated pyramid (frustum). There is a distinct camera (eye) position, and vertexes of objects are "projected" toward the camera. Objects which are further from the camera appear smaller. The default camera position is (0, 0, 0), looking down the z-axis, although the camera can be moved by other transformations.

For gluPerspective(), fovy is the angle of the field of view (in degrees) in the y direction; fovy must be between 0.0 and 180.0, exclusive. aspect is x/y and should be the same as the viewport's to avoid distortion. zNear and zFar define the distances to the near and far clipping planes. glFrustum() is rarely used. Warning: for gluPerspective() or glFrustum(), don't use zero for zNear!

For glOrtho(), the viewing volume is shaped like a rectangular parallelepiped (a box). Vertexes of an object are "projected" towards infinity, so distance does not change the apparent size of an object. Orthographic projection is used for drafting and design (such as blueprints).