The document discusses various graphics concepts used in C programming including:
1. Color constants, fill patterns, graphics drivers, errors, and modes that are used in graphics functions.
2. The functions closegraph, detectgraph, getbkcolor, getcolor, getmaxx, getmaxy, getpixel, getx, gety, grapherrormsg, graphresult, initgraph, outtext, outtextxy, putpixel, setbkcolor, setcolor, and setfillpattern and their purposes.
3. initgraph initializes the graphics system by loading a driver and putting the system into graphics mode, while closegraph deallocates memory and restores the original screen mode.
The document discusses key concepts in image processing including image sensing, acquisition, formation, sampling, quantization, and digital representation. It describes how the human eye forms images and contains photoreceptor cells. There are three main types of image sensors: single, line, and array. Sampling converts a continuous image to digital by selecting pixel values at regular intervals while quantization assigns discrete brightness levels. Together they allow images to be represented digitally as matrices of pixel values.
Histograms show the distribution of pixel intensities in an image by counting the number of pixels for each intensity value. Normalized histograms provide an estimate of the probability of each intensity occurring. Histogram equalization transforms the pixel intensity distribution of an image to a uniform distribution in order to increase contrast. It does this by using the cumulative distribution function to map intensities to new output values. Local histogram equalization performs this on neighborhoods within an image to enhance local details. Arithmetic and logical operations can also be used for image enhancement, such as AND, OR, and subtraction between images on a pixel-by-pixel basis.
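The CDF-based mapping described above can be sketched in a few lines. This is a minimal pure-Python illustration; the 8-level image and the `equalize` helper are assumptions for the example, not from the source.

```python
# Histogram equalization: map each intensity r through the scaled CDF,
# s = round((L - 1) * CDF(r)), to spread intensities toward uniform.

def equalize(image, levels=8):
    flat = [p for row in image for p in row]
    n = len(flat)
    hist = [0] * levels              # count pixels at each intensity
    for p in flat:
        hist[p] += 1
    cdf, total = [], 0               # cumulative distribution function
    for h in hist:
        total += h
        cdf.append(total / n)
    mapping = [round((levels - 1) * c) for c in cdf]
    return [[mapping[p] for p in row] for row in image]

img = [[0, 1, 1, 2],
       [1, 2, 2, 3],
       [2, 3, 3, 3],
       [3, 3, 3, 3]]                 # dark, low-contrast 4x4 image
out = equalize(img)                  # intensities now span the full range
```

Note how the dominant intensity 3 is pushed to the top of the range, stretching the contrast of the darker pixels below it.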
From Image Processing To Computer Vision by Joud Khattab
This document provides an overview of digital image processing and computer vision. It defines digital images and describes different image types including binary, grayscale, and color images. The document outlines common digital image processing steps such as acquisition, enhancement, restoration, compression, segmentation, representation and description. It also discusses applications of computer vision such as scene completion, object detection and recognition tasks. In summary, the document serves as an introduction to digital image processing and computer vision concepts.
DISCRETE COSINE TRANSFORM WITH ADAPTIVE HUFFMAN CODING BASED IMAGE COMPRESSION
code with results
ABSTRACT
This paper presents a compression method based on Huffman coding that uses histogram information and image segmentation. It supports both lossless and lossy compression; how much of the image is compressed in a lossy manner versus a lossless manner depends on the information obtained from the image histogram. The results show that the difference between the original and compressed images is visually negligible. Compression ratio (CR) and peak signal-to-noise ratio (PSNR) are obtained for different images. The reported relation between the two is that increasing the compression ratio yields a higher PSNR, and the mean square error can also be minimized; a higher PSNR indicates better image quality.
Spatial filtering involves applying filters or kernels to images to enhance or modify pixel values based on neighboring pixel values. Linear spatial filtering involves taking a weighted sum of pixel values within the filter window. Common filters include averaging filters for noise reduction, median filters to reduce impulse noise while preserving edges, and sharpening filters like Laplacian filters and unsharp masking to enhance details.
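The contrast between averaging and median filtering noted above shows up clearly on an impulse ("salt") pixel. A minimal sketch; the 3x3 image and simplified border handling (borders left unfiltered) are illustrative assumptions.

```python
# Apply a 3x3 neighborhood reduction to every interior pixel.
def filter3x3(image, reduce_fn):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [image[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = reduce_fn(window)
    return out

mean = lambda w: sum(w) // len(w)          # averaging filter
median = lambda w: sorted(w)[len(w) // 2]  # order-statistic filter

# Uniform patch with one impulse pixel in the middle:
img = [[10, 10, 10],
       [10, 255, 10],
       [10, 10, 10]]
print(filter3x3(img, mean)[1][1])    # mean smears the impulse into the patch
print(filter3x3(img, median)[1][1])  # median removes it entirely
```

The mean filter blends the outlier into its neighbors, while the median filter discards it, which is why median filtering preserves edges better under impulse noise.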
Image filtering in Digital image processing by Abinaya B
This document discusses various image filtering techniques used for modifying or enhancing digital images. It describes spatial domain filters such as smoothing filters including averaging and weighted averaging filters, as well as order statistics filters like median filters. It also covers frequency domain filters including ideal low pass, Butterworth low pass, and Gaussian low pass filters for smoothing, as well as their corresponding high pass filters for sharpening. Examples of applying different filters at different cutoff frequencies are provided to illustrate their effects.
Chapter 4 discusses pixel-neighborhood operations, which process each pixel of an image using the values of its neighboring pixels. Three kinds of filters use pixel-neighborhood operations: the boundary filter, the averaging filter, and the median filter. These filters serve to filter out or reduce noise in an image.
This document discusses pixel relationships and neighborhood concepts in digital images. It defines a pixel and pixel connectivity. There are different types of pixel neighborhoods, including 4-neighbor, 8-neighbor, and diagonal neighbors. Connected components are sets of pixels that are connected based on pixel adjacency. Algorithms can label connected components and identify distinct image regions. Various distance measures quantify how close pixels are, such as Euclidean, Manhattan, and chessboard distances. Arithmetic and logical operators can combine pixel values from different images. Neighborhood operations apply functions to pixels based on their values and those of nearby pixels.
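The three distance measures named above differ only in how they combine the coordinate offsets. A small sketch, with example points chosen for illustration:

```python
# Distance measures between pixels p = (x, y) and q = (s, t).

def d_euclidean(p, q):
    # straight-line distance
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def d_cityblock(p, q):
    # D4 / Manhattan distance: number of 4-neighbor steps
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d_chessboard(p, q):
    # D8 / chessboard distance: number of 8-neighbor steps
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
# Euclidean gives 5.0, city-block 7, chessboard 4 for this pair.
```

City-block distance counts only horizontal/vertical moves, while chessboard distance also allows diagonal moves, so D8 is never larger than D4.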
The document discusses scan conversion algorithms for computer graphics. It begins by defining scan conversion as the process of representing continuous graphic objects as discrete pixels. It then discusses various graphic objects that must be scan converted, including points, lines, circles, ellipses, and polygons. The document focuses on algorithms for scan converting points and lines. It describes the Digital Differential Analyzer (DDA) algorithm and Bresenham's line algorithm for scan converting lines, explaining how each algorithm determines which pixels to turn on between a start and end point.
This document describes the digital differential analyzer (DDA) algorithm for rasterizing lines, triangles, and polygons in computer graphics. It discusses implementing DDA using floating-point or integer arithmetic. The DDA line drawing algorithm works by incrementing either the x or y coordinate by 1 each step depending on whether the slope is less than or greater than 1. Pseudocode is provided to illustrate the algorithm. Potential drawbacks of DDA are also mentioned, such as the expense of rounding operations.
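The stepping scheme described above can be sketched in floating point. This is one common formulation (step along the longer axis, add the per-step increment to the other coordinate, round to the nearest pixel); the function name and endpoints are illustrative.

```python
# DDA line rasterization between integer endpoints.
def dda_line(x0, y0, x1, y1):
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))    # one pixel per step on the major axis
    if steps == 0:
        return [(x0, y0)]
    x_inc, y_inc = dx / steps, dy / steps
    x, y, pixels = float(x0), float(y0), []
    for _ in range(steps + 1):
        pixels.append((round(x), round(y)))  # rounding each step is the
        x += x_inc                           # cost DDA is criticized for
        y += y_inc
    return pixels

pts = dda_line(0, 0, 4, 2)           # both endpoints are hit exactly
```

Bresenham's algorithm produces the same kind of pixel run using only integer additions, which is why it is usually preferred over this floating-point form.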
Features image processing and Extraction by Ali A Jalil
This document discusses various techniques for extracting features and representing shapes from images, including:
1. External representations based on boundary properties and internal representations based on texture and statistical moments.
2. Principal component analysis (PCA) is mentioned as a statistical method for feature extraction.
3. Feature vectors are described as arrays that encode measured features of an image numerically, symbolically, or both.
This document discusses digital image enhancement techniques, covering point operations, mask operations, transform operations, and coloring operations. Techniques explained include image negatives, contrast adjustment, gray-level slicing, noise reduction by image averaging, and histogram adjustment to improve image contrast.
This document discusses image restoration and reconstruction techniques for noise removal. It begins by defining image restoration as attempting to reverse degradation processes to restore degraded images. Various noise models are described, including Gaussian, Rayleigh, Erlang, exponential, uniform, and impulse noise. Spatial domain filtering techniques like mean, median, and order statistics filters are covered for noise removal. Frequency domain filtering using band reject filters is also discussed, as well as adaptive filtering techniques. Examples are provided to demonstrate noise removal.
The document discusses image restoration techniques. It introduces common image degradation models and noise models encountered in imaging. Spatial and frequency domain filtering methods are described for restoration when the degradation is additive noise. Adaptive median filtering and frequency domain filtering techniques like bandreject, bandpass and notch filters are explained for periodic noise removal. Optimal filtering methods like Wiener filtering that minimize mean square error are also covered. The document provides an overview of key concepts and methods in image restoration.
This document discusses various methods for estimating noise parameters and filtering noise from images. It begins by explaining how to estimate noise parameters such as mean and variance by analyzing sample images. It then covers periodic noise reduction using frequency domain filtering like notch filters. Other filtering methods discussed include direct inverse filtering, Wiener filtering, constrained least squares filtering, and iterative nonlinear restoration using the Lucy-Richardson algorithm. Examples are provided to illustrate Wiener filtering and constrained least squares filtering.
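The first step mentioned above, estimating noise mean and variance from sample images, works by measuring a patch where the true signal is (approximately) constant: there, the sample statistics of the pixels estimate the noise parameters directly. A sketch with synthetic data; the intensity level, sigma, and use of `random.gauss` are illustrative assumptions.

```python
# Estimate noise parameters from a flat (constant-intensity) patch.
import random

random.seed(0)
true_level, sigma = 100.0, 5.0
# Simulate a flat patch corrupted by Gaussian noise:
patch = [true_level + random.gauss(0, sigma) for _ in range(10000)]

mean = sum(patch) / len(patch)                        # estimates signal level
var = sum((p - mean) ** 2 for p in patch) / (len(patch) - 1)  # noise variance

# mean should land near 100 and var near sigma**2 = 25
```

In practice the patch is cropped from a region of the degraded image that the analyst judges to be uniform, and the estimated variance then parameterizes filters such as the adaptive local noise reduction filter.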
This document provides information about a digital image processing lecture given by Dr. Moe Moe Myint from Technological University in Kyaukse, Myanmar. It includes the lecture schedule and contact information for Dr. Myint. The document also provides an overview of Chapter 2 which discusses elements of visual perception, light and the electromagnetic spectrum, image sensing and acquisition, image sampling and quantization, and basic relationships between pixels. It provides examples of different types of digital images including intensity, RGB, binary, and index images. It also discusses the effects of spatial and intensity level resolution on images.
Image Restoration And Reconstruction
Mean Filters
Order-Statistic Filters
Spatial Filtering: Mean Filters
Adaptive Filters
Adaptive Mean Filters
Adaptive Median Filters
Introduction to computer graphics part 1 by Ankit Garg
This document discusses computer graphics systems and their components. It describes video display devices like CRTs and how they work. Color is generated using techniques like beam penetration and shadow masks. Raster scan and random scan displays are covered. Input devices for graphics like mice, tablets, and gloves are also summarized. The document provides details on graphics hardware like frame buffers, refresh rates, and video controllers.
Halftoning is the process of converting a greyscale image to a binary image made up of black and white dots. In newspapers, halftoning simulates greyscale using patterns of black dots of varying sizes on a white background. Traditionally, halftoning was done photographically by projecting an image through a halftone screen with an etched grid onto film. Different screen frequencies control dot size. Digital halftoning techniques include patterning, which replaces each pixel with a pattern from a binary font, and dithering, which thresholds the image against a dither matrix to determine black and white pixels.
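The dithering technique described above, thresholding against a tiled dither matrix, can be sketched with the classic 2x2 Bayer matrix. The matrix scaling and the 0..255 intensity range are illustrative assumptions.

```python
# Ordered dithering: compare each pixel against a position-dependent
# threshold taken from a small matrix tiled over the image.

BAYER2 = [[0, 2],
          [3, 1]]  # classic 2x2 Bayer index matrix

def dither(image):
    h, w = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            # scale the matrix entry into the 0..255 intensity range
            threshold = (BAYER2[y % 2][x % 2] + 0.5) * 255 / 4
            row.append(1 if image[y][x] > threshold else 0)
        out.append(row)
    return out

# A flat mid-grey patch dithers to a checkerboard: about half the
# pixels turn on, simulating 50% grey with pure black and white.
grey = [[128] * 4 for _ in range(4)]
```

Larger Bayer matrices (4x4, 8x8) give more grey levels at the cost of a more visible regular texture, which is why error-diffusion methods are often preferred for photographs.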
This document discusses image thresholding techniques for image segmentation. It describes thresholding as the basic first step for segmentation that partitions an image into foreground and background pixels based on intensity value. Simple thresholding uses a single cutoff value but can fail for complex histograms. Adaptive thresholding divides an image into sub-images and thresholds each individually to handle varying intensities better than simple thresholding. The document provides examples and algorithms to illustrate thresholding and its limitations and adaptations.
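The global-versus-adaptive distinction above is easy to demonstrate on an image with uneven illumination. A sketch only; the block size, the mean-based local threshold, and the sample image are illustrative assumptions.

```python
# Global thresholding: one cutoff for the whole image.
def global_threshold(image, t):
    return [[1 if p > t else 0 for p in row] for row in image]

# Adaptive thresholding: split into sub-images, threshold each by its
# own local mean, so varying illumination is handled per block.
def adaptive_threshold(image, block=2):
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            cells = [(y, x) for y in range(by, min(by + block, h))
                            for x in range(bx, min(bx + block, w))]
            mean = sum(image[y][x] for y, x in cells) / len(cells)
            for y, x in cells:
                out[y][x] = 1 if image[y][x] > mean else 0
    return out

# Dark illumination on the left half, bright on the right:
img = [[10, 30, 110, 130],
       [30, 10, 130, 110]]
# A single global cutoff marks the whole bright half as foreground,
# while the adaptive version recovers the local pattern in both halves.
```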
In this presentation we described important concepts in image processing and computer vision. If you have any queries about this presentation, feel free to visit us at:
http://www.siliconmentor.com/
Image compression involves reducing the size of image files to reduce storage space and transmission time. There are three main types of redundancy in images: coding redundancy, spatial redundancy between neighboring pixels, and irrelevant information. Common compression methods remove these redundancies, such as Huffman coding, arithmetic coding, LZW coding, and run length coding. Popular image file formats include JPEG for photos, PNG for web images, and TIFF, GIF, and DICOM for other uses.
The document provides an overview of Huffman coding, a lossless data compression algorithm. It begins with a simple example to illustrate the basic idea of assigning shorter codes to more frequent symbols. It then defines key terms like entropy and describes the Huffman coding algorithm, which constructs an optimal prefix code from the frequency of symbols in the data. The document discusses how Huffman coding can be applied to image compression by first predicting pixel values and then encoding the residuals. It notes some disadvantages of Huffman coding and describes variations like adaptive Huffman coding.
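The algorithm summarized above, repeatedly merging the two least frequent nodes so that frequent symbols get short codes, fits in a few lines with a heap. A sketch; the frequency table is the standard textbook example, and the helper name is an assumption.

```python
# Build a Huffman code from a symbol -> frequency table.
import heapq

def huffman_codes(freqs):
    # heap entries: (frequency, tie-breaker, symbol-or-subtree)
    heap = [(f, i, sym) for i, (sym, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, a = heapq.heappop(heap)   # two least frequent nodes...
        f2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (a, b)))  # ...merge them
        count += 1
    codes = {}
    def walk(node, prefix):              # left edge = "0", right edge = "1"
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"
    walk(heap[0][2], "")
    return codes

codes = huffman_codes({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
```

Because codes are read off root-to-leaf paths, no code is a prefix of another, so the compressed bit stream decodes unambiguously.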
This document provides an overview of digital image processing and human vision. It discusses the key stages of digital image processing including image acquisition, enhancement, restoration, morphological processing, segmentation, representation and description, object recognition, and compression. It also covers the anatomy of the human eye, photoreceptors, color perception, image formation in the eye, brightness adaptation, and the Weber ratio relating the just noticeable difference in light intensity to background intensity. The document uses images and diagrams from the textbook "Digital Image Processing" to illustrate concepts in digital images and the human visual system.
It's very useful for students.
Sharpening process in spatial domain
Direct manipulation of image pixels.
The objective of sharpening is to highlight transitions in intensity.
Image blurring is accomplished by pixel averaging in a neighborhood; since averaging is analogous to integration, sharpening can be accomplished by spatial differentiation.
Prepared by
M. Sahaya Pretha
Department of Computer Science and Engineering,
MS University, Tirunelveli Dist, Tamilnadu.
Gaussian filtering is used to blur images and remove noise and detail. It works by convolving the image with a Gaussian point spread function. The standard deviation of the Gaussian function determines the amount of blurring, with larger standard deviations producing more blur. A discrete Gaussian kernel is used to approximate the continuous Gaussian function for computational purposes. Common applications of Gaussian filtering include smoothing images and reducing noise as a pre-processing step for tasks like edge detection.
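The discrete approximation mentioned above is built by sampling the continuous Gaussian on an integer grid and normalizing so the weights sum to 1. The kernel size and sigma below are illustrative; size is commonly chosen near 6*sigma, rounded up to an odd number.

```python
# Sample a 2D Gaussian on a (size x size) grid and normalize it.
import math

def gaussian_kernel(size, sigma):
    half = size // 2
    kernel = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
               for x in range(-half, half + 1)]
              for y in range(-half, half + 1)]
    total = sum(map(sum, kernel))          # normalize: weights sum to 1 so
    return [[v / total for v in kernel_row]  # filtering preserves brightness
            for kernel_row in kernel]

k = gaussian_kernel(3, 1.0)
# The centre weight is the largest and the kernel is symmetric;
# larger sigma flattens the kernel and produces more blur.
```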
1. Clipping is a procedure that identifies parts of an image that are inside or outside a specified region, called the clip window. Parts inside the window are displayed, while outside parts are discarded.
2. There are different types of clipping like point, curve, text, and line clipping. Line clipping involves testing if line segments are fully inside/outside the window, and calculating intersections if they cross window boundaries.
3. Popular line clipping algorithms like Cohen-Sutherland and Liang-Barsky assign codes to line endpoints to quickly determine if lines are fully in/out of the window without calculating intersections. They find intersection points to clip lines that cross window edges.
Computer graphics involves the creation, manipulation and storage of geometric objects and images. It has various applications including computer-aided design, presentation graphics, computer art, entertainment, education and training, scientific visualization, image processing, and graphical user interfaces. Graphics packages provide programmatic access to graphics functions and libraries for tasks like 2D drawing, modeling, and rendering.
The document discusses different concepts related to clipping in computer graphics including 2D and 3D clipping. It describes how clipping is used to eliminate portions of objects that fall outside the viewing frustum or clip window. Various clipping techniques are covered such as point clipping, line clipping, polygon clipping, and the Cohen-Sutherland algorithm for 2D region clipping. The key purposes of clipping are to avoid drawing objects that are not visible, improve efficiency by culling invisible geometry, and prevent degenerate cases.
Clipping is a technique used to remove portions of lines, polygons, and other primitives that lie outside the visible viewing area or viewport. There are several common clipping algorithms. Cohen-Sutherland line clipping uses bit codes to quickly determine if a line segment can be fully accepted or rejected for clipping. Sutherland-Hodgman polygon clipping considers each viewport edge individually, clips the polygon against that edge plane, and generates a new clipped polygon. Perspective projection transforms 3D objects to 2D screen coordinates, and clipping must account for objects behind the viewer; this can be done by clipping in camera coordinates before perspective projection or in homogeneous screen coordinates after projection.
Cohen-Sutherland Line Clipping Algorithm:
When drawing a 2D line on screen, it might happen that one or both of the endpoints are outside the screen while a part of the line should still be visible. In that case, an efficient algorithm is needed to find two new endpoints that are on the edges on the screen, so that the part of the line that's visible can now be drawn. This way, all those points of the line outside the screen are clipped away and you don't need to waste any execution time on them.
A good clipping algorithm is the Cohen-Sutherland algorithm for this solution.
By,
Maruf Abdullah Rion
CAD - Unit-1 (Fundamentals of Computer Graphics)Priscilla CPG
This document provides an overview of computer-aided design (CAD). It discusses the different types of CAD (2D, 2.5D, and 3D) and how CAD software is used to create and test models. CAD is used in fields like architecture, engineering, and medical design. The document then covers the product design cycle and how CAD/CAM fits within stages like synthesis, analysis, and manufacturing. It also discusses concurrent engineering and the benefits of a collaborative design process. Finally, it explains fundamental CAD concepts like transformations, viewing, clipping algorithms, and the Sutherland-Hodgman area clipping method.
Raster scan systems use a video controller to refresh the screen by accessing pixels stored in a frame buffer in memory. The video controller uses two registers to iterate through each pixel location, retrieving the pixel value and using it to set the intensity of the CRT beam. It draws one scan line at a time from top to bottom until the entire screen is refreshed at a rate of 60 frames per second. Display processors can offload graphics processing tasks from the CPU by performing operations like scan conversion and generating lines and color areas to draw objects in the frame buffer.
Cohen-sutherland & liang-basky line clipping algorithmShilpa Hait
The document describes two line clipping algorithms: Cohen-Sutherland and Liang-Barsky. Cohen-Sutherland assigns region codes to line endpoints and checks for complete visibility, invisibility, or partial visibility. It then finds intersection points if the line is partially visible. Liang-Barsky uses the parametric line equation and clipping window inequalities to determine intersection points u1 and u2, clipping the line between those points if u1 < u2. Liang-Barsky is generally more efficient as it requires only one division to update u1 and u2, while Cohen-Sutherland may repeatedly calculate unnecessary intersections.
The document discusses different algorithms for polygon clipping, which is a process that identifies the visible portions of a polygon through a clipping window. It describes the Sutherland-Hodgeman algorithm, which clips polygons by extending the edges of a convex clip polygon and selecting only visible vertices. The Weiler-Atherton algorithm modifies this approach to correctly display concave polygons. Polygon clipping is important for video games to maximize frame rate by avoiding rendering calculations for invisible portions of polygons.
A polygon is a closed two-dimensional shape with straight or curved sides. It can be defined by an ordered sequence of vertices and edges connecting consecutive vertices. The scan line polygon fill algorithm uses an odd-even rule to determine if a point is inside or outside the polygon by counting edge crossings along a scan line from that point to infinity. Boundary fill and flood fill are two area filling algorithms that color the interior of a polygon or region by recursively filling neighboring pixels of the same color.
This document discusses various algorithms for polygon scan conversion and filling, including:
- The scan line polygon fill algorithm which determines pixel color by calculating polygon edge intersections with scan lines and using an odd-even rule.
- Methods for handling special cases like horizontal edges and vertex intersections.
- Using a sorted edge table and active edge list to incrementally calculate edge intersections across scan lines.
- Flood fill and depth/z-buffer algorithms for hidden surface removal when rendering overlapping polygons.
Clipping algorithms identify portions of an image that are inside or outside a specified clipping region. They are used to extract a defined scene for viewing, identify visible surfaces, and perform other drawing and display operations. Common types of clipping include point, line, polygon, and curve clipping. Algorithms like Cohen-Sutherland and mid-point subdivision use codes and binary subdivision to efficiently determine which image portions are visible and should be displayed.
This ppt's introduced Basics of computer graphics, which helps to diploma in computer engineering, DCA BCA, BE computer science student's to improve study in computer graphics.
This document provides information about graphics functions in C. It begins by explaining graphics modes and how images are displayed on screens using pixels. It then provides details on the initgraph() function which initializes the graphics system. The rest of the document summarizes many common graphics functions like line(), rectangle(), circle(), putpixel(), getpixel() and more, explaining what they do and their parameters.
This document discusses optimizations made to the game GT Racing 2 to add advanced graphical effects on Android x86 platforms. The optimizations included removing unnecessary render target clears, reducing the size of render targets used for blur passes, and changing from rendering at 50% resolution to 100% resolution. These changes helped reduce the frame time needed for new effects like depth of field, light shafts and bloom, allowing the game to run at the target 30 FPS while fully rendering at native resolution and including the new features. The document provides technical details on implementing various post-processing effects and analyzing the rendering pipeline using tools like Game Analyzer to identify optimization opportunities.
This file contains all the practicals, with output, for the GTU computer graphics syllabus. It should be a useful reference for IT and Computer Engineering students doing computer graphics practicals.
Page | 1
1. Study and understand the meaning and use of the following graphics constants, data
types, and global variables.
A) COLORS        C) graphics_driver    E) graphics_mode
B) fill_pattern  D) graphics_errors
A) COLORS
COLORS, CGA_COLORS, and EGA_COLORS (Enumerated Constants for Colors)
These tables show the symbolic constants used to set text attributes (defined in CONIO.H)
and the drawing colors available for BGI functions running on CGA and EGA monitors
(defined in GRAPHICS.H). The COLORS constants are used by these text-mode functions:
textattr, textbackground, textcolor.
The CGA_COLORS and EGA_COLORS constants are used by these BGI graphics functions:
setallpalette, setbkcolor, setcolor, setpalette.
Valid colors depend on the current graphics driver and the current graphics mode.
COLORS (text mode)
Constant       Value   Background?  Foreground?
BLACK 0 Yes Yes
BLUE 1 Yes Yes
GREEN 2 Yes Yes
CYAN 3 Yes Yes
RED 4 Yes Yes
MAGENTA 5 Yes Yes
BROWN 6 Yes Yes
LIGHTGRAY 7 Yes Yes
DARKGRAY 8 No Yes
LIGHTBLUE 9 No Yes
LIGHTGREEN 10 No Yes
LIGHTCYAN 11 No Yes
LIGHTRED 12 No Yes
LIGHTMAGENTA 13 No Yes
YELLOW 14 No Yes
WHITE 15 No Yes
BLINK          128     No           ***
*** To display blinking characters in text mode, add BLINK to the foreground
color. (Defined in CONIO.H.)

CGA_COLORS (graphics mode)
In this table, the palette listings CGA0, CGA1, CGA2, and CGA3 refer to the four
predefined four-color palettes available on CGA (and compatible) systems.
Palette Constant assigned to this color number (pixel value)
Number 1 2 3
CGA0 CGA_LIGHTGREEN CGA_LIGHTRED CGA_YELLOW
CGA1 CGA_LIGHTCYAN CGA_LIGHTMAGENTA CGA_WHITE
CGA2 CGA_GREEN CGA_RED CGA_BROWN
CGA3 CGA_CYAN CGA_MAGENTA CGA_LIGHTGRAY
You can select the background color (entry 0) in each of these palettes, but the other
colors are fixed.
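The palette table above can be read as a simple two-dimensional lookup. The sketch below re-encodes it in standalone, portable C for illustration; the helper name cga_palette_color is hypothetical, and the BGI itself resolves these colors internally rather than through any such function.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical helper mirroring the CGA palette table above: returns the
   color-constant name assigned to pixel values 1-3 in palettes CGA0-CGA3.
   Pixel value 0 (the background entry) is user-selectable, so it has no
   fixed constant and the function returns NULL for it. */
static const char *cga_palette_color(int palette, int pixel_value)
{
    static const char *table[4][3] = {
        { "CGA_LIGHTGREEN", "CGA_LIGHTRED",     "CGA_YELLOW"    },
        { "CGA_LIGHTCYAN",  "CGA_LIGHTMAGENTA", "CGA_WHITE"     },
        { "CGA_GREEN",      "CGA_RED",          "CGA_BROWN"     },
        { "CGA_CYAN",       "CGA_MAGENTA",      "CGA_LIGHTGRAY" },
    };
    if (palette < 0 || palette > 3 || pixel_value < 1 || pixel_value > 3)
        return 0;
    return table[palette][pixel_value - 1];
}
```

For example, pixel value 3 in palette CGA0 is CGA_YELLOW, matching the first row of the table.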
EGA_COLORS (graphics mode)
Constant Value Constant Value
EGA_BLACK 0 EGA_DARKGRAY 56
EGA_BLUE 1 EGA_LIGHTBLUE 57
EGA_GREEN 2 EGA_LIGHTGREEN 58
EGA_CYAN 3 EGA_LIGHTCYAN 59
EGA_RED 4 EGA_LIGHTRED 60
EGA_MAGENTA 5 EGA_LIGHTMAGENTA 61
EGA_LIGHTGRAY 7 EGA_YELLOW 62
EGA_BROWN 20 EGA_WHITE 63
B) fill_pattern
Enum: Fill patterns for getfillsettings and setfillstyle.
Name             Value  Fills With...
EMPTY_FILL       0      Background color
SOLID_FILL       1      Solid fill
LINE_FILL        2      ---
LTSLASH_FILL     3      ///
SLASH_FILL       4      ///, thick lines
BKSLASH_FILL     5      \\\, thick lines
LTBKSLASH_FILL   6      \\\
HATCH_FILL       7      Light hatch
XHATCH_FILL      8      Heavy crosshatch
INTERLEAVE_FILL  9      Interleaving lines
WIDE_DOT_FILL    10     Widely spaced dots
CLOSE_DOT_FILL   11     Closely spaced dots
USER_FILL        12     User-defined fill pattern
All but EMPTY_FILL fill with the current fill color; EMPTY_FILL uses the current
background color.
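The table above is an enumeration with consecutive values 0 through 12. As a standalone sketch (the real enum lives in Borland's GRAPHICS.H, which is not compiled here), it can be written out together with the one behavioral rule the table states; the helper uses_fill_color is an illustrative name, not a BGI function.

```c
#include <assert.h>

/* Standalone mirror of the fill_patterns enumeration above
   (values 0-12, as defined in Borland's GRAPHICS.H). */
enum fill_patterns {
    EMPTY_FILL, SOLID_FILL, LINE_FILL, LTSLASH_FILL, SLASH_FILL,
    BKSLASH_FILL, LTBKSLASH_FILL, HATCH_FILL, XHATCH_FILL,
    INTERLEAVE_FILL, WIDE_DOT_FILL, CLOSE_DOT_FILL, USER_FILL
};

/* Per the note above: every pattern except EMPTY_FILL is drawn in the
   current fill color; EMPTY_FILL is drawn in the background color. */
static int uses_fill_color(enum fill_patterns p)
{
    return p != EMPTY_FILL;
}
```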
C) graphics_driver
Enum: BGI graphics drivers
Constant Value
DETECT 0 (requests autodetection)
CGA 1
MCGA 2
EGA 3
EGA64 4
EGAMONO 5
IBM8514 6
HERCMONO 7
ATT400 8
VGA 9
PC3270 10
D) graphics_errors
graphics_errors <GRAPHICS.H>
Enum: Error return codes from graphresult
Error
code  Constant             Corresponding error message string
0 grOk No error
-1 grNoInitGraph (BGI) graphics not installed (use initgraph)
-2 grNotDetected Graphics hardware not detected
-3 grFileNotFound Device driver file not found
-4 grInvalidDriver Invalid device driver file
-5 grNoLoadMem Not enough memory to load driver
-6 grNoScanMem Out of memory in scan fill
-7 grNoFloodMem Out of memory in flood fill
-8 grFontNotFound Font file not found
-9 grNoFontMem Not enough memory to load font
-10 grInvalidMode Invalid graphics mode for selected driver
-11 grError Graphics error
-12 grIOerror Graphics I/O error
-13 grInvalidFont Invalid font file
-14 grInvalidFontNum Invalid font number
-15 grInvalidDeviceNum Invalid device number
-18 grInvalidVersion Invalid version number
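Within the BGI, grapherrormsg performs exactly the code-to-string mapping shown in this table. The standalone sketch below re-encodes the table in portable C (the function name graph_error_msg is illustrative, not the real grapherrormsg, and the "Unknown error code" fallback is an assumption of this sketch).

```c
#include <assert.h>
#include <string.h>

/* Standalone mirror of the graphics_errors table above: maps an error
   code (as returned by graphresult) to its message string. */
static const char *graph_error_msg(int code)
{
    switch (code) {
    case   0: return "No error";
    case  -1: return "(BGI) graphics not installed (use initgraph)";
    case  -2: return "Graphics hardware not detected";
    case  -3: return "Device driver file not found";
    case  -4: return "Invalid device driver file";
    case  -5: return "Not enough memory to load driver";
    case  -6: return "Out of memory in scan fill";
    case  -7: return "Out of memory in flood fill";
    case  -8: return "Font file not found";
    case  -9: return "Not enough memory to load font";
    case -10: return "Invalid graphics mode for selected driver";
    case -11: return "Graphics error";
    case -12: return "Graphics I/O error";
    case -13: return "Invalid font file";
    case -14: return "Invalid font number";
    case -15: return "Invalid device number";
    case -18: return "Invalid version number";
    default:  return "Unknown error code";   /* fallback, assumed here */
    }
}
```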
E) graphics_modes
Enum: Graphics modes for each BGI driver
Graphics driver  graphics_modes  Value  Columns x Rows  Palette  Pages
CGA CGAC0 0 320 x 200 C0 1
CGAC1 1 320 x 200 C1 1
CGAC2 2 320 x 200 C2 1
CGAC3 3 320 x 200 C3 1
CGAHI 4 640 x 200 2 color 1
MCGA MCGAC0 0 320 x 200 C0 1
MCGAC1 1 320 x 200 C1 1
MCGAC2 2 320 x 200 C2 1
MCGAC3 3 320 x 200 C3 1
MCGAMED 4 640 x 200 2 color 1
MCGAHI 5 640 x 480 2 color 1
EGA EGALO 0 640 x 200 16 color 4
EGAHI 1 640 x 350 16 color 2
EGA64 EGA64LO 0 640 x 200 16 color 1
EGA64HI 1 640 x 350 4 color 1
EGA-MONO EGAMONOHI 3 640 x 350 2 color 1*
EGAMONOHI 3 640 x 350 2 color 2**
HERC HERCMONOHI 0 720 x 348 2 color 2
ATT400 ATT400C0 0 320 x 200 C0 1
ATT400C1 1 320 x 200 C1 1
ATT400C2 2 320 x 200 C2 1
ATT400C3 3 320 x 200 C3 1
ATT400MED 4 640 x 200 2 color 1
ATT400HI 5 640 x 400 2 color 1
VGA VGALO 0 640 x 200 16 color 2
VGAMED 1 640 x 350 16 color 2
VGAHI 2 640 x 480 16 color 1
PC3270 PC3270HI 0 720 x 350 2 color 1
IBM8514 IBM8514LO 0 640 x 480 256 color
IBM8514HI 1 1024 x 768 256 color
* 64K on EGAMONO card ** 256K on EGAMONO card
2. Study and understand the meaning and use of the following graphics functions.
a) closegraph b) detectgraph c) getbkcolor d) getcolor e) getmaxx f) getmaxy
g) getpixel h) getx i) gety j) grapherrormsg k) graphresult l) initgraph m) outtext
n) outtextxy o) putpixel p) setbkcolor q) setcolor r) setfillpattern
a) closegraph
Syntax:
#include <graphics.h>
void closegraph(void);
Description:
closegraph deallocates all memory allocated by the graphics system, then restores the
screen to the mode it was in before you called initgraph. (The graphics system deallocates
memory, such as the drivers, fonts, and an internal buffer, through a call to _graphfreemem.)
Return Value
None.
b) detectgraph
Syntax:
#include <graphics.h>
void detectgraph(int *graphdriver, int *graphmode);
Description:
detectgraph detects your system's graphics adapter and chooses the mode that provides the
highest resolution for that adapter. If no graphics hardware is detected, *graphdriver is set to
grNotDetected (-2), and graphresult returns grNotDetected (-2).
*graphdriver is an integer that specifies the graphics driver to be used. You can give it a value
using a constant of the graphics_drivers enumeration type defined in graphics.h and listed as
follows:
graphics_drivers constant Numeric value
DETECT 0 (requests autodetect)
CGA 1
MCGA 2
EGA 3
EGA64 4
EGAMONO 5
IBM8514 6
HERCMONO 7
ATT400 8
VGA 9
PC3270 10
*graphmode is an integer that specifies the initial graphics mode (unless *graphdriver equals
DETECT; in which case, *graphmode is set to the highest resolution available for the detected
driver). You can give *graphmode a value using a constant of the graphics_modes enumeration
type defined in graphics.h and listed as follows.
Graphics driver  graphics_mode  Value  Columns x Rows  Palette  Pages
CGA CGAC0 0 320 x 200 C0 1
CGAC1 1 320 x 200 C1 1
CGAC2 2 320 x 200 C2 1
CGAC3 3 320 x 200 C3 1
CGAHI 4 640 x 200 2 color 1
MCGA MCGAC0 0 320 x 200 C0 1
MCGAC1 1 320 x 200 C1 1
MCGAC2 2 320 x 200 C2 1
MCGAC3 3 320 x 200 C3 1
MCGAMED 4 640 x 200 2 color 1
MCGAHI 5 640 x 480 2 color 1
EGA EGALO 0 640 x 200 16 color 4
EGAHI 1 640 x 350 16 color 2
EGA64 EGA64LO 0 640 x 200 16 color 1
EGA64HI 1 640 x 350 4 color 1
EGA-MONO EGAMONOHI 3 640 x 350 2 color 1 w/64K
EGAMONOHI 3 640 x 350 2 color 2 w/256K
HERC HERCMONOHI 0 720 x 348 2 color 2
ATT400 ATT400C0 0 320 x 200 C0 1
ATT400C1 1 320 x 200 C1 1
ATT400C2 2 320 x 200 C2 1
ATT400C3 3 320 x 200 C3 1
ATT400MED 4 640 x 200 2 color 1
ATT400HI 5 640 x 400 2 color 1
VGA VGALO 0 640 x 200 16 color 2
VGAMED 1 640 x 350 16 color 2
VGAHI 2 640 x 480 16 color 1
PC3270 PC3270HI 0 720 x 350 2 color 1
IBM8514 IBM8514LO 0 640 x 480 256 color ?
IBM8514HI 1 1024 x 768 256 color ?
Return Value : None.
c) getbkcolor
Syntax
#include <graphics.h>
int getbkcolor(void);
Description
getbkcolor returns the current background color. (See the table in setbkcolor for details.)
Return Value: getbkcolor returns the current background color.
d) getcolor
Syntax
#include <graphics.h>
int getcolor(void);
Description
getcolor returns the current drawing color. The drawing color is the value to which pixels are set
when lines and so on are drawn. For example, in CGAC0 mode, the palette contains four colors:
the background color, light green, light red, and yellow. In this mode, if getcolor returns 1, the
current drawing color is light green.
Return Value: getcolor returns the current drawing color.
e) getmaxx
Syntax
#include <graphics.h>
int getmaxx(void);
Description
getmaxx returns the maximum (screen-relative) x value for the current graphics driver and mode.
For example, on a CGA in 320*200 mode, getmaxx returns 319. getmaxx is invaluable for
centering, determining the boundaries of a region onscreen, and so on.
Return Value : getmaxx returns the maximum x screen coordinate.
f) getmaxy
Syntax
#include <graphics.h>
int getmaxy(void);
Description
getmaxy returns the maximum (screen-relative) y value for the current graphics driver and mode.
For example, on a CGA in 320*200 mode, getmaxy returns 199. getmaxy is invaluable for
centering, determining the boundaries of a region onscreen, and so on.
Return Value : getmaxy returns the maximum y screen coordinate.
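Since getmaxx and getmaxy return the largest valid coordinate (width-1 and height-1), the screen midpoint is simply each maximum divided by two. This standalone sketch shows the centering arithmetic only; it assumes the CGA 320*200 mode described above, where getmaxx() would return 319 and getmaxy() 199, and does not call the BGI itself.

```c
#include <assert.h>

/* Centering arithmetic for BGI coordinates: getmaxx()/getmaxy() return
   the largest valid pixel coordinate, so the midpoint along an axis is
   that maximum divided by two (integer division). */
static int center_coord(int max_coord)
{
    return max_coord / 2;
}
```

In CGA 320*200 mode, the center of the screen would be (center_coord(319), center_coord(199)), i.e. (159, 99).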
g) getpixel
Syntax
#include <graphics.h>
unsigned getpixel(int x, int y);
Description
getpixel gets the color of the pixel located at (x,y).
Return Value : getpixel returns the color of the given pixel.
h) getx
Syntax
#include <graphics.h>
int getx(void);
Description
getx finds the current graphics position's x-coordinate. The value is viewport-relative.
Return Value : getx returns the x-coordinate of the current position.
i) gety
Syntax:
#include <graphics.h>
int gety(void);
Description:
gety returns the current graphics position's y-coordinate. The value is viewport-relative.
Return Value : gety returns the y-coordinate of the current position.
j) grapherrormsg
Syntax
#include <graphics.h>
char * grapherrormsg(int errorcode);
Description
grapherrormsg returns a pointer to the error message string associated with errorcode, the value
returned by graphresult.
Refer to the entry for errno in the Library Reference, Chapter 4, for a list of error messages and
mnemonics.
Return Value : grapherrormsg returns a pointer to an error message string.
k) graphresult
Syntax
#include <graphics.h>
int graphresult(void);
Description
graphresult returns the error code for the last graphics operation that reported an error and
resets the error level to grOk.
The following table lists the error codes returned by graphresult. The enumerated type
graphics_errors, declared in graphics.h, defines the errors in this table.
Error
code  Constant             Corresponding error message string
0 grOk No error
-1 grNoInitGraph (BGI) graphics not installed (use initgraph)
-2 grNotDetected Graphics hardware not detected
-3 grFileNotFound Device driver file not found
-4 grInvalidDriver Invalid device driver file
-5 grNoLoadMem Not enough memory to load driver
-6 grNoScanMem Out of memory in scan fill
-7 grNoFloodMem Out of memory in flood fill
-8 grFontNotFound Font file not found
-9 grNoFontMem Not enough memory to load font
-10 grInvalidMode Invalid graphics mode for selected driver
-11 grError Graphics error
-12 grIOerror Graphics I/O error
-13 grInvalidFont Invalid font file
-14 grInvalidFontNum Invalid font number
-15 grInvalidDeviceNum Invalid device number
-18 grInvalidVersion Invalid version number
Note: The variable maintained by graphresult is reset to 0 after graphresult has been called.
Therefore, you should store the value of graphresult into a temporary variable and then test it.
Return Value : graphresult returns the current graphics error number, an integer in the range -15
to 0; grapherrormsg returns a pointer to a string associated with the value returned by
graphresult.
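The read-then-reset behaviour in the Note above is the key pitfall: calling graphresult twice reports the error only once. The standalone sketch below simulates that semantics with a static variable so the consequence can be seen directly; sim_graphresult and sim_error_code are hypothetical names for illustration, not the BGI's internals.

```c
#include <assert.h>

/* Simulation of graphresult's documented behaviour: the internal error
   code is returned once, then reset to grOk (0). Store the result in a
   temporary variable before testing it more than once. */
enum { grOk = 0, grNotDetected = -2 };

static int sim_error_code = grNotDetected;  /* pretend an error occurred */

static int sim_graphresult(void)
{
    int code = sim_error_code;
    sim_error_code = grOk;   /* reading the code clears it */
    return code;
}
```

A second call after an error would already report grOk, which is why the Note tells you to save the value into a temporary variable before testing it.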
l) initgraph
Syntax
#include <graphics.h>
void initgraph(int *graphdriver, int *graphmode, char *pathtodriver);
Description
initgraph initializes the graphics system by loading a graphics driver from disk (or validating a
registered driver), and putting the system into graphics mode.
To start the graphics system, first call the initgraph function. initgraph loads the graphics driver
and puts the system into graphics mode. You can tell initgraph to use a particular graphics driver
and mode, or to autodetect the attached video adapter at run time and pick the corresponding
driver.
If you tell initgraph to autodetect, it calls detectgraph to select a graphics driver and mode.
initgraph also resets all graphics settings to their defaults (current position, palette, color,
viewport, and so on) and resets graphresult to 0.
Normally, initgraph loads a graphics driver by allocating memory for the driver (through
_graphgetmem), then loading the appropriate .BGI file from disk. As an alternative to this
dynamic loading scheme, you can link a graphics driver file (or several of them) directly into your
executable program file.
pathtodriver specifies the directory path where initgraph looks for graphics drivers. initgraph first
looks in the path specified in pathtodriver, then (if they are not there) in the current directory.
Accordingly, if pathtodriver is null, the driver files (*.BGI) must be in the current directory. This
is also the path settextstyle searches for the stroked character font files (*.CHR).
*graphdriver is an integer that specifies the graphics driver to be used. You can give it a value
using a constant of the graphics_drivers enumeration type, which is defined in graphics.h and
listed below.
graphics_drivers constant Numeric value
DETECT 0 (requests autodetect)
CGA 1
MCGA 2
EGA 3
EGA64 4
EGAMONO 5
IBM8514 6
HERCMONO 7
ATT400 8
VGA 9
PC3270 10
*graphmode is an integer that specifies the initial graphics mode (unless *graphdriver equals
DETECT; in which case, *graphmode is set by initgraph to the highest resolution available for
the detected driver). You can give *graphmode a value using a constant of the graphics_modes
enumeration type, which is defined in graphics.h and listed below.
graphdriver and graphmode must be set to valid values from the following tables, or you will get
unpredictable results. The exception is graphdriver = DETECT.
Palette listings C0, C1, C2, and C3 refer to the four predefined four-color palettes available on
CGA (and compatible) systems. You can select the background color (entry #0) in each of these
palettes, but the other colors are fixed.
Palette Number Three Colors
0 LIGHTGREEN LIGHTRED YELLOW
1 LIGHTCYAN LIGHTMAGENTA WHITE
2 GREEN RED BROWN
3 CYAN MAGENTA LIGHTGRAY
After a call to initgraph, *graphdriver is set to the current graphics driver, and *graphmode is set
to the current graphics mode.
Graphics
Columns
Driver graphics_mode Value x Rows Palette Pages
CGA CGAC0 0 320 x 200 C0 1
CGAC1 1 320 x 200 C1 1
CGAC2 2 320 x 200 C2 1
CGAC3 3 320 x 200 C3 1
CGAHI 4 640 x 200 2 color 1
MCGA MCGAC0 0 320 x 200 C0 1
MCGAC1 1 320 x 200 C1 1
MCGAC2 2 320 x 200 C2 1
MCGAC3 3 320 x 200 C3 1
MCGAMED 4 640 x 200 2 color 1
MCGAHI 5 640 x 480 2 color 1
EGA EGALO 0 640 x 200 16 color 4
EGAHI 1 640 x 350 16 color 2
EGA64 EGA64LO 0 640 x 200 16 color 1
EGA64HI 1 640 x 350 4 color 1
EGA-MONO EGAMONOHI 3 640 x 350 2 color 1 w/64K
EGAMONOHI 3 640 x 350 2 color 2 w/256K
HERC HERCMONOHI 0 720 x 348 2 color 2
ATT400 ATT400C0 0 320 x 200 C0 1
ATT400C1 1 320 x 200 C1 1
ATT400C2 2 320 x 200 C2 1
ATT400C3 3 320 x 200 C3 1
ATT400MED 4 640 x 200 2 color 1
ATT400HI 5 640 x 400 2 color 1
VGA VGALO 0 640 x 200 16 color 2
VGAMED 1 640 x 350 16 color 2
VGAHI 2 640 x 480 16 color 1
PC3270 PC3270HI 0 720 x 350 2 color 1
IBM8514 IBM8514LO 0 640 x 480 256 color ?
IBM8514HI 1 1024 x 768 256 color ?
Return Value : initgraph always sets the internal error code; on success, it sets the code to 0. If an
error occurred, *graphdriver is set to -2, -3, -4, or -5, and graphresult returns the same value as
listed below:
Constant Name Number Meaning
grNotDetected -2 Cannot detect a graphics card
grFileNotFound -3 Cannot find driver file
grInvalidDriver -4 Invalid driver
grNoLoadMem -5 Insufficient memory to load driver
m) outtext
Syntax
#include <graphics.h>
void outtext(char *textstring);
Description
outtext displays a text string in the viewport, using the current font, direction, and size.
outtext outputs textstring at the current position (CP). If the horizontal text justification is
LEFT_TEXT and the text direction is HORIZ_DIR, the CP's x-coordinate is advanced by
textwidth(textstring). Otherwise, the CP remains unchanged.
To maintain code compatibility when using several fonts, use textwidth and textheight to
determine the dimensions of the string.
If a string is printed with the default font using outtext, any part of the string that extends
outside the current viewport is truncated.
outtext is for use in graphics mode; it will not work in text mode.
Return Value : None.
n) outtextxy
Syntax
#include <graphics.h>
void outtextxy(int x, int y, char *textstring);
Description
outtextxy displays a text string in the viewport at the given position (x, y), using the current
justification settings and the current font, direction, and size.
To maintain code compatibility when using several fonts, use textwidth and textheight to
determine the dimensions of the string.
If a string is printed with the default font using outtext or outtextxy, any part of the string that
extends outside the current viewport is truncated.
outtextxy is for use in graphics mode; it will not work in text mode.
Return Value : None.
o) putpixel
Syntax
#include <graphics.h>
void putpixel(int x, int y, int color);
Description
putpixel plots a point in the color defined by color at (x,y).
Return Value : None.
p) setbkcolor
Syntax
#include <graphics.h>
void setbkcolor(int color);
Description
setbkcolor sets the background to the color specified by color. The argument color can be a
name or a number as listed below. (These symbolic names are defined in graphics.h.)
Name Value
BLACK 0
BLUE 1
GREEN 2
CYAN 3
RED 4
MAGENTA 5
BROWN 6
LIGHTGRAY 7
DARKGRAY 8
LIGHTBLUE 9
LIGHTGREEN 10
LIGHTCYAN 11
LIGHTRED 12
LIGHTMAGENTA 13
YELLOW 14
WHITE 15
For example, to set the background color to blue you can call setbkcolor(BLUE) or, equivalently,
setbkcolor(1). On CGA and EGA systems, setbkcolor changes the background color by
changing the first entry in the palette. If you use an EGA or a VGA and you change the palette
colors with setpalette or setallpalette, the defined symbolic constants might not give you the
correct color. This is because the parameter to setbkcolor indicates the entry number in the
current palette rather than a specific color (unless the parameter passed is 0, which always sets
the background color to black).
Return Value : None.
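As a quick self-check of the table above, the sketch below is plain C with no BGI calls; the helper name color_value is my own, not part of graphics.h. It maps a symbolic colour name to its numeric value:

```c
#include <assert.h>
#include <string.h>

/* Look up a BGI colour name in the table above.
   Returns its numeric value (0-15), or -1 if the name is unknown. */
int color_value(const char *name)
{
    static const char *names[16] = {
        "BLACK","BLUE","GREEN","CYAN","RED","MAGENTA","BROWN","LIGHTGRAY",
        "DARKGRAY","LIGHTBLUE","LIGHTGREEN","LIGHTCYAN","LIGHTRED",
        "LIGHTMAGENTA","YELLOW","WHITE"
    };
    int i;
    for (i = 0; i < 16; i++)
        if (strcmp(name, names[i]) == 0)
            return i;    /* table position equals the colour value */
    return -1;
}
```

For example, color_value("BLUE") gives 1, matching setbkcolor(1).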
q) setcolor
Syntax
#include <graphics.h>
void setcolor(int color);
Description
setcolor sets the current drawing color to color, which can range from 0 to getmaxcolor. The
current drawing color is the value to which pixels are set when lines, shapes, and so on are
drawn. The drawing colors available on the CGA and EGA, respectively, are shown below.
Palette Number Three Colors
0 LIGHTGREEN LIGHTRED YELLOW
1 LIGHTCYAN LIGHTMAGENTA WHITE
2 GREEN RED BROWN
3 CYAN MAGENTA LIGHTGRAY
Name Value
BLACK 0
BLUE 1
GREEN 2
CYAN 3
RED 4
MAGENTA 5
BROWN 6
LIGHTGRAY 7
DARKGRAY 8
LIGHTBLUE 9
LIGHTGREEN 10
LIGHTCYAN 11
LIGHTRED 12
LIGHTMAGENTA 13
YELLOW 14
WHITE 15
You select a drawing color by passing either the color number itself or the equivalent symbolic
name to setcolor. For example, in CGAC0 mode, the palette contains four colors: the
background color, light green, light red, and yellow. In this mode, either setcolor(3) or
setcolor(CGA_YELLOW) selects a drawing color of yellow.
Return Value : None.
r) setfillpattern
Syntax
#include <graphics.h>
void setfillpattern(char *upattern, int color);
Description
setfillpattern is like setfillstyle, except that you use it to set a user-defined 8x8 pattern rather than
a predefined pattern.
upattern is a pointer to a sequence of 8 bytes, with each byte corresponding to 8 pixels in the
pattern. Whenever a bit in a pattern byte is set to 1, the corresponding pixel is plotted.
Return Value : None.
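To see how the 8 pattern bytes map to pixels, here is a small stand-alone sketch. The helper name pattern_bit is not part of graphics.h, and it assumes the most significant bit of each byte is the leftmost pixel of its row:

```c
#include <assert.h>

/* Return 1 if the pixel at (row, col) of an 8x8 user-defined fill
   pattern is set, 0 otherwise. Each of the 8 bytes covers one row;
   bit 7 is taken as the leftmost pixel (an assumption, see above). */
int pattern_bit(const unsigned char upattern[8], int row, int col)
{
    return (upattern[row] >> (7 - col)) & 1;
}
```

With a checkerboard pattern such as {0xAA, 0x55, ...}, alternate pixels are set on alternate rows.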
3. Write a program to draw pixels at various locations on the VDU.
Program:
#include<iostream.h>
#include<conio.h>
#include<graphics.h>
#include<dos.h>
void main()
{
int gd=DETECT,gm;
int x,y;
cout<<"Enter the x and y coordinates (x,y): ";
cin>>x>>y;
initgraph(&gd,&gm,"c:\\turboc3\\bgi");
int r1,r2;
r1=getmaxx();
r2=getmaxy();
while(!kbhit())
{
delay(100);
cleardevice();
for(int i=x;i<r1;i+=30)
{
for(int j=y;j<r2;j+=30)
putpixel(i,j,15);
}
}
getch();
closegraph();
}
Output:
4. Write a program to transfer the origin of the monitor from the top-left corner to the center of
the monitor.
Program:
#include<stdio.h>
#include<conio.h>
#include<graphics.h>
#include<math.h>
void translate();
void main()
{
int ch;
int gd=DETECT,gm;
initgraph(&gd,&gm,"c:\\turboc3\\bgi");
setcolor(6);
printf("Object: ");
rectangle(100,175,175,100);
int tx,ty;
setcolor(2);
outtextxy(240,10,"TRANSLATION");
outtextxy(238,20,"------------");
printf("\nEnter tx: ");
scanf("%d",&tx);
printf("\nEnter ty: ");
scanf("%d",&ty);
cleardevice();
rectangle(100,150,150,100);
printf("\nAfter Translation");
rectangle(100+tx,150+ty,150+tx,100+ty);
getch();
closegraph();
}
Output:
5. Write a program to implement Digital Differential Analyzer (DDA) line Drawing
algorithm.
Program:
#include <graphics.h>
#include <stdio.h>
#include<conio.h>
#include <math.h>
#include<dos.h>
void main( )
{
float x,y,x1,y1,x2,y2,dx,dy,pixel;
int i,gd,gm;
printf("Enter the value of x1 : ");
scanf("%f",&x1);
printf("Enter the value of y1 : ");
scanf("%f",&y1);
printf("Enter the value of x2 : ");
scanf("%f",&x2);
printf("Enter the value of y2 : ");
scanf("%f",&y2);
detectgraph(&gd,&gm);
initgraph(&gd,&gm,"c:\\turboc3\\bgi");
dx=x2-x1;
dy=y2-y1;
if(fabs(dx)>=fabs(dy))
pixel=fabs(dx);
else
pixel=fabs(dy);
dx=dx/pixel;
dy=dy/pixel;
x=x1;
y=y1;
i=1;
while(i<=pixel)
{
putpixel(x,y,1);
x=x+dx;
y=y+dy;
i=i+1;
delay(100);
}
getch();
closegraph();
}
Output:
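The DDA loop above can be checked without a graphics driver by collecting the rounded points into arrays instead of calling putpixel. This sketch uses fabs (plain abs would truncate the float operands) and keeps the signs of dx and dy so lines drawn right-to-left also work; the name dda_points is my own, and it assumes the two endpoints differ:

```c
#include <assert.h>
#include <math.h>

/* DDA line sketch: store each rounded point in xs[]/ys[] and
   return the number of points generated (endpoints included). */
int dda_points(float x1, float y1, float x2, float y2, int xs[], int ys[])
{
    float dx = x2 - x1, dy = y2 - y1;
    float steps = (fabs(dx) >= fabs(dy)) ? fabs(dx) : fabs(dy);
    float xinc = dx / steps, yinc = dy / steps;   /* per-step increments */
    float x = x1, y = y1;
    int i;
    for (i = 0; i <= (int)steps; i++) {
        xs[i] = (int)(x + 0.5f);   /* round to the nearest pixel */
        ys[i] = (int)(y + 0.5f);
        x += xinc;
        y += yinc;
    }
    return i;
}
```

For the line (0,0)-(5,3) it generates 6 points, ending exactly at (5,3).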
6. Write a program to implement Bresenham's line drawing algorithm.
Program:
# include <stdio.h>
# include <conio.h>
# include <graphics.h>
void main()
{
int dx,dy,x,y,p,x1,y1,x2,y2;
int gd,gm;
clrscr();
printf("\n\n\tEnter the co-ordinates of the first point : ");
scanf("%d %d",&x1,&y1);
printf("\n\n\tEnter the co-ordinates of the second point : ");
scanf("%d %d",&x2,&y2);
dx = (x2 - x1);
dy = (y2 - y1);
p = 2 * (dy) - (dx);
x = x1;
y = y1;
detectgraph(&gd,&gm);
initgraph(&gd,&gm,"c:\\turboc3\\bgi");
putpixel(x,y,WHITE);
while(x < x2)
{
if(p < 0)
{
x=x+1;
y=y;
p = p + 2 * (dy);
}
else
{
x=x+1;
y=y+1;
p = p + 2 * (dy - dx);
}
putpixel(x,y,WHITE);
}
getch();
closegraph();
}
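The decision-variable update can be tested in isolation for the same restricted case the program handles (slope between 0 and 1, with x1 < x2). The helper name bresenham_points is my own:

```c
#include <assert.h>

/* Bresenham sketch for 0 <= slope <= 1 and x1 < x2: store each
   plotted point in xs[]/ys[] and return the point count. */
int bresenham_points(int x1, int y1, int x2, int y2, int xs[], int ys[])
{
    int dx = x2 - x1, dy = y2 - y1;
    int p = 2 * dy - dx;           /* initial decision variable */
    int x = x1, y = y1, n = 0;
    xs[n] = x; ys[n] = y; n++;
    while (x < x2) {
        x++;
        if (p < 0)
            p += 2 * dy;           /* keep the same scan line */
        else {
            y++;                   /* step to the next scan line */
            p += 2 * (dy - dx);
        }
        xs[n] = x; ys[n] = y; n++;
    }
    return n;
}
```

For (0,0)-(5,3) the plotted sequence is (0,0) (1,1) (2,1) (3,2) (4,2) (5,3), using only integer arithmetic.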
8. Write a program to show the following attributes of output primitives.
A) Line style B) Color C) Intensity
A) Line styles
Program:
#include <graphics.h>
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <conio.h>
void main()
{
int gd=DETECT,gm;
int s;
char *lname[]={"SOLID LINE","DOTTED LINE","CENTER LINE",
"DASHED LINE","USERBIT LINE"};
initgraph(&gd,&gm,"c:\\turboc3\\bgi");
clrscr();
cleardevice();
printf("Line styles:");
for (s=0;s<5;s++)
{
setlinestyle(s,1,3);
line(100,30+s*50,250,250+s*50);
outtextxy(255,250+s*50,lname[s]);
}
getch();
closegraph();
}
Output:
B)Color
Program:
#include<stdio.h>
#include<conio.h>
main()
{
clrscr();
textcolor(RED);
cprintf("C programming\r\n");
textcolor(BLUE);
cprintf("C programming\r\n");
textcolor(GREEN);
cprintf("C programming\r\n");
textcolor(YELLOW);
cprintf("C programming\r\n");
textcolor(BROWN);
cprintf("C programming\r\n");
getch();
return 0;
}
Output:
C) Intensity:
Hue, Intensity, Brightness
Hue (or simply, the "colour") is the dominant wavelength, or dominant frequency, in the
energy distribution of a light source; for example, a source whose dominant frequency lies
near the red end of the visible range is perceived as red. The integral of the energy over all
visible wavelengths is proportional to the intensity of the colour.
Intensity : radiant energy emitted per unit time, per unit solid angle, and per unit projected
area of the source (related to the luminance of the source).
Brightness : the perceived intensity of light.
9. Write a program to implement the following area fill algorithms.
A) Boundary fill B) Flood fill C) Scan line algorithm
A) Boundary Fill algorithm
Program:
#include<stdio.h>
#include<conio.h>
#include<graphics.h>
#include<dos.h>
void fill_right(int x,int y);
void fill_left(int x,int y);
void main()
{
int gd=DETECT,gm,x,y,n,i;
clrscr();
initgraph(&gd,&gm,"c:\\turboc3\\bgi");
printf("*** Boundary Fill algorithm ***");
/*- draw object -*/
line (50,50,200,50);
line (200,50,200,300);
line (200,300,50,300);
line (50,300,50,50);
/*- set seed point -*/
x=100; y=100;
fill_right(x,y);
fill_left(x-1,y);
getch();
}
void fill_right(int x,int y)
{
if((getpixel(x,y) != WHITE)&&(getpixel(x,y) != RED))
{
putpixel(x,y,RED);
fill_right(++x,y); x=x-1;
fill_right(x,y-1);
fill_right(x,y+1);
}
delay(1);
}
void fill_left(int x,int y)
{
if((getpixel(x,y) != WHITE)&&(getpixel(x,y) != RED))
{
putpixel(x,y,RED);
fill_left(--x,y); x=x+1;
fill_left(x,y-1);
fill_left(x,y+1);
}
delay(1);
}
Output:
B) Flood Fill
Program:
#include <stdio.h>
#include <conio.h>
#include <graphics.h>
#include <dos.h>
void flood(int,int,int,int);
void main()
{
int gd,gm=DETECT;
clrscr();
detectgraph(&gd,&gm);
initgraph(&gd,&gm,"c:\\turboc3\\bgi");
rectangle(50,50,100,100);
flood(55,55,12,0);
getch();
}
void flood(int x,int y, int fill_col, int old_col)
{
if(getpixel(x,y)==old_col)
{
delay(10);
putpixel(x,y,fill_col);
flood(x+1,y,fill_col,old_col);
flood(x-1,y,fill_col,old_col);
flood(x,y+1,fill_col,old_col);
flood(x,y-1,fill_col,old_col);
}
}
Output:
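The same recursive 4-connected fill can be exercised on a small in-memory grid instead of the screen, which makes it testable without a BGI driver; flood_grid below is a test-only stand-in for flood, with explicit bounds checks replacing the screen edges:

```c
#include <assert.h>

/* 4-connected flood fill on an 8x8 grid: replace every cell of colour
   old reachable from (x, y) with colour fill. */
void flood_grid(int g[8][8], int x, int y, int fill, int old)
{
    if (x < 0 || x >= 8 || y < 0 || y >= 8) return;  /* off the grid */
    if (g[y][x] != old || fill == old) return;       /* stop at other colours */
    g[y][x] = fill;
    flood_grid(g, x + 1, y, fill, old);
    flood_grid(g, x - 1, y, fill, old);
    flood_grid(g, x, y + 1, fill, old);
    flood_grid(g, x, y - 1, fill, old);
}
```

Starting the fill at one corner recolours every background cell while leaving non-background cells untouched.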
C) Scan line
Program:
# include <iostream.h>
# include <graphics.h>
# include <conio.h>
# include <math.h>
class Edge
{
public:
int yUpper;
float xIntersect;
float dxPerScan;
Edge *next;
};
class PointCoordinates
{
public:
float x;
float y;
PointCoordinates( )
{
x=0;
y=0;
}
};
class LineCoordinates
{
public:
float x_1;
float y_1;
float x_2;
float y_2;
LineCoordinates( )
{
x_1=0;
}
void Polygon(const int n,const int coordinates[])
{
if(n>=2)
{
Line(coordinates[0],coordinates[1], coordinates[2],coordinates[3]);
for(int count=1;count<(n-1);count++)
Line(coordinates[(count*2)],coordinates[((count*2)+1)], coordinates[((count+1)*2)],
coordinates[(((count+1)*2)+1)]);
}
}
void Line(const int x_1,const int y_1,const int x_2,const int y_2)
{
int color=getcolor( );
int x1=x_1;
int y1=y_1;
int x2=x_2;
int y2=y_2;
if(x_1>x_2)
{
x1=x_2;
y1=y_2;
x2=x_1;
y2=y_1;
}
int dx=abs(x2-x1);
int dy=abs(y2-y1);
int inc_dec=((y2>=y1)?1:-1);
if(dx>dy)
{
int two_dy=(2*dy);
int two_dy_dx=(2*(dy-dx));
int p=((2*dy)-dx);
int x=x1;
int y=y1;
putpixel(x,y,color);
while(x<x2)
{
x++;
if(p<0)
p+=two_dy;
else
{
y+=inc_dec;
p+=two_dy_dx;
}
putpixel(x,y,color);
}
}
else
{
int two_dx=(2*dx);
int two_dx_dy=(2*(dx-dy));
int p=((2*dx)-dy);
int x=x1;
int y=y1;
putpixel(x,y,color);
while(y!=y2)
{
y+=inc_dec;
if(p<0)
p+=two_dx;
else
{
x++;
p+=two_dx_dy;
}
putpixel(x,y,color);
}
}
}
void show_screen( )
{
setfillstyle(1,1);
bar(178,26,450,38);
settextstyle(0,0,1);
setcolor(15);
outtextxy(5,5,"");
outtextxy(5,17,"");
outtextxy(5,29,"");
outtextxy(5,41,"");
outtextxy(5,53,"");
setcolor(11);
outtextxy(185,29,"Scan Line Polygon Fill Algorithm");
setcolor(15);
for(int count=0;count<=30;count++)
outtextxy(5,(65+(count*12)),"");
outtextxy(5,438,"");
outtextxy(5,450,"");
outtextxy(5,462,"");
setcolor(12);
outtextxy(229,450,"");
}
Output:
void Polygon(const int n,const int coordinates[])
{
if(n>=2)
{
Line(coordinates[0],coordinates[1],coordinates[2],coordinates[3]);
for(int count=1;count<(n-1);count++)
Line(coordinates[(count*2)],coordinates[((count*2)+1)],coordinates[((count+1)*2)],
coordinates[(((count+1)*2)+1)]);
}
}
void Line(const int x_1,const int y_1,const int x_2,const int y_2)
{
int color=getcolor( );
int x1=x_1;
int y1=y_1;
int x2=x_2;
int y2=y_2;
if(x_1>x_2)
{
x1=x_2;
y1=y_2;
x2=x_1;
y2=y_1;
}
int dx=abs(x2-x1);
int dy=abs(y2-y1);
int inc_dec=((y2>=y1)?1:-1);
if(dx>dy)
{
int two_dy=(2*dy);
int two_dy_dx=(2*(dy-dx));
int p=((2*dy)-dx);
int x=x1;
int y=y1;
putpixel(x,y,color);
while(x<x2)
{
x++;
if(p<0)
p+=two_dy;
else
{
y+=inc_dec;
p+=two_dy_dx;
}
putpixel(x,y,color);
}
}
else
{
int two_dx=(2*dx);
int two_dx_dy=(2*(dx-dy));
int p=((2*dx)-dy);
int x=x1;
int y=y1;
putpixel(x,y,color);
while(y!=y2)
{
y+=inc_dec;
if(p<0)
p+=two_dx;
else
{
x++;
p+=two_dx_dy;
}
putpixel(x,y,color);
}
}
}
void show_screen( )
{
setfillstyle(1,1);
bar(205,26,430,38);
settextstyle(0,0,1);
setcolor(15);
outtextxy(5,5,"");
outtextxy(5,17,"");
outtextxy(5,29,"");
outtextxy(5,41,"");
outtextxy(5,53,"");
setcolor(11);
outtextxy(210,29,"Translation Transformation");
setcolor(15);
for(int count=0;count<=30;count++)
outtextxy(5,(65+(count*12)),"");
outtextxy(5,438,"");
outtextxy(5,450,"");
outtextxy(5,462,"");
setcolor(12);
outtextxy(229,450,"Press any Key to exit.");
}
11. Write a program to perform scaling of two dimensional objects.
A) About the origin B) About fixed point
Program:
# include <iostream.h>
# include <graphics.h>
# include <conio.h>
# include <math.h>
void show_screen( );
void apply_fixed_point_scaling(const int,int [],const float, const float,const int,const int);
void multiply_matrices(const float[3],const float[3][3],float[3]);
void Polygon(const int,const int []);
void Line(const int,const int,const int,const int);
int main( )
{
int driver=VGA;
int mode=VGAHI;
initgraph(&driver,&mode,"..\\Bgi");
show_screen( );
int polygon_points[10]={ 270,290, 270,190, 370,190, 370,290, 270,290 };
setcolor(15);
Polygon(5,polygon_points);
setcolor(15);
settextstyle(0,0,1);
outtextxy(50,400,"*** (320,240) is taken as Fixed Point.");
outtextxy(50,415,"*** Use '+' and '-' Keys to apply Scaling.");
int key_code=0;
char Key=NULL;
do
{
Key=NULL;
key_code=0;
Key=getch( );
key_code=int(Key);
if(key_code==0)
{
Key=getch( );
key_code=int(Key);
}
if(key_code==27)
break;
else if(key_code==43)
{
setfillstyle(1,0);
bar(40,70,600,410);
apply_fixed_point_scaling(5,polygon_points, 1.1,1.1,320,240);
setcolor(10);
Polygon(5,polygon_points);
}
else if(key_code==45)
{
setfillstyle(1,0);
bar(40,70,600,410);
apply_fixed_point_scaling(5,polygon_points, 0.9,0.9,320,240);
setcolor(12);
Polygon(5,polygon_points);
}
}
while(1);
return 0;
}
void apply_fixed_point_scaling(const int n,int coordinates[],const float Sx,const float Sy,
const int xf,const int yf)
{
for(int count_1=0;count_1<n;count_1++)
{
float matrix_a[3]={coordinates[(count_1*2)],coordinates[((count_1*2)+1)],1};
float matrix_b[3][3]={ {Sx,0,0} , {0,Sy,0} ,{ ((1-Sx)*xf),((1-Sy)*yf),1} };
float matrix_c[3]={0};
multiply_matrices(matrix_a,matrix_b,matrix_c);
coordinates[(count_1*2)]=(int)(matrix_c[0]+0.5);
coordinates[((count_1*2)+1)]=(int)(matrix_c[1]+0.5);
}
}
void multiply_matrices(const float matrix_1[3],const float matrix_2[3][3],float matrix_3[3])
{
for(int count_1=0;count_1<3;count_1++)
{
for(int count_2=0;count_2<3;count_2++)
matrix_3[count_1]+=
(matrix_1[count_2]*matrix_2[count_2][count_1]);
}
}
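For a single point, the row-vector matrix product used above reduces to x' = Sx*x + (1-Sx)*xf and y' = Sy*y + (1-Sy)*yf. The sketch below applies exactly this reduced form (scale_about_point is my own name, not part of the program):

```c
#include <assert.h>

/* Scale the point (*x, *y) about the fixed point (xf, yf) by (Sx, Sy),
   rounding the result to the nearest pixel. */
void scale_about_point(float Sx, float Sy, int xf, int yf, int *x, int *y)
{
    float nx = Sx * (*x) + (1 - Sx) * xf;
    float ny = Sy * (*y) + (1 - Sy) * yf;
    *x = (int)(nx + 0.5f);
    *y = (int)(ny + 0.5f);
}
```

Doubling about (320,240) maps (400,300) to (480,360), while the fixed point itself stays put.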
void Polygon(const int n,const int coordinates[])
{
if(n>=2)
{
Line(coordinates[0],coordinates[1], coordinates[2],coordinates[3]);
for(int count=1;count<(n-1);count++)
Line(coordinates[(count*2)],coordinates[((count*2)+1)], coordinates[((count+1)*2)],
coordinates[(((count+1)*2)+1)]);
}
}
void Line(const int x_1,const int y_1,const int x_2,const int y_2)
{
int color=getcolor( );
int x1=x_1;
int y1=y_1;
int x2=x_2;
int y2=y_2;
if(x_1>x_2)
{
x1=x_2;
y1=y_2;
x2=x_1;
y2=y_1;
}
int dx=abs(x2-x1);
int dy=abs(y2-y1);
int inc_dec=((y2>=y1)?1:-1);
if(dx>dy)
{
int two_dy=(2*dy);
int two_dy_dx=(2*(dy-dx));
int p=((2*dy)-dx);
int x=x1;
int y=y1;
putpixel(x,y,color);
while(x<x2)
{
x++;
if(p<0)
p+=two_dy;
else
{
y+=inc_dec;
p+=two_dy_dx;
}
putpixel(x,y,color);
}
}
else
{
int two_dx=(2*dx);
int two_dx_dy=(2*(dx-dy));
int p=((2*dx)-dy);
int x=x1;
int y=y1;
putpixel(x,y,color);
while(y!=y2)
{
y+=inc_dec;
if(p<0)
p+=two_dx;
else
{
x++;
p+=two_dx_dy;
}
putpixel(x,y,color);
}
}
12. Write a program to perform the Rotation of two dimensional objects.
A) About the origin B) About fixed point
Program:
# include <iostream.h>
# include <graphics.h>
# include <conio.h>
# include <math.h>
void show_screen( );
void apply_pivot_point_rotation(const int,int [],float,const int,const int);
void multiply_matrices(const float[3],const float[3][3],float[3]);
void Polygon(const int,const int []);
void Line(const int,const int,const int,const int);
int main( )
{
int driver=VGA;
int mode=VGAHI;
initgraph(&driver,&mode,"..\\Bgi");
show_screen( );
int polygon_points[8]={ 250,290, 320,190, 390,290, 250,290 };
setcolor(15);
Polygon(4,polygon_points);
setcolor(15);
settextstyle(0,0,1);
outtextxy(50,400,"*** (320,240) is taken as Fixed Point.");
outtextxy(50,415,"*** Use '+' and '-' Keys to apply Rotation.");
int key_code=0;
char Key=NULL;
do
{
Key=NULL;
key_code=0;
Key=getch( );
key_code=int(Key);
if(key_code==0)
{
Key=getch( );
key_code=int(Key);
}
if(key_code==27)
break;
else if(key_code==43)
{
setfillstyle(1,0);
bar(40,70,600,410);
apply_pivot_point_rotation(4,polygon_points,5,320,240);
setcolor(10);
Polygon(4,polygon_points);
}
else if(key_code==45)
{
setfillstyle(1,0);
bar(40,70,600,410);
apply_pivot_point_rotation(4,polygon_points,-5,320,240);
setcolor(12);
Polygon(4,polygon_points);
}
}
while(1);
return 0;
}
void apply_pivot_point_rotation(const int n,int coordinates[],
float angle,const int xr,const int yr)
{
angle*=(M_PI/180);
for(int count_1=0;count_1<n;count_1++)
{
float matrix_a[3]={coordinates[(count_1*2)],coordinates[((count_1*2)+1)],1};
float temp_1=(((1-cos(angle))*xr)+(yr*sin(angle)));
float temp_2=(((1-cos(angle))*yr)-(xr*sin(angle)));
float matrix_b[3][3]={ { cos(angle),sin(angle),0 } , { -sin(angle),cos(angle),0 } ,
{ temp_1,temp_2,1 } };
float matrix_c[3]={0};
multiply_matrices(matrix_a,matrix_b,matrix_c);
coordinates[(count_1*2)]=(int)(matrix_c[0]+0.5);
coordinates[((count_1*2)+1)]=(int)(matrix_c[1]+0.5);
}
}
void multiply_matrices(const float matrix_1[3],const float matrix_2[3][3],float matrix_3[3])
{
for(int count_1=0;count_1<3;count_1++)
{
for(int count_2=0;count_2<3;count_2++)
matrix_3[count_1]+= (matrix_1[count_2]*matrix_2[count_2][count_1]);
}
}
void Polygon(const int n,const int coordinates[])
{
if(n>=2)
{
Line(coordinates[0],coordinates[1], coordinates[2],coordinates[3]);
for(int count=1;count<(n-1);count++)
Line(coordinates[(count*2)],coordinates[((count*2)+1)],coordinates[((count+1)*2)],
coordinates[(((count+1)*2)+1)]);
}
}
void Line(const int x_1,const int y_1,const int x_2,const int y_2)
{
int color=getcolor( );
int x1=x_1;
int y1=y_1;
int x2=x_2;
int y2=y_2;
if(x_1>x_2)
{
x1=x_2;
y1=y_2;
x2=x_1;
y2=y_1;
}
int dx=abs(x2-x1);
int dy=abs(y2-y1);
int inc_dec=((y2>=y1)?1:-1);
if(dx>dy)
{
int two_dy=(2*dy);
int two_dy_dx=(2*(dy-dx));
int p=((2*dy)-dx);
int x=x1;
int y=y1;
putpixel(x,y,color);
while(x<x2)
{
x++;
if(p<0)
p+=two_dy;
else
{
y+=inc_dec;
p+=two_dy_dx;
}
putpixel(x,y,color);
}
}
else
{
int two_dx=(2*dx);
int two_dx_dy=(2*(dx-dy));
int p=((2*dx)-dy);
int x=x1;
int y=y1;
putpixel(x,y,color);
while(y!=y2)
{
y+=inc_dec;
if(p<0)
p+=two_dx;
else
{
x++;
p+=two_dx_dy;
}
13. Write a program to perform reflection of two dimensional objects.
A) y = 0 (x-axis) B) x = 0 (y-axis) C) y = x
Program:
# include <iostream.h>
# include <graphics.h>
# include <conio.h>
# include <math.h>
void show_screen( );
void apply_reflection_along_x_axis(const int,int []);
void apply_reflection_along_y_axis(const int,int []);
void apply_reflection_wrt_origin(const int,int []);
void multiply_matrices(const int[3],const int[3][3],int[3]);
void Polygon(const int,const int []);
void Line(const int,const int,const int,const int);
int main( )
{
int driver=VGA;
int mode=VGAHI;
initgraph(&driver,&mode,"..\\Bgi");
show_screen( );
setcolor(15);
Line(320,100,320,400);
Line(315,105,320,100);
Line(320,100,325,105);
Line(315,395,320,400);
Line(320,400,325,395);
Line(150,240,500,240);
Line(150,240,155,235);
Line(150,240,155,245);
Line(500,240,495,235);
Line(500,240,495,245);
settextstyle(2,0,4);
outtextxy(305,85,"y-axis");
outtextxy(305,402,"y'-axis");
outtextxy(505,233,"x-axis");
outtextxy(105,233,"x'-axis");
outtextxy(380,100,"Original Object");
outtextxy(380,385,"Reflection along x-axis");
outtextxy(135,100,"Reflection along y-axis");
outtextxy(135,385,"Reflection w.r.t origin");
int polygon_points[8]={ 350,200, 380,150, 470,200, 350,200 };
int x_polygon[8]={ 350,200, 380,150, 470,200, 350,200 };
int y_polygon[8]={ 350,200, 380,150, 470,200, 350,200 };
int origin_polygon[8]={ 350,200, 380,150, 470,200, 350,200 };
setcolor(15);
Polygon(4,polygon_points);
apply_reflection_along_x_axis(4,x_polygon);
setcolor(12);
Polygon(4,x_polygon);
apply_reflection_along_y_axis(4,y_polygon);
setcolor(14);
Polygon(4,y_polygon);
apply_reflection_wrt_origin(4,origin_polygon);
setcolor(10);
Polygon(4,origin_polygon);
getch( );
return 0;
}
void apply_reflection_along_x_axis(const int n,int coordinates[])
{
for(int count=0;count<n;count++)
{
int matrix_a[3]={coordinates[(count*2)],coordinates[((count*2)+1)],1};
int matrix_b[3][3]={ {1,0,0} , {0,-1,0} ,{ 0,0,1} };
int matrix_c[3]={0};
multiply_matrices(matrix_a,matrix_b,matrix_c);
coordinates[(count*2)]=matrix_c[0];
coordinates[((count*2)+1)]=(480+matrix_c[1]);
}
}
void apply_reflection_along_y_axis(const int n,int coordinates[])
{
for(int count=0;count<n;count++)
{
int matrix_a[3]={coordinates[(count*2)],coordinates[((count*2)+1)],1};
int matrix_b[3][3]={ {-1,0,0} , {0,1,0} ,{ 0,0,1} };
int matrix_c[3]={0};
multiply_matrices(matrix_a,matrix_b,matrix_c);
coordinates[(count*2)]=(640+matrix_c[0]);
coordinates[((count*2)+1)]=matrix_c[1];
}
}
void apply_reflection_wrt_origin(const int n,int coordinates[])
{
for(int count=0;count<n;count++)
{
int matrix_a[3]={coordinates[(count*2)], coordinates[((count*2)+1)],1};
int matrix_b[3][3]={ {-1,0,0} , {0,-1,0} ,{ 0,0,1} };
int matrix_c[3]={0};
multiply_matrices(matrix_a,matrix_b,matrix_c);
coordinates[(count*2)]=(640+matrix_c[0]);
coordinates[((count*2)+1)]=(480+matrix_c[1]);
}
}
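For single points, the three reflection matrices above reduce to y' = 480 - y, x' = 640 - x, and the two combined; the 640 and 480 offsets are the program's trick for shifting the mirrored copy back onto the visible 640x480 screen. A test-only sketch of the reduced forms (function names are mine):

```c
#include <assert.h>

/* Reflection about the x-axis, shifted back onto the 640x480 screen. */
void reflect_x_axis(int *x, int *y) { (void)x; *y = 480 - *y; }

/* Reflection about the y-axis, shifted back onto the screen. */
void reflect_y_axis(int *x, int *y) { (void)y; *x = 640 - *x; }

/* Reflection through the origin: both of the above combined. */
void reflect_origin(int *x, int *y) { *x = 640 - *x; *y = 480 - *y; }
```

Applied to the vertex (350,200), the three reflections land at (350,280), (290,200), and (290,280) respectively.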
void multiply_matrices(const int matrix_1[3], const int matrix_2[3][3],int matrix_3[3])
{
for(int count_1=0;count_1<3;count_1++)
{
for(int count_2=0;count_2<3;count_2++)
matrix_3[count_1]+=
(matrix_1[count_2]*matrix_2[count_2][count_1]);
}
}
void Polygon(const int n,const int coordinates[])
{
if(n>=2)
{
Line(coordinates[0],coordinates[1],coordinates[2],coordinates[3]);
for(int count=1;count<(n-1);count++)
Line(coordinates[(count*2)],coordinates[((count*2)+1)], coordinates[((count+1)*2)],
coordinates[(((count+1)*2)+1)]);
}
}
void Line(const int x_1,const int y_1,const int x_2,const int y_2)
{
int color=getcolor( );
int x1=x_1;
int y1=y_1;
int x2=x_2;
int y2=y_2;
if(x_1>x_2)
{
x1=x_2;
y1=y_2;
x2=x_1;
y2=y_1;
}
int dx=abs(x2-x1);
int dy=abs(y2-y1);
int inc_dec=((y2>=y1)?1:-1);
if(dx>dy)
{
int two_dy=(2*dy);
int two_dy_dx=(2*(dy-dx));
int p=((2*dy)-dx);
int x=x1;
int y=y1;
putpixel(x,y,color);
while(x<x2)
{
x++;
if(p<0)
p+=two_dy;
else
{
y+=inc_dec;
p+=two_dy_dx;
}
putpixel(x,y,color);
}
}
else
{
int two_dx=(2*dx);
int two_dx_dy=(2*(dx-dy));
int p=((2*dx)-dy);
int x=x1;
int y=y1;
putpixel(x,y,color);
while(y!=y2)
{
y+=inc_dec;
if(p<0)
p+=two_dx;
else
{
x++;
p+=two_dx_dy;
}
putpixel(x,y,color);
}
}
}
void show_screen( )
{
setfillstyle(1,1);
bar(208,26,430,38);
settextstyle(0,0,1);
setcolor(15);
outtextxy(5,5,"");
outtextxy(5,17,"");
outtextxy(5,29,"");
outtextxy(5,41,"");
outtextxy(5,53,"");
setcolor(11);
outtextxy(218,29,"___________ Reflection ___________");
setcolor(15);
for(int count=0;count<=30;count++)
outtextxy(5,(65+(count*12)),"");
outtextxy(5,438,"");
outtextxy(5,450,"");
outtextxy(5,462,"");
setcolor(12);
outtextxy(229,450,"Press any Key to exit.");
}
Output:
outtextxy(305,402,"y'-axis");
outtextxy(505,233,"x-axis");
outtextxy(105,233,"x'-axis");
outtextxy(350,100,"Reflection about the line y=x");
outtextxy(115,100,"Reflection about the line y=-x");
int x_polygon[8]={ 340,200, 420,120, 370,120, 340,200 };
int y_polygon[8]={ 300,200, 220,120, 270,120, 300,200 };
setcolor(15);
Polygon(4,x_polygon);
Polygon(4,y_polygon);
apply_reflection_about_line_yex(4,x_polygon);
apply_reflection_about_line_yemx(4,y_polygon);
setcolor(7);
Polygon(4,x_polygon);
Polygon(4,y_polygon);
getch( );
return 0;
}
void apply_reflection_about_line_yex(const int n,int coordinates[])
{
apply_rotation(n,coordinates,45);
apply_reflection_along_x_axis(n,coordinates);
apply_rotation(n,coordinates,-45);
}
void apply_reflection_about_line_yemx(const int n,int coordinates[])
{
apply_rotation(n,coordinates,45);
apply_reflection_along_y_axis(n,coordinates);
apply_rotation(n,coordinates,-45);
}
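About the origin, the rotate / reflect / rotate-back composition above is equivalent to simply swapping coordinates: reflection about the line y = x maps (x, y) to (y, x), and about y = -x to (-y, -x). A minimal sketch of these equivalents (function names are mine):

```c
#include <assert.h>

/* Reflection about the line y = x: swap the coordinates. */
void reflect_y_eq_x(int *x, int *y)  { int t = *x; *x = *y; *y = t; }

/* Reflection about the line y = -x: swap and negate both. */
void reflect_y_eq_mx(int *x, int *y) { int t = *x; *x = -*y; *y = -t; }
```

The program composes the general rotation and axis reflections instead, because its mirror lines pass through the screen center rather than the origin.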
void apply_rotation(const int n,int coordinates[],float angle)
{
float xr=320;
float yr=240;
angle*=(M_PI/180);
for(int count_1=0;count_1<n;count_1++)
{
float matrix_a[3]={coordinates[(count_1*2)],coordinates[((count_1*2)+1)],1};
float temp_1=(((1-cos(angle))*xr)+(yr*sin(angle)));
float temp_2=(((1-cos(angle))*yr)-(xr*sin(angle)));
float matrix_b[3][3]={ { cos(angle),sin(angle),0 } , { -sin(angle),cos(angle),0 } ,
{ temp_1,temp_2,1 } };
float matrix_c[3]={0};
multiply_matrices(matrix_a,matrix_b,matrix_c);
coordinates[(count_1*2)]=(int)(matrix_c[0]+0.5);
coordinates[((count_1*2)+1)]=(int)(matrix_c[1]+0.5);
}
}
void apply_reflection_along_x_axis(const int n,int coordinates[])
{
for(int count=0;count<n;count++)
{
float matrix_a[3]={coordinates[(count*2)],coordinates[((count*2)+1)],1};
float matrix_b[3][3]={ {1,0,0} , {0,-1,0} ,{ 0,0,1} };
float matrix_c[3]={0};
multiply_matrices(matrix_a,matrix_b,matrix_c);
coordinates[(count*2)]=matrix_c[0];
coordinates[((count*2)+1)]=(480+matrix_c[1]);
}
}
void apply_reflection_along_y_axis(const int n,int coordinates[])
{
for(int count=0;count<n;count++)
{
float matrix_a[3]={coordinates[(count*2)],coordinates[((count*2)+1)],1};
float matrix_b[3][3]={ {-1,0,0} , {0,1,0} ,{ 0,0,1} };
float matrix_c[3]={0};
multiply_matrices(matrix_a,matrix_b,matrix_c);
coordinates[(count*2)]=(640+matrix_c[0]);
coordinates[((count*2)+1)]=matrix_c[1];
}
}
void multiply_matrices(const float matrix_1[3], const float matrix_2[3][3],float matrix_3[3])
{
for(int count_1=0;count_1<3;count_1++)
{
for(int count_2=0;count_2<3;count_2++)
matrix_3[count_1]+=
(matrix_1[count_2]*matrix_2[count_2][count_1]);
}
}
void Polygon(const int n,const int coordinates[])
{
if(n>=2)
{
Line(coordinates[0],coordinates[1], coordinates[2],coordinates[3]);
for(int count=1;count<(n-1);count++)
Line(coordinates[(count*2)],coordinates[((count*2)+1)],coordinates[((count+1)*2)],
coordinates[(((count+1)*2)+1)]);
}
}
void Line(const int x_1,const int y_1,const int x_2,const int y_2)
{
int color=getcolor( );
int x1=x_1;
int y1=y_1;
int x2=x_2;
int y2=y_2;
if(x_1>x_2)
{
x1=x_2;
y1=y_2;
x2=x_1;
y2=y_1;
}
int dx=abs(x2-x1);
int dy=abs(y2-y1);
int inc_dec=((y2>=y1)?1:-1);
if(dx>dy)
{
int two_dy=(2*dy);
int two_dy_dx=(2*(dy-dx));
int p=((2*dy)-dx);
int x=x1;
int y=y1;
putpixel(x,y,color);
while(x<x2)
{
x++;
if(p<0)
p+=two_dy;
else
{
y+=inc_dec;
p+=two_dy_dx;
}
putpixel(x,y,color);
}
}
else
{
int two_dx=(2*dx);
int two_dx_dy=(2*(dx-dy));
int p=((2*dx)-dy);
int x=x1;
int y=y1;
putpixel(x,y,color);
while(y!=y2)
{
y+=inc_dec;
if(p<0)
p+=two_dx;
else
{
x++;
p+=two_dx_dy;
}
putpixel(x,y,color);
}
}
}
void Dashed_line(const int x_1,const int y_1,const int x_2,
const int y_2,const int line_type)
{
int count=0;
int color=getcolor( );
int x1=x_1;
int y1=y_1;
int x2=x_2;
int y2=y_2;
if(x_1>x_2)
{
x1=x_2;
y1=y_2;
x2=x_1;
y2=y_1;
}
int dx=abs(x2-x1);
int dy=abs(y2-y1);
int inc_dec=((y2>=y1)?1:-1);
if(dx>dy)
{
int two_dy=(2*dy);
int two_dy_dx=(2*(dy-dx));
int p=((2*dy)-dx);
int x=x1;
int y=y1;
putpixel(x,y,color);
while(x<x2)
{
x++;
if(p<0)
p+=two_dy;
else
{
y+=inc_dec;
p+=two_dy_dx;
}
if((count%2)!=0 && line_type==0)
putpixel(x,y,color);
else if((count%5)!=4 && line_type==1)
putpixel(x,y,color);
else if((count%10)!=8 && (count%10)!=9 && line_type==2)
putpixel(x,y,color);
else if((count%20)!=18 && (count%20)!=19 && line_type==3)
putpixel(x,y,color);
else if((count%12)!=7 && (count%12)!=8 &&
(count%12)!=10 && (count%12)!=11 && line_type==4)
putpixel(x,y,color);
count++;
}
}
int x1=x_1;
int y1=y_1;
int x2=x_2;
int y2=y_2;
if(x_1>x_2)
{
x1=x_2;
y1=y_2;
x2=x_1;
y2=y_1;
}
int dx=abs(x2-x1);
int dy=abs(y2-y1);
int inc_dec=((y2>=y1)?1:-1);
if(dx>dy)
{
int two_dy=(2*dy);
int two_dy_dx=(2*(dy-dx));
int p=((2*dy)-dx);
int x=x1;
int y=y1;
putpixel(x,y,color);
while(x<x2)
{
x++;
if(p<0)
p+=two_dy;
else
{
y+=inc_dec;
p+=two_dy_dx;
}
putpixel(x,y,color);
}
}
else
{
int two_dx=(2*dx);
int two_dx_dy=(2*(dx-dy));
int p=((2*dx)-dy);
int x=x1;
int y=y1;
putpixel(x,y,color);
while(y!=y2)
{
y+=inc_dec;
if(p<0)
p+=two_dx;
else
{
}
}
void Polygon(const int n,const int coordinates[])
{
if(n>=2)
{
Line(coordinates[0],coordinates[1], coordinates[2],coordinates[3]);
for(int count=1;count<(n-1);count++)
Line(coordinates[(count*2)],coordinates[((count*2)+1)], coordinates[((count+1)*2)],
coordinates[(((count+1)*2)+1)]);
}
}
void Line(const int x_1,const int y_1,const int x_2,const int y_2)
{
int color=getcolor( );
int x1=x_1;
int y1=y_1;
int x2=x_2;
int y2=y_2;
if(x_1>x_2)
{
x1=x_2;
y1=y_2;
x2=x_1;
y2=y_1;
}
int dx=abs(x2-x1);
int dy=abs(y2-y1);
int inc_dec=((y2>=y1)?1:-1);
if(dx>dy)
{
int two_dy=(2*dy);
int two_dy_dx=(2*(dy-dx));
int p=((2*dy)-dx);
int x=x1;
int y=y1;
putpixel(x,y,color);
while(x<x2)
{
x++;
if(p<0)
p+=two_dy;
else
{
y+=inc_dec;
p+=two_dy_dx;
}
putpixel(x,y,color);
}
}
else
{
int two_dx=(2*dx);
int two_dx_dy=(2*(dx-dy));
int p=((2*dx)-dy);
int x=x1;
int y=y1;
putpixel(x,y,color);
while(y!=y2)
{
y+=inc_dec;
if(p<0)
p+=two_dx;
else
{
x++;
p+=two_dx_dy;
}
putpixel(x,y,color);
}
}
}
void show_screen( )
{
setfillstyle(1,1);
bar(205,26,430,38);
settextstyle(0,0,1);
setcolor(15);
outtextxy(5,5,"");
outtextxy(5,17,"");
outtextxy(5,29,"");
outtextxy(5,41,"");
outtextxy(5,53,"");
setcolor(11);
outtextxy(210,29,"Translation Transformation");
setcolor(15);
for(int count=0;count<=30;count++)
outtextxy(5,(65+(count*12)),"");
outtextxy(5,438,"");
outtextxy(5,450,"");
outtextxy(5,462,"");
setcolor(12);
outtextxy(229,450,"Press any Key to exit.");
}