The document presents distribution fields, a unified representation for low-level vision problems such as tracking, optical flow, matching, stereo, backgrounding, registration, and image stitching. Distribution fields represent an image as a probability distribution over its pixel values, addressing issues with descriptors like HOG and SIFT and techniques such as geometric blur, mixture of Gaussian backgrounding, and bilateral filtering. The representation has properties including a large basin of attraction, a useful probability model from a single image, inherent multi-scale representation, and state-of-the-art performance from simple algorithms. Distribution fields are closely related to other approaches in computer vision.
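The core idea — exploding an image into per-bin layers and smoothing each layer so every pixel carries a probability distribution over intensity values — can be sketched briefly. This is a minimal illustration, not the paper's implementation; the bin count and the box blur (standing in for a Gaussian) are assumptions:

```python
import numpy as np

def box_blur(layer):
    """3x3 box blur with edge padding (a stand-in for the Gaussian
    smoothing used in the actual distribution-field construction)."""
    p = np.pad(layer, 1, mode="edge")
    out = np.zeros_like(layer)
    for dy in range(3):
        for dx in range(3):
            out += p[dy:dy + layer.shape[0], dx:dx + layer.shape[1]]
    return out / 9.0

def distribution_field(image, n_bins=8):
    """Explode a grayscale image (values in [0, 1]) into n_bins indicator
    layers, then smooth each layer spatially so every pixel holds a
    probability distribution over intensity bins."""
    bins = np.minimum((image * n_bins).astype(int), n_bins - 1)
    field = np.stack([(bins == k).astype(float) for k in range(n_bins)])
    field = np.stack([box_blur(layer) for layer in field])
    return field / field.sum(axis=0, keepdims=True)  # renormalize per pixel
```

The spatial smoothing is what gives the representation its large basin of attraction: nearby pixels share probability mass, so alignment objectives vary smoothly.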
Interactive Rendering Techniques for Highlighting (3D GeoInfo 2010), by Matthias Trapp

The document discusses interactive rendering techniques for highlighting objects and areas in 3D geovirtual environments. It presents challenges in highlighting complex 3D data interactively and at scale. Various highlighting techniques are described, including color overlays, vignetting, outlines, and semantic depth-of-field. Examples demonstrate these techniques. Limitations are also discussed, such as occlusion, distance cues, and camera orientation effects. The paper concludes that "smart" highlighting techniques are needed to address these limitations and enable effective highlighting in 3D geovirtual environments.
This document discusses algorithms for real-time 3D graphics rendering. It describes the goal of simulating a realistic 3D world in real-time with high frame rates. The main challenge is determining what graphics are visible. The document outlines common 3D primitives and three algorithms to solve visibility - the painter's algorithm, binary space partitioning (BSP), and portal rendering. It proposes combining these algorithms by using portals for static rooms/sectors, BSP trees for complex static objects, and the painter's algorithm for dynamic objects, to achieve an efficient overall rendering approach.
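The painter's algorithm mentioned above is simple to sketch: sort polygons by depth and draw them back-to-front so nearer polygons overwrite farther ones. This is a minimal illustration (sorting by mean vertex depth, which a real renderer must supplement with overlap-ambiguity handling, e.g. via a BSP tree):

```python
def painters_order(polygons):
    """Return polygons sorted back-to-front by average vertex depth.
    Each polygon is a list of (x, y, z) vertices; drawing them in this
    order lets nearer polygons paint over farther ones."""
    def mean_depth(poly):
        return sum(z for (_, _, z) in poly) / len(poly)
    return sorted(polygons, key=mean_depth, reverse=True)  # farthest first
```

This depth sort is what makes the algorithm a good fit for the dynamic objects in the combined approach: no precomputed spatial structure is needed, just a per-frame sort.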
This document provides an introduction and overview of basic 3D graphics concepts including:
- Representing 3D objects as points, lines, and polygons in 3D coordinate spaces
- Transforming objects using translation, scaling, and rotation matrices
- Projecting 3D scenes onto 2D screens using perspective and parallel projections
- Techniques for hidden surface removal like the painter's algorithm and z-buffering to determine which polygons are visible.
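The transformation step above is conventionally done with homogeneous matrices, so that translation, scaling, and rotation all compose by matrix multiplication. A minimal 2D sketch (the 3D case adds one row and column):

```python
import numpy as np

def translate(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def scale(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

def rotate(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], dtype=float)

def apply(m, x, y):
    """Apply a 3x3 homogeneous transform to a 2D point."""
    px, py, w = m @ np.array([x, y, 1.0])
    return px / w, py / w
```

Because transforms compose as matrices, `translate(2, 3) @ scale(2, 2)` scales first and then translates — the rightmost matrix acts first.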
This document discusses coordinate systems and mapping between world coordinates and screen coordinates in OpenGL. It explains that:
1) Objects are defined using world coordinates, while screens use pixel coordinates, so OpenGL maps between these spaces.
2) The world window defines the region of the world coordinates that will be drawn, and the viewport defines the screen region it will be drawn to.
3) OpenGL uses a linear transformation to map world to screen coordinates, defined by scaling and translation constants A, B, C, and D that are calculated based on the world window and viewport sizes.
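The constants A, B, C, D described above follow directly from requiring that the world window's corners land on the viewport's corners. A small sketch of that calculation (window and viewport given as (left, right, bottom, top) tuples):

```python
def window_to_viewport(world, viewport):
    """Compute A, B, C, D for the linear map sx = A*x + C, sy = B*y + D
    that sends the world window onto the viewport."""
    wl, wr, wb, wt = world
    vl, vr, vb, vt = viewport
    A = (vr - vl) / (wr - wl)   # horizontal scale
    B = (vt - vb) / (wt - wb)   # vertical scale
    C = vl - A * wl             # horizontal translation
    D = vb - B * wb             # vertical translation
    return A, B, C, D
```

If the two rectangles have different aspect ratios, A and B differ and the image is stretched — the distortion issue raised later in this list.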
The document provides an introduction to 3D modeling materials and textures. It defines what materials and textures are, explaining that materials control how objects appear and reflect light, while textures provide surface detail. It then describes the different types of material maps - including diffuse, roughness, ambient occlusion, displacement, normal, bump, metalness, reflection, and opacity maps - and explains what each map controls and how it impacts the material. It also provides some online resources for finding texture images to apply to materials.
Hidden Surface Elimination Using the Z-Buffer Algorithm, by rajivagarwal23dei
The document discusses hidden surface removal techniques used in 3D computer graphics. It introduces the hidden surface problem that arises when non-transparent objects obscure other objects from view. It describes object space and image space methods for identifying and removing hidden surfaces. The z-buffer algorithm is discussed as a commonly used image space method that works by comparing depth values in a z-buffer to determine which surfaces are visible at each pixel location.
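The z-buffer comparison described above reduces to a per-pixel depth test. A minimal sketch, assuming fragments arrive as (x, y, depth, color) tuples with integer color IDs:

```python
import numpy as np

def zbuffer_render(fragments, width, height):
    """Minimal z-buffer: a fragment wins a pixel only if its depth is
    smaller (nearer) than the depth already stored for that pixel."""
    depth = np.full((height, width), np.inf)
    color = np.zeros((height, width), dtype=int)
    for x, y, z, c in fragments:
        if z < depth[y, x]:        # nearer than the current occupant?
            depth[y, x] = z
            color[y, x] = c
    return color
```

Note that the result is independent of the order fragments are submitted in, which is exactly why the z-buffer needs no depth sorting, unlike the painter's algorithm.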
This document discusses techniques for achieving visual realism in geometric modeling. It covers topics like hidden line removal, hidden surface determination, shading models, transparency, reflection, and camera models. The goal of visual realism is to generate images that capture effects of light interacting with physical objects similarly to how we see the real world. This involves modeling objects and lighting conditions, determining visible surfaces, assigning color to pixels, and creating animated sequences. Realistic images find applications in simulation, design, entertainment, research, and control.
Geometry shaders turn 3D models into polygons and color their faces, while fragment shaders apply per-pixel effects such as tinting or distortion across the screen. Together they take models and textures and draw the scene; full-screen fragment effects can produce lens-like or Photoshop-style distortions, potentially causing queasiness.
This document discusses computer graphics and the role of linear algebra in creating 2D images from 3D mathematical models. It explains that computer graphics uses transformations between different coordinate systems, such as object coordinates, world coordinates, camera coordinates, and screen coordinates, to render 3D scenes. These coordinate transformations involve affine transformations like rotation, translation, and scaling, which can be represented by homogeneous matrices. The document also describes how perspective projection is achieved through multiplication of points by a perspective matrix to give the illusion of depth.
This document discusses advanced computer graphics and realistic image generation techniques. It covers topics like modeling objects, lighting, rendering, visible surface determination, shading, textures, shadows, transparency, camera models, and anti-aliasing. Realism involves modeling objects and lighting conditions, determining visible surfaces, calculating pixel colors based on light reflection, and supporting animation. Rendering techniques like line drawings, shading, and shadows add information to convey depth. Anti-aliasing reduces jagged edges by using techniques like supersampling and weighted area sampling.
The document discusses different 3D display methods including parallel projection, perspective projection, depth cueing, visible line identification, visible surface identification, surface rendering, and material properties and shadows. Parallel projection preserves proportions but does not produce realistic views, while perspective projection produces realistic views but does not preserve proportions. Depth cueing and visible surface identification determine which surfaces are visible from a given viewing position. Surface rendering applies lighting effects to the visible surfaces. Material properties and shadows can further enhance realism.
This document discusses coordinate systems and viewport mapping in OpenGL. It describes the screen coordinate system, world coordinate system, world window, viewport, and how objects are mapped from the world window to the viewport. It provides the equations to calculate the corresponding screen coordinates (sx, sy) given an object's world coordinates (x, y), and discusses techniques for setting up the world window and viewport to avoid distortions when drawing.
The document describes an experiment to write a program for window to viewport transformation in Turbo C. It involves taking input coordinates for the window and viewport, and vertices of a triangle. The window-to-viewport transformation is done using scaling factors calculated from the window and viewport dimensions. The transformed triangle is then drawn within the viewport.
This document discusses techniques for achieving visual realism in 3D computer graphics. It covers hidden line removal, hidden surface removal, and hidden solid removal algorithms. Hidden line removal algorithms like the priority and area-oriented algorithms aim to correctly display occluded lines. Hidden surface algorithms like the z-buffer, Warnock's area coherence, and painter's priority approaches determine which surfaces are visible. Ray tracing is introduced as a hidden solid removal technique that traces the path of light in a scene to realistically render 3D objects.
The document discusses the window-to-viewport transformation process which maps a 2D scene to device coordinates. It involves developing formulas to proportionally map points from the world window to the viewport. The scale and translation factors that relate the world window and viewport coordinates are defined. Changing the viewport position, using multiple viewports, or adjusting the viewport dimensions allows manipulating the displayed scene. Distortion may occur if the aspect ratios of the world window and viewport differ.
This document discusses key principles of map design including selection of colors, symbols, labeling, and overall layout. It emphasizes that while there are scientific rules of map design, there is also an artistic element. The document outlines topics to be covered such as map scale and generalization, symbolization, choropleth mapping, use of color, and labeling. It provides guidelines for map elements like titles, legends, and orientation indicators. It also discusses classification schemes, issues with choropleth maps, effective use of color, and best practices for labeling and typography. Ethical practices of map design to avoid deception are highlighted.
This 3D illustration depicts a joint strike mission in 2009 using advanced 3D modeling and rendering software. F-35 aircraft models were created in 3DS Max and placed into a scene assembled in Vue 7.5 xtreme for rendering with mental ray. Final touch-ups were done in Photoshop.
Shading and Two Types of Shading: Flat Shading and Gouraud Shading, with Coding ..., by Adil Mehmoood
The document discusses different shading techniques in computer graphics such as flat shading, Gouraud shading, and Phong shading. It explains that flat shading colors an entire polygon with one color, Gouraud shading uses interpolation to vary the shading across a polygon, and Phong shading more accurately simulates light reflection. Examples of OpenGL code are provided to demonstrate implementing different shading models.
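The difference between flat and Gouraud shading comes down to whether intensity is constant over a polygon or interpolated from its vertices. A minimal sketch using barycentric coordinates (the OpenGL code in the slides would do this per fragment in hardware):

```python
def flat_shade(intensities):
    """Flat shading: one intensity for the whole polygon (here taken as
    the average of its vertex intensities)."""
    return sum(intensities) / len(intensities)

def gouraud_shade(intensities, bary):
    """Gouraud shading: interpolate per-vertex intensities across a
    triangle using barycentric coordinates (a, b, c) with a + b + c = 1."""
    i0, i1, i2 = intensities
    a, b, c = bary
    return a * i0 + b * i1 + c * i2
```

Phong shading goes one step further: it interpolates the normal vector instead of the intensity and re-evaluates the lighting model per pixel, which is why it captures specular highlights that Gouraud shading smears out.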
This document discusses MATLAB and its capabilities for 3D plotting and visualization. MATLAB can create various 3D plot types like line plots, mesh plots, surface plots, and contour plots that are useful for visualizing 3D data and functions. It also describes how to generate 3D surface and contour plots by defining a function over a grid of x and y values and plotting the corresponding z values. Examples are provided to illustrate generating 3D surface and contour plots in MATLAB.
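The surface-plot workflow described — define a function over a grid of x and y values and evaluate z at each point — has a direct NumPy analogue (shown here in Python rather than MATLAB so the example stays in one language; plotting the result with e.g. matplotlib's `plot_surface` would take X, Y, Z directly):

```python
import numpy as np

def surface_grid(f, xs, ys):
    """Evaluate z = f(x, y) over the Cartesian grid of xs and ys,
    mirroring MATLAB's meshgrid + surf preparation step."""
    X, Y = np.meshgrid(xs, ys)
    return X, Y, f(X, Y)
```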
The document discusses window to viewport transformation. It defines a window as a world coordinate area selected for display and a viewport as a rectangular region of the screen selected for displaying objects. Window to viewport mapping requires transforming coordinates from the window to the viewport. This involves translation, scaling and another translation. Steps include translating the window to the origin, resizing it based on the viewport size, and translating it to the viewport position. An example transforms a sample window to a viewport through these three steps.
This document discusses layers and masks in Photoshop. It explains that layers allow separation of image content and independent manipulation. The background layer is locked but can be converted. Layers can be assigned colors, locked, edited, and blended using various modes and masks. Pixel and vector masks can hide or reveal layer content. Clipping groups use one layer to mask the effects of another layer below it.
The document discusses several methods for solid modeling representation: voxels, quadtrees and octrees, binary space partitions (BSP), and constructive solid geometry (CSG). Voxels represent space as a 3D grid but require large storage. Quadtrees and octrees refine voxel resolution hierarchically. BSPs recursively partition space with planes into convex regions. CSG represents solids as a hierarchy of boolean operations on primitives. Each method has advantages and disadvantages for storage, acquisition, display, and boolean operations.
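The CSG idea — solids as a hierarchy of boolean operations on primitives — can be sketched with point-membership predicates, which is the simplest correct semantics for CSG even though production modelers use boundary or distance representations:

```python
def sphere(cx, cy, cz, r):
    """Primitive: membership test for a solid sphere."""
    return lambda x, y, z: (x - cx)**2 + (y - cy)**2 + (z - cz)**2 <= r * r

def union(a, b):
    return lambda x, y, z: a(x, y, z) or b(x, y, z)

def intersection(a, b):
    return lambda x, y, z: a(x, y, z) and b(x, y, z)

def difference(a, b):
    return lambda x, y, z: a(x, y, z) and not b(x, y, z)
```

Because the tree stores only operations and primitives, a CSG model is compact to store and exact to query, but displaying it requires evaluation (e.g. voxelization or ray casting) — the storage/display trade-off the summary refers to.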
This document discusses different types of projections used in computer graphics, including perspective and parallel projections. It describes orthographic projection, which projects points along the z-axis onto the z=0 plane. Perspective projection is also covered, including how it creates the effect of objects appearing smaller with distance using similar triangles. The document provides the equation for a perspective projection matrix and an example. It concludes by discussing defining a viewing region or frustum using functions like glFrustum and gluPerspective in OpenGL.
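The similar-triangles argument for perspective projection yields a one-line formula: a camera-space point projects onto the viewing plane z = d by dividing its x and y by its depth. A minimal sketch (the full projection matrix used by glFrustum adds near/far clipping and normalization, which this omits):

```python
def perspective_project(x, y, z, d):
    """Project a camera-space point onto the plane z = d using similar
    triangles: x' / d = x / z, so distant objects shrink as 1 / z."""
    return x * d / z, y * d / z
```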
Study: Diffusion Curves: A Vector Representation for Smooth-Shaded Images, by Chiamin Hsu
This document introduces diffusion curves, a new vector-based image representation. Diffusion curves represent smooth shaded images using curves that diffuse color on both sides. This allows for more complex gradients than previous methods. The document outlines how diffusion curves are created either manually, assisted via color sampling, or automatically from bitmaps. It also describes how diffusion curves are rendered by rasterizing color sources along curves, computing a gradient field, diffusing color, and reblurring. This new representation offers benefits over gradient meshes while being compact and enabling artistic control.
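The diffusion step amounts to solving Laplace's equation with the curve colors as boundary conditions. A toy sketch using Jacobi iteration on a grayscale grid (the paper solves this far faster on the GPU, and this sketch ignores the blur-map reblurring step):

```python
import numpy as np

def diffuse(colors, fixed, iters=2000):
    """Jacobi relaxation: pixels under a curve (fixed mask) keep their
    color; every other pixel repeatedly moves toward the average of its
    four neighbours, converging to a smooth harmonic interpolation."""
    img = colors.copy()
    for _ in range(iters):
        avg = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 4.0
        img = np.where(fixed, colors, avg)
    return img
```

With a black curve on the left edge and a white curve on the right, the interior converges to a linear ramp — the smooth gradient between curve colors that gives the representation its expressive power.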
The document summarizes a lecture on blending, compositing, and anti-aliasing in computer graphics. It discusses how colors are combined during rendering using blending operations, and how compositing operates on entire images rather than individual pixels. Porter-Duff models for digital image compositing are explained, along with how they relate to OpenGL blending functions.
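The most common Porter-Duff operator, "over", is easy to state with premultiplied-alpha colors. A minimal sketch; with premultiplied colors this is the operation OpenGL expresses as `glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)`:

```python
def over(src, dst):
    """Porter-Duff 'over' for premultiplied-alpha RGBA tuples:
    result = src + dst * (1 - src_alpha). The source covers the
    destination in proportion to its own alpha."""
    r, g, b, a = src
    dr, dg, db, da = dst
    k = 1.0 - a
    return (r + dr * k, g + dg * k, b + db * k, a + da * k)
```

Premultiplication is what makes the operator associative, so whole images can be composited in stages — the sense in which compositing operates on images rather than individual fragments.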
Beginning Direct3D Game Programming 01: The History of Direct3D Graphics (2016-04-07), by JinTaek Seo
Direct3D has evolved over many versions to support more advanced graphics capabilities. Early versions supported basic 3D rendering while later versions like DirectX 8 introduced pixel and vertex shaders, point sprites, and 3D textures. DirectX 9 improved shaders and added multiple render targets. DirectX 10 unified the shader pipeline and DirectX 11 added tessellation and support for GPGPU programming. Each version expanded the set of graphics techniques supported in hardware-accelerated 3D graphics.
Even though exploring data visually is an integral part of the data analytic pipeline, we struggle to visually explore data once the number of dimensions goes beyond three. This talk focuses on showcasing techniques to visually explore multi-dimensional data (p > 3). The aim is to show examples of each of the following techniques, potentially using one exemplar dataset. This talk was given at the Strata + Hadoop World Conference in Singapore (2015) and at the Fifth Elephant conference in Bangalore (2015).
A Practical and Robust Bump-mapping Technique for Today's GPUs (slides), by Mark Kilgard
I presented this on May 8, 2000 to the Stanford Shading Group in Palo Alto, California. The presentation explains how to use the then state-of-the-art NVIDIA register combiners of the GeForce 256 to implement per-pixel bump mapping, a technique that is now ubiquitous in most 3D computer games.
This document summarizes a generalization approach for single-center 3D projections that can handle non-planar projections, 2D lens effects, and image warping. It presents a technique using dynamic cube maps and projection tile screens to define arbitrary projection functions in a hardware-accelerated way. Applications shown include non-planar surfaces, custom normal maps for distortions, and compound projections. Future work is noted to improve quality and provide a graphical user interface.
Interactive Stereoscopic Rendering for Non-Planar Projections (GRAPP 2009), by Matthias Trapp
This document discusses approaches for interactive stereo rendering of 3D environments with non-planar projections. It compares a geometry-based approach (GBA) and an image-based approach (IBA) in terms of implementation complexity, memory footprint, rendering performance, and image quality. The GBA outperforms the IBA in rendering and image quality but is more complex and has a larger memory footprint for high field-of-view projections. The IBA is easier to implement but has artifacts from cubemap sampling. Future work is proposed to improve the IBA's performance and image quality.
This document describes SURF (Speeded Up Robust Features), a feature detection and description algorithm. SURF approximates or outperforms previous schemes in terms of repeatability, distinctiveness, and robustness, while being faster to compute. It relies on integral images for image convolutions and combines a Hessian matrix-based detector with a distribution-based descriptor. Experimental results show SURF outperforms SIFT, GLOH, and other descriptors in object recognition tasks under various image conditions, while being significantly faster to compute.
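The integral image (summed-area table) that SURF relies on lets any axis-aligned box filter be evaluated in four array lookups, independent of the box size. A minimal sketch:

```python
import numpy as np

def integral_image(img):
    """Summed-area table with a zero border: ii[y, x] = img[:y, :x].sum()."""
    return np.pad(img.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))

def box_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] in four lookups, regardless of box size —
    the property SURF exploits to approximate Hessian box filters at any
    scale in constant time."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
```

Constant-time box sums are what make SURF's scale space cheap: instead of repeatedly smoothing and downsampling the image, the filter is simply enlarged.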
This document discusses different types of geometric modeling methods including wireframe, surface, and solid modeling. Wireframe modeling uses points and lines to define objects but does not represent actual surfaces or volumes. Surface modeling defines the outer surfaces of an object. Solid modeling precisely defines the enclosed volume of an object using its faces, edges, and vertices. Constructive solid geometry and boundary representation are two common solid modeling techniques. CSG uses Boolean operations to combine primitive shapes, while boundary representation stores topological information about faces, edges, and vertices. Feature-based modeling allows shapes to be created through operations like extruding, revolving, sweeping, and filling.
Introduction To Massive Model Visualization – pjcozzi
This document discusses techniques for visualizing massive 3D models. It covers culling methods like view frustum and occlusion culling to remove invisible geometry. Level of detail techniques generate lower detail versions of models to improve performance. Hierarchical LOD representations allow efficient refinement. Out-of-core techniques bring portions of models into memory as needed to handle models too large to fit entirely in memory. Compression, prefetching, and cache-coherent layouts further optimize rendering massive models. The goal is to keep processors busy and maintain performance as model complexity increases beyond memory limits.
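As a concrete illustration of the view-frustum culling mentioned above, a common formulation tests each object's bounding sphere against the frustum planes. This is a generic sketch, not the document's code; it assumes planes stored as (normal, offset) pairs with the inside half-space non-negative.

```python
# Cull an object if its bounding sphere lies entirely behind any frustum plane.
def visible(center, radius, planes):
    for normal, d in planes:
        # Signed distance of the sphere center from the plane.
        dist = sum(n * c for n, c in zip(normal, center)) + d
        if dist < -radius:  # fully outside this plane -> cull
            return False
    return True
```

Occlusion culling and hierarchical LOD then prune what survives this cheap first test.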
Don't Call It a Comeback: Attribute Grammars for Big Data Visualization – Leo Meyerovich
This talk gives an overview of our web platform for big data visualization (Superconductor). It focuses on one of the core components: our domain-specific language for writing custom layouts. We took a new look at the old idea of attribute grammars: I'll show live examples of writing attribute grammars, automatically finding parallelism in them, and then automatically compiling them down into efficient parallel JavaScript code that runs on GPUs.
See http://www.sc-lang.com for more. The video (which includes live demos) will be up in a couple of weeks. Thanks to Functional Monthly for hosting!
Interaction-Based Feature Extraction: How to Convert Your Users’ Activity int... – Databricks
Today almost every website and app collect data about the interactions (clicks, likes, views…) between users and items. The most common use case for these sparse “user-item” matrices is to train and improve different recommendation systems. In my presentation I will introduce how we can use exactly the same matrices together with additional datasets to generate valuable features that can be used to train different regression and classification models.
I will start with describing how it was implemented at SimilarWeb, in order to accurately estimate different website metrics like demographics (age and gender) and category and continue with explaining how we can expand the algorithm to solve similar problems in different domains.
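A minimal sketch of the idea, with hypothetical names and data: known item-side attributes (e.g. a site's gender share) are propagated to users as interaction-weighted averages, yielding features that can feed the regression or classification models mentioned above.

```python
from collections import defaultdict

# Hypothetical sketch: propagate item attributes to users through a sparse
# user-item interaction matrix, weighted by interaction counts.
def user_features(interactions, item_attrs):
    """interactions: list of (user, item, count); item_attrs: item -> {attr: value}."""
    totals = defaultdict(float)
    feats = defaultdict(lambda: defaultdict(float))
    for user, item, count in interactions:
        for attr, value in item_attrs.get(item, {}).items():
            feats[user][attr] += count * value
        totals[user] += count
    # Normalise to a weighted average over each user's interactions.
    return {u: {a: v / totals[u] for a, v in d.items()} for u, d in feats.items()}
```

The same aggregation generalises to categories, demographics, or any other labelled item metadata.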
This document discusses techniques for 3D image visualization. It begins with an introduction and covers topics like rendering techniques, MATLAB visualization, volume rendering, isocontouring, hole detection, and applications of stereoscopic visualization. The document outlines various methods for 3D output like projection and OpenGL libraries. It discusses advantages like hardware support for 3D graphics and disadvantages such as objects being drawn as 2D. The conclusion states that while techniques exist, more research is still needed for innovative 3D visualization of diverse data types.
Spatial Clustering to Uncluttering Map Visualization in SOLAP – Beniamino Murgante
Ricardo Silva, João Moura-Pires - New University of Lisbon
Maribel Yasmina Santos - University of Minho
This document discusses feature descriptors and matching in computer vision. It covers three main components: 1) detecting interest points in images, 2) extracting feature descriptors around each interest point, and 3) determining correspondences between descriptors to match features across images. The document focuses on SIFT (Scale Invariant Feature Transform) descriptors, which are histograms of gradient orientations computed over localized patches that provide robust matching across changes in scale, rotation and illumination. SIFT descriptors have been widely and successfully used for applications like image stitching, 3D reconstruction, object recognition and augmented reality.
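The core SIFT building block, a histogram of gradient orientations weighted by gradient magnitude, can be sketched briefly. This is a simplified fragment: real SIFT adds a 4x4 spatial grid of such histograms, Gaussian weighting, trilinear binning, and descriptor normalisation.

```python
import math

# Histogram of gradient orientations over an intensity patch,
# magnitude-weighted -- the core ingredient of a SIFT descriptor.
def orientation_histogram(patch, bins=8):
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dx = patch[y][x + 1] - patch[y][x - 1]  # central differences
            dy = patch[y + 1][x] - patch[y - 1][x]
            mag = math.hypot(dx, dy)
            angle = math.atan2(dy, dx) % (2 * math.pi)
            hist[int(angle / (2 * math.pi) * bins) % bins] += mag
    return hist
```

Matching then reduces to comparing such histograms (concatenated and normalised) with a distance measure.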
Large-Scale Graph Computation on Just a PC: Aapo Kyrola Ph.D. thesis defense – Aapo Kyrölä
This document summarizes Aapo Kyrölä's thesis defense on large-scale graph computation on a single PC. The thesis proposes the Parallel Sliding Windows algorithm and Partitioned Adjacency Lists data structure to enable analytical graph computations on very large graphs in external memory using just a personal computer. Experiments were conducted on a Mac Mini using real-world graphs, processing billions of edges in a reasonable time comparable to distributed systems. The key challenges of disk-based graph computation involving random access were addressed.
Similar to Cuberilles Statistical Volume Visualisation for Medical and Geological Data (20)
Towards Distributed, Semi-Automatic Content-Based Visual Information Retrieva... – Christian Kehl
Talk on big media archive visual indexing using Convolutional Neural Networks on different High-Performance Computing platforms, developing new parametrization schemes. Talk and poster were presented at the International Supercomputing Conference 2015 (Frankfurt a. Main, Germany).
Distributed Rendering and Collaborative User Navigation- and Scene Manipulati... – Christian Kehl
This document discusses distributed rendering and collaborative user navigation and scene manipulation in virtual environments. It presents an approach to extend an existing virtual reality framework to allow remote, distributed rendering on various display devices as well as remote, collaborative navigation and editing of massive 3D datasets. Technical results show the framework can successfully synchronize distributed rendering and enable collaborative navigation and modification of 3D scenes in real-time. This allows multiple remote users to interactively discuss and communicate changes to virtual environments, with potential applications in flood protection planning and other domains. Future work aims to improve the techniques for touch and mobile devices and simplify data for remote clients.
Conformal multi-material mesh generation from labelled medical volumes (Dec 2... – Christian Kehl
This document discusses generating volume meshes from labelled medical volumes for finite element analysis (FEA). It introduces a new approach using the integer medial axis (IMA) transform to generate meshes faster and with fewer elements than previous methods while maintaining precision at boundaries. The IMA approach generates meshes up to 100x faster than other methods for test cases. Local surface triangulation in tangent planes is also proposed to mesh sparse samples accurately without oversampling. Future work will explore using natural neighbors for neighborhood determination in local triangulation.
Interactive Simulation and Visualization of Large-Scale Flooding Scenarios (J... – Christian Kehl
This document discusses interactive simulation and visualization of large-scale flooding scenarios. It covers topics such as real-time 3D visualization of massive LiDAR point clouds, interactive adaptive simulation of flooding scenarios, and multi-scenario comparative simulation visualization. The goal is to develop techniques to simulate and visualize different flooding scenarios and uncertainties to help decision makers and the general public understand the risks and impacts of potential flooding events.
Efficient Navigation in Temporal, Multi-Dimensional Point Sets (April 2013) – Christian Kehl
1. The document discusses algorithms and techniques for efficient navigation of temporal, multi-dimensional point set data.
2. A goal is to develop visualization algorithms that support user navigation through time-series data, including real-time rendering, user-centered browsing, and navigation via visual summaries.
3. The research will focus on scalable rendering and visualization of time-dependent point sets, efficient browsing of time-dependent datasets, and navigation using visual summaries to guide users through important events.
Smooth, Interactive Rendering and On-line Modification of Large-Scale, Geospa... – Christian Kehl
This document discusses techniques for interactively rendering and modifying large-scale geospatial LiDAR point set data. It proposes a rendering-on-budget approach that combines importance-based streaming with a PID controller to balance load. This allows for smooth rendering while modifying streamed data online without quality loss. Proof of concepts demonstrate modifying attributes like color via polygons or textures, and displacing vertices using displacement maps. Performance is improved over traditional level-of-detail approaches.
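The PID idea behind rendering-on-budget can be sketched generically. This is an illustrative controller, not the paper's code: the controller nudges a point budget so that the measured frame time tracks a target.

```python
# Generic PID controller; in a rendering-on-budget loop its output adjusts
# how many streamed points are drawn per frame.
class PID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measured, dt):
        error = self.setpoint - measured
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

A hypothetical render loop would then do something like `budget = max(0, budget + pid.update(frame_ms, dt))`, shrinking the budget when frames run long and growing it when there is headroom.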
WP 4 – Interactive simulation and 3D visualization for water policy developme... – Christian Kehl
This document describes methods for interactive 3D simulation and visualization of water policies. It discusses using large high-resolution topographic data integrated with flood simulations to understand flood protection policies. Methods include level-of-detail rendering to efficiently display large datasets, geospatial data integration using KML and triangulated meshes, and continuous temporal interpolation to animate realistic wave simulations in real-time. Use cases apply these methods to study historic floods in locations like Wieringermeer and visualize the 1953 North Sea flood for policy discussions. The results allow interactive exploration of urban flood studies with smooth level-of-detail transitions and animated water levels.
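The continuous temporal interpolation used to animate water levels between stored simulation time steps can be sketched as piecewise-linear blending. This is an assumed, minimal version of the technique, not the document's implementation.

```python
import bisect

# Interpolate a water level at arbitrary time t between simulation keyframes.
def interpolate_level(times, levels, t):
    if t <= times[0]:
        return levels[0]
    if t >= times[-1]:
        return levels[-1]
    i = bisect.bisect_right(times, t)          # first keyframe after t
    frac = (t - times[i - 1]) / (times[i] - times[i - 1])
    return levels[i - 1] + frac * (levels[i] - levels[i - 1])
```

Evaluating this per vertex each frame gives smooth wave animation even when the simulation only stores coarse time steps.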
Master Thesis: Conformal multi-material mesh generation from labelled medical... – Christian Kehl
An important step in orthopaedic pre-operative planning is the generation of accurate volume meshes from segmented volume images. These meshes are used in patient-specific, bio-mechanical finite element simulations to optimize the positioning and design of implants. The development of accurate, multi-material volume meshing methods for medical applications is an active and interdisciplinary field of research. Several methods proposed in recent years claim to perform the task accurately, each with its advantages and disadvantages. The approaches to the task are diverse. The question is: Which approach is the most suitable one? How do we evaluate the excellence of such methods? What criteria can be applied to measure the quality of a multi-labelled volume mesh? And which ones have the most impact on the subsequent simulation, so that stress calculations on the implant are realistic and correct?
These are the basic research questions that are discussed in this work.
This document provides an overview of different types of LiDAR acquisition methods. Aerial LiDAR is used to capture large areas and generates 2.5D data by scanning from aircraft. Terrestrial LiDAR captures smaller areas in full 3D using static or mobile ground-based units. Bathymetric LiDAR maps shallow underwater areas using dual lasers. Atmospheric LiDAR surveys air properties by transmitting laser pulses and analyzing backscatter. Common to all is using a laser transmitter and detector to measure discrete points or full waveforms, with variations depending on the objective and environment.
Depth image recognition using isomorphic graph theory – Christian Kehl
This document discusses using graph theory and depth images to recognize objects. It proposes constructing a mesh of the world from depth camera data, transforming depth images into normal maps, and representing each region or plane as a different color "billboard". Objects in images can be matched by growing regions into nodes connected by edges between neighboring regions. Graphs of images could then be compared to find the largest isomorphic subgraphs and determine if the images match.
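Transforming a depth image into a normal map, the step before grouping pixels of similar orientation into planar regions, is commonly done with central differences. This is a hedged sketch under that assumption, not the author's code.

```python
import math

# Estimate a unit normal per pixel from a depth image via central differences.
def depth_to_normals(depth):
    h, w = len(depth), len(depth[0])
    normals = [[(0.0, 0.0, 1.0)] * w for _ in range(h)]  # border default
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            dzdx = (depth[y][x + 1] - depth[y][x - 1]) / 2.0
            dzdy = (depth[y + 1][x] - depth[y - 1][x]) / 2.0
            n = (-dzdx, -dzdy, 1.0)  # surface z = depth(x, y)
            length = math.sqrt(n[0] ** 2 + n[1] ** 2 + n[2] ** 2)
            normals[y][x] = tuple(c / length for c in n)
    return normals
```

Pixels whose normals agree within a tolerance can then be grown into the colour-coded "billboard" regions the summary describes.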
Graph theory - Traveling Salesman and Chinese Postman – Christian Kehl
Traveling Salesman and Chinese Postman problems
1. Problem Description and Complexity
2. Theoretical Approach
3. Practical Approaches and Possible Solutions
4. Examples
This document discusses parallel computing on GPUs using OpenCL. It provides an overview of basics of parallel computing, a brief history of SIMD and MIMD architectures, and details of OpenCL. It then describes a case study of using OpenCL and OpenMP to perform a Monte Carlo study of a spring-mass system. The study models the system, uses the Euler method for numerical integration, develops SIMD approaches for GPUs, implements OpenMP, analyzes results and speedup, and provides conclusions on parallelization.
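The numerical core of that case study can be sketched as explicit Euler integration of a damped spring-mass system, repeated over randomly drawn parameters (the Monte Carlo part). Parameter ranges and names here are illustrative, not taken from the document.

```python
import random

# Explicit Euler integration of a damped spring-mass system: m*a = -k*x - c*v.
def simulate(m, k, c, x0, v0, dt, steps):
    x, v = x0, v0
    for _ in range(steps):
        a = (-k * x - c * v) / m
        x, v = x + v * dt, v + a * dt  # Euler update of position and velocity
    return x

# Monte Carlo study: repeat the simulation over randomly drawn stiffness values.
def monte_carlo(trials, seed=0):
    rng = random.Random(seed)
    return [simulate(1.0, rng.uniform(5.0, 15.0), 0.5, 1.0, 0.0, 0.01, 100)
            for _ in range(trials)]
```

Each trial is independent, which is exactly why the study maps well onto SIMD GPUs with OpenCL and onto OpenMP threads.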
Point clouds are sets of unordered points without connections. They can be generated from 3D scans and used for medical or industrial applications. Point clouds lack properties like textures and normals, so lighting cannot be directly applied. They must be converted into meshes or polygon networks for solid modeling. This can be done through algorithms like triangulation, two-peasant graphs, or marching cubes. Constructive solid geometry uses boolean operations on basic geometric primitives to combine them into complex 3D models. It is commonly used in CAD software for engineering design.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz), I decided not to walk through the details of the many methodologies in order of use. Instead, I chose to employ a long-standing, and ongoing, scientific development as an exemplar. And so, I chose the ever-evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science/engineering/technology, spanning micro-tech, to aerospace and cosmology. I can think of no better a story to illustrate the breadth of scientific methodologies and applications at their best.
The debris of the ‘last major merger’ is dynamically young – Sérgio Sacani
The Milky Way’s (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the ‘last major merger.’ Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space, because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based on a simple phase-mixing model, the observed number of caustics is consistent with a merger that occurred 1–2 Gyr ago. We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data 1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the ‘last major merger’ did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within the last few Gyr, consistent with the body of work surrounding the VRM.
The use of Nauplii and metanauplii artemia in aquaculture (brine shrimp).pptx – MAGOTI ERNEST
Although Artemia has been known to man for centuries, its use as a food for the culture of larval organisms apparently began only in the 1930s, when several investigators found that it made an excellent food for newly hatched fish larvae (Litvinenko et al., 2023). As aquaculture developed in the 1960s and ‘70s, the use of Artemia also became more widespread, due both to its convenience and to its nutritional value for larval organisms (Arenas-Pardo et al., 2024). The fact that Artemia dormant cysts can be stored for long periods in cans, and then used as an off-the-shelf food requiring only 24 h of incubation, makes them the most convenient, least labor-intensive live food available for aquaculture (Sorgeloos & Roubach, 2021). The nutritional value of Artemia, especially for marine organisms, is not constant, but varies both geographically and temporally. During the last decade, however, both the causes of Artemia nutritional variability and methods to improve poor-quality Artemia have been identified (Loufi et al., 2024).
Brine shrimp (Artemia spp.) are used in marine aquaculture worldwide. Annually, more than 2,000 metric tons of dry cysts are used for cultivation of fish, crustacean, and shellfish larva. Brine shrimp are important to aquaculture because newly hatched brine shrimp nauplii (larvae) provide a food source for many fish fry (Mozanzadeh et al., 2021). Culture and harvesting of brine shrimp eggs represents another aspect of the aquaculture industry. Nauplii and metanauplii of Artemia, commonly known as brine shrimp, play a crucial role in aquaculture due to their nutritional value and suitability as live feed for many aquatic species, particularly in larval stages (Sorgeloos & Roubach, 2021).
ESR spectroscopy in liquid food and beverages.pptx – PRIYANKA PATEL
With an increasing population, people need to rely on packaged foodstuffs. Packaging of food materials requires the preservation of food. There are various methods for the treatment of food to preserve it, and irradiation treatment of food is one of them. It is the most common and the most harmless method for food preservation, as it does not alter the necessary micronutrients of food materials. Although irradiated food does not harm human health, quality assessment of food is still required to provide consumers with the necessary information about it. ESR spectroscopy is the most sophisticated way to investigate the quality of food and the free radicals induced during its processing. The ESR spin-trapping technique is useful for the detection of highly unstable radicals in food. The assessment of the antioxidant capability of liquid food and beverages is mainly performed by the spin-trapping technique.
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr... – Travis Hills MN
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub... – Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersive learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
The binding of cosmological structures by massless topological defects – Sérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is mitigated, at least in part.
Thematic appreciation test is a psychological assessment tool used to measure an individual's appreciation and understanding of specific themes or topics. This test helps to evaluate an individual's ability to connect different ideas and concepts within a given theme, as well as their overall comprehension and interpretation skills. The results of the test can provide valuable insights into an individual's cognitive abilities, creativity, and critical thinking skills.
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste... – Sérgio Sacani
Context. With a mass exceeding several 10⁴ M⊙ and a rich and dense population of massive stars, supermassive young star clusters represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars. The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically, the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec. Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a photon flux threshold of approximately 2 × 10⁻⁸ photons cm⁻² s⁻¹. The X-ray sources exhibit a highly concentrated spatial distribution, with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
Phenomics assisted breeding in crop improvement – IshaGoswami9
As the population is increasing and will reach about 9 billion by 2050, and due to climate change, it is difficult to meet the food requirement of such a large population. Facing the challenges presented by resource shortages, climate change, and an increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding the complex characteristics of multiple genes, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data that can be linked to genomics information for crop improvement at all growth stages have become as important as genotyping. Thus, high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology, and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
3. Design Focus
3D representation using Cuberilles
why? data sharing between GPU (3D) and CPU
Glyphing Cuberilles with Cubes or Spheres
Preset Editor for Colour-Opacity changes
Lensing for Zooming; separated 3D-2D view
Focus: interaction & vis. mapping; sacrifice render quality (no render styles or shadows; simple shading)
Design Demo Specifics Reflection3
4. Implementation Choices
Dev Environment: Linux; hence Cross-Platform
Java OpenGL (JOGL); no SceneGraph avail.
Shaders in GLSL
UI: SWT (native UI on each system)
distribution via web (Java webstart) or binary
Graphing library: SWT chart
tryouts with modern OpenCL list sorts failed ...
9. Alpha Composition
Problem: render-order-dependent composition
depth peeling => small number of alpha layers
tried pre-computed render orders (1 per bounding box corner): didn’t really work ...
Sorting: just points, not cube vertices
Simple view-dependent sorting is not interactive
Parallel sorting improved speed reasonably
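The view-dependent sorting described above amounts to ordering points back-to-front along the view direction before alpha blending. A minimal serial sketch (the slides parallelise this step to regain interactivity):

```python
# Order points back-to-front by their distance along the view direction,
# so alpha blending composites far points first.
def sort_back_to_front(points, view_dir):
    depth = lambda p: sum(pi * di for pi, di in zip(p, view_dir))
    return sorted(points, key=depth, reverse=True)  # farthest first
```

Sorting only point centers (not all eight cube vertices) is the simplification the slides mention.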
10. Alpha Composition
Figure captions: common alpha composition with x-y-z render order; pre-computed, closest-corner render order; render order computed each frame
11. Normal/Gradient Visualisation
Normal: divergent per dimension & between positive/negative slope; defined 0-point
Colour mappings compared (figure annotations):
- full colour spectrum -> colour-blind problem
- divergent, too bright; xy mapping +, z mapping -; contrast-less; 0-value confuser
- good contrast, good highlights; confuser: N[0 0 1] = N[0 0 0]
- divergent between x-y-z [magenta-blue-purple]; divergent to slope direction with saturation [high-mid-low] = [-1 0 1]; 0-value confuser possible
V-1 = I-1
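The gradient/normal being colour-mapped on this slide is typically estimated per voxel with central differences; a minimal sketch under that assumption (not the presentation's code):

```python
# Estimate the gradient of a scalar volume at voxel (x, y, z) with
# central differences; normalising it gives the shading normal.
def gradient(volume, x, y, z):
    gx = (volume[z][y][x + 1] - volume[z][y][x - 1]) / 2.0
    gy = (volume[z][y + 1][x] - volume[z][y - 1][x]) / 2.0
    gz = (volume[z + 1][y][x] - volume[z - 1][y][x]) / 2.0
    return (gx, gy, gz)
```

The sign ambiguities and 0-value "confusers" listed above arise exactly when mapping this signed 3-vector onto a colour space.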
12. Geological Datasets
Geological facies datasets similar to CT (depict structure)
statistical exploration can help to spot rock relationships
Figure captions: porosity model depicts structure poorly; facies model depicts structure very well
13. Lessons learned ...
Cuberilles: possible, but not advisable (modern GPUs do volume raycasting better)
Statistics: helpful for exploring new datasets
Lighting: highlights structure in volumes unexpectedly well
GPU sorting: did improve; still not comparable to CPU
Gradient/Normal Mapping: harder than it seems ...
Volume Visualisation: it’s fun
“Lonely Rider” not advisable – a good team is better than the sum of its individuals ...
And: thank you for the time to update my OpenGL knowledge
14. River Discharge
[Figure labels: SAND, SILT]
Searching for a summer or semester job? Doing Volume Visualisation in Geology? Then THIS may be for you!
Delft3D Delta Modelling:
• WebVis using WebGL / osgjs
• detailed, time-dependent, multi-variate VolumeVis
• teamwork with experienced 3D engineer as guide
• cool project, good team ... & getting in touch with petroleum
Contact: Simon J. Buckley
simon.buckley@uni.no
Editor's Notes
PDF: Probability Distribution Function
One-man group, so feature sacrifices and priorities need to be made!!!