Distributed Coloring with Õ(√log n) Bits (Subhajit Sahu)
Distributed Coloring with Õ(√log n) Bits
K Kothapalli, M Onus, C Scheideler, C Schindelhauer
Proc. of IEEE International Parallel and Distributed Processing Symposium …
We consider the well-known vertex coloring problem: given a graph G, find a coloring of its vertices so that no two neighbors in G have the same color. It is trivial to see that every graph of maximum degree Δ can be colored with Δ + 1 colors, and distributed algorithms that find a (Δ + 1)-coloring in a logarithmic number of communication rounds, with high probability, have been known for more than a decade. This is in general the best possible if only a constant number of bits can be sent along every edge in each round. In fact, we show that for the n-node cycle the bit complexity of the coloring problem is Ω(log n). More precisely, if only one bit can be sent along each edge in a round, then every distributed coloring algorithm (i.e., every algorithm in which every node has the same initial state and initially only knows its own edges) needs at least Ω(log n) rounds, with high probability, to color the n-node cycle, for any finite number of colors. But what if the edges have orientations, i.e., the endpoints of an edge agree on its orientation (while bits may still flow in both directions)? Edge orientations naturally occur in dynamic networks where new nodes establish connections to old nodes. Does this allow one to provide faster coloring algorithms?
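To make the logarithmic-round baseline concrete, here is a minimal centralized simulation of the folklore randomized trial-coloring scheme the abstract alludes to (an illustrative sketch, not the authors' Õ(√log n)-bit algorithm; the function name and palette size are arbitrary choices):

```python
import random

def color_cycle(n, palette=3):
    """Simulate randomized (Delta+1)-coloring rounds on an n-node cycle.

    Each round, every uncolored node proposes a random color and keeps it
    if the proposal conflicts with neither neighbor's proposal nor their
    final color. Returns the number of rounds until all nodes are colored,
    which is O(log n) with high probability.
    """
    color = [None] * n              # final colors
    rounds = 0
    while any(c is None for c in color):
        rounds += 1
        proposal = [random.randrange(palette) if color[i] is None else None
                    for i in range(n)]
        for i in range(n):
            if proposal[i] is None:
                continue
            ok = True
            for j in ((i - 1) % n, (i + 1) % n):    # cycle neighbors
                if proposal[i] == proposal[j] or proposal[i] == color[j]:
                    ok = False
            if ok:
                color[i] = proposal[i]
    return rounds

print(color_cycle(1000))   # typically on the order of log(n) rounds
```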
Map Coloring and Some of Its Applications (MD SHAH ALAM)
This is a research paper that I conducted in the final year of my undergraduate study, for which I received a 4.00/4.00 grade. It is mainly related to graph theory and has many applications in practical life.
A Run Length Smoothing-Based Algorithm for Non-Manhattan Document Segmentation (University of Bari, Italy)
Layout analysis is a fundamental step in automatic document processing, because its outcome affects all subsequent processing steps. Many different techniques have been proposed to perform this task. In this work, we propose a general bottom-up strategy to tackle the layout analysis of (possibly) non-Manhattan documents, and two specializations of it to handle both bitmap and PS/PDF sources. A famous approach proposed in the literature for layout analysis is the RLSA. Here we consider a variant of RLSA, called RLSO (short for “Run Length Smoothing with OR”), that exploits the OR logical operator instead of the AND and is particularly suited to the identification of frames in non-Manhattan layouts. Like RLSA, RLSO is based on thresholds, but on different criteria than those that work for RLSA. Since setting such thresholds is a hard and unnatural task for (even expert) users, and no single threshold can fit all documents, we developed a technique to automatically define such thresholds for each specific document, based on the distribution of spacing therein. Application to selected sample documents, covering a significant landscape of real cases, revealed that the approach is satisfactory for documents characterized by a uniform text font size.
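To illustrate the smoothing idea behind RLSA/RLSO, here is a minimal sketch, assuming a binary image stored as a list of 0/1 rows with 1 = foreground (the function names and thresholds are illustrative, not the paper's implementation): horizontal and vertical smoothing fill short background runs between foreground pixels, and RLSO combines the two directions with OR where RLSA uses AND.

```python
def smooth_rows(img, t):
    """Fill horizontal runs of 0s shorter than t that lie between 1s."""
    out = [row[:] for row in img]
    for row in out:
        run_start = None
        seen_fg = False
        for x, v in enumerate(row):
            if v == 1:
                if seen_fg and run_start is not None and x - run_start < t:
                    for k in range(run_start, x):
                        row[k] = 1          # close the short gap
                seen_fg, run_start = True, None
            elif run_start is None:
                run_start = x               # a background run begins
    return out

def transpose(img):
    return [list(col) for col in zip(*img)]

def rlso(img, th, tv):
    """RLSO: OR-combine horizontally and vertically smoothed images."""
    h = smooth_rows(img, th)
    v = transpose(smooth_rows(transpose(img), tv))
    return [[a | b for a, b in zip(ra, rb)] for ra, rb in zip(h, v)]
```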
Stack Zooming for Multi-Focus Interaction in Time-Series Data Visualization (Niklas Elmqvist)
In this IEEE PacificVis 2010 presentation, we introduce a method for supporting multi-focus interaction in time-series datasets that we call stack zooming. The approach is based on the user interactively building hierarchies of 1D strips stacked on top of each other, where each subsequent stack represents a higher zoom level, and sibling strips represent branches in the visual exploration. Correlation graphics show the relation between stacks and strips of different levels, providing context and distance awareness among the focus points.
A CHAOTIC CONFUSION-DIFFUSION IMAGE ENCRYPTION BASED ON HENON MAP (IJNSA Journal)
This paper suggests a chaotic confusion-diffusion image encryption based on the Henon map. The proposed chaotic confusion-diffusion image encryption utilizes image confusion and pixel diffusion in two levels. In the first level, the plain image is scrambled by a modified Henon map for n rounds. In the second level, the scrambled image is diffused using the Henon chaotic map. A comparison between the logistic map and the modified Henon map is established to investigate the effectiveness of the suggested chaotic confusion-diffusion image encryption scheme. Experimental results showed that the suggested scheme can successfully encrypt/decrypt images using the same secret keys. Simulation results confirmed that the ciphered images have good entropy information and low correlation between coefficients. Moreover, the distribution of the gray values in the ciphered image has random-like behavior.
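As background, the classic Henon map and a sorting-based scrambling (confusion) step can be sketched as follows. This is an illustrative sketch only: the paper's modified map and its exact confusion/diffusion steps are not given in the abstract, and the function names and key values here are hypothetical.

```python
def henon_sequence(n, x=0.1, y=0.2, a=1.4, b=0.3):
    """Iterate the classic Henon map: x' = 1 - a*x^2 + y, y' = b*x."""
    seq = []
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x
        seq.append(x)
    return seq

def scramble(pixels, key=(0.1, 0.2)):
    """Permute a flat pixel list by sorting indices along a Henon orbit.

    The chaotic sequence induces a key-dependent permutation (confusion);
    regenerating the same orbit from the same key inverts it for decryption.
    """
    chaos = henon_sequence(len(pixels), *key)
    order = sorted(range(len(pixels)), key=lambda i: chaos[i])
    return [pixels[i] for i in order]
```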
Semantic segmentation is the task of classifying every pixel in an image into a class. In a typical color-coded result, all persons are red, the road is purple, the vehicles are blue, street signs are yellow, etc.
Vector sparse representation of color image using quaternion matrix analysis (LeMeniz Infotech)
Introduction to image processing and pattern recognition (Saibee Alam)
This PowerPoint presentation provides a brief introduction to image processing and pattern recognition, surveys related research papers, and includes a conclusion.
Contour Line Tracing Algorithm for Digital Topographic Maps (CSCJournals)
Topographic maps contain information related to roads, contours, landmarks, land covers, rivers, etc. For any remote sensing and GIS-based project, creating a database using digitization techniques is a tedious and time-consuming process, especially for contour tracing. Contour lines are among the most important information these maps provide; they are mainly used for determining the slope of landforms or rivers. Contour lines are also used for generating a Digital Elevation Model (DEM) for 3D surface generation from satellite imagery or aerial photographs. This paper suggests an algorithm that can be used for tracing contour lines automatically from contour maps extracted from topographical sheets and creating a database. In our approach, we propose a modified Moore's Neighbor contour tracing algorithm to trace all contours in the given topographic maps. The proposed approach has been tested on several topographic maps; it provides satisfactory results and takes less time to trace the contour lines than other existing algorithms.
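For reference, textbook Moore's Neighbor tracing looks roughly like the sketch below. The paper's modification is not described in the abstract, so this is the standard version; it stops on the first return to the start pixel, whereas robust implementations use Jacob's stopping criterion.

```python
def trace_contour(img):
    """Moore's Neighbor tracing of one closed contour in a binary image.

    img is a 2D list with img[y][x] == 1 for foreground. Returns the
    ordered list of boundary pixel coordinates.
    """
    h, w = len(img), len(img[0])
    # clockwise 8-neighborhood offsets (dx, dy), y grows downward
    nbrs = [(-1, 0), (-1, -1), (0, -1), (1, -1),
            (1, 0), (1, 1), (0, 1), (-1, 1)]
    # first foreground pixel in raster order is the start pixel
    start = next((x, y) for y in range(h) for x in range(w) if img[y][x])
    contour = [start]
    cur, prev_dir = start, 0        # treat the start as entered from the west
    while True:
        # sweep clockwise, beginning just after the backtrack direction
        for k in range(8):
            d = (prev_dir + 1 + k) % 8
            nx, ny = cur[0] + nbrs[d][0], cur[1] + nbrs[d][1]
            if 0 <= nx < w and 0 <= ny < h and img[ny][nx]:
                prev_dir = (d + 4) % 8      # direction pointing back at cur
                cur = (nx, ny)
                break
        else:
            return contour                  # isolated pixel
        if cur == start:
            return contour                  # simple stopping criterion
        contour.append(cur)
```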
Slides of the presentation of the paper Document Representation Refinement for Precise Region Description by Christian Clausner, Stefan Pletschacher and Apostolos Antonacopoulos. #digidays
Geotagging Social Media Content with a Refined Language Modelling Approach (Symeon Papadopoulos)
Presentation of a geotagging approach for social media content with a refined language modelling approach. Presented at the PAISI workshop, co-located with PAKDD 2015, Ho Chi Minh City, Vietnam.
This presentation is about applications of graph theory. It is an updated version of a talk given at an international conference on applications of graph theory in Kuala Lumpur, Malaysia, in 2007.
Google Earth Web Service as a Support for GIS Mapping in Geospatial Research ... (Universität Salzburg)
The geospatial work was performed using a combination of Google Earth imagery, Landsat TM images, and Erdas Imagine GIS software. The advantage of utilizing Google Earth scenes with Landsat TM satellite imagery, along with GIS techniques and methods, for inventorying land cover types has been demonstrated for landscape studies. Combining land cover type characteristics with landscape changes made it possible to analyse landscape dynamics, as well as the applicability of the Google Earth service for thematic mapping. The data used included Landsat TM and ETM+ multi-band imagery covering an area in Izmir, western Turkey. The image processing was performed using supervised classification in Erdas Imagine software. The Google Earth web service technologies were applied to test the accuracy of mapping via the available Erdas Imagine module «Linking with Google Earth».
Semantic mapping of road scenes, PhD thesis. The main aim of the thesis is to investigate and propose solutions to the scene understanding problem of finding 'what' objects are present in the world and 'where' they are located.
Advanced Hybrid Color Space Normalization for Human Face Extraction and Detec... (ijsrd.com)
This paper presents a new color space normalization (CSN) technique for enhancing the discriminating power of a color space, along with principal component analysis (PCA), for the face recognition process. The common RGB representation is not suitable for characterizing skin color due to the presence of the luminance factor. In the YCbCr color space, the luminance information is contained in the Y component, and the chrominance information is in Cb and Cr; therefore, the luminance information can easily be de-embedded. Different color spaces have different discriminating power: in this paper, the eyes are detected using the YCbCr color space and the mouth regions using the YIQ color space. PCA is then used to express the large 1-D vector of pixels constructed from a 2-D facial image in the compact principal components of the feature space. Each face image may be represented as a weighted sum (feature vector) of the eigenfaces, which are stored in a 1-D array. PCA allows us to compute a linear transformation that maps data from a high-dimensional space to a lower-dimensional space; it covers standard deviation, covariance, eigenvectors, and eigenvalues. Face recognition is obtained by PCA without much loss of information. Experiments using different databases, varying the facial expressions (open/closed eyes, smiling/not smiling), show that the proposed method, combining color space discrimination and PCA, can improve face recognition to a great extent.
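For reference, the luminance/chrominance separation relied on here is commonly computed with the standard ITU-R BT.601 RGB-to-YCbCr transform; a sketch (the paper's exact constants and normalization are not given in the abstract):

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range ITU-R BT.601 RGB -> YCbCr conversion.

    Y carries luminance; Cb and Cr carry chrominance, so skin-color
    modelling can ignore Y and work on the (Cb, Cr) plane alone.
    """
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr
```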
Image Segmentation from RGBD Images by 3D Point Cloud Attributes and High-Lev... (CSCJournals)
In this paper, an approach is developed for segmenting an image into major surfaces and potential objects using RGBD images and 3D point cloud data retrieved from a Kinect sensor. In the proposed segmentation algorithm, depth and RGB data are mapped together. Color, texture, XYZ world coordinates, and normal-, surface-, and graph-based segmentation index features are then generated for each pixel point. These attributes are used to cluster similar points together and segment the image. The inclusion of new depth-related features provided improved segmentation performance over RGB-only algorithms by resolving illumination and occlusion problems that cannot be handled using graph-based segmentation algorithms, as well as by accurately identifying pixels associated with the main structural components of rooms (walls, ceilings, floors). Since each segment is a potential object or structure, the output of this algorithm is intended to be used for object recognition. The algorithm has been tested on commercial building images, and results show the usability of the algorithm in real-time applications.
EmbNum: Semantic Labeling for Numerical Values with Deep Metric Learning (Phuc Nguyen)
Semantic labeling for numerical values is the task of assigning semantic labels to unknown numerical attributes. The semantic labels could be numerical properties in ontologies, instances in knowledge bases, or labeled data manually annotated by domain experts. In this paper, we treat semantic labeling as a retrieval setting where an unknown attribute is assigned the label of the most relevant attribute in the labeled data. One of the greatest challenges is that an unknown attribute rarely has the same set of values as the similar one in the labeled data. To overcome this issue, statistical interpretation of the value distribution is taken into account. However, existing studies assume a specific form of distribution, which is particularly inappropriate for open data, where there is no knowledge of the data in advance. To address these problems, we propose a neural numerical embedding model (EmbNum) to learn useful representation vectors for numerical attributes without prior assumptions on the distribution of data. The "semantic similarities" between attributes are then measured on these representation vectors by the Euclidean distance. Our empirical experiments on City Data and Open Data show that EmbNum significantly outperforms state-of-the-art methods for the task of numerical attribute semantic labeling in both effectiveness and efficiency.
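The retrieval setting described above reduces to nearest-neighbor search over embedding vectors. A minimal sketch, assuming the embedding model exists and has already produced vectors offline (it stands in for EmbNum, which is not reproduced here):

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def label_attribute(unknown_vec, labeled):
    """Assign the label of the nearest labeled attribute in embedding space.

    labeled: list of (label, embedding_vector) pairs produced offline by
    the embedding model (assumed; standing in for EmbNum).
    """
    return min(labeled, key=lambda lv: euclidean(unknown_vec, lv[1]))[0]
```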
EFFECTIVE SEARCH OF COLOR-SPATIAL IMAGE USING SEMANTIC INDEXING (IJCSEA Journal)
Much of the data stored in digital libraries contains pictures or video, which are tough to search or browse. Automatic methods for searching picture collections have made heavy use of color histograms, because they are very robust to wide changes in viewpoint and can be computed trivially. However, color histograms are unable to capture spatial information, and therefore tend to give poorer results. By combining color information with spatial layout, we have developed several methods that retain the advantages of histograms. One method computes the probability of observing a given color as a function of the distance between two pixels, which we call a color correlogram. We propose a color-based image descriptor that can be used for image indexing based on high-level semantic concepts. The descriptor is based on Kobayashi’s Color Image Scale, a system that includes 130 basic colors combined into 1180 three-color combinations. The associated words are arranged in a two-dimensional semantic space in groups based on perceived similarity. The proposed approach to the statistical analysis of pictures involves transformations of ordinary RGB histograms, from which a semantic image descriptor is derived, containing semantic data about both color combinations and single colors in the image.
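Concretely, the color autocorrelogram records, for each color c and distance k, the probability that a pixel at distance k from a c-colored pixel also has color c. A naive sketch over a quantized image (for brevity it samples only the four axis-aligned neighbors at each distance; the full definition uses the entire L-infinity ring):

```python
from collections import defaultdict

def autocorrelogram(img, distances=(1, 3, 5, 7)):
    """Naive color autocorrelogram of a 2D list of quantized color indices.

    Returns {(color, k): P(neighbor at distance k has the same color)}.
    """
    h, w = len(img), len(img[0])
    hits = defaultdict(int)
    total = defaultdict(int)
    for y in range(h):
        for x in range(w):
            c = img[y][x]
            for k in distances:
                # sample the four axis-aligned neighbors at distance k
                for dx, dy in ((k, 0), (-k, 0), (0, k), (0, -k)):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < w and 0 <= ny < h:
                        total[(c, k)] += 1
                        hits[(c, k)] += (img[ny][nx] == c)
    return {key: hits[key] / total[key] for key in total}
```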
Tracing and Sketching Performance using Blunt-tipped Styli on Direct-Touch ... (Niklas Elmqvist)
Presentation at ACM AVI 2014 on our evaluation of tracing and sketching using blunt-tipped styli on direct-touch tablets. Presented by Sriram Karthik Badam.
Munin: A Peer-to-Peer Middleware for Ubiquitous Analytics and Visualization S... (Niklas Elmqvist)
Presentation from IEEE VIS 2014 on Munin, our Java toolkit for peer-to-peer visualization systems for ubiquitous analytics. Published in IEEE TVCG and presented by Sriram Karthik Badam.
VASA: Visual Analytics for Simulation-based Action (Niklas Elmqvist)
Slides from our IEEE VAST 2014 talk at IEEE VIS on VASA, a visual analytics system for interactive computational steering of pipelines of asynchronous simulation models.
Slides from T.J. Jankun-Kelly's IEEE VisWeek 2012 presentation on visualization for games. Electronic games are starting to incorporate in-game telemetry that collects data about player, team, and community performance on a massive scale, and as data begins to accumulate, so does the demand for effectively analyzing this data. We use examples from both old and new games of different genres to explore the theory and design space of visualization for games. Drawing on these examples, we define a design space for this novel research topic and use it to formulate design patterns for how to best apply visualization technology to games. We then discuss the implications that this new framework will potentially have on the design and development of game and visualization technology in the future.
Presentation from ACM AVI 2012 in Capri, Italy on gravity navigation. Gravity navigation (GravNav) is a family of multi-scale navigation techniques that use a gravity-inspired model for assisting navigation in large visual 2D spaces based on the interest and salience of visual objects in the space. GravNav is an instance of topology-aware navigation, which makes use of the structure of the visual space to aid navigation. We have performed a controlled study comparing GravNav to standard zoom and pan navigation, with and without variable-rate zoom control. Our results show a significant improvement for GravNav over standard navigation, particularly when coupled with variable-rate zoom. We also report findings on user behavior in multi-scale navigation.
PolyZoom: Multiscale and Multifocus Exploration in 2D Visual Spaces (Niklas Elmqvist)
Slides from ACM CHI 2012 presentation given by Sohaib Ghani.
Abstract: The most common techniques for navigating in multiscale visual spaces are pan, zoom, and bird’s eye views. However, these techniques are often tedious and cumbersome to use, especially when objects of interest are located far apart. We present the PolyZoom technique where users progressively build hierarchies of focus regions, stacked on each other such that each subsequent level shows a higher magnification. Correlation graphics show the relation between parent and child viewports in the hierarchy. To validate the new technique, we compare it to standard navigation techniques in two user studies, one on multiscale visual search and the other on multifocus interaction. Results show that PolyZoom performs better than current standard techniques.
Animated transitions are popular in many visual applications, but they can be difficult to follow, especially when many objects move at the same time. One informal design guideline for creating effective animated transitions has long been the use of slow-in/slow-out pacing, but no empirical data exist to support this practice. We remedy this by studying object tracking performance under different conditions of temporal distortion, i.e., constant-speed transitions, slow-in/slow-out, fast-in/fast-out, and an adaptive technique that slows down the visually complex parts of the animation. Slow-in/slow-out outperformed other techniques, but we saw technique differences depending on the type of visual transition.
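Slow-in/slow-out pacing is typically implemented by remapping normalized animation time through an ease-in-out curve before interpolating positions. A common sketch using the smoothstep polynomial (one possible easing function, not necessarily the one used in the study):

```python
def slow_in_slow_out(t):
    """Smoothstep easing: zero velocity at t=0 and t=1, fastest at t=0.5."""
    return t * t * (3 - 2 * t)

def animate(p0, p1, t):
    """Interpolate between 2D points p0 and p1 with eased time t in [0, 1]."""
    e = slow_in_slow_out(t)
    return (p0[0] + (p1[0] - p0[0]) * e,
            p0[1] + (p1[1] - p0[1]) * e)
```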
Hugin: A Framework for Awareness and Coordination in Mixed-Presence Collabora... (Niklas Elmqvist)
Analysts are increasingly encountering datasets that are larger and more complex than ever before. Effectively exploring such datasets requires collaboration between multiple analysts, who more often than not are distributed in time or in space. Mixed-presence groupware provide a shared workspace medium that supports this combination of co-located and distributed collaboration. However, collaborative visualization systems for such distributed settings have their own cost and are still uncommon in the visualization community. We present Hugin, a novel layer-based graphical framework for this kind of mixed-presence synchronous collaborative visualization over digital tabletop displays. The design of the framework focuses on issues like awareness and access control, while using information visualization for the collaborative data exploration on network-connected tabletops. To validate the usefulness of the framework, we also present examples of how the Hugin toolkit can be used to implement new visualizations with access to these collaborative mechanisms.
Line graphs have been the visualization of choice for temporal data ever since the days of William Playfair (1759-1823), but realistic temporal analysis tasks often include multiple simultaneous time series. In this work, we explore user performance for comparison, slope, and discrimination tasks for different line graph techniques involving multiple time series. Our results show that techniques that create separate charts for each time series--such as small multiples and horizon graphs--are generally more efficient for comparisons across time series with a large visual span. On the other hand, shared-space techniques--like standard line graphs--are typically more efficient for comparisons over smaller visual spans where the impact of overlap and clutter is reduced.
Evaluating Motion Constraints for 3D Wayfinding in Immersive and Desktop Virt... (Niklas Elmqvist)
Motion constraints providing guidance for 3D navigation have recently been suggested as a way of offloading some of the cognitive effort of traversing complex 3D environments on a computer. We present findings from an evaluation of the benefits of this practice where users achieved significantly better results in memory recall and performance when given access to such a guidance method. The study was conducted on both standard desktop computers with mouse and keyboard, as well as on an immersive CAVE system. Interestingly, our results also show that the improvements were more dramatic for desktop users than for CAVE users, even outperforming the latter. Furthermore, the study indicates that allowing the users to retain local control over the navigation on the desktop platform helps them in familiarizing themselves with the 3D world.
Mélange: Space Folding for Multi-Focus Interaction (Niklas Elmqvist)
Interaction and navigation in large geometric spaces typically require a sequence of pan and zoom actions. This strategy is often ineffective and cumbersome, especially when trying to study several distant objects. We propose a new distortion technique that folds the intervening space to guarantee visibility of multiple focus regions. The folds themselves show contextual information and support unfolding and paging interactions. Compared to previous work, our method provides more context and distance awareness. We conducted a study comparing the space-folding technique to existing approaches, and found that participants performed significantly better with the new technique.
Static Spatial Graph Features
1. Improving Revisitation in Graphs through Static Spatial Features. Presented by Pourang Irani (University of Manitoba), Sohaib Ghani (Purdue University, West Lafayette, IN, USA), and Niklas Elmqvist (Purdue University, West Lafayette, IN, USA). Graphics Interface 2011, May 25-27, 2011, St. John's, Newfoundland, Canada.
4. Memorability & Revisitation. Memorability: the memorability of a visual space is a measure of a user's ability to remember information about the space. Revisitation: the task of remembering where objects in the visual space are located and how they can be reached.
5. Motivation. Graphs are prevalent in many information tasks: social network analysis (Facebook, LinkedIn, MySpace), road networks and migration patterns, network topology design. Graphs are often visualized as node-link diagrams, but node-link diagrams have few spatial features: low memorability, and they are difficult to remember for revisitation. Research questions: How to improve graph memorability? How to improve graph revisitation performance?
6. Example: Social Network Analysis. We interviewed two social scientists who use graphs for social network analysis (SNA). They often experience trouble orienting themselves in a social network when returning to a previously studied network. At least 50% of all navigation in SNA occurs in previously visited parts of a graph.
7. Idea: Spatial Features in NL Diagrams? People remember locations in visual spaces using spatial features and landmarks. Geographical maps have many spatial features and are easy to remember. We evaluate whether adding static spatial features to node-link diagrams, inspired by geographic maps, helps in graph revisitation.
8. Design Space: Static Spatial Graph Features. Three different techniques of adding static spatial features to graphs: Substrate Encoding (SE), Node Encoding (NE), Virtual Landmarks (LM). But which technique is optimal?
9. Substrate Encoding. Idea: add visual features to the substrate (canvas). Partitioning of the space into regions: space-driven (split into regions of equal size) or detail-driven (split into regions with equal numbers of items). Encoding identity into each region: color or textures. (Figure 1, Figure 2)
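A space-driven grid substrate with color identity can be sketched in a few lines (an illustrative sketch, not the study's Java implementation; the region colors and cell counts are arbitrary choices):

```python
# nine visually distinct region colors (RGB), one per grid cell
REGION_COLORS = [
    (230, 60, 60), (60, 180, 75), (60, 100, 230),
    (240, 200, 60), (170, 60, 200), (70, 210, 210),
    (240, 140, 60), (140, 90, 40), (120, 120, 120),
]

def region_of(x, y, width, height, rows=3, cols=3):
    """Map a canvas point to its grid cell index (space-driven partition)."""
    col = min(int(x / width * cols), cols - 1)
    row = min(int(y / height * rows), rows - 1)
    return row * cols + col

def substrate_color(x, y, width, height):
    """Background color behind a point: the identity of its region."""
    return REGION_COLORS[region_of(x, y, width, height)]
```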
10. Node Encoding. Idea: encode spatial position into the nodes (and potentially the edges) of a graph. Available graphical variables: node size, node shape, node color.
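A sketch of the combined size+color variant (the one that performed best in Study 2), mapping x-position to hue and y-position to node size; the specific ramps and ranges here are illustrative, not the study's exact values:

```python
import colorsys

def encode_node(x, y, width, height):
    """Size+color node encoding: hue follows x, node size follows y.

    Returns (radius_px, (r, g, b)) for drawing the node.
    """
    hue = x / width                   # position on x-axis -> hue
    radius = 4 + 12 * (y / height)    # position on y-axis -> node size
    r, g, b = colorsys.hsv_to_rgb(hue, 0.8, 0.9)
    return radius, (int(r * 255), int(g * 255), int(b * 255))
```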
11. Virtual Landmarks. Idea: add visual landmarks as static reference points that can be used for orientation. Landmarks are discrete objects, evenly distributed in visual space.
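Evenly distributing a small set of landmarks reduces to placing them at the centers of a regular lattice over the visual space. A sketch for the 9 landmarks used in the studies (the lattice placement is an illustrative assumption; only the count and even distribution come from the slides):

```python
def landmark_positions(width, height, rows=3, cols=3):
    """Place rows*cols landmarks at the centers of a regular lattice."""
    return [((c + 0.5) * width / cols, (r + 0.5) * height / rows)
            for r in range(rows) for c in range(cols)]

# e.g. landmark_positions(1200, 800) -> 9 evenly spaced (x, y) points
```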
12. User Studies. Experimental platform: node-link graph viewer in Java, with overview and detail windows. Participants: 16 paid participants per study. Task: graph revisitation. Phase I: learning. Phase II: revisitation.
13. Phase I: Learning. N blinking nodes are shown in sequence; participants visit each one and learn its position.
17. Study 1: Substrate Encoding. Study design: partitioning (grid and Voronoi diagram), identity encoding (color and texture), layout (uniform and clustered). Hypotheses: the Voronoi diagram will be faster and more accurate than the grid for spatial partitioning; texture will be more accurate than color for identity encoding.
20. Study 2: Node Encoding. Study design: three node encoding techniques (size, color, and size+color). Hypothesis: size and color combined will be the best node encoding technique in terms of both time and accuracy.
23. Study 3: Combinations. Best techniques from Study 1 (grid with color) and Study 2 (size+color), as well as virtual landmarks. Study design: eight different techniques: SE, NE, LM, SE+NE, SE+LM, NE+LM, SE+NE+LM, and a simple graph (SG). Hypotheses: techniques utilizing substrate encoding will be faster and more accurate than node encoding and landmarks; the combination of all three spatial graph feature techniques will be fastest and most accurate.
26. Study 3: Results (cont'd). Techniques with substrate encoding were significantly faster and no less accurate. SE+NE+LM was not significantly faster or more accurate than all other techniques. Virtual landmarks are a promising strategy, performing second only to substrate encoding.
27. Summary. Substrate encoding (SE) is the dominant strategy: space-driven partitioning with solid color encoding. Virtual landmarks (LM) help significantly. Node encoding (NE) is not as good as the other two. The combination of virtual landmarks (LM) and substrate encoding (SE) is optimal.
28. Conclusion. We explored the design space of adding static spatial features to graphs and performed three user studies. Study 1: a grid with color is the optimal substrate encoding. Study 2: node size and color is the optimal node encoding. Study 3: substrate encoding, landmarks, and their combination are the optimal techniques.
The basic idea of this work is to improve navigation in node-link representations of graphs by adding spatial features to the graph, similar to the geographical features in a map. In the next few slides, I will describe in more detail how we achieve this, and I will also present results from our user studies where we evaluated the efficiency of this idea.
Memorability is thus closely linked to revisitation.
Our work is motivated by collaborations with social scientists who use visualization tools for social network analysis (SNA). We performed structured interviews with two of our collaborators, both faculty members at our university and SNA experts. We observed that these scientists would often experience some trouble orienting themselves when returning to a previously studied social network. Moreover, ad hoc observations of social scientists performing SNA showed that more than 50% of all navigation in a node-link diagram was between previously visited parts of a graph.
Based on our survey of the literature, we study three different classes of static spatial graph features: substrate encoding, node encoding, and virtual landmarks.
Substrate encoding mimics geographical maps by adding graphical features to the visual representation of the graph. In a map, these features are typically spatial regions, such as roads, city limits, state lines, etc. The regions themselves are generally identifiable through unique colors or textures. The features can then be used as reference points. We identify two degrees of freedom for substrate encoding: the partitioning of the space into regions, and the encoding of identity into each region to allow the user to separate them. The advantage of a detail-driven approach is that if nodes are clustered in a small area of the whole graph, then we will allocate more partitions in that area; with uniform partitioning, a majority of the nodes may end up in the same partition. Figure 1 shows detail-driven partitioning with color encoding and Figure 2 shows space-driven partitioning with texture encoding.
This approach has the advantage of not introducing a high degree of visual clutter. However, some of these graphical variables may already be utilized to convey underlying information about the data in many existing graph visualizations. The figure shows an example of node encoding where node color is varied on the x-axis and node size on the y-axis.
The basic idea with virtual landmarks mimics the role of landmarks in the real world: they serve as static reference points that can be used for orientation (e.g., the Eiffel tower in Paris). Landmarks typically give rise to less visual clutter than substrate and node encoding techniques, without affecting the visual representation of the graph itself. We used 9 virtual landmarks of different shapes, as shown in the figure.
We include an overview of the visual space so that the visual space can be larger than the screen, preventing participants from remembering nodes by absolute positions on the screen rather than by spatial features. Furthermore, the overview was scaled down by a factor of about 10, making it difficult for participants to remember nodes using just the overview.
The figure shows the first node blinking in red. The user learns its position and clicks on it; then the second node starts blinking.
The figure shows the second node blinking in red. In this way, N blinking nodes are shown to the user. In the learning phase we used N = 4 for the first two studies and N = 5 for the third study; we increased N in the third study to get a separation between the techniques.
After the learning phase, participants were asked to revisit the learned nodes in the same order as before.
In this way, the user revisits the N nodes learned in the learning phase.
A regular grid is the simplest partitioning technique for equal-sized regions. We use a 3×3 grid to divide the space into 9 regions (derived from a pilot study). Partitioning the space into regions with equal numbers of items requires us to group the graph nodes into 9 disjoint clusters. We then use a Voronoi diagram, summing up the cells for the nodes in each cluster, to find the regions covered by these nodes. This yields an irregular partitioning focused on areas of high detail. We used two separate layouts: one yields a uniform node distribution with uniform edge lengths, and the second clusters similar nodes based on the graph topology.
Color and texture are used to encode each region. A solid color was chosen as the most straightforward way to differentiate between regions. A texture yields more internal detail in a region, potentially increasing its memorability; however, texture increases visual clutter as well.
Space-driven partitioning using a grid yields significantly faster and more accurate performance than detail-driven partitioning using a Voronoi diagram. Encoding regions using a solid color yields significantly faster performance, with no significant difference in errors, than encoding using a texture. There is no significant effect of graph layout on completion time or accuracy.
In the second study, the spatial position of nodes was encoded in their size and color.
We use three approaches for node encoding. In the first approach, the size of the nodes is varied such that width varies with the x-axis and height with the y-axis. In the second approach, the color of the nodes is varied such that hue varies with the x-axis and brightness with the y-axis. In the third approach, both size and color are varied, on the y- and x-axis respectively.
The combination of size and color for encoding position is both significantly faster and more accurate than each of these techniques separately. There was no significant difference between size and color alone.
The figure shows significant pairwise differences in completion time; arrows indicate which technique was faster than another. The results suggest that substrate encoding and landmarks are the best approaches for graph revisitation. Both of these techniques, especially in combination, performed significantly better than the competing techniques. Node encoding seems not to make much difference either way, which is perhaps why the combination of all three approaches is good, but not significantly better than the others.