The document describes an Advanced Lane Finding project that uses computer vision techniques to identify lane boundaries in video from a front-facing car camera. The steps include: 1) removing distortion, 2) isolating lanes using color and gradient thresholds, 3) warping the image to a bird's-eye view, 4) fitting curves to the lane pixels, and 5) creating the final output image with lane and radius of curvature information overlaid. Improvement areas include automating parameter selection using machine learning rather than manual tuning.
Writeup advanced lane_lines_project
Udacity Self Driving Cars Nanodegree - Advanced Lane Finding Project
Objective:
The objective is to write a Python program to identify the lane boundaries in a video from a front-facing
camera on a car. The camera calibration images, test road images, and project videos are provided.
Steps:
1. Removing Distortion
2. Isolate Lanes
3. Warp
4. Curve Fit
5. Final Image
1. Removing distortion:
Camera lenses distort images. OpenCV provides functions to correct these distortions and to calibrate the camera.
A set of 20 chessboard images was used to calibrate the camera.
First, each chessboard image is passed to the findChessboardCorners function to get the corner points, through the following code:
ret, corners = cv2.findChessboardCorners(gray, (9, 6), None)
Then, the discovered corner points are passed to the calibrateCamera function, through the following code:
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, gray.shape[::-1], None, None)
This function returns the camera matrix and the distortion coefficients, which are then passed to the undistort function to correct the distortion of the images, through the following code:
undist = cv2.undistort(img, mtx, dist, None, mtx)
The following image shows the original image and the undistorted image side by side:
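For reference, here is a minimal end-to-end calibration sketch. The 9x6 board size and the 20 chessboard images come from the steps above; the camera_cal/calibration*.jpg glob pattern is an assumed file layout:

import glob
import cv2
import numpy as np

# One set of 3D object points for a 9x6 inner-corner chessboard (z = 0 plane).
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

objpoints = []  # 3D points in real-world space
imgpoints = []  # 2D corner points in the image plane

for fname in glob.glob('camera_cal/calibration*.jpg'):  # assumed path
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    ret, corners = cv2.findChessboardCorners(gray, (9, 6), None)
    if ret:  # some images may not expose all corners
        objpoints.append(objp)
        imgpoints.append(corners)

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    objpoints, imgpoints, gray.shape[::-1], None, None)
undist = cv2.undistort(img, mtx, dist, None, mtx)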
2. Isolate Lanes:
Then, color and direction gradients are applied to isolate lanes. Color spaces (RGB, HSV, HLS, LAB, etc.)
are explored for the images to see which channels of these color spaces may be used to distinguish
lane markings based on colors. In our example, the S channel of the HLS color space and the V channel of the HSV color space were used.
Then, a directional gradient (Sobel filter) is used to detect lanes based on changes of intensity (edge detection) where lane markings exist. Both color channels and Sobel filters are applied with appropriate thresholds to isolate the lane lines.
OpenCV provides the cvtColor function to convert the image to different color spaces, like the following conversions to the HLS and HSV color spaces:
hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
The following image shows the image after being passed to the Sobel x operator (with the appropriate threshold):
The following image shows the image after being passed to the Sobel y operator (with the appropriate threshold):
The following image shows the image after being passed through the color spaces (S channel of HLS and V channel of HSV):
The following image shows the image after being passed through the combination of Sobel and color space channel thresholds:
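A minimal sketch of this combined thresholding step is shown below; the threshold ranges and the Sobel kernel size are illustrative assumptions, not the values tuned for this project:

import cv2
import numpy as np

def threshold_binary(img, s_thresh=(170, 255), v_thresh=(170, 255), sx_thresh=(20, 100)):
    # Color thresholds: S channel of HLS and V channel of HSV.
    s = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)[:, :, 2]
    v = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)[:, :, 2]
    # Gradient threshold: Sobel x (edges where intensity changes horizontally).
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    sobelx = np.absolute(cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3))
    scaled = np.uint8(255 * sobelx / np.max(sobelx))
    # Keep a pixel if it passes both color thresholds or the gradient threshold.
    binary = np.zeros_like(gray)
    binary[((s >= s_thresh[0]) & (s <= s_thresh[1]) &
            (v >= v_thresh[0]) & (v <= v_thresh[1])) |
           ((scaled >= sx_thresh[0]) & (scaled <= sx_thresh[1]))] = 1
    return binary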
3. Warp:
Source and destination points are passed to the getPerspectiveTransform function, through the following code:
M = cv2.getPerspectiveTransform(src, dst)
This results in a transformation matrix M, which, along with the original image, is passed to the warpPerspective function to get a bird's-eye view (warped perspective) of the image. The following code is used:
warped = cv2.warpPerspective(img, M, img_size)
The following image shows the undistorted image with the source points drawn, alongside the warped image with the destination points drawn:
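A sketch of the warp step follows; the src and dst coordinates are illustrative assumptions for a 1280x720 frame, not the points actually chosen for this project:

import cv2
import numpy as np

img = cv2.imread('test_images/test1.jpg')  # assumed path to an undistorted test image
img_size = (img.shape[1], img.shape[0])    # (width, height)

# Trapezoid around the lane in the camera view (assumed values for 1280x720).
src = np.float32([[580, 460], [700, 460], [1040, 680], [240, 680]])
# Rectangle in the bird's-eye view.
dst = np.float32([[300, 0], [980, 0], [980, 720], [300, 720]])

M = cv2.getPerspectiveTransform(src, dst)
Minv = cv2.getPerspectiveTransform(dst, src)  # inverse, used later to unwarp
warped = cv2.warpPerspective(img, M, img_size)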
4. Curve Fit:
Find the left and right lines, and then apply a curve fit to those lines.
We take a histogram of the bottom half of the image and determine where the lane lines start based on where the histogram peaks are. Each y value is the sum of the binary pixels at that x location, and the two maxima are where the lines start. The following code is used:
histogram = np.sum(img[img.shape[0]//2:, :], axis=0)
midpoint = histogram.shape[0] // 2
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
The following image shows the histogram output:
Then, we use a sliding-windows technique, described as follows: we draw a box around each lane starting point, and any white pixels from the binary image that fall into that box are put into lists of x and y values. Then, we add a box on top of that one and do the same thing. As we move up the lane line, we shift the boxes to the left or right based on the average x position of the pixels found in the previous window. At the end of this, we have lists of the x and y values of the pixels that are in each line, and we then run a 2nd-order numpy polyfit on those pixels to get the curve fit in pixel space. The following code is used:
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)
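A condensed sketch of the sliding-windows loop is shown below. The writeup does not specify the window count, margin, or minimum pixel count for this first search, so those values are assumptions; binary, leftx_base, and rightx_base are carried over from the steps above:

import numpy as np

def sliding_window_pixels(binary, x_base, nwindows=9, margin=100, minpix=50):
    nonzeroy, nonzerox = binary.nonzero()  # coordinates of all white pixels
    window_height = binary.shape[0] // nwindows
    x_current = x_base
    lane_inds = []
    for window in range(nwindows):
        y_low = binary.shape[0] - (window + 1) * window_height
        y_high = binary.shape[0] - window * window_height
        good = ((nonzeroy >= y_low) & (nonzeroy < y_high) &
                (nonzerox >= x_current - margin) &
                (nonzerox < x_current + margin)).nonzero()[0]
        lane_inds.append(good)
        if len(good) > minpix:  # re-center the next window on the pixel mean
            x_current = int(np.mean(nonzerox[good]))
    lane_inds = np.concatenate(lane_inds)
    return nonzerox[lane_inds], nonzeroy[lane_inds]

leftx, lefty = sliding_window_pixels(binary, leftx_base)
rightx, righty = sliding_window_pixels(binary, rightx_base)
left_fit = np.polyfit(lefty, leftx, 2)
right_fit = np.polyfit(righty, rightx, 2)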
We then do the pixel-to-meters conversion based on US highway standards (lanes are 12 feet wide).
After the first video frame, instead of the sliding-windows technique, we take the previous fit, shift it left and right by 100 pixels, add any pixels found in that area (minimum 50) to the lists of x and y values, and then run the polyfit function to get the lane lines.
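A sketch of this follow-up search for the left line (the right line is handled the same way); the fallback to sliding windows when fewer than 50 pixels are found is an assumed behavior:

# Search within +/- 100 pixels of the previous left fit x = Ay^2 + By + C.
nonzeroy, nonzerox = binary.nonzero()
margin, minpix = 100, 50
fit_x = left_fit[0] * nonzeroy**2 + left_fit[1] * nonzeroy + left_fit[2]
lane_inds = (nonzerox > fit_x - margin) & (nonzerox < fit_x + margin)
if lane_inds.sum() >= minpix:
    left_fit = np.polyfit(nonzeroy[lane_inds], nonzerox[lane_inds], 2)
# else: fall back to the sliding-windows search (assumption)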
The following image shows the curve fit with the sliding-windows technique.
The following image shows the curve fit where the program looks for the next lane points within 100 pixels of the previous fit.
5. Final Image:
Then, we create the final image and put text (using the putText function) on the image for the radius of curvature and the distance to the lane center, which are calculated using the following formulae:
radius of curvature = [1 + (dy/dx)^2]^(3/2) / |d^2y/dx^2|
laneCenter = (rightPos - leftPos)/2 + leftPos
distanceToCenter = laneCenter - imageCenter
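As a worked example, for a 2nd-order fit x = Ay^2 + By + C the formula becomes R = [1 + (2Ay + B)^2]^(3/2) / |2A|. The sketch below evaluates it at the bottom of an assumed 1280x720 frame; the meters-per-pixel factors are assumptions derived from a 3.7 m (12 ft) lane spanning roughly 700 warped pixels:

import numpy as np

ym_per_pix = 30 / 720   # meters per pixel in y (assumed)
xm_per_pix = 3.7 / 700  # meters per pixel in x (12 ft = 3.7 m, assumed span)

# Refit in meter space, then evaluate the radius at the image bottom (y = 719).
left_fit_m = np.polyfit(lefty * ym_per_pix, leftx * xm_per_pix, 2)
A, B = left_fit_m[0], left_fit_m[1]
y_eval = 719 * ym_per_pix
radius = (1 + (2 * A * y_eval + B) ** 2) ** 1.5 / abs(2 * A)

# Lane center and offset from the image center, converted to meters.
left_pos = np.polyval(left_fit, 719)
right_pos = np.polyval(right_fit, 719)
lane_center = (right_pos - left_pos) / 2 + left_pos
distance_to_center = (lane_center - 1280 / 2) * xm_per_pix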
Then, we plot the curve fits and color the drivable area green on the image using the fillPoly and polylines functions, as shown below:
cv2.fillPoly(color_warp, np.int_([pts]), (0, 255, 0))
We want to overlay on the original image, not the warped image. So we need to unwarp the drawing (using the warpPerspective function with the inverse transformation matrix, obtained by reversing the source and destination points) and then add it to the original (using the addWeighted function), as below:
result = cv2.addWeighted(img, 1, newwarp, 0.5, 0)
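A sketch of the unwarp-and-overlay step, assuming color_warp holds the fillPoly drawing and src/dst are the perspective points from the Warp step:

# Inverse perspective transform: swap the source and destination points.
Minv = cv2.getPerspectiveTransform(dst, src)
newwarp = cv2.warpPerspective(color_warp, Minv, (img.shape[1], img.shape[0]))
result = cv2.addWeighted(img, 1, newwarp, 0.5, 0)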
The following image shows the final drawn image with color filled between the lane lines.
The following image shows the final drawn image with color filled between the lane lines, and with text showing the radius of curvature and the distance from center.
Improvement Points:
There is a lot of manual effort in the following areas, so the program will not scale well to all videos and images:
1. Selection of the best channels from the color spaces used for lane detection
2. Selection of thresholds (minimum and maximum) and parameters (for example, kernel size) for the Sobel filter and for the color spaces
3. Selection of source and destination points for the perspective transform
A deep learning/machine learning approach would be more appropriate for the above points, so that the program can learn those parameters itself rather than us manually specifying or selecting them.