7. Gradient descent
Stochastic Gradient Descent
- Gradient descent
  - Compute the gradient of L(Ξ) with respect to Ξ, denoted g(Ξ), then update Ξ using g(Ξ) as
      Ξ_{t+1} := Ξ_t − α_t g(Ξ_t)
    where α_t > 0 is a learning rate
- Stochastic gradient descent
  - Since exact computation of the gradient is expensive, we instead use an approximate gradient computed on a sampled subset of the data (a mini-batch):
      ĝ(Ξ_t) = (1/|B|) Σ_{i∈B} ∇l(Ξ_t, x_i, y_i)
[Figure: contour plot of L(Ξ) in the (Ξ_1, Ξ_2) plane, with a single update step −αg]
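As a concrete sketch of these update rules, here is a minimal NumPy implementation of mini-batch SGD on a synthetic linear-regression problem (the data, learning rate, and batch size are illustrative choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic linear-regression data: y = x . theta_true + noise
n, d = 1000, 5
theta_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ theta_true + 0.01 * rng.normal(size=n)

def grad(theta, xb, yb):
    """Mini-batch gradient: (1/|B|) sum_{i in B} grad l(theta, x_i, y_i) for the squared loss."""
    return 2 * xb.T @ (xb @ theta - yb) / len(yb)

theta = np.zeros(d)
alpha = 0.05          # learning rate alpha_t (kept constant here)
batch_size = 32
for t in range(2000):
    B = rng.choice(n, size=batch_size, replace=False)   # sample mini-batch B
    theta = theta - alpha * grad(theta, X[B], y[B])     # theta_{t+1} := theta_t - alpha_t g(theta_t)

print(np.max(np.abs(theta - theta_true)))
```

Despite only ever seeing noisy 32-sample gradient estimates, the iterate ends up close to the true parameters.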
17. Deep and wide NNs also create no bad local minima [Nguyen+ 2017]
- If the following conditions hold:
  - (1) the activation function σ is analytic on R and strictly monotonically increasing
  - (2) σ is bounded
  - (3) the loss function l(a) is twice differentiable, and l′(a) = 0 if a is a global minimum
  - (4) the training samples are linearly independent,
  then every critical point for which the weight matrices have full column rank is a global minimum
  - These conditions are satisfied if we use sigmoid, tanh, or softplus for σ and the squared loss for l
  - → Solved in the case of non-linear NNs, under some conditions
20. Random labeling experiment [Zhang+ 17]
- Model capacity should be restricted to achieve generalization
  - c.f. Rademacher complexity, VC-dimension, uniform stability
- Conduct an experiment on a copy of the data where the true labels were replaced by random labels
  → NN models easily fit even random labels
- Compare the result with that obtained using regularization techniques
  → No significant difference
- Therefore the NN model has enough capacity to fit random labels, yet it generalizes well even without regularization
  - For random labels the NN memorizes the samples, but for true labels it learns patterns that generalize [Arpit+ 17]
- 
 WHY?
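The fitting-random-labels finding can be reproduced in miniature. The following NumPy sketch (a hypothetical tiny setup, not the paper's CIFAR-10 experiments) trains an overparameterized one-hidden-layer network with plain gradient descent on purely random labels and reaches (near-)perfect training accuracy:

```python
import numpy as np

rng = np.random.default_rng(0)

# tiny dataset with completely random labels: there is nothing to "learn"
n, d, width = 30, 10, 200
X = rng.normal(size=(n, d))
y = rng.choice([-1.0, 1.0], size=n)

# one-hidden-layer network f(x) = w2 . tanh(W1 x), width >> n (overparameterized)
W1 = rng.normal(size=(width, d)) * 0.5
w2 = rng.normal(size=width) * 0.1

lr = 0.01
for step in range(3000):
    H = np.tanh(X @ W1.T)                    # hidden activations, shape (n, width)
    err = H @ w2 - y                         # residual of the squared loss
    g2 = H.T @ err / n                       # gradient w.r.t. w2
    gH = np.outer(err, w2) * (1 - H ** 2)    # backprop through tanh
    g1 = gH.T @ X / n                        # gradient w.r.t. W1
    w2 -= lr * g2
    W1 -= lr * g1

train_acc = np.mean(np.sign(np.tanh(X @ W1.T) @ w2) == y)
print(train_acc)
```

The network memorizes the arbitrary labels, illustrating that capacity alone cannot explain why the same architectures generalize on real labels.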
21. SGD plays a significant role for generalization
- SGD achieves an approximate Bayesian inference [Mandt+ 17]
  - Bayesian inference provides samples following Ξ ~ P(Ξ|D)
- SGD's noise removes information about the input that is unnecessary for estimating the output [Shwartz-Ziv+ 17]
  - During training, the mutual information between the input and the network decreases, while that between the network and the output is kept
- Sharpness and norms of the weights also relate to generalization
  - Flat minima achieve generalization, but sharpness depends on the scale of the weights
  - If we find a flat minimum with a small weight norm, then it achieves generalization [Neyshabur+ 17]
[Figure: loss curves around a flat minimum vs a sharp minimum]
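The flat-vs-sharp distinction can be made concrete with a toy 1-D loss (an artificial construction for illustration, not a measure from any of the cited papers): two minima attain the same loss value, but a small weight perturbation hurts the sharp one far more.

```python
import numpy as np

# toy 1-D loss with two global minima of equal value but different curvature:
# a sharp minimum at w = -2 and a flat minimum at w = +2
def loss(w):
    return np.minimum(50.0 * (w + 2.0) ** 2, 0.5 * (w - 2.0) ** 2)

def sharpness(w, eps=0.1):
    """Largest loss increase within an eps-ball around w."""
    deltas = np.linspace(-eps, eps, 201)
    return np.max(loss(w + deltas) - loss(w))

sharp = sharpness(-2.0)   # 50 * 0.1^2 = 0.5
flat = sharpness(2.0)     # 0.5 * 0.1^2 = 0.005
print(sharp, flat)
```

Note the slide's caveat: rescaling the weights (when the model allows it) changes this sharpness number without changing the function, which is why sharpness alone is not a complete explanation.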
27. Why do we consider generative models?
- For more accurate recognition and inference
  - If we know the generation process, we can improve recognition and inference
    - "What I cannot create, I do not understand" (Richard Feynman)
    - "Computer vision is inverse computer graphics" (Geoffrey Hinton)
  - By inverting the generation process, we obtain a recognition process
- For transfer learning
  - By changing covariates, we can transfer the learned model to other environments
- For sampling examples to compute statistics and for validation
29. Representation learning is more powerful than the nearest neighbor method and manifold learning
- Actually, we can significantly reduce the required number of training samples by using representation learning [Arora+ 2017]
- Using a distance metric defined on the original space, or the original neighborhood notion, may not work
[Figure: "man with glasses" example. Ideally, nearby samples would help determine the label; in reality, samples with the same label lie in very different places in the original space, and their region may not even be connected]
54. ICA: Independent component analysis
Reference: [HyvÀrinen 01]
- Find components z that generate the data x:
    x = f(z)
  where f is an unknown function called the mixing function, and the components are independent of each other: p(z) = ∏ᔹ p(zᔹ)
- When f is linear and the p(zᔹ) are non-Gaussian, we can identify f and z correctly
- However, when f is nonlinear, we cannot identify f and z
  - There are infinitely many possible f and z
- → When the data is a time series x(1), x(2), 
, x(n) generated from z that is (1) non-stationary or (2) a stationary independent source, we can identify nonlinear f and z
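The identifiable linear case can be sketched directly. Below is a minimal FastICA-style fixed-point iteration in NumPy (the mixing matrix and uniform unit-variance sources are illustrative choices): linear mixing of non-Gaussian sources is unmixed up to sign and permutation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# two independent, non-Gaussian (uniform) unit-variance sources
S = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(2, n))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])      # illustrative mixing matrix: x = f(z) = Az
X = A @ S

# centre and whiten the observations
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Xw = E @ np.diag(d ** -0.5) @ E.T @ X

# FastICA-style fixed-point iteration with deflation, nonlinearity g = tanh
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        wx = w @ Xw
        w = (Xw * np.tanh(wx)).mean(axis=1) - (1 - np.tanh(wx) ** 2).mean() * w
        w -= W[:i].T @ (W[:i] @ w)   # stay orthogonal to components already found
        w /= np.linalg.norm(w)
    W[i] = w

S_hat = W @ Xw

# recovered components match the true sources up to sign and permutation
C = np.abs(np.corrcoef(np.vstack([S_hat, S]))[:2, 2:])
print(C)
```

Each recovered component correlates almost perfectly with exactly one true source; in the nonlinear case no such guarantee exists without the extra time-series assumptions above.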
60. Local representation vs distributed representation
- Local representation
  - each concept is represented by one symbol
  - e.g. Giraffe = 1, Panda = 2, Lion = 3, Tiger = 4
  - no interference, noise immunity, precise
- Distributed representation
  - each concept is represented by a set of symbols, and each symbol participates in representing many concepts
  - generalizable
  - less accurate
  - interference

             Giraffe  Panda  Lion  Tiger
  Long neck     ✓
  Four legs     ✓       ✓      ✓     ✓
  Body hair             ✓      ✓     ✓
  Paw pad                      ✓     ✓
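The contrast can be sketched in code, following one plausible reading of the slide's feature table (the exact check placement is ambiguous in the source); cosine similarity is just one way to make "generalizable" concrete:

```python
import numpy as np

animals = ["Giraffe", "Panda", "Lion", "Tiger"]

# Local representation: one symbol (one-hot vector) per concept
local = {a: np.eye(4)[i] for i, a in enumerate(animals)}

# Distributed representation: shared features
# (long neck, four legs, body hair, paw pad)
distributed = {
    "Giraffe": np.array([1, 1, 0, 0]),
    "Panda":   np.array([0, 1, 1, 0]),
    "Lion":    np.array([0, 1, 1, 1]),
    "Tiger":   np.array([0, 1, 1, 1]),
}

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# one-hot symbols never interfere, but carry no notion of similarity
print(cos(local["Lion"], local["Tiger"]))              # 0.0: all concepts orthogonal
# shared features make related concepts similar, which is what generalizes
print(cos(distributed["Lion"], distributed["Tiger"]))
print(cos(distributed["Lion"], distributed["Giraffe"]))
```

Anything learned about "four legs + body hair + paw pad" transfers between lion and tiger in the distributed case, while the one-hot code treats them as unrelated.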
64. Two-layer NN update rule interpretation
[Okanohara unpublished]
- The update rules of the two-layer feedforward network
    h = ReLU(W1 x)
    a = W2 h
  are
    dh = W2ᔀ da
    dW2 = da hᔀ
    dW1 = diag(ReLU′(W1 x)) dh xᔀ = diag(ReLU′(W1 x)) W2ᔀ da xᔀ
- These update rules correspond to storing the error (da) as a value and the input (x) as a key, as in a memory network
  - Updates apply only to active memories (ReLU′(W1 x))
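The stated update rules can be verified numerically. This sketch (with a hypothetical squared loss and arbitrary shapes) compares the analytic dW1 from the formulas above against finite differences:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, d_out = 4, 6, 3
W1 = rng.normal(size=(d_hid, d_in))
W2 = rng.normal(size=(d_out, d_hid))
x = rng.normal(size=d_in)
target = rng.normal(size=d_out)

relu = lambda z: np.maximum(z, 0.0)

def forward(W1, W2):
    h = relu(W1 @ x)
    a = W2 @ h
    return 0.5 * np.sum((a - target) ** 2)   # loss l(a)

# analytic gradients following the update rules above
z = W1 @ x
h = relu(z)
a = W2 @ h
da = a - target                  # dl/da for the squared loss
dh = W2.T @ da                   # dh = W2^T da
dW2 = np.outer(da, h)            # dW2 = da h^T
dW1 = np.outer((z > 0) * dh, x)  # dW1 = diag(ReLU'(W1 x)) dh x^T

# numerical check of dW1 against central finite differences
eps = 1e-6
num = np.zeros_like(W1)
for i in range(d_hid):
    for j in range(d_in):
        Wp = W1.copy(); Wp[i, j] += eps
        Wm = W1.copy(); Wm[i, j] -= eps
        num[i, j] = (forward(Wp, W2) - forward(Wm, W2)) / (2 * eps)

print(np.max(np.abs(num - dW1)))
```

The outer product with x makes the key/value reading visible: dW1 writes the (gated) error signal against the input pattern x, and only rows with active ReLU units (z > 0) are updated.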
65. ResNet is a memory-augmented network
[Okanohara unpublished]
- Since a ResNet has the form
    h = h + Resnet(h)
  and Resnet(h) consists of two layers, we can interpret it as recalling a memory and adding it to the current vector
  - The squeeze operation corresponds to limiting the number of memory cells
- ResNet looks up memory iteratively
  - A large number of steps = a large number of memory lookups
- This interpretation differs from the shortcut view [He+ 15] and the unrolled iterative estimation view [Greff+ 16]
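A minimal sketch of this interpretation (an invented key/value construction for illustration, not from any paper): in a two-layer residual block h + W2 ReLU(W1 h), the rows of W1 act as keys, the columns of W2 as values, and the ReLU selects which memories are recalled and added to h.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_mem = 8, 3

# hypothetical key/value memory realized as a two-layer residual block:
# rows of K are keys, columns of V are values -> block(h) = h + V @ relu(K @ h - b)
K = rng.normal(size=(n_mem, d))
K /= np.linalg.norm(K, axis=1, keepdims=True)   # unit-norm keys
V = rng.normal(size=(d, n_mem))
b = 0.95                                        # threshold: only near-matching keys fire

def residual_block(h):
    gate = np.maximum(K @ h - b, 0.0)   # ReLU selects the "active memories"
    return h + V @ gate                 # recalled values are added to the current vector

# a query aligned with key 0 recalls (a scaled copy of) value column 0
h = K[0]
out = residual_block(h)
recalled = out - h
print(recalled)
```

Stacking such blocks then performs one memory lookup per layer, matching the "many steps = many lookups" reading above.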
67. Conclusion
- There are still many unsolved problems in DNNs
  - Why can DNNs learn in general settings?
  - How should real-world information be represented?
- There are still many unsolved problems in AI
  - Disentanglement of information
  - One-shot learning using attention and memory mechanisms
    - Avoiding catastrophic forgetting and interference
  - Stable, data-efficient reinforcement learning
  - How to abstract information
    - grounding (language), strong noise (e.g. dropout), extracting hidden factors by exploiting (non-)stationarity or commonality among tasks
68. References
- [Choromanska+ 2015] "The Loss Surfaces of Multilayer Networks", A. Choromanska et al., AISTATS 2015
- [Lu+ 2017] "Depth Creates No Bad Local Minima", H. Lu et al., arXiv:1702.08580
- [Nguyen+ 2017] "The loss surface of deep and wide neural networks", Q. Nguyen et al., arXiv:1704.08045
- [Zhang+ 2017] "Understanding deep learning requires rethinking generalization", C. Zhang et al., ICLR 2017
- [Arpit+ 2017] "A Closer Look at Memorization in Deep Networks", D. Arpit et al., ICML 2017
- [Mandt+ 2017] "Stochastic Gradient Descent as Approximate Bayesian Inference", S. Mandt et al., arXiv:1704.04289
- [Shwartz-Ziv+ 2017] "Opening the Black Box of Deep Neural Networks via Information", R. Shwartz-Ziv et al., arXiv:1703.00810

70. References (continued)
- [Goodfellow+ 14] "Generative Adversarial Nets", I. Goodfellow et al., NIPS 2014
- [Goodfellow 16] "NIPS 2016 Tutorial: Generative Adversarial Networks", arXiv:1701.00160
- [Oord+ 16a] "Conditional Image Generation with PixelCNN Decoders", A. Oord et al., NIPS 2016
- [Oord+ 16b] "WaveNet: A Generative Model for Raw Audio", A. Oord et al., arXiv:1609.03499
- [Reed+ 17] "Parallel Multiscale Autoregressive Density Estimation", S. Reed et al., arXiv:1703.03664
- [Zhao+ 17] "Energy-based Generative Adversarial Network", J. Zhao et al., arXiv:1609.03126
- [Dai+ 17] "Calibrating Energy-based Generative Adversarial Networks", Z. Dai et al., ICLR 2017