Coding with AI - Understanding LLMs and how to use them
May 2025
"Evolution, Morpheus. Evolution! Like The Dinosaur. Look Out That Window. You've Had Your Time. The Future Is OUR World, Morpheus. The Future Is OUR Time."
Becoming Neo
May 2025
Agenda
● Getting intuition for the technology and the buzzwords
● How to maximize value from working with AI, and what the limitations are
So let's start with the basics

$$\mathrm{Attn}^{a}(\mathbf{X}) = \operatorname{softmax}\!\left(\frac{(\mathbf{X}\mathbf{W}^{a}_{Q})(\mathbf{X}\mathbf{W}^{a}_{K})^{\top}}{\sqrt{d_E}} + M\right)\mathbf{X}\mathbf{W}^{a}_{V}$$

where $a \in 1...A$ is the head number, $f$ is some function like ReLU or whatever, and the $\mathbf{b}$s are biases ($M$ is the attention mask and $d_E$ is the size of the embedding).

The output of layer $l \in 1...L$, $\mathbf{X}^{l}$, is

$$\mathbf{H} = \mathbf{X}^{l-1} + \mathrm{MultiHead}(\mathbf{X}^{l-1}), \qquad \mathbf{X}^{l} = \mathbf{H} + f(\mathbf{H}\mathbf{W}_1 + \mathbf{b}_1)\,\mathbf{W}_2 + \mathbf{b}_2$$
LLM - Large Language Model
● Large - trained on a huge data set and uses a huge number of parameters
● Language - geared toward understanding language
● Model - a type of neural network
Ok - so the real basics
Perceptron (1957)
Network
https://playground.tensorflow.org
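To make the 1957 idea concrete, here is a minimal perceptron in plain Python. The training data, learning rate, and epoch count are all invented for illustration; the point is the update rule, which nudges the weights whenever the prediction is wrong.

```python
# Toy perceptron (Rosenblatt, 1957): learns a linear decision rule.
# Data, learning rate, and epoch count are made up for illustration.

def train_perceptron(samples, lr=0.1, epochs=20):
    """samples: list of ((x1, x2), label) with label in {0, 1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred                 # -1, 0, or +1
            w[0] += lr * err * x1          # nudge weights toward the label
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# AND is linearly separable, so a single perceptron can learn it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 0, 0, 1]
```

A single perceptron can only learn linearly separable functions (it famously fails on XOR), which is why networks of them - like the ones in the playground link above - came next.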
embeddings
https://projector.tensorflow.org/
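The embedding idea in one small sketch: words become vectors, and related words end up close together. The 3-d vectors below are hand-invented (real embeddings have hundreds of learned dimensions), but the cosine-similarity measure is the real one used in the projector above.

```python
# Toy embeddings: related words get nearby vectors.
# These 3-d vectors are invented; real embeddings are learned.
import math

emb = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means same direction, 0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

print(cosine(emb["king"], emb["queen"]))  # high: semantically close
print(cosine(emb["king"], emb["apple"]))  # low: unrelated
```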
Self Supervised Learning
The quick brown fox jumps over the _____
The quick brown fox jumps over the _____ dog
The _____ brown fox jumps over the lazy dog
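The trick above is what makes this "self-supervised": the text supplies its own labels. A sketch of how one sentence becomes many (input, target) training pairs, no human annotation needed:

```python
# Self-supervised learning: mask one word at a time and the sentence
# itself provides the training label.

def masked_pairs(sentence):
    words = sentence.split()
    pairs = []
    for i, target in enumerate(words):
        masked = words[:i] + ["_____"] + words[i + 1:]
        pairs.append((" ".join(masked), target))
    return pairs

for inp, tgt in masked_pairs("The quick brown fox jumps over the lazy dog"):
    print(f"{inp!r} -> {tgt!r}")
```

One nine-word sentence yields nine training examples - which is why "huge data set" is feasible: the web is full of unlabeled text that labels itself.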
Attention
Attention helps a neural network link related words and handle disambiguation:
● Lexical - e.g. "flies" as a verb vs. part of a noun; understanding that "like" relates to "arrow"/"banana"
● Structural - "fruit flies" is a unit
Time flies like an arrow; fruit flies like a banana
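The mechanics behind this, stripped to the bone: each word's output is a weighted mix of every word's value, with weights from query-key similarity. The 2-d vectors here are hand-made to show one word attending more to a related word than an unrelated one; real models learn these projections.

```python
# Toy scaled dot-product attention. Vectors are tiny and hand-made;
# real models learn Q/K/V projections over hundreds of dimensions.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    d = len(keys[0])
    out = []
    for q in queries:
        # similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)  # how much this word attends to each word
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# The query aligns with the first key, so the first value dominates the mix.
q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[1.0, 0.0], [0.0, 1.0]]
print(attention(q, k, v))
```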
Consequences
● The main algorithm is next-word (actually, part-of-word) prediction
○ What we get is one probable option, not the answer
● Common things are easy - if you are using common practices and working on areas that have a lot of good examples, chances are AI can really push you fast
○ The corollary: if you have unique patterns in your code, or a completely novel area, AI will struggle
● Getting exactly what you want is hard
● Probable != Correct (aka "hallucinations")
● CONTEXT is king
https://github.com/vectara/hallucination-leaderboard
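A small sketch of why "probable != correct": the model emits a distribution over next tokens and you get one sample from it. The distribution below is invented, but the mechanism is the real one - even a low-probability continuation gets picked sometimes.

```python
# What an LLM hands you is one sample from a next-token distribution.
# The probabilities below are invented for illustration.
import random

def sample_next(probs, rng):
    tokens = list(probs)
    return rng.choices(tokens, weights=[probs[t] for t in tokens], k=1)[0]

next_token = {"dog": 0.6, "cat": 0.2, "fence": 0.15, "moon": 0.05}
rng = random.Random(0)
draws = [sample_next(next_token, rng) for _ in range(1000)]

print(draws[:5])
print(draws.count("dog") / 1000)  # hovers around 0.6, but any token can win
```

"dog" is the most probable ending for "the quick brown fox jumps over the lazy ___" - but nothing stops the sampler from occasionally handing you "moon". That, at small scale, is a hallucination.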
Context is king!
Agent
MCP - Model Context Protocol
(but there's already AI in API)
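Under the hood MCP is JSON-RPC 2.0: the client asks a server what tools it has (`tools/list`) and then invokes one (`tools/call`). The method names follow the MCP spec; the tool name and its arguments below are invented for illustration.

```python
# Shape of MCP messages (JSON-RPC 2.0). Method names are from the MCP
# spec; the tool "search_tickets" and its arguments are hypothetical.
import json

list_tools = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

call_tool = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "search_tickets",                     # hypothetical tool
        "arguments": {"query": "login bug", "limit": 5},
    },
}

print(json.dumps(call_tool, indent=2))
```

The model never calls the tool itself - the agent runtime sees the model "wants" a tool, sends this message to the MCP server, and feeds the result back into the context.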
RAG - Retrieval Augmented Generation
● Used to be a pre-processing step to enrich the context - now it is basically a big MCP for search
● Helps bring in relevant data (to help with attention)
● Enforces permissions over data
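The whole RAG loop fits in a few lines: retrieve the snippets most relevant to the query, then prepend them to the prompt so the model's attention has concrete facts to latch onto. Retrieval here is naive keyword overlap on invented documents; real systems use embedding similarity (and apply the permission checks mentioned above at retrieval time).

```python
# Minimal RAG sketch: retrieve relevant snippets, prepend them to the
# prompt. Docs are invented; real retrieval uses embeddings, not keywords.

DOCS = [
    "Orders older than 30 days cannot be refunded.",
    "The API rate limit is 100 requests per minute.",
    "Support is available Monday through Friday.",
]

def retrieve(query, docs, k=2):
    """Rank docs by how many query words they share (crude but illustrative)."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(q_words & set(d.lower().split())))
    return scored[:k]

def build_prompt(query, docs):
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What is the API rate limit?", DOCS))
```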
Consequences (again - it bears repeating)
● The main algorithm is next-word (actually, part-of-word) prediction
○ What we get is one probable option, not the answer
● Common things are easy - if you are using common practices and working on areas that have a lot of good examples, chances are AI can really push you fast
○ The corollary: if you have unique patterns in your code, or a completely novel area, AI will struggle
● Getting exactly what you want is hard
● Probable != Correct (aka "hallucinations")
● CONTEXT is king
Sample Rules
“Never Send A Human To Do A Machine's Job.“
What’s a “machine job” then?
● Write unit tests
● Check coverage
● Verify standards
● Scan for vulnerabilities
● Convert Figma to code
● A lot more
But remember
● Simple refactorings are much harder for an LLM - it doesn't copy-paste, it regenerates, so big refactorings can be risky
● Don't try big things if you/the LLM didn't also write down all the tasks (e.g. in an MD file) - the attention can break
Feedback loop
● Long context
○ Hard to hold attention on the right thing
○ Increases the odds of hallucination (we predict on predictions rather than on concrete knowledge)
● So start fresh contexts often, and work in small increments
● Don't hesitate to git commit successful interim steps
“We Can Never See
Past The Choices We
Don't Understand.”
An LLM can also help you
● Understand the code you or someone else wrote
● Extract the essence of documentation (sometimes via tools)
● Review the plan before making changes
● Grasp nuances of the code you - or you and the LLM - just created
● Trace why something is broken
● Analyze profiling data (CPU/memory)
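The last point works well because profiler output is dense, structured, and tedious to read by hand - exactly the kind of thing to paste into a chat. A sketch of producing such output with Python's built-in cProfile on an invented toy workload:

```python
# Generate profiling output worth handing to an LLM, using Python's
# standard-library cProfile/pstats. The workload is a made-up toy.
import cProfile
import io
import pstats

def slow_sum(n):
    return sum(i * i for i in range(n))

def workload():
    return [slow_sum(10_000) for _ in range(50)]

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(10)
print(report.getvalue())  # paste this and ask: "where does the time go?"
```

The same applies to stack traces, memory snapshots, and flame-graph exports - machine-generated diagnostics are a natural fit for "machine jobs".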
RISKS
The S in LLM/MCP stands for security
Once only Claude and I knew what this code does... now...
Endless POC-level code
“I Know Kung-Fu”
(or do I)
Putting it all together
● (modified) RIPER framework
● Structured Workflow: Clear separation of development phases, with Research and Innovate unified for a
streamlined discovery and ideation process.
● Memory Bank: Persistent documentation across sessions
● State Management: Explicit tracking of project phase and mode
"I Don't Know The Future. I Didn't Come Here To Tell You How This Is Going To End. I Came Here To Tell You How It's Going To Begin."