In this event we will cover:
- What generative AI is and how it is shaping the future of work.
- Best practices for developing and deploying generative AI models in production.
- The future of generative AI: how it is expected to evolve in the coming years.
2. Introductions
Alp Uguray
• Senior Solutions Engineer at Ashling Partners
• 4x UiPath MVP Award
• Host & Creator of the Masters of Automation Podcast (https://themasters.ai)
3. Innovation Ambition Matrix
Two axes: "Where to play" (which markets and customers to serve) vs. "How to win" (which products and assets to use).
• CORE (already doing it today): use existing products & assets to serve existing markets & customers.
• ADJACENT (not doing today, but plugs right into what we are doing today): add incremental products & assets to enter adjacent markets and serve adjacent customers.
• TRANSFORMATIONAL (large market opportunity identified, but very different from what we are doing today): develop new products & assets to create new markets and target new customers.
5. Importance of Scenario Planning
Driven by productivity gains and improved customer and employee experiences, the dominance of conversational AI depends on a few different outcomes of its adoption.

Good ones (utopic view)
• Productivity gains from leveraging AI versus manual execution
• Augmentation of task execution through human-in-the-loop (HITL) suggestions and recommendations

Not-so-good ones (most likely)
• Job displacement / job redefinition
• Digital misuse
• Digital divide
• Increased vulnerability to cyberattacks

Worst ones (cautious view)
• Data privacy erosion
• Fake content and IP-law disputes
• Failure of regulation
• LLMs dominate the communication lines: you no longer know who you are speaking with, as personalized face, voice, and text generation become widespread
6. Some Guiding Principles in Adoption

Focus on realistic applications that complement existing business capabilities.
• Prioritize applications by ease of implementation and risk level, gradually moving toward more complex and more valuable ones. A key example is using generative AI for knowledge management, which can deliver immediate value across business functions.

Avoid a perfectionist attitude toward developing AI applications; it can trap you in the proof-of-concept phase without ever delivering value.
• Take an iterative product-development approach: build applications that solve specific customer or employee problems, then continuously adjust them based on feedback until they are ready to scale. This keeps the effort purposeful and helps transform industry standards.

Ensure that AI adoption does not compromise the organization's data and intellectual-property security, customer data security, brand credibility, or legal protections.
• Have leaders from operations, technology and data teams, and the legal department collaborate to create guardrails that empower the organization without hindering it.
7. What’s prompt engineering?
Prompt engineering is the ‘art’ of optimizing natural language for an LLM. Effective prompts provide the relevant context and detail to an LLM, thereby improving the accuracy and relevance of the response.

The quality of prompts directly affects the output of the model. Effective prompts help the model understand your request and generate appropriate responses, especially in complex or ambiguous scenarios.

Tips / Tricks
• Zero-shot learning: the model has never seen your data, but makes inferences from its general understanding
• Chain-of-thought (CoT) reasoning: ‘break it down, step by step’
• Providing relevant context: ‘I am’ or ‘you are’
• Explicit sequencing: first do ‘xyz’, then do ‘xyz’, finally…
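As a rough illustration, the tips above can be combined into one prompt template. This is a minimal sketch; the role, task, and step values are invented for illustration, and a real system would pass the resulting string to whatever LLM API you use.

```python
def build_prompt(role: str, task: str, steps: list[str], context: str = "") -> str:
    """Assemble a prompt applying the tips above: a role ('you are'),
    relevant context, explicit ordered steps, and a chain-of-thought cue."""
    lines = [f"You are {role}."]             # role framing: 'I am' / 'you are'
    if context:
        lines.append(f"Context: {context}")  # relevant context and detail
    lines.append(f"Task: {task}")
    # explicit sequencing: 'First do X, then Y, finally Z'
    for i, step in enumerate(steps, start=1):
        lines.append(f"Step {i}: {step}")
    lines.append("Let's think step by step.")  # chain-of-thought cue
    return "\n".join(lines)

# Hypothetical example: an invoice-review assistant
prompt = build_prompt(
    role="a senior accounts-payable analyst",
    task="flag invoices that look like duplicates",
    steps=["Compare vendor names", "Compare amounts and dates",
           "Report likely duplicates"],
    context="Invoices arrive as CSV rows.",
)
print(prompt)
```

The same template works for most tasks: only the role, context, and step list change.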
8. Shot Learning

Zero-shot learning
This is a problem setup in machine learning where the model is asked to accurately classify data it has never seen during training. In other words, the model is expected to infer classes that were not part of its training data, typically by leveraging high-level abstractions and understanding learned during training to make accurate predictions on the unseen classes. Zero-shot learning is especially important in settings where it is costly or time-consuming to collect large labeled datasets for every possible class.

Few-shot learning
Few-shot learning refers to a model generalizing well from a small number of examples, often just one or two (hence the terms "one-shot" and "two-shot" learning). Traditional machine learning models are often trained on large amounts of data; in few-shot learning, the idea is to design models that can extract useful information from a few examples and still make accurate predictions. This is similar to how humans can often learn concepts from just a few examples.
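In LLM prompting, the practical difference is whether labeled examples are included in the prompt. A minimal sketch (the labels and example texts are invented for illustration):

```python
def make_classification_prompt(text, labels, examples=None):
    """Build a classification prompt.
    Zero-shot: no examples; the model relies on its prior understanding.
    Few-shot: a handful of (text, label) pairs guide the model."""
    lines = [f"Classify the text into one of: {', '.join(labels)}."]
    for ex_text, ex_label in (examples or []):  # few-shot demonstrations, if any
        lines.append(f'Text: "{ex_text}"\nLabel: {ex_label}')
    lines.append(f'Text: "{text}"\nLabel:')     # the actual query
    return "\n".join(lines)

# Zero-shot: no examples in the prompt
zero_shot = make_classification_prompt(
    "The service was great!", ["positive", "negative"])

# Few-shot: two demonstrations precede the query
few_shot = make_classification_prompt(
    "The service was great!", ["positive", "negative"],
    examples=[("I loved it", "positive"), ("Terrible experience", "negative")])
```

The zero-shot prompt asks the model to rely entirely on what it learned in training; the few-shot prompt spends a little prompt space on demonstrations to anchor the output format and decision boundary.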
9. Some considerations
Data privacy and security:
• Avoid using real customer data or any personally
identifiable information (PII).
• Use anonymized or synthetic data sets whenever
possible.
• Ensure data storage and transfer follow best practices
and comply with relevant regulations, such as GDPR
or HIPAA.
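One common pattern for the bullets above is to redact obvious PII before any text leaves your environment. A minimal regex-based sketch; these patterns are illustrative, not production-grade, and real PII detection should use dedicated tooling.

```python
import re

# Illustrative patterns only; they will miss many real-world PII formats.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each pattern match with a [TYPE] placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

msg = "Reach Jane at jane.doe@example.com or 555-867-5309."
print(redact_pii(msg))  # Reach Jane at [EMAIL] or [PHONE].
```

Running the redaction as a pre-processing step keeps raw identifiers out of prompts, logs, and third-party APIs, which complements (but does not replace) anonymized or synthetic data sets.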
Bias and “hallucinations” (ChatGPT can make things up):
• Be aware of potential biases in data sets and algorithms, which could lead to unfair or discriminatory outcomes.
• Use techniques such as data pre-processing or algorithmic adjustments to minimize the impact of biases.
Responsible use of AI:
• Ensure that your solution aligns with ethical principles
and responsible AI guidelines.
• Avoid applications that could be harmful,
discriminatory, or promote misinformation.