Sankey, M. 2023. Creating a new culture around authenticity and generative AI. Research Bazaar Northern Territory. Charles Darwin University. Darwin. 25-26 October.
1. CRICOS Provider No: 00300K (NT/VIC) 03286A (NSW) RTO Provider No: 0373 TEQSA Provider ID PRV12069
Creating a new culture around authenticity
and generative AI
ResBazNT 2023
Professor Michael Sankey
Director, Learning Futures and Lead Education Architect
President, Australasian Council on Open Distance and eLearning (ACODE)
Community Fellow, Australasian Society for Computers in Learning in Tertiary Education (ASCILITE)
https://michaelsankey.com
2. Charles Darwin University acknowledges all
First Nations people across the lands on
which we live and work, and we pay our
respects to Elders both past and present.
3. • I’ll share some strategies for having the critical
conversation about the use of Generative AI
• I hope to demystify its use and, more importantly, to
help create a new culture around it
• Yes, it’s still early days, and we are still coming to
terms with this
• Development of an institutional framework for
use
• Lay a foundation for productive engagement
with Generative AI
About this presentation
Image (Adobe Firefly): a student wondering what to do about academic research integrity and the use of generative AI
4. • If I were an employer and I knew Generative AI could help my
workers be more productive, would I want them to use it?
• My son, an Executive Producer at the Mayo Clinic, now
expects his team to use Gen AI to help produce scripts and
production schedules
• He also uses it with his kids to write bedtime stories
• All high schools will be using it as of 2024
• How do we know what they are doing out in industry?
Statements to respond to
5. • Let the machines do what machines do well and let’s train
our students to both master the machines and extend
(value add) on what they can do so well...
• Authenticity is socially constructed, and it is this that is
being called into question with Generative AI
• We can strengthen our epistemic and normative approach
to contemporary research by understanding the part to be
played by the machine, whilst recognising the potential for
human empowerment, democratic agency, and creativity
and productivity.
Some key statements to set the scene
13. • Claude 2 scored 76.5% on the
multiple choice section of the
Bar exam, up from 73.0% for
Claude 1.3.
• When compared to college
students applying to graduate
school, Claude 2 scores above
the 90th percentile on the
reading and writing exam, and
similarly to the median
applicant on quantitative
reasoning.
https://www.anthropic.com/index/claude-2
14. “It promises to open almost limitless spaces for
scientific exploration by revealing hidden patterns in
the otherwise impenetrable heterogeneity of the
natural world, to enable us to collectively examine
and interpret latent properties of complex physical
and biological systems that have heretofore
remained undetected...It can unlock countless
possibilities for scientific discovery across every
discipline of the contemporary natural and applied
sciences, placing human ingenuity at an apex of its
capacity to create future worlds that serve the
greater common good.” (Leslie, 2023)
Professor David Leslie. Queen Mary University of London, UK. The Alan Turing Institute
15. • It can improve efficiency and increase productivity by
streamlining critical analysis, synthesis, design and writing
processes, including preparation of grant, fellowship and
project proposals and publications.
• It may be used in certain scenarios to stimulate critical or
creative thinking by providing new insights and perspectives.
• Under appropriate circumstances, it can help with analysis
of large amounts of non-sensitive data and help to highlight
important findings, saving hours of manual data analysis.
Potential benefits of Gen AI
https://www.deakin.edu.au/research/support-for-researchers/research-integrity/generative-artificial-intelligence-ai
16. Uses of Gen AI in the research process
https://www.iesalc.unesco.org/wp-content/uploads/2023/04/ChatGPT-and-Artificial-Intelligence-in-higher-education-Quick-Start-guide_EN_FINAL.pdf
18. • We need to take considerable care when
uploading information into the interfaces
of Gen AI tools
• Certain types of data should never be input
to these tools
• Guidelines on ethical and legal
obligations must be adhered to when
interacting with Gen AI tools, particularly
when preparing grant, fellowship and
project proposals and publications
Guidelines when using commercial
Gen AI tools in research
19. • Only input data that would also be appropriate to share with
other organisations
• We lose control over any information uploaded to Gen AI
interfaces, so there is a risk that the information you submit
will be reused
• Uploading certain information can breach ethical obligations,
as it can be used to produce malicious content for unethical
purposes (theft, fraud, discrimination, misinformation)
• It is never appropriate to submit the following into external Gen
AI platforms: third-party copyrighted materials, or materials
that you do not have rights to; confidential or sensitive data or
material, or private or personal information; human research
data
Cautions when inputting data into Gen AI
20. • Information provided to Gen AI may
enter the public domain and be
accessed by unspecified third parties
(NHMRC, 2023)
• Peer reviewers must not input any
part of a grant application, or any
information from a grant application,
into a natural language processing
and/or artificial intelligence
technology system to assist them in
the assessment of applications.
(NHMRC, 2023)
Importantly
21. • Anything Gen AI produces is based on material from the internet
(often without the permission of authors or appropriate attribution)
• Information produced by Gen AI may inadvertently use the
intellectual property of others or be factually incorrect
• The use of AI in a manuscript, to produce images or graphical
elements, or in the collection and analysis of data, must be
disclosed, including how and which AI tool was used
• Authors are fully responsible for the content of their manuscript,
even those parts produced by an AI tool, and are thus liable for any
breach of publication ethics (COPE, 2023; NHMRC, 2023)
Cautions regarding Gen AI outputs
23. Tools
• Make it your responsibility to get to know the different Gen AI tools and how they
can be used and misused
Accountability
• Understand that you, as the researcher, remain accountable for what you publish
Frameworks
• Understand how your institution’s research integrity and data security frameworks
apply to the use of AI tools
Disclosure
• Be upfront about the use of AI tools in your research, in accordance with relevant
policies and procedures
Limitations
• Keep in mind the limitations - including biases and inaccuracies - of the tools you
use
24. Implications
• Seek to understand the intellectual property implications, to you and others, of
uploading content to a third-party platform
Privacy
• Remember that some types of information, such as sensitive patient data, should
never be uploaded to commercial external platforms
Change
• Speak up and lead change at your institution if the appropriate and inappropriate
use of tools is not being communicated
External requirements
• Make sure you understand the requirements of scholarly publishers and funding
bodies
Culture
• Help grow a culture of integrity in Higher Degree by Research students, by
promoting open and honest discussions about the appropriate use of AI tools
30. • COPE (2023). Authorship and AI tools. Available from: https://publicationethics.org
• Leslie, D. (2023) Does the sun rise for ChatGPT? Scientific discovery in the age of
generative AI. AI Ethics (2023). https://doi.org/10.1007/s43681-023-00315-3
• Peterson, R. A. (2005). In search of authenticity. Journal of Management Studies, 42(5),
1083–1098.
• TEQSA (2023). Artificial Intelligence resources. Available from:
https://www.teqsa.gov.au/guides-resources/higher-education-good-practice-
hub/artificial-intelligence
• Tong, Y. (2023). Research on criminal risk analysis and governance mechanism of
generative artificial intelligence such as ChatGPT. Studies in Law and Justice, 2(2), 85–94.
Available from: https://www.pioneerpublisher.com/slj/article/view/340
References