1. The document discusses the challenges of widespread adoption of e-research technologies by everyday researchers. While early adopters found success, most researchers are not using the infrastructure services that have been created.
2. It argues that repositories and other e-research tools need to focus on the needs and perspectives of researchers. Researchers work with data, so tools should emphasize data sharing and metadata. They should also support collaboration and open participation in the scientific process.
3. For technologies to truly enable new forms of research, their use needs to become integrated into the everyday work of all researchers, not just a specialized few. Systems must be easy to use, empower researchers' autonomy, and intersect seamlessly with the digital and physical worlds.
Description of the way in which the Software Sustainability Institute engages the software-in-research community. It covers why, how, the programmes, how to select people, the activities those selected do, benefits, recommendations and more.
What to curate? Preserving and Curating Software-Based Art – Neil Grindley
This is a presentation given at the CHArt (Computers and History of Art) conference held in London in November 2011. The slides on the title page are images taken from works exhibited at the V&A Decode exhibition.
Crowdsourcing Scientific Work: A Comparative Study of Technologies, Processes... – Andrea Wiggins
Slides from my successful dissertation defense. The research focused on the role of technologies in supporting participation and organizing processes in citizen science projects, and the impacts of these processes on scientific outcomes.
Are cloud based virtual labs cost effective? (CSEDU 2012) – Nane Kratzke
Cost efficiency is an often-mentioned strength of cloud computing. In times of decreasing educational budgets, virtual labs provided by cloud computing might be an interesting alternative for higher education organizations or IT training facilities. This contribution analyzes the cost advantage of virtual educational labs provided via cloud computing and compares these costs to the costs of classical educational labs provided in a dedicated manner. It develops a four-step decision-making model which might be interesting for colleges, universities or other IT training facilities planning to implement cloud-based training facilities. Furthermore, it provides interesting findings on when cloud computing has economic advantages in education and when it does not. The developed four-step decision-making model, of general IaaS applicability, can be used to find out whether an IaaS cloud-based virtual IT lab approach is more cost efficient than a dedicated approach.
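The break-even idea behind such a cost comparison can be sketched in a few lines (an illustrative calculation only: the cost figures, utilisation model, and function names here are invented for this sketch, not taken from the paper):

```python
# Illustrative break-even comparison between a dedicated lab and an
# IaaS-based virtual lab. All figures and names are made-up
# placeholders, not values from the paper.

def dedicated_annual_cost(hardware_cost, lifetime_years, annual_ops_cost):
    """Amortised hardware plus yearly operations (power, admin, space)."""
    return hardware_cost / lifetime_years + annual_ops_cost

def cloud_annual_cost(hourly_rate, seats, hours_per_year):
    """Pay-per-use: instances only cost money while they are running."""
    return hourly_rate * seats * hours_per_year

dedicated = dedicated_annual_cost(hardware_cost=30000, lifetime_years=4,
                                  annual_ops_cost=5000)
cloud = cloud_annual_cost(hourly_rate=0.25, seats=25, hours_per_year=300)

# Low utilisation favours the cloud lab; heavy, constant use favours
# the dedicated lab.
print(dedicated)  # 12500.0
print(cloud)      # 1875.0
```

The crossover point depends almost entirely on utilisation hours, which is why a step-by-step decision model rather than a single price comparison is needed.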
The Implementing AI: Hardware Challenges webinar, hosted by KTN and eFutures, is the first event of the Implementing AI webinar series to address the challenges and opportunities that realising AI in hardware presents.
There will be presentations from hardware organisations and from solution providers in the morning, followed by Q&A. The afternoon session will consist of virtual breakout rooms, where challenges raised in the morning session can be workshopped.
Artificial Intelligence now impacts every aspect of modern life and is key to the generation of valuable business insights.
The Implementing AI webinar series is designed for people involved in the management and implementation of AI-based solutions – from developers to CTOs.
Find out more: https://ktn-uk.co.uk/news/just-launched-implementing-ai-webinar-series
Stanford IT Open House - Cloud-based Copyright Clearance Services 5 3-12 slid... – Martha Russell
Today, many obstacles exist in traditional mechanisms for content licensing, commonly resulting in under-utilization of content or copyright piracy. For example, it can be very difficult to locate the appropriate rights holders, or there are often prohibitively high transaction costs involved in getting permission to use content. Used since Spring 2011 at Stanford for print course materials and extended in Spring 2012 quarter to online course materials, the Stanford Intellectual Property Exchange (SIPX) now creates a user-friendly way of clearing rights for both print and online course materials. Personalized course readers have been produced using PrintGroove by Konica Minolta. The SIPX system will be available for all Stanford courses in Fall 2012. mediax.stanford.edu
Deep Learning has taken the digital world by storm. As a general purpose technology, it is now present in all walks of life. Although the fundamental developments in methodology have been slowing down in the past few years, applications are flourishing with major breakthroughs in Computer Vision, NLP and Biomedical Sciences. The primary successes can be attributed to the availability of large labelled data, powerful GPU servers and programming frameworks, and advances in neural architecture engineering. This combination enables rapid construction of large, efficient neural networks that scale to the real world. But the fundamental questions of unsupervised learning, deep reasoning, and rapid contextual adaptation remain unsolved. We shall call what we currently have Deep Learning 1.0, and the next possible breakthroughs as Deep Learning 2.0.
This is part 2 of the Tutorial delivered at IEEE SSCI 2020, Canberra, December 1st (Virtual).
The Internet, Science, and Transformations of Knowledge – Eric Meyer
Talk on June 7, 2012 in the Harvard SAP Speaker Series (Office of the Senior Associate Provost for the Harvard Library).
http://www.provost.harvard.edu/harvard_library/sap_speakers_series.php
Where are we going and how are we going to get there? – David De Roure
Keynote from JISC Projects start-up meeting
Information Environment 2009-11 & Virtual Research Environment http://www.jisc.ac.uk/whatwedo/programmes/inf11/inf11startup.aspx
Knowledge Infrastructure for Global Systems Science – David De Roure
Presentation at the First Open Global Systems Science Conference, Brussels, 8-10 November 2012
http://www.gsdp.eu/nc/news/news/date/2012/10/31/first-open-global-systems-science-conference/
myExperiment and the Rise of Social Machines – David De Roure
Talk at hubbub 2012, Indianapolis, 25 September 2012. The talk introduces myExperiment and Wf4Ever, discusses the future of research communication including FORCE11, and introduces the SOCIAM project (Theory and Practice of Social Machines) which launches in October 2012.
The Liber 2009 presentation repeated for a Dutch audience, in Dutch but with the English slides (just the first one is in Dutch :-)
Samenwerking Hogeschool bibliotheken SHB, 5 november 2009
Keynote on software sustainability given at the 2nd Annual Netherlands eScience Symposium, November 2014.
Based on the article: Carole Goble, "Better Software, Better Research", Issue No. 05, Sept.-Oct. 2014 (vol. 18), pp. 4-8, IEEE Computer Society.
http://www.computer.org/csdl/mags/ic/2014/05/mic2014050004.pdf
http://doi.ieeecomputersociety.org/10.1109/MIC.2014.88
http://www.software.ac.uk/resources/publications/better-software-better-research
Paul Henning Krogh: A New Dawn For E-Collaboration In Science – Vincenzo Barone
Plone has a growing reputation within research for working as an important component in international scientific collaboration infrastructures. In this panel session researchers shall present and answer questions on both their experiences in using Plone in a scientific context and on their research studying Plone in use by scientists. Attendees will leave with a better conception of what is needed for international scientific collaboration and what Plone can offer as an e-collaboration tool to support research infrastructures. The panel participants will bring in expertise on computer supported collaborative work (CSCW) to stimulate use and development of Plone applications for such use cases. Panel headlines:
- Exchange experiences with Plone in research environments (use cases)
- Requirements for Plone in research environments: what's available, which extensions or modifications do we need?
- Coordinate actions around Plone products for scientific use
- Promote the use of Plone in scientific environments
- Confront conceptions of collaborative research processes with Plone implementations of such models
Adoption of Cloud Computing in Scientific Research – Yehia El-khatib
Some might say the scientific research community is somewhat behind the curve of adopting the cloud. In this talk, I present a few examples of adopting the cloud from the wider research community. I also highlight some of the aspects by which cloud computing could affect scientific research in the near future and the associated challenges.
Essentials of Automations: Optimizing FME Workflows with Parameters – Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
"Impact of front-end architecture on development cost", Viktor Turskyi – Fwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
DevOps and Testing slides at DASA Connect – Kari Kakkonen
My and Rik Marselis' slides from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We also had a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
JMeter webinar - integration with InfluxDB and Grafana – RTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
Accelerate your Kubernetes clusters with Varnish Caching – Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024 – Tobias Schneck
As AI technology pushes into IT, I was wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud-native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and give you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply AI to our own infrastructure and make it work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
Key Trends Shaping the Future of Infrastructure.pdf – Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
This keynote covers the key trends across hardware, cloud and open-source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf – 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
Neuro-symbolic is not enough, we need neuro-*semantic* – Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Search and Society: Reimagining Information Access for Radical Futures – Bhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... – BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
UiPath Test Automation using UiPath Test Suite series, part 3 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... – UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
3. So why are there three projects addressing lack of uptake?
4. ...and a theme in the e-Science Institute? Adoption of e-Research Technologies
5. How did we get here?!
Early adopter success
Then rollout of infrastructure services
And then wondering where the users are
Heard at another repositories event...
"How do we persuade researchers to populate our repositories?"
6. e-Science is about global collaboration in key areas of science, and the next generation of infrastructure that will enable it.
Due to the complexity of the software and the backend infrastructural requirements, e-Science projects usually involve large teams managed and developed by research laboratories, large universities or governments.
7. What are we really trying to achieve here?
A. Everyone using the Grid/Repositories?
B. Research advances on an everyday basis that would not have happened otherwise?
Not just accelerated but new
8. How do we move from heroic scientists doing heroic science with heroic infrastructure to everyday scientists doing science they couldn't do before? (researchers: humanists, archaeologists, geographers, musicologists, ...)
It's the researchers!
Democratisation of e-Research
9. Jim Downing came up with the idea of "Long Tail Science"... So we are exploring how big science and long-tail science work together to communicate their knowledge. Long-tail science needs its domain repositories - I am not sanguine that IRs can provide the metalayers (search, metadata, domain-specific knowledge, domain data) that are needed for effective discovery and re-use.
– Peter Murray-Rust
10. [Diagram: the social process of science 2.0 – undergraduate and graduate students, digital researchers, libraries, Virtual Learning Environment; experimentation, workflows, experimental results & analyses; peer-reviewed journal & conference papers, preprints, reprints, technical reports; data, metadata, provenance, ontologies; local web and certified repositories]
11.
12.
13. 1 Everyday researchers doing everyday research
• Not just a specialist few doing heroic science with heroic infrastructure
• Chemists are blogging the lab
• Everyone is mashing up
• Everyday hardware – multicore machines and mobile devices
14. 2 A data-centric perspective, like researchers
• Data is large, rich, complex and real-time
• There is new value in data, through new digital artefacts and through metadata e.g. context, provenance, workflows
• This isn't "anti-computation" – design interaction around data
15. 3 Collaborative and participatory
• The social process of science revisited in the digital age
• Collaborative tools – blogs and wikis
• e-Science now focuses on publishing as well as consuming
• Scholarly lifecycle perspective
16. 4 Benefitting from the scale of digital science activity to support science
• This is new and powerful!
• Community intelligence
• Review
• Usage informing recommendation
• e.g. OpenWetWare
• e.g. myExperiment
17. 5 Increasingly open
• Preprint servers and institutional repositories
• Open journals
• Open access to data
• Science Commons
• Object Reuse & Exchange
18. 6 Better not Perfect
• The technologies people are using are not perfect
• They are better
• They are easy to use
• They are chosen by scientists
19. 7 Empowering researchers
• The success stories come from the researchers who have learned to use ICT
• Domain ICT experts are delivering the solutions
• Anything that takes away autonomy will be resisted
20. 8 About pervasive computing
• e-Science is about the intersection of the digital and physical worlds
• Sensor networks
• Mobile handheld devices
21. Onward and Upward
• e-Research is now enabling researchers to do some completely new stuff!
• As the individual pieces become easy to use, researchers can bring them together in new ways and ask new questions
• "The next level"
"Standing on the shoulders of giants" (Everyday researchers are giants)
www.w3.org/2007/Talks/www2007-AnsweringScientificQuestions-Ruttenberg.pdf
22. Repositories
• Absolutely key role in future research. So think of a better word!
• Think of a park / reserve / gardens / zoo
– Visitors, rangers, wardens, gardeners, experts, security, volunteers, ...
– Curation by providers, experts and consumers
23. www.oreilly.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html
Those 8 Repository points
1. Not just a specialist few doing heroic science with heroic infrastructure – repositories for all!
2. There is new value in data, through new digital artefacts and through metadata e.g. context, provenance, workflows
3. e-Science now focuses on publishing as well as consuming
4. Usage informing recommendation
5. Researchers work with collections - Object Reuse & Exchange
6. They are easy to use
7. Anything that takes away autonomy will be resisted
8. e-Science is about the intersection of the digital and physical worlds (not 1970s library catalogue interfaces)
24. And we need to curate processes too! Curation of process
Goble & De Roure, Educause Review Sep/Oct 2008
• Find a process based on what it does, and find copies or similar services usable as alternates.
• Understand how and when it works, how to operate it correctly and predict its performance.
• Know the conditions for use: permissions, licenses, platforms, and costs.
• Judge the benefits of adoption based on its reputation, provenance and validation by peers.
• Estimate the risk of adoption based on its reliability and stability.
• Get assistance for its incorporation into applications and workflows.
25. Transformation is already underway
• To understand where we're going, look at communities which have been early to embrace new technology.
• e-Science is one. What can we learn?
• Incidentally, so is music and broadcast!
– Vinyl was like books
– Now the process is digital from the studio through to playback on an iPod
– People create content
– People publish content
– Has the business adapted?
26. Note to Reader. The next slides are not intended to be anti-grid. Everyone working on Grid is doing great work.
27. Don’t think rollout of technologies...
Mass
Use by
Researchers
Think roll-in of researchers...
Mass
Use by
Researchers
Knowledge co-production vs Service Delivery!
28. [Diagram: N components linked pairwise – on the order of N² connections]
Without middleware we need lots of bits of software to join things together.
29. [Diagram: N components linked through one middleware – 2N connections]
With middleware there are fewer arrows!
30. [Diagram: many competing middlewares – the number of connections is a polynomial involving N1, N2 and M]
But this is what happened. Now the picture with lots of thin arrows isn't quite so scary!
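The arrow-counting behind slides 28-30 is simple arithmetic, and can be sketched as follows (a minimal illustration; the function names and the example value of N are my own, not from the slides):

```python
# Counting integration "arrows" between N tools, per the slides'
# argument. A minimal sketch; names and example values are invented.

def pairwise_connections(n):
    """Every tool talks directly to every other: n*(n-1)/2 links."""
    return n * (n - 1) // 2

def middleware_connections(n):
    """Every tool talks only to one shared middleware:
    n links in, n links out, i.e. 2n in total."""
    return 2 * n

# With 10 tools, direct integration needs 45 links; one shared
# middleware needs only 20.
print(pairwise_connections(10))    # 45
print(middleware_connections(10))  # 20
```

Slide 30's point is that several competing middlewares reintroduce a polynomial number of links, eroding the 2N advantage that justified middleware in the first place.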
31. [Diagram: HPC, Grid, cloud – "use Web 2.0 here"]
The Web is being embraced for usability and programmability, e.g. mashups.
32. And Grid is trying to come to terms with multicore and clouds!
33. A Thought Experiment
Imagine Eprints/Dspace/Fedora isn't something you download and run on a local server.
Imagine instead that you just go to the cloud and make one*.
How would this repository ecosystem self-organise to support Research 2.0?
Would there be institutional repositories?
* (Actually you can!)
34. Is it a wave or is it a particle?
Tension between data being "out on the Web" (user view) or in an institutional machine room (provider view).
What is the curator view?
Issues perceived differently for metadata servers and data servers.
36. How Repositories can avoid Failing like the Grid
1. Understand what the users will need by going on the journey together
2. Be open-minded: are we solving the right problem? (Don't forget curation of process!)
3. Don't create artificial distinctions from Web
4. Beware standards as a barrier to adoption
5. Think cloud, outside the institutional box: imagine the repository factory
6. Think of a new name for repositories!
37. Contact
David De Roure
dder@ecs.soton.ac.uk
Thanks
Carole Goble
Jeremy Frey
Simon Coles
Peter Murray-Rust