The document discusses the BBC's efforts to build coherence across its digital content by implementing a linked data approach. It describes how historically the BBC created separate microsites that were coherent individually but not connected. With linked data, the BBC assigns unique URIs to people, places, concepts and creates RDF metadata to link related information across domains. This allows different teams to model their domains independently while still connecting data to build a coherent whole and enable new discoveries by following semantic connections between topics.
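The core idea above - give each thing a unique URI and connect things with typed links so journeys can cross domains - can be sketched as a tiny in-memory triple store. This is purely an illustrative sketch, not BBC code: every URI and predicate below is invented for the example (the BBC's real vocabularies live at e.g. http://www.bbc.co.uk/ontologies/programmes).

```python
# Minimal sketch of linked data as subject-predicate-object triples.
# All identifiers here are invented for illustration only.
triples = [
    # A programme episode features a species...
    ("bbc:programmes/b00gurrd", "po:features", "bbc:wildlife/tiger"),
    # ...and the species links onward to its habitat and status.
    ("bbc:wildlife/tiger", "wo:livesIn", "bbc:wildlife/grassland"),
    ("bbc:wildlife/tiger", "wo:conservationStatus", "bbc:wildlife/endangered"),
]

def follow(subject, graph):
    """'Follow your nose': return everything directly linked from a URI."""
    return [(p, o) for s, p, o in graph if s == subject]

def reachable(start, graph):
    """Transitively follow links to discover related resources."""
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        for _, obj in follow(node, graph):
            if obj not in seen:
                seen.add(obj)
                frontier.append(obj)
    return seen
```

Starting from the episode URI, `reachable` discovers the tiger, its habitat and its conservation status, even though the programme domain knows nothing about wildlife modelling - which is the coherence-through-links point the document makes.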
My presentation at the 2nd London Linked Data meet up.
Introduces Wildlife Finder (http://www.bbc.co.uk/wildlifefinder): the types of resources it describes and the ontology used to describe it.
A rework of metade's slides at http://www.slideshare.net/metade/linked-data-on-the-bbc for a SAMT 2009 Industry Day presentation.
Details several linked data projects going on at the BBC, and why/how we do it.
Jane Finnis Keynote NDF2009 Part Two (see Part One) - Jane Finnis
Part Two of my key note presentation to the National Digital Forum 2009 in New Zealand (NDF 2009).
You can read the take-homes on my blog: http://janefinnis.wordpress.com/2009/12/02/take-homes-from-the-ndf-2009-in-new-zealand/
A presentation about the traces left behind on Twitter about the conference "..." - Margarida Fonseca
Follow the Money was a conference about the role that databases and spreadsheets play in our lives: "everything we do leaves traces behind".
And this presentation is about the traces left behind on Twitter, the official back channel of the conference.
Web 2.0 Setting The Stage For Extending Our Reach: Resource Guide - kennbicknell
A resource guide that accompanies the PowerPoint presentation on Web 2.0 tools and resources delivered during a workshop for L.A. As Subject members, March 24, 2009.
A guide to the great places to find digital resources to use on your interactive whiteboard in the classroom.
As used at the Teacher2Teacher conference, Bow Island, Alberta, March 2010
Adding Value to Cultural Heritage - Olaf Janssen lecturing for the course "Di..." - Olaf Janssen
Olaf Janssen lecturing for the course "Digital Access to Cultural Heritage" at Leiden University, the Netherlands, 3-3-2011
In this humorous presentation I give an overview of the history of digital services at the National Library of the Netherlands (KB), and the internal and external problems the current KB services infrastructure faces. I present 4 different solutions to these problems. Using the BMICE-model (www.bmice.nl), I show how heritage institutions can add value to their services.
An overview of the BBC's work on exposing an API for programme metadata, as presented at XTech08. More information on the Radio Labs blog: http://www.bbc.co.uk/blogs/radiolabs/2008/05/helping_machines_play_with_pro.shtml
Thirty-minute talk given at the fourth Portuguese Open Access Meeting in Braga in late 2009. This talk draws from previous similar talks focussing on advocacy for open data and how to make it work for researchers on the ground.
Dark Matter - the dark matter of the internet is open, social, peer-to-peer... - Michael Edson
Keynote for Europeana Creative, Kulturstyrelsen - Danish Agency for Culture, Internet Librarian International (London), Southeastern Museum Conference (USA), Library of Congress Reference Forum, St. John's University Library Forum, University of Oklahoma Digital Humanities Presidential Lecture, Smith Leadership Symposium (Balboa Park, USA)...
The Dark Matter of the Internet - the dark matter of the internet is open, social, peer-to-peer and read-write... and it's the future of libraries, museums, archives, and institutions of all kinds.
Also see the essay on which this talk is based: Dark Matter - https://medium.com/@mpedson/dark-matter-a6c7430d84d1
And a video of me presenting these slides at the 2014 Southeastern Museums Conference (USA): http://youtu.be/-tdLD5rdRTQ
The lack of context that a multimedia document taken in isolation can provide hinders a proper understanding of the story being reported. International news items are a good example of this phenomenon. There is therefore a need to unveil other aspects of the story that, even if not explicitly present in the seed document, are crucial to fully capture the backstory. To deal with this problem, we propose an innovative conceptual model called the News Semantic Snapshot (NSS), designed to make explicit the wide context of a news event. Following a process called Named Entity Expansion, we query the Web to bring in other viewpoints about what is happening around us, from the thousands of news articles and posts where we could potentially find those missing story details. We also propose an innovative concentric-based approach that better spots those contextual entities by leveraging the duality between the so-called Core, which contains representative entities that are frequently mentioned in the related documents, and the entities that hold particular semantic relationships with the Core and shape the Crust around it.
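The concentric Core/Crust idea can be sketched roughly as follows. This is my own illustration, not the authors' implementation: plain mention frequency stands in for the paper's actual relevance scoring, and co-occurrence with a Core entity stands in for its semantic relationships.

```python
from collections import Counter

def core_and_crust(docs_entities, core_size=2):
    """Split entities drawn from related documents into a frequently
    mentioned Core and a co-occurring Crust.
    docs_entities: list of entity sets, one per related document.
    Frequency and co-occurrence are simplifying stand-ins for the
    paper's actual scoring and relationship extraction."""
    freq = Counter(e for doc in docs_entities for e in doc)
    core = {e for e, _ in freq.most_common(core_size)}
    # Crust: entities appearing alongside a Core entity, minus the Core.
    crust = set()
    for doc in docs_entities:
        if doc & core:
            crust |= doc - core
    return core, crust
```

For example, across three documents mentioning {CERN, LHC, Geneva}, {CERN, LHC, Higgs} and {CERN, physics}, the Core would be {CERN, LHC} and the Crust {Geneva, Higgs, physics}.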
Overview of issues and tools to ensure long-term access to scholarly content. Presented at II Seminário sobre Informação na Internet in Brasilia, 3 - 6 August 2015.
Towards more smart, connected and open audiovisual archives - Johan Oomen
As a result of the digitisation of analogue holdings and working processes, more and more material from audiovisual archives is being made available online. This marks a transformative shift, as archives and users now share the same information space. Once digital and part of an open network, objects from audiovisual archives can be shared, recommended, remixed, embedded, cited, referenced and so on. It is a far cry from several years ago, when users were obliged to visit brick-and-mortar institutions to access collections. This shift towards digital enables archives to fulfil their public missions better: crossing geographical boundaries, using new channels for content distribution, engaging with user groups and using new technologies to make work processes more efficient and allow for new access points to collections. It also introduces fundamental challenges, forcing audiovisual archives to [1] rethink their role and function in the value chain of media production and modern society at large, and [2] assess which activities and competences are vital to succeed in a digital context.
We envision future audiovisual archives to be smart, connected and open: using smart technologies to optimise workflows for annotation and content distribution; collaborating with third parties to co-design and co-develop new technologies in order to manifest themselves as frontrunners rather than followers; being connected to other sources of information (other collections, contextual sources), to a variety of often niche user communities, to researchers and to the creative industries; embracing standards defined by external bodies rather than by the cultural heritage communities themselves; and fully embracing 'open' as the default to have maximum impact in society, by applying open licences for content delivery, using open-source software and open standards wherever possible, and promoting open access to publications.
This keynote examines how the public mission of archives (i.e. supporting a myriad of users to utilise collections to learn, experience and create) can be achieved in a digital context. It addresses some of the major questions and challenges related to the role and function of institutions today, and provides practical insights into how archives can establish a culture of innovation to manage those challenges.
Connector Corner: Automate dynamic content and events by pushing a button - DianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But, if the “Reject” button is pushed, colleagues will be alerted via a Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities, spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Speakers:
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 4 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Key Trends Shaping the Future of Infrastructure - Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
GraphRAG is All You need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
5. Historically the BBC has created a series of microsites – each coherent in their own right but not across the breadth of BBC content
Radio 4 Big Bang http://www.bbc.co.uk/radio4/bigbang/
11. I can’t follow my nose, I can’t browse by meaning, from one page to the next following a semantic thread
Snickers http://www.flickr.com/photos/homer4k/386980596/
13. Linked Data has helped us build a coherent, scalable, sane service. One that we hope is a bit more human literate.
Linked Data cloud diagram http://www4.wiwiss.fu-berlin.de/bizer/pub/lod-datasets_2009-03-05_colored.png
14. Use URIs to identify things, not only documents
How it works: The Web http://flickr.com/photos/danbri/2415237566/
15. Use HTTP URIs - globally unique names that anyone can dereference
Colon Slash Slash http://www.flickr.com/photos/jeffsmallwood/299208539/
16. Provide useful information [in RDF] when someone looks up a URI
Information Desk http://www.flickr.com/photos/metropol2/149294506/
17. Include links to other URIs to let people discover related information
Links http://www.flickr.com/photos/ravages/2831688538/
18. One implication of this is that I think there’s only URIs and metadata... nothing else
Self-portraiture + metadata http://www.flickr.com/photos/saltatempo/323462998/
19. URIs are used as identifiers for real world things... like Polar Bears and Jeremy Clarkson
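Slides 15 and 16 above - dereferenceable HTTP URIs that return useful data - come down to content negotiation in practice. A minimal sketch using only the Python standard library; the DBpedia URI is simply a well-known public example of a URI for a real-world thing (a polar bear) that serves RDF when asked, not a BBC endpoint:

```python
import urllib.request

# Ask for a machine-readable representation of a real-world thing.
# dbpedia.org is used purely as a familiar public linked data URI.
req = urllib.request.Request(
    "http://dbpedia.org/resource/Polar_bear",
    headers={"Accept": "text/turtle"},  # negotiate RDF rather than HTML
)

# urllib.request.urlopen(req) would then follow the server's redirect
# from the thing's URI to a Turtle document describing it; here we only
# show how the request is constructed, to keep the sketch offline.
```

The same URI requested with `Accept: text/html` would yield a human-readable page - one identifier per concept, multiple representations.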
45. Linked Data allows loosely coupled, distributed teams to share data, share models and build on each other’s work
46. Thank you
Programmes ontology
http://www.bbc.co.uk/ontologies/programmes
Understanding the big BBC graph
http://blogs.talis.com/n2/archives/569
Music ontology
http://musicontology.com
Editor's Notes
Although I’m speaking in the semantic web strand of this conference I’m not going to talk about RDF/XML.
That’s not because I don’t think it’s important, I do, but rather because RDF is often conflated with RDF/XML and I would rather consider the model for a bit - what it means and how we’ve used it. So I guess what I really mean is that I’m going to be talking about RDF the model, not RDF the data format. If, however, that is something you are interested in, perhaps grab me after my talk, because we are publishing lots and lots of RDF/XML.
The BBC is the largest broadcasting corporation in the world.
Its mission is to enrich people's lives with programmes that inform, educate and entertain. It is a public service broadcaster, established by a Royal Charter and funded by the licence fee that is paid by UK households.
The BBC uses the income from the licence fee to provide services, including...
8 national TV channels + regional variations and programming
National TV and radio for Scotland, Wales and Northern Ireland plus 40 local radio stations
and that’s before you get to the World Service which broadcasts to the world in 32 languages.
We’ve had a web presence since 1994
What all this means is that the BBC produces an incredible range, diversity and volume of content .
This volume of content is a challenge in its own right, let alone before you consider the size of the existing archive
This size presents a number of challenges - how to organise, how to build
For starters, traditional 'left hand nav' style navigation doesn't work - neither from a UX POV, nor from a coordination and governance POV.
As a result the BBC has historically created a series of microsites. Each coherent in their own right but not across the breadth of BBC content.
Consider for example I can navigate around a Radio 4 site about the opening of the LHC... but...
I can’t find everything the BBC knows about CERN... but equally I can’t find everything about Paul Weller, or any other artist.
But things are changing..
Starting with the data and how people think about it rather than starting with the web page down. And when I say data I really mean starting with understanding what things people care about and giving each of those things a URI and returning appropriate representations...
Of course what I’m talking about is Linked Data... even if we didn’t quite realise that when we started.
But the idea that we should care about our URIs, care about having one per concept, care about having machine representations for those resources instead of a separate API has helped us build a coherent, scalable, sane service.
Linking Open Data is a grassroots project to use web technologies to expose data on the web. It is for many people synonymous with the semantic web, or worse web 3.0, a term I personally can’t stand (especially when you consider that TimBL’s original memo described a web of things).
It does, as far as I’m concerned, represent a very large subset of the semantic web project.
But what is it?
Well it can be described with 4 simple rules.
The web was designed to be a web of things, not just a web of documents.
Those documents make assertions about things in the real world but that doesn’t mean the identifiers can only be used to identify web documents.
Minting URIs for things rather than pages helps make the web more human literate because it means we are identifying those things that people care about.
The beauty of the web is its ubiquitous nature - the fact it is decentralised and able to function on any platform. This is because of TimBL’s key invention: the HTTP URI.
URIs are globally unique, open to all and decentralised.
Don’t go using DOI or any other identifier - on the web all you need is an HTTP URI.
And obviously you need to provide some information at that URI. When people dereference it you need to give them some data - ideally as RDF as well as HTML.
Providing the data as RDF means that machines can process that information for people to use. Making it more useful.
And of course you also need to provide links to other resources so people can continue their journey.
And that means contextual links to other resources elsewhere on the web, not just your site.
And that’s it. Pretty simple.
And I would argue that, other than the RDF bit, these principles should be followed for any website - they just make sense.
Including that I look like this
Was born here
That my name is this
(diff slide - my driving license is another identifier which also makes assertions about me)
Tigers look like this
Sound like this
Do these things
This has happened to them
They live here
Do have this sort of way of life (adaptations)
People care about our programme brands - they search for them, love watching them and expect the BBC to provide footage/clips of them.
And we have separate pages for every artist the BBC plays on the new music site.
And you can do the same thing for sounds, news stories, links, wikipedia etc
If you build things correctly then like lego we can stick things together to build more stuff
Information about a thing is important and it is interesting, but its interest is somewhat limited. What’s really interesting is the join - the link between things.
What programmes or clips do we have about a given species?
Clips live at /programmes but are transcluded onto other pages
Which tracks were played on a particular show - linking through to the artist pages.
Again the information about the artist ‘lives’ at /music but it’s pulled into the programme domain because
Which in turn tell you about which programmes and radio stations play that artist - with links through to the programme or station.
What probably isn’t completely obvious is that we have modeled and structured the site around those things.
So we have classes of object and relationships between them, and resources within each class. For example, a Lion is a Species, and species have defined relationships to habitats, locations, conservation statuses and adaptations.
What this means is that when we create a new species it appears on its habitat page, adaptation page, etc.
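The modelling described in these notes can be sketched with plain data classes: each species declares its habitats, and habitat pages are generated by inverting those links, so a newly created species appears on its habitat page automatically. The class and field names here are invented for illustration, not taken from the BBC's actual ontology.

```python
from dataclasses import dataclass, field

@dataclass
class Species:
    """Illustrative stand-in for the Wildlife Finder species model."""
    name: str
    habitats: list = field(default_factory=list)
    conservation_status: str = ""

def species_by_habitat(all_species):
    """Invert the species->habitat links so each habitat page can list
    its species without anyone maintaining that list by hand."""
    index = {}
    for sp in all_species:
        for habitat in sp.habitats:
            index.setdefault(habitat, []).append(sp.name)
    return index

catalogue = [
    Species("Lion", ["Savannah"], "Vulnerable"),
    Species("Tiger", ["Grassland", "Forest"], "Endangered"),
]
```

Adding a third species to `catalogue` is all it takes for it to show up on the relevant habitat listings, which is exactly the behaviour the note describes.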