This document discusses the Linked Data publishing framework LODSPeaKr. It allows organizations to publish Linked Data in multiple formats and build applications on top of Linked Data. LODSPeaKr makes it simple to create services, APIs, and web applications using SPARQL, HTML, and the Haanga templating language. It provides default functionality like search and entity browsing. Applications can integrate data from multiple SPARQL endpoints and visualize results.
Semantically enriching content using OpenCalais (Marius Butuc)
One of the key challenges of the Semantic Web is how to go from today's unstructured web to a web rich with semantic information. Following the bottom-up approach, OpenCalais is an automated system intended to annotate data and information, allowing us to gradually build semantically enabled systems. By semantically enriching published content, we help users enjoy their online experiences and reduce the frustration of dealing with voluminous amounts of information that is incoherently organized and often irrelevant to a particular person's needs.
Creation of visualizations based on Linked Data (Alvaro Graves)
A common task with any relatively large amount of data is to create visual representations that help users make sense of the data and observe trends that would otherwise be hard to appreciate. The creation of these visualizations usually requires some knowledge of a programming language, making it difficult for non-technically savvy users to create visualizations. In this paper we present Visualbox, a system that makes it easier for non-programmers to create web visualizations based on Linked Data. These visualizations can be accessed by any modern web browser and can be easily embedded in web pages and blogs. We describe how people can create visualizations using Visualbox and we show examples of work done by real users. Finally, we present a study that shows that Visualbox makes it easier for users to create Linked Data-based visualizations.
In this talk I will show Visualbox, a "visualization server" based on LODSPeaKr that makes it easy for non-JavaScript experts to create simple but meaningful visualizations.
This presentation provides an introduction to the semantic web for nonprofits and a vision for a "nonprofit social graph." It explains how nonprofits fit into semantic web standards like RDF, Schema.org, SPARQL, etc.
The search world is all about social graphing today. Just look at Google's quick results sidebar when you search for a local business. You see a picture of the business, ratings/reviews, hours, menu and more. Structured SEO data can help you define and shape what is shown about your site in search results.
This talk is intended to help people understand how to apply structured data to a website and then implement it with a minimum of technical skill.
This talk covers:
Why you should be using structured data
An overview of what structured data is
A dive into the Schema.org standard and how search engines expect it to be embedded in a site.
A short example of how this was used in the DukeHealth.org site
A how-to on using the Metatag and Schema.org Metatag modules to add structured data to your site.
A very quick look at how to go beyond what these can do using code.
Note: I'm not an SEO wiz who can tell you how to make your site shine, but I have learned a bit while implementing this on various sites. In other words, I may not be able to tell you what to do, but I can tell you how to do it. :)
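The embedding the talk describes is usually done with a JSON-LD block in the page's markup. A minimal sketch for a local business follows; the business name, hours, and rating values here are invented for illustration, not from the talk:

```html
<!-- Hypothetical example: Schema.org LocalBusiness markup embedded as JSON-LD -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "Example Cafe",
  "openingHours": "Mo-Fr 08:00-17:00",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.5",
    "reviewCount": "87"
  }
}
</script>
```

Search engines read blocks like this to populate rich results such as the ratings and hours shown in the sidebar mentioned above.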
xAPI Chinese CoP Monthly Meeting, Feb. 2016 (Jessie Chuang)
Topics
xAPI Vocabulary spec from ADL
Linked Data / Semantic Web / Web 3.0
Linked Data in education and content recommender
Semantic search and Google Knowledge Graph
APIs eat software (connect with partners and services)
How should we exploit data and build an intelligence layer?
Case Study (Hong Ding Educational Technology)
Monetize your data and add value (intelligence)
Achieve Federal Open Data Policy Compliance - Slides (Socrata)
The November 1, 2013 deadline for compliance with Executive Order 13642 and OMB Memorandum M-13-13 is fast approaching.
Get your questions answered and accelerate your implementation efforts. Attend a free webinar entitled: How to Achieve Open Data Policy Compliance with Socrata.
http://www.socrata.com/webinars/how-to-comply-with-the-federal-open-data-policy/
An introductory deck on the Web of Data for my team, covering Semantic Web basics, a Linked Open Data primer, and then DBpedia, the Linked Data Integration Framework (LDIF), the Common Crawl database, and Web Data Commons.
Presented at the 2011 SemTech conference.
Open government data and related services/applications are quickly growing on the Web. Although most agree that government data has great potential for solving real-world problems, there are still many challenges that must be addressed. This talk will describe several representative domain applications and provide concrete examples of the evolving technical challenges that remain. We will show solution paths that have proven useful and make recommendations on the corresponding Semantic Web best practices.
• Scalability. How can we handle (e.g., search and cleanse) the 3,000+ raw/tool datasets, and the additional 300,000+ geo datasets, from data.gov?
• Interoperability. Multi-scale open government data comes from city governments, state governments, and national governments. How can one compare the GDP of the US and China, and later link to state-level financial data? Open government data covers many domains. How can one associate open government data with domain knowledge to build a cancer prevention application?
• Provenance and quality. How should provenance be leveraged to facilitate high-quality data management interactions (e.g. reuse, mash-up and feedback) between the government and the public?
The Power of Semantic Technologies to Explore Linked Open Data (Ontotext)
A presentation by Atanas Kiryakov, Ontotext's CEO, at the first edition of Graphorum (http://graphorum2017.dataversity.net/), a new forum that taps into the growing interest in graph databases and technologies. Graphorum is co-located with the Smart Data Conference, organized by the digital publishing platform Dataversity.
The presentation demonstrates Ontotext's approach to more intelligent information gathering and analysis by:
- graphically exploring the connectivity patterns in big datasets;
- building new links between identical entities residing in different data silos;
- getting insights into what types of queries can be run against various linked datasets;
- reliably filtering information based on relationships, e.g., between people and organizations, in the news;
- demonstrating the conversion of tabular data into RDF.
Learn more at http://ontotext.com/.
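The tabular-to-RDF conversion mentioned in the last bullet can be sketched in a few lines of plain Python. The column-to-property mapping and the `example.org` URIs below are invented for illustration; they are not Ontotext's actual vocabulary:

```python
# Sketch: convert tabular rows to RDF triples in N-Triples syntax.
# The base and vocabulary URIs are hypothetical placeholders.
rows = [
    {"id": "alice", "name": "Alice", "org": "acme"},
    {"id": "bob", "name": "Bob", "org": "acme"},
]

BASE = "http://example.org/person/"
FOAF_NAME = "http://xmlns.com/foaf/0.1/name"
MEMBER_OF = "http://example.org/vocab/memberOf"

def row_to_triples(row):
    """Map one table row to N-Triples lines (subject minted from the 'id' column)."""
    s = f"<{BASE}{row['id']}>"
    return [
        f'{s} <{FOAF_NAME}> "{row["name"]}" .',
        f"{s} <{MEMBER_OF}> <http://example.org/org/{row['org']}> .",
    ]

triples = [t for row in rows for t in row_to_triples(row)]
print("\n".join(triples))
```

Each row becomes one subject URI plus one triple per mapped column; a real pipeline would add datatype handling and escaping, typically via an RDF library.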
The following was presented at the Semantic Technology Conference in March 2006 in San Jose, California. This case study examines the extension of the National Information Exchange Model (NIEM) to include K-12 education metadata. NIEM's compliance with ISO/IEC 11179 metadata standards was found to be critical for cost-effective system interoperability. This study indicates that extending NIEM can be compatible with newer RDF and OWL metadata standards. We discuss how this strategy will dramatically lower data integration costs and make longitudinal data analysis more cost-effective. We make recommendations for state education agencies, federal policy makers, and metadata standards organizations. The conclusion discusses the possible impacts of recent innovations in collaborative metadata standards efforts.
Video: https://www.youtube.com/watch?v=Rt2oHibJT4k
Technologies such as Hadoop have addressed the "Volume" problem of Big Data, and technologies such as Spark have recently addressed the "Velocity" problem, but the "Variety" problem is largely unaddressed: there is a lot of manual "data wrangling" to manage data models.
These manual processes do not scale well. Not only is the variety of data increasing; the rate of change in data definitions is also increasing. We can't keep up. NoSQL data repositories can handle storage, but we need effective models of the data to fully utilize it.
This talk will present tools and a methodology to manage Big Data Models in a rapidly changing world. This talk covers:
Creating Semantic Metadata Models of Big Data Resources
Graphical UI Tools for Big Data Models
Tools to synchronize Big Data Models and Application Code
Using NoSQL Databases, such as Amazon DynamoDB, with Big Data Models
Using Big Data Models with Hadoop, Storm, Spark, Giraph, and Inference
Using Big Data Models with Machine Learning to generate Predictive Models
Developer Collaborative/Coordination processes using Big Data Models and Git
Managing change – Big Data Models with rapidly changing Data Resources
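One way to read the "managing change" bullet: keep a machine-readable model of each resource's fields and diff incoming records against it. A minimal sketch under that assumption (the field-to-URI mappings are invented, not from the talk):

```python
# Sketch: detect drift between a semantic metadata model and incoming records.
# The vocabulary URIs are hypothetical illustration.
model = {
    "user_id": "http://example.org/vocab/userId",
    "email": "http://example.org/vocab/email",
}

def schema_drift(model, record):
    """Report fields present in the data but absent from the model, and vice versa."""
    data_fields = set(record)
    model_fields = set(model)
    return {
        "unmodeled": sorted(data_fields - model_fields),
        "missing": sorted(model_fields - data_fields),
    }

drift = schema_drift(model, {"user_id": 1, "email": "a@b.c", "signup_ts": 1699999999})
print(drift)  # {'unmodeled': ['signup_ts'], 'missing': []}
```

Running a check like this in the ingestion path is one way to notice when a rapidly changing data resource has outrun its model, rather than discovering it downstream.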
Delivering a Linked Data warehouse and realising the power of graphs (Ben Gardner)
Linklaters is one of the world's leading global law firms. The firm has a wealth of high-value information held within our systems; however, due to the nature of these systems it is not always easy to leverage this value. Our goal was to improve decision making across the firm by transforming access to, and the ability to query, our data. To do this we wanted a solution that would combine our information, was easy to extend in an iterative fashion, and would leverage our existing investment in business intelligence. To achieve this we chose to create a graph-based warehouse using Linked Data. Data from our SAP Business Warehouse was combined with flat-file and XML feeds from our systems of record and transformed into RDF via ETL services that loaded it into a triple store. To provide simple integration with our existing environment, a SPARQL-to-OData service was deployed, creating an OData-compliant endpoint. Finally, a model-driven, mobile-friendly user interface was created allowing users to query, review results and explore the underlying graph. This talk will describe the approach we took and the lessons learnt.
Presentation for ABRELATAM13 in which I discuss the need for better standards and technology for Open Data initiatives, and how technology affects the usefulness and transparency of these initiatives.
Publishing Linked Open Data in 15 minutes (Alvaro Graves)
In this presentation I will show why Linked Open Data is the best technique available for publishing government data, and how you can use LODSPeaKr, a simple kit for publishing Linked Data, to create anything from prototypes in minutes to Open Data portals, APIs and mobile webapps.
TWC LOGD: A Portal for Linking Government Data (Alvaro Graves)
Experiences from LOGD, a portal on open government data where you can find datasets, demos, tutorials, etc. It is the largest contributor to the Linked Data cloud and an important partner of the US government.
POMELo is a simple, web-based PML (Proof Markup Language) editor. Its objective is to allow users to create, edit, validate and export provenance information in the form of PML documents. The application was developed with provenance novices in mind, making it usable in various settings, from educational to scientific. Since it is a web-based application, users do not need to install or run any software aside from a normal web browser, which simplifies its adoption and makes it more attractive for inexperienced users.
1. Publishing Linked Data with LODSPeaKr
Alvaro Graves
gravea3@rpi.edu
Twitter: @alvarograves
2. What is Linked Data?
Good idea: There is lots of information available about government, geographical organizations, schools, etc. Let's publish it!
[Diagram: a government data model with States, Counties and Schools, each with attributes such as name and number of students]
3. What is Linked Data?
Even better idea: Let's publish this data in a machine-readable form and use HTTP URIs (http://...) to describe things, so others can reuse them to refer to the same things!
[Diagram: the same government data model (States, Counties, Schools), now labeled "Government Data"]
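The idea on this slide can be sketched in Turtle, one of the formats LODSPeaKr exposes. The URIs and vocabulary below are hypothetical, not from an actual government dataset:

```turtle
@prefix ex: <http://data.example.gov/vocab/> .

<http://data.example.gov/state/NY> a ex:State ;
    ex:name "New York" .

<http://data.example.gov/county/albany> a ex:County ;
    ex:name "Albany" ;
    ex:inState <http://data.example.gov/state/NY> .

<http://data.example.gov/school/s42> a ex:School ;
    ex:name "Example High" ;
    ex:numberOfStudents 731 ;
    ex:inCounty <http://data.example.gov/county/albany> .
```

Because each state, county and school has a resolvable HTTP URI, anyone else can reuse those URIs to say more about the same entities.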
4. Why Linked Data?
Now, others can refer to the same entities as me and add more data!
[Diagram: the government data model (States, Counties, Schools) linked to an international organization's data model (Countries, Categories, and Indicators with measure, value and year)]
5. Why Linked Data?
"If you want to scale up and, especially, if you want to link and integrate, then you should consider Linked Data" (José M. Alonso)
[Diagram: the combined government and international organization data models]
6. Motivation
• How can organizations publish Linked Data they create?
• How can they build applications based on Linked Data?
• How can they use other people's Linked Data?
7. LODSPeaKr
• A framework for Linked Data-driven applications
• Exposes data in multiple formats (RDF/XML, Turtle, etc.) automatically
• Create services, APIs and webapps easily
• Easy to install on most LAMP systems
• Only knowledge needed:
  • SPARQL
  • HTML
  • Haanga, a pseudocode-like template language
8. Default installation
By default LODSPeaKr provides:
• Search by label
• Navigation through entities
• Show all properties of an entity
• Data in RDFa, RDF/XML and more
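The "search by label" feature presumably boils down to a SPARQL query over `rdfs:label`. A hedged sketch of what such a query could look like (this is an assumption for illustration, not LODSPeaKr's actual internal query):

```sparql
SELECT DISTINCT ?entity ?label
WHERE {
  ?entity rdfs:label ?label .
  FILTER regex(str(?label), "albany", "i")
}
LIMIT 20
```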
13. Building Linked Data apps: Use of Components
Two main components (among others):
• Types
  • Define how information is displayed for entities of the same type (persons, countries)
• Services
  • Create services for aggregated information
14. Building applications: SPARQL + HTML + Haanga

main.query (SPARQL):

SELECT DISTINCT ?person ?personName
WHERE {
  ?person a foaf:Person ;
          foaf:name ?personName .
}
ORDER BY ?personName

html.template (HTML + Haanga):

<ul>
{% for row in models.main %}
  <li>
    <a href="{{row.person.value}}">
      {{row.personName.value}}
    </a>
  </li>
{% endfor %}
</ul>
15. A Simple JSON API
Adding a JSON interface is as simple as adding a new template:

json.template:

{"people": [
{% for row in models.main %}
  {
    "uri": "{{row.person.value}}",
    "name": "{{row.personName.value}}"
  }
{% endfor %}
]}
16. Filters
• Filters in Haanga allow you to process data before presenting it to the user:

{% for i in models.main %}
{{i.personName.value|upper}}
{% endfor %}

"john smith" => "JOHN SMITH"

• LODSPeaKr also provides visualizations based on filters:

{{models.main|GoogleVizColumnChart:"xvar,yvar"}}

[Diagram: the data rendered as a column chart]
17. Query workflows
LODSPeaKr can query multiple endpoints, using results already obtained to specify new queries.
[Diagram: LODSPeaKr querying a single SPARQL endpoint]
18. Query workflows
[Diagram: LODSPeaKr querying two SPARQL endpoints, feeding results from one into the next]
19. Query workflows
[Diagram: LODSPeaKr querying three SPARQL endpoints in a chained workflow]
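The workflow in slides 17-19 (query one endpoint, then feed its bindings into the next query) can be sketched without LODSPeaKr itself. Here earlier results are injected as a SPARQL VALUES clause; the URIs and the query text are illustrative assumptions, and the HTTP calls are left out:

```python
# Sketch: chain SPARQL queries by injecting earlier results via a VALUES clause.
# In practice each query would be POSTed to its endpoint over HTTP.
def follow_up_query(person_uris):
    """Build a second query scoped to the URIs returned by a first query."""
    values = " ".join(f"<{uri}>" for uri in person_uris)
    return (
        "SELECT ?person ?homepage WHERE {\n"
        f"  VALUES ?person {{ {values} }}\n"
        "  ?person <http://xmlns.com/foaf/0.1/homepage> ?homepage .\n"
        "}"
    )

# Pretend these bindings came back from the first endpoint:
first_results = ["http://example.org/person/alice", "http://example.org/person/bob"]
q2 = follow_up_query(first_results)
print(q2)
```

The second endpoint never needs to know about the first; the chaining lives entirely in how the follow-up query is constructed.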
20. Conclusion
• LODSPeaKr is a powerful tool for building webapps based on Linked Data
• It makes it really simple to publish 5-star Linked Data
• LODSPeaKr makes it easier to integrate external data and create mashups, simplifying the work for developers
23. Components
It is possible to define a set of queries and templates for each type of thing (people, countries):

$ utils/lodspk.sh create type foaf:Person

File directory:

components/
 |
 ->types/
    |
    ->foaf:Person/
       |
       ->html.template
       |
       ->queries/
          |
          ->main.query
          |
          ->query2.query
          |
          ->query3.query
24. Components
It is possible to define a set of queries and templates for a service (http://mysite.com/myService):

$ utils/lodspk.sh create service myService

File directory:

components/
 |
 ->service/
    |
    ->myService/
       |
       ->html.template
       |
       ->queries/
          |
          ->main.query
          |
          ->query2.query
          |
          ->query3.query