The document discusses the Virtual Research Environment for Cancer Imaging (VRE-CI) project which aims to provide a framework for researchers and clinicians to share cancer imaging information, images, and algorithms. It describes using Business Connectivity Services and managed metadata to organize and search image metadata, and building a reusable SharePoint site definition to manage DICOM files and extract metadata for search. Key aspects covered include mapping folders, issues with document library names, including external code, and adapting the DICOM field model.
28. Visual Studio 2010 BCS Solution to Access Remote Image Metadata
Use case: a Unix file storage system in ROB
- Large volume of files
- Raw data containing little or no meaningful metadata
- A specific folder structure is used to indicate the image metadata, e.g. Technology, Image Acquisition Machine, Group Head, Users
- SSH access to the file server is allowed
29. Folder Structure Example
tomography/Systems/Inveon/Data/RM1/SF1/RM1_SF1_SF36_420/20100223_static_VCAM_tumour_M1_5hpi
tomography/Systems/MRI47/Data/RM1/SF1/RM1_SF1_SF36_420/20100223180102_01
31. Solution
- Shell script to populate the image metadata XML
- Use SSH to transfer the image metadata XML to the SharePoint server
- Apply Business Data Connectivity Services to connect SharePoint to the external image metadata XML
- Navigate and search the image metadata within SharePoint
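The slide describes a shell script on the Unix file server that turns the folder structure into metadata XML. Purely as an illustration of that idea (not the project's actual script), a Python sketch might walk the hierarchy and map each folder level to a metadata field. The function name, the folder-level labels (Technology, System, GroupHead, User), and the XML element names are all assumptions:

```python
import os
import xml.etree.ElementTree as ET

def folder_to_metadata_xml(root_dir: str) -> ET.Element:
    """Walk root_dir and emit one <Image> element per file, deriving
    metadata fields from the folder levels, as the slides describe."""
    images = ET.Element("Images")
    for dirpath, _dirnames, filenames in os.walk(root_dir):
        if not filenames:
            continue
        # Each folder level encodes one piece of metadata
        parts = os.path.relpath(dirpath, root_dir).split(os.sep)
        for name in filenames:
            img = ET.SubElement(images, "Image")
            ET.SubElement(img, "Path").text = os.path.join(dirpath, name)
            # Map folder levels onto illustrative metadata field names
            for level, value in zip(("Technology", "System", "GroupHead", "User"), parts):
                ET.SubElement(img, level).text = value
    return images
```

The resulting XML document is what a Business Data Connectivity external content type could then expose to SharePoint for navigation and search.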
38. Aims
- A reusable site definition for SharePoint to manage cancer-related DICOM files
- Make the metadata from inside the DICOM files visible within SharePoint, especially within Search
- Enable areas of the DICOM files to be marked for subsequent analysis
- Link with Trident for subsequent analysis
39. Visual Studio 2010
The definition is built using Visual Studio: XML, C#, SharePoint APIs, Silverlight APIs.
41. Site provisioning
SharePoint 'provisions' a new site from a site definition. This requires the site definition to perform various actions in the right order:
1. Global onet.xml. This file defines list templates for hidden lists, list base types, a default definition configuration, and modules that apply globally to the deployment.
2. SPSite-scoped features defined in the site definition's onet.xml, in the order they are defined in the file. The onet.xml file defined in the site definition can define navigational areas, list templates, document templates, configurations, modules, components, and server e-mail footers used in the site definition to which it corresponds.
3. SPSite-scoped stapled features, in quasi-random order.
4. SPWeb-scoped features defined in onet.xml, in the order they are defined in the file.
5. SPWeb-scoped stapled features, in quasi-random order.
6. List instances defined in onet.xml.
7. Modules defined in onet.xml.
[Source: Site Configurator v2: User Guide for Developers]
This is frustratingly easy to get wrong!
42. Site Configurator
To help build the definition we used the SharePoint Site Configurator Feature: http://spsiteconfigurator.codeplex.com/ (January 2011). This enabled simpler 'provisioning'.
44. Issues with document library names
- Site Configurator does not deal correctly with spaces in document library names during provisioning; best to avoid spaces.
- Thicket folders: folders ending in '_Files' and a host of other similar names are hidden/renamed by SharePoint (see KB905231). This 'feature' directly interacts with the DeepZoom technology, which uses '_files' as a suffix for a directory.
45. Including external unmanaged code
Technically tricky: avoid if at all possible.
- Map C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN and place unmanaged DLLs in it.
- Use managed code to call the unmanaged code, e.g. [DllImport("DICOM.dll")]
46. DICOM types
DICOM has approximately 3,500 field types, identified by their Tag name, which has the form (Group, Element). We adapted the IETF RFC 4122 name-based GUID algorithm (section 4.3 of http://www.ietf.org/rfc/rfc4122.txt) to map these onto SharePoint Field definitions with name-based GUIDs, and generated corresponding fields within the site definition.
There can be issues when re-loading fields into SharePoint: "it worked first time" isn't good enough! We found that the generated field form worked the second time as well. The key extra attribute is 'DisplaceOnUpgrade'.
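The project's actual implementation is the C# in the Editor's Notes; as an illustrative sketch only, the same name-based GUID scheme can be expressed compactly in Python. The namespace GUID bytes come from the slides; the function name and the use of `uuid.UUID(bytes_le=...)` (which matches the .NET `Guid(byte[])` byte order) are assumptions for illustration:

```python
import hashlib
import uuid

# Namespace GUID from the slides (596bdb90-e5ad-4952-a904-dc47da78d260),
# stored here in .NET Guid byte order, as in the original C#.
NAMESPACE_BYTES = bytes([0x90, 0xDB, 0x6B, 0x59, 0xAD, 0xE5, 0x52, 0x49,
                         0xA9, 0x04, 0xDC, 0x47, 0xDA, 0x78, 0xD2, 0x60])

def name_based_guid(group: str, element: str) -> uuid.UUID:
    """Map a DICOM tag, given as two 4-hex-digit strings (Group, Element),
    to a deterministic name-based GUID in the RFC 4122 section 4.3 style."""
    input_bytes = bytes.fromhex(group) + bytes.fromhex(element)    # 4 bytes
    digest = hashlib.sha1(NAMESPACE_BYTES + input_bytes).digest()  # 20 bytes
    guid = bytearray(digest[:16])                                  # keep first 16
    guid[7] = (guid[7] & 0x0F) | 0xA0  # version nibble, set as on the slide
    guid[8] = (guid[8] & 0x3F) | 0x80  # RFC 4122 variant bits (10xxxxxx)
    return uuid.UUID(bytes_le=bytes(guid))
```

Because the GUID is derived purely from the (Group, Element) pair, re-generating the site definition reproduces the same Field IDs every time, which is what makes the generated fields reloadable.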
47. Things we would do differently
- Use the Silverlight client object model
- Prefer 'plain old document libraries' over Document Sets until they are more mature
- Separate the DICOM field definitions from the site definition during development (they take an age to load on a development VM)
- Consider recoding some of the 'untrusted code' as trusted code
Editor's Notes
    // Based on the algorithm from section 4.3 of http://www.ietf.org/rfc/rfc4122.txt
    public static Guid NameBasedGuid(string Group, string Element)
    {
        byte[] inputBytes = new byte[4];
        inputBytes[0] = Byte.Parse(Group.Substring(0, 2), System.Globalization.NumberStyles.HexNumber);
        inputBytes[1] = Byte.Parse(Group.Substring(2, 2), System.Globalization.NumberStyles.HexNumber);
        inputBytes[2] = Byte.Parse(Element.Substring(0, 2), System.Globalization.NumberStyles.HexNumber);
        inputBytes[3] = Byte.Parse(Element.Substring(2, 2), System.Globalization.NumberStyles.HexNumber);

        // Prepare an array for the input bytes and the namespace GUID
        byte[] concatenatedBytes = new byte[16 + inputBytes.Length];

        // Copy the namespace ID GUID and the input sequence into the array
        {
            // GuidNameSpaceID = new Guid("596bdb90-e5ad-4952-a904-dc47da78d260");
            byte[] NamespaceIDBytes = new byte[] { 0x90, 0xDB, 0x6B, 0x59, 0xAD, 0xE5, 0x52, 0x49,
                                                   0xA9, 0x04, 0xDC, 0x47, 0xDA, 0x78, 0xD2, 0x60 };
            Array.Copy(NamespaceIDBytes, concatenatedBytes, 16);
            Array.Copy(inputBytes, 0, concatenatedBytes, 16, inputBytes.Length);
        }

        // Calculate the SHA1 hash of the array - this gives us 20 bytes, of which we use the first 16
        SHA1CryptoServiceProvider SHAProvider = new SHA1CryptoServiceProvider();
        byte[] hashBytes = SHAProvider.ComputeHash(concatenatedBytes);

        // Construct the GUID by copying across the first 16 bytes of the hash
        byte[] GuidData = new byte[16];
        Array.Copy(hashBytes, GuidData, 16);

        // Special treatment is required for bytes 7 and 8.
        // In byte 7 the 4 most significant bits of the time_hi_and_version field are set to
        // 1010 (small endian), marking the version of the UUID as outlined in section 4.1.3.
        // Note the RFC uses big endian for its numbers.
        byte LowerMask_7 = 15;  // 00001111
        byte UpperBits_7 = 160; // 10100000
        GuidData[7] &= LowerMask_7;
        GuidData[7] |= UpperBits_7;

        // Byte 8 has the two most significant bits (bits 6 and 7) of
        // clock_seq_hi_and_reserved set to zero and one, respectively.
        byte LowerMask_8 = 63;  // 00111111
        byte UpperBits_8 = 128; // 10000000
        GuidData[8] &= LowerMask_8;
        GuidData[8] |= UpperBits_8;

        Guid hashGuid = new Guid(GuidData);
        return hashGuid;
    }