FileMaker is not a suitable option for the gallery's data-sharing needs because of its limitations in integrating with SQL databases and in running consistently across platforms such as Macs. While FileMaker's External SQL Sources feature allows connections to SQL databases, it has significant limitations: value lists cannot be based on external data, date/time data entry is restricted, external data can become out of date, binary data is not supported, scrolling through large record sets is slow, and sorting is not performed in the database. FileMaker is primarily designed for smaller, single-platform solutions rather than for scaling beyond a purely FileMaker-based solution.
Oracle Warehouse Builder is Oracle's tool for designing, deploying, and managing business intelligence and data integration projects on the Oracle database. It provides a graphical environment to extract, transform, and load data from various sources into an Oracle data warehouse or data marts. Warehouse Builder manages the full lifecycle of metadata and data, and enables users to design and deploy ETL processes, build reporting infrastructure, and manage the target schema.
Trends and issues impacting database management systems circa 2004 included increasing complexity, lack of resources, and rapid changes in technology. New database management system versions were being released frequently with new features enabled for the internet and real-time usage. Emerging technologies like Java, .NET, and XML were becoming more widely adopted and database systems were taking on additional functionality beyond traditional querying and storage. The internet was driving changes requiring database administrators to have new skills to support increasingly complex enterprise infrastructure and applications.
Cloud computing, big data, and mobile technologies are driving major changes in the IT world. Cloud computing provides scalable computing resources over the internet. Big data involves extremely large data sets that are analyzed to reveal business insights. Hadoop is an open-source software framework that allows distributed processing of big data across commodity hardware. It includes tools like HDFS for storage and MapReduce for distributed computing. The Hadoop ecosystem also includes additional tools for tasks like data integration, analytics, workflow management, and more. These emerging technologies are changing how businesses use and analyze data.
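The MapReduce model mentioned above can be illustrated with a toy, single-process word count; real Hadoop distributes the map and reduce phases across a cluster, and all names here are illustrative:

```python
from collections import defaultdict

def map_phase(documents):
    """Map step: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word.lower(), 1)

def reduce_phase(pairs):
    """Reduce step: sum the counts for each distinct key."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["big data big insights", "data drives decisions"]
result = reduce_phase(map_phase(docs))
print(result["big"], result["data"])  # 2 2
```

In Hadoop proper, the map output is partitioned and shuffled across nodes before the reduce step, which is what lets the same two functions scale to data sets far larger than one machine's memory.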
Paris NoSQL User Group - In Memory Data Grids in Action (without transactions) - Cyrille Le Clerc
In Memory Data Grids in Action with Oracle Coherence, presented to NoSQL users.
The "transactions" chapter is missing as it has been rescheduled to another session.
Application Grid: Platform for Virtualization and Consolidation of your Java Applications - Bob Rhubart
This document discusses how organizations can consolidate and virtualize their Java applications. It notes trends toward consolidation, standardization, security compliance, and doing more with less. It states that 8 out of 10 IT dollars are "dead money" spent on maintenance. Shared services can help businesses reduce costs and complexity while improving productivity and customer service-level agreements. The document outlines different levels of consolidation and how companies have achieved significant cost savings through consolidation. It introduces the Oracle Application Grid as a way to provision and monitor shared application infrastructure using technologies like Oracle Coherence, WebLogic Server, and Oracle Fusion Middleware to improve efficiency and competitiveness and simplify IT environments.
HBase is a distributed, scalable, big data store that provides fast lookup capabilities like Google BigTable. It uses a table-like data structure with rows indexed by a key and stores data in columns grouped by families. HBase is designed to operate on top of Hadoop HDFS for scalability and high availability. It allows for fast lookups, full table scans, and range scans across large datasets distributed across clusters of commodity servers.
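The HBase data model described here (rows indexed by a sorted key, columns grouped into families) can be sketched with plain dictionaries; this is an illustration of the layout, not a client for a real cluster:

```python
# Toy model of HBase's storage layout: a table maps a row key to
# column families, and each family maps column qualifiers to values.
table = {}

def put(row_key, family, qualifier, value):
    table.setdefault(row_key, {}).setdefault(family, {})[qualifier] = value

def get(row_key, family, qualifier):
    return table[row_key][family][qualifier]

def range_scan(start, stop):
    """Rows are kept sorted by key, so a range scan is a key-ordered slice."""
    return [(k, table[k]) for k in sorted(table) if start <= k < stop]

put("user#001", "info", "name", "Ada")
put("user#002", "info", "name", "Ben")
put("user#003", "info", "name", "Cal")

print(get("user#002", "info", "name"))                    # Ben
print([k for k, _ in range_scan("user#001", "user#003")])  # two keys
```

Because row keys are kept sorted, point lookups and range scans are both cheap, which is exactly the access pattern the summary describes; real clients expose a similar put/get/scan interface over an actual cluster.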
This document discusses tuning PowerCenter for performance. It outlines steps to measure performance, determine bottlenecks, and make targeted changes. Key aspects of the PowerCenter architecture like the engine, memory usage, and threading model are explained. Common bottlenecks like targets, sources, and mappings are described along with solutions like indexing, filtering, and transformation optimization.
This document summarizes a presentation on data integration given by Shawn D'Souza in October 2012. It discusses the definition of data integration, the need for it due to multiple data sources, and challenges in integrating data. Approaches to data integration include manual integration, common user interfaces, integration by applications/middleware, uniform data access, and common data storage. The presentation also provides examples of data integration patterns and architectures. It concludes by discussing the importance of data integration and ways to improve it.
This document discusses distributed data warehouses and online analytical processing (OLAP). It begins by describing different data warehouse architectures like enterprise data warehouses, data marts, and distributed enterprise data warehouses. It then outlines challenges for achieving performance in distributed OLAP systems, including dynamically managing aggregates, using partial aggregates, allocating data and balancing loads. The document proposes techniques like redundancy and patchworking queries across sites to optimize distributed querying.
Introducing Open XDX Technology for Open Data API development - Bizagi Inc
Introduction to the concepts of Open-XDX for building Open Data APIs using the CAMeditor toolkit. See also http://www.verifyXML.org for a working online demonstration site.
Social network architecture - Part 3. Big data - Machine learning - Phu Luong Trong
This document provides an overview of big data architecture and machine learning. It discusses:
1. The core components of a social network architecture including user data storage, activity systems, notifications, and external integration.
2. Definitions of big data from various sources focusing on the volume, velocity, and variety of large and complex data sets.
3. How machine learning is used for data analysis, applications like weather forecasting and search, and algorithms like supervised learning and decision trees.
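As a minimal illustration of the supervised learning mentioned in the third point, here is a hypothetical one-feature decision stump, the building block from which decision trees are grown (the data and threshold rule are invented for the example):

```python
def train_stump(samples, labels):
    """Fit a one-feature decision stump: pick the threshold on a single
    numeric feature that misclassifies the fewest training samples."""
    best = None
    for threshold in samples:
        predictions = [1 if x >= threshold else 0 for x in samples]
        errors = sum(p != y for p, y in zip(predictions, labels))
        if best is None or errors < best[1]:
            best = (threshold, errors)
    return best[0]

# Toy training set: temperatures labeled 1 ("hot") or 0 ("not hot").
temps = [12, 18, 25, 31, 35]
labels = [0, 0, 1, 1, 1]
threshold = train_stump(temps, labels)
print(1 if 28 >= threshold else 0)  # predicts "hot" for 28 degrees
```

A decision tree applies this kind of split recursively, choosing a new feature and threshold at each node.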
The document provides an introduction to IBM's DB2 Advanced Enterprise Server Edition (AESE). It outlines several key features included in AESE such as compression, workload management tools, federation support between DB2 and Oracle databases, high availability options including Q-Replication, and label-based access control security. Pricing and packaging information is also presented, showing AESE is priced at $450 per Value Unit and includes many features that were previously priced separately or only available in higher editions of DB2.
Why Every NoSQL Deployment Should Be Paired with Hadoop Webinar - Cloudera, Inc.
This document discusses how NoSQL databases are well-suited for interactive web applications with large audiences due to their ability to scale out horizontally, while Hadoop is well-suited for analyzing large volumes of data. It provides examples of how NoSQL and Hadoop can work together, with NoSQL serving as a low-latency data store and Hadoop performing batch analysis on the large volumes of data generated by web applications and their users. The document argues that NoSQL and Hadoop address different but complementary challenges and are highly synergistic when used together.
SharePoint Performance - Tales from the Field - Chris McNulty
This document provides an overview of best practices for optimizing SharePoint performance from the field. It discusses server farm architecture and design considerations, including sizing recommendations for single server, medium, and large farms. It also covers installation, post-installation configuration, monitoring, optimization techniques, patching processes, and SQL maintenance best practices. The goal is to help organizations implement SharePoint in a high performance manner based on real-world experience.
Track 2, Session 2, world's most powerful intelligent and trusted storage system - EMC Forum India
The document discusses the challenges faced by service providers with aging storage infrastructure and the requirements for a new storage solution. It then summarizes the EMC solution of using Symmetrix VMAX for scalability and security, FAST VP for automated tiering, and flash drives for performance. Key benefits of VMAX and FAST VP that meet the customer's requirements are high scalability through a scale-out architecture and optimization of storage use and costs through automated tiering.
This document provides information about Database Architechs, a consulting firm that specializes in database architecture, design, and management. It summarizes the company's services, team, and experience. The company has expertise in all major database platforms and provides services such as database design, performance tuning, data integration, high availability, and education/training. It has worked with large companies across many industries to implement and optimize their database solutions.
The document appears to be a presentation covering various topics related to database management systems (DBMS). It includes sections on NoSQL databases, database appliances, data center rationalization (DCR), and database administration staffing ratios. Several slides discuss Brewer's CAP theorem and how it relates to database consistency models. The presentation provides an overview of different NoSQL database types and discusses some pros and cons of NoSQL databases compared to traditional SQL databases.
This document discusses analytics on Hadoop. It provides an overview of Hadoop, including its origins in Google's papers on MapReduce and how it provides scalable storage and distributed processing. The key benefit of Hadoop is that it can handle large, growing amounts of structured and unstructured data in a cost-effective manner. Examples are given of how a retailer could use Hadoop to analyze web logs and customer data to gain insights such as customer locations and behaviors.
SAP BPC NW 10.0 master data load from BPC to BW - Cloneskills
This document provides steps to load master data (attributes and texts) from flat files into SAP BW (InfoObjects) and then from BW into SAP BPC 10.0 NW dimensions. It involves two parts:
Part I loads master data into BW InfoObjects from flat files. This includes creating a source system, application component, data source, transformation, and InfoPackage to load attributes and texts from flat files into BW.
Part II loads the BW data loaded in Part I into BPC dimensions using the Data Manager package.
The document assumes familiarity with the BPC and BW user interfaces and that required dimensions and models are already created.
Self-Service Access and Exploration of Big Data - Inside Analysis
The Briefing Room with Robin Bloor and Cirro
Live Webcast on Dec. 11, 2012
As the information landscape expands with all kinds of Big Data, businesses are searching for ways to unite their traditional analytics with this new source of insight. One ambitious approach involves federating access to multiple data sources, even across various operating systems. The idea is to take analytic processing to the data, then intelligently assemble the results for a business user. Could this be the long-awaited alternative to data virtualization?
Check out this episode of The Briefing Room to hear veteran analyst Robin Bloor explain how federated access to data sources can pave the way for a truly integrated data fabric. Bloor will be briefed by Mark Theissen of Cirro, who will tout his company's patent-pending Data Hub, which simplifies data access by federating queries across multiple sources of structured, semi-structured, and unstructured data. He'll discuss Cirro's cost-based optimizer, smart caching, dynamic query plan re-optimization, normalization of cost estimates, and a metadata repository for unstructured data sources.
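The federated-query idea described here, pushing the same query down to each source and assembling the results, can be sketched with two independent SQLite databases standing in for heterogeneous sources (a toy illustration under invented data, not Cirro's implementation):

```python
import sqlite3

# Two independent "sources": in a real federation these could be
# different engines on different hosts; here both are in-memory SQLite.
east = sqlite3.connect(":memory:")
west = sqlite3.connect(":memory:")
for db, rows in ((east, [("NYC", 120), ("Boston", 80)]),
                 (west, [("LA", 150), ("Seattle", 60)])):
    db.execute("CREATE TABLE sales (city TEXT, amount INTEGER)")
    db.executemany("INSERT INTO sales VALUES (?, ?)", rows)

def federated_query(sql, sources):
    """Push the same query down to each source, then assemble the results."""
    out = []
    for src in sources:
        out.extend(src.execute(sql).fetchall())
    return out

rows = federated_query("SELECT city, amount FROM sales WHERE amount > 70",
                       [east, west])
print(sorted(rows))  # [('Boston', 80), ('LA', 150), ('NYC', 120)]
```

A production federator would add the cost-based optimization and caching the briefing mentions; this sketch only shows the push-down-and-assemble step.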
Visit: http://www.insideanalysis.com
Diagnosability versus The Cloud, Redwood Shores 2011-08-30 - Cary Millsap
In our increasingly virtualized environments, it's ever more difficult to diagnose application defects—especially performance defects that affect response time or throughput expectations. Runtime diagnosis of defects can be an unbearably complicated problem to solve once the application is sealed up and put into production use. But having excellent runtime diagnostics is surprisingly easy if you design the diagnostic features into the application from its inception, as it is being grown, like you would with any other desired application feature.
This document provides guidelines for handling art works, including receiving, documenting, packing, and unpacking art works. It discusses appropriate wrapping materials like Tyvek, glassine, acid-free tissue, and bubble wrap. It also covers hanging mechanisms, condition reports, basic electrical safety, and audiovisual setups. Proper planning and a well-stocked toolkit are emphasized for safely transporting, storing, and displaying art works.
Strategy for Optimal Documentation of Museum Objects - Daniel Pletinckx
This document discusses strategies for optimally documenting museum objects using 3D digitization. It recommends a three stage approach: 1) Create image-based visualizations using multiple photos from different angles. 2) Generate draft 3D models from the images when needed. 3) For specific goals, create high-end 3D models using specialized equipment and techniques. Each stage produces digital assets suitable for different uses like online viewing, research, or 3D printing. The document provides examples of digital documentation and interactive applications created for various museum objects.
Donna Williams - The Met's Multicultural Audience Development Initiative - CitiesTelAviv
The Multicultural Audience Development Initiative (MADI) at the Metropolitan Museum of Art aims to increase awareness of the museum's global collections, create relationships with diverse New York communities, and diversify visitorship and membership. MADI collaborates with arts organizations, multicultural organizations, and museum departments. It engages with communities through media outreach, events celebrating various cultures, and educational programs. Upcoming MADI events include celebrations for Diwali, Veterans Day, and Martin Luther King Jr. Day.
Object Report for Managing Collections & Heritage Sites Unit (MMHS, Sydney Uni) - Antony Skinner
This document is an object report for a drawing by Russell Drysdale that is being considered for acquisition by the Art Gallery of New South Wales (AGNSW). It provides details about the drawing, including its description, provenance, condition, and an assessment of its significance. The report finds that the drawing was a gift from Drysdale to the author's grandfather. It establishes clear and undisputed ownership. A condition report finds the drawing to be in good condition. An analysis of the AGNSW acquisition policy and the drawing's significance suggests it would be a suitable addition to the gallery's collection.
The document contains information about Harrison Ford's roles as an actor. It provides details about some of the characters he has played, including Han Solo in Star Wars and Duke Nukem in a video game. Metadata is added to clarify the meaning of different terms and relationships. An RDF graph is created to represent the relationships between Ford, the characters he played, and other entities in a structured way.
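The RDF graph described here boils down to subject-predicate-object triples. A minimal hand-rolled triple store (illustrative names rather than real URIs; a real application would use a library such as rdflib) might look like:

```python
# Subject-predicate-object triples mirroring the relationships the
# document describes between Ford, his characters, and other entities.
triples = [
    ("HarrisonFord", "playedRole", "HanSolo"),
    ("HanSolo", "appearsIn", "StarWars"),
    ("HarrisonFord", "playedRole", "DukeNukem"),
    ("DukeNukem", "appearsIn", "VideoGame"),
    ("HarrisonFord", "type", "Actor"),
]

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the given pattern; None acts as a wildcard."""
    return [(s, p, o) for s, p, o in triples
            if subject in (None, s)
            and predicate in (None, p)
            and obj in (None, o)]

# All roles Harrison Ford played:
print([o for _, _, o in query("HarrisonFord", "playedRole")])
# ['HanSolo', 'DukeNukem']
```

The pattern-matching query is the essence of SPARQL's triple patterns: fix some positions, leave others as variables, and collect every triple that fits.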
This document discusses best practices for storing and transporting art objects. It describes condition reports, guidelines for various storage methods like shelving, racks and cabinets, and considerations for transportation both within and outside of museums. Optimal storage provides physical security and environmental protection while allowing access. Transportation requires cushioning objects from vibration, shock and damage. The goal is to preserve the condition of art objects over their lifetime using appropriate handling and storage.
This curriculum vitae provides personal and professional information about Gianluigi Negroni. It summarizes his education, languages, membership in professional associations, areas of expertise, field experience in over 80 countries, short term consultancy projects, and participation in meetings and seminars. His education includes an MSc in Animal Science from the University of Bologna, and post-graduate training in topics like aquaculture, agriculture, and renewable energy. His professional expertise is extensive and includes areas like aquaculture management, feed production, project management, and more.
This document provides information about Teka's HR-550 oven, including its features and specifications. It describes the oven's different functions, such as grill, conventional heating, forced convection, and defrosting. It also explains how to set the clock, operate the oven manually, and program the cooking time.
El Schnauzer es una raza de perro originaria de Alemania. Existen tres variedades según el tamaño: miniatura, estándar y gigante. Son perros activos, inteligentes y cariñosos que se adaptan bien como mascotas familiares. Requieren cepillado frecuente de su abundante pelo y una dieta balanceada para evitar problemas digestivos.
10 grandes citas para Emprendedores. 1a Sesión de Coaching GratisDilmerAlvarado
http://www.dilmeralvarado.com/coaching-2/sesion-de-coaching-gratuita/ 1a Sesión de Coaching Gratis! Te presento una selección de 10 grandes citas para emprendedores. Coaching Presencial, Coaching telefónico, Coaching por internet, Coaching por Skype. Coaching Presencial en Barcelona i Girona. Coaching para emprendedores
SAP es una empresa alemana fundada en 1972 que desarrolla software de gestión empresarial. Sus principales productos son SAP ERP, que ayuda con la planificación de recursos, y soluciones para gestión de clientes, cadena de suministro, productos y proveedores. SAP se ha convertido en líder mundial en software empresarial y atiende a más de 100,000 empresas en todo el mundo.
La última mitad de 2014 y lo que llevamos de 2015 ha sido el año de la consolidación de la
jurisprudencia del Tribunal Supremo en las grandes materias introducidas por la reforma laboral de
2012 y un año de grandes sentencias judiciales.
En este seminario, haremos un repaso intenso a lo que nuestros Tribunales han comentado acerca de
las normas laborales vigentes y su aplicación práctica en la empresa, desde la polémica derivada de
cómo deben actuar las mismas ante la finalización del periodo máximo de ultraactividad de los
convenios colectivos, hasta el más reciente cuestionamiento de la licitud de la legislación española
respecto de los umbrales del despido colectivo.
La tecnología educativa se define como la aplicación sistemática de procesos de enseñanza y aprendizaje que tienen en cuenta los recursos humanos y técnicos, así como las interacciones entre ellos, con el fin de lograr una educación más efectiva. Surge en la década de 1940 en Estados Unidos para capacitar militares de manera eficiente y ha evolucionado desde un enfoque conductista y técnico-racional hacia el estudio de procesos educativos mediados tecnológicamente en distintos contextos.
Revista MOTORSPOT RACING, semanal, gratuita, dedicada al mundo de la competición (motorsport). Todas las categorías: Fórmula 1, F1, MotoGP, Rallys, WRC, Le Mans, Dakar, Nascar, SBK, etc
El documento resume las principales normativas acústicas vigentes en Chile, incluyendo normas sobre ruido de fuentes fijas, lugares de trabajo, viviendas y vibraciones. También describe conceptos como escalas de niveles de ruido, instrumentos de medición, condiciones de medición, límites de exposición al ruido y percepción de vibraciones.
El documento discute el uso generalizado de teléfonos móviles entre los jóvenes y cómo esto está transformando la cultura juvenil. Los jóvenes adoptan rápidamente las tecnologías móviles y las usan intensamente en su vida diaria, desarrollando nuevos usos. Esto los conecta en redes y fomenta una cultura participativa basada en la creación e intercambio de contenidos. Sin embargo, el acceso depende del poder adquisitivo y la supervisión familiar sigue siendo importante.
Este documento ofrece consejos para evitar la acumulación de secreciones en las vías respiratorias de pacientes gravemente afectados. Recomienda evitar cambios bruscos de temperatura, consumir vitaminas C y frutas de temporada, mantener hábitos de higiene como lavarse las manos frecuentemente, estornudar en el codo o pañuelos desechables, no fumar y visitar regularmente al médico.
This document provides an overview of relational database design for geographic information systems (GIS). It discusses how GIS databases can be designed using a relational model with spatial data stored in tables along with associated attribute data for efficient management and analysis. The key aspects covered include normalization of tables, use of primary and foreign keys to link features to their attributes, and queries using SQL to access both spatial and non-spatial data together. Maintaining data integrity and relationships between features and attributes is also emphasized.
Data Lakehouse, Data Mesh, and Data Fabric (r2)James Serra
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean and how do they compare to a modern data warehouse? In this session I’ll cover all of them in detail and compare the pros and cons of each. They all may sound great in theory, but I'll dig into the concerns you need to be aware of before taking the plunge. I’ll also include use cases so you can see what approach will work best for your big data needs. And I'll discuss Microsoft version of the data mesh.
Data Management - Full Stack Deep LearningSergey Karayev
This document discusses data management for deep learning projects. It covers five main topics: sources of data, labeling data, data storage, data versioning, and data processing. For data sources, it describes obtaining publicly available datasets, collecting and labeling proprietary data, and techniques for data augmentation. For labeling data, it discusses interfaces for annotators, sources of labor like outsourcing, and labeling software. For storage, it outlines options for files, objects, databases, and data lakes. It describes different levels of data versioning from unversioned to specialized solutions. And it proposes using workflows and schedulers like Airflow to automate multi-step data processing tasks.
Hadoop in the Enterprise - Dr. Amr Awadallah @ Microstrategy World 2011Cloudera, Inc.
- Apache Hadoop is an open-source software framework for distributed storage and processing of large datasets across clusters of commodity hardware.
- Cloudera's Data Operating System (CDH) is an enterprise-grade distribution of Apache Hadoop that includes additional components for management, security, and integration with existing systems.
- CDH enables enterprises to leverage Hadoop for data agility, consolidation of structured and unstructured data sources, complex data processing using various programming languages, and economical storage of data regardless of type or size.
Data Lakehouse, Data Mesh, and Data Fabric (r1)James Serra
So many buzzwords of late: Data Lakehouse, Data Mesh, and Data Fabric. What do all these terms mean and how do they compare to a data warehouse? In this session I’ll cover all of them in detail and compare the pros and cons of each. I’ll include use cases so you can see what approach will work best for your big data needs.
The DataFinder is a data management application that supports organizing, describing, and automating access to large datasets produced during experiments and stored across grids and clouds. It provides a unified interface for various backend data stores, allowing easy management and transfer of data between grid and cloud resources. The DataFinder supports various backends including webDAV, FTP, local file systems, cloud storage services like Amazon S3, and gridFTP servers, giving users flexibility in storing data. It helps researchers and small companies archive and access technical and scientific data generated by simulations distributed across computational resources.
Hadoop - Architectural road map for Hadoop Ecosystemnallagangus
This document provides an overview of an architectural roadmap for implementing a Hadoop ecosystem. It begins with definitions of big data and Hadoop's history. It then describes the core components of Hadoop, including HDFS, MapReduce, YARN, and ecosystem tools for abstraction, data ingestion, real-time access, workflow, and analytics. Finally, it discusses security enhancements that have been added to Hadoop as it has become more mainstream.
This document provides an introduction to GraphTO, a graph database conference. It discusses who the organizers are and provides an introduction to graph databases and concepts. It highlights how graph databases are better suited than SQL for complex, connected data and provides examples of querying and visualizing graph data using technologies like Neo4j, Cypher, and SPARQL. Finally, it discusses loading patent grant data from XML into a graph and available resources for working with graph databases.
Join Objectivity, Inc.’s VP of Product Management, Brian Clark, in a discussion of the latest trends in Big Data Analytics, defining what is Big Data and understanding how to maximize your existing architectures by utilizing NOSQL technologies to improve functionality and provide real-time results. There will be a focus on relationship analytics as well as an introduction to NOSQL data stores, object and graph databases, such as the architecture behind Objectivity/DB and InfiniteGraph.
Shaping the Role of a Data Lake in a Modern Data Fabric ArchitectureDenodo
Watch full webinar here:
Data lakes have been both praised and loathed. They can be incredibly useful to an organization, but it can also be the source of major headaches. Its ease to scale storage with minimal cost has opened the door to many new solutions, but also to a proliferation of runaway objects that have coined the term data swamp.
However, the addition of an MPP engine, based on Presto, to Denodo’s logical layer can change the way you think about the role of the data lake in your overall data strategy.
Watch on-demand this session to learn:
- The new MPP capabilities that Denodo includes
- How to use them to your advantage to improve security and governance of your lake
- New scenarios and solutions where your data fabric strategy can evolve
The document discusses an Informix Warehouse Accelerator that is designed to accelerate select queries for data warehouses running on Informix Database Server. It uses breakthrough technologies like extreme data compression, row and columnar storage formats, and in-memory databases to provide unprecedented query response times in an appliance-like package. The accelerator is integrated with and transparent to the Informix Database Server, offloading analytics workloads to improve performance and reduce the need for database tuning tasks.
This document discusses web data extraction and analysis using Hadoop. It begins by explaining that web data extraction involves collecting data from websites using tools like web scrapers or crawlers. Next, it describes that the data extracted is often large in volume and requires processing tools like Hadoop for analysis. The document then provides details about using MapReduce on Hadoop to analyze web data in a parallel and distributed manner by breaking the analysis into mapping and reducing phases.
Is cloud and NDT a good mix? NDT has its own specificity. Clouds can truly simplify the file management, but is any cloud solution adapted for the NDT? For example, Dropbox may not work right out of the box for our market. This presentation highlights different avenues about clouds (IaaS, PaaS, and SaaS); and highlights NDT critical requirements (constraints and needs). A list of different levels of cloud services (component, option, security, ...) will be defined. It is important to remember that private and public servers are 2 possible avenues. NDT was an early user of private servers even before it was called a cloud. Overall the main idea is to optimize the operation process to reduce OPEX and to increase availability and accuracy of data.
See: www.amotus-solutions.com or www.nubitus.com
ADV Slides: When and How Data Lakes Fit into a Modern Data ArchitectureDATAVERSITY
Whether to take data ingestion cycles off the ETL tool and the data warehouse or to facilitate competitive Data Science and building algorithms in the organization, the data lake – a place for unmodeled and vast data – will be provisioned widely in 2020.
Though it doesn’t have to be complicated, the data lake has a few key design points that are critical, and it does need to follow some principles for success. Avoid building the data swamp, but not the data lake! The tool ecosystem is building up around the data lake and soon many will have a robust lake and data warehouse. We will discuss policy to keep them straight, send data to its best platform, and keep users’ confidence up in their data platforms.
Data lakes will be built in cloud object storage. We’ll discuss the options there as well.
Get this data point for your data lake journey.
How Apache Hadoop is Revolutionizing Business Intelligence and Data Analytics...Amr Awadallah
Apache Hadoop is revolutionizing business intelligence and data analytics by providing a scalable and fault-tolerant distributed system for data storage and processing. It allows businesses to explore raw data at scale, perform complex analytics, and keep data alive for long-term analysis. Hadoop provides agility through flexible schemas and the ability to store any data and run any analysis. It offers scalability from terabytes to petabytes and consolidation by enabling data sharing across silos.
This document provides an overview of the SESAM project, which aims to increase the usage and quality of an archive system for an energy company by automatically enriching document metadata and connecting documents to structured business data. It describes how metadata is extracted from source systems into a triple store using separate ontologies for each system. Documents can then be searched across systems and metadata can be translated between them. When archiving documents, additional metadata is automatically attached based on information from the triple store.
This document provides an introduction to a course on data science. It outlines the course objectives, which are to recognize key concepts in extraction, transformation and loading of data, and to complete a sample project in Hadoop. It also lists the expected course outcome, which is for students to recognize technologies for handling big data. The document then provides a chapter index and overview of topics to be covered, including distributed and parallel computing for big data, big data technologies, cloud computing, in-memory technologies, and big data techniques.
Similar to Creating an RAD Authoratative Data Environment (20)
3. Origins of data sharing architecture

Initiative (November 2007):

"I need a database to share education programs, and I have a Mac."
"Greg, can Art help Kim?"
"Art, help Kim?"
"Hmm, Filemaker is not SQL compliant, and MS Access does not run on the Mac. We need something better."
"O.K., what should we do?"
4. 3 Areas of Data Integration

• Collect
– Bring data into a structured format
• Share
– Define and reference authoritative data
• Disseminate
– Make data available to users

[Slide diagram: a pipeline from Data Collection through Data Sharing to Dissemination, with Gallery source systems (IRIS, TMS, Raisers Edge, IRIMS, COE, FMS, AI, Vista, Empl) feeding a shared Gallery Data store that in turn feeds reports such as bar graphs.]
5. Questions this presentation tries to answer

• What problem is being addressed by a web database?
• When should Filemaker or MSAccess be used to address this problem?
• When should Metastorm be used to address this problem?
• Why was PHPRunner selected for the web database front end?
• How is the PHPRunner architecture to fulfill the requirement that there be little to no application code in the implementation, allowing it to be easily ported to a new application (DCLPA used as an example)?

[Slide diagram: the Gallery systems (IRIS, TMS, Raisers Edge, IRIMS, COE, FMS, AI, Vista, Empl) feeding the shared Gallery Data store.]
6. Types of Gallery Data sharing

• Word docs, Excel – one user at a time, not shared.
• MSAccess, Filemaker – isolated, non-integrated (stand-alone) database shared between a handful of users; highly customized.
• Departmental systems integrated to an authoritative source – [missing gap]: the need for customized department systems that reference authoritative data, shared with many users across departments, customizable.
• Workflow (IRIS), Raisers Edge, TMS – systems shared by many users; expensive, little customization.
• Internet / Intranet – enterprise-wide and beyond.
7. Database?

• The gallery has two definitions of a database:
– MSAccess / Filemaker (single platform)
– SQL Server / Postgres (multiple platform)
8. Pluses and Minuses of COTS

• COTS packages are a very inexpensive solution. However:
– They frequently do not have everything you need, and you cannot easily modify or extend them.
– They will not follow any naming standards based on your organization.
– The governance of the code, logic and schema is outside of the client's control (although the actual data is in their control).
– They cannot alone meet the Gallery's data needs; they need to be extended!
9. Documentation (UDB)

A project to define when and how to use different application architectures (Nov 2007).

Project Folder
• ngagrouptTDPProjectsUDB - User Database Deployment

Documents
– Proposed End User Data Architecture Implementations (UDB) 20071129a.doc
– Filemaker and SQL Compliant Database(UDB) 20080109c.doc
– User Database Prototype and Alpha Five(UDB).doc
– End User Database Deployments Summary (UDB) 20071128a.doc
10. Fate of UDB project

There was concern at the time that the UDB approach would cause "end users" to be involved in the development process and that TDP would be unable to control their development demands.

The contrarian position was that TDP would still be the primary developers for any enterprise component of the architecture (e.g. the data model), and the proposed solution would eliminate the current practice of users "independently" developing on systems that could not be shared, secured or extended to the enterprise (e.g. Filemaker, MSAccess, ...).

These considerations were never resolved and the project was cancelled in February of 2008. However, the issues still exist, and the proposed architecture is still worth reviewing.
11. What is a Database? (Two Views)

Business Analyst / End User understanding of the term: an all-in-one system where I can save data (e.g. MSAccess). A single product combining the presentation "front end", the "business rules" application, and the data storage.

Technologist / Programmer understanding of the term: only the storage layer – a part of a much larger system, used to store and share data (e.g. MSSQL). The integrated development environment (IDE) "tools", the presentation "front end", and the "business rules" application are separate products in a multi-product stack.
12. Examples of "Single" versus "Multi" software applications

Single-product systems
• Gallery Archives DB – File Maker
• Request Management System – File Maker
• Purchase Card – MS Access
• Excel – Excel

Multi-product systems (front end / business rules / database / development environment)
• Asset Inventory – MS Access / MSSQL / MS Access
• Employee Database – MS Access / MSSQL / MS Access
• Portfolio – Extensis (Service) / Web Portfolio / MSSQL / Vendor
• Inside.Nga.Gov – Web / Mason, Perl, CGI / PostgreSQL / Eclipse
• IRIS – Web / Metastorm / MSSQL / Metastorm
• FMS – Java Applet / J2EE / Oracle / Oracle Forms
• TMS – Visual Basic / Visual Basic / MSSQL / Visual Basic
• Raisers Edge – Visual Basic / Visual Basic / MSSQL / Visual Basic
• Paper Conservation – Web / PHP / MSSQL / PHPRunner
14. Worst Case

• Data everywhere, no single version of truth.
• Which copy is correct?

[Slide diagram: many departmental copies of the same lookup values disagree – Exhibition Code 23 appears as XX in some copies but as YY and ZZ in others, and Exhibition Code 21 as AA in some and ZZ in another.]
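The slide's conflict can be made concrete with a small sketch. This is a hypothetical illustration (SQLite stands in for the Gallery's actual databases, and the table and column names are invented): three departmental copies of the same lookup table have drifted apart, and a single query surfaces every code that no longer has one version of truth.

```python
import sqlite3

# Three departmental copies of the "Exhibition Code" lookup, drifted apart
# (values taken from the slide). SQLite is an assumed stand-in here.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dept_a (code INTEGER, value TEXT);
    CREATE TABLE dept_b (code INTEGER, value TEXT);
    CREATE TABLE dept_c (code INTEGER, value TEXT);
    INSERT INTO dept_a VALUES (23, 'XX'), (21, 'AA');
    INSERT INTO dept_b VALUES (23, 'YY'), (21, 'AA');
    INSERT INTO dept_c VALUES (23, 'ZZ'), (21, 'ZZ');
""")

# Union the copies and report every code that resolves to more than one value.
rows = conn.execute("""
    SELECT code, COUNT(DISTINCT value) AS versions
    FROM (SELECT * FROM dept_a
          UNION ALL SELECT * FROM dept_b
          UNION ALL SELECT * FROM dept_c)
    GROUP BY code
    HAVING versions > 1
    ORDER BY code
""").fetchall()
print(rows)  # -> [(21, 2), (23, 3)]: neither code has a single version of truth
```

A query like this only detects the drift; it cannot say which copy is right, which is exactly the slide's point.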
15. Data Duplication (or Silos)

Who is the authoritative source?

[Slide diagram: several systems each hold their own copy of Art's phone extension (X800, X654, ...). One claims "I have the latest phone numbers", another admits "I have an old phone # in my file", while a user just needs Art's phone number.]

Keeping local copies of data causes obvious problems. Manual synchronization processes are difficult to maintain.
16. Goal

• Single version of truth.
• Defined source of authoritative data.

Exhibition Code: 23 = XX
17. Using Authoritative Source

[Slide diagram: every system that needs Art's phone number, and every system that publishes phone numbers, connects to the one authoritative source – the Official Phone Book.]

Using a single source for the data provides all the systems with the latest, up-to-date information.
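The "Official Phone Book" idea can be sketched in a few lines. This is a hedged illustration, not the Gallery's actual schema: one authoritative table, and every consuming system looks values up at the source instead of caching a local copy, so an update made once is visible everywhere immediately.

```python
import sqlite3

# One authoritative phone book (table and column names are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE phone_book (name TEXT PRIMARY KEY, extension TEXT)")
conn.execute("INSERT INTO phone_book VALUES ('Art', 'X654')")

def lookup(name):
    # Every system calls the authoritative source; nobody keeps a local copy.
    row = conn.execute(
        "SELECT extension FROM phone_book WHERE name = ?", (name,)).fetchone()
    return row[0] if row else None

print(lookup("Art"))  # -> X654
# The number changes once, at the source...
conn.execute("UPDATE phone_book SET extension = 'X800' WHERE name = 'Art'")
print(lookup("Art"))  # -> X800: all consumers see the change at once
```

Contrast this with the previous slide, where the same update would have to be manually synchronized into every local copy.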
19. Five Strategies

• Option 1 – Single user; non-authoritative data (File Maker application, user access only).
• Option 2 – Within a single department; non-authoritative data; shared departmental File Maker application (fat client).
• Option 3 – Within a single department; authoritative data; shared enterprise data (fat client).
• Option 4 – Cross departments; authoritative data; shared enterprise data through a generated PHP application (thin client); temporary storage or local copy only, no permanent local data storage.
• Option 5 – Custom application (thin client); no local data storage.

[Slide diagram: the five options arranged by scope (single user, single department, cross-department) and by whether the data is authoritative.]
20. Attributes Not Displayed in Slides

• Conditions for selecting each deployment
• User responsibility in each deployment
• Number-of-users estimates for each deployment
• TDP role in each deployment
• ...

Proposed End User Data Architecture Implementations (UDB) 20071129a.doc
21. Option 1 – Single User

• A single user works alone, disconnected from the rest of the world, with their own version of the data.
• This is another example of a "single user" application. Filemaker, Excel and Access also fit into this category.
• A minor improvement is to allow this single user the ability to link to an existing 'authoritative source'. This is mostly a "reporting" feature. For example, Greg and Alan Newman use this method for reporting.

[Slide diagram: the user's copy is authorized and loaded one time from the source; refreshing it is optional.]

Examples:
• Susan's pull from IA to Excel
• TMS Crystal Reports
• Raisers Edge queries
22. Option 2 – Multi-User Departmental Application for Departmental-only Data

• This is a multiple-user application. The data in this situation is not 'authoritative' to the Gallery and is therefore considered a 'copy' of some other data stored as a Gallery resource. This architecture assumes simple business rules and little to no workflow.

[Slide diagram: an independent user/developer and other users sharing a departmental AFM database.]

Examples:
• Gallery Filemaker Archives
• AFM Access database
23. Option 2 – Multi-User Departmental Application for Departmental-only Data (Linked to Authoritative Source)

• This is a multiple-user application. The data in these Access databases is not 'authoritative' to the Gallery and is therefore considered a 'copy' of some other data stored as a Gallery resource. However, the central data accessed "read-only" is authoritative. This architecture assumes simple business rules and little to no workflow.

[Slide diagram: users and a user/developer share the departmental database, which holds authorized, read-only (RO) links to the authoritative source.]

Example:
• Old patch reporting model
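The "linked to authoritative source" pattern above can be sketched as follows. This is an illustrative sketch only (SQLite's ATTACH stands in for Access linked tables against SQL Server, and all names are invented): the departmental database keeps its own working tables locally but joins against central reference data in place, without copying it.

```python
import os
import sqlite3
import tempfile

# A central, authoritative database on shared storage (illustrative schema).
central_path = os.path.join(tempfile.mkdtemp(), "central.db")
central = sqlite3.connect(central_path)
central.execute("CREATE TABLE exhibitions (code INTEGER PRIMARY KEY, name TEXT)")
central.execute("INSERT INTO exhibitions VALUES (23, 'XX')")
central.commit()
central.close()

# The departmental database links to the central one instead of copying it.
dept = sqlite3.connect(":memory:")
dept.execute("ATTACH DATABASE ? AS central", (central_path,))
# Departmental-only data lives locally...
dept.execute("CREATE TABLE loans (loan_id INTEGER, exhibition_code INTEGER)")
dept.execute("INSERT INTO loans VALUES (1, 23)")
# ...while authoritative data is referenced in place via the link.
row = dept.execute("""
    SELECT l.loan_id, e.name
    FROM loans l JOIN central.exhibitions e ON e.code = l.exhibition_code
""").fetchone()
print(row)  # -> (1, 'XX')
```

In a real deployment the link would additionally be granted read-only permissions on the server side, matching the "(RO)" access on the slide.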
24. Option 3 – Multi-User Departmental Application for Gallery-Wide Data

This architecture attempts to blend the "ease of use" of a "single-product system" with the centralized control of the "enterprise database". This architecture assumes simple business rules and little to no workflow.

[Slide diagram: an end-user developer builds a web front end; user requests pass through authorization and data-consistency verification (XDO) before reaching the authoritative sources (IRIMS, RE, IRIS, DAC, TMS, AFM, and others).]

Examples:
• Asset Inventory
• Scott Steven Employee Database
25. Option 4 – Multi-User Gallery-Wide
Application for Gallery-Wide Data
This architecture extends the previous architecture to limit the impact of
“Client Maintenance” by using a thin-client Web-Based Application.
[Diagram: users reach the authoritative sources (RE, IRIS, DAC, TMS, IRIMS, TDP, DPUB, other) through Consistency Checking and Authorization; the developer maintains a data model (with DM overrides) that is converted to generated PHP and deployed to a server running PHPRunner for web users]
Examples:
• DCLPA
• VDMS
• Patch Reporting
26. Option 5 – Strategic Applications
[Diagram: users and strategic web applications (TMSWEB, TRAIN, iHeat, DPUB, other) reach the authoritative sources (RE, IRIS, DAC, TMS, IRIMS, other) through Consistency Checking and Authorization; the developer writes complex native code (web services, AJAX, complex application rules)]
Examples:
• IRIS
• Art Extract TDP
27. Metastorm
• Not around when UDB study was
developed (Dec 2007)
• Today, I would define it as a hybrid of
Complex Business Rules and a Generated
Application
28. Architecture
Web Browser
↓
PHPRunner Application Server
(Web Server)
↓
SQL Server Database
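The three tiers above can be sketched end to end. This is a toy stand-in, not the Gallery's implementation: Python and SQLite substitute for the generated PHP and SQL Server, and all table and function names here are invented for illustration.

```python
import sqlite3

# Toy stand-in for the SQL Server database tier.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE requests (request_id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO requests VALUES (1, 'Condition report')")

def application_server(path: str) -> str:
    """Toy stand-in for the PHPRunner-generated application tier:
    receive a browser request, query the database, return HTML."""
    request_id = int(path.rsplit("/", 1)[-1])
    row = db.execute("SELECT name FROM requests WHERE request_id = ?",
                     (request_id,)).fetchone()
    return f"<html><body>{row[0]}</body></html>"

# The "web browser" tier simply renders what the middle tier returns.
print(application_server("/requests/1"))
# <html><body>Condition report</body></html>
```

The point of the middle tier is that clients need only a browser, which is what removes the "Client Maintenance" cost discussed in Option 4.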
29. What kind of tool do we need to do this?
Required:
- Must be easy to build – Low Maintenance
- Must use database standards
- Must have Security
- Must have adequate performance
Desired:
- We did not need to install it (e.g. Web based)
- Was reasonably priced
- Will be easy to maintain in the future (Standards based)
- We should be able to easily replace it!
30. A few options
• MSAccess
• Filemaker
• PHPRunner
• MetaStorm
• Custom Web Development? (Mason, Perl, CGI)
• Other?
31. MSAccess (November 2007)
“I need a database to share education
programs, and I have a Mac.”
“Will you change to a PC?
Then we’ll give you
Microsoft Access.”
“In that case, never mind.”
Hmm, we really cannot ignore the
requirement for Macs. For that matter, the
expense to install is so high that a web solution
would save the Gallery the most money. The
Web should be a requirement.
32. Filemaker!
Required:
- Must be easy to build – Low Maintenance
- Must use database standards
- Must have Security
- Must have adequate performance
Desired:
- We did not need to install it (e.g. Web based)
- Was reasonably priced
- Will be easy to maintain in the future (Standards based)
33. Filemaker!
Required:
- Must be easy to build – Low Maintenance
- Must use database standards
- Must have Security
- Must have adequate performance
Desired:
- We did not need to install it (e.g. Web based)
- Was reasonably priced
- Will be easy to maintain in the future (Standards based)
“How about Filemaker to a SQL Database …
Isn't that a feature of Filemaker Server?
Isn't that called ESS (External Data Sources)?”
34. Problems With Filemaker
• “Filemaker External SQL Sources (ESS) In Depth”
(Filemaker publication)
– “Value lists cannot be based on data in an ESS table”
– “in a FileMaker Pro context, date-only or time-only data entry will not be valid”
– ESS data in FileMaker Pro has the potential to be slightly out of date
– Binary data is not supported by ESS at present
– Scrolling operations … can be problematic in large record sets … and will
perform fairly slowly
– Sort not performed in the database
– The ESS feature set is primarily designed to allow FileMaker Pro solutions to
integrate data from SQL-based solutions. ESS is not primarily intended as a means
to scale solutions beyond the bounds of a purely FileMaker Pro based solution.
– See: Filemaker and SQL Compliant Database (UDB) 20080109c.doc
35. Problems With Filemaker
• “Value lists cannot be based on data in an ESS table”
What this means: no field-list validation (dropdown lists) from
non-Filemaker authoritative sources!
Or, in simple terms:
“Can’t validate against authoritative data”
• See: Filemaker and SQL Compliant Database (UDB) 20080109c.doc
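The missing capability is easy to state in database terms: a departmental field should only accept values that exist in an authoritative central table. A sketch of that check, using SQLite in place of the Gallery's SQL Server (the `Classifications` table name is borrowed from the DCLPA model later in the deck; the rest is invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

# Authoritative central table.
con.execute("CREATE TABLE Classifications "
            "(Classification_id INTEGER PRIMARY KEY, Name TEXT)")
con.executemany("INSERT INTO Classifications VALUES (?, ?)",
                [(1, "Painting"), (2, "Sculpture")])

# A departmental table whose dropdown value MUST exist in the
# authoritative table -- the validation ESS value lists cannot provide.
con.execute("""
CREATE TABLE Requests (
    Request_id        INTEGER PRIMARY KEY,
    Classification_id INTEGER NOT NULL
        REFERENCES Classifications(Classification_id)
)""")

con.execute("INSERT INTO Requests VALUES (1, 1)")       # valid: Painting
try:
    con.execute("INSERT INTO Requests VALUES (2, 99)")  # unknown value
    print("accepted (should not happen)")
except sqlite3.IntegrityError:
    print("rejected: not in the authoritative list")
```

Any SQL-backed tool gets this foreign-key validation for free; per the Filemaker publication quoted above, an ESS-based value list cannot supply it.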
36. Filemaker (My Last Straw)
From: John Blakeley [mailto:john@fbsl.co.nz]
Sent: Tue 1/8/08 2:27 PM
To: Nicewick, Arthur
Subject: RE: your post on Filemaker about ODBC conversion errors
Hi Arthur
Thanks and a happy New Year to you!
We gave up on the idea of pulling data from Filemaker using it as a linked server. In the
end we scheduled a script to run that exported data on an hourly basis. SQL would then
import it. We had to use MS scheduled tasks to open a FM file that would autostart an
export script as FM server schedule cannot run scripts that aren't web compatible.
Nothing is ever simple in Filemaker! One day...
Cheers
John Blakeley
John Blakeley Mobile: + 64 21 948037
Email: john@fbsl.co.nz
Skype: john.blakeley
Bayview
North Shore
New Zealand
37. Filemaker (My Last Straw)
Key quote from the email above:
“We gave up on the idea of pulling data
from Filemaker using it as a linked server”
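The workaround described in the email (a scheduled FileMaker export that SQL then imports) can be sketched as a small import routine. This is illustrative only: the CSV layout, column names, and the `fm_staging` table are all invented, and SQLite stands in for SQL Server.

```python
import csv
import os
import sqlite3
import tempfile

def import_hourly_export(csv_path: str, con: sqlite3.Connection) -> int:
    """Load the latest scheduled FileMaker export into a SQL staging
    table, replacing the previous hour's copy. Returns rows loaded."""
    con.execute("DROP TABLE IF EXISTS fm_staging")
    con.execute("CREATE TABLE fm_staging (record_id TEXT, name TEXT)")
    with open(csv_path, newline="") as f:
        rows = [(r["record_id"], r["name"]) for r in csv.DictReader(f)]
    con.executemany("INSERT INTO fm_staging VALUES (?, ?)", rows)
    con.commit()
    return len(rows)

# Simulate the file the scheduled FileMaker script would have written.
fd, path = tempfile.mkstemp(suffix=".csv")
with os.fdopen(fd, "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["record_id", "name"])
    w.writerow(["1", "Gallery Archives item"])

con = sqlite3.connect(":memory:")
print(import_hourly_export(path, con))  # 1
os.remove(path)
```

Note the moving parts this workaround needs outside the database: an OS scheduler to open the FileMaker file and an auto-start export script, exactly the fragility the email complains about.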
39. Prototyped
Typical “types” of assets, with issue numbers
mapped
“Lookup” provides a means to search for Issuance
40. Rated the best
Criteria:
- Easy to learn / use
- Can be extended
- Can integrate with COTS packages
- “Open” architecture
Candidates compared: PHPRunner, Java Server Faces, Ruby on Rails, Active Scaffold, Alpha Five, Cold Fusion, .Net, Netbeans
41. PHPRunner
Required:
- Must be easy to build – Low Maintenance: Very little coding
- Must use database standards: Yes
- Must have Security: Pretty good, but we need to make it better
- Must have adequate performance: Looks OK, needs testing
Desired:
- We did not need to install it (e.g. Web based): Yes
- Was reasonably priced: Yes
- Will be easy to maintain in the future (Standards based):
Standard industry language (PHP);
framework not as widespread as we would like
42. PHPRunner Code Generation
[Diagram: a Code Generator takes the tables, relationships, and framework as input and produces web forms. Customizations live at several levels: data definitions, rules, security / navigation, and relationships are changes that won’t break on DB upgrades; custom framework code is a change that won’t break on upgrades; code overrides (access routines, screen customizations, templates) will be reverted by framework upgrades and “might” break on upgrades, but the system will try to prevent it.]
43. PHPRunner Code Generation (continued)
[Same diagram as slide 42]
51. End User Reports Creation
(Web Based)
• Very Simple
• Version 1 is very limited
• Tightly integrated with Security
• Version 2 expected in the summer
52. How about Metastorm?
• Workflow focus
• Excellent integration with in-box processes
• Ideal for “Approval” processing
• Ideal for “Request” processing
• Somewhat RAD
• No focus on the data model
• No focus on data business rules
• “Probably” ported from a non-relational product
53. Web Database vs Workflow
• PHPRunner (Web Database) has the user
create a normalized database
(modeling business rules), then “semi-
automatically” creates the user interface
• Metastorm (Workflow) has the user create
a workflow, and then “semi-
automatically” creates a “denormalized”
(technical) database behind it
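The contrast above can be made concrete with a toy schema. Table and column names here are invented for illustration (neither product's actual output); SQLite stands in for the real database. The denormalized side shows the "integers stored as characters" problem called out on the Metastorm slide below.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Web-database style: a normalized parent/child model with typed
# columns and a foreign key, from which a CRUD UI can be generated.
cur.executescript("""
CREATE TABLE request (
    request_id INTEGER PRIMARY KEY,
    name       TEXT NOT NULL
);
CREATE TABLE request_item (
    item_id    INTEGER PRIMARY KEY,
    request_id INTEGER NOT NULL REFERENCES request(request_id),
    quantity   INTEGER NOT NULL
);
""")

# Workflow-engine style: one wide, denormalized table where parent and
# child values share rows and everything is stored as text.
cur.execute("""
CREATE TABLE folder (
    folder_id TEXT, name TEXT, item_qty TEXT, temp_screen_data TEXT
)""")

cur.execute("INSERT INTO request VALUES (1, 'Exhibition loan')")
cur.execute("INSERT INTO request_item VALUES (1, 1, 3)")
cur.execute("INSERT INTO folder VALUES ('1', 'Exhibition loan', '3', '')")

# In the normalized model the quantity is a real integer...
qty = cur.execute(
    "SELECT quantity FROM request_item WHERE request_id = 1").fetchone()[0]
print(type(qty).__name__)   # int

# ...while the denormalized model stores it as a character string.
qty2 = cur.execute(
    "SELECT item_qty FROM folder WHERE folder_id = '1'").fetchone()[0]
print(type(qty2).__name__)  # str
```

The practical consequence: in the normalized model, sorting, arithmetic, and range queries on `quantity` just work; in the flattened model every report has to cast text back to numbers first.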
54. Metastorm vs Web Database
• PHPRunner is “only” code development; the
generated code runs on Apache/PHP. Therefore there
are no license issues for users (unlimited users)
– PHPRunner toolkit: $300 per developer
– Unlimited users – no additional cost
• Metastorm provides both a “Toolkit” and a
“Runtime” environment. The runtime
environment costs:
– User license: $149.88 per named user (not concurrent)
– Workgroup Server, restricted to a maximum of 250 named users: $17,984.89 per server
– The Gallery has purchased 8 developer licenses at ~$2500 each (need to verify)
– Note: I currently do not know how many users exist in Metastorm; we may be required to
purchase more licenses. However, once purchased, a single user license will work with
unlimited workflow tasks (however, we may need additional servers)
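Using the list prices quoted above, the runtime cost gap can be tallied directly. This is a rough sketch for comparison only: it ignores the already-purchased Metastorm developer licenses, hardware, and any discounts, and the 100-user scenario is hypothetical.

```python
# List prices quoted on the slide.
PHPRUNNER_TOOLKIT = 300.00     # per developer; deployed users are free
METASTORM_USER    = 149.88     # per named user (not concurrent)
METASTORM_SERVER  = 17984.89   # Workgroup Server, up to 250 named users

def phprunner_cost(developers: int, users: int) -> float:
    # Users add nothing; only developer toolkits cost money.
    return developers * PHPRUNNER_TOOLKIT

def metastorm_runtime_cost(users: int) -> float:
    # One Workgroup Server per 250 named users, plus a license per user.
    servers = -(-users // 250)  # ceiling division
    return servers * METASTORM_SERVER + users * METASTORM_USER

print(round(phprunner_cost(2, 100), 2))        # 600.0
print(round(metastorm_runtime_cost(100), 2))   # 32972.89
```

For a hypothetical 100-user departmental application, the named-user model costs roughly fifty times the per-developer model, which is the core of the slide's argument.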
55. Replacing Filemaker and
MSAccess
• PHPRunner is a database-driven tool and is
therefore dependent on a normalized data model.
Filemaker and MSAccess are likewise database-
driven tools dependent on normalized data
models. Therefore, PHPRunner should be ideal
for migrating database applications such as
“Gallery Archives”.
• Metastorm is “Workflow” focused and does not
allow the user to define the database. For pure
data storage, it is not a good application.
However, for workflow, it is very good.
56. PHPRunner
Normalized Data Model
Simple CRUD User Interface
- Clean data types
- Basic menu
- Logical, business-modeled data model
- No “Workflow” or “Inbox”
- Data-enforced business rules
- Light integration with email
- Easy to report
- Decent report generation
[Diagram: an auto-generated schema with a Header table (e.g. Name, Address), child tables Child1–Child3 (e.g. kids’ names, cars, roles), and admin reference tables]
57. Metastorm
Complex Workflow:
- Great “Inbox”
- “Approvals” out of the box
- Logical diagramming of workflow
- Tight integration with email
- “Bad” report generation
Denormalized Data Model:
- Data not always typed (integers are character)
- Single logical table storing parents and children
- “Temp” screen data stored in the database
- Fields defined but not used
- Admin screen cannot be linked to the data model
[Diagram: one wide auto-generated table mixing keys and pointers, header fields (e.g. Name, Address), child fields (e.g. kids’ names, cars, roles), and temp screen data, with most cells ignored; admin reference tables sit alongside]
58. Metastorm and Workflow
• RAD / visual development of workflow is what
Metastorm is good at!
• A good standard for workflow applications in the Gallery
60. PHPRunner (DCLPA) Data Model
[ER diagram: the DCLPA schema, centered on the Requests table, with supporting tables including RequestStatuses, RequestReasons, RequestTypes, RequestTypeReasons, RequestTypeApprovingRoles, RequestApprovingRoles, RequestTMSObjects, RequestNonTMSObjects, RequestBulkClassificationCounts, Classifications, Exhibitions, ExhibitionActivities, Loans, LoanActivities, RequestReports, RequestReportDACs, RequestReportWorkOrders, RequestReportTechnicalInfos, RequestReportAttachments, RequestReportStatuses, ReportTypes, ReportTypeGroups, WorkOrderTypes, and Conservators. Most tables carry LastUpdateUserName / LastUpdateTime audit columns and DropDown_Display_Order fields for value lists.]
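Two of the tables from the diagram can be reconstructed as DDL to show the pattern the model uses throughout (a dropdown-feeding status table, audit columns, and a foreign key from Requests). SQLite stands in for SQL Server here, only a subset of the columns is shown, and NVARCHAR(500) is a stand-in for nvarchar(MAX).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")

# A subset of the DCLPA model: RequestStatuses feeds a dropdown
# (hence DropDown_Display_Order), and Requests references it; both
# carry the LastUpdateUserName / LastUpdateTime audit columns.
con.executescript("""
CREATE TABLE RequestStatuses (
    RequestStatus_id       INTEGER PRIMARY KEY,
    Name                   NVARCHAR(100) NOT NULL,
    DropDown_Display_Order INT,
    LastUpdateUserName     NVARCHAR(50),
    LastUpdateTime         DATETIME
);
CREATE TABLE Requests (
    Request_id         INTEGER PRIMARY KEY,
    RequestType_id     INT,
    RequestStatus_id   INT NOT NULL
        REFERENCES RequestStatuses(RequestStatus_id),
    RequestReason_id   INT,
    RequestDate        DATETIME,
    QuickNotes         NVARCHAR(500),
    LastUpdateUserName NVARCHAR(50),
    LastUpdateTime     DATETIME
);
""")

con.execute(
    "INSERT INTO RequestStatuses (RequestStatus_id, Name) VALUES (1, 'Open')")
con.execute(
    "INSERT INTO Requests (Request_id, RequestStatus_id) VALUES (1, 1)")
print(con.execute("""
    SELECT s.Name FROM Requests r
    JOIN RequestStatuses s ON s.RequestStatus_id = r.RequestStatus_id
""").fetchone()[0])  # Open
```

This is exactly the shape PHPRunner consumes: typed columns, foreign keys, and per-table audit fields, from which the CRUD forms on slide 56 are generated.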