Conference object
VI International Conference on Education and Information Systems, Technologies and Applications (EISTA) (Florida, United States)
Simulation is the process of executing a model, that is, a representation of a system detailed enough to describe it but no more detailed than necessary. This model has a set of entities, an internal state, a set of input variables (some of which can be controlled and some of which cannot), a list of processes that bind these input variables to the entities, and one or more output values, which result from the execution of events.
Running a model is of little use if it cannot be analyzed, that is, if we cannot study the interactions among input variables and model entities and their weight in the values of the output variables. In this work we consider Discrete Event Simulation, in which the state of the simulated system changes at a countable set of instants, either finite or countably infinite.
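To make the event-driven notion concrete, here is a minimal sketch of a discrete-event loop in Python; the class and names are illustrative only, not part of GPSS or of the framework discussed below.

```python
import heapq

# Minimal discrete-event loop: events are (time, action) pairs kept in a
# priority queue; the clock jumps between event times (a countable set of
# instants) instead of advancing continuously.
class Simulation:
    def __init__(self):
        self.clock = 0.0
        self._events = []   # heap of (time, seq, action)
        self._seq = 0       # tie-breaker for events at the same instant

    def schedule(self, delay, action):
        heapq.heappush(self._events, (self.clock + delay, self._seq, action))
        self._seq += 1

    def run(self, until=float("inf")):
        while self._events and self._events[0][0] <= until:
            self.clock, _, action = heapq.heappop(self._events)
            action(self)  # an event may update state and schedule new events

# Example: arrivals every 5 time units, counted in a simple output variable.
arrivals = 0
def arrive(sim):
    global arrivals
    arrivals += 1
    sim.schedule(5.0, arrive)

sim = Simulation()
sim.schedule(0.0, arrive)
sim.run(until=20.0)
print(sim.clock, arrivals)  # 20.0 5
```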
Existing GPSS implementations and IDEs provide a wide range of tools for analyzing results, for generating and executing experiments, and for performing complex analyses (such as Analysis of Variance, screening, and so on). This is usually enough for the most common analyses, but more detailed information and much more specific analyses are often required.
In addition, teaching this kind of simulation language is always a challenge, since the way it executes models and the abstraction level its entities achieve are totally different from general-purpose programming languages, which most students in the area know well. As a result, it is usually hard for students to understand how everything works under the hood.
We have developed an open source simulation framework that implements a subset of the entities of the GPSS language, which students can use to improve their understanding of these entities. The tool also has the ability to store all simulation entities at every single simulation time, which is very useful for debugging simulations, but also for keeping a detailed history of all entities (permanent and temporary) in a simulation, showing exactly how they behaved at every simulation time.
In this paper we provide an overview of this development, with special emphasis on the design of the simulation model and on the persistence of simulation entities, which are the foundations that students and researchers in the area need in order to extend the model, adapt it to their needs, or add analysis tools to the framework.
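As an illustration of the persistence idea, and not of the framework's actual storage layer, the following sketch deep-copies every entity at each simulation instant and lets one trace a single entity's history:

```python
import copy

# Sketch of per-instant entity snapshots: after every simulation time the
# full entity state is deep-copied and stored, so any past instant can be
# inspected later (useful for debugging and for post-run analysis).
class SnapshotStore:
    def __init__(self):
        self._history = {}  # simulation time -> snapshot of all entities

    def record(self, time, entities):
        self._history[time] = copy.deepcopy(entities)

    def at(self, time):
        return self._history[time]

    def trace(self, entity_id):
        # Behavior of one entity (permanent or temporary) across all instants.
        return {t: s.get(entity_id) for t, s in sorted(self._history.items())}

store = SnapshotStore()
store.record(0, {"FACILITY1": {"busy": False}})
store.record(5, {"FACILITY1": {"busy": True}})
print(store.trace("FACILITY1"))
```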
Full record at: http://sedici.unlp.edu.ar/handle/10915/5557
Pareto Type II Based Software Reliability Growth Model (Waqas Tariq)
The past four decades have seen the formulation of several software reliability growth models to predict the reliability and error content of software systems. This paper presents the Pareto type II model as a software reliability growth model, together with expressions for various reliability performance measures. Probability theory, distribution functions, and probability distributions play a major role in building software reliability models. The paper presents estimation procedures to assess the reliability of a software system using the Pareto distribution, based on a Non-Homogeneous Poisson Process (NHPP).
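For reference, an NHPP growth model built on a Pareto type II (Lomax) distribution is usually written as below; the paper's exact parameterization may differ, so treat this as a hedged sketch.

```latex
% NHPP SRGM sketch with a Pareto type II (Lomax) detection-time distribution.
% a = expected total number of faults; alpha, beta > 0 are shape/scale.
\begin{align*}
  F(t) &= 1 - \left(\frac{\beta}{\beta + t}\right)^{\alpha}, \qquad t \ge 0,\\
  m(t) &= a\,F(t) = a\left[1 - \left(\frac{\beta}{\beta + t}\right)^{\alpha}\right],\\
  \lambda(t) &= m'(t) = \frac{a\,\alpha\,\beta^{\alpha}}{(\beta + t)^{\alpha+1}}.
\end{align*}
```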
SEMANTIC STUDIES OF A SYNCHRONOUS APPROACH TO ACTIVITY RECOGNITION (cscpconf)
Many important and critical applications such as surveillance or healthcare require some form of (human) activity recognition. Activities are usually represented by a series of actions driven and triggered by events. Recognition systems have to be real-time, reactive, correct, complete, and dependable. These stringent requirements justify the use of formal methods to describe, analyze, verify, and generate effective recognition systems. Due to the large number of possible application domains, the researchers aim at building a generic recognition system. They choose the synchronous approach because it has a well-founded semantics and ensures determinism and safe parallel composition. They propose a new language to represent activities as synchronous automata and supply it with two complementary formal semantics. First, a behavioral semantics gives a reference definition of program behavior using rewriting rules. Second, an equational semantics describes the behavior in a constructive way and can be directly implemented. This paper focuses on the description of these two semantics and their relation.
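As a toy illustration of the synchronous idea, not of the paper's language, the following sketch steps a deterministic activity automaton once per instant over the set of events present at that instant; the states and events are invented for the example:

```python
# Tiny sketch of a synchronous activity automaton: at each instant the
# automaton reads the set of events present and takes exactly one
# deterministic transition (state and event names are illustrative).
TRANSITIONS = {
    ("idle",    frozenset({"enter_zone"})): "present",
    ("present", frozenset({"fall"})):       "alarm",
    ("present", frozenset({"exit_zone"})):  "idle",
}

def step(state, events):
    # Unmatched event sets leave the state unchanged (no transition enabled).
    return TRANSITIONS.get((state, frozenset(events)), state)

state = "idle"
for instant in [{"enter_zone"}, set(), {"fall"}]:
    state = step(state, instant)
print(state)  # alarm
```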
Conference object
International PKP Scholarly Publishing Conference 2009 (Harbour Centre, Simon Fraser University, Canada)
La Plata National University makes a great effort to support and encourage scientific, technological and artistic research and innovation, maintaining its quality, as well as the development and knowledge transfer that benefit society and generate new scientific, technological and artistic knowledge.
To achieve this purpose, the University fosters the education of high-quality human resources, the promotion of scientific, technological and artistic activity, and its dissemination around the world.
Full record at: http://sedici.unlp.edu.ar/handle/10915/5551
Conference object
VI Simposio Internacional de Bibliotecas Digitales (Brazil)
Digital repositories acting as resource aggregators typically face different challenges, roughly classified into three main categories: extraction, improvement, and storage. The first category comprises issues related to dealing with different resource collection protocols (OAI-PMH, web crawling, web services, etc.) and their representations (XML, HTML, database tuples, unstructured documents, etc.). The second category comprises information improvements based on controlled vocabularies, specific date formats, correction of malformed data, and the like. Finally, the third category deals with the destination of the downloaded resources: unification into a common database, sorting by certain criteria, etc.
This paper proposes an ETL architecture for designing a software application that provides a comprehensive solution to the challenges posed by a digital repository acting as a resource aggregator.
Design and implementation aspects considered during the development of this tool are described, focusing especially on architecture highlights.
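A minimal sketch of the extract-improve-store split described above, with purely illustrative names rather than the tool's real API:

```python
from dataclasses import dataclass

# Stage 1 -> 2 -> 3 pipeline mirroring the three challenge categories:
# extraction, improvement, and storage. All names here are hypothetical.
@dataclass
class Record:
    title: str
    date: str

def extract(source):
    # Stage 1: each source (OAI-PMH, crawling, web services...) yields raw dicts.
    yield from source

def improve(raw):
    # Stage 2: normalize dates, trim malformed data, map to controlled terms.
    date = raw.get("date", "").replace("/", "-") or "unknown"
    return Record(title=raw.get("title", "").strip(), date=date)

def store(records, destination):
    # Stage 3: unify into a common destination (database, index, ...).
    destination.extend(records)

unified = []
source = [{"title": " An article ", "date": "2011/05/01"}]
store((improve(r) for r in extract(source)), unified)
print(unified)
```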
Full record at: http://sedici.unlp.edu.ar/handle/10915/5529
Article
The Online Journal of New Horizons in Education; vol. 3, no. 1
This work presents an open source web environment for learning the GPSS language in Modeling and Simulation courses. With this environment, students build their models by selecting and configuring entities instead of writing GPSS code from scratch. Teachers can also create models so that students can apply them, analyze them, and interpret the results. The environment includes a simulation engine that stores snapshots of models as they are executed and allows students to navigate through these snapshots, and it can be combined with existing learning management systems.
Full record at: http://sedici.unlp.edu.ar/handle/10915/25669
Conference object
III International Conference on New Horizons in Education (INTE) (Prague, Czech Republic)
This work presents an open source web environment for learning the GPSS language in Modeling and Simulation courses. With this environment, students build their models by selecting and configuring entities instead of writing GPSS code from scratch. Teachers can also create models so that students can apply them, analyze them, and interpret the results. The environment includes a simulation engine that stores snapshots of models as they are executed and allows students to navigate through these snapshots, and it can be combined with existing learning management systems.
Full record at: http://sedici.unlp.edu.ar/handle/10915/25674
Towards a Docker-based architecture for open multi-agent systems (IAESIJAI)
In open multi-agent systems (OMAS), heterogeneous agents in different environments or models can migrate from one system to another, taking their attributes and knowledge with them and increasing development complexity compared to conventional multi-agent systems (MAS). Furthermore, the complexity of openness may stem from the uncertainties and dynamic behavior that the movement of agents entails, requiring techniques to analyze this complexity and understand the system's global behavior. We used Docker to approach these problems and to make the architecture flexible enough to handle distinct programming languages and agent frameworks. This paper presents a Docker-based architecture to aid OMAS development, acting on agent migration between different models running in heterogeneous hardware and software scenarios. We present a simulation scenario with NetLogo's Open Sugarscape 2 Constant Growback and JaCaMo's Gold Miners to verify the proposal's feasibility.
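As a hedged sketch of how such an architecture might be wired up, assuming hypothetical agent image names, the Docker SDK for Python can place each agent platform in its own container on a shared network so agents written in different languages can still reach one another:

```python
import docker  # Docker SDK for Python (pip install docker)

# Sketch: each agent platform (e.g. a NetLogo model, a JaCaMo agent system)
# runs in its own container on a shared bridge network. The image names
# "netlogo-agent" and "jacamo-agent" are hypothetical.
client = docker.from_env()

try:
    network = client.networks.get("omas-net")
except docker.errors.NotFound:
    network = client.networks.create("omas-net", driver="bridge")

for name, image in [("sugarscape", "netlogo-agent:latest"),
                    ("gold-miners", "jacamo-agent:latest")]:
    client.containers.run(image, name=name, network="omas-net", detach=True)
```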
Modeling and simulation is the use of models as a basis for simulations to produce data used for managerial or technical decision making. In the computer application of modeling and simulation, a computer is used to build a mathematical model containing the key parameters of the physical model.
Proposing a Formal Method for Workflow Modelling: Temporal Logic of Actions (...) (ijcsta)
The study and implementation of formal techniques to aid the design and implementation of Workflow Management Systems (WfMS) is still required. Using these techniques, we can provide this technology with automated reasoning capacities, which are required for the automated demonstration of the properties that a given model must verify. This paper develops a formalization of the workflow paradigm based on communication (speech-act theory) using a temporal logic, namely the Temporal Logic of Actions (TLA). This formalization provides the basic theoretical foundation for the automated demonstration of the properties of a workflow map, its simulation, and its fine-tuning by managers.
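A generic TLA-style shape for such a workflow formalization might look as follows; this is an illustrative sketch, not the paper's actual specification:

```latex
% Illustrative TLA shape for a workflow map: a task moves from pending to
% done, and the specification takes the usual Init /\ [][Next]_vars form,
% over which properties can then be demonstrated.
\begin{align*}
  Init &\triangleq state = [t \in Tasks \mapsto \text{``pending''}] \\
  Complete(t) &\triangleq state[t] = \text{``pending''} \land
                 state' = [state \text{ EXCEPT } ![t] = \text{``done''}] \\
  Next &\triangleq \exists\, t \in Tasks : Complete(t) \\
  Spec &\triangleq Init \land \Box[Next]_{state}
\end{align*}
```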
FORMALIZATION & DATA ABSTRACTION DURING USE CASE MODELING IN OBJECT ORIENTED ... (cscpconf)
In object-oriented analysis and design, use cases represent the things of value that the system performs for its actors in UML and the Unified Process. Use cases are not functions or features; they give us a behavioral abstraction of the system-to-be. The purpose of this behavioral abstraction is to get to the heart of what a system must do: we must first focus on who (or what) will use the system, or be used by it, and then look at what the system must do for those users in order to accomplish something useful. That is exactly what we expect from use cases as a behavioral abstraction. Use cases are, however, poor candidates for data abstraction; indeed, they provide no data abstraction at all. The main reason is that a use case shows or describes the sequence of events or actions performed by the actor or the use case, and does not take data into account. In the early stages of development we care about 'what' rather than 'how'; 'what' does not need to include data, whereas 'how' depicts the data. Since use cases revolve around 'what' only, we cannot extract the data from them, so to incorporate data into use cases, the need for data must be addressed at the initial stages of development. We have developed a technique to integrate data into use cases. This paper reports our investigations into handling data during the early stages of software development. The collected data abstraction helps in the analysis and then assists in forming the attributes of the candidate classes, ensuring that no attribute required by the behavior abstracted through use cases is missed. Formalization adds to the accuracy of the data abstraction. We have investigated the Object Constraint Language to perform better data abstraction during analysis and design in the unified paradigm. In this paper we present our research on early-stage data abstraction and its formalization.
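As a small illustration of the idea, assuming hypothetical use-case steps and class names, one can record the data each step touches and derive candidate-class attributes from it:

```python
from collections import defaultdict

# Sketch: annotate each use-case step with the data items it reads or
# writes, then collect attributes per candidate class. Names are invented.
use_case_steps = [
    ("Customer submits order", {"Order": ["items", "total"]}),
    ("System checks stock",    {"Product": ["stock_level"]}),
    ("System confirms order",  {"Order": ["status"], "Customer": ["email"]}),
]

candidate_attributes = defaultdict(set)
for _step, data in use_case_steps:
    for klass, attrs in data.items():
        candidate_attributes[klass].update(attrs)

for klass, attrs in sorted(candidate_attributes.items()):
    print(klass, sorted(attrs))
# Customer ['email']
# Order ['items', 'status', 'total']
# Product ['stock_level']
```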
Formalization & data abstraction during use case modeling in object oriented ... (csandit)
In object-oriented analysis and design, use cases represent the things of value that the system performs for its actors in UML and the Unified Process. Use cases are not functions or features; they give us a behavioral abstraction of the system-to-be. The purpose of this behavioral abstraction is to get to the heart of what a system must do: we must first focus on who (or what) will use the system, or be used by it, and then look at what the system must do for those users in order to accomplish something useful. That is exactly what we expect from use cases as a behavioral abstraction. Use cases are, however, poor candidates for data abstraction; indeed, they provide no data abstraction at all. The main reason is that a use case shows or describes the sequence of events or actions performed by the actor or the use case, and does not take data into account. In the early stages of development we care about 'what' rather than 'how'; 'what' does not need to include data, whereas 'how' depicts the data. Since use cases revolve around 'what' only, we cannot extract the data from them, so to incorporate data into use cases, the need for data must be addressed at the initial stages of development. We have developed a technique to integrate data into use cases. This paper reports our investigations into handling data during the early stages of software development. The collected data abstraction helps in the analysis and then assists in forming the attributes of the candidate classes, ensuring that no attribute required by the behavior abstracted through use cases is missed. Formalization adds to the accuracy of the data abstraction. We have investigated the Object Constraint Language to perform better data abstraction during analysis and design in the unified paradigm. In this paper we present our research on early-stage data abstraction and its formalization.
The European Space Agency (ESA) uses an engine to perform tests on the Ground Segment infrastructure, especially the Operational Simulator. This engine uses many different tools to support the development of a regression testing infrastructure, and these tests perform black-box testing of the C++ simulator implementation. VST (VisionSpace Technologies) is one of the companies that provide these services to ESA, and it needs a tool to automatically infer tests from the existing C++ code instead of manually writing test scripts. With this motivation in mind, this paper explores automatic testing approaches and tools in order to propose a system that satisfies VST's needs.
Conference object
Lecture series at the Universidad Nacional Experimental del Táchira (UNET) (Venezuela, 2012)
Objectives:
• To share SEDICI's experience in all the areas involved in running the repository: editing, cataloging, communication and dissemination, supporting software and interoperability, associated services, and legal issues, among others.
• To raise awareness of open access in all its forms.
Full record at: http://sedici.unlp.edu.ar/handle/10915/37866
Doctoral thesis
De Giusti, Marisa Raquel; Gordillo, Silvia; Castro, Silvia; Suppi, Remo; Baldasarri, Sandra
PhD in Computer Science; Facultad de Informática
Teaching discrete event simulation requires integrating a variety of theoretical concepts and putting them into practice through the creation and execution of abstract simulation models, with the goal of gathering information that can be transferred to the real systems. To build models, run them, and analyze the results of each run, increasingly sophisticated software tools are used; these tools express the elements of a model in terms of abstract entities and relations, and collect large amounts of data and statistics about each of these model entities. GPSS is one of these tools, composed of a block-oriented programming language and a simulation engine that translates those blocks into the different entities of the model. Although its first version dates back to 1961, GPSS is still widely used by practitioners and companies, and it is one of the tools most frequently used for teaching discrete event simulation in academic institutions around the world.
The growth in computing power has made it possible to add more and more tools and features to the different GPSS implementations. While this is an advantage for users, it also demands an ever greater effort from teachers to show their students how to exploit its full potential. Many teachers and researchers have sought to improve the teaching of discrete event simulation from multiple angles: course organization and teaching methodology, the creation of learning elements that help apply the different theoretical concepts, the development of tools for building GPSS models, and the construction of tools for understanding the simulation engine from the inside.
This thesis introduces a software tool for building GPSS models interactively, designed to integrate the theoretical elements of the course with GPSS objects and entities. The tool can also run these models and analyze their evolution through simulation time in great detail, allowing students to understand how the simulation engine works and how the different entities interact with one another. A teaching proposal based on strong student participation is also included which, by means of this new tool, helps students assimilate the concepts more easily. This teaching proposal was tested with students from the systems field, who took a course containing the m
Full record at: http://sedici.unlp.edu.ar/handle/10915/29753
Conference object
The Benefits of Model-Driven Development in Institutional Repositories
II Conferencia Internacional Acceso Abierto, Comunicación Científica y Preservación Digital (BIREDIAL) (Colombia, 2012)
Institutional Repositories (IR) have become consolidated in institutions across scientific and academic areas, as shown by the existing directories of open access repositories and by the daily deposits of articles and documents made through different channels, such as self-archiving by registered users and cataloging by librarians. IR systems are based on various conceptual models, so this paper presents a bibliographic survey of Model-Driven Development (MDD) in IR systems and applications, with the purpose of exposing the benefits of applying MDD to IR. MDD is a software construction paradigm that assigns models a central and active role, deriving models that range from the most abstract to the most concrete through successive transformations. This paradigm provides a framework that allows stakeholders to share their points of view and directly manipulate representations of the entities of this domain. The benefits are therefore grouped by the actors involved, namely developers, business owners, and domain experts. In conclusion, these benefits help the whole IR domain move toward more formal software implementations, consolidating such systems, whose main beneficiaries will be the end users through the many services that are and will be offered by these systems.
Full record at: http://sedici.unlp.edu.ar/handle/10915/26044
Conference object
I Conferencia sobre Bibliotecas y Repositorios Digitales (BIREDIAL) (Colombia, 2011)
The Intellectual Creation Dissemination Service (SeDiCI) is the institutional digital repository of La Plata National University (UNLP), created to act as the central dissemination point for all the academic production generated within the institution. Given the trend among the world's leading academic institutions of making their scientific production public through open access digital repositories, SeDiCI has become a strategic tool for raising the profile of the institution.
Since its creation in 2003, SeDiCI has faced various difficulties that have directly or indirectly influenced its development and growth. Despite these problems, SeDiCI currently holds a document database of more than 14,000 of UNLP's own academic resources, exposed under Open Access policies. This makes SeDiCI one of the main repositories of its kind, at both the national and the regional (Latin American) level. This document presents experiences and challenges that SeDiCI has faced, describing in each case the problem, its context, and the courses of action taken to overcome it. The main topics are: the need for institutional support, cataloging rules and methodologies, improvements in the services provided to users, and resource importing, among others.
It also describes some of the current challenges, along with different research and development lines aimed at solving them and at expanding and improving the services provided to the user community. These include: harvesting mechanisms and tools, management of large volumes of information, ontologies and semantic repositories, legislation related to Open Access, Selective Dissemination of Information, and self-archiving.
The main purpose of this document is to share the experience gained in this project, in the hope that it will be useful to institutions that are in the process of creating their own institutional repositories, or that face problems similar to those described here.
Full record at: http://sedici.unlp.edu.ar/handle/10915/5528
Preprint
DYNA; Edition 184
Institutional Repositories (IR) have become consolidated in academia, as evidenced by the growing number of records in the existing directories, populated through different channels: self-archiving by authors and the incorporation of material by librarians, among others. This paper presents a bibliographic survey on the use of the Model-Driven Development (MDD) approach in IR systems, with the purpose of establishing a relationship between them. MDD is a software construction paradigm that assigns models a central role and derives models that range from the most abstract to the most concrete. This paradigm also provides a framework that allows stakeholders to share their points of view and manipulate representations of the domain entities. In conclusion, following the different lines of research surveyed, together with what is presented here, encourages software implementations for IR.
Full record at: http://sedici.unlp.edu.ar/handle/10915/35601
Article
e-colabora; vol. 1, no. 2
Since its creation in 2003, the Intellectual Creation Dissemination Service (SeDiCI) has faced various difficulties that have directly or indirectly affected its development and growth. This work presents some of these experiences, describing in each case the problem, its context, and the courses of action taken to overcome it. It also describes some of the current research and development lines, aimed at expanding and improving the services provided to the user community. The main purpose of this work is to share the experience gained, in the hope that it will be useful to institutions that are in the process of creating their own repositories.
Full record at: http://sedici.unlp.edu.ar/handle/10915/5527
Technical report
America Learning & Media; Edition 027
Celsius is the software used by ISTEC members participating in the LibLink initiative to manage their users' requests for bibliographic material, handle provision requests from other participating institutions, facilitate document exchange, and generate statistics that make the exchange transparent and allow the quality of the participants' work to be evaluated. The software is developed by UNLP, and its first version dates from 2001. Celsius is offered free of charge to all participating institutions, which also receive up-to-date documentation and personalized assistance for installation and maintenance, for installing updates, and for training the team of people who will use the tool at each institution. The main feature of the Celsius3 project is the centralized management of all the Celsius instances of LibLink members. This implies, on the one hand, creating instances as new members join and, on the other, centralizing all existing Celsius 1.x and 2.x installations, which in turn requires migrating and normalizing their respective databases. It should be noted that, although the goal is a single centralized installation of the platform, institutions will still have their own Celsius instance: the users, requests, administrators, communications, and statistics of each instance will be managed by each institution independently of the rest. The platform is also designed so that institutions can keep their current Celsius access domains.
Full record at: http://sedici.unlp.edu.ar/handle/10915/34504
Conference object
PKP International Scholarly Publishing Conferences (Mexico, 2013)
A primary objective of the Universidad Nacional de La Plata is the dissemination of all the knowledge generated within the institution, in order to return to society part of the effort invested in the public university. To achieve this goal, several initiatives have been launched: management actions, the creation of new services, and the implementation of new research and development lines to strengthen those services. The Servicio de Difusión de la Creación Intelectual (SEDICI) is UNLP's central institutional repository, and it currently holds every type of scientific and academic material produced at the university, including journal articles, conference papers, postgraduate theses and undergraduate dissertations, regulations and ordinances, books and e-books, audio documents, educational materials, and photographs of museum pieces, among others. This institutional repository, the largest in Argentina and one of the main ones in Latin America, plays a central role in disseminating UNLP's intellectual production and in coordinating with other services and developments, notably the UNLP journal portal, supported by the Open Journal Systems software, and the UNLP conference portal, supported by the Open Conference Systems software. This work details the mechanisms implemented at the Universidad Nacional de La Plata to facilitate the interaction between its institutional repository SEDICI, built on the DSpace software, and these portals, avoiding duplicated effort and maximizing the open dissemination of knowledge. These mechanisms include technological developments such as the use of several protocols (RSS/Atom, SWORD, OAI-PMH) to establish a unified communication path between the systems, and the development of plugins that export information from the academic production management tools in a format understandable by other systems. All of this also involves establishing workflows between the teams behind the different services, in order to obtain a unified mechanism covering the methods and tools needed to export the data and incorporate it into the repository. Finally, this work includes other efforts made both by SEDICI and by the university presidency to ensure the preservation of all intellectual production and to provide new mechanisms that maximize the visibility of these scientific and academic documents. This involves other actors and offices of the institution, such as the university press (EDULP), the Distance Education Directorate (EAD), and the Rad
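As a sketch of the OAI-PMH side of this interoperability, harvesting Dublin Core records takes a single ListRecords request; the endpoint URL shown is illustrative, not guaranteed to match the repository's current configuration:

```python
import urllib.request
import xml.etree.ElementTree as ET

# Sketch: fetch one page of Dublin Core records over OAI-PMH and print
# the titles. The endpoint URL is an assumption for illustration.
ENDPOINT = "http://sedici.unlp.edu.ar/oai/request"
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

url = ENDPOINT + "?verb=ListRecords&metadataPrefix=oai_dc"
with urllib.request.urlopen(url) as response:
    tree = ET.parse(response)

for record in tree.iter(OAI + "record"):
    title = record.find(".//" + DC + "title")
    if title is not None:
        print(title.text)
```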
Full record at: http://sedici.unlp.edu.ar/handle/10915/27406
Conference object
XX Asamblea General de ISTEC (Puebla, México, 2014)
Summary of the presentation:
Part 1 - Basic concepts: repository, interoperability, preservation, guidelines, projects
Part 2 - Preservation metadata
Part 3 - Preservation guidelines: PREMIS, the PREMIS data model, METS, other metadata schemas and further preservation possibilities
Part 4 - OAIS
Part 5 - DSpace: data model, OAIS in DSpace
LibLink panel.
Full record at: http://sedici.unlp.edu.ar/handle/10915/34889
Conference object
XX Asamblea General de ISTEC (Puebla, México, 2014)
Summary:
- Technological innovation
- Human resources training
- Service quality
- Integration with other initiatives
- Administration and management
LibLink closing session.
Full record at: http://sedici.unlp.edu.ar/handle/10915/34941
Conference object
III Conferencia Internacional de Biblioteca Digital y Educación a Distancia
This paper presents a harvesting platform designed to relate and unify information available in different places on the Web, each following different conventions, in order to create a thematic repository that users can browse. The platform will be used at the Servicio de Difusión de la Creación Intelectual (SeDiCI) and combines ontologies and thesauri to provide better classified information.
Information is currently scattered across web resources, and traditional search engines return ranked lists to the user without providing any semantic relation between documents. Users spend a great deal of time linking documents to one another to find out which ones address the complete problem domain; only after locating the similarities and differences between information fragments can these be transferred to their work and used to create new knowledge.
The proposed platform separates the operating modules from the different domains of interest (topics) to allow its use in different knowledge areas. The development includes two agents that traverse the URLs stored in a database (one responsible for populating an ontology and another for obtaining related URLs), and a module capable of recognizing marked-up pages, interpreting the tags, and providing the rules to extract the information and save it in an RDF file; after this stage, a homogenization step is applied, and the transformed information is classified according to a domain ontology.
The platform makes the processes of automatic extraction and search of information more efficient across heterogeneous sources that represent the same concepts following different conventions.
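A minimal sketch of the RDF-output stage described above, using the rdflib library; the namespace and resource URLs are illustrative, not the platform's actual vocabulary:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, RDF

# Sketch: information extracted from a marked-up page is written out as
# RDF triples, to be classified later against a domain ontology.
EX = Namespace("http://example.org/topic/")  # hypothetical domain namespace

g = Graph()
doc = URIRef("http://example.org/resource/42")  # hypothetical resource
g.add((doc, RDF.type, EX.Document))
g.add((doc, DC.title, Literal("Harvested page title")))
g.add((doc, DC.subject, EX.DigitalLibraries))

g.serialize(destination="extracted.rdf", format="xml")  # the RDF file
```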
Full record at: http://sedici.unlp.edu.ar/handle/10915/5555
Conference object
III Simposio Internacional de Bibliotecas Digitales (São Paulo, Brazil)
The recognition of handwritten text is part of the initiatives aimed at preserving the cultural heritage held in libraries and archives, which are rich in documents and even in handwritten cards accompanying incunabula. This work is the starting point of a research and development project oriented to the digitization and recognition of handwritten material, and the paper presented here discusses different algorithms used in a first stage devoted to cleaning noise from the image, improving it before character recognition begins. Since PrEBi-SeDiCI is part of library networks that exchange scanned documents, this development has also been used to improve the images of exchanged documents that showed common digitization problems: borders, impurities, skew, and so on; although this is not the goal of this research, it is a useful byproduct in the context of library consortium exchanges. For the digitization and recognition of handwritten text to be efficient, it must be preceded by a preprocessing stage that includes thresholding, noise cleaning, thinning, baseline straightening, and image segmentation, among other steps. Each of these steps reduces the variability that is harmful when recognizing handwritten text (noise, random gray levels, character slant, areas with more or less ink), thus increasing the probability of recognizing the text correctly. This work considers two image-thinning methods, implements them, and finally evaluates them, drawing conclusions about efficiency, speed, and requirements, as well as ideas for future implementations. The first part of the document presents some definitions related to the methods used; then the results obtained by applying the proposed theories to a common set of images are shown; finally, some ideas for optimizing the chosen algorithms are presented.
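A brief sketch of the first preprocessing steps named above (Otsu thresholding, noise cleaning, thinning), using OpenCV and scikit-image; the file names are illustrative, and the paper's own thinning algorithms are not reproduced here:

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

# Sketch of a handwriting preprocessing front end. "manuscript.png" is a
# placeholder for a scanned page.
gray = cv2.imread("manuscript.png", cv2.IMREAD_GRAYSCALE)

# Otsu picks the binarization threshold automatically; ink becomes white.
_, binary = cv2.threshold(gray, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Morphological opening removes isolated specks (scanning impurities).
kernel = np.ones((3, 3), np.uint8)
clean = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

# Thin strokes down to one-pixel skeletons before segmentation/recognition.
skeleton = skeletonize(clean > 0)
cv2.imwrite("skeleton.png", (skeleton * 255).astype(np.uint8))
```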
Full record at: http://sedici.unlp.edu.ar/handle/10915/5534
Conference object
III Conferencia de Bibliotecas y Repositorios Digitales de América Latina (BIREDIAL) y VIII Simposio Internacional de Bibliotecas Digitales (SIBD) (Costa Rica, 2013)
This work describes two ways of extending DSpace's curation module. First, it describes a set of curation tasks aimed at analyzing and reporting different aspects of data quality, and at providing additional support for preservation tasks in the repository through integrity checks and the generation of new metadata. Second, it proposes modifying the curation task execution strategy provided by DSpace, in order to minimize its impact on application performance and to make the criteria for selecting the resources to process more flexible.
The curation tasks considered in this work are:
● checking of web links to documents hosted on servers outside the repository (configurable);
● checking of metadata connected to authorities (or controlled vocabularies) inside or outside the repository, to verify data integrity;
● checking of the files uploaded to the repository, ensuring that every resource has an associated file complying with the repository's rules and policies;
● control of mandatory metadata, according to document type;
● control of metadata domains, according to primitive types such as date, number, text, etc.;
● generation of preservation metadata from the files associated with the resources (e.g. the software with which the file was created and its version, the format version, the compression level used, etc.);
● testing and subsequent reporting of resources based on logical conditions over metadata and files, using a simple expression language.
Currently, curation tasks run over all the items of a collection, a community, or even the entire repository, sequentially and without interruption. This selection and execution strategy places a heavy load on the server hosting the repository for the entire running time of the curation processes, degrading its performance. Moreover, when more than one task is included in the same execution order, the tasks run sequentially, that is, a task cannot start until the previous task has completely finished. Hence, this work proposes a new strategy for selecting the resources to process (illustrated in the sketch after this list) and two new curation task execution strategies:
● selection of the resources to process based on a configurable logical expression (e.g. selecting resources according to the value of their dc.type metadata field);
● strateg
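A minimal sketch of the proposed expression-based selection, with illustrative data structures rather than DSpace's actual curation API:

```python
# Sketch: instead of walking a whole collection, items are filtered by a
# configurable logical expression over their metadata before any curation
# task runs. Items and fields here are invented for illustration.
items = [
    {"handle": "10915/1", "dc.type": "Articulo", "dc.date": "2012"},
    {"handle": "10915/2", "dc.type": "Tesis",    "dc.date": "2011"},
]

def matches(item, expression):
    # expression: list of (field, expected-value) pairs, all of which must hold.
    return all(item.get(field) == value for field, value in expression)

selection = [i for i in items if matches(i, [("dc.type", "Articulo")])]
for item in selection:
    print("would run curation tasks on", item["handle"])
```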
Full record at: http://sedici.unlp.edu.ar/handle/10915/30524
Conference object
III Conferencia de Bibliotecas y Repositorios Digitales de América Latina (BIREDIAL) y VIII Simposio Internacional de Bibliotecas Digitales (SIBD) (Costa Rica, 2013)
During 2012, the UNLP conference portal team developed a plugin for the OCS software that automates the generation of conference proceedings, while also considerably simplifying the integration of those proceedings with SEDICI. In general terms, this plugin can export information from a conference in different formats, as required in each case. Several export formats have been implemented, such as CSV (comma-separated values), HTML, and even text documents (Word).
These three exports take the data of all the papers (title, authors, abstract, institutions, etc.) and combine them into a single file that each conference manager can download. The export of all the articles of a conference into a single .zip file was also implemented, and the names of the files inside the .zip were changed to adopt the title of the paper they correspond to. In this way, managers can generate books of abstracts very simply, and the institutional repository can ingest all the papers of each conference much more quickly.
Although only some export formats have been implemented so far, the plugin was designed from the beginning to be extended to new formats, through an object-based design that implements the design pattern known as Template Method.
Esta herramienta fue puesta a disponibilidad para usuarios en varios congresos del Portal de Congresos de la UNLP a modo de pruebas y los resultados fueron satisfactorios. En la actualidad, la herramienta ya está siendo utilizada para incorporar las producciones de los congresos al repositorio institucional SEDICI.
Aún cuando este desarrollo permitió simplificar la generación de actas de resúmenes de congresos, puede optimizarse más la incorporación de los trabajos al repositorio institucional, a fin de evitar la carga manual de los mismos. Por este motivo, el equipo del Portal de Congresos se encuentra evaluando diversas tecnologías de interoperabilidad entre sistemas. Entre ellas se destaca el protocolo sword en su versión 2 como una opción muy prometedora y relativamente simple. Este protocolo permite a los gestores de los congresos realizar el depósito automático o semiautomático de los trabajos desde el Portal de Congresos en el repositorio institucional, soportado por DSpace. Dicho depósito puede realizarse sobre una colección privada, lo que permitirá que el equipo de SEDICI revise los trabajos antes de confirmar su incorporación definitiva al repositorio.
See full record at: http://sedici.unlp.edu.ar/handle/10915/30522
Conference object
International Conference on Engineering Education ICEE-2011 (Ireland)
The Ibero-American Science and Technology Education Consortium (ISTEC) is a non-profit organization comprised of educational, research, industrial, and multilateral organizations throughout the Americas and the Iberian Peninsula. The Consortium was established in 1990 to foster scientific, engineering, and technology education, joint international research and development efforts among its members, and to provide a cost-effective vehicle for the application and transfer of technology. After twenty years, ISTEC has established a presence in the region, but it has also experienced problems interacting with different cultures and interests. During 2010 it underwent important organizational changes, and major efforts were made to accomplish new goals and to share worldwide expertise, to facilitate distributed problem solving, and to create the local critical mass needed for the development of regional projects in areas such as continuing education, libraries and repositories, globalization of the culture of quality and accreditation standards, R&D, intellectual property development, capital acquisition, and social responsibility, among others. ISTEC continues to be dedicated to the improvement of Science, Engineering, Technology and Math education, R&D, and entrepreneurship. The Consortium will foster technology transfer and the development of social and business entrepreneurs through the implementation of a global network that aims to reach other countries in the world, creating clusters of businesses and institutions that share common interests, assisting in the establishment of strategic alliances and joint ventures, and promoting collaborative partnerships in general.
See full record at: http://sedici.unlp.edu.ar/handle/10915/27159
Conference object
BIREDIAL - Conferencia Internacional Acceso Abierto, Comunicación Científica y Preservación Digital
Digital preservation is defined as a set of political and strategic practices and concrete actions deployed to ensure long-term access to digital objects. The development of institutional repositories, the growth of their content and the awareness that institutional activity is increasingly channeled through digital media require the deployment of preservation activities to sustain the evolution of repositories. This work presents the ISO 14721 standard (OAIS), PREMIS metadata and preservation guidelines, along with the METS schema, and finally explores the metadata of schemas widely used in the normal operation of a repository (MODS, DC), pointing out those useful for preservation purposes and proposing their reuse. A second, practical purpose is to show the preservation tools offered by the DSpace platform that supports the SeDiCI - UNLP repository.
See full record at: http://sedici.unlp.edu.ar/handle/10915/26045
Conference object
XX Asamblea General de ISTEC (México, 2014)
Paper presented at the XX ISTEC General Assembly (Puebla, Mexico), describing the various interoperability mechanisms implemented between the institutional repository (SEDICI) and several online services of the Universidad Nacional de La Plata.
Venue: INAOE (Puebla, Mexico).
Speakers (via videoconference): Gonzalo Luján Villarreal and Franco Agustín Terruzzi.
See full record at: http://sedici.unlp.edu.ar/handle/10915/34200
Article
Jornada Virtual de Acceso Abierto
The Servicio de Difusión de la Creación Intelectual (SeDiCI) is the institutional repository of the Universidad Nacional de La Plata (UNLP), created in 2003 with the goal of giving visibility to the academic output produced at this university, on the understanding that open access enables a larger number of citations and therefore a greater impact, and in keeping with the fundamental role of a public institution in socializing knowledge. SeDiCI is currently ranked among the top 10 digital repositories in Latin America according to Webometrics, and holds the first position in Argentina as an institutional repository. This work presents some of the main features and services offered by the portal, from its foundation to the present.
Published on CD-ROM in: Jornada Virtual de Acceso Abierto (Argentina 2010).
See full record at: http://sedici.unlp.edu.ar/handle/10915/27160
Conference object
IV Simposio Internacional de Bibliotecas Digitales (Málaga, Spain)
The aim of this paper is to present a progress report, together with its results, as well as the impact of the implementation of SRU/W on the SeDiCI local document repository. It also studies, based on the existing literature, the degree of complexity of implementing an SRU/W gateway providing access to the referential information obtained through OAI, and the expected degree of impact on the OPACs of the Universidad Nacional de La Plata.
See full record at: http://sedici.unlp.edu.ar/handle/10915/5562
Conference object
III Conferencia de Bibliotecas y Repositorios Digitales de América Latina (BIREDIAL) y VIII Simposio Internacional de Bibliotecas Digitales (SIBD) (Costa Rica, 2013)
Created in December 1990, ISTEC is an international non-profit organization composed of more than 100 educational and research institutions, government agencies, industry and multilateral organizations in the Americas and the Iberian Peninsula.
See full record at: http://sedici.unlp.edu.ar/handle/10915/30511
Open modeling and simulation framework for evolutive analysis
Eng. De Giusti, Marisa Raquel (1); Lic. Lira, Ariel Jorge (2); Lic. Villarreal, Gonzalo Luján (3)
(1) Comisión de Investigaciones Científicas de la Pcia. de Buenos Aires (CICPBA)
(1,2,3) Proyecto de Enlace de Bibliotecas (PrEBi) - Universidad Nacional de La Plata, Argentina
(3) Consejo Nacional de Investigaciones Técnicas y Científicas (CONICET)
Index Terms: Discrete event simulation, storage of runs, statistical and evolutive analysis, simulation teaching.
Simulation is the process of executing a model, that is a representation of a system with enough
detail to describe it but not too excessive. This model has a set of entities, an internal state, a set of
input variables that can be controlled and others that cannot, a list of processes that bind these input
variables with the entities and one or more output values, which result from the execution of events.
Running a model is totally useless if it can not be analyzed, which means to study all interactions
among input variables, model entities and their weight in the values of the output variables. In this
work we consider Discrete Event Simulation, which means that the status of the system variables
being simulated change in a countable set of instants, finite or countable infinite.
Existing GPSS implementations and IDE's provide a wide range of tools for analysis of the results,
for generation and execution of experiments and to perform complex analysis (such as Analysis of
Variance, screening and so on). This is usually enough for most common analysis, but more detailed
information and much more specific analysis are often required.
In addition, teaching this kind of simulation languages is always a challenge, since the way it
executes the models and the abstraction level that their entities achieve is totally different compared
to general purpose programming languages, well known by most students of the area. And this is
usually hard for students to understand how everything works underground.
We have developed an open source simulation framework that implements a subset of entities of
GPSS language, which can be used for students to improve the understanding of these entities. This
tool has also the ability to store all entities of simulations in every single simulation time, which is
very useful for debugging simulations, but also for having a detailed history of all entities
(permanents and temporary) in the simulations, knowing exactly how they have behaved in every
simulation time.
In this paper we provide an overview of this development, making special stress on the simulation
model design and on the persistence of simulation entities, which are the basis that students and
researchers of the area need in order to extend the model, adapt it to their needs or add all analysis
tools to the framework.
(1) marisa.degiusti@sedici.unlp.edu.ar
(2) alira@sedici.unlp.edu.ar
(3) gonetil@sedici.unlp.edu.ar
1. Introduction.
The purpose of any simulation is not merely to run (execute) a system on a computer, but to gather as much information as possible, perform every analysis we can imagine and thereby get to know in depth how the real system behaves, detect its problems and improve it, making it more efficient, faster and more reliable. In order to achieve these goals, the analyst must at least:
● study the real system to be simulated
● extract both the system variables (controllable and uncontrollable) and the internal processes from the system
● dump this information into a simulation program, reflecting data and behavior in a way the computer understands, creating an abstract, mathematical model of the real system
● somehow run the model
● extract as much information as possible from the run, adjust the model and keep running it
Block programming languages offer an important abstraction level that allows the programmer to map objects from the real system to entities of the simulation in an almost transparent way, providing with each object a set of functions and facilities, which lets the programmer focus on designing the model and forget about implementation details. This is particularly useful when looking for quick solutions, since the time savings can be very significant, and thus the cost required for modeling, simulating and experimenting with systems goes down considerably.
Simulation languages usually collect lots of data along simulation runs, and most of them offer tools for displaying this information, using either simple text reports or statistics graphs, tables and functions. The GPSS programming language implementations on which this work is based are no exception.
The common GPSS model is designed to store and calculate statistics while the model is being run, and to let the analyst access them once the simulation has finished. This way, it is very easy to know how most permanent entities (facilities, storages, logic switches, and so on) have behaved according to the model definition. This is very useful to detect bottlenecks in the system, useless objects or probable failure points, among many other uses.
In addition to all the information given once the simulation has ended, it is very important to know how our mathematical model actually reached this final state and why it happened this way. This kind of analysis is far too complicated in discrete models executed in GPSS-like languages, especially in medium or large models with many entities, transactions and processes inside. Thousands of transactions may have been created, moved and destroyed, and it is really hard to track them back individually; the main problem here is that the analyst only gets the information once everything has finished. A few GPSS implementations allow some sort of breakpoints (STOP/STEP commands), but running large models step by step is not really practical.
In this work we present an incipient open simulation framework that stores all simulation objects in a database, in order to know how they have evolved along the simulation time and what has happened at each clock instant. This allows the analyst, for example, to pick any entity and know all the changes it has undergone at every simulation time, which entities it has interacted with and what its lifetime inside the model was. In addition, the analyst may select a range of time and see which entities existed at that moment. We can go even further, and consider running and persisting many simulations with different parameters and then comparing information from the different simulation runs.
As we have mentioned, it is an open and incipient framework. It is open because it has been released under the GPL license, and incipient because it implements, up to now, the basic GPSS model and entities, which permits running simple simulations; it is being extended in order to incorporate other entities, completing the whole model or even enhancing it with new entities.
2. The model.
For the development of this framework a subset of GPSS entities has been selected, which allows us to execute simple simulations but requires the execution model to be complete enough. Among these entities we have Transaction, Facility and, of course, the Simulation entity itself. The framework is written in the Java language, using all the power of object orientation and encouraging programmers to improve it, since the language is well known, powerful, portable and very easy to learn.
Before going on with the description of the model, we should make the following concept clear: a discrete simulation starts at a virtual zero time, which is controlled by an internal system clock called the Simulation Clock. Once the simulation has started, it advances this internal clock in discrete, ordered and finite time instants, until the ending condition is reached or something goes wrong; whether the simulation ended because of an unexpected situation or everything went fine, we have a final clock, set to the value the clock had when the simulation ended.
The importance of this concept lies in the fact that everything that happens in a simulation happens with the clock set to a determined value. And here is the essence of this framework: given a time instant or moment in a simulation, we want to know what has happened, what the simulation was like before, and how it has moved forward.
In any simulation, many entities interact with each other in very different ways, depending on the class of entity (permanent, temporary), the moment they are created (at the beginning of the simulation, when first referred to, when the system requires them) and the number of entities of the same class that exist (only one, like the Simulation or Clock entities; a fixed amount, like Facilities; or a dynamic amount, such as transactions). All entities form a cyclic composition graph, with the Simulation entity as the root from which all other nodes can be visited (Fig. 1), making it a fully connected graph.
Figure 1: Simulation graph
The Simulation entity is in charge of the execution of commands and the generation of transactions; it is also the task of this entity to invoke the transaction scheduler, which must decide which transaction will be the next active transaction, and to manage the simulation clock, detecting when it must be updated and performing all the required tasks on each clock change; this implied the addition of the Transaction Scheduler and System Clock entities to the model.
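As a rough illustration of these responsibilities, the following sketch (in Java, the framework's language) shows what such a main loop might look like; all names here (SimulationLoopSketch, Transaction, moveTime, and so on) are illustrative assumptions, not the framework's actual API.

    import java.util.ArrayDeque;
    import java.util.Comparator;
    import java.util.Deque;
    import java.util.PriorityQueue;

    // Hypothetical sketch: the Simulation asks the scheduler for the next active
    // transaction and, when none can move, advances the System Clock.
    public class SimulationLoopSketch {
        static class Transaction {
            final long id;
            long moveTime;                 // clock value at which it becomes movable
            Transaction(long id, long moveTime) { this.id = id; this.moveTime = moveTime; }
        }

        private long clock = 0;            // the Simulation Clock
        private final Deque<Transaction> currentEvents = new ArrayDeque<>();    // CEC
        private final PriorityQueue<Transaction> futureEvents =
                new PriorityQueue<>(Comparator.comparingLong(t -> t.moveTime));  // FEC

        public void run(long endTime) {
            while (clock <= endTime) {
                Transaction active = currentEvents.poll(); // scheduler picks next active
                if (active != null) {
                    advance(active);                       // move it through its blocks
                } else if (!futureEvents.isEmpty()) {
                    persistSnapshot();                     // picture of the model (sec. 3)
                    clock = futureEvents.peek().moveTime;  // clock change
                    while (!futureEvents.isEmpty() && futureEvents.peek().moveTime == clock)
                        currentEvents.add(futureEvents.poll());
                } else {
                    break;                                 // nothing left to schedule
                }
            }
        }

        private void advance(Transaction t) { /* execute the blocks the transaction enters */ }
        private void persistSnapshot()      { /* clone the graph and enqueue it (sec. 3) */ }
    }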
Entities are dynamically created while the simulation runs, which makes them hard to locate unless an extra system catalogue is used, at the cost of more memory. To address this, a separate entity has been added which deals with the resolution of references to entities, either by name or by identifier; this entity, named the Entity References Manager (ERM), is able to efficiently locate any entity at any time, anywhere in the simulation.
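A minimal sketch of what such a references manager might look like, assuming hypothetical names (EntityReferencesManager, register, lookup), is the following: two hash indexes give average constant-time resolution without walking the whole graph.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch of an Entity References Manager: one index by numeric
    // identifier and one by name. Names and signatures are assumptions.
    public class EntityReferencesManager {
        private final Map<Long, Object> byId = new HashMap<>();
        private final Map<String, Object> byName = new HashMap<>();

        public void register(long id, String name, Object entity) {
            byId.put(id, entity);
            if (name != null) byName.put(name, entity);
        }
        public Object lookup(long id)      { return byId.get(id); }   // O(1) on average
        public Object lookup(String name)  { return byName.get(name); }
        public void unregister(long id, String name) {                // e.g. destroyed transactions
            byId.remove(id);
            if (name != null) byName.remove(name);
        }
    }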
The Transaction Scheduler entity is extremely complex, since it must interact, first of all, with the two main chains of a simulation: the current events chain and the future events chain. But this entity must also be able to access the transactions held by the other chains of every existing entity whenever needed; for example, if a transaction is trying to release a facility, the transaction scheduler must select which transaction will be the next owner of the facility being released, which means analyzing the current events chain and the facility's delay, preempt and interrupt chains. Additionally, the scheduler must check whether this transaction must be activated, queued in the CEC, or left where it is until another event occurs.
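The following hedged sketch illustrates this kind of next-owner selection; the chains shown and the order in which they are inspected are simplified assumptions, since the real GPSS ownership rules are considerably more involved.

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Simplified sketch of next-owner selection on a facility release.
    class FacilitySketch {
        final Deque<Object> interruptChain = new ArrayDeque<>();
        final Deque<Object> preemptChain = new ArrayDeque<>();
        final Deque<Object> delayChain = new ArrayDeque<>();

        // Called by the scheduler when the current owner releases the facility.
        Object selectNextOwner() {
            if (!interruptChain.isEmpty()) return interruptChain.poll();
            if (!preemptChain.isEmpty())   return preemptChain.poll();
            if (!delayChain.isEmpty())     return delayChain.poll(); // FIFO here
            return null;                                             // facility becomes idle
        }
    }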
The movement of transactions among chains is also a difficult task, since it implies not only the exchange of hundreds or thousands of transactions from chain to chain, but also selecting the transactions to exchange and choosing the right moment to make this operation (a change in the system clock, delay condition testing, and others). This task is performed by both the Scheduler and the Simulation entities, working together in order to keep the model's integrity.
As mentioned before, the Facility entity has been included in this first version of the framework. The decision to implement this entity was not taken randomly; it involves a wide set of functions and entities, and has an intricate method for selecting transactions according to the internal state of each of its chains and the way it is being accessed. With this entity, analysts may represent many different real-world objects, making models more useful; besides, this entity has set the basis for further implementations of other entities with intricate chain orderings, such as storages or user chains. The hierarchical organization of all the entities in the Java model permits programmers to add new entities and reuse all the functionality already implemented; for example, if a new entity has a special behavior and needs a FIFO, priority or BDT queue, it can reuse the chain classes already present in the hierarchy instead of implementing them from scratch.
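As a sketch of this kind of reuse, a hypothetical new entity might extend a common base class and delegate to an existing FIFO chain; every name below is an assumption made for illustration, not part of the framework.

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Hypothetical base class of the entity hierarchy.
    abstract class SimulationEntity {
        final long id;
        SimulationEntity(long id) { this.id = id; }
        abstract void onClockChange(long newClock);   // hook called by the Simulation
    }

    // A new entity reusing FIFO chain behavior instead of reimplementing it.
    class UserChainSketch extends SimulationEntity {
        private final Queue<Object> fifoChain = new ArrayDeque<>();

        UserChainSketch(long id) { super(id); }

        void link(Object transaction) { fifoChain.add(transaction); }
        Object unlink()               { return fifoChain.poll(); }

        @Override
        void onClockChange(long newClock) {
            // the special behavior of the new entity goes here
        }
    }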
3. Simulation persistence.
In order to store simulations on disk, a relational database engine (MySQL) together with a Java ORM (JPOX) has been used. As mentioned above, we are dealing with discrete simulations, hence we have a countable set of simulation instants at which many events have occurred and entities have been altered. So we decided to take a picture of the whole simulation, including all entities with their internal state, and to store this picture exactly as it is. The storage process happens every time the simulation clock is about to be updated, just before the update actually occurs, in order to persist the simulation in the state it had before the clock changes.
Storing a simulation at a determined time t implies storing all the entities that take part in the simulation at that time t. In order to collect all the entities, the composition graph is walked, starting at the node representing the Simulation entity and moving along all the simulation chains, the entity lists and the chains of every entity. Since blocks are also entities, they must be persisted too.
Every time the graph is walked, all the persistible information is collected; once the walk is finished, the simulation clock is updated and the execution goes on. Ideally, all this data should then be persisted to disk and the system clock updated in order to continue with the run. The problem is that writing to disk is far too slow compared to main memory and processor speed, and we cannot let the simulation stall until the objects are stored.
Our solution to this problem consists of a ready-to-store queue and a low-priority thread in charge of persistence (Fig. 2). All data ready to persist is queued and remains there until it is actually persisted, and the simulation continues running without waiting for the data to be written to disk. To persist objects, we have included a single thread which runs together with the rest of the simulation. This thread picks up ready simulations from the persist queue and deals with the DAO objects and all the database machinery (connections, queries, transactions and so on). It is important to remark that this extra thread has a lower priority than the simulation run; we have designed it this way because we do not want the simulation to be delayed by anything.
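A minimal sketch of this producer/consumer scheme, assuming hypothetical names (SnapshotPersister, enqueue) and using the standard java.util.concurrent machinery, could look as follows.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Hedged sketch of Fig. 2: the simulation enqueues snapshots; a low-priority
    // daemon thread drains the queue and hands each one to the persistence layer.
    public class SnapshotPersister {
        private final BlockingQueue<Object> readyToStore = new LinkedBlockingQueue<>();

        public SnapshotPersister() {
            Thread writer = new Thread(() -> {
                try {
                    while (true) {
                        Object snapshot = readyToStore.take(); // blocks while the queue is empty
                        writeToDatabase(snapshot);             // ORM / DAO work happens here
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();        // allow clean shutdown
                }
            }, "snapshot-persister");
            writer.setPriority(Thread.MIN_PRIORITY);           // never delay the simulation
            writer.setDaemon(true);
            writer.start();
        }

        // Called by the simulation just before each clock update.
        public void enqueue(Object snapshot) { readyToStore.add(snapshot); }

        private void writeToDatabase(Object snapshot) { /* JPOX/JDO calls would go here */ }
    }

The blocking queue decouples the two threads: the simulation never waits on enqueue, while take() parks the writer whenever there is nothing to store.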
This solution makes the results available immediately when the simulation ends, while the persistence to the database is carried out in the background. The user may start analyzing data or adapting the model while the simulation is still being stored, improving user experience and efficiency.
3.1 Data collection.
The whole application is written in Java, which means it uses all the well-known object-oriented concepts (inheritance, hierarchy, OID, etc.). When saving these objects to the database, we must make sure that they are not updated afterwards, since we would lose the state the objects had before the update. One solution would be a versioning scheme where only new or changed data is stored; the main problem with versions is that it would be really difficult to retrieve a simulation at an arbitrary system clock. It would be efficient on disk, but slow at data retrieval.
Since hard disks are really huge these days, and computers have lots of memory, it is not expensive to duplicate data even when it has not changed. With this idea in mind, our solution has been both simple and efficient: the whole simulation graph is cloned on every change of the system clock, copying every node that takes part in the simulation (transactions, facilities, blocks, chains, and everything else). Once the cloning process has ended, the cloned simulation is queued for storage and, as said above, the simulation can move on.
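Since the composition graph is cyclic, a naive recursive copy would never terminate; a sketch of a cycle-safe deep copy, with all names assumed for illustration, is shown below.

    import java.util.ArrayList;
    import java.util.IdentityHashMap;
    import java.util.List;
    import java.util.Map;

    // Hedged sketch of the snapshot step: walk the (cyclic) composition graph
    // from the Simulation root and deep-copy every reachable node, using an
    // identity map so each node, and each cycle, is copied only once.
    class GraphNode {
        long id;                                   // stable across snapshots
        List<GraphNode> links = new ArrayList<>(); // chains, entity lists, blocks...

        GraphNode(long id) { this.id = id; }

        GraphNode deepCopy(Map<GraphNode, GraphNode> seen) {
            GraphNode copy = seen.get(this);
            if (copy != null) return copy;         // already cloned: close the cycle
            copy = new GraphNode(id);
            seen.put(this, copy);
            for (GraphNode n : links)
                copy.links.add(n.deepCopy(seen));
            return copy;
        }
    }

    // Usage, just before each clock update:
    //   GraphNode snapshot = simulationRoot.deepCopy(new IdentityHashMap<>());
    //   persister.enqueue(snapshot);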
Figure 2: Producer/Consumer scheme

4. Data recovery.
The way the data is stored permits retrieving simulations very easily; in the database there are many copies of every simulation run, where each copy differs in, at least, the state of the simulation clock. Hence, recovering a simulation at a specific time t is as simple as retrieving the Simulation object and, again, starting a chain effect that retrieves all the objects reachable from it.
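Assuming a simplified persistent Simulation class with a clock field (JDO metadata and fetch configuration omitted), the recovery step might be sketched with the standard JDO API, of which JPOX is an implementation.

    import java.util.Collection;
    import javax.jdo.PersistenceManager;
    import javax.jdo.Query;

    // The persistent class shown here is a simplified assumption; retrieving it
    // triggers the chain effect that pulls in the objects reachable from it.
    class Simulation {
        long clock;   // plus chains, entity lists, blocks... (simplified)
    }

    class SimulationLoader {
        Simulation loadAt(PersistenceManager pm, long t) {
            Query q = pm.newQuery(Simulation.class, "clock == t");
            q.declareParameters("long t");
            Collection<?> results = (Collection<?>) q.execute(t);
            return results.isEmpty() ? null : (Simulation) results.iterator().next();
        }
    }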
But being able to retrieve a simulation at a time t is not the only advantage of this tool. All entities, both permanent and temporary, have an internal identifier which never changes along the simulation. This is very useful for writing new analysis tools that pick a specific entity and all the copies generated of it; with all this information, such a tool could show exactly how the entity changed, which transactions were accessing it or were in its chains, or any other data that determines its internal state. A tool could even go beyond these ideas: we could store many simulation executions of the same model with different parameters, and then retrieve any particular entity with all its states for all simulations. This way, we can not only see how this entity has changed, but also compare these changes against all the instances of the same entity in different simulations.
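A sketch of such an entity-history query, again using the JDO API and an assumed EntityState class that holds the stable identifier, could be the following.

    import java.util.Collection;
    import javax.jdo.PersistenceManager;
    import javax.jdo.Query;

    // Snapshots keep the entity's stable identifier, so all of its stored states
    // can be fetched in clock order and across runs. EntityState and its fields
    // are assumptions for illustration.
    class EntityState {
        long entityId;   // never changes along the simulation
        long runId;      // distinguishes different persisted executions
        long clock;      // plus the cloned internal state of the entity
    }

    class EntityHistory {
        Collection<?> historyOf(PersistenceManager pm, long entityId, long runId) {
            Query q = pm.newQuery(EntityState.class, "entityId == eid && runId == rid");
            q.declareParameters("long eid, long rid");
            q.setOrdering("clock ascending");
            return (Collection<?>) q.execute(entityId, runId);
        }
    }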
5. Conclusion.
The framework is still being developed, incorporating new entities and blocks and improving the ones already developed. Besides the completion of the GPSS model, this system is a great opportunity for any programmer who wants to add their own analysis tools, since all the data of every simulation is ready to access and use; the fact that the framework is open source permits programmers to understand how the data is organized, so they can efficiently retrieve it and display it the way they want.
Another goal of this implementation is that it can be used on any personal computer, since its hardware requirements are the same as for running any modern computer program. It is remarkably important that the whole framework can be installed on computers running any operating system that supports the Java Virtual Machine (Linux, Mac OS X, MS Windows, OpenSolaris, among others).
There is also another important advantage that comes from the use of an external database system. The framework's data source is very flexible, permitting its location to be distributed across other computers; this means that one computer can be used for processing the simulation entities and another for the database engine.
Besides the benefit of having a tool with these characteristics, there is another advantage, not so obvious but very important: its use in academic environments to ease learning. Teaching simulation languages is a difficult task, since it requires students to absorb, on one side, a new way of executing sequential code, where objects enter a block, execute a special routine and leave the block, and, on the other side, the operation of many very different and, in some cases, extremely complex entities. Here there is an open source development in a widely spread programming language, which allows students of modeling and simulation to access and study, in an idiom they already understand and are used to, how these entities work, what some operations or routines actually mean, and what happens when certain functions are executed; they can even change this behavior and learn by testing and customizing the framework.
References.
[AND99] Gregory R. Andrews. Foundations of Multithreaded, Parallel, and Distributed Programming. Addison Wesley, 1999.
[DUN06] Eduardo García Dunna, Heriberto García Reyes, Leopoldo E. Cárdenas Barrón. Simulación y análisis de sistemas con ProModel. Pearson, 2006.
[JOR03] David Jordan, Craig Russell. Java Data Objects. O'Reilly Media, Inc., 2003.
[JPO08] Java Persistent Objects. http://www.jpox.org/
[LAN85] Börje Langefors. Teoría de los sistemas de información. 2nd ed., 1985.
[KIM88] Hyoung Joo Kim. Schema Versions and DAG Rearrangement Views in Object-Oriented Databases. Technical report, University of Texas at Austin, Dept. of Computer Sciences, 1988.
[KIM90] Won Kim. Introduction to Object-Oriented Databases. The MIT Press, 1990.
[MIN05] Minuteman Software. http://www.minutemansoftware.com
[SIM08] Simulation and Model-Based Design. http://www.mathworks.com/products/simulink/
[VIT86] Jeffrey Scott Vitter, Wen-chin Chen. Design and Analysis of Coalesced Hashing. Oxford University Press, USA, 1986.
[WIL66] Computer Simulation Techniques. Wiley, 1st corr. printing, 1966.