Abstract. Procedural methods are at the cutting edge of game and virtual-city generation. This paper presents a novel technique for real-time procedural scene generation that uses a truncated icosidodecahedron as its basic element together with a custom physics system, and can generate a virtual museum interactively. To this end, we have created a simple interface that lets the user build a procedural scenario from user-specified parameters such as the number of pieces, the connection mode, the seed, and constraints on the path. The scene is generated around three-dimensional edges, offering a new perspective on hypermuseums. It has its own physics system to adapt movement through the three dimensions of the scene. As a result, the user can create a procedural scene in near real-time and traverse the 3D scene with a character through a simple interactive interface.
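The abstract does not spell out the generation algorithm; as a rough, hypothetical sketch of the kind of seeded, parameter-driven path growth it describes (the function and parameter names here are ours, not the paper's), a generator might look like:

```python
import random

def generate_path(num_pieces, seed, connection_mode="linear"):
    """Grow a self-avoiding path of scene pieces on a 3D grid.

    Hypothetical sketch: a seed makes the layout reproducible, the piece
    count bounds the scene size, and the connection mode decides whether
    new pieces attach to the last piece or branch from any placed piece.
    """
    rnd = random.Random(seed)
    directions = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                  (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    path = [(0, 0, 0)]
    occupied = {path[0]}
    while len(path) < num_pieces:
        base = path[-1] if connection_mode == "linear" else rnd.choice(path)
        candidates = [tuple(b + d for b, d in zip(base, dv))
                      for dv in directions]
        free = [c for c in candidates if c not in occupied]
        if not free:
            break  # dead end: no free neighbour to attach a new piece to
        nxt = rnd.choice(free)
        path.append(nxt)
        occupied.add(nxt)
    return path

rooms = generate_path(num_pieces=10, seed=42)
```

Reusing the same seed reproduces the same layout, which matches the abstract's idea of a user-controlled seed parameter.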
Reporte 2013, a yearbook produced by the newspaper Reporte Energía.
A summary of the year's most notable news: oil and gas, electricity, mining, alternative energies, and the environment.
The edition is also available at http://bit.ly/Ed113_Anuario_Reporte_Energia
Is it your first time at Contempopránea? Unsure whether it is worth going? Will the sound system at the El Castillo bar have improved, or will you need to bring your iPod to keep the party going on Sunday? El Gallo Verde answers all these questions and many more in its Contempopránea 2011 travel guide. We put at your disposal all the experience gained over our five (intense) years as seasoned pop fans. We hope it helps.
Mortality Statistics and Rates of Brazil
http://mortalidade.inca.gov.br/
- Information available 24/7, worldwide
- Rates generated automatically as soon as the official population and mortality data become available
- Analyze specific geographical areas
- Create ICD (International Classification of Diseases) groups
- Create databases for use by Business Intelligence applications
Silicon Valley Bank and Orrick, supported by CB Insights, released this year's New York Venture Capital Almanac 2013: a useful snapshot of where the New York venture community is right now, as well as a brief summary of where we've been.
Quantum computing takes a giant leap forward from today’s technology—one that will forever alter our economic, industrial, academic, and societal landscape. This has massive implications for your customers in any industry including healthcare, energy, environmental systems, smart materials, and more. Learn how Microsoft is taking a unique revolutionary approach to quantum and how your customers can get started developing quantum solutions with the Quantum Development Kit.
Le Song, Assistant Professor, College of Computing, Georgia Institute of Tech... — MLconf
Understanding Deep Learning for Big Data: The complexity and scale of big data impose tremendous challenges for their analysis. Yet, big data also offer us great opportunities. Some nonlinear phenomena, features or relations, which are unclear or cannot be inferred reliably from small and medium data, now become clear and can be learned robustly from big data. Typically, the form of the nonlinearity is unknown to us and needs to be learned from data as well. Being able to harness the nonlinear structures in big data could allow us to tackle problems that were previously impossible, or to obtain results far better than the previous state of the art.
Nowadays, deep neural networks are the methods of choice when it comes to large-scale nonlinear learning problems. What makes deep neural networks work? Is there any general principle for tackling high-dimensional nonlinear problems that we can learn from deep neural networks? Can we design competitive or better alternatives based on such knowledge? To make progress on these questions, my machine learning group performed both theoretical and experimental analysis on existing and new deep learning architectures, investigating three crucial aspects: the usefulness of the fully connected layers, the advantage of the feature learning process, and the importance of the compositional structures. Our results point to some promising directions for future research and provide guidelines for building new deep learning models.
From the moment you open up a website in your browser multiple virtual machines (VMs) are at work. The server generating the website might use Java, your browser executes JavaScript and maybe there is some Flash content running — with everything being executed in a VM.
Virtual machines became increasingly important and popular after Google's introduction of V8. We expect our code to run fast, but let's step back for a second and see how these complicated pieces of software work. With a better understanding of how your daily ActionScript or JavaScript code is executed, you might start coding a little differently.
Join Joa and dive deep into the world of virtual machines. Learn about different garbage collection strategies and understand why these beasts behave the way they do.
1. Why was Unicord successful in Thailand? Describe the opportuni.docx — paynetawnya
1. Why was Unicord successful in Thailand? Describe the opportunities, challenges and the strategic choices taken by Unicord to overcome the institutional voids in becoming successful
2. What was the rationale for Unicord for acquiring Bumble Bee in USA? Why did Dumri persist in acquiring Bumble Bee in the midst of many bidders?
3. What caused the failure of the acquisition and the eventual collapse of Unicord?
4. Unicord is a family conglomerate founded by Dumri. Would Unicord be better positioned if it were managed by professional managers instead of centrally controlled by Dumri?
PROJECT SCHEDULING WITH PERT/CPM
********************************
*** PROJECT ACTIVITY LIST ***
IMMEDIATE OPTIMISTIC MOST PROBABLE PESSIMISTIC
ACTIVITY PREDECESSORS TIME TIME TIME
------------------------------------------------------------------------
A - 1.0 5.0 12.0
B - 1.0 1.5 5.0
C A 2.0 3.0 4.0
D A 3.0 4.0 11.0
E A 2.0 3.0 4.0
F C 1.5 2.0 2.5
G D 1.5 3.0 4.5
H B,E 2.5 3.5 7.5
I H 1.5 2.0 2.5
J F,G,I 1.0 2.0 3.0
------------------------------------------------------------------------
EXPECTED TIMES AND VARIANCES FOR ACTIVITIES
ACTIVITY EXPECTED TIME VARIANCE
-------------------------------------------
A 5.5 3.36
B 2.0 0.44
C 3.0 0.11
D 5.0 1.78
E 3.0 0.11
F 2.0 0.03
G 3.0 0.25
H 4.0 0.69
I 2.0 0.03
J 2.0 0.11
-------------------------------------------
*** ACTIVITY SCHEDULE ***
EARLIEST LATEST EARLIEST LATEST CRITICAL
ACTIVITY START START FINISH FINISH SLACK ACTIVITY
------------------------------------------------------------------------
A 0.0 0.0 5.5 5.5 0.0 YES
B 0.0 6.5 ...
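The expected times and variances in the table above follow from the standard PERT three-point formulas, t = (a + 4m + b) / 6 and var = ((b - a) / 6)^2. A quick Python check reproduces the table's numbers:

```python
# PERT three-point estimates for each activity, copied from the
# activity list above: (a = optimistic, m = most probable, b = pessimistic).
activities = {
    "A": (1.0, 5.0, 12.0), "B": (1.0, 1.5, 5.0), "C": (2.0, 3.0, 4.0),
    "D": (3.0, 4.0, 11.0), "E": (2.0, 3.0, 4.0), "F": (1.5, 2.0, 2.5),
    "G": (1.5, 3.0, 4.5), "H": (2.5, 3.5, 7.5), "I": (1.5, 2.0, 2.5),
    "J": (1.0, 2.0, 3.0),
}

def pert(a, m, b):
    """Expected time t = (a + 4m + b) / 6, variance = ((b - a) / 6) ** 2."""
    return (a + 4 * m + b) / 6, ((b - a) / 6) ** 2

for name, (a, m, b) in activities.items():
    t, var = pert(a, m, b)
    print(f"{name}: expected = {t:.1f}, variance = {var:.2f}")
```

For activity A, for instance, t = (1 + 20 + 12) / 6 = 5.5 and var = (11 / 6)^2 ≈ 3.36, matching the table.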
Most high-level programming languages run on top of a virtual machine (VM) to abstract away from the underlying hardware. To reach high performance, the VM typically relies on an optimising just-in-time (JIT) compiler, which speculates on the program's behavior based on its first runs to generate efficient machine code at runtime and speed up program execution. As multiple runs are required to speculate correctly on the program's behavior, such a VM requires a certain amount of time at start-up to reach peak performance. The optimising JIT itself is usually compiled ahead-of-time to executable code as part of the VM.
The dissertation proposes Sista, an architecture for an optimising JIT in which the optimised state of the VM can be persisted across multiple VM start-ups and the optimising JIT runs in the same runtime as the executed program. To do so, the optimising JIT is split into two parts. One part is high-level: it performs optimisations specific to the programming language run by the VM and is written in a metacircular style. Staying away from low-level details, this part can be read, edited and debugged while the program is running, using the standard tool set of the programming language executed by the VM. The second part is low-level: it performs machine-specific optimisations and is compiled ahead-of-time to executable code as part of the VM. The two parts of the JIT use a well-defined intermediate representation to share the code to optimise. This representation is machine-independent and can be persisted across multiple VM start-ups, allowing the VM to reach peak performance very quickly.
To validate the architecture, the dissertation includes the description of an implementation on top of Pharo Smalltalk and its VM. The implementation is able to run a large set of benchmarks, from large application benchmarks provided by industrial users to micro-benchmarks used to measure the performance of specific code patterns. The optimising JIT is implemented according to the architecture proposed and shows significant speed-up (up to 5x) over the current production VM. In addition, large benchmarks show that peak performance can be reached almost immediately after VM start-up if the VM can reuse the optimised state persisted from another run.
Keynote for the Yahoo! Frontend Developer's Summit 2008, held at the Yahoo! campus in Sunnyvale, CA. Looks at lessons from programming's past and applies them to web developers today.
Large Language Models (in 2023) - OpenAI — SamuelButler15
1. Emergent Abilities with Scale: The presentation underscores the significance of viewing the development of language models with a perspective of “yet”, highlighting that many ideas may not work now but could become viable as models scale. This perspective challenges traditional scientific experimentation by suggesting that axioms in the field of language models are subject to change with advancements in model capabilities.
2. The Need for Constant Unlearning: As models scale, previously held intuitions may become outdated. The presentation discusses the necessity for researchers to unlearn invalidated ideas, noting that newcomers may sometimes have an advantage due to having fewer entrenched misconceptions.
3. Scaling Challenges and Techniques: The presentation elaborates on the technical challenges and complexities involved in scaling LLMs, using examples from training processes that include unexpected loss spikes. It also touches upon the importance of documenting experiments that fail due to insufficient model “intelligence” and retesting them as models evolve.
4. Instruction Fine-Tuning and RLHF: The presentation discusses instruction fine-tuning as a method to improve model performance across a wide range of tasks by framing tasks in natural language. However, it also points out the limitations of instruction fine-tuning and the potential of reinforcement learning from human feedback (RLHF) to address some of these challenges by learning the objective function.
5. Technical Insights on Transformer Models: Detailed technical insights into the functioning of Transformer models are provided, including tokenization, embedding, and the sequential processing that underpins these models’ ability to understand and generate language.
6. Scaling Infrastructure: The presentation gives an overview of the infrastructure considerations for scaling LLMs, including the use of tensor processing units (TPUs) and the role of software tools like JAX for parallelizing model training across multiple hardware units.
7. The Bitter Lesson and Future Directions: Reiterating “the bitter lesson” in AI research—that progress often comes from scalable general methods rather than specialized approaches—the presentation hints at ongoing and future directions in LLM research, emphasizing scalability, the reduction of inductive biases, and the exploration of novel training paradigms.
Process Mining: Data Science in Action - Wil van der Aalst, TU/e, DSC/e, HSE — Yandex
Data science is the profession of the future, because organizations that are unable to use (big) data in a smart way will not survive. It is not sufficient to focus on data storage and data analysis. The data scientist also needs to relate data to process analysis. Process mining bridges the gap between traditional model-based process analysis (e.g., simulation and other business process management techniques) and data-centric analysis techniques, such as machine learning and data mining. Process mining seeks to find a connection between event data (i.e., observed behavior) and process models (hand-made or discovered automatically). This technology has become available only recently, but it can be applied to any type of operational process (organizations and systems). Example applications include: analyzing treatment processes in hospitals, improving customer service processes in multinational companies, understanding the browsing behavior of customers on a booking site, analyzing failures of a baggage handling system, or improving the user interface of an X-ray machine. What all of these applications have in common is the need to relate dynamic behavior to process models. Not only does process mining provide a bridge between data mining and business process management, but it also helps to address the classical divide between "business" and "IT". Evidence-based business process management based on process mining helps to create a common ground for business process improvement and information systems development.
B.Sc.IT: Semester - VI (April - 2017) [CBSGS - 75:25 Pattern | Question Paper] — Mumbai B.Sc.IT Study
Keynote (Mike Muller) - Is There Anything New in Heterogeneous Computing - by... — AMD Developer Central
Keynote presentation, Is There Anything New in Heterogeneous Computing, by Mike Muller, Chief Technology Officer, ARM, at the AMD Developer Summit (APU13), Nov. 11-13, 2013.
Testing is fundamental in software development. Quality gates demand high coverage levels and pull requests need sufficient tests, so teams spend considerable time writing and maintaining them. But are we using our tests to their full potential?
'If code is hard to test, the design can be improved.' Starting from this mantra, this deep-dive session unveils hints to simplify code, break down complexity, and use functional programming effectively. We'll delve into topics like fixture creep, partial mocks, onion architecture, and pure functions, providing numerous best practices and practical tips for your testing.
Be warned: This session may significantly disrupt your work routine and will likely change how you see testing. Attend at your own risk.
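As a small illustration of the "pure functions simplify testing" point above (a generic sketch of ours, not taken from the session itself):

```python
from datetime import date

# Impure version: it reads the clock internally, so a test cannot pin
# down the result without mocking or patching the clock.
def is_weekend_today():
    return date.today().weekday() >= 5

# Pure version: the dependency (the day) becomes a parameter, so tests
# need no mocks at all - just pass a fixed date in.
def is_weekend(day: date) -> bool:
    return day.weekday() >= 5

assert is_weekend(date(2024, 6, 8))       # a Saturday
assert not is_weekend(date(2024, 6, 10))  # a Monday
```

Pushing side effects (clocks, I/O, globals) to the caller is the recurring trick: the testable core stays pure, and only a thin shell touches the outside world.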
Similar to CoSECiVi'15 - Towards real-time procedural scene generation from a truncated icosidodecahedron (20)
Paper published at the recent CoSECiVi 2020, held online on 7 and 8 October 2020.
ABSTRACT:
In recent years, the international Hearthstone AI competition has grown exponentially in popularity within the scientific community, which has taken an active part with multiple agents. One of the best, EVA, applied a greedy approach combined with an evolutionary algorithm (EA). However, almost all proposals (including EVA) were designed to work in a generalist way, i.e. for any of the game's possible heroes. This generalisation has a major shortcoming, since it exploits neither the cards exclusive to each hero nor their potentially different behaviour profiles. This paper follows a philosophy similar to EVA's, also hybrid (greedy + EA), but taking into account three archetypes or profiles widespread among the player community: Aggro (offensive), Control (defensive) and Midrange (in between). In essence, three different behaviours have been optimised with the aim of creating a more specialised agent capable of using a different behaviour engine depending on the hero it plays with. To demonstrate the value of this approach, several experiments have been carried out, comparing the evolved agents against EVA across many different matches using three different heroes. The results show an improvement over EVA for all three profile-based agents, and strong overall performance against other, less competitive proposals.
Paper published at the recent CoSECiVi 2020, held online on 7 and 8 October 2020.
ABSTRACT:
The core challenge facing search techniques when used to play Real-Time Strategy (RTS) games is the extensive combinatorial decision space. Several approaches were proposed to alleviate this dimensionality burden, using scripts or action probability distributions, based on expert knowledge. We propose to replace expert-authored scripts by a collection of smaller parametric scripts we call heuristics and use them to pre-select actions for Monte Carlo Tree Search (MCTS). The advantages of this proposal consist of granular control of the decision space and the ability to adapt the agent’s strategy in-game, all by altering the heuristics and their parameters. Experimentation results in μRTS using a proposed implementation have shown a significant performance gain over state-of-the-art agents.
Paper published at the recent CoSECiVi 2020, held online on 7 and 8 October 2020.
ABSTRACT:
This paper examines the possibilities of the glitch, or software failure, within virtual environments, with the aim of analysing the consequences of using these effects and, at the same time, of salvaging aesthetically relevant results. To that end, different definitions of the glitch are collected, and the way these same effects are applied in different videogames is studied and assessed. The research is completed with a practical exercise in which the studied effects are applied in order to observe their results on the visual experience in a more controlled way, although their consequences for the narrative experience and gameplay are also assessed to a lesser extent. The research draws on the art-based research and extreme programming methodologies, which are coordinated with each other. From all this, we can conclude in summary that, through the process of building spaces and applying glitch effects, we find various resources that can be useful for their formal and chromatic value. Fragments of the image that are of interest can be salvaged, since they provide new forms and representations due to their partially random character. It can also be concluded that not all glitches applied to these experiences result in quality images or experiences, and that one cannot expect an uncontrolled glitch to provide an enriching experience.
Paper published at the recent CoSECiVi 2020, held online on 7 and 8 October 2020.
ABSTRACT:
Videogames depict a great variety of architectural and urban spaces. Digital environments often present a significant level of thought and detail, employing real references and historical architectural styles in their design, and becoming an influential source of inspiration for architecture professionals. Our research combines the fields of game studies and architectural education through an experiment in which advanced students analyze digital architectures in videogames with the same rigor with which they would study real built spaces. The objective is to provide innovative transdisciplinary training to new architects who may develop a hybrid career designing both physical and digital environments. As an example, we present a first experimental product: an undergraduate thesis project in which the student analyzed the design of BioShock Infinite's Columbia. The student identified how diverse architectural elements and references were used in BioShock's introductory sequences. The methodology used was mainly graphical, applying classic architectural drawing techniques that are not found in most game studies works developed from the fields of arts and humanities. The results include a full set of new images, analytical drawings, urban landscapes, and creative cartographies. All steps of this architectural education experiment have been detailed, including its context, stages of development, and conclusions, while emphasizing the teaching tools employed and the personal perceptions of the student. This teaching methodology is potentially applicable to the architectural analysis of other videogames, and it may be integrated into most international architecture graduate and undergraduate programs.
Paper published at the recent CoSECiVi 2020, held online on 7 and 8 October 2020.
ABSTRACT:
This work presents a continuous level-of-detail representation of tree foliage. Multiresolution modeling makes it possible to adapt the number of polygons rendered to the relevance of the object in the scene. However, foliage is represented by isolated polygons, so most multiresolution modeling methods do not work properly on this part of the tree. This paper presents a multiresolution model that adapts the number of leaves to the relevance of the foliage in the scene. The criterion for selecting the appropriate leaves to render is based on a previously performed view-driven simplification. To adapt this parameter in real time, we present the data structures and the algorithms needed to extract the appropriate number of polygons. Several tests have been carried out to evaluate the proposed solution, and the results show the good performance of the presented continuous level of detail.
Paper published at the recent CoSECiVi 2020, held online on 7 and 8 October 2020.
ABSTRACT:
The video game industry has grown exponentially in recent years. As a consequence of this evolution, development environments have become more complex toolkits focused on the many features that are common in modern video games. This phenomenon has allowed the emergence of tools that non-technical users can use, making video game development accessible to virtually everyone. Due to the high interest in this type of tool, we have decided to develop TRPG Maker, an intuitive and self-contained tool focused on the development of tactical role-playing games for non-technical users with no interest in the technical details of development. Along with the development of the tool, we have carried out a first experimental validation with real users to determine the degree of usefulness and comfort of the tool, as well as to detect failures and discover possible improvements. Finally, the feedback obtained has been used to create a more complete version of the product, which has been published and made available to the community for free use.
Paper published at the recent CoSECiVi 2020, held online on 7 and 8 October 2020.
ABSTRACT:
Virtual Reality environments are currently limited by the workspace available to users, so new ways of making the most of this available space must be sought when designing virtual experiences. This paper proposes a method for generating geometrically overlapping spaces through the use of portals that are imperceptible to the user. Thanks to these portals, it is possible to create a momentary break in conventional Euclidean geometry, thereby generating infinite virtual worlds within a single physical workspace. Although this kind of effect has been seen in traditional videogames, its application to virtual environments has several technical implications, which are detailed and defined in this paper. The paper also discusses the perceptual implications this visual effect could have on users, opening the door to new studies on this computer graphics effect.
Paper published at the recent CoSECiVi 2020, held online on 7 and 8 October 2020.
ABSTRACT:
When creating videogames, designers adjust the game's characteristics to optimize the experience of the target players. This design process is usually manual, and the amount of insight videogame designers have into the player is limited. Identifying the psychological profile of a videogame user can lead to specific adaptation of videogame aspects like narrative, length or challenge. Additionally, performing this identification automatically can both improve the game experience and reduce the amount of work needed to customize the players' experience. This research describes an emerging methodology to automatically create the personality profile of a player, and a prototype implementation. Additionally, a pilot study has been run, with preliminary but promising results.
Paper published at the recent CoSECiVi 2020, held online on 7 and 8 October 2020.
ABSTRACT:
Computer games have become a very interesting environment, or testbed, for developing new algorithms in many branches of Artificial Intelligence. In fact, collectible card games such as Hearthstone have recently attracted the attention of researchers because of their characteristics: uncertainty, randomness, and the infinite and unpredictable interactions that can occur in a game. In this game, each player composes decks to face other players from a pool of more than 3,000 cards, each one with its own rules and statistics. This implies a great variability of decks and card combinations with rich effects. This paper proposes the use of clustering techniques to extract information from data provided by Hearthstone players, i.e. a Game Mining approach. To do so, more than 500,000 decks created by game players (both experts and enthusiasts) have been downloaded from the Hearthpwn website. A descriptive analysis of this dataset, along with Data Mining techniques, has been carried out in order to understand which archetypes (or deck types) are the favourites among the community of players, and what relationships can be identified between them. The results show that it is possible to use clustering algorithms such as K-Means to automatically detect the archetypes used by the players.
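As a toy illustration of the clustering step described above (the features and data here are invented for the example; the paper works on real decks downloaded from Hearthpwn), a minimal K-Means might look like:

```python
def kmeans(points, k, iters=20):
    """Minimal K-Means: repeatedly assign each point to its nearest
    centroid, then move each centroid to the mean of its cluster."""
    # Deterministic init for reproducibility: take k points spread evenly
    # across the input (classic K-Means picks them at random).
    centroids = [points[i * (len(points) - 1) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(xs) / len(ms) for xs in zip(*ms)) if ms else centroids[c]
            for c, ms in enumerate(clusters)]
    return centroids, clusters

# Toy "deck vectors" (hypothetical features: mean card cost, minion ratio),
# mimicking a cheap aggressive archetype and an expensive defensive one.
aggro = [(1.5 + 0.1 * i, 0.8) for i in range(5)]
control = [(5.0 + 0.1 * i, 0.3) for i in range(5)]
cents, groups = kmeans(aggro + control, k=2)
```

On this toy input the two recovered centroids land near the two archetype means, which is the sense in which K-Means can "detect archetypes" automatically.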
Paper published at the recent CoSECiVi 2020, held online on 7 and 8 October 2020.
ABSTRACT:
Serious games facilitate the learning of skills in educational settings thanks to their playful nature. Developing customised games makes it easier to align the play experience with specific learning goals, while also allowing player activity to be logged. However, the information generated can be too voluminous to analyse with manual techniques. In this paper we propose applying process mining to the data produced by a serious game. Specifically, model discovery is applied to a serious game developed for the purpose of learning conceptual data modelling in a computer system. The game records detailed information about each session, including not only the final result but also how the player reached it. Preliminary results in the real context of a course in the Computer Engineering degree make it possible to analyse, in a scalable way, the information produced by students through the profiles that are generated. In this way, evidence of the behaviour of specific student profiles is obtained, which can be used to provide feedback or even to assess the students.
Paper published at the recent CoSECiVi 2020, held online on 7 and 8 October 2020.
ABSTRACT:
Objective: The treatment of ADHD is multimodal: pharmacological, psychoeducational and cognitive. Currently, cognitive treatment is expensive and unattractive for most patients diagnosed with ADHD, which might lead to halting therapy. To alleviate this deficit, we have created a therapeutic video game called The Secret Trail of Moon (TSTM), based on Virtual Reality combined with a gamified version of chess. The combination of Virtual Reality, video game and chess allows greater training immersion, adherence and transfer to daily life.
Materials and Methods: The video game is based on Thomas Brown's psychological theories on the executive dysfunction of ADHD patients and Barkley's model of Behavioral Inhibition. TSTM is composed of 6 different mechanics that exercise attention, memory, planning, visuospatial capacity, impulse control and reasoning. The technology used comprises VR goggles, a gaming console and a standalone application. The software runs on a PlayStation 4 test device.
Results: TSTM has been developed in three phases: 1) theoretical foundation, design and programming; 2) validation and proof of concept; 3) clinical trial. The validation and proof-of-concept phase allowed us to improve TSTM according to the users' experience in order to achieve more powerful cognitive training. Currently, TSTM is in the third phase. The main hypothesis is that TSTM training produces improvements in patients with ADHD.
Conclusions: This article describes the development of TSTM. Currently, TSTM is undergoing its first clinical trial to demonstrate efficacy in the treatment of patients with ADHD.
ABSTRACT.
This paper is part of emerging work aimed at the automatic creation of real-time strategy (RTS) games, which includes content generation, the creation of the game's artificial intelligence (i.e. the virtual players) and of the game mechanics themselves. The paper describes an editor consisting of multiple parameters that can be configured to enable or disable dozens of rules that can change the mechanics of an RTS game. The work described here is carried out in the context of the game Eryna, an experimentation platform implemented at the Universidad de Málaga. The rules editor is a tool that will be used in the future to allow the automatic evolution of this videogame's rules and the creation of different mechanics for it. What we present here should be assessed as emerging work in progress.
ABSTRACT.
Ephemeral Computing (Eph-C) is a recently created computing paradigm that aims to take advantage of the transient nature (that is, the limited lifetime) of computational resources. This work introduces the Eph-C paradigm in general terms and then gradually focuses on the context of the video game development process, showing possible applications and benefits within the main research lines associated with game creation. This is preliminary work that explores the possibilities of applying ephemeral computing to the creation of products in the video game industry, and it is also intended to serve as inspiration for other researchers and game developers.
ABSTRACT
Presence is often used as a quality measure for virtual reality
experiences. It refers to the sensation of "being there" that users feel while wearing a head-mounted display. In contrast, simulator sickness refers to the feeling of unease of some users while experiencing virtual motion. Nowadays, many virtual reality games do not allow the player to walk, trying to minimize the generation of unpleasant symptoms. This
study explores how presence is affected by the ability to walk in VR games, as well as how simulator sickness actually grows when the player takes a virtual stroll. For this purpose, two prototypes of a small puzzle were built. In the first one, the player is able to walk, whereas the second one does not allow the user to move in any way. Presence and simulator
sickness were measured using standard questionnaires while real players faced our puzzle. The results point to a strong correlation between the action of walking and an increment of the level of presence achieved by the subjects. However, there is no clear correspondence between walking and simulator sickness in our experiment. This last observation opens the
way for further research and questioning of early studies about simulator sickness, as technical differences between current virtual reality devices and older ones may influence how uncomfortable users feel while wearing them.
ABSTRACT:
Carlos, Rey Emperador is a strategy game set in the era of Charles V. In addition to simulating the kingdom and how the chosen policies affect the people's loyalty, the game presents the user with historical events (real, or invented for the game) that must be resolved and whose outcome affects subsequent events in the game. To increase replayability and avoid repeated playthroughs with the same succession of events, the dependency graphs used in other games have been extended.
ABSTRACT:
This article proposes an evolutionary algorithm for optimizing the behavior of bots (NPCs) that does not require an explicit fitness function, using instead pairwise combats (in the manner of a joust) in which only one of the contenders survives. This process acts as the algorithm's selection mechanism: only the winners pass to the next generation. A Genetic Programming algorithm has been used, designed to generate behavior engines for bots in the well-known RTS Planet Wars. This method has two main objectives: first, to mitigate the effect that the "noisy" nature of the fitness function adds to the evaluation and, second, to generate more general (less specialized) bots than those obtained with evolutionary algorithms that always use a common opponent to evaluate individuals. Moreover, omitting an explicit evaluation process reduces the number of combats needed to evolve, which in turn reduces the algorithm's computation time. The results show that the method converges and is less sensitive to noise than more traditional methods. In addition, this algorithm yields bots that are highly competitive compared with other bots in the literature.
ABSTRACT:
This article describes a mobile application developed within a research project whose goal is to encourage visits to the cultural, academic and scientific heritage of the Universidad de Granada (Spain). The application is conceived as a serious game that uses geolocation mechanisms to offer a gamified experience guiding the user's visit through several centers of the university complex. The game unfolds as a graphic adventure in which the player/user is the protagonist, and it poses a wide variety of challenges combining physical and intellectual aspects to encourage the player to collect the "pieces" distributed across the different buildings, with the ultimate goal of bringing (or not) a new Frankenstein to life. A prototype of the application/game is currently available and will soon be tested in an organized experience with several groups competing to create the Frankenstein in the least time. This experience will allow us to analyze the game's effectiveness for the purpose it was designed for: motivating locals and tourists to visit the heritage of the Universidad de Granada and to learn about it.
The intensive use of different Artificial Intelligence (AI) techniques in the video game field has proven to be a necessity for the domain. These techniques give games greater flexibility and adaptability, which players greatly appreciate. Topics such as procedural content generation, the creation of agents that can play a video game competently, or of agents whose behavior is indistinguishable from that of a human player, attract a growing number of researchers. The goal of this work is to present a platform based on the Unity3D engine that allows the simple integration and testing of AI algorithms. As new features, beyond those currently available, the platform will offer a 3D environment, the development of an innovative (multi-agent) game, and the exploration of gameplay aspects such as terrain analysis, cooperation between independent and heterogeneous agents, communication of information between them, and the formation of hierarchies.
ABSTRACT:
Technology is increasingly present in every aspect of our lives, and the use of video games and gamification-based techniques is rapidly becoming a necessity in the classroom. Nowadays, computers, tablets and Interactive Digital Whiteboards (IDWs) are a reality in most primary and secondary school classrooms.
These devices are commonly used by educators to access online data repositories and to present topics, videos, and so on. This work presents a new application, called Educapiz, designed and implemented with usability and gamification as the main drivers. An authoring tool focused on teachers' needs has been designed that will allow: the creation of educational games (a set of games for the 6th year of primary school has been designed initially), the development of a web architecture for sharing the generated resources, and the design of an architecture enabling later analysis of the created system. Finally, the tool was tested with 73 sixth-grade primary school pupils, yielding an initial usability analysis of the tool.
Abstract. In this paper we compare different machine learning algorithms
to predict the outcome of two-player games in StarCraft, a well-known
Real-Time Strategy (RTS) game. In particular, we discuss the
game state representation, the accuracy of the prediction as the game
progresses, the size of the training set and the stability of the predictions.
3. Introduction
Related Works
A survey on procedural modelling for virtual worlds
Ruben M. Smelik et al.
Aim: a comprehensive survey of procedural methods for generating features of virtual worlds.
Real-time procedural generation of ‘pseudo infinite’ cities
Stefan Greuter et al.
Aim: describe the real-time generation of a "pseudo-infinite" procedural city.
Building virtual and augmented reality museum exhibitions
Rafal Wojciechowski et al.
Aim: generate virtual museum content that visitors can interact with on a display or via the web.
6. Selection of Useful Faces
Useful faces: faces selected for the scene
Border faces: faces connected to a useful face
Limit faces: useful faces that do not yet have all their border faces selected
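This face taxonomy can be sketched over the adjacency graph of the polyhedron's faces. The following is a minimal illustration of our reading of the slide, not the authors' code; `classify_faces` and the dict-based adjacency representation are our own assumptions.

```python
def classify_faces(adjacency, useful):
    """Classify faces relative to the currently selected ("useful") set.

    adjacency: {face_id: set of neighbouring face ids}
    useful:    iterable of faces already selected for the scene
    Returns (border, limit): border = unselected neighbours of useful
    faces; limit = useful faces that still have unselected neighbours.
    """
    useful = set(useful)
    border = {n for f in useful for n in adjacency[f]} - useful
    limit = {f for f in useful if adjacency[f] - useful}
    return border, limit

# Tiny example: a ring of four faces 0-1-2-3.
ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
border, limit = classify_faces(ring, {0})
# With only face 0 selected, faces 1 and 3 are border faces and
# face 0 is a limit face (it still has unselected neighbours).
```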
7. Selection of Useful Faces
[Figure residue: worked example of face selection on the polyhedron. Legend: available faces, limit face, border faces; all faces, face to add, inaccessible faces.]
8. Selection of Useful Faces
Connection mode
Full: add all border faces, then change the limit face.
Linear: the newly added face becomes the new limit face.
Random: the next face is selected at random.
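The three connection modes can be sketched as a single growth step over the face adjacency graph. This is our reading of the slide, not the authors' implementation; all names (`select_step`, the mode strings, the sorted-for-determinism convention) are our own assumptions.

```python
import random

def select_step(mode, adjacency, useful, limit_face, rng):
    """One growth step of the face selection under each connection mode."""
    useful = set(useful)
    # Border faces of the current limit face (sorted so seeded runs repeat).
    local_border = sorted(adjacency[limit_face] - useful)
    if mode == "full":
        # Full: add every border face, then change the limit face.
        return local_border, rng.choice(local_border)
    if mode == "linear":
        # Linear: add one border face; it becomes the new limit face.
        face = rng.choice(local_border)
        return [face], face
    if mode == "random":
        # Random: pick among the border faces of every useful face.
        global_border = sorted({n for f in useful for n in adjacency[f]} - useful)
        face = rng.choice(global_border)
        return [face], face
    raise ValueError(f"unknown connection mode: {mode}")

# Example on a ring of four faces 0-1-2-3, seeded for repeatability.
ring = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
```

Passing the seed through `rng` mirrors the user-specified seed mentioned in the abstract: the same seed reproduces the same scene.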
10. Path Generator
[Figure residue: step-by-step example of path generation over TIs. Legend: available TI, TI to check, full TI, border TI, TI to rebuild, current TI, checked TI, delete.]
12. Path Generator
Union of TIs
A new TI is joined to its father TI by matching a border face from the inception face with a border face from the limit face.
[Figure residue: diagram of the father TI and the new TI joined at the shared border face.]
13. Path Generator
Collision detection
[Flowchart residue; the arrows did not survive extraction. Decision nodes and outcomes recovered from the slide:]
Decisions: Has a GhostLayer? · Does the TI belong to a lower order? · Is the TI of lower order? · Does it belong to the TI father?
Outcomes: mark to destroy · do nothing · add the colliding TI to the collision list
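One plausible reconstruction of the flowchart as code, assuming a branch order of GhostLayer check first, then the father check, then the order comparison. The `TI` fields and the exact branch order are our assumptions, since the flowchart arrows did not survive extraction.

```python
from dataclasses import dataclass

@dataclass
class TI:
    order: int                  # generation order of this truncated icosidodecahedron
    father: "TI | None" = None  # the TI this one was grown from
    ghost_layer: bool = False   # marker for still-tentative geometry

def on_collision(current, other, collision_list):
    """Decide what to do when `current` collides with `other`.
    Returns the action taken, mirroring the slide's outcome labels."""
    if other.ghost_layer:
        return "do nothing"           # tentative geometry is ignored
    if other is current.father:
        return "do nothing"           # touching the father TI is expected
    if other.order < current.order:
        collision_list.append(other)  # remember what was hit, for rebuilding
        return "mark to destroy"      # the newer TI must be destroyed or rebuilt
    return "do nothing"
```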
14. Path Generator
Collision detection
If the TI can be rebuilt:
Add the collided face to the blocked-face list.
Generate a new face selection.
Regenerate the TI.
Check for collisions again.
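The rebuild steps above can be sketched as a bounded retry loop. This is our own sketch: `select_faces`, `collides`, and the `max_tries` cutoff are assumptions, not the authors' API.

```python
def rebuild_ti(ti, blocked_faces, select_faces, collides, max_tries=20):
    """Rebuild a TI until a collision-free face selection is found.

    select_faces(ti, blocked_faces) -> a new face selection avoiding blocks
    collides(ti, selection)         -> the face that collided, or None
    """
    for _ in range(max_tries):
        selection = select_faces(ti, blocked_faces)  # generate a new face selection
        hit = collides(ti, selection)                # check collisions again
        if hit is None:
            return selection                         # collision-free placement found
        blocked_faces.add(hit)                       # add collided face to block list
    return None                                      # give up; caller discards this TI
```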
17. Space Division
The virtual space is voxelized into cubes of the same size as a TI.
A margin was added to avoid uncontrolled collisions and to limit the extent of the representation.
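Voxelizing space into TI-sized cubes amounts to a uniform-grid spatial hash: only TIs mapped to the same cell (and, with a margin, its neighbours) need an exact collision test. This small sketch is our own, not the authors' code.

```python
from collections import defaultdict

def voxel_key(position, cell_size):
    """Map a world position to the integer index of the grid cube that
    contains it; cells are sized to match one TI."""
    x, y, z = position
    return (int(x // cell_size), int(y // cell_size), int(z // cell_size))

# TIs indexed per voxel: a lookup replaces an all-pairs collision scan.
grid = defaultdict(list)
grid[voxel_key((2.5, 0.1, -3.7), cell_size=2.0)].append("TI_0")
```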
21. Conclusion
This kind of procedural generation offers a new point of view for understanding hypermuseums.
22. Future Works
Improve the collision system.
Create a custom collision system.
Adapt other kinds of figures, enabling more complex scenes better suited to user needs.