Goodbye Flash, Hello OpenFL: Next Generation Cross-Platform Game Development Jessica Tams
Delivered at Casual Connect Tel Aviv | Haxe is an open source language with syntax similar to ActionScript, with some major improvements. Games written in Haxe can target many environments, including Flash Player, HTML5, iOS, and Android. OpenFL is an open source library built in Haxe that provides a Flash-like API. The combination of Haxe and OpenFL is a natural fit for developers looking to move away from ActionScript/Flash. This talk will show you how.
Data-Center Replication with Apache Accumulo Josh Elser
This document describes the implementation of data replication in Apache Accumulo. It motivates the need for replication to handle failures, explains how replication is implemented using write-ahead logs, and outlines future work, including replicating to other systems and improving consistency.
Hubot is a customizable robot assistant created with Node.js and CoffeeScript. It can interact with chat platforms like Campfire and IRC. Developers can build scripts to give Hubot new abilities like unlocking doors, finding apartment rentals, getting weather forecasts, and more. The document demonstrates how to configure Hubot and add scripts to extend its functionality.
This document discusses interests and how they develop in children. It notes that interests have both subjective and objective aspects, with the subjective focusing on feelings and the objective on observable behaviors. Interests are generated through three types of learning - trial and error, identification with admired people, and guidance from others. A child's interests can be identified by observing their activities, questions, conversations, reading materials, drawings, wishes, and self reports. The document also states that all individuals have both inborn and acquired interests that show individual differences.
This document discusses scalable genome analysis using ADAM (Apache Spark-based framework). It begins by describing genomes and the goal of analyzing genetic variations. The document then discusses challenges like the large size of genomes and complexity of linking variations to traits. It proposes using ADAM's schema, optimized storage and algorithms to accelerate common access patterns like overlap joins. The document also emphasizes applying biological knowledge like protein grammars to make sense of non-coding variations. Finally, it acknowledges contributions from various institutions that have helped develop ADAM and its ability to enable genome analysis at scale.
Rethinking Data-Intensive Science Using Scalable Analytics Systems fnothaft
Presentation from SIGMOD 2015. With Matt Massie, Timothy Danford, Zhao Zhang, Uri Laserson, Carl Yeksigian, Jey Kottalam, Arun Ahuja, Jeff Hammerbacher, Michael Linderman, Michael J. Franklin, Anthony D. Joseph, David A. Patterson. Paper at http://dl.acm.org/citation.cfm?id=2742787.
This document provides a summary of the Scalable Genome Analysis with ADAM project. ADAM is an open-source, high-performance, distributed platform for genomic analysis that defines a data schema, data layout on disk, and programming interface for distributed processing of genomic data using Spark and Scala. The goal of ADAM is to integrate across terabyte and petabyte-scale datasets to enable the discovery of low frequency genetic variants linked to traits and diseases.
ADAM is an open source, high performance, distributed platform for genomic analysis that defines a data schema and layout on disk using Parquet and Avro, integrates with Spark's Scala and Java APIs, and provides a command line interface. ADAM achieves linear scalability out to 128 nodes for most tasks and provides a 2-4x performance improvement over other tools like GATK and samtools. The platform includes various tools like avocado for efficient local variant calling via de Bruijn graph reassembly of sequencing reads.
The document discusses ADAM, a new framework for scalable genomic data analysis. It aims to make genomic pipelines horizontally scalable by using a columnar data format and in-memory computing. This avoids disk I/O bottlenecks. The framework represents genomic data as schemas and stores data in Parquet for efficient column-based access. It has been shown to reduce genome analysis pipeline times from 100 hours to 1 hour by enabling analysis on large datasets in parallel across many nodes.
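To make the columnar-storage point above concrete, here is a minimal sketch (not taken from the ADAM codebase) that reads a Parquet-backed table of aligned reads with Spark SQL and touches only the columns a downstream step needs. The file path and field names (contigName, start, mapq) are illustrative assumptions loosely modeled on ADAM-style schemas.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: read a Parquet-backed dataset of reads and project only
// the columns needed downstream. Because Parquet is columnar, the projection
// and filter below only touch three columns on disk.
object ReadParquetExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("parquet-projection-sketch")
      .master("local[*]")
      .getOrCreate()

    // Hypothetical path and schema fields, for illustration only.
    val reads = spark.read.parquet("reads.adam")
      .select("contigName", "start", "mapq")

    // Example access pattern: count well-mapped reads per contig.
    reads.filter("mapq >= 30")
      .groupBy("contigName")
      .count()
      .show()

    spark.stop()
  }
}
```

Reading only the projected columns rather than whole records is the kind of I/O saving the abstract attributes to the columnar format.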
Reproducible Emulation of Analog Behavioral Models fnothaft
1) Analog behavioral models are abstracted using SystemVerilog real numbers to allow simulation in digital emulation environments with higher throughput.
2) Key challenges to emulating analog models include converting floating-point implementations to fixed-point and handling high sampling rates in filters (a small quantization sketch follows this list).
3) The document describes techniques used by Broadcom to synthesize analog behavioral models for emulation, including pragmas for sensitivity analysis and parallelizing filters.
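As a rough illustration of the floating-point-to-fixed-point conversion mentioned in point 2, here is a minimal sketch of Q-format quantization. The Q1.15-style encoding, the coefficient value, and the helper names are all illustrative assumptions; this is neither Broadcom's flow nor SystemVerilog code.

```scala
// Minimal sketch of quantizing a floating-point filter coefficient into a
// signed fixed-point (Q-format) value and recovering it. Illustration only.
object FixedPointSketch {
  // Scale by 2^fracBits and round to the nearest integer code.
  def toFixed(x: Double, fracBits: Int): Long =
    math.round(x * (1L << fracBits))

  // Convert the integer code back to a real value.
  def toDouble(q: Long, fracBits: Int): Double =
    q.toDouble / (1L << fracBits)

  def main(args: Array[String]): Unit = {
    val fracBits = 15          // Q1.15-style encoding (assumption)
    val coeff = 0.70710678     // example filter coefficient (assumption)
    val fixed = toFixed(coeff, fracBits)
    println(s"fixed code = $fixed, recovered = ${toDouble(fixed, fracBits)}")
  }
}
```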
ADAM is an open source platform for scalable genomic analysis that defines a data schema, Scala API, and command line interface. It uses Apache Spark for efficient parallel and distributed processing of large genomic datasets stored in Parquet format. Key features of ADAM include its ability to perform iterative analysis on whole genome datasets while minimizing data movement through Spark. The document also describes using ADAM and PacMin for long read assembly through techniques like minhashing for fast read overlapping and building consensus sequences on read graphs.
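The abstract above mentions minhashing for fast read overlapping. As a rough illustration of the idea, here is a small, self-contained sketch that builds MinHash signatures from k-mer sets and estimates similarity between two reads. The hash mixing, seeds, k, and example reads are illustrative assumptions, not the PacMin implementation.

```scala
// Minimal MinHash sketch for approximating read overlap via k-mer similarity.
object MinHashSketch {
  // All k-length substrings (shingles) of a read.
  def shingles(read: String, k: Int): Set[String] = read.sliding(k).toSet

  // One signature entry per seed: the minimum mixed hash over all shingles.
  def signature(shingleSet: Set[String], seeds: Seq[Long]): Seq[Long] =
    seeds.map(seed => shingleSet.map(s => (s.hashCode.toLong ^ seed) * 0x9E3779B97F4A7C15L).min)

  // The fraction of matching signature slots approximates Jaccard similarity.
  def estimatedJaccard(a: Seq[Long], b: Seq[Long]): Double =
    a.zip(b).count { case (x, y) => x == y }.toDouble / a.length

  def main(args: Array[String]): Unit = {
    val seeds: Seq[Long] = (1L to 64L).map(_ * 2654435761L)
    val sigA = signature(shingles("ACGTACGTTTGACCA", 5), seeds)
    val sigB = signature(shingles("CGTACGTTTGACCAT", 5), seeds)
    println(f"estimated k-mer similarity: ${estimatedJaccard(sigA, sigB)}%.2f")
  }
}
```

Comparing compact signatures instead of full k-mer sets is what makes the all-vs-all overlap step cheap enough to run at scale.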
The document discusses genome assembly from sequencing reads. It describes how reads can be aligned to a reference genome if one is available, but for a new genome the reads must be assembled without a reference. Two main assembly approaches are described: overlap-layout-consensus, which builds an overlap graph, and de Bruijn graph assembly, which constructs a de Bruijn graph from k-mers. Both approaches aim to find contiguous sequences (contigs) from the reads but face challenges from computational complexity and sequencing errors in the reads.
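To make the de Bruijn approach concrete, here is a minimal sketch that breaks reads into k-mers and builds the graph whose nodes are (k-1)-mers and whose edges are k-mers. It assumes error-free reads and a toy k; real assemblers must also handle sequencing errors and repeats, as the abstract notes.

```scala
// Minimal de Bruijn graph sketch: nodes are (k-1)-mers, edges are k-mers.
// Assumes error-free reads and small inputs; for illustration only.
object DeBruijnSketch {
  def kmers(read: String, k: Int): Seq[String] = read.sliding(k).toSeq

  // Adjacency map: prefix (k-1)-mer -> successor (k-1)-mers, one per k-mer.
  def buildGraph(reads: Seq[String], k: Int): Map[String, Seq[String]] =
    reads
      .flatMap(kmers(_, k))
      .map(kmer => (kmer.take(k - 1), kmer.drop(1)))
      .groupBy(_._1)
      .map { case (node, edges) => (node, edges.map(_._2)) }

  def main(args: Array[String]): Unit = {
    val reads = Seq("ACGTAC", "CGTACG")
    buildGraph(reads, k = 4).foreach { case (node, succs) =>
      println(s"$node -> ${succs.mkString(", ")}")
    }
  }
}
```

Contigs then correspond to unambiguous paths through this graph, which is where the computational and error-handling challenges the abstract mentions arise.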
ADAM is an open source, high performance, distributed platform for genomic analysis built on Apache Spark. It defines a Scala API and data schema using Avro and Parquet to store data in a columnar format, addressing the I/O bottleneck in genomics pipelines. ADAM implements common genomics algorithms as data or graph parallel computations and minimizes data movement by sending code to the data using Spark. It is designed to scale to processing whole human genomes across distributed file systems and cloud infrastructure.
ADAM is an open source, scalable genome analysis platform developed by researchers at UC Berkeley and other institutions. It includes tools for processing, analyzing and accessing large genomic datasets using Apache Spark. ADAM provides efficient data formats, rich APIs, and scalable algorithms to allow genome analysis to be performed on clusters and clouds. The goal is to enable fast, distributed analysis of genomic data across platforms while enhancing data access and flexibility.
ADAM is a scalable genome analysis platform that uses a column-oriented file format called Parquet to efficiently store and access large genomic datasets across distributed systems. It provides APIs and tools for transforming, analyzing, and querying genomic data in a scalable way using Apache Spark. Some key goals of ADAM include enabling efficient processing of genomes using clusters/clouds, providing a data format for parallel data access, and enhancing data semantics to allow more flexible access patterns.
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati... AbdullaAlAsif1
The pygmy halfbeak, Dermogenys colletei, is known for its viviparous nature, and it presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study delves into the examination of fecundity and the Gonadosomatic Index (GSI) in the pygmy halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that the pygmy halfbeak, D. colletei, may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study utilizing 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study contributes to a better understanding of viviparous fish in Borneo and to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk at the Journées Nationales du GDR GPL 2024.
Authoring a personal GPT for your research and practice: How we created the Q... Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically carried out by teams. Team members must ground their activities in common understandings of the major concepts underlying the thematic analysis and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: the QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop their ideas for their own qualitative coding ChatGPT. Participants who have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and a slide deck that participants will be able to use to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, just for trying out personal GPTs during it.
This MS Word-generated PowerPoint presentation covers the major details of the micronucleus test, its significance, and the assays used to conduct it. The test is used to detect micronucleus formation inside the cells of nearly every multicellular organism; micronuclei form during chromosomal separation at anaphase.
Phenomics-assisted breeding in crop improvement IshaGoswami9
The global population is increasing and will reach about 9 billion by 2050, and climate change makes it difficult to meet the food requirements of such a large population. Facing the challenges presented by resource shortages, climate change, and an increasing global population, crop yield and quality need to be improved in a sustainable way over the coming decades. Genetic improvement by breeding is the best way to increase crop productivity. With the rapid progression of functional genomics, an increasing number of crop genomes have been sequenced and dozens of genes influencing key agronomic traits have been identified. However, current genome sequence information has not been adequately exploited for understanding complex characteristics governed by multiple genes, owing to a lack of crop phenotypic data. Efficient, automatic, and accurate technologies and platforms that can capture phenotypic data linkable to genomic information at all growth stages have therefore become as important as genotyping. Thus, high-throughput phenotyping has become the major bottleneck restricting crop breeding. Plant phenomics has been defined as the high-throughput, accurate acquisition and analysis of multi-dimensional phenotypes during crop growing stages at the organism level, including the cell, tissue, organ, individual plant, plot, and field levels. With the rapid development of novel sensors, imaging technology, and analysis methods, numerous infrastructure platforms have been developed for phenotyping.
When I was asked to give a companion lecture in support of ‘The Philosophy of Science’ (https://shorturl.at/4pUXz), I decided not to walk through the detail of the many methodologies in order of use. Instead, I chose to employ a long-standing, and ongoing, scientific development as an exemplar. And so, I chose the ever-evolving story of Thermodynamics as a scientific investigation at its best.
Conducted over a period of >200 years, Thermodynamics R&D, and application, benefitted from the highest levels of professionalism, collaboration, and technical thoroughness. New layers of application, methodology, and practice were made possible by the progressive advance of technology. In turn, this has seen measurement and modelling accuracy continually improved at a micro and macro level.
Perhaps most importantly, Thermodynamics rapidly became a primary tool in the advance of applied science, engineering, and technology, spanning micro-tech to aerospace and cosmology. I can think of no better story to illustrate the breadth of scientific methodologies and applications at their best.
Immersive Learning That Works: Research Grounding and Paths Forward Leonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a ‘Uses, Practices & Strategies’ model operationalized by the ‘Immersive Learning Brain’ and ‘Immersion Cube’ frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences, spotlighting research frontiers along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
Describing and Interpreting an Immersive Learning Case with the Immersion Cub... Leonel Morgado
Current descriptions of immersive learning cases are often difficult or impossible to compare. This is due to a myriad of different options on what details to include, which aspects are relevant, and on the descriptive approaches employed. Also, these aspects often combine very specific details with more general guidelines or indicate intents and rationales without clarifying their implementation. In this paper we provide a method to describe immersive learning cases that is structured to enable comparisons, yet flexible enough to allow researchers and practitioners to decide which aspects to include. This method leverages a taxonomy that classifies educational aspects at three levels (uses, practices, and strategies) and then utilizes two frameworks, the Immersive Learning Brain and the Immersion Cube, to enable a structured description and interpretation of immersive learning cases. The method is then demonstrated on a published immersive learning case on training for wind turbine maintenance using virtual reality. Applying the method results in a structured artifact, the Immersive Learning Case Sheet, that tags the case with its proximal uses, practices, and strategies, and refines the free text case description to ensure that matching details are included. This contribution is thus a case description method in support of future comparative research of immersive learning cases. We then discuss how the resulting description and interpretation can be leveraged to change immersion learning cases, by enriching them (considering low-effort changes or additions) or innovating (exploring more challenging avenues of transformation). The method holds significant promise to support better-grounded research in immersive learning.
2. v0.5 API
• Support data access to reads/variants over REST (a hedged request sketch follows these bullets)
• Most existing applications using API are interactive
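As a rough illustration of what such a REST read query could look like, here is a minimal sketch using the JDK HTTP client. The endpoint path, port, and JSON field names are assumptions for illustration only, not quotations from the GA4GH v0.5 schema.

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

// Hedged sketch: POST a reads search to a GA4GH-style server. The URL and
// request fields below are illustrative assumptions, not the normative API.
object ReadsSearchSketch {
  def main(args: Array[String]): Unit = {
    val body =
      """{"readGroupIds": ["example-rg"], "referenceName": "1", "start": 10000, "end": 11000}"""

    val request = HttpRequest.newBuilder()
      .uri(URI.create("http://localhost:8000/v0.5.1/reads/search"))
      .header("Content-Type", "application/json")
      .POST(HttpRequest.BodyPublishers.ofString(body))
      .build()

    // The response is one page of read alignments encoded as JSON.
    val response = HttpClient.newHttpClient()
      .send(request, HttpResponse.BodyHandlers.ofString())
    println(response.body())
  }
}
```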
3. Batch Processing
• Is REST the correct approach?
• API is consistent for both local & remote data
• But, has overhead (perf + admin) for local data
• Approaches moving forward:
• Shims to current file formats
• Native interface to Hadoop ecosystem?
4. Common Workflow Language
• Pain point: how do we build reproducible pipelines of tasks?
• A group has started building a common workflow description language for bioinformatics:
• https://groups.google.com/forum/#!forum/common-workflow-language
• Should the GA4GH take this task on?