There is currently a large number of data programming models and their respective frontends, such as relational tables, graphs, tensors, and streams. This has led to a plethora of runtimes that typically focus on the efficient execution of just a single frontend. This fragmentation manifests today in highly complex pipelines that bundle multiple runtimes to support the necessary models. Hence, joint optimisation and execution of such pipelines across these frontend-bound runtimes is infeasible. We propose Arc as the first unified Intermediate Representation (IR) for data analytics that incorporates stream semantics, based on a modern specification of streams, windows and stream aggregation, to combine batch and stream computation models. Arc extends Weld, an IR for batch computation, and adds stream interoperability as a natural extension for describing static computational graphs suitable for stream processing.
Lightning talk from F#nctional Londoners user group meeting 04/06/2015. Briefly discusses the instrument control software we have written in F# to control a custom experiment at the University of Warwick.
Congresso Sociedade Brasileira de Computação CSBC2016 Porto Alegre (Brazil)
Workshop on Cloud Networks & Cloudscape Brazil
Rodolfo Azevedo - Associate professor at University of Campinas, Brazil
Interdisciplinary Research for Cloud Computing: Future and challenges
The growing interest in FPGA-based solutions for accelerating compute-demanding algorithms is pushing the need for new tools and methods to improve productivity. High-Level Synthesis (HLS) tools already provide a handy way to describe an FPGA-based hardware implementation starting from a software description of an algorithm. However, HLS directives allow improving the hardware design only from a computational perspective, requiring manual code restructuring when memory transfers need optimizing. This aspect limits the effectiveness of Design Space Exploration (DSE) approaches that only target HLS directives. Therefore, we present a comprehensive methodology to support the designer in the generation of optimal HLS-based hardware implementations. First, we propose an automated roofline model generation that operates directly on a C/C++ description of the target algorithm. The approach enables a fast evaluation of the operational intensity of the target function and visualizes the main bottlenecks of the current HLS implementation, providing guidance on how to improve it. Second, we introduce a DSE methodology for quickly evaluating different HLS directives to identify an optimal implementation. We report the DSE performance when running on the PolyBench test suite, outperforming previous automated solutions in the literature. Finally, we illustrate the process of accelerating, by means of our framework, a complex application such as the N-body physics simulation algorithm, achieving results comparable to bespoke state-of-the-art implementations.
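The roofline model the methodology automates can be sketched in a few lines; the kernel numbers below are invented for illustration, not taken from the paper's tool:

```python
# Sketch of a roofline check. Peak compute and bandwidth figures are
# illustrative, not measurements of any specific FPGA.

def attainable_gflops(intensity, peak_gflops, bandwidth_gbs):
    """Roofline: performance is capped by either compute or memory traffic."""
    return min(peak_gflops, intensity * bandwidth_gbs)

# Operational intensity = floating-point ops per byte moved.
flops = 2 * 1024**2          # a kernel doing ~2M FLOPs...
bytes_moved = 8 * 1024**2    # ...while moving ~8 MB
intensity = flops / bytes_moved   # 0.25 FLOP/byte

roof = attainable_gflops(intensity, peak_gflops=100.0, bandwidth_gbs=10.0)
# 0.25 * 10 = 2.5 GFLOP/s, far below the 100 GFLOP/s compute peak,
# so this kernel is memory-bound: directives alone will not help, the
# memory transfers need restructuring.
```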
This presentation is a Users' Guide for the Computational Fluid Dynamics (CFD) framework One-Click CFD. This framework enables CFD analysis of vehicles such as racing cars by people with no prior experience in CFD.
The One-Click CFD framework is distributed by the Khamsin Virtual Racecar Challenge - http://www.khamsinvirtualracecarchallenge.com - and is used as part of the 2015 Challenge.
I'm glad to present my Work Portfolio with the skills I gained during my BIM training period. Feel free to review the works and share your comments; feedback is most welcome as always. Looking for an entry-level BIM Modeler role.
A Virtual Machine Placement Algorithm for Energy Efficient Cloud Resource Res... - SuvomDas
In this slide deck, we are going to discuss a new graph colouring model for advance resource reservation with minimum energy consumption in heterogeneous IaaS cloud data centres. We will start with an exact integer linear programming (ILP) formulation which generalises the graph colouring problem mathematically, and follow with an Energy Efficient Graph Pre-colouring (EEGP) heuristic to address scalability and reduce convergence times. The results of performance evaluation and comparison of EEGP with the exact algorithm will demonstrate the efficiency of EEGP for the energy-efficient advance resource reservation problem.
We will see the efficiency of the EEGP algorithm by comparing it with the exact integer linear programming solution
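As a hedged illustration of the underlying graph-colouring idea only (EEGP itself adds an energy objective and more machinery), a basic greedy colouring of a reservation conflict graph might look like:

```python
# Invented example: reservations that overlap in time conflict and must
# be placed on different servers (colours). This is the textbook greedy
# heuristic, not EEGP's actual algorithm.

def greedy_colouring(adj):
    """adj: dict node -> set of neighbours (symmetric). Returns node -> colour."""
    colour = {}
    # Colour high-degree nodes first, a common heuristic ordering.
    for node in sorted(adj, key=lambda n: -len(adj[n])):
        used = {colour[nb] for nb in adj[node] if nb in colour}
        c = 0
        while c in used:
            c += 1
        colour[node] = c
    return colour

# Four reservations; an edge means "overlapping in time".
conflicts = {
    "r1": {"r2", "r3", "r4"},
    "r2": {"r1", "r3"},
    "r3": {"r1", "r2"},
    "r4": {"r1"},
}
colours = greedy_colouring(conflicts)
# r1, r2, r3 form a triangle, so at least three colours (servers) are needed.
```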
Extracting a Rails Engine to a separate application - Jônatas Paganini
As a Rails Application grows, there is a need to decouple heavy systems from the monolithic applications. Several teams in different companies are doing the same: extracting (micro) services from their monolithic applications to give the engineering teams more flexibility to speed up the workflow.
From the separation of the business logic to the server's setup, every change should respect the zero-downtime approach.
This talk shares the automated steps and exercises we created to have a smooth transition to the new system.
I'll share the context of the tool that automatically extracts an entire Rails engine from a project and moves it to a separate service.
ICIAM 2019: Reproducible Linear Algebra from Application to Architecture - Jason Riedy
All computing must be parallel to take advantage of modern systems like multicore processors, GPUs, and distributed systems. Results that are not bit-wise reproducible introduce doubt on many levels. Sometimes that is appropriate. Reproducibility limitations occur because underlying libraries do not specify their reproducibility requirements. New advances in interfaces, algorithms, and architectures allow selecting among those requirements in the future. This talk covers many of the upcoming options and their trade-offs.
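A minimal illustration of the root cause: floating-point addition is not associative, so a different summation order (a different thread interleaving) can change the bits of the result.

```python
# Two-line demonstration of non-associativity in IEEE-754 doubles.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # 0.6000000000000001
right = a + (b + c)   # 0.6
print(left == right)  # False

# Reordering the same three values changes a reduction's result:
print((1.0 + 1e16) - 1e16)  # 0.0  -- the 1.0 is absorbed by 1e16
print((1e16 - 1e16) + 1.0)  # 1.0
```

This is exactly why a parallel reduction whose operand order depends on scheduling cannot promise bit-wise reproducibility unless the library specifies the order.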
Extension of RTKLIB for the calculation and validation of protection levels - Zoltan Siki
System integrity (i.e. the capability of self-monitoring) and the reliability of the provided positions need to be ensured in all safety-critical applications of GPS technology. For the sake of such applications, GPS augmentations, for example Space Based Augmentation Systems (SBAS), are applied to achieve the required level of integrity. SBAS provides integrity in a multi-step procedure that is laid out in the Radio Technical Commission for Aeronautics (RTCA) Minimum Operational Performance Standards (MOPS) for airborne navigation equipment using GPS. Besides integrity, SBAS also improves positioning accuracy by broadcasting corrections that reduce the most important systematic errors of standalone positioning. To quantify integrity, the protection level is defined, which is calculated from the standard deviations of the error models broadcast in SBAS.
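As a hedged sketch of that final step, a horizontal protection level can be computed by scaling the semi-major axis of the position-error ellipse, derived from the east/north error covariance; the K_H constant below is commonly quoted for non-precision approach but should be treated as illustrative — consult the MOPS for the applicable value.

```python
# Illustrative HPL computation in the spirit of the RTCA MOPS; variable
# names and the K_H value are this sketch's assumptions, not RTKLIB code.
import math

def horizontal_protection_level(var_e, var_n, cov_en, k_h=6.18):
    """var_e, var_n: east/north error variances (m^2); cov_en: their covariance.

    d_major is the semi-major axis of the horizontal error ellipse.
    """
    half_sum = (var_e + var_n) / 2.0
    half_diff = (var_e - var_n) / 2.0
    d_major = math.sqrt(half_sum + math.sqrt(half_diff ** 2 + cov_en ** 2))
    return k_h * d_major

hpl = horizontal_protection_level(4.0, 1.0, 0.5)  # ~12.5 m for these toy variances
```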
The following presentation describes the use of numerical simulation to optimize the shape, elasticity and volume of the compensation chamber in an axial pump while minimizing pressure peaks.
For my first paper review at my lab seminar, I chose this paper, authored by Borzsonyi, S., Kossmann, D., and Stocker, K. I made the PPT file to present in front of my fellow students and my professor. This paper is good for studying not only the algorithm (divide and conquer) but also the mathematical aspects of databases and linear algebraic methods.
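For readers unfamiliar with the operator, a naive skyline computation (the simple O(n^2) check, not the paper's divide-and-conquer algorithm) looks like:

```python
# Classic skyline example: hotels as (price, distance) pairs, where
# lower is better in both dimensions. The data points are invented.

def dominates(p, q):
    """p dominates q if p is <= in every dimension and < in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Return the points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

hotels = [(50, 8), (60, 2), (40, 9), (45, 10), (70, 1)]
sky = skyline(hotels)
# (45, 10) is dominated by (40, 9); the other four are incomparable.
```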
Presentation shows through a numerical example of a BOP model how to optimize a critical subsea component using the SIMULIA Power of Portfolio components for fatigue (fe-safe) and reliability (Isight).
A small and brief presentation for internship project at BEL on Data Visualization using Seaborn and matplotlib
Some sensitive information has been redacted.
ICIAM 2019: A New Algorithm Model for Massive-Scale Streaming Graph Analysis - Jason Riedy
Applications in many areas analyze an ever-changing environment. On graphs with billions of vertices, providing snapshots imposes a large performance cost. We propose the first formal model for graph analysis running concurrently with streaming data updates. We consider an algorithm valid if its output is correct for the initial graph plus some implicit subset of concurrent changes. We show theoretical properties of the model, demonstrate the model on various algorithms, and extend it to updating results incrementally.
Using Deep Learning in Production Pipelines to Predict Consumers’ Interest wi... - Databricks
To optimize customer conversion in e-commerce and promote the right message to the right person at the right time, it’s necessary to build powerful predictive models, which usually involves a lot of feature engineering to aggregate consumer-related past events (clicks, page views, purchases…) and extract relevant signals. Recurrent Neural Networks (RNNs) are a class of deep-learning techniques, especially good at working with sequences of inputs and at learning time-dependent patterns. They can learn from past customer behavior, and their internal state can then be used as latent features for downstream models.
In this session, we will see how to use RNN algorithms like LSTM to learn from sequences of events, and produce new features to be used in predictive models. Come see how we managed to use those RNN networks in our Spark pipelines!
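As a hedged, framework-free sketch of that idea (the pipeline described in the talk uses real Spark and deep-learning tooling), here is an LSTM cell reduced to NumPy that turns a sequence of event vectors into a latent feature vector:

```python
# Minimal LSTM forward pass; shapes, sizes and random initialisation are
# illustrative. The final hidden state serves as features downstream.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8            # event-vector size, latent feature size

# One stacked weight matrix for the four gate blocks: input, forget, cell, output.
W = rng.normal(0, 0.1, (4 * n_hidden, n_in + n_hidden))
b = np.zeros(4 * n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_features(events):
    """events: array (seq_len, n_in). Returns the final hidden state (n_hidden,)."""
    h = np.zeros(n_hidden)
    c = np.zeros(n_hidden)
    for x in events:
        z = W @ np.concatenate([x, h]) + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)   # update cell state
        h = o * np.tanh(c)           # update hidden state
    return h

features = lstm_features(rng.normal(size=(10, n_in)))  # 10 events -> 8 features
```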
In linear projects, activities are associated with certain locations. These are usually construction projects, such as buildings and roads, where each activity has location attributes in addition to the duration, start and finish times, cost, resources and other attributes of traditional project schedules.
Time location charts are a way of visualizing project schedules with linear locations on the horizontal axis, and dates on the vertical axis. Schedule activities are then plotted onto the chart according to the locations over which they occur and the dates that the project schedule determines.
Time location charts can be presented for both the original project schedule and a risk-adjusted project schedule, where the risk-adjusted schedule is the result of project risk analysis.
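The data behind such a chart is simple; a minimal sketch with invented activities, where each activity becomes a segment from (start chainage, start day) to (end chainage, end day) and the segment's slope is its production rate:

```python
# Invented linear-project activities: name, chainage range (m), day range.
activities = [
    ("Earthworks",  (0, 2000),  (0, 20)),
    ("Paving",      (0, 2000),  (15, 40)),
    ("Bridge deck", (800, 900), (10, 30)),
]

def segments(acts):
    """Chart segments: location on the horizontal axis, date on the vertical."""
    return [(name, (x0, d0), (x1, d1))
            for name, (x0, x1), (d0, d1) in acts]

def production_rate(act):
    """Metres per day implied by the segment's slope."""
    name, (x0, x1), (d0, d1) = act
    return abs(x1 - x0) / (d1 - d0)

for act in activities:
    print(act[0], production_rate(act), "m/day")
```

Any plotting library can then draw these segments; comparing the original and risk-adjusted segment sets on one chart shows where risk analysis has stretched the schedule.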
Traduttori traditori (“translators traitors”) or the difficulty of converting data
BIM – the Building Information Model – can describe many aspects related to the construction and the management of buildings and built infrastructure. It seems only natural to connect information from BIM files to data in Geographical Information Systems (GIS).
Many fundamental differences make this task more challenging than you might expect and require a deeper transformation of concepts from the world of buildings to GIS.
As a final step in the translation, a GeoSPARQL extension was developed for the well-known Open Source software Ontop, which makes it possible to access geographical data in a PostGIS database as Linked Data.
These activities were performed within the ERDF funded project GEOBIMM.
Introducing Arc: A Common Intermediate Language for Unified Batch and Stream... - Flink Forward
Today's end-to-end data pipelines need to combine many diverse workloads such as machine learning, relational operations, stream dataflows, tensor transformations, and graphs. For each of these workload types there exist several frontends (e.g., DataFrames/SQL, Beam, Keras) based on different programming languages, as well as different runtimes (e.g., Spark, Flink, Tensorflow) that target a particular frontend and possibly a hardware architecture (e.g., GPUs). Putting all the pieces of a data pipeline together leads to excessive data materialisation, type conversions and hardware utilisation, as well as mismatches of processing guarantees.
Our research group at RISE and KTH in Sweden has created Arc, an intermediate language that bridges the gap between any frontend and a dataflow runtime (e.g., Flink) through a set of fundamental building blocks for expressing data pipelines. Arc incorporates Flink- and Beam-inspired stream semantics such as windows, state and out-of-order processing, as well as concepts found in batch computation models. With Arc, we can cross-compile and optimise diverse tasks written in any programming language into a unified dataflow program. Arc programs can run efficiently on various hardware backends, as well as allowing seamless, distributed execution on dataflow runtimes. To that end, we showcase Arcon, a concept runtime built in Rust that can execute Arc programs natively, and present a minimal set of extensions to make Flink an Arc-ready runtime.
Graphs in data structures are non-linear data structures made up of a finite ... - bhargavi804095
Graphs in data structures are non-linear data structures made up of a finite number of nodes or vertices and the edges that connect them. Graphs are used to address real-world problems by representing the problem domain as a network, such as telephone networks, circuit networks, and social networks.
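A minimal sketch of the standard in-code representation, an adjacency list, together with a breadth-first traversal of the kind used to explore such networks:

```python
# Undirected graph as an adjacency list (invented five-vertex example).
from collections import deque

graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def bfs(graph, start):
    """Visit vertices in breadth-first order from `start`."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D', 'E']
```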
FPGA Implementation of Pipelined CORDIC Sine Cosine Digital Wave Generator - cscpconf
The coordinate rotation digital computer (CORDIC) algorithm is a well-known iterative algorithm for performing rotations in digital signal processing applications. A hardware implementation of CORDIC results in an increased critical-path delay. A pipelined architecture is used in CORDIC to increase the clock speed and to reduce the critical-path delay. In this paper, a hardware-efficient digital sine and cosine wave generator is designed and implemented using a pipelined CORDIC architecture. An FPGA-based architecture is presented and the design has been implemented using a Xilinx 12.3 device.
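A software model of the rotation-mode CORDIC iteration that such a pipeline implements stage by stage; each stage needs only shifts and adds, with the constant gain compensated once up front:

```python
# Rotation-mode CORDIC for sine/cosine; iteration count is illustrative.
import math

def cordic_sin_cos(theta, n_iters=24):
    """Return (cos(theta), sin(theta)) for theta in roughly [-pi/2, pi/2]."""
    angles = [math.atan(2.0 ** -i) for i in range(n_iters)]
    k = 1.0
    for i in range(n_iters):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # accumulated gain ~0.60725
    x, y, z = k, 0.0, theta                      # pre-scale by the gain
    for i in range(n_iters):
        d = 1.0 if z >= 0 else -1.0              # rotate toward zero residual
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x, y

c, s = cordic_sin_cos(0.5)  # converges to cos(0.5), sin(0.5)
```

In hardware, each loop iteration becomes one pipeline stage with fixed shift amounts, which is what allows the high clock speed the paper targets.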
In this paper, a novel reduced instruction set computer (RISC) communication processor (RCP) has been designed with 32-bit operations using a 64-bit instruction format and implemented on a field programmable gate array (FPGA). The design of the RISC processor is equipped with communication operations such as basic signals (sine, cosine, and square) and modulation schemes such as amplitude modulation, amplitude shift keying, and analog and digital quadrature amplitude modulation. Additionally, application-oriented operations such as a traffic light, a digital clock, and a linear feedback shift register are included in the design. A pipeline mechanism is incorporated in the design to enhance the performance characteristics of the processor, allowing instructions to execute more effectively. The design is implemented on a Xilinx Virtex 7 family FPGA, and the device utilization of the proposed design is evaluated and compared across different FPGA families.
Design of Adjustable Reconfigurable Wireless Single Core CORDIC based Rake Re... - IOSR Journals
In a wireless communication system, transmitted signals are subject to multiple reflections, diffractions and attenuation caused by obstacles such as buildings and hills. At the receiver end, multiple copies of the transmitted signal are received that arrive at clearly distinguishable time instants and are faded by signal cancellation. A rake receiver is a technique to combine these so-called multi-paths [2] by utilizing multiple correlation receivers allocated to the delay positions at which significant energy arrives, which achieves a significant improvement in the SNR of the output signal. This paper shows how the rake, including despreading and descrambling, can be replaced by a receiver implemented on a CORDIC-based hardware architecture. The performance, in conjunction with the computational requirements of the receiver, is widely adjustable and significantly better than that of the conventional rake receiver.
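The combining principle behind the rake's correlator fingers, maximal-ratio combining, can be sketched with invented channel gains: weighting each multipath copy by the conjugate of its gain aligns the phases so all path energy adds constructively.

```python
# Toy maximal-ratio combining; symbol and path gains are invented.
symbol = 1.0                        # transmitted value
paths = [0.9, 0.5j, -0.3 + 0.2j]    # complex gain of each resolvable path

fingers = [g * symbol for g in paths]       # finger outputs after despreading
combined = sum(g.conjugate() * r for g, r in zip(paths, fingers))
total_energy = sum(abs(g) ** 2 for g in paths)   # 0.81 + 0.25 + 0.13

# The conjugate weighting cancels each path's phase, so `combined` is
# (to rounding) real and equals symbol * total path energy -- the SNR
# gain over listening to any single path.
```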
Realization of Direct Digital Synthesis in Cordic Algorithm - IJASRD Journal
Nowadays, modern communication systems and software-defined-radio-based applications need a transceiver consisting of fully programmable circuits that perform the modulation and demodulation process. The CORDIC algorithm is a method that does not need memory for realizing modulators and demodulators. It is a versatile algorithm which uses only adder and shifter operations instead of multipliers, so it is widely used in VLSI and digital signal processing. The main concept used in this project is Direct Digital Synthesis (DDS), which generates an analog waveform in digital format based on the CORDIC algorithm. This paper focuses on the analysis and simulation of Binary phase shift keying (BPSK), Binary amplitude shift keying (BASK), Binary frequency shift keying (BFSK) and Quadrature phase shift keying (QPSK) modulation schemes using DDS based on the CORDIC algorithm instead of a ROM look-up table, which greatly reduces the number of slices and look-up tables. The whole simulation is done in Modelsim and Xilinx ISE using the Verilog description language, and these modulation schemes are implemented on a Spartan-3 FPGA kit.
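The DDS front end itself is just a phase accumulator; a hedged software sketch follows, with math.sin standing in for the CORDIC phase-to-amplitude stage and invented bit widths and rates:

```python
# N-bit phase accumulator DDS. The frequency tuning word (FTW) sets how
# fast the accumulator wraps, and thus the output frequency.
import math

N = 32                        # accumulator width (illustrative)
f_clk = 1_000_000             # sample clock, Hz
f_out = 1_000                 # desired output frequency, Hz
ftw = round(f_out * 2**N / f_clk)   # frequency tuning word

def dds_samples(count):
    phase = 0
    out = []
    for _ in range(count):
        # In the paper's design a CORDIC stage replaces this sin() lookup.
        out.append(math.sin(2 * math.pi * phase / 2**N))
        phase = (phase + ftw) % 2**N    # wrap-around phase accumulation
    return out

samples = dds_samples(1000)   # ~one 1 kHz cycle at 1 MS/s
```

Modulation then reduces to manipulating this accumulator: BFSK switches the FTW, BPSK/QPSK adds phase offsets, and BASK scales the output amplitude.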
Richard's entangled adventures in wonderland - Richard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt... - Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io’s surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high resolution imaging of Io’s surface using adaptive optics at visible wavelengths.
Multi-source connectivity as the driver of solar wind variability in the heli... - Sérgio Sacani
The ambient solar wind that fills the heliosphere originates from multiple sources in the solar corona and is highly structured. It is often described as high-speed, relatively homogeneous plasma streams from coronal holes and slow-speed, highly variable streams whose source regions are under debate. A key goal of ESA/NASA’s Solar Orbiter mission is to identify solar wind sources and understand what drives the complexity seen in the heliosphere. By combining magnetic field modelling and spectroscopic techniques with high-resolution observations and measurements, we show that the solar wind variability detected in situ by Solar Orbiter in March 2022 is driven by spatio-temporal changes in the magnetic connectivity to multiple sources in the solar atmosphere. The magnetic field footpoints connected to the spacecraft moved from the boundaries of a coronal hole to one active region (12961) and then across to another region (12957). This is reflected in the in situ measurements, which show the transition from fast to highly Alfvénic then to slow solar wind that is disrupted by the arrival of a coronal mass ejection. Our results describe solar wind variability at 0.5 au but are applicable to near-Earth observatories.
(May 29th, 2024) Advancements in Intravital Microscopy - Insights for Preclini... (Scintica Instrumentation)
Intravital microscopy (IVM) is a powerful tool used to study cellular behavior over time and space in vivo. Much of our understanding of cell biology has been gained using various in vitro and ex vivo methods; however, these studies do not necessarily reflect the natural dynamics of biological processes. Unlike traditional cell culture or fixed-tissue imaging, IVM allows ultra-fast, high-resolution imaging of cellular processes over time and space as they unfold in their natural environment. Real-time visualization of biological processes in the context of an intact organism maintains physiological relevance and provides insights into the progression of disease, responses to treatment, and developmental processes.
In this webinar we give an overview of advanced applications of the IVM system in preclinical research. IVIM Technology is a provider of all-in-one intravital microscopy systems and solutions optimized for in vivo imaging of live animal models at sub-micron resolution. The system’s unique features and user-friendly software enable researchers to probe fast, dynamic biological processes such as immune cell tracking, cell-cell interaction, vascularization, and tumor metastasis in exceptional detail. This webinar also gives an overview of IVM in drug development, offering a view into the intricate interactions between drugs or nanoparticles and tissues in vivo and allowing for the evaluation of therapeutic interventions in a variety of tissues and organs. This interdisciplinary collaboration continues to drive the advancement of novel therapeutic strategies.
THE IMPORTANCE OF MARTIAN ATMOSPHERE SAMPLE RETURN.
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
Arc: An IR for Batch and Stream Programming
1. Arc: An IR for Batch and Stream Programming
Lars Kroll*, Klas Segeljakt*, Paris Carbone†, Christian Schulte*, Seif Haridi*†
*KTH Royal Institute of Technology, Stockholm, Sweden – †RISE SICS, Stockholm, Sweden
presented by Lars Kroll at the 17th Symposium on Database Programming Languages (DBPL 2019) in Phoenix, Arizona, USA in June 2019
2. The Challenge
[Diagram: data flows through data programming systems (frontends such as relational (π), tensors, streams (∞), and dynamic graphs) to yield the knowledge needed for decision making.]
3. Arcon: A Streaming Runtime for Heterogeneous Hardware
[Diagram: the same frontends (relational (π), tensors, streams, dynamic graphs) feed Arcon, which distributes worker-specific binaries to a set of workers.]
4. Arcon Compiler Pipeline
Frontends → Arc (High-Level IR) → Logical Dataflow IR → Physical Dataflow IR → Binaries, deployed on Arcon
5. Arc Compiler Overview
Lexer/Tokeniser → Parser (ANTLR-generated) → Macro Expansion → Type Inference (constraint-based) → Arc Descriptor → Translate to Dataflow IR → Optimisations (repeated until fixpoint)
6. What is Arc?
• The Weld IR*
• A restrictive language for describing data transformations
• Pure expressions without side-effects (except: CUDFs)
• Collections: Read-only data types (e.g., vec[T], dict[K,V])
• Builders: Write-only data types (e.g., appender[T], merger[T], groupbuilder[T])
• Calling result on a builder returns the associated collection type (or a primitive for merger)
• Arc extends Weld for streaming
• Observation
– Stream Sources are read-only
– Stream Sinks are write-only
– Connect Sinks to Sources via Channels
• Source is a collection stream[T]
• Sink is a builder streamappender[T]
• Calling result on a Sink returns a Source and creates a Channel between them
*Palkar, Shoumik, et al. "Weld: A common runtime for high performance data analytics." Conference on Innovative Data Systems Research (CIDR). 2017.
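The collection/builder split described above can be mimicked outside Weld. Below is a minimal Python sketch of the idea, assuming nothing beyond the slide's description; the Appender class and its methods are hypothetical illustrations, not Weld or Arc APIs:

```python
# Hypothetical Python analogy for Weld's collection/builder split.
# An "Appender" mimics appender[T]: a write-only builder that only
# accepts merge() calls until result() freezes it into a read-only
# collection (here, a tuple).
class Appender:
    def __init__(self):
        self._items = []

    def merge(self, value):
        # merge(builder, value) returns the builder so it can be
        # threaded through a fold, as Weld's for-loop does.
        self._items.append(value)
        return self

    def result(self):
        # result(builder) yields the associated read-only collection.
        return tuple(self._items)

app = Appender()
for i in [1, 2, 3]:
    app = app.merge(i + 5)
print(app.result())  # -> (6, 7, 8)
```

The point of the write-only/read-only split is that the runtime is free to reorder or parallelize merges, since no code can observe the builder's contents before result is called.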
7. Weld Example

|input:vec[i32]|
result(
  for(input:vec[i32],
      appender[i32],
      |app:appender[i32], _:i64, i:i32|
        merge(app, i + 5)))

macro map(data, func) = (
  result(for(data, appender, |b, i, x| merge(b, func(x))))
);

[Diagram: [1, 2, 3, 4, 5] → map(i+5) → [6, 7, 8, 9, 10]]
8. Arc Example 1

|input:Stream[i32], output:StreamAppender[i32]|
for(input,
    output,
    |out, i|
      merge(out, i + 5))

[Diagram: source (input) → map(i+5) → sink (output)]
9. Arc Example 1 (with explicit types)

|input:Stream[i32], output:StreamAppender[i32]|
for(input: Stream[i32],
    output: StreamAppender[i32],
    |out: StreamAppender[i32], i:i32|
      merge(out, i + 5): StreamAppender[i32])

[Diagram: source (input) → map(i+5) → sink (output)]
10. Arc Example 2

|input:Stream[i32], output:StreamAppender[i32]|
for(input,
    output,
    |out, i|
      if(i % 2 == 0,
         merge(out, i),
         out))

[Diagram: source (input) → filter(i%2==0) → sink (output)]
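The filter idiom in this example, merge the element or return the sink unchanged, can be sketched in plain Python; the stream_filter name is a hypothetical illustration, not an Arc API:

```python
# Hypothetical sketch of Arc's filter idiom: a fold over the stream
# that either merges the element into the sink or passes the sink
# through unchanged (i.e., drops the element).
def stream_filter(source, predicate):
    sink = []
    for i in source:
        if predicate(i):
            sink.append(i)   # merge(out, i)
        # else: the sink is returned unchanged
    return sink

print(stream_filter(range(10), lambda i: i % 2 == 0))  # -> [0, 2, 4, 6, 8]
```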
11. Arc Example 3

|source:Stream[i32],
 evenSink:StreamAppender[i32],
 oddSink:StreamAppender[i32]|
let mapped = result(
  for(source,
      StreamAppender[i32],
      |out, i| merge(out, i + 5)));
for(mapped, evenSink, |out, i|
  if (i % 2 == 0, merge(out, i), out));
for(mapped, oddSink, |out, i|
  if (i % 2 != 0, merge(out, i), out))

[Diagram: source → map(i+5), then filter(i%2==0) → evenSink and filter(i%2!=0) → oddSink]
12. Arc Optimisations
• Operator Reordering
• Redundancy Elimination
• Operator Separation (Fission)
• Operator Fusion
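As a small illustration of operator fusion, here is a hedged Python sketch (not Arc code, and all function names are hypothetical) showing that fusing map(i+5) with a parity filter removes the materialized intermediate collection between the two operators:

```python
# Unfused pipeline: two operators, one materialized intermediate list.
def map_plus5(xs):
    return [x + 5 for x in xs]

def filter_even(xs):
    return [x for x in xs if x % 2 == 0]

unfused = filter_even(map_plus5([1, 2, 3, 4, 5]))

# Fused operator: a single pass over the input, no intermediate list.
def map_then_filter(xs):
    out = []
    for x in xs:
        j = x + 5
        if j % 2 == 0:
            out.append(j)
    return out

fused = map_then_filter([1, 2, 3, 4, 5])
print(unfused, fused)  # -> [6, 8, 10] [6, 8, 10]
```

Fusion preserves the result while halving the number of traversals, which is exactly why a side-effect-free IR such as Arc can apply it safely.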
13. Arc Example 3 (fused)

|source:Stream[i32],
 evenSink:StreamAppender[i32],
 oddSink:StreamAppender[i32]|
for(source,
    {evenSink, oddSink},
    |out, i|
      let j = i + 5;
      if(j % 2 == 0,
         {merge(out.$0, j), out.$1},
         {out.$0, merge(out.$1, j)}))

[Diagram: source → map(i+5) then if(j%2==0) → evenSink, oddSink]
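The fused operator above maps and routes each element in a single pass; a hedged Python rendering follows (the map_and_split name and list-based sinks are hypothetical stand-ins for Arc's stream appenders):

```python
# Hypothetical one-pass rendering of the fused Arc operator:
# apply map(i+5), then route each result to the even or odd sink.
def map_and_split(source):
    even_sink, odd_sink = [], []
    for i in source:
        j = i + 5
        if j % 2 == 0:
            even_sink.append(j)   # {merge(out.$0, j), out.$1}
        else:
            odd_sink.append(j)    # {out.$0, merge(out.$1, j)}
    return even_sink, odd_sink

evens, odds = map_and_split([1, 2, 3, 4, 5])
print(evens, odds)  # -> [6, 8, 10] [7, 9]
```

Compared with Example 3's three-operator dataflow, the fused form touches each element once and never materializes the intermediate mapped stream.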
15. Summary

• Arcon deploys continuous streaming analytics on heterogeneous hardware
• Arc is its high-level IR to abstract across frameworks and languages
• Arc extends Weld’s side-effect-free approach for streams and windows
• Arc allows logical dataflow optimisations