E-learning Python for Ocean Mapping (ePOM) project.
Complementary slides to the Data Visualization module (part of the Introduction to Ocean Data Science training).
More details at https://www.hydroffice.org/epom
ePOM - Intro to Ocean Data Science - Scientific Computing – Giuseppe Masetti
E-learning Python for Ocean Mapping (ePOM) project.
Complementary slides to the Scientific Computing module (part of the Introduction to Ocean Data Science training).
More details at https://www.hydroffice.org/epom
ePOM - Intro to Ocean Data Science - Raster and Vector Data Formats – Giuseppe Masetti
E-learning Python for Ocean Mapping (ePOM) project.
Complementary slides to the Raster and Vector Data Formats module (part of the Introduction to Ocean Data Science training).
More details at https://www.hydroffice.org/epom
Improving the usability of the Information system of land cover in Spain (SIOSE) – Benito Zaragozí
This technical presentation focuses on the usability gap of the Geospatial Reference Information (GRI) compiled and published by Spatial Data Infrastructures (SDIs).
We present a case study on Land Occupation databases that use an object-oriented data model following the INSPIRE technical specifications. In this case the usability gap consists in the object-relational impedance mismatch, which arises when an object-oriented data model has to be stored in a relational database.
We performed a computational experiment to test whether there are benefits to storing land use (LU) and land cover (LC) data in a document-store database. In this experiment we used the LU/LC database of Spain (SIOSE).
The results show benefits in terms of throughput capacity and response times. Based on these results, we propose some opportunities for achieving even better results.
This experiment was performed using Docker containerization technology, so it is completely reproducible in less than eight hours (it took weeks to prepare) by executing a few lines of code. As a suggestion, Docker could be used as a way of sharing GRI databases with companies and advanced users willing to use these data for research or business.
This research will be continued in a research project starting in 2017, funded by the Spanish Ministry of Economy and Competitiveness.
Build 2017 - B8037 - Explore the next generation of innovative UI in the Visu... – Windows Developer
Experience a new wave of UI design with the animations, effects, and transitions that are the platform building blocks in the Visual Layer. See how physics, depth, lighting, and unique materials allow you to create immersive and personalized experiences, optimized for the range of Windows devices.
As David McCandless famously said, “Information Visualization is a form of knowledge compression.” In particular, it is a way of compressing information in a visual form that can be easily and correctly interpreted by the visual system in our brains.
In this tutorial we will discuss the way in which our eyes and visual cortex process colors and shapes and how we may use it to our advantage. Ideas and concepts will be presented in an intuitive and practical way while providing references for the more technical descriptions and explanations available in the relevant scientific literature.
Matplotlib is the workhorse of visualization in Python: it underlies all other major Python visualization packages and is particularly well integrated into the Jupyter ecosystem. Mastering it is a fundamental requirement for proficiency in Python data visualization. Seaborn, on the other hand, is a more recent package that builds on top of matplotlib and simplifies it for some of the most common use cases, making it more productive. We will cover both tools through practical examples and highlight the main differences and advantages of each.
Code and slides available here: https://bmtgoncalves.github.io/DataVisualization/
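The matplotlib/seaborn relationship described above can be sketched in a few lines. This is an illustrative example with synthetic data, not taken from the tutorial; the equivalent seaborn shortcut is shown in a comment:

```python
# Minimal sketch: a histogram built with raw matplotlib.
# Synthetic data; the seaborn one-liner equivalent is noted below.
import matplotlib
matplotlib.use("Agg")  # headless backend, so no display is required
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(42)
values = rng.normal(loc=0.0, scale=1.0, size=500)

fig, ax = plt.subplots(figsize=(6, 4))
ax.hist(values, bins=30, edgecolor="black")
ax.set_xlabel("value")
ax.set_ylabel("count")
ax.set_title("Histogram with raw matplotlib")
fig.savefig("hist.png")

# Seaborn builds on the same matplotlib machinery and shortens this to:
#   import seaborn as sns
#   sns.histplot(values, kde=True)
```

The point of the comparison: matplotlib gives full control over every axis and label, while seaborn trades some of that control for one-call statistical plots.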
Exploratory data analysis of 2017 US Employment data using R – Chetan Khanzode
Data science use case: exploratory data analysis of 2017 US employment data using R. Use of R libraries for visualization of employment data by state, county, and industry sector, with simple geospatial visualization of the employment data.
Looking into the past - feature extraction from historic maps using Python, O... – James Crone
Tutorial presentation providing an overview of extracting geospatial features from scanned historic maps in an automated fashion using Python, OpenCV and PostGIS.
Vincent Sarago (Mapbox) | Mass processing of satellite imagery us... – ACSG Section Montréal
How to get the most out of cloud-computing technologies to process terabytes of satellite data in order to create a global satellite basemap. Since 2010, Mapbox has been using the technologies offered by Amazon Web Services to create the best high-resolution global map.
An introduction to GIS Data Types. Strengths and weaknesses of raster and vector data are discussed. Also covered is the importance of topology. Concludes with a discussion of the vector-based format of OpenStreetMap data.
CSUN 2023 Automated Descriptions 3 March 2023 TG.pptx – Ted Gies
Highcharts is a world-leading provider of accessible charting tools for the web, used by 80 of the top 100 Fortune companies. Recently Highcharts and the Digital Accessibility Team of the global publishing company Elsevier collaborated to provide better accessibility for line charts with large datasets.
Line charts are often used to visualize datasets with thousands of data points. This presents a challenge for non-visual access, as providing access to individual data points is not sufficient. A reader of a line chart with a large amount of data will aim to extract information about trends, patterns, and outliers from the chart. Can we make this information more accessible by communicating it through text and sound? What is the most intuitive way to experience this data through sound? And to what extent can we automate the text description?
Human-authored text descriptions of charts are historically difficult to beat, but can in many cases be impractical – such as where data is dynamically loaded in real time. Automated text descriptions can also be designed to be more objective and less prone to biases. Will users be able to trust these descriptions? Will they still prefer those created by a human?
For each of these accessibility research questions we will provide feedback from non-sighted users on our approaches. We will share findings about best practices and show screen reader demos to help illustrate design considerations.
Visualization, A Primer - Basics, Techniques and Guidelines – Cagatay Turkay
Slides for my talk at a workshop at Digital Catapult in June 2015 on visualization design basics, common techniques, and some guidelines. I discuss some of the underlying theory and basic methods, followed by a selection of common visualisation methods arranged according to data type, then move on to some guidelines and recommendations for designing visualisations. Basically, a very high-level 45-minute crash course in visualisation.
The Inquisitive Data Scientist: Facilitating Well-Informed Data Science throu... – Cagatay Turkay
Slides for my talk at the VRVis Research Centre in Vienna as part of their VRVIS Forum talk series on November 8th 2018 -- https://www.vrvis.at/newsroom/events/forum/148-invited-talk-by-cagatay-turkay-the-inquisitive-data-scientist/
The talk argues the importance of being "inquisitive" as a data scientist and discusses techniques from visualisation that support this.
Guest Lecture for the Data Visualization Class at Ateneo de Manila University. Basic design for Computer Science students. For educational purposes only, no copyright infringement intended.
Highcharts and Elsevier share recent research into making interactive web charts more accessible. Our usability studies focused on three areas, including stacked column charts, scatter plots, and charts with drill-down interactivity. We will share design considerations for keyboard navigation and the understandability of non-visual representations of data visualizations.
A collection of slides on visualizing data (BIG or not). I am still adding slides here and tweaking things, so if you have a correction, an opinion, or an addition, please let me know on Twitter @jamesonthecrow.
Why is it suboptimal to visualize data as plain figures? What is the purpose of data visualization? Why should you care? What is the interplay between statistics, data analysis, and a good marketing story? In this talk, I'll give some answers and try to convince you to adopt best practices in dataviz.
LASTconf 2018 - System Mapping: Discover, Communicate and Explore the Real Co... – Colin Panisset
This session provides evidence-based techniques for uncovering this complexity, visualising it in a machine-friendly but fundamentally human-centric manner, and using the results to drive real organisational awareness that facilitates conversations and change.
There's a wealth of data readily available, but few people know what to do with it. Based on our 7 years of practical experience running the leading Canadian data-visualization studio and working with high-profile clients, we share practical ways to use data in design & communications, while giving an overview of the challenges & opportunities ahead.
Creatives will be interested in learning how to use data in their work; marketers will discover new ways of communicating information.
Five things you will learn:
1- How data can be used as an input in the creative process
2- How data can be used in communications & public relations
3- Discover "the spectrum of visualization"
4- Learn about the challenges of working with data
5- Discover the new disciplines emerging around the usage of data
Open Backscatter Toolchain (OpenBST) Project - A Community-vetted Workflow fo... – Giuseppe Masetti
Presentation given at the Canadian Hydrographic Conference 2020
Dates: Mon., Feb. 24, 2020 – Thu., Feb. 27, 2020
Location: Quebec City, Canada
Authors: M. Smith, G. Masetti, L. Mayer, M. Malik, J.-M. Augustin, C. Poncelet, I. Parnum
e-learning Python for Ocean Mapping - Empowering the next generation of ocean... – Giuseppe Masetti
Presentation given at the Canadian Hydrographic Conference 2020
Dates: Mon., Feb. 24, 2020 – Thu., Feb. 27, 2020
Location: Quebec City, Canada
Authors: G. Masetti, S. Dijkstra, R. Wigley, S. Greenaway,
D. Manda, A. Armstrong, and L. Mayer
ePOM - Fundamentals of Research Software Development - Code Version Control – Giuseppe Masetti
E-learning Python for Ocean Mapping (ePOM) project.
Complementary slides to the "Code Version Control" module (part of the Fundamentals of Research Software Development training).
More details at https://www.hydroffice.org/epom
ePOM - Fundamentals of Research Software Development - Integrated Development... – Giuseppe Masetti
E-learning Python for Ocean Mapping (ePOM) project.
Complementary slides to the "Integrated Development Environment" module (part of the Fundamentals of Research Software Development training).
More details at https://www.hydroffice.org/epom
ePOM - Fundamentals of Research Software Development - Introduction – Giuseppe Masetti
E-learning Python for Ocean Mapping (ePOM) project.
Complementary slides to the Introduction module (part of the Fundamentals of Research Software Development training).
More details at https://www.hydroffice.org/epom
ePOM - Intro to Ocean Data Science - Object-Oriented Programming – Giuseppe Masetti
E-learning Python for Ocean Mapping (ePOM) project.
Complementary slides to the Object-Oriented Programming module (part of the Introduction to Ocean Data Science training).
More details at https://www.hydroffice.org/epom
AusSeabed workshop - Pydro and Hydroffice - Days 2 and 3 – Giuseppe Masetti
Slides presented by Giuseppe Masetti (UNH, CCOM/JHC) and Tyanne Faulkes (NOAA, OCS PHB) during the "Effective Seabed Mapping Workflow" Workshop. June 19 and 20, 2019. Canberra, ACT, Australia
AusSeabed workshop - Pydro and Hydroffice - Day 1 – Giuseppe Masetti
Slides presented by Giuseppe Masetti (UNH, CCOM/JHC) and Tyanne Faulkes (NOAA, OCS PHB) during the "Effective Seabed Mapping Workflow" Workshop. June 18, 2019. Canberra, ACT, Australia
Hydrographic Survey Validation and Chart Adequacy Assessment Using Automated ... – Giuseppe Masetti
Authors: G. Masetti, T. Faulkes, C. Kastrisios
The presentation was given at the U.S. Hydro 2019 Conference.
Abstract:
The rising trend in automation is constantly pushing the hydrographic field toward the exploration and adoption of more effective approaches for each step of the ping-to-public workflow. However, the large amounts of data collected by modern acquisition systems - especially when paired with the force-multiplier factor provided by autonomous vessels - conflict with the increasing timeliness expected by today’s final users. Such a situation represents a processing challenge for the largely human-centered solutions that are currently available, and the adoption of automated and semi-automated data quality procedures seems the only scalable, long-term solution to the problem. At the same time, there is an inherent value in propagating the application of such procedures upstream in the survey workflow. In fact, capturing potential issues close (in time and space) to their occurrence has the advantages of reducing the effort required for their solution and limiting their extent. As such, modern surveys should rely more and more on robust data quality procedures that are applied in near real time.
With the challenge to automate and standardize a large portion of the quality controls used to analyze hydrographic data, NOAA’s Office of Coast Survey and the UNH Center for Coastal and Ocean Mapping have jointly developed (and made publicly available) a pair of software solutions - QC Tools for quality control and CA Tools for chart adequacy - that collect algorithmic implementations for a number of these tasks. Their aim is to verify whether the acquired data satisfy the adopted agency standards (and, in a more general sense, are fit for the intended purpose). These standards usually focus on data quality aspects like data density, coverage, and uncertainty evaluation, which are largely automated by the tools discussed in this paper, leaving to the experienced hydrographer the duty to review the results and supervise the validation process. After an overview of the tools (and the relevant recent improvements driven by field feedback), this work focuses on a new chart adequacy algorithm as well as an experimental approach for bathymetric anomaly detection and classification. A number of examples that use the publicly available solutions in real-world scenarios are also illustrated.
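The kind of data-density check described above can be illustrated with a toy grid-based computation. This is an illustrative sketch only (synthetic soundings, made-up threshold), not the QC Tools implementation:

```python
# Toy illustration of an automated data-density check: count soundings per
# grid cell and flag cells below a minimum-density standard.
# Synthetic data and threshold; not the QC Tools code.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic sounding positions over a 100 m x 100 m survey area.
x = rng.uniform(0, 100, size=2000)
y = rng.uniform(0, 100, size=2000)

cell_size = 10.0  # grid resolution in meters
bins = np.arange(0, 100 + cell_size, cell_size)
density, _, _ = np.histogram2d(x, y, bins=[bins, bins])

min_soundings = 5  # hypothetical agency standard per cell
flagged = np.argwhere(density < min_soundings)
print(f"{len(flagged)} of {density.size} cells fail the density check")
```

A real tool would also evaluate coverage holidays and per-sounding uncertainty against the survey order, but the per-cell counting step is the core of the density criterion.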
The Open Backscatter Toolchain (OpenBST) project: towards an open-source and ... – Giuseppe Masetti
Authors: G. Masetti, J.-M. Augustin, M. Malik, C. Poncelet, X. Lurton, L. Mayer, G. Rice, M. Smith
The presentation was given at the U.S. Hydro 2019 Conference.
Abstract:
Most ocean mapping surveys collect seafloor reflectivity (backscatter) along with bathymetry. While the consistency of bathymetry processed by commonly adopted algorithms is well established, surprisingly large variability is observed between the backscatter mosaics generated by different software packages when processing the same dataset. Such a situation severely limits the use of acoustic backscatter for quantitative analysis (e.g., monitoring seafloor change over time, or remote characterization of seafloor characteristics) and other commonly attempted tasks (e.g., merging mosaics from different origins).
Acoustic backscatter processing involves a complex sequence of steps, but inasmuch as commercial software packages mainly provide end-results, comparisons between those results offer little insight into where in the workflow the differences are generated. In addition, preliminary results of a software-inter-comparison working group have shown that each processing algorithm tends to adopt a distinct, unique workflow; this causes large disagreements even in the initial per-beam reflectivity values resulting from differences in basic operations such as snippet averaging and evaluation of flagged beams.
Far from ideal, this situation requires a clear shift from the past closed-source approach that has caused it. As such, the Open Backscatter Toolchain (OpenBST) project aims to provide the community with an open-source and metadata-rich modular implementation of a toolchain dedicated to acoustic backscatter processing. The long-term goal is not to create processing tools that would compete with available commercial solutions, but rather a set of open-source, community-vetted, reference algorithms usable by both developers and users for benchmarking their processing algorithms.
As a proof-of-concept, we present a prototype implementation with the key elements of the OpenBST approach:
• The data conversion from a native acquisition format (i.e., Kongsberg EM Series) to NetCDF-based data structures (components of the eXtensible Sounder Format) better suited to data exploration, processing and metadata coupling.
• A processing pipeline constituted by a set of interlocking, task-oriented tools simplifying their substitution with alternative approaches.
• The creation of final products (i.e., angular response curves and backscatter mosaics) capturing relevant acquisition and processing metadata.
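One of the final products named above, the angular response curve, amounts to averaging backscatter level per incidence-angle bin. A simplified sketch with synthetic data follows; this is illustrative only and not the OpenBST implementation:

```python
# Simplified angular response curve: mean backscatter level (dB) per
# 5-degree incidence-angle bin. Synthetic data; not the OpenBST code.
import numpy as np

rng = np.random.default_rng(1)
angles = rng.uniform(-65, 65, size=5000)  # beam incidence angles (degrees)

# Synthetic Lambertian-like response plus noise, in dB.
backscatter = -20 + 10 * np.log10(np.cos(np.radians(angles)) ** 2 + 1e-6)
backscatter += rng.normal(0, 1.0, size=angles.size)

edges = np.arange(-65, 70, 5)  # 5-degree bins from -65 to +65
bin_idx = np.digitize(angles, edges) - 1
curve = np.array(
    [backscatter[bin_idx == i].mean() for i in range(len(edges) - 1)]
)
# Near-nadir bins return higher (less negative) levels than outer beams.
```

In a real pipeline the per-beam levels would come from calibrated, corrected soundings, and the curve would be stored alongside its acquisition and processing metadata.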
Pydro & HydrOffice: Open Tools for Ocean Mappers – Giuseppe Masetti
Workshop given by Damian Manda (NOAA Office of Coast Survey) and Giuseppe Masetti (UNH Center for Coastal and Ocean Mapping/NOAA-UNH Joint Hydrographic Center) on March 18, 2019 at the US Hydro Conference in Biloxi, MS, USA.
Backscatter Working Group Software Inter-comparison Project: Requesting and Co... – Giuseppe Masetti
Backscatter mosaics of the seafloor are now routinely produced from multibeam sonar data, and used in a wide range of marine applications. However, significant differences (up to 5 dB) have been observed between the levels of mosaics produced by different software packages processing the same dataset. This is a major detriment to several possible uses of backscatter mosaics, including quantitative analysis, monitoring seafloor change over time, and combining mosaics. A recently concluded international Backscatter Working Group (BSWG) identified this issue and recommended that “to check the consistency of the processing results provided by various software suites, initiatives promoting comparative tests on common data sets should be encouraged […]”. However, backscatter data processing is a complex (and often proprietary) sequence of steps, so that simply comparing end results between software packages does not provide much information as to the root cause of the differences between results.
In order to pinpoint the source(s) of inconsistency between software packages, it is necessary to understand at which stage(s) of the data processing chain the differences become substantial. We have invited willing software developers to discuss this framework and collectively adopt a list of intermediate processing steps. We provided a small dataset consisting of various seafloor types surveyed with the same multibeam sonar system, using constant acquisition settings and sea conditions, and have the software developers generate these intermediate processing results, to be eventually compared. If the experiment proves fruitful, we may extend it to more datasets, software packages, and intermediate results. Eventually, software developers may consider making the results from intermediate stages a standard output as well as adhering to a consistent terminology, as advocated by Schimel et al. (2018). To date, the developers of four software packages (Sonarscope, QPS FMGT, CARIS SIPS, MB Process) have expressed their interest in collaborating on this project.
Shallow Survey 2018 - Applications of Sonar Detection Uncertainty for Survey ... – Giuseppe Masetti
Authors: Giuseppe Masetti (1*), Jean-Marie Augustin (2), Xavier Lurton (2), Brian R. Calder (3)
1. CCOM/JHC, University of New Hampshire, Durham, NH, USA, gmasetti@ccom.unh.edu
2. Institut Français de Recherche pour l’Exploitation de la Mer (Ifremer), Brest, France
3. CCOM/JHC, University of New Hampshire, Durham, NH, USA
An objective measurement of the bathymetric uncertainty introduced by sonar bottom detection has been proposed (Lurton and Augustin, 2009) to overcome the sonar-specific heuristic solutions proposed by manufacturers. This approach pairs each sounding with an estimation of sonar detection uncertainty (SDU) based on the width of the signal envelope (amplitude detection) or the noise level of the phase ramp (phase detection), thus capturing the intrinsic quality of the received signal and any applied signal-processing step.
Along with the environment characterization and the motion sensor accuracy, the SDU represents a major contributor to the total vertical uncertainty (TVU). As such, the monitoring of the SDU statistics by detection types, acquisition modes, and transmission sectors (when available) provides an effective way to alert the surveyor about ongoing issues in the data collection. It also has potential application in the evaluation of the health status of the sonar - for example, by comparing SDU-derived performance of repeated surveys on the same seafloor area and estimating the uncertainty contributions from environment and motion. Finally, the SDU may be integrated in multiple stages of the data processing workflow, from data pre-filtering to hydrographic uncertainty modeling, up to more advanced applications like hypotheses disambiguation in statistical gridding algorithms (e.g., CUBE).
Based on such considerations, we conducted a study to explore possible applications of the estimated SDU values for survey quality control and data processing. The results of the analysis applied to real data – collected using multibeam echosounders from manufacturers who are early adopters of this metric (i.e., Kongsberg Maritime and Teledyne Reson) – provide evidence that SDU is a useful tool for survey monitoring.
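Since the SDU is described above as one contributor to the total vertical uncertainty, the standard practice of combining independent uncertainty sources in quadrature can be sketched as follows. The numbers are made up for illustration; the paper does not provide this computation:

```python
# Illustrative combination of independent 1-sigma uncertainty sources in
# quadrature. All values are hypothetical, not from the paper.
import math

sdu = 0.10     # sonar detection uncertainty (m)
env = 0.08     # environment contribution, e.g. sound speed (m)
motion = 0.05  # motion sensor contribution (m)

# Independent sources combine as the root sum of squares.
tvu = math.sqrt(sdu**2 + env**2 + motion**2)
print(f"TVU = {tvu:.3f} m")
```

Monitoring the SDU term separately, as the abstract suggests, lets a surveyor tell whether a growing TVU is driven by the sonar itself or by the environment and motion terms.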
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt... – Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io’s surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high-resolution imaging of Io’s surface using adaptive optics at visible wavelengths.
Richard's entangled adventures in wonderland – Richard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Comparing Evolved Extractive Text Summary Scores of Bidirectional Encoder Rep... – University of Maribor
Slides from:
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Track: Artificial Intelligence
https://www.etran.rs/2024/en/home-english/
This PDF is about schizophrenia.
For more details, visit the SELF-EXPLANATORY channel on YouTube:
https://www.youtube.com/channel/UCAiarMZDNhe1A3Rnpr_WkzA/videos
What are greenhouse gases, and how many gases affect the Earth? – moosaasad1975
What are greenhouse gases, how do they affect the Earth and its environment, what is the future of the environment and the Earth, and how are the weather and the climate affected?
Seminar on U.V. Spectroscopy – SAMIR PANDA
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption spectroscopy or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that can measure the amount of light absorbed by the analyte.
Deep Behavioral Phenotyping in Systems Neuroscience for Functional Atlasing a... – Ana Luísa Pinho
Functional Magnetic Resonance Imaging (fMRI) provides means to characterize brain activations in response to behavior. However, cognitive neuroscience has been limited to group-level effects referring to the performance of specific tasks. To obtain the functional profile of elementary cognitive mechanisms, the combination of brain responses to many tasks is required. Yet, to date, both structural atlases and parcellation-based activations do not fully account for cognitive function and still present several limitations. Further, they do not adapt overall to individual characteristics. In this talk, I will give an account of deep-behavioral phenotyping strategies, namely data-driven methods in large task-fMRI datasets, to optimize functional brain-data collection and improve inference of effects-of-interest related to mental processes. Key to this approach is the employment of fast multi-functional paradigms rich on features that can be well parametrized and, consequently, facilitate the creation of psycho-physiological constructs to be modelled with imaging data. Particular emphasis will be given to music stimuli when studying high-order cognitive mechanisms, due to their ecological nature and quality to enable complex behavior compounded by discrete entities. I will also discuss how deep-behavioral phenotyping and individualized models applied to neuroimaging data can better account for the subject-specific organization of domain-general cognitive systems in the human brain. Finally, the accumulation of functional brain signatures brings the possibility to clarify relationships among tasks and create a univocal link between brain systems and mental functions through: (1) the development of ontologies proposing an organization of cognitive processes; and (2) brain-network taxonomies describing functional specialization. 
To this end, tools to improve commensurability in cognitive science are necessary, such as public repositories, ontology-based platforms and automated meta-analysis tools. I will thus discuss some brain-atlasing resources currently under development, and their applicability in cognitive as well as clinical neuroscience.
Introduction:
RNA interference (RNAi) or Post-Transcriptional Gene Silencing (PTGS) is an important biological process for modulating eukaryotic gene expression.
It is a highly conserved post-transcriptional gene-silencing process in which double-stranded RNA (dsRNA) causes sequence-specific degradation of mRNA sequences.
dsRNA-induced gene silencing (RNAi) has been reported in a wide range of eukaryotes, including worms, insects, mammals, and plants.
This process mediates resistance to both endogenous parasitic and exogenous pathogenic nucleic acids, and regulates the expression of protein-coding genes.
What are small ncRNAs?
microRNA (miRNA)
short interfering RNA (siRNA)
Properties of small non-coding RNA:
Involved in silencing mRNA transcripts.
Called “small” because they are usually only about 21-24 nucleotides long.
Synthesized by first cutting up longer precursor sequences (like the 61nt one that Lee discovered).
Silence an mRNA by base pairing with some sequence on the mRNA.
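The base-pairing rule above can be sketched in a few lines of Python. The sequences and the perfect-match criterion (appropriate for siRNA; miRNA tolerates mismatches) are illustrative assumptions only:

```python
COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def reverse_complement(rna: str) -> str:
    """Reverse complement of an RNA sequence (A-U and G-C pairing)."""
    return "".join(COMPLEMENT[base] for base in reversed(rna))

def finds_target(small_rna: str, mrna: str) -> bool:
    """True if the small RNA can base-pair perfectly with a stretch of the mRNA.

    The small RNA binds the mRNA antiparallel, so we search for the
    small RNA's reverse complement within the target sequence.
    """
    return reverse_complement(small_rna) in mrna

# Made-up sequences for illustration only.
mrna = "GGGAUGCCGUAUACGGCAUAAGG"
sirna = reverse_complement("AUGCCGUAUACGGCAUA")  # perfect match -> silenced
print(finds_target(sirna, mrna))         # True
print(finds_target("UUUUUUUUUU", mrna))  # False
```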
Discovery of small RNAs:
The first small RNA:
In 1993, Rosalind Lee (Victor Ambros lab) was studying a non-coding gene in C. elegans, lin-4, that was involved in silencing another gene, lin-14, at the appropriate time in the development of the worm.
Two small transcripts of lin-4 (22nt and 61nt) were found to be complementary to a sequence in the 3' UTR of lin-14.
Because lin-4 encoded no protein, she deduced that these transcripts themselves must cause the silencing, through RNA-RNA interactions.
Types of RNAi (non-coding RNAs):
miRNA: 23-25 nt; trans-acting; binds its target mRNA with mismatches; causes translational inhibition.
siRNA: ~21 nt; cis-acting; binds its target mRNA through a perfectly complementary sequence.
piRNA: 25-36 nt; expressed in germ cells; regulates transposon activity.
MECHANISM OF RNAi:
First the double-stranded RNA teams up with a protein complex named Dicer, which cuts the long RNA into short pieces.
Then another protein complex called RISC (RNA-induced silencing complex) discards one of the two RNA strands.
The RISC-docked, single-stranded RNA then pairs with the homologous mRNA and destroys it.
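The cutting step can be sketched as a toy simulation. The sequence and the fixed 21-nt cut size are illustrative assumptions, not a model of the real enzyme:

```python
def dice(rna: str, size: int = 21) -> list[str]:
    """Toy Dicer: cleave a long RNA strand into consecutive ~21-nt fragments."""
    return [rna[i:i + size] for i in range(0, len(rna) - size + 1, size)]

long_dsrna = "AUGC" * 16       # a 64-nt strand, stand-in for a long dsRNA
fragments = dice(long_dsrna)   # Dicer step: short siRNA-sized pieces
print(len(fragments))          # 3 full-length 21-nt fragments

# RISC would then load each duplex, discard the passenger strand, and use
# the remaining guide strand to find and cleave complementary mRNAs.
```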
THE RISC COMPLEX:
RISC is a large (>500 kDa) multi-protein RNA-binding complex that triggers mRNA degradation in response to siRNA.
The double-stranded siRNA is unwound by an ATP-independent helicase.
The active components of RISC are the Argonaute (Ago) proteins, endonucleases that cleave the target mRNA.
DICER: an endonuclease of the RNase III family
Argonaute: Central Component of the RNA-Induced Silencing Complex (RISC)
One strand of the dsRNA produced by Dicer is retained in the RISC complex in association with Argonaute
ARGONAUTE PROTEIN:
1. PAZ (PIWI/Argonaute/Zwille) domain: recognizes the target mRNA.
2. PIWI (P-element induced wimpy testis) domain: breaks the phosphodiester bonds of the mRNA (RNase H-like activity).
miRNA:
Double-stranded RNAs are naturally produced in eukaryotic cells during development, and they play a key role in regulating gene expression.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...Sérgio Sacani
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4-0.9 µm) and novel JWST images with 14 filters spanning 0.8-5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at >2.3 µm to construct an ultradeep image, reaching as deep as ≈31.4 AB mag in the stack and 30.3-31.0 AB mag (5σ, r = 0.1″ circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5-15. These objects show compact half-light radii of R1/2 ~ 50-200 pc, stellar masses of M⋆ ~ 10^7-10^8 M⊙, and star-formation rates of SFR ~ 0.1-1 M⊙ yr^-1. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward-modeling approach to infer the properties of the evolving luminosity function without binning in redshift or luminosity that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ~2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for evolution of the dark matter halo mass function.
2. WHY DO WE NEED DATA VISUALIZATION?
“Computer scientists are going to have to realize that
primary memory is the human brain, not RAM”
(Buxton, 2001)
[Chart: amount of available data and human cognitive abilities, plotted over time]
3. WHY DO WE NEED DATA VISUALIZATION?
“We are all cognitive cyborgs in this Internet age
in the sense that we rely heavily on cognitive
tools to amplify our mental abilities.”
(Ware, 2010)
“Often the most effective way to describe, explore,
and summarize a set of numbers – even a very large
set – is to look at pictures of those numbers.”
(Tufte, 2001)
4. CRITERIA FOR DATA VISUALIZATION
Perceptual hierarchy of visual cues
(Cleveland and McGill, 1985)
Accuracy
LENGTH (ALIGNED)
LENGTH
SLOPE
ANGLE
AREA
COLOR INTENSITY
COLOR HUE
VOLUME
5. CRITERIA FOR DATA VISUALIZATION
Which chart type?
Try with different ones!
Example of chart-chooser → Abela (2009)
7. CRITERIA FOR DATA VISUALIZATION
Which colormap?
Think of the following color wheel …
(source: Wikimedia Commons)
10. CRITERIA FOR DATA VISUALIZATION
Data-Ink Ratio = Data Ink / Total Ink in the Graphic
(Tufte, 1983)
[Two charts of the same data compared: which one maximizes the data-ink ratio?]
(data source: http://pypl.github.io/PYPL.html)
11. CRITERIA FOR DATA VISUALIZATION
Data-Ink Ratio = Data Ink / Total Ink in the Graphic
(Tufte, 1983)
Experiment on Data Ink Ratio
(Inbar et al., 2007)
• Approach: 87 students rated two graphs from Tufte's (1983) work.
• Findings: a clear preference for non-minimalist bar graphs.
• Take-away message: “People did not like Tufte's minimalist design of bar graphs; they seem to prefer 'chartjunk' instead.”
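As a sketch of how this plays out in code, here is one way to strip non-data ink from a Matplotlib figure; the data values are made up for illustration:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, safe for scripts
import matplotlib.pyplot as plt

# Made-up popularity figures, standing in for PYPL-style data.
years = [2015, 2016, 2017, 2018, 2019]
share = [12.0, 14.5, 18.2, 22.9, 26.1]

fig, ax = plt.subplots()
ax.plot(years, share, marker="o")

# Raise the data-ink ratio by erasing non-data ink:
ax.spines["top"].set_visible(False)   # no box around the plot
ax.spines["right"].set_visible(False)
ax.grid(False)                        # no background grid
ax.tick_params(length=0)              # keep tick labels, drop tick marks

ax.set_xlabel("Year")
ax.set_ylabel("Popularity (%)")
fig.savefig("high_data_ink.png")
```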
14. • Well-tested, popular tool → First release: 2003
• Designed like MATLAB → Eases the switch from MATLAB
• Many rendering backends → Cross-platform, multiple output formats
• A major weakness is the rendering speed for large datasets → Slow!
• Able to create just about any chart (with some effort)
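Assuming the tool described in these bullets is Matplotlib (first released in 2003, MATLAB-inspired, with multiple rendering backends), a minimal sketch of the multi-format rendering point, with made-up data:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; other backends target GUIs
import matplotlib.pyplot as plt

# The pyplot interface intentionally mirrors MATLAB-style plotting calls.
fig, ax = plt.subplots(figsize=(4, 3))
ax.bar(["Python", "Java", "C++"], [30.2, 15.8, 6.9])  # made-up shares
ax.set_ylabel("Share (%)")
ax.set_title("Language popularity (illustrative data)")

# One figure, several formats: savefig picks the renderer from the extension.
for fmt in ("png", "svg", "pdf"):
    fig.savefig(f"popularity.{fmt}")
```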