E-learning Python for Ocean Mapping (ePOM) project.
Complementary slides to the "Code Version Control" module (part of the Fundamentals of Research Software Development training).
More details at https://www.hydroffice.org/epom
ePOM - Fundamentals of Research Software Development - Integrated Development Environment, by Giuseppe Masetti
E-learning Python for Ocean Mapping (ePOM) project.
Complementary slides to the "Integrated Development Environment" module (part of the Fundamentals of Research Software Development training).
More details at https://www.hydroffice.org/epom
Open Backscatter Toolchain (OpenBST) Project - A Community-vetted Workflow fo..., by Giuseppe Masetti
Presentation given at the Canadian Hydrographic Conference 2020
Dates: Mon., Feb. 24, 2020 – Thu., Feb. 27, 2020
Location: Quebec City, Canada
Authors: M. Smith, G. Masetti, L. Mayer, M. Malik, J.-M. Augustin, C. Poncelet, I. Parnum
Kubernetes GitOps featuring GitHub, Kustomize and ArgoCD, by Sunnyvale
A brief overview of using the GitOps paradigm to operate an application across multiple Kubernetes environments with GitHub, ArgoCD and Kustomize. A talk on this topic was given at #CloudConf2020.
GitOps is a new approach to CD that uses Git as the single source of truth for both applications and infrastructure (declarative infrastructure / infrastructure as code), providing both revision control and change control. In this talk we will see how to implement Kubernetes-based GitOps CI/CD workflows, from theory to practice, reviewing the main tools available today, such as ArgoCD, Flux (a.k.a. the GitOps engine) and JenkinsX.
GitOps: Git as the single source of truth for applications and infrastructure, by SparkFabrik
GitOps is a new approach to CD that uses Git as the single source of truth for both applications and infrastructure (declarative infrastructure / infrastructure as code), providing both revision control and change control. In this talk we will cover the concepts behind CI/CD, i.e. Continuous Integration and Continuous Deployment (or Continuous Delivery), software development practices that allow teams to build collaborative projects quickly, efficiently and, ideally, with fewer errors. Finally, we will see how to implement a GitOps workflow using GitHub Actions and ArgoCD.
Watch the recording here: https://youtu.be/0KmqEp4VxSQ
Welcome Helm users! CNCF Flux has a best-in-class way to use Helm according to GitOps principles. For you, that means improved security, reliability, and velocity - no more being on the pager on the weekends or having painful troubleshooting or rollback when things go wrong. Built on Kubernetes controller-runtime, Flux’s Helm Controller is an example of a mature software agent that uses Helm’s SDK to full effect.
Flux’s biggest addition to Helm is a structured declaration layer for your releases that automatically gets reconciled to your cluster based on your configured rules:
⭐️ The Helm client commands let you imperatively do things
⭐️ Flux Helm Custom Resources let you declare what you want the Helm SDK to do automatically
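The declarative side of this contrast can be sketched with a minimal HelmRelease manifest. This is an illustrative config fragment, not material from the talk: the chart name (podinfo), namespace, version constraint, and interval are made-up stand-ins.

```yaml
# Declarative: a Flux HelmRelease custom resource (illustrative values).
# Flux's Helm Controller reconciles this to the cluster automatically,
# replacing a hand-run, imperative `helm upgrade --install` invocation.
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: podinfo
  namespace: default
spec:
  interval: 5m          # how often to reconcile the release
  chart:
    spec:
      chart: podinfo
      version: ">=6.0.0"
      sourceRef:
        kind: HelmRepository
        name: podinfo
```

Once this manifest lives in Git, the controller keeps the release converged to it, so drift and manual changes are rolled back to what is declared.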
During this session, Scott Rigby, Developer Experience Engineer at Weaveworks and Flux & Helm Maintainer, will take you on a tour of Flux’s Helm Controller, share the additional benefits Flux adds to Helm, and then walk through a live demo of how to manage Helm releases using Flux.
If you want to follow along with Scott’s demo, here are a couple of resources to help you prepare ahead of time:
📄 Flux for Helm Users Docs: https://fluxcd.io/docs/use-cases/helm/
📄 Flux Guide: Manage Helm Releases: https://fluxcd.io/docs/guides/helmreleases/
Speaker Bio:
Scott is a Brooklyn based interdisciplinary artist and Developer Advocate at Weaveworks. He co-founded the Basekamp art and research group in 1998 and the massively collaborative Plausible Artworlds international network. In technology he enjoys helping develop open source software that anyone can use, most recently projects in the cloud native landscape including co-maintaining Helm and Flux. In daily decisions, large or small, he tries to help make the world a better place for everyone.
These are the slides for a talk/workshop delivered to the Cloud Native Wales user group (@CloudNativeWal) on 2019-01-10.
In these slides, we go over some principles of gitops and a hands on session to apply these to manage a microservice.
You can find out more about GitOps at https://www.weave.works/technologies/gitops/
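As a rough illustration of the core GitOps principle those slides cover (not code from the slides), a reconciler repeatedly compares the desired state declared in Git with the actual cluster state and applies the difference. The dictionaries below are toy stand-ins for manifests and a live cluster:

```python
def reconcile(desired: dict, actual: dict) -> dict:
    """One GitOps reconciliation pass: compute the actions needed to
    make the actual state match the desired state declared in Git.

    Toy model: resource names map to plain dict "specs"; a real
    reconciler would diff Kubernetes objects instead.
    """
    actions = {}
    for name, spec in desired.items():
        if name not in actual:
            actions[name] = "create"
        elif actual[name] != spec:
            actions[name] = "update"
    for name in actual:
        if name not in desired:
            actions[name] = "delete"  # prune drift not declared in Git
    return actions

# Desired state (from Git) vs. observed state (from the cluster):
desired = {"web": {"replicas": 3}, "db": {"replicas": 1}}
actual = {"web": {"replicas": 2}, "cache": {"replicas": 1}}
print(reconcile(desired, actual))
# {'web': 'update', 'db': 'create', 'cache': 'delete'}
```

Running this loop continuously, with Git as the only write path, is what makes the deployment history auditable and rollback a simple `git revert`.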
CI/CD in Lightspeed with Kubernetes and Argo CD, by Billy Yuen
Enterprises have benefited greatly from the elastic scalability and multi-region availability by moving to AWS, but the fundamental deployment model remains the same.
At Intuit, we have adopted k8s as our new SaaS platform and re-invented our CI/CD pipeline to take full advantage of k8s. In this presentation, we will discuss our journey from Spinnaker to Argo CD.
1. Reduce CI/CD time from 60 minutes to 10 minutes.
2. Reduce production release (or rollback) from 10 minutes to 2 minutes.
3. Enable concurrent deployment using Spinnaker and Argo CD as HA/DR to safely adopt the new platform with no downtime.
4. Be compatible with the existing application monitoring toolset.
Stefan is currently working on an exciting new project, the GitOps Toolkit (https://github.com/fluxcd/toolkit), an experimental toolkit for assembling CD pipelines the GitOps way.
The Power of GitOps with Flux & GitOps Toolkit, by Weaveworks
GitOps Days Community Special
Watch the video here: https://youtu.be/0v5bjysXTL8
New to GitOps or been a long-time Flux user?
We'll walk you through the benefits of GitOps and then demo it in action with a sneak peek into the next-gen Flux and GitOps Toolkit!
* Automation!
* Visibility!
* Reconciliation!
* Powerful use of Prometheus and Grafana!
* GitOps for Helm!
For Flux users, Flux v1 is decoupled into Flux v2 and GitOps Toolkit. We'll demo how this decoupling gives you more control over how you can do GitOps and with fewer steps!
Join Leigh Capili and Tamao Nakahara as they show you GitOps in action with Flux and GitOps Toolkit.
A note to our Flux community: Flux v2 and the GitOps Toolkit are in development and Flux v1 is in maintenance mode. These talks and upcoming guides will give you the most up-to-date info and the steps to migrate once we reach feature parity and start the migration process. We are dedicated to the smoothest possible experience for our Flux community, so please join us if you'd like early access and to give us feedback on the migration process.
We are really excited by the improvements and want to take this opportunity to show you what the GitOps Toolkit is all about, walk you through the guides and get your feedback!
For more info, see https://toolkit.fluxcd.io/.
Here's our latest blog post on Flux v2 and GitOps Toolkit updates: https://www.weave.works/blog/the-road-to-flux-v2-october-update
Docker New York City: From GitOps to a scalable CI/CD Pattern for Kubernetes, by Andrew Phillips
Slides from the presentation "From GitOps to a scalable CI/CD Pattern for Kubernetes" at the Docker New York City meetup, by Andrew Phillips. See https://www.meetup.com/Docker-NewYorkCity/events/257539512/
GitOps - Modern best practices for high velocity app dev using cloud native t..., by Weaveworks
Alexis Richardson, Weaveworks CEO, recently presented this slide deck at the KubeCon + CloudNativeCon event. He covers GitOps - modern best practices for developing apps faster using cloud native tools.
OpenStack is open source software for creating private and public clouds, built as a coordinated collection of software from a few dozen related projects. This presentation gives you an introduction to OpenStack and how it can help you in a DevOps culture.
DevOps Meetup at AIS Tower 2 on February 10, 2017
Git is a distributed version-control system for tracking changes in source code during software development.
GitFlow is a branching model for Git which is very well suited to collaboration and scaling the development team.
GitOps and Kubernetes introduces a radical idea—managing your infrastructure with the same Git pull requests you use to manage your codebase. In this in-depth tutorial, you’ll learn to operate infrastructures based on powerful-but-complex technologies such as Kubernetes with the same Git version control tools most developers use daily. With these GitOps techniques and best practices, you’ll accelerate application development without compromising on security, easily roll back infrastructure changes, and seamlessly introduce new team members to your automation process.
If you want to learn more about the book, go here: http://mng.bz/G45O
In one of our weekly training sessions, we talked about Git. Here is a quick overview of the main concepts, basic commands and branching strategy, how to work with Git, how to contribute to an OSS project, …
The Basics of Open Source Collaboration With Git and GitHub, by BigBlueHat
A revised/minimized version of Nick Quaranto's (http://www.slideshare.net/qrush) presentation on the same topic. This revised version was used to present Git to a group of students at ECPI who were not yet familiar with the concepts of version control or Git.
Version control is:
* a way to manage files and directories
* tracking changes over time
* recalling previous versions
* sharing on multiple computers
Source control is a subset of VCS.
Types of VCS:
* Local VCS
* Centralized VCS
* Distributed VCS
The slides also cover the features of Git and common Git commands.
Git is an important tool to know in any software development process. In this presentation I try to explain why Git and GitHub are important for data scientists, what role Git plays in Data Science-related projects, the basic Git commands essential for daily development, and GitHub for personal branding.
Learn Git - For Beginners and Intermediate levels, by Gorav Singal
Learn Git Basics and Fundamentals.
This is a perfect start for beginners and at Intermediate levels.
This covers a few commands and fundamentals of Git, with topics ranging from basic commands to creating branches and stashes, reverting your code, and tagging your releases.
It also covers a few branching strategies.
Git is a free and open source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.
The slide contains Git workflow, command line instructions to work with Git, examples of project management over GitHub.
e-learning Python for Ocean Mapping - Empowering the next generation of ocean..., by Giuseppe Masetti
Presentation given at the Canadian Hydrographic Conference 2020
Dates: Mon., Feb. 24, 2020 – Thu., Feb. 27, 2020
Location: Quebec City, Canada
Authors: G. Masetti, S. Dijkstra, R. Wigley, S. Greenaway,
D. Manda, A. Armstrong, and L. Mayer
ePOM - Fundamentals of Research Software Development - Introduction, by Giuseppe Masetti
E-learning Python for Ocean Mapping (ePOM) project.
Complementary slides to the Introduction module (part of the Fundamentals of Research Software Development training).
More details at https://www.hydroffice.org/epom
ePOM - Intro to Ocean Data Science - Raster and Vector Data Formats, by Giuseppe Masetti
E-learning Python for Ocean Mapping (ePOM) project.
Complementary slides to the Raster and Vector Data Formats module (part of the Introduction to Ocean Data Science training).
More details at https://www.hydroffice.org/epom
ePOM - Intro to Ocean Data Science - Scientific Computing, by Giuseppe Masetti
E-learning Python for Ocean Mapping (ePOM) project.
Complementary slides to the Scientific Computing module (part of the Introduction to Ocean Data Science training).
More details at https://www.hydroffice.org/epom
ePOM - Intro to Ocean Data Science - Data Visualization, by Giuseppe Masetti
E-learning Python for Ocean Mapping (ePOM) project.
Complementary slides to the Data Visualization module (part of the Introduction to Ocean Data Science training).
More details at https://www.hydroffice.org/epom
ePOM - Intro to Ocean Data Science - Object-Oriented Programming, by Giuseppe Masetti
E-learning Python for Ocean Mapping (ePOM) project.
Complementary slides to the Object-Oriented Programming module (part of the Introduction to Ocean Data Science training).
More details at https://www.hydroffice.org/epom
AusSeabed workshop - Pydro and Hydroffice - Days 2 and 3, by Giuseppe Masetti
Slides presented by Giuseppe Masetti (UNH, CCOM/JHC) and Tyanne Faulkes (NOAA, OCS PHB) during the "Effective Seabed Mapping Workflow" Workshop. June 19 and 20, 2019. Canberra, ACT, Australia
AusSeabed workshop - Pydro and Hydroffice - Day 1, by Giuseppe Masetti
Slides presented by Giuseppe Masetti (UNH, CCOM/JHC) and Tyanne Faulkes (NOAA, OCS PHB) during the "Effective Seabed Mapping Workflow" Workshop. June 18, 2019. Canberra, ACT, Australia
Hydrographic Survey Validation and Chart Adequacy Assessment Using Automated ..., by Giuseppe Masetti
Authors: G. Masetti, T. Faulkes, C. Kastrisios
The presentation was given at the U.S. Hydro 2019 Conference.
Abstract:
The rising trend in automation is constantly pushing the hydrographic field toward the exploration and the adoption of more effective approaches for each step of the ping-to-public workflow. However, the large amount of data collected by modern acquisition systems - especially when paired with the force multiplier factor provided by autonomous vessels - conflict with the increasing timeliness expected by today’s final users. Such a situation represents a processing challenge for the largely human-centered solutions that are currently available, and the adoption of automated and semi-automated data quality procedures seems the only scalable and long-term solution to the problem. At the same time, there is an inherent value in propagating the application of such procedures upstream in the survey workflow. In fact, capturing potential issues close (in time and space) to their occurrence has the advantages of reducing the efforts required for their solution and limiting their extent. As such, modern surveys should rely more and more on robust data quality procedures that are applied in near real-time.
Facing the challenge of automating and standardizing a large portion of the quality controls used to analyze hydrographic data, NOAA’s Office of Coast Survey and the UNH Center for Coastal and Ocean Mapping have jointly developed (and made publicly available) a pair of software solutions - QC Tools for quality control and CA Tools for chart adequacy - that collect algorithmic implementations for a number of these tasks. Their aim is to verify whether the acquired data satisfy the adopted agency standards (and, in a more general sense, are fit for the intended purpose). These standards usually focus on data quality aspects such as data density, coverage, and uncertainty evaluation, which are largely automated by the tools discussed in this paper, leaving to the experienced hydrographer the duty to review the results and supervise the validation process. After an overview of the tools (and the relevant recent improvements driven by field feedback), this work focuses on a new chart adequacy algorithm as well as an experimental approach for bathymetric anomaly detection and classification. A number of examples that use the publicly available solutions in real-world scenarios are also illustrated.
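The density check described above can be illustrated with a toy sketch. This is not the QC Tools implementation; the cell size and the five-soundings-per-cell threshold are made-up stand-ins for agency specifications:

```python
import numpy as np

def flag_low_density(x, y, cell_size=1.0, min_soundings=5):
    """Flag grid cells whose sounding count falls below a threshold.

    x, y are sounding coordinates. Toy sketch of a data-density check;
    cell_size and min_soundings are illustrative values only.
    """
    ix = np.floor(np.asarray(x, dtype=float) / cell_size).astype(int)
    iy = np.floor(np.asarray(y, dtype=float) / cell_size).astype(int)
    cells, counts = np.unique(np.stack([ix, iy], axis=1),
                              axis=0, return_counts=True)
    return [(int(cx), int(cy))
            for (cx, cy), n in zip(cells, counts) if n < min_soundings]

# Six soundings in cell (0, 0) pass; two in cell (1, 0) are flagged:
x = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 1.2, 1.4]
y = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.3, 0.9]
print(flag_low_density(x, y))  # [(1, 0)]
```

A real check would work on georeferenced soundings and agency-specific thresholds, but the structure — bin, count, compare against the standard — is the automatable part the abstract refers to.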
The Open Backscatter Toolchain (OpenBST) project: towards an open-source and ..., by Giuseppe Masetti
Authors: G. Masetti, J.-M. Augustin, M. Malik, C. Poncelet, X. Lurton, L. Mayer, G. Rice, M. Smith
The presentation was given at the U.S. Hydro 2019 Conference.
Abstract:
Most ocean mapping surveys collect seafloor reflectivity (backscatter) along with bathymetry. While the consistency of bathymetry processed by commonly adopted algorithms is well established, surprisingly large variability is observed between the backscatter mosaics generated by different software packages when processing the same dataset. Such a situation severely limits the use of acoustic backscatter for quantitative analysis (e.g., monitoring seafloor change over time, or remote characterization of seafloor characteristics) and other commonly attempted tasks (e.g., merging mosaics from different origins).
Acoustic backscatter processing involves a complex sequence of steps, but inasmuch as commercial software packages mainly provide end-results, comparisons between those results offer little insight into where in the workflow the differences are generated. In addition, preliminary results of a software-inter-comparison working group have shown that each processing algorithm tends to adopt a distinct, unique workflow; this causes large disagreements even in the initial per-beam reflectivity values resulting from differences in basic operations such as snippet averaging and evaluation of flagged beams.
Far from ideal, this situation requires a clear shift from the past closed-source approach that has caused it. As such, the Open Backscatter Toolchain (OpenBST) project aims to provide the community with an open-source and metadata-rich modular implementation of a toolchain dedicated to acoustic backscatter processing. The long-term goal is not to create processing tools that would compete with available commercial solutions, but rather a set of open-source, community-vetted, reference algorithms usable by both developers and users for benchmarking their processing algorithms.
As a proof-of-concept, we present a prototype implementation with the key elements of the OpenBST approach:
• The data conversion from a native acquisition format (i.e., Kongsberg EM Series) to NetCDF-based data structures (components of the eXtensible Sounder Format) better suited to data exploration, processing and metadata coupling.
• A processing pipeline constituted by a set of interlocking, task-oriented tools simplifying their substitution with alternative approaches.
• The creation of final products (i.e., angular response curves and backscatter mosaics) capturing relevant acquisition and processing metadata.
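An angular response curve of the sort mentioned above is essentially the mean backscatter level per incidence-angle bin. The sketch below is not an OpenBST routine; the bin width is a made-up parameter. It does show one real pitfall: averaging must happen in the linear intensity domain, not directly on dB values:

```python
import numpy as np

def angular_response(angles_deg, bs_db, bin_width=5.0):
    """Mean backscatter level per incidence-angle bin, in dB.

    Values are converted to linear intensity, averaged per bin, and
    converted back to dB (averaging dB directly would bias the curve).
    Illustrative sketch only.
    """
    angles = np.asarray(angles_deg, dtype=float)
    linear = 10.0 ** (np.asarray(bs_db, dtype=float) / 10.0)
    bins = np.floor(angles / bin_width).astype(int)
    curve = {}
    for b in np.unique(bins):
        mean_linear = linear[bins == b].mean()
        curve[float(b) * bin_width] = float(10.0 * np.log10(mean_linear))
    return curve  # {bin start angle in degrees: mean level in dB}

# Two beams near nadir around -20 dB, two near 30 degrees around -35 dB:
print(angular_response([1, 3, 31, 33], [-20.0, -20.0, -34.0, -36.0]))
```

Differences in exactly this kind of basic operation (averaging domain, bin definition, handling of flagged beams) are where the inter-software disagreements discussed above tend to originate.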
Pydro & HydrOffice: Open Tools for Ocean Mappers, by Giuseppe Masetti
Workshop given by Damian Manda (NOAA Office of Coast Survey) and Giuseppe Masetti (UNH Center for Coastal and Ocean Mapping/NOAA-UNH Joint Hydrographic Center) on March 18, 2019 at the US Hydro Conference in Biloxi, MS, USA.
Backscatter Working Group Software Inter-comparison Project - Requesting and Co..., by Giuseppe Masetti
Backscatter mosaics of the seafloor are now routinely produced from multibeam sonar data, and used in a wide range of marine applications. However, significant differences (up to 5 dB) have been observed between the levels of mosaics produced by different software packages processing the same dataset. This is a major detriment to several possible uses of backscatter mosaics, including quantitative analysis, monitoring seafloor change over time, and combining mosaics. A recently concluded international Backscatter Working Group (BSWG) identified this issue and recommended that “to check the consistency of the processing results provided by various software suites, initiatives promoting comparative tests on common data sets should be encouraged […]”. However, backscatter data processing is a complex (and often proprietary) sequence of steps, so simply comparing end-results between software packages does not provide much information as to the root cause of the differences between results.
In order to pinpoint the source(s) of inconsistency between software packages, it is necessary to understand at which stage(s) of the data processing chain the differences become substantial. We invited willing software developers to discuss this framework and collectively adopt a list of intermediate processing steps. We provided a small dataset consisting of various seafloor types surveyed with the same multibeam sonar system, using constant acquisition settings and sea conditions, and had the software developers generate these intermediate processing results, to be eventually compared. If the experiment proves fruitful, we may extend it to more datasets, software packages and intermediate results. Eventually, software developers may consider making the results from intermediate stages a standard output, as well as adhering to a consistent terminology, as advocated by Schimel et al. (2018). To date, the developers of four software packages (Sonarscope, QPS FMGT, CARIS SIPS, MB Process) have expressed their interest in collaborating on this project.
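The per-stage comparison the project proposes boils down to differencing two packages' intermediate results in dB and summarizing the offset. This is an illustrative sketch, not the BSWG protocol:

```python
import numpy as np

def compare_stage(db_a, db_b):
    """Summarize the dB offset between two software packages' results
    at one intermediate processing stage. Illustrative only: real
    comparisons would align soundings/cells before differencing.
    """
    diff = np.asarray(db_a, dtype=float) - np.asarray(db_b, dtype=float)
    return {"mean_offset_db": float(diff.mean()),
            "std_db": float(diff.std()),
            "max_abs_db": float(np.abs(diff).max())}

# A constant gain difference shows up as a pure offset with no spread,
# pointing at a fixed-term discrepancy (e.g., a calibration constant):
print(compare_stage([-20.0, -25.0, -30.0], [-25.0, -30.0, -35.0]))
# {'mean_offset_db': 5.0, 'std_db': 0.0, 'max_abs_db': 5.0}
```

Tracking where along the chain the spread (not just the mean offset) first grows is what localizes the divergent processing step.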
Shallow Survey 2018 - Applications of Sonar Detection Uncertainty for Survey ..., by Giuseppe Masetti
Authors: Giuseppe Masetti1*, Jean-Marie Augustin2, Xavier Lurton2, Brian R. Calder3
1. CCOM/JHC, University of New Hampshire, Durham, NH, USA, gmasetti@ccom.unh.edu
2. Institut Français de Recherche pour l’Exploitation de la Mer (Ifremer), Brest, France
3. CCOM/JHC, University of New Hampshire, Durham, NH, USA
An objective measurement of the bathymetric uncertainty introduced by sonar bottom detection has been proposed (Lurton and Augustin, 2009) to overcome the sonar-specific heuristic solutions proposed by constructors. This approach pairs each sounding with an estimation of sonar detection uncertainty (SDU) based on the width of the signal envelope (amplitude detection) or the noise level of the phase ramp (phase detection), thus capturing the intrinsic quality of the received signal and any applied signal-processing step.
Along with the environment characterization and the motion sensor accuracy, the SDU represents a major contributor to the total vertical uncertainty (TVU). As such, the monitoring of the SDU statistics by detection types, acquisition modes, and transmission sectors (when available) provides an effective way to alert the surveyor about ongoing issues in the data collection. It also has potential application in the evaluation of the health status of the sonar - for example, by comparing SDU-derived performance of repeated surveys on the same seafloor area and estimating the uncertainty contributions from environment and motion. Finally, the SDU may be integrated in multiple stages of the data processing workflow, from data pre-filtering to hydrographic uncertainty modeling, up to more advanced applications like hypotheses disambiguation in statistical gridding algorithms (e.g., CUBE).
Based on such considerations, we conducted a study to explore possible applications of the estimated SDU values for survey quality control and data processing. The results of the analysis applied to real data – collected using multibeam echosounders from manufacturers who are early adopters of this metric (i.e., Kongsberg Maritime and Teledyne Reson) – provide evidence that SDU is a useful tool for survey monitoring.
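The monitoring of SDU statistics by detection type described above can be sketched as a simple grouping-and-summary step. This is an illustration of the idea, not the study's code; the field names and values are made up:

```python
from collections import defaultdict
from statistics import mean, median

def sdu_stats(soundings):
    """Summarize sonar detection uncertainty (SDU) by detection type.

    `soundings` is an iterable of (detection_type, sdu_m) pairs, e.g.
    ("amplitude", 0.12). A monitoring sketch: in practice the grouping
    would also cover acquisition modes and transmission sectors.
    """
    by_type = defaultdict(list)
    for det_type, sdu in soundings:
        by_type[det_type].append(sdu)
    return {t: {"n": len(v), "mean": mean(v), "median": median(v)}
            for t, v in by_type.items()}

data = [("amplitude", 0.10), ("amplitude", 0.14),
        ("phase", 0.05), ("phase", 0.07)]
print(sdu_stats(data))
```

A sudden shift in these per-type statistics between survey lines is the kind of signal that would alert the surveyor to an ongoing acquisition issue.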
Multi-source connectivity as the driver of solar wind variability in the heliosphere, by Sérgio Sacani
The ambient solar wind that fills the heliosphere originates from multiple sources in the solar corona and is highly structured. It is often described as high-speed, relatively homogeneous, plasma streams from coronal holes and slow-speed, highly variable, streams whose source regions are under debate. A key goal of ESA/NASA’s Solar Orbiter mission is to identify solar wind sources and understand what drives the complexity seen in the heliosphere. By combining magnetic field modelling and spectroscopic techniques with high-resolution observations and measurements, we show that the solar wind variability detected in situ by Solar Orbiter in March 2022 is driven by spatio-temporal changes in the magnetic connectivity to multiple sources in the solar atmosphere. The magnetic field footpoints connected to the spacecraft moved from the boundaries of a coronal hole to one active region (12961) and then across to another region (12957). This is reflected in the in situ measurements, which show the transition from fast to highly Alfvénic then to slow solar wind that is disrupted by the arrival of a coronal mass ejection. Our results describe solar wind variability at 0.5 au but are applicable to near-Earth observatories.
(May 29th, 2024) Advancements in Intravital Microscopy- Insights for Preclini...Scintica Instrumentation
Intravital microscopy (IVM) is a powerful tool used to study cellular behavior over time and space in vivo. Much of our understanding of cell biology has been gained using various in vitro and ex vivo methods; however, these studies do not necessarily reflect the natural dynamics of biological processes. Unlike traditional cell culture or fixed tissue imaging, IVM allows for ultra-fast, high-resolution imaging of cellular processes over time and space in their natural environment. Real-time visualization of biological processes in the context of an intact organism helps maintain physiological relevance and provides insights into the progression of disease, response to treatments, and developmental processes.
In this webinar we give an overview of advanced applications of the IVM system in preclinical research. IVIM technology provides all-in-one intravital microscopy systems and solutions optimized for in vivo imaging of live animal models at sub-micron resolution. The system’s unique features and user-friendly software enable researchers to probe fast dynamic biological processes such as immune cell tracking, cell-cell interaction, vascularization, and tumor metastasis in exceptional detail. This webinar also gives an overview of IVM as used in drug development, offering a view into the intricate interaction between drugs/nanoparticles and tissues in vivo and allowing for the evaluation of therapeutic interventions in a variety of tissues and organs. This interdisciplinary collaboration continues to drive the advancement of novel therapeutic strategies.
What is greenhouse gasses and how many gasses are there to affect the Earth.moosaasad1975
What greenhouse gases are, how they affect the Earth and its environment, how they influence weather and climate, and what the future holds for the environment and the Earth.
Observation of Io’s Resurfacing via Plume Deposition Using Ground-based Adapt...Sérgio Sacani
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io’s surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io’s trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high-resolution imaging of Io’s surface using adaptive optics at visible wavelengths.
Cancer cell metabolism: special Reference to Lactate PathwayAADYARAJPANDEY1
Normal Cell Metabolism:
Cellular respiration describes the series of steps that cells use to break down sugar and other chemicals to get the energy they need to function.
Energy is stored in the bonds of glucose, and when glucose is broken down, much of that energy is released.
Cells utilize energy in the form of ATP.
The first step of respiration is called glycolysis. In a series of steps, glycolysis breaks glucose into two smaller molecules of a chemical called pyruvate. A small amount of ATP is formed during this process.
Most healthy cells continue the breakdown in a second process, called the Krebs cycle. The Krebs cycle allows cells to “burn” the pyruvate made in glycolysis to get more ATP.
The last step in the breakdown of glucose is called oxidative phosphorylation (OxPhos).
It takes place in specialized cell structures called mitochondria. This process produces a large amount of ATP. Importantly, cells need oxygen to complete oxidative phosphorylation.
If a cell completes only glycolysis, only 2 molecules of ATP are made per glucose. However, if the cell completes the entire respiration process (glycolysis, Krebs cycle, oxidative phosphorylation), about 36 molecules of ATP are created, giving it much more energy to use.
IN CANCER CELL:
Unlike healthy cells that "burn" the entire molecule of sugar to capture a large amount of energy as ATP, cancer cells are wasteful.
Cancer cells only partially break down sugar molecules. They overuse the first step of respiration, glycolysis. They frequently do not complete the second step, oxidative phosphorylation.
This results in only 2 molecules of ATP per each glucose molecule instead of the 36 or so ATPs healthy cells gain. As a result, cancer cells need to use a lot more sugar molecules to get enough energy to survive.
Introduction to the WARBURG PHENOMENON:
Usually, cancer cells are highly glycolytic (glucose addiction) and take up more glucose from the outside than normal cells do.
Otto Heinrich Warburg (8 October 1883 – 1 August 1970) was awarded the Nobel Prize in Physiology or Medicine in 1931 for his "discovery of the nature and mode of action of the respiratory enzyme."
WARBURG EFFECT: The tendency of cancer cells under aerobic (well-oxygenated) conditions to metabolize glucose to lactate (aerobic glycolysis) is known as the Warburg effect. Warburg observed that tumor slices consume glucose and secrete lactate at a higher rate than normal tissues.
Richard's aventures in two entangled wonderlandsRichard Gill
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...Sérgio Sacani
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4–0.9 µm) and novel JWST images with 14 filters spanning 0.8–5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at > 2.3 µm to construct an ultradeep image, reaching as deep as ≈ 31.4 AB mag in the stack and 30.3–31.0 AB mag (5σ, r = 0.1″ circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5–15. These objects show compact half-light radii of R1/2 ∼ 50–200 pc, stellar masses of M⋆ ∼ 10⁷–10⁸ M⊙, and star-formation rates of SFR ∼ 0.1–1 M⊙ yr⁻¹. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward modeling approach to infer the properties of the evolving luminosity function without binning in redshift or luminosity that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ∼ 2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for evolution of the dark matter halo mass function.
Brief information about the SCOP protein database used in bioinformatics.
The Structural Classification of Proteins (SCOP) database is a comprehensive and authoritative resource for the structural and evolutionary relationships of proteins. It provides a detailed and curated classification of protein structures, grouping them into families, superfamilies, and folds based on their structural and sequence similarities.
ePOM - Fundamentals of Research Software Development - Code Version Control
1. CODE VERSION CONTROL
GIUSEPPE MASETTI
ESCI 872 – APPLIED TOOLS FOR OCEAN MAPPING – FUNDAMENTALS OF RESEARCH SOFTWARE DEVELOPMENT
Durham, NH – November 19 & 21, 2019
V1
2. WHAT IS CODE VERSION CONTROL?
A mechanism to manage changes to code over time.
• Tracking of per-file creation, modification, and deletion
• Ability to switch to past versions to fix bugs
• Facilitating concurrent work on the same project
Helps protect code from both catastrophes
and the casual introduction of human error!
3. VERSION CONTROL SYSTEM (VCS)
Also known as:
Source Code Management (SCM)
Revision Control System (RCS)
CVS, Subversion, Mercurial, Bazaar, Git, etc.
An essential part of the everyday workflow
of a modern research software development team!
5. GIT
• Command-line tool that tracks changes in
files and eases collaboration
• Created in 2005 by Linus Torvalds
• Free and open-source → GNU GPL v2
• Goals:
• Handling large projects with speed and efficiency
• Data integrity
• Support for non-linear, distributed development
GFDL. Permission of Martin Streicher, Editor-in-Chief, LINUXMAG.com. CC BY-SA 3.0
15. GIT SETUP
• Configure Git with your name and email
• git config --global user.email "your@email.com"
• git config --global user.name "your name"
• Check the current configuration
• git config --list
• The config commands only need to be done once
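In a shell, the setup above can be sketched as follows (the name and email are placeholders; `GIT_CONFIG_GLOBAL`, supported in Git ≥ 2.32, is used here only so the example does not touch your real `~/.gitconfig`):

```shell
# Point the "global" config at a throwaway file (Git >= 2.32),
# so this sketch leaves the real ~/.gitconfig alone.
export GIT_CONFIG_GLOBAL="$(mktemp)"

# Configure Git with your name and email (placeholder values).
git config --global user.email "jane@example.com"
git config --global user.name "Jane Doe"

# Check the current configuration.
git config --list | grep '^user\.'
```

Without the environment variable, the same two `git config --global` calls write to `~/.gitconfig` and, as the slide notes, need to be run only once per machine.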
16. IN A WINDOWS SHELL
CONFIGURE GIT
WITH YOUR NAME AND EMAIL.
18. USING GIT
• Creating a new project
• git init → convert a directory into a Git repository (a .git directory is created)
• git add . → stage a snapshot of the directory content in a temporary staging area
• git commit -m "First commit" → permanently store the snapshot
• The init commands only need to be done once per project.
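Put together, the first-commit cycle looks like this minimal sketch (directory, file name, and identity values are examples; `git init -b` needs Git ≥ 2.28, and the repo-local `git config` calls are only there so the commit works without a global identity):

```shell
# Create a throwaway directory and turn it into a Git repository.
repo="$(mktemp -d)"
cd "$repo"
git init -q -b master              # creates the hidden .git directory

# Repository-local identity so the commit succeeds (example values).
git config user.email "jane@example.com"
git config user.name "Jane Doe"

# Snapshot the directory content and store it permanently.
echo "# My project" > README.md
git add .                          # stage everything under the directory
git commit -q -m "First commit"    # record the staged snapshot
git log --oneline                  # shows the new commit
```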
20. IN A WINDOWS SHELL
CREATE A GIT PROJECT, THEN
CREATE AND COMMIT A README FILE.
21. USING GIT
• Making code changes
• git add file0.py file1.py → stage new/updated files
• git diff → to list un-staged changes
• git diff --cached → to see what is ready to be committed
• git status → a brief summary of the repository situation
• git commit -m "A meaningful message" → finally commit your changes
27. USING GIT
• Making code changes
• git diff → to list un-staged changes
• git add file0.py file1.py → stage new/updated files
• git diff --cached → to see what is ready to be committed
• git status → a brief summary of the repository situation
• git commit -m "A meaningful message" → finally commit your changes
• When to commit code?
• Do it frequently
• Whenever you have reached a milestone/step in your task
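The edit/stage/commit cycle above can be sketched end-to-end (file names and messages are examples; the sketch builds its own throwaway repository with one prior commit so the `diff` output is non-empty):

```shell
# Start from a small throwaway repository with one committed file.
repo="$(mktemp -d)"; cd "$repo"
git init -q -b master
git config user.email "jane@example.com"; git config user.name "Jane Doe"
echo "print('v1')" > file0.py
git add . && git commit -q -m "First commit"

# Edit the file, then walk the slide's commands in order.
echo "print('v2')" > file0.py
git diff                 # lists the un-staged change to file0.py
git add file0.py         # stage the updated file
git diff --cached        # now shows what is ready to be committed
git status               # brief summary: changes to be committed
git commit -q -m "A meaningful message"
git log --oneline        # two commits in the history
```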
28. FIRST CREATE AND COMMIT TWO SCRIPT FILES,
THEN EDIT ONE OF THEM AND
COMMIT THE CHANGES.
30. GIT HOSTING SERVICES
• Several sites offer services for hosting Git repositories
• Popular ones that are free for open-source projects:
31. GITHUB
• A social network for code share/collaboration
• It was acquired by Microsoft in 2018 for $7.5 billion.
36. GITHUB
• A social network for code share/collaboration
• It was acquired by Microsoft in 2018 for $7.5 billion.
• To push to the GitHub repository:
• Set up a remote (just once):
• git remote add origin https://github.com/giumas/myproject.git
• Push committed code (multiple times):
• git push origin master
• Reload project webpage
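The remote/push pair can be tried without a GitHub account by letting a local bare repository stand in for the hosted one (paths and identity values are examples; on GitHub the remote URL would be an https address as in the slide):

```shell
# A local bare repository standing in for the GitHub-hosted one.
remote="$(mktemp -d)"
git init -q --bare "$remote"

# A working repository with one commit to publish.
work="$(mktemp -d)"; cd "$work"
git init -q -b master
git config user.email "jane@example.com"; git config user.name "Jane Doe"
echo "# My project" > README.md
git add . && git commit -q -m "First commit"

# Set up the remote once, then push committed code as often as needed.
git remote add origin "$remote"
git push -q origin master

# The commit is now on the "hosted" side as well.
git -C "$remote" log --oneline
```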
54. TORTOISEGIT → ICON OVERLAYS
• Normal → all changes committed
• Modified → changes are present
• Staged → changes are ready to be committed
• Deleted → file/folder scheduled to be deleted
• Added → file/folder scheduled to be added
• Ignored → file/folder ignored by Git
• Conflict → there is a conflict between code versions
https://tortoisegit.org/docs/tortoisegit/tgit-dug-wcstatus.html#tgit-dug-wcstatus-1
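The same states are visible from the command line; in this sketch (file names are examples), `git status --short` reports roughly what the overlays show: ` M` for a modified file, `A ` for a staged addition, while ignored and unchanged files are not listed at all:

```shell
# Throwaway repository with one file per overlay state of interest.
repo="$(mktemp -d)"; cd "$repo"
git init -q -b master
git config user.email "jane@example.com"; git config user.name "Jane Doe"
printf 'ignored.txt\n' > .gitignore
echo a > normal.txt; echo b > modified.txt
git add . && git commit -q -m "Initial"

echo change >> modified.txt    # Modified: committed file with new edits
echo c > added.txt
git add added.txt              # Added/Staged: scheduled to be committed
echo d > ignored.txt           # Ignored: matched by .gitignore

git status --short             # " M modified.txt" and "A  added.txt"
```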