Recent advances in technoeconomic analysis (TEA) were reviewed:
- TEA is useful for process design, cost estimation, and identifying bottlenecks early in research.
- Recent studies provide tools for faster design iteration, more robust uncertainty analysis, and greater accessibility through open-source platforms.
- Trends include more expansive system boundaries and potential integration with high-throughput experiments.
A brief overview of the SCOP protein database used in bioinformatics.
The Structural Classification of Proteins (SCOP) database is a comprehensive and authoritative resource for the structural and evolutionary relationships of proteins. It provides a detailed and curated classification of protein structures, grouping them into families, superfamilies, and folds based on their structural and sequence similarities.
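The four-level hierarchy described above (class, fold, superfamily, family) can be sketched as a nested mapping. The entries shown below (the Globin-like fold and the Globins family) are a tiny illustrative slice, not a dump of the actual database:

```python
# Minimal sketch of SCOP's hierarchy: class -> fold -> superfamily -> family.
# The entries are a small illustrative slice, not the real database contents.
scop = {
    "All alpha proteins": {
        "Globin-like": {
            "Globin-like": ["Globins", "Phycocyanin-like phycobilisome proteins"],
        },
    },
}

def families(db):
    """Yield (class, fold, superfamily, family) tuples from the nested mapping."""
    for cls, folds in db.items():
        for fold, superfams in folds.items():
            for sf, fams in superfams.items():
                for fam in fams:
                    yield (cls, fold, sf, fam)

print(list(families(scop))[0][3])  # -> Globins
```

Walking the hierarchy this way mirrors how SCOP groups structures from broad structural classes down to evolutionarily related families.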
The Importance of Martian Atmosphere Sample Return (Sérgio Sacani)
The return of a sample of near-surface atmosphere from Mars would facilitate answers to several first-order science questions surrounding the formation and evolution of the planet. One of the important aspects of terrestrial planet formation in general is the role that primary atmospheres played in influencing the chemistry and structure of the planets and their antecedents. Studies of the martian atmosphere can be used to investigate the role of a primary atmosphere in its history. Atmosphere samples would also inform our understanding of the near-surface chemistry of the planet, and ultimately the prospects for life. High-precision isotopic analyses of constituent gases are needed to address these questions, requiring that the analyses are made on returned samples rather than in situ.
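The high-precision isotopic analyses mentioned above are typically reported in delta notation, the deviation of a sample's isotope ratio from a standard in parts per thousand. The sketch below uses round, illustrative numbers for the 15N/14N pair (terrestrial air as the standard versus a strongly 15N-enriched, Mars-like atmosphere); the exact values are assumptions, not measurements from this document:

```python
# Delta notation used in isotope geochemistry:
#   delta = (R_sample / R_standard - 1) * 1000, expressed in per mil.
# The ratios below are illustrative round numbers for 15N/14N.
def delta_permil(r_sample, r_standard):
    return (r_sample / r_standard - 1.0) * 1000.0

R_AIR = 3.676e-3       # terrestrial atmospheric 15N/14N (the standard)
r_mars_like = 5.9e-3   # assumed, strongly 15N-enriched Mars-like value

print(round(delta_permil(r_mars_like, R_AIR)))  # -> 605 (per mil)
```

Resolving differences at the per-mil level or below is what drives the requirement for laboratory analysis of returned samples rather than in situ instruments.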
This PDF is about schizophrenia. For more details, visit the YouTube channel @SELF-EXPLANATORY:
https://www.youtube.com/channel/UCAiarMZDNhe1A3Rnpr_WkzA/videos
The increased availability of biomedical data, particularly in the public domain, offers the opportunity to better understand human health and to develop effective therapeutics for a wide range of unmet medical needs. However, data scientists remain stymied by the fact that data remain hard to find and to productively reuse because data and their metadata i) are wholly inaccessible, ii) are in non-standard or incompatible representations, iii) do not conform to community standards, and iv) have unclear or highly restricted terms and conditions that preclude legitimate reuse. These limitations require a rethink of how data can be made machine- and AI-ready - the key motivation behind the FAIR Guiding Principles. Concurrently, while recent efforts have explored the use of deep learning to fuse disparate data into predictive models for a wide range of biomedical applications, these models often fail even when the correct answer is already known, and fail to explain individual predictions in terms that data scientists can appreciate. These limitations suggest that new methods to produce practical artificial intelligence are still needed.
In this talk, I will discuss our work in (1) building an integrative knowledge infrastructure to prepare FAIR and "AI-ready" data and services along with (2) neurosymbolic AI methods to improve the quality of predictions and to generate plausible explanations. Attention is given to standards, platforms, and methods to wrangle knowledge into simple, but effective semantic and latent representations, and to make these available into standards-compliant and discoverable interfaces that can be used in model building, validation, and explanation. Our work, and those of others in the field, creates a baseline for building trustworthy and easy to deploy AI models in biomedicine.
Bio
Dr. Michel Dumontier is the Distinguished Professor of Data Science at Maastricht University, founder and executive director of the Institute of Data Science, and co-founder of the FAIR (Findable, Accessible, Interoperable and Reusable) data principles. His research explores socio-technological approaches for responsible discovery science, which includes collaborative multi-modal knowledge graphs, privacy-preserving distributed data mining, and AI methods for drug discovery and personalized medicine. His work is supported through the Dutch National Research Agenda, the Netherlands Organisation for Scientific Research, Horizon Europe, the European Open Science Cloud, the US National Institutes of Health, and a Marie-Curie Innovative Training Network. He is the editor-in-chief for the journal Data Science and is internationally recognized for his contributions in bioinformatics, biomedical informatics, and semantic technologies including ontologies and linked data.
Observation of Io's Resurfacing via Plume Deposition Using Ground-based Adaptive Optics
Since volcanic activity was first discovered on Io from Voyager images in 1979, changes on Io's surface have been monitored from both spacecraft and ground-based telescopes. Here, we present the highest spatial resolution images of Io ever obtained from a ground-based telescope. These images, acquired by the SHARK-VIS instrument on the Large Binocular Telescope, show evidence of a major resurfacing event on Io's trailing hemisphere. When compared to the most recent spacecraft images, the SHARK-VIS images show that a plume deposit from a powerful eruption at Pillan Patera has covered part of the long-lived Pele plume deposit. Although this type of resurfacing event may be common on Io, few have been detected due to the rarity of spacecraft visits and the previously low spatial resolution available from Earth-based telescopes. The SHARK-VIS instrument ushers in a new era of high-resolution imaging of Io's surface using adaptive optics at visible wavelengths.
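A back-of-envelope diffraction-limit calculation shows why adaptive optics at visible wavelengths enables this kind of monitoring. The numbers below are assumptions for illustration (one 8.4 m LBT mirror, 550 nm light, Io near opposition at roughly 4.2 AU from Earth), not figures quoted by the abstract:

```python
import math

# Rayleigh criterion for a ground-based telescope with adaptive optics.
# Assumed values: 8.4 m aperture (one LBT mirror), 550 nm visible light,
# Io near opposition at ~4.2 AU from Earth.
wavelength = 550e-9          # m
aperture = 8.4               # m
distance = 4.2 * 1.496e11    # m (4.2 AU)

theta = 1.22 * wavelength / aperture      # angular resolution, radians
resolution_km = theta * distance / 1e3    # projected scale at Io

print(f"{resolution_km:.0f} km")  # a few tens of km on Io's surface
```

A resolution of tens of kilometers is fine enough to distinguish large plume deposits such as those at Pillan and Pele, which span hundreds of kilometers.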
Earliest Galaxies in the JADES Origins Field: Luminosity Function and Cosmic ...
We characterize the earliest galaxy population in the JADES Origins Field (JOF), the deepest imaging field observed with JWST. We make use of the ancillary Hubble optical images (5 filters spanning 0.4-0.9 µm) and novel JWST images with 14 filters spanning 0.8-5 µm, including 7 medium-band filters, and reaching total exposure times of up to 46 hours per filter. We combine all our data at >2.3 µm to construct an ultradeep image, reaching as deep as ≈31.4 AB mag in the stack and 30.3-31.0 AB mag (5σ, r = 0.1″ circular aperture) in individual filters. We measure photometric redshifts and use robust selection criteria to identify a sample of eight galaxy candidates at redshifts z = 11.5-15. These objects show compact half-light radii of R_1/2 ∼ 50-200 pc, stellar masses of M⋆ ∼ 10^7-10^8 M⊙, and star-formation rates of SFR ∼ 0.1-1 M⊙ yr^-1. Our search finds no candidates at 15 < z < 20, placing upper limits at these redshifts. We develop a forward modeling approach to infer the properties of the evolving luminosity function without binning in redshift or luminosity that marginalizes over the photometric redshift uncertainty of our candidate galaxies and incorporates the impact of non-detections. We find a z = 12 luminosity function in good agreement with prior results, and that the luminosity function normalization and UV luminosity density decline by a factor of ∼2.5 from z = 12 to z = 14. We discuss the possible implications of our results in the context of theoretical models for evolution of the dark matter halo mass function.
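The UV luminosity density discussed above is the first moment of the luminosity function, ρ_UV = ∫ φ(L) L dL. The sketch below uses a Schechter form with invented parameters (φ*, L*, and α are NOT the paper's fitted values) to show how a factor-of-2.5 drop in normalization propagates directly to ρ_UV:

```python
import math

# Illustrative Schechter luminosity function, phi(L) dL, in units of
# the characteristic luminosity L*. Parameters are made up for the demo.
def schechter(L, phi_star, L_star, alpha):
    x = L / L_star
    return (phi_star / L_star) * x**alpha * math.exp(-x)

def rho_uv(phi_star, L_star=1.0, alpha=-2.0, L_min=0.01, n=10000):
    """Trapezoidal integral of phi(L)*L from L_min up to 10*L_star."""
    Ls = [L_min + i * (10 * L_star - L_min) / n for i in range(n + 1)]
    ys = [schechter(L, phi_star, L_star, alpha) * L for L in Ls]
    h = Ls[1] - Ls[0]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

rho_z12 = rho_uv(phi_star=1.0)
rho_z14 = rho_uv(phi_star=1.0 / 2.5)   # normalization down by a factor of 2.5
print(round(rho_z12 / rho_z14, 2))     # -> 2.5
```

Because ρ_UV is linear in the normalization, a decline in φ* maps one-to-one onto the decline in luminosity density when the shape parameters are held fixed.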
Richard's Entangled Adventures in Wonderland (Richard Gill)
Since the loophole-free Bell experiments of 2020 and the Nobel prizes in physics of 2022, critics of Bell's work have retreated to the fortress of super-determinism. Now, super-determinism is a derogatory word - it just means "determinism". Palmer, Hance and Hossenfelder argue that quantum mechanics and determinism are not incompatible, using a sophisticated mathematical construction based on a subtle thinning of allowed states and measurements in quantum mechanics, such that what is left appears to make Bell's argument fail, without altering the empirical predictions of quantum mechanics. I think however that it is a smoke screen, and the slogan "lost in math" comes to my mind. I will discuss some other recent disproofs of Bell's theorem using the language of causality based on causal graphs. Causal thinking is also central to law and justice. I will mention surprising connections to my work on serial killer nurse cases, in particular the Dutch case of Lucia de Berk and the current UK case of Lucy Letby.
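The quantitative core of Bell's argument referenced above is the CHSH inequality: any local-deterministic (hidden-variable) model must satisfy |S| ≤ 2, while quantum mechanics for the singlet state predicts correlations E(a, b) = -cos(a - b) and reaches |S| = 2√2 at the standard optimal settings. A minimal sketch:

```python
import math

# Singlet-state correlation predicted by quantum mechanics for
# measurement angles a and b.
def E(a, b):
    return -math.cos(a - b)

# Standard optimal CHSH settings: Alice at 0 and pi/2,
# Bob at pi/4 and 3*pi/4.
a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4

# CHSH combination; local hidden variables require |S| <= 2.
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(round(abs(S), 3))  # -> 2.828, i.e. 2*sqrt(2) > 2
```

The loophole-free experiments of 2015 onward measured values significantly above 2, which is what pushes critics toward super-determinism rather than local hidden variables.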
Seminar on U.V. Spectroscopy by SAMIR PANDA
Spectroscopy is a branch of science dealing with the study of the interaction of electromagnetic radiation with matter.
Ultraviolet-visible spectroscopy refers to absorption spectroscopy or reflectance spectroscopy in the UV-VIS spectral region.
Ultraviolet-visible spectroscopy is an analytical method that measures the amount of light absorbed by the analyte.
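The quantitative backbone of UV-Vis absorption measurements is the Beer-Lambert law, A = ε·l·c. The sketch below solves it for concentration; the molar absorptivity used is an assumed value for a hypothetical chromophore, not a number from the seminar:

```python
# Beer-Lambert law: A = epsilon * l * c, where A is absorbance
# (dimensionless), epsilon the molar absorptivity (L mol^-1 cm^-1),
# l the path length (cm), and c the concentration (mol/L).
def concentration(absorbance, epsilon, path_cm=1.0):
    """Solve A = epsilon * l * c for the analyte concentration (mol/L)."""
    return absorbance / (epsilon * path_cm)

A = 0.50           # measured absorbance (assumed)
epsilon = 15000.0  # molar absorptivity for a hypothetical chromophore
c = concentration(A, epsilon)
print(f"{c:.2e} mol/L")  # -> 3.33e-05 mol/L
```

This linear relationship between absorbance and concentration is what makes UV-Vis a routine quantitation method, provided the absorbance stays in the linear range of the instrument.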
1. Expression of a bacterial 3-dehydroshikimate dehydratase (QsuB) reduces lignin and improves biomass saccharification efficiency in switchgrass (Panicum virgatum L.)
Background
• Lignin negatively affects biomass conversion into advanced bioproducts. Therefore, there is a strong interest in developing bioenergy crops with reduced lignin content.
• Accumulation of bioproducts is another desired trait for bioenergy crops.
• Expression of a 3-dehydroshikimate dehydratase (QsuB) in plants offers the potential for decreasing lignin content and overproducing a value-added metabolic coproduct suitable for biological upgrading (i.e., protocatechuate, PCA).
Approach
• QsuB from Corynebacterium glutamicum was expressed in the bioenergy crop switchgrass (Panicum virgatum L.) using the stem-specific promoter of an O-methyltransferase gene (pShOMT) from sugarcane.
Outcomes and Impacts
• We show that independent QsuB switchgrass lines display a 12-21% reduction in lignin, a 2-3-fold increase in the bioaccumulation of PCA, and a 5-30% increase in saccharification efficiency (p < 0.05).
• We are currently field testing our engineered QsuB switchgrass to assess its agronomic performance and resilience to environmental stress.
Hao et al. (2021) BMC Plant Biology. doi: 10.1186/s12870-021-02842-9
Figure: QsuB lines versus controls show lignin reduction, PCA accumulation, and higher sugar yields (*p < 0.05); a pathway diagram traces E4P + PEP to 3-dehydroshikimate, which is diverted away from lignin toward PCA.
2. Multi-Omics Driven Metabolic Network Reconstruction and Analysis
of Lignocellulosic Carbon Utilization in Rhodosporidium toruloides
Background
The oleaginous yeast Rhodosporidium toruloides is a promising host
for converting lignocellulosic biomass to bioproducts and biofuels
• In this study, we performed multi-omics analysis and reconstructed
the genome-scale metabolic network of R. toruloides to study the
utilization of carbon sources derived from lignocellulosic biomass
Approach
• Reconstruction of the genome-scale metabolic network, manual
curation of the reconstructed network, and validation of the
metabolic model were performed using Jupyter notebooks in a fully
reproducible manner
• A multi-omics dataset including transcriptomics, proteomics,
metabolomics, lipidomics, and RB-TDNA sequencing was
generated and integrated with the metabolic model to investigate
lignocellulosic carbon utilization
Outcomes and Impacts
• A large and comprehensive multi-omics dataset was generated for
R. toruloides grown on glucose, xylose, arabinose, or p-coumaric
acid as carbon sources found after deconstruction of lignocellulose
• An accurate genome-scale metabolic network was developed for
R. toruloides and validated against high-throughput growth
phenotype and functional genomics data
• The multi-omics dataset and genome-scale metabolic model will be
utilized to maximize the use of the carbon in lignocellulosic
biomass feedstocks and improve the production of biofuel and
bioproduct precursors
Kim et al. (2021) Front. Bioeng. Biotechnol. 8:612832, doi: 10.3389/fbioe.2020.612832
The metabolic reactions, genes, and their localization for the p-coumaric acid degradation pathway in R. toruloides were proposed using the multi-omics dataset and metabolic network reconstruction.
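The validation step described above can be sketched as a comparison of predicted growth calls from the metabolic model against observed high-throughput growth phenotypes. The calls below are hypothetical, invented for illustration, not the study's data:

```python
# Hypothetical model validation: for each carbon source, compare the
# metabolic model's predicted growth call against the observed phenotype.
# The carbon sources mirror those in the study; the calls are invented.
predicted = {"glucose": True, "xylose": True, "arabinose": True, "p-coumarate": True}
observed  = {"glucose": True, "xylose": True, "arabinose": False, "p-coumarate": True}

# Fraction of carbon sources where model and experiment agree.
agree = sum(predicted[c] == observed[c] for c in predicted)
accuracy = agree / len(predicted)
print(f"{accuracy:.2f}")  # -> 0.75
```

Disagreements flagged this way (here, the hypothetical arabinose case) are exactly the candidates for the manual curation the study describes.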
3. Efficient production of oxidized terpenoids via
engineering fusion proteins of terpene synthase and
cytochrome P450
Background
• The functionalization of terpenes using cytochrome P450 enzymes
is a versatile route to the production of useful derivatives that can
be further converted to value-added products.
• Many terpenes are hydrophobic and volatile, making their
availability as substrates for P450 enzymes significantly limited
during microbial production.
Approach
• This work developed a strategy to improve the accessibility of
terpene molecules for the P450 reaction by linking terpene
synthase and P450.
• As a model system, fusion proteins of 1,8-cineole synthase (CS)
and P450cin were investigated via experimental and structural
analysis.
Outcomes and Impacts
• Fusion proteins of CS and P450cin showed an improved
hydroxylation of the monoterpenoid 1,8-cineole up to 5.4-fold.
• Structural analysis by SEC-SAXS indicated the linker length
affects the flexibility, which eventually affects the catalytic activity,
of the fusion enzymes.
• Applying the fusion strategy to the biosynthetic pathway for
oxidized epi-isozizaene products resulted in a 90-fold increase in product titer.
• This strategy could be widely applicable to improve the
biosynthetic titer of the functionalized products from hydrophobic
terpene intermediates.
Wang et al. (2021) Metabolic Engineering, 64: 41–51. (doi.org/10.1016/j.ymben.2021.01.004)
Enzyme fusions were engineered to improve substrate
availability as terpenes are hydrophobic and easily lost by
phase separation. Fusion proteins showed improved
production of oxidized mono- and sesquiterpenoids.
4. Genomic mechanisms of climate adaptation in
polyploid bioenergy switchgrass
Background
• Switchgrass (P. virgatum) is both a promising biofuel crop and an
important component of the North American tallgrass prairie.
• Biomass production is the principal breeding target for switchgrass
as a forage and bioenergy crop
Approach
• Deep PacBio long-read sequencing coupled with deep short-read
polishing and bacterial artificial chromosome (BAC) clone validation
produced a highly contiguous ‘v5’ AP13 genome assembly.
• Biomass and survival were analyzed among 732 resequenced
genotypes grown across 10 common gardens spanning 1,800 km of
latitude to find evidence of climate adaptation.
Outcomes and Impacts
• Highly accurate and complete reference genome for tetraploid
switchgrass was generated using long-read DNA sequencing
technology
• Whole-genome sequence analysis estimated that the two parental
species of switchgrass diverged from a common ancestor about
6.7 million years ago, and that the two genomes came back together
in a whole-genome duplication at least 4.6 million years ago
• Investigating patterns of climate adaptation, the genome resources
and gene–trait associations developed here provide breeders with
the necessary tools to increase switchgrass yield for the sustainable
production of bioenergy.
Geographical distribution of common gardens (n = 10)
and plant collection locations (n = 700 georeferenced
genotypes), and spatial distribution models of each
ecotype. The ecotype color legend accompanies the
representative images of each ecotype to the right of
the map (images were taken at the end of the 2019
growing season and the background was removed
with ImageJ (https://imagej.nih.gov/ij)). White-outlined
points (coloured by ecotype, or in white if no ecotype
assignment was made) indicate the georeferenced
collection sites of the diversity panel. The labeled white
circles with black crosses indicate the locations of the
10 experimental gardens. Scale bars, 1 m.
Lovell, J.T., MacQueen, A.H., Mamidi, S. et al. Nature (2021). https://doi.org/10.1038/s41586-020-03127-1
5. Technoeconomic analysis for biofuels and
bioproducts
Background
• This article provides a review of the current literature on technoeconomic
analysis (TEA), an approach that uses process design and simulation,
informed by empirical data, to estimate capital costs, operating costs,
mass balances, and energy balances for a commercial-scale biorefinery
• TEA serves as a useful method to screen potential research priorities,
identify cost bottlenecks at the earliest stages of research, and provide
the mass and energy data needed to conduct life-cycle environmental
assessments.
Approach
• We reviewed recently published work on TEA applied to biofuel and
bioproduct production
• We reviewed the challenges of integrating conventional process
simulation software with uncertainty analysis and life-cycle assessment
and noted recent examples of good implementations of this approach
• We also reviewed the types of financial metrics that are most commonly
used in industry to evaluate potential projects, in contrast with metrics
used in academic and research settings
Outcomes and Impacts
• Recent studies have produced new tools and methods to enable faster
iteration on potential designs, more robust uncertainty analysis, and
greater accessibility through the use of open-source platforms.
• There is also a trend toward more expansive system boundaries to
incorporate the impact of policy incentives, use-phase performance
differences, and potential impacts on global market supply.
• Recent advances in high-throughput experimental pipelines have great
potential if integrated with TEA to generate insights about commercial-
scale implications.
Scown et al. (2021) COBIOT, doi: 10.1016/j.copbio.2021.01.002
Figure 1. Scope of well-conducted
technoeconomic analyses for biofuels and
bioproducts
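The uncertainty analysis emphasized above can be sketched as a Monte Carlo propagation of uncertain inputs through a simple minimum-selling-price (MSP) model. All numbers below are invented for illustration; a real biorefinery TEA tracks far more cost categories and constraints:

```python
import random

# Toy TEA uncertainty analysis: sample uncertain inputs (feedstock cost,
# fuel yield, annualized fixed costs) and propagate them through a
# break-even minimum selling price. All values are hypothetical.
random.seed(0)

def msp(feedstock_cost, yield_gal_per_ton, annual_fixed_cost, tons_per_year=2000):
    fuel_gal = yield_gal_per_ton * tons_per_year
    total_cost = feedstock_cost * tons_per_year + annual_fixed_cost
    return total_cost / fuel_gal  # $/gal that recovers all costs

samples = []
for _ in range(10000):
    fc = random.uniform(60, 100)        # $/dry ton feedstock (assumed)
    y = random.uniform(50, 80)          # gal fuel per dry ton (assumed)
    fixed = random.uniform(2e5, 4e5)    # $/yr capital + operating charge (assumed)
    samples.append(msp(fc, y, fixed))

samples.sort()
median = samples[len(samples) // 2]
print(f"median MSP ~ ${median:.2f}/gal")
```

Reporting a distribution of MSPs rather than a single point estimate is what distinguishes the robust uncertainty analysis the review calls for from a deterministic cost estimate.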