Probit analysis is used to analyze binomial response experiments, such as testing the toxicity of chemicals. It transforms a sigmoid dose-response curve into a straight line to allow regression analysis. Key steps include:
1) Calculating proportions killed at different doses
2) Determining empirical and expected probits from tables and regression
3) Computing weighting coefficients and working probits
4) Estimating LC50 or dose that kills 50% of the subjects from the regression equation.
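The steps above can be sketched numerically. This is a minimal illustration of the classical probit = 5 + Z convention (not the full weighted/working-probit procedure), using synthetic data and assuming all observed proportions lie strictly between 0 and 1:

```python
import numpy as np
from scipy.stats import norm, linregress

def lc50_by_probit(doses, proportions):
    """Estimate LC50 by regressing empirical probits on log10(dose).

    Classical convention: probit = 5 + inverse-normal(proportion killed).
    Proportions must lie strictly between 0 and 1 (the corrections for
    0% and 100% mortality groups are omitted in this sketch).
    """
    log_dose = np.log10(doses)
    probits = 5.0 + norm.ppf(proportions)   # empirical probits
    fit = linregress(log_dose, probits)     # probit = a + b * log10(dose)
    # LC50 is the dose whose expected probit is 5 (i.e. 50% mortality)
    return 10 ** ((5.0 - fit.intercept) / fit.slope)

# Synthetic assay: true LC50 = 4.0, probit slope = 2 per log10 unit
doses = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
true_p = norm.cdf(2.0 * (np.log10(doses) - np.log10(4.0)))
print(round(lc50_by_probit(doses, true_p), 3))  # → 4.0
```

Because the synthetic proportions are generated from an exact probit model, the regression recovers the true LC50; real assay data would require the iterative weighting described in steps 3-4.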
The Indian Dental Academy is a leader in continuing dental education, training dentists in all aspects of dentistry and offering a wide range of certified dental courses in different formats.
1. Dose response relationships can be represented by either graded or quantal curves, with graded curves showing a continuous response to varying doses and quantal curves showing the proportion of subjects responding at different doses.
2. Key features of dose response curves include the median effective dose (ED50) which produces a 50% response, potency which is measured by the dose required for 50% effect, and the therapeutic index which is the ratio of toxic to effective doses.
3. Both curve types provide information about a drug's potency but graded curves also indicate maximum efficacy while quantal curves show variability in individual responses.
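The graded-curve parameters described above (Emax, ED50) can be estimated by fitting a sigmoid model to dose-response data. A minimal sketch with hypothetical data, assuming a standard Hill equation:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, emax, ec50, n):
    """Hill equation: graded response as a function of dose."""
    return emax * dose**n / (ec50**n + dose**n)

# Hypothetical graded dose-response data (true Emax=100, EC50=3, n=1.5)
doses = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = hill(doses, 100.0, 3.0, 1.5)

(emax, ec50, n), _ = curve_fit(hill, doses, resp, p0=[90.0, 1.0, 1.0])
print(round(ec50, 2))  # → 3.0
```

On noiseless data the fit recovers the generating parameters exactly; with real measurements the same call returns least-squares estimates of Emax and ED50.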
Virtual screening studies in search of dopamine D1 receptor ligands as antips... - Monika Marcinkowska
1) Researchers used virtual screening of over 4,000 compounds to identify potential dopamine D1 receptor ligands for treating schizophrenia.
2) In vitro testing of the 411 top-scoring compounds from virtual screening identified 172 hits with substantial D1 receptor affinity, a 42% hit rate.
3) One promising hit was a novel arylsulfonamide derivative (ADN-3772) that showed high D1 receptor affinity and potent partial agonist activity at the D2 receptor, suggesting potential as an antipsychotic agent.
Poster - COMPARABILITY METHODS FOR BIOSIMILAR TESTING USING THE BIACORE T200 ... - Melanie Verneret
The document describes methods for using surface plasmon resonance (SPR) to compare the binding properties of proposed biosimilar products to reference products. SPR was used to assess potency and kinetics of trastuzumab binding to FcRI and bevacizumab binding to VEGF. Assays showed good linearity, accuracy, precision and ability to distinguish differences between biosimilars and references, making SPR suitable for biosimilar comparability studies.
This document provides an overview of receptor pharmacology and radioligand binding assays. It defines key concepts such as receptors, ligands, agonists, antagonists, and their interactions. It describes different types of receptors including G protein-coupled receptors and nuclear hormone receptors. It explains the principles and steps of radioligand binding assays, including saturation binding curves, competition binding curves, and the determination of dissociation constants (Kd) and inhibition constants (Ki). The document outlines techniques used in today's laboratory, which involves transfection of cells with dopamine receptor subtypes D1 or D5, and two-point saturation binding assays to derive Kd values and identify the receptor subtypes.
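The Ki determination mentioned above is commonly done with the Cheng-Prusoff correction, which converts the IC50 measured in a competition binding curve into an inhibition constant. A small sketch with illustrative (hypothetical) numbers:

```python
def cheng_prusoff_ki(ic50, radioligand_conc, kd):
    """Cheng-Prusoff equation: Ki = IC50 / (1 + [L]/Kd).

    ic50:             competitor concentration giving 50% displacement
    radioligand_conc: free radioligand concentration [L] in the assay
    kd:               radioligand dissociation constant (same units)
    """
    return ic50 / (1.0 + radioligand_conc / kd)

# Illustrative values: IC50 = 30 nM, [radioligand] = 2 nM, Kd = 1 nM
print(cheng_prusoff_ki(30.0, 2.0, 1.0))  # → 10.0 (nM)
```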
The document provides an overview of computational chemistry methods for structure-activity relationship analysis, pharmacophore modeling, and protein-ligand docking. It discusses topics like SAR, QSAR, molecular alignment, conformational analysis, homology modeling of protein targets, and docking programs. Examples are given of applying these methods to study benzodiazepine ligands and GABA receptor subtypes.
Nearly every biological function in living organisms arises from protein-protein interactions, and diseases are no exception. Identifying one or more proteins involved in a particular disease and then designing a suitable chemical compound (known as a drug or ligand) to destroy those proteins is a challenging research topic in computational biology. In earlier methods, drugs were designed using only a few chemical components and were represented as fixed-length trees. In reality, however, a drug contains many chemical groups, collectively known as its pharmacophore, and the chemical length of a drug cannot be determined before it is designed.
In the present work, a Particle Swarm Optimization (PSO) based methodology is proposed to find a suitable drug for a particular disease such that the drug-target protein interaction energy is minimized. In the proposed algorithm, the drug is represented as a variable-length tree, and essential functional groups are arranged at different positions within it. The structure of the drug is obtained while its docking energy is minimized simultaneously. The orientation of the chemical groups is also tested so that the drug can bind to a particular active site of the target protein and fit well inside it. Several inter-molecular forces are considered to improve the accuracy of the docking energy. Results are demonstrated for three different target proteins, both numerically and pictorially, and show that PSO performs better than earlier methods.
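The optimization loop the abstract describes can be illustrated with a generic global-best PSO minimizer. This toy version minimizes a simple stand-in "energy" function (a sphere function) rather than a real drug-target interaction energy, and the swarm parameters are conventional choices, not those of the paper:

```python
import numpy as np

def pso_minimize(energy, dim, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimizer (global-best variant)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()                             # personal bests
    pbest_e = np.array([energy(p) for p in x])
    g = pbest[pbest_e.argmin()].copy()           # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull toward pbest + social pull toward g
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        e = np.array([energy(p) for p in x])
        better = e < pbest_e
        pbest[better], pbest_e[better] = x[better], e[better]
        g = pbest[pbest_e.argmin()].copy()
    return g, pbest_e.min()

# Stand-in "interaction energy": sphere function, minimum 0 at the origin
best_x, best_e = pso_minimize(lambda p: float(np.sum(p**2)), dim=3)
print(best_e < 1e-3)
```

In the paper's setting, the position vector would encode the variable-length tree of functional groups and `energy` would be the docking-energy evaluation.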
Radioligand binding studies involve incubating radioactive ligands with tissue samples to measure binding to receptors. This provides information on receptor characteristics like binding affinity and number of sites. The radioligand must have high affinity and selectivity for the receptor of interest. Measuring specific binding versus nonspecific binding allows determining properties of the receptor population under study.
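The saturation analysis described above is typically a fit of specific binding to a one-site model, B = Bmax·[L]/(Kd + [L]). A minimal sketch with synthetic (hypothetical) data:

```python
import numpy as np
from scipy.optimize import curve_fit

def one_site(L, bmax, kd):
    """Specific binding to a single class of sites."""
    return bmax * L / (kd + L)

# Synthetic saturation data: true Bmax = 500 fmol/mg, Kd = 2 nM
L = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 16.0])  # nM free radioligand
B = one_site(L, 500.0, 2.0)

(bmax, kd), _ = curve_fit(one_site, L, B, p0=[400.0, 1.0])
print(round(kd, 2))  # → 2.0
```

In practice `B` would be specific binding, i.e. total binding minus the nonspecific binding measured in the presence of excess unlabeled ligand.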
1) Pharmacophores are sets of steric and electronic features common to active drug molecules that interact with biological targets in a specific way. They include features like hydrogen bond donors/acceptors and hydrophobic regions.
2) Feature trees (Ftrees) are a ligand-based approach that represents molecules as trees to capture major building blocks and overall alignment in a conformation-independent way, supporting "lead hopping" between chemical classes.
3) Ftrees describe molecular fragments as nodes labeled with shape and chemical descriptors. Molecules are compared by matching subtrees using topology-preserving search algorithms. This allows identification of actives from different chemical scaffolds.
Quantitative aspects of drug receptor interaction - DrSahilKumar
This document provides an overview of quantitative aspects of drug receptor interactions, including concentration-binding relationships and dose-response relationships. It discusses graded dose-response curves and how they are used to quantify drug agonism and antagonism. Parameters like EC50, Emax, and pA2 values are extracted from graded curves to characterize drug potency and efficacy. Quantal dose-response curves are also covered, which analyze population variability in drug response and are used to determine values like ED50 and LD50 for evaluating drug safety. The document concludes by emphasizing the importance of quantifying these relationships for understanding and comparing drug behavior.
Exploring Compound Combinations in High Throughput Settings: Going Beyond 1D ... - Rajarshi Guha
The document describes efforts to screen drug combinations in high throughput settings beyond traditional one-dimensional metrics. It discusses the infrastructure and workflows used to screen compound combinations against a library of over 2000 small molecules with diverse mechanisms of action. Quality control of combination screening experiments poses challenges due to the multi-dimensional nature of the data. The researchers are exploring various metrics and analytical approaches to characterize synergistic, additive and antagonistic combination responses across different cell lines and combinations.
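One common way to characterize combination responses of the kind mentioned above is excess over Bliss independence: two independent agents with fractional effects fa and fb are expected to combine to fa + fb - fa·fb, and deviations from that expectation flag synergy or antagonism. A minimal sketch of the generic formula (not necessarily the specific metrics used in this study):

```python
def bliss_excess(fa, fb, fab):
    """Observed combination effect minus the Bliss-independence expectation.

    fa, fb: fractional effects (0..1) of each agent alone
    fab:    observed fractional effect of the combination
    > 0 suggests synergy, ~0 additivity, < 0 antagonism.
    """
    expected = fa + fb - fa * fb
    return fab - expected

print(round(bliss_excess(0.3, 0.4, 0.8), 2))  # → 0.22  (synergy)
print(round(bliss_excess(0.3, 0.4, 0.5), 2))  # → -0.08 (antagonism)
```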
This document discusses high-throughput screening (HTS) workflows for identifying biologically active small molecules. It describes how robots are used to rapidly screen large libraries of compounds in assays and generate large datasets. Statistical and machine learning methods in R can then be used to build predictive models from these datasets to identify promising leads and guide the screening of additional compounds. Caveats regarding the applicability of models to new chemical spaces are also discussed.
This document discusses SAR by NMR (structure-activity relationship by nuclear magnetic resonance) and fragment-based drug discovery. It notes that high-throughput screening often fails to produce high-quality drug candidates. Fragment-based screening covers more chemical diversity space than large compound libraries and can identify weakly binding fragments that can be linked together. NMR spectroscopy allows monitoring of protein-ligand interactions and determining binding sites to develop tighter binding inhibitors. An example is given of using NMR to develop small molecule inhibitors of the LFA-1 protein involved in inflammation.
This document discusses types of chemical data including data on drugs, agrochemicals, fragrances, food additives, and natural products. It focuses on drug data such as chemical properties, adverse events, toxicology, absorption, distribution, metabolism, and excretion (ADME). LogP is discussed as a measure of solubility, with examples of how it is calculated from molecular fragments and corrections. Molecular descriptors that can predict properties are also introduced, including topological, geometrical, electronic, and hybrid descriptors. Finally, some freely available tools for calculating molecular descriptors are listed.
Chemical risk assessment is often limited by the lack of experimental toxicity data for a large number of diverse chemicals. In the absence of experimental data, potential chemical hazard is often predicted using data gap filling techniques such as quantitative structure activity relationship (QSAR) models. QSARs are theoretical models that relate a quantitative measure of chemical structure to a physical property or a biological effect. QSAR tools are a widely utilized alternative to time-consuming clinical and animal testing methods, yet concerns over reliability and uncertainty limit application of QSAR models for regulatory chemical risk assessments. The reliability of a QSAR model depends on the quality and quantity of experimental training data and the applicability domain of the model. This talk will describe the basic concepts and best practices in QSAR modeling, principles associated with validation of QSAR models, a summary of available QSAR tools, limitations and challenges in the acceptance of QSAR models, and the current status and prospects of QSAR modeling methods in the medical devices community.
The document discusses approaches to novel drug development, specifically quantitative structure-activity relationship (QSAR) modeling and high-throughput screening (HTS). It provides background on QSAR, describing how it establishes mathematical relationships between molecular properties and biological activity. It also outlines the history, goals and process of HTS, noting it allows rapid testing of large numbers of compounds against biological targets to identify initial hits for further development.
1. Quantitative Structure Activity Relationship (QSAR) uses mathematical models to correlate chemical descriptors of molecules to their biological activity.
2. Descriptors include physicochemical properties like hydrophobicity which can be measured experimentally.
3. QSAR allows medicinal chemists to predict activity of novel analogues and guide synthesis toward more active compounds.
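A minimal QSAR of the kind sketched in points 1-3 is an ordinary least-squares fit of activity against descriptors. The descriptors (logP, scaled molecular weight) and activities below are hypothetical and constructed to be exactly linear for illustration:

```python
import numpy as np

# Hypothetical training set: descriptors (logP, MW/100) and pIC50 activity,
# generated from pIC50 = 3.0 + 1.0*logP + 0.5*(MW/100)
X = np.array([[1.2, 1.8], [2.5, 2.4], [3.1, 3.0], [0.8, 1.5], [2.0, 2.1]])
y = np.array([5.1, 6.7, 7.6, 4.55, 6.05])

# Fit pIC50 ≈ c0 + c1*logP + c2*(MW/100) by least squares
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(logp, mw100):
    """Predict the activity of a novel analogue from the fitted QSAR."""
    return coef[0] + coef[1] * logp + coef[2] * mw100

print(round(predict(2.5, 2.4), 2))  # → 6.7
```

Real QSAR work would add many more descriptors, cross-validation, and an applicability-domain check before trusting predictions for new compounds.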
Virtual Toxicity panels focused on interpretable machine learning models that can guide medicinal chemists to identify critical substructures that are associated with toxicities.
This document summarizes a study that systematically analyzed 3,158 druggable human genes to identify those that lack orthologous (equivalent) genes in mouse, rat, and dog. The researchers used several databases and tools to map human genes to orthologs in these species. They identified 41 genes that lack orthologs in all three species, as well as 22 genes that are missing orthologs in mouse and rat but have them in dog. The authors discuss implications for toxicity testing and drug development for targets lacking rodent orthologs.
This document discusses issues with commonly used ligand efficiency metrics. It argues that ligand efficiency metrics make unrealistic assumptions by normalizing potency based on trends not actually observed in data. Specifically, ligand efficiency assumes a linear relationship between potency and risk factors like lipophilicity, but data does not always support this assumption. It also notes that ligand efficiency incorporates arbitrary concentration units that can affect calculated values. The document suggests plotting affinity against risk factors to test the assumptions behind ligand efficiency metrics.
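The unit-dependence criticized above is easy to demonstrate: ligand efficiency LE = -ΔG/HA = -RT·ln(Kd)/HA shifts by a constant whenever Kd is expressed in a different concentration unit, because the logarithm needs a dimensionless argument. A sketch, assuming T = 298 K:

```python
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.0      # assumed temperature, K

def ligand_efficiency(kd, heavy_atoms):
    """LE = -RT ln(Kd) / HA. Kd enters as a bare number, so the
    concentration unit chosen silently changes the result."""
    return -R * T * math.log(kd) / heavy_atoms

ha = 25
le_molar = ligand_efficiency(1e-9, ha)       # Kd = 1 nM expressed in M
le_millimolar = ligand_efficiency(1e-6, ha)  # same Kd expressed in mM

# The gap is exactly RT*ln(1000)/HA, independent of the compound
print(round(le_molar - le_millimolar, 3))  # → 0.164
```

The offset RT·ln(1000)/HA is the same for every compound with the same heavy-atom count, which is the arbitrariness the document objects to.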
A QSAR is a mathematical relationship between a biological activity of a molecular system and its geometric and chemical characteristics.
QSAR attempts to find consistent relationships between biological activity and molecular properties, so that these “rules” can be used to evaluate the activity of new compounds.
Accelerating lead optimisation with active learning by exploiting MMPA based ... - Ed Griffen
Presented at the 15th GCC - German Conference on Cheminformatics November 2019
We combine regression forest machine learning with our MMPA based generative methods to deliver an active learning system to accelerate lead optimisation. In the process we identify permutative MMPA as a method to leverage SAR information from small data sets.
Published by MedChemica Ltd
Speciation And Physicochemical Studies of Some Biospecific Compounds - IOSR Journals
Abstract: A green, safe, efficient, eco-friendly approach to the synthesis of novel compounds that show biological and spermicidal activity. The nature of the pharmacophore determines the physiological reactivity of the compound.
The talk describes the science and results of a consortium of multiple pharmaceutical companies extracting medicinal chemistry knowledge from research data and the application to real drug design projects. A new technique for automating pharmacophore / toxophore finding from public data is disclosed.
Using Matched Molecular Pairs To Cluster Compounds - Willem van Hoorn
This document discusses using matched molecular pairs (MMPs) to cluster compounds based on common cores. It analyzes a test set of 4,609 compounds from the EGFR dataset in ChEMBL. The compounds were clustered based on their common cores, identifying 430 unique cores that were not a substructure of another core. All compounds mapped to these unique cores, forming 430 clusters. The clusters were analyzed to identify series with interpretable structure-activity relationships based on the MMPs within each cluster.
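The core-based clustering described above can be sketched generically: group compounds by a core key, then keep only cores that are not contained in another core, reassigning the remaining compounds to the surviving cores. The toy SMILES strings are hypothetical, and plain string containment stands in for the real substructure matching an MMP workflow would use:

```python
from collections import defaultdict

def cluster_by_core(compounds):
    """Cluster compounds by core, keeping only maximal cores.

    `compounds` maps compound id -> (core, substituent). A core contained
    in another core is dropped and its members reassigned, mirroring the
    "430 unique cores, all compounds mapped" outcome in the document.
    """
    clusters = defaultdict(list)
    for cid, (core, sub) in compounds.items():
        clusters[core].append((cid, sub))
    cores = list(clusters)
    # Keep cores that do not appear inside a different, larger core
    unique = [c for c in cores
              if not any(c != other and c in other for other in cores)]
    result = {c: list(clusters[c]) for c in unique}
    # Reassign members of dropped cores to a unique core containing them
    for c in cores:
        if c not in result:
            target = next(u for u in unique if c in u)
            result[target].extend(clusters[c])
    return result

# Toy set: the "c1ccccc1N" core contains the smaller "c1ccccc1" core
cmpds = {
    "A": ("c1ccccc1", "Cl"),
    "B": ("c1ccccc1N", "C(=O)C"),
    "C": ("c1ccccc1N", "CC"),
}
print(sorted(cluster_by_core(cmpds)))  # → ['c1ccccc1N']
```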
This document summarizes a presentation on discovering inhibitors for the histone-lysine N-methyltransferase SETD2 using an in silico approach. It discusses methyltransferases and histone methyltransferases as a potential target. The hypothesis is that selective, high-affinity SETD2 inhibitors can be identified by targeting its SAM binding site. The methodology involves generating pharmacophore models using software and screening databases of compounds. Results show two pharmacophore models and top-hit compounds identified. The conclusions are that the SETD2 binding site is a potential drug target and compounds with high predicted binding energies were identified. Future work involves refining models and testing top compounds in assays.
Collaborative medicinal chemistry research between AstraZeneca and external partners aims to build more open innovation organizations. AstraZeneca shares examples of compound collection collaborations and a case study of collaborating in real time on a design-make-test-analyze project. Challenges include defining roles and managing processes, but tools like ChemTraX help enable real-time collaboration. AstraZeneca's open innovation platform provides opportunities for target innovation and new molecule profiling to further external partnerships.
Pharmacophore based ligand-designing_using_substructure_searching_to_explore_... - Prasanthperceptron
The document describes a protocol for identifying new chemical entities (NCEs) using pharmacophore-based ligand design and substructure searching. It involves taking a known biologically active "pivot" molecule, searching for similar structures in PubChem based on SMILES or InChIKey, uploading matched structures to PharmaGist for pharmacophore analysis, and interpreting the results to identify potential new ligands that maintain essential pharmacophore features. The goal is to discover novel molecules for patenting while leveraging knowledge of established pharmacophores.
7 Transmedia Families merged with @Gestoried v1 - Karine Halpern
This document outlines a 5-step process for creating transmedia families:
1. Audit the storyworld by adapting to themes and making an inventory of existing assets.
2. Contextualize the story by selecting sub-themes and validating the target audience.
3. Produce and process the story by managing roles and organizing achievable processes.
4. Focus on sustainability and convergence by aligning with values, running campaigns, and building communities.
5. Release the story and manage its spread by adapting, inviting participation, and fostering conversation.
1) The document discusses the Iraq-Based Industrial Zone (I-BIZ) program which aims to establish secure areas on Forward Operating Bases for Iraqi businesses to provide goods and services, inject money into the Iraqi economy, and employ Iraqis.
2) I-BIZ sites have been assessed or established at several bases in Iraq, with businesses ranging from vehicle repair to construction companies operating.
3) Challenges to I-BIZ include ensuring security within zones and addressing financial concerns, while the future goal is that the program continues after US forces withdraw through support from the Iraqi government.
What is the business value of my project?Joe Raynus
Most projects do not meet their expected business goals even if they are completed on time and on budget. The presentation argues that project teams need to focus more on strategic alignment and delivering business value rather than just meeting schedules and budgets. It recommends that project managers develop a clear problem statement and value proposition upfront, and define project outcomes and benefits in a business case to better link project work to organizational strategy and goals. Taking a more strategic approach will help ensure projects are delivering the expected value and benefits to stakeholders.
Ey barometrul antreprenoriatului romanesc 2016_sintezaMihaela Matei
2016 is the fourth year in which EY Romania has stood alongside Romanian entrepreneurs through analyses of the environment in which people with initiative develop their entrepreneurial projects.
Following the three editions of the EY Barometer of Romanian entrepreneurship and three further editions dedicated to entrepreneurial education and culture among students (2014), family businesses (2015), and entrepreneurs running a startup (April 2016), the time has come for the fourth edition of the Barometer of Romanian entrepreneurship: Entrepreneurs Speak.
For this new edition we partnered with one of Romania's top banks, one of the financial institutions most dedicated to the cause of entrepreneurship and to developing the local entrepreneurial environment: Raiffeisen Bank.
This year's Barometer analyzes the responses of 350 business people. 45% of respondents run businesses with revenues above EUR 10 million, while another 45% fall within the EUR 1-10 million range. 10% of the companies have a turnover below EUR 1 million.
As in previous editions, entrepreneurs shared their views on the state of development of the five EY pillars supporting entrepreneurship: taxation and regulation, access to finance, coordinated support, and entrepreneurial culture and education.
The study is based on a questionnaire administered between 28 March and 20 April 2016.
This is a synthesis of the main results of the 2016 edition.
This document describes two ecosystems: the savanna and the montane cloud forest. The savanna is characterized by wet and dry seasons, and contains a variety of grasses, shrubs, scattered trees, and animals such as zebras, elephants, and lions. The montane cloud forest is found in the mountains of Mexico, has constant fog, contains unique species such as the quetzal, and plays an important role in the country's biological diversity.
TOP 5 TIPS TO LEADING A LIMITLESS LIFE Dr Gary Tho
The document discusses the benefits of exercise for mental health. Regular physical activity can help reduce anxiety and depression and improve mood and cognitive function. Exercise stimulates the production of endorphins in the brain which elevate mood and reduce stress levels.
Spark, Deep Learning and Life Sciences, Systems Biology in the Big Data Agebatchinsights
In this talk I will outline current advances in the use of Spark for next generation sequencing, protein interaction networks and folding challenges. I will outline how Spark with Cassandra can be used with Deep Learning to predict biological function and disease. I also outline use cases for virtual screening and drug discovery.
With cycling increasing in popularity in the UK, ŠKODA teamed up with The Telegraph to communicate its long-standing heritage with bikes via Tour de France-related activity.
The document summarizes a presentation on the Direct Peptide Reactivity Assay (DPRA) method. The DPRA is an in chemico method that uses HPLC to measure the depletion of synthetic peptides exposed to test chemicals, in order to predict a chemical's ability to bind to epidermal proteins and sensitize skin. The presentation describes the objectives, procedures, advantages and precautions of the DPRA method. Reference chemicals are used to establish the DPRA's ability to distinguish sensitizers from non-sensitizers. Percent peptide depletion is calculated to indicate a chemical's sensitization potential and reactivity.
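The percent peptide depletion figure the DPRA reports is a simple ratio of HPLC peak areas; a minimal sketch, where the function name and peak-area values are illustrative inventions, not taken from the presentation:

```python
def percent_peptide_depletion(control_area, sample_area):
    # Depletion of the synthetic peptide relative to the reference control,
    # clipped at zero so an enhanced peak does not report negative depletion.
    return max(0.0, (1.0 - sample_area / control_area) * 100.0)

# Illustrative HPLC peak areas for one test chemical.
print(round(percent_peptide_depletion(1000.0, 350.0), 2))  # → 65.0
```

Higher depletion indicates greater peptide reactivity and, in turn, greater predicted skin sensitization potential.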
Quantitative structure-activity relationships (QSARs) attempt to establish mathematical relationships between biological activity and measurable physicochemical parameters of drugs. These parameters represent properties like lipophilicity, size, and shape. QSAR studies relate parameter values to biological activity using regression analysis to generate equations. These equations can then be used to predict activity and guide the synthesis of new analogs. Key parameters include lipophilicity, represented by partition coefficients, electronic effects, represented by Hammett constants, and steric effects, represented by Taft constants. Lipophilicity shows an optimal value for activity that balances solubility in aqueous and lipid phases.
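A QSAR equation of this kind is just a multiple linear regression of activity on substituent parameters. A sketch with invented substituent constants and activities (none of these numbers come from a real analog series):

```python
import numpy as np

# Hypothetical analog series: Hansch pi (lipophilicity) and Hammett sigma
# (electronic effect) for five substituents, with measured log(1/C) values.
pi    = np.array([0.00, 0.56, 0.86, 1.02, 1.44])
sigma = np.array([0.00, 0.23, 0.37, 0.54, 0.71])
act   = np.array([3.1, 3.9, 4.4, 4.8, 5.6])

# Fit log(1/C) = a*pi + b*sigma + c by ordinary least squares.
X = np.column_stack([pi, sigma, np.ones_like(pi)])
coef, *_ = np.linalg.lstsq(X, act, rcond=None)
predicted = X @ coef
print(np.round(coef, 2))
```

The fitted coefficients can then be used to predict log(1/C) for an unsynthesized analog from its tabulated pi and sigma values.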
This document discusses various topics related to drug discovery through bioinformatics and computational approaches. It covers target identification and validation, high-throughput screening, developing hits into leads, evaluating drug-likeness of compounds using rules like Lipinski's Rule of Five, and using computational descriptors for virtual screening. The goal is to discuss how computational tools can help streamline the drug discovery process by aiding in target selection and validation, compound screening and optimization of leads.
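Lipinski's Rule of Five mentioned above is simple enough to state in code. In practice the four properties would come from a cheminformatics toolkit; here they are supplied by hand, and the aspirin-like values are approximate illustrations:

```python
def lipinski_violations(mol_weight, logp, h_donors, h_acceptors):
    # Count how many Rule-of-Five criteria the compound breaks.
    return sum([mol_weight > 500,
                logp > 5,
                h_donors > 5,
                h_acceptors > 10])

# Approximate aspirin-like property values.
print(lipinski_violations(180.2, 1.2, 1, 4))  # → 0
```

Compounds breaking more than one criterion are conventionally flagged as unlikely to be orally bioavailable.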
This document discusses quantitative structure-activity relationships (QSARs) which attempt to establish mathematical relationships between biological activity and measurable physicochemical parameters of drugs. These parameters include lipophilicity, electronic effects, and steric effects. Lipophilicity is often represented by partition coefficients, electronic effects by Hammett constants, and steric effects by Taft's steric constants. QSAR studies correlate these parameters to biological activity using regression analysis to obtain equations that can predict activity of new analogs. Lipophilic and electronic substituent constants are also discussed which represent contributions of groups to physicochemical properties.
The document discusses computational models that have been and can be used for predicting human toxicities. It provides examples of models that have been developed for predicting various physicochemical properties, interactions with proteins, and toxicity outcomes like mutagenicity, environmental toxicity, and drug-induced liver injury. It also outlines future areas that could be modeled, like mixtures and more specific protein targets. The key enablers of these models are increased computing power and data availability from literature and open sources.
The document discusses various topics related to drug discovery through bioinformatics and computational approaches. It begins by discussing comparative genomics and using knowledge about model organisms to identify similar biological areas and pathways in other species. It also discusses topics like high-throughput screening of large libraries, the definitions of targets, hits and leads in drug discovery, and approaches like using RNAi and phenotypic screening in model organisms. Finally, it discusses computational methods that can be used throughout the drug discovery process, including for target identification and validation, virtual screening, assessing drug-likeness of compounds, and describing compounds using structural and physicochemical descriptors.
The document discusses various topics related to drug discovery including target identification and validation, high-throughput screening, hit and lead identification, computational approaches like docking and de novo design, and clinical trial phases. It provides definitions for key terms like target, screening, hit, and lead. It also discusses sources for screening libraries and describes factors to consider for an optimal drug target.
The Utility of H/DX-MS in Biopharmaceutical Comparability StudiesAbhijeet Lokras
A presentation based on the research of Engen et al., which examines the utility of hydrogen-deuterium exchange MS (HDX-MS) in biopharmaceutical comparability studies. HDX-MS is briefly introduced and some key concepts are explained.
Network analysis of cancer metabolism: A novel route to precision medicineVarshit Dusad
This document discusses using network analysis and mass flow graphs to analyze cancer cell metabolism. It assesses different published genome-scale metabolic models of cancer and determines that PRIME models are best suited for applying mass flow graph analysis. Constraint-based analysis is performed on PRIME models to simulate metabolic conditions and genetic perturbations. Centrality analysis using PageRank reveals changes in network structure under different conditions but does not fully support the centrality-lethality hypothesis regarding essential reactions. Future work is needed to better integrate omics data and identify centrality measures that correlate with biological importance.
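PageRank centrality of the kind used in that analysis can be computed by power iteration on a column-stochastic transition matrix. A toy sketch on an invented four-node directed network (not a real metabolic model; it assumes every node has at least one outgoing edge):

```python
import numpy as np

# Invented directed adjacency matrix: A[i, j] = 1 if node i links to node j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)

def pagerank(adj, damping=0.85, iters=100):
    n = adj.shape[0]
    # Row-normalize, then transpose to get a column-stochastic matrix.
    M = (adj / adj.sum(axis=1, keepdims=True)).T
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1.0 - damping) / n + damping * (M @ r)
    return r / r.sum()

print(np.round(pagerank(A), 3))
```

In a reaction network, nodes with high PageRank are those that many other well-connected reactions feed into, which is the sense of "centrality" being tested against the centrality-lethality hypothesis.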
Methods and Approaches for Defining Mechanism Signatures from Human Primary C...BioMAP® Systems
The document describes methods for using human primary cell-based disease models called BioMAP systems to define mechanism signatures of compounds. BioMAP systems analyze compound profiles across multiple readouts in various cell types to classify compounds according to their mechanisms of action or toxicity pathways. High reproducibility is shown for compound profiles even across different experiments and cell donors.
- The document proposes a multi-view stacking ensemble method for drug-target interaction (DTI) prediction that combines predictions from multiple machine learning models trained on different drug and target feature view combinations.
- It generates 126 view combination datasets from 14 drug views and 9 target views, then trains extra trees, random forest, and XGBoost classifiers on each view combination. Predictions from these base models are then combined using a stacking ensemble with an extra trees meta-learner.
- The method is shown to outperform single models and voting ensembles, and calibration of the meta-learner and use of local imbalance measures provide further improvements to predictive performance on DTI prediction tasks.
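The multi-view stacking idea above can be shown with a deliberately minimal sketch: two synthetic feature "views" of the same samples, one simple nearest-centroid base model per view, and a trivial averaging meta-rule standing in for the calibrated extra-trees meta-learner the method actually uses. Everything below is a toy stand-in, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "views" of the same 200 samples (e.g. drug-side and
# target-side features), both weakly informative about the label y.
n = 200
y = rng.integers(0, 2, n)
view_a = y[:, None] + rng.normal(0.0, 1.0, (n, 3))
view_b = y[:, None] + rng.normal(0.0, 1.5, (n, 4))

def centroid_score(train_X, train_y, X):
    # Score in [0, 1]: closer to the class-1 centroid -> higher.
    c0, c1 = train_X[train_y == 0].mean(0), train_X[train_y == 1].mean(0)
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return d0 / (d0 + d1)

# Level 0: one base model per view; its scores become meta-features.
Z = np.column_stack([centroid_score(v[:100], y[:100], v)
                     for v in (view_a, view_b)])

# Level 1: trivial meta-rule (average score, threshold at 0.5).
pred = (Z.mean(axis=1) > 0.5).astype(int)
print(round((pred[100:] == y[100:]).mean(), 2))  # held-out accuracy
```

The point of the design is that each view contributes an independent, noisy opinion, and the meta-learner decides how to weigh them.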
Humans are potentially exposed to tens of thousands of man-made chemicals in the environment. It is well known that some environmental chemicals mimic natural hormones and thus have the potential to be endocrine disruptors. Most of these environmental chemicals have never been tested for their ability to disrupt the endocrine system, in particular, their ability to interact with the estrogen receptor. EPA needs tools to prioritize thousands of chemicals, for instance in the Endocrine Disruptor Screening Program (EDSP). This project was intended to be a demonstration of the use of predictive computational models on HTS data including ToxCast and Tox21 assays to prioritize a large chemical universe of 32464 unique structures for one specific molecular target – the estrogen receptor. CERAPP combined multiple computational models for prediction of estrogen receptor activity, and used the predicted results to build a unique consensus model. Models were developed in collaboration between 17 groups in the U.S. and Europe and applied to predict the common set of chemicals. Structure-based techniques such as docking and several QSAR modeling approaches were employed, mostly using a common training set of 1677 compounds provided by U.S. EPA, to build a total of 42 classification models and 8 regression models for binding, agonist and antagonist activity. All predictions were evaluated on ToxCast data and on an external validation set collected from the literature. In order to overcome the limitations of single models, a consensus was built weighting models based on their prediction accuracy scores (including sensitivity and specificity against training and external sets). Individual model scores ranged from 0.69 to 0.85, showing high prediction reliabilities. The final consensus predicted 4001 chemicals as actives to be considered as high priority for further testing and 6742 as suspicious chemicals. This abstract does not necessarily reflect U.S. EPA policy
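The accuracy-weighted consensus described above reduces, for each chemical, to a weighted vote over the individual model calls. A minimal sketch with invented predictions and weights (not CERAPP's actual model scores):

```python
# One chemical's binary calls from four hypothetical models (1 = active),
# weighted by each model's accuracy-based score.
predictions = [1, 0, 1, 1]
weights     = [0.85, 0.69, 0.80, 0.75]

consensus = sum(p * w for p, w in zip(predictions, weights)) / sum(weights)
print(consensus > 0.5)  # → True: predicted active
```

Weighting by accuracy lets well-validated models dominate the call while still letting weaker models break ties.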
Discovery PBPK: How to estimate the expected accuracy of ISIVB and IVIVB for ...Simulations Plus, Inc.
This slideshow was presented at the 6th Asia Pacific Regional ISSX meeting (2018) in Hangzhou, China. Chief Scientist Michael Bolger explains how Simulations Plus' PBPK modeling and simulation software can be used successfully in the lead optimization phase of drug discovery.
Novel Methodology for Predicting Synergistic Cancer Drug Pairs SlidesMegan Yin
Gene expression data can be used to predict synergistic drug combinations for cancer treatment. The author developed three algorithms to do this - regularized bilinear regression, up-down gene analysis regression, and a neighborhood predictor. Performance improved with each subsequent model as they incorporated more biological context. The neighborhood predictor performed best by looking at similar drug combinations and cell lines based on gene expression similarity. This suggests gene expression is key for predicting synergy and combining drugs targeted at multiple pathways may overcome drug resistance in cancer. More gene expression data on more cell lines and drug perturbations could further improve predictions of synergistic combinations.
Using computational models like pharmacophores and machine learning, researchers developed in silico models to predict interactions of drugs and compounds with important human drug transporters. Pharmacophore models of P-gp, ASBT, and OCTN2 were able to retrieve known substrates and inhibitors from databases and discover new interacting drug classes. A Bayesian model for ASBT performed well in classification, though external test sets remained challenging. Transporter models aid understanding of absorption, distribution, and toxicity of drugs.
Lecture 3 about IT governance and how it worksssuser2d7235
This document discusses statistical methods for estimating confidence intervals and comparing means and variances. It provides examples of how to calculate confidence limits for a sample mean, perform hypothesis tests to compare two means or variances, and determine if differences are statistically significant. Several examples show how to apply t-tests and F-tests to compare analytical data and determine if measurement methods or treatment processes yield significantly different results.
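The pooled two-sample t-test and variance-ratio F-test described can be computed directly from their definitions. A sketch with invented replicate measurements from two analytical methods:

```python
import math

# Invented replicate results from two analytical methods (same units).
a = [10.2, 10.4, 10.1, 10.5, 10.3]
b = [10.6, 10.8, 10.7, 10.9, 10.5]

def mean(x):
    return sum(x) / len(x)

def sample_var(x):
    m = mean(x)
    return sum((v - m) ** 2 for v in x) / (len(x) - 1)

# Pooled (equal-variance) two-sample t statistic.
sp2 = (((len(a) - 1) * sample_var(a) + (len(b) - 1) * sample_var(b))
       / (len(a) + len(b) - 2))
t = (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / len(a) + 1 / len(b)))

# F statistic: ratio of the larger to the smaller sample variance.
F = max(sample_var(a), sample_var(b)) / min(sample_var(a), sample_var(b))

print(round(abs(t), 2), round(F, 2))  # → 4.0 1.0
```

With 8 degrees of freedom the two-tailed 5% critical value of t is about 2.31, so |t| = 4.0 indicates a significant difference between the two means, while F = 1.0 shows the sample variances are indistinguishable.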
PROGRAM PHASE IN LIGAND-BASED PHARMACOPHORE MODEL GENERATION AND 3D DATABASE ...Simone Brogi
We have applied a novel approach to generate a ligand-based pharmacophore model. The pharmacophore was built from a set of 42 compounds showing activity against the MCF-7 cell line derived from human mammary adenocarcinoma, using the program PHASE, implemented in the Schrödinger software suite. PHASE is a highly flexible system for common pharmacophore identification and assessment and for 3D-database creation and searching. The best pharmacophore hypothesis showed five features: two hydrogen-bond acceptors, one hydrogen-bond donor, and two aromatic rings. The structure-activity relationship (SAR) so acquired was applied within PHASE for molecular alignment in a comparative molecular field analysis (CoMFA) 3D-QSAR study. The 3D-QSAR model yielded a test set r² of 0.97 and proved highly predictive with respect to an external test set of 18 compounds (r² = 0.93). In summary, in this study we improved a previously developed Catalyst MCF-7 inhibitory pharmacophore and established a predictive 3D-QSAR model. We have further used this model to detect novel MCF-7 cell line inhibitors through 3D database searching.
Scoring and ranking of metabolic trees to computationally prioritize chemical...Kamel Mansouri
The aim of this work was to design an in silico and in vitro approach to prioritize compounds and perform a quantitative safety assessment. To this end, we pursue a tiered approach taking into account bioactivity and bioavailability of chemicals and their metabolites using a human uterine epithelial cell (Ishikawa)-based assay. This biologically relevant fit-for-purpose assay was designed to quantitatively recapitulate in vivo human response and establish a margin of safety.
Similar to Pharmacophore extraction from Matched Molecular Pair Analysis (20)
MedChemica Levinthal Lecture at Openeye CUP XX 2020Ed Griffen
This document summarizes a lecture on improving medicinal and computational medicinal chemistry. It discusses defining clear target product profiles through collaboration between medicinal chemists and other experts. Navigating medicinal chemistry projects requires estimating the predicted therapeutic dose of compounds. The document outlines tactics for exploring a compound's structure-activity relationship, including introducing and modifying chiral centers. It also describes how mining past medicinal chemistry data can provide rules for modifying compounds to improve properties like solubility while maintaining potency.
Emerging Challenges for Artificial Intelligence in Medicinal ChemistryEd Griffen
Presentation by Dr Ed Griffen of MedChemica Ltd, at The IBSA Conference "How Artificial Intelligence Can Change the Pharmaceutical Landscape“ - LUGANO, October 9th 2019.
Presented at Artificial Intelligence and Machine Learning for Advanced Drug Discovery & Development 2019 on 28th May 2019 by Dr Ed Griffen of MedChemica Ltd
SCI What can Big Data do for Chemistry 2017 MedChemicaEd Griffen
This document discusses how advanced analytics and big data techniques can be applied in the chemistry industry. It provides examples of how matched molecular pair analysis has been used to extract statistically valid structure-activity relationships from large datasets and summarize them in the form of transformation rules. These rules have helped suggest new molecules, explore structure-activity relationships, identify exceptional structure-property relationships, and enable the rapid optimization of drug candidates. The document argues that combining data from multiple sources yields more comprehensive rules and that interfaces must be designed with the intended users in mind.
Lecture given by Ed Griffen UKQSAR meeting Sept 2017. Covers material from work in our paper http://pubs.acs.org/doi/10.1021/acs.jmedchem.7b00935 background discussed in https://www.linkedin.com/pulse/first-draft-medicinal-chemistry-admet-encyclopedia-ed-griffen/
Travis Hills' Endeavors in Minnesota: Fostering Environmental and Economic Pr...Travis Hills MN
Travis Hills of Minnesota developed a method to convert waste into high-value dry fertilizer, significantly enriching soil quality. By providing farmers with a valuable resource derived from waste, Travis Hills helps enhance farm profitability while promoting environmental stewardship. Travis Hills' sustainable practices lead to cost savings and increased revenue for farmers by improving resource efficiency and reducing waste.
Unlocking the mysteries of reproduction: Exploring fecundity and gonadosomati...AbdullaAlAsif1
The pygmy halfbeak Dermogenys colletei is known for its viviparous nature and presents an intriguing case of relatively low fecundity, raising questions about potential compensatory reproductive strategies employed by this species. Our study delves into the examination of fecundity and the gonadosomatic index (GSI) in the pygmy halfbeak, D. colletei (Meisner, 2001), an intriguing viviparous fish indigenous to Sarawak, Borneo. We hypothesize that the pygmy halfbeak, D. colletei, may exhibit unique reproductive adaptations to offset its low fecundity, thus enhancing its survival and fitness. To address this, we conducted a comprehensive study utilizing 28 mature female specimens of D. colletei, carefully measuring fecundity and GSI to shed light on the reproductive adaptations of this species. Our findings reveal that D. colletei indeed exhibits low fecundity, with a mean of 16.76 ± 2.01, and a mean GSI of 12.83 ± 1.27, providing crucial insights into the reproductive mechanisms at play in this species. These results underscore the existence of unique reproductive strategies in D. colletei, enabling its adaptation and persistence in Borneo's diverse aquatic ecosystems, and call for further ecological research to elucidate these mechanisms. This study contributes to a better understanding of viviparous fish in Borneo and to the broader field of aquatic ecology, enhancing our knowledge of species adaptations to unique ecological challenges.
The technology uses reclaimed CO₂ as the dyeing medium in a closed loop process. When pressurized, CO₂ becomes supercritical (SC-CO₂). In this state CO₂ has a very high solvent power, allowing the dye to dissolve easily.
The debris of the ‘last major merger’ is dynamically youngSérgio Sacani
The Milky Way's (MW) inner stellar halo contains an [Fe/H]-rich component with highly eccentric orbits, often referred to as the 'last major merger.' Hypotheses for the origin of this component include Gaia-Sausage/Enceladus (GSE), where the progenitor collided with the MW proto-disc 8–11 Gyr ago, and the Virgo Radial Merger (VRM), where the progenitor collided with the MW disc within the last 3 Gyr. These two scenarios make different predictions about observable structure in local phase space, because the morphology of debris depends on how long it has had to phase mix. The recently identified phase-space folds in Gaia DR3 have positive caustic velocities, making them fundamentally different than the phase-mixed chevrons found in simulations at late times. Roughly 20 per cent of the stars in the prograde local stellar halo are associated with the observed caustics. Based on a simple phase-mixing model, the observed number of caustics is consistent with a merger that occurred 1–2 Gyr ago. We also compare the observed phase-space distribution to FIRE-2 Latte simulations of GSE-like mergers, using a quantitative measurement of phase mixing (2D causticality). The observed local phase-space distribution best matches the simulated data 1–2 Gyr after collision, and certainly not later than 3 Gyr. This is further evidence that the progenitor of the 'last major merger' did not collide with the MW proto-disc at early times, as is thought for the GSE, but instead collided with the MW disc within the last few Gyr, consistent with the body of work surrounding the VRM.
Immersive Learning That Works: Research Grounding and Paths ForwardLeonel Morgado
We will metaverse into the essence of immersive learning, into its three dimensions and conceptual models. This approach encompasses elements from teaching methodologies to social involvement, through organizational concerns and technologies. Challenging the perception of learning as knowledge transfer, we introduce a 'Uses, Practices & Strategies' model operationalized by the 'Immersive Learning Brain' and 'Immersion Cube' frameworks. This approach offers a comprehensive guide through the intricacies of immersive educational experiences, spotlighting research frontiers along the immersion dimensions of system, narrative, and agency. Our discourse extends to stakeholders beyond the academic sphere, addressing the interests of technologists, instructional designers, and policymakers. We span various contexts, from formal education to organizational transformation to the new horizon of an AI-pervasive society. This keynote aims to unite the iLRN community in a collaborative journey towards a future where immersive learning research and practice coalesce, paving the way for innovative educational research and practice landscapes.
Authoring a personal GPT for your research and practice: How we created the Q...Leonel Morgado
Thematic analysis in qualitative research is a time-consuming and systematic task, typically done by teams. Team members must ground their activities on common understandings of the major concepts underlying the thematic analysis, and define criteria for its development. However, conceptual misunderstandings, equivocations, and lack of adherence to criteria are challenges to the quality and speed of this process. Given the distributed and uncertain nature of this process, we wondered if the tasks in thematic analysis could be supported by readily available artificial intelligence chatbots. Our early efforts point to potential benefits: not just saving time in the coding process but better adherence to criteria and grounding, by increasing triangulation between humans and artificial intelligence. This tutorial will provide a description and demonstration of the process we followed, as two academic researchers, to develop a custom ChatGPT to assist with qualitative coding in the thematic data analysis process of immersive learning accounts in a survey of the academic literature: the QUAL-E Immersive Learning Thematic Analysis Helper. In the hands-on time, participants will try out QUAL-E and develop ideas for their own qualitative coding ChatGPT. Participants who have the paid ChatGPT Plus subscription can create a draft of their assistants. The organizers will provide course materials and a slide deck that participants will be able to use to continue development of their custom GPT. The paid subscription to ChatGPT Plus is not required to participate in this workshop, only for trying out personal GPTs during it.
The binding of cosmological structures by massless topological defectsSérgio Sacani
Assuming spherical symmetry and weak field, it is shown that if one solves the Poisson equation or the Einstein field equations sourced by a topological defect, i.e. a singularity of a very specific form, the result is a localized gravitational field capable of driving flat rotation (i.e. Keplerian circular orbits at a constant speed for all radii) of test masses on a thin spherical shell without any underlying mass. Moreover, a large-scale structure which exploits this solution by assembling concentrically a number of such topological defects can establish a flat stellar or galactic rotation curve, and can also deflect light in the same manner as an equipotential (isothermal) sphere. Thus, the need for dark matter or modified gravity theory is mitigated, at least in part.
The ability to recreate computational results with minimal effort and actionable metrics provides a solid foundation for scientific research and software development. When people can replicate an analysis at the touch of a button using open-source software, open data, and methods to assess and compare proposals, it significantly eases verification of results, engagement with a diverse range of contributors, and progress. However, we have yet to fully achieve this; there are still many sociotechnical frictions.
Inspired by David Donoho's vision, this talk aims to revisit the three crucial pillars of frictionless reproducibility (data sharing, code sharing, and competitive challenges) with the perspective of deep software variability.
Our observation is that multiple layers — hardware, operating systems, third-party libraries, software versions, input data, compile-time options, and parameters — are subject to variability that exacerbates frictions but is also essential for achieving robust, generalizable results and fostering innovation. I will first review the literature, providing evidence of how the complex variability interactions across these layers affect qualitative and quantitative software properties, thereby complicating the reproduction and replication of scientific studies in various fields.
I will then present some software engineering and AI techniques that can support the strategic exploration of variability spaces. These include the use of abstractions and models (e.g., feature models), sampling strategies (e.g., uniform, random), cost-effective measurements (e.g., incremental build of software configurations), and dimensionality reduction methods (e.g., transfer learning, feature selection, software debloating).
I will finally argue that deep variability is both the problem and solution of frictionless reproducibility, calling the software science community to develop new methods and tools to manage variability and foster reproducibility in software systems.
Invited talk at the Journées Nationales du GDR GPL 2024.
Or: Beyond linear.
Abstract: Equivariant neural networks are neural networks that incorporate symmetries. The nonlinear activation functions in these networks result in interesting nonlinear equivariant maps between simple representations, and motivate the key player of this talk: piecewise linear representation theory.
Disclaimer: No one is perfect, so please mind that there might be mistakes and typos.
dtubbenhauer@gmail.com
Corrected slides: dtubbenhauer.com/talks.html
ESPP presentation to EU Waste Water Network, 4th June 2024: “EU policies driving nutrient removal and recycling and the revised UWWTD (Urban Waste Water Treatment Directive)”
EWOCS-I: The catalog of X-ray sources in Westerlund 1 from the Extended Weste...Sérgio Sacani
Context. With a mass exceeding several 10⁴ M⊙ and a rich and dense population of massive stars, supermassive young star clusters represent the most massive star-forming environment that is dominated by the feedback from massive stars and gravitational interactions among stars.
Aims. In this paper we present the Extended Westerlund 1 and 2 Open Clusters Survey (EWOCS) project, which aims to investigate the influence of the starburst environment on the formation of stars and planets, and on the evolution of both low and high mass stars. The primary targets of this project are Westerlund 1 and 2, the closest supermassive star clusters to the Sun.
Methods. The project is based primarily on recent observations conducted with the Chandra and JWST observatories. Specifically, the Chandra survey of Westerlund 1 consists of 36 new ACIS-I observations, nearly co-pointed, for a total exposure time of 1 Msec. Additionally, we included 8 archival Chandra/ACIS-S observations. This paper presents the resulting catalog of X-ray sources within and around Westerlund 1. Sources were detected by combining various existing methods, and photon extraction and source validation were carried out using the ACIS-Extract software.
Results. The EWOCS X-ray catalog comprises 5963 validated sources out of the 9420 initially provided to ACIS-Extract, reaching a photon flux threshold of approximately 2 × 10⁻⁸ photons cm⁻² s⁻¹. The X-ray sources exhibit a highly concentrated spatial distribution, with 1075 sources located within the central 1 arcmin. We have successfully detected X-ray emissions from 126 out of the 166 known massive stars of the cluster, and we have collected over 71 000 photons from the magnetar CXO J164710.20-455217.
Remote Sensing and Computational, Evolutionary, Supercomputing, and Intellige...University of Maribor
Slides from talk:
Aleš Zamuda: Remote Sensing and Computational, Evolutionary, Supercomputing, and Intelligent Systems.
11th International Conference on Electrical, Electronics and Computer Engineering (IcETRAN), Niš, 3-6 June 2024
Inter-Society Networking Panel GRSS/MTT-S/CIS Panel Session: Promoting Connection and Cooperation
https://www.etran.rs/2024/en/home-english/