Next generation sequencing: research opportunities and bioinformatic challenges. A seminar I gave for the Computational Life Science (Univ. of Oslo) seminar series, March 2, 2011
Introduction to Next-Generation Sequencing (NGS) Technology (QIAGEN)
The continuous evolution of NGS technology has led to an enormous diversification in NGS applications and dramatically decreased the costs to sequence a complete human genome.
In this presentation, we will discuss the following major topics:
• Basic overview of NGS sequencing technologies
• Next-generation sequencing workflow
• Spectrum of NGS applications
• QIAGEN universal NGS solutions
Course: Bioinformatics for Biomedical Research (2014).
Session: 2.1.2- Next Generation Sequencing. Technologies and Applications. Part II: NGS Applications I.
Statistics and Bioinformatics Unit (UEB) & High Technology Unit (UAT) from Vall d'Hebron Research Institute (www.vhir.org), Barcelona.
Next Generation Sequencing (NGS) is a modern and cost-effective sequencing technology that enables scientists to sequence nucleic acids at a much faster rate. In this presentation, you will learn what NGS is, the idea behind NGS, methodology and protocol, widely adopted NGS protocols, applications, and references for further study.
White Paper: Next-Generation Genome Sequencing Using EMC Isilon Scale-Out NAS... (EMC)
This EMC Isilon sizing and performance guideline White Paper reviews the Key Performance Indicators (KPIs) that most strongly impact the production processes for the storage of data from Next-Generation Sequencing (NGS) workflows.
As next generation sequencing has moved into the clinic, there is an increased demand for accuracy and reproducibility. Target enrichment is needed for applications where high read depth is critical, but some performance limitations, especially in GC-rich regions of the genome, have raised questions about the overall usefulness of target capture methods. In this presentation, Dr Kristina Giorda presents a method using individually synthesized and quality checked capture baits that performs well, even for GC-rich sequences, and delivers accurate coverage of the target space. Dr Giorda covers library preparation and target capture, and shares informative data generated using our xGen® Exome Research Panel.
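Claims about uniform coverage of GC-rich targets are usually checked by summarizing read depth per target region against each target's GC content. The sketch below is a generic, hypothetical illustration of such a check (not IDT's analysis); it assumes per-base depths and target sequences have already been extracted, for example from an aligned BAM file.

```python
# Hypothetical per-target coverage/GC summary for a capture panel.
# Assumes per-base depths and target sequences are already available.
def gc_fraction(seq):
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

def summarize_targets(targets):
    """targets: iterable of (name, sequence, per_base_depths) tuples."""
    rows = []
    for name, seq, depths in targets:
        mean_depth = sum(depths) / max(len(depths), 1)
        covered_20x = sum(d >= 20 for d in depths) / max(len(depths), 1)
        rows.append((name, gc_fraction(seq), mean_depth, covered_20x))
    return rows

targets = [
    ("TP53_ex4",  "ATTGAC" * 20, [150, 160, 155] * 40),  # toy numbers only
    ("CEBPA_ex1", "GCGCGC" * 20, [35, 12, 40] * 40),     # GC-rich target, shakier coverage
]
for name, gc, depth, frac20 in summarize_targets(targets):
    print(f"{name}: GC={gc:.2f}  mean depth={depth:.0f}  bases >=20x={frac20:.0%}")
```

Uniform panels show a flat depth profile regardless of GC; a drop in the ">=20x" fraction for GC-rich targets is the kind of limitation the presentation addresses.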
Course: Bioinformatics for Biomedical Research (2014).
Session: 2.3- Introduction to NGS Variant Calling Analysis.
Statistics and Bioinformatics Unit (UEB) & High Technology Unit (UAT) from Vall d'Hebron Research Institute (www.vhir.org), Barcelona.
Knowing Your NGS Upstream: Alignment and Variants (Golden Helix Inc)
Alignment algorithms are not just about placing reads in best-matching locations to a reference genome. They are now being expected to handle small insertions, deletions, gapped alignment of reads across intron boundaries and even span breakpoints of structural variations, fusions and copy number changes. At the same time, variant-calling algorithms can only reach their full potential by being intimately matched to the aligner's output or by doing local assemblies themselves. Knowing when these tools can be expected to perform well and when they will produce technical artifacts or be incapable of detecting features is critical when interpreting any analysis based on their output.
This presentation will compare the performance of the alignment and variant calling tools used by sequencing service providers including Illumina Genome Network, Complete Genomics and The Broad Institute. Using public samples analyzed by each pipeline, we will look at the level of concordance and dive into investigating problematic variants and regions of the genome.
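At its simplest, the concordance comparison described here reduces to set operations over normalized variant records. The following is a minimal, hypothetical sketch of that idea (not the Golden Helix workflow); it assumes each pipeline's calls have already been exported to a tab-separated file of chrom, pos, ref and alt.

```python
# Hypothetical concordance check between two variant call sets.
# Assumes each input is a TSV with columns: chrom, pos, ref, alt.
import csv

def load_calls(path):
    """Load variant calls into a set of (chrom, pos, ref, alt) tuples."""
    calls = set()
    with open(path, newline="") as handle:
        for row in csv.reader(handle, delimiter="\t"):
            chrom, pos, ref, alt = row[:4]
            calls.add((chrom, int(pos), ref.upper(), alt.upper()))
    return calls

def concordance(calls_a, calls_b):
    """Return shared calls and the calls unique to each pipeline."""
    shared = calls_a & calls_b
    return shared, calls_a - calls_b, calls_b - calls_a

if __name__ == "__main__":
    a = load_calls("pipeline_a.tsv")   # hypothetical file names
    b = load_calls("pipeline_b.tsv")
    shared, only_a, only_b = concordance(a, b)
    total = len(shared) + len(only_a) + len(only_b)
    print(f"concordant: {len(shared)} ({len(shared) / total:.1%} of the union)")
    print(f"unique to A: {len(only_a)}, unique to B: {len(only_b)}")
```

The discordant sets (unique to A or B) are exactly where the talk digs in: problematic variants and regions where aligner and caller choices produce technical artifacts.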
Next Generation Sequencing (NGS) in food safety - Game changer or just another ... (ExternalEvents)
http://tiny.cc/faowgsworkshop
The use of genome sequencing technology in food safety management. Presentation from the FAO expert workshop on practical applications of Whole Genome Sequencing (WGS) for food safety management, 7-8 December 2015, Rome, Italy.
Target enrichment enables researchers to focus their next generation sequencing (NGS) efforts on regions of interest, allowing them to obtain more sequencing data relevant to their study. In-solution target capture is a method of enrichment using oligonucleotide probes directed to specific regions within a genome. Target capture can be used to enrich multiple samples simultaneously, reducing the cost per sample, while using individually synthesized probes allows researchers to construct gene panels that can be optimized over time.
QIAseq Targeted DNA, RNA and Fusion Gene Panels (QIAGEN)
Tumor heterogeneity has been known about for a while, but quantifying heterogeneity is still a challenge. NGS is the method of choice for analyzing tumor heterogeneity; however, there are some inherent challenges associated with it. These include false positives, gaps in genes due to overrepresentation, and incomplete representation of low-frequency transcripts – all contributing to an inaccurate picture. Conventional library prep strategies for NGS are based on PCR, which introduces sequence-based bias and amplification noise, leading to these inaccuracies. In this webinar, we will cover
1. Principles of UMI and the new QIAseq product portfolio (a minimal UMI-collapsing sketch follows this list)
2. How UMI along with SPE (single primer extension) allows for increased uniformity across difficult-to-sequence regions, removal of library construction bias, improved data analysis and sequencing optimization
3. How data generated from using UMI and SPE is directly comparable to analysis derived from whole transcriptome and exome sequencing
4. Application of UMI and SPE in the discovery of novel gene fusions and in the analysis of gene expression and genetic variation
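As a rough illustration of the UMI principle in point 1, the sketch below collapses reads that share the same UMI and mapping position into a single consensus observation, so PCR duplicates are counted once. It is a simplified, hypothetical example, not the QIAseq analysis pipeline; real UMI handling also has to tolerate sequencing errors in the UMI itself.

```python
# Minimal, hypothetical UMI-collapsing sketch: reads sharing a UMI and
# start position are treated as PCR copies of one original molecule.
from collections import defaultdict, Counter

def collapse_umis(reads):
    """reads: iterable of (umi, chrom, pos, sequence) tuples.
    Returns one consensus sequence per unique (umi, chrom, pos) family."""
    families = defaultdict(list)
    for umi, chrom, pos, seq in reads:
        families[(umi, chrom, pos)].append(seq)

    consensus = {}
    for key, seqs in families.items():
        # Per-position majority vote across the duplicate reads.
        length = min(len(s) for s in seqs)
        bases = []
        for i in range(length):
            counts = Counter(s[i] for s in seqs)
            bases.append(counts.most_common(1)[0][0])
        consensus[key] = "".join(bases)
    return consensus

reads = [
    ("ACGTAC", "chr1", 100, "TTGACCA"),  # same molecule, three PCR copies
    ("ACGTAC", "chr1", 100, "TTGACCA"),
    ("ACGTAC", "chr1", 100, "TTGACCG"),  # one copy carries a PCR/sequencing error
    ("GGATCC", "chr1", 100, "TTGACCA"),  # a different original molecule
]
print(collapse_umis(reads))
# {('ACGTAC', 'chr1', 100): 'TTGACCA', ('GGATCC', 'chr1', 100): 'TTGACCA'}
```

Counting molecules (UMI families) rather than raw reads is what removes amplification noise and makes low-frequency variant and transcript counts quantitative.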
A full picture of -omics cellular regulatory networks brings researchers closer to a realistic and reliable understanding of complex conditions. For more information, please visit: http://tbioinfopb.pine-biotech.com/
Course: Bioinformatics for Biomedical Research (2014).
Session: 2.1.1- Next Generation Sequencing. Technologies and Applications. Part I: NGS Introduction and Technology Overview.
Statistics and Bioinformatics Unit (UEB) & High Technology Unit (UAT) from Vall d'Hebron Research Institute (www.vhir.org), Barcelona.
Understanding and controlling for sample and platform biases in NGS assays (Candy Smellie)
What is the impact of assay failure in your laboratory and how do you monitor for it?
The advancement of next-generation sequencing has provided invaluable resources to researchers in multiple industries and disciplines, and will be a major driver of the personalized medicine revolution that is upon us. However, while the cost of generating sequencing data continues to decrease, this does not take into account the significant costs associated with the infrastructure and expertise required to develop a robust, routine NGS pipeline.
Specifically, as predicted by Sboner et al. in 2011, the cost of the sequencing portion of the experiment continues to decrease, while the costs associated with upfront experimental design and downstream analysis come to dominate the cost of each assay. This is true for pre-clinical R&D projects, and perhaps even more so for clinical assays. In the paper, the authors note the unpredictable and considerable ‘human time’ spent on upstream design and downstream analysis. Here at Horizon, we aim to develop tools that help researchers and clinicians optimize these workflows, making NGS more reliable and ultimately more affordable by streamlining these resource-intensive areas.
Molecular QC: Interpreting your Bioinformatics Pipeline (Candy Smellie)
What is the impact of assay failure in your laboratory and how do you monitor for it?
The most heavily degraded samples are not suitable for standard exome coverage: sometimes it’s not even a matter of getting bad sequencing, you might get nothing at all!
FFPE artifacts increase with storage time
Artifacts reduce the statistical power of your variant calling analysis
Molecular reference standards help filter out bad mappings and spurious variants
Bioinformatics pipelines allow you to include Molecular Reference Standards in your joint variant calling
Genome In A Bottle Reference Standards are invaluable for validating variant calling analysis
NIST and its collaborators shared datasets created with most NGS technologies
Horizon Diagnostics shared annotated, merged variant calls from NIST for the Ashkenazim Trio
~35K variants within the Trio are predicted to have high or moderate impact
GM24385 (the Ashkenazim son) includes 352 small variants with high/moderate impact that are absent in both the father and the mother (see the sketch after this list)
Routinely monitor the performance of your workflows and assays with independent external controls
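The trio comparison above (son variants absent from both parents) is essentially a filter over jointly called variants. Below is a minimal, hypothetical sketch of that filter, assuming the merged trio calls are available as records with per-sample genotypes and a predicted impact field; it is illustrative only, not the Horizon/NIST analysis.

```python
# Hypothetical filter: keep high/moderate-impact variants called in the son
# (GM24385) but absent from both parents in a jointly called trio.
def son_only_variants(records):
    """records: iterable of dicts such as
    {"chrom": "chr1", "pos": 12345, "impact": "HIGH",
     "genotypes": {"son": "0/1", "father": "0/0", "mother": "0/0"}}"""
    def carries_alt(gt):
        # Any non-reference allele counts; missing calls like "./." do not.
        return any(a not in ("0", ".") for a in gt.replace("|", "/").split("/"))

    selected = []
    for rec in records:
        if rec["impact"] not in ("HIGH", "MODERATE"):
            continue
        gts = rec["genotypes"]
        if carries_alt(gts["son"]) and not carries_alt(gts["father"]) and not carries_alt(gts["mother"]):
            selected.append(rec)
    return selected

trio = [
    {"chrom": "chr1", "pos": 10177, "impact": "HIGH",
     "genotypes": {"son": "0/1", "father": "0/0", "mother": "0/0"}},
    {"chrom": "chr2", "pos": 20301, "impact": "MODERATE",
     "genotypes": {"son": "0/1", "father": "0/1", "mother": "0/0"}},
]
print(len(son_only_variants(trio)))  # 1
```

Running your own pipeline over such a characterized trio and checking whether you recover the expected son-only variants is one concrete way to use these reference standards as an external control.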
IDT provides a range of solutions for targeted next generation sequencing. Labs processing hundreds to thousands of samples can create highly uniform, custom panels using xGen® Lockdown Probes. The new xGen Acute Myeloid Leukemia (AML) panel is a predesigned set of Lockdown Probes that captures 260 genes identified by whole genome and exome sequencing of 200 patient samples. The AML panel can be used as stand-alone or customized with additional probes to detect other targets of interest.
Next-generation sequencing: Data management (Guy Coates)
Next-generation sequencing is producing vast amounts of data. Providing storage and compute is only half the battle. Researchers and IT staff need to be able to "manage" data, in order to stay productive.
Talk given at BIO-IT World, Europe 2010.
OVium Bio-Information Solutions uses state-of-the-art algorithms to analyze key data resources such as NCBI, EMBL and PDB to develop cell signalling pathways.
OVium employs cloud and MPP computing solutions with homology and signal network mapping to develop chemical and protein pathways for discovery research.
How to Standardise and Assemble Raw Data into Sequences: What Does it Mean fo... (Joseph Hughes)
11th OIE Seminar at the XVII International Symposium of the World Association of Veterinary Laboratory Diagnosticians (WAVLD)
Saskatoon - 17th June 2015
Repeatable plant pathology bioinformatic analysis: Not everything is NGS data (Leighton Pritchard)
Presentation on use of Galaxy for plant pathology bioinformatics, presented by Peter Cock, at the Genomics for Non-Model Organisms workshop, ISMB/ECCB, Vienna, Austria, 19 July 2011
Data-intensive applications on cloud computing resources: Applications in lif... (Ola Spjuth)
Presentation at the de.NBI 2017 symposium “The Future Development of Bioinformatics in Germany and Europe” held at the Center for Interdisciplinary Research (ZiF) of Bielefeld University, October 23-25, 2017.
https://www.denbi.de/symposium2017
The suite of free software tools created within the OpenCB (Open Computational Biology – https://github.com/opencb) initiative makes it possible to efficiently manage large genomic databases.
These tools are not yet widely used, since the complexity of the software stack makes for quite a steep learning curve, but they can be very cost-effective for hospitals, research institutions and similar organizations.
The objective of the talk is to show the potential of the OpenCB suite, how to start using it, and the advantages for end users. BioDec is currently deploying a large OpenCGA installation for the Genetic Unit of one of the main Italian hospitals, where data on the order of hundreds of TB will be managed and analyzed by bioinformaticians.
National-scale research computing and beyond: PEARC panel 2017 (Gregory Newby)
Panel at the PEARC 2017 event in New Orleans, July 11-13. Panelists were: Gregory Newby, Chief Technology Officer, Compute Canada; Florian Berberich, Member of the Board of Directors PRACE aisbl; Gergely Sipos, Customer and Technical Outreach Manager, EGI Foundation; and John Towns, Director of Collaborative eScience Programs, National Center for Supercomputing Applications.
Panel abstract: How might the international community of research computing users and stakeholders benefit from knowledge sharing among national- or international-scale research computing organizations and providers? It is common for large-scale investments in research computing systems, services and support to be guided and funded with government oversight and centralized planning. There are many commonalities, including stakeholder relations, outcomes reporting, long-range strategic planning, and governance. What trends exist currently, and how might information sharing and collaboration among resource providers be beneficial? Is there desire to form a partnership, or to build upon existing relationships? Participants in this panel will include personnel involved in US, Canadian and European research computing jurisdictions.
iMicrobe and iVirus: Extending the iPlant cyberinfrastructure from plants to ... (Bonnie Hurwitz)
iMicrobe and iVirus: Extending the iPlant cyberinfrastructure from plants to microbes. Overview of work underway to add applications and computational analysis pipelines to iPlant for metagenomics and microbial ecology.
BSC and Integrating Persistent Data and Parallel Programming Models (inside-BigData.com)
In this deck from the HPC Advisory Council Spain Conference, Toni Cortés from the Barcelona Supercomputing Center presents: BSC and Integrating Persistent Data and Parallel Programming Models.
Watch the video presentation: http://wp.me/p3RLHQ-exQ
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
How Can AI and IoT Power the Chemical Industry? (Xiaonan Wang)
AI, IoT and Blockchain tech briefing to the industry to showcase our research at NUS.
by Dr. Xiaonan Wang
Assistant Professor
NUS Department of Chemical & Biomolecular Engineering
CINECA webinar slides: Data Gravity in the Life Sciences: Lessons learned fro... (CINECAProject)
We live in an era of cloud computing. Many of the services in the life sciences are keenly planning cloud transformations, seeking to create globally distributed ecosystems of harmonised data based on standards from organisations like GA4GH. CINECA faces similar challenges, gathering cohort datasets from all over the globe, many of which are pinned in place, due to their size, legal restrictions, or other considerations. But is “bringing compute to the data” always the right choice? In this webinar, based on experiences from the Human Cell Atlas Data Coordination Platform and other projects from EMBL-EBI, we will explore the concept of “data gravity”: The idea that whilst there are forces that may hold data in one place, there are others that require it to be mobile. We’ll consider how effectively planning a cloud strategy requires consideration of the gravity of datasets, and the impact it may have on team skills required, incentives for good practice, and storage and compute costs.
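One way to make "data gravity" tangible is a back-of-envelope comparison of what it costs, in money and time, to move a dataset rather than bring compute to it. The numbers below are purely illustrative assumptions (egress prices and sustained link speeds vary widely by provider and region); only the shape of the calculation matters.

```python
# Back-of-envelope "data gravity" estimate with illustrative, assumed numbers.
DATASET_TB = 500                 # assumed cohort dataset size
EGRESS_USD_PER_GB = 0.09         # assumed cloud egress price
LINK_GBPS = 10                   # assumed sustained network throughput

dataset_gb = DATASET_TB * 1000
egress_cost = dataset_gb * EGRESS_USD_PER_GB
transfer_days = (dataset_gb * 8) / LINK_GBPS / 3600 / 24  # GB -> gigabits -> seconds -> days

print(f"Moving {DATASET_TB} TB: ~${egress_cost:,.0f} egress, "
      f"~{transfer_days:.1f} days at {LINK_GBPS} Gb/s")
```

When that cost and delay are weighed against legal restrictions and the skills needed to run compute remotely, the "gravity" of a given dataset becomes a concrete planning input rather than a metaphor.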
The CINECA webinar series aims to discuss ways to address common challenges and share best practices in the field of cohort data analysis, as well as to disseminate CINECA project results. All CINECA webinars include an audience Q&A session during which attendees can ask questions and make suggestions. Please note that all webinars are recorded and available for later viewing.
This webinar took place on 12th November 2020 and is part of the CINECA webinar series.
For previous and upcoming CINECA webinars see:
https://www.cineca-project.eu/webinars
Keynote on software sustainability given at the 2nd Annual Netherlands eScience Symposium, November 2014.
Based on the article:
Carole Goble, "Better Software, Better Research", vol. 18, no. 5 (Sept.-Oct. 2014), pp. 4-8, IEEE Computer Society.
http://www.computer.org/csdl/mags/ic/2014/05/mic2014050004.pdf
http://doi.ieeecomputersociety.org/10.1109/MIC.2014.88
http://www.software.ac.uk/resources/publications/better-software-better-research
Open Standard Internet of Things for Smart Cities (SensorUp)
In this Greening Government Speaker webinar, Dr. Steve Liang presented how an open standard-based Internet of Things architecture enables 10X faster development and instant on-demand integration. Several real-world smart city use cases were presented as well. Video is available here: https://www.youtube.com/watch?v=rLbvz6f4qb0
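The open standard behind this work is presumably the OGC SensorThings API, which exposes sensor data through a REST/OData-style interface. The snippet below sketches a typical read query; the endpoint URL is hypothetical, while entity names such as Things, Datastreams and Observations follow the published standard.

```python
# Query a (hypothetical) OGC SensorThings API endpoint for recent observations.
# The entity model (Things -> Datastreams -> Observations) comes from the standard.
import requests

BASE_URL = "https://example.org/SensorThings/v1.0"   # hypothetical server

# List registered Things (e.g. air-quality stations), limited to 5 results.
things = requests.get(f"{BASE_URL}/Things", params={"$top": 5}).json()
for thing in things.get("value", []):
    print(thing["@iot.id"], thing["name"])

# Fetch the latest observations of one Datastream, newest first.
obs = requests.get(
    f"{BASE_URL}/Datastreams(1)/Observations",
    params={"$top": 3, "$orderby": "phenomenonTime desc"},
).json()
for o in obs.get("value", []):
    print(o["phenomenonTime"], o["result"])
```

Because every compliant server exposes the same entities and query options, integrating a new sensor source is largely a matter of pointing the same client code at a different base URL, which is the "instant on-demand integration" the talk refers to.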
(Em)Powering Science: High-Performance Infrastructure in Biomedical Science (Ari Berman)
We’ll explore current and future considerations in advanced computing architectures that empower the conversion of data into knowledge. The life sciences produce the largest amount of data of all major science domains, making analytics and scientific computing cornerstones of modern research programs and methodologies. We’ll highlight the remarkable biomedical discoveries that are emerging through combined efforts, and discuss where and how the right infrastructure can catalyze the advancement of human knowledge. On-premises architectures as well as cloud, hybrid, and exotic architectures will all be discussed. It’s likely that all life science researchers will require advanced computing to perform their research within the next year. However, there has been less focus on advanced computing infrastructure across the industry due to the increased availability of public cloud infrastructure and anything-as-a-service models.
A talk I gave at the Dec 2013 Assembly Masterclass at UC Davis. Really licensed under CC0. UPDATED May 2014, for the presentation I gave at the combined SeRC Nordic Assembly Workshop in Stockholm, Sweden, May 14th 2014
Updated: New High Throughput Sequencing technologies at the Norwegian Sequenc... (Lex Nederbragt)
An update of the previous talk with the same title. A talk I gave at the Computational Life Science initiative (University of Oslo) about new High Throughput Sequencing instruments at the Norwegian Sequencing Centre. I also mentioned future upgrades, and the upcoming nanopore sequencing platform of Oxford Nanopore.
New High Throughput Sequencing technologies at the Norwegian Sequencing Centr... (Lex Nederbragt)
A talk I gave at the Microbiology Research Group (University of Oslo) about new High Throughput Sequencing instruments at the Norwegian Sequencing Centre. I also mentioned future upgrades, and the upcoming nanopore sequencing platform of Oxford Nanopore.
A talk I gave for my colleagues on how and why I use blogging and twitter for science, trying to convince them to start doing the same. DO check out the presenter notes! (see tab 'notes')
How to sequence a large eukaryotic genome - and how we sequenced the cod genome. A seminar I gave for the Computational Life Science (Univ. of Oslo) seminar series, September 28, 2011
Climate Impact of Software Testing at Nordic Testing Days (Kari Kakkonen)
My slides at Nordic Testing Days 6.6.2024
The climate impact / sustainability of software testing is discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability, and then measured continuously. Test environments can be used less, at a smaller scale and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdf (Paige Cruz)
Monitoring and observability aren’t traditionally found in software curricula, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company’s observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring & observability to ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share foundational concepts to build on.
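As a concrete starting point for those concepts, the snippet below shows the kind of instrumentation an application developer can add with the OpenTelemetry Python API (one common open-source option, not necessarily the stack discussed in the talk). The service and attribute names are hypothetical; without a configured exporter the calls are no-ops, so the instrumentation can be added now and wired to a backend later.

```python
# Minimal OpenTelemetry instrumentation sketch (requires the
# `opentelemetry-api` package; uses no-op providers unless configured).
from opentelemetry import trace, metrics

tracer = trace.get_tracer("checkout-service")          # hypothetical service name
meter = metrics.get_meter("checkout-service")
orders_counter = meter.create_counter("orders_processed")

def process_order(order_id: str) -> None:
    # A span captures the timing and context of one unit of work.
    with tracer.start_as_current_span("process_order") as span:
        span.set_attribute("order.id", order_id)
        # ... business logic would go here ...
        orders_counter.add(1, {"outcome": "success"})

process_order("A-1001")
```

Spans (traces) and counters (metrics), together with structured logs, are the three signal types most observability stacks build on, whichever vendor ultimately receives them.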
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0! (SOFTTECHHUB)
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Removing Uninteresting Bytes in Software Fuzzing (Aftab Hussain)
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behaviour in your code. We introduce DIAR, a technique designed to speedup fuzzing campaigns by pinpointing and eliminating those uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries -- Libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns -- and DIAR helps you find such seeds.
- These are slides from the talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW 2022).
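To make the idea of "uninteresting bytes" concrete, here is a deliberately naive sketch (not the actual DIAR algorithm, whose heuristics are described in the paper): it flags seed bytes whose mutations never change an assumed coverage fingerprint, so they could be dropped or deprioritized before a fuzzing campaign. The coverage function here is a stand-in for running an instrumented target.

```python
# Naive, hypothetical illustration of finding "uninteresting" seed bytes:
# bytes whose mutations never change the program's coverage fingerprint.
# `get_coverage` stands in for running the instrumented target on an input.
import random

def uninteresting_bytes(seed: bytes, get_coverage, trials: int = 16):
    baseline = get_coverage(seed)
    boring = []
    for i in range(len(seed)):
        changed = False
        for _ in range(trials):
            mutated = bytearray(seed)
            mutated[i] = random.randrange(256)
            if get_coverage(bytes(mutated)) != baseline:
                changed = True
                break
        if not changed:
            boring.append(i)
    return boring

# Toy target: only the first two bytes influence the execution path.
def toy_coverage(data: bytes):
    return (data[0] > 0x40, data[1] % 3)

seed = b"AB\x00\x00\x00\x00"
print(uninteresting_bytes(seed, toy_coverage))  # [2, 3, 4, 5] with high probability
```

A real implementation has to be far cheaper than this brute-force loop, which is exactly the gap DIAR's heuristics address, but the payoff is the same: mutation effort is spent only on bytes that can actually steer the program.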
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by Rik Marselis and me at the DASA Connect conference, 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. We also ran a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Communications Mining Series - Zero to Hero - Session 1 (DianaGray10)
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
How to Get CNIC Information System with Paksim Ga.pptx (danishmna97)
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
GraphRAG is All You need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
NGS: bioinformatic challenges
1. Next generation sequencing: research opportunities and bioinformatic challenges. Lex Nederbragt, Norwegian High-Throughput Sequencing Centre (NSC) and Centre for Ecological and Evolutionary Synthesis (CEES)
14. Challenges "The rule of thumb in the genomics community is that every dollar spent on sequencing hardware must be matched by a comparable investment in informatics." http://www.the-scientist.com/2011/3/1/60/1/
21. Challenges: New technologies coming soon. Next-Next-Generation Sequencing: NSC expects delivery of the Pacific Biosciences RS in December 2011. http://www.pacificbiosciences.com
35. Norwegian Sequencing Center User: "I have the hypothesis" "You gave me the data" "I need help to answer my question"
36. NSC user: Is the user comfortable with the data amounts, types, software? Should the user learn Unix? Should the user buy commercial programs? Are these good enough? Should the user go into cloud computing? fsteurope.com
37. NSC bioinformaticians: Should NSC test these programs? Should NSC/UiO provide computing power? Does NSC have a responsibility to provide infrastructure? Applications? Training? Analyses?
38. NSC bioinformaticians: If we do analyses, should it be as a collaboration, i.e. as co-authors? Should we charge per hour? Are Norwegian researchers used to collaborating in this way?
Yes, it is true that I poorly understand what I am doing, but I am a molecular biologist and I don't have a degree in bioinformatics, statistics, or any computer-related field. I don't want to describe my situation with my supervisor here; I now have two ways out of my situation - give up on my PhD, or do everything I can to finish. My supervisor doesn't want to collaborate with anyone; he says that "this is easy". Instead of getting to the end of my PhD I find myself fighting with my supervisor and seeking help on a forum. All I dream of is to finish and find a job within a group doing 'real' metagenomics. But without any publications I don't have a chance of obtaining that.