How cloud computing can accelerate your research. Presentation given at Moscow State University on 19th May 2015.
Apply for Azure for Research Awards at http://research.microsoft.com/en-US/projects/azure/awards.aspx
This talk describes our experiences from hosting scientific research applications in the Microsoft Cloud. It covers an overview of Microsoft Azure capabilities, examples of big data analysis for science, data collections, science gateways, and science virtual machine libraries.
This talk explains how cloud computing can speed up your research and provide capabilities that are otherwise out of reach. Big data, data science, machine learning, and high-performance computing are all available on-demand using Microsoft Azure.
Researchers around the world can apply for free cloud computing time for their projects at www.Azure4Research.com
Research results in peer-reviewed publications are reproducible, right? If only it were so clear cut. With high-profile paper retractions and pushes for better data sharing by funders, publishers, and the community, the spotlight is now on how research is conducted around the world.
This talk from the Software Sustainability Institute's Collaborations Workshop 2014 describes how cloud computing, with Microsoft Azure, is helping researchers realize the goals of scientific reproducibility.
Find out more at www.azure4research.com
A presentation at the NIH Workshop on Advanced Networking for Data-Intensive Biomedical Research. The talk covers our work with the science community on using cloud computing to enhance and improve basic research for data analysis and scientific discovery.
These are the slides from a plenary panel that I participated in at IEEE Cloud 2011 on July 5, 2011 in Washington, D.C. I discussed the Open Science Data Cloud and concluded the talk with three research questions.
Scaling collaborative data science with Globus and Jupyter (Ian Foster)
The Globus service simplifies the utilization of large and distributed data on the Jupyter platform. Ian Foster explains how to use Globus and Jupyter to seamlessly access notebooks using existing institutional credentials, connect notebooks with data residing on disparate storage systems, and make data securely available to business partners and research collaborators.
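As a rough illustration of the pattern described here, the sketch below submits a Globus transfer from Python, as one might from a notebook cell, using the globus-sdk package. It is a minimal sketch only: the access token, endpoint UUIDs, and paths are hypothetical placeholders, and a real notebook would obtain credentials through a full Globus Auth login flow.

# Minimal sketch: submit a Globus transfer from a notebook cell.
# Assumptions: `pip install globus-sdk`; TRANSFER_TOKEN came from a Globus
# Auth login flow; endpoint UUIDs and paths are hypothetical placeholders.
import globus_sdk

TRANSFER_TOKEN = "..."  # placeholder access token
SRC = "aaaaaaaa-0000-0000-0000-000000000000"  # hypothetical source endpoint
DST = "bbbbbbbb-0000-0000-0000-000000000000"  # hypothetical destination

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN)
)

# Describe the transfer: one directory, copied recursively.
tdata = globus_sdk.TransferData(tc, SRC, DST, label="notebook demo")
tdata.add_item("/projects/climate/run42/", "/ingest/run42/", recursive=True)

task = tc.submit_transfer(tdata)
print("Submitted transfer, task id:", task["task_id"])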
Data Tribology: Overcoming Data Friction with Cloud Automation (Ian Foster)
A talk at the CODATA/RDA meeting in Gaborone, Botswana. I made the case that the biggest barriers to effective data sharing and reuse are often those associated with "data friction" and that cloud automation can be used to overcome those barriers.
The image on the first slide shows a few of the more than 20,000 active Globus endpoints.
Big Data, Big Computing, AI, and Environmental Science (Ian Foster)
I presented to the Environmental Data Science group at UChicago, with the goal of getting them excited about the opportunities inherent in big data, big computing, and AI--and to think about how to collaborate with Argonne in those areas. We had a great and long conversation about Takuya Kurihana's work on unsupervised learning for cloud classification. I also mentioned our work making NASA and CMIP data accessible on AI supercomputers.
Presentation at the MOC Workshop, at Boston University.
Cloud Dataverse will be a new service for accessing and processing public data sets in the Massachusetts Open Cloud (MOC). It is based on Dataverse, a popular software framework for sharing, archiving, and analyzing research data. Cloud Dataverse extends Dataverse to replicate datasets from institutional repositories to a cloud-based repository and store their data files in Swift, making data processing faster for in-situ applications running in the cloud.
Cloud Dataverse is a collaborative effort between two open source projects: Massachusetts Open Cloud (MOC) and Dataverse. The Dataverse software is being developed at Harvard's Institute for Quantitative Social Science (IQSS), with contributors worldwide providing 21 Dataverse installations. The Harvard Dataverse installation alone hosts more than 60,000 datasets from 300 institutions by 15,000 data authors. The MOC is a collaboration between higher education (BU, NEU, Harvard, MIT and UMass), government, and industry. Its mission is to create a self-sustaining at-scale public cloud based on the Open Cloud eXchange model.
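To make the Swift storage detail concrete, here is a minimal sketch (not the actual Cloud Dataverse code) of how an application running in the cloud might read a replicated data file straight from OpenStack Swift with python-swiftclient; the auth URL, credentials, and container and object names are hypothetical placeholders.

# Minimal sketch: read one dataset file from OpenStack Swift in-cloud.
# Assumptions: `pip install python-swiftclient`; all names and credentials
# below are hypothetical placeholders.
from swiftclient import Connection

conn = Connection(
    authurl="https://swift.example.org/auth/v1.0",  # hypothetical auth URL
    user="account:researcher",
    key="secret",
)

# Fetch a data file from a dataset's container and save it locally.
headers, body = conn.get_object("dataverse-ds-001", "survey_2016.csv")
with open("survey_2016.csv", "wb") as f:
    f.write(body)
print("Downloaded", headers.get("content-length"), "bytes")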
In this video from the DDN User Group at SC17, Jake Carroll from The Queensland Brain Institute, The University of Queensland, Australia presents: MeDiCI - How to Withstand a Research Data Tsunami.
The Metropolitan Data Caching Infrastructure (MeDiCI) project is a data storage fabric developed and trialled at UQ that delivers data to researchers where needed at any time. The “magic” of MeDiCI is that it offers the illusion of a single virtual data centre next door, even when it is actually distributed over potentially very wide areas with varying network connectivity. MeDiCI enables a range of techniques for national and international data sharing. Recent tests between UQ, the University of California, San Diego and the National Institute of Advanced Industrial Science and Technology in Japan indicated that MeDiCI can even support seamless access at very high data rates globally.
Watch the video: https://wp.me/p3RLHQ-hNG
Learn more: http://ddn.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
In this deck from the DDN User Group at SC17, Ron Hawkins from the San Diego Supercomputer Center presents: BioBurst — Leveraging Burst Buffer Technology for Campus Research Computing.
Under an NSF award, SDSC will implement a separately scheduled partition of TSCC with technology designed to address key areas of bioinformatics computing including genomics, transcriptomics, and immune receptor repertoire analysis. Called ‘BioBurst’, the system will incorporate the following major components:
* An input/output (I/O) accelerator appliance with 40 terabytes of non-volatile memory and software designed to improve network throughput by alleviating the small-block/small-file I/O problem characteristic of many bioinformatics codes
* A field programmable gate array (FPGA)-based computational accelerator system that has been demonstrated to perform de-multiplexing, read mapping, and variant calling of complete human genomes in about 22 minutes
* About 700 commodity computing cores, which will access the I/O accelerator and provide a separately scheduled resource for running bioinformatics applications
* Integration with a large-scale, Lustre parallel file system which supports streaming I/O and has the capacity to stage large amounts of data associated with many bioinformatics studies
* Customization of the job scheduler to accommodate bioinformatics workflows, which can consist of hundreds to thousands of jobs submitted by a single user at one time (a sketch of such a submission follows this list).
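To illustrate the scheduler point above, the sketch below shows one common way a Slurm-style scheduler handles such workflows: a single job-array submission that fans out into hundreds of per-sample tasks. This is an illustrative sketch, not SDSC's actual configuration; the align_sample.sh script and the limits are hypothetical.

# Minimal sketch: submit a 500-task Slurm job array for per-sample analysis.
# Assumptions: a Slurm scheduler; align_sample.sh is a hypothetical script
# that processes one FASTQ file per array task.
import subprocess

batch_script = """#!/bin/bash
#SBATCH --job-name=bio-align
#SBATCH --array=1-500%50     # 500 tasks, at most 50 running at once
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00

# Each array task selects its own sample via SLURM_ARRAY_TASK_ID.
./align_sample.sh "sample_${SLURM_ARRAY_TASK_ID}.fastq"
"""

# sbatch accepts the batch script on stdin and prints the new job's ID.
result = subprocess.run(["sbatch"], input=batch_script, text=True,
                        capture_output=True, check=True)
print(result.stdout.strip())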
Watch the video: https://wp.me/p3RLHQ-hLn
Learn more: http://ddn.com
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Accelerating data-intensive science by outsourcing the mundane (Ian Foster)
Talk at eResearch New Zealand Conference, June 2011 (given remotely from Italy, unfortunately!)
Abstract: Whitehead observed that "civilization advances by extending the number of important operations which we can perform without thinking of them." I propose that cloud computing can allow us to accelerate dramatically the pace of discovery by removing a range of mundane but time-consuming research data management tasks from our consciousness. I describe the Globus Online system that we are developing to explore these possibilities, and propose milestones for evaluating progress towards smarter science.
Big Data R&D Strategy - Ensure the long term sustainability, access, and deve... (Sky Bristol)
Presentation on one of the strategic themes being considered for a U.S. Government Big Data R&D strategy - https://www.nitrd.gov/bigdata/rfi/02102014.aspx.
A Campus-Scale High Performance Cyberinfrastructure is Required for Data-Int... (Larry Smarr)
11.12.12
Seminar Presentation
Princeton Institute for Computational Science and Engineering (PICSciE)
Princeton University
Title: A Campus-Scale High Performance Cyberinfrastructure is Required for Data-Intensive Research
Princeton, NJ
Keynote talk at the International Conference on Supercomputing 2009, at IBM Yorktown in New York. This is a major update of a talk first given in New Zealand last January. The abstract follows.
The past decade has seen increasingly ambitious and successful methods for outsourcing computing. Approaches such as utility computing, on-demand computing, grid computing, software as a service, and cloud computing all seek to free computer applications from the limiting confines of a single computer. Software that thus runs "outside the box" can be more powerful (think Google, TeraGrid), dynamic (think Animoto, caBIG), and collaborative (think Facebook, myExperiment). It can also be cheaper, due to economies of scale in hardware and software. The combination of new functionality and new economics inspires new applications, reduces barriers to entry for application providers, and in general disrupts the computing ecosystem. I discuss the new applications that outside-the-box computing enables, in both business and science, and the hardware and software architectures that make these new applications possible.
Applying Photonics to User Needs: The Application Challenge (Larry Smarr)
05.02.28
Invited Talk to the 4th Annual On*VECTOR International Photonics Workshop
Sponsored by NTT Network Innovation Laboratories
Title: Applying Photonics to User Needs: The Application Challenge
University of California, San Diego
Charith Perera, Arkady Zaslavsky, Peter Christen, Michael Compton, and Dimitrios Georgakopoulos, Context-aware Sensor Search, Selection and Ranking Model for Internet of Things Middleware, Proceedings of the IEEE 14th International Conference on Mobile Data Management (MDM), Milan, Italy, June, 2013
The Importance of Large-Scale Computer Science Research Efforts (Larry Smarr)
05.10.20
Talk at Public Seminar on Large-Scale NSF Research Efforts for the Future, Computer History Museum
Title: The Importance of Large-Scale Computer Science Research Efforts
Mountain View, CA
Cyberinfrastructure to Support Ocean Observatories (Larry Smarr)
05.03.18
Invited Talk to the Ocean Studies Board
National Research Council
Title: Cyberinfrastructure to Support Ocean Observatories
University of California San Diego
Building a Global Collaboration System for Data-Intensive Discovery (Larry Smarr)
11.01.06
Distinguished Lecture
Hawaii International Conference on System Sciences (HICSS-44)
Title: Building a Global Collaboration System for Data-Intensive Discovery
Kauai, HI
Machine learning using Microsoft Azure at the Microsoft Research - Moscow State University Joint Research Centre. Specifically, this describes the Adaptive Skip-gram (AdaGram) model, a nonparametric extension of the famous Skip-gram model implemented in the word2vec software, which is able to learn multiple representations per word, capturing different word meanings. This project implements AdaGram in the Julia language.
The team uses Microsoft Azure to run at scale, allowing analysis of the Wikipedia corpus on large virtual machines.
MSU Bayes Group homepage - http://bayesgroup.ru/
Github project - https://github.com/sbos/AdaGram.jl
Research paper preprint - http://arxiv.org/abs/1502.07257
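For orientation, the sketch below trains the plain Skip-gram baseline that AdaGram generalizes, using the Python gensim library rather than the AdaGram.jl code itself. The toy corpus is invented; note that, unlike AdaGram, this baseline learns only a single vector per word, which is exactly the limitation AdaGram removes.

# Baseline for comparison: plain Skip-gram (word2vec) via gensim.
# This is NOT AdaGram: it learns one vector per word, while AdaGram learns
# multiple prototypes per word. The toy corpus is invented.
from gensim.models import Word2Vec

sentences = [
    ["apple", "released", "a", "new", "phone"],
    ["she", "ate", "an", "apple", "and", "a", "pear"],
    ["the", "phone", "runs", "new", "software"],
]

model = Word2Vec(sentences=sentences, sg=1,  # sg=1 selects Skip-gram
                 vector_size=50, window=3, min_count=1, epochs=50)

# One embedding per word: both senses of "apple" share this single vector.
print(model.wv["apple"][:5])
print(model.wv.most_similar("apple", topn=3))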
In this talk we describe how the Fourth Paradigm for Data-Intensive Research is providing a framework for us to develop tools, technologies and platforms to support actionable science. We discuss applications that take advantage of cloud computing, particularly Microsoft Azure, to realise the potential for turning data into decisions, knowledge and understanding. http://www.fourthparadigm.org and http://www.azure4research.com
Scientific instruments, environmental sensors, and large-scale simulations are creating more scientific data than ever before. By using advanced, large-scale information processing facilities, scientists are now able to analyze massive volumes of data in ways that never would have been possible just a few years ago. While a few researchers have access to these large computer systems, most are limited by the processing capacity they can access conveniently and quickly. Cloud computing solutions utilizing Microsoft Azure allow environmental science researchers to access the compute and storage resources that they need, when they need them—without the up-front financial investment required—and helps reduce the time between progress and breakthroughs. Microsoft Azure brings on-demand computing and data access to environmental scientists and researchers everywhere.
Keynote presentation describing current efforts in Microsoft Research including scaling applications to the cloud, programming devices (.NET Gadgeteer) and how Microsoft is innovating through research.
A tale of scale & speed: How the US Navy is enabling software delivery from l... (sonjaschweigert1)
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATOs (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Communications Mining Series - Zero to Hero - Session 1 (DianaGray10)
This session provides an introduction to UiPath Communications Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communications Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
UiPath Test Automation using UiPath Test Suite series, part 4 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 5 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 5. In this session, we will cover CI/CD with DevOps.
Topics covered:
CI/CD within UiPath
End-to-end overview of a CI/CD pipeline with Azure DevOps
Speaker:
Lyndsey Byblow, Test Suite Sales Engineer @ UiPath, Inc.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for technology and making things work, along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Essentials of Automations: The Art of Triggers and Actions in FME (Safe Software)
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community, and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been easier to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
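As one concrete example of the kind of step such a guide typically includes, the sketch below applies a default-deny ingress NetworkPolicy with the official Kubernetes Python client. It is illustrative only, not the talk's actual checklist; the namespace name is a hypothetical placeholder.

# Minimal sketch: default-deny ingress for every pod in one namespace.
# Assumptions: `pip install kubernetes`, a working kubeconfig, and a
# hypothetical namespace "my-app".
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="default-deny-ingress"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(),  # empty selector = all pods
        policy_types=["Ingress"],               # no ingress rules = deny all
    ),
)
client.NetworkingV1Api().create_namespaced_network_policy("my-app", policy)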
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 (Albert Hoitingh)
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Transcript: Selling digital books in 2024: Insights from industry leaders - T... (BookNet Canada)
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Pushing the limits of ePRTC: 100ns holdover for 100 days (Adtran)
At WSTS 2024, Alon Stern explored the topic of parametric holdover and explained how recent research findings can be implemented in real-world PNT networks to achieve 100 nanoseconds of accuracy for up to 100 days.
Elevating Tactical DDD Patterns Through Object Calisthenics (Dorra BARTAGUIZ)
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
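As a small taste of how one Object Calisthenics constraint ("wrap all primitives") reinforces a tactical DDD pattern, here is a sketch of a value object in an invented domain; it is an example in the spirit of the talk, not the speaker's own material.

# Minimal sketch: the "wrap all primitives" rule yields a DDD value object.
# The Money domain is invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)  # value objects are immutable, compared by value
class Money:
    amount_cents: int
    currency: str

    def __post_init__(self) -> None:
        if self.amount_cents < 0:
            raise ValueError("Money cannot be negative")

    def add(self, other: "Money") -> "Money":
        if other.currency != self.currency:
            raise ValueError("Cannot add different currencies")
        return Money(self.amount_cents + other.amount_cents, self.currency)

# The domain now speaks in Money values instead of bare ints and strings:
total = Money(1999, "EUR").add(Money(500, "EUR"))
print(total)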
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features trade security for convenience and capability. This best practices guide outlines steps users can take to better protect personal devices and information.
7. The evolution of science: a thousand years ago, the description of natural phenomena; over the last few hundred years, theory (Newton's laws, Maxwell's equations…); over the last few decades, the simulation of complex phenomena; today and in the future, unifying theory, experiment, and simulation with large multidisciplinary data, using data exploration and data mining on data from instruments, sensors, humans and more, across distributed communities.
17. The IPython notebook deployed on a VM:
1. Create a CoreOS VM and open the HTTPS port 443.
2. Log in and issue this one command (-d runs the container in the background, -p 443:8888 maps the VM's HTTPS port to the notebook server's port 8888, and -e sets the notebook login password):
$ docker run -d -p 443:8888 -e "PASSWORD=****" ipython/scipyserver
3. Go to https://yourVMaddress and log in.
29. • Many examples
• Challenges: I only talked about the analysis… but there is more
• Sustainability
• Sharing
• Reproducible science
The research data lifecycle: data acquisition & modelling; collaboration and visualisation; analysis & data mining; dissemination & sharing; archiving and preserving.