Cern Uses Cloud for Next Challenge
Storetec Services Limited – @StoretecHull – www.storetec.net
Cern, the vast laboratory straddling the France–Switzerland border near Geneva, much of it deep underground, has taken on some huge challenges – but few could be larger than storing all the data it has picked up.

The centre has become famous for its Large Hadron Collider (LHC) experiments, which accelerate beams of particles around a 27-kilometre circular tunnel deep underground and smash them together, in an attempt to break new ground in human knowledge about the universe and the physics behind it.
All this has gained plenty of popular publicity, from the legal attempts to stop the LHC being switched on in 2008 amid claims it could cause a 'mini big bang' that would destroy the Earth, to the recent discovery of the elusive Higgs boson. However, the work behind the headlines has involved not only the use of enormous energies, but also the collating and study of vast quantities of data. Researchers based at Cern – from the professors at the top to the PhD students helping to quantify and analyse the data – are often engaged in a far less glamorous job, away from the press releases and talk of Nobel prizes. The discoveries have come not from instant results flashing up on a screen, but from a comprehensive and careful process of sifting through the immense pile of data created.
This has created another issue. While Cern might have discovered the particle that gives matter its mass, there is no doubt that huge amounts of offsite backup, document storage and cloud computing capacity are essential to hold the results.

Speaking at the Structure Europe conference, Cern's head of infrastructure Tim Bell noted that when the LHC is operational, the particle collisions are captured by detectors that behave like 100-megapixel cameras, each taking 40 million pictures per second, meaning that around one petabyte of data is created every second, Cloudpro reports.
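Those headline figures are easy to sanity-check with back-of-envelope arithmetic. The short Python sketch below multiplies the quoted resolution by the quoted frame rate; the one-byte-per-pixel figure is an assumption rather than anything stated in the article, so treat the result as an order-of-magnitude check only.

```python
# Back-of-envelope check of the data rate quoted by Tim Bell.
# Assumption (not from the article): each pixel contributes roughly
# one byte of raw readout before any filtering.

pixels_per_frame = 100e6    # "100 megapixel cameras"
frames_per_second = 40e6    # "40 million pictures per second"
bytes_per_pixel = 1         # assumed

raw_rate = pixels_per_frame * frames_per_second * bytes_per_pixel
print(f"raw readout: ~{raw_rate / 1e15:.0f} PB/s")  # ~4 PB/s

# Hardware triggers discard the vast majority of this before anything
# is written out, so the roughly one petabyte per second the article
# quotes is the surviving order of magnitude, not the raw product.
```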
Bell added: "Our challenge is that 35 petabytes of that data needs to be recorded every year, and the Large Hadron Collider will double that. And the scientists want to keep that data for 20 years."

This almost unimaginably large quantity of data poses a major challenge, but it is one Cern believes can be solved by using the public cloud.
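Multiplying out Bell's figures shows the scale involved. A minimal sketch, assuming the doubled rate applies across the whole retention window (the quote does not say when the increase takes effect):

```python
# Archive size implied by the figures in Bell's quote.
pb_per_year = 35                        # "35 petabytes ... every year"
pb_per_year_doubled = 2 * pb_per_year   # "the LHC will double that"
retention_years = 20                    # "keep that data for 20 years"

total_pb = pb_per_year_doubled * retention_years
print(f"~{total_pb} PB retained, i.e. ~{total_pb / 1000:.1f} EB")  # ~1,400 PB
```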
At present the LHC is offline so that maintenance can be carried out before it starts up again in 2015. By that time, some of Cern's computing will have been moved to an offsite data centre at the Wigner Research Centre near Budapest. This will boost Cern's 3 MW of computing capacity by an extra 2.5 MW, adding 500 servers, 20,000 cores and 5.5 petabytes of data storage that Cern can make use of.

Using this extra capacity alongside its existing computers, Cern will be conducting 90 per cent of its computing via the cloud when the LHC is switched on again.
In doing so, it will be seeking to learn more about the properties of the Higgs boson, whose existence the Edinburgh-based professor Peter Higgs first postulated in 1964. Back then, of course, there was no way of carrying out the experiments needed to test the theory, nor of storing the immense amounts of data they would generate for analysis.

Cern itself was created back in 1954, and has since grown from 12 member nations to 20, with that alone extending the scale of its data storage needs even before its various particle experiments are taken into account.
Of the boson discovery, Cern states: "This particle is consistent with the Higgs boson but it will take further work to determine whether or not it is the Higgs boson predicted by the Standard Model. The Higgs boson, as proposed within the Standard Model, is the simplest manifestation of the Brout-Englert-Higgs mechanism. Other types of Higgs bosons are predicted by other theories that go beyond the Standard Model."

Experiments aimed at confirming or disproving these notions will take time, energy, money and, of course, the lengthy analysis of vast quantities of data. By using the cloud, however, this cutting-edge scientific venture can at least be sure of obtaining the storage space it needs.