Green computing, or green IT, refers to environmentally sustainable computing or IT.
It is “the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems efficiently and effectively with minimal or no impact on the environment.”
Green computing is the environmentally responsible and eco-friendly use of computers and their resources.
-- Tech Talk given by Siddharth
Haku, a toy functional language based on literary Japanese (Wim Vanderbauwhede)
Haku is a natural language functional programming language based on literary Japanese. This talk discusses the motivation behind Haku and explains the language by example. You don't need to know Japanese or to have read the Haku documentation.
https://codeberg.org/wimvanderbauwhede/haku
On the need for low-carbon and sustainable computing and the path towards zero-carbon computing.
See https://wimvanderbauwhede.github.io/articles/frugal-computing/ for the complete article with references.
* The problem:
The current emissions from computing are about 2% of the world total but are projected to rise steeply over the next two decades. By 2040, emissions from computing alone will be close to 80% of the emissions level acceptable to keep global warming below the safe limit of 1.5°C. This growth in computing emissions is unsustainable: it would make it virtually impossible to stay within the warming limit.
The emissions from production of computing devices far exceed the emissions from operating them, so even if devices are more energy efficient producing more of them will make the emissions problem worse. Therefore we must extend the useful life of our computing devices.
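A back-of-the-envelope sketch of why extending device lifetime matters. The figures here (300 kgCO2e embodied, 50 kgCO2e per year of use) are assumptions for illustration, not numbers from the article:

```python
def annualised_emissions(embodied_kg, use_kg_per_year, lifetime_years):
    """Average emissions per year of service: the embodied (production)
    cost amortised over the device's lifetime, plus yearly use emissions."""
    return embodied_kg / lifetime_years + use_kg_per_year

# Assumed figures: 300 kgCO2e to produce a laptop, 50 kgCO2e/year to run it.
replaced_every_3_years = annualised_emissions(300, 50, 3)  # 150.0 kgCO2e/year
kept_for_6_years = annualised_emissions(300, 50, 6)        # 100.0 kgCO2e/year
```

Doubling the lifetime halves the amortised embodied emissions, and that term dominates whenever production emissions exceed operating emissions.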
* The solution:
As a society we need to start treating computational resources as finite and precious, to be utilised only when necessary, and as effectively as possible. We need frugal computing: achieving the same results for less energy.
* The vision:
Imagine we can extend the useful life of our devices and even increase their capabilities without any increase in energy consumption.
Meanwhile, we will develop the technologies for the next generation of devices, designed for energy efficiency as well as long life.
Every subsequent cycle will last longer, until finally the world will have computing resources that last forever and hardly use any energy.
NOTE: there is a small mistake in the presentation, the safe limit for 2040 is 13 GtCO2e, not 23. This makes it even more important to embrace frugal computing.
As Slideshare does not allow re-uploads, please find the corrected slides at https://wimvanderbauwhede.github.io/presentation/Zero-Carbon-Computing.pdf
Many people working in academia find it difficult to achieve or maintain a good work-life balance. This talk goes into the reasons for this, the consequences of working too much, the benefits of having the right balance, and ways of achieving a better balance. The talk is very much based on my personal views and experiences, but I hope there is some interest in sharing these.
In this talk I introduce Perl 6 and some of its exciting new features, especially gradual typing, roles, and functional programming features such as lazy lists.
This talk was given at the Scottish Programming Languages Seminar on 24th Feb 2016 at the School of Computing Science of Glasgow University.
FPGAs as Components in Heterogeneous HPC Systems (paraFPGA 2015 keynote) Wim Vanderbauwhede
Keynote I gave at the ParCo conference (http://www.parco2015.org) workshop paraFPGA in Edinburgh, Sept 2015, on the need to raise the abstraction level for programming of heterogeneous systems.
Perl and Haskell: Can the Twain Ever Meet? (tl;dr: yes) Wim Vanderbauwhede
This talk is about two Perl modules (Call::Haskell and Functional::Types) that I developed to call Haskell functions from Perl as transparently as possible.
In general, the only way to guarantee that Haskell function arguments receive correctly typed values is to ensure they are well-typed in Perl. So I ended up writing a Haskell-inspired type system for Perl. In this talk I first discuss the approach I took to call Haskell from Perl, then the reasons why a type system is needed, and finally the actual type system I developed. The type system is based on "prototypes": functions that create type descriptors, together with a small API of functions to create type constructors and manipulate the types. The system is type checked at run time and supports sum types, product types, function types and polymorphism. The approach is not Perl-specific and is suitable for other dynamic languages.
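As an illustration in another dynamic language, here is a minimal Python sketch of the prototype idea: functions that build type descriptors, with values checked against them at run time. The names `Product`, `Sum`, and `typecheck` are hypothetical stand-ins, not the actual Functional::Types API:

```python
# Hypothetical sketch of the "prototype" idea, not the Functional::Types API.

def Product(*field_types):
    """Descriptor for a product type, e.g. a pair (Int, Int)."""
    return ("product", field_types)

def Sum(*alternatives):
    """Descriptor for a sum type, e.g. Int | Str."""
    return ("sum", alternatives)

def typecheck(descriptor, value):
    """Run-time check of a value against a type descriptor."""
    kind, args = descriptor
    if kind == "product":
        return (isinstance(value, tuple)
                and len(value) == len(args)
                and all(isinstance(v, t) for v, t in zip(value, args)))
    if kind == "sum":
        return any(isinstance(value, t) for t in args)
    return False

Point = Product(int, int)
IntOrStr = Sum(int, str)

typecheck(Point, (3, 4))     # True
typecheck(IntOrStr, "hi")    # True
typecheck(Point, (3.0, 4))   # False: first field is not an Int
```

Descriptors built this way can be checked at the boundary before a value is handed to Haskell, which is the role the talk's type system plays.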
https://github.com/wimvanderbauwhede
These are the slides of the talk I gave at the Dyla'14 workshop (http://conferences.inf.ed.ac.uk/pldi2014/). It's about monads for languages like Perl, Ruby and LiveScript.
The source code is available at
https://github.com/wimvanderbauwhede/Perl-Parser-Combinators
https://github.com/wimvanderbauwhede/parser-combinators-ls
Don't be put off by the word monad or the maths. This is basically a very practical way of doing tasks such as parsing.
As Europe's leading economic powerhouse and the fourth-largest #economy globally, Germany stands at the forefront of innovation and industrial might. Renowned for its precision engineering and high-tech sectors, Germany's economic structure is heavily supported by a robust service industry, accounting for approximately 68% of its GDP. This economic clout and strategic geopolitical stance position Germany as a focal point in the global cyber threat landscape.
In the face of escalating global tensions, particularly those emanating from geopolitical disputes with nations like #Russia and #China, #Germany has witnessed a significant uptick in targeted cyber operations. Our analysis indicates a marked increase in #cyberattack sophistication aimed at critical infrastructure and key industrial sectors. These attacks range from ransomware campaigns to #AdvancedPersistentThreats (#APTs), threatening national security and business integrity.
🔑 Key findings include:
🔍 Increased frequency and complexity of cyber threats.
🔍 Escalation of state-sponsored and criminally motivated cyber operations.
🔍 Active dark web exchanges of malicious tools and tactics.
Our comprehensive report delves into these challenges, using a blend of open-source and proprietary data collection techniques. By monitoring activity on critical networks and analyzing attack patterns, our team provides a detailed overview of the threats facing German entities.
This report aims to equip stakeholders across public and private sectors with the knowledge to enhance their defensive strategies, reduce exposure to cyber risks, and reinforce Germany's resilience against cyber threats.
Techniques to optimize the PageRank algorithm usually fall into two categories: reducing the work per iteration, and reducing the number of iterations. These goals are often at odds with one another.
* Skipping computation on vertices that have already converged can save iteration time.
* Skipping in-identical vertices (vertices with the same in-links) avoids duplicate computations and can also reduce iteration time.
* Road networks often have chains that can be short-circuited before the PageRank computation, since the final ranks of chain nodes are easy to calculate. This can reduce both the iteration time and the number of iterations.
* If a graph has no dangling nodes, the PageRank of each strongly connected component can be computed in topological order. This can reduce the iteration time and the number of iterations, and also enables multi-iteration concurrency in the PageRank computation.
* For dynamic graphs, unchanged components whose ranks are unaffected can be skipped altogether.
The combination of all of the above methods is the STICD algorithm. [sticd]
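One of the techniques above, skipping computation on already-converged vertices, can be sketched in Python. This is an illustrative power-iteration PageRank, not the STICD algorithm, and it assumes the graph has no dangling vertices:

```python
def pagerank(out_links, d=0.85, tol=1e-12, max_iter=100):
    """Power-iteration PageRank that skips vertices whose rank has
    already converged. Assumes every vertex has at least one out-link."""
    n = len(out_links)
    rank = {v: 1.0 / n for v in out_links}
    # Reverse adjacency: which vertices link *to* each vertex.
    in_links = {v: [] for v in out_links}
    for u, targets in out_links.items():
        for v in targets:
            in_links[v].append(u)
    converged = set()
    for _ in range(max_iter):
        new_rank = dict(rank)
        changed = False
        for v in out_links:
            if v in converged:
                continue  # skip work on vertices that already converged
            r = (1 - d) / n + d * sum(rank[u] / len(out_links[u])
                                      for u in in_links[v])
            if abs(r - rank[v]) < tol:
                converged.add(v)
            else:
                changed = True
            new_rank[v] = r
        rank = new_rank
        if not changed:
            break
    return rank

ranks = pagerank({"a": ["b"], "b": ["c"], "c": ["a"]})
# In a 3-cycle every vertex ends up with rank 1/3.
```

Marking a vertex converged freezes its rank for later iterations; in graphs where ranks settle at different speeds, this trades a small approximation for less work per iteration.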
Opendatabay - Open Data Marketplace.pptx (Opendatabay)
Opendatabay.com unlocks the power of data for everyone. Open Data Marketplace fosters a collaborative hub for data enthusiasts to explore, share, and contribute to a vast collection of datasets.
The first open hub for data enthusiasts to collaborate and innovate: a platform to explore, share, and contribute to a vast collection of datasets. Through robust quality control and innovative technologies like blockchain verification, Opendatabay ensures the authenticity and reliability of datasets, empowering users to make data-driven decisions with confidence. It leverages cutting-edge AI technologies to enhance the data exploration, analysis, and discovery experience.
From intelligent search and recommendations to automated data productisation and quotation, Opendatabay's AI-driven features streamline the data workflow. Finding the data you need shouldn't be complex. Opendatabay simplifies the data acquisition process with an intuitive interface and robust search tools. Effortlessly explore, discover, and access the data you need, allowing you to focus on extracting valuable insights. Opendatabay also breaks new ground with dedicated, AI-generated synthetic datasets.
Leverage these privacy-preserving datasets for training and testing AI models without compromising sensitive information. Opendatabay prioritizes transparency by providing detailed metadata, provenance information, and usage guidelines for each dataset, ensuring users have a comprehensive understanding of the data they're working with. By leveraging a powerful combination of distributed ledger technology and rigorous third-party audits, Opendatabay ensures the authenticity and reliability of every dataset. Security is at the core of Opendatabay: the marketplace implements stringent security measures, including encryption, access controls, and regular vulnerability assessments, to safeguard your data and protect your privacy.
2. 1 cup of tea = 10 g of CO2 (boiling 250 ml of water)
3. “Performing two Google searches from a desktop computer can generate about the same amount of carbon dioxide as boiling a kettle for a cup of tea, according to new research.” (The Sunday Times)
14. To do so, Google deploys an estimated 500,000 servers in about 50 data centres worldwide (these are rough estimates; Google isn't telling).
15. Note the cooling plants ...
Google's data centres in The Dalles, Oregon
16. Each data centre consumes about 50 MW, and cooling requires most of the power. Infrastructure cost is about $22,000/kW, so reducing power consumption can result in huge cost savings.
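Taken together, the figures on this slide imply a rough capital cost per data centre; this is a back-of-the-envelope calculation, not a number stated on the slides:

```python
power_mw = 50          # consumption per data centre (from the slide)
cost_per_kw = 22_000   # infrastructure cost in $/kW (from the slide)

# 50 MW = 50,000 kW, so the implied infrastructure cost is:
capex = power_mw * 1_000 * cost_per_kw  # $1.1 billion per data centre
```

At that scale, even a few percent less power draw translates into tens of millions of dollars of avoided infrastructure cost.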
23. Effect on data centre cost
[Chart: total data centre cost in $ (up to $60,000,000) over years 0-16, for 1000 servers with cost accumulated over 15 years; series: Total Cost without FPGA, Total Cost with FPGA, FPGA Yearly License, License per FPGA]