The Internet is the linchpin of modern society, around which the various threads of modern life weave. But being part of the bigger energy-guzzling industrial economy, it is vulnerable to disruption. It is widely believed that our society is exhausting its vital resources to meet our energy requirements, and that the cheap fossil fuel fiesta will soon abate as we cross the tipping point of global oil production. We will then enter the long arc of scarcity, constraints, and limits: a post-peak "long emergency" that may persist for a long time. To avoid the collapse of the networking ecosystem in this long emergency, it is imperative that we start thinking about how networking should adapt to these adverse "undeveloping" societal conditions. We propose using the idea of "approximate networking", which would provide good-enough networking services by employing contextually appropriate tradeoffs, to survive, or even thrive, under conditions of scarcity and limits.
See the video at: https://www.youtube.com/watch?v=4hKvgIi-HZY
zenoh -- the ZEro Network OverHead protocol | Angelo Corsaro
This presentation introduces the key ideas behind zenoh -- an Internet-scale data-centric protocol that unifies data sharing between any kind of device, including those constrained in node resources (such as computational capacity and power) as well as in network capacity.
Engineering Micro-intelligence at the Edge of CPCS: Design Guidelines | Roberta Calegari
The Intelligent Edge computing paradigm is playing a major role in the design and development of Cyber-Physical and Cloud Systems (CPCS), extending the Cloud and overcoming its limitations so as to better address the issues related to the physical dimension of data, and therefore of data-aware intelligence (such as context-awareness and real-time responses). Despite the proliferation of research works in this area, a well-founded software engineering approach specifically addressing the distribution of intelligence sources between the Edge and the Cloud is still missing. In this paper we propose some general criteria along with a coherent set of guidelines to follow in the design of distributed intelligence within CPCS, suitably exploiting Edge and Cloud paradigms to effectively enable data intelligence, accounting for both symbolic and sub-symbolic approaches to reasoning. Then, we exploit the notion of micro-intelligence as situated intelligence for Edge computing, promoting the idea of an intelligent environment embodying rational processes meant to complement the cognitive processes of individuals, in order to reduce their cognitive workload and augment their cognitive capabilities. To demonstrate the general applicability of our guidelines, we propose Situated Logic Programming (SLP) as the conceptual framework for delivering micro-intelligence in CPCS, and Logic Programming as a Service (LPaaS) as its reference architecture and technological embodiment.
Report: Fog Based Emergency System For Smart Enhanced Living Environment | KEERTHANA M
Report: An ambient assisted-living emergency system exploits cloud and fog computing, an outdoor positioning mechanism, and emergency and communication protocols to locate activity-challenged individuals.
In the last decade, the growth of internet technology has led to a significant increase in security and privacy concerns for customers, raising the question of how to make computer networks secure. Computer technologies have brought many good things by means of the internet: e-commerce, easy access to vast stores of reference material, collaborative computing, e-mail, and new avenues for advertising and information distribution, to name a few. As with most technological advances, there is also another side: criminal hackers. People around the world are eager to be a part of this revolution, but they are afraid that some hacker will break into their web server and replace their logo with pornography, read their e-mail, steal their credit card number from an on-line shopping site, or implant software that will secretly transmit their organization's secrets to the open internet. With these concerns and others, the ethical hacker can help. This paper describes ethical hackers: their skills, their attitudes, and how they go about helping their customers find and plug up security holes. "Hacking" is a word that startles everyone whenever it is said or heard. Many people aspire to become hackers, but it is not a job for the untrained: a hacker needs an accomplished mind. There are many skills a person should learn to become an ethical hacker, whose work is also called penetration testing. These include knowledge of HTML, JavaScript, computer hardware, and cracking and breaking techniques. In this paper we explain hacking capabilities, how hacking operations take place at the keyboard, and the mindset required to decipher them.
Security and Privacy Issues of Fog Computing: A Survey | HarshitParkar6677
Abstract. Fog computing is a promising computing paradigm that extends cloud computing to the edge of networks. Similar to cloud computing but with distinct characteristics, fog computing faces new security and privacy challenges besides those inherited from cloud computing. In this paper, we briefly survey these challenges and corresponding solutions.
Fog Computing is a paradigm that complements and extends cloud computing by providing an end-to-end virtualisation of computing, storage and communication resources. As such, fog computing allows applications to be transparently provisioned and managed end-to-end. This presentation first motivates the need for fog computing, then introduces fog05, the first and only open-source fog computing platform!
Ericsson Review: Capillary networks – a smart way to get things connected | Ericsson
A capillary network is a local network that uses short-range radio-access technologies to provide local connectivity to things and devices. By leveraging the key capabilities of cellular networks – ubiquity, integrated security, network management and advanced backhaul connectivity – capillary networks will become a key enabler of the Networked Society.
Presentation "Why We Have to Embrace Complexity for Reliability, Availability and Serviceability of Future Networks and 5G" (Bologna, 3rd July 2017, IEEE ETR round table).
A brief presentation on cloud computing explaining how IaaS, PaaS, and SaaS work and the different kinds of clouds. It also introduces the new trend: the Internet of Things.
Small, Dumb, Cheap, and Copious – the Future of the Internet of Things
Abstract
Over the next decade, billions of interconnected devices will be monitoring and responding to transportation systems, factories, farms, forests, utilities, soil and weather conditions, oceans, and other resources.
The unique characteristic that the majority of these otherwise incredibly diverse Internet of Things (IOT) devices will share is that they will be too small, too dumb, too cheap, and too copious to use traditional networking protocols such as IPv6.
For the same reasons, this tidal wave of IOT devices cannot be controlled by existing operational techniques and tools. Instead, lessons from Nature’s massive scale will guide a new architecture for the IOT.
Taking cues from Nature, and in collaboration with our OEM licensees, MeshDynamics is extending concepts outlined in the book “Rethinking the Internet of Things” to real-world problems of supporting “smart, secure and scalable” IOT Machine-to-Machine (M2M) communities at the edge.
Simple devices, speaking simply
Today companies view the IOT as an extension of current networking protocols and practices. But those on the front lines of the Industrial Internet of Things are seeing problems already:
“While much of the ink spilled today is about evolutionary improvements using modern IT technologies to address traditional operational technology concerns, the real business impact will be to expand our horizon of addressable concerns. Traditional operational technology has focused on process correctness and safety; traditional IT has focused on time to market and, as a recent concern, security. Both disciplines have developed in a world of relative scarcity, with perhaps hundreds of devices interconnected to perform specific tasks. The future, however, points toward billions of devices and tasks that change by the millisecond under autonomous control, and are so distributed they cannot be tracked by any individual. Our existing processes for ensuring safety, security and management break down when faced with such scale. Stimulating the redevelopment of our technologies for this new world is a focal point for the Industrial Internet Consortium.”
Network performance - skilled craft to hard science | Martin Geddes
This document describes the technical and business journey for network operators wanting to turn network performance from a skilled craft into hard science.
Finding your Way in the Fog: Towards a Comprehensive Definition of Fog Computing | HarshitParkar6677
The cloud is migrating to the edge of the network, where routers themselves may become the virtualisation infrastructure, in an evolution labelled as “the fog”. However, many other complementary technologies are reaching a high level of maturity. Their interplay may dramatically shift the information and communication technology landscape in the following years, bringing separate technologies into a common ground. This paper offers a comprehensive definition of the fog, comprehending technologies as diverse as cloud, sensor networks, peer-to-peer networks, network virtualisation functions or configuration management techniques. We highlight the main challenges faced by this potentially breakthrough technology amalgamation.
Actor Critic Approach Based Anomaly Detection for Edge Computing Environments | IJCNCJournal
The pivotal role of data security in mobile edge-computing environments forms the foundation of the proposed work. Anomalies and outliers in sensory data caused by network attacks are a prominent real-time concern. Sensor samples are collected from a set of sensors at a particular time instant as long as the confidence level of the decision remains on par with the desired value. A “true” on the hypothesis test means that the sensor has shown signs of anomaly or abnormality, and sampling from that sensor must immediately cease. The proposed deep-learning Actor-Critic-based reinforcement algorithm detects anomalies in the form of binary indicators and hence decides when to withdraw from receiving further samples from specific sensors. The posterior trust value influences the width of the confidence interval and hence the probability of anomaly detection. The paper uses a single-tailed normal function to determine the range of the posterior trust metric. The decisions of the prediction model detect anomalies with a good anomaly-detection accuracy.
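The paper's Actor-Critic network itself is not reproduced here, but the sampling-withdrawal rule the abstract describes (a single-tailed test of each sample against a normal model of posterior trust) can be sketched as below. The function names, the 0.95 confidence threshold, and the use of the normal CDF are illustrative assumptions, not details taken from the paper.

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """CDF of N(mu, sigma^2), computed via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def is_anomalous(sample, trust_mu, trust_sigma, confidence=0.95):
    """Single-tailed test: flag the sensor as anomalous when the
    sample lies beyond the upper tail implied by the posterior
    trust estimate (trust_mu, trust_sigma)."""
    return normal_cdf(sample, trust_mu, trust_sigma) > confidence

def collect(sensor_stream, trust_mu, trust_sigma, confidence=0.95):
    """Retrieve samples from a sensor until the hypothesis test
    fires, then immediately cease sampling that sensor."""
    accepted = []
    for sample in sensor_stream:
        if is_anomalous(sample, trust_mu, trust_sigma, confidence):
            break  # withdraw from this sensor
        accepted.append(sample)
    return accepted
```

In this sketch the closed-form test stands in for the binary indicator that the learned Actor-Critic agent would produce; the withdrawal decision itself is the same.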
I am Tapas Kumar Palei, studying B.Tech CSE at Ajay Binay Institute Of Technology. Grid computing is my seminar presentation topic, and I have tried to gather everything about grid computing in this presentation.
The Abstracted Network for Industrial Internet | MeshDynamics
Widespread adoption of TCP/IP protocols over the last two decades appears on the surface to have created a lingua franca for computer networking. And with the emergence of IPv6 removing the addressing restrictions of earlier versions, it would appear that now every device in the world may easily be connected with a common protocol.
But three emerging factors are requiring a fresh look at this worldview. The first is the coming wave of sensors, actuators, and devices making up the Internet of Things (IOT). Although not yet widely recognized, it is beginning to be understood that a majority of these devices will be too small, too cheap, too dumb, and too copious to run the hegemonic IPv6 protocol. Instead, much simpler protocols will predominate (see below), which must somehow be incorporated into the IP networks of Enterprises and the Internet.
At the other end of the scale from these tiny devices are huge Enterprise networks, increasingly moving to the cloud for computing and communication resources. An important requirement of these Enterprises is the capacity to manage, control, and tune their networks using a variety of Software Defined Networking (SDN) technologies and protocols. These depend on computing resources at the edges of the network to manage the interactions.
The third element is a conundrum presented by the first two: Enterprises will be struggling with the need to bring vast numbers of simple IOT devices into their networks. Though many of these devices will lack computing and protocol smarts, the requirement will still remain to manage everything via SDN. Along with this, many legacy Machine-to-Machine (M2M) networks (such as those on the factory floor) present the same challenges as the IOT: simple and/or proprietary protocols operating in operational silos today that Enterprises desire to manage and tune with SDN techniques.
Deep Learning Approaches for Information Centric Network and Internet of Things | ijtsrd
Technologies are rapidly advancing, with additions to them every single day. Cloud Computing and the Internet of Things (IoT) have become two very closely associated future internet technologies. One provides a platform for the other's success, with benefits ranging from computing to processing and analyzing information to reduce latency for real-time applications. However, a few IoT devices do not support on-device processing. An alternative solution is Edge Computing, which brings computation and services close to the consumer. In this work, we study and discuss the application of combining Deep Learning with IoT and Information Centric Networking. A Convolutional Neural Network (CNN), a Deep Learning model, can make the most reliable data available from the complex IoT environment. Additionally, some Deep Learning models such as Recurrent Neural Networks (RNN) and Reinforcement Learning have also been integrated with IoT, and can also collect information from real-time applications. Aashay Pawar, "Deep Learning Approaches for Information - Centric Network and Internet of Things", published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-4 | Issue-6, October 2020, URL: https://www.ijtsrd.com/papers/ijtsrd33346.pdf Paper Url: https://www.ijtsrd.com/engineering/computer-engineering/33346/deep-learning-approaches-for-information--centric-network-and-internet-of-things/aashay-pawar
SDN Service Provider use cases Network Function Virtualization (NFV) | Brent Salisbury
SDN for Service Providers as Defined by Service Providers. This was from the Software Defined Networking Summit | 13-14 November 2012. Thoughts at http://networkstatic.net/sdn-use-cases-for-service-providers/
SECURITY AND PRIVACY AWARE PROGRAMMING MODEL FOR IOT APPLICATIONS IN CLOUD EN... | ijccsa
The introduction of Internet of Things (IoT) applications into daily life has raised serious privacy concerns among consumers, network service providers, device manufacturers, and other parties involved. This paper gives a high-level overview of the three phases of data collection, transmission, and storage in IoT systems, as well as current privacy-preserving technologies. The following elements were investigated during these three phases: (1) physical and data-link layer security mechanisms; (2) network remedies; (3) techniques for distributing and storing data. Real-world systems frequently span multiple phases and incorporate a variety of methods to guarantee privacy; therefore, for IoT research, design, development, and operation, a thorough understanding of all phases and their technologies is beneficial. This study introduces two independent methodologies, generic differential privacy (GenDP) and Cluster-Based Differential Privacy (Cluster-based DP), for handling metadata as intents and intent scope so as to maintain the privacy and security of IoT data in cloud environments. With their help, we can virtualize and connect enormous numbers of devices, get a clearer understanding of the IoT architecture, and store data persistently. However, due to the dynamic nature of the environment, the diversity of devices, the ad hoc requirements of multiple stakeholders, and hardware or network failures, creating security-, privacy-, safety-, and quality-aware Internet of Things apps is a very challenging task. It is becoming ever more important to improve data privacy and security through appropriate data acquisition. The proposed approach resulted in reduced loss compared to Support Vector Machine (SVM) and Random Forest (RF).
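The GenDP and Cluster-based DP algorithms are specific to this paper and are not reconstructed here; as a generic illustration of the differential-privacy building block such schemes rely on, the sketch below applies the standard Laplace mechanism to a counting query (which has sensitivity 1). The names `laplace_noise` and `dp_count` are hypothetical, not from the paper.

```python
import math
import random

def laplace_noise(scale):
    """Draw a Laplace(0, scale) sample by inverse-CDF sampling."""
    u = random.random() - 0.5                   # u in [-0.5, 0.5)
    u = max(min(u, 0.499999999), -0.499999999)  # guard against log(0)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon):
    """Epsilon-differentially-private count query: the true count
    plus Laplace(1/epsilon) noise, since one record can change a
    count by at most 1 (sensitivity 1)."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)
```

A smaller epsilon adds more noise (stronger privacy), while a large epsilon yields an answer close to the true count.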
The Ghazalian Project for the AI Era: A Multiplex Critical AI Approach | Junaid Qadir
Presentation by Junaid Qadir (Qatar University) at
Symposium on Ghazali on Education: Contemporary Practical Applications from an Enduring Legacy - Day 2
Organized by Hamad Bin Khalifa University (HBKU)
Ihsan for Muslim Professionals Short Course | Junaid Qadir
Ihsan for Muslim Professionals Short Course
by JUNAID QADIR, Information Technology University.
Ramadan 1441, May 2020.
Watch the entire short course at
https://www.youtube.com/playlist?list=PL4AueLFeEG0DAqrRgnZKQR163mvpOOkz-
A Thinking Person's Guide to Using Big Data for Development: Myths, Opportuni... | Junaid Qadir
A Thinking Person's Guide to Using Big Data for Development: Myths, Opportunities, and Pitfalls
Accompanying Paper Available at:
Caveat Emptor: The Risks of Using Big Data for Human Development
IEEE Technology and Society Magazine 38(3):82-90
DOI: 10.1109/MTS.2019.2930273
September 2019
https://www.researchgate.net/publication/335745617_Caveat_Emptor_The_Risks_of_Using_Big_Data_for_Human_Development
IWCMC Invited Talk for the E-5G Presentation | Junaid Qadir
The unprecedented rapid adoption of mobile technology has motivated great interest in using mobile technology for health (mHealth). Bolstering the mHealth promise are three important trends. Firstly, big data, through which there has been unprecedented commoditization and opening up of data through the instrumentation of modern phones and environments (e.g., using the native sensors in mobile phones or using embedded devices in the so-called Internet of Things). This opens up the opportunity of collecting individual-level “small data” that can be used to provide personalized healthcare. Secondly, artificial intelligence (AI) and machine learning (ML) advances have democratized diagnostic capabilities to some extent, and further significant improvement is expected. With advances in the computational capabilities of mobile phones, and with resource augmentation from clouds, it will be possible to support data- and computation-intensive mHealth applications. Finally, high-performance communication capabilities (e.g., high throughput and low latency) can change the landscape of healthcare in terms of operational efficiency and accuracy, and enable a range of telehealth services. In this talk, we will present the research agenda for bringing the 5G-Enabled Health Revolution.
Presentation on "The pedagogy of online education: historical overview and future directions" at the VU 3rd e-Learning and Distance Education Conference (ELDEC) conference.
Common Student mistakes: What We Can Learn From Socrates, the Cognitive Scien... | Junaid Qadir
“To make no mistakes is not in the power of man; but from their errors and mistakes the wise and good learn wisdom for the future.” (Plutarch)
Have you ever wondered why you, being a university or college student, aren't progressing despite your effort?
It may be that you are working frustratingly hard but without commensurate results, due to a deficient mindset and/or methodology.
While a wide variety of problems can result in sub-optimal learning performance, there are 7 main categories of student mistakes that recur with alarming regularity. You may be making one, or more, of these mistakes. Not to fret, though; each of these debilitating hindrances can be surmounted with self-awareness and some effort, paving the way for enjoyable and productive learning experiences.
Lectures of CS-721 (Network Performance Evaluation) taught for the Virtual University by Junaid Qadir.
To access other resources, visit http://sites.google.com/site/netperfeval
On The Necessity Of Loving The Prophet (Sallalahu Alaihi Wassalam) | Junaid Qadir
This presentation presents a chapter of the book 'Ash-Shifa' by Qadi Iyad and talks about the great position occupied by the Messenger of Allah (Sallalahu Alaihi Wassalam).
Reference: Aisha Bewley: "Muhammad - Messenger of Allah, Ash-Shifa of Qadi Iyad"
Ethnobotany and Ethnopharmacology:
Ethnobotany in herbal drug evaluation,
Impact of Ethnobotany in traditional medicine,
New development in herbals,
Bio-prospecting tools for drug discovery,
Role of Ethnopharmacology in drug evaluation,
Reverse Pharmacology.
How to Create Map Views in the Odoo 17 ERP | Celine George
The map views are useful for providing a geographical representation of data. They allow users to visualize and analyze the data in a more intuitive manner.
The Art Pastor's Guide to Sabbath | Steve Thomason
What is the purpose of the Sabbath Law in the Torah? It is interesting to compare how the context of the law shifts from Exodus to Deuteronomy. Who gets to rest, and why?
This is a presentation by Dada Robert in a Your Skill Boost masterclass organised by the Excellence Foundation for South Sudan (EFSS) on Saturday, the 25th and Sunday, the 26th of May 2024.
He discussed the concept of quality improvement, emphasizing its applicability to various aspects of life, including personal, project, and program improvements. He defined quality as doing the right thing at the right time in the right way to achieve the best possible results and discussed the concept of the "gap" between what we know and what we do, and how this gap represents the areas we need to improve. He explained the scientific approach to quality improvement, which involves systematic performance analysis, testing and learning, and implementing change ideas. He also highlighted the importance of client focus and a team approach to quality improvement.
Students, digital devices and success - Andreas Schleicher - 27 May 2024..pptxEduSkills OECD
Andreas Schleicher presents at the OECD webinar ‘Digital devices in schools: detrimental distraction or secret to success?’ on 27 May 2024. The presentation was based on findings from PISA 2022 results and the webinar helped launch the PISA in Focus ‘Managing screen time: How to protect and equip students against distraction’ https://www.oecd-ilibrary.org/education/managing-screen-time_7c225af4-en and the OECD Education Policy Perspective ‘Students, digital devices and success’ can be found here - https://oe.cd/il/5yV
The French Revolution, which began in 1789, was a period of radical social and political upheaval in France. It marked the decline of absolute monarchies, the rise of secular and democratic republics, and the eventual rise of Napoleon Bonaparte. This revolutionary period is crucial in understanding the transition from feudalism to modernity in Europe.
For more information, visit-www.vavaclasses.com
We all have good and bad thoughts from time to time and situation to situation. We are bombarded daily with spiraling thoughts(both negative and positive) creating all-consuming feel , making us difficult to manage with associated suffering. Good thoughts are like our Mob Signal (Positive thought) amidst noise(negative thought) in the atmosphere. Negative thoughts like noise outweigh positive thoughts. These thoughts often create unwanted confusion, trouble, stress and frustration in our mind as well as chaos in our physical world. Negative thoughts are also known as “distorted thinking”.
3. Approximate computing
Figure Credit: “Is “Good Enough” Computing Good Enough?”, Logan Kugler.
Many computing systems and applications can tolerate some loss of quality. With such applications in mind, it makes sense to trade off “precision” for gains in “efficiency”.
4. Setting up the stage
• The Pareto Principle (also called the 80/20 rule)
• The Tainter Curve (the downside of too much complexity)
• Iatrogenics: the externalities of technology
5. Pareto Principle
80/20 Rule: “Among the factors to be considered there will usually be the vital few and the trivial many.”
Instead of aiming for 100% ideal networks, we can get good-enough services with considerably less effort (and less negative externality).
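The 80/20 intuition can be checked numerically: in a heavy-tailed distribution of, say, flow sizes, a small fraction of flows carries most of the bytes. A minimal sketch with synthetic data (the shape parameter and the flow-size interpretation are illustrative assumptions, not measurements):

```python
import numpy as np

# Synthetic "flow sizes" from a heavy-tailed Pareto-family distribution.
# NumPy's pareto() draws from the Lomax (Pareto II) distribution; with a
# shape near 1.16 the top fifth of flows ends up with the lion's share.
rng = np.random.default_rng(42)
flows = rng.pareto(1.16, size=100_000)

flows.sort()
top_20_share = flows[-20_000:].sum() / flows.sum()
print(f"Top 20% of flows carry about {top_20_share:.0%} of the traffic")
```

The same "vital few" structure shows up in real traffic traces, which is what makes approximating away the trivial many attractive.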
6. Tainter Curve
Dangers of over-technologizing
Over-technologizing a society makes it fragile/brittle in the face of future “Black Swans”.
7. Iatrogenics: healer-given problems
The externalities of technology
Medicine, when overdosed, can become a poison.
Figure Credit: “Antifragile: Things That Gain from Disorder”, Taleb, N. N.
More technology may be less useful! The mindset of abundance, much like scarcity, brings about its own problems, such as overconsumption.
8. Approximate networking (AN)
Existing examples of AN:
• Best-effort Internet
• Bloom filters
• UDP; UDP-Lite
• Delay-tolerant networks
If we relax the requirements of “ideal networking”, we can obtain many advantages, e.g., reduced cost, complexity, and harmful externalities, and increased efficiency (e.g., energy efficiency).
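Of these examples, the Bloom filter is perhaps the cleanest illustration of the approximate mindset: it answers set-membership queries with a small, tunable false-positive rate (and no false negatives) in exchange for drastic memory savings. A minimal sketch; the bit-array size, hash count, and salted-SHA-256 hashing scheme are illustrative choices:

```python
import hashlib

class BloomFilter:
    """Space-efficient approximate set membership: may say "probably
    present" for an absent item, but never misses a present one."""

    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        # False means definitely absent; True means "probably present".
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("10.0.0.1")
print(bf.might_contain("10.0.0.1"))     # True (no false negatives)
print(bf.might_contain("192.168.1.1"))  # False with high probability
```

Routers and caches use exactly this trade: a guaranteed-correct set would cost far more memory than an occasionally-wrong one.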
9. The AN simplicity imperative
Occam’s hypothesis: The simplest model that fits the data is also the most plausible.
Approximate Networking’s Razor: The simplest network (architecture/protocol/algorithm) that satisfies the desired QoS is the most desirable approximation.
10. Approximate networking
In machine learning, there is a regularization technique known as Lasso that can help build better models by penalizing complexity. Approximate networking can be thought of as the Lasso of networking: it aims to build simpler, more scalable networking solutions.
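The Lasso analogy can be made concrete: the L1 penalty drives unimportant coefficients exactly to zero, yielding a simpler model. A minimal sketch using ISTA (iterative soft-thresholding) on toy data; the data, penalty weight, and iteration count are illustrative assumptions:

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the L1 norm: shrinks values toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam=0.1, iters=2000):
    """Lasso via ISTA: minimise 0.5*||Xw - y||^2 + lam*||w||_1.
    The L1 penalty zeroes out unimportant coefficients entirely."""
    lr = 1.0 / np.linalg.norm(X, 2) ** 2  # step size from the Lipschitz constant
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ w - y)
        w = soft_threshold(w - lr * grad, lr * lam)
    return w

# Toy data: only the first 2 of 10 features actually matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=100)
w = lasso_ista(X, y, lam=5.0)
print(np.round(w, 2))  # most coefficients are shrunk to exactly zero
```

The analogy to networking: instead of provisioning every feature of an "ideal" network, penalize complexity and keep only the components that demonstrably earn their cost.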
11. Context-appropriate tradeoffs (high-tech, low-tech)
In 5G, the aim is to provide seemingly infinite bandwidth (in other words, as much bandwidth as the user needs). 5G can exploit the great diversity in application requirements and employ context-appropriate tradeoffs for universal provisioning.
Main tradeoffs:
• Latency vs. throughput
• Fidelity vs. convenience
• Throughput vs. coverage/reliability
• Coverage vs. consumed power
• Performance vs. cost efficiency
• Privacy vs. free content/services
Spectrum of connectivity options (figure).
12. Context-appropriate tradeoffs (high-tech, low-tech, or even no-tech)
How can we outsource some of the things that we do on the current Internet to less costly and less energy-intensive offline methods, while ensuring that we get the QoS necessary for our applications?
13. Why approximate networking?
• To be anti-fragile/robust to limits and “undeveloping” environments
• For universal coverage (e.g., Global Access to the Internet for All, GAIA)
• Efficiency and reduced cost
• Dealing with resource constraints (such as limited power)
14. Paradoxically, being somewhat defeatured makes you resilient!
When limits hit us, the architectures/protocols with all the bells and whistles will be hit much harder. Approximate solutions may be more sustainable in an unpredictable world.
15. Open research challenges
1. Mitigating the highlighted pitfalls/challenges
2. Multimodal BD4D (Big Data for Development) analytics
3. Predictive BD4D analytics
4. Combining humans, crowds, and AI
5. Unsupervised BD4D analytics
Open questions
1. How do we define context?
2. How do we quantify when our approximation is working and when it is not? How do we measure the cost of approximation in terms of performance degradation?
3. How do we dynamically control the approximation tradeoffs according to network conditions?
4. How do we design proper incentives for the service provider and the user so that both act harmoniously in provisioning a customer-centric, contextually appropriate service?
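Question 3 can be illustrated with a toy controller that picks the highest-fidelity service tier the currently measured throughput can sustain, degrading gracefully toward more approximate tiers. The tier names and bitrates below are hypothetical, for illustration only:

```python
def choose_fidelity(measured_kbps, tiers):
    """Pick the highest-fidelity tier whose bitrate fits the measured
    throughput; fall back to the most approximate tier otherwise."""
    for name, required_kbps in sorted(tiers.items(), key=lambda kv: -kv[1]):
        if measured_kbps >= required_kbps:
            return name
    return min(tiers, key=tiers.get)  # cheapest tier as last resort

# Hypothetical service tiers (kbps) -- illustrative numbers only.
TIERS = {"hd_video": 2500, "sd_video": 800, "audio_only": 64, "text_summary": 8}

print(choose_fidelity(3000, TIERS))  # hd_video
print(choose_fidelity(500, TIERS))   # audio_only
print(choose_fidelity(4, TIERS))     # text_summary (fallback)
```

A real controller would also smooth the throughput estimate and add hysteresis to avoid oscillating between tiers; the sketch only shows the core tradeoff decision.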
16. Concluding remarks
The deep-rooted reliance on infrastructure, which itself depends on many exogenously sourced depletable resources (such as energy and materials), makes modern society vulnerable to a disruptive collapse.
To cope with such a disruptive collapse, we have proposed the idea of “approximate networking”, which emphasizes the simplicity imperative.
Approximate networking is based on the idea that coping with a world burdened by limits will require us to adopt context-specific tradeoffs to provide “good enough” service.
junaid.qadir@itu.edu.pk
Editor's Notes
Approximate networking is inspired in part by the emerging architectural trend of “approximate computing” [9], which leverages the capability of many computing systems and applications to tolerate some loss of quality and optimality by trading off “precision” for “efficiency” (typically energy efficiency). Approximate computing works by relaxing the convention of exact equivalence between specification and hardware implementation. By employing approximations at the hardware level, the energy efficiency of systems can be boosted significantly without much loss in quality. As a concrete example, it has been reported that using approximate computing techniques for the k-means clustering algorithm can provide an astounding 50x energy-efficiency gain with an accuracy loss of only 5 percent [10].
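The k-means example can be mimicked in software by clustering a random subsample instead of the full data set. This is not the hardware technique cited above (the 50x / 5% figures refer to hardware-level approximation); it is just a sketch of the same accuracy-for-efficiency trade:

```python
import numpy as np

def _init_centers(points, k, rng):
    # Farthest-point initialisation: robust for well-separated clusters.
    centers = [points[rng.integers(len(points))]]
    while len(centers) < k:
        dists = np.min([((points - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(points[np.argmax(dists)])
    return np.array(centers)

def kmeans(points, k, iters=20, seed=0):
    """Plain Lloyd's algorithm."""
    rng = np.random.default_rng(seed)
    centers = _init_centers(points, k, rng)
    for _ in range(iters):
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers

def approximate_kmeans(points, k, sample_frac=0.1, seed=0):
    """Approximate variant: cluster a small random sample, trading a
    little accuracy for roughly 10x less distance computation."""
    rng = np.random.default_rng(seed)
    size = max(k, int(sample_frac * len(points)))
    idx = rng.choice(len(points), size=size, replace=False)
    return kmeans(points[idx], k, seed=seed)

# Two well-separated blobs; both variants should recover their means.
rng = np.random.default_rng(1)
blob_a = rng.normal(loc=(0.0, 0.0), scale=1.0, size=(500, 2))
blob_b = rng.normal(loc=(10.0, 10.0), scale=1.0, size=(500, 2))
data = np.vstack([blob_a, blob_b])
exact = kmeans(data, 2)
approx = approximate_kmeans(data, 2)
```

On easy data like this, the sampled variant lands very close to the full computation while touching a tenth of the points, which is the essence of the precision-for-efficiency trade.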
As Joseph Tainter argues in his classic work [37], societies generally solve problems in ways that increase complexity (e.g., building additional and/or more complex sociotechnical systems). Indeed, it can be argued that civilization itself, including traditions of great art and writing, “are epiphenomena or covariables of social, political, and economic complexity” [37]. For a time, increasing complexity produces benefits in excess of costs, but complexity is also subject to declining marginal returns, as Tainter depicts with a simple but profound diagram shown in Figure 1, which we refer to as the Tainter curve.
The Tainter curve indicates that simple solutions with major benefits will eventually be followed by more complex solutions with modest incremental benefits. At some point, increasing complexity fails to yield net benefits. At that point—point C2 on the Tainter curve—a wise approach would be to control or preferably reduce complexity. Yet seldom are such circumstances recognized or measures taken, and Tainter (as well as Diamond [10]) showed as a result how many historical civilizations continued to increase complexity until the attendant costs drove these societies into collapse, a phase of rapid decline in societal complexity.
In what ways is abundance harder to cater for than scarcity:
“Consider that as I am writing these lines, we are living in a debt crisis. The world as a whole has never been richer, and it has never been more heavily in debt, living off borrowed money. The record shows that, for society, the richer we become, the harder it gets to live within our means. Abundance is harder for us to handle than scarcity.”
Excerpt From: Taleb, Nassim Nicholas. “Antifragile: Things That Gain from Disorder.” iBooks.
Refer to the fact that the world as a whole has never been richer, and it has never been more heavily in debt.
For example, consider “Affluenza”, a one-hour television special that explores the high social and environmental costs of materialism and overconsumption.
Origin of iatrogenic: Greek iatros (physician) + English -genic.
Definition of iatrogenic: induced inadvertently by a physician or surgeon or by medical treatment or diagnostic procedures (e.g., an iatrogenic rash).
“Naive Interventionism: Intervention with disregard to iatrogenics. The preference, even obligation, to “do something” over doing nothing. While this instinct can be beneficial in emergency rooms or ancestral environments, it hurts in others in which there is an “expert problem.”
Excerpt From: Taleb, Nassim Nicholas. “Antifragile: Things That Gain from Disorder.” iBooks.
Approximate Networking: Old Wine in a New Bottle?
Approximation is a classic tool that is employed in computing and networking when faced with constraints, intractability, and tradeoffs. We do not claim that approximate networking is a novel way of dealing with networking problems that have resource constraints. Indeed, delay-tolerant networking (DTN), information-centric networking (ICN), approximate computing, and the use of caching and opportunistic communication are all approximate networking solutions. However, we contend that the idea of viewing approximate networking as first-class networking, and as an overarching general framework for dealing with context-appropriate networking tradeoffs, holds new promise. The concept of approximate networking generalizes lowest-common-denominator networking (LCD-NET) [10] in that approximate networking extends to the design of network infrastructure as well as algorithms and protocols. Another related research theme is that of challenged networks: networks that have very long communication delay or latency; unstable or intermittently available links; very low data rates; very high congestion; or very high error rates.
Yaser Abu-Mostafa’s course.
Tom Mitchell: Why prefer short hypotheses? (Occam’s Razor)
Argument in favor: there are fewer short hypotheses than long ones, so a short hypothesis that fits the data is less likely to be a statistical coincidence.
More bikes are now produced than cars according to David Edgerton.
In recent years around 100 million bicycles were produced every year and only about 40 million cars. In 1950 there were around 10 million of each, and they remained about equal until 1970. The great change was the expansion of Chinese production to 40–50 million bicycles, from a few million in the early 1970s. In addition, Taiwan and India between them were, at the end of the century, making more bicycles than were produced in the whole world in 1950. Bicycle-derived technologies of the poor megacity provide an instance of a creole technology.
Can we come to grips with the tricky question of defining context, and what it means for the user, the application, the network, and the operator?
Coping With Resource Constraints
In many developing parts of the world, resource constraints (such as limited power and unstable governance) are a norm of life. Even at a global level, it is anticipated that the modern fossil-fuel-based industrial system is not sustainable, and the impending depletion of these resources will probably give rise to a sudden and permanent shock that may lead to economic instability and infrastructural challenges [2]. Such a severe permanent energy crisis can have far-reaching consequences for the economy and lead developed countries towards being “undeveloping countries” [2]. Approximate networking insights can be used to reorient the design of the Internet’s algorithms, protocols, and infrastructure to better manage the overarching energy, societal, material, and economic limits that this looming scarcity-based future will impose.
Need of Energy Efficiency
Information and communication technology (ICT) is a big consumer of the world’s electrical energy, using up to 5% of overall energy (2012 statistics) [14]. The urgency of developing an energy-efficiency manifesto is reinforced when we consider the impending decline of non-renewable energy resources as well as the increased demand for ICT (as more and more people get online and use ICT to exchange ever greater amounts of data traffic) [4]. This strongly motivates the need for energy-efficient internetworking [15]. The approximate networking trend can augment the hardware-focused approximate computing trend to ensure that the energy crisis is managed through the ingenious use of approximation.
The right of affordable access to broadband Internet is enshrined in the 2015 Sustainable Development Goals of the United Nations. The ITU broadband “Goal 20-20” initiative aims at an optimistic target of universal broadband Internet speeds of 20 Mbps for $20 a month, accessible to everyone in the world by 2020 (Source: Alliance for Affordable Internet (A4AI) Report, 2014). Such an approach, which aims at providing an “ideal networking” experience universally, has historically always failed (due to various socioeconomic and technical issues). An important reason is that most modern technologies (such as 3G/4G LTE and the planned 5G) are urban-focused, since rural systems (being sparsely populated by definition) do not hold much business potential for mobile carriers [2]. The current percentage of households with Internet access worldwide is 46.4%, with only 34% and 7% of the households in developing countries and least-developed countries having Internet access, respectively [3]. The Internet is also largely unaffordable when we consider that, on average, the mobile broadband and fixed-line broadband prices are 12% and 40% of the average person’s monthly income (with women and rural populations hit the most). The UN target aims to reduce the true cost of connecting to the Internet to around 5% of a person’s monthly income. Approximate networking is a particularly appealing option for reaching out to the offline human population (more than 4 billion of whom live in developing countries).
“The road to robustification starts with a modicum of harm”
Excerpt From: Taleb, Nassim Nicholas. “Antifragile: Things That Gain from Disorder.” iBooks.
“But simplicity is not so simple to attain. Steve Jobs figured out that “you have to work hard to get your thinking clean to make it simple.” The Arabs have an expression for trenchant prose: no skill to understand it, mastery to write it.”
Excerpt From: Taleb, Nassim Nicholas. “Antifragile: Things That Gain from Disorder.” iBooks.
*Antifragility is a property of systems that increase in capability, resilience, or robustness as a result of stressors, shocks, volatility, noise, mistakes, faults, attacks, or failures. It is a concept developed by Professor Nassim Nicholas Taleb in his book, Antifragile.
There’s a lot to like about the altruism and idealism of BD4D, but implementing practical BD4D systems that are useful in practice is going to be challenging.