Mid-semester presentation for my Computers & Society course at Mount Royal University. Has some technical detail about how the internet works, web protocols, data centres, and typical security threats.
3. It is common to visualize the internet using some type of cloud icon. While convenient, this hides the fact that the internet is most certainly not composed of magic water vapor, but of a whole lot of physical stuff.
5. The Internet is composed of millions of kilometers of wires (metal and fiber optic) and millions of computing devices, such as servers, routers, switches, hubs, and other networking devices, most of which are housed in specialized environments requiring countless air conditioners and power devices.
8. The internet is a conglomeration of many
different physical networks that are able to
communicate thanks to the use of common
connection protocols.
The internet is built on top of a massive
amount of telecommunications
infrastructure, most of it initially
government-funded, but now generally
privately owned.
9. The most important infrastructure belongs to what are commonly called Tier 1 Networks or Tier 1 ISPs: networks that can reach every other network on the Internet through settlement-free peering, without paying anyone for transit. When someone talks about the Internet Backbone, they are talking about Tier 1 networks.
About sixteen different companies are considered to be Tier 1 networks, including Level 3, Tata Communications, NTT, AT&T, and Verizon.
11. Tier 2 Networks may peer for free with some networks, but must pay at least some Tier 1 networks for access to the rest of the Internet (referred to as buying transit).
Many regional networks are Tier 2. Some examples include Rogers, Telus, Comcast, British Telecom, and Vodafone.
12. Network size comparison: Rogers (Canada), 25,000 km; CenturyLink, 855,000 km.
15. Since the internet is composed of many interconnected, but independent, networks, there need to be mechanisms for creating those interconnections.
Internet Exchange Points (IXPs) have become one of the most important mechanisms for creating those interconnections.
16. An Internet Exchange Point is a physical
location where different IP networks and
content providers meet to exchange local
traffic with each other (that is, peer) via a
switch.
18. The internet was designed to be a robust communication network that could continue to work even if parts of the network are disrupted or destroyed.
It is the TCP/IP set of protocols that makes this possible. A given message is broken into smaller packets, each of which can take its own independent route from the sender to the destination.
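A toy sketch of that idea (not a real TCP implementation): a message is split into numbered packets that may arrive out of order, yet the sequence numbers allow the original message to be reassembled.

```python
# Toy illustration of packet switching (not real TCP): a message is
# split into numbered packets, which may arrive out of order after
# taking different routes, yet can be reassembled by sequence number.
import random

def packetize(message: str, size: int = 10) -> list[tuple[int, str]]:
    """Split a message into (sequence_number, payload) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets: list[tuple[int, str]]) -> str:
    """Rebuild the original message regardless of arrival order."""
    return "".join(payload for _, payload in sorted(packets))

packets = packetize("The internet routes each packet independently.")
random.shuffle(packets)      # simulate packets arriving out of order
print(reassemble(packets))   # the original message is restored
```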
19. Routers are a key technology in the network in that they shuttle packets from one network to another.
How are destination computers identified? Each piece of hardware has a unique IP address. Initially each IP address was 32 bits long (an IPv4 address, conventionally written as four decimal numbers, such as 192.168.0.1). Due to the increase in the number of devices, the newer IPv6 addresses are substantially longer: 128 bits.
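A quick demonstration with Python's standard ipaddress module (the addresses themselves are arbitrary examples):

```python
# IPv4 vs IPv6 address sizes, using Python's standard ipaddress module.
import ipaddress

v4 = ipaddress.ip_address("192.168.0.1")               # an IPv4 address
v6 = ipaddress.ip_address("2001:db8::8a2e:370:7334")   # an IPv6 address

print(v4.version, v4.max_prefixlen)   # 4 32  -> IPv4 addresses are 32 bits
print(v6.version, v6.max_prefixlen)   # 6 128 -> IPv6 addresses are 128 bits
print(2 ** 32)    # ~4.3 billion possible IPv4 addresses
print(2 ** 128)   # ~3.4e38 possible IPv6 addresses
```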
23. The web uses a client-server model of
communication.
The client-server model is one in which a
computer client, such as a browser, makes
requests of another computer called a
server, which is normally continually active,
listening for requests from clients.
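To make this concrete, here is a minimal sketch of the server side using only Python's standard library (the port and message are arbitrary choices). Any client, such as a browser pointed at http://localhost:8000/, can then make requests of it.

```python
# Minimal client-server sketch: the server runs continually,
# listening for requests; clients (e.g. a browser) connect to it.
from http.server import HTTPServer, BaseHTTPRequestHandler

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"Hello from the server!"
        self.send_response(200)                        # HTTP status line
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                         # response body

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), HelloHandler).serve_forever()
```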
26. HTTP (Hypertext Transfer Protocol) defines
a set of rules about how computers
communicate with one another. It is actually
a simple text-based protocol.
While the latest generations of browsers
often hide the “http://” in the address bar,
HTTP is still present.
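Because the protocol is plain text, a request can be written out by hand and sent over an ordinary TCP socket. A minimal sketch, using example.com as a stand-in host:

```python
# HTTP really is plain text: send a hand-written GET request over a
# raw TCP socket and read back the server's plain-text response.
import socket

request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# Print the status line and headers (the start of the response).
print(response.decode("utf-8", errors="replace")[:500])
```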
27. Beyond the fact that almost all web communication makes use of HTTP, having some idea of how HTTP works can help you understand many of the constraints that exist within the field of web development, and many of the security problems that bedevil the web.
30. What about HTTPS?
HTTP Secure (sometimes also called, more long-windedly, HTTP over Transport Layer Security).
This protocol is essentially identical to HTTP except the connection content is also encrypted. It protects against man-in-the-middle attacks, so that an eavesdropper on a session cannot read or tamper with it.
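A minimal sketch of the difference, again using example.com as a stand-in: the request bytes are identical to plain HTTP, but the socket is wrapped in an encrypted TLS session first.

```python
# HTTPS = HTTP over TLS: the request is the same, but the socket is
# wrapped in an encrypted TLS session, so an eavesdropper sees only
# ciphertext and cannot read or tamper with the exchange.
import socket
import ssl

context = ssl.create_default_context()  # also verifies the server's certificate

with socket.create_connection(("example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="example.com") as tls:
        print(tls.version())             # e.g. 'TLSv1.3'
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(200).decode("utf-8", errors="replace"))
```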
31. In some of the earlier diagrams, the server was represented as a single entity. This is in fact quite misleading.
A typical website makes use of several, dozens, hundreds, or even hundreds of thousands of servers. Why?
32. Partly this is for functional reasons: different
types of tasks will be isolated in different
servers.
Partly this is for performance reasons: a
single server has limits to how many
simultaneous requests it can manage.
Another important reason is for
redundancy: computers do fail and so
having multiple servers ensures a service
works even when a single server stops
working.
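One common way to spread requests across multiple servers is round-robin load balancing. A minimal sketch (the server names are hypothetical):

```python
# Sketch of round-robin load balancing: requests are spread evenly
# across a pool of servers, so no single machine is overwhelmed and
# the service survives the failure of any one server.
import itertools

servers = ["web-01.example.net", "web-02.example.net", "web-03.example.net"]
next_server = itertools.cycle(servers)   # endlessly cycle through the pool

for request_id in range(7):
    print(f"request {request_id} -> {next(next_server)}")
```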
34. Server farms are typically housed within specialized facilities known as data centers. These facilities contain a lot more than just rows of computers mounted in server racks.
37. All those computers generate a great deal of heat, and so a key component of a data center is its cooling infrastructure.
This includes reliable air conditioning, forced-air recirculation, and chilled water piped directly within the server racks.
40. Reliable, steady power is the other key component of any data center. This is achieved via UPSes and other devices that normalize electrical power, as well as diesel generators and DC battery supplies that preserve electrical power even during outages.
43. Data centers in 2013 consumed somewhere between 2% and 4% of the entire United States' electrical consumption.
Data centers in Ireland in 2016 consumed about 20% of Ireland's entire electrical consumption.
44. Computing in general in 2012 consumed about 5% of the world's electricity. By 2016, about 11% of all global electricity was consumed by computing.
Optimistic estimate: by 2025, computing will consume 20% of worldwide electricity.
45. Computing will soon produce about 3% of
global carbon emissions.
Optimistic Estimate: Within a decade,
computing will produce about 14% of global
carbon emissions.
46. “The analysis shows that for the worst-case scenario, CT [communication technology] could use as much as 51% of global electricity in 2030. … the present investigation suggests, for the worst case scenario, that CT electricity usage could contribute up to 23% of the globally released greenhouse gas emissions in 2030.”
47. In 2011, Google reported its energy
consumption to be 230 MWh.
In 2014, it reported 3.2 GWh (i.e. 3200 MWh)
even though it had made many of its data
centers significantly more energy efficient.
How is this possible?
48. Governments and environmentalists
generally assume that improving the energy
efficiency of a process will lower its resource
consumption.
Yet in economics, the Jevons Paradox
argues that the opposite will often occur.
49. In economics, the Jevons paradox occurs
when technological progress increases the
efficiency with which a resource is used
(reducing the amount necessary for any one
use),
but the rate of consumption of that resource
rises because of increased demand due to
falling prices.
50. Thus, the dramatic improvements in energy efficiency in data centers in recent years have actually increased the amount of energy consumed by data centers (because improved energy efficiency has lowered costs, thereby encouraging more people to make use of data centers).
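A toy calculation (the numbers are invented for illustration): write total energy use as $E = n \cdot e$, where $n$ is the number of computations demanded and $e$ is the energy per computation. Suppose a new technology halves $e$, and the resulting lower cost triples demand:

$$E_{\text{new}} = (3n) \cdot \frac{e}{2} = 1.5\, n e = 1.5\, E_{\text{old}}$$

Each computation now uses half the energy, yet total consumption has risen by 50%. This is the Jevons paradox in miniature.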
51. Estimate: 550 GWh of power consumed just to serve these 3 billion views (YouTube servers + downloads + viewing devices), roughly equal to the yearly electricity consumption of a small country.
52. But what about energy savings as a result
of the displacement of older technologies
with newer computing-based ones?
53. One study, for instance, that examined the total energy footprint of a paper newspaper compared to its online version found that the paper version consumed about half as much energy (and that study didn't even factor in data center energy consumption).
54. However, a different study examining
energy consumption of rented DVDs vs
streamed movies found a reduction in the
total energy footprint with the switch to
streaming (however that study also didn't
factor in data center energy consumption).
61. Why Cloud Hosting?
1. Redundancy
2. On-Demand Provisioning
3. Scalability
4. Cost Efficiency
5. Low Startup Costs
6. Managers Seem to Love Clouds …
65. Cloud Service Models
Cloud computing promises something usually referred to as elastic capacity/computing, meaning that server capability can scale with demand.
The service models: Platform as a Service (PaaS), Infrastructure as a Service (IaaS), Software as a Service (SaaS).
Major providers: Amazon Web Services, Microsoft Azure, Google Cloud Platform.
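A minimal sketch of the elastic-capacity idea, with invented thresholds (real cloud platforms express this as autoscaling rules rather than hand-written code):

```python
# Sketch of "elastic capacity": the number of servers follows demand.
# The thresholds and starting values here are invented for illustration.
def scale(servers: int, load_per_server: float) -> int:
    if load_per_server > 0.8 and servers < 100:   # overloaded: scale out
        return servers + 1
    if load_per_server < 0.3 and servers > 1:     # underused: scale in
        return servers - 1
    return servers                                # within the target band

servers = 2
for demand in [1.0, 2.5, 4.0, 1.2, 0.4]:          # total load, in "server units"
    servers = scale(servers, demand / servers)
    print(f"demand={demand:.1f} -> {servers} server(s)")
```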