Introduces the basic concepts of Open vSwitch. In these slides, we discuss how the Linux kernel and networking stack work together to forward and process network packets, and compare that Linux networking functionality with Open vSwitch and OpenFlow.
At the end, we discuss the challenges of integrating Open vSwitch with Kubernetes, which networking functions need to be addressed, and what benefits Open vSwitch provides.
Demystifying EVPN in the data center: Part 1 of a 2-episode series - Cumulus Networks
Network operators are slowly but surely embracing L3-based leaf-spine designs. However, either due to legacy applications or certain multi-tenancy requirements, the need for L2 across racks is still present. How do you solve the problem of providing L2 across multiple racks? EVPN is quickly emerging as the best answer to this question.
In this episode of our 2-part series on EVPN, we start with a discussion of the use cases, a review of the technologies EVPN competes with, and dive into an evaluation of the pros and cons of each.
For a recording of the live event, go to http://go.cumulusnetworks.com/l/32472/2017-09-22/95t27t
In this talk Jiří Pírko discusses the design and evolution of the VLAN implementation in Linux, the challenges and pitfalls as well as hardware acceleration and alternative implementations.
Jiří Pírko is a major contributor to kernel networking and the creator of libteam for link aggregation.
The Docker network overlay driver relies on several technologies: network namespaces, VXLAN, Netlink and a distributed key-value store. This talk will present each of these mechanisms one by one along with their userland tools and show hands-on how they interact together when setting up an overlay to connect containers.
The talk will continue with a demo showing how to build your own simple overlay using these technologies.
These slides were created to make Open vSwitch easier to understand,
so I tried to make them practical: if you just follow this scenario, you will gain some knowledge about OVS.
In this document, I mainly use two commands, "ip" and "ovs-vsctl", to show you what they can do.
Webinar topic: VLAN vs VXLAN
Presenter: Achmad Mardiansyah
In this webinar series, we are discussing VLAN vs VXLAN.
Please share your feedback or webinar ideas here: http://bit.ly/glcfeedback
Check our schedule for future events: https://www.glcnetworks.com/schedule/
Follow our social media for updates: Facebook, Instagram, YouTube, and Telegram
The recording is available on YouTube:
https://youtu.be/HDo7XVLRd9E
The prpl foundation is an open-source, community-driven, collaborative organization. It mainly targets and supports the MIPS architecture (but is open to all), with a focus on enabling next-generation datacenter-to-device portable software and virtualized architectures.
LinuxCon 2015 Linux Kernel Networking Walkthrough - Thomas Graf
This presentation features a walk through the Linux kernel networking stack for users and developers. It covers both existing essential networking features and recent developments, and shows how to use them properly. Our starting point is the network card driver as it feeds a packet into the stack. We will follow the packet as it traverses various subsystems such as packet filtering, routing, protocol stacks, and the socket layer. We will pause here and there to look into concepts such as network namespaces, segmentation offloading, TCP small queues, and low-latency polling, and will discuss how to configure them.
Deeper Dive in Docker Overlay Networks - Docker, Inc.
The Docker network overlay driver relies on several technologies: network namespaces, VXLAN, Netlink and a distributed key-value store. This talk will present each of these mechanisms one by one along with their userland tools and show hands-on how they interact together when setting up an overlay to connect containers. The talk will continue with a demo showing how to build your own simple overlay using these technologies. Finally, it will show how we can dynamically distribute IP and MAC information to every host in the overlay.
Linux offers an extensive selection of programmable and configurable networking components, from traditional bridges and encryption to container-optimized layer 2/3 devices, link aggregation, tunneling, and several classification and filtering languages, all the way up to full SDN components. This talk will provide an overview of many Linux networking components, covering the Linux bridge, IPVLAN, MACVLAN, MACVTAP, Bonding/Team, OVS, classification & queueing, tunnel types, hidden routing tricks, IPSec, VTI, VRF and many others.
Gain a deep understanding of how a packet reaches its destination, along with a basic understanding of network protocols.
Learn how to work with Linux networking and how to manipulate Linux iptables.
Docker networking basics & coupling with Software Defined Networks - Adrien Blind
This presentation reviews Docker networking, introduces basic Software Defined Networking paradigms, and then proposes a mixed implementation that benefits from coupling these two technologies. The proposed implementation model could be a good starting point for creating multi-tenant PaaS platforms.
As a bonus, OpenStack Neutron internal design is presented.
You can also have a look on our previous presentation related to enterprise patterns for Docker:
http://fr.slideshare.net/ArnaudMAZIN/docker-meetup-paris-enterprise-docker
Container Network Interface: Network Plugins for Kubernetes and beyond - KubeAcademy
With the rise of modern containers come new problems to solve, especially in networking. Numerous container SDN solutions have recently entered the market, each best suited for a particular environment. Combined with the multiple container runtimes and orchestrators available today, there exists a need for a common layer to allow interoperability between them and the network solutions.
As different environments demand different networking solutions, multiple vendors and viewpoints look to a specification to help guide interoperability. Container Network Interface (CNI) is a specification started by CoreOS with the input from the wider open source community aimed to make network plugins interoperable between container execution engines. It aims to be as common and vendor-neutral as possible to support a wide variety of networking options — from MACVLAN to modern SDNs such as Weave and flannel.
CNI is growing in popularity. It got its start as a network plugin layer for rkt, a container runtime from CoreOS. Today rkt ships with multiple CNI plugins allowing users to take advantage of virtual switching, MACVLAN and IPVLAN as well as multiple IP management strategies, including DHCP. CNI is getting even wider adoption with Kubernetes adding support for it. Kubernetes accelerates development cycles while simplifying operations, and with support for CNI is taking the next step toward a common ground for networking. For continued success toward interoperability, Kubernetes users can come to this session to learn the CNI basics.
This talk will cover the CNI interface, including an example of how to build a simple plugin. It will also show Kubernetes users how CNI can be used to solve their networking challenges and how they can get involved.
KubeCon schedule link: http://sched.co/4VAo
Chris Swan ONUG Academy - Container Networks Tutorial - Cohesive Networks
Slides from Chris Swan's ONUG Academy "Hands-On Container Networks" on May 12, 2015
This hands on session will begin by looking at how Docker modifies a Linux host to enable containers to be connected to a network. It will then go through how applications running in containers can be connected together, and the different options for interconnectivity on a host and between hosts. Finally we will take a look at running network application services inside of containers.
Syllabus
Learn what Docker does to your Linux host on installation.
Connect applications running across multiple containers using configuration metadata and compositing tools.
Understand the different Docker networking modes (host, container, none).
Using Pipework to customise network configuration.
Connecting containers across VMs using Open vSwitch.
Using containers for application network services such as proxies, load balancers, and TLS termination
Learning Objective 1: Understand how containers relate to the host network, and the consequences that has for services running within containers
Learning Objective 2: Understand the different ways that containers can be networked and internetworked.
Learning Objective 3: Use containers to run network application services.
About the topic:
Containers aren’t a new thing, but the Docker project has made them a hot topic as organisations look at new ways to build, ship and run their applications. This brings new challenges for the network as containers are likely to be ten times as numerous as virtual machines. At the same time there is regulatory pressure to move away from the flat LAN model and deliver greater separation and segregation. This presentation will look at how these two forces are coming together, firstly by examining how containers are networked and some of the new approaches and challenges that come with that. This will be followed by a look at how overlay networks are being deployed to achieve ‘microsegmentation’, and ultimately drive a shift towards application centric networking. Of course these forces will collide, bringing us to contained networks of containers.
Workshop Consul - Service Discovery & Failure Detection - Vincent Composieux
This workshop uses a Docker Swarm cluster to deploy a Consul agent and uses Registrator to automatically register Docker containers services into Consul and add a health check on it.
Docker Networking with New Ipvlan and Macvlan Drivers - Brent Salisbury
Docker Networking presentation at ONS2016.
Docker Macvlan and Ipvlan Networking Drivers Experimental Readme:
github.com/docker/docker/blob/master/experimental/vlan-networks.md
The kernel requirement for Ipvlan mode is v4.2+; for Macvlan mode it is v3.19.
If using VirtualBox to test with, use NAT mode interfaces unless you have multiple MAC addresses working in your setup. Use the 172.x.x.x subnet and gateway used by the VBox NAT network. VMware Fusion works out of the box.
Here is a screenshot of a VirtualBox NAT interface:
https://www.dropbox.com/s/w1rf61n18y7q4f1/Screenshot%202016-03-20%2001.55.13.png?dl=0
2. INTRODUCTION TO MININET
Mininet
A network emulator that creates a realistic virtual network
Runs real kernel, switch and application code on a single machine
Provides both a Command Line Interface (CLI) and an Application Programming Interface (API)
• CLI: interactive commanding
• API: automation
Abstraction
• Host: emulated as an OS-level process
• Switch: emulated using a software-based switch
• E.g., Open vSwitch, SoftSwitch
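Since this slide contrasts the CLI with the API, here is a minimal sketch of the API side in Python, assuming Mininet is installed (as on the tutorial VM) and the script is run with sudo:

#!/usr/bin/env python
# Minimal Mininet API sketch: build the same default topology the CLI
# creates (one switch, two hosts) and run a connectivity test.
from mininet.net import Mininet
from mininet.topo import SingleSwitchTopo

net = Mininet(topo=SingleSwitchTopo(k=2))  # s1 with hosts h1 and h2
net.start()
net.pingAll()   # API equivalent of `mininet> pingall`
net.stop()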
4. FLOW OF THE LAB
Download the virtual machine
Setup the virtual machine
Check the setup in Linux
Create the network
Examine the network
Work with basic OpenFlow commands
Add a controller
5. DOWNLOAD THE VIRTUAL MACHINE
For this lab, a virtual machine appliance in OVF format must be downloaded from the OpenFlow Tutorial website:
https://github.com/downloads/mininet/mininet/mininet-2.0.0-113012-amd64-ovf.zip
Download this file
Expand the zip file
You should see these files
6. SETUP THE VIRTUAL MACHINE
To import this appliance into VirtualBox:
Select File → Import Appliance
Select the ovf image
Press the Import button
This lab requires two virtual NICs:
The first one should be set to host-only network
The second one to NAT
7. CHECK LINUX
Mininet is a command line tool that runs in Linux
The Mininet prompt looks like this:
mininet>
The Linux prompt ends with a $ for a normal user
It ends in # for the root account
We will use the sudo command to run Linux commands with root privileges at the normal user prompt
8. LOGIN VM
Start the virtual machine
Login to Linux
The username and the password are both:
mininet
The screen should look like this
10. VM NETWORK CONFIGURATION
Let's see if the two network interfaces are set up correctly
At the Linux prompt, enter:
ifconfig
11. VM NETWORK CONFIGURATION
Three interfaces should appear
Two physical interfaces called
eth0
eth1
And the loopback interface
12. VM NETWORK CONFIGURATION
One of the physical interfaces should have a 192 address and the other a 10 address
We will access the virtual machine using a terminal program via the 192 address
If either of the eth Ethernet interfaces is missing, run this command:
sudo dhclient ethx
Where the x in ethx is the number of the interface
14. SET UP NETWORK ACCESS
As you can see, the eth1 interface is missing
After the dhclient command is run, this appears:
sudo dhclient ethx
Where the x in ethx is the number of the interface
16. ACCESS VM VIA SSH
The tutorial VM is shipped without a desktop environment, to reduce its size. All the exercises will be done through X forwarding, where programs display graphics through an X server running on the host OS.
Open a terminal (Terminal.app on Mac, GNOME Terminal on Ubuntu, etc.). In that terminal, run:
$ ssh -X [user]@[Guest IP Here]
Replace [user] with the correct user name for your VM image.
Replace [Guest IP Here] with the IP you just noted.
If ssh does not connect, make sure that you can ping the IP address you are connecting to.
17. ALTERNATIVE: RUN A GUI IN THE VM CONSOLE WINDOW
To avoid using X11, log in to the VM console window (not via an ssh session!)
Install a GUI
Login to the VM, and type:
$ sudo apt-get update && sudo apt-get install xinit lxde virtualbox-guest-dkms
At this point, you should be able to start an X11 session in the VM console window by typing:
$ startx
18. MININET TUTORIAL
Mininet Command Line Interface Usage
Interact with hosts and switches
• Start a minimal topology:
$ sudo mn
The default run of Mininet (sudo mn) will create a topology consisting of one controller (c0), one switch (s1) and two hosts (h1 and h2).
• Display nodes:
mininet> nodes
• Display links:
mininet> net
• Dump information about all nodes:
mininet> dump
• Exit Mininet:
mininet> exit
The switches generated with Mininet will be just simple forwarding devices, without any "brain" of their own (no control plane)
19. MININET TUTORIAL
To help you start up, here are the most important options for running Mininet:
--topo=TOPO represents the topology of the virtual network, where TOPO could be:
minimal - this is the default topology with 1 switch and 2 hosts
single,X - a single switch with X hosts attached to it
linear,X - creates X switches connected in a linear/daisy-chain fashion, each switch with one host attached
tree,X,Y - a tree topology with X depth, Y fanout
--switch=SWITCH creates different types of switches, such as:
ovsk - this is the default Open vSwitch that comes preinstalled in the VM
user - this is a switch running in user space (much slower)
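The --topo presets above also exist as classes in the Python API; a small sketch, assuming a standard Mininet installation:

# The CLI presets as Python classes (same topologies as --topo=...).
from mininet.net import Mininet
from mininet.topo import SingleSwitchTopo, LinearTopo  # single,X / linear,X
from mininet.topolib import TreeTopo                   # tree,X,Y

net = Mininet(topo=TreeTopo(depth=2, fanout=2))  # like --topo=tree,2,2
net.start()
net.pingAll()
net.stop()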
20. MININET TUTORIAL
--controller=CONTROLLER where CONTROLLER can be:
ovsc - this creates the default OVS Controller that comes preinstalled in the VM
nox - this creates the well-known NOX controller
remote - does not create a controller but instead listens for connections from external controllers
--mac - sets easy-to-read MAC addresses for the devices
For more information:
$ mn -h
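For the remote option, the Python API counterpart is RemoteController; a hedged sketch, assuming a controller is already listening on 127.0.0.1:6633:

# Attach Mininet to an external controller instead of creating one.
from mininet.net import Mininet
from mininet.node import RemoteController
from mininet.topo import SingleSwitchTopo

net = Mininet(topo=SingleSwitchTopo(k=3),
              controller=lambda name: RemoteController(name, ip='127.0.0.1', port=6633))
net.start()   # like: sudo mn --topo=single,3 --controller=remote
net.stop()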
21. MORE COMMANDS
Display Mininet Command Line Interface (CLI) commands:
mininet> help
Display nodes:
mininet> nodes
If the first string of the CLI command is a host, switch or controller name, the command is executed on that node. For instance, to show the interfaces of host h1:
mininet> h1 ifconfig
Test connectivity between hosts. For example, test the connectivity between h1 and h2:
mininet> h1 ping h2
Alternatively, you can test the connectivity between all hosts by typing:
mininet> pingall
Exit Mininet:
mininet> exit
Clean up: after every exit, do a cleanup:
$ sudo mn -c
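The same CLI commands have Python API equivalents; a minimal sketch for reference:

# API equivalents of the CLI commands above.
from mininet.net import Mininet
from mininet.topo import SingleSwitchTopo

net = Mininet(topo=SingleSwitchTopo(k=2))
net.start()
h1 = net.get('h1')             # look a node up by name
print(h1.cmd('ifconfig'))      # CLI: mininet> h1 ifconfig
net.pingAll()                  # CLI: mininet> pingall
net.stop()                     # afterwards, `sudo mn -c` still cleans up leftovers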
25. EXAMPLE
sudo mn --link tc,bw=20,delay=20ms
Take a moment to think about our current, very basic, topology:
(h1)-----20ms-----(s1)-----20ms-----(h2)
Q: When you send a ping, you measure the round-trip delay for an ICMP packet to travel from one host to another. Assuming our current deployment, what will be the reported round-trip delay?
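The same delayed links can be built from the Python API; a sketch, assuming Mininet's TCLink class:

# Two hosts joined through s1, each link shaped to 20 Mbit/s and 20 ms.
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.link import TCLink

class DelayTopo(Topo):
    def build(self):
        h1, h2 = self.addHost('h1'), self.addHost('h2')
        s1 = self.addSwitch('s1')
        self.addLink(h1, s1, bw=20, delay='20ms')
        self.addLink(h2, s1, bw=20, delay='20ms')

net = Mininet(topo=DelayTopo(), link=TCLink)
net.start()
net.pingAll()   # each direction crosses two 20 ms links
net.stop()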
26. EXAMPLE (CONT..)
A1: ~80 ms
A ping crosses each 20 ms link twice (once per direction), so the round trip traverses four delayed links: 4 × 20 ms = 80 ms.
Let's test that assertion!
mininet> h1 ping -c 8 h2
27. SIMPLE EXERCISE
Do not confuse dpctl with a controller (it's not the same thing) - dpctl is just a management/monitoring utility!
Use ovs-ofctl instead of dpctl for Open vSwitch when running a controller
In this scenario we have no remote controller running
28. EXERCISE (CONT..)
STEP 1: Start Mininet with a single switch (the default, Open vSwitch = ovsk) and 3 hosts:
mininet@mininet-vm:~$ sudo mn --topo=single,3 --mac --switch=ovsk --controller=remote
You will see this message: "Unable to contact the remote controller at 127.0.0.1:6633". This is because, for the time being, we are going to use Mininet without any controller.
29. EXERCISE (CONT..)
In order to double-check that everything started correctly, use the following Mininet commands:
nodes - to list all virtual devices in the topology
net - to list the links between them
dump - to see more info about the hosts
30. EXERCISE (CONT..)
STEP 2: Open terminals for each host and run tcpdump on each:
Attention: for Windows/Mac users, make sure you have installed and run Xming/XQuartz, and enabled X-forwarding if you are using an ssh session to the Mininet VM!
> xterm h1 h2 h3
In the xterms for h2 and h3, run tcpdump, a utility to print the packets seen by a host:
# tcpdump -XX -n -i h2-eth0
and respectively:
# tcpdump -XX -n -i h3-eth0
In the xterm for h1, send a ping:
# ping -c1 10.0.0.2
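The same experiment can also be scripted without xterms; a sketch using the Python API, assuming the single,3 topology from STEP 1:

# Capture on h2 while h1 pings it; with no controller, expect the ping
# to fail (as the next slides show).
from mininet.net import Mininet
from mininet.topo import SingleSwitchTopo

net = Mininet(topo=SingleSwitchTopo(k=3), controller=None)
net.start()
h1, h2 = net.get('h1', 'h2')
capture = h2.popen('tcpdump -XX -n -i h2-eth0')  # background capture on h2
print(h1.cmd('ping -c1', h2.IP()))               # one ping from h1 to h2
capture.terminate()
net.stop()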
31. EXERCISE (CONT..)
STEP 3: Test connectivity between h1 and h2: on host h1, perform ping -c3 10.0.0.2 (the IP address of host h2)
32. EXERCISE (CONT..)
Results:
The ping will fail, because the switch does NOT know what to do with such traffic (and remember, we don't run any controller)
33. CHECKING FLOW RULES
Checking the list of flows on the switch (with the command dpctl dump-flows) will show an empty list (again, nobody has told the switch how to deal with the traffic)
34. CHECKING FLOW RULES
Open a new terminal window or create a second SSH window, if you don't already have one, and run:
$ dpctl show tcp:127.0.0.1:6634
The 'show' command connects to the switch and dumps out its port state and capabilities.
$ dpctl dump-flows tcp:127.0.0.1:6634
Since we haven't started any controller yet, the flow table should be empty.
35. ADDING FLOW RULES
STEP 4: Manually add flows on the switch to allow connectivity between h1 and h2
Use the dpctl add-flow utility to manually install flows on the switch that will allow connectivity between host h1 and host h2.
$ dpctl add-flow tcp:127.0.0.1:6634 in_port=1,actions=output:2
everything received on port 1 (in_port) is sent out on port 2
$ dpctl add-flow tcp:127.0.0.1:6634 in_port=2,actions=output:1
everything received on port 2 (return traffic) is sent out on port 1
37. FLOW RULES
Result:
The ping is successful
tcpdump on host h2 shows the traffic from/to h1 (ARP and ICMP)
tcpdump on host h3 does not see anything, not even the ARP, which would normally be broadcast! (The manual rules send everything from port 1 straight to port 2, so nothing is ever flooded to h3.)
38. ACTIVATE WIRESHARK
Start Wireshark as a background process:
$ sudo wireshark &
Click on OK to clear any error messages
39. OBSERVE SDN TRAFFIC
Start a capture in Wireshark using the loopback interface
Create and apply a filter for just the OpenFlow traffic by entering a display filter in Wireshark using the string:
of
40. LOAD THE CONTROLLER
To generate some traffic, we will load a controller, as that is the next step anyway
There are a number of software-based or hardware-based controllers that can be used in an SDN
In this example we will load the POX controller
41. LOAD THE CONTROLLER
To start POX, enter these commands:
$ cd pox
$ ./pox.py forwarding.l2_learning
45. MININET APPS
GUI - Automatic Creation of Mininet Scripts
Visual Network Description - VND (http://www.ramonfontes.com/vnd) - a GUI tool that allows automatic creation of Mininet and OpenFlow controller scripts.
GUI - MiniEdit
Included with Mininet in the examples/ directory: miniedit.py
46. START MINIEDIT
The MiniEdit script is located in Mininet's examples folder.
To run MiniEdit, execute the command:
$ sudo ~/mininet/examples/miniedit.py
Mininet needs to run with root privileges, so start MiniEdit using the sudo command.
49. CONFIGURE THE CONTROLLERS
Right-click on each controller and select Properties from the menu that appears.
The default port number for each controller is 6633.
Change this so the port numbers used by controllers c0, c1, and c2 are 6633, 6634, and 6635, respectively.
51. SET MINIEDIT PREFERENCES
To set MiniEdit preferences, go to Edit → Preferences. In the dialogue box that appears, make the changes you need.
52. SET MINIEDIT PREFERENCES
Set the Start CLI option
53. SAVE THE CONFIGURATION
Save the topology file
To save the Mininet Topology (*.mn) file, click on File in the top menu bar and select Save from the drop-down menu. Type in a file name and save the file.
Save a custom Mininet script
To save the Mininet Custom Topology (*.py) file, click on File in the top menu bar and select Save Level 2 Script from the drop-down menu. Type in the file name and save the file.
54. RUN THE MINIEDIT NETWORK SCENARIO
To start the simulation scenario, click the Run button on the MiniEdit GUI
In the terminal window from which you started MiniEdit, you will see some messages showing the progress of the simulation startup, and then the MiniEdit CLI prompt (because we checked the Start CLI box in the MiniEdit preferences window).
57. CONT..
First, change the userid from root to mininet:
# su mininet
Then, check the flow table on switch s1 using the command below. It should be empty.
$ sudo ovs-ofctl dump-flows s1
58. RUN PROGRAMS TO GENERATE AND MONITOR TRAFFIC
Open an xterm window on hosts h1 and h8. Right-click on each host in the MiniEdit GUI and select Terminal from the menu that appears.
In the h1 xterm window, start Wireshark with the command wireshark &.
In the h8 xterm window, start a packet trace with the tcpdump command:
$ tcpdump -n -i h8-eth0
Run a ping command to send traffic between hosts h1 and h8:
mininet> h1 ping h8
60. SIMULATE A BROKEN LINK
Right-click on the link. Choose Link Down from the menu that appears
61. CONT..
Ping again:
No more traffic is received at host h8, and the ping command shows that packets sent from host h1 are not being responded to
Restore the link by choosing Link Up
Check the flow tables:
$ sudo ovs-ofctl dump-flows s1
62. STOP THE SIMULATION
Quit Wireshark and tcpdump on hosts h1 and h8.
Quit the ping command in the MiniEdit console window by pressing Ctrl-C on the keyboard.
Then, quit the Mininet CLI by typing exit at the mininet> prompt.
Now, press the Stop button on the MiniEdit GUI.
63. RUN A SAVED MININET CUSTOM TOPOLOGY SCRIPT
$ cd ~/mininet/<filepath>
$ sudo chmod 777 <filename>.py
$ sudo ./<filename>.py
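For reference, a saved custom topology script is ordinary Python; a minimal sketch of what such a <filename>.py might contain (the names here are illustrative, not MiniEdit's exact output):

#!/usr/bin/env python
# A minimal runnable custom-topology script: two hosts on one switch.
from mininet.net import Mininet
from mininet.topo import Topo
from mininet.cli import CLI
from mininet.log import setLogLevel

class MyTopo(Topo):
    def build(self):
        h1, h2 = self.addHost('h1'), self.addHost('h2')
        s1 = self.addSwitch('s1')
        self.addLink(h1, s1)
        self.addLink(h2, s1)

if __name__ == '__main__':
    setLogLevel('info')
    net = Mininet(topo=MyTopo())
    net.start()
    CLI(net)    # drop into the mininet> prompt
    net.stop()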
64. VISUAL NETWORK DESCRIPTION (VND)
Open a web browser and go to http://www.ramonfontes.com/vnd/#
65. CREATE A CUSTOM NETWORK TOPOLOGY
Create the following topology using drag and drop
69. RUN CONTROLLER
Use the l2_multi POX controller module to find the shortest path from sender to receiver for forwarding packets.
Run the l2_multi POX controller.
Use the discovery module to construct the network topology.
When the topology is known, l2_multi can use the Floyd-Warshall algorithm to find a shortest path.
Note: l2_multi.py is under /pox/pox/forwarding and discovery.py is under /pox/pox/openflow
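For intuition, here is a generic Floyd-Warshall sketch in Python (illustrative only; not the actual l2_multi implementation):

# All-pairs shortest-path distances over a weighted topology graph.
INF = float('inf')

def floyd_warshall(nodes, edges):
    # edges: dict mapping (u, v) -> link cost
    dist = {(u, v): 0 if u == v else edges.get((u, v), INF)
            for u in nodes for v in nodes}
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if dist[(i, k)] + dist[(k, j)] < dist[(i, j)]:
                    dist[(i, j)] = dist[(i, k)] + dist[(k, j)]
    return dist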
73. CONT..
There are two paths from h1 to h2:
h1->s3->s4->h2
h1->s3->s5->s6->s4->h2
The shortest path is h1->s3->s4->h2.
Check the rules for s3 (we can see the rules for ARP and IP operations between 10.0.0.1 (h1) and 10.0.0.2 (h2))
79. SWITCHING HUB
A switching hub has a variety of functions:
It learns the MAC address of the host connected to a port and retains it in the MAC address table.
When receiving packets addressed to a host it has already learned, it transfers them to the port connected to that host.
When receiving packets addressed to an unknown host, it performs flooding.
90. CONFIRMING OPERATION
Execute ping from host 1 to host 2:
mininet> h1 ping -c 1 h2
Before executing the ping command, execute the tcpdump command so that it is possible to check which packets were received by each host.