This repo is used to test new network configurations for EVE OS using docker containers to simulate different network stacks and CLI tools to make configuration changes. The purpose is to quickly and easily validate proposed network configuration changes before commencing any implementation work in the EVE repository.
Developed and tested on Ubuntu 20.04.
The EdgeDevice JSON configuration corresponding to this scenario can be found here.
This is the simplest scenario we could think of that covers all important aspects of networking in EVE OS:
- edge device has 4 uplink interfaces: `eth0`, `eth1`, `eth2`, `eth3`
- 6 network instances are created (see the bridge sketch after this list):
  - local network `NI1`, using `eth0` as uplink (at the moment in time being simulated here)
  - local network `NI2`, also using `eth0` as uplink
  - local network `NI3`, using `eth1` as uplink
  - vpn network `NI4`, using `eth1` as uplink
  - vpn network `NI5`, using `eth2` as uplink
  - switch network `NI6`, bridged with `eth3`
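Roughly, in Linux terms a local network instance is an internal bridge that apps attach to (with routing/NAT towards the uplink), while a switch network instance enslaves the uplink itself to the bridge. A minimal sketch of that difference; the bridge names (`bn1`, `bn6`) and the `NI1` subnet are invented for illustration:

```bash
# Local NI: an internal bridge with its own subnet; app traffic is routed
# (and NATed) towards the uplink eth0 from the zedbox namespace.
ip link add name bn1 type bridge
ip addr add 10.10.1.1/24 dev bn1
ip link set bn1 up

# Switch NI: the uplink itself is enslaved to the bridge, so apps sit on the
# same L2 segment as the external router (GW).
ip link add name bn6 type bridge
ip link set eth3 master bn6
ip link set bn6 up
```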
- 6 applications are deployed (see the veth sketch after this list):
  - `app3` is VM-in-container, the other applications are plain containers
  - `app1`, connected to `NI1` and `NI2`
    - it runs an HTTP server on local port `80`
  - `app2`, connected to `NI2`
    - it runs an HTTP server on local port `80`
  - `app3`, connected to `NI3`
  - `app4`, connected to `NI4`
  - `app5`, connected to `NI5`
  - `app6`, connected to `NI6`
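One way the simulation can attach an app container to a network instance is a veth pair: one end moved into the container's network namespace, the other enslaved to the NI bridge. A sketch under that assumption; the container name, interface names and addresses are all invented:

```bash
# PID of the (hypothetical) app1 container, used to reach its network namespace.
APP_PID=$(docker inspect -f '{{.State.Pid}}' app1)

# Create a veth pair and move one end into the app container.
ip link add app1-eth0 type veth peer name app1-port
ip link set app1-eth0 netns "$APP_PID"
ip link set app1-port master bn1 up

# Configure the in-container end with an address from the illustrative NI1 subnet.
nsenter -t "$APP_PID" -n ip addr add 10.10.1.10/24 dev app1-eth0
nsenter -t "$APP_PID" -n ip link set app1-eth0 up
nsenter -t "$APP_PID" -n ip route add default via 10.10.1.1
```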
- there is a `GW` container, simulating the router to which the edge device is connected
  - for simplicity, in this simulation all uplinks are connected to the same router
  - `GW` runs dnsmasq as an (eve-external) DNS+DHCP service for the switch network `NI6` (see the sketch below)
    - i.e. this is the DHCP server that will allocate an IP address for `app6`
  - the `GW` container is connected to the host via a docker bridge with NAT
    - this gives apps access to the Internet
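For reference, a dnsmasq invocation along these lines provides the DHCP+DNS service on the `GW` side of the switch network; the interface name and address range are illustrative, not taken from the repo:

```bash
# Inside the GW container: serve DHCP and DNS on GW's interface facing the
# eth3/NI6 segment (interface name and addresses are made up).
dnsmasq --no-daemon \
        --interface=eth-sw \
        --bind-interfaces \
        --dhcp-range=192.168.3.50,192.168.3.150,12h \
        --dhcp-option=option:router,192.168.3.1 \
        --dhcp-option=option:dns-server,192.168.3.1
```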
- there is a `zedbox` container, representing the default network namespace of EVE OS
  - in the multi-ns proposal there is also one container per local network instance (see the namespace sketch below)
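The multi-ns proposal boils down to giving each local NI its own network namespace, so that its addresses, routes and conntrack state are isolated from the zedbox namespace and from other NIs. A minimal sketch with invented names, reusing the illustrative `bn1` bridge but now creating it inside its own namespace:

```bash
# One network namespace per local network instance (names are illustrative).
ip netns add ni1
ip netns add ni2

# Each NI bridge lives inside its own namespace, so e.g. NI1 and NI3 may use
# identical subnets without their routes or conntrack entries colliding.
ip link add name bn1 type bridge
ip link set bn1 netns ni1
ip netns exec ni1 ip addr add 10.10.1.1/24 dev bn1
ip netns exec ni1 ip link set bn1 up
```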
- remote clouds are represented by the `cloud1` and `cloud2` containers
  - in both clouds there is an HTTP server running on port `80`
  - VPN network `NI4` is configured to open IPsec tunnel to `cloud1`
  - VPN network `NI5` is configured to open IPsec tunnel to `cloud2` (see the xfrm sketch below)
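Overlap (4) from the list at the end of this section is what makes these two tunnels tricky: they use identical traffic selectors. Expressed as raw `ip xfrm` policies (all addresses invented for illustration), the second policy would collide with the first if both tunnels lived in the same network namespace, which is exactly why per-NI separation is needed:

```bash
# NI4 -> cloud1: protect traffic matching the (shared) selectors.
ip xfrm policy add src 10.20.0.0/24 dst 192.168.50.0/24 dir out \
   tmpl src 172.22.1.2 dst 172.22.1.10 proto esp mode tunnel

# NI5 -> cloud2: the very same selectors, only a different remote endpoint;
# in a single namespace this add is rejected as a duplicate policy.
ip xfrm policy add src 10.20.0.0/24 dst 192.168.50.0/24 dir out \
   tmpl src 172.22.2.2 dst 172.22.2.10 proto esp mode tunnel
```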
- the simulated ACLs are configured as follows:
  - `app1`:
    - able to access `*github.com`
    - able to access `app2` http server:
      - either directly via `NI2` (`eidset` rule with `fport=80` (TCP))
      - or by hairpinning: `NI1` -> `zedbox` namespace (using portmap) -> `NI2`
        - i.e. without leaving the edge node (note that this should be allowed because `NI1` and `NI2` use the same uplink)
        - not sure what the `ACCEPT` ACE should look like in this case - statically configured uplink subnet(s)?
  - `app2`:
    - http server is exposed on the uplink IP and port `8080` (see the iptables sketch after this list)
    - is able to access `eidset`/`fport=80 (TCP)` - which means it can talk to `app1` http server
  - `app3`:
    - is able to communicate with any IPv4 endpoint
  - `app4`:
    - is able to access any endpoint (on the cloud) listening on the HTTP port `80` (TCP)
  - `app5`:
    - is able to access any endpoint (on the cloud) listening on the HTTP port `80` (TCP)
  - `app6`:
    - is able to access `app2` by hairpinning outside the box
      - this is however limited to 5 packets per second with bursts up to 15 packets
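Two of the ACL items above map naturally onto iptables rules in the zedbox namespace: the portmap exposing `app2` on the uplink IP, and the rate limit on the hairpinned `app6` traffic. A hedged sketch, with the NI2-internal address of `app2` invented:

```bash
# Portmap: expose app2's HTTP server (invented NI2 address 10.10.2.10) on the
# uplink IP at port 8080.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 8080 \
         -j DNAT --to-destination 10.10.2.10:80

# Rate-limit the hairpinned app6 -> app2 flow after DNAT: accept at most
# 5 packets per second with bursts up to 15 packets, drop the excess.
iptables -A FORWARD -i eth0 -d 10.10.2.10 -p tcp --dport 80 \
         -m limit --limit 5/second --limit-burst 15 -j ACCEPT
iptables -A FORWARD -i eth0 -d 10.10.2.10 -p tcp --dport 80 -j DROP
```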
Note that several overlaps are present in the scenario on purpose:

- (1) IP subnets of `NI1` and `NI3` are identical
- (2) IP subnet of `NI2` and that of the uplink `eth1` are identical
- (3) IP subnets of the remote cloud networks and that of the uplink `eth0` are identical
- (4) Traffic selectors of the IPsec tunnels `NI4`<->`cloud1` and `NI5`<->`cloud2` are identical

Because of (1), (2), (3) and (4), the separation of NIs via namespaces or VRFs is necessary (see the VRF sketch below).
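For the VRF flavour of that separation, each NI bridge is enslaved to its own VRF device, so route lookups happen in per-NI tables even when subnets overlap. A minimal sketch with invented device names and table IDs:

```bash
# One VRF per network instance, each with its own routing table.
ip link add vrf-ni1 type vrf table 101
ip link add vrf-ni3 type vrf table 103
ip link set vrf-ni1 up
ip link set vrf-ni3 up

# Enslave the NI bridges to their VRFs; NI1 and NI3 may now use identical
# subnets, because lookups happen in separate per-VRF tables.
ip link set bn1 master vrf-ni1
ip link set bn3 master vrf-ni3

# Run a command with its sockets scoped to one VRF, e.g. to test reachability.
ip vrf exec vrf-ni1 ping -c1 10.10.1.10
```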