GRE(4) Device Drivers Manual GRE(4)

NAME

gre, egre, nvgre - Generic Routing Encapsulation network device

SYNOPSIS

pseudo-device gre

DESCRIPTION

The gre pseudo-device provides interfaces for tunnelling protocols across IPv4 and IPv6 networks using the Generic Routing Encapsulation (GRE) protocol.
GRE datagrams (IP protocol number 47) consist of a GRE header and an outer IP header for encapsulating another protocol's datagram. The GRE header specifies the type of the encapsulated datagram, allowing for the tunnelling of multiple protocols.
Different tunnels between the same endpoints may be distinguished by an optional Key field in the GRE header. The Key field may be partitioned to carry flow information about the encapsulated traffic to allow better use of multipath links.
This pseudo-device driver provides the following clonable network interfaces:
gre
Point-to-point Layer 3 tunnel interfaces.
egre
Point-to-point Ethernet tunnel interfaces.
nvgre
Network Virtualization Using Generic Routing Encapsulation (NVGRE) overlay Ethernet network interfaces.
All GRE packet processing in the system is allowed or denied by setting the net.inet.gre.allow sysctl(8) variable. To allow GRE packet processing, set net.inet.gre.allow to 1.
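For example, GRE processing can be enabled at runtime with sysctl(8), and the setting can be made persistent across reboots in sysctl.conf(5):
# sysctl net.inet.gre.allow=1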
gre, egre, and nvgre interfaces can be created at runtime using the ifconfig ifaceN create command or by setting up a hostname.if(5) configuration file for netstart(8).
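As a minimal sketch, a gre interface could be created and configured at runtime as follows; the outer endpoint addresses 192.0.2.1 and 203.0.113.2 and the inner addresses 172.16.0.1 and 172.16.0.2 are placeholders, and a hostname.gre0 file containing equivalent lines (see hostname.if(5)) can be used for configuration at boot:
# ifconfig gre0 create
# ifconfig gre0 tunnel 192.0.2.1 203.0.113.2
# ifconfig gre0 172.16.0.1 172.16.0.2 netmask 0xffffffff up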
For correct operation, encapsulated traffic must not be routed over the interface itself. This can be avoided by adding a route to the tunnel destination that is distinct from, or more specific than, the routes for the hosts or networks reached via the tunnel interface. Alternatively, the tunnel traffic may be configured in a routing table separate from that of the encapsulated traffic.
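As a sketch, assuming the remote tunnel endpoint 203.0.113.2 is reachable via the physical next hop 192.0.2.254, a host route keeps the encapsulated packets off the tunnel itself; alternatively, the tunnel traffic can be moved to a separate routing table, here table 1 which is assumed to exist, with the tunneldomain option:
# route add -host 203.0.113.2 192.0.2.254
# ifconfig gre0 tunneldomain 1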

Point-to-Point Layer 3 GRE tunnel interfaces (gre)

A gre tunnel can encapsulate IPv4, IPv6, and MPLS packets. The MTU is set to 1476 by default to match the value used by Cisco routers.
gre supports sending keepalive packets to the remote endpoint which allows tunnel failure to be detected. To return keepalives, the remote host must be configured to forward IP packets received from inside the tunnel back to the address of the local tunnel endpoint.
gre interfaces may be configured to receive IPv4 packets in Web Cache Communication Protocol (WCCP) encapsulation by setting the link0 flag on the interface. WCCP reception may be enabled globally by setting the net.inet.gre.wccp sysctl value to 1. Some magic with the packet filter configuration and a caching proxy like squid are needed to do anything useful with these packets. This sysctl requires net.inet.gre.allow to also be set.
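For example, assuming net.inet.gre.allow has already been enabled as above, WCCP reception could be turned on globally, or per interface with the link0 flag:
# sysctl net.inet.gre.wccp=1
# ifconfig gre0 link0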

Point-to-Point Ethernet over GRE tunnel interfaces (egre)

An egre tunnel interface carries Ethernet over GRE (EoGRE). Ethernet traffic is encapsulated using Transparent Ethernet (0x6558) as the protocol identifier in the GRE header, as per RFC 1701. The MTU is set to 1500 by default.
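A minimal sketch of an egre tunnel, using the placeholder endpoints 192.0.2.1 and 203.0.113.2 and an arbitrary virtual network identifier; the resulting Ethernet interface would typically be added to a bridge(4) or configured with addresses directly:
# ifconfig egre0 create
# ifconfig egre0 tunnel 192.0.2.1 203.0.113.2
# ifconfig egre0 vnetid 100
# ifconfig egre0 up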

Network Virtualization Using GRE interfaces (nvgre)

nvgre interfaces allow construction of virtual overlay Ethernet networks on top of an IPv4 or IPv6 underlay network as per RFC 7637. Ethernet traffic is encapsulated using Transparent Ethernet (0x6558) as the protocol identifier in the GRE header, a 24-bit Virtual Subnet ID (VSID), and an 8-bit FlowID.
By default the MTU of an nvgre interface is set to 1500, and the Don't Fragment flag is set. The MTU on the network interfaces carrying underlay network traffic must be raised to accommodate this and the overhead of the NVGRE encapsulation, or the nvgre interface must be reconfigured for less capable underlays.
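For example, if the underlay MTU cannot be raised, the nvgre interface could instead be shrunk and fragmentation of its tunnel traffic permitted; the value 1450 is only an illustration and must leave room for the NVGRE encapsulation on the underlay in use:
# ifconfig nvgre0 mtu 1450 -tunneldf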
The underlay network parameters on an nvgre interface are a unicast tunnel source address, a multicast tunnel destination address, and a parent network interface. The unicast source address is used as the NVE Provider Address (PA) on the underlay network. The parent interface is used to identify which interface the multicast group should be joined to.
The multicast group is used to transport broadcast and multicast traffic from the overlay to other participating NVGRE endpoints. It is also used to flood unicast traffic to Ethernet addresses in the overlay with an unknown association to an NVGRE endpoint. Traffic received from other NVGRE endpoints, either to the Provider Address or via the multicast group, is used to learn associations between Ethernet addresses in the overlay network and the Provider Addresses of NVGRE endpoints in the underlay.

Programming Interface

gre, egre, and nvgre interfaces support the following ioctl(2) calls for configuring tunnel options; the equivalent ifconfig(8) options are sketched after the ioctl lists below:
SIOCSLIFPHYADDR struct if_laddrreq *
Set the IPv4 or IPv6 addresses for the encapsulating IP packets. The addresses may only be configured while the interface is down.
gre and egre interfaces support configuration of unicast IP addresses as the tunnel endpoints.
nvgre interfaces support configuration of a unicast IP address as the local endpoint and a multicast group address as the destination address.
SIOCGLIFPHYADDR struct if_laddrreq *
Get the addresses used for the encapsulating IP packets.
SIOCDIFPHYADDR struct ifreq *
Clear the addresses used for the encapsulating IP packets. The addresses may only be cleared while the interface is down.
SIOCSVNETID struct ifreq *
Configure a virtual network identifier for use in the GRE Key header. The virtual network identifier may only be configured while the interface is down.
gre and egre interfaces configured with a virtual network identifier will enable the use of the GRE Key header. The Key is a 32-bit value by default, or a 24-bit value when the virtual network flow identifier is enabled.
nvgre interfaces use the virtual network identifier in the 24-bit Virtual Subnet Identifier (VSID), aka Tenant Network Identifier (TNI), field of the GRE Key header.
SIOCGVNETID struct ifreq *
Get the virtual network identifier used in the GRE Key header.
SIOCDVNETID struct ifreq *
Disable the use of the virtual network identifier. The virtual network identifier may only be disabled while the interface is down.
When the virtual network identifier is disabled on gre and egre interfaces, it disables the use of the GRE Key header.
nvgre interfaces do not support this ioctl as a Virtual Subnet Identifier is required by the protocol.
SIOCSLIFPHYRTABLE struct ifreq *
Set the routing table the tunnel traffic operates in. The routing table may only be configured while the interface is down.
SIOCGLIFPHYRTABLE struct ifreq *
Get the routing table the tunnel traffic operates in.
SIOCSLIFPHYTTL struct ifreq *
Set the Time-To-Live field in IPv4 encapsulation headers, or the Hop Limit field in IPv6 encapsulation headers.
gre interfaces configured with a TTL of -1 will copy the TTL in and out of the encapsulated protocol headers.
SIOCGLIFPHYTTL struct ifreq *
Get the value used in the Time-To-Live field in an IPv4 encapsulation header or the Hop Limit field in an IPv6 encapsulation header.
SIOCSLIFPHYDF struct ifreq *
Configure whether the tunnel traffic sent by the interface can be fragmented or not. This sets the Don't Fragment (DF) bit on IPv4 packets, and disables fragmentation of IPv6 packets.
SIOCGLIFPHYDF struct ifreq *
Get whether the tunnel traffic sent by the interface can be fragmented or not.
gre and egre interfaces support the following ioctl(2) calls:
SIOCSVNETFLOWID struct ifreq *
Enable or disable the partitioning of the optional GRE Key header into a 24-bit virtual network identifier and an 8-bit flow identifier.
gre and egre must already be configured with a virtual network identifier before enabling flow identifiers in the GRE Key header. The configured virtual network identifier must also fit into 24 bits.
SIOCGVNETFLOWID struct ifreq *
Get the status of the partitioning of the GRE key.
gre interfaces support the following ioctl(2) calls:
SIOCSETKALIVE struct ifkalivereq *
Enable the transmission of keepalive packets to detect tunnel failure.
Setting the keepalive period or count to 0 disables keepalives on the tunnel.
SIOCGETKALIVE struct ifkalivereq *
Get the configuration of keepalive packets.
nvgre interfaces support the following ioctl(2) calls:
SIOCSIFPARENT struct if_parent *
Configure which interface will be joined to the multicast group specified by the tunnel destination address. The parent interface may only be configured while the interface is down.
SIOCGIFPARENT struct if_parent *
Get the name of the interface used for multicast communication.
SIOCDIFPARENT struct ifreq *
Remove the configuration of the interface used for multicast communication.
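These ioctls are normally driven via ifconfig(8) rather than called directly. The following sketch, on an existing gre0 interface with placeholder addresses, shows options that roughly correspond to the ioctls above: tunnel to SIOCSLIFPHYADDR, tunneldomain to SIOCSLIFPHYRTABLE, tunnelttl to SIOCSLIFPHYTTL, tunneldf to SIOCSLIFPHYDF, vnetid to SIOCSVNETID, vnetflowid to SIOCSVNETFLOWID, and keepalive to SIOCSETKALIVE:
# ifconfig gre0 tunnel 192.0.2.1 203.0.113.2
# ifconfig gre0 tunneldomain 0
# ifconfig gre0 tunnelttl 64
# ifconfig gre0 tunneldf
# ifconfig gre0 vnetid 7 vnetflowid
# ifconfig gre0 keepalive 10 3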

EXAMPLES

Point-to-Point Layer 3 GRE tunnel interfaces (gre)

gre Configuration example:
Host X ---- Host A ------------ tunnel ------------ Cisco D ---- Host E 
               \                                      / 
                \                                    / 
                 +------ Host B ------ Host C ------+
On Host A (OpenBSD):
# route add default B 
# ifconfig greN create 
# ifconfig greN A D netmask 0xffffffff up 
# ifconfig greN tunnel A D 
# route add E D
On Host D (Cisco):
Interface TunnelX 
 ip unnumbered D   ! e.g. address from Ethernet interface 
 tunnel source D   ! e.g. address from Ethernet interface 
 tunnel destination A 
ip route C <some interface and mask> 
ip route A mask C 
ip route X mask tunnelX
OR
On Host D (OpenBSD):
# route add default C 
# ifconfig greN create 
# ifconfig greN D A 
# ifconfig greN tunnel D A
To reach Host A over the tunnel (from Host D), there has to be an alias on Host A for the Ethernet interface:
# ifconfig <etherif> alias Y
and on the Cisco:
ip route Y mask tunnelX
gre keepalive packets may be enabled with ifconfig(8) like this:
# ifconfig greN keepalive period count
This will send a keepalive packet every period seconds. If no response is received in count * period seconds, the link is considered down. To return keepalives, the remote host must be configured to forward packets:
# sysctl net.inet.ip.forwarding=1
If pf(4) is enabled then it is necessary to add a pass rule specific for the keepalive packets. The rule must use no state because the keepalive packet is entering the network stack multiple times. In most cases the following should work:
pass quick on gre proto gre no state

Network Virtualization Using GRE interfaces (nvgre)

NVGRE can be used to build a distinct logical Ethernet network on top of another network. nvgre is therefore like a vlan(4) interface configured on top of a physical Ethernet interface, except it can sit on any IP network capable of multicast.
The following shows a basic nvgre configuration and an equivalent vlan(4) configuration. In the examples, 192.168.0.1/24 will be the network configured on the relevant virtual interfaces. The NVGRE underlay network will be configured on 100.64.10.0/24, and will use 239.1.1.100 as the multicast group address.
The vlan(4) interface relies only on Ethernet; it does not require IP configuration on the parent interface:
# ifconfig em0 up 
# ifconfig vlan0 create 
# ifconfig vlan0 parent em0 
# ifconfig vlan0 vnetid 10 
# ifconfig vlan0 inet 192.168.0.1/24 
# ifconfig vlan0 up
nvgre relies on IP configuration on the parent interface, and an MTU large enough to carry the encapsulated traffic:
# ifconfig em0 mtu 1600 
# ifconfig em0 inet 100.64.10.1/24 
# ifconfig em0 up 
# ifconfig nvgre0 create 
# ifconfig nvgre0 parent em0 tunnel 100.64.10.1 239.1.1.100 
# ifconfig nvgre0 vnetid 10010 
# ifconfig nvgre0 inet 192.168.0.1/24 
# ifconfig nvgre0 up
NVGRE is intended for use in a multitenant datacentre environment to provide each customer with distinct Ethernet networks as needed, but without running into the limit on the number of VLAN tags, and without requiring reconfiguration of the underlying Ethernet infrastructure. Another way to look at it is NVGRE can be used to construct multipoint Ethernet VPNs across an IP core.
For example, if a customer has multiple virtual machines running in vmm(4) on distinct physical hosts, nvgre and bridge(4) can be used to provide network connectivity between the tap(4) interfaces connected to the virtual machines. If there are three virtual machines, each using tap0 on its host, and those hosts are connected to the same network described above, an nvgre interface with a distinct virtual network identifier and multicast group can be created for them. The following assumes nvgre1 and bridge1 have already been created on each host, and em0 has had the MTU raised:
On physical host 1:
hv0# ifconfig em0 inet 100.64.10.10/24 
hv0# ifconfig nvgre1 parent em0 tunnel 100.64.10.10 239.1.1.111 
hv0# ifconfig nvgre1 vnetid 10011 
hv0# ifconfig bridge1 add nvgre1 add tap0 up
On physical host 2:
hv1# ifconfig em0 inet 100.64.10.11/24 
hv1# ifconfig nvgre1 parent em0 tunnel 100.64.10.11 239.1.1.111 
hv1# ifconfig nvgre1 vnetid 10011 
hv1# ifconfig bridge1 add nvgre1 add tap0 up
On physical host 3:
hv2# ifconfig em0 inet 100.64.10.12/24 
hv2# ifconfig nvgre1 parent em0 tunnel 100.64.10.12 239.1.1.111 
hv2# ifconfig nvgre1 vnetid 10011 
hv2# ifconfig bridge1 add nvgre1 add tap0 up
Working multicast and jumbo frames are unlikely to be available over the public internet, which makes it difficult to use NVGRE to extend Ethernet VPNs between different sites. nvgre and egre can be bridged together to provide such connectivity.
In this example the NVE device at the first site has a public IP of 192.0.2.1, and uses 100.64.10.0/24 for the NVGRE underlay network. The second site has a public IP of 203.0.113.2, and uses 100.64.11.0/24 for the NVGRE underlay. egre is explicitly configured to provide the same MTU as the nvgre interfaces, but allows the encapsulated frames to be fragmented. Multiple egre interfaces are used to carry traffic for the two different NVGRE networks, so each interface must be configured with a distinct virtual network identifier.
At the first site:
nve0# ifconfig nvgre0 parent em0 tunnel 100.64.10.1 239.1.1.100 
nve0# ifconfig nvgre0 vnetid 10000 
nve0# ifconfig egre0 create 
nve0# ifconfig egre0 tunnel 192.0.2.1 203.0.113.2 
nve0# ifconfig egre0 vnetid 10000 vnetflowid -tunneldf 
nve0# ifconfig bridge0 add nvgre0 add egre0 up 
nve0# ifconfig nvgre1 parent em0 tunnel 100.64.10.1 239.1.1.111 
nve0# ifconfig nvgre1 vnetid 10011 
nve0# ifconfig egre1 create 
nve0# ifconfig egre1 tunnel 192.0.2.1 203.0.113.2 
nve0# ifconfig egre1 vnetid 10011 vnetflowid -tunneldf 
nve0# ifconfig bridge1 add nvgre1 add egre1 up
At the second site:
nve1# ifconfig nvgre0 parent em0 tunnel 100.64.11.1 239.1.1.100 
nve1# ifconfig nvgre0 vnetid 10000 
nve1# ifconfig egre0 create 
nve1# ifconfig egre0 tunnel 203.0.113.2 192.0.2.1 
nve1# ifconfig egre0 vnetid 10000 vnetflowid -tunneldf 
nve1# ifconfig bridge0 add nvgre0 add egre0 up 
nve1# ifconfig nvgre1 parent em0 tunnel 100.64.11.1 239.1.1.111 
nve1# ifconfig nvgre1 vnetid 10011 
nve1# ifconfig egre1 create 
nve1# ifconfig egre1 tunnel 203.0.113.2 192.0.2.1 
nve1# ifconfig egre1 vnetid 10011 vnetflowid -tunneldf 
nve1# ifconfig bridge1 add nvgre1 add egre1 up

SEE ALSO

inet(4), ip(4), netintro(4), options(4), hostname.if(5), protocols(5), ifconfig(8), netstart(8), sysctl(8)

STANDARDS

S. Hanks, T. Li, D. Farinacci, and P. Traina, Generic Routing Encapsulation (GRE), RFC 1701, October 1994.
S. Hanks, T. Li, D. Farinacci, and P. Traina, Generic Routing Encapsulation over IPv4 networks, RFC 1702, October 1994.
D. Farinacci, T. Li, S. Hanks, D. Meyer, and P. Traina, Generic Routing Encapsulation (GRE), RFC 2784, March 2000.
G. Dommety, Key and Sequence Number Extensions to GRE, RFC 2890, September 2000.
P. Garg and Y. Wang, NVGRE: Network Virtualization Using Generic Routing Encapsulation, RFC 7637, September 2015.
Web Cache Coordination Protocol V1.0, https://tools.ietf.org/html/draft-ietf-wrec-web-pro-00.txt.
Web Cache Coordination Protocol V2.0, https://tools.ietf.org/html/draft-wilson-wrec-wccp-v2-00.txt.

AUTHORS

Heiko W. Rupp <hwr@pilhuhn.de>

CAVEATS

RFC 1701 and RFC 2890 describe a variety of optional GRE header fields in the protocol that are not implemented in the gre and egre interface drivers. The only optional field the drivers implement support for is the Key header.
gre interfaces skip the redirect header in WCCPv2 GRE encapsulated packets.
The NVGRE RFC specifies VSIDs 0 (0x0) to 4095 (0xfff) as reserved for future use, and VSID 16777215 (0xffffff) as reserved for vendor-specific endpoint communication. The NVGRE RFC also explicitly states that encapsulated Ethernet packets must not contain IEEE 802.1Q (VLAN) tags. The nvgre driver does not restrict the use of these VSIDs, and does not prevent the configuration of child vlan(4) interfaces or the bridging of VLAN tagged traffic across the tunnel. These non-restrictions allow non-compliant tunnels to be configured which may not interoperate with other vendors.

SECURITY CONSIDERATIONS

The GRE protocol in all its flavours does not provide any integrated security features. GRE should only be deployed on trusted private networks, or protected with IPsec to add authentication and encryption for confidentiality. IPsec is especially recommended when transporting GRE over the public internet.
The Packet Filter, pf(4), can be used to filter tunnel traffic with endpoint policies in pf.conf(5).
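As a sketch of such a policy, assuming a local endpoint of 192.0.2.1, a remote endpoint of 203.0.113.2, and that the egress interface group carries the tunnel traffic, pf.conf(5) rules could restrict GRE to the known peer:
block in on egress proto gre
pass in on egress proto gre from 203.0.113.2 to 192.0.2.1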
The Time-to-Live (TTL) value of a tunnel can be set to 1 or a low value to restrict the traffic to the local network:
# ifconfig gre0 tunnelttl 1
February 23, 2018 OpenBSD 6.1