NetworkManager
NetworkManager has undergone a significant update and now provides a command-line network management utility called nmcli. This is now the preferred way to configure networking.
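As a minimal sketch of the nmcli workflow (the interface name enp6s0, the connection name static-lan and the addresses below are placeholders to adapt to your environment):

  # List detected devices and existing connections
  nmcli device status
  nmcli connection show

  # Create a static IPv4 connection and bring it up
  nmcli connection add type ethernet con-name static-lan ifname enp6s0 ip4 192.168.1.10/24 gw4 192.168.1.1
  nmcli connection modify static-lan ipv4.dns 192.168.1.1
  nmcli connection up static-lan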
A Red Hat article has been written about the difference between network initscripts and NetworkManager. Interestingly, device hotplug support through udev rules has been disabled in RHEL 7 since it can result in race conditions when initializing newly found devices.
Also, a presentation of nmcli is available.
Network Device Naming
Because network device names could change when new hardware was added, it has been decided to apply a consistent network device naming scheme. This new naming convention is fully predictable and relies on the type of interface, the slot used, and so on.
Although this new rule is well suited for servers, it is not really desirable for laptops whose single network interface is now called enp6s0 or enp4s2f0 instead of eth0.
Fortunately, it is still possible to restore the old network interface naming convention.
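One common recipe (assuming a standard GRUB2-based RHEL 7 installation) is to disable the predictable naming scheme on the kernel command line:

  # /etc/default/grub -- append to the existing GRUB_CMDLINE_LINUX options:
  #   net.ifnames=0 biosdevname=0
  # then regenerate the GRUB configuration and reboot:
  grub2-mkconfig -o /boot/grub2/grub.cfg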
Speed Enhancement
Red Hat Enterprise Linux 7 now provides support for 40 Gigabit Ethernet link speeds and for the WiGig (IEEE 802.11ad) specification, increasing wireless performance up to 7 Gbps.
New Bonding Driver
Although there was already a bonding driver in RHEL 6, Red Hat has decided to create a new one called the Team Driver. It is more modular, with the control logic implemented in a user space daemon (teamd), which makes debugging easier.
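A minimal sketch with nmcli (team0, the activebackup runner and the enp6s0/enp7s0 port names are only examples):

  # Create a team interface using the activebackup runner
  nmcli connection add type team con-name team0 ifname team0 config '{"runner": {"name": "activebackup"}}'

  # Attach two ports and activate the team
  nmcli connection add type team-slave con-name team0-port1 ifname enp6s0 master team0
  nmcli connection add type team-slave con-name team0-port2 ifname enp7s0 master team0
  nmcli connection up team0

  # Query the state through the user space daemon
  teamdctl team0 state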
A lot of explanations can be found in this Red Hat blog.
Chrony
Chrony is a different implementation of the NTP v3 protocol (Network Time Protocol). It should replace ntpd for mobile and virtual systems because it can synchronize clocks more quickly and with better accuracy. Chrony also provides a much better response to rapid changes in the clock frequency, which is useful for virtual machines with unstable clocks or power-saving technologies that don’t keep the clock frequency constant.
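A minimal /etc/chrony.conf sketch (the pool.ntp.org servers are placeholders for your own time sources):

  server 0.pool.ntp.org iburst
  server 1.pool.ntp.org iburst
  driftfile /var/lib/chrony/drift
  # step the clock if the offset is larger than 1 second during the first 3 updates
  makestep 1.0 3
  rtcsync

After restarting chronyd (systemctl restart chronyd), synchronization can be checked with chronyc sources -v or chronyc tracking.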
An NTP configuration quick recipe is available.
If you don’t know which time service to run, HP can help you choose between chronyd and ntpd, and there is also this article comparing NTP implementations.
Also, Miroslav Lichvar wrote a detailed article about differences between ntpd and chronyd when dealing with leap seconds.
Precision Time Protocol
Red Hat Enterprise Linux 7 includes support for the IEEE 1588 Version 2 specification, Precision Time Protocol (PTP). When used in conjunction with hardware support found in various network interface cards and network switches, PTP is capable of sub-microsecond accuracy, far better than NTP. And, by using a GPS-based time source, PTP can even be used to synchronize disparate networks with a high degree of accuracy.
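A minimal sketch using the linuxptp tools shipped with RHEL 7 (the interface name enp6s0 is a placeholder, and hardware timestamping support on the NIC is assumed):

  # Check whether the NIC supports hardware timestamping
  ethtool -T enp6s0

  # Run PTP on the interface, printing messages to stdout
  ptp4l -i enp6s0 -m

  # With ptp4l running, synchronize the system clock to the NIC's PTP hardware clock
  phc2sys -a -r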
TCP Performance Optimizations
Red Hat Enterprise Linux 7 brings new TCP performance optimizations aimed at reducing overall communication latency:
- TCP Fast Open: an experimental TCP extension designed to reduce the overhead when establishing a TCP connection by eliminating one round-trip time (RTT) from certain kinds of TCP conversations. It’s useful for accelerating HTTP connection handshaking and could result in speed improvements of between 4% and 41% in the page load times on popular web sites (a sysctl sketch for enabling it follows this list).
- TCP Tail Loss Probe (TLP): an experimental algorithm that improves the efficiency of how the TCP networking stack deals with lost packets at the end of a TCP transaction. For short transactions, TLP should be able to reduce transmission timeouts by 15% and shorten HTTP response times by an average of 6%.
- TCP Early Retransmit: allows the transport to use fast retransmits to recover segment losses that would otherwise require a lengthy retransmission timeout. In other words, connections recover from lost packets faster, which improves overall latency.
- TCP Proportional Rate Reduction (PRR): an experimental algorithm designed to adapt transmission rates to the rates that can be processed by the recipient and by the routers along the way; especially after throttling the rate to prevent an imminent overload. It is designed to return to the maximum transfer rate faster and can help reduce HTTP response times by as much as 3-10%.
Source: Red Hat blog.
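As a sketch, two of these mechanisms can be inspected or enabled through sysctl (the values shown are examples, not tuning advice):

  # TCP Fast Open: bitmask, 1 = outgoing connections, 2 = listening sockets, 3 = both
  sysctl -w net.ipv4.tcp_fastopen=3

  # Early Retransmit / Tail Loss Probe: the default value 3 enables delayed early retransmit and TLP
  sysctl net.ipv4.tcp_early_retrans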
Low Latency Sockets using Busy Poll
Low Latency Sockets is a software implementation designed to reduce networking latency and jitter. The native protocol stack is enhanced with a low latency path used in conjunction with packet classification by the NIC. This feature allows an application to enable polling for new packets directly in the device driver. It is designed to be transparent, to make polling easy for applications to use, and to benefit applications sensitive to unpredictable latency.
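Busy polling can be enabled system wide through sysctl, or per socket with the SO_BUSY_POLL socket option; a sketch of the sysctl approach, where the 50 microsecond budget is only an example:

  # busy-poll budget in microseconds for blocking reads on supported sockets
  sysctl -w net.core.busy_read=50
  # same budget for poll() and select()
  sysctl -w net.core.busy_poll=50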
Red Hat Presentation
During the Red Hat annual Summit (2014), a detailed presentation was given about the networking changes in RHEL 7.