
VMA over RHEL 7.x with inbox driver


This page will help you get VMA installed and running on a clean RHEL system using the inbox drivers that ship with Red Hat.
That means you will not need any additional MLNX_OFED driver beyond what Red Hat provides and the latest libvma copy.
NOTE: Only RHEL versions 7.2 and up are supported.

1. Install RHEL-7.x

Follow the installation guide from Red Hat.

2. Verify Mellanox NIC is connected properly

$ lspci | grep Mellanox
Output should be something like this:
   15:00.0 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]
   15:00.1 Ethernet controller: Mellanox Technologies MT27700 Family [ConnectX-4]
   1a:00.0 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
   1a:00.1 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
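
To confirm that the inbox driver has claimed the NIC, you can also check the loaded kernel modules and the RDMA devices. This is a minimal sketch; ibv_devinfo comes from the libibverbs-utils package, which may need to be installed first:

    # ConnectX-4/5 are driven by mlx5, ConnectX-3 by mlx4
    lsmod | grep -E 'mlx4|mlx5'

    # List RDMA devices and port state
    ibv_devinfo | grep -E 'hca_id|state'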

3. Install libvma

RHEL-7.3 and above:

 yum install libvma
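
As a quick check (a sketch, not part of the official steps), you can verify that the package and its library landed in the expected location:

    rpm -q libvma
    ls /usr/lib64/libvma.so*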

RHEL-7.2:

  1. Install prerequisites:
    Basic user-space access drivers for Ethernet and InfiniBand:

    yum groupinstall -y Infiniband
    

    Additional devel packages for libvma compilation:

    yum install libibverbs-devel.x86_64 librdmacm-devel.x86_64
    

    Important! Do not use libnl-devel; use libnl3-devel:

    yum --setopt=group_package_types=optional install libnl3-devel  
    

    NOTE: If the above command does not work, you may need to edit "/etc/yum.repos.d/redhat.repo" and enable "rhel-7-server-optional-rpms". Do not be tempted to use libnl-devel!

  2. Download libvma source:

    git clone https://github.com/Mellanox/libvma.git
    
    cd libvma
    
  3. Build and install libvma (a quick verification sketch follows below):

    ./autogen.sh
    
    ./configure --prefix=/usr --libdir=/usr/lib64
    
    make -j install
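
Whichever installation path you used, a quick way to confirm that libvma is in place and loadable is to preload it against any binary and look for the VMA banner. A minimal sketch, assuming the library was installed to /usr/lib64 as configured above:

    ls /usr/lib64/libvma.so*

    # VMA prints a "VMA INFO" header when preloaded successfully (run as root)
    LD_PRELOAD=/usr/lib64/libvma.so ls 2>&1 | grep "VMA INFO"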
    

4. Common Configurations:

  1. Memory settings for high performance and RDMA. Configure huge pages for better libvma performance (a persistence sketch follows after this step):

     echo 1000000000 > /proc/sys/kernel/shmmax
    
     echo 800 > /proc/sys/vm/nr_hugepages
    

Relax memlock restrictions for users, allowing more memory to be pinned for direct HW access.
Create the file /etc/security/limits.d/rdma.conf with the following content:

 # configuration for rdma tuning
 *       soft    memlock         unlimited
 *       hard    memlock         unlimited
 # rdma tuning end

Enable the new limit for the user.
You will need to log out and back in, then run:

ulimit -l unlimited
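
The shmmax and huge-page values above take effect immediately but do not survive a reboot. A minimal persistence and verification sketch (the sysctl entries mirror the echo commands above):

    echo "kernel.shmmax = 1000000000" >> /etc/sysctl.conf
    echo "vm.nr_hugepages = 800" >> /etc/sysctl.conf
    sysctl -p

    # Verify the huge page pool and the new memlock limit
    grep Huge /proc/meminfo
    ulimit -l
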
  2. Configure and load driver modules for ConnectX-3/ConnectX-3 Pro:

    echo options mlx4_core log_num_mgm_entry_size=-1 >> /etc/modprobe.d/mlnx.conf
    dracut -f
    
    modprobe -rv mlx4_en mlx4_ib mlx4_core
    
    modprobe -v mlx4_en mlx4_ib mlx4_core rdma_ucm
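
    To confirm that the modules reloaded with the new option, a quick check (a sketch; the sysfs path assumes the parameter is exported by the loaded mlx4_core module):

    lsmod | grep mlx4
    cat /sys/module/mlx4_core/parameters/log_num_mgm_entry_size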
    
  3. Disable the firewall service: systemctl disable firewalld

  4. Load libvma and run the app (as root): LD_PRELOAD=libvma.so <test_app>

  5. For running as a non-root user: set cap_net_raw for the executable. For example, to use sockperf:

    setcap cap_net_raw=ep /usr/bin/sockperf
    

    Set the set-UID permission on the library and place it in a standard location (which libvma already is). These steps are required when using LD_PRELOAD with capabilities set on the executable.

    chmod u+s /usr/lib64/libvma.so.8*
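
    Putting the steps together, a hedged end-to-end sketch with sockperf (the server/ping-pong flags are standard sockperf usage, not taken from this page; <server_ip> is a placeholder):

    # Verify the capability was applied
    getcap /usr/bin/sockperf

    # Server side (the library name is given without a path on purpose:
    # with capabilities set, the loader only honors set-UID libraries
    # found in standard directories)
    LD_PRELOAD=libvma.so sockperf server -i <server_ip>

    # Client side
    LD_PRELOAD=libvma.so sockperf ping-pong -i <server_ip>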
    

5. Supported Mellanox NICs:

ConnectX-3, ConnectX-3 Pro
ConnectX-4, ConnectX-4 Lx
ConnectX-5, ConnectX-5 Ex

For supported HCA firmware versions, see chapter 1 [Supported HCAs Firmware Versions] in the release notes linked below.
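
To compare against that list, you can read the firmware version currently running on the NIC (a sketch; replace <interface> with your Mellanox interface name):

    ethtool -i <interface> | grep firmware-version
    ibv_devinfo | grep fw_ver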

[RHEL 7.3 Release Notes](http://www.mellanox.com/pdf/prod_software/Red_Hat_Enterprise_Linux_(RHEL)_7.3_Driver_Release_Notes_1.0.pdf)