=======
MEMKIND
=======

DISCLAIMER
----------
SEE COPYING FILE FOR LICENSE INFORMATION.

THIS SOFTWARE IS PROVIDED AS A DEVELOPMENT SNAPSHOT TO AID
COLLABORATION AND WAS NOT ISSUED AS A RELEASED PRODUCT BY INTEL.

LAST UPDATE
-----------
Christopher Cantalupo <christopher.m.cantalupo@intel.com>
2015 January 9

SUMMARY
-------
The memkind library is a user extensible heap manager built on top of
jemalloc which enables control of memory characteristics and a
partitioning of the heap between kinds of memory.  The kinds of memory
implemented within the library enable the control of NUMA and page
size features.  The jemalloc heap manager has been extended to include
callbacks which modify the parameters for the mmap() system call and
provide a wrapper around the mbind() system call.  The callbacks are
managed by the memkind library which provides both a heap manager and
functions to create new user defined kinds of memory.
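As an illustration of the kind based allocation model described above,
the following minimal sketch allocates a buffer from high bandwidth
memory and falls back to the default kind when no high bandwidth NUMA
nodes are available.  It assumes the interfaces documented in the
memkind.3 man page (MEMKIND_HBW, MEMKIND_DEFAULT,
memkind_check_available(), memkind_malloc() and memkind_free()); since
the library is pre-alpha these interfaces may change, so treat this as
a sketch rather than a reference.

    #include <stdio.h>
    #include <string.h>
    #include <memkind.h>

    int main(void)
    {
        /* Prefer the high bandwidth kind, but fall back to the
           default kind if no high bandwidth NUMA nodes are
           available. */
        memkind_t kind = MEMKIND_HBW;
        if (memkind_check_available(MEMKIND_HBW) != 0) {
            kind = MEMKIND_DEFAULT;
        }

        size_t size = 1024 * 1024;
        char *buf = memkind_malloc(kind, size);
        if (!buf) {
            fprintf(stderr, "memkind_malloc() failed\n");
            return 1;
        }
        memset(buf, 0, size);
        printf("allocated %zu bytes from the %s kind\n", size,
               kind == MEMKIND_HBW ? "MEMKIND_HBW" : "MEMKIND_DEFAULT");

        /* Memory must be freed back to the kind it came from. */
        memkind_free(kind, buf);
        return 0;
    }

A program like this is compiled and linked against the installed
library (for example, cc example.c -lmemkind), and its high bandwidth
allocations can be redirected for testing with the MEMKIND_HBW_NODES
environment variable described in the SIMULATE HIGH BANDWIDTH MEMORY
section below.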
BUILD REQUIREMENTS
------------------
To build the memkind libraries on a RHEL Linux system, first install
the required packages with the following command:

    sudo yum install numactl-devel gcc-c++

On a SLES Linux system install the dependencies with the following
command:

    sudo zypper install libnuma-devel gcc-c++

The jemalloc package must be versioned with the +mk extension, and
version 3.5.1+mk3 of jemalloc or later is required.  To use memkind,
jemalloc must be configured with the options below:

    configure --enable-memkind --with-jemalloc-prefix=je_

The memkind supported source of jemalloc contains a spec file that can
be used with rpmbuild to configure jemalloc for SLES and RHEL
distributions.  Install this RPM if possible; otherwise, you can
direct the memkind configure command at a locally installed version of
jemalloc with the option

    configure --with-jemalloc=/path/to/jemalloc/prefix

There are similar options for linking libnuma.  See configure --help
and the INSTALL file for more options.

RUN REQUIREMENTS
----------------
memkind requires a kernel patch introduced in Linux v3.11 that impacts
the functionality of the NUMA system calls.  This patch is commit
3964acd0dbec123aa0a621973a2a0580034b4788 in the linux-stable git
repository from kernel.org.  Red Hat has back-ported this patch to the
v3.10 kernel in the RHEL 7.0 GA release, so RHEL 7.0 onward supports
memkind even though this kernel version predates v3.11.

To use the interfaces for obtaining 2MB and 1GB pages, please be sure
to follow the instructions here:

    https://www.kernel.org/doc/Documentation/vm/hugetlbpage.txt

and pay particular attention to the use of the proc files:

    /proc/sys/vm/nr_hugepages
    /proc/sys/vm/nr_overcommit_hugepages

for enabling the kernel's huge page pool.

TESTING
-------
All existing tests pass both within the simics simulation environment
of KNL and when run on Xeon.  When run on Xeon, the MEMKIND_HBW_NODES
environment variable is used in conjunction with "numactl --membind"
to force standard allocations to one NUMA node and high bandwidth
allocations to a different NUMA node.  See the next section for more
details.

SIMULATE HIGH BANDWIDTH MEMORY
------------------------------
A method for testing the benefit of high bandwidth memory on a dual
socket Xeon system is to use the QPI bus to simulate slow memory.
This is not an accurate model of the bandwidth and latency
characteristics of the KNL on package memory, but it is a reasonable
way to determine which data structures rely critically on bandwidth.

If the application a.out has been modified to use high bandwidth
memory with the memkind library, then this can be done with numactl as
follows with the bash shell:

    export MEMKIND_HBW_NODES=0
    numactl --membind=1 --cpunodebind=0 a.out

or with csh:

    setenv MEMKIND_HBW_NODES 0
    numactl --membind=1 --cpunodebind=0 a.out

Setting the MEMKIND_HBW_NODES environment variable to zero binds high
bandwidth allocations to NUMA node 0.  The --membind=1 flag to numactl
binds standard allocations and stack variables to NUMA node 1, and the
--cpunodebind=0 option binds the process to the CPUs associated with
NUMA node 0.  With this configuration standard allocations are fetched
across the QPI bus, while high bandwidth allocations remain local to
the process CPU.

STATUS
------
This software is being made available for early evaluation.  The
memkind library should be considered pre-alpha: bugs may exist and the
interfaces may be subject to change prior to the alpha release.
Feedback on design or implementation is greatly appreciated.  Please
see the TODO file for more information about future work.

The memkind interface detailed in the memkind.3 man page should be
considered experimental and is not necessarily going to be exposed to
customers in the alpha release.  Any feedback about the advantages or
disadvantages of the memkind interface over the hbwmalloc interface
described in the hbwmalloc.3 man page would be very helpful.
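For comparison with the memkind example shown earlier, the sketch
below performs the same allocation through the hbwmalloc interface
described in the hbwmalloc.3 man page.  It assumes
hbw_check_available(), hbw_malloc() and hbw_free() behave as
documented there; as with the rest of this pre-alpha snapshot, verify
the calls against the installed headers before relying on them.

    #include <stdio.h>
    #include <string.h>
    #include <hbwmalloc.h>

    int main(void)
    {
        /* Returns zero when high bandwidth memory can be allocated. */
        if (hbw_check_available() != 0) {
            fprintf(stderr, "no high bandwidth memory detected\n");
        }

        size_t size = 1024 * 1024;
        /* The default hbwmalloc policy prefers high bandwidth memory
           and may fall back to standard memory; see hbwmalloc.3. */
        char *buf = hbw_malloc(size);
        if (!buf) {
            fprintf(stderr, "hbw_malloc() failed\n");
            return 1;
        }
        memset(buf, 0, size);
        printf("allocated %zu bytes\n", size);

        hbw_free(buf);
        return 0;
    }

The hbwmalloc interface hides the kind argument entirely, which is one
way to frame the trade-off the STATUS section asks about: a drop-in
malloc style API for high bandwidth memory versus the more flexible,
explicitly kind based memkind interface.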