Hugepages and databases
FOSDEM 2019

The introduction of the 64-bit version of the x86 architecture allowed servers to operate with much larger amounts of memory and with memory pages of different sizes. This talk explores the Linux kernel implementation of HugePages in view of two of the most popular open source RDBMS, MySQL and PostgreSQL. We will look at how HugePages works, what needs to be configured to make use of it, and whether it is really worth it, and under which circumstances.

The turn of the century saw the introduction of the 64-bit version of the popular x86 instruction set. The flagship feature of this new architecture specification was a larger physical (and virtual) address space, raising the practical memory limit of 4GB (or 64GB in PAE mode) found in 32-bit x86 processors to values in the order of terabytes. Beyond the obvious advantage of operating with more memory, a bigger address space allowed programs to keep larger amounts of data in memory: in many cases it made it possible to map entire files (instead of only fragments of them) into the respective process' address space.
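As a rough illustration of that last point, the sketch below (Python on Linux) maps a whole file read-only into the calling process' address space with mmap; the file name is just a placeholder, and a 64-bit address space is assumed so that even very large files fit.

    import mmap

    # Map an entire (hypothetical) file into this process' address
    # space; a length of 0 means "the whole file".
    with open("some_large_file.dat", "rb") as f:
        with mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ) as m:
            print("mapped", m.size(), "bytes; first bytes:", m[:16])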

Another important factor to consider in this context is the translation of virtual memory (divided into "pages") into physical memory. This mapping is maintained in a data structure called the "page table". However, each lookup there is an expensive operation, so the Memory Management Unit (MMU) employs a caching mechanism named the Translation Lookaside Buffer (TLB) for faster lookups. Whenever a translation from virtual memory to physical memory is necessary the TLB is checked first: if the target entry is found there the physical address is promptly returned, otherwise a TLB miss occurs and a more costly lookup in the page table is made. But like any other cache, the TLB can only fit a limited number of entries. Making optimal usage of this space is important: the larger the page size, the more memory those entries can map. Applications that perform a lot of memory accesses may obtain performance improvements by using large pages due to reduced TLB misses (https://dev.mysql.com/doc/refman/8.0/en/large-page-support.html).
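To give a feel for the numbers, here is a small back-of-the-envelope calculation (Python); the TLB entry count is purely illustrative, since real CPUs have multi-level TLBs of varying sizes, but the relation between page size and coverage holds.

    TLB_ENTRIES = 1536  # hypothetical number of TLB entries

    # How much memory that many entries can map at once for each page size.
    for label, page_size in (("4 KB", 4 * 1024),
                             ("2 MB", 2 * 1024 ** 2),
                             ("1 GB", 1024 ** 3)):
        coverage_mb = TLB_ENTRIES * page_size / 1024 ** 2
        print(f"{label:>5} pages -> about {coverage_mb:,.0f} MB mapped by the TLB")

With 4 KB pages such a TLB covers only a few megabytes at a time, while the same number of entries covers gigabytes of memory when 2 MB huge pages are used.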

A class of software that potentially benefits from operating with huge pages is that of database systems, with their particular caching mechanisms. The PostgreSQL manual (https://www.postgresql.org/docs/9.6/static/kernel-resources.html) states that "using huge pages reduces overhead when using large contiguous chunks of memory, as PostgreSQL does, particularly when using large values of shared_buffers".
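As a rough sketch of what that means in practice, the snippet below (Python on Linux) estimates how many huge pages would be needed to back a given shared_buffers value; the 8GB figure is an assumption, and the real shared memory request of a PostgreSQL server is somewhat larger than shared_buffers alone.

    import math

    SHARED_BUFFERS = 8 * 1024 ** 3  # assumed shared_buffers = 8GB

    def hugepage_size_bytes():
        # The default huge page size is reported in /proc/meminfo,
        # e.g. "Hugepagesize:       2048 kB".
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("Hugepagesize:"):
                    return int(line.split()[1]) * 1024
        raise RuntimeError("no huge page size reported in /proc/meminfo")

    print("vm.nr_hugepages should be at least",
          math.ceil(SHARED_BUFFERS / hugepage_size_bytes()))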

While MySQL and PostgreSQL do have support for huge pages, MongoDB doesn't. Applications running in userland that do not natively support huge pages may still make use of them indirectly through a kernel feature named Transparent HugePages (THP). But does it work the same way?
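A quick way to see what the kernel is currently doing with THP is to read its sysfs switch, as in the small sketch below (Python on Linux); the kernel marks the active mode with brackets.

    def thp_mode(path="/sys/kernel/mm/transparent_hugepage/enabled"):
        # The file reads like "always madvise [never]"; the bracketed
        # entry is the mode currently in effect.
        with open(path) as f:
            return next(m.strip("[]") for m in f.read().split()
                        if m.startswith("["))

    print("Transparent HugePages mode:", thp_mode())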

Furthermore, enabling and using huge pages is not a matter of turning a simple configuration flag on or off: one needs to specify in advance the number of huge pages needed, as well as the preferred page size. How should a database administrator make this choice, and based on which factors? Are there any tweaks necessary on the database side in order to benefit from huge pages? What are the possible drawbacks of using them? In this presentation we'll have a look at how this applies to MySQL and PostgreSQL.
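For context, a minimal sketch of that workflow on Linux, assuming the administrator has already reserved pages (for example with sysctl -w vm.nr_hugepages=4300, an arbitrary illustrative value) and enabled large_pages in MySQL or huge_pages in PostgreSQL: the snippet below just reads back the kernel's huge page accounting so one can see whether the reserved pages are actually being used.

    # Huge page counters from /proc/meminfo: comparing HugePages_Free
    # with HugePages_Total after the database has started shows whether
    # the reserved pages were actually picked up.
    INTERESTING = ("HugePages_Total", "HugePages_Free",
                   "HugePages_Rsvd", "Hugepagesize")

    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(INTERESTING):
                print(line.rstrip())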

Speakers: Fernando Laudares