D1027R0 draft 2: SG14 design guidelines for latency preserving (standard) libraries

Document #: D1027R0 draft 2
Date: 2018-05-31
Project: Programming Language C++, Library Evolution Working Group, SG14 Low Latency study group
Reply-to: Niall Douglas

Soft, firm and hard realtime applications are likely to become an increasingly common application for newly started C++ codebases. The language is well capable of fixed latency applications, however much of its standard library is currently unsuitable. SG14 hopes that these guidelines will help inform the design of future latency preserving standard libraries.

Changes since draft 1:

• Completely rearchitected paper based on SG14 feedback on draft 1, refactoring it from `design guidelines for future low level libraries' into `design guidelines for latency preserving libraries'.

Contents

1 Introduction
  1.1 What is latency preservation?
  1.2 A worked example comparing the Linux and Windows kernel page caches
  1.3 How is this relevant to C++ Direction?

2 Recommended latency preserving design principles
  2.1 Have the user supply memory to you; don't allocate it yourself
  2.2 Don't throw exceptions
  2.3 Avoid consuming and returning types which are not trivially copyable
  2.4 Kernel syscall guarantees and latency distributions should be maintained
  2.5 Latency preserving libraries should try to be P0829 Freestanding C++ compatible

3 Future C++ standards
  3.1 C++ 20
  3.2 Current papers relevant to these guidelines

4 Acknowledgements

5 References

1 Introduction

1.1 What is latency preservation?

Lots of engineering talent gets invested into tuning hardware and software systems to deliver as flat as possible latency distributions for repeated operations. Fixed latency programming requires each function to perturb latency distribution curves as little as possible over the functions it calls. Writing code which branches a lot will tend to steepen the slope of the curve, hence the importance of branch-free programming. Writing code which always does a lot of extra work will shift the whole curve upwards equally. These guidelines are SG14's recommendations to help you write C++ (standard) libraries which preserve the latency distribution of the underlying operations, and do not degrade a latency distribution.

1.2 A worked example comparing the Linux and Windows kernel page caches

Modern computing systems are fundamentally stochastic, and writing soft, firm and especially hard realtime programs is mainly an exercise in statistical analysis. Let us examine the latency distribution of randomly accessing 4Mb of RAM:

Figure 1: Random 4Kb read latencies, raw and sorted, within a 4Mb region of RAM on an Intel Skylake CPU.

One can see that there is a structural break at approximately 162 nanoseconds, where the curve shifts, and the slope of the distribution exceeds 45 degrees at approximately 400 nanoseconds. 21% of random 4Kb reads will occur within 162 nanoseconds, and 96% of random 4Kb reads will occur within 400 nanoseconds, if our working set is within 4Mb. The Intel Skylake CPU has a L3 cache of 4Mb and TLB entries for up to 6Mb of 4Kb paged RAM, so in addition to lack of TLB exhaustion, most of these memory reads will occur from L3 cache.


Figure 2: Random 4Kb read latencies, raw and sorted, within a 100Mb region of RAM on an Intel Skylake CPU.

Taking it up a notch, Figure 2 shows the same operation over 100Mb of RAM. This will easily exhaust the TLB as well as the L2 and L3 caches, forcing the CPU to go to main memory most of the time. One gets an almost symmetrical logistic curve, though it is usually more usefully modelled as a power law curve for the purposes of statistically calculating worst case latency. Note that at 96%, latencies are about twofold those in the bottom 5%. File i/o can be similarly analysed:

Figure 3: Random 4Kb read latencies, raw and sorted, within a 100Mb region of the Microsoft Windows 10 kernel page cache via the ReadFile() system call.

The random spikes reflect a third of i/o latencies, as we can see from the structural break in Figure 3 at 3,600 nanoseconds, so two thirds of the time the read latency will be under 3,600 nanoseconds. Even amongst this peaky third, the slope of the distribution does not exceed 45 degrees until 88% of samples are accounted for, meaning that 88% of the time we can be sure that an individual random 4Kb i/o read from the kernel page cache will complete within 8,700 nanoseconds.

This sort of latency curve degradation probably fits the stereotype of i/o in the minds of most readers i.e. non-deterministic. However remember that this test is exclusively reading from the kernel page cache i.e. the 100Mb of memory being randomly read by a read() syscall ought to have exactly the same latency distribution as for randomly memory copying from RAM in userspace, just with a fixed shift upwards to reflect the userspace to kernel space transition.

Let us look at the same test on Linux:

Figure 4: Random 4Kb read latencies from the kernel page cache using the read() syscall on Linux kernel 2.6.15, with the differential from the figures for RAM, compared to the Microsoft Windows latencies in Figure 3.

The latency degradation of Linux i/o over raw RAM is less than threefold right up to 99%, with a smooth inclined differential. This is not significantly worse than the twofold slope for main memory.¹ We can thus conclude that Linux has markedly superior latency preservation than Microsoft Windows for reads from the kernel page cache.

1.3 How is this relevant to C++ Direction?

[P0939] Direction for ISO C++ laid out some useful guidelines regarding what C++ should prioritise in the next decade, specifically mentioning:

1. Support for demanding applications in important application areas, such as medical, finance, automotive, and games (e.g., key libraries, such as JSON, flat containers, and pool and stack allocators).

2. Embedded systems: make C++ easier to use and more effective for large and small embedded systems (that does not mean just simplified C-like C++).

SG14, the low latency study group, felt it could usefully add to those guidelines from Direction by setting out how best to design a latency preserving modern C++ library meeting the criteria of typical SG14 low latency members. We hope that WG21 will consider splitting the standard library into layers, latency preserving and latency degrading, with the latency preserving layer meeting these design guidelines and thus becoming the standard library [subset] for embedded and fixed latency C++ programming.

These guidelines will be revised and updated as the C++ standard advances, and these guidelines refer to the latest published C++ standard, which is C++ 17. Where we know of exciting upcoming advances in a draft standard which may affect a later edition of these guidelines, we mention them in a later section.

¹ Note that due to unavoidable personal factors in the run up to the Rapperswil WG21 meeting, the Linux results were performed on much older hardware than the Microsoft Windows tests, and with a very old Linux kernel. A future revision of this paper will use similar hardware for both graphs, and a much newer Linux kernel.


2 Recommended latency preserving design principles

2.1 Have the user supply memory to you; don't allocate it yourself

Probably the single biggest cause of latency degradation in libraries is the internal use of dynamic memory allocation in a function. You should avoid this. There are a number of ways to avoid dynamic memory allocation:

1. The best form of all is simply to have your library's APIs be given memory to use by the caller. This is the most recommended mechanism by SG14.

2. Less ideal is to permit the caller to supply a STL allocator which your library will use to allocate memory dynamically. This can be acceptable if that STL allocator is a C++ 17 std::pmr::monotonic_buffer_resource or equivalent², as such allocators are fully deterministic. However the reason why SG14 recommends against accepting STL allocators is that it is very easy to accidentally write or call code assuming that deallocations return free space which can be reused by subsequent allocations, which is not the case with monotonic allocators.

There is also a suspicion, without evidence, that the first and most preferred approach above encourages more careful design of memory layout with better CPU cache locality, whereas monotonic allocators might tend to lay out data in a less optimal cache locality.

Summary:

• Don't allocate memory; let the caller pass you memory to use.

• Avoid using STL allocators or calling code which uses them. If there is no other choice, try to use C++ 17 std::pmr::monotonic_buffer_resource, but be aware of its limitations (deallocations do not free resource).

2.2 Don't throw exceptions

Throwing and catching exceptions in C++ 17 have unbounded execution times, especially on table-based EH implementations. A fixed latency library will have bounded failure times just as it has bounded success times. Therefore, do not throw exceptions in a latency preserving C++ library.

Some will find that the easiest way to do this is to globally disable C++ exceptions in the compiler. Many SG14 members do this. However this requirement, strictly speaking, only applies to latency preserving functions. A library may offer mostly latency preserving functions, but also some latency degrading functions. A useful way to indicate to your library users which functions preserve latency, and which do not, is to mark the latency preserving functions as noexcept. We recognise that this is not present WG21 committee guidance (the `Lakos rule', see [N3279]), and that future improvements to the C++ language may make some types of exception throw deterministic ([P0709] Zero-overhead deterministic exceptions: Throwing values). However, for the C++ 17 language edition (and all preceding editions), this is our current advice.

² Boost has an implementation of std::pmr::monotonic_buffer_resource for pre-C++ 17.

Instead of throwing exceptions, you should strongly consider using std::error_code to report failures. This arrived in C++ 11, and is a flexible and deterministic mechanism for reporting failure to other code which need not understand the specific details of failure, just that the failure matches one of the std::errc::* values. Another option for C++ 14 code is Boost.Outcome, and a later C++ standard may have [P0323] std::expected.

Summary:

• Latency preserving functions should never throw exceptions, and thus should always be marked noexcept.

• Strongly consider using std::error_code to report failure instead of throwing exceptions.

2.3 Avoid consuming and returning types which are not trivially copyable

TriviallyCopyable types are a special category of aggressively optimised types in C++:

• Every copy constructor is trivial or deleted.
• Every move constructor is trivial or deleted.
• Every copy assignment operator is trivial or deleted.
• Every move assignment operator is trivial or deleted.
• At least one copy constructor, move constructor, copy assignment operator, or move assignment operator is non-deleted.
• Trivial non-deleted destructor.

All the integral types meet TriviallyCopyable, as do C structures. The compiler is thus free to store such types in CPU registers, relocate them at certain well-defined points of control flow as if by memcpy, and overwrite their storage as no destruction is needed. This greatly simplifies the job of the compiler optimiser, making for tighter codegen, faster compile times, and less stack usage, all highly desirable things. You should therefore try to avoid the use of types which are not trivially copyable in your latency preserving library.

If you take the discipline to ensure you design all the types in your program to be trivially copyable, or as nearly trivially copyable as possible (a non-trivial destructor is the most common deviation), you will see large gains in predictability due to the aggregation of aggressive optimisation. Examining the assembler produced by the compiler, you will often be surprised at just how much of your code is completely eliminated, resulting in no assembler produced at all. And the fastest and most predictable program is always, by definition, the one which does no work.

Summary:

• Wherever possible, make your types trivially copyable.

• And don't forget to add trap code to your programs of the form:

    static_assert(std::is_trivially_copyable_v<T>);


2.4 Kernel syscall guarantees and latency distributions should be maintained

As we saw above, kernel engineers may have invested significant effort in making a syscall latency preserving. You should not ruin that effort by assuming in your library code that syscalls always degrade latency, and thus you are free to allocate memory or throw exceptions because `the syscall will dominate'.

Summary:

• Always benchmark your kernel syscalls to find out if they preserve latencies, like the Linux read() implementation does as demonstrated above. Then benchmark your syscall wrapper function and ensure its latency distribution is as close as possible to the underlying syscall.

• Don't do anything with worse than linear latency in time and space in kernel syscall wrapper functions.

• Move any filtering or pre or post processing (i.e. memory copying) into separate functions. Don't copy memory, except what is unavoidable, in your kernel syscall wrapper function.

• Avoid calling multiple kernel syscalls in a single kernel syscall wrapper function, or make it possible for the caller to specify kernel syscalls to skip calling.

2.5 Latency preserving libraries should try to be P0829 Freestanding C++ compatible

[P0829] Freestanding C++ proposes a subset of the standard library which works well on embedded systems without an operating system. Such systems tend to require fixed latency execution, and thus if you can make your latency preserving library freestanding C++ compatible, you make it available for use in such embedded systems.

3 Future C++ standards

The guidelines above are in reference to the latest published C++ standard, which is C++ 17.

3.1 C++ 20

There are no features currently voted into the C++ 20 draft which would affect these guidelines.

3.2 Current papers relevant to these guidelines

• [P0132] Non-throwing container operations
  These are expected to make it into C++ 20, and add non-throwing modifier functions to the existing STL containers. These can be useful for where STL container use is unavoidable, but otherwise the guidelines above would remain unchanged.



• [P0709] Zero-overhead deterministic exceptions: Throwing values
  This replaces/adds an exception throw and catch mechanism which is fully deterministic to C++. If approved, this would enable these guidelines to approve the use of exception throw and catch.

• [P1028] SG14 status_code and standard error object for P0709 Zero-overhead deterministic exceptions
  This is a refined and improved std::error_code. If approved, it would allow these guidelines to recommend using it as an improved alternative to std::error_code.

• [P1029] SG14 [[move_relocates]]
  This makes it possible for the programmer to guarantee that a type with non-trivial move constructor and destructor is bitwise copyable: move relocatable. If approved, it would enable these guidelines to recommend the use of move relocatable types in addition to trivially copyable types.

4 Acknowledgements

Thanks to Arthur O'Dwyer, Jason McKesson and Tony van Eerd for their copy editing and pointing out of bad code.

5 References

[N3279] Alisdair Meredith and John Lakos, Conservative use of noexcept in the Library
https://wg21.link/N3279

[P0132] Ville Voutilainen, Non-throwing container operations
https://wg21.link/P0132

[P0323] Vicente Botet and JF Bastien, std::expected
https://wg21.link/P0323

[P0709] Herb Sutter, Zero-overhead deterministic exceptions: Throwing values
https://wg21.link/P0709

[P0829] Ben Craig, Freestanding proposal
https://wg21.link/P0829

[P0939] B. Dawes, H. Hinnant, B. Stroustrup, D. Vandevoorde, M. Wong, Direction for ISO C++
https://wg21.link/P0939

[P1028] Niall Douglas, SG14 status_code and standard error object for P0709 Zero-overhead deterministic exceptions
https://wg21.link/P1028

[P1029] Niall Douglas, SG14 [[move_relocates]]
https://wg21.link/P1029
